The new iPhone 3GS adds a compass to the set of sensors. Combined with the GPS, the motion detection sensor and some image change detection via the internal video camera, this enables a new breed of “augmented reality” applications.
NearestWiki, for example, displays Wikipedia entries about buildings and places in the vicinity.
NearestWiki is not the first augmented reality app for the iPhone, but it is the first that is not tied to a specific region or city (like Metro Paris).
Future versions of the iPhone may feature more precise sensors and lower latency – making for a much better experience (e.g. labels not jumping around in the scenery).
I have been thinking about Project Natal over the weekend. I do not want to discredit the innovations Microsoft has created over the last two decades – but for the most part Microsoft has not been able to innovate on its own (rather mimicking or buying stuff from outside). There may be some advances like C# and .NET – but generally this is insider stuff, meaning nothing to a wider public.
Project Natal may be the first true innovation with a Microsoft stamp on it. Fifteen years ago I saw programmers trying to recognize 2D movements of arms and legs from a video – with results that were respectable, but never a game changer. Too much CPU power was required back then to be relevant in the consumer market.
Including the third dimension in motion detection is such a game changer. Combined with voice and face recognition, this takes the controller out of the loop: your full persona is represented in the system – not just your fingertip. This is radical – and it has been a dream for many, many years.
Just look at this example from game designer Peter Molyneux from Lionhead:
The device is so complex that developers will need access to an SDK that allows simplified communication with the sensory system of Natal. Frameworks could provide automatic recognition of gestures to programmers – even in combination (so waving your arm would call a different function than waving your arm while saying “Bye!”).
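Such a gesture framework could look something like this. This is a purely hypothetical sketch – `GestureDispatcher` and the gesture/utterance names are made up for illustration; nothing here reflects an actual Natal SDK:

```python
# Hypothetical sketch: a dispatcher where programmers register handlers
# for a gesture alone, or for a gesture combined with a spoken word.
# The most specific combination wins.

class GestureDispatcher:
    def __init__(self):
        self._handlers = {}

    def on(self, gesture, utterance=None):
        """Register a handler for a gesture, optionally plus an utterance."""
        def decorator(fn):
            self._handlers[(gesture, utterance)] = fn
            return fn
        return decorator

    def dispatch(self, gesture, utterance=None):
        # Prefer the exact (gesture, utterance) match,
        # then fall back to the gesture alone.
        handler = (self._handlers.get((gesture, utterance))
                   or self._handlers.get((gesture, None)))
        return handler() if handler else None

dispatcher = GestureDispatcher()

@dispatcher.on("wave")
def greet():
    return "hello gesture"

@dispatcher.on("wave", utterance="bye")
def farewell():
    return "goodbye gesture"
```

So waving alone triggers `greet`, while waving and saying “Bye!” triggers `farewell` – exactly the kind of combination described above.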
The level of precision could increase with future revisions. It could be combined with classical controllers. Maybe one day even finger positions, fluctuations/timbre of the voice, body temperature or point of view will be detected as well. Simple “lite” versions specialized in facial parameters could replace webcams in laptops.
So I do not look at Natal as a game controller – I see it as a completely new interface generation coming up.
Obviously Microsoft feels the need to win back, with a new controller type, some of the market share the Nintendo Wii took away. Project Natal utilizes a range of biometric sensors for body motion, face and voice recognition.
The video is more a vision than an actual feature presentation. But it is clear what the goals are.
Here is another video from the demonstration that shows what is possible right now:
Bonnie Bassler discovered that bacteria “talk” to each other, using a chemical language that lets them coordinate defense and mount attacks. The find has stunning implications for medicine, industry — and our understanding of ourselves.
Shai Agassi is the CEO of The Better Place to get rid of oil dependency (especially for running vehicles). The idea: Give away electric cars for free (like mobile phones) and make the batteries part of the electric grid system (instead of a costly component of the car). You basically pay for miles, thus the service of mobility – not for the hardware.
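The accounting behind “pay for miles, not for the hardware” can be sketched in a few lines. All figures below are hypothetical placeholders, not Better Place’s actual pricing – they just illustrate how the battery cost moves from the car price into a per-mile service fee:

```python
# Hypothetical sketch of the pay-per-mile model: the operator owns the
# battery and recovers its cost (plus electricity and a margin) through
# a per-mile price. All numbers are invented for illustration.

BATTERY_COST_USD = 12_000        # carried by the operator, not the driver
BATTERY_LIFETIME_MILES = 100_000
ELECTRICITY_PER_MILE_USD = 0.03
MARGIN_PER_MILE_USD = 0.02       # operator's profit per mile

def price_per_mile():
    battery_share = BATTERY_COST_USD / BATTERY_LIFETIME_MILES  # 0.12
    return battery_share + ELECTRICITY_PER_MILE_USD + MARGIN_PER_MILE_USD

def monthly_bill(miles_driven):
    # What the driver pays: mobility as a service, not hardware
    return round(miles_driven * price_per_mile(), 2)
```

With these assumed numbers a driver covering 1,000 miles a month would pay $170 – and never own (or worry about) the battery.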
Computer scientist Mark Seager of Lawrence Livermore National Laboratory claims that this will change the scientific method for the first time since Galileo first used the telescope (in 1609).
The reason is that simulation and approximation can be used to arrive at accurate models of complex phenomena, instead of just reasoning about formulas in theory and running experiments to prove them.
With 362 terabytes of memory and 1.059 quadrillion floating-point calculations per second, the Jaguar of the Oak Ridge National Laboratory is tuned for scientific calculations like climate and energy models, drug discovery, new materials, etc.
The question arises whether these amounts of speed and data processing could one day break one fundamental rule: that some problems will always be beyond discovery through calculation. Neurology, psychology, sociology, economics and cultural studies are scientific areas that haven’t really started yet. Large-scale simulation can be the one scientific method that is missing for those (provided that the methods of observation deliver enough data to model upon).
And if so, there is a danger that even governmental policies may one day be driven by probability and not ethics.
Fluid basically is a bare-bones web browser that turns a website into a double-clickable application. It is a website – but it feels like an application (as long as you are not offline, of course). The original idea for Fluid was inspired by Mozilla’s Prism project.
But wait… what’s happening here?
Is this a step back because it disregards the openness and hypertextuality of the web, by suggesting that web pages that are not meant to point to other sites be constrained into windows?
It is an interesting trend that — after the big browser vendors finally comply with standards — new concepts appear that require users to have certain devices, browsers or plug-ins. Actually, the initial design goal (and the reason for standardisation) was to get rid of these dependencies.
But this is not just about the web as standard. It is about users being able to create applications from the rich offerings of the web. It is about DJ-ing with code, mingling logic and shining ideas. Users that can translate “cool ideas” into fun things without becoming an expert first. And it’s about developers creating pieces that are basic and yet well crafted and interoperable. It is about everyone contributing to the story.
While it right now conflicts a little bit with the device independence that has made the web strong… it may turn out big in the long run.
Many months ago Yahoo introduced Yahoo Pipes to the public – allowing users to mix and process data from sites and RSS feeds from different sources (I have a master RSS feed of a pipe that represents almost all my blog activities).
Now Yahoo has expanded this model to include widgets for displaying the results: Yahoo Badges.
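Conceptually, the core of such a “pipe” – the one behind my master feed – is just merging items from several sources into one chronological stream. A minimal sketch of that idea, with inline feed strings standing in for the URLs a real pipe would fetch:

```python
# A minimal sketch of what a feed-merging pipe does: collect <item>
# elements from several RSS documents and emit one list, newest first.
# The two feeds here are hard-coded samples, not real sources.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

FEED_A = """<rss><channel>
  <item><title>Post A1</title><pubDate>Mon, 02 Jun 2008 10:00:00 +0000</pubDate></item>
</channel></rss>"""

FEED_B = """<rss><channel>
  <item><title>Post B1</title><pubDate>Tue, 03 Jun 2008 09:00:00 +0000</pubDate></item>
</channel></rss>"""

def merge_feeds(*feeds):
    items = []
    for xml_text in feeds:
        root = ET.fromstring(xml_text)
        for item in root.iter("item"):
            title = item.findtext("title")
            date = parsedate_to_datetime(item.findtext("pubDate"))
            items.append((date, title))
    # Newest first, like a master feed of all activities
    return [title for date, title in sorted(items, reverse=True)]

master = merge_feeds(FEED_A, FEED_B)
```

A real pipe adds fetching, filtering and deduplication on top, but the merge-and-sort step is the heart of it.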
Yesterday Apple introduced the new iPhone. It features a very precise touch screen and some other sensors. At first glance it may only seem like a fancy phone that manages to get rid of buttons and integrate the features of an iPod. But I think it is much more than that.
I believe Apple has really defined a new type of device. Just think for a second that it is not called iPhone — let’s say you don’t have any idea what an iPod, PDA or smartphone is. So you have a device that communicates wirelessly through certain protocols, stores 8 gigabytes of data, and comes with this multi-touch display, microphone, earphones, camera, speaker, volume control and a single button on the front. The iPhone is not only a universal device — it is a principle.
Now – just imagine Apple had simply delivered the hardware to the open source community, maybe with that OS X basis and some development tools to create apps. The screen could show any interface for whatever application you can think of. It is called a “phone” so people can connect it to certain activities and see an instant reason why they might buy one.
But let’s assume it is called “iHeld” or “iTouch”. Can you see why people will lose the competition against Apple the very moment they try to make a competing phone?
I am very eager to see what tools Apple is going to provide for developers to create new applications for the “iPhone principle”.
Update: This GIZMODO story says the iPhone won’t be an open system that one can develop for (similar to iPods today). That would really be a pity, and it would disqualify the iPhone for a lot of things that are possible with the SymbianOS used on Nokia phones today. If the iPhone is not hackable, I potentially don’t want to have one.
The diggnation guys are now also sponsored by Barterbee.com. It is a kind of online flea market for movies, music or games. You can put in stuff that you want to get rid of. But instead of trading items for real money, you get points which you can use to shop for new items on Barterbee. The revenue for Barterbee comes from handling the transaction: each deal costs a $1 fee the buyer has to pay to barterbee.com.
While trading “points for goods” does not seem to be a very exciting concept, barterbee.com in fact offers its own currency that can only be used on the barterbee.com site. Users can buy points for $1.
But what if you have gained 1000 points and you don’t want to buy anything at Barterbee anymore? Can you “trade” your points back for money? Probably not. I am not sure if people think that through before signing up for such a system.
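The asymmetry is easy to make explicit. A tiny sketch of the cash flows, using the figures from the post (and assuming, as the post does, that points cannot be converted back):

```python
# Sketch of the closed-currency asymmetry: real money flows in
# (buying points, paying the per-deal fee) but never back out.
# Figures are taken from the post; the rest is an assumption.

POINT_PRICE_USD = 1.00      # buying a point costs $1 (per the post)
TRANSACTION_FEE_USD = 1.00  # each deal costs the buyer a $1 fee

def cash_spent(points_bought, deals_made):
    """Real dollars a user has put into the system."""
    return points_bought * POINT_PRICE_USD + deals_made * TRANSACTION_FEE_USD

def cash_recoverable(points_left):
    """Real dollars the user can get back out – none, if points are one-way."""
    return 0.0
```

Whatever the point balance, the recoverable cash stays at zero – which is exactly the thing worth thinking through before signing up.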
John Maeda is one of the curators this year. On the Ars Electronica Website there is an opening statement from him:
SIMPLICITY is a complex topic that has no single, simple answer. We live in an increasingly complex technological world where nothing works like it is supposed to, and at the end of the day makes all of us hunger for simplicity to some degree. Yet ironically when given the choice of more or less, we are programmed at the genetic level to want more. “Would you like the big cookie or the smaller cookie?” or “Would you like the computer with ten processors or just one?” The choice is simple really, or is it? For the Ars Electronica Symposium on SIMPLICITY we think together about what simplicity (and complexity) means in politics, life, art, and technology. Expect more than you can ever imagine, and less.
I ran a seminar about this topic two years ago. Maybe it is time to have a »Simplicity Reloaded« seminar in winter?
Today, during a train ride between Cologne and Aachen, I let MacStumbler scan for access points that I passed by. During the 70 km ride it caught the signals of around 65 wireless LANs:
Usually, regular housing is only close to railways in cities; in the countryside buildings are rather sparse. Taking these facts into account, I’d suspect the average density of access points in a city here is so high that you would probably be in reach of at least one at any time. I think that is pretty amazing and also a completely new development of recent years. Maybe one day the density will grow so much that you will always be in reach of at least one FREE access point?
Anyway, if I allowed MacStumbler to sign in to each public access point, I could wardrive around the city and collect new Plazes along the way. And probably one would be able to easily beat Tantek Çelik’s record of 429 discovered Plazes so far…
The problem with XUL has always been a lack of development tools. XULRunner seems to fill a huge gap here. Anyway, it seems that web browser technology is set to take over the standard user experience one day. Vendors will be able to deliver grown-up applications (and even parts of them) over the net at the time of request.
I have a constantly updated presentation about »The future of computing«. One chapter of it is about security and surveillance technology – the face recognition approach in particular. Two computer science students in Haifa, Israel, have invented a face recognition method based on a 3D scan. It can radically improve the success rate, and it was even able to tell them apart: they are twins. The problem is that this approach requires a database of 3D scans of the faces to be matched against the current sample: so how would you collect those 3D datasets?
Now there is a research group in the United Kingdom showing a 40-millisecond 3D scan from a single video frame (see demo video). It uses a projection of stripes on the face and then calculates the surface from their distortion. This could even become a way to recognize faces from a running video… and combined with light sources/camera sensors outside the visible wavelengths, I suspect it could even be possible to acquire the 3D scans without the scanned person noticing. I don’t know whether the approach is capable of recognizing people wearing glasses or a large beard.
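The geometric trick behind the stripe method can be sketched in a few lines. This is an idealized illustration of structured-light triangulation, not the group’s actual algorithm; the projection angle and the displacement values are invented:

```python
# Idealized sketch of structured light: stripes are projected at a known
# angle to the camera axis; where the surface is closer or farther, the
# stripe appears laterally shifted in the image, and the depth offset
# follows from simple triangulation. Real systems calibrate camera and
# projector carefully; this geometry is deliberately simplified.
import math

PROJECTION_ANGLE_DEG = 30.0  # assumed angle between projector and camera axis

def depth_from_shift(shift_mm):
    """Depth offset of a surface point from its observed stripe displacement."""
    return shift_mm / math.tan(math.radians(PROJECTION_ANGLE_DEG))

# A row of stripe displacements (mm) across a face profile...
shifts = [0.0, 1.0, 2.5, 1.0, 0.0]
profile = [round(depth_from_shift(s), 2) for s in shifts]
```

One video frame with many stripes yields many such profiles at once – which is how a full surface can come out of a single 40-millisecond exposure.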
Yesterday there was a meetup of web people at the Hallmackenreuther café in Cologne. The topic was »Web 2.0« (among others). Now I have read through the comments, and I found people complaining about the spontaneous, informal character and the absence of something they call »Web 2.0« in the few presentations given.
Here is my take on this:
The mere fact that 80 people come together within a few days’ notice and arrange for beers, beamers and laptops without formal invitation, just by using a wiki and some keywords (here, here, here and elsewhere), to me is already »Web 2.0«. The »Web 2.0 presentation« they were hoping to witness was the meetup itself! It would have been impossible to do it that way some years ago.
But speaking of »Web 2.0« as technological term:
People use it as a meme. It’s an abstract word like »peace«. It doesn’t mean a thing – it’s a mode. A mode where technology can be a catalyst for emergence, spontaneity and openness. It does not come with the flaws of the »old school« openness, where the idea that »anything goes« needed to be reinforced by expressly doing ridiculous and artsy things. That’s not needed anymore: the concept is understood. Today we have »conditional openness«: there may be a license attached or a policy you have to agree on.
We just need to admit that the web as we know it was a beta release for the last 15 years and that it has matured technology-wise. And if one day we can get rid of the browser dependency, then we have reached an original design goal! I would have no problem if someone called that »Web 2.0« just because the first major milestone was reached. It’s a big project. It is more fun to raise the version number at every milestone! No one wants to work on a beta for the rest of his life (well, some do indeed!).
While others are still resisting the idea of webcasting course content, Stanford moves along by integrating (at least some of) their educational content with the iTunes Music Store. Can you see the iTunes Educational Shop coming up? One-click shopping for lectures and files that use the same DRM protection as the songs you buy at Apple’s Music Store? Right now the audio clips stream for $0.00 and you can download the pieces with a single click. Nobody needs to stretch the imagination anymore. It is actually a no-brainer – same technology, different application.