Documentary »Hello World! Processing«

Hello World! Processing from Ultra_Lab on Vimeo.

Hello World! Processing is a documentary on creative coding that explores the role ideas such as process, experimentation and algorithm play in this creative field, featuring artists, designers and code enthusiasts. Based on a series of interviews with some of the leading figures of the Processing open programming platform community, the documentary is itself built as a continuous stream of archived references, projects and concepts shared by this community.

It is the first chapter of a documentary series on three programming languages — Processing, openFrameworks & Pure Data — that have increased the role of coding in the practice of artists, designers and creators around the world.

The series explores the creative possibilities expanded by these open source tools and the importance of their growing online communities.

See more information at hello-world.cc

G-Speak

(Via Dynamic Information Design Seminar Blog)

Minority Report science adviser and inventor John Underkoffler demos g-speak — the real-life version of the film’s eye-popping, tai chi-meets-cyberspace computer interface. Is this how tomorrow’s computers will be controlled?

G-Speak is a really interesting concept. Right now I do not feel it is ready for adoption on a broader scale: you need a certain environment with at least 2-3 square meters of space in front of a quite large screen.

I wonder if Microsoft will offer an extension to its Project Natal sensor some day — so that voice commands, body language and hand gestures create an immersive UI.

I can imagine that one day displays will cover complete walls, creating a pretty cave-like situation. Maybe it is time for another display seminar?

Björn Hartmann: Enlightened Trial and Error

Björn Hartmann (Stanford HCI Group) talks about the different prototyping tools he and his collaborators have built to address two research questions:

1) How can tools enable designers to create prototypes of ubiquitous computing interfaces?

2) How can design tools support the larger process of learning from these prototypes?

(Duration: 1 hour, 13 minutes; this is from Stanford’s HCI Seminar lecture series, February 2009. It is a more in-depth version of the talk Björn gave at Interaction 09)

Next generation of devices: Tablets

There have been many attempts to make a computer work from your pocket and without a keyboard. Apple is rumored to be working on a tablet device. It invented the Newton MessagePad over 15 years ago – a marvelous (but expensive) device at the time.

Microsoft is working on a new dual-screen prototype called Courier. Here is a design mockup (published by Gizmodo) that shows what the device could look like:

Here is a discussion from TechViShow:

I am a little bit skeptical looking at the design mockup. And I think Microsoft should take a different course: finish the product in the lab and market it as “available now” instead of creating new vaporware.

GPS + Compass + Motion sensors = Augmented Reality

The new iPhone 3GS adds a compass to the set of sensors. Combined with the GPS, the motion detection sensor and some image change detection via the internal video camera, this enables a new breed of “augmented reality” applications.

NearestWiki, for example, displays Wikipedia entries about buildings and places in the vicinity.

NearestWiki is not the first augmented reality app for the iPhone, but it is the first that is not tied to a specific region or city (like Metro Paris).

Next versions of the iPhone may feature more precise sensors and a lower latency – giving a much better feeling (e.g. labels not jumping around in the scenery).
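The geometry behind such overlays is simple enough to sketch. Here is a hypothetical illustration of how a compass heading and two GPS positions could be turned into a screen position for a label — the great-circle bearing formula is standard, but the field-of-view and screen-width values are made up for the example:

```python
import math

def bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing in degrees from point 1 to point 2."""
    lat1, lat2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(poi_bearing, heading, fov_deg=60, width_px=480):
    """Horizontal pixel position of a point of interest, given the compass
    heading and an assumed horizontal camera field of view.
    Returns None when the POI is outside the current view."""
    # Signed angle between heading and POI, normalized to -180..180
    delta = (poi_bearing - heading + 180) % 360 - 180
    if abs(delta) > fov_deg / 2:
        return None
    return width_px / 2 + delta / fov_deg * width_px

# A POI straight ahead lands in the middle of the screen;
# one 20 degrees to the right lands right of center (roughly 400 px).
print(screen_x(0.0, 0.0))
print(screen_x(10.0, 350.0))
```

The jitter mentioned above would show up here as noise on `heading`, which is why more precise, lower-latency sensors would keep the labels from jumping around.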

Thoughts on Google Wave

I am collecting some observations and issues while trying to understand Google Wave.


(see a demo on their site)

Google Wave is an integrated set of technologies (with protocols that allow semi-synchronous editing of outlines and their federation across several servers). With this approach Google Wave solves some difficult technical and infrastructural problems.

But it also creates some new problems that need to be solved to make Google Wave a success — otherwise I think users will not adopt the system (which, in Google’s case, would presumably seal the project’s fate).

1. Misty horizon

Google Wave is a framework-style solution for things people did not ask for and communication processes that no one is practicing yet (but no one really “asked” for the mouse as an input device either!). It is hard to see where Google Wave is going. This breeds creation, but it also challenges the non-developer. There will be best practices, but it will take a lot of time to identify use cases that people can learn “to wave” with.

So Google Wave challenges the imagination – and few people will be able to answer the “What is it all about?” question easily. The horizon is shrouded in mist.

Possible approach: A potential solution to this is to start with guided tours (a LOT of them) showing very common and powerful use cases for different scenarios. This is probably going to happen when Wave gets closer to the public beta.

2. Asynchronous patterns

We have learned to communicate in a turn-taking fashion. It is polite to let someone finish speaking before starting to respond; it is not polite for everyone to speak up at any time. Waves allow people to reply or edit without obeying the turn-taking pattern. This can cause “stress” and also a lot of misunderstanding. People could reply to a text that later changes without them noticing. Their reply suddenly becomes nonsense – the playback feature could become the only way to perceive a conversation properly. But playback is new – people have learned that the threaded view is a chronology, but in Google Wave it is not (or not necessarily).

Even with the playback feature, people need to become aware of the asynchronicity in Google Wave – and learn how to recap conversations correctly.

Possible approach: Find a very good way to convey the chronology of a wave (e.g. make playback as fundamental to navigating a wave as scrolling is).
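To make the chronology problem concrete, here is a minimal sketch of replaying an edit log — the `Edit` model and the sample conversation are invented for illustration, not Wave’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class Edit:
    timestamp: int
    author: str
    blip: str        # which message within the wave was touched
    text: str        # the new content of that blip

# Hypothetical log: Bob replies at t=2 to Alice's message from t=1,
# then Alice rewrites her original message at t=3.
log = [
    Edit(1, "alice", "blip-1", "Let's meet on Friday."),
    Edit(2, "bob",   "blip-2", "Friday works for me."),
    Edit(3, "alice", "blip-1", "Let's meet on Monday instead."),
]

def state_at(log, t):
    """Replay the edit log up to time t -- the essence of playback."""
    wave = {}
    for edit in sorted(log, key=lambda e: e.timestamp):
        if edit.timestamp <= t:
            wave[edit.blip] = edit.text
    return wave

# The threaded view only shows the final state, in which Bob's reply
# reads like nonsense; playback recovers what he actually replied to.
print(state_at(log, 2)["blip-1"])   # Let's meet on Friday.
print(state_at(log, 3)["blip-1"])   # Let's meet on Monday instead.
```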

3. Information (over)flow

While Google Wave may integrate many messaging systems, it also generates a lot of density. Means of communication that were separate from each other – with different URLs and applications for each – are now combined. The crucial part is understanding which option is suited for what purpose.

With waves being marked as “updated” in bold typeface and sorted to the top of the inbox, things are also brought to my attention that should remain buried for a good reason. Google Wave users would have to learn how to manage and understand the “Inbox” and “Active” areas properly to get the most out of them.

Possible approach: Allow users very powerful and fine grained control over the way they are informed about updates.

4. Scattered spaces and fragmented scopes

One of the things that can really make waves too complex to comprehend properly is that people can read & write to waves, but replies can extend or narrow the scope (e.g. who may read and reply to a new item: Who is reading? Who am I replying to? Is this part really private or not? Am I accidentally releasing a secret to the public?).

Viewed from a different angle: what I can see within a wave may differ from what someone else is seeing. To make my communication appropriate to the situation, I need to be able to “read” it from a different standpoint. It is necessary to understand when communication could fail on the receiving end.

Whenever I want to understand the perspective of someone else, I need to be able to represent his/her view in my mind. The change of scope for parts of a wave within that wave can make this difficult.

Possible approach: Make any changes of scope (e.g. to the recipient list) within a wave very visible and allow users to navigate them.
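A tiny invented model shows the per-reader divergence described above — this is just a sketch of per-blip participant sets, not Wave’s actual data model:

```python
# Hypothetical model: each part of a wave carries its own reader set,
# so a reply can narrow (or widen) who may see it.
wave = {
    "blip-1": {"text": "Quarterly numbers attached.",  "readers": {"alice", "bob", "carol"}},
    "blip-2": {"text": "Carol, keep this between us.", "readers": {"alice", "carol"}},
}

def visible_to(wave, user):
    """The wave as a given participant sees it."""
    return {bid: b["text"] for bid, b in wave.items() if user in b["readers"]}

# Alice sees both blips; Bob sees only the first -- the same wave,
# two different documents depending on who is reading.
print(sorted(visible_to(wave, "bob")))     # ['blip-1']
print(sorted(visible_to(wave, "alice")))   # ['blip-1', 'blip-2']
```

Surfacing exactly this difference in the UI is what the “possible approach” above amounts to.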

WebKit adds 3D

The developers of the WebKit HTML rendering engine (the one used in Apple’s Safari browser) have added 3D styles to CSS. This allows layers to be rotated, scaled and moved in 3D space.

You need to download a nightly build of the browser to see it working.
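As a rough illustration, the syntax uses vendor-prefixed properties along these lines — treat the exact property names and values as provisional until a spec settles:

```css
/* Give the parent a 3D viewing context; smaller values = stronger effect */
#stage {
  -webkit-perspective: 800;
}

/* Rotate a layer around the vertical axis, out of the page plane */
#card {
  -webkit-transform: rotateY(45deg);
  -webkit-transition: -webkit-transform 1s ease-in-out;
}
```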

I can think of quite a number of applications for this. I wonder if this approach will be adopted by the W3C for a new CSS standard.

Project Natal

Obviously Microsoft feels the need to claim back, with a new controller type, some of the market share the Nintendo Wii took away. Project Natal utilizes a range of biometric sensors for body motion, face and voice recognition.

The video is more a vision than an actual feature presentation. But it is clear what the goals are.

Here is another video from the demonstration that shows what is possible right now:

Director 11 – there’s life in the old dog yet

This is a surprise: Adobe announced Director 11 – the follow-up release to Director MX 2004. After years of speculation, Adobe seems to be committed to developing Director further.

There is a rough comparison chart on the Adobe site which compares Director to Flash. I am not quite convinced the advantages of Director over Flash will set it apart and re-create its former market. The ubiquity of the Flash plug-in, YouTube & Co, ActionScript 3 and Flex have brought a lot of seriousness to the Flash platform in the past 3-4 years.

Alahup CMS

The CMS Alahup! seems to embrace a lot of the interaction design techniques of Web 2.0 applications (blind-ups & -downs, yellow flashes, spinning activity icons, etc.) even though it appears to be a desktop application. The website states that the interface is based on Flash. So this application may well be an example of what can happen when you pair Flash with standard GUI and AJAX approaches.

Screenshot of the Alahup application

Besides that, it also seems to be a very easy-to-use CMS for small and medium-sized sites (although I suspect the UI is not very accessible). The documentation for developers seems to be very good as well. On the site there are demo movies, a list of user features and developer features – and a blog. If you like PHP and Smarty, then Alahup is worth a look.

Pavel

This is very interesting: a multi-user note-taking web-application. Click on this screenshot to get to the 5 minute screencast:

Pavel Screencast

I’d describe Pavel as some kind of “JotSpot Live with Tinderbox-ish notes” (see JotSpot Live and Tinderbox). The secret behind the synchronous updates of web pages between users is some code called LivePage. It is part of Nevow, which in turn is part of Twisted (which is implemented in Python). Here is an article about Nevow, and some more detailed documentation as well. Twisted has grown up in the area of web application frameworks. It is extremely powerful, offering strong modules & classes for almost anything you can do with a network attached to a computer. It offers ready-made tools for activities many Ruby and PHP developers have never even remotely heard of! Pavel is just one example.
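The pattern that makes Twisted (and thus LivePage) tick is event-driven networking: register callbacks on sockets and let a reactor dispatch them. Here is a minimal sketch of that pattern using Python’s standard `selectors` module rather than Twisted itself — the function names are invented, and this is an illustration of the idea, not Twisted’s actual API:

```python
import selectors
import socket

# A miniature event loop in the spirit of Twisted's reactor: sockets are
# registered with a callback, and whichever becomes readable is dispatched.
sel = selectors.DefaultSelector()

def echo_upper(conn):
    # The registered callback: read what arrived and echo it back upper-cased.
    data = conn.recv(1024)
    if data:
        conn.sendall(data.upper())

# socketpair() gives two connected sockets in one process,
# so the example needs no external server.
client, server = socket.socketpair()
sel.register(server, selectors.EVENT_READ, echo_upper)

client.sendall(b"hello")
for key, _ in sel.select(timeout=1):
    key.data(key.fileobj)          # dispatch the callback for this socket

reply = client.recv(1024)
print(reply)                       # b'HELLO'
```

Twisted wraps this loop in its reactor and layers protocols, deferreds and whole server implementations on top – which is exactly the head start over hand-rolled loops that the paragraph above is pointing at.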

Google Maps via Flash

Paul Neave shows how to integrate Google Maps with Flash. Amazing! This example shows the power of web APIs combined with a cutting-edge interactive tool like Flash (you can even rotate the maps via the compass wheel). Now he just needs to find a way to let people seamlessly replace the DHTML application provided by Google with his Flash client.

Paul has also some other very nice experiments in his Flash lab.

Audioscrobbler turned Last.fm – redesigned

I wrote about Audioscrobbler before. Now I found they’ve merged it with their Last.fm service and also reworked their design – and it seems they improved usability a lot.

Interestingly, they offer blogs and tagging (they seem to have learned the lessons). The personal radio can now be used with a separate client (which is not yet available on OS X, so I can’t test it). Anyway, Last.fm is a very useful way to get introduced to new music I will probably like: I just have to browse the playlists of my neighbours.

I remember a similar feature back in the Napster days in 1999/2000: you could browse the music libraries of remote users who seemed to share your music taste and learn about artists you had never heard of before. Last.fm and the iTunes Music Store would make a good team. The “users that bought this also bought that” pattern is too simple to be of real value.

Update: Last.fm just released an OS X Player.