Monday, September 7, 2009

Gestural computing

A touchscreen is one thing, but how does that experience really differ from using a tablet pen or a mouse? It's all just point and click, whether by cursor or by finger.

If you look at an Apple keyboard you'll see icons representing commands that live outside the typical interface, e.g. Dashboard or Exposé.

The idea is simple: rather than put these on a touchscreen keyboard, use simple gestures to indicate what you want.

E.g. drawing a circle pops up Dashboard; drawing a square opens Exposé.


Of course this means a few problems:
Gestural commands can easily be misinterpreted. A square drawn with soft corners could be perceived as a circle, for example (see the sketch after this list).
Gestural commands have to be quick. No drawing the sun to indicate brightness; it would take too long.
Gestural commands might be limited initially.
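
To make that concrete, here's a minimal sketch of the circle/square idea in Python. Everything in it is made up for illustration: the thresholds, the COMMANDS table, and the print statements standing in for Dashboard and Exposé; no shipping gesture recognizer works exactly this way. It scores how "round" a closed stroke is: a perfect circle scores 1.0, a perfect square about 0.785, and a square drawn with soft corners drifts up toward the circle's range, which is exactly the misinterpretation problem above.

```python
import math

def stroke_circularity(points):
    """Roundness of a closed stroke: 4*pi*area / perimeter^2.
    A perfect circle scores 1.0; a perfect square scores pi/4 (~0.785)."""
    area = 0.0
    perimeter = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]        # wrap around to close the stroke
        area += x1 * y2 - x2 * y1           # shoelace formula (signed area)
        perimeter += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    return 0.0 if perimeter == 0 else 4.0 * math.pi * area / perimeter ** 2

def classify_gesture(points):
    """Map a drawn stroke to a command name using the roundness score."""
    score = stroke_circularity(points)
    if score > 0.87:        # very round: call it a circle
        return "dashboard"
    if score > 0.70:        # blockier: call it a square
        return "expose"
    return None             # too ambiguous -- ignore it

# Hypothetical command hooks, standing in for whatever the OS would expose.
COMMANDS = {
    "dashboard": lambda: print("Opening Dashboard..."),
    "expose":    lambda: print("Opening Expose..."),
}

def handle_stroke(points):
    name = classify_gesture(points)
    if name in COMMANDS:
        COMMANDS[name]()
    else:
        print("Gesture not recognized")

if __name__ == "__main__":
    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    circle = [(math.cos(2 * math.pi * k / 32), math.sin(2 * math.pi * k / 32))
              for k in range(32)]
    handle_stroke(circle)   # -> Opening Dashboard...
    handle_stroke(square)   # -> Opening Expose...
```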

But the potential of this new world of gestures changes computing from a simple display into something truly interactive. You can 'speak' to your computer with simple signals, and really it's the next step in the evolution of computing: binary entry, command line, point and click, and now signalling with natural gestures.

Imagine this new wave of sign-language computing and the possibilities. Application and game developers can create custom signs for their programs, and users can interact with their computer, and with each other, through simple gestures. Imagine your webcam even understands simple gestures: now you can talk to your friend online and signal to HIS computer the activities you want to do (share a document, send a smiley face, anything!).
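
As for custom signs, here's one rough way to picture how an application might register its own. Again, this is a hypothetical sketch, not any real API: the names TEMPLATES, register_gesture, and recognize, the 32-point resampling, and the 0.25 match threshold are all invented. The developer stores a named template stroke, and whatever the user draws is resampled, normalized, and compared against the registered templates point by point.

```python
import math

def resample(points, n=32):
    """Resample a stroke to n points evenly spaced along its arc length."""
    cum = [0.0]                         # cumulative length at each input point
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        cum.append(cum[-1] + math.hypot(x2 - x1, y2 - y1))
    total = cum[-1]
    if total == 0:
        return [points[0]] * n
    out, j = [], 0
    for k in range(n):
        target = total * k / (n - 1)
        while j < len(cum) - 2 and cum[j + 1] < target:
            j += 1
        seg = cum[j + 1] - cum[j]
        t = 0.0 if seg == 0 else (target - cum[j]) / seg
        out.append((points[j][0] + t * (points[j + 1][0] - points[j][0]),
                    points[j][1] + t * (points[j + 1][1] - points[j][1])))
    return out

def normalize(points):
    """Center the stroke on its centroid and scale it into a unit box,
    so size and screen position don't matter."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    size = max(max(abs(x) for x, _ in pts), max(abs(y) for _, y in pts)) or 1.0
    return [(x / size, y / size) for x, y in pts]

TEMPLATES = {}   # gesture name -> normalized template stroke

def register_gesture(name, points):
    """What an app or game would call to add its own custom sign."""
    TEMPLATES[name] = normalize(resample(points))

def recognize(points, threshold=0.25):
    """Return the registered gesture closest to the drawn stroke, or None."""
    stroke = normalize(resample(points))
    best_name, best_dist = None, float("inf")
    for name, template in TEMPLATES.items():
        dist = sum(math.hypot(px - qx, py - qy)
                   for (px, py), (qx, qy) in zip(stroke, template)) / len(stroke)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# Example: an app registers a zig-zag "send smiley" sign, then the same
# shape drawn twice as large still matches.
register_gesture("send_smiley", [(0, 0), (1, 1), (2, 0), (3, 1)])
print(recognize([(0, 0), (2, 2), (4, 0), (6, 2)]))   # -> send_smiley
```

Matching here ignores rotation and stroke direction just to keep the sketch short; a real recognizer would have to decide how forgiving to be about both.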

It elevates how people interact with computers (which, today, we do badly) and puts us in the driver's seat, because we can teach computers gestures they can learn from.
