Last Friday (2nd October) I attended Remix 09 (many thanks to Andreia for the invitation), a conference day promoted by (and promoting) Microsoft, targeting the world of web design and future interfaces.
My interest was drawn by a talk about the Microsoft Surface project; especially because Microsoft tends to hide specs and implementation, I was wondering what this would be about. The talk was Predicting the Past: A Vision for Microsoft Surface, so I expected a lil’ more self-promotion.
Actually, it turned out to be a very interesting talk (congrats to August on that), with the emphasis on the history of computer interfaces rather than on the Surface itself. So I’ll lay out here some thoughts and notes that I took during the talk:
The evolution of hardware and software pretty much opened up the possibilities for expanding the interfaces to virtual systems (i.e. computers). First we had the CLI (command line interface), which remains an all-time favorite for many but is poorly suited to most tasks nowadays (especially the ones that need rich interaction). With the invention of graphical workstations, we arrived at the GUI (Graphical User Interface – which now stars the word “User” in its name). GUI still had a long way to go, from Douglas Engelbart’s early work and Xerox PARC all the way to our sexy-looking-favorite-OS.
Why is GUI not enough?
The key here is to look at people (the users) and their tasks. Sure, the mouse isn’t a bad invention (it’s actually very good and precise once you learn it), but it certainly isn’t intuitive, nor made for every task. Can you imagine a painter drawing a picture with a mouse? That’s the answer.
So the next generation of interfaces took everything to the physical level. Natural User Interface (NUI) is the term describing the many forms of interacting with virtual systems using our own body or by moving objects (called tangible objects, or tangibles for short).
We shouldn’t forget the many inventions that cleared the path and showed us that many tasks need more than a keyboard and a mouse: gaming controls (joysticks, guns, etc.), sketch pads (digital drawing tools), digital modeling pens (the 3D modeling tools of today), MIDI interfaces (we soon realized that we cannot make real-time music with a mouse), and so on…
But NUI set itself apart by creating a new metaphor, one that is almost non-existent. With all these devices (mouse, keyboard, pen, and so on) we create a metaphor that translates a physical movement into a different virtual action. NUI brings things closer: an interaction in a NUI is much more familiar and real because the metaphor is already embedded in your consciousness.
Mental Model… Physical <-> Cognitive
A thing that August mentioned was the difference between the mental models of these three interface levels:
CLI: disconnected, you have to type certain “codes” to make things happen.
GUI: indirect, you indirectly make things move and happen on screen.
NUI: unmediated, nothing stands between your mind and the interaction level; you have the commands in your mind.
*UI (next generation): the physical and the cognitive will come together to reach a higher level of interaction.
A very (very!) interesting thing mentioned at Remix was the difference between these various layers when it comes to interacting with the system to perform a task:
CLI: directed, you express a very direct command (that you must be aware of) and specify everything by text.
GUI: exploratory, in graphical systems we can explore the world and discover how to perform tasks using the given metaphors (grab, drag, etc.) and the simulated physical tool (the mouse).
NUI: contextual. Interaction in a natural interface is always bound to context. If you are trying to move a dot from one point of the screen to another, you drag it with your hand (intuition/familiarity/social-aspect, one may say…), but if you are trying to draw a line from one point to the other, you will trace it with your finger: exactly the same action, but in a different context.
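That "same gesture, different meaning" idea can be sketched in a few lines of code. This is just a minimal illustration of context-bound gesture dispatch (all names here are hypothetical, not any actual Surface API): one physical drag from a start point to an end point is interpreted differently depending on the active context.

```python
# Minimal sketch of context-dependent gesture interpretation.
# The same physical action (a finger drag) maps to different
# virtual actions depending on the current context/tool.

def interpret_drag(context, start, end):
    """Interpret one drag gesture according to the active context."""
    if context == "move":
        # Moving a dot: only the final position matters.
        return ("moved dot to", end)
    if context == "draw":
        # The identical motion, in draw context, leaves a line behind.
        return ("drew line from", start, "to", end)
    raise ValueError(f"unknown context: {context}")

# The physical gesture is identical; only the context differs.
print(interpret_drag("move", (0, 0), (5, 5)))
print(interpret_drag("draw", (0, 0), (5, 5)))
```

The point of the sketch is that no extra command is issued by the user: the system infers the intended action from context, which is what makes the interaction feel unmediated.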
1) I still have some more on this, which will probably be posted tomorrow, still spinning around the concepts behind NUI – especially regarding multi-touch on a table (this is just part 1 of probably 3).
2) Also, August showed an interesting way to learn/study how users would like to interact with a certain task performed in a multi-touch (MT) environment: a series of workshop user tests done with just plain paper. [very interesting for my work]