Tag Archives: Surface

Battle of the DJs: an HCI perspective of Traditional, Virtual, Hybrid and Multitouch DJing

16 Apr

To be presented at NIME 2011 (New Interfaces for Musical Expression, Oslo). The rest of the program is very interesting too, so please feel free to look around here.

What about DJing?

How does it fit within an HCI perspective?

What forms of interaction exist with DJ gear?

How can one classify those interactions/gear?

The following paper addresses these questions, trying to create a framework of thought concerning DJ interactions – a very idiosyncratic type of “music interface” – and proposes an evaluation of Traditional (analogue turntables, mixers and CD players), Virtual (software), Hybrid (traditional and software in synergy) and Multitouch (virtual and traditional in synergy) DJing.

The term multitouch here refers not to a category but to an implementation of a touch-sensing-surface DJing prototype. Exploration of new interfaces for DJing has been happening for at least 10 years, but an evaluation from this perspective (interactions and devices) can help researchers and developers (as well as DJs) understand which tools fit which style.

NUI design talk

3 Dec

Via a thread in the NUI Group forum, here is a nice video talk by Darren David and Nathan Moody – Designing Natural User Interfaces, from the Interaction Design Association on Vimeo.
http://vimeo.com/4420794

A brief summary of its core topics, and how they apply to my MTdjing application for music control over multitouch surfaces:

  • NUI goal: “eliminate proxy control to increase immersion” -> Computer DJs can get rid of the mouse (an interaction proxy) and manipulate the digital medium with their hands, just as they would with their usual hardware (mixers, turntables, CD decks, FX processors, vinyl records, etc.) – see the sketch after this list.
  • “NUI emotional connotation is for play instead of work (GUI)” -> this means the usual comment heard about computer DJs (“they’re not that physical / they don’t move that much / they’re checking email”) collapses, since with an MTdjing application a DJ can perform all the tasks of his favorite DJing software with his hands moving around a table like a regular DJ.
  • “Manage the user expectation issue” – luckily, a DJing application is an easy case here because we have an advantage: “this is the user’s old method”. Table + hand interaction is the way everyone DJed until recently (when software like Traktor and others reached the general public).
  • “Expectations of GUI” – this can be a problem, because DJs who use current virtual solutions (Traktor and other software to spin records) are used to a “software application” metaphor – and on a touch table we are talking about a NUI, where interaction is quite different from an application running inside a computer: no mouse, no keyboard, no windows or operating system, etc… So it is hard to develop a metaphor that will serve both the traditional DJs and the computer DJs.
  • “Multi-user paradigm” – this is where it really gets tricky. For the MTdjing prototypes we are not considering identifying each user – one of the core challenges of multitouch nowadays – because DJing is a task that can be done by multiple users without any immediate need to know who’s who. Of course there are cases where a multi-user multitouch DJing system would matter, and I’ll soon write a post with some thoughts on “if we knew whose hand this is”.
  • “Predictable, guessable interface” – because we’re aiming for a mix of metaphors (somewhere between what computer DJs see on their laptop screen and what hardware DJs use with their hands), the interface will be inherently discoverable and natural for the users.
  • “Real gestures matter” – nothing needs to be said about this; it just goes back to the previous bullet.
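
To make the “no proxy” point concrete, here is a minimal sketch – hypothetical names only, not the actual MTdjing code – of touch coordinates being mapped straight onto DJ controls (a crossfader strip and a jog wheel), with no mouse or pointer abstraction in between:

```python
# Hypothetical sketch: raw touch coordinates drive the DJ controls directly.
import math


class Crossfader:
    """Fader whose position is simply the finger's normalised x coordinate."""

    def __init__(self):
        self.position = 0.5  # 0.0 = deck A only, 1.0 = deck B only

    def touch_moved(self, x):
        # The finger position *is* the fader position: a 1:1 physical mapping.
        self.position = min(max(x, 0.0), 1.0)


class JogWheel:
    """Virtual platter nudged by the angle a finger sweeps around its centre."""

    def __init__(self, centre):
        self.centre = centre
        self.pitch_bend = 0.0

    def touch_moved(self, prev_pt, new_pt):
        cx, cy = self.centre
        a0 = math.atan2(prev_pt[1] - cy, prev_pt[0] - cx)
        a1 = math.atan2(new_pt[1] - cy, new_pt[0] - cx)
        # The angular delta of the finger becomes a momentary pitch bend,
        # mimicking the nudge of a real vinyl record.
        self.pitch_bend = a1 - a0


if __name__ == "__main__":
    fader = Crossfader()
    fader.touch_moved(0.8)        # finger is 80% of the way across the strip
    print(fader.position)         # -> 0.8, no pointer proxy involved

    wheel = JogWheel(centre=(0.5, 0.5))
    wheel.touch_moved((0.7, 0.5), (0.68, 0.55))
    print(round(wheel.pitch_bend, 3))
```

Because each touch point drives its own control, two hands (or two people) can work the fader and the platter at the same time, which is part of what makes the table feel like regular DJ hardware.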

Final note: watch the video.

Gesture Research #1 (multitouching…)

28 Oct

So I gathered a collection of gestures (which will later become my pool of user tests to analyze user behaviour, prior to defining the gestures for my DJ system) by observing multiple systems and multitouch demos (a sketch of how this pool could be recorded appears at the end of this post):

a) Reactable

(note: the Reactable relies mainly on tangible objects, but parameter changes are done via gestures/touch, so it is an interface worth researching)

[Sketches: Reactable gestures 1 and 2]

Gathered info from: luckily, I’ve used the Reactable once – after a concert/showcase at a festival where I played with Whit – and got first-person experience with the incredible interface. I’ve also seen it performed live twice in Portugal, and there are plenty of videos out there for analysis. On a more technical note, you can access the articles on the Reactable project at the UPF archive page.


b) tbeta demos

(i.e. the very popular photo demo, Google Maps navigation, etc…)

[Sketches: tbeta demo gestures 1, 2 and 3]

Gathered info from: the demos package available on the NUI Group page.

c) Surface

(note: the Surface has pretty much the same gestures as the rest of the MT tables)

[Sketch: Surface gesture 1]

Gathered info from: Microsoft keeps this one closed, but I tried the Surface recently at the MIX event – where I saw the talk by August de los Reyes (director of the project) – and had my first-person experience with the product there. There are also a lot of videos to analyze the types of gestures mainly used, but there is a void of technical information available.


d) iPhone and similar PDA/mobile devices

[Sketches: mobile gestures 1 and 2]

Gathered info from: recently I participated in the Future Places festival, and on the final day I played with THE FUTURE PLACES IMPROMPTU ALL-STARS ORCHESTRA, which gathers many artists for musical improvisation – luckily there were two guys playing with iPhone apps, one using sounds to feed an analogue synth and the other using a touch app that sequenced music and sounds. Watching them interact with those portable “instruments” told me a lil’ something about the gestures.

And: I’ve also seen and tried myself iPhone apps that use gesture recognition to extend the capabilities of touch.
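
As a closing aside, here is a minimal sketch – the structure and entries are hypothetical, just to illustrate the idea – of how this observed-gesture pool could be recorded so it can later drive the user tests:

```python
# Hypothetical sketch of the gesture pool gathered above, structured so it can
# later drive user tests (which gesture do users try for which DJ task?).
from dataclasses import dataclass, field


@dataclass
class ObservedGesture:
    name: str                # e.g. "one-finger drag", "two-finger rotate"
    fingers: int             # number of touch points involved
    seen_on: list = field(default_factory=list)             # systems where it was observed
    candidate_dj_tasks: list = field(default_factory=list)  # DJ tasks it might map to


gesture_pool = [
    ObservedGesture("one-finger drag", 1,
                    seen_on=["Reactable", "tbeta demos", "Surface", "iPhone"],
                    candidate_dj_tasks=["move fader", "scratch/nudge track"]),
    ObservedGesture("two-finger pinch/spread", 2,
                    seen_on=["tbeta demos", "Surface", "iPhone"],
                    candidate_dj_tasks=["zoom waveform", "set loop length"]),
    ObservedGesture("rotate around an object", 1,
                    seen_on=["Reactable"],
                    candidate_dj_tasks=["turn a parameter knob", "pitch/EQ"]),
]

# Quick query while preparing the test plan: which gestures could control a fader?
for g in gesture_pool:
    if "move fader" in g.candidate_dj_tasks:
        print(g.name, "-> seen on:", ", ".join(g.seen_on))
```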

Some thoughts on MT inspired by August de los Reyes (pt. 1)

6 Oct

Last Friday (2nd October) I attended Remix 09 (many thanks to Andreia for the invitation), a conference day promoted by (and promoting) Microsoft, targeting the world of web design and future interfaces.

My interest was drawn by a talk about the Microsoft Surface project; especially because Microsoft tends to hide specs and implementation details, I was wondering what it would be about. The talk was Predicting the Past: A Vision for Microsoft Surface, so I expected a lil’ more self-promotion.

Actually, it turned out to be a very interesting talk (congrats to August on that), with the emphasis on the history of computer interfaces rather than on the Surface. So I’ll lay out here some thoughts and notes that I took during the talk:

The evolution…

The evolution of hardware and software pretty much opened up the possibilities for expanding the interfaces to virtual systems (i.e. computers). First we had the CLI (command line interface), which is an all-time favorite for many but is ill-suited for most tasks nowadays (especially the ones that need rich interaction). With the invention of graphical workstations, we arrived at the GUI (Graphical User Interface – which now stars the word “User” in its name). The GUI still had a long way to go, from Douglas Engelbart’s early work and Xerox PARC all the way to our sexy-looking favorite OS.

Why is the GUI not enough?

The key here is to look at people (the users) and their tasks. Sure, the mouse isn’t a bad invention (it’s actually very good and precise once you learn it), but it sure isn’t intuitive nor made for every task. Can you imagine a painter drawing a picture with a mouse? That’s the answer.

The NUI.

So the next generation of interfaces took everything to the physical level. Natural User Interface (NUI) is the term describing many forms of interacting with virtual systems using our own body or by moving objects (called tangible objects, or tangibles for short).

We shouldn’t forget the many inventions that cleared the path and showed us that many tasks need more than a keyboard and a mouse: gaming controls (joysticks, guns, etc…), sketch pads (digital drawing tools), digital modeling pens (the 3D modeling tools of today), MIDI interfaces (we soon realized that we cannot make real-time music with a mouse), and so on…

But NUI sets itself apart by creating a new metaphor that is almost non-existent. With all these devices (mouse, keyboard, pen, and so on) we are creating a metaphor that simulates a physical movement which then translates into a different virtual action. NUI brings things closer: an interaction in a NUI is much more familiar and real because the metaphor is embedded in your consciousness.

Mental Mode… Physical <-> Cognitive

One thing August mentioned was the difference between the mental modes of these three interface levels:

CLI: disconnected, you have to type certain “codes” to make things happen.

GUI: indirect, you are indirectly making things move and happen.

NUI: unmediated, nothing stands between your mind and the interaction level; you have the commands in your mind.

*UI (next generation): the physical and the cognitive will come together to reach a higher level of interaction.

Interaction type…

A very (very!) interesting thing mentioned at Remix is the difference between these various layers when it comes to interacting with the system to perform a task:

CLI: directed, you express a very direct command (that you must be aware of) and specify everything by text.

GUI: exploratory, in graphical systems we can explore the world and discover how to perform tasks using the given metaphors (grab, drag, etc.) and the simulated physical tool (the mouse).

NUI: contextual. Interaction in a natural interface is always bound to context. If you are trying to move a dot from one point of the screen to another, you drag it by hand (intuition/familiarity/social aspect, one may say…), but if you are trying to draw a line from one point to the other, you trace it with your finger – exactly the same action, but in a different context.
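
To make that “same action, different context” idea concrete, here is a minimal sketch (entirely hypothetical code): the identical one-finger drag is resolved as “move the dot” or “draw a line” depending only on the active context.

```python
# Hypothetical sketch: one and the same one-finger drag is interpreted
# differently depending on the context the surface is currently in.

def interpret_drag(context, start, end):
    """Resolve a one-finger drag (start -> end) according to context."""
    if context == "move":
        # The dragged object simply follows the finger to its new position.
        return {"action": "move_dot", "to": end}
    if context == "draw":
        # Exactly the same physical gesture, now leaving a stroke behind.
        return {"action": "draw_line", "from": start, "to": end}
    return {"action": "ignore"}


# Same gesture, two different outcomes:
print(interpret_drag("move", (0.1, 0.1), (0.8, 0.4)))
print(interpret_drag("draw", (0.1, 0.1), (0.8, 0.4)))
```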

Final note

1) I still have some more on this, which will probably be posted tomorrow, and which still revolves around the concepts behind NUI – especially regarding multi-touch on a table (this is just part 1 of probably 3).

2) August also showed an interesting way to learn/study how users would like to interact when performing a certain task in an MT environment: a series of workshop user tests with just plain paper. [very interesting for my work]