Muscle-Propelled Force Feedback: accepted at CHI 2013

31 Jan

Our prototype electrically stimulates the user's arm muscles via the electrodes shown, causing the user to involuntarily tilt the device. By countering this force, the user perceives force feedback.

A participant from our user studies experiencing the force feedback sensations delivered by our prototype. 

More information: http://www.hpi.uni-potsdam.de/baudisch/projects/muscle-propelled-force-feedback.html

Keynote on Biohacking at Campus Party Europe 2012

5 Nov

Biohacking and Brain-Computer Interfaces: the future of Human-Computer Interaction (video/youtube)

As a discipline, Human-Computer Interaction (HCI) has always wandered along the boundary between art and science, partly due to its multidisciplinary genealogy, but also because it is a process of envisioning and questioning the future. The ideas of cyborgs, augmented humans, and bio-feedback have been strong research themes in HCI for decades.

However, two currents need to merge for us to grasp the "augmented human": first, sensing and understanding ourselves, by reading and interpreting our bio-signals; and second, letting ourselves be controlled. My keynote (video here) on the Leonardo Stage revolved around these matters, where art and science naturally meet to discuss how contemporary bodies will sound and look.

Artists & scientists: Push Boundaries! My keynote at Body Controlled #4 / at LEAP Berlin

23 Jul

My keynote at LEAP Berlin, about art and science – focusing on how artists and scientists push boundaries! [Photo courtesy of Peter Kirn]

Read the bio/synopsis from the program of Body Controlled here.

More will be published in an upcoming issue of the Canadian Journal of Electroacoustics!

Interactive Construction (Constructable) accepted at ACM UIST 2012

23 Jul

Constructable by Stefanie Mueller, Pedro Lopes, and Patrick Baudisch

To appear in Proceedings of UIST ’12 (Paper). Constructable is an interactive drafting table that produces precise physical output in every step. Users interact by drafting directly on the workpiece using a hand-held laser pointer. The system tracks the pointer, beautifies its path, and implements its effect by cutting the workpiece using a fast high-powered laser cutter.
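
As a hedged illustration of what one such "beautify" step can look like (a sketch of my own, not the paper's algorithm): snapping a hand-drawn pointer path to the straight segment that best fits it.

    import numpy as np

    def snap_to_line(points):
        """Snap tracked pointer samples (an n x 2 array of x/y positions)
        to their best-fit straight segment via PCA."""
        pts = np.asarray(points, dtype=float)
        mean = pts.mean(axis=0)
        # The first right-singular vector is the dominant stroke direction.
        _, _, vt = np.linalg.svd(pts - mean)
        direction = vt[0]
        # Project samples onto that direction; the extremes become the endpoints.
        t = (pts - mean) @ direction
        return mean + t.min() * direction, mean + t.max() * direction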

More soon!

#ITS2011: Augmenting Touch Interaction Through Acoustic Sensing

9 Nov

Pedro Lopes, Ricardo Jota, Joaquim Jorge
[Published at ACM ITS 2011, Kobe, Japan / paper]

Recognizing how a person actually touches a surface has generated strong interest within the interactive surfaces community. Although we agree that touch is the main source of information, unless other cues are accounted for, user intention might not be accurately recognized. We propose to expand the expressiveness of touch interfaces by augmenting touch with acoustic sensing. In our vision, users can naturally express different actions by touching the surface with different body parts – fingers, knuckles, fingernails, punches, and so forth – not always distinguishable by touch technologies, but recognizable through acoustic sensing. Our contribution is the integration of touch and sound to expand the input language of surface interaction.
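
As a toy sketch of the idea (illustrative features, labels, and helper names of my own, not the paper's actual classifier), one could distinguish touch types by simple acoustic features of the contact sound:

    import numpy as np

    def acoustic_features(samples, rate=44100):
        """Two simple features of a short contact-sound clip:
        peak amplitude and spectral centroid."""
        spectrum = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
        centroid = (freqs * spectrum).sum() / (spectrum.sum() + 1e-9)
        return np.array([np.abs(samples).max(), centroid])

    def classify_touch(samples, prototypes):
        """Nearest-neighbour match against labelled example sounds, e.g.
        prototypes = {"fingertip": f1, "knuckle": f2, "fingernail": f3},
        where each value comes from acoustic_features()."""
        f = acoustic_features(samples)
        return min(prototypes, key=lambda label: np.linalg.norm(f - prototypes[label]))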

Update: some coverage in the press [Portuguese press / Exame Informática]

#ACE2011 paper: Hands-on Tabletop LEGO Application

9 Nov

Daniel Mendes, Pedro Lopes and Alfredo Ferreira, ACE 2011. 

The recent widespread adoption of multi-touch interactive surfaces makes them a privileged entertainment device. Taking advantage of such devices, we present an interactive LEGO application, developed according to an adaptation of building-block interactions and gestures for multi-touch tabletops.

Our solution (LTouchIt) accounts for bimanual multi-touch input, allowing users to create 3D models on the tabletop surface. Through user testing, we compared LTouchIt with two LEGO applications. Results suggest that our interactive application can provide a hands-on experience, more adequate for entertainment purposes than the available LEGO applications, which are mouse-based and follow a traditional single-input interaction scheme.

Upcoming paper presentations! #ITS2011 and #ACE2011

9 Nov

Two new papers:

ACM ACE 2011: Hands-on Interactive LEGO tabletop

ACM ITS 2011: Augmenting Touch Interaction Through Acoustic Sensing

SBIM 2011: Combining bimanual and pen-based input for 3D modelling

2 Aug

Multitouch-enabled surfaces can bring advantages to modelling scenarios, in particular if bimanual and pen input can be combined. In this work, we assess the suitability of multitouch interfaces for 3D sketching tasks. We developed a multitouch-enabled version of ShapeShop, whereby bimanual gestures allow users to explore the canvas through camera operations while using a pen to sketch. This provides a comfortable setting familiar to most users. Our contribution focuses on comparing the combined approach (bimanual and pen) to a pen-only interface on similar tasks. We conducted the evaluation with the help of ten sketching experts who exercised both techniques. Results show that our approach both simplifies workflow and lowers task times when compared to the pen-only interface, which is what most current sketching applications provide.
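
A minimal sketch of the input-routing idea (the event shape and method names below are assumptions for illustration, not ShapeShop's API): the pen draws while touches steer the camera.

    def route_event(event, sketch_canvas, camera):
        """Dispatch one input event: pen sketches, hands operate the camera."""
        if event.device == "pen":
            sketch_canvas.add_stroke_point(event.x, event.y)
        elif event.device == "touch":
            if event.num_contacts >= 2:
                # two-finger / two-handed gesture: zoom or rotate the view
                camera.pinch(event.scale, event.angle)
            else:
                camera.pan(event.dx, event.dy)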

Battle of the DJs: an HCI perspective of Traditional, Virtual, Hybrid and Multitouch DJing

16 Apr

To be presented at NIME 2011 (New Interfaces for Musical Expression, Oslo). The rest of the program is very interesting, so please feel free to look around here.

What about DJing?

How does it fit within an HCI perspective?

What forms of interaction exist with DJ gear?

How can one classify those interactions/gear?

The following paper addresses these questions, trying to create a framework of thought concerning DJ interaction – a very idiosyncratic type of "music interface" – and proposes an evaluation of Traditional (analogue turntables, mixers, and CD players), Virtual (software), Hybrid (traditional and software in synergy), and Multitouch (virtual and traditional in synergy) setups.

The term multitouch here denotes not a categorization but rather an implementation of a touch-sensing-surface DJing prototype. Exploration of new interfaces for DJing has already been happening for at least 10 years, but an evaluation from this perspective (interactions and devices) can help researchers and developers (as well as DJs) understand which tools fit which style.

Sharing (old) knowledge #2: Starting Multitouch Dev

23 Mar

The following information was written by me one year ago on the Intelligent Multimodal Interfaces course's internal forum. Since there are new students in need of fresh info, I repost it here – make the best of it:

Here it goes: multitouch frameworks, possibilities, and simulators – all of them open source and cross-platform.

1) Multitouch possibilities
As far as developing MT (short for multitouch) applications goes, you have a wide number of choices. The best way to move around is to stick to standards (namely OSC and TUIO), because they are the main driving forces behind the MT communities (the biggest is the NUI Group). If you stick with frameworks that support these standards, you can find your way among new programs, easily integrate new code (or code from others!) – and new hardware too.

The most important standard is TUIO, which describes a multitouch event. Imagine that a user touches the table (surface, display, whatever): you will receive a TUIO-formatted message saying (a minimal listener sketch follows the list):
– whether it was a touch or an identified object (the latter carry so-called fiducial markers)
– position
– acceleration (and many other things)
– a unique identifier (this is the most important thing apart from the x/y position)
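
Here is what receiving those messages can look like, as a minimal Python sketch (assumptions on my part: the python-osc library, TUIO's conventional port 3333, and only the 2D cursor profile being handled):

    from pythonosc import dispatcher, osc_server

    cursors = {}  # session id -> (x, y); our own bookkeeping

    def on_2dcur(address, *args):
        command = args[0]
        if command == "set":
            # set: session id, x, y (normalized 0..1), then velocity/acceleration
            sid, x, y = args[1], args[2], args[3]
            cursors[sid] = (x, y)
            print("cursor %s at (%.3f, %.3f)" % (sid, x, y))
        elif command == "alive":
            # "alive" lists the session ids still touching; drop all others
            alive = set(args[1:])
            for sid in list(cursors):
                if sid not in alive:
                    del cursors[sid]
        # "fseq" messages carry the frame sequence number; ignored here

    disp = dispatcher.Dispatcher()
    disp.map("/tuio/2Dcur", on_2dcur)
    osc_server.BlockingOSCUDPServer(("127.0.0.1", 3333), disp).serve_forever()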

With this you can code your application in no time (okay.. a lot of time may be needed =P). Plenty of TUIO client implementations are available (full list here).

I've used the C++, Processing, PureData, and Flash AS3 ones. They are pretty much the same idea/concept, but ported to the idiosyncrasies of each language. You are probably not familiar with either Processing or PureData, but they are programming languages designed for multimedia (Processing is a Java-derived language developed at MIT for graphic designers, and PureData is mainly geared toward sound programming, but it is a graphical language, so there is no textual code).

I recommend you base your decision on the following factors:
1 – are you comfortable in this language?
2 – what graphics back-end do you want to use?
3 – how fast should it run?

As you know, 1) matters a lot… 2) depends on what graphics you want (OpenGL favours C++, Java, Processing, or Pd, while Flash is a different possibility – there's also PyMT, a Python multitouch framework, which you can join with pyglet, OpenGL for Python). 3) is very easy; it goes like this, from fastest to slowest (based on my experience): C++, PureData or Java, Processing or Flash.

So it's all up to you. For instance, in my thesis I decided to use Flash AS3 for the interface, because it's easy to code (well.. it should be) and gives nice graphics with little effort. Because it is not an efficient framework, I use a C++ core, and the audio engine is in PureData (because I'm familiar with it and it's reasonably fast) – here's a video.

2) Multitouch Trackers

These are the core components: they track the video input and determine how many touch points there are (see the sketch below). Once again, if you use a standard tracker that can output TUIO messages, you can use any of the frameworks I've mentioned.
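
As a toy version of the tracking step these tools perform (an OpenCV-based sketch of my own, not code from any of them – real trackers add background subtraction, filtering, calibration, and persistent touch IDs):

    import cv2

    def find_touches(frame, threshold=60, min_area=30):
        """Return centroids of bright blobs in a camera frame –
        candidate touch points on an FTIR/DI-style surface."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        touches = []
        for c in contours:
            if cv2.contourArea(c) >= min_area:
                m = cv2.moments(c)
                touches.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return touches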

The full list of trackers is here; I'd recommend:

  • reacTIVision: a computer vision framework for object tracking and basic multi-touch
  • Community Core Vision: an OpenFrameworks based multi-touch tracker (formerly tbeta)
  • Touchlib: the first free library for multi-touch surfaces based on FTIR and DI (currently discontinued in favor of CCV – see above)
  • gstreamer-tuio: a gstreamer plugin for blob detection, sending TUIO
  • bamboo-tuio: TUIO for Wacom Bamboo (Pen&)Touch tablets
  • Tongseng: a TUIO wrapper for Macbook multi-touch pads
  • Wiimote Whiteboard: platform-independent Wiimote IR tracker supporting TUIO
  • WiimoteTUIO: another application sending the locations of IR sources detected by a Wiimote via TUIO
  • TuioTouch: a simple web based TUIO server

In the VIMMI lab we use CCV (Community Core Vision), so I'd recommend that one – it is by far the simplest to set up and use (just run it.. hehe).

3) Multitouch Simulators

As I said, not everyone has a touch table. So there are simulators that allow you to generate TUIO messages with the mouse. In this video you can see me resizing some objects in Flash using the TUIO Simulator (from the reacTIVision project) – by far the simplest and most complete one. Here are the ones I recommend:

  • Java based standard TUIO Simulator application (platform independent)
  • SimTouch: a TUIO/FlashXML simulator using the Adobe Air runtime
  • QMTsim: Qt based multi-touch simulator alternative (Win32, Linux)

SimTouch has a nice idea: it is transparent, so you can put it on top of your running application – but… sometimes it seems to have some bugs. So I'd rather use the TUIO Simulator from the Pompeu Fabra folks (the ones who developed the Reactable).
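
If you just need to fake a finger programmatically, a mini-simulator can look like this sketch (again assuming python-osc; a real simulator also fills in velocity and acceleration instead of sending zeros):

    import time
    from pythonosc import osc_bundle_builder, osc_message_builder
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 3333)  # TUIO's conventional port

    def send_frame(session_id, x, y, frame):
        """Emit one TUIO 1.1 cursor frame: alive + set + fseq in one OSC bundle."""
        bundle = osc_bundle_builder.OscBundleBuilder(osc_bundle_builder.IMMEDIATELY)
        for payload in (["alive", session_id],
                        ["set", session_id, x, y, 0.0, 0.0, 0.0],  # vx, vy, accel
                        ["fseq", frame]):
            msg = osc_message_builder.OscMessageBuilder(address="/tuio/2Dcur")
            for arg in payload:
                msg.add_arg(arg)
            bundle.add_content(msg.build())
        client.send(bundle.build())

    # Drag a fake finger across the surface, like moving the mouse in a simulator.
    for i in range(10):
        send_frame(session_id=1, x=0.1 + 0.08 * i, y=0.5, frame=i)
        time.sleep(1 / 60)  # roughly one frame at 60 Hz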

Conclusions

There are a million more things I could talk about, and the text I've written is already too long… so if you need something, ask :) Maybe I can help.

Links

My first posts on the thesis blog show a lot of the decisions and possibilities you have for multitouch. I also have a couple of Vimeo videos trying some Flash multitouch libs, etc… and playing Pong.