Archive | HCI

#ITS2011: Augmenting Touch Interaction Through Acoustic Sensing

9 Nov

Pedro Lopes, Ricardo Jota, Joaquim Jorge
[Published at ACM ITS 2011, Kobe, Japan / paper]

Recognizing how a person actually touches a surface has generated a strong interest within the interactive surfaces community. Although we agree that touch is the main source of information, unless other cues are accounted for, user intention might not be accurately recognized. We propose to expand the expressiveness of touch interfaces by augmenting touch with acoustic sensing. In our vision, users can naturally express different actions by touching the surface with different body parts, such as fingers, knuckles, fingernails, punches, and so forth – not always distinguishable by touch technologies but recognized by acoustic sensing. Our contribution is the integration of touch and sound to expand the input language of surface interaction.

Update: some media coverage in the press [Portuguese press / Exame Informática]


#ACE2011 paper: Hands-on Tabletop LEGO Application

9 Nov

Daniel Mendes, Pedro Lopes and Alfredo Ferreira, ACE 2011. 

The recent widespread adoption of multi-touch interactive surfaces makes them privileged entertainment devices. Taking advantage of such devices, we present an interactive LEGO application, developed according to an adaptation of building-block interactions and gestures for multi-touch tabletops.

Our solution (LTouchIt) supports bimanual multi-touch input, allowing users to create 3D models on the tabletop surface. Through user testing, we compared LTouchIt with two LEGO applications. Results suggest that our interactive application can provide a hands-on experience better suited for entertainment purposes than the available LEGO applications, which are mouse-based and follow a traditional single-input interaction scheme.

Battle of the DJs: an HCI perspective of Traditional, Virtual, Hybrid and Multitouch DJing

16 Apr

To be presented at NIME 2011 (New Interfaces for Musical Expression, Oslo). The remainder of the program is very interesting, so please feel free to look around here.

What about DJing?

How does it fit within an HCI perspective?

What forms of interaction exist with DJ gear?

How can one classify those interactions/gear?

The following paper addresses these questions, trying to create a framework of thought concerning DJ interactions – a very idiosyncratic type of “music interface” – and proposes an evaluation of Traditional (analogue turntables, mixers and CD players), Virtual (software), Hybrid (traditional and software in synergy) and Multitouch (virtual and traditional in synergy) setups.

The term multitouch here denotes not a categorization but rather an implementation of a touch-sensing-surface DJing prototype. Exploration of new interfaces for DJing has been happening for at least 10 years, but an evaluation from this perspective (interactions and devices) can help researchers and developers (as well as DJs) understand which tools fit which style.

Sharing (old) knowledge #1: Speech recognition and synthesis

23 Mar

The following information was written by me one year ago on the Intelligent Multimodal Interfaces course internal forum. Since there are new students in need of fresh info, I am reposting it here; make the best of it:


Sphinx Speech Recognition (open source); a cool demo can be seen here (controlling PureData with speech) – I also found this how-to interesting (using Python + Sphinx, from the same author)

Sphinx4 (a rewrite of Sphinx in Java – more cross-platform); there is also a pocket version for mobile systems (iPhone and so on) – it is all part of this project from Carnegie Mellon.


For speech synthesis (text-to-speech):

eSpeak (written in C, runs on Windows or Linux)
FreeTTS (Java, cross-platform)
flite (written in C, once again from CMU)

Web version: AT&T text-to-speech (not openly licensed)
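As a minimal illustration of the synthesis side, here is a Python sketch that shells out to the eSpeak command-line tool, assuming the `espeak` binary is installed and on the PATH (the `speak` helper is my own name, not part of any of the libraries above):

```python
import shutil
import subprocess


def speak(text: str) -> bool:
    """Synthesize `text` with the eSpeak CLI; return True on success.

    Returns False gracefully when espeak is not installed, so the
    sketch can be imported on machines without a TTS engine.
    """
    if shutil.which("espeak") is None:
        return False
    # `espeak "some text"` speaks the string through the default audio device.
    return subprocess.run(["espeak", text]).returncode == 0


if __name__ == "__main__":
    speak("Hello from eSpeak")
```

The same pattern works for flite (swap in `["flite", "-t", text]`), which makes it easy to compare engines side by side.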

Interacção 2010 final program announcement

25 Sep

The MtDjing paper has been accepted at Interacção 2010, as previously posted. The conference page now holds the final version of the program, which you can view on the web or as a PDF document.

Our presentation will be held in the first session of the day, following the invited talk by Shahram Izadi from Microsoft, entitled “Making computers more natural to use”:


Invited Speaker

Making Computers More Natural to Use
Shahram Izadi (Microsoft)

16h00 Coffee Break

Session I – Multi-touch Interfaces

Conjuntos de gestos de comando para ferramentas de desenho em dispositivos sem teclado
Tiago Gomes, Carlos Duarte, Luís Carriço, Joana Neca e Tiago Reis

Técnicas de interacção para revisão de cenários 3D
Bruno de Araújo, Diogo Mariano, Ricardo Jota, Alfredo Ferreira e Joaquim Jorge

Interacção multi-toque no contexto do DJing
Pedro A. Lopes, João A. Madeiras Pereira e Alfredo Ferreira

Combining different types of interaction on multi-touch surfaces
Tarquínio Mota

Also in this “multitouch” session is the presentation of yet another paper from the VIMMI group (authors: Bruno de Araújo, Diogo Mariano, Ricardo Jota, Alfredo Ferreira and Joaquim Jorge), from the European project MAXIMUS, presented by my colleague Diogo Mariano.