Mercedes-Benz has shown a gesture-based UI concept that highlights some important lessons about gestural interfaces.
Firstly, good on Mercedes-Benz for experimenting with gesture-based user interfaces for cars of the future, and before I get too far into critique mode, it’s worth saying that this concept is just that – a concept. Nevertheless, there are some interesting lessons to learn about gestural interfaces.
Using the Hand as a Mouse
As you watch this demonstration, you’ll notice that the user interface relies heavily on the position of the user’s hand. What that means is that the user has to rely on visual feedback from the screen to know if they have their hand in the right position.
In other words, the user interface treats the operator’s hand like a mouse pointer.
The problem with this is that it requires the operator’s attention – they need to monitor the display to place their hand correctly. Now, this is not a great idea in general because our hands and arms are fairly imprecise – especially when dangling in mid-air – but you can see that this is even more of a problem when you are driving and supposed to be focused on the road.
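To make the critique concrete, here is a minimal sketch of the “hand as mouse” model described above. The target names and coordinates are purely illustrative (nothing here comes from the Mercedes-Benz system): the UI only knows a hand position, so selecting a command means holding your hand inside an on-screen region, which is exactly why the driver has to keep watching the display.

```python
# Illustrative sketch of a position-based ("hand as mouse") interface.
# All names and coordinates are hypothetical, not from any real system.

SCREEN_W, SCREEN_H = 1280, 480

# Hypothetical on-screen targets: name -> (x, y, width, height)
TARGETS = {
    "media": (100, 100, 200, 120),
    "navigation": (400, 100, 200, 120),
}

def target_under_hand(x, y):
    """Return which target the tracked hand position currently falls inside."""
    for name, (tx, ty, tw, th) in TARGETS.items():
        if tx <= x <= tx + tw and ty <= y <= ty + th:
            return name
    return None

# A small amount of mid-air wobble moves the hand off the target entirely,
# so the operator must watch the screen to correct their hand position:
print(target_under_hand(150, 150))  # media
print(target_under_hand(320, 150))  # None - hand drifted off the target
```

The failure mode is visible in the last line: a 20-pixel drift silently deselects the command, and the only way the operator finds out is by looking at the screen.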
A couple of alternative approaches come to mind:
- Real Gestures
- Physical Buttons (gasp!)
Rather than relying on hand position, interpreting actual gestures would be a more effective approach. For example, different hand gestures (palm, fist, various numbers of fingers) could be used to represent different commands. (Maybe the system being used here just doesn’t have that level of fidelity.)
The advantage of using gestures is that they are independent of hand position. That means the operator can keep their eyes on the road, while simultaneously making gestures. The disadvantage of gestures is that they require the user to memorise them (except for maybe a few, more intuitive gestures – like palm for stop in many cultures).
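A position-independent gesture scheme can be sketched as a simple lookup from a recognised gesture label to a command. This assumes some hypothetical recogniser has already classified the hand shape (the labels and commands below are my own illustrations, not anything from the demonstrated system); the key point is that hand position never appears in the mapping.

```python
# Hypothetical sketch: position-independent gesture commands.
# Assumes an upstream recogniser emits a gesture label (e.g. "palm",
# "fist", "two_fingers") regardless of where the hand is in space.

GESTURE_COMMANDS = {
    "palm": "pause_media",        # palm-for-stop is fairly intuitive
    "fist": "mute_audio",
    "two_fingers": "next_track",
    "three_fingers": "previous_track",
}

def handle_gesture(label):
    """Map a recognised gesture to a command, ignoring hand position."""
    return GESTURE_COMMANDS.get(label)

print(handle_gesture("palm"))   # pause_media
print(handle_gesture("wave"))   # None - unrecognised gesture, do nothing
```

Note the trade-off the text describes shows up directly in the table: every entry beyond the intuitive ones is something the operator has to memorise.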
Are physical buttons so bad in an environment like a car cockpit? Physical buttons have some particular advantages:
- You get physical feedback through your sense of touch to confirm you’ve selected the command.
- Once spotted visually, an operator can rely on spatial memory and proprioception (our sense of how our limbs are physically deployed) to select a button – they do not have to maintain attention.
- Once used a few times, muscle memory can kick in, negating the need for visual target acquisition (yes, human factors folks really do call it target acquisition).
- Most of the time a physical button has only one function which never changes. This allows for effective mental mapping.
When is a gesture not a gesture?
To my mind, a gesture’s meaning is determined by the relative position or movement of limbs and digits, not merely a position in space.
At the end of the day, both approaches have their role in gesture-based products. Hand positions are best for new users, so long as they can be fully attentive. Gestures are better for situations where the operator’s attention is elsewhere – but they need to be learned.