September 27, 2021

Air-gesture interfaces

Here are some experiments I’ve been wanting to try for a while.

Introduction

1. The Airbar

https://img.designdevelopmenttoday.com/files/base/indm/ddt/image/2019/09/16x9/Neonode_zForce_Sensor_Interaction.5d8e3f3ddd8df.png?auto=format&fit=max&w=1200
image from Neonode

Neonode makes these awesome optical 2D touch sensors, aptly named ‘zForce Air’. I find this product fascinating because it is so flexible in its applications. You can retrofit it onto any surface to make it interactive; the airbar they sell, for example, sticks onto your laptop to turn it into a touchscreen. But the possibilities go further. Why not turn it around and make the area beside the screen interactive? Or, as they suggest, create contactless touch interfaces by bringing the sensor layer forward? Or just point it into thin air?

The latter is what I wanted to try out: creating semi-3D gesture interfaces by simply pointing the airbar into the air, which gives you a 2D air-gesture interface. If you have worked with gestures like this before, you know the tricky parts are always triggering (when and where does a gesture start and end?) and feedback (showing the user that a gesture has been recognized and indicating the effect or magnitude of the gesture). Without these two, gesture interfaces feel slow, confusing and hard to use.

For this reason, 3D gesture interfaces often use an active area, something like a virtual touchscreen floating in the 3D gesture space. The airbar’s 2D active area already defines this quite nicely: stick it on the edge of a product and there is a clear physical definition of where the active area is. That’s why I’m calling it semi-3D.

2. Feedback

The other important thing is feedback. Since the active area is a 2D plane in the air, I was wondering whether an LED array could provide it. Placing the LEDs along the same active sensor edge reduces our 2D space to just 1D, but it gives nice, direct feedback right where the interaction happens.
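To make that more concrete, here is a rough sketch of the kind of mapping I mean, in plain Node.js. The sensor range, LED count and falloff width are placeholder values, and the resulting brightness frame would still need to be pushed to whatever LED driver you actually use.

```js
// Map a touch x-position (in sensor coordinates) onto a 1-D LED strip.
// SENSOR_WIDTH and NUM_LEDS are placeholders for the real hardware values.
const SENSOR_WIDTH = 1000; // max x the sensor reports (assumption)
const NUM_LEDS = 24;       // LEDs along the sensor edge (assumption)

function ledFrame(x) {
  const centre = (x / SENSOR_WIDTH) * (NUM_LEDS - 1);
  const frame = new Array(NUM_LEDS).fill(0);
  for (let i = 0; i < NUM_LEDS; i++) {
    // small glow that follows the finger, fading out over ~3 LEDs
    const d = Math.abs(i - centre);
    frame[i] = Math.max(0, 1 - d / 3); // brightness 0..1
  }
  return frame; // push this to your LED driver of choice
}
```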

So I was wondering: how fun would such an interface be to use? How precise is it? How much functionality can you pack into a given active area, and how complex can you make the interface? How many different interactions can you allow at the same time in one interface without losing simplicity and without interactions masking or blocking each other?

Well, this is my first attempt at trying this out.

Airbar + Raspberry Pi + Node.js

I have been using Node.js a lot recently, so I thought it would be fun to integrate this physical HMI into Node, since from there it can easily be connected to so much else.
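As a rough idea of what that integration could look like: assuming the airbar enumerates as a USB HID device, something like the node-hid package can stream its reports into Node. The vendor/product IDs and the report layout below are placeholders, not the real values — you would check HID.devices() and the sensor’s documentation for those.

```js
// Sketch: read raw HID reports from the sensor and emit touch points.
// Vendor/product IDs and the report parsing are placeholders for illustration.
const HID = require('node-hid');
const EventEmitter = require('events');

const VENDOR_ID = 0x1234;  // placeholder -- check HID.devices()
const PRODUCT_ID = 0x5678; // placeholder

class AirBar extends EventEmitter {
  constructor() {
    super();
    this.device = new HID.HID(VENDOR_ID, PRODUCT_ID);
    this.device.on('data', (buf) => this.emit('touch', this.parse(buf)));
    this.device.on('error', (err) => this.emit('error', err));
  }

  // Placeholder parser: assumes x and y are 16-bit little-endian fields.
  parse(buf) {
    return { x: buf.readUInt16LE(2), y: buf.readUInt16LE(4) };
  }
}

module.exports = AirBar;
```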

1. Follower

This simple visualization is just to test the overall speed, accuracy and effect. There are two versions: one where the blob gets smaller the closer the finger is, and one where it gets smaller the further away the finger is.
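The mapping itself is tiny; here is a sketch of it, with the coordinate range and blob radii as placeholder values. The invert flag switches between the two versions.

```js
// Follower mapping: blob position tracks x, blob radius scales with the
// finger's height above the bar (y). Ranges below are placeholder values.
const MAX_Y = 500;             // furthest distance the sensor reports (assumption)
const MIN_R = 10, MAX_R = 60;  // blob radius in pixels

function follower({ x, y }, invert = false) {
  const t = Math.min(Math.max(y / MAX_Y, 0), 1); // 0 at the bar, 1 far away
  const s = invert ? 1 - t : t;                  // switch between the two versions
  return { cx: x, radius: MIN_R + s * (MAX_R - MIN_R) };
}
```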

2. Buttons

The most basic interaction: a digital button that triggers when the finger drops below a certain height. Here are three of them: a switch, a multi-state button, and a button that opens another menu / function, in this case a slider.
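For reference, here is a sketch of how such a height-triggered button can work, with a bit of hysteresis so it doesn’t retrigger while the finger hovers near the threshold. The coordinates and thresholds are placeholder values in sensor units.

```js
// Height-triggered button: fires when a finger inside its x-range dips below
// triggerY, and re-arms only once the finger leaves or rises above releaseY.
class AirButton {
  constructor({ xMin, xMax, triggerY = 40, releaseY = 80, onPress }) {
    Object.assign(this, { xMin, xMax, triggerY, releaseY, onPress });
    this.armed = true;
  }

  update({ x, y }) {
    const inside = x >= this.xMin && x <= this.xMax;
    if (inside && this.armed && y < this.triggerY) {
      this.armed = false;   // trigger once per press
      this.onPress();
    } else if (!inside || y > this.releaseY) {
      this.armed = true;    // re-arm with hysteresis
    }
  }
}
```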

3. Music Player

With buttons and a slider we already have everything a music player needs, so I made one as an example. It has a standby button, a play/pause button, volume control, and swipes for skipping tracks. I connected it to the Spotify API so I can have some fanciness at home.
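The swipe-to-skip part is the most interesting bit, so here is a sketch of how it could be done: a fast horizontal movement across the active area is treated as a swipe and forwarded to the Spotify Web API’s next/previous endpoints. The distance and timing thresholds are placeholders, and token handling is left out — SPOTIFY_TOKEN is assumed to hold a valid access token with playback scope.

```js
// Swipe-to-skip sketch: a fast horizontal move counts as a swipe and calls
// the Spotify Web API. Thresholds are placeholders; SPOTIFY_TOKEN is assumed
// to be a valid OAuth access token with playback scope.
const https = require('https');

const SWIPE_DISTANCE = 300; // minimum x travel in sensor units (assumption)
const SWIPE_TIME = 250;     // maximum duration in ms (assumption)

let start = null;

function onTouch({ x }) {
  if (!start) start = { x, t: Date.now() };
  const dx = x - start.x;
  if (Math.abs(dx) > SWIPE_DISTANCE && Date.now() - start.t < SWIPE_TIME) {
    skipTrack(dx > 0 ? 'next' : 'previous');
    start = null; // reset so one swipe only skips once
  }
}

function onRelease() {
  start = null; // finger lifted, forget the swipe start point
}

function skipTrack(direction) {
  const req = https.request({
    method: 'POST',
    hostname: 'api.spotify.com',
    path: `/v1/me/player/${direction}`,
    headers: { Authorization: `Bearer ${process.env.SPOTIFY_TOKEN}` },
  });
  req.end();
}
```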

The music player interface works pretty well; there are quite a few functions here and it’s not getting too crowded yet.
For more interactions we still have a few options we haven’t used. We can play a bit more with height: currently we only use the area closest to the airbar, essentially a 1D interaction. The airbar also recognizes up to 10 fingers, which is more than we will ever need, but two-finger gestures alone would add a whole extra layer of interactions.
The only drawback so far, which you might notice in the videos too, is that your finger sometimes hides the LED feedback directly below it.

More to come
