Experimenting with haptics in online conversations

Last week we organized a special event with ThingsCon: the stay-at-home TH/NGS Jam. We invited people to work on tinkering projects from home, discussing via various online conversation tools. On your own, together. As a theme we chose to connect the long IoT tradition of making devices that bridge distance in a physical way to the current situation of our (intelligent) lockdown life, where conversations are held via tools such as Zoom, Teams or Jitsi. Judging by the feedback, the event was a success. You can check out the projects in the demo session if you like (YouTube).

My own project was themed on haptics. What can haptics mean for these conversations?
In the quick brainstorm at the beginning of the day, I was thinking about how the screen is not only a barrier for the conversation; it also creates a closer and more direct relation with the other, as you are looking at the person from nearby. On the other hand, you can miss the presence of a person, and with that you miss certain cues such as nodding, rolling eyes, confirming mumbling, etc. And in online conversations it often happens that people switch off their camera, for technical reasons or simply because someone is deliberately in listen-only mode.

If half or more of your fellow participants in the meeting are visually muted, you not only miss any feedback cues, you also do not know whether anyone is paying attention at all. Maybe the meeting is just a backdrop for other work, or even for parallel conversations; gossiping in a WhatsApp chat is rather common. That uncertainty is not equally distributed: you don't know exactly when someone is watching and paying attention. It is almost like a panopticon, the guarding system based on the uncertainty of the guarded person. The Zoom Panopticon, so to say.

The idea that I worked out a bit during the TH/NGS Jam was to facilitate an extra layer of communication based on simple pokes to others at the meeting table. Like kicking someone under the table.

For demoing the concept I used a haptic demo kit we made a couple of years ago at LABS of INFO to show how different haptic cues feel. A sweatband on your wrist vibrates in different patterns, and these patterns are linked to four buttons.

For this concept of kicking under the table, the demo kit would need to be connected to another one someplace else. That resembles the TaSST project, where two sleeves are worn in two different locations and the first person can stroke their sleeve to trigger a similar feeling in the sleeve of the other. Via a connection to your phone this can work anywhere. Another concept based on this is the Hey bracelet, designed for long-distance relationships, where you can lightly squeeze someone else's wrist via a wearable wristband.

The hardest part in such a meeting is knowing who is sitting where 'at the table'. The video wall looks different for everyone in the chat. To solve this you could add a code to everyone's screen, or make a connection via face recognition (which would be a complex solution).
I thought you could make it more low-tech and stimulate a certain dialog by using second-order communication. My plan was to make a little table with all the persons in the conversation represented by comic figures. Those figures act as buttons that trigger the haptics in one of the other wristbands.
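The routing behind that little table can be sketched in a few lines. This is a minimal illustration, not the Jam prototype: the class, the participant names and the idea of a log standing in for an actual haptic trigger are all my own assumptions.

```python
# Illustrative sketch: each comic figure on the little table is a button;
# pressing it "pokes" the participant that figure represents.
from dataclasses import dataclass, field


@dataclass
class PokeTable:
    # Button position -> participant name (the comic figures on the table).
    seats: dict[int, str]
    # Log of pokes sent; a stand-in for actually triggering a wristband.
    log: list[str] = field(default_factory=list)

    def press(self, button: int) -> str:
        """Simulate pressing the figure at `button`: poke that participant."""
        target = self.seats[button]
        self.log.append(target)  # here the real kit would buzz the wristband
        return f"poke -> {target}"
```

The point of the sketch is that the mapping from seat to person is fixed per table, so everyone can keep their own arrangement even though the video wall looks different for each participant.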


There is of course a practical issue: everyone needs a wristband to take part, which makes it quite complex.

So this happened.

But!
Thinking back a day later, I came up with a better application and implementation. Too bad I did not have it during the event, but let's not waste the idea; I'll share it here.

The starting point is the same, the Zoom panopticon, but now focused not on meetings but on presentations. That makes much more sense. I have been giving these presentations online with all listeners (you, at home) visually muted, and with that no feedback whatsoever. What if every participant could send haptic cues to the speaker without having to switch on the video? These cues might even work better as second-order feedback.
This also solves a practical issue: only the presenter needs to wear something (like a wristband) that receives the impulses. All the participants can use their phone to send feedback during the presentation. This feedback can be open to interpretation, or can be chosen from a set of possible cues.
I need to think through (and test) which cues would work best; my hunch is that positive feedback works best. In different levels of intensity:
_nodding to agree,
_taking a picture of the screen: 'I'm not distracted by my phone, I'm just checking something you say',
_'I'm tweeting this now!',
_and maybe as a wildcard: 'hey! you are muted'.
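The cue set above could map onto vibration patterns of increasing intensity. A minimal sketch, assuming a simple pulse-based encoding; the cue names, timings and the idea of (on, off) millisecond pairs are illustrative assumptions, not the demo kit's actual protocol:

```python
# Hypothetical mapping from feedback cues to wristband vibration patterns.
# Each pattern is a list of (on_ms, off_ms) pulses, ordered roughly by
# intensity: soft single pulse up to a long urgent buzz for the wildcard.
CUE_PATTERNS = {
    "nod":        [(80, 120)],                        # single soft pulse
    "screenshot": [(80, 80), (80, 80)],               # quick double pulse
    "tweet":      [(150, 60), (150, 60), (150, 60)],  # stronger triple pulse
    "muted":      [(400, 100), (400, 100)],           # long, urgent buzz
}


def encode_cue(cue: str) -> list[tuple[int, int]]:
    """Return the vibration pattern for a cue, defaulting to a soft nod."""
    return CUE_PATTERNS.get(cue, CUE_PATTERNS["nod"])
```

Keeping the vocabulary this small would let each cue stay recognizable by feel alone, without the presenter glancing at a screen.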

I think this would be an interesting concept to test for real. Looking into ways to spice up virtual presentations is a popular topic. Matt Webb writes in this post about how virtual meetings have changed the way he builds and tells a story. And Benedict Evans reflects on how to solve online events.

A nice project to keep thinking on!

Published by

iskandr

I am a design director at Structural. I curate and organize ThingsCon Netherlands and I am chairman of the Cities of Things Foundation. Before that, I was innovation and strategy director at tech and innovation agency INFO, and a visiting researcher and lab director at Delft University of Technology, coordinating the Cities of Things Delft Design Lab.
