Experimenting with haptics in online conversations

Last week we organized a special event with ThingsCon: the stay-at-home TH/NGS Jam. We invited people to work on tinkering projects from home, discussing their progress via various online conversation tools. On your own, together. As theme we chose to connect the long IoT tradition of making devices that bridge distance in a physical way to the current situation of our (intelligent) lockdown life, where conversations are held via tools like Zoom, Teams or Jitsi. Judging from the feedback, the event was a success. You can check out the projects in the demo session if you like (YouTube).

My own project was themed on haptics. What can haptics mean for these conversations?
In the quick brainstorm at the beginning of the day I was thinking about how the screen is not only a barrier for the conversation; it also creates a closer and more direct relation with the other, as you are looking at the person from nearby. On the other hand you can miss the presence of a person, and with that you miss certain cues such as nodding, rolling eyes, confirming mumbling, etc. And there is often the situation in online conversations that people switch off their camera, for technical reasons or just because someone is deliberately in listen-only mode.

If half or more of your fellow participants in the meeting are visually muted, you not only miss any feedback cues, you also do not know if someone is paying attention at all. Maybe switching off the camera is just a way to leave the conversation for other work, or even to have parallel conversations; gossiping in a WhatsApp chat is rather common. The uncertainty is asymmetric: you don’t know exactly when someone is watching and paying attention. It is almost like a panopticon, the guarding system based on the uncertainty of the guarded person. The Zoom Panopticon, so to say.

The idea that I worked out a bit during the TH/NGS Jam was to facilitate an extra layer of communication based on simple pokes to the others at the meeting table. Like kicking someone under the table.

For demoing the concept I used a haptic demo kit we made a couple of years ago at LABS of INFO to demonstrate how different haptic cues feel. A sweatband on your wrist vibrates in different patterns, and these patterns are linked to 4 buttons.
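To give an idea of how the kit behaves: each button simply selects a vibration pattern for the motor in the sweatband. A minimal sketch in Python, with made-up pattern names and timings rather than the kit’s actual firmware:

```python
import time

# Each pattern is a list of (vibrate_ms, pause_ms) pulses.
# Timings are illustrative, not the values of the actual kit.
PATTERNS = {
    1: [(100, 100)] * 3,   # three short taps
    2: [(400, 200)] * 2,   # two long buzzes
    3: [(50, 50)] * 6,     # rapid flutter
    4: [(800, 0)],         # one long buzz
}

def vibrate(on_ms, off_ms):
    # Stand-in for driving the motor in the sweatband.
    print(f"bzzz {on_ms}ms")
    time.sleep((on_ms + off_ms) / 1000)

def button_pressed(button):
    for on_ms, off_ms in PATTERNS[button]:
        vibrate(on_ms, off_ms)

button_pressed(1)  # plays the 'three short taps' pattern
```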

For this concept of kicking under the table the demo kit should be connected to another one someplace else. That resembles the TaSST project, where two sleeves are worn in two different locations and the first person can stroke their sleeve to trigger a similar feeling in the sleeve of the other. Via a connection to your phone this can work anywhere. Another concept based on this is the Hey bracelet, designed for long-distance relationships, where you can gently squeeze someone else’s wrist via a wristband-like wearable.

The hardest part in such a meeting is knowing who is sitting where ‘at the table’. The video wall looks different for everyone in the chat. To solve this you could add a code to everyone’s screen, or make a connection via face recognition (which would be a complex solution).
I thought you could make it more low-tech and stimulate a certain dialogue by using this second order of communication. My plan was to make a little table with all the persons in the conversation represented by a comic figure. The figures act as buttons to trigger the haptics in one of the other wristbands.
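As a rough sketch of the plumbing behind such a table: clicking a comic figure sends a tiny message over the network to the bridge of that person’s wristband. Everything here is a placeholder (names, addresses, message format); just one way it could work:

```python
import socket

# Hypothetical mapping from comic figure to the network address of
# that person's wristband bridge (placeholder addresses).
TABLE = {
    "anne": ("anne.example.org", 9999),
    "bas":  ("bas.example.org", 9999),
}

def poke(receiver, pattern=1):
    """Clicking a comic figure sends a tiny UDP poke message."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(f"poke:{pattern}".encode(), TABLE[receiver])
    sock.close()

def listen(port=9999):
    """Each wristband bridge listens for pokes and plays the pattern."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        data, addr = sock.recvfrom(64)
        _, pattern = data.decode().split(":")
        print(f"poke from {addr[0]}, play pattern {pattern}")
```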


There is of course the practical issue that everyone needs to wear a band to take part, which makes it quite complex.

So this happened.

But!
Thinking back a day later, I came up with a better application and implementation. Too bad I did not have that during the event, but let’s not waste the idea and share it here.

The starting point is the same, the Zoom Panopticon, but now focused not on meetings but on presentations. That makes much more sense. I have been doing these presentations online with all listeners (you at home) visually muted, and with that no feedback whatsoever. What if every participant could send haptic cues to the speaker without having to switch on their video? These cues might even work better for second order feedback.
This also solves a practical issue: only the presenter needs to wear something (like a wristband) that receives the impulses. All the participants can use their phone to send feedback during the presentation. This feedback can be open to interpretation, or chosen from a set of possible cues.
I need to think through (and test) what cues would work best; my hunch is that positive feedback works best, in different levels of intensity (a rough sketch of such a cue set follows after the list):
_nodding to agree,
_taking a picture of the screen: ‘I’m not distracted by my phone, I’m just checking something you say’,
_I tweet this now!
_And maybe as wildcard: hey! you are muted.
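To make this a bit more concrete, here is a minimal sketch of how such a cue set could map to vibration patterns on the presenter’s band; the cue names, intensities and pulse counts are my own guesses, not a tested design:

```python
# Hypothetical cue set: each cue maps to a vibration intensity (1-3)
# and a number of pulses; all values are made up for illustration.
CUES = {
    "nod":        {"intensity": 1, "pulses": 1},  # gentle agreement
    "screenshot": {"intensity": 2, "pulses": 2},  # 'just checking something you say'
    "tweet":      {"intensity": 3, "pulses": 3},  # strongest positive cue
    "muted":      {"intensity": 3, "pulses": 5},  # wildcard: hey! you are muted
}

def buzz(strength):
    # Stand-in for the motor driver; strength could map to PWM duty cycle.
    print(f"buzz at level {strength}")

def on_cue(cue_name):
    """Runs on the presenter's wristband for every cue sent from a phone."""
    cue = CUES[cue_name]
    for _ in range(cue["pulses"]):
        buzz(cue["intensity"])

on_cue("nod")
```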

I think this would be an interesting concept to test for real. Looking into ways to spice up virtual presentations is a popular topic. Matt Webb writes in this post how his way of building and telling a story has changed with the virtual meetings. And Benedict Evans is reflecting on how to solve online events.

A nice project to continue thinking on!

Metcalfe’s Law for the Apple Watch

Unboxing an Apple Watch and having the first experiences using it delivered an interesting insight: it triggers Metcalfe’s Law with a new form of communication.

Metcalfe’s Law describes how the value of a networked product increases with the number of nodes in the network. This goes to extremes with completely new technologies. For instance, the first owner of a fax machine had a useless machine, and so had the first couple of hundred or even thousand. The essence of using a fax machine is that someone else is able to receive the message.
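In numbers: a network of n fax machines has n(n-1)/2 possible connections, so the value grows roughly with the square of the number of owners. A quick illustration:

```python
def connections(n):
    # Possible pairwise links in a network of n nodes: n(n-1)/2.
    return n * (n - 1) // 2

for n in [1, 2, 100, 10_000]:
    print(n, "machines ->", connections(n), "possible connections")
# 1 machine -> 0 links, 2 -> 1, 10,000 -> almost 50 million
```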

Within the Apple Watch the same is happening with the taptic communication. It is a rather interesting feature to be able to share messages with others via tapping on your watch, just like the heartbeat and the little drawings. I believe it could be very powerful in setting a new way of sharing your nearness, on a more serious level than for instance Yo.

Still, with so few people in your network having a watch, which is typical for this moment in the roll-out, it is hard to find others to seriously use this function. Everyone knows someone to create a little demo with, but the real value of the function will only arise when you can use it with lots of people.

This is also part of the strategy of course. If this function turns out to be so strong and wanted by people, it could trigger the sales of the watches. You need to have one not to be left out.

However enthusiastic I am about haptic interactions like this taptic communication, I doubt it will be strong enough to trigger the sales. Or more precisely: the onboarding cost for new users is too high at 350 euros. But maybe Apple will in the end integrate this system into other devices, translating the tapping into sound on your phone for instance. It certainly would help the Apple Watch grow, benefiting from Metcalfe’s Law.

Invisible apps paving the way for watch life

Product Hunt is an important trendwatcher for developments in digital services, via the new apps that are becoming popular. Yesterday they marked ‘the invisible app’ as a clear new trend. Ryan Hoover of Product Hunt made a list that consists of embedded functionality like Katch, which records Meerkat live streams to YouTube, Magic as an SMS Siri, and bots like Blippybot, finding GIFs for you, and Clara, scheduling your appointments.

At the Hackbattle of The Next Web we also saw this happen in some of the most interesting concepts that were presented. This was mainly triggered by the use of one of the companies providing SMS and voice APIs: Nexmo.

I think it is an important trend too. Not new per se, we have talked about bots as a service for a longer time, but it will flourish with the introduction of the new generation of smartwatches and other wearables. I talked about the ‘Notifaction Model’ for the new apps that are built on the context- and sensor-driven notification layer as binder of the services. See the presentation below for instance.

[Embedded presentation]

We are just at the beginning of the automated and artificial intelligence driven service layer we will use for daily tasks. These invisible apps are the first iteration, with simple tasks, but they will grow into much smarter enhancements that we will control via our wearable devices.