I started using Medium a year ago and did not cross-post those articles to this blog. I will start doing that again. You can find the articles on Medium:
We started a publication here on Medium that bundles the different articles dealing with internet of things products as we like to see them. The internet of things is often mistaken for products that merely come with an app to operate them. We think that does not leverage the real value of IoT, which emerges from the new space created by connecting things and the service layer that uses this space. This is what we call a connectable.
In a research project on the design language for the connectable we found that (1) dialogue will be the main interaction with our things, (2) hybrid interfaces will mix screen interactions with haptics and voice, and (3) products will get smarter as they age, even while still on the shelf. We will publish these research outcomes in this publication.
To make these IoT product-services possible you need a holistic view of the platform back-end, the service layer, the embedding of sensors and connectivity, and of course the integration of the user experience through both the product and the digital engine. There will often be a screen involved for managing, monitoring, or providing a remote touchpoint, but the core functionality is not defined by the screen or app experience. The digital service layer is crucial to the experience, however: it shapes the way a connected product behaves, often adapting to your own behaviour.
There are some examples of this coming to the market. The cars from Tesla, which are more software than hardware, are a good example. Or the toys from Vai Kai, which get their experience from the connection with other toys. We will see this silently become a standard part of products.
IoT is often approached as a purely technological and infrastructural challenge. That is indeed important for success, but the key is to design the right experience and platform, from both a user and a technological perspective. The new products will be more adaptive to their use, which will lead to new trigger-based interaction models.
In this publication we will share research and visions on the connectable.
It cannot be denied that there is a certain prestige in doing a session at SXSW. Not only because of the submission procedure and the popularity of the call for proposals, which leave only a small chance of acceptance, but more because SXSW is one of the best-known conferences in our field and has a reputation as the perfect thermometer for which topics will be hot in the years after. Sometimes those are apps that change a piece of our behaviour, as Twitter and Foursquare did, but more often just the themes that are hot.
So this year my proposal was selected. The topic of adaptive interactions and an internet of touch got more traction than the more generic internet of things, it turns out. It is not the first time I have done this session: over the last year I gave it at Vodafone Firestarters back in February 2015, then at Thingscon Berlin, Hack the Visual in London, Thingscon Amsterdam, and in a couple of presentations. The subject will remain interesting for some time to come. For those who missed the session in Austin, we will do another edition at the first Thingscon Salon on April 1, with both Gijs Huisman and Aduen Darriba Frederiks present, the researchers of social mediated touch and new touch garments.
My drive for haptic interactions mainly comes from the belief that they can work very well as a design material in the new reality where things and digital become one and we get new experiences of our real world. Haptics is a very suitable way to communicate data from the cloud to human feelings, and it strengthens the experience. This is one part of the presentation I give during the workshop. I use an example of a project by James Bridle as an extreme way to make your context tangible.
In the workshop at SXSW I started with some background on how touch works and the results of the research by Gijs Huisman. This is very insightful, also for experiencing yourself how to communicate feelings to someone else. We did a little exercise designed by Gijs that I will not explain in detail, in case you attend the workshop next time.
After the research part we dived into the reasons why haptics is interesting as a design material, the broader context, and the relation with the internet of things. Things are rapidly getting connected to the internet; the next phase will be more about the second-skin experience. Digital will be part of the self. This is even more relevant with the rise of artificial intelligence, as described in my last post on BigAI. Digital will live as a partner species in realising our needs. Haptics connects the digital and physical layers.
To experience the power of haptics as a design material, we did a little design assignment in groups. Every group had a kit to try out vibrations. We provided a case and customer journey for Lyft ride sharing to let the attendees think about the ways haptics can help create a better experience. The value of the assignment is in the discussion within the groups: thinking about a concrete case makes you dive deep into the possibilities. We had eight groups that each created their own view on the topic. See the sketches below.
It was interesting to see that a couple of groups actively started creating a language for haptics. The cluttering of haptic feedback was also a theme in some of the groups. The conclusion was that haptics is a very strong way to focus and amplify certain decision moments in a flow.
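To make the idea of a language for haptics a bit more concrete, here is a minimal sketch of how decision moments in a ride-sharing flow could map to distinct vibration patterns. This is not what any of the groups built; all event names and pattern values are illustrative.

```python
# An illustrative haptic vocabulary for a ride-sharing flow.
# Each pattern is a list of (duration_ms, intensity) pairs,
# where intensity 0.0 means a pause between pulses.

HAPTIC_VOCABULARY = {
    "driver_assigned": [(100, 0.4)],                          # single soft tick
    "driver_arriving": [(150, 0.6), (100, 0.0), (150, 0.6)],  # double pulse
    "confirm_pickup":  [(400, 0.8)],                          # one long, firm buzz
    "payment_done":    [(80, 0.3), (60, 0.0), (80, 0.5), (60, 0.0), (80, 0.7)],
}

def pattern_for(event):
    """Return the vibration pattern for a flow event, or None when the
    moment is not important enough to deserve haptic feedback at all."""
    return HAPTIC_VOCABULARY.get(event)

def total_duration_ms(pattern):
    """How long a pattern occupies the skin, pauses included.
    Keeping this short is one way to avoid cluttering the channel."""
    return sum(duration for duration, _ in pattern)
```

The point of such a vocabulary is exactly the workshop conclusion: only a few decision moments get a pattern, and everything else stays silent.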
We concluded the session with a presentation on some design-thinking principles for wearables, and for haptic interactions in particular: the shift from the app model to moment-based interactions, the move to adaptive products and services, and how to design rule-based interactions. Haptics is where the conversational UI and the internet of things meet. Creating feedback loops that you can feel fits the more interesting dialogues you can have with the new services. The workshop aimed to let people experience exactly those possibilities.
This edition of SXSW was a bit different for me compared to the three before. For a start, I did a workshop in the program. That meant I had to prepare things together with the team (René and Won), so I missed the talks on Monday. I also had some deadlines. All in all I have the feeling I saw fewer talks this edition.
Still, there was enough interesting content and insight, or rather reflections on things you are busy with yourself; that was mainly the case. But before diving into that part, a couple of general things I noticed.
I think last year I discovered for the first time the power of the trade show. Now again, lots of inspiration could be found there. Too bad I did not do a full round, but I did visit SXCreate in the Palm Event Center on the first day and that was very interesting. For the feel of it, but also for a couple of interesting projects such as the Parihug: a bear with haptic sensors and actuators that can transport hugs over a distance. This is exactly what our workshop was about, social mediated touch. It was good to hear Dan Steingart from Princeton University claim that haptic notifications are the killer function for wearables. Not from yourself, but from a distance.
More on that topic in a separate post.
Also at SXCreate was the production version of Jibo. It is an assistant for your home, like the Amazon Echo, but it has some emotional interactions built in and also uses voice and face recognition. It worked very well as a nice companion. And it is a platform with an SDK where you can create your own services using the communication tools that are provided. That could be very powerful.
It was one of the questions I had before SXSW: are we going to see the product as a platform for software emerge as an important trend? It was not that ubiquitously present. In one talk by Brady Forrest on the hardware startup it was clearly touched upon. And in one of the better fashion panels you could see that this is an important approach for companies like VF (among others Vans, Northface, Timberland). I expect this to become bigger next year. As one of the speakers in the conversational UI panel predicted: in 10 years we will not carry a mobile phone. Access to the internet will be via conversation with the services.
It is a natural successor to the big trend this year: AI (artificial intelligence) and robots. AI especially was everywhere. It is clearly a new driving force for other areas, from media to products, and in the conversational UI, also a hot topic. And as a thing of its own with the bots, of course. A room on the secrets of machine learning was stuffed with design-oriented people.
Talks on robots were also numerous. I attended a couple. One of the most focused ones had an interesting mix of people: someone who makes robotised help for shops, another from Google(x), and a professor who researched the behaviour of people towards robots. She concluded that the design of how a robot communicates its intentions is very important for acceptance and usefulness. A robot that cannot open the door but communicates its helplessness is much more accepted. She worked with Pixar to model the behaviour.
Julia Hu from Lark called it the concept of seamless emergence. The moment you give too much freedom in the answers, the bots become dumb. When designing for conversational UIs you need to control the environment.
We see robots as servants or even slaves. We need to take that into account when designing the interaction with them. An AI can outperform human advice when the reasoning behind a decision is added.
The general conclusion on AI and robots is that they will be very task-based assistants that are part of sets of services. An AI is very good at repetitive tasks, but also very useful for tasks where we as humans are likely to be blinkered by our own experiences, for instance in brainstorming to trigger new routes.
We will have AIs all around us and we will give them a place in our lives. Just as we already feel something when the robot dogs are teased, so will we live with the robots around us.
Chris Messina: compare how children now use the touch screen and expect all screens to be touch. The next generation will do the same with conversation with things.
So you can say that AI and its execution is the key trend from SXSW. As Michiel Berger said, it is the first time in a decade, since the rise of networked systems and social media, that we have a trend that is influencing everything around us. That is the value of SXSW for me: grabbing the energy of the things that will matter for the coming year or so.
And I cannot agree more with Kris Hammond from Narrative Science in the first panel on Friday: AI should be designed by designers, not (only) by engineers optimising existing processes.
This week, just before SXSW, I was invited to take part in the Bosch Connected Experience conference, a prequel to the big Bosch Connected World relation event that aims to unlock new ideas and inspiration by organising a big hackathon and a parallel conference. I presented on adaptive interactions and the core of things during the conference. It struck me that an interesting trend popped up: using hardware as a software platform. Let's see if it will be addressed in Austin too…
Bosch is entering the game of connected products with its own IoT Cloud solution, announced on March 9. At the conference several people from Bosch shared their ideas and plans for stimulating startup culture and innovation within the organisation. One of the talks was by the special unit that creates solutions for the connected car, even the self-driving car. Kay Herget spoke about SoftTec and the collaboration with TomTom, but most interesting was the approach of creating a platform for others to use. This is something that will give a new stimulus to the Internet of Things in the coming time.
What is the idea? The internet of things is of course heavily inspired by the business models that derive from the digital world. The good and the bad; not for nothing are people worrying about data and privacy issues. But that is another story.
The strength of the idea lies in not building rigid products that ship with all their functions fixed, but products that adapt through use. That is a concept I have foreseen for some time, but it needs a healthy driver to invest in. That driver will be the platform economy. Just as software can be an operating system for end products, hardware will get the same qualities. Hardware with APIs.
An example of how this could work is the way we plan to make the TaSST sleeve into more than a single product for a single purpose. We have defined the first use case (deaf-blind people), but it gets even more interesting if we manage to create a platform product with an API/SDK for everyone to build their own product on. For me the end result of the project is that package of the sleeves with the SDK and a good way to manage the products you want to create.
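Such a sleeve SDK does not exist yet, but a minimal sketch can show what "hardware with APIs" means in practice. All class and method names below are hypothetical; they only illustrate how third parties could register their own behaviour on the same garment.

```python
# Hypothetical sketch of a touch-garment SDK: the hardware becomes a
# platform, and apps register behaviours instead of the garment
# shipping with one fixed function.

class SleeveAPI:
    def __init__(self):
        self._handlers = {}   # gesture name -> app callback
        self.sent = []        # log of actuation commands (stand-in for hardware)

    def on_touch(self, gesture, callback):
        """Let a third-party app react to a detected gesture, e.g. 'squeeze'."""
        self._handlers[gesture] = callback

    def vibrate(self, zone, intensity):
        """Actuate one zone of the sleeve; here we only record the command."""
        self.sent.append((zone, intensity))

    def receive(self, gesture):
        """Simulate the garment detecting a gesture and dispatching it."""
        handler = self._handlers.get(gesture)
        if handler:
            handler(self)

# One possible product built on the platform: mirror a squeeze as a
# gentle vibration on the forearm, the social-mediated-touch use case.
sleeve = SleeveAPI()
sleeve.on_touch("squeeze", lambda s: s.vibrate(zone="forearm", intensity=0.5))
sleeve.receive("squeeze")
```

The interesting part is that the deaf-blind use case and any future product would both be just apps on top of this same small surface.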
This approach is exactly what Bosch showed between the lines in that presentation: building components for intelligent mobility for others to build upon, making the software for the physical world.
So I'm wondering if we will see some of this development in Austin at the SXSW conference we are attending in the coming week. I'll keep you posted!
An interesting new interaction paradigm seems to be emerging: draw-select reality. Foursquare, for instance, introduced the possibility to draw an area on the map to filter places.
And now there is the new DJI Phantom 4 drone, which makes it possible to draw-select a person to follow.
This is not only good news because the drone is really doing what drones should be doing: flying themselves. It is also an interesting interaction concept in how it lets you intuitively mark things in the real world to connect them to functions in a digital space, just like draw-selecting within Foursquare. I can imagine this will become a more and more common way to interact as the tools we use get more intelligent.
In this interesting overview of the move to conversational commerce, Chris Messina also touches on an interesting aspect of the new conversational language we will have with our services, based on task-based command lines. For example, how you type slash commands into Slack conversations, and how the new app Peach uses a sub-language to communicate all kinds of special shortcut messages. I agree with Chris that in this first phase towards conversational interactions we will learn to have these kinds of dialogues.
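Slash commands are essentially a tiny grammar on top of chat. As a sketch of how thin that layer is, here is a minimal parser for Slack-style command lines; the example command is invented, not Slack's actual set.

```python
def parse_slash_command(message):
    """Split a Slack-style slash command into (command, args).
    Returns None for ordinary chat messages, so normal conversation
    and the task-based sub-language can share one input field."""
    if not message.startswith("/"):
        return None
    parts = message[1:].split()
    if not parts:
        return None
    return parts[0], parts[1:]

# e.g. "/remind me tomorrow" becomes the command "remind"
# with the arguments ["me", "tomorrow"]
```

The learning Chris describes happens on the human side: we internalise this little grammar long before free-form conversation with services works.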
In my trends-for-2016 post I shared the expectation that we will start to get used to having a dialogue with the products we use, and to having more tangible interactions at the same time. Let's elaborate a bit on that. These conversations could become the format for interoperability between the different services and products we use. We had the short hype of intelligent agents at the end of the last century; it was too early then and technically not ready, missing the big data, for instance. It will happen now. And the special behaviour will be the connecting of the different services. That goes two ways.
We will have the different services connected through our own conversations, but we will also enhance our interactions with those services through more than one channel: multimodal, combining screen interactions with speaking, chats, and physical experiences. Just as Chris mentioned the benefit of the knowledge of a person in a chat room for verifying payments, so does adding physical contact points to the purely digital ones. That is the context for the learning dialogue.
A really interesting development, with a challenge to design these conversations using rule-based principles and machine-learning support, applying them to both digital and physical interactions. And with an open setup for people to create their own language.
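What rule-based principles for such a conversation could mean in code can be sketched very simply. The rules and wording below are purely illustrative: every trigger maps to a controlled response, and anything outside the rules gets an explicit fallback instead of a guessed answer, the controlled environment Julia Hu argued for.

```python
# Illustrative rule-based dialogue for a connected product: designed
# triggers get designed responses; everything else falls back.

RULES = [
    (lambda msg: "status" in msg,  "All sensors are online."),
    (lambda msg: "warmer" in msg,  "Raising the target temperature by 1 degree."),
    (lambda msg: "goodbye" in msg, "Switching to away mode."),
]

FALLBACK = "I can report status, adjust warmth, or say goodbye."

def reply(message):
    """Answer within the designed rules; never improvise outside them."""
    msg = message.lower()
    for matches, response in RULES:
        if matches(msg):
            return response
    return FALLBACK
```

Machine learning can then widen what counts as a match for each rule, while the set of possible responses stays designed, which is where the designer, not only the engineer, comes in.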