Hi, y’all!
There was a solar eclipse that was clearly visible in a heavily mediatized part of the world, so it took over the news. It was a special occasion: the moon fit perfectly over the sun (the factor of 400 is important here — the sun is roughly 400 times larger than the moon, but also roughly 400 times farther away). This will not happen again for some 1,200 years, and did not happen in the past 1,200 years. It makes people think about what remains and what changes over time… A world with different speeds of consciousness.
I am thinking about how to build a bridge to the triggered thoughts on LLM-based interaction paradigms. But let me make a slight detour first. I was at the STRP Festival last Friday, a yearly ritual I try not to break; the festival always collects interesting art pieces. This year, half of the 12 works stood out in quality, I think. The immersive experiences were the highlight: you find yourself entering a Hunger Games-like situation together with the other visitors of that day. There are also always some artist talks, and I attended Ling Tan explaining her work “Playing Democracy 2.0”. The multiplayer game lets you make decisions that shape the playing field, which in turn has a great influence on the gameplay. It worked really nicely to make you think about consequences through a relatively simple set of rules. Keep this in mind while considering the role we will give LLMs in helping us with decision-making…
Triggered thought
A recurring category of new products is the task-based, multimodal, physical, spatial AI-enabled device. Think Rabbit and Humane. The latter delivered its review units to tech journalists last week, and their experiences were not all 5 out of 5, to say the least. The Verge’s review is rather telling: it is a promising category, and the device even strengthens the belief that there is a need for it, but this device is not even close to delivering on its promises.
A central problem seems to be the choice to make LLM-style interaction, backed by a cloud-based architecture, the central way of interacting — even for tasks that are not LLM-based at all, like asking to play a song. It makes you wonder whether LLMs as touchpoints are the future. If the quality is right and the understanding of commands is flawless, there is a strong case for them, as they create a much more flexible and forgiving interaction protocol. In the best execution, the LLM helps resolve unclear requests by creatively asking and suggesting, making the interaction kind of human, so to speak.
The review’s conclusion that LLMs are overkill, setting aside the current state of technology and hardware, is not right. It is true that the current state of the tech and its execution make LLMs a bad choice for now, but as a way of thinking, the paradigm is really interesting.
I have to think back to the Google Glass saga from 2013. As an early adopter who tried to create apps for the device and presented about it, I feel there are similarities in the way it triggered thinking about new interactions. What I liked about playing around with Google Glass was the new interaction paradigm it unlocked: a timely, context-sensitive way of thinking about services on the go. The limits of the Glass screen and its interaction model pushed you to create timely, trigger-based interactions. It was a shift from designing for destinations to designing for triggers. (Check this old presentation, here presented in Dutch.)
It became part of the way we design mobile and wearable apps: notification-based interactions, and services divided over multiple channels that together form a total service ecosystem. It unlocked the potential of mobile wearable services. Now we are mixing in a layer of understanding that could change the way we interact with things once again. The LLM functions as a touchpoint with the stored intelligence, rather than being the end goal itself, and it will trigger new interaction paradigms of real duplex conversations.
Read the notions from the news, the paper for this week, and events to track via the newsletter. You can also subscribe for weekly updates, delivered Tuesdays at 7 am CEST.