Weeknotes 239 – Finding the right AI verses with Bard

Hi y’all!

Let’s do a short update: a week of working on AI roadmaps and mocking up future functionality. Besides thinking about possibilities, it is also about making choices on what role the AI will play overall, assisting or ‘consulting’ (I was triggered by the title of this article, which might not quite live up to the piece itself). Last week’s news (down below) offered plenty to connect to this question too.

Clippy in the style of Google – Midjourney

This is not our first chat interface revolution; I remember several concepts I developed back in the day for financial assistants and conversational UIs. The challenge was always in designing the interaction for a conversation with a scripted, acting machine. Now, in times of generative AI, this might change, but I think there is still a lot to design in the co-performance between humans and machines. I also once created a concept where the communication in the chat drove longer-form background information in a kind of dual view. This is becoming the de facto challenge for current chat UIs. Luke Wroblewski is diving into the possible design challenges of chat-only services.


Furthermore, it was nice to be invited to the launch of a book by Antoinette and Sabine (Lieve Chris) that promises to take me back to 1980s Amsterdam, pre-digital times (the crowd also made me nostalgic for early Twitter times :)

Events of interest

Notions from last week’s news

The biggest announcement, of course, was Google I/O, which was all about AI and enhancing everything Google, and the new iterations of Bard. It has been introduced in 180 countries, but not in Europe, probably due to the upcoming regulations on AI. With a VPN it is easy to test, and it feels very similar to ChatGPT, with the notable difference that it draws on current information. The integration with the Docs products is nice, though not perfect. This article compares the two big players.



ChatGPT is now bringing current information to Plus members, via a plugin that integrates web browsing into the chat



The integration of AI in Docs etc. is something to have a look at; Clippy all over again, in a good way? And creating a personal language model that learns from you feels like it makes sense.


It also brings art creation, in partnership with Adobe.

But the biggest shift might be the integration into good old search; how will that work out?


I am happy to leave the analysis to Ben Thompson, reflecting on the development of Google AI over the years:


In the meantime, Anthropic is getting attention for reading whole books quickly.


Gary Marcus gave a TED Talk, published last week, promoting a worldwide AI control agency.

We still have agency over the future of AI in our lives, Ethan Mollick thinks: “Rather than just being worried about one giant AI apocalypse, we need to worry about the many small catastrophes that AI can bring. Unimaginative or stressed leaders may decide to use these new tools for surveillance and layoffs.”


We are in a new digital revolution; if you were doubting it, Douglas Rushkoff has now declared it.


Who should govern your access?


Designing forms of collaboration can be done like this:


How do we relate to the new mediated situations? “It’s a process: we will also find ourselves having new etiquette for these new technologically created situations.”

Matt also describes here the value of pre-experience design, creating expectations. It made me think of research by an Industrial Design master’s student a couple of years ago on designing for calibrated trust, where the pre-use phase was key in his exploration.



I have shared it before: Two Minute Papers has long been a source for AI research packed into short videos. This one shows how AI athletes start playing foul unless you explicitly exclude it with rules; another human trait that is no longer exclusively ours…

How has tech impacted your life?


I have a hunch this connects to some interesting visions on how to value tasks.


Paper for this week

Computational psychiatry — I was not aware of that concept. It can play a role, as in “Inducing anxiety in large language models increases exploration and bias”:

Large language models are transforming research on machine learning while galvanizing public debates. Understanding not only when these models work well and succeed but also why they fail and misbehave is of great societal relevance. We propose to turn the lens of computational psychiatry, a framework used to computationally describe and modify aberrant behavior, to the outputs produced by these models.

Coda-Forno, J., Witte, K., Jagadish, A. K., Binz, M., Akata, Z., & Schulz, E. (2023). Inducing anxiety in large language models increases exploration and bias. arXiv preprint arXiv:2304.11111.


See you in two weeks after a small holiday break!

Published by


I am a design director at Structural. I curate and organize ThingsCon Netherlands, and I am chairman of the Cities of Things Foundation. Before that, I was innovation and strategy director at tech and innovation agency INFO, and a visiting researcher and lab director at Delft University of Technology, coordinating the Cities of Things Delft Design Lab.