Weeknotes 291 – the valuable friction of context

This week: thoughts on why showing sources is not enough, and design encounters. Plus the latest notions from the news, a paper on AI and democracy, and more.

Hi, y’all! Welcome to the new readers! Below I share a bit more background on this (weekly) newsletter.

These are busy weeks with events and more. Last week, I attended a community gathering of the expertise centre Systemic Co-design, and I pitched the Wijkbot as a learning and empowerment platform at the Robodam meetup. I also started preparing for two sessions this week: a presentation and a workshop. First, I will share thoughts on Generative Things at an evening meetup at CleverFranke. Drop by if you are in Utrecht! Then, on Thursday, we will run a workshop with the WijkbotKit at the PublicSpaces conference, using prototyping to dive into the meaning of urban robots for public space. I am looking forward to the learnings.

The role of AI and our relation to it are developing every week. I just listened (spread over several sittings) to the Lex Fridman podcast with Roman Yampolskiy, which discussed, among other things, the dangers of superintelligence. What if AI takes on the role of a manipulating dictator who will never leave? An uplifting thought to start with…

Triggered thoughts

The value of context in AI. The case of Google’s AI Overview and its problems proves that we still need context to make sense of what we read. And that context needs to be in your face, part of the experience. Years ago, in the early days of the digital era, when the framing shifted from GUI to UX, the book “Don’t Make Me Think” was popular. It was a ‘bible’ for usability-driven interfaces: remove as much friction as possible. I have always been part of the design “school” that thinks friction should be part of the experience. Design for friction. Make people aware of the impact of their choices. This is also important in times of AI. It takes more than finding and presenting the sources alongside search results in an AI chat. Think of the difference between Perplexity and ChatGPT.

But Perplexity presents the sources as a backstory for those looking for them. The superficial reader will not dive into the sources. The setup is, of course, similar to academic writing: the references in an academic paper will not all be read and tracked down either. What is the difference between Perplexity and comparable presentations of AI-found results? Peer review. Articles are peer-reviewed; you can trust that there is rigour and that references are checked, dispelling any doubts. Simply adding links to a response, as Perplexity does, is not enough; more is needed for trusted results.

The examples with Google AI Overview make that clear. The sources are real, or rather, they exist. One source mentions that non-toxic glue prevents cheese from sliding off your pizza. That source was a joke, though; without the full context in your face, you might believe the end result. In a way, we are spoiled by media that, in principle, do the critical digging into what we read for us. Google seems to have made improvements since.

There was an earlier article on a new approach by Anthropic, mapping the mind of LLMs, that I shared last week: Claude not only presents an answer but reflects on the answer and question in combination, doing a kind of peer review of its own answer. That feels like a good first step. It would be even better if there were not one reviewer but multiple, with different backgrounds. Building such a system is complex but possible. If we add a human in the loop, it becomes even more balanced. It almost starts to look like the page-ranking ecosystem. Nothing wrong there.
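To make the multi-reviewer idea concrete, here is a minimal sketch of such a pipeline: several reviewers with different backgrounds (stubbed here as keyword checks) each assess a draft answer, and a human in the loop is brought in whenever any reviewer objects. All function names, reviewer perspectives, and checks are hypothetical illustrations, not an existing API or Anthropic’s actual method.

```python
from dataclasses import dataclass

@dataclass
class Review:
    reviewer: str
    approved: bool
    note: str

def review_answer(answer: str, reviewer: str, red_flags: list[str]) -> Review:
    """Stub reviewer: objects when the answer contains a known-dubious claim.
    A real system would call a model with a reviewer-specific prompt."""
    for claim in red_flags:
        if claim in answer.lower():
            return Review(reviewer, False, f"dubious claim: {claim!r}")
    return Review(reviewer, True, "no issues found")

def multi_review(answer: str) -> list[Review]:
    # Reviewers with different "backgrounds" watch for different failure modes.
    perspectives = {
        "safety": ["glue"],        # e.g. the pizza-glue joke source
        "sourcing": ["trust me"],  # unsupported claims
    }
    return [review_answer(answer, name, flags) for name, flags in perspectives.items()]

def needs_human(reviews: list[Review]) -> bool:
    # Escalate to a human in the loop when any reviewer objects.
    return any(not r.approved for r in reviews)

draft = "Add non-toxic glue to keep cheese on your pizza."
print(needs_human(multi_review(draft)))  # True: the safety reviewer objects
```

The design choice that matters here is the escalation rule: the automated reviewers never publish on their own authority; disagreement routes the answer to a person, which is where the page-ranking-like ecosystem of trust could grow.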

Researcher Emily Bender addresses this as “Information is Relational.”

In short, designing interactions with AI output requires more than sources to prevent accidents; it also requires an active representation of the output’s context.

Continue reading for the

  • Notions from the news on human-AI partnerships, robotic performances, immersive connectedness, and tech societies
  • Paper for this week, on AI and Epistemic Risk for Democracy
  • Events for the coming week(s)

Read the notions from the news, the paper for this week, and the events to track via the newsletter. You can also subscribe for weekly updates, delivered on Tuesdays at 7 am CEST.


Buy Me a Coffee at ko-fi.com