What did happen last week?
In addition to working on the civic protocol economies and preparing for the design charrette in September (see the call for participation), the week was dedicated to some short events. The first was Design for Human Autonomy, hosted by the Design for Values institute of TU Delft, where I learned about social norms and Barbies from Cass R. Sunstein.
Next came a workshop on Civic Urban AI, organized by Utrecht University, focused primarily on the civic and civil-servant aspects of organizing AI in our city life, specifically AI governance. Does AI all but enforce a new form of governing system and organisation? Is participation the answer people are asking for in such a highly complex situation? How do we prevent the wrong conclusions and movements, and keep the citizen in the driving seat? Enough questions for future explorations.
The final event was the “Day of the Civic Economy”. Relevant for the research, of course, and good to see the mix of people who like to organize things themselves, and some larger entities. The city of Amsterdam is aiming for a significant increase in civic-based economies in the city. The day (afternoon) ended with an assembly and a manifesto that was more a tool for engagement than a final document.
Finally, a bit off topic, but I was happy to be able to join the Op De Ring festival in Amsterdam, partying on the ring road. Both the busy West and the relaxed East.
What did I notice last week?
Meta is on an acquisition tour and has garnered a lot of attention, offering high-profile AI researchers salaries that are exceptional even by US standards (in the 9-figure range). At the same time, the Meta AI assistant actively covers up mistakes. Apple is also on an acquisition tour, and has some good news on Apple Intelligence. Gemini is first with on-device AI. Andrej Karpathy got a lot of attention for his Software 3.0 talk. Prompts are a coding language. LLM OS. Nate B. Jones compares it to McKinsey's view on AI. Common AI product issues, typical design failures of AI interfaces, and UX design. Protocols for multi-agent systems. Codecons for the agentic world. What does it do to our thinking skills? That is the recurring question.
Tesla self-driving taxis were introduced in Austin. In New York, there is a driverless car with a driver. Supermarkets with delivery bots in Austin too. Would the building robot use less nitrogen?
Midjourney has launched a (serious) video-generating product. More Orbs via Reddit, and new smart contract standards.
Meta on the role of new standards. The AGI economy is ramping up. What are the consequences? Is the internet becoming a continuous beta? How will our world become more synthetic? AI and the big five. Is there a scenario where we will have a new resistance, a crusade against AI?
Scroll down for all the links to these news captures.
What triggered my thoughts?
This week, I am returning to a concept I have covered before: the embodiment of intelligence. It was triggered by a presentation by Claire L. Evans from a couple of weeks ago, shared by the Sentiers newsletter. The embodiment is linked to the concepts of the Slime Mold Computer and the Language Machine. Claire presented on slime molds and embodied intelligence, exploring how these organisms compute solutions through their physical substrate. No central brain, no memory storage; just continuous adaptation through form. The slime mold's network is its intelligence, reshaping itself to solve problems in real time. This principle, intelligence as messy, continuous adaptation, might help us understand what is happening with Large Language Models. While my previous writing explored robotics as a path to embodying AI by literally connecting physical sensors and actuators, there is another form of embodiment emerging: conversational embodiment.
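The slime-mold principle can be made concrete with a toy sketch. This is not Claire's model or a biological simulation, just a minimal illustration under assumed names and numbers: a tiny network with a short and a long route between two points, where edges that carry flow are reinforced and all edges decay. The shortest route emerges from the adaptation itself; the network is its own memory, with no central planner.

```python
# Toy "slime mold" network: two routes from A to D.
# The short route (A-B-D) and the long route (A-C-E-D) start
# with equal edge strengths; reinforcement plus decay lets the
# short route win. All names and constants are illustrative.
EDGES = {
    ("A", "B"): 1.0, ("B", "D"): 1.0,              # short route
    ("A", "C"): 1.0, ("C", "E"): 1.0, ("E", "D"): 1.0,  # long route
}
PATHS = [
    [("A", "B"), ("B", "D")],
    [("A", "C"), ("C", "E"), ("E", "D")],
]

def step(strength, decay=0.1, feed=1.0):
    """One adaptation step: route 'flow' over each path in proportion
    to its conductance, reinforce the edges it uses, and let every
    edge decay. No state is stored outside the network itself."""
    # Conductance of a path: its weakest edge, penalised by length.
    scores = [min(strength[e] for e in path) / len(path) for path in PATHS]
    total = sum(scores)
    new = {e: s * (1 - decay) for e, s in strength.items()}
    for path, score in zip(PATHS, scores):
        share = feed * score / total  # flow this path attracts
        for e in path:
            new[e] += share           # used edges thicken
    return new

def run(iterations=200):
    strength = dict(EDGES)
    for _ in range(iterations):
        strength = step(strength)
    return strength
```

Running this, the edges of the short route grow strong while the long route withers away, purely through local reinforcement and decay, which is the "intelligence as continuous adaptation" idea in its smallest form.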
To connect the dots: Andrej Karpathy's presentation on his Software 3.0 vision positions plain English as the new programming language, with LLMs as "stochastic simulations of people." These are not just simulations; they create a new substrate for intelligence through dialogue itself. Each conversation shapes the response space, creating ephemeral, context-specific intelligence without permanent updates. Like slime molds computing through their physical form, LLMs might be computing through the conversational substrate.
This connects to edge computing principles: pushing intelligence to the point of interaction. No centralized processing, just adaptive responses emerging from the dialogue itself. The conversation becomes the body, the adaptation mechanism, the intelligence. Vibe coding, programming through conversation rather than precision, also represents that shift. We are not writing instructions; we are growing solutions through dialogue. It is sloppy, unpredictable, alive.
Which brings us to the disconnect. Nate B. Jones compared McKinsey's "Agentic AI Mesh" presentation, another shared vision, with Karpathy's. McKinsey is architecting top-down what might need to grow bottom-up, like a slime mold finding food sources. Jones sees a danger here: not just a misunderstanding, but an attempt to impose linear, hierarchical thinking on systems that thrive on messy adaptation.
So is there a parallel? Systems like slime molds embody intelligence through a physical substrate and environmental interaction; LLMs may achieve something similar through conversational substrates and human interaction. Both operate without central control, both adapt without traditional memory, both emerge rather than execute.
Are we witnessing the birth of a new form of embodiment, not through motors and sensors, but through the continuous, adaptive dance of conversation? The question is not whether LLMs are truly intelligent or merely simulating. The question is whether we can recognize intelligence when it does not look like us, when it lives in the space between minds rather than within them.
As we shape these systems, they shape us back. The conversation itself becomes the site of intelligence, the place where human quirkiness meets computational possibility. Not a replacement for embodied intelligence, but a new form of it entirely.
What inspiring paper to share?
Curious to read this one in more depth: Untangling the participation buzz in urban place-making: mechanisms and effects
Findings include that designers of place-making interventions often do not explicitly consider their participation goal in selecting participatory mechanisms, and that place-making efforts driven by physical space are most effective in achieving impact.
Slingerland, G., & Brodersen Hansen, N. (2025). Untangling the participation buzz in urban place-making: mechanisms and effects. CoDesign, 1–23. https://doi.org/10.1080/15710882.2025.2514561
What are the plans for the coming week?
This seems like an interesting (online) event, “Is AI Net Art?” with among others Eryk Salvaggio and Vladan Joler. Also, that day, a new edition of Robodam. One of the largest meetup crowds seems to gather at ProductTank AMS. I need to skip this, though.
References to the captured notions about:
- Human-AI partnerships
- Robotic performances
- Immersive connectedness
- Tech societies

