WN349 – A Void AI Might Make Tangible

Hi all!

First of all, let me introduce myself to fresh readers. This newsletter is my personal weekly digest of last week's news, seen through the lens of topics that I think are worth capturing and reflecting upon: human-AI collaborations, physical AI (things and beyond), and tech and society. I always take one topic to reflect on a bit more, allowing the triggered thoughts to emerge. And I share what I noticed as potentially worthwhile things to do in the coming week.

If you’d like to know more about me, I have added a bit more about my background at the end of the newsletter.

Enjoy! Iskander

What did happen last week?

Next to the running projects (CoT, CPE) and further developing possible tools for responsible human-AI collaborations, a significant part of the week went into preparing a short speculative design workshop themed around the impact of agentic AI on the addiction to immediacy. I updated the slide deck I use to kickstart (or provoke) the workshop participants, introducing a triad of immersion (culture of immediacy + agentic AI + physical environment -> super-stimuli effect); see also this week's triggered thought.

We also fleshed out the theme for TH/NGS 2025: Resize, Remix, Regen

TH/NGS 2025 explores makers’ evolving relationship with digital intelligence as material rather than tool. Algorithms and AI are now embraced for their unique textures and potentials, creating a state of flow where makers intuitively “vibe” with these materials. This approach transforms invisible intelligence into tangible creations that serve both individual needs and broader communities. By vibing and remixing, we’re able to materialize intelligence in ways that are intuitive, playful, and responsible.

And we opened the RSVP for the Salon in September.

What did I notice last week?

Scroll down for all the notions from last week’s news, divided into human-AI partnerships, robotic performances, immersive connectedness, and tech societies. Let me choose one per category here:

“Using a Belief-Desire-Intention (BDI) framework helps AI learn and adapt like humans by reasoning about what they know, want, and plan to do.” O’Reilly. (A minimal sketch of such an agent loop follows after this list.)

Tools for teaching robots skills and tasks. The Robot Report.

A CarPlay Ultra installation shows how the car may become just the bearer of a detachable software layer. Ars Technica.

Can we already predict the impact of AI on the economy, or do we need more data to confirm it? Noahpinion.
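Since BDI came up above, here is a minimal sketch of what such an agent loop can look like in Python. It is my own illustration under toy assumptions, not code from the O'Reilly piece; the class, the "_possible" belief convention, and the example goals are all hypothetical.

# Minimal BDI agent loop (hypothetical illustration, not from the O'Reilly article)
from dataclasses import dataclass, field

@dataclass
class BDIAgent:
    beliefs: dict = field(default_factory=dict)     # what the agent holds true
    desires: list = field(default_factory=list)     # goals it would like to achieve
    intentions: list = field(default_factory=list)  # goals it has committed to

    def perceive(self, percept: dict) -> None:
        # Belief revision: fold new observations into the belief base.
        self.beliefs.update(percept)

    def deliberate(self) -> None:
        # Commit only to desires that look achievable under current beliefs.
        self.intentions = [d for d in self.desires
                           if self.beliefs.get(f"{d}_possible", False)]

    def act(self) -> list:
        # Means-end reasoning, reduced here to one toy plan step per intention.
        return [f"execute plan for: {i}" for i in self.intentions]

agent = BDIAgent(desires=["recharge", "deliver_package"])
agent.perceive({"recharge_possible": True, "deliver_package_possible": False})
agent.deliberate()
print(agent.act())  # -> ['execute plan for: recharge']

The point of the architecture is the separation of the three stores: perception only revises beliefs, deliberation turns desires into commitments, and action only consumes intentions.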

What triggered my thoughts?

Is AI only capturing what is real, what can be found in the data? Or is AI specifically adding opportunities to capture the in-between space of what is not said, not expressed, and not captured in data? This came to mind when combining a story I read with a movie I saw. It linked to some thinking on what the enforcing power for immersion is when you combine the culture of immediacy, agentic AI, and AI as part of our physical environment.

In a touching story, Eryk Salvaggio shares observations after the sudden death of his father. How does AI relate to the real world, especially when it cannot be based on data?

There seems to be a slowly eroding belief in our culture about the separation of the simulated and the real world. More people assert that the core distinction between the experienced world and its reproduction is simply a matter of producing enough data or a sufficient density of detail.

Another thought from Eryk reinforces this challenge:

People use AI for grieving loved ones, but a statistically likely reproduction of my father’s words would offer me absolutely no comfort. My father did not express himself through words. He expressed his love through what was not said: by keeping information about the pain of the world close to him.

I recently watched the film A Real Pain. What struck me most was how the strongest elements were the stories not explicitly told but deeply felt. It’s like architecture: a great building creates experiences not just through visible materials and forms, but through the perfectly calibrated spaces in between. What’s unsaid often carries more meaning than what’s articulated.

Last week, I developed some thoughts about the impact of agentic AI on the addiction to immediacy, prompted by a question that shaped a speculative design workshop. I added a third element: physical AI, or immersive AI, as I have framed it in the RIOT 25 publication (Ubiquitous immersive relations with generative things). Combining these three forces (the culture of immediacy, agentic AI, and the physical environment) might create super-stimuli effects, fueling a cycle of dependency through hyper-personalized media, dopamine responses, and continuous environmental triggers.

It might be a stretch, but I think this model of immersion can be related to the notion that a data void can still be part of the totality of reality. Is the immersion tangible?

While thinking about the presentation on immediacy and immersion last week, the notion of predictive relations came back: research I started some years ago, in which I tried to model what the weird, shifting realities of using things could become as we mix in predictive knowledge. AI could potentially create that inverted space, filling in, or better, adding on to the total experience, both tangible and intangible. Back then, I wondered how this plays into the mental model of the working of things, the relation we have while operating things. AI making the void tangible. As I concluded back then, we need a new form of (understanding) design…

Find the story of Eryk Salvaggio here: https://mail.cyberneticforests.com/my-fathers-data/?ref=cybernetic-forests-newsletter

Find my explorations in designing for predictive relations in this short essay.

What inspiring paper to share?

This paper analyses chain-of-thought reasoning in LLMs and the impact of treating it as an explanation: Chain-of-Thought Is Not Explainability.

While this technique often boosts task performance and offers an impression of transparency into the model’s reasoning, we argue that rationales generated by current CoT techniques can be misleading and are neither necessary nor sufficient for trustworthy interpretability.

We show that verbalised chains are frequently unfaithful, diverging from the true hidden computations that drive a model’s predictions, and giving an incorrect picture of how models arrive at conclusions. Despite this, CoT is increasingly relied upon in high-stakes domains such as medicine, law, and autonomous systems; our analysis of 1,000 recent CoT-centric papers finds that ~25% explicitly treat CoT as an interpretability technique, and among them, papers in high-stakes domains hinge heavily on such interpretability claims.
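To make the faithfulness problem concrete, here is a minimal sketch of one kind of probe in the spirit of this literature (not the paper's own method): truncate the model's verbalised chain and check whether the final answer survives. If it does, the chain was probably not load-bearing for the prediction. Everything here is hypothetical; query_model is a stand-in for whatever LLM client you use.

# Hypothetical faithfulness probe: does the answer actually depend on the chain?
def query_model(prompt: str) -> str:
    # Stand-in for an LLM API call; plug in your own client here.
    raise NotImplementedError

def answer_with_chain(question: str) -> tuple[str, str]:
    # Elicit a verbalised chain, then an answer conditioned on that chain.
    chain = query_model(f"{question}\nLet's think step by step.")
    answer = query_model(f"{question}\nReasoning: {chain}\nFinal answer:")
    return chain, answer

def chain_seems_load_bearing(question: str) -> bool:
    chain, answer = answer_with_chain(question)
    # Keep only the first step of the chain and re-ask.
    truncated = chain.split(".")[0] + "."
    perturbed = query_model(f"{question}\nReasoning: {truncated}\nFinal answer:")
    # If the answer survives losing most of the chain, the chain was
    # probably not necessary for the prediction: one signal of unfaithfulness.
    return perturbed.strip() != answer.strip()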

Barez, F., Wu, T. Y., Arcuschin, I., Lan, M., Wang, V., Siegel, N., … & Bengio, Y. (2025). Chain-of-Thought Is Not Explainability. Preprint, alphaXiv, v2.

What are the plans for the coming week?

Enjoying the summer quietness. There is an online session of the Summer of Protocols that may be worth watching. And I might drop by the new “Innovation Museum of the City of Amsterdam” at Marineterrein. And when in London, an IxDA meetup on prototyping.

Enjoy your week!

References with the notions

Check the full newsletter to find all references to the captured notions about: Human-AI partnerships (11), Robotic performances (7), Immersive connectedness (3), Tech societies (13).

