WN347 – Think with us, not for us

Hi all! This is weeknotes 347: preventing the Moloch trap of collective intelligence by designing the right intentions. Plus more on last week’s human-AI-things news and beyond.

What happened last week?

Last week was a mix of curating, writing, and engaging in conversations with interesting people. That sounds generic, but I cannot share everything in detail. The conversations were about the internet of entities and what that might entail, about maker labs and responsible AI in human-AI teams, and of course, about civic protocol economies. Speaking of the last: we had a good turnout for the first round of the call for participation, and I look forward to reviewing the entries in more detail later this week.

What did I notice last week?

Scroll down for all the notions from last week’s news, divided into human-AI partnerships, robotic performances, immersive connectedness, and tech societies. Let me choose one per category here:

What triggered my thoughts?

My thoughts were initially triggered by an interview with Liv Boeree, specifically the Moloch trap angle she connects to the outcome when we mix competitiveness and collective intelligence. Those thoughts gained more layers when I combined them with a post Ethan Mollick published yesterday, “Against ‘Brain Damage’: AI can help, or hurt, our thinking”. Even more than usual, I had a good conversation with Claude to sharpen my own thinking while writing this.

AI should think with us, not for us

There is still a lot of debate about research suggesting that AI makes us less intelligent. Ethan Mollick’s recent analysis made me think of an interview I’d just heard with Liv Boeree about the Moloch trap: “This concept describes a situation where individual or group competition for a specific goal leads to a worse overall outcome for everyone involved”.

Boeree’s example was personal: she wants to learn and grow, but social media algorithms want engagement and entertainment. It’s a perfect illustration of misaligned incentives. TikTok even artificially boosts your first videos to hook you as a creator, not just a consumer.

Here’s where Mollick’s analysis becomes crucial. He argues that the problem is not the use of AI as tools to extend our capabilities, but the design of AI that encourages us to be lazy. “The problem is that even honest attempts to use AI for help can backfire because the default mode of AI is to do the work for you, not with you.”

But here’s my thought: what if you really want to be whole? Current social media uses algorithms, not true AI. What if we replaced them with AI that we control? Imagine having a deeper conversation with an AI that really understands what you strive for beyond instant gratification.

The algorithm itself isn’t the threat – it’s who controls its intentions. What if that person could be you? The intelligence tools we need should focus on reasoning and the exchange of insights, not just producing outcomes. AI should make us smarter through dialogue, not lazier through automation.

Mollick proposes sequencing: “Always generate your own ideas before turning to AI.” I agree. That’s exactly how I write these columns – my thoughts first, AI as editor second.

Mollick concludes: “Our fear of AI ‘damaging our brains’ is actually a fear of our own laziness. The technology offers an easy out from the hard work of thinking, and we worry we’ll take it. We should worry. But we should also remember that we have a choice.”

The Moloch trap isn’t inevitable. We can choose AI that aligns with our deeper goals of growth, understanding, and wholeness, rather than just engagement. But first, we need to recognize that we’re the ones who should set those goals. And then build an AI that thinks with us, not for us.

What inspiring paper to share?

I have been following this research for some time. We had a presentation by Seowoo Nam at ThingsCon some years ago, and it’s great to see how it has evolved: “Diffractive Interfaces: Facilitating Agential Cuts in Forest Data Across More-than-human Scales”.

From the abstract: “This pictorial challenges these limitations by exploring how interface design can transcend reductive, agent-centric representations to foster relational understandings of forest ecosystems as more-than-human bodies. Drawing on feminist theorist Karen Barad’s concepts of ‘diffraction’ and ‘agential cuts,’ we craft a repertoire of diffractive interfaces that engage with forest simulation data, revealing how more-than-human bodies can be encountered across diverse temporal, spatial, and agential scales.”

Elisa Giaccardi, Seowoo Nam, and Iohanna Nicenboim. 2025. Diffractive Interfaces: Facilitating Agential Cuts in Forest Data Across More-than-human Scales. In Proceedings of the 2025 ACM Designing Interactive Systems Conference (DIS ’25). Association for Computing Machinery, New York, NY, USA, 135–147. https://doi.org/10.1145/3715336.3735404

What are the plans for the coming week?

This is the last week before many people here start their holiday break. Not me, but it will change the pace next week. This week changes too, with more meetings before people are gone. And summer drinks.

Also on the agenda: checking the entries for the call for participation, preparing an inspiration session on agentic AI that I will give next week, and finalizing outlines for responsible AI programs.

We are planning the next ThingsCon Salon, organized in collaboration with the Human Values for Smarter Cities research program, just as we did last year and in 2023. This time, it will be on 4th September, and the preliminary topic is: “Hold on to good intentions.” More to follow later this week.

I will attend an afternoon organized by Future Society Lab in Rotterdam later today. Service designers in Amsterdam might check out the summer drinks this Thursday. And this looks interesting: Divination, Prediction, and AI, on Monday in London.

References to the notions

This week, there are about 30 references to notions from the news, divided into: human-AI partnerships, robotic performances, immersive connectedness, and tech societies.

Have a great week!


About me

I’m an independent researcher, designer, curator, and “critical creative”, working on human-AI-things relationships. I am available for short or longer projects, leveraging my expertise as a critical creative director in human-AI services, as a researcher, or a curator of co-design and co-prototyping activities.

Contact me if you are looking for exploratory research into human-AI co-performances, inspirational presentations on cities of things, speculative design masterclasses, research through (co-)design into responsible AI, digital innovation strategies and advice, or civic prototyping workshops on Hoodbot and other impactful intelligent technologies.


Buy Me a Coffee at ko-fi.com