Weeknotes 304 – confusing conversations by the synthetic uh

Hi, y’all!

This is the start of the academic year here in the Netherlands, and I think also in some other countries. It is a sign that the general vacation time is really over, so it’s time to leave vacation mode behind. Welcome to these weekly reflections on the news, based on my subjective selection but with a specialization (or better said, a default view) on the changing relations between humans and machines as machines become AI, aka co-performance in human-AI futures.

Triggered thought

These thoughts are often triggered while listening to a podcast, and they are often tech-related; this week too. This time I made a note while driving (pausing the listening), planning to make sense of my thinking and dictate it later. As a side note, this ritual is becoming increasingly common: capturing spoken thoughts with voice recognition, in either Dutch or English, and, when back at the desk, asking ChatGPT or Claude through Lex to turn them into a readable text. Creating paragraphs, fine-tuning the grammar, and more. But explicitly not adding new information from the unreliable generative brain.

But anyhow, away from this side note: in this case I was not triggered by the topic of the podcast (still interesting to listen to; the Every podcast with Nashilu Mouen-Makoua from The Browser Company, discussing new AI-related features expected in the Arc browser). I was triggered by a new phenomenon (at least for me): the way she talked, using some uhs and ahs, was almost as if I was listening to an AI voice. Not that she sounds like an AI voice, not at all; rather, these new AI voices, with the OpenAI iteration, try to mimic real people by adding exactly these non-functional words.

So, is that what we are doing now? Are we creating a reality that breaks down our instincts for what is real and what is not? And will we adapt our behavior to it, even before it is necessary? That is often described as the Chilling Effect: you change your behavior to sync with the perceived behavior. It is also my firm belief that we will see an acceleration in “co-performance” with machines as soon as we adapt our default behavior to the characteristics of the intelligence we are going to live with. We expect the AI to adjust to us, but it works both ways.

Read the full newsletter here, with

  • Notions from last week’s news on Human-AI partnerships, Robotic performances, Immersive connectedness, and Tech societies
  • Paper for the week
  • Looking forward with events to visit

Thanks for reading. I started blogging ideas and observations back in 2005 via Targetisnew.com. Since 2015, I have started a weekly update with links to the news and reflections. I always capture news on tech and societal impact from my perspective and interest. In the last few years, it has focused on the relationship between humans and tech, particularly AI, IoT, and robotics.

The notions from the news are distributed via the weekly newsletter, archived online here. Every week, I reflect more on one topic, a triggered thought. I share that thought here and redirect it to my newsletter for an overview of news, events, and more.

If you are a new reader and wondering who is writing, I am Iskander Smit. I am educated as an industrial designer and have worked in digital technology all my life. I am particularly interested in digital-physical interactions, with a focus on human-tech intelligence co-performance. I like to (critically) explore the near future in the context of cities of things, and I organise ThingsCon. I call Target_is_New my practice for making sense of unpredictable futures in human-AI partnerships. That is the lens I use to capture interesting news and share a paper every week.

Feel invited to reach out if you need some reflections; I might be able to help out!


Buy Me a Coffee at ko-fi.com