Weeknotes 299 – Inclusive AI by default?

Hi y’all!

Summer is starting, at least for a few days; this is still the Netherlands. Although the holiday season has kicked in, plenty of news was happening, especially in AI: OpenAI is launching a search competitor, and Meta has released a GPT-4o challenger. More on that in the news section.

Last week, I spent quite some time planning ThingsCon events. We have another Salon in September on participatory design for ML and the relations between municipalities and citizens, all in partnership with the research program Human Values for Smarter Cities. We have completed the line-up and opened the RSVP. Check it out yourself!

The organizational work for the yearly conference in December is ramping up. We confirmed the first two keynote speakers and made progress in planning the exhibitions. Those are the most urgent tasks, next to securing enough funding for the 10-year celebration we would like to create.

I sometimes use the Lex tool for writing, as it is made with that goal in mind. The people at Every are continuously shaping and improving the tool, and the set of checks works well. It feels like a co-worker with a fresh look at what you are writing and specific grammar capabilities. It sometimes interferes with Grammarly’s suggestions, which are becoming a bit annoying, as they literally block the writing process.

The last thing I asked Lex to do was review the new ThingsCon mailing, and it offered both constructive confirmation and suggestions for some small improvements. It works nicely.

One function I did not really test is the expand prompt, which lets you build your own prompt library if you like. I intend to try this in this very newsletter. And as an update after writing the triggered thought below: the review of the first draft definitely helped to make it more concise and focused and to kill some darlings. The proposed examples felt too easy for me and did not match my style.

Triggered thought

Meta is introducing its latest version of Llama, which is on par with GPT-4o. Should we start using it, as it promises to be open source? That raises the question of what open source means in the realm of generative AI. The model itself might be open source, but the intelligence depends on the data and learning capabilities, and these are not open. The blog by Interconnects dives deeper into this and discusses where the real guardrails are set in an open-source AI system.

If AI systems become integrated into our day-to-day lives, we want to trust that the AI’s values are in line with our own. Do we trust Apple more than Meta here? What does it mean that Meta’s model can, in principle, be governed by our own values, while Apple Intelligence ‘enforces’ Apple’s design? At the same time, the datasets Apple uses might feel more trustworthy than Meta’s. Which is more important for trustable AI?

An old interview with Aldous Huxley (part of Zomergasten last Sunday with writer Sana Valiulina) showed how people become victims of their inventions. New inventions demand to be used, so we are not only the users but also the ones being used. Humans always aim to organize and structure the world and the society they live in. That, above all, is what makes us human. The most efficient form, however, can lead to totalitarianism when it is too rigid.

Making the connection to AI and the design of models: we can create an AI meant to optimize efficiency and structures. That might, however, produce the ideal tools for totalitarian societies. We can also design AI to counter this natural pull towards efficiency, using generativity to create outsider views without becoming extremist.

That is a tough design challenge. The chilling effect tends to play a role here: we adjust our behavior on the fly to the responses we think it will trigger, limiting our personal freedom. For example, a tableau in the Olympic Opening Ceremony caused turmoil because religious groups could read it the wrong way. As the artistic director mentioned, the intention was not to be subversive but rather the opposite: to stress the positive power of diversity and unity. To me, it certainly felt like a blast against long-developing polarization tendencies. In that sense, the pushback from radical right parties confirmed that the message came across.

There is a danger in designing AI with chilling behavior as a counter to the polarization-fueling behavior digital systems have exhibited over the last decade. My long-term belief is that we need to design AI systems that keep the right agency with all actors. Alienation and chilling effects are signals of losing agency to be aware of. Can we rethink AI as a positive, uniting factor: respectful, but open and not afraid to confront? Making AI inclusive by default. That, above anything else, should be the benefit of using AI for search.

Read the full newsletter here, with

  • Notions from last week’s news on Human-AI partnerships, Robotic performances, Immersive connectedness, and Tech societies
  • Paper for the week
  • Looking forward to the week and events to visit

Thanks for reading. I started blogging ideas and observations back in 2005 via Targetisnew.com. Since 2015, I have written a weekly update with links to the news and reflections. I capture news on tech and its societal impact from my own perspective and interests. In the last few years, the focus has been on the relationship between humans and tech, particularly AI, IoT, and robotics.

The notions from the news are distributed via the weekly newsletter, archived online here. Every week, I reflect on one topic in more depth, a triggered thought. I share that thought here and point to my newsletter for an overview of news, events, and more.

If you are a new reader and wondering who is writing, I am Iskander Smit. I am educated as an industrial designer and have worked in digital technology all my life. I am particularly interested in digital-physical interactions and human-tech intelligence co-performance. I like to (critically) explore the near future in the context of cities of things, and I organize ThingsCon. I call Target_is_New my practice for making sense of unpredictable futures in human-AI partnerships. That is the lens through which I capture interesting news and share a paper every week.

Feel invited to reach out if you need some reflections; I might be able to help out!


Buy Me a Coffee at ko-fi.com