Weeknotes 300 – sensors impersonating cameras to recreate reality

Hi y’all!

If you are reading this now, you are probably not on holiday (yet); you might be enjoying a somewhat quiet environment and, finally, some pleasant summer weather (in the Netherlands). There are still some things to do, though. Last Wednesday, I attended a meetup organized by SDN (Service Design Network) on the impact of AI (tooling) on the work of service designers. I saw three speakers (I had to skip the last one), all focused on the helpful role of AI in creating customer journeys and personas with the usual AI tooling. Let me share some impressions.

Deloitte Digital created a set of prompts and databases for this purpose. The customer journey they showed was visually rich but also felt a bit synthetic. It made me wonder how much your view of the human actors might be distorted, especially if you widen your research population by interviewing synthetic respondents. AI as inspiration can work, sharpening your questions and even challenging your assumptions. It all depends on how you embed it in the total representation of the customer journeys.

The second speaker worked at Arcadis Innovation and coined the term “AI-lchemy” (…). Maybe not entirely as she meant it, but I think there is an interesting take in the concept of alchemy: it was a sort of occult precursor to chemistry in early medieval times. Is that the same phase we are in now, using AI in design?

Tanishqa Bobde shared how she is looking for the right mix of human and AI elements. I like to attend these types of events because they give good insight into how others are putting these concepts into practice.

She is looking toward a planet-positive future, which is what Arcadis strives for. This triggered the thought that, now that Design Thinking has become a consultancy skill instead of a design skill, we can expect co-design to be next.

Executing the human-AI mix is, for now, rather first-level tooling: context research, iterating on outcomes, and PESTLE analysis. We are still far from thinking about human-AI co-performances.

She is also realistic in noting that the outcomes are never as in-depth as with humans. Her takeaways: use targeted prompts, treat AI outputs as preliminary knowledge, and cast a critical human eye.

The last speaker (for me) was Serena Westra, a business designer at IKEA. She was not speaking about IKEA, however, but about an initiative called AI-by-design, an approach that aims to embed AI in the double-diamond process. Inspired by CRISP-DM, she explained how she is building feedback loops for AI. She believes in the role of AI as a “bad intern,” as Kevin Kelly coined it (I’m not sure he sees it as a bad intern).

Listening to her presentation on the exchange between HCD (human-centered design) and AI, it struck me as a kind of AI-centered design packaged as human-to-AI-centered design, as if we optimize the things we design to feed the AI best. It was probably not meant exactly like this, but it was what she was signaling to me.

The conclusion from these talks was that AI is embraced in “corporate design thinking” but can result in design research being performed less with humans. Framed more positively, it can also be seen as a multiplier of human insights via synthetic enhancements. This can work, but you need to be very careful not to get carried away with the possibilities and force-fit humans into synthetic-shaped contexts…

Triggered thought

I did not intend to, but that little report on the meetup triggered more thoughts than “just” reporting. I was also thinking about some other things while listening to The Verge podcast on Friday.

The hosts discussed the state of AI photography and the need for watermarking: what is real in photography, as a photo is always a representation, often the creation of a better self. Samsung is a master at bending reality, but now Apple is also entering the field, as expected. This opens possibilities for new differentiating propositions. Should you fill in the background based on suggestions, or keep a blurry, unclear item in the background as unclear?

I wondered if there is a difference in AI-enhancing and synthesizing reality between people and places, humans and objects. We are by now used to our phone “cameras” creating a synthesized version of ourselves and others in our pictures. With places and things, we might expect, and prefer, more reality. However, we are increasingly able to distort and clean up the context. In that sense, is the thing we capture not the reality we want to save for our later memories, but a staged version of the scripted play we are part of? The play of our perceived life at that moment.

In the end, cameras are not used primarily to capture reality. Cameras are more like sensors that capture enough data to produce a believable and idealized representation of reality.

This is all amplified by the pressure of social peers and technical FOMO (fear of missing out; we fear not using the technical capabilities provided).

On the other hand, there is a counter-movement. Early generations of digital cameras seem to be becoming popular with Gen Z and younger. Force yourself to capture reality not as realistically as possible but as honestly as possible. Know the limits of the technology. Accept the harsh flash as a flashed-out image in your picture. And by using a non-connected device, you distance yourself from oversharing. Build in a barrier: a more conscious selection you have to make when you look at the pictures later, while importing them to your computer.

It is an interesting development to use analog-feeling digital devices. I’m unsure if it is a temporary hype or a fork in the use of digital technology. To prepare, I dug up my old, tiny Canon Ixus 40.

I make one final connecting leap here. There was an interview with professor Gusz Eiben in de Volkskrant on the missing link in ChatGPT: a body. He thinks ChatGPT is too focused on the “brain” angle of intelligence, while intelligence is also very much embodied; we learn through physical encounters. This is part of the theme and questions at this year’s TH/NGS 2024 on Generative Things. I could not help but connect it to the notions above. Is that “old” digital now part of better understanding our relation with our feelings, with tangible reality?

Read the full newsletter here, with

  • Notions from last week’s news on Human-AI partnerships, Robotic performances, Immersive connectedness, and tech societies
  • Paper for the week
  • Looking forward with events to visit

Thanks for reading. I started blogging ideas and observations back in 2005 via Targetisnew.com. In 2015, I started a weekly update with links to the news and reflections. I capture news on tech and its societal impact from my own perspective and interests. In the last few years, the focus has been on the relationship between humans and tech, particularly AI, IoT, and robotics.

The notions from the news are distributed via the weekly newsletter, archived online here. Every week, I reflect on one topic in more depth: a triggered thought. I share that thought here and point to my newsletter for an overview of news, events, and more.

If you are a new reader and wondering who is writing, I am Iskander Smit. I am educated as an industrial designer and have worked in digital technology all my life. I am particularly interested in digital-physical interactions and human-tech intelligence co-performance. I like to (critically) explore the near future in the context of cities of things, and I organise ThingsCon. I call Target_is_New my practice for making sense of unpredictable futures in human-AI partnerships. That is the lens I use to capture interesting news and share a paper every week.

Feel invited to reach out if you need some reflections; I might be able to help out!


Buy Me a Coffee at ko-fi.com