Critical explorations in the new

A practice for making sense of unpredictable futures of human-AI partnerships

Welcome to the blog and home base of Target_is_New, future explorations by Iskander Smit. My focus is on the interaction of human, more-than-human, and intelligent technologies, embodied in things as actors. Learn more about future research services.

Next to these activities, I am the founder of Cities of Things and an organizer at ThingsCon. Before, I was the strategy, innovation and research director at digital agency INFO and the design director at Structural. I was a visiting professor and lab co-director at Delft University of Technology.

Read the latest posts and subscribe to the weekly newsletter here.

Weeknotes 285 – subtle robotic interventions for intense neighborhood communities

Hi, y’all! As every week, I would like to start by welcoming my new subscribers and others who land here for the first time. Further down, you will find more background on the themes of this weekly news update. As always, let’s begin with some triggered thoughts.

Triggered thoughts

I could easily follow up on last week’s thinking on LLMs as an interface to the real world, or on Meta’s new introduction of an LLM inside all its chat apps, another example of the Meta strategy of ‘borrowing’ a concept from Snap (MyAI) and trying to excel in execution and leveraging scale. But let me save that for another moment.

Related, though, are the new robotic performances by the successor of Atlas, Boston Dynamics’ most famous multi-purpose robot, which now has an even more humanoid look. It has an “influencer-style” ring light 🙂 (others called it a desk lamp). See this short introduction movie.

Humanoids are clearly a popular wave of robotic performances now. It feels, though, more like a way to create and shape the market. I doubt this will be the end stage of robotic performances. It has been a topic almost every week, but it is much more interesting how “normal” objects with certain tasks will be robotized. Maybe in the kitchen at first, or in the garden. Or something that just becomes your partner for couch hanging, serving you at the right activity level. Something that will enter our personal moving living rooms first: the car interiors, where massage functions were the ultimate luxury but are now introduced in lower market segments too.

Helpful, friendly robotic objects are in the near future. This week, I saw the first results of the student teams working on the ITD project designing neighborhood navigators, a type of hood that is shared and works in collaboration to become part of future neighborhood life. All four teams chose not to focus on creating a typical robot but tried to explore the interactions of the robots with the community of residents in the neighborhood. One team created a hood that collects leftover flowers from shoppers at the market and delivers them to neighbors who are stuck at home, or that brings along a wish another neighbor shared while taking part in a community activity. Another made an intuitive way to generate ideas together for making a greener neighborhood, with the robot as the initiating partner. A third is making tags of virtual graffiti that would “stick” to landmarks in the neighborhood, stimulating working together.

The intention of the project was to get the teams inspired to work on robots that, by taking action, make interventions for more community life in the neighborhood. The first concepts are promising and also make very clear how subtle choices have a big influence.

Read the notions from the news, the paper for this week, and events to track via the newsletter. You can also subscribe for weekly updates, sent Tuesdays at 7 am CEST.

Weeknotes 284 – LMM devices; magic or magicians?

Hi, y’all!

There was a solar eclipse that was well visible in a heavily mediated area of the world, so it took over the news. It was a special occasion: right now, the moon fits perfectly over the sun (the factor 400 is important here; the sun is roughly 400 times larger than the moon and roughly 400 times farther away). This will not happen again for some 1,200 years, and it did not happen in the last 1,200 years. It makes you think about what remains and what changes over time… A world with different speeds of consciousness.

I am thinking about how to make a bridge to the triggered thoughts on LLM-based interaction paradigms. Maybe a slight detour first. I was at the STRP Festival last Friday, a yearly ritual I try not to break; the festival always collects interesting art pieces. This year, half of the 12 works had a quality that stood out, I think. The immersive experiences were important: you could find yourself entering a Hunger Games-like situation with the other visitors of that day. There are also always some artist talks, and I attended Ling Tan explaining her work “Playing Democracy 2.0”. The multiplayer game lets you make decisions that shape the playing field, and that has a great influence on the gameplay. It worked really nicely to make you think about the consequences through a relatively basic rule-setting play. Keep this in mind while considering the role we will give LLMs in helping us with decision-making…

Triggered thought

A recurring category of new products is the task-based, multimodal, physical, spatial, AI-enabled device, like the Rabbit and the Humane AI Pin. The latter had its review units delivered to tech journalists last week, and their experiences were not all 5 out of 5, to say the least. The review by The Verge is rather telling: it is a promising category, and the device even increases the belief that there is a need for it, but this device is not even close to delivering on the promises.

A central problem seems to be the choice to make LLM-like interaction over a cloud-based architecture the central way of interacting, even when the interaction is not LLM-based at all, like asking to play a song. It makes you wonder whether LLMs as touchpoints are the future. If the quality is right and the understanding of commands is flawless, there is a strong case for it, as it creates a much more flexible and forgiving interaction protocol. In the best execution, it helps to resolve unclear requests by being creative in asking and suggesting, making it kind of human, so to speak.
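To make that contrast concrete, here is a minimal sketch of my own (not how Humane or Rabbit actually built their devices, and the command names are just illustrations): simple, unambiguous commands are handled directly on the device, and only unclear requests fall back to a model that can ask a clarifying question.

```python
# Hypothetical sketch: a "forgiving" command layer. Simple commands are
# handled locally and predictably; only unclear requests fall back to a
# stand-in for a cloud LLM that asks a clarifying question.
import re

LOCAL_COMMANDS = {
    r"play (?P<song>.+)": lambda m: f"Playing '{m.group('song')}' from the local library.",
    r"set a timer for (?P<minutes>\d+) minutes": lambda m: f"Timer set for {m.group('minutes')} minutes.",
}

def cloud_llm_clarify(request: str) -> str:
    """Placeholder for a cloud LLM call that responds by asking and suggesting."""
    return f"I'm not sure what you mean by '{request}'. Did you want music, a reminder, or information?"

def handle(request: str) -> str:
    for pattern, action in LOCAL_COMMANDS.items():
        match = re.fullmatch(pattern, request.strip(), flags=re.IGNORECASE)
        if match:
            return action(match)       # fast, offline, predictable
    return cloud_llm_clarify(request)  # flexible and forgiving, but slower

if __name__ == "__main__":
    print(handle("Play Here Comes the Sun"))
    print(handle("Set a timer for 10 minutes"))
    print(handle("Do the thing with the stuff"))
```

The point of the sketch is the split: the flexible, conversational layer is most valuable exactly where the request is unclear, not for every song you want to play.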

The review’s conclusion that LLMs are overkill, regardless of the current state of technology and hardware, is not right in my view. It is true that the current state of tech and execution makes LLMs a bad choice for now, but as a way of thinking, it is really interesting.

I have to think back to the Google Glass saga from 2013. As an early adopter trying to create apps for the device and presenting about it, I feel there are similarities in the way it triggers thinking on new interactions. What I liked about playing around with Google Glass was the new interaction paradigm it unlocked: a timely and context-sensitive way of thinking about services on the go. The limits of the Glass screen and the way you interacted with it made you create timely interactions, rethinking from designing for destinations to designing for triggers. (Check this old presentation, presented in Dutch.)

It became part of the way we design for mobile and wearable apps: notification-based interactions, services divided over multiple channels that are part of a total service ecosystem. It unlocked the potential of mobile wearable services. Now, we are mixing in a layer of understanding that is potentially changing the way we interact with things again. The LLM functions as a touchpoint with the stored intelligence rather than being the end goal, and it will also trigger new interaction paradigms of real duplex conversations.

Read the notions from the news, the paper for this week, and events to track via the newsletter. You can also subscribe for weekly updates, sent Tuesdays at 7 am CEST.

Weeknotes 283 – more or less human through AI?

Hi, y’all! Happy IoT Day! This is a long tradition initiated by Rob van Kranenburg years ago, and there are still some events all around the world. Furthermore, it is another week with AI and robotics dominating the tech news. Welcome to the new subscribers! This week’s newsletter includes news on AI trust, things, and care for AI, among others. And will Apple indeed introduce a roaming homebot? The paper for this week relates to the thoughts that were triggered by being human in an AI world.

Triggered Thoughts

In a talk from a couple of months ago, Karen X Cheng discussed her fight with her own dependence on the algorithm, in social media mainly. She formulated 5 antidotes. They all make a lot of sense, but let me focus on nr 4: Human + AI. One statement she makes at the end is that we are now (and with this presentation) thinking about strategies to protect ourselves from the algo-encapsulation. But is that our responsibility, she wonders? Are we now responsible for making our own seatbelts in the car? It is an illustrative metaphor, as it makes it relatable for people to think about the relationship between humans and AI through the lens of social media algorithms. It is a good example of how it potentially shapes our real-world behavior. And it can inspire us to overcome obstacles.

Combine this with this video by Alice Cappelle on the role of technology in Dune, where she addresses how we work together with technology, using Weil and others to make the case for a co-performance with technology rather than the dichotomy of being ruled by tech or ruling tech. Tech and tools are cultures; how would archaeologists of the future look at our tools? The video is nice to watch if you are into tech philosophy (and philosophers) and concepts like solarpunk and radical pragmatism.

In the same YouTube streak, I watched John Maeda’s latest design report. He references, amongst others, a paper on Humanlike Artificial intelligence (see below). He wonders what our approach to AI and our relationship with AI should be. Should we design against AI? Do we want to prevent humanizing AI? He concludes by focusing on creating “palpable customer-centric criticality value.” Stay resilient to delusion and illusion, circling back in a way to Cheng…

Read the notions from the news, the paper for this week, and events to track via the newsletter. You can also subscribe for weekly updates, sent Tuesdays at 7 am CEST.

Weeknotes 282 – ramping up for really helpful writing tools

Hi all! Easter has just ended, but it was still going on while I was in the middle of synthesising this newsletter. Next to that, I had to be careful with the latest news published on April 1: April Fools’ Day (check how Gary Marcus got excited about the GPT-5 preview). This week’s newsletter includes thoughts on the ongoing build-up to really helpful writing tools. In the news roundup, we have new AI battles, personal agents, LLM houses, and robots. A paper on the City as a Licence. And there are also some events to attend or track.

Triggered thought

Grammarly’s new generative AI features are intriguing yet intrusive, raising questions about the balance between suggestion and takeover. Writing remains the signature case for generative AI, just as music was for recommender systems, with Spotify Discover Weekly as the prime example. In the realm of creative writing, the ideal tool acts as a buddy, writing coach, and background researcher, but it has yet to be perfected. Some tools, like Grammarly, focus on improving grammar and now aim to inspire better writing through generative AI. ChatGPT and Claude promise to help build stronger arguments by easily incorporating background sources. However, there is a tension: the writing produced by these tools, especially ChatGPT, can be cliché-ridden and uninspiring. Lex, a tool that has undergone iterations to enhance creativity in writing, may be worth revisiting, so I did.

While it doesn’t yet add references, you still need to construct narratives by prompting different services. I invited Lex to rewrite my first version, and it did a nice job, I think. I tried the rewrite function with Claude Opus and Sonnet, and with GPT-4. The latter creates rather formal speech and removes all personality. Funnily enough, Claude Sonnet took a different standpoint, creating an observing piece: “the author wonders, etc…” So the above argumentation is built with the support of Claude Opus. And Grammarly, which keeps suggesting…

The question remains: when will we reach a point where a ghostwriter can start researching based on triggered concepts and collaboratively build a case without taking over the writing process entirely? One aspect is touched upon in the robotic facial expressions (see the news item below): in creating a natural feel of interaction, the AI must predict human behaviour to respond on time. But that is maybe something to dive into more at a later time and thought.

Read the notions from the news, the paper for this week, and events to track via the newsletter. You can also subscribe for weekly updates, sent Tuesdays at 7 am CEST.

Weeknotes 281: the 3rd way in a dichotomy of human-machine relations

Hi, y’all! This week’s newsletter includes some thoughts on a dichotomy of human-machine relations. In the roundup of the news, we have new AI battles, personal agents, and sweet city scanning. There are also some events to attend or track.

Triggered thoughts

On the IPO of Reddit and the role it plays as the last big player from the old web, where humans, not machines, create relevance, as nicely sketched out in the latest Hard Fork podcast: the Google-optimised world vs. the Reddit-optimised world.

Are we indeed in a dichotomy? Are we slowly taking the route to the machine-driven lens on reality, or is there a third route that runs in harmony with both? Will the human part be about mastering the digital one, or will the only option be to switch it on and off?

Relate it to the narrative of Dune. I have not read all the books, but I listened to a very nice conversation on Tech Won’t Save Us about the societal links between Dune and current reality. There is, of course, the way religion drives the decisions of the Fremen in the South, but I found a side note interesting, something that is not in the movies. Dune is set in the 10000s, far away in the future, and that future is not ruled by intelligence the way, for instance, the future of 2001: A Space Odyssey (from the same time of creation) was. In the story of Dune, the world has actively switched off AI, as it became uncontrollable. Will we reach that pivotal moment somewhere in this century, or even within decades?

“Are we creating a place where people still have interesting conversations?”, Casey Newton in Hard Fork

Read the notions from the news, the paper for this week, and events to track via the newsletter. You can also subscribe for weekly updates, sent Tuesdays at 7 am CEST.

Weeknotes 280 – multimodal AI as an act of performance art

In the newsletter this week, next to the notions of news on AI, robotics and beyond, I reflect on some thoughts triggered by the newly announced MM1 (through a paper), in combination with some thoughts triggered by two artists: Marina Abramović and James Bridle. I end the newsletter with some possibly exciting events for this week.

Triggered thought

The new battlefield is multimodal AI; I did mention it before, among others with the new Rabbit device (edition 275). A logical successor in the move from chat to agents. Apple published a research paper last week to claim some of the space here. It would be wise for them, of course, to get their foot in the door specifically in that domain; here lies the potential unique proposition for Apple with its huge installed base… Ben Thompson wrote in his update this Monday that MM1 is a family of models, the largest of which has 32 billion parameters; that’s quite small — GPT-4 reportedly has ~1.7 trillion parameters — but performance appears to be pretty good given the model’s size. Unfortunately, the models were not released, but the paper is chock-full of details about the process Apple went through to create the models and appears to be a very useful contribution to the space.

Check out the published paper, and some articles on MM1, like this and this.

Another way of thinking about AI connected to physical world interventions is described in a lovely new experiment by James Bridle: AI Chair 1.0. How do you prompt AI for instructions on how to design a chair based on a pile of scrap wood? They are also building the chair based on the instructions.

Browsing the article, I have to think about the recipes that Marina Abramović creates for her performances. Last weekend, I was at the exhibition in Amsterdam, and the well-known performance artworks are intriguing and make you think and feel through absorbing the performances. An extra asset is the recipes hanging next to the pictures or videos of the work. Clean instructions are a key part of the experience. The AI Chair is also made through this kind of instruction, so you wonder whether the interaction of generative AI with humans, as soon as the real world is part of the equation, is an act of performance art. What does that act of interacting mean? Who is the creator of the performance?

Read the notions from the news, the paper for this week, and events to track via the newsletter. You can also subscribe for weekly updates, sent Tuesdays at 7 am CEST.

Weeknotes 279: robotic fashion as mild exoskeletons

In the newsletter this week, next to the notions of news on AI, robotics and beyond, I reflect on some thoughts triggered while reading a new book that connects directly (so it seems after 25% progress) to human-machine co-performance.

Triggered thought

I am reading a new book on robots, specifically the collaboration between humans and robots: The Heart and the Chip: Our Bright Future with Robots (more on the author, Daniela Rus). I just started, but I found this an interesting angle.

“(…) imagine a future in which our clothing will double as soft exoskeletons, monitoring our muscles and vital signs to enhance our abilities, alerting us to health problems, preventing dangerous falls, and much, much more.”

Later, the authors dive a bit more into good old techno-optimism (as the title was foreshadowing), but what I like about this frame is the merging of human and robotic enhancements, and the notion of keeping that enhancement a gradual improvement of human capabilities instead of a big power move. A consequence of these extended capabilities is a ‘marriage’ of the strengths of humans and machines. Intelligent machines. She describes a world where mundane routine tasks are delegated, creating more time for humans to do ‘human stuff’, a known frame, of course.

What if we think it through, though: what kind of relationship do we want to have with these AIs? Are the AIs butlers, helpers or companions? Is the human to the new generative AI-powered machine as the creative director is to the designer? A creative director inspires and steers based on a vision, while the designer is the one making it into reality with their own agency and contribution to shaping it. Or do we grow into totally new forms of relationships and authority models? Holacracy practices for robot-human communities. Thinking about agency is a key concept nowadays, so much is clear.

How will this relationship develop over time? Would the robot that supports the family stay with you to help out when you become elderly? Or are these different ones? I’m curious to find out if the book will dive into these kinds of topics and what the conclusions will be. I’ll keep you posted.

Read the notions of the news of this week in the full newsletter.

Weeknotes 278 – Self-organising cars for commons

In the newsletter this week, next to the notions of news, I reflect on the commons as part of new neighbourhood services and the virtual engaging neighbour. And I thought it would be interesting to connect it to the ending of project Titan, the long-awaited Apple Car.

Triggered thought

With ThingsCon, we are a partner in a research project of the Amsterdam University of Applied Sciences’ Civic Interaction Design research group on Charging the Commons, which investigates the design of digital platforms for resource communities. Last week, they organised an evening to reflect on digital platforms for car sharing. Next to some first findings from the research, there were three citizen-initiated car-sharing initiatives. Although a couple of them used the same platform (or rather, service provider), the way they were organised and managed differed, ranging from just ‘users’ to a cooperative and therefore shared ownership.

One conclusion was that the extra services and the social structure in which the solution is embedded make much more of a difference than the car-sharing services themselves. We also see that in another research project Cities of Things is involved in, on a neighbourhood hub in the centre of Amsterdam: the success of the logistics-oriented services will lie in the social added value of the hub, in the collaborative actions made possible.

That makes the commons interesting as a model. Self-organising and initiating is a powerful driver. The platforms are governance providers more than service providers. As was mentioned at the evening event: the features of reserving, opening a car, etc. are the basics; the social and governance layer, and even more an intelligent variant, is what makes the difference (it makes me think of an exploration of DAOs and the protocol economy I did a year ago for a financial organisation).

How does this relate to the Apple Car project? It triggered my thinking about a potential role for future car makers: to have this incorporated in the identity of their cars. Cars can become a fleet of self-organising services that not only serve the dwellers of a neighbourhood but can become an active part of those communities. I do not think Apple would be able to create this type of software; the track record of social-structure-supporting software is a problem, but a platform/framework that enables it would have been an interesting pivot in the way we think about cars (improving on the Lynk & Co execution).

But maybe, better after all, let’s hope it will remain a bottom-up commons initiative.

Check the Notions from the news and more via the newsletter.

Weeknotes 277 – constructing system cards for your PLMs

More personalised large intelligence models that can unlock (or unhide) latent knowledge. And other news on ‘beyond human intelligence’ and embodiment (aka #AI #robotics) in this week’s newsletter.

Woke AI is the talk of the town. Or at least of a town. Google’s new Gemini and Gemma versions seem to work pretty well compared to the other suppliers, but to prevent misbehaving, the guardrails were set too strict and worked against the goals. Next to this, OpenAI had a new form of system crash, with ChatGPT sharing gibberish, going berserk even. Will RAG help?

Many more things happened in AI and robotics; this week is quite heavily skewed towards these two topics; see below. And I was triggered by a specific subset.

Triggered thought

Sometimes, things pop up at the same time in different contexts. Like now: creating locally hosted LLMs (Large Language Models) is possible. NVIDIA was in the news for its financial results (see below), but it also created a tool, Chat with RTX, for people to host an LLM themselves. Not for everyone due to the system requirements, but the principle is clear. Another one is a tool called Jan.ai that promises the same, and there are more. It is like a PLM, a Personal Language Model, that combines and unlocks your personally collected data with the now well-known chat interface. And as discussed last week, this could very well become a more exciting interface than what Gemini and Perplexity are offering.
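To make the PLM idea a bit more tangible, here is a rough sketch of my own of the principle: pull snippets from your own notes and feed them, together with your question, to a locally hosted model. The retrieval is naive keyword overlap, and `local_llm` is just a placeholder for whatever local runtime you would plug in (Chat with RTX, Jan.ai, or similar); it illustrates the idea, not a working integration.

```python
# Sketch of a Personal Language Model (PLM): retrieve from your own notes,
# then ask a locally hosted model. `local_llm` is a placeholder, not a real API.
from pathlib import Path

def load_notes(folder: str) -> list[str]:
    """Read all plain-text notes from a folder into memory."""
    return [p.read_text(encoding="utf-8") for p in Path(folder).glob("*.txt")]

def retrieve(question: str, notes: list[str], top_k: int = 3) -> list[str]:
    """Rank notes by naive word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(notes, key=lambda n: len(q_words & set(n.lower().split())), reverse=True)
    return ranked[:top_k]

def local_llm(prompt: str) -> str:
    """Placeholder for a call to a locally hosted language model."""
    return f"[local model would answer here, given {len(prompt)} characters of context]"

def ask_personal_model(question: str, notes_folder: str = "notes") -> str:
    context = "\n---\n".join(retrieve(question, load_notes(notes_folder)))
    prompt = f"Answer using only my own notes below.\n\n{context}\n\nQuestion: {question}"
    return local_llm(prompt)

if __name__ == "__main__":
    print(ask_personal_model("What did I write about personal language models?"))
```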

This is not new; I was pretty sure I had used the term PLM before. That would be a good question for my own instalment of such a tool. The second best is Notion, of course, where I write the first version of this newsletter every week. So, via a quick, normal search, I found that I mentioned it in editions 233 (4 April) and 263 (14 November). And I wrote a draft for a Cities of Things newsletter back in August that was not published due to lack of time. In April, it was linked to Sundar Pichai of Google, as he expected everyone to get their own personal model. In November, the reference was also linked to the personal collection of documents, and I was wondering what it would learn; so what you would learn from your own practising past, so to say.

That was also the setup of the article in August. Can we leverage the newly established conversational interfaces with stored intelligence to understand what we are looking for and build more insights after all? “There will probably be more tools trying this. And in a sense, I think Apple has the same intentions with the learning operating system. Apple is now not only becoming more intelligent but has the potential to connect all our physical space to it. And that will only grow if they succeed in the future with an augmented life platform.”

Intelligent note-taking as a second brain. It is still interesting to extend this sometime, hopefully in the context of a concrete project. For the “Beautiful Contracts AI platform”, I explored the role of System Cards that define the behaviour of LLMs. What will the system cards of our second brain look like?
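As a speculative sketch (my own guesses, not an existing standard), a system card for a second-brain assistant could capture purpose, data scope, behaviour and boundaries, and then be rendered into instructions for the underlying model:

```python
# Speculative example of a "system card" for a second-brain assistant.
# Field names and values are hypothetical illustrations.
SECOND_BRAIN_SYSTEM_CARD = {
    "purpose": "Help the owner retrieve and connect their own notes and sources.",
    "data_scope": ["personal notes", "newsletter drafts", "saved articles"],
    "behaviour": {
        "always_cite_source_note": True,
        "may_speculate_beyond_notes": False,
        "tone": "the owner's own writing voice",
    },
    "boundaries": [
        "Never share note contents with third parties.",
        "Flag when an answer is based on outdated notes.",
    ],
}

def render_system_prompt(card: dict) -> str:
    """Turn the card into plain instructions for the underlying model."""
    lines = [f"Purpose: {card['purpose']}",
             "Data you may use: " + ", ".join(card["data_scope"]),
             "Boundaries:"]
    lines += [f"- {rule}" for rule in card["boundaries"]]
    for key, value in card["behaviour"].items():
        lines.append(f"Behaviour / {key}: {value}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(render_system_prompt(SECOND_BRAIN_SYSTEM_CARD))
```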

Check the complete newsletter here.

Weeknotes 276 – Is the end of prompt engineering near?

Hi, y’all!

Welcome to the newsletter, especially the new subscribers. There is a lot of interesting news going beyond the current platforms, and new introductions of AI capabilities for making movies are also reaching mainstream media. At the same time, Google is upgrading Gemini with some interesting features that might change prompt engineering, which definitely triggered a thought!

Before I continue, let me know if you want to dive deeper into these topics or create specific (near) future studies. Check out the Target_is_New website to find out what is possible.

Triggered thoughts

Apparently, oio.studio worked on Google’s new Gemini interface. That made me curious, as I know them for their interesting thought experiments, as well as real experiments, with more-than-human futures, partnerships with autonomous creatures, and creative AI overall.

The Gemini explanation shows how they create a reasoning interface that debriefs the intention of the human input. It makes me think about how we get more human interactions with machines, as we don’t need to adapt to machine-like thinking anymore to get the best out of them. We got used to adapting our thinking a bit to the machine way of thinking, like with search engines. The first step was the translation from natural language to machine-understandable queries in the chat interfaces. Gemini seems to go further by adding an extra layer of ‘understanding’. So it is not only the interface, like with chat interfaces; the internal reasoning design is also made differently. In the demo, you see how, under the hood, Gemini tries to make sense of what type of question it really is and adapts the interface to that type.
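A minimal illustration of that extra layer, as I understand it (my own reconstruction, not the actual Gemini design): first classify what type of question the input really is, then pick a different presentation for each type.

```python
# Sketch of a two-step reasoning layer: classify the intent, then adapt the
# interface. The categories and rules are illustrative stand-ins.
def classify_question(user_input: str) -> str:
    """Crude stand-in for the 'debriefing' step that infers intent."""
    text = user_input.lower()
    if any(word in text for word in ("plan", "trip", "itinerary", "schedule")):
        return "planning"
    if any(word in text for word in ("compare", "versus", "difference")):
        return "comparison"
    return "open_question"

INTERFACE_TEMPLATES = {
    "planning": "Show a step-by-step plan with editable days.",
    "comparison": "Show a side-by-side table of the options.",
    "open_question": "Show a conversational answer with follow-up suggestions.",
}

def respond(user_input: str) -> str:
    question_type = classify_question(user_input)
    return f"[{question_type}] {INTERFACE_TEMPLATES[question_type]}"

if __name__ == "__main__":
    print(respond("Plan a weekend trip to Eindhoven around the STRP festival"))
    print(respond("What is the difference between Gemini and Perplexity?"))
```

The interesting shift is that the classification and the choice of presentation happen before any answer is generated, which is exactly what makes the prompt feel less like engineering and more like a conversation.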

Continue reading Weeknotes 276 – Is the end of prompt engineering near?