Weeknotes 233; collabs with nihilistic machines

Hi all! First of all, welcome to the new subscribers. Sometimes I forget to say this, but it’s great to have you. In an earlier edition, I took some time to explain what to expect from these newsletters. In short, there are four recurring elements every week:

  • Updates on things I did or noticed in general from last week, such as an event I visited or a major news story.
  • Next, I look at the calendar of events that reached me in one way or another.
  • The biggest chunk is a list of articles from the news that captured my attention. I try to annotate these a bit to explain why they could be an interesting read.
  • I close with an academic paper, either one from the past that I have read or a recent one that is still on my to-read list.


So, to begin, what about last week? All the buzz was about an open letter calling for a six-month pause on AI development. What to think of that initiative? It definitely ignited a discussion, one that was of course already happening. On a more individual level, there were questions about the intentions of some of the signatories, such as representatives of the huge social platforms that have been firing the very ethics teams meant to provide guardrails. They are now shifting responsibility to governments, which is a natural role for them, although it is hard to keep track of the regulatory developments. Creating regulation on another level (promises, intentions, and values) might be better.

As others have noted too, with the death of Gordon Moore, his Law might be outdated: the hardware is no longer the limiting factor; the software, and even more importantly, the application and its spread, are. The biggest danger is not an AI that becomes an AGI soon, as we have time to prepare for that. The AI that builds trust on hallucinations can become far more disruptive. We don’t need to stop development or research for six months, but we should start thinking about how to make everyone generatively literate within six months.

Literacy is one of the goals of the Cities of Things project: developing a toolkit to understand what autonomous technologies can mean for neighbourhood life. Start making things with the goal of creating something for the neighbourhood, not just for an individual. Hopefully, we will not end up in the scenario Melissa Heikkilä sketches in the latest The Algorithm newsletter: “In doing so, they are sending us hurtling toward a glitchy, spammy, scammy, AI-powered internet. (…) As the adoption of AI language models grows, so does the incentive for malicious actors to use them for hacking. It’s a shitstorm we are not even remotely prepared for.”


Last week, I explored GPT-4 further while attempting to use it in the process of writing a (pre-)proposal. I had a rough idea of the framing I was looking for, based on a couple of different sources linked to our core belief. I discovered that it can summarize books and papers even if they are not open access. I let it combine the conclusions with my own claim, resulting in a first version that partly summarized the sources and reframed my claim, but also contained some parts that could serve as a kickstart for the writing.
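As a rough illustration of that workflow (not the actual tool or prompt I used), the prompt-assembly step could be sketched like this; the source titles, summaries, and claim are all placeholders, and the actual call to a model API is left out:

```python
# Hypothetical sketch of the workflow described above: combine summaries of
# several sources with one's own claim into a single drafting prompt for a
# large language model. All names below are placeholder examples.

def build_proposal_prompt(sources, claim):
    """Assemble a first-draft prompt from (title, summary) pairs and a core claim."""
    parts = ["Summarize the conclusions of the following sources:"]
    parts += [f"- {title}: {summary}" for title, summary in sources]
    parts.append(f"Combine those conclusions with this claim into a first draft: {claim}")
    return "\n".join(parts)

prompt = build_proposal_prompt(
    [("Source A", "argues X about autonomous technology"),
     ("Source B", "argues Y about neighbourhood life")],
    "autonomous technologies should serve the neighbourhood, not just individuals",
)
print(prompt)
```

The point of the sketch is only the shape of the collaboration: the human supplies the sources and the claim, the machine does the summarizing and the first reframing.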

Dan Shipper also described this phenomenon in his article, GPT-4 is a Reasoning Engine.

In other words, or as others have said, these are just nihilistic machines that require human collaborators…


Last week, I did not attend any specific events except for the opening of a new exhibition on alternative connections between mankind, nature, and technology called “Through Bone and Marrow” (https://brutus.nl/en/programme/current/through+bone+and+marrow/).

Notions from the news

Last week was all about AI, as mentioned above. Again, some new AI tools were announced. Bloomberg’s is definitely the most interesting, as they are building their own LLM. As you do nowadays, they introduced it with a paper.


In the meantime, the Italian privacy regulator responded by banning ChatGPT: https://www.politico.eu/article/italian-privacy-regulator-bans-chatgpt/

I listened to a Hard Fork podcast edition with, among others, Sundar Pichai of Google. He talked about how he thinks everyone will get their own personal model; the personal digital agent that Google has been building all these years, translated into a PLM? https://www.nytimes.com/2023/03/31/technology/google-pichai-ai.html

The plugins are powerful, both functionally and for the business model. Has anyone compared this to the app store revolution yet?


The business predictions are starting to bubble up.


This week’s reflection is by Gary Marcus.


And one by Morozov: “The ultimate risk of not retiring terms such as ‘artificial intelligence’ is that they will render the creative work of intelligence invisible, while making the world more predictable and dumb.”


Are there physical barriers still? “There are more practical concerns too. The pace of innovation in the GPU chips that are used to run AI is lagging behind model size, meaning that pretty soon we could face a “brick wall” beyond which scaling cannot plausibly go.”


Are we indeed moving into an age of average? Or is this a fear that pops up with every new wave of technology? (I remember this kind of story from some decades ago.)


It inspires new SciFi stories.


Powerful, weird, scary, uncanny, giddy — how the hell do we collectively navigate all that? Kottke reflects on his experiences.


The new AI is also becoming part of police surveillance tech, as it always has been. I remember the Memphis predictive policing case from years ago that I used in presentations. It will keep tracking us and become more impactful…

Will we transition away from neoliberalism towards productivism? “… in this essay an approach that I call ‘productivism.’ This is an approach that prioritizes the dissemination of productive economic opportunities throughout all regions of the economy and segments of the labor force.”

A silent invention that can have a big future impact?


On autonomous vehicles: it is also about the sound design.


Delivery robots are still happening.

Amazon is taking to the streets. Might be helpful for their delivery bots…


This is silly; why make these look like humans?


The design of humans and animals is not that bad after all…


Alarming… The deep ocean circulation that forms around Antarctica could be headed for collapse, say scientists.


Visual candy. Rendering real things unreal, and making the real feel unreal, through altered perspectives.


To close: Matt Webb’s AI Clock now has a web-based version. And a write-up on experiencing time in the context of current technology.


Paper for this week

This sounds like an interesting read indeed: data, compute, and labor.

The chapter will first examine the industry structure of AI. Far too often, the focus lies on the firms that use AI as opposed to the firms that provide AI. The latter, I will argue, are more important to understanding the nature of AI’s political economy. The second section will show that most research on AI monopolies has been on data as an input into the production process, but in the third section, I will set out a schematic model of the AI production process that shows data is only one small part of a larger set of inputs and tasks. The remainder of the chapter will then look at three key inputs: data, compute, and labor.

Srnicek, N. (2022). Data, compute, labour. In Digital Work in the Planetary Market (pp. 241-261). The MIT Press.

See you all next week!

Published by


I am a design director at Structural. I curate and organize ThingsCon Netherlands and I am chairman of the Cities of Things Foundation. Before that, I was innovation and strategy director at tech and innovation agency INFO, and visiting researcher and lab director at Delft University of Technology, coordinating the Cities of Things Delft Design Lab.