Weeknotes 290 – designing agon in fair AI systems

Last week, the saga around Sky and Scarlett continued, doing no favors to OpenAI’s reputation. An interesting angle popped into my mind while listening to a discussion in a podcast (one of the many that kept addressing the topic): with this voice, we passed the typical uncanny valley tipping point. The uncanny valley graph is usually only cited for the cringe factor of robotics, trying to be as real as possible but missing the mark by just that last bit. The graph continues after that cringe point, though, indicating the moment it feels genuinely real. That is what the ‘Her’ moment refers to. Another interesting point, of course, is that the ‘Her’ reference made by OpenAI is odd in a way, as the movie does not end well in terms of human-AI relations. The benefit, though, is that we will now be prepared and not expect a perfect monogamous relationship with our robot friends. Or rethink what monogamy means in that context…

Triggered thought

I would like to connect the triggered thought to a discussion during the PhD defense of Kars Alfrink, whose work is on contestable AI. His research has appeared here before through earlier papers; it is very valuable in the discourse on how to relate to AI systems and services. A too-brief summary, in my words: rather than focusing on transparency to deal with the impact of AI, we should focus on contestability, building democratic structures around AI, and especially giving human subjects agency in how they are treated based on the AI’s interpretations of their interactions. A more extended definition is, of course, available via his website, contestable.ai.

Two topics from the defense triggered my thoughts. The first was the last question posed: Kars’s research focuses on public AI, the systems applied by governments and other public bodies. Yet many of the systems influencing our lives will be made by private organizations and companies. Is there a difference in the impact, the expectations, and the contestability tactics?

The structure the AI is part of and embedded in is different. Democratic systems may not be the go-to solution there, but it is too easy to propose market mechanics (“voting with your feet”) instead. As the required data and investment in computation are still very large, we can expect the big players to build the AI with the deepest impact (and user value). So regulation, like the European AI Act, is something to look into, more than market dynamics. That is the way to enforce contestability.

Another strategy can sit within regulation or alongside it: arrange a form of interaction literacy to express these values. More concretely: with the Wijkbot project, for example, we aimed to create an urban robot prototyping kit that can be used in the process of designing real-world interactions with urban robots, giving citizens ways to formulate their wishes and boundaries. Such two-way systems should be part of AI systems, to continuously calibrate and control their behavior and rules. (For me, that is a key driver to keep working on the WijkbotKit as an empowerment and educational platform.)

The second thought is related. There was a very nice exchange with Liesbeth van Zoonen on embedding conflict into contestability, following Mouffe’s thinking on agonistic pluralism, as referred to in Kars’s work. In short, as I understood it, Liesbeth pointed out that in the referenced work the conflicts relate to collectives, not individuals. Kars indicated that there is indeed a need for continued research here. I was thinking of a way to treat individual interactions and conflicts as if they are always proxies for collectives. There might be a relation with Wijkbot too (HCIMTAM :-) ), as we initiated a project for students at Industrial Design Engineering on individuals using Wijkbots as proxies in services in their neighborhood, with the provocation that these urban robots might form their own collectives and ‘oppose’ the needs of the individual resident. How do we deal with, and make use of, these opposing civic robots?

Back in 2017/2018 we organized a series of events on Tech Solidarity, wondering how to build a grassroots community of tech workers in the Netherlands advancing the design and development of more just and egalitarian technology. To be honest, that is not an easy job. I think the angle of creating more collective awareness among designers might still be a potential strategy for AI systems. It can stimulate a different mindset and, with that, different outcomes in how the AI interacts with its subjects, building a real co-performing partnership… built on designing with collectives, for collectives, based on new forms of democratic principles.

As always, this is just the beginning of the thinking… tbc.

Read the notions from the news, the paper for this week, and events to track via the newsletter. You can also subscribe for weekly updates, delivered Tuesdays at 7 am CEST.
