Weeknotes 287 – a new layer of digital life twinning

Hi y’all!

Welcome to my new subscribers and readers! You'll find a bit of background on this newsletter below, but let me dive right in now.

I was planning to reflect on the event I attended last Thursday, the yearly (since 2019) "De staat van het internet" (State of the Internet) organized by Waag. But then something else popped up; see below. Still, I don't want to let it pass completely. This year's edition was different from last year's, which was deeper and more thoughtful, with James Bridle. This year it was much more "applied," with European politician Kim van Sparrentak sharing her ideas and actions on digital policy and AI. To be honest, the talk was a bit bleak and predictable, perhaps what you'd expect from a complex political process. I respect her efforts, but it confirmed once more how hard it is to change things and create real impact.

The panel afterward was, however, very good and underlined the feeling that arose: are we really fighting polarization with more politics, or do we need another angle? I noted for myself that we need to focus on democracy more than on politics. More concretely, it is all about social fabric, education, building trust in the system, and resilience. Build on people's more fundamental beliefs to create a fairer debate and more constructive policymaking. Hopefully that will result in more reasonable politics too, respecting fundamental rights over popular opinion.

Some examples made clear that design can be used for change, like creating platforms that aim for consensus rather than opinions, as in Taiwan. I was happy with the remark from the audience stressing that we talk too much about the digital as a spin-off of real life, a separate reality that can be regulated separately, while it is now the other way around: digital is real life and should be governed accordingly. And that is a segue into the triggered thought after all…

Triggered thought – the LLM as a 'digital twin' for our digital lives

Benedict Evans has a column in his newsletter (sub) on the role and impact of LLMs on search behavior and on how search engines will change face with fewer and fewer links. True, but one phrase triggered a different thought: how search engines create a layer on top of the open internet. That is no news; we see much of our reality through the lens of the search engines we use (mainly Google) and partly through social networks. The adage of the proprietary internet has been a common notion for at least a decade.

But we are now digital creatures, because most of our emotional and social life happens via all kinds of digital tools. The discussion on the changing role and presentation of search, influenced by LLMs and other forms of AI-enhanced search, is not only about a change in search itself; it might add a new layer. The way we interact with LLM intelligence to enhance our own thinking with inspiration, advice, and reflections might tap into a different digital presence, a new type of digital consciousness. In that sense, we might start treating LLM-enhanced tools more as partners, whereas the 'old' search engines are more like tools to us.

In another opinion piece, on AGI, Evans explores the promise of AGI and our relationship to it. For me, this is a related topic that shapes the way we, as citizens, treat these things.

If we start by defining AGI as something that is in effect a new life form, equal to people in ‘every’ way (barring some sense of physical form), even down to concepts like ‘awareness’, emotions and rights, and then presume that given access to more compute it would be far more intelligent (and that there even is a lot more spare compute available on earth), and presume that it could immediately break out of any controls, then that sounds dangerous, but really, you’ve just begged the question.

A form of AGI might already be here as soon as AIs are able to answer all questions by combining multiple sources. Perplexity is already on that path. Combine that with some interaction with humans to improve the results, and we will see a form of AGI that is not autonomous but in constant conversation with humans, has a frame of values, and does not act overconfidently, bragging the way current generative AI tools tend to do.

The embodiment of AI is an important step towards a more AGI-like feel. AGI is often framed as a danger to human life: taking over more jobs, and perhaps influencing our views and behavior in such a way that we lose our own agency. These are possible dangers that we can overcome through education, through prebunking systems, and by making people more aware and literate. And so it connects back to the conclusion of the State of the Internet…

Read the notions from the news, the paper for this week, and events to track via the newsletter. You can also subscribe for weekly updates, every Tuesday at 7 am CEST.


Buy Me a Coffee at ko-fi.com