Weeknotes 231; may the 4th be with you

Hi! Happy AI y’all.

Another week full of AI news. Steered, of course, by that big happening in Austin mentioned last week, it was the moment to announce new model versions (GPT-4, Baidu's Ernie Bot, Claude), integrations of AI into tools (Google Workspace, Office, LinkedIn), and language model APIs (PaLM from Google). Microsoft, meanwhile, got extra attention for laying off its Responsible AI team. GPT-4 was surely the most anticipated and most discussed. Greg Brockman of OpenAI gave an impressive demo in which the more conceptual reasoning capabilities and the visual understanding stole the show. It is now passing the bar exam with high scores. And creating a (still quite ugly but) functioning website from a sketch on paper was impressive.

I decided to take out the Plus subscription to start some conversations myself. I will do more later; I am specifically curious how well you can prime it with existing concepts and develop a proper discussion on those concepts that delivers new insights, ideally letting you reframe your own thinking. This is one of the core applications we would use. My first quick conversation delivered more a feeling of lip service than of a built-up critique, but I am sure I need to tune the prompting. It feels like talking to a chatbot with media training.

A taste of that media training: “By fostering open discussions about the potential risks and benefits of GPT-4 in human-tech relations, and actively working to mitigate potential negative consequences, we can ensure that the technology is used responsibly and contributes positively to the human experience.”
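For those who want to experiment with this kind of priming beyond the chat interface, here is a minimal sketch using OpenAI's Python library. To be clear about assumptions: the system prompt wording, the concept text, and GPT-4 API access are my own choices for illustration, not a tested recipe.

```python
# Sketch: seed a conversation with an existing concept and ask for
# critique rather than summaries. Assumes the openai package (pre-1.0
# API) and access to the GPT-4 model; prompt wording is illustrative.
import openai

openai.api_key = "sk-..."  # your API key

concept = (
    "Cities of Things: things act as citizens of the city, "
    "with their own agency and responsibilities."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a critical design researcher. Do not summarize "
                "or flatter. Challenge the concept, name its weakest "
                "points, and propose one reframing."
            ),
        },
        {"role": "user", "content": f"Critique this concept: {concept}"},
    ],
)
print(response["choices"][0]["message"]["content"])
```

Whether that yields a real critique or just better-dressed lip service is exactly the thing to find out.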

Lex also integrated GPT-4, and it remains one of the nicest incrementally improving integrations. Watch this introduction of GPT-4 in their writing tool.

A powerful concept that is part of Lex and other writing tools is now available as an image tool from Stability AI: Reimagine. “Stable Diffusion Reimagine does not recreate images driven by original input. Instead, Stable Diffusion Reimagine creates new images inspired by originals.”

https://stability.ai/blog/stable-diffusion-reimagine

Events

Before diving into all the other news of last week, here are some events in the coming week and beyond that might be interesting:

And we announced a third speaker for the ThingsCon Salon on Listening Things at STRP; Joep Frens will reflect in particular on insights from student explorations in the IoT Sandbox.

On to the news of last week. Or better said: continuing with it. Collecting my captured links shows how hectic it was; I will make a selection…

First, an overview of GPT-4

https://arstechnica.com/information-technology/2023/03/openai-announces-gpt-4-its-next-generation-ai-language-model/

As always, there are numerous additions to the AI critique.

Gary Marcus wrote on GPT-4, its successes and failures: “How GPT-4 fits into the larger tapestry of the quest for artificial general intelligence.”

Last week I reported on the talk by James Bridle on other intelligences. In The Guardian he published a long read on the relationship between AI and culture. “This is a huge shift. AI is now engaging with the underlying experience of feeling, emotion and mood, and this will allow it to shape and influence the world at ever deeper and more persuasive levels.” “Artificial intelligence in its current form is based on the wholesale appropriation of existing culture, and the notion that it is actually intelligent could be actively dangerous.”

https://www.theguardian.com/technology/2023/mar/16/the-stupidity-of-ai-artificial-intelligence-dall-e-chatgpt

More Bridle in this interview by Claire Evans, in case you cannot get enough.

An advance publication of a paper by Rita Raley and Jennifer Rhee discusses “Critical AI: A Field in Formation.”

https://doi.org/10.1215/00029831-10575021

Asking questions and getting answers is not hard; validating the correctness of those answers is a different skill.

https://www.oreilly.com/radar/getting-the-right-answer-from-chatgpt/

Kevin Roose got lots of attention with his Bing/Sydney conversations just a couple of weeks ago. He thinks GPT-4 is exciting and scary at the same time, posing risks we cannot anticipate.

Robin Sloan: “We are living and thinking together in an interesting time. My recommendation is to avoid chasing the ball of AI around the field, always a step behind. Instead, set your stance a little wider and form a question that actually matters to you.”

https://www.robinsloan.com/lab/phase-change/

Nir Eisikovits: AI isn’t close to becoming sentient – the real danger lies in how easily we’re prone to anthropomorphize it

https://theconversation.com/ai-isnt-close-to-becoming-sentient-the-real-danger-lies-in-how-easily-were-prone-to-anthropomorphize-it-200525

Jon Evans on the role of language and the significance of these models: “It seems very likely that language will be key, and that modern LLMs, though they’ll seem almost comically crude in even five years, are a historically important technology. Language is our latent space, and that’s what gives it its unreasonable power.”

Nathan Baschez is calling LLMs the new CPUs.

https://every.to/divinations/llms-are-the-new-cpus

Matt Webb on AI in a loop: “It’s not self-replication that we should be looking at. It’s self-evolution.”

https://interconnected.org/home/2023/03/16/singularity

Dan Shipper is seeing GPT-4 growing into a copilot for the mind.

https://every.to/chain-of-thought/gpt-4-a-copilot-for-the-mind

The openness of OpenAI is different with GPT-4; this time, the company shares hardly any details about how the model was built.

https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-launch-closed-research-ilya-sutskever-interview

And how about those other tools? Did they lose the AI race?

Some applications: Nabla is a digital health startup with a copilot. Be My Eyes is using it to assist visually impaired people. VALL-E is doing voice cloning. And Midjourney is introducing a magazine.

https://www.freethink.com/robots-ai/7-uses-of-gpt4

And PwC is introducing legal business solutions, which seems to make sense.

https://www.pwc.com/gx/en/news-room/press-releases/2023/pwc-announces-strategic-alliance-with-harvey-positioning-pwcs-legal-business-solutions-at-the-forefront-of-legal-generative-ai.html

Reid Hoffman had early access and used that advantage to be the first to co-author a book with GPT-4.

https://www.linkedin.com/posts/reidhoffman_i-wrote-a-new-book-with-openais-latest-ugcPost-7041773192712998912-A5pE/

With all that AI happening, the robots are less in the news. Or hidden. Disney introduced its own humanoid at SXSW, with some smart tricks.

https://www.iotworldtoday.com/robotics/disney-robot-debuts-at-sxsw

The academic conference on HRI (human-robot interaction) is a source of new papers. Check the work of the DEI4EAI project in this thread, and check the tweets of @mlucelupetti for some pointers.

A self-driving lab is using AI on another level: “Autonomous Discovery and Optimization of Multi-Step Chemistry using a Self-Driven Fluidic Lab Guided by Reinforcement Learning”

https://news.ncsu.edu/2023/03/alphaflow-speeds-chemical-discovery/

And in other news, Zipline’s newest drone delivery system was announced. Silent, precise…

https://www.axios.com/2023/03/16/drone-delivery

What will be our future post-automobile?

https://popupcity.net/insights/the-free-street-manifesto-is-a-guide-for-a-post-automobile-future/

And what is the current state of Tesla’s full self-driving future?

https://www.washingtonpost.com/technology/2023/03/19/elon-musk-tesla-driving/

And to close the captured-news section: in other other news, climate change…

The IPCC came with alarming news today, though with a deliberately positive framing. The hopeful tone of a report that makes mostly very grim reading is a counterblast to the many voices that have said the world has little chance of limiting global heating to 1.5C above preindustrial levels, the threshold beyond which many of the impacts of the crisis will rapidly become irreversible.

https://www.theguardian.com/environment/2023/mar/20/ipcc-says-world-can-avoid-worst-of-climate-breakdown-if-it-acts-now

And some specific consequences of the changing climate in California.

https://phys.org/news/2023-03-california-mountains-scientists.html

Paper for this week

To stay on topic, this week’s paper is Algorithmic Black Swans: “From biased lending algorithms to chatbots that spew violent hate speech, AI systems already pose many risks to society. While policymakers have a responsibility to tackle pressing issues of algorithmic fairness, privacy, and accountability, they also have a responsibility to consider broader, longer-term risks from AI technologies.

Organizations building AI systems do not bear the costs of diffuse societal harms and have limited incentive to install adequate safeguards. Meanwhile, regulatory proposals such as the White House AI Bill of Rights and the European Union AI Act primarily target the immediate risks from AI, rather than broader, longer-term risks. To fill this governance gap, this Article offers a roadmap for ‘algorithmic preparedness’ — a set of five forward-looking principles to guide the development of regulations that confront the prospect of algorithmic black swans and mitigate the harms they pose to society.”

Kolt, Noam, “Algorithmic Black Swans” (February 25, 2023). Washington University Law Review, Vol. 101, forthcoming. Available at SSRN: https://ssrn.com/abstract=4370566

Thanks for reading. I hope that next week will be a bit less packed…

https://knowyourmeme.com/memes/if-i-had-more-time-i-would-have-written-a-shorter-letter

See you next week!

Published by

iskandr

I am a design director at Structural. I curate and organize ThingsCon Netherlands and I am chairman of the Cities of Things Foundation. Before that, I was innovation and strategy director at tech and innovation agency INFO, and a visiting researcher and lab director at Delft University of Technology, coordinating the Cities of Things Delft Design Lab.