
Hi all! Hope you had a nice Easter as a short holiday break. I took the chance to read more of last week’s news articles and watch some videos. One of them was **The A.I. Dilemma** by Tristan Harris and Aza Raskin, discussing the impact of generative AI as compared to the Manhattan Project and the atom bomb. https://www.wired.com/story/how-to-make-sense-of-the-generative-ai-explosion/
Gollem-class AI
That it goes fast and holds dangers is in itself interesting to watch: the double exponential curve of AI training AI, with humans in and out of the loop. But it is more interesting to explore how to organise the consequences. Democracies might need to be reshaped.
An example of the impact: AlphaPersuade. AI is not only training to become a better player, as in AlphaGo; it is training against itself to become a better persuader.
Sacasas writes about apocalyptic AI this week. “The madman is not the man who has lost his reason. The madman is the man who has lost everything except his reason,” he quotes. Is an AI apocalypse to be expected, and what would that mean? “AI is apocalyptic in exactly one narrow sense: it is not causing, but rather revealing the end of a world.” Here it is AI, more than robotics, that is driving these developments. AI safety is also this week’s topic for OpenAI themselves, publishing their vision. And we need global oversight.
https://theconvivialsociety.substack.com/p/apocalyptic-ai
https://www.theatlantic.com/technology/archive/2023/04/ai-robotics-research-engineering/673608/
More buzz of the week was about Twitter and Substack. There is an interesting episode of Sharp Tech today, “Clown Car History Lessons, Both Sides of the Twitter-Substack Fight, Parenthood Tech Strategies”, on Twitter’s strategy in its feud with Substack.
It made me think back to earlier moments of thinking about Twitter strategies; how it should have been a messaging platform for the real world… https://targetisnew.com/category/twitter/
Events
I had to miss my planned attendance at the meetup last Wednesday; a proposal needed to be completed. So I can’t report on that. Check the video here, though.
This week is of course the week of the STRP festival in Eindhoven, including the ThingsCon Salon on Listening Things that we organise ourselves this Friday.
And other events that pop up:
- Data & Drinks in Amsterdam on Thursday. I have never been there; it seems to be a big one: https://www.meetup.com/data-drinks/events/292108501/
- London IoT is next Tuesday, in person again: https://www.meetup.com/iotlondon/events/292386828/
- v2 has the monthly open lab on 14 April (online from Rotterdam) https://v2.nl/events/open_lab-2023-iv
- A little bit further ahead: CHI is something I would go for if I had the time and budget :) This time it is in Europe, in the pleasant city of Hamburg. https://chi2023.acm.org/
- Same for the Salone. Never been there; the side program is often even more interesting, I understand. Check Tobias Revell if you are there.
- Also, next week: the Hmm at Tolhuistuin in Amsterdam on 19 April, on generative podcasts.
Notions from last week’s news
Let’s start with the weekly AI updates. On the application side, so to say, we see integrations of AI in Expedia and Siri.
Following the presentation by Aza Raskin and Tristan Harris, language is the core element in generative AI development. Multiple new systems are popping up.
Does One Large Model Rule Them All? The three authors are excited about the developments and believe in a diverse landscape of AI components combined with a few general AI models.
https://maithraraghu.com/blog/2023/does-one-model-rule-them-all/
The role of ChatGPT in education is interesting to explore. MIT Technology Review did this and concluded that it will change education rather than destroy it.
https://www.technologyreview.com/2023/04/06/1071059/chatgpt-change-not-destroy-education-openai/
The companionship of AI and humans is a hot topic this week too. The critical writer Ethan Mollick describes a companion for thinking: “it’s important to remember that it is a tool to support human decision-makers, not replace them.” That might indeed be true for now and for the time to come.
Speaking of that companionship feel: I mentioned I did some quick programming with GPT-4 last week. This is also possible with voice commands, and it can create a GitHub repo and deploy it too.
The relationship with AI is a topic for careful consideration. An example is how easily Google’s Bard can be seduced into lying.
https://www.wired.com/story/its-way-too-easy-to-get-googles-bard-chatbot-to-lie/
Do LLMs have agency? Feedback from humans is feeding the agency of machines, writes Gordon Brander.
https://subconscious.substack.com/p/feedback-is-all-you-need
Mike Barlow from O’Reilly adds the danger of bias, especially bias that is in the eye of the beholder and, one step further, bias that remains hidden.
https://www.oreilly.com/radar/eye-of-the-beholder/
Ezra Klein pleads for publicly initiated development rather than leaving it to the big companies. https://www.holo.mg/stream/hard-fork-ezra-klein-kevin-roose-casey-newton-ai-vibe-check/
A sketch of the market players and categories in generative AI.
https://aspiringforintelligence.substack.com/p/bonus-post-game-on-in-generative
Excel sheets can make a difference in shaping the intelligence of AI.
https://www.freethink.com/robots-ai/excel-for-llms
Meta is introducing Segment Anything, a system to isolate objects in visually sensed imagery. Good for targeted advertising, of course:
https://ai.facebook.com/blog/segment-anything-foundation-model-image-segmentation/
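As a side note, the model ships with a Python package. A minimal sketch of how it might be used, assuming the official segment-anything library, a locally downloaded ViT-H checkpoint, and opencv-python; the filenames are placeholders:

```python
# Minimal sketch, assuming `pip install segment-anything opencv-python` and a
# separately downloaded ViT-H checkpoint; image and checkpoint names are placeholders.
import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

# Load the pretrained Segment Anything Model (checkpoint downloaded from Meta's repo)
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# Read an image (OpenCV loads BGR) and convert to RGB, as the model expects RGB arrays
image = cv2.cvtColor(cv2.imread("street_scene.jpg"), cv2.COLOR_BGR2RGB)

# Generate a mask for every object the model can find; each entry is a dict with a
# binary 'segmentation' array, a bounding box, an area, and quality scores
masks = mask_generator.generate(image)
print(f"Found {len(masks)} segments; the largest covers {max(m['area'] for m in masks)} pixels")
```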
A returning notion: AI is physical too, like the processors it runs on.
https://benschmidt.org/post/2023-03-07-webGPU-day/
The identity of AI is also influencing robotics. Jonny Thomson, a philosopher, proposes a fourth law of robotics (next to the original Asimov rules): a robot must identify itself.
https://bigthink.com/the-future/3-rules-for-robots-isaac-asimov-one-rule-he-missed/
Time for some structure in our lives after all these items. The grid is famous; good grid design means learning the deeper structures.
https://alex.miller.garden/grid-world/
Still an issue: trust in IoT devices. Will these stop working sooner than you expect? Bricks or bricked?
Gary Marcus is looking ahead to GPT-5, which he expects will not be completely different: the same way of working and the same lack of real understanding, just a better pretending machine.
Mind control is getting near, as long as we are connecting control to agency… Graphene is the promise.
https://www.freethink.com/hard-tech/bci-robot-dog
Buildings and complexes are designed to be recognisable from different angles, even from a satellite. It makes me think: why not create a building that looks like a QR code from above?
Paper for this week
This week’s paper again aligns with the topics discussed, the central role of language and companionship: **HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace**
It is quite common nowadays to introduce new products and features with a paper.
Considering large language models (LLMs) have exhibited exceptional ability in language understanding, generation, interaction, and reasoning, we advocate that LLMs could act as a controller to manage existing AI models to solve complicated AI tasks and language could be a generic interface to empower this. Based on this philosophy, we present HuggingGPT, a framework that leverages LLMs (e.g., ChatGPT) to connect various AI models in machine learning communities (e.g., Hugging Face) to solve AI tasks. Specifically, we use ChatGPT to conduct task planning when receiving a user request, select models according to their function descriptions available in Hugging Face, execute each subtask with the selected AI model, and summarize the response according to the execution results.
Shen, Y., Song, K., Tan, X., Li, D., Lu, W., & Zhuang, Y. (2023). HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace. arXiv preprint arXiv:2303.17580.
https://arxiv.org/abs/2303.17580
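To make those four stages a bit more tangible, here is a rough sketch of the controller pattern as I read it from the abstract, not the authors’ actual code. It assumes the 2023-style OpenAI chat API and the transformers pipeline helper; the prompts, the toy model catalogue, and the model IDs are my own illustrative placeholders.

```python
# Illustrative sketch of the "LLM as controller" idea from the HuggingGPT abstract.
# Assumptions: openai<1.0 (ChatCompletion API), transformers installed, OPENAI_API_KEY set;
# the prompts, catalogue, and model IDs below are placeholders, not the paper's own.
import json
import openai
from transformers import pipeline

# Toy catalogue: task name -> Hugging Face model, standing in for the model
# descriptions that HuggingGPT reads from the Hub
MODEL_CATALOGUE = {
    "image-classification": "google/vit-base-patch16-224",
    "summarization": "facebook/bart-large-cnn",
}

def plan_tasks(user_request: str) -> list[dict]:
    """Stage 1 (task planning): ask the LLM to decompose the request into subtasks."""
    prompt = (
        "Split the user request into subtasks. Reply as a JSON list of objects "
        f"with 'task' (one of {list(MODEL_CATALOGUE)}) and 'input'.\n"
        f"Request: {user_request}"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the model returns valid JSON; a real system would validate this
    return json.loads(response["choices"][0]["message"]["content"])

def execute(subtask: dict):
    """Stages 2+3 (model selection and execution): pick a model and run the subtask."""
    model_id = MODEL_CATALOGUE[subtask["task"]]
    return pipeline(subtask["task"], model=model_id)(subtask["input"])

def answer(user_request: str) -> str:
    """Stage 4 (response generation): summarise the execution results with the LLM."""
    results = [execute(t) for t in plan_tasks(user_request)]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Request: {user_request}\nResults: {results}\nWrite a short answer.",
        }],
    )
    return response["choices"][0]["message"]["content"]
```

The real framework does more (handling dependencies between subtasks and selecting among the many models on the Hub by their descriptions), but the control loop has roughly this shape: language in, language between the models, language out.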