Hi all. As usual, I am writing this newsletter on Monday evening/night, collecting interesting reads I picked up from RSS feeds and newsletters, aware that time is too short to be complete. Which is fine, I guess…
Last week I worked on the tooling for Structural most of the time, but we also had our ThingsCon Winter Unconference on Friday. It was a very nice event, just as we had hoped: a small but super engaged crowd, nice semi-planned sessions by participants, fun student projects to explore, an inspiring opening talk, and cozy closing drinks. On top of that, Dries helped to resurrect my Nabaztag! I can share a bit more next week, as we hope the after-movie will be ready by then.
Also, last week I caught up with Gert, visited Arjan’s new initiative, and attended the kick-off of the RAAK research on Human Values for Smarter Cities. Kars shared the latest on his research on designing contestable AI, which is an important driver there.
This week the event calendar is rather quiet as we run toward the end of the year.
- 14 December, online event by The Hmm: https://thehmm.nl/event/the-hmm-in-low-low-res/
- 15 December, Trustworthy AI in the Netherlands, also online
News captures for this week
The wave of new generative AI tools is hard to keep track of; just a few here: art, theater plays, summary videos, logos, replies. Most of them are taken from the extensive overviews of BensBites.
There is a real debate on the impact of ChatGPT on the realness of text; not for nothing, someone built a detector to check that realness… There was also a lot of continuing debate on the capabilities vs the truthfulness of ChatGPT, and how easily it can fool you, especially on topics where you are not an expert. Last week I shared some good articles on this, and there are more this week. And more.
|Back to the Future|
SYNTHETIC REALITY – “What happens when ancestral lessons collide with groundbreaking advances in synthetic biology?”
|DeepMind’s latest AI project solves programming challenges like a newb|
MODELING GALORE – “Google’s AI division tackles programming languages with a language model.”
|Engineers Push Probabilistic Computers Closer to Reality |
FUTURE COMPUTING – “This week at the IEEE International Electron Device Meeting (IEDM 2022), engineers unveiled several advances that bring a large-scale probabilistic computer closer to reality than ever before.”
|Lonely Surfaces: On AI-generated Images |
GENERATIVE AI CRITICS – “Even if we don’t think a new tool “kills” art, we should be curious about how it might transform art, or at least some of the skills and practice we have called art.”
|How Amazon Robotics is working to eliminate the need for barcodes |
HUMANIZED – How might new advances in vision technology replace visual checks in human-machine cooperation? “Why multimodal identification is a crucial step in automating item identification at Amazon scale.”
|Will ChatGPT Kill the Student Essay? |
GENERATIVE EDUCATION – “Nobody is prepared for how AI will transform academia.”
|ChatGPT Is Dumber Than You Think – The Atlantic|
GPT CRITICS – “Computers have never been instruments of reason that can solve matters of human concern; they’re just apparatuses that structure human experience through a very particular, extremely powerful method of symbol manipulation. That makes them aesthetic objects as much as functional ones.”
|Dowsing is a technology for intuition amplification (Interconnected)|
DESIGNING NEW INTERACTIONS – Dowsing as a metaphor for positive priming and interaction principles in human-tech cooperation. “when there is an unconscious belief that there is water underground, you enter a positive feedback cycle and the intuition is amplified into a visible signal.”
|The Internet’s New Favorite AI Proposes Torturing Iranians and Surveilling Mosques|
GPT CRITICS – “ChatGPT, the latest novelty from OpenAI, replicates the ugliest war on terror-style racism.” I hope we will not run into the same mistakes again, and instead design a safe space upfront rather than learn only after the systems are done…
|Elon Musk’s Neuralink killed 1,500 animals in four years; Now under trial for animal cruelty: Report – Tech|
NEURO FUTURES – “Elon Musk’s Neuralink, a medical device company, is reportedly under federal investigation for potential animal welfare violations amid complaints from internal staff.”
|Thanks to AI, it’s probably time to take your photos off the Internet | Ars Technica|
FOOLING AI – Don’t trust your social connections! “AI tech makes it trivial to generate harmful fake photos from a few social media pictures.”
|San Francisco decides killer police robots aren’t such a great idea|
CONTESTING HUMANS – “Explosive robots were approved for the SFPD arsenal, then the protests started.”
|Autonomous vehicles from Waymo and Cruise are causing all kinds of trouble in San Francisco.|
AUTONOMOUS CLUTTERING – “Based on San Francisco’s experience, residents and officials in those cities should brace for strange, disruptive, and dangerous happenings on their streets.”
|China’s social credit score – untangling myth from reality |
SURVEILLANCE – “The idea that China gives every citizen a “social credit score” continues to capture the horrified imagination of many. But it is more bogeyman than reality. Instead, we should be worrying about other, more invasive surveillance practices – and not just in China.”
|Robotaxis are now available to hail on the Uber app in Las Vegas |
AUTONOMOUS – “Motional’s robotaxis are now available to hail on the Uber app in Las Vegas. The two companies signed a 10-year deal for ride-hailing and delivery. Los Angeles will be their next market.”
|This month’s Frame: How Stuart Hall’s identities framework can help understand the rise of Discord and Telegram|
UNDERSTANDING – “A framework for understanding the benefits and limitations of different digital identity models. The one question Elon Musk wanted answered before acquiring Twitter was: How many Twitter users are bots? While it proved to be a particularly difficult question to answer with precision, the broad response was “too many”. But why are there more bots on Twitter than on Linkedin or Facebook? Partly because Twitter doesn’t expect users to use their real identities. Why is that the case? And conversely, why should we need to use them on Facebook or Linkedin?”
|WeWalk raises cash to bring computer vision to smart cane for visually impaired people|
IOT CLASSICS – We don’t see this type of IoT product development that often anymore. “London-based WeWalk is looking to bring computer vision technology to its “smart cane” for visually-impaired people.”
Paper for the week
DeepMind is known for its experimental as well as applicable AI, inspiring and worrying alike. The title of this paper caught my attention: Explainability Via Causal Self-Talk
Explaining the behavior of AI systems is an important problem that, in practice, is generally avoided. While the XAI community has been developing an abundance of techniques, most incur a set of costs that the wider deep learning community has been unwilling to pay in most situations. We take a pragmatic view of the issue, and define a set of desiderata that capture both the ambitions of XAI and the practical constraints of deep learning.
The paper is tech-heavy, but it also describes the concepts.
Roy, N. A., Kim, J., & Rabinowitz, N. (2022). Explainability Via Causal Self-Talk. arXiv preprint arXiv:2211.09937.