Weeknotes 275 – new buddies to deal with everyday hidden complexity

Hi y’all! Another week passed by quickly. It’s good to see new people finding the newsletter every week. Welcome! My goal is to collect some interesting news from last week through my lens. I find the news via good old RSS, newsletter subscriptions and algorithmic tools (too bad Artifact is quitting; it is still surfacing news, but strangely enough, only Apple-related). Perplexity is the newest addition; it also has a discovery section. I also like the pro mode, which I got as part of my Rabbit.tech pre-order. The result page is much richer than ChatGPT’s, with sources and images, and is better for insights. ChatGPT still seems better for generating stories.

As an example, I asked Perplexity to make an overview of the Cities of Things activities. It is quite good. At the same time, it is a great way to test what needs more attention in the communication. 😊

Triggered thought

New goggles… Another ex-Apple employee is starting an intelligent product, frames. Helping make sense of the world and finding hidden knowledge has been an inspiring use case for new product development since the beginning of Augmented Reality. I have vivid memories of the strategic concepts back in the day, starting with location-based services based on triangulating phone masts in the early 2000s and later with AR phones like the first Google phone, the G1, and companies like Layar. And, of course, Google Glass closed off that era around 2014. At that moment, the revolution was location awareness: GPS in combination with other phone sensors to map a data layer onto the real world. The new generation is flipping this with an intermediary device that superimposes a layer on reality, built from knowledge and from understanding what it sees. So it is no longer necessary to map a layer of reality in advance; intelligence uses existing knowledge to connect the dots.

It is interesting to think this through a bit. We are already used to being able to find all the knowledge we search for via our phone, including a sense of space through the continuous location graph. We still have to activate that knowledge base with a clear cue, the search prompt.

At the same time, we are just starting to get acquainted with the possibilities of generative AI and LLMs. We can use these tools to make unexpected connections that lead to new inventions (the level of newness is still a topic of discussion, of course).

If we combine these two, the instant knowledge gratification and the superpower of extending reality through generative thinking, we might create objects that are more of a companion than a tool. Rabbit is an example, and so are the Brilliant frames. But an Apple Vision Pro with generative capabilities will also become a game changer. I already saw a video of a visually impaired Vision Pro user magnifying the real world to navigate the streets. And in the near future the news will not be that IKEA created an AI assistant, but that it created a space prepared with all the right datasets to feed your personal AI agent. The witty friend that can entertain you, or that makes you the friend with the best ideas. What will that mean for the way we relate to our city life? “What if our increasing reliance on digital technologies can diminish our capacity to experience the world in full?” Time to explore this more!

Find the notions from the news, events to track, and the paper of the week in the complete newsletter.

Have a great week!

This week I will dive into contestability as an update to this research project, contribute to some academic paper writing on practice-based research with the wijkbot and to proposals for new projects, discuss Cities of Things plans, and visit a museum and catch a movie or two. I also hope to make progress on this book.

See you next week!