Weeknotes 277 – constructing system cards for your PLMs

More personalised large intelligence models that can unlock (or unhide) latent knowledge. And other news on ‘beyond human intelligence’ and embodiment (aka #AI #robotics) in this week’s newsletter.

Woke AI is the talk of the town. Or at least a town. Google's new Gemini and Gemma versions seem to work pretty well compared to the other suppliers, but to prevent misbehaviour, the guardrails were made too strict and ended up working against their goals. Next to this, OpenAI had a new form of system crash, with ChatGPT even sharing gibberish and going berserk. Will RAG help?

Many more things happened in AI and robotics; this week is quite heavily skewed towards these two topics; see below. And I was triggered by a specific subset.

Triggered thought

Sometimes, things pop up at the same time in different contexts. Like now: running locally hosted LLMs (Large Language Models) is becoming possible. NVIDIA was in the news for their financial results (see below), but they also created a tool, Chat with RTX, for people to host an LLM themselves. It is not for everyone due to the system requirements, but the principle is clear. Another one is a tool called Jan.ai that promises the same, and there are more. It is like a PLM, a Personal Language Model, that combines and unlocks your personally collected data with the now well-known chat interface. And as discussed last week, this could very well become a more exciting interface than what Gemini and Perplexity are offering.
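The core of such a PLM is retrieval over your own notes before the chat model answers. Below is a minimal sketch of that retrieval step, using only keyword overlap; all names (`notes`, `build_prompt`) and the sample notes are illustrative, not any real tool's API.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def score(note: str, question: str) -> int:
    """Relevance as the number of words a note shares with the question."""
    return len(tokens(note) & tokens(question))

def build_prompt(notes: list[str], question: str, top_k: int = 2) -> str:
    """Pick the top_k most relevant notes and prepend them as context."""
    ranked = sorted(notes, key=lambda n: score(n, question), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {question}"

# Hypothetical personal notes, standing in for a Notion export.
notes = [
    "Edition 233 linked PLM to a prediction about personal models.",
    "Edition 263 asked what a PLM learns from your collected documents.",
    "NVIDIA reported strong quarterly results this week.",
]
prompt = build_prompt(notes, "what did earlier editions say about PLM?")
```

Real tools replace the keyword overlap with embedding similarity, but the shape is the same: retrieve locally, then hand the grounded prompt to the chat model.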

This is not new; I was pretty sure I had used the term PLM before. Finding out would be a perfect question for my own PLM. The second best is Notion, of course, where I write the first version of this newsletter every week. So, via a normal search, I quickly found that I mentioned it in editions 233 (4 April) and 263 (14 November). And I wrote a draft for a Cities of Things newsletter back in August that was not published due to lack of time. In April, it was linked to Sundar Pichai of Google, as he expected everyone to get their own personal model. In November, the reference was also linked to the personal collection of documents, and I was wondering what it would learn; so, what you could learn from your own past practice, so to say.

That was also the setup of the article in August. Can we leverage the newly established conversational interfaces with stored intelligence to understand what we are looking for and build more insights after all? “There will probably be more tools trying this. And in a sense, I think Apple has the same intentions with the learning operating system. Apple is now not only becoming more intelligent but has the potential to connect all our physical space to it. And that will only grow if they succeed in the future with an augmented life platform.”

Intelligent note-taking as a second brain. It would still be interesting to extend this sometime, hopefully in the context of a concrete project. For the “Beautiful Contracts AI platform”, I explored the role of System Cards that define the behaviour of LLMs. What will the system cards of our second brain look like?
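To make that question concrete, here is a hypothetical system card for a second-brain PLM, sketched as a plain dict plus a completeness check. The field names are my own assumptions, loosely inspired by how model and system cards document intended use, data sources, and guardrails; they are not taken from any published format.

```python
# Hypothetical system card for a second-brain PLM; all fields illustrative.
second_brain_card = {
    "name": "personal-language-model",
    "intended_use": "Answer questions over my own notes and newsletters.",
    "data_sources": ["Notion exports", "newsletter archive"],
    "guardrails": [
        "Only cite passages that exist in the indexed notes.",
        "Flag answers when no relevant note is found.",
    ],
    "out_of_scope": ["Medical, legal, or financial advice"],
}

def check_card(card: dict) -> list[str]:
    """Return the required fields that are missing or empty."""
    required = ["name", "intended_use", "data_sources", "guardrails"]
    return [field for field in required if not card.get(field)]

missing = check_card(second_brain_card)
```

The interesting design question is which guardrails a personal model needs at all: a second brain that refuses to speculate beyond your own notes behaves very differently from one that freely extrapolates.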

Check the complete newsletter here.

Published by

iskandr

I am the founder of Target_is_New, founder of the Cities of Things knowledge hub, and an organizer at ThingsCon. Before that, I was research director at digital agency INFO, visiting professor at TU Delft, and design director at Structural