Iskander Smit is an innovation director at tech and innovation agency INFO, visiting professor at the Delft University of Technology coordinating the Cities of Things Delft Design Lab, and chairman and organizer of ThingsCon Netherlands.
I also attended the PhD defense of Péter Kun, ‘Design Inquiry through Data’, which presents an interesting framework for those interested in how data and machine learning become part of design practice.
Also last week we had a successful ThingsCon workshop, organized together with Lorna Goulden of IoT Eindhoven, on ‘Don’t Be Evil – Building Trustable and Responsible Technology Business’. We had an interesting group of participants, Miro worked very well as a collaborative tool, and we are definitely planning to continue towards a definition of possible tools to support, for instance, transparency and accountability.
The coming week is the final week before I’m off for a couple of weeks of vacation, so I’m planning ahead for some new projects starting with students as part of the research into Things That Predict and Cities of Intelligent Things, and for the ThingsCon Salon in The Hague and the NGI session (the latter not yet on the website).
Also looking forward to participating in a workshop on Designerly HRI (Human-Robot Interaction) as part of the RO-MAN 2020 conference. My contribution is about the potential role of predictive knowledge in our relations with robots.
More on human-nonhuman partnerships in the news last week. And more AI. But let’s start with some quantum philosophy.
What happened last week? It was quiet with the graduation students: one is finalizing for graduation this week, another is planning for a green-light meeting this week, and two others have these moments at the end of September. I will update the DDL Cities of Things website as soon as a project finishes.
Last thing to mention is the Robophilosophy conference organized by Aarhus University. Interesting to see how they solved the online conference experience: a mixture of live presentations and discussions, and pre-recorded talks. The talks were – as usual with academic conferences – linked to the accepted papers in the program. The discussion sessions tried to combine papers in themes like Design, Moral Robots, Ethics, etc. These sessions were heavily formatted, with strict time limits for the speakers. That was good for preventing long-winded answers, but on the other hand it prevented a real discussion from happening, IMHO.
I did not have time to follow all sessions. I liked the sessions of Aimee van Wynsberghe, John Danaher and Selma Sabanovic. The latter stressed how robots are a means for humans to communicate, and in that sense you can use robots for building communities. Aimee introduced the notion of reciprocity in the interaction with robots as a means to create social systems. Design for reciprocity should be part of design for HRI (Human-Robot Interaction). I was wondering how this relates to the notion of co-performance that Kuijer and Giaccardi described. I think there is an interesting difference to look into: with co-performance there is a mutual goal.
Danaher gave a good final presentation as he dived into the question of how social robots change our values. I don’t have the answer yet. Danaher sketched the roles of robots in relation to agency: from tools (negative agency) to supervised (low agency), interdependent (high agency) and reflective (moral agency). I might need to chew on this a bit more.
In the news I share here there are always some robotics topics; every week new instances of robot companions are introduced, it seems, gradually but surely taking a place in our lives…
Temperature is dropping, and we are slowly getting out of vacation times. There was not a lot last week to write about, so I’ll keep it short. I missed the new format of the Pivot podcast I’ve been following for some time. The Pivot Schooled editions are five longer episodes with some high-profile guests. It costs 30 dollars for all five; this week Uber’s CEO will have a discussion with the founder of a gig-workers collective. I am curious how it will be.
Also this week there is a new academic conference on robophilosophy. This fits the topics of this newsletter very well, so I will have a look at a couple of sessions for sure. It is also a slightly different setup from the DIS online conference I ‘attended’ earlier: all presentations are recorded, but there are live discussion sessions you can attend. Too bad I did not have an excuse to go to Aarhus, the organising university, but on the other hand it is a lot cheaper (registration is 10 euros).
We are also very busy planning some events for ThingsCon. Of course our annual event, which will happen in December in an online format. More on that at the beginning of September, I expect, as we also open the call for participation. Next week Thursday we will have a smaller gathering in the form of a workshop we organize together with IoT Eindhoven on ‘Building Trustable and Responsible Technology Business’. Check more information and RSVP on the meetup page.
Together with the Smart Cities team of The Hague we are organizing a Salon at the end of September (the 24th, 15-17h) on responsibly onboarding public digital ecosystems, all about their living lab in Scheveningen. Read more on the meetup page. As soon as there are more details, I will of course share them here too.
Let’s dive into the news of last week. Futuring and robots are the main drivers of the news here. And some track & trace, now that the COVID app has been introduced in the app stores here in the Netherlands; I expect some articles on that next week. I have installed the app but disabled its functioning for now, as I am still not totally convinced I need to support the initiative. More on that next week; on with the robots etc.
A vacation version of this newsletter. It is not me who is on vacation, but the rest of the world is, for sure. I think last week and this week are peak vacation; next week things start up again in preparation for the new year.
However, the news was intense with the Beirut explosion, which almost felt like a live experience. Combining all the footage would make for a frightening AR experience (not too soon I hope, or better, never).
Also: Reels is here, the TikTok competitor on Instagram. The battle begins now for real (no pun intended): experience vs status. You now see the ‘professional’ TikTok makers refer to Instagram accounts to make money from their videos. But the experience of a TikTok stream is so much more appealing than Reels is now. Read this great essay on TikTok’s cultural tricks.
Finally, to follow up on last week’s announcement of my visit to the Boymans Drive-Thru museum at Ahoy. The super short review: it is a fun experience. And the experience is more about the setting than the art, partly because it misses background information on the pieces themselves. It is interesting how it changes your relation with the art if you are trapped in a cocoon…
I did not capture as many news items as usual, which is fine. Still some nice and interesting reads…
Nothing will remain hidden… especially when you focus on poo-tracking, apparently. We learned before that our sewage system is an early indicator for COVID cases; here is another use: “An orbiter saw signs of almost a dozen previously uncounted colonies in Antarctica, boosting known numbers for a threatened species. The discoveries were made by spotting the distinctive red-brown guano patches the birds leave on the ice.”
Whether it is a hack or a form of citizen participation you could debate: “by reverse engineering apps intended for cyclists, security researchers found they could cause delays in at least 10 cities from anywhere in the world.”
Let me share this here too. I think it is indeed interesting to think about the role of audio as AR, especially in combination with edge computing on devices like the AirPods. The next generation might very well be extended with GPS, an accelerometer, and intelligent behavior to switch between personal (noise-cancelling), social (transparency) and AR modes.
I shared society-centered design before when it came out; in this post they explain what design choices were made for a compelling manifesto. Useful in case you are up to creating a manifesto sometime.
Not directly related to intelligent systems, IoT, robotics or life automation, to name a few of the common themes in this newsletter. But it looks very nice and it is about possible futures for cities, so I think it fits: “A building of Bangkok’s Thammasat University is now home to Asia’s largest rooftop farm.”
Nice to share the Tis.tv newsletter for once. It often offers nicely curated videos combining the new and the known in a theme. This one is about new tools for making online meetings more exciting. This week’s edition is on delivery robots, but those you have all seen in this newsletter before, of course :-)
Last week I attended, as planned, the online seminar of Stacey on IoT, with some nice panels around the theme Everything is Connected. The panels had good line-ups and offered a nice mixture of IoT business with a touch of responsible implementation. I think it made clear how defining the services are for IoT, and how both the design and the orchestration in partnerships bring challenges and opportunities. Due to the time difference I could not experience the roundtable discussions, but luckily the video can be watched online.
I had to think of last year’s book Thinking in Services by Majid Iqbal, which also makes very clear how new types of things are not physical objects but services. Thinging in services.
On Thursday, I watched the NGI Forward session on ‘Dialogues on Digital Identity’ (watch the replay), discussing what identities mean in times of fluid assemblages. This connects to that notion of services too. How do you know that the one onboarding onto your service is genuine? How to trust? With AI in the mix, it makes no sense to create ethical frameworks to regulate the technology; focus instead on the processes. An important part of the conversation was on the question of centralized vs decentralized identity management. The UX of trust is super important.
We are looking into this topic too for the next ThingsCon event, a workshop on the Code of Trust on the 2nd of September; I will share the details next week. This week summer has really kicked in and there are no (online) events on my calendar. However, I look forward to visiting the temporary exposition of the Boymans Ahoy Drive-Thru museum, which can only be visited driving an electric car. I will let you know how it was (or follow me on Instagram and you will see some pictures/stories for sure :-) )
Maybe a bit off-topic, rather analog even though the cars are electric. But not autonomous. Maybe an idea for car makers for the future. Robot news is there of course, as every week.
For some years I have been sending out newsletters via Getrevue, and this year I started doing so weekly again. This week I start sharing this newsletter via my blog as well. One of the reasons is a possible revival of the RSS reader; having these as blog posts here makes it possible for you to subscribe in your favorite reader. I hope this fulfills a need, and if not, it is a nice way to archive :-)
As a quick update on activities: as announced, I participated in a DIS workshop on expressive/sensitive interactions with robotic objects. The workshop was well prepared and showed again the value of breakout rooms and a strict Miro template. Aspects of agency, contextual interactions and illusions of life were discussed. The value of these kinds of workshops is not in a specific outcome but in finding common ground with different researchers to kick off more specific partnerships for the future. Looking forward to the follow-up!
Furthermore, this week is about catching up with graduation projects in the Cities of Things Delft Design Lab and developing the next step for the research. And we are discussing ThingsCon activities that are planned for August, September, and our annual event in December.
On with the news. Enough to share, I think. Hiding robots, challenging COVID tracking, and spatial interfaces. Some eye candy and the 48 rules of PowerPoint.
Last week we organized a special event with ThingsCon, the stay-at-home TH/NGS Jam. We invited people to work on tinkering projects from home, discussing via various online conversations. On your own, together. As theme we chose to connect the long IoT tradition of making devices that bridge distance in a physical way to the current situation of our (intelligent) lockdown life, where conversations are held via tools like Zoom, Teams or Jitsi. The event was a success, judging by the feedback. You can check the projects in the demo session if you like (YouTube).
My own project was themed on haptics. What can haptics mean for these conversations? In the quick brainstorm at the beginning of the day I was thinking about how the screen is not only a barrier for the conversation; it also creates a closer and more direct relation with the other, as you are looking at the person from nearby. On the other hand, you can miss the presence of a person, and with that you miss certain cues such as nodding, rolling eyes, confirming mumbling, etc. And in online conversations there is often the situation that people switch off their camera, for technical reasons or just because someone is deliberately in listen-only mode.
If half or more of your fellow participants in the meeting are visually muted, you not only miss any feedback cues, you also do not know if someone is paying attention at all. Maybe it is just a way to leave the conversation for other work, or even to have parallel conversations; gossiping in a WhatsApp chat is rather common. That uncertainty is not evenly distributed: you don’t know exactly when someone is watching and paying attention. It is almost like a panopticon, the guarding system based on the uncertainty of the guarded person. The Zoom Panopticon, so to say.
The idea that I worked out a bit during the TH/NGS Jam was to facilitate an extra layer of communication based on simple pokes to others at the meeting table. Like kicking someone under the table.
For demoing the concept I used a haptic demo kit we made a couple of years ago at LABS of INFO to demonstrate how different haptic cues feel. A sweatband on your wrist vibrates in different patterns, each linked to one of four buttons.
For this concept of kicking under the table, the demo kit should be connected to another one someplace else. That resembles the TaSST project, where two sleeves are worn in two different locations and the first person can stroke their sleeve to trigger a similar feeling in the sleeve of the other. Via a connection to your phone this can work anywhere. Another concept based on this is the Hey bracelet, designed for long-distance relationships, where you can gently squeeze someone else’s wrist via a wristband wearable.
The hardest part in such a meeting is knowing who is sitting where ‘at the table’; the video wall looks different for everyone in the chat. To solve this you could add a code to everyone’s screen, or make a connection via face recognition (which would be a complex solution). I thought you could make it more low-tech and stimulate a certain dialogue in using this second-order communication. My plan was to make a little table with all the persons in the conversation represented by a comic figure. Those figures are buttons to trigger the haptics in one of the other wristbands.
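To make that mapping concrete, here is a minimal, hypothetical sketch of the ‘kick under the table’ lookup: each comic-figure button resolves to one participant and their wristband. The names and wristband IDs are invented for illustration; the real kit would send the trigger over a network connection.

```python
# Hypothetical sketch: map each comic-figure button to a participant
# and their wristband, so a button press 'kicks' the right person.
# Names and wristband IDs are invented for illustration only.

PARTICIPANTS = {
    1: {"name": "Alice", "wristband": "band-01"},
    2: {"name": "Bob", "wristband": "band-02"},
    3: {"name": "Chris", "wristband": "band-03"},
}

def poke(button: int) -> str:
    """Resolve a button press to the wristband that should vibrate."""
    person = PARTICIPANTS.get(button)
    if person is None:
        return "no one assigned to this button"
    # In a real setup this would send a message to the remote demo kit.
    return f"vibrate {person['wristband']} to poke {person['name']}"
```

The low-tech part is that the mapping is fixed per session, sidestepping the problem that everyone’s video wall is ordered differently.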
There is of course the practical issue that everyone needs a band to use it, which makes it quite complex.
So this happened.
But! Thinking back a day later, I came up with a better application and implementation. Too bad I did not have that during the event, but let’s not waste the idea; I’ll share it here.
The starting point is the same, the Zoom panopticon, but now focused not on meetings but on presentations. That makes much more sense. I have been doing these presentations online with all listeners (you, at home) visually muted, and with that no feedback whatsoever. What if every participant could send haptic cues to the speaker without having to switch on the video? These cues might even work better for second-order feedback. This also solves a practical issue: only the presenter needs to wear something (like a wristband) that receives the impulses. All the participants can use their phone to send feedback during the presentation. This feedback can be open to interpretation, or can be chosen from a set of possible cues. I need to think through (and test) what cues would work best; my hunch is that positive feedback works best, in different levels of intensity: nodding to agree; taking a picture of the screen (‘I’m not distracted by my phone, I’m just checking something you say’); ‘I tweet this now!’; and maybe as a wildcard: ‘hey! you are muted.’
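That cue set could be sketched as a simple lookup from named cues to vibration patterns, plus a counter that lets the wristband scale intensity when many listeners react at once. This is a hypothetical illustration; the pattern values and function names are invented, not part of any actual kit.

```python
# Hypothetical sketch of the presentation-feedback cues: each named cue
# maps to a vibration pattern (pulse_ms, pause_ms, repeats). The values
# are invented for illustration only.
CUES = {
    "nod": (100, 200, 1),    # light single pulse: nodding to agree
    "photo": (100, 100, 2),  # 'just checking something you say'
    "tweet": (150, 100, 3),  # 'I tweet this now!'
    "muted": (400, 100, 4),  # wildcard: 'hey! you are muted'
}

def cue_to_pattern(cue: str):
    """Return the vibration pattern for a cue, or None if unknown."""
    return CUES.get(cue)

def aggregate(incoming: list[str]) -> dict[str, int]:
    """Count simultaneous cues so the wristband can scale the
    intensity when many participants send the same cue at once."""
    counts: dict[str, int] = {}
    for cue in incoming:
        if cue in CUES:  # ignore unknown cues
            counts[cue] = counts.get(cue, 0) + 1
    return counts
```

The presenter’s wristband could then, for instance, play the ‘nod’ pattern more intensely when aggregate reports that ten people nodded within the same few seconds.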
I think this would be an interesting concept to test for real. Looking into ways to spice up virtual presentations is a popular topic: Matt Webb writes in this post how his way of building and telling a story has changed with virtual meetings, and Benedict Evans reflects on how to solve online events.
Things are becoming networks: autonomous things with their own agency, as a result of developments in artificial intelligence. The character of things is changing into things that predict, that have more knowledge than the humans they interact with. Things are building a new kind of relation with humans: predictive relations. What is the consequence of these predictive relations for the interaction with humans? Will the things that know more than we humans do help us understand the complex world, or will the things start to prescribe behavior to us without us even knowing? What is the role of predictive relations in the design practice of the future designer?
This notion of predictive relations is linked to earlier research in the research program PACT (Partnerships in Cities of Things) and the work in the Connected Everyday Lab by Elisa Giaccardi and others. The notion that we will have affective things that draw conclusions from the interactions things have with humans, and combine these with built-up knowledge from the network, is illustrated in the provocation by Iohanna Nicenboim and Elisa Giaccardi called Affective Things.
In a paper (M. L. Lupetti, Smit, & Cila, 2018) we described some near-future scenarios of how things connect to existing data and cloud services in the smart city and act in concert with people. In a few specific scenarios we sketched how these relations may play out. Take a pizza delivery pod that knows so much background information, in combination with historical data on orders, that it can become an affective thing, starting a dialogue about the situation of the person ordering the pizza. She always used to order two pizzas, but lately the orders became one pizza; combined with other behavior, the conclusion is drawn that her relationship with her boyfriend has ended. The delivery pod takes on a new role here as a good friend, a shoulder to cry on. A role that can do no harm if it stays within the domain of that one interaction. The links to other behavior in other situations indicate, though, that this is not the actual situation.
Another example describes a future public transport situation, based on a system of smaller transport pods that have flexible route planning for going from A to B. This means that the pods don’t follow fixed routes and the travel time is severely reduced. But there is a catch. The system is not only flexible in the journey mapping; the planning also takes into account who is travelling, including the social status of the person traveling. The service plans its routes via a combination of actual route efficiency and these priorities. The consequence is that journey time is hard to predict for the individual traveler. Creating more transparency in the decision making is key in building citizen robotic systems that are trusted by human citizens (M. Lupetti, Bendor, & Giaccardi, 2019).
The foundation of this future society
What enables these systems to happen? The first driver is the digitizing of our world in all aspects. We have deconstructed our cities from increments of buildings or structures into a layered model where the base layer is the physical layer. On top of that we have a digital layer that is connected to databases and computing capabilities. Entities can be physical or digital, and use the digital layer to be assembled into a service. This is the fluid assemblage (Redström & Wiltse, 2018). Not only can these assemblages be defined at the moment of use or interaction; the physical layer also functions differently. Instead of setting the stage, it is a blank sheet with the right components. Kitchin & Dodge described this situation as code/space (Kitchin & Dodge, 2011), a space where the digital computing layer has become crucial in defining the functionalities. No computing layer means no functionality, something that can already be seen in the extreme at an airport. In the deconstructed city, the services offered are totally open for interpretation, but at the same time control of that layer is more and more limited to a select number of players.
The thing itself is changing too, into an intelligent artefact. It connects with an existing network, collects real-time data and acts proactively. And most interestingly, it has social behavior. These things take their own role in our society; things are citizens.
Predicting and prescripting
That things are becoming networked objects behaving as fluid assemblages is the start. These things can adapt to the data in the network and to the interaction with other things and humans. This creates a situation in which the thing has more knowledge of possible future developments than the human can have based on the combination of observation and anticipation. Anticipation here is based on knowledge from experience or learned interpretations. If we let go of a ball, we understand it will fall to the ground. When that same ball is an autonomously operating ball that can connect to the network, and things start to predict outcomes, it will feed forward on situations we did not anticipate.
The more complex the behavior of the thing, the more anticipation of expected results steers the interaction. The more complex the thing, the more dependent we will be on the predictions made.
In the future we will shift continuously between the simulated future and the now. Think of simple examples like the weather app: predicting rain based on radar data and sensors, it rules our expectation of getting wet when going outside more than our own judgement of the actual rain situation. And more specifically, the example of a Tesla that predicts an accident and takes the initiative to brake before the accident actually happens.
We are entering an interesting domain of tension here: what rules, the predictive system that helps us understand the complex world, or a system that prescribes our behavior?
If the things form a framework for our decisions, will we transfer agency to the system of things? And if we do so, will that limit our own agency? This is hardly a question; we already put more trust in systems to keep knowledge, removing that knowledge from our own memories. Google is the ultimate assistant, and an example of what dependency entails. The filter bubble has become a recognized concept: what we think is true depends on what tools like Google present to us.
As soon as we start to experience this disconnect between the real world and (pre)scripted life, alienation is a possible outcome. We feel disconnected from the devices as their working is defined more in the decentralised system than in the direct operation. This can even cause physical unease (Bean, 2019).
A new design space
The interplay of predictions and actions creates a complex, interrelated design space. Predictive behavior shapes our mental model of how the thing acts. At the same time, our actions shape the digital model of the thing. In a first model of predictive relations, the interplay of the human and the autonomously operating thing is deconstructed into a combination of pattern recognition, interactions with a digital representation of the thing, and knowledge of probable futures generated by similar instances in the network.
For designers of physical things, the span of control is already extending from the physical instance to the digital service that is incorporated in or unlocked via the physical artefact. With the notion of predictive relations, there is a need for designing contextual rule-based behavior. The choices made in the design define the distribution of agency between system, thing and human. Systems of things form an entity of their own, and the design influences both the system and the things, as well as the interplay of thing and human. To deal with this complexity, the default approach might be to automate the system behavior with machine learning and AI. But what does that mean for our position in that system? Can we keep a set of responsible rules? We like to work with known knowns and known unknowns. But what is the consequence for the way we design if we need to do this for unknown knowns?
Traditionally, I look ahead to the next year around the turn of the year. This year is a bit different for a lot of people, as we are entering a new decade. At least that is the common feeling. What are the roaring twenties of the 21st century? It is the theme for a lot of thinkers (I like this one by Kara Swisher, to name just one). However, it remains interesting to have a peek into 2020 too and see how it will contribute to bigger changes.
But good to start with a look back at the last decade. Much has been said, and I think there is no discussion about the most transformative development in tech, stretching beyond it too: the development of the smartphone as the center of everything. I think it is not so much about features and sizes, but about how it changed the role of tech in our world. More than the introduction of the smartphone in its current form back in 2007, almost simultaneously by Apple and Android, the real breakthrough was the app system that caters endless possibilities for makers to create new functions for the wearable technology; and the addition of GPS made it a reality device, a link between our real life and our digital life, and that turned out to be key. That was the driver for new transformational services: the phone is now the hub between the huge knowledge graph the internet already was and our real life. And that can be linked to the impact of tech on society now, the data ecosystem, and the root of the current techlash.
In sketching developments in the Internet of Things — one of my focus areas of the last decade for sure — I think the phone is too often seen as the IoT device in itself, and that this triggers uninteresting concepts where the object gets a remote control in the form of an app. That is however changing. The objects become more intelligent, more part of a product-service system with the phone only as connector to that system, and the first iteration towards the edge computing that became popular in recent years. We only just now see the objects becoming smarter themselves; intelligence is becoming embedded in the objects, with the AirPods as poster child. Keep this in mind for the look into the next decade.
So what really changed in the last decade is the place of tech in our real life. Computing was a tool used for specific functions, first word processing and then, with the first iteration of the world wide web, as library and communication service. Now we have the possibility to create an app for everything we do. Whether it is navigating or taking pictures, that glass slab is the center of our personal universe. I don’t know if Steve Jobs had foreseen this when he introduced the iPhone as an integration of three functions: the phone, an internet communication device and a music player. The real power was the role of the phone as infrastructure for services.
We now see peak smartphone, at least in the form factor: a glass slab to interact with, combining computational power and connectivity. It will be optimized, it will be foldable. It will in essence return to a single-purpose device, the window to the system behind it. That system is becoming independent from the smartphone, a development that started with the AirPods, HomePods and watches as intelligent touchpoints. This is not limited to an Apple-centric development: Google is building a strong foundational infrastructure with services like Duplex and computational photography, and shifting towards more quality in the devices to stimulate valuing products on more than just features. The looks are becoming important.
This will continue, and I think a shift in the relation with the technology we use is key for the coming decade. AI is far away from artificial general intelligence (AGI); robots will not fully take over from humans, that will take more time, and more importantly, it makes no sense to strive for that. We will extend our human capabilities with technology on a deeper personal level. You could say that in the last decade the phone mainly made the services facing us as users more intelligent; in the coming decade the focus will be the human facing outwards. I think there are three drivers:
Boosted humanity. Technology is extending human capabilities, from electric last-mile mobility to the power to reason. We also see how daily life is now monitored continuously and planned rigorously. This is a feeding ground for the urge to improve and boost ourselves: doing Bodytec gym, etc. Even developments in DNA sequencing are a form of boosting humans, by making it possible to create better medicine.
Relations human-nonhuman. In the use of the technology we will no longer see it as a tool, as a slave if you wish. We will build relations with the intelligent partners, value what they can achieve and respect how they bring reflection to our own life choices. Although the starting point will be boosting human capabilities, the practice is one of mutual understanding.
Living together with pal-tech. Following through on this, we will (later) not only have this attitude of working together as a relation; we will allow the tech to become part of our human systems. First in cities. Cities of Things, with things as citizens, the research program we run at TU Delft, is linked to this.
Describing the third driver feels as if the artificial life, the nonhuman fellow inhabitants, could take over, a fear sometimes expressed in AI ethics. We need to start understanding what the roles are. The first step here is to lose the meme that tech is neutral. We are still the designers of the tech, although we see that tech will become a creative force here too. Like the Spotify design process, where AI is part of the design team, initiating new ideas for products instead of being only an ingredient of the end product.
It will be a super interesting decade to live in. But we need to be aware of doing it right: breaking the power of the big players, as well as the possibility of an implementation that does not stimulate but harms the freedom of living together with the tech, which is what a system based on surveillance capitalism creates. A public stack could work, based on open-source governance. We need a system similar to what the app store did for mobile: a way to cater to creators and makers to build the partnering technology. And one based on human values more than economic values. We want objects that live longer, as a necessity to become more sustainable, without losing the option to update. Fashion that adapts, cars that update, etc. We like to keep our stuff up to date. Phones were fashion items but now stay around longer; the inside keeps changing.
Looking one year ahead, we will see that phones do not change fundamentally. They become more basic on the outside and more intelligent on the inside. The intelligence will be opened up to developers even more; hopefully that is the step Apple can take in its next OS stack. The AR glasses are not so important, nor are the foldables. More interesting is what is happening in the integration of intelligence into the infrastructure. The alliance of the big players for an open IoT stack is a first sign here; in 2020 more examples will be introduced, compare IKEA Tradfri. The big boom will come later, once the infrastructure is more stable.
The same goes for mobility. Just as the mobility revolution at the beginning of the 20th century drove developments, the unbundling of physical functions (cars) and infrastructure (engines, software), accelerated by electric cars but also bikes and last-mile vehicles (boosted boards, scooters), will continue and pave the way for the next forms of mobility services in this century.
There is of course more happening than tech: a lot of politics, geopolitical changes, risks of serious crashes in both the US and China leading to new kinds of internal conflicts. But tech plays a great role here too. Think only of the surveillance systems in China and the counterattacks by a new kind of ad-hoc demonstrating.
So to wrap up: we shift from a decade of empowering products and services with computational capabilities towards empowering humans by boosting human capabilities on all levels. For that we will see a further shift from new gadget-like devices towards integrated infrastructure in all the things we use. It begins with creating tech that we can modify and hack into our own pals, and with which we will build a new type of relation, hopefully based on more human-driven economic systems. Let that be our new-decade resolution. Have a good 2020(s)!
This was the 7th time I traveled to Austin to experience the center of the (US) tech world: 2011, '13, and '15–'19, to be precise. As I always tell fresh attendees: it is not so much about the next new app anymore. SXSW got that name in 2007 when Twitter was launched, and in 2009 with Foursquare. After that there was never such an impactful launch. You can, however, see how certain themes emerge: from social media to blockchain to service design and AI. And like last year, AI was the most important trend this year. Last year, and also in 2017, it was more about the new possibilities and discovering what AI actually is and what a possible future might be. Now it was much more about the impact on society, even without super-specific examples.
Ethics and inclusiveness were the key themes. It indicates how much SXSW nowadays focuses on society and the impact of technology, more than on one-off successes (or failures). It is something Bruce Sterling referred to in his closing remarks on Wednesday: we have entered a world that is ruled by the G-mafia, the GAFAM that is dictating our reality. And our reality is changing, so much is clear.
In the panel session that INFO organised together with Philips and UNStudio, inclusivity was connected to the smart city, looking into design strategies:
– Design contextually and with equity, letting people make decisions, not just assuming they only want to give opinions
– Good interventions are based on a thorough understanding of the user experience in the ecosystem
– All inclusive city processes should include ‘negotiations’ between citizens/users
At SXSW there are always several themes and focus points running in parallel. The theme I focused on was automation, robotics, the man-machine relation, and AI, and also what it means for the designer.
There was enough to see and hear about that; since AI featured in all the other tracks too, it was even harder to filter out the right sessions.
So, before diving a bit deeper into this, here is a TL;DR, a look back in a couple of bullets:
AI as societal impact, triggering societal questions as the key discussion. As we have entered a tech-reality ruled by big tech, SXSW will be less about startups and more about impact on society
As soon as AI is ready for use, it stops being AI; it starts being a tool or a machine
Living with tech requires computational contracts as a shared understanding, and computational design as the future of design
With AI, computational contracts, etc., we will not design, we will be designed, and we need to find the right way to cooperate with the machines
To make technology inclusive we need to focus on the outcomes at a different, human level, and we need to dare to choose slow technology if necessary. And to quote Bjarke Ingels: don't follow the dogma of thinking outside the box, be obsessed by the restrictions
Podcasts are the hopeful promise for media
Scooters (electric steps) are a nice and handy ride but create huge clutter on the sidewalk when you let the competition loose
Social home appliances
There were a number of social robots in the LG house: a few large versions with screens for events, but the most interesting was CLOi, a SocialBot as they call it. This is a handy robot, about the size of a Google Home or an Alexa, specially developed for emotional interaction. The design looks a bit like the recently 'deceased' Jibo robot.
I saw Jibo for the first time at SXSW, at the time in a presentation by its creator Cynthia Breazeal. It then remained silent for a long time, until last year. Jibo appeared to have been overtaken by the law of the handicap of a head start. The robot was extremely good at displaying human behavior, but not intelligent enough to compete with Alexa and Google Home. Jibo was also far too expensive.
Interesting how LG follows a similar path in the development of its SocialBot, with the difference that attention is focused a bit more on the eyes than on movements. In the LG house other automated machines were displayed as well, such as a beer machine and an ice machine. The SocialBot does not stand on its own, but must be seen as an intermediary for LG's household appliances, which of course will all become much more intelligent.
The role of such bots is interesting. In presentations I often use the example of the Nomi assistant in the Chinese Nio cars, which has a similar interface built in to shape the contact between the car's functions and its occupants.
Relationship between man and machine
A lot of the discussion at SXSW is about the relationship between man and machine, our intelligence and AI, and the ethical aspects thereof. It is super interesting that Asian companies choose social robotics instead of the more functional approach of Amazon or Google Home.
Douglas Rushkoff mentioned the collaboration between humans and non-humans too. Rushkoff has a mission that he calls Team Human: "We don't need a substitute for real life." He argues that robots should not be treated as slaves. We must not go back to feudal times; that brings us down as people: "Respect non-human rights."
It was also discussed in the "Academia and the Rise of Commercial Robotics" panel. We now have an engineering platform; the next step is to use social science to enable cultural interactions.
Another panel spoke about Active and Passive AI, where passive stands for serviceable AI that you can call on to execute an assignment, while Active AI takes the initiative itself. You could deduce from the questions from the audience that people are not completely comfortable with it. In addition to concerns about privacy, there is a great deal of fear that robots and AI will take over the world.
When it comes to applications, it also makes sense to zoom in on ethics. A good example was the last session I attended, with Stephen Wolfram, the creator of Wolfram Alpha, a computational search engine that is widely used in science and education. Wolfram believes that a new language must be developed, the computational language, and he spoke about computational contracts.
His tool is a smart machine that contains a lot of AI. His story ties in with the discussion about blockchain, so it is not surprising that his talk was called The Future of AI in Blockchain. That title has a high buzzword density, but Wolfram knows what he is talking about; his presentation was not a list of empty words or superficial views. With his tool he showed how we can communicate with machines in a different way.
Wolfram was not the only advocate of computational thinking. John Maeda, also a regular SXSW speaker, commented on this during his Design in Tech update, and he is also writing a book about it: How To Speak Machine. Many threads came together in his story, but an important starting point is his focus on Computational Design, a discipline that he puts alongside Classical Design and Design Thinking.
Maeda also makes clear how design changes if you take the dynamics of computer-driven services (of which AI is part) as a starting point. Where Wolfram concentrates on the new language and functions, Maeda is concerned with the impact of AI on design practice. His report on this goes into great depth and contains many examples.
Ethics was a theme that was explicitly discussed during SXSW this year, especially when it comes to AI and robots. Both John Maeda and Stephen Anderson pointed out, for example, how the designer's field of work is changing. No longer is a single artefact the subject of our work; it extends much further, and the underlying system is key. Both the Design in Tech report by Maeda and the framing model by Anderson are recommended for those who want to know more about this.
How are we going to collaborate with AI? How are we going to understand each other? SXSW is much about the dangers and the role of robotization and AI, but also about how we will experience our world under the influence of new forms of intelligence, tools, and interfaces. It was noticeable that AI is currently high in the Gartner hype cycle. Certainly, last year there was also a lot about AI, but then as a concept. This year the relevance was discovered and AI was visible in new services.
War of Cyberpunk
SXSW is the place where you hear new themes for the first time, or where the themes that matter in the coming years are confirmed. As the closing speaker of the interactive festival, Bruce Sterling summarized the state of affairs with a literary statement entitled "War of Cyberpunk". Sterling concludes that high tech is now definitely the dominant factor in society.
This has a number of consequences, one being that there is no room for startups anymore. Whether that is so certain is the question. What is certain is that fewer "things" were shown at SXSW this year. The most important talks were about major changes. Themes such as computational language (the new language that we must learn to speak) and computational design reflect the change that is taking place. These are developments that call for what Anderson calls Design 3.0, with which we will relate differently to the things that we design.
This theme coincides in an interesting way with the developments in robotics. It will be another interesting year!