The Discussion: AI, Agents and Swarm Intelligence - Episode #26

This week I’m talking to Louis Rosenberg, CEO and chief scientist of Unanimous AI



This week I’m talking to Louis Rosenberg, CEO and chief scientist of Unanimous AI and the chief scientist of the Responsible Metaverse Alliance.

We discuss the impact of conversational AI on our everyday interactions and whether marketing will be morphing into manipulation as virtual salespeople become a reality.

We also discuss the data hidden in our movements and the work being done to protect our motion privacy. The conversation ends with a discussion about swarm intelligence and what we might learn from biological systems in nature.

Unanimous AI

Louis Rosenberg film link

The Future of You is a finalist in the Independent Podcast Awards 2023.

Tracey's book 'The Future of You: Can Your Identity Survive 21st Century Technology?' is available in the UK and US.


Tracey Follows  0:19

Welcome to The Future of You. This week, I'm chatting with Louis Rosenberg. He's the CEO and chief scientist of Unanimous AI, and also the chief scientist of the Responsible Metaverse Alliance. He's a lifelong technologist, researcher and entrepreneur who has been working on virtual and augmented reality for well over 30 years. In 2014, he founded Unanimous AI on the basis of a novel technology known as artificial swarm intelligence. It's based on the decision-making abilities of biological swarms, and it's utilised to enhance the collective intelligence of networked human groups. Louis has become a vocal critic of the potential risks that virtual reality, augmented reality and artificial intelligence pose to society. And please do check out the link to his film, Privacy Lost, in the show notes. In this chat, we cover motion prints in VR, our interactions as embodied avatars, generative AI and its impact on our culture, the design of AI agents, and of course, swarm intelligence and how it works. It was such a privilege to get access to the wisdom and perspective of someone who has been working on AI for so long, and who has been so thoughtful about the effects of AI on our identity and society in general. I really hope you enjoy this discussion with Louis Rosenberg.

Tracey Follows  1:48

Louis Rosenberg, thank you so much for joining me.

Louis Rosenberg  1:52

Yeah, thanks for having me.

Tracey Follows  1:53

Not at all, I've wanted to talk to you for ages. Obviously, I have been following your work for a while. But on one of the earlier episodes of the podcast I was talking about something you'd written that I found fascinating, which was about motion prints. So I wondered if we could start off talking a little bit about motion prints and what they are. And maybe you could summarise the point you were making in the article about the way in which we might be able to be identified, or recognised at least, in virtual reality or augmented reality situations.

Louis Rosenberg  2:25

Yeah, so as a little bit of context, I've been involved in virtual reality and augmented reality for over 30 years. And for most of that time, there were certain types of data that I thought were benign, that I thought were safe. Most people did - in fact, everybody did. The most basic data that gets tracked when you're in a virtual environment, or an augmented reality environment, is the motion of your head as you move it around and the motion of your hands. Every system tracks your head and your hands, usually referred to as telemetry data. And of all the data that we've ever considered, we always just kind of ignored that - like, that's safe. The dangerous stuff we thought about was when cameras are pointed at your face, or cameras are pointed at your eyes, or all the other kinds of data that are obviously invasive. And so about a year and a half ago, I started collaborating with a research team at UC Berkeley in California, where we started looking at telemetry data, the actual motion data, with some really advanced artificial intelligence methods, where we can take large amounts of data, process that data and look for patterns in the data that we never before were able to see. And the lead researcher at Berkeley, a fella named Vivek Nair, got hold of a really big block of data from an application called Beat Saber, which is the most popular virtual reality game there is - hundreds of thousands of recordings from players playing Beat Saber. And we said, well, if you process this data with AI, could you uniquely identify people inside of that data by how they play the game, or even just by five or 10 seconds of them moving their head and hands around in the game? And it turns out that you can.
It turns out that everybody has very, very unique motions - very unique head motions, hand motions - to the point where if you had 50,000 recordings, you could uniquely identify a single individual just based on five or 10 seconds' worth of them moving their head and hands in the game. And so it turns out that the way you move is actually just as informative as your fingerprint or a retinal scan. And so we use the phrase 'motion print' for that: the way you move is really, you know, you could think of it as a fingerprint. But the thing about fingerprints is that we keep them private, right? If you put your finger on a fingerprint scanner to log into your phone or your computer, you don't expect that your phone provider is going to take your fingerprint and put it on the internet, right? They treat your fingerprint with very high security, because it's associated with your identity. It's considered biometric data, data of the highest security - Apple or Google won't let it leave your phone, because otherwise it's no longer a security tool. And yet this motion data, when somebody's in a virtual environment or an augmented environment, is just streamed out into the world. It's sent out to other players. And so the papers that we published showed, you know, hey, this data is now far more informative than anybody thought; we probably need to treat it the same way, where we're not just streaming this data into the world. And the research team at Berkeley is working on ways to actually protect that data. Now, it also means that when you're playing a game with another person, or you're in a virtual environment, we need ways to obscure that data. Because the thing is, it's not like we're streaming that data out into the world for no reason. We're streaming it out because if another player wants to see you perform, they want to see the motion of your arms - they need to see that data.
But they don't need to see the high-precision version of that data that could actually identify you uniquely. So to me that research was shocking, because again, I had been involved in the field for 30 years, and really, nobody thought that data was dangerous. And it wasn't. But AI is really changing what data means. What was benign data is now actually very informative, when a deep learning system can process these big blocks of information and learn things about people. The one last thing I'll say about this study is that there was a follow-up study that was just presented a week ago, where the research team looked at correlations between your motion data and other characteristics. And what they were able to do is not just uniquely identify an individual from a little bit of your motion data, but predict your age, your gender, your race, your level of education - they were even able to predict, with statistical accuracy, your political affiliation, just from your motion data, which is crazy. And again, it goes to, you know, AI really changes how we have to think about what is private data and what is public data.

Tracey Follows  8:05

Because when I read your piece, it had come out just before this other study, you know, the Google study on proxies - social AI agents, where they set them up as proxies for human beings when modelling and looking at their social behaviour. And having read that, I was thinking, well, this motion data - what use is it put to, these motion prints that give off the motion data? Can they take that kind of data and then model you specifically? So they're not just predicting, you know, people like you, but it is literally you, because they've got the most personal data, I guess, and they're kind of aggregating lots of dimensions of it. So how far can they go with this in terms of modelling you and really creating a proxy of you, if you like?

Louis Rosenberg  8:52

Yeah, so there are really a lot of different facets to that. One is that we've really rapidly entered this new age where computer systems are conversational and emulating humans, and we're going to interact with computers in new ways - ways that I think are very dangerous, because these computer systems can draw us into conversation and potentially manipulate us, potentially influence us. But in order for these AI systems to be effective at persuasion or manipulation, they have to know things about us. And so people work hard to have privacy online - when you're going to a website or some other location, you don't necessarily want people to know things about you. Well, if they can see your motion data, they can immediately infer from it your age, race, gender, potentially political affiliation - all kinds of things that you might think are private but are not necessarily private. They can also infer medical conditions from your motion data. Just by looking at how you move, they can detect if you have certain neuromuscular disabilities. They can actually predict whether you have depression or other mental health issues. So, you know, again, this data is really very, very informative.

Louis Rosenberg  10:14

The other part of it is that as we go to this conversational world, we are interacting with computers. And even if you could protect your motion data and other private data, you know, I worry that the conversational systems are very good at just extracting data from you voluntarily. And that's something that we're really not prepared for. If you go to a website today and they want to get information from you, they might give you a bunch of questions. And when you answer questions on a website by clicking boxes or choosing options, you're giving that information to a third party. You might even think, oh, that's going to go into a database somewhere. And you might choose carefully: what information do you want to reveal, what information do you not want to reveal? In the very near future, those forms and, you know, old-style websites are just going to be replaced by an avatar on a webpage that you go to, and it's just going to engage you conversationally. I call them virtual spokespeople - there'll be a virtual spokesperson that represents whatever website you're going to. We'll see that in the near term. When we go to the metaverse, they'll be very photorealistic and in 3D. But even just in flat computing, we're very close to this transition where virtual representatives are on websites, and they'll engage you in conversation. And they will very casually ask you questions - you know, oh, you're looking for an automobile? What kind of car? What kind of car do you drive right now? What kind of range are you looking for, if it's an electric car? They might ask you what your profession is; they might kind of ask you questions about your income level. And you just think you're having a conversation with a salesperson in a store. But really, no - you're having a conversation with a computer system. It's recording all of this data, storing all of this data.
That data can go into a database where it's stored and shared with other people. And so, you talked about identity - I feel like we humans are about to go through a collective identity crisis, because we were not prepared to interact with computers as embodied avatars that look very human, that act very human, that speak very human. We will behave as if they're human, because we're cultured to be polite when we're talking to other humans. But we're not realising that you're not actually filling out a form - you're actually entering data into a database that just happens to look human. And it's very good at pretending to be human, but it's not.

Louis Rosenberg  13:00

And the thing that makes it even more dangerous is that in the past, these conversational systems, like Siri and Alexa, were very one-directional. We would issue a command and it would respond; we'd ask a question, it would respond. Now, with large language models, these systems are interactive to the point where they can ask follow-up questions to us, they can probe us for more information. And so, you know, when we're holding a conversation with these AI systems, again, we're essentially filling out a form if it's trying to gather information from us. But it's, you know, a human-looking avatar that's potentially very skilled at extracting information from us, asking follow-up questions, probing to get more and more information out of us, and doing it in a way where we let down our guard, because it feels very human. So when we think about the privacy risks - yes, I worry about the privacy risks of being able to look at your motion, but I worry much more about the privacy risks of all of humanity, very, very soon, just talking to computers that are designed to extract information from us, and then potentially designed to persuade us, because the same systems will be very, very skilled at influencing us - they've been trained on billions and billions of artefacts about how people behave. So we're about to enter a very different world, where the two things that we humans always thought allowed us to distinguish humans from non-humans - language and intelligence - are no longer just going to be the domain of humans. They're going to be the domain of computer systems that are just as skilled at language as we are, and that are at least good at emulating human intelligence - acting and behaving in ways that make us feel like they're intelligent, that also make us feel like they have values and morals and emotions and empathy. And so we will behave as if they really are human. But they're not really any of those things.
They're just good at pretending to be those things. And we're not prepared for that. We didn't evolve for a world where there can be entities that look and act and sound and behave like empathetic humans but are not.

Tracey Follows  15:29

Yeah, I mean, gosh, there's so much to think about there. A couple of things - when you say we're filling out forms, I mean, with the description you've given, it sounds like we're filling out forms whilst fully naked. That's how it feels. But also, I think what you're describing is a kind of asymmetry, isn't it? It's an asymmetry in the communication or interaction that you obviously don't get in the real world, because you're weighing and measuring the other person in front of you, because you've got all of the physical dimensions to play with, and your brain is kind of working that out as it's listening and watching and smelling and whatever - invoking all the senses. And we're not going to have that; we're basically relying on being machine-readable. And that's the kind of asymmetric relationship, right?

Louis Rosenberg  16:15

Absolutely. So the asymmetry is something that I actually think about a lot. Because sometimes when I talk about the dangers of AI systems, and their ability to persuade or manipulate people... one of the responses is, well, there are human salespeople. And so if you go to a car dealership, and you're talking to a human salesperson, that human salesperson is going to try to influence you. But that's a much more symmetric relationship.

Tracey Follows  16:42

Yeah, because you know, because they've got hair gel, or whatever. [Laughs]

Louis Rosenberg  16:46

Right? They're going to walk in, they're going to size you up from what you look like and how you act, and you're going to size them up. You're also going to know what their motivations are - their motivation is to sell you a car - and they're going to know your motivation is to get the best price you can. See? So it's symmetric. You're both drawing on your sets of outside information, which are probably reasonably balanced. And you can read their facial expressions and their reactions, and they can read your facial expressions. When we move to artificial salespeople, which will happen, we will be interacting with, say, a virtual salesperson online or in a metaverse environment. That persona will look and act very human. They'll smile, and we will think we understand what they're thinking. But we won't - they'll understand what we're thinking, because very soon there will be cameras pointed at us, reading our motions. And that's already the case in virtual worlds: when you put on a headset, it is actively looking at your facial expressions and your emotions in real time. But even in 2D environments, we're very close to that being commonplace. And so we're going to be interacting with this virtual AI-driven avatar. It's going to be able to see our facial expressions, it's going to be able to see our pupil dilation, it's going to be able to see our eye motions. And it's actually going to be able to read our emotions even better than humans can.

Louis Rosenberg  16:54

So we can read each other's facial expressions, but a computer using a camera can actually read what are called micro-expressions - expressions on our faces that are so subtle that a human wouldn't notice them, but the AI can. An AI with a camera can also read what are called blood flow patterns on your face. We humans can do that if somebody blushes - that's a really dramatic change of blood flow that we humans evolved to detect. AI systems with a camera can detect changes that are almost invisible, so they can infer your emotions: whether you're engaged, when you're getting angry, whatever you're feeling in real time. So now you're engaged with this artificial agent. It looks human, it acts human, it's smiling. You think you're understanding its motives, you think you're understanding its thought process, you think you're understanding its emotions. You're not, because it's a computer system. It doesn't think like us, it doesn't feel like us, it doesn't act like us.

Louis Rosenberg  19:16

On the flip side, it is legitimately reading your facial expressions, your eye motions, your pupil dilation, your blood flow patterns; it's reading the vocal inflections in your voice. And so it knows your emotions. It's also detecting how you respond. And so it's completely asymmetric. The other thing that makes it asymmetric is that when you walk into a store to buy a car, the salesperson that shows up is the salesperson that shows up, right? Online, if I go to an online store to buy a car, the avatar that shows up is going to be designed specifically for me. They're going to know things about me; they're going to choose an age, gender, look, hairstyle and hair colour that they've already determined, over time, is most effective for influencing me. They're going to choose a vocal style: are they going to talk to me in a very intellectual way? Are they going to talk to me in a very salesy way? Are they going to act like they're my friend? There are a lot of different styles that salesperson could take on, and it's going to take one on based on my historical behaviours with other salespeople online. And it's going to know things about me - it could potentially know if I'm a fan of a particular sports team, or have a particular profession. It could know that before I even walk in, so it could craft the conversation right off the bat to ease me into befriending this artificial agent, which has an agenda. It has a promotional agenda - to sell me a vehicle, or to sell me a pair of pants, or to sell me some service. It knows enough about me to tailor the conversation around topics it knows I'm going to be interested in. It might know that I'm most responsive if it's offering me a good deal, or maybe I'm most responsive if it's trying to make me feel like I'm missing out on something - there are lots of different sales tactics it could be trained on. And so it's, again, completely asymmetric.
And so the danger that we see headed our way is not just the emulation of a human salesperson. It's that these artificial salespeople will be far more skilled, and will have far more information at their disposal about us to draw upon. And they could share that information across experiences.

Louis Rosenberg  21:53

So imagine it's Amazon that's selling me things this way. If I'm buying a pair of shoes on Amazon, and I see a certain avatar gathering information about me, and then I go to buy something else - music on Amazon, say - I could have that same experience. In the real world, human salespeople are not interconnected with each other when you go from one store to another store. But they potentially will be in this conversational AI world that we're entering really fast.

Tracey Follows  22:29

Yeah. Sometimes it's hard to imagine how we'll develop new interests if some of these conversational AIs keep bringing us back to some of our old interests or established patterns. When I was doing some research a few years ago, I was talking to somebody about music interests and Spotify, and we were talking about the media. And they said, the algorithm doesn't know if suddenly you start dating someone who's into jazz, and suddenly you're into jazz - the algorithm doesn't know that. So you have to set up a new identity to literally dig yourself out of the algorithm. And I guess that's one of the challenges: will we become less rich and complex over time, because we are treated as such by some of these agents?

Louis Rosenberg  23:10

Right, yeah, absolutely. These AI agents will predict our wants and needs and interests and purchases. And that's stifling. That is limiting who we are, it's confining who we are - these are tools that will likely make people less interesting over time. And in some ways, I worry about that on a global level too. Obviously the other big technology is these generative AI tools that can create artwork and images, and they're doing the same thing on a global level. Because we think of these generative AI systems as being creative, since they can generate new things that didn't exist before. But they're doing it based on a statistical model of the past, right? They're trained on billions of artefacts of the past, and then they're creating new artefacts that are the most statistically responsive result to whatever prompt you put in, based on our culture of the past. And so it's backwards-looking, right?

Louis Rosenberg  24:15

And, you know, artists are usually forwards-looking. Human artists are influenced by the past, but then they're bringing something of themselves, and they're trying to break free of the past. Generative AI systems are not doing that. They're not bringing anything of themselves; they're creating a statistical model from the past. They're extremely skilled at doing it - it's remarkable - but again, it could keep our culture from evolving. If most artwork is being generated from statistical models of the past, rather than by human artists who are thinking about the future, then there's this secondary problem, which is that the generative AI systems are now creating content that's going out into the world in large quantities, and some of that content is now becoming part of the content they're trained on. And so human artefacts are being replaced in the database with computer-generated artefacts, which again is amplifying the past.

Louis Rosenberg  25:19

So reductive? Yeah. I usually refer to it as a form of generative inbreeding, because we're creating artefacts that are based on the past, and then we're using those artefacts to create more artefacts as they go into the database. And so I definitely worry a lot about the impact that these very recent AI systems have on our culture, and on the progress of our culture. And it's very similar on an individual level - like you point out, these AI assistants are going to do the same things to us individually. They're going to confine our behaviours. We've always done things a certain way, and they'll make sure we always do it that way.

Tracey Follows  26:01

Amazing. I always say it: I was doing a presentation on 'Is AI the Death of the Artist?', and I was saying, well, actually, AI and the artist are both in the search industry. It's just that the AI is searching for what's been said and done in the past, and the artist is searching for what's yet to be said and done. So could the blend be interesting? Or will the blend be destructive in the end? It's going to be an interesting experiment, isn't it? But I wonder - you know how Eric Schmidt is saying that we will all have our own personal AI agents, sort of like an additional perspective, or multiple perspectives possibly, that will give us a more complex, so far undiscovered view of the world? Like we'll be able to access a new reality, if you like, from these new perspectives. Do you think that could head off or challenge some of the problems you've just been explaining, from agents who are working on behalf of, say, a brand - like a salesperson working almost as a marketeer? Do you think we could train our own personal agents to cope with some of this potential manipulation or persuasion? [Laughing] Or does that just render the human completely out of the loop? And then what's the point?

Louis Rosenberg  27:01

[Laughs] Yeah, so, I mean, you could certainly imagine the design of AI agents that are there to look out for your best interests. The question is, who's going to make those products? And what is their motivation going to be? And I say that from the perspective that for most AI agents, there are very clear motivations to use them to influence people - either to sell them something or to persuade them of ideas. Most of the services that we use online are provided to us through marketing models, where we're getting free services in exchange for advertising. Which is really just a nice way of saying we're getting services in exchange for being willing to have somebody else try to persuade us about something. And so if we think about how search engines are now transitioning to be conversational, and other online services are transitioning to be conversational - that's the way we're going to interact with computers - these companies are not planning to change their business models. So imagine I go to a website, you know, a year from now or two years from now, and let's say I'm a sports fan and I want to get the latest scores. I might engage a conversational agent that's going to say, well, what games are you interested in?, and engage me in conversation. If I'm getting that for free, which I most likely will be, then that AI agent is going to be paid for by some kind of promotional messaging. And so as part of that conversation, it's very likely going to weave in promotional content. And unless there's regulation and policy, I might not even know where the informational content ends and the promotional content begins, because it's all just part of one conversation.

Louis Rosenberg  29:06

I mean, in the near future, I could go to a website because, let's say, I own an electric car and I want to know where the nearest charging station is. And so I just ask it, hey, where's the nearest charging station? And in the conversation, it asks me, well, what kind of car do you have? And I say, and it tells me where the charging stations are. And then it could just conversationally say, you know, if you had this other car, you could have a much greater range. And I might not realise that was a paid promotional comment. And so the question is, could I have an AI agent that's going to protect me from that? You could imagine that there's, you know, an arms race, and an agent that says, hey, you should be aware that that's likely a promotional comment. [Laughs]

Tracey Follows  29:53

[Laughing] But then the government will introduce their own AI sponsor. It'll be in triplicate.

Louis Rosenberg  30:00

Yeah, I mean, to me, instead of having an arms race of AI agents that are trying to influence us and AI agents that are trying to protect us, the best thing would be if there was policy that said, hey, there are certain things that you can't do with these conversational AI systems. The first thing is, I think that if you're engaged with a conversational AI system for some service, and it transitions to promotional content, it should have to tell you: okay, this is sales information. If it is reading your emotions, or your facial expressions or your eye motions or your blood flow patterns - it should have to tell you. We're going to get to a point in the not-too-distant future where these AI agents, these avatars, will look photorealistic. It should be required to make them distinguishable from actual humans. If you're speaking to an AI, you should be allowed to know that you are. Policy should require them to look different, because within just a couple of years they'll be indistinguishable - me talking to you on Zoom would look just as real if I were a completely simulated AI agent. They'd have to look different, because then at least you could know: okay, this is not a person. This is an entity that is expressing facial expressions and emotions that don't reflect any real sentiments, that has access to an infinite amount of information, and that might have a persuasive agenda on behalf of a third party. And if we at least know whether we're engaging an AI agent, and whether it has a promotional agenda, we can have some defences - we can be sceptical. But I also think that there should be limits on how these systems are used for promotional purposes. We're all familiar with advertising, but these systems are so powerful, they will cross the line from marketing to manipulation. And I think policy has to prevent these systems from being able to be manipulative.

Tracey Follows  32:16

Do you think it's going to be more manipulative than some of the neurotech that's getting quite advanced? I've spoken to some of the founders working in those businesses, and obviously they're talking about brain data reading - as we understand it, reading the patterns of your brain data, either at work or by putting sensors into cycle helmets or caps... because that has the potential to be really very manipulative as well, I guess. Which is more dangerous?

Louis Rosenberg  32:43

Yeah, it's absolutely another very scary data source. Because there's a lot of information that can be registered from your face, especially when you consider that these AI systems can detect things off your face that a human wouldn't detect. But reading your actual brain signals, even at a coarse level, gives additional information. So I think it's absolutely dangerous, and it becomes really dangerous when people start talking about brain implants, or going into special MRI machines. But at least those are situations where you have to have informed consent. If you're going to have a brain implant or go into a really sophisticated scanner, there's informed consent - you know why it's being done. It's very likely going to be regulated by medical protections. And so it's a little less dangerous in the near term; I'm less worried about it. I'm more worried about the things where we assume they're safe and we don't question them. A web camera pointed at you doesn't feel dangerous. But again, when it's feeding an AI system, it can detect your emotions in real time. And that means the system can easily be designed to manipulate you by tailoring its conversation in response to the emotions that it's seeing.

Tracey Follows

So if we are going to be manipulated and unduly influenced by some of these things, then is swarm intelligence the answer to that? Tell us a little bit about swarm and what you're trying to do with that...

Louis Rosenberg  34:21

Yeah, yeah, so I am currently the CEO of a company called Unanimous AI that focuses on a technology called Swarm AI, which is based on the biological principle of swarm intelligence. And I started working in this direction about 10 years ago, when I realised that AI systems were getting powerful, and were likely to become really powerful, and that 99% of the work in AI was focused on how algorithms can replace people: replace people for decision making, replace people for all kinds of tasks and abilities. And while there are plenty of places where it makes sense to replace people with algorithms, when it comes to important decisions I think it's really dangerous, and yet that's how they're being used. We're increasingly automating everything from medical decisions to insurance decisions and loan decisions. And certainly in the US, the judicial system even uses AI to make decisions related to parole hearings and other things. And to me, that's terrifying, because these AI systems lack human values and morals and emotions.

Louis Rosenberg  35:36

And so my interest, starting 10 years ago, was in saying: well, we're using AI to replace human intelligence, but is there an alternate way to use AI, to connect groups of people together and amplify human intelligence, amplify group intelligence? And like a lot of different fields, I looked to Mother Nature and asked: well, are there biological examples? And it turns out that evolution has faced this problem and evolved systems that solve it, many times over, for hundreds of millions of years. Good examples are schools of fish, swarms of bees, flocks of birds. These are systems where nobody's in charge. Nobody's in charge of a school of fish; there are a thousand individuals, all interacting, and yet they can navigate the ocean as a superorganism, and they make decisions. Whether it's a school, a flock or a swarm, biologists call it swarm intelligence: these organisms in groups make decisions that are significantly smarter than the individuals can make on their own. And so at Unanimous AI, we asked a question: could we connect groups of people into the same types of systems? And the way these biological systems work is that they interact in real time. In a school of fish, all the individuals have a different view of the world, a different perspective, maybe slightly different personalities, slightly different histories. And when something happens in their environment, they all see it from a different perspective, and they basically form this multi-directional tug of war: the individuals are moving in different directions, essentially detecting vibrations in the water around each other, and then, by pushing and pulling on each other in real time as a system, they collectively make a decision.

Louis Rosenberg  37:29

And it's a really good decision for the whole group. And so at Unanimous, we created a system called Swarm that allows humans to do that: to make decisions, predictions and forecasts. They can be anywhere in the world; they log into the system, and we could have 50 people or 500 people make a decision together in real time. And it turns out that, just as in nature, when we connect groups of humans together this way, they can make significantly better predictions, forecasts and decisions. We did a study with MIT where groups of financial analysts predicted the price of gold, the price of oil and the price of the S&P 500 in swarms, versus that group just taking a vote, which would be the traditional way. And we found that they were 26% more accurate when they worked together as a swarm in making these predictions. We did a study with Stanford University Medical School with small groups of doctors making diagnoses, just four or five doctors, who could either take a vote or work together in this real-time swarm. And when they worked together as a swarm, they reduced their diagnostic errors by over 30%.

Louis Rosenberg  38:41

One of our customers has been the United Nations, where they use Swarm to forecast famines around the world. So they have a group of experts across different disciplines: experts in the economies of different nations, the climate, the political stability of different nations. And they come together and they predict: what's the likelihood that this country is going to have a famine in the next 18 months? They can come into Swarm and quickly combine all their different perspectives. And it turns out that they get very accurate answers, and they get them faster than they would if they just sat around a room and argued about it. So what we work on is developing new and better technologies that connect groups of people together and allow them to basically amplify their combined intelligence. And the thing that motivates us is that we're leveraging the power of AI, because the AI works to connect groups of people together. We're leveraging the power of AI to make people smarter, but we're inherently keeping human sensibilities, human values and human morals part of the process. We're not losing anything human; we're actually just amplifying the effectiveness of humans.

Tracey Follows  40:02

What exactly is it that makes the difference between that outcome and a kind of mob consensus, though? I mean, is it the constant iteration? What is it exactly that gets it to intelligence rather than just a majority point of view?

Louis Rosenberg  40:19

Right. So it's interesting, because you sometimes think about a mob mentality, where a group just flies into some rage or bad behaviour. And you can actually look at nature to see the difference. In nature, there are really two different structures: a herd and a swarm. The thing about a herd is that a single individual will start running, and as soon as that single individual starts running, other individuals will start running, and then others, and so on. A herd is a sequential process, and it creates what you'd call social influence bias: a single influencer will propagate through over time. And that's why you could have a herd of sheep run off a cliff; it only takes one sheep running off the cliff for the rest to follow. And the thing is, we humans do the same thing online, right? The herd mentality is social influence bias, and it's been studied in online forums. There are lots of different sequential voting systems online: if you give a thumbs up on Facebook, or five stars on Amazon, or you upvote something on Reddit, and you're the first person who does that, you have more influence on everybody else. In fact, there was a pretty famous study showing that the first upvote on Reddit influences the direction of the whole vote, over thousands of votes; it has a 30% influence on the entire system. Now, the thing about a swarm is that everyone behaves at the exact same time. It's not sequential, it's parallel. And so when we work with a swarm of people, the same question appears on everybody's screen at the exact same time, and they're all interacting at the exact same time. There's no leader, there's no follower; they're all equally influencing each other. And so you get rid of that social influence bias.

And so instead of having this kind of mob mentality, you end up with the benefit of, again, a school of fish or a swarm of bees. They can make really smart decisions. And these are successful species that have been around much longer than humans, because they make decisions that are best for their entire population without putting anybody in charge. And it turns out it works for people too, if we use technology and AI to connect them together.
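The herd-versus-swarm distinction Louis describes can be sketched in code. The toy simulation below is purely illustrative (the agent counts, copy probability and update rule are my own assumptions, not Unanimous AI's actual algorithm): a sequential vote where later voters sometimes copy the current leader shows social influence bias, while a parallel tug-of-war settles on the option the group as a whole prefers.

```python
import random

def herd_vote(preferences, copy_prob=0.6, seed=0):
    """Sequential voting: each agent sees the running tally and, with some
    probability, copies the current leader instead of voting its own view.
    Early votes therefore steer the outcome (social influence bias)."""
    rng = random.Random(seed)
    tally = {}
    for pref in preferences:
        if tally and rng.random() < copy_prob:
            choice = max(tally, key=tally.get)  # follow the current leader
        else:
            choice = pref                       # vote own preference
        tally[choice] = tally.get(choice, 0) + 1
    return max(tally, key=tally.get)

def swarm_decide(preferences, steps=500, gain=0.05):
    """Parallel 'tug of war': every agent pulls the group position toward
    its own preferred option at the same time; the group settles where the
    pulls balance.  Options are modelled as points on a line for simplicity."""
    options = sorted(set(preferences))
    position = sum(options) / len(options)  # start in the middle
    for _ in range(steps):
        # all agents act simultaneously: average pull toward each preference
        force = sum(p - position for p in preferences) / len(preferences)
        position += gain * force
    return min(options, key=lambda o: abs(o - position))

# Six of ten agents privately prefer option 1, but the four 0-voters go first.
prefs = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
print(herd_vote(prefs))     # sequential: the early 0s can drag the herd to 0
print(swarm_decide(prefs))  # parallel: settles on the majority option, 1
```

Because all agents in `swarm_decide` act at once, the order of the list does not matter, which is exactly the property that removes the first-voter bias.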

Tracey Follows  42:48

It's just fascinating. So my final question to you then is, we don't seem to be making brilliant decisions about AI and its direction. So should we be using your swarm intelligence to make the decisions about what we do with AI now?

Louis Rosenberg  43:04

So, I mean, I'm biased, but I absolutely do believe that if you want groups, whether it's groups of policymakers or corporate governance people trying to make decisions, to find the best decisions that will satisfy a large population, then swarm intelligence is a great way to do it. The problem is that most policymakers, including governance people in big companies like OpenAI and others, look at polling; they'll take a poll of the population to see what the population sentiment is. And the thing about a poll is that it's polarising. What a poll does is tell you where groups disagree, but it doesn't do anything to show you where the group could actually agree. A swarm is actually the opposite. What nature figured out is that when a group interacts as a swarm, if everybody's pulling in a different direction, the school of fish doesn't go anywhere. And so the system actually finds the direction that they can best agree upon. A swarm amplifies common ground. It shows the group what would be the policy decision that would satisfy a diverse population, whereas a poll just shows us the differences, and it doesn't help us find common ground. And because we publicise polls so much, we actually drive the population to get even more extreme and entrench their positions. So polling is polarising, but nature does have a way to solve that: it's called a swarm. And it would be helpful in finding solutions not just to AI governance, but to all kinds of social problems where groups are polarised and just can't find solutions. There are solutions that people can best agree upon; we're just not finding them through polling.
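The "polls polarise, swarms find common ground" point can be illustrated with a toy electorate on a 0 to 10 policy scale (the numbers, options and convergence rule here are my own assumptions, a sketch of the idea rather than Unanimous AI's method): a first-choice poll surfaces the two entrenched camps, while a swarm-style pull toward everyone's preference settles on the compromise option that no camp would have named first.

```python
# Toy electorate: preferences on a 0-10 policy scale, three options on offer.
prefs = [0, 0, 0, 0, 10, 10, 10]
options = [0, 5, 10]

def poll(prefs, options):
    """A poll counts first choices, which only surfaces the disagreement."""
    nearest = lambda p: min(options, key=lambda o: abs(o - p))
    counts = {o: 0 for o in options}
    for p in prefs:
        counts[nearest(p)] += 1
    return counts

def swarm_pick(prefs, options, steps=500, gain=0.05):
    """All agents pull the group position toward their own preference at
    once; the group settles where the pulls balance - the common ground."""
    pos = sum(options) / len(options)
    for _ in range(steps):
        pos += gain * sum(p - pos for p in prefs) / len(prefs)
    return min(options, key=lambda o: abs(o - pos))

print(poll(prefs, options))        # {0: 4, 5: 0, 10: 3} - two entrenched camps
print(swarm_pick(prefs, options))  # 5 - the compromise option
```

The poll reports a 4-to-3 split between the extremes and zero support for the middle, while the tug-of-war dynamic drifts to the position that minimises the group's overall pull, landing on the compromise.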

Tracey Follows  44:54

Maybe we need a third political party, which is a swarm party - that's what it is. [Laughs]

Louis Rosenberg  44:59

Yeah. [Laughs]

Tracey Follows  45:00

I don't know. That's brilliant. Thank you so much for spending a bit of time with me there. We've gone from interbreeding and nakedness to polls and swarms, but fascinating stuff. Thank you so much for joining me. I really appreciate it.

Louis Rosenberg  45:12

Yeah thanks for having me. It's been fun.

Tracey Follows  45:20

Thank you for listening to The Future of You, hosted by me, Tracey Follows. Check out the show notes for more info about the topics covered in this episode. Do like and subscribe wherever you listen to podcasts. And if you know someone you think will enjoy this episode, please do share it with them. Visit for more on the future of identity in a digital world and for the future of everything else. The Future of You podcast is produced by Big Tent Media.
