In this episode of The Briefing we cover several news topics around the technology of identity and the digital self, primarily focusing on voice tech: voice design, re-voicing and voice cloning. We look at digital wallets and digital credentials in Europe with the EU ID scheme, and we revisit the current status of the Twitter blue check mark. I’m also joined by Nik Badminton to talk about his new book.
- 0:40 Voice cloning
- 4:00 Discussion with Sam Gregory, Executive Director at WITNESS, about generative AI
- 5:45 ElevenLabs
- 5:58 Twitter verification
- 13:33 Digital wallets and digital credentials
- 15:00 Discussion with Andrew Tobin about eIDAS and big tech
- 19:24 Discussion with Nik Badminton about his new book ‘Facing Our Futures’
Tracey Follows 00:20
In this episode, we talk a lot about content creation, about voice and what people are doing with theirs, what's real and what's fake, and how we will know. We also revisit Twitter blue check verification, we talk about digital wallets across Europe, and I have a quick chat with Nik Badminton about his new book.
Tracey Follows 00:40
I continue to believe, as I said in my book back in 2021, that Grimes is the only artist who really understands the value in giving up control over some of the personal elements of her work, to allow other people to interpret it in their own way. She did that when she released the music video for You'll Miss Me When I'm Not Around in 2020, accompanied by an unexpected bonus: all the raw files available for download. That was lyrics, artwork, fonts, video footage and even the song stems. In a tweet at the time, she said: we thought if people are bored and want to learn new things, we could release raw components of one of these for anyone who wants to try making stuff using our footage. She's a master at inviting the audience in to play an active role in creating and augmenting content. Well, she's at it again. After an AI-faked Drake and The Weeknd track went viral, Grimes said she would split royalties 50/50 on any successful AI-generated songs that use her voice. "Feel free to use my voice without penalty", she tweeted, declaring she was interested in killing copyright.

Holly Herndon is another artist experimenting with voice tech. In collaboration with Never Before Heard Sounds, she says she makes artwork with her voice. In fact, her doctorate focused on the interplay between machine learning and voice, and the implications of the technology for IP and vocal sovereignty. The voice tool that she uses allows anyone to upload polyphonic audio and receive a download of that music sung back in Holly's voice. She does not retain any of the original audio uploaded, but does archive copies of the output audio. And on her website she states, "you are free to use the material generated as you see fit", adding that accreditation is always appreciated.

She goes on to explain that with recent projects like DeepMind's WaveNet, Google's Tacotron and others in the voice generation space, she's confident that generating spoken and sung voices will become standard practice for artists and other creatives, as presaged by the popularity of celebrity vocal deepfakes already found all over YouTube. That's why she believes there will be a demand for official high-fidelity vocal models of public figures in the future, which is why she's experimenting now. She wants to decentralise access, decision-making and the profits made from her digital twin Holly+, and thinks much of that can be addressed by DAO governance.
Tracey Follows 03:20
Meantime, there's more and more reporting on deepfake technology like voice cloning, now that AI is once again making headlines. The Guardian published a piece on scammers using voice cloning, suggesting that last year $11 million was stolen from unsuspecting consumers who hadn't realised they weren't really talking to their loved ones, doctors, lawyers, etc. but were in fact talking with scammers. There are lots of reasons we would want to clone our voices or the voices of our loved ones, and I go into detail about that in the book, and I made a programme for Radio 4 about it. But the technology that enables it is of course going to get hijacked at some point for more nefarious ends. One mother in Arizona was warning of a kidnapping scam, in which family members hear their loved ones' voices and the caller demands a ransom, in what appears to be a really terrifying situation.
Tracey Follows 04:17
Last week, I met with Sam Gregory, Executive Director at WITNESS, the human rights network, and we talked in depth about generative AI. That episode will be with you in a few weeks' time, but here's a little of what he had to say on voice.
Tracey Follows 04:33
I'm really worried about voice. I've seen what ElevenLabs and the like are doing. I rang up the bank the other day and went through, obviously, voice biometrics, where you say your name, say your address, and I said to her, I really do not feel confident about this technology anymore. And I started to talk to her about this. She said, oh, it's okay, you can't override our systems. The naivety of it is just incredible.
Sam Gregory 05:00
Audio has gotten a lot easier to do, and we're seeing it used in very common everyday contexts like you're describing. Someone actually called the Australian national biometric system recently. We're seeing scam calls being used. We're seeing swatting calls in the US, where it's an automated voice calling to do swatting on folks. So audio is improving rapidly, and I find it very worrying; looking globally, we've seen a lot of misuse of audio. It also lacks all the semantic clues around it, right? With a video there are so many things you would look at to see if it's wrong, while with audio you're just kind of listening and thinking, is it a little too electronic? There are just no surrounding clues, particularly when it's shared in a WhatsApp group, as in many parts of the world where voice messages in WhatsApp are how people communicate.
Tracey Follows 05:45
I think of all the companies that I've been looking at, perhaps ElevenLabs is the most interesting. Eleven claim to have the most versatile and realistic AI speech software, creating lifelike voices for creators and publishers who are seeking new storytelling tools. Their voice design, as they call it, is offered as a text-to-speech demo on the website, and I really would say it's worth going to the website and having a play around with it, because it serves you a catalogue of different synthetic voice options. You can choose from accents, nationality, gender, etc. They claim that their system can better convey emotion and sound less robotic, that the AI understands emotions expressed in any writing and can then decide whether that should sound happy or angry or sad, or even neutral. Another great product proposition they've got is the automatic dubbing tools that let you speak a language that you don't. In what might be bad news for voiceover artists who do re-voicing, the AI does this automatically while preserving the actors' original voices across languages: speech-to-speech translation that preserves speaker identity between languages. That could be huge in my view, and it opens up a whole new world of content to people who couldn't immediately make themselves understood further afield. One of the questions, of course, is where does the training data come from? Have permissions from real people for the use of their voices in this process been sought? Have they been given? And is anyone getting reimbursed for that? Well, I have no idea in the case of ElevenLabs. But there was a great piece in Bloomberg last month profiling voiceover artists who had accidentally discovered that text-to-speech services online were offering their voices. One company, Revoicer, told Bloomberg News it couldn't share where it got its voice data from, but that the process was entirely legal. The other firms simply didn't respond to requests for comment.

But clearly those who do voice audiobooks or commercials for a living are in for a rude AI awakening. I've tried to reach out several times to the founders of ElevenLabs in the hope that we could have a conversation about the technology and its implications for identity. I'm thinking in terms of what it might mean for authorship, and for creativity, originality, authenticity and all of that versus prolificacy. But an answer has yet to come. So it's worth sharing what they see as their mission, and clearly our future, from their website. They say, to quote: "voice conversion and voice cloning technology promise to revolutionise filmmaking, television, content creation, game development, podcast and audio book, as well as advertising industries. But their applications go beyond the commercial with potential uses in medicine, education and communications". They continue: "voice cloning is paving the way for a future where any content can be generated in any language and voice to reach millions of people worldwide, and to create an entirely new economy. Our goal at Eleven is to bring this future about."
Tracey Follows 05:58
Let's turn to social media for a moment. It's worth recapping where we are on Twitter verification. This week, Scott Galloway's Section4 ran a piece in their newsletter asking how Twitter's blue checkmark went from status symbol to total embarrassment. The piece contains a useful timeline of the blue checkmark's history: how it was first envisaged really as a branding tool, to create trust for celebrities distinct from parody accounts; then it became a status symbol; and now it's a mark of paid subscription. The point of the piece seems to be to highlight how a once coveted item is now far from scarce, since the hoi polloi can buy one. But this isn't the problem with the blue tick. The problem is that it really isn't any kind of verification at all: not of status, not of celebrity rather than parody, nor of identity. When I met up with Cameron and Dave in last week's episode, they made it clear why the current approach of the social media giants just isn't going to work.
Dave Birch 10:25
What should happen is: I go to create a Twitter account, or I go to log into Twitter the next time, and Twitter bounces me to my bank. I do my strong customer authentication, two-factor login to the bank, and the bank sends Twitter a credential which says this is a person. End of, that's it. It doesn't say which person; that's not relevant. Now Twitter can put a tick next to my name that says this is a person. It's none of their business who I am, right? If I want to put another tick next to my name in a different colour, which means I run a business, or a tick in a different colour which says, you know, whatever, somebody else provides that; it's not up to Twitter to go out and find out those things. So you end up with this kind of black and white verified, not verified, and Facebook are going to introduce the same thing. But actually it's kind of three levels. There's am I a person, which Twitter wants to know. There's am I Dave Birch, which might be relevant in some cases; maybe I want to be Dave Birch on Twitter, I don't have to be, but maybe I want to be. And then there's am I this Dave Birch, and Twitter has no idea whether I'm this Dave Birch or not. And it's expensive and time consuming to find out, which is why the whole blue tick thing is a bit of a mess.
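The flow Dave describes can be sketched in a few lines of code. To be clear, this is an illustrative toy, not how any bank or platform actually implements it: real deployments would use something like a W3C Verifiable Credential signed with the bank's private key, whereas this sketch uses an HMAC with a shared demo secret purely to show the shape of the idea, that the credential asserts only "this is a person" and nothing about who.

```python
import hashlib
import hmac
import json

# Illustrative only: a real scheme would use public-key signatures,
# so the platform never holds the bank's signing key.
BANK_KEY = b"demo-shared-secret"

def bank_issue_personhood_credential():
    """The bank has already done strong customer authentication.
    The credential it issues asserts personhood, not identity."""
    claims = {"iss": "examplebank", "is_person": True}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(BANK_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "sig": sig}

def platform_verify(credential):
    """The platform checks the bank's signature and the single claim.
    It learns nothing about which person this is."""
    expected = hmac.new(BANK_KEY, credential["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["sig"]):
        return False
    claims = json.loads(credential["payload"])
    return claims.get("is_person") is True

cred = bank_issue_personhood_credential()
print(platform_verify(cred))  # True
```

The point of the design is in what is absent: no name, no account number, no identifier travels with the credential, so the platform can show its "this is a person" tick without ever learning who the person is.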
Cameron D’Ambrosi 11:45
I do, I have many thoughts here. The first is, and this ties back to Twitter as well: what has been leaked from Facebook, or Meta I should say, across Instagram and Facebook, is that they want to make identity a premium product, which I fundamentally disagree with, both from a personal perspective and from a business perspective. I think it's a fundamental mistake. This notion that you have to pay in order to get trust and safety out of the platform is ridiculous. Imagine if, when you got into an Uber, it said: hey, do you want to pay an extra $5 to make sure that your Uber driver isn't a criminal or a drunk driver? (Although I believe Uber did do something like this by tacking on a rider safety fee.) That would be absurd, and I think people would rightfully object to it. Facebook's and Twitter's announcements mean they're going to have these premium products where you pay a monthly fee, and only in exchange for paying that monthly fee can you tie your account to your real identity. I really hope that's not the world we end up living in. But let's see how the market responds. I hope that people are going to push back against that and say, this is something that we think should be done for free. Now, obviously, you have a business to run, and do you need to make people pay for access to new and exciting features? Certainly, there's an argument to be made there. But I would argue that, at the fundamental level, are you a real person, and are you the identity you're claiming to be, should be table stakes for operating one of these accounts.
Tracey Follows 13:33
Dave mentioned credentials there. So let's have a chat about that for a minute.
Isabel Webster 13:38
Good morning to you, Tracey. A very quick rebuttal from Downing Street, some might say, before you've perhaps had the chance to properly debate all of this. What's your view? You know, they're controversial, but is there appetite perhaps growing for them, given the way we live now?
Tracey Follows 13:54
Oh, absolutely. I think from about 2016/17 I'd noticed that we were starting to need more digital verification and authentication, because obviously a lot of our public sector services are becoming digitised. But it's not so much a digital identity card that's being proposed; it's a set of what they call verifiable credentials, which might sit in a digital wallet. So in a sense we're echoing what we do in the physical world, but with a digital version. I think that's quite an important point to make, really: it's not a card. It's some credentials that are given to you, that you then use in the way you want to, and you only share the information that you want to with certain people. So it's privacy-protecting, and I think that's a very, very important point.
Tracey Follows 14:43
That was me with Eamonn Holmes and Isabel Webster on GB News's breakfast show, attempting to explain a little about digital identity, following on from Tony Blair and William Hague's report, which included recommendations on building a digital identity infrastructure. If you listened to the last episode, you will be pretty clear on our opinions on that. But suffice to say that this TV interview was an example of the way in which the mainstream media approach the topic, and that is by talking about digital identity cards. Regular listeners will know that there is no such proposal to introduce identity cards, and that wallets are the probable future for digital identity. So much so that the European Union is developing a pan-European identity solution with digital wallets at the core. I thought that we should dedicate a whole episode to this, so the next one will feature Andrew Tobin, who I have personally found to be the very best communicator on Europe's eIDAS scheme and its evolution. Here's a little taster of what he has to say in next fortnight's episode.
Andrew Tobin 15:56
So I think we need to get away from thinking about ID cards and digital ID, and get towards thinking of digitised versions of the credentials you've already got. And once you think about it like that, you think, well, it's just the same thing I've got, and I can present it in the same way, but with better privacy and better security. And that then is a bit of a game changer. But to know that, you need to know about the space. I think the easiest way to think of it is: what have you got in your wallet at the moment? So let's say I've got my employee ID for my organisation. In that situation, in order to issue that physical ID card, the employer has to know who I am and have a picture and all of that kind of information, and then they print out and issue that employee ID card to me. Instead of printing it onto a physical piece of plastic, they can take exactly the same data, put it in a credential and send it to my wallet. And how that might look is: on a work system I'm already authenticated into, it says scan this QR code to get your employee ID on your phone. You scan it with your wallet app, it pings up and it says, here's your employee ID, and then you're done. When you want to use it, let's say you're booking a flight with your employer's travel agency and you need to prove that you're an employee. They might have a QR code there that says scan this to prove who you are. That sets up a connection with your wallet, and your wallet will then pop up and say, please share the fact you're an employee of Gen Digital. They don't need to know any other information, and I just share that.
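The wallet behaviour Andrew is describing, share the fact you're an employee and nothing else, is what credential schemes call selective disclosure. Here is a minimal sketch of that idea: the issuer hands the wallet a credential carrying several claims, and the wallet reveals only the claim a verifier asked for. All names are illustrative, and real wallets (for example eIDAS-style wallets using SD-JWT or BBS+ signatures) enforce this cryptographically rather than by simply filtering a dictionary as this toy does.

```python
class Wallet:
    """Toy holder wallet: stores issued credentials and discloses
    only the claims a verifier explicitly requests."""

    def __init__(self):
        self.credentials = []

    def receive(self, credential):
        # Issuance step: e.g. after scanning the employer's QR code.
        self.credentials.append(credential)

    def present(self, requested_claims):
        # Presentation step: reveal only what was asked for.
        for cred in self.credentials:
            if all(k in cred["claims"] for k in requested_claims):
                return {k: cred["claims"][k] for k in requested_claims}
        return None  # no credential can satisfy the request

# Hypothetical employee ID credential, mirroring the example above.
employee_id = {
    "issuer": "Gen Digital HR",
    "claims": {"name": "A. Tobin", "photo": "<jpeg bytes>", "employer": "Gen Digital"},
}

wallet = Wallet()
wallet.receive(employee_id)

# The travel agency needs proof of employment, not a name or photo.
print(wallet.present(["employer"]))  # {'employer': 'Gen Digital'}
```

The privacy gain over the plastic card is exactly this filtering: showing a physical badge reveals every field printed on it, while a wallet presentation can be scoped to the single fact the verifier needs.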
Tracey Follows 17:36
And we'll also touch on big tech. These are the solutions that are waiting in the wings if systems like eIDAS fail or are not adopted in certain jurisdictions. It's an interesting future that we're heading towards with all of this, and as you've pointed out, there are lots of things that we don't know yet because it's all in play. But I know you've noted, in some of the stuff you've written on this topic, that if this doesn't pan out in the way that you have brilliantly explained on this podcast, big tech will step in and fill the vacuum. And that's a very different future, isn't it? In summary, how would that play out in terms of some of the aspects we've just been discussing: anonymity, getting rid of passwords, interoperability? Would it work at all?
Andrew Tobin 18:24
Well, that again is a really good question, Tracey. Big tech is already there. So you have your wallet on your Google Android phone, or you have your wallet on your Apple phone. And then you have other wallets: you can have a Google Wallet on a Samsung phone, or a Samsung wallet; not so much on the iPhone side of things. And you think, well, what are the drivers for these folks doing this? If they own your identity, if they own your wallet, if you can't easily transfer out your stuff and put it somewhere else, then you're going to stay with them, right? And the question for people is going to be whether that's a comfortable thing or not.
Tracey Follows 19:05
So hit the subscribe button and the notification bell because you aren't going to want to miss that conversation on the future of digital wallets and credentials. Honestly, you won't need to listen to anything else. Andrew covers it all and brings great clarity to both the technical and the ethical dimensions.
Tracey Follows 19:24
Right, last but by no means least, I was joined by Nik Badminton to hear a little bit about his new book, Facing Our Futures.
Nik Badminton 19:32
So it's a book born of the pandemic. I was invited by Bronwyn Williams and Theo Priestley to write a chapter for the book The Future Starts Now, and I wrote a chapter on starting with dystopia. I've been doing lots of work around exploring dystopian futures, through my Dark Futures event and through a lot of the writing and speaking that I do, as well as looking at positive futures. And it just got to a point where the work that I was doing with clients was really improved by looking at possible dystopian futures as well as possible positive outcomes. So I developed an entire process during the pandemic, did all of these workshops, worked with all these clients, big and small, startups and a number of others, using this framework, and I thought, there are some legs in this. So I pitched Bloomsbury and they loved it. And in late 2021 I sat down, I was down in California, in Palm Desert, and I wrote the book in about three months, after about nine months of research. And it's proven to be something that a lot of people are talking about; lots of my clients are picking it up, there's a lot of buzz around it, because it's hitting this point where people in the world are willing to look into the darkness. We shouldn't be afraid of looking into the darkness; we need to question our biases around "everything's gonna be okay". It's okay to realise that if we make bad decisions today, there are going to be effects. Let's just explore that. Let's feed that into the whole overall futures process. So it's a super real book around that. I talk about everything from how you scan for signals, to trends, to areas of exploration, and then how to weave them into stories, and even how to establish a futures function within your organisation as well.
Tracey Follows 21:23
So tell us about the positive-dystopian futures framework. I particularly like the dystopian bit.
Nik Badminton 21:28
For sure. So what you do is you actually start out with a set of principles, right? You say, what's a positive future? I define it very explicitly, and I think that's important to do. So this is my stance, but I think it's the stance that the world needs. A positive futures perspective is: you've got a global view, you've got infrastructure to support that, you've got a perspective around improving health and wellness and reducing wealth disparity, and we create a world that's designed for humans. It's human-centric, it's balanced, it's egalitarian. People can own it, people can be a part of it. It's basically the opposite of a dystopian future, which is this industrial complex where very few people, mostly men, are in charge of how the world operates, and we're caught in this web of working nine to five, five days a week, six days a week, for the betterment of no one but the people that we make money for, right? So those are the two sides. We start off with these principles: a positive future principle is people before technology; a dystopian principle is technology before people, profit before people, and those kinds of things. And what you do is you scan for signals, you look at trends, you explore scenarios, but you put them through these lenses. On the positive side, you get all of these wondrous effects that potentially make for a better place to live in the world and a betterment of humanity. And when we process those signals and trends and scenarios through the dystopian lens, we see all the risks, all the struggle and the negative effects that come out of that. And you've got two sides of the mirror here. What's great is when you mash them together: it's this very real, critical and very empowering view of the world, and that's what the framework tries to do.

It tries to get you to the point where you've got these scenarios that really help you understand, strategically, what you can do today to start on the path towards a better world, whilst dealing with everything that can go wrong as well.
Tracey Follows 23:38
You also talk about protopia, of course, the phrase Kevin Kelly coined. In fact, I've got the book open at the quote of his: "I think our destination is neither utopia nor dystopia nor status quo, but protopia. Protopia is a state that is better today than yesterday, although it might be only a little better. Protopia is much harder to visualise. Because protopia contains as many new problems as new benefits, this complex interaction of working and broken is very hard to predict." In some senses, sometimes I feel that's where we are, this interaction of working and broken, because it's how a lot of things feel at the moment, especially in the UK. I know you're not in the UK, and for our listeners you can tell us where you are, but a lot of things are almost working, or working just about, and there's a lot of stuff that's broken.
Nik Badminton 24:24
You know, we're in a constant state of collapse, right? If you look at Dator's futures, it's growth, business as usual, transformation, collapse. We're constantly stuck in this despairing loop of hitting moments of collapse and transforming our way out of it through money and grit. And this is kind of where we are, and people don't like to talk about the fact that we're in a constant state of collapse, but we have to realise it. I'm in Canada now; it's got its own challenges, its own problems, and our neighbour down south is certainly challenging as well. Obviously I'm from the UK, and I empathise deeply with everything that's happening there. But it's really interesting when we look at these things and try to break out of these political cycles that really do restrict our thinking and cause a state of paralysis about what to do. If you don't know what comes next, then you just shut down and say, oh, today's gonna be fine. So, climate change: I don't care, I'm not going to be here when the world's burning. Well, that's not really the way we have to look at it. So I use futurism as activism, to really get people to wake up.
Tracey Follows 25:38
Do you think that AI is going to impinge on some of the work you set out in the book here, or even in general, on futures, work and consulting?
Nik Badminton 25:48
You know, I've sat down with GPT-3 and gone through it. I've tried to write scenarios, I've tried to feed in design fiction and get it rewritten in the style of certain poets. I've tried to use it, and it's so average. I've been working with AI and language since the 90s; I did my degree in applied psychology and computing at Bournemouth, and I did AI, artificial neural networks with linguistic processing. So I've been in the game for a long time, and I've been following it. And whilst it's exciting to have all of these things, DALL-E and Midjourney and Stable Diffusion, and the large language model stuff like GPT-3 and whatever, people were just so desperate for something that's going to save them from their really challenged work situations that everyone's like, this is it, this is the future! But what you get out of it is absolute garbage. It's just a mirror reflection of what's been said before. And it's attribution as well. My book has got dozens of really amazing thinkers and references to articles, and you attribute them and you say, this is interesting, go and read more. It's going to get interesting when you've got these platforms that say, you know what, we've generated this 2,000-word opinion piece; cool, here are the reference points to go deeper, here's our responsibility to the world. They won't say it, because then there's a copyright perspective, right? So they're like modern highwaymen, standing to deliver on all the content on the internet. I think in the next two to three years there's going to be a real backward step here, because I think people are over-egging this and saying that it's going to change everything, and I think they're going to realise.
Tracey Follows 27:37
So where I think it's going to go is into more personal AI development. I mean, if you can have your own personal ChatGPT that learns your tone of voice, your personality, your previous work, your dreams and hopes and aspirations, where you'd like to appear, the content you'd love to make: I would love to see that. And I think it will blossom as a kind of industry or sector where we've got our own personal AI. In that regard, I can see it as an extension of the self, and I can see it learning over time, and it being quite a mutual, reciprocal relationship between the human and the technology, just like any other tool really. You also refer to Rene Rohrbeck's piece on corporate foresight, where they tracked about 100 companies over seven years and looked at the increase in profits, as well as the increase in overall revenue, for those companies that were future-facing. It always staggers me. You use that example, I've used it, and it's staggering that we haven't got more examples sometimes.
Nik Badminton 28:44
You know what, when I finally read Rene's piece back in 2018, when it was published, I thought, thank goodness someone's done something that I can at least hang my hat on and say: look, more profit, more growth, more vigilance, better organisations. Do the work of futures. Because everything before that, and still many of the things going forward, is narrative: "it's good to do futures work", like hanging a new rug on the wall and saying, ah, that's nice, isn't it? But what Rohrbeck actually did was go inside organisations that were using it. In my book I try to link the futures work that we do back, using backcasting methods, so we can actually use it today. It's like: here are 21 different areas that we need to do research on, because they're absolutely going to affect our business, and we know what that's going to feel like in 20 to 30 years. That's what's really, really important here, and I'm starting to see more companies really step up. But people are woefully underprepared, even for the next three to 12 months.
Tracey Follows 29:58
Well, that's been The Future of You: The Briefing. We've covered voice cloning, digital wallets and digital credentials, we've revisited the Twitter blue checkmark and its status, and we've faced our futures with Nik Badminton. Until next time on The Future of You, this has been The Briefing.
Tracey Follows 30:27
Thank you for listening to The Future of You, hosted by me, Tracey Follows. Check out the show notes for more info about the topics covered in this episode. Do like and subscribe wherever you listen to podcasts, and if you know someone you think will enjoy this episode, please do share it with them. Visit thefutureofyou.co.uk for more on the future of identity in a digital world, and futuremade.consulting for the future of everything else. The Future of You podcast is produced by Big Tent Media.