Start the Week - From Sapiens to AI - BBC Sounds
Introduction
From the Start the Week programme episode page:
Yuval Noah Harari’s best-selling Sapiens explored humanity’s extraordinary progress alongside its capacity to spin stories. In Nexus he focuses on how those stories have been shared and manipulated, and how the flow of information has made, and unmade, our world. With examples from the ancient world to contemporary democracies and authoritarian regimes, he pits the pursuit of truth against the desire to control the narrative. And he warns against the dangers of allowing AI to dominate information networks, leading to the possible end of human history.

The classicist Professor Edith Hall looks at how information flowed in Ancient Greece, and how the great libraries of Alexandria and Pergamon were precursors to the World Wide Web. Homer wrote about intelligent machines in his epic poetry, which suggests that the human desire for AI goes back a long way, along with the hubris about being in control. By understanding and appreciating the past, Professor Hall argues, we can look more clearly at our current condition.

Madhumita Murgia is the first Artificial Intelligence Editor of the Financial Times and the author of Code Dependent: Living in the Shadow of AI. She investigates the impact AI can have on individual lives and how we interact with each other. And while there are fears that companies have unleashed exploitative technologies with little public oversight, cutting-edge software has unprecedented capacity to speed up scientific discoveries.

Producer: Katy Hickman
Abbreviated transcript
Tom Sutcliffe: Let's turn from those information networks that we've learnt to live with, writing and printing and so on, to these more recent ones. Yuval, you write about a kind of recent witch hunt: the genocidal attacks on the Rohingya in Myanmar, which began in 2016. You describe them as the canary in the coal mine. Why are they particularly the canary? What are they telling us?
Yuval Noah Harari: Because it was the first large-scale massacre, and one of the first large-scale historical events, which was in part the result of decisions made by a non-human intelligence: by the algorithms that control the social media platforms, in this case Facebook. People talk a lot about automating jobs, and it's strange to think that one of the first jobs to be automated was news editor. We thought they would come first for the taxi drivers or the coal miners, but actually they came first for the news editors. For much of modern history, editors were among the most powerful people in society: they determined what people talked about, they shaped the conversation. You know, Lenin, before he was dictator of the Soviet Union, was the editor of a newspaper, Iskra. Mussolini began as a journalist, then he was the editor of Avanti, the socialist newspaper, and then dictator of Italy. So this was the ladder of promotion: journalist, editor, dictator. And now, if you think about who the editors are of the most important news platforms in the world, these social media platforms, it's algorithms, not human beings. And these algorithms were given a mission by the owners of the platforms: to increase user engagement. And what the algorithms discovered, by experimenting on millions of people, is that the easiest way to increase user engagement, to make people spend more time on the platform, is to press the hate button or the fear button or the greed button in people's brains. And one of the first places this happened was in Myanmar in the middle of the 2010s. Nobody at Facebook had any ill intentions against the Rohingya; they probably didn't even know they existed. Nobody at Facebook headquarters in California even spoke Burmese. But Facebook was the main social media platform and news source in Myanmar. And the algorithms were deliberately promoting, recommending, autoplaying all these hate-filled conspiracy theories about the Rohingya. And this propaganda campaign was one of the reasons, not the only reason, that led to an ethnic cleansing campaign against them.
Tom Sutcliffe: Now, just to clarify this: it wasn't the algorithm that was creating the hate?
Yuval Noah Harari: No, no, no. That wasn't in the program at all. Humans were creating the content. I mean, this is 2015, 2016, 2017; today the algorithms can even create the content, but back then humans created all the content. And humans created all kinds of content. You had people creating hate-filled conspiracy theories. You had people giving Buddhist sermons of compassion. You had people giving biology lessons. And then it was the algorithms who were the editors. They decided what was at the top of the newsfeed, what to recommend, what to autoplay. And the algorithms decided to spread hate.
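A minimal sketch of the ranking mechanism Harari is describing, with all names, fields and weights invented for illustration; nothing here is any platform's actual code:

```python
# Hypothetical sketch: a feed "editor" whose sole objective is
# engagement. Names and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_dwell_seconds: float  # how long users are expected to linger
    predicted_shares: float

def engagement_score(post: Post) -> float:
    # The objective knows nothing about truth, hate or compassion;
    # it only estimates time-on-platform.
    return post.predicted_dwell_seconds + 10.0 * post.predicted_shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # The "editorial decision": sort purely by predicted engagement.
    # If outrage keeps people on the platform longest, outrage rises
    # to the top, without anyone having programmed hate in explicitly.
    return sorted(posts, key=engagement_score, reverse=True)
```

The point of the sketch is that "spread hate" never appears anywhere in such code; it emerges as the highest-scoring strategy under the engagement objective.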
Madhumita Murgia: We talked previously about good information versus bad, and AI systems and algorithms are the best example of how bad information is actually prized over good.
Tom Sutcliffe: But the algorithm itself is not prizing the bad. It's just prizing engagement, which is a neutral term.
Madhumita Murgia: And it is neutral in the sense that it doesn't know what good or bad is. But people tend to be shocked by extreme things, and extreme things tend to be bad things. So I, for example, have been getting repeated posts on my Twitter about child bereavement, people losing their children. I've never clicked on any of it, but I've possibly lingered on it a second too long, maybe because I have young children, or because I'm about to have another. But I'm just being shown millions of these things. And as you say, it's not that everybody is writing posts about losing their young children; there's lots of other content on there. But the AI systems decide what you see. So it creates a mirage. And as we start to talk to AI systems, we will also only hear what they decide we should hear. So it's very, very manipulative, in a deep and individual way. The AI systems take away our agency to decide what we would prefer to see. I don't prefer to see things about child bereavement on the internet, yet I am forced to see them.
[...] I don't think people can control it, right? If you see something really horrific online, you don't just close it. But that's the point.
Tom Sutcliffe: You stay. You stay.
Madhumita Murgia: But not because it makes you happy. No, no.
Tom Sutcliffe: But I mean, that's the thing that the AI is exploiting.
Yuval Noah Harari: I think maybe it goes back again to ancient Greece and the thought of Plato and Aristotle. Humans are complex entities. We have vices, we have virtues. What do you cultivate? That was always a key question in human societies. And the algorithms, at least if they are not tweaked correctly, tend to cultivate our vices. And then the owners of the algorithms say: but this is your choice, we didn't do anything, it's your free will, it's your personality.
Edith Hall: Well, no. Can I ask something not so much about ancient Greece? What is the difference between the market mechanism, which is not human and has no agency, but which, you know, moves in similar trajectories because it's centripetal, and it means that you get fewer and fewer companies, right? Information has the same tendency. What is the difference between the bots, which are making not moral judgments but quantifications, and raw market forces? What is the difference?
Yuval Noah Harari: I think the difference is that the market was always a metaphor. All the people actually making decisions were human beings. I mean, we talk about "the company" making a decision, but there was a human being, or a couple of human beings, making that decision. Now, for the first time, you can have non-human beings actually making the decisions.
Edith Hall: I hear what you're saying about the point at which the yea or nay actually happens in the consciousness. But I don't see the difference. If you have a very big factory that makes a product, you can sell it cheaper than if you've got a very small one. This inevitably means that people are going to make the decision to buy up the smaller factory, right? Humans are only secondary agents in that. There is an inhuman force that is making things centripetal and driving them forward. And the one thing I really missed in your book was a thoroughgoing engagement with capitalism and Marx's laws, which are inhuman laws.
Tom Sutcliffe: How do you respond to that? Why do you not talk about the profit motive quite as much?
Yuval Noah Harari: No, I mean, the profit motive is there, but I still think there is a fundamental difference. Even if there are market forces, I believe that people can resist them. We see it with religions, we see it with ideologies. The idea that the whole world is just this kind of intricate Newtonian system of market forces, of material forces: that's not my view of history. I think that humans have a lot of agency, and that stories, mythologies, theologies are sometimes more powerful than market forces. So I think I would disagree.
Edith Hall: If I think about voices...
Tom Sutcliffe: I want to bring in Madhumita Murgia here, because her book is exactly about that point where market forces are employing AI, and what that does. You set out to write a book about this, Madhumita, and you mostly ended up talking to people without power, without much money, without social agency. Was that where you thought the research would take you, or were you surprised by that?
Madhumita Murgia: I was really surprised at the patterns that emerged among the people who were most impacted by the technology. Maybe I shouldn't have been, because the fact that the tech industry is obviously driven by a profit motive means it's inevitable, I think, that those who are going to benefit most are those who already have power in today's society, and those who are marginalised or vulnerable are going to suffer just as they do today. But I think the other surprising thing was that AI actually exacerbates that inequality. It makes things worse for you if you're already badly off, and it makes your life much better if you're already doing pretty well. And that's where the market force comes in; that's why they're doing it. And I think that's also why the concentration of power is so dangerous, because the whole way AI is designed, it has one objective function. With the algorithms, we talked about social media: their objective function is eyeballs, right? But each AI system tries to maximise one thing only. And if you only have profit-driven corporations building and designing those systems, that's what you're going to see.
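A toy illustration of that single-objective point: a maximiser literally cannot see any value that is not in its objective function. All the numbers below are invented; the point is structural, not empirical:

```python
# Two hypothetical design candidates for a system, with invented scores.
policies = [
    {"name": "balanced feed", "eyeballs": 70, "societal_harm": 1},
    {"name": "outrage feed",  "eyeballs": 95, "societal_harm": 9},
]

# The objective function is eyeballs, and only eyeballs.
best = max(policies, key=lambda p: p["eyeballs"])

# "societal_harm" never entered the decision: a single-objective
# maximiser is blind to every value left out of its objective.
print(best["name"])  # -> outrage feed
```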
Tom Sutcliffe: You write about social enterprises in Nairobi which are helping to build the AI. They take people from Kibera, a slum area of Nairobi, and give them jobs which, on the face of it, look much better than other opportunities there. And they are doing absolutely grinding, almost piecework labour, teaching the AI how to think about the world.
Madhumita Murgia: Exactly. But, you know, as you said earlier, truth is complicated and nuanced. So what I want to convey is that there is no "AI good" or "AI bad"; there's everything in between. So yes, good, because these people get jobs that they would never otherwise have: they're being paid a minimum living wage and getting digital skills. But at the same time, they are never allowed to protest against Facebook and Tesla, which are the companies they're ultimately working for, and they end up doing some quite horrific things, like filtering out all of the terrible social media content that we don't want to see.
Tom Sutcliffe: And that was a very interesting case, I thought, because it was a case where human agents were, literally, as in the Industrial Revolution, exposed to toxic materials for somebody else's benefit. Just explain what they have to do. They have to go through the...
Madhumita Murgia: Yeah. I mean, we would be seeing the absolute worst of humanity constantly, every time we went on the internet: YouTube, Facebook, Instagram, TikTok. Because, as we've discussed already, algorithms push us towards negative content, and maybe we're also more fascinated by it; whatever the reason, our engagement is maximized by negative content. But we don't want to see bestiality and child pornography, torture, terrorism and violence constantly. Otherwise we'd never use it.
Tom Sutcliffe: Clearly somebody does, because there wouldn't be a problem otherwise.
Madhumita Murgia: Well, we don't want to see it all the time; it would be too much. Well, not all of us want to see it ever, but I take the point. And the only way to filter it out, because up to now the AI systems weren't good enough to do it, was to use humans to say: this is satire or a political statement, but this is actually real violence, or terrorism, or unacceptable. And imagine being a person from Kibera, sitting there nine hours a day, just looking at that content, drawing boxes and categorizing it. It's led to PTSD and all sorts of downstream harms. And we now have these people suing Facebook for the first time, for their algorithms causing harm to them.
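A rough sketch of what one unit of that labelling piecework looks like in data terms. The category names and structure are invented for illustration; real moderation taxonomies are proprietary:

```python
# Hypothetical schema for one unit of content-moderation piecework.
from dataclasses import dataclass, field

CATEGORIES = {"satire_or_political", "real_violence", "terrorism", "other_unacceptable"}

@dataclass
class ModerationLabel:
    content_id: str
    category: str  # the human judgment call: satire, or real violence?
    # Regions the worker draws around the offending material.
    boxes: list[tuple[int, int, int, int]] = field(default_factory=list)

def label_item(content_id: str, category: str, boxes=None) -> ModerationLabel:
    if category not in CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    return ModerationLabel(content_id, category, list(boxes or []))

# Nine hours a day of judgments like this one, on the worst content
# on the internet, is what trains the filters the rest of us rely on.
example = label_item("post-48213", "real_violence", boxes=[(10, 20, 200, 180)])
```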
Tom Sutcliffe: That is a whole other difficulty, isn't it, suing over the AI, because of the black box nature of it. Before we move on to that, I just wanted to ask you: you do write about the benefits of AI, the fact that it can be used well in certain circumstances. Tell us about Dr. Ishita Singh and QTRAC, which is applying it medically.
Madhumita Murgia: Yeah, well, this is quite relevant this week, after the Nobel Prizes, where we've seen AI's role in biology and breakthroughs, and, downstream, how that could impact medicine as well.
Tom Sutcliffe: Demis Hassabis of DeepMind, I think, for working out how proteins fold,
Madhumita Murgia: Exactly, to predict the structure of proteins. For me, the most positive outcome of these systems would be to accelerate scientific discovery and progress, and healthcare is a big part of that. I went to India, where I spent time with a doctor in a very rural part of the country, treating mostly patients from a local tribe who don't really rely on Western medicine, or believe in it. She's using an AI system to help diagnose tuberculosis there. And what she talks about is the importance of human doctors: medicine is more than a science, it's an art; it requires empathy and humility, which these systems can never bring. But they can be an amazing democratizing tool for huge parts of the world that don't have any access. And you don't have to go to rural India for that; you could travel around the UK and find people who've waited months to have a specialist diagnose their cancer, for example, which you could do within seconds with these information systems.
Tom Sutcliffe: And it has to be said, these information systems are very good at it. Taught how to do it, they perform at a higher level than a very well-trained doctor.
Madhumita Murgia: Exactly. And it's a more objective sort of tool that you can use. But I think, again, the lesson is that we can't cut humans out of the process, because what happens when things go wrong? You can't allow an AI system to be autonomous, because it can't self-correct, and then you end up having far more damage than you would have had with human doctors. So I think it's about how we implement this alongside human beings.
Tom Sutcliffe: You write about the use of AI to maximize surveillance, which is a fear that a lot of people have, because AI, as it were, will take on any amount of drudgery: it doesn't mind looking at 10 million faces in a row, and it doesn't get tired doing it. Yuval, the hijab laws in Iran are now enforced with a degree of rigour.
Yuval Noah Harari: Yeah, with facial recognition cameras. The laws were there on the books for years, saying that women, when they go out, must wear the hijab. But it was difficult to enforce them, because you would need policemen everywhere, and it also causes friction. Now they have largely switched to an automatic system, in which you have surveillance cameras everywhere, equipped with facial recognition software and algorithms, AIs, that not only recognize that a woman is, say, driving in her own car without the hijab, but also identify the woman, find her phone number and, without going through a court or anything, instantly send her a message: you broke the law, your car is impounded. And other penalties too. And this can spread. These types of systems create, for the first time in history, the possibility (again, not a certainty, but the possibility) of annihilating privacy and creating total surveillance regimes. Because previously in history, even if a Stalin or a Hitler wanted to follow everybody all the time, it was just technically impossible. In the Soviet Union you have, like, 200 million citizens; you don't have 200 million KGB agents to follow each citizen 24 hours a day. And even if you had these agents, they just write paper reports about what you did and who you met, and you don't have millions of analysts in KGB headquarters to analyze them. Now you do. I mean, you don't need human agents and analysts: a regime can rely on all these cameras and microphones and smartphones and drones, and on machine learning to analyze it all.
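The enforcement chain Harari describes, reduced to a hedged sketch. Every function name here is a hypothetical placeholder, not a real API; the point is that no court and no human appears anywhere in the loop:

```python
# Hedged sketch: camera -> recognition -> identity lookup -> instant
# penalty. All callables are injected placeholders, invented for
# illustration.
def enforce(frame, recognise_face, violates_dress_code,
            lookup_phone, send_sms, impound_car):
    """One pass of the loop; in practice it runs on every camera, nonstop."""
    person = recognise_face(frame)        # ML model: never sleeps, never tires
    if person is not None and violates_dress_code(frame):
        phone = lookup_phone(person)      # state identity database
        send_sms(phone, "You broke the law. Your car is impounded.")
        impound_car(person)               # penalty issued instantly, no judge
```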
Tom Sutcliffe: And there is nobody that you can go to. I mean, you might not call it justice at all, but should there be a miscarriage of justice under such a system, there is no one you can appeal to.
Madhumita Murgia: Yeah. I'd be really interested to compare that to previous historical examples of the centralization of information. Because what we see with AI today is so much automation bias, which is us believing that it's better than us: superior to what we're able to do, less biased, doesn't get tired, all the things you said. We aren't even creating mechanisms for recourse. So today, if you want a liver on the NHS, if you're an organ recipient waiting to get a transplant, there's an algorithmic system that decides who goes to the top of the list and who should get a liver. And there's no human involvement; a doctor can't come in and say, no, I disagree. It just goes on to the next person if you refuse. And a family I wrote about discovered that it was actually a system biased against anybody under the age of 40. So if you are under 40 and waiting to get a liver through the system in the UK, you're never going to get it, never. Because the system is never going to change its mind, and there's no human alternative or way to appeal. How can we live in a society like that, where, when things go wrong, or when the design is biased against certain types of people, there's no way out?
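An illustration of that failure mode: a matching score whose age term swamps everything else, with no override branch anywhere. The fields and weights are invented for illustration and are not the NHS formula:

```python
# Invented scoring function illustrating a hard, unappealable bias.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    clinical_urgency: float  # say, 0 to 10

def allocation_score(p: Patient) -> float:
    # If the age term dominates every other term, patients under 40
    # can never reach the top of the list, however urgent their case.
    age_term = 100.0 if p.age >= 40 else 0.0
    return age_term + p.clinical_urgency

waiting_list = [Patient(age=32, clinical_urgency=10.0),
                Patient(age=55, clinical_urgency=1.0)]
top = max(waiting_list, key=allocation_score)
print(top.age)  # -> 55; and there is no "a doctor disagrees" branch anywhere
```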
Tom Sutcliffe: They did discover that the bias was inherent, though, which suggests that transparency can work: that if we're told the way in which the algorithm is constructed, there is a remedy of sorts.
Madhumita Murgia: Yeah, but in this case they didn't make it transparent. It was the sister of a patient who managed to find it, with the NHS sort of denying it. So yes, if we have people in power who choose to make these systems transparent, then there's a way for us to fix them.
Edith Hall: This goes back, in philosophy, bang to Aristotle's Nicomachean Ethics, where he writes pages and pages and pages on the difference between equity and equality. And he is so clear that you cannot have judges without flexibility in sentencing. I mean, long before the three strikes law, he is so clear that no lawmaker (and we could apply this to the bot, right?) can ever anticipate all the possible cases and details and nuances that will come before them; there is no way that they can do that. So his nomothete, his lawmaker, has got to be separate from his judge, who applies the law in individual cases. So you could have an algorithm, which is your nomothete, but you'd have to have the intermediary of the human judge, who applies not equality, which is what the algorithm actually does, however unfairly, the same rules for everybody, but equity. He uses one of his most beautiful images for equity. He lived on the island of Lesbos for a long time, and he says they have a ruler there made of lead, which the stonemasons use for measuring curving stones. So that is the law: it is a crime, or whatever, or it's a liver, but you've got to be able, as a judge, to bend it round the case. And this is just so staringly obvious: we just need to get our act together to get our judges in there, correcting the nomothete.
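Hall's nomothete-and-judge structure, sketched purely as an illustration: a general rule applied identically to everyone, plus an optional human "Lesbian rule" that bends the verdict round the individual case. All names and cases are invented:

```python
# Illustrative only: algorithm as nomothete, human judge as equity.
from typing import Callable, Optional

def decide(case: dict,
           rule: Callable[[dict], str],
           judge: Optional[Callable[[dict, str], str]] = None) -> str:
    verdict = rule(case)                # equality: the same rule for everybody
    if judge is not None:
        verdict = judge(case, verdict)  # equity: bent round the particular case
    return verdict

# Usage sketch, echoing the liver example: the judge can soften the
# general rule for an exceptional case, where the bare algorithm cannot.
rule = lambda c: "deny" if c["age"] < 40 else "allow"
judge = lambda c, v: "allow" if c.get("exceptional_urgency") else v
print(decide({"age": 32, "exceptional_urgency": True}, rule, judge))  # -> allow
```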
Tom Sutcliffe: Well, that's the next stage, when the artificial intelligence starts to acquire an intelligence of its own. You make a very interesting distinction: that we should think of the A in AI as standing for alien intelligence. We tend to think of artificial intelligence as being a better version of us. That's not what we're going to get, is it?
Yuval Noah Harari: Yeah. I mean, there is this debate: when will AI reach human-level intelligence? And the answer is never, because it's not on a path to human-level intelligence. Humans are animals; we are organic. AI is not organic. It's not just more different from us than rats and pigs; it's more different from us than trees and amoebas. It functions in a completely different way. One salient factor is that organic entities, organic systems, work in cycles: day and night, summer and winter, a time of activity and a time of rest. We need to sleep. AIs are always on; they never need to sleep. And what we see around the world, more and more, is that you have these kinds of alien intelligences. Again, alien not in the sense of coming from outer space, but in that they really process information in a fundamentally different way from us.
Tom Sutcliffe: And in a way that we wouldn't understand.
Yuval Noah Harari: Yeah, and going over an enormous amount of information that we can't process. And they're increasingly in charge of healthcare services, of the financial systems, of the news cycle, of politics. And the question is: who will adapt to whom? Will we have to adapt to the way they manage things, or can we make them adapt to the organic style of human beings?
Tom Sutcliffe: You've asked a very large question there, and we're running out of time. But to go back to where we started: do you feel confident about the solutions we're going to come up with, or pessimistic?
Yuval Noah Harari: I try to be realistic. I mean, when I talk with people in places like Silicon Valley, and also in China, which is the other main center of development, which we haven't talked about, many of the people who are leading it are thoughtful people. They understand, better than most people, the danger of what they are doing. They are concerned, but ultimately they are locked in an arms race. When you ask them, why not just slow down a little and give society time to think about it and to adapt, they say: we would like to slow down, but we can't trust our competitors. And the next question is: but can you trust the AIs? And then they say yes. So this is the big paradox: they can't trust the humans, but they think they will be able to trust the AIs.
Tom Sutcliffe: Yes, well, I mean, one thinks they should listen to the Nobel Prize winners, who are saying it could go out of control. That's a reassuring thing in itself, isn't it?
Edith Hall: I think they should listen to Aristotle, and they should listen to Homer. In Homer, Odysseus has to get home to Ithaca, and the Phaeacians have sentient ships. He's got to get there, and there's no one to take him, but the ships are completely sentient: they make all the decisions about the navigation, right? But they only do what you told them before you go to sleep, because he sleeps on board: you want to get to Ithaca. They can go any way they like. But I think that is the perfect image of what we should have as our relationship with AI.
Tom Sutcliffe: The control of the programming. Madhumita, do you broadly feel confident about the future of AI, or troubled by it?
Madhumita Murgia: Since we're talking about humans versus machines: I'm, I guess, handicapped by my innate optimism. That's the kind of person I am, so that's the lens that I apply to everything. And it's interesting, a lot of the reviews of my book saw it as a dystopian reality, and many of the stories are. But somehow, coming out the other end of the book, I didn't feel hopeless or pessimistic, because the final third of my book looks at humans who are fighting back. In China, in particular, at a human rights activist who's fighting against the might of the big data system there. And at everyday people who are stepping forward to try and understand these systems better and to establish some agency. And so that's what I hope will happen: that we'll move away from just the machines and look at our own power in that.
Tom Sutcliffe: There is a resistance, if you know where to look. Thank you to all of my guests: Edith Hall, Professor of Classics and Ancient History at Durham University; look out for her new book, out in 2025, taking an ecological look at the Iliad, Epic of the Earth. [...] Madhumita Murgia's Code Dependent: Living in the Shadow of AI are both out now.