
The Amanpour Hour

How Technology Is Changing Us For Better Or Worse; Interview With Actress Julia Louis-Dreyfus And Director Daina O. Pusic; How India's Caste Discrimination Became Modi's Undoing. Aired 11a-12p ET

Aired June 15, 2024 - 11:00   ET

THIS IS A RUSH TRANSCRIPT. THIS COPY MAY NOT BE IN ITS FINAL FORM AND MAY BE UPDATED.


[10:59:44]

CHRIS WALLACE, CNN HOST: Along with a card that reads "Reihan, we know you're new to the game. Just remember, when you're down to one card, you've got to shout UNO. Happy Father's Day." So here is all of your UNO stuff.

REIHAN SALAM, PRESIDENT, MANHATTAN INSTITUTE: That's amazing. Thank you, Mattel. And thank you, Chris.

WALLACE: Well, I had nothing to do with it. But I'll tell you the marketing people at Mattel are --

KARA SWISHER, PODCAST HOST: They are.

LULU GARCIA NAVARRO, CNN POLITICAL COMMENTATOR: They know what they're doing. They know what they're doing.

KRISTEN SOLTIS ANDERSON, CNN POLITICAL COMMENTATOR: I would just like to say I've never driven a Ferrari.

WALLACE: Actually, that's exactly what we were thinking. Or drink Dom Perignon.

Happy Father's Day to all you dads out there.

Thank you for spending part of your day with us. And we'll see you right back here next week.

CHRISTIANE AMANPOUR, CNN CHIEF INTERNATIONAL ANCHOR: Hello, everyone. And welcome to THE AMANPOUR HOUR.

Here's where we're headed this week.

(BEGIN VIDEOTAPE)

AMANPOUR: Divine intervention, Pope Francis makes history weighing in on ethical A.I. at the G7 summit.

This hour, the risks and rewards of artificial intelligence with my panel of industry leaders.

CONNOR LEAHY, CEO, CONJECTURE: They don't get tired. They don't get distracted. They have no emotions, they don't care about humans. What do you think happens?

AMANPOUR: Managing misinformation in a critical election year.

BARONESS BEEBAN KIDRON, MEMBER, HOUSE OF LORDS: The A.I. has no guardrails, no regulation, no accountability, no -- no liability.

AMANPOUR: And how technology is changing what it means to be human.

NINA SCHICK, A.I. EXPERT: These are all philosophical questions that touch upon the heart of what it means to be human.

AMANPOUR: Also, this hour.

JULIA LOUIS-DREYFUS, ACTRESS: This is what parents do. They do what they have to do.

AMANPOUR: Julia Louis-Dreyfus on confronting death and a giant parrot in her new movie, "Tuesday".

LOUIS-DREYFUS: My daughter in the film is really the parent to me.

AMANPOUR: Then from my archive, why Narendra Modi's resounding rejection by India's most oppressed caste comes as no surprise.

And finally, the new documentary taking us behind the scenes for the tearful final days of Roger Federer's tennis career.

ROGER FEDERER, TENNIS PLAYER: These are the nerves I'm going to miss once I'm officially retired.

(END VIDEOTAPE)

AMANPOUR: Welcome to the program, everyone. I'm Christiane Amanpour in London.

The risks and rewards of artificial intelligence were high on the agenda at the G7 summit in Italy late this week. But did Pope Francis steal the show? He's the first pontiff to speak at the summit, putting ethics at the heart of the debate on artificial intelligence alongside leaders of the world's most advanced economies.

Pope Francis, who was infamously deep-faked wearing a big white, puffy jacket, wants A.I. to serve humanity, not to mutate into a 21st-century Frankenstein's monster.

(BEGIN VIDEO CLIP)

POPE FRANCIS, ROMAN CATHOLIC CHURCH (through translator): We would condemn humanity to a future without hope if we took away people's ability to make decisions about themselves and their lives by dooming them to depend on the choices of machines.

(END VIDEO CLIP)

AMANPOUR: This week on the program, with cutting-edge tech being showcased at the A.I. summit here in London, we begin with a conversation about what it means to be human and how technology is changing us, for better or worse.

In the studio with me, a round table of industry leaders and a range of views on the opportunities and threats of A.I.

Dame Wendy Hall is a distinguished computer scientist and a leading figure in the field of artificial intelligence; Baroness Beeban Kidron is an advocate for children's rights online and a member of the U.K. House of Lords; Nina Schick is an expert in artificial intelligence known for her work on the weaponization of A.I., deep fakes, and the impact of A.I. on elections and misinformation; and Connor Leahy is the CEO of Conjecture, a company focused on A.I. safety and its alignment with human values.

So welcome to you all.

What I heard recently is that we should look at A.I. as an enhancer, not a replacer. So first, I want to ask you, is that reasonable when we keep hearing these threats to our jobs, our existence, et cetera?

DAME WENDY HALL, PROFESSOR OF COMPUTER SCIENCE: It is absolutely reasonable. And I like to think of A.I. as them and us -- the A.I.s and us -- enhancing what we can do in so many different ways and leading to breakthroughs in fields like healthcare, where we're already seeing the breakthroughs.

So the A.I. can read the scans, can diagnose much more quickly and more accurately than a human being leaving the doctors or nurses, the radiologists to think about the patient.

We tend to go straight for the risks, and they are there, and we're going to talk about them. But I just wanted to start by saying there are fantastic opportunities here.

A.I. will enhance what we do and actually change the world of work, but it will create more jobs than it destroys.

AMANPOUR: Ok. That is something that not everybody agrees on -- the creating more than it destroys. But we're going to get into that.

[11:04:49]

AMANPOUR: Last week -- last month, rather -- 13 current and former employees of OpenAI wrote a letter talking about the benefits and the potential unprecedented opportunities, but also the risks, up to and including the loss of control of autonomous A.I. systems potentially resulting in human extinction.

Is that -- I mean, I can't grapple with that concept, but you can. Do you still think so? We asked you about it a year ago.

CONNOR LEAHY, CEO, CONJECTURE: Well, for me, it's often presented as this extremely exotic weird sci-fi scenario. For me it doesn't really feel like this weird of a scenario.

If we assume that you can build A.I. systems that are as smart as or smarter than humans, that are better at business, politics, science, and everything else. They don't get tired, they don't get distracted. They have no emotions. They don't care about humans. And we don't know how to control them.

What do you think happens? It's like, we will be outcompeted, and very abruptly so, and with very little care.

AMANPOUR: Do you have a solution?

LEAHY: I wish I did. Fundamentally, this is an unsolved problem both on the technical and the social level.

On a technical level, it's very important to understand that A.I. systems are not like normal software. They're not written line by line in code by humans.

A.I. systems are more like grown. They're more like huge piles of numbers that are made by big supercomputers. And these numbers can do very amazing things.

We don't really understand why or how they work or what's going on inside of their minds. And this should be disturbing to us, like this is a -- this is a problem.

So currently we don't know, on the technical side, what's going on. And the political side is like, how do we regulate this? How do we decide who gets to build these systems or not build these systems?

AMANPOUR: So on the regulation, Beeban Kidron, you have worked very hard. You called A.I. the new frontier in the battle against its use. And you've had some success with your activism on privacy and California is now, you know, doing even more to try to regulate A.I.

Do you think there are ways of regulating? And what worries you?

KIDRON: Well, let me start at the end with what worries me. I mean, what worries me is that we keep on going to technologists to solve the problem. And I think it's a societal problem.

The technologists are rightly very excited about what they're doing and perhaps not looking really at the societal cost of what they're doing.

So I think my voice in this is to say, hang on a minute. We've got to use the tools of democracy. We've got to decide what our high-risk situations are. And I also think we take a little bit too much time talking about the frontier A.I. and not enough about the domain-specific A.I.

How is it?

AMANPOUR: What does that mean?

KIDRON: What that means is, in education, we spend years training our teachers in specialist subjects. And if we're going to replace them with a bit of software, then I'd like to know that software really works at an educational level for learning. And what we're finding is we're replacing them, but it doesn't work.

HALL: I hate the rhetoric that says we're replacing teachers with A.I. We're not. It's not about to happen anytime soon.

AMANPOUR: But what if it happens in ten years?

HALL: Well, it won't be ten years. And it's not a zero-sum game.

SCHICK: It isn't as though there is this fixed amount of work and activity which is now done by humans, and now A.I., which will obviously be integrated into more and more intelligent processes, is going to take that workload. You have to -- looking at it from a business perspective, you know, if you are more productive, if you have more ideas, if you are able to be more profitable, then why would you stop hiring?

No, it isn't a zero-sum game. And I want to echo what Wendy's been saying, because the debate is so distorted, right? So there is so much disinformation around A.I.

I have worked on the risks, and specifically on synthetic content and deepfakes. But the debate is dominated by the existential risk argument. And when you actually start talking to A.I. researchers about who actually believes that this is the biggest risk -- and if you're going to make a huge claim like that, that all of humanity might be killed by artificial intelligence, what evidence is there to support that? -- you start digging in, and there isn't.

I would mark evidence apart from (INAUDIBLE) --

KIDRON: It could happen.

HALL: I totally agree with Nina on this, that the scaremongering of the existential threat isn't based on evidence. It's a hypothetical risk that we could get to at some point in the future.

We might debate when but we aren't there yet and it's not going to happen with frontier models. It just isn't going to happen (INAUDIBLE).

And you started the conversation with, is it going to enhance (ph)? Yes, it's going to give kids better education.

SCHICK: That's where we're actually seeing it, right? Talking about enhancement, some of the most exciting stuff: drug development, you know, looking at engineering seeds that are drought-resistant when you think about climate change. That is what's actually happening.

[11:09:51]

KIDRON: So I think -- I think I've been misunderstood here, because I think we're all agreeing that we need to look at some of the near-term risks and not concentrate on that. So I think that was what I was saying. And just to be --

AMANPOUR: When we come back -- go ahead.

KIDRON: And just to be clear, I think that what we're seeing in edtech is actually not enough concentration on whether it's helping children learn.

AMANPOUR: We'll continue our conversation about search engines, A.I. elections and dystopian coffee shops.

We'll be right back.

[11:10:20]

(COMMERCIAL BREAK)

AMANPOUR: Back now with my A.I. panel: Dame Wendy Hall, Baroness Beeban Kidron, Nina Schick, and Connor Leahy.

Let's continue the conversation with the big players in the A.I. arms race and how jobs are being impacted by emerging technologies.

So we were talking about the good, the bad, and the distant future in the last segment and the now. So what I want to ask you is what people really worry about. And a lot of people worry about jobs. They do. It's existential right?

There is a new book out by an NYU professor about how algorithms can hijack the job and steal the future. According to her theory, if A.I. tools do a bad job, algorithms in hiring, firing, and recruiting can screen with bias, could filter out candidates by zip codes, right -- you know, leading to some kind of discrimination by nationality. You know, they could be, you know, directed toward male-dominated pursuits. Does that sound like fantasy or reality?

LEAHY: That's literally reality already. I talk to quite a lot of people in the industry, including at private events, since I run a company.

And the number one question I get from any decision-maker in any industry in private -- not in public, in private -- is: how many people can I lay off if I buy your product? That's the only thing people care about.

Like, there's a huge, huge appetite in the industry for laying off people, being more restrictive on hiring, you know, having plausible deniability about how a decision was made so that they don't fall afoul of anti-discrimination laws. All of these things are packaged together, because that's what corporations do to maximize profit.

AMANPOUR: Why are you not concerned about that?

HALL: Well, because I think it's a dystopian future that we're not going to walk into blindly.

Remember when everybody -- everything was offshored, right? That was going to be the end of jobs in the U.K. And, you know, Musk said that stupid thing in the interview with Rishi Sunak where he said, well, if we get this right, we get AGI, nobody will need to work. What a rubbish -- what a stupid thing to say, because, well, is that a future we want? I mean, we have some say in this, and firms will have to think about the social impact of what they're doing.

And they will find, I am sure, that if they take that approach, people will not be buying their products, will not be using their services, because they won't get from the company what they need.

Because A.I. can't be trusted at the moment to get the right answers.

AMANPOUR: Can I make a giant leap to the weaponization, particularly in elections? We know that the Russians -- and a whole new article -- I mean, Russians, Chinese, Iranians, whoever, are influencing elections. Russians particularly have stepped up the weaponization of disinformation.

How do we protect against that? Because we can see the results of it.

SCHICK: So first of all, I want to say that I'm all in on A.I. because I think what is happening here is a huge step change in which we're actually going to be able to productionize and create intelligence.

And I don't think that the risks are so much to do with the existential risk that we're talking about, or that nobody is going to have any work to do. And I wish we had more time to talk about that.

But the risks we do see come down ultimately to people using technology to do bad things, as they have done historically. And my first kind of deep entry point into what's happening now with the latest capabilities in artificial intelligence was looking at election integrity, as the advancement of so-called synthetic content, deepfakes, was starting to emerge back in 2017.

And no doubt this is a problem, but mis- and disinformation has always been a problem for our society.

It isn't just because of A.I. It's also to do with how the Internet ecosystem, the information ecosystem, has changed.

AMANPOUR: Correct. Could it be exponentially increased?

SCHICK: Exponentially and now --

(CROSSTALK)

SCHICK: -- absolutely. And now you also have the ability to create synthetic content at scale, including hijacking people's biometrics. So yes, this is a problem.

However, there are already technical solutions, including content provenance and transparency embedded into the Internet.

But I think the bigger thing is, this is a challenge for society writ large, because this isn't only about A.I. It's about being in an exponential age where technology is so quickly transforming society and economic reality -- and opportunity, by the way. I'm half Nepalese (ph). My mother grew up in a village in the foothills of the Himalayas and had no economic opportunity.

Within one generation, my entire community in Nepal has been transformed thanks to technology.

AMANPOUR: It's huge.

But I want to ask you as a politician, right? The weaponization -- first of all, California, which has taken up what you did on the privacy law.

[11:19:46]

AMANPOUR: Now lawmakers are leading the way on A.I. restrictions to try to protect jobs, curtail use of personal data, and fight disinformation, with legislation votes coming in August.

Around the political space, what has to happen? You're a member of the House of Lords.

KIDRON: I think that part of what we're talking about here is perhaps tech exceptionalism, I think. It's not necessarily the A.I. that is the problem. It's that the A.I. has no guardrails, no regulation, no accountability, no -- no liability.

AMANPOUR: And the big people like Sam Altman, Elon Musk -- they're not helping.

KIDRON: They're not helping. And I think that governments are actually shirking their responsibilities and meeting -- you know, they're meeting and, oh, yes, let's have some voluntary codes here and, you know, let's be nicer to each other.

And I think we can look at the last 20 years and say, actually, the tech sector hasn't been that responsible to society.

So I think that the tech exceptionalism is the problem rather than the technology. And I do want to just join in on some of the benefits, because, you know, there's a group of women political leaders who all get together and (INAUDIBLE) that we love it.

We don't have any speech writers. We don't have any backup. We don't have anybody, now we feel that we can actually contribute, you know, from the global south, from -- from our situation on an even basis.

So it's not that there's good and bad. It's absolutely about how we put the guardrails and who is making the choices.

AMANPOUR: So I wanted to ask you then, because I teased the dystopian coffee shop. Is it a reality or not? In this case, like an Orwellian vision, there's a company called NeuroSpot (ph) which posted a video showing how you can use A.I. to monitor staff productivity and customer satisfaction in a coffee shop.

LEAHY: Absolutely. With exponential technology, with very little control or ideas of how to shape reality -- if we only have techno-optimism, if we only care about technology, not about humans, then we will see a future with a lot of technology and very few humans. It won't be good for people.

Fundamentally, technology alone does not make a good world. It is a tool with which you can craft a good world. If we just allow technology to proliferate as fast as possible, then we will get technology as fast as possible and nothing else.

AMANPOUR: I'm going to come to the human factor in our segment later on. We're going to continue this conversation later in the show with a look at how A.I. is changing our relationships and challenging what it means to be human.

But first, from a funny Veep to a grieving mother, Julia Louis-Dreyfus tells me about her dramatic new movie "Tuesday".

When we come back.

[11:22:26]

(COMMERCIAL BREAK)

AMANPOUR: Welcome back to the program everyone.

She has spent much of her career making audiences laugh. Indeed, Julia Louis-Dreyfus was among dozens of American and global comedians invited to chat with Pope Francis on Friday at the Vatican ahead of the G7 summit.

She got her big break on "Saturday Night Live", but she's best known for playing Elaine on "Seinfeld", and then as the sharp-tongued striver (ph) Selina Meyer on "Veep".

Now, in a new film called "Tuesday", she steps into a very different role, confronting an issue that's no laughing matter: death. Playing Zora, the mother of a terminally ill child, Louis-Dreyfus showcases her serious side as her character copes head-on with things that face us all and yet we rarely talk about.

She and director Daina O. Pusic joined me ahead of the film's opening.

(BEGIN VIDEOTAPE)

AMANPOUR: Julia Louis-Dreyfus and Daina O. Pusic, welcome to the program.

LOUIS-DREYFUS: Thank you for having us both.

DAINA O. PUSIC, DIRECTOR, "TUESDAY": Thank you so much.

AMANPOUR: It's an extraordinary film. It's very weird, at least to start off.

But I just want to first start by asking you, Julia, I guess people do typecast you a little bit with the comedy thing. But you've done, you know, clearly a number of films that aren't comedy. And I wondered, what about this one attracted you?

LOUIS-DREYFUS: Well, what attracted me to this role was the script, of course, but the script in a very fantasy, magical-realism kind of way explores issues of grief, death, dying, denial, acceptance, in addition to really exploring the bond between parent and child.

All of those themes were, I -- of course, they're very fundamental, and they really appealed to me to explore from a storytelling point of view.

AMANPOUR: And it should be noted that the -- one of the main stars anyway, is a CGI giant morphing parrot.

So Daina, tell me about it because it's kind of an unusual vehicle. The parrot is death, the Grim Reaper.

PUSIC: Well, I really -- I designed death the way that I did really through a sort of a process of deduction. I knew what the character was like. I knew what he needed to do in the film. I knew he needed to talk, which parrots are famous for, and I knew he needed to sing and dance and tell jokes.

I felt also that his personality was sort of birdlike. He is kind of cuddly and friendly in one moment and then at the turn of the head is frightening and foreign and dangerous.

[11:29:54]

AMANPOUR: Julia, your character, the mother, is trying to delay, to deny the obvious, which is that your daughter, Tuesday, is dying and has an incurable disease.

And I want to play just this clip which is from the so-called bathroom scene where you're essentially telling her, you know, to get a grip, weirdly. Let's just play it.

(BEGIN VIDEO CLIP)

LOUIS-DREYFUS: It's the reality of the situation, isn't it? This is what parents do. They do what they have to do. OK? And it's good to be honest about that.

So you need to look reality in the eye instead of just getting angry at me about it.

LOLA PETTICREW, ACTRESS: Are you being serious right now?

(END VIDEO CLIP)

AMANPOUR: Gosh, the actress who plays Tuesday is just so phenomenal. And that is -- I mean, that's exactly the best line, because there you are telling her to, you know, get a grip, and she's the one dying.

Just Julia, put that into context, because you've spun a whole load of lies just to get out of the house, so that you don't have to confront your dying daughter.

LOUIS-DREYFUS: Yes, exactly. I would say that the dysfunction that we sort of begin the film with is that my daughter in the film is really the parent to me.

And she's -- and I -- my character is in such pain and suffering that she refuses to face the reality that her daughter is in.

And so she's making one decision after another that doesn't seem -- on its face, these are not nurturing decisions.

And which includes not working. She's overcome with depression. She's selling off everything that's in their house to make ends meet. Nothing makes real rational sense.

But I have to say, as someone who played the character, I certainly understand where she's coming from.

And by the end of the film, the tables will have turned in the sense that my character, Zora, realizes that it's time for her to parent her child in the way that's necessary and critical.

AMANPOUR: And what about comedy? Obviously, you are burnt into everybody's minds with "Seinfeld," with "Veep". Any more -- what is it that you like about comedy? Because you're obviously taking to this other stuff, like a duck to water.

I mean, you know, you're not typecast. But you are so good at the other as well. What do you like about it?

LOUIS-DREYFUS: Well, what's not to like? I mean, there's -- it's so -- it's such an elevated experience to hear people laugh. And I -- it's a blessing, really. And so -- and it's something I've sort of, in my career, have sort of fallen into.

These are the -- most of the jobs I've gotten in my career have been comedic. So, I love doing comedy.

But having said that, I love doing drama, and they're related on so -- in so many ways. And I'm -- I -- what I really like is trying new things and trying -- and sinking my teeth into material that's unfamiliar and challenging and artistically satisfying.

So, that's what I'm looking for. I don't want to do anything derivative. And certainly, this film is not that.

AMANPOUR: Daina O. Pusic, Julia Louis-Dreyfus, thank you so much indeed.

LOUIS-DREYFUS: Thank you.

PUSIC: Thank you.

(END VIDEOTAPE)

AMANPOUR: And the film is out now.

And still to come on the program, we'll tackle how technology is changing what it means to be human later on.

But first, from my archive, elections matter: how India's lowest-caste voters slammed the brakes on the nation's powerful nationalist Prime Minister Narendra Modi.

[11:34:01]

(COMMERCIAL BREAK)

AMANPOUR: Welcome back to the program.

The world's biggest election this year has just concluded in India, where Hindu nationalist Prime Minister Narendra Modi was sworn in for a rare third term. But he and everyone else were shocked by the results that trimmed his sails.

India's infamous caste and class system was the deciding factor in the end. Dalits, once known as Untouchables, the very lowest and the most oppressed, decided that Modi had not done much to ease their poverty-stricken and humiliating lives.

They're considered so unclean that many people won't even touch them or share wards with them. Their only crime: being born at the bottom of an ancient Hindu hierarchy that divides everyone along rigid social lines.

To understand just how bad things are for them, we turn to the archives and my report from 1999 about the deplorable reality of some 200 million Dalits, and the start of their organization into a political voting bloc that delivered India's shock results this time.

(BEGIN VIDEOTAPE)

[11:39:52]

AMANPOUR: Sometimes the smallest detail can reveal the whole picture. These Untouchable villagers are taking their shoes off, not because they want to but because they have to.

They're about to pass their upper-class neighbors sitting here in the shade. It's a daily ritual of petty humiliation. The Untouchables can only wear their shoes again when they reach their own part of town.

Why are you guys always taking off your shoes and putting them back on again?

UNIDENTIFIED MALE: If we don't take our shoes off, we'll be fired from our jobs.

UNIDENTIFIED MALE: We'd like to stand up for them. But we know we don't have a chance.

AMANPOUR: Have you ever been punished for anything here?

UNIDENTIFIED MALE: They punished us several times. We have to fall at their feet two or three times.

AMANPOUR: You have to fall at their feet.

UNIDENTIFIED MALE: We have to get on the ground and beg forgiveness.

AMANPOUR: Oh, my goodness.

And the discrimination continues at prayer. Untouchables aren't allowed to enter the Hindu temple in this village. So the priest blesses them outside.

In tea houses all over India, Untouchables have to drink from separate glasses.

And they have to wait until someone comes to serve them outside. Even access to clean water is determined according to caste. Untouchables can't use this public well, because even their touch would pollute the water says this upper-caste villager.

UNIDENTIFIED MALE: These customs have been practiced forever. And if the government passed new laws against it, nothing would change. And I personally don't believe it should.

AMANPOUR: This is where Rungama (ph), an Untouchable woman, was forced to get her water: a muddy pond polluted by animal feces. Rungama's encounter with the pig proves just how dirty this water is.

A lot of people get sick from drinking the bad water.

UNIDENTIFIED FEMALE: Yes. And when our children became sick, the doctors blamed us saying, you people are unclean.

AMANPOUR: After enduring years of this kind of discrimination, Rungama and her friends took up the fight.

It began with a small act of defiance. One day they decided to take clean water from the public well, but they were stopped by infuriated upper-caste villagers. And even worse, their own husbands were too afraid to support their cause.

UNIDENTIFIED FEMALE: All the women in the village, we decided that if our men didn't help us get clean water, we wouldn't cook for them.

And so four days later, they joined our fight.

AMANPOUR: Rungama and her friends created such an uproar that eventually the upper caste in this village were forced to back down.

UNIDENTIFIED FEMALE: Now we have clean water and water is life.

AMANPOUR: And increasingly, India's 200 million Untouchables are resisting through the power of the ballot and political protests. And it's changing the face of India.

SUNIL KHILNANI, HISTORIAN: Caste as a form of social imprisonment is beginning to break down, I think. It's -- it's beginning to break, and people are beginning to assert their rights. They are beginning to say, well, look, constitutionally, this is illegitimate. These are my rights as an Indian citizen.

AMANPOUR: But it's the local Untouchable leaders like Dr. Krishna Swamy (ph) who are really shaking up the system by building a political movement on centuries of pent-up anger.

Is India a democracy for all?

DR. KRISHNA SWAMY, UNTOUCHABLE LEADER: No. It is a fake democracy. We are fighting for our self-respect.

(END VIDEOTAPE)

AMANPOUR: And now, a quarter of a century later, still fighting for self-respect, their vote has changed the face and the fate of India once again. Fearing Modi could change the constitution, remove affirmative action protections, and further sideline religious minorities and the press, Dalit voters denied him the super-majority his nationalist BJP party was expected to win. And it does show that elections do matter. And the world's biggest democracy has lived up to its name.

When we come back, more from our panel on A.I. Utopia versus dystopia and how technology is changing our relationships for better and for worse.

[11:44:33]

(COMMERCIAL BREAK)

AMANPOUR: Welcome back.

And our roundtable with A.I. industry leaders Dame Wendy Hall, Baroness Beeban Kidron, Nina Schick and Connor Leahy.

So here, we want to talk a little bit about the challenge of remaining human. First of all, human relationships. I spoke to Professor Scott Galloway, the NYU professor who's quite famous with his podcast, on love and isolation and how it changes relationships.

Here's what he told me.

(BEGIN VIDEO CLIP)

SCOTT GALLOWAY, NYU PROFESSOR: Young men without a romantic relationship not only fall off the tracks romantically, they fall off the tracks professionally.

And there's nothing more dangerous than a young, broke, and lonely man. And we're producing too many of them in the West.

(END VIDEO CLIP)

[11:49:54]

AMANPOUR: I don't know about a young, broke and lonely man, but Connor, tell me what you make of what he said in terms of relationships.

LEAHY: I mean, there's a lot of truth to this. We can look across the board, you know, at both men and women, frankly speaking, and the younger generation are having less contact with the other sex, less friends, less public engagement, and less everything.

The only thing that's up is video consumption. And I mean, I don't know what the future will hold, obviously. But if I made one prediction about human relationships in the age of A.I., it's that if there's -- if there's a way to commoditize, charge for, make money off, and increase the addictive potential of human relationships, then tech companies are going to do that.

AMANPOUR: Beeban, in terms of local or national political action?

KIDRON: We're in the middle of an election here in the U.K. and this week, the Labour Party did make a commitment to putting the voluntary A.I. code on a mandatory footing. They also look like they're going to do a lot more oversight and bring the regulators together. And they are taking it quite seriously.

And just speaking to what Connor just said, I spend a lot of time with young people and I'm afraid I spend quite a lot of time with the police looking at A.I. generated child sexual abuse.

On both of those things, young people are anxious, and it is becoming difficult to put a young person's picture in a public arena because it's being abused, used wrongly.

And I think that we really do want to meet this moment better, so that we can actually have a public square we can all be proud of.

AMANPOUR: Dame Wendy, in terms of a global United Nations kind of response--

HALL: Yes, I'm on the United Nations A.I. advisory body and it's been a fantastic journey.

We are charged with producing a report on the global governance of A.I. for humanity. We report to the secretary general. Our report is due out next month, and it will feed into the U.N. Summit of the Future.

And we've taken a lot of soundings, working with national governments. We talk about global governance, but the regulation will happen in the nation states. This is about what we can agree on globally, rather like we do with climate change, nuclear disarmament, and the other huge challenges we face, how you deal with a pandemic, all those sorts of things.

What can we agree on? And this includes all of the world. China, as well as the western countries and the global south.

AMANPOUR: The artist Marina Abramovic, whom I've interviewed a lot, is very vocal on all of this. She's basically saying our brains can't compete with algorithms. The technology that should give us more freedom, more quality time, you know, enhance our lives, has instead enslaved us, with all these gadgets constantly feeding our consumption.

Can humans keep up with this incredible technological reality?

SCHICK: Well, I think things are just going to be different. And humans as a species, we're not static, right? We're evolving all the time. If you look at the history of technology, you see that when general purpose technologies are integrated into human society, they change the society not only economically, they change humans on a biological, anatomical level.

And I'll give you an example. The invention of fire changed the way we structured our societies. It actually physically changed our digestive tract because of how we consumed nutrition. So it biologically changed us.

And yet we're in this new age of exponential technologies. We were talking about the Internet and social media and everything that's come from microprocessor computing, and we still haven't figured out how it's affecting and impacting us.

So these are all very valid questions, but this is all on a spectrum. These are all philosophical questions that touch upon the heart of what it means to be human.

AMANPOUR: All right, on that note -- Nina, Wendy, Connor and Beeban -- thank you so much for being with us.

And when we come back, a new documentary takes us behind the scenes of tennis great Roger Federer's tearful, final farewell.

[11:54:04]

(COMMERCIAL BREAK)

AMANPOUR: And finally, we end with the king of the court. For more than two decades, tennis great Roger Federer dominated the sport. Now, a new documentary follows the last 12 days of his extraordinary career, leading up to the all-star match that ended it.

The film is called "Federer: Twelve Final Days". Its co-directors, Asif Kapadia and Joe Sabia, told me how this very personal take came about.

(BEGIN VIDEO CLIP)

AMANPOUR: I think it's amazing. I think it's really amazing. First of all, he's a grown man who is not afraid to show his emotions, and that therefore gives a lot of boys and men permission to show their emotions.

But I wonder --

UNIDENTIFIED MALE: The film has lots of grown men crying.

AMANPOUR: Yes, it really is. Especially with (INAUDIBLE).

(CROSSTALK)

UNIDENTIFIED MALE: This sets everyone else off, because he's coming to the end of his career and all of his rivals are there in the room, and they can see it's going to be me soon.

AMANPOUR: Yes.

[11:59:46]

UNIDENTIFIED MALE: What am I going to do? And all of them are thinking, what next?

AMANPOUR: I didn't think that. I just thought it was -- you're right, of course, because now Nadal is in question, and Djokovic had to pull out of Roland Garros for surgery.

(CROSSTALK)

UNIDENTIFIED MALE: You can't fight time. So that's what's really interesting: it's about him, but it's really about them. And I think the audience, all of us, go through these moments when we come to the end of a certain part of our life. What are we going to do next?

(END VIDEO CLIP)

AMANPOUR: Nadal himself has pulled out of Wimbledon. We don't know what will happen with Djokovic. Now, the documentary on Federer airs on Amazon Prime, June 20th.

And you can watch our conversation in full at amanpour.com.

And that is all we have time for.

I'm Christiane Amanpour in London. Thank you for watching. And I'll see you again next week.