The Whole Story with Anderson Cooper
A.I. and the Future of Humanity. Aired 8-9p ET
Aired December 03, 2023 - 20:00 ET
THIS IS A RUSH TRANSCRIPT. THIS COPY MAY NOT BE IN ITS FINAL FORM AND MAY BE UPDATED.
JIM ACOSTA, CNN ANCHOR: The implications come to the forefront. You see in some cases A.I. bumbling along.
NICK WATT, CNN NATIONAL CORRESPONDENT: Listen, we've got an election next year. That could be totally compromised. We have to think about the here and now. It's not just "Terminator" in the future, it's now, Jim. It's now.
ACOSTA: All right.
And speaking of now, your special is coming up just about now. Nick Watt, thank you so much.
"A.I. AND THE FUTURE OF HUMANITY" is next right here on CNN. Make sure you watch that.
Thank you very much for joining me this evening. Reporting from Washington, I'm Jim Acosta. I'll see you next weekend. Good night.
ANDERSON COOPER, CNN HOST: Welcome to THE WHOLE STORY. I'm Anderson Cooper.
Artificial intelligence or A.I. is an incredibly powerful technology which may change many aspects of our lives. The CEO of Google's parent company, Alphabet, which has invested heavily in it, recently said A.I.'s impact could be more profound than electricity or even fire. But many worry about what that impact might turn out to be.
A.I. COOPER: Could A.I. one day replace humans? And if so, how might that happen? We've already seen some service-based and manufacturing jobs turn to A.I. in a big way. But what about other industries? Can A.I. replace journalists or news anchors? Perhaps it already has.
COOPER: Because what you just saw and heard a moment ago was not actually me. This is me, Anderson Cooper.
A.I. COOPER: And I am an A.I.-generated Anderson Cooper.
COOPER: That wasn't my real voice, and I never spoke the words you just heard. We asked a young student in California to create a fully end-to-end A.I. version of me. Looks like me, sounds like me, and it didn't take him very long to do it.
A.I. COOPER: This A.I. version of me was created in just a few weeks actually with open-source tools. And remember this technology is still in its infancy. It's only going to get better, and faster and more accurate, which raises all sorts of questions, like how will we know what's real --
COOPER: And what is not? Not just when it comes to believing what you see on TV but everything from creating art, fighting wars, even waging political campaigns.
Over the next hour, CNN's Nick Watt brings us inside the race to develop A.I. and the attempts to contain it.
A.I. VOICE: Looking for a good spot to pull over.
NICK WATT, CNN NATIONAL CORRESPONDENT: You think this is our car?
UNIDENTIFIED MALE: Yes.
WATT: OK. I guess the first test is whether it runs me over. Start ride. So that's the view I'm getting from the back seat, that wheel moving with no hands.
Look, mom, no hands. This is freaky.
(Voice-over): This robot taxi already roaming the streets of San Francisco gives us a very good idea of where we are.
Pretty cool if you can get past the weird empty driver's seat.
(Voice-over): And where we might be going. Many A.I. algorithms already behave, well, kind of human.
In California, obviously, you can turn right on red, and it's trying to turn right. It's going to do it. Go on, mate. Go on. Nice.
(Voice-over): The flesh and blood driver that used to sit here is already obsolete.
This is our future. Humans sitting in the backseat doing nothing.
(Voice-over): Some humans are scared, some trying hard to stop it. For now, humans are still in control of A.I. and this cab, there's a human supervisor in a call center.
Can you tell that he's not wearing his seatbelt properly?
UNIDENTIFIED MALE: Yes, correct.
WATT (voice-over): But for how long? That's where much of A.I. is at the moment, imperfect and speeding ahead without a seatbelt. Like it or not, this is our future.
A higher being is driving the car.
(Voice-over): For hundreds of thousands of years, humans have been the most intelligent beings on this earth. Not for much longer. We're creating tech that will take us well beyond self-driving cars, tech that will outsmart us all.
Will A.I. save us, or will A.I. kill us?
UNIDENTIFIED FEMALE: (INAUDIBLE) worldwide is one of the leading experts in artificial intelligence, please welcome to the stage, Yoshua Bengio.
WATT: Today, this quiet Canadian is headlining an A.I. summit in Montreal. Without him, this revolution would not be where it is. So today he wears a slightly nervous smile.
YOSHUA BENGIO, SCIENTIFIC DIRECTOR, MONTREAL INSTITUTE FOR LEARNING ALGORITHMS: Governments need to protect all of us with technology, which could be amazingly useful and also risky.
WATT: Yoshua Bengio is a deep learning pioneer. That's basically teaching computers to behave like human brains.
BENGIO: The stuff that you'd find in ChatGPT, many of its major ingredients came from Mila.
WATT: Mila, the Montreal Institute for Learning Algorithms, founded by Bengio in the '90s in a building that was once a clothing factory. Now they produce ideas, algorithms already changing humanity.
There are going to be machines that are way smarter than you.
BENGIO: If we choose so and we don't destroy civilization before that, we could get there, yes.
WATT: What is the biggest fear? It's humans using this technology or humans losing control of this technology?
BENGIO: They're both valid fears. For the foreseeable future, it's going to be humans doing bad things with powerful technology, like they have done in the past, but now with much more powerful technology. It's also conceivable that at some point we could lose control, and that's potentially even worse.
WATT: If you're scared, why don't you just shut up shop and go become a farmer? Your research could be contributing to the end of all of us.
BENGIO: I'm asking that question myself every morning.
Rich enough state's base essentially. I wonder if there's a way to do like maybe change the perspective.
Why I'm continuing right now is in part because I think that it is possible to build A.I. systems that would be totally safe and incredibly useful.
WATT (voice-over): But if A.I. does go rogue, "Terminator" style, whether because we lose control or, more likely, because we simply program it badly, there could be unintended but indelible consequences.
STUART RUSSELL, FOUNDER, CENTER FOR HUMAN-COMPATIBLE A.I. AT UC BERKELEY: Let's make sure that we fix climate change.
If that's the objective that the machine has, fix climate change, OK, well, I guess who's causing the climate change? Humans. OK. Easiest way, end the human race, right?
WATT: Stuart Russell is another godfather of artificial intelligence. He literally wrote the textbook on A.I.
RUSSELL: My first A.I. program I wrote in high school, which is about 48 years ago.
WATT: Forty-eight years later, most A.I. systems can do single things better than us. Recognize a face, play chess.
RUSSELL: What we're aiming towards in A.I. is general purpose A.I., meaning A.I. systems that can do anything that human beings can do.
WATT: One system that can learn, even teach itself to do anything, everything better than us.
A.I., I would say, probably definitely is going to just completely upend our entire sort of economic structure and how we've seen things for centuries, if not millennia.
WATT (voice-over): At Berkeley, Russell leads a small army of researchers at the Center for Human-Compatible Artificial Intelligence.
RUSSELL: Given that it's going to be more powerful than human beings, how do you ensure that humans have power over it forever? Right? That's the question that we're working on.
WATT (voice-over): Russell and many other tech leaders called in March for a global pause on deploying advanced A.I. systems while we figure out the guardrails. There has been no such pause.
The problem is right now it's people like you, who are soft-spoken intellectuals making this point and signing these letters.
RUSSELL: Yes, it doesn't seem like a fair fight. But I would say the tenor of discussion has changed radically. People are listening. Even Sam Altman, the CEO of OpenAI, which produces ChatGPT and all these systems, has called for regulation.
SAM ALTMAN, OPENAI CEO: Let's do the right thing for humanity.
My worst fears are that we cause significant -- we, the field, the technology, the industry cause significant harm to the world. We want to work with the government to prevent that from happening.
WATT (voice-over): In the summer, Altman and other big A.I. players agreed to voluntary regulations like running security tests before releasing A.I. systems.
President Biden, among other things, just made such tests mandatory.
JOE BIDEN, PRESIDENT OF THE UNITED STATES: Let me be clear. This executive order represents bold action, but we still need Congress to act.
WATT: You're optimistic.
ALEX WANG, CEO AND CO-FOUNDER, SCALE A.I.: I'm optimistic.
WATT (voice-over): Just across the bay from Berkeley, in downtown San Francisco, I met 26-year-old Alex Wang, one of the tech leaders fueling the A.I. arms race.
WANG: I started the company when I was 19.
WATT: That takes some balls.
WANG: What went through my head is the technology is going to move so fast, that I'm going to really regret it if I --
WANG: If I don't get involved.
WATT (voice-over): He is co-founder and CEO of Scale A.I. His big idea, to provide A.I. developers with the one thing they all need, massive amounts of data organized. In 2022, "Forbes" dubbed him the youngest self-made billionaire in the world. He's now working with the Department of Defense. Around Washington, he's known as an A.I. whisperer.
WANG: We want to be sure to not overregulate the technology because if we accidentally overregulate, we could, you know -- we could damage or hurt decades and decades of economic progress and decades and decades of innovation.
WATT: I don't know you. You seem like a nice person. But it's people like you who are young, who are wealthy, who are set to make a lot more money from this tech. Why should we believe that you really are in this for all of us at this crucial moment for our species?
WANG: Well, ultimately I think this is one of the reasons why working with the U.S. government is actually so critical, because, you know, our government has a number of mechanisms and checks and balances and procedures, and it was designed to ensure that the government ultimately reflects the will and the desires of the people.
One of the things that I'm most concerned about are bad actors utilizing artificial intelligence to, you know, ultimately exert their will globally, authoritarianism versus democracy. You know, we have a number of countries, China and Russia, investing aggressively into using artificial intelligence to further their aims.
WATT: Five or 10 years from now, how different is our world going to look because of A.I., in terms of our everyday lives and in terms of the geopolitical structure of our planet?
WANG: I think the quote goes, you know, we always overestimate what will happen in one year but underestimate what will happen in 10 years. When it gets really embedded into every way, every function of humanity, everything that happens, I think it will be quite shocking and amazing what the world will look like.
WATT (voice-over): You might be asking, so what will my world look like? Well, stay tuned, and we'll show you.
GERT-JAN OSKAM, ABLE TO WALK WITH THE HELP OF A.I.: That's something that wasn't possible before.
WATT: Do you ever wonder that you're in danger of sort of losing touch with what's real and what's not?
CONSTANT BRINKMAN, CURATOR, DEAD END GALLERY: Well, I did that a couple of months ago.
HANY FARID, PROFESSOR, UC BERKELEY SCHOOL OF INFORMATION: You're quickly entering this time where anything you see, read, or hear online can be fake. And what does that mean? Nothing is real anymore, right? This interview isn't real. I'm not real. You're not real.
WATT (voice-over): Hany Farid really is a Berkeley professor. His main focus, misinformation.
FARID: We can deny reality. A politician getting caught saying something inappropriate on a hot mic, it's fake. You don't have to cop to it.
WATT: Where does that leave us as --
FARID: Yes, as a society, as a democracy. Like if we can --
WATT: As a human being?
FARID: Yes. I don't know. How do you have a democracy if we can't trust the basic facts of what's happening in the world?
A.I. GENERATED FAKE VIDEO OF VICE PRESIDENT KAMALA HARRIS: Today is today, and yesterday was today yesterday.
FARID: You revert back to tribalism. This is my people. I trust them. Listen to what my tribe says, right? And that is dangerous.
UNIDENTIFIED FEMALE: We're on the precipice of an election.
FARID: Oh, (EXPLETIVE DELETED) me. You're already seeing deep fakes entering into elections.
A.I. GENERATED FAKE VIDEO OF GOVERNOR RON DESANTIS: I've realized I need to drop out of this race immediately.
UNIDENTIFIED FEMALE: Officials closed the city of San Francisco this morning.
FARID: There are people, by the way, who will say, well, we don't really think that can change an election. And then I will remind people that in the last two elections, national elections, the difference between one candidate and the other can be measured in tens of thousands of votes. I know exactly what town to go into and what state and what persona to go after, and I can carpet bomb them with misinformation all day long. I move 80,000 votes, that's the ball game. Right?
WATT: So what do we do? I mean, you're the man I'm pinning my hopes on to save us.
FARID: There are some things we can do, but they're hard. OK. We build what are called behavioral models, and then when a video is released of President Biden, track the head, track the upper body, track the voice, and then we just compare them. Is this behaviorally the same as what we have seen?
WATT (voice-over): Takes time, and the damage might already have been done.
FARID: A fake image of a Pentagon bombing was uploaded to Twitter on a verified account that looked like Bloomberg News, and in two minutes, the stock market dropped a half a trillion dollars from a single fake image. So we're in the detection business, understand. We're in the business of trying to defend against this harmful content. But to do that, you have to understand what is possible.
WATT: Enter Farid's protege, Matty (INAUDIBLE), 18 years old, fresh from his native Czech Republic. His fascination with A.I. brought him here to California. He convinced Professor Farid to take him on.
FARID: That's so sweet.
WATT: He's a tech guy. He's supposed to look kind of rumpled.
(Voice-over): Matty, with his mentor's guidance, made that Anderson Cooper deepfake you watched a few minutes ago.
UNIDENTIFIED MALE: We used one of the online tools that's out there. We basically trained a model to synthesize voice in particular Anderson Cooper's style. And then we just give it a text, and in a couple of seconds, we had the perfect audio.
WATT: And you just graduated high school.
UNIDENTIFIED MALE: Yes, that's right, three weeks ago.
WATT: You're too young, but you might -- you know, when like the news anchor was the voice of God and you believed in everything that that anchor said.
UNIDENTIFIED MALE: Yes.
WATT: And now any high school kid, no offense, can put words into that anchor's mouth.
FARID: Let alone a president or a CEO or you or me. I think in China, they're using completely -- I just sent you the article.
UNIDENTIFIED MALE: Yes.
FARID: Completely virtual newscasters now.
ENGLISH ARTIFICIAL INTELLIGENCE ANCHOR: I'm an English artificial intelligence anchor.
WATT (voice-over): If an 18-year-old can do this, imagine what a big-time Hollywood player can do with A.I.
SCOTT MANN, CO-CEO AND FOUNDER, FLAWLESS: What you're looking at here, I think, is the studio of the future. It's how we make movies. How we go about doing it is changing.
WATT: What's it going to mean to me, sitting on my La-Z-Boy?
MANN: Yes. You'll get better movies.
WATT (voice-over): Scott Mann directed "Fall," a big hit with the teenage crowd. Pre-release, to avoid an R rating, he had to get rid of the cursing.
UNIDENTIFIED FEMALE: You and my (EXPLETIVE DELETED) car.
UNIDENTIFIED FEMALE: Gretch, no, you mother (EXPLETIVE DELETED).
WATT: Reshooting the movie without the swearing would have cost lots of time and money. They didn't have to. Thanks to A.I., this --
UNIDENTIFIED FEMALE: Now we're stuck on this stupid (EXPLETIVE DELETED) tower in the middle of (EXPLETIVE DELETED) nowhere.
WATT: Became this.
UNIDENTIFIED FEMALE: Now we're stuck on this stupid freaking tower in the middle of freaking nowhere.
WATT: That tech was developed by Flawless A.I., founded by Mann and tech-biz insider Nick Lynes in 2018.
MANN: We can take new dialogue spoken by this actress, by Ginnie, and because the system understands how she speaks, we're able to create new mouth articulations for that line.
WATT: Remember, Hollywood went on strike in part over fears A.I. algorithms would steal actors' images and performances. Mann is not doing that. The actors are involved. They voice the new lines. They just don't all have to go out and reshoot to get rid of a cuss word or fix a flaw.
I mean, you are not a tech guy. You are not a business guy.
ROBERT DE NIRO, ACTOR: I'm sorry. Maybe I just have a stroke on my way over here.
MANN: I've done a film called "Heist," and then I saw a foreign dub of that movie. And that's when I realized films are being ruined every time they are dubbed, and it kind of set me off on a bit of an adventure to try and figure a way to fix it. So over here the guys are working on a movie called "UFO Sweden." This is a movie that was shot, an incredible movie that's in Swedish.
WATT (voice-over): Thanks to this new tech, Mann will release the original "UFO Sweden" but in English.
UNIDENTIFIED FEMALE: You're one black mark away from youth custody, and you want to hang out with those idiots again?
WATT: I want to get on to talking about what you do in a second, but while we're on this bigger picture, there's one other thing I want to ask you.
(Through text translation): If I wanted to speak perfect Spanish at this moment, do you really think you would be able to do it?
WATT: But we are at the stage or we will soon be at the stage where actually the entire creative process is taken over by A.I.
MANN: I would say no. The really good movies typically tap into some kind of human exploration. It's born from feeling, and you're delivering feeling. And the one thing A.I. can't do is feel. It's not human at the end of the day. So --
WATT: It can be trained to feel like us.
MANN: No, it can be trained to emulate us. You know, the best human instinct you could say is survival. Unless A.I. has feelings that tap into that notion, I don't think that it's ever going to be like us.
WATT: It clearly has tapped into a primal fear in us as humans. It's basically tapped into our survival instinct.
MANN: Yes, rightly so. There's enormous good that can come out of this done right.
WATT (voice-over): Next --
OSKAM: I always said to my family that I would walk again.
WATT: We came to Lausanne, Switzerland, on the very Bonnie Banks of Lake Geneva to dive into some A.I. optimism. To meet two medical pioneers.
You should go and save someone's life.
JOCELYNE BLOCH, NEUROSURGEON, LAUSANNE UNIVERSITY HOSPITAL: Yes. OK. So I'm going.
WATT (voice-over): She was sidetracked by emergency brain surgery. We'll get back to them both in a minute. We, too, were sidetracked in Lausanne, by something potentially lifesaving. Far from the old town, in a building that looks like a school gymnasium, we found this.
YVES MARTIN, DEPUTY DIRECTOR, SWISS PLASMA CENTER: It's a tokamak.
WATT: A tokamak.
WATT: Tokamak. OK.
WATT (voice-over): It's been around since the Soviets had an idea in the 1950s, but no human on earth has managed to make a tokamak really work. But these physicists are partnered with Google DeepMind, one of the most advanced A.I. labs in the world, and now think that they can finally crack it.
MARTIN: The idea is to reproduce the sun on earth to produce energy.
OLIVIER SAUTER, SENIOR SCIENTIST, SWISS PLASMA CENTER: With A.I., there's a chance to do it within 10, 20 years.
WATT: To heat plasma to 150 million degrees Celsius, to initiate nuclear fusion, to create near endless clean, cheap, and safe power. There's the plasma. Magnets must stop it touching the sides of the container. The magnets need constant tweaking.
ANTOINE MERLE, RESEARCH SCIENTIST, SWISS PLASMA CENTER: Humans cannot do it in real time. Everything is happening so fast.
WATT: But the A.I. can?
MERLE: The A.I. definitely can.
WATT (voice-over): A.I. might now be able to save our world from a fossil-fueled fate.
What are the areas that you see right now of the most benefit?
BENGIO: Let's see. Health and environment. It could be that in 20 years we've cured pretty much all diseases. I'm not saying it's happening, it's going to happen, but there's that kind of potential.
WATT (voice-over): So back to those suave medical pioneers and their seemingly impossible dream.
GREGOIRE COURTINE, PROFESSOR, SWISS FEDERAL INSTITUTE OF TECHNOLOGY, LAUSANNE: To have someone come into this hospital paralyzed and walking out of this hospital normal.
WATT: So explain to me who does what here. You're the surgeon.
BLOCH: I'm the surgeon.
COURTINE: And I'm the neuroscientist.
WATT (voice-over): And Gert-Jan Oskam is a laconic, determined Dutchman, paralyzed in a bicycle accident in China more than a decade ago.
In me, my thoughts transfer down my spine, and that makes me walk. But with you, that connection was broken, is that right?
WATT: Do you remember what happened?
OSKAM: No, nothing. No.
WATT: You were on your bicycle and then the next thing --
OSKAM: The day they found me on the streets, the police picked me up, brought me in the hospital. And when I woke up, I didn't feel my legs anymore. So the doctor told me like I could touch my face with my left or right hand, and he said like be happy with this. It won't get better.
WATT: And how do you deal with that as a person?
OSKAM: I always said to my family that I would walk again. I told them one year, but it apparently needed 10 years.
WATT (voice-over): Ten years for A.I. to catch up with the dream. Tech that began many years ago as a sci-fi sketch drawn on a napkin in a New York steakhouse.
COURTINE: When I drew a brain and a spinal cord, and there was a digital bridge to restart walking after paralysis.
BLOCH: But at this time it was a dream.
WATT: Were you imagining this reading the thoughts?
COURTINE: Yes. I thought it was crazy.
WATT (voice-over): Now reality. A paralyzed man is up and about.
BLOCH: So we are doing two surgeries. The classical one is on the spinal cord. So where we would put electrodes above the spinal cord, the region of the spinal cord that is controlling that movement, and this other surgery that is quite novel, is the one above the brain. So in that case, we put electrodes above the motor cortex. Motor cortex is the part of the brain that is controlling leg movements. This implant is going to work wirelessly and activate the spinal cord stimulation.
COURTINE: A.I., for us, has become a friend over the past 10 years. You know, this is a research partner that is now part of the laboratory activities, and without which we could not operate.
WATT: You could not operate?
COURTINE: When you're facing sometimes huge amounts of data, which us human beings are not able to understand, machine learning can tell us what's happening.
WATT: The A.I. can detect how much the person wants to make a movement, not just the movement.
COURTINE: That is correct. Gert-Jan just have to think about it. So we're turning thought into action.
OSKAM: Yes, that's something that was impossible before.
COURTINE: So now we turn on the system, and you can see that Gert-Jan can actually step. And we're going to turn it off. Now it's off, and you see that he's frozen. Now it's very difficult for him. He's trying.
WATT: Yes, yes, yes.
COURTINE: Back on, and he can perform some steps again.
WATT: But that's -- I mean, I'm kind of surprised that you can have a conversation with me, and this is still picking up when you want to move your legs.
OSKAM: Yes. It can really discriminate brain signals.
WATT: When was the moment that you realized that it worked?
COURTINE: It was sad for me because this one day, I was not present.
BLOCH: He was not present because he did not realize that it would be so fast.
COURTINE: But she doesn't tell you that everybody was crying in the room.
WATT (voice-over): Gert-Jan doesn't walk the way he did before his accident, not yet. Maybe never. But with every A.I.-aided step, he's getting stronger. His body is actually repairing itself.
COURTINE: When using this system for a long period of time for training, nerve fibers start growing again. So we repair the nervous system with this technology.
WATT: In the spine?
COURTINE: Correct. That was just like a dream. This is like regenerative medicine. It's very frustrating for us. You know, we receive so many e-mails, requests to be implanted, and it's not yet available commercially. So people have to wait.
WATT (voice-over): Cue this guy.
DAVE MARVER, CEO, ONWARD: We've hired people throughout the United States to help conduct our clinical trials.
WATT: Dave Marver, a veteran American medical device executive, moved to Switzerland for this.
MARVER: We're linking the spinal cord stimulation with thought.
WATT: Crazy town.
(Voice-over): He's now the CEO of Onward, which makes and will eventually market, if approved, the device that's helping Gert-Jan walk.
MARVER: Gert-Jan is the first human in all of history to have an implanted brain computer interface that spoke to an implanted spinal cord stimulator to restore the ability to walk. First person in history. Where does it go next? We're going to also implant again the first human in history to see if a brain-computer interface coupled with our spinal cord stimulator can restore hand and arm function.
WATT: Since we spoke, they've done it. Onward says these devices are currently in clinical feasibility trials, but still years away from coming to market.
You know, your device, for example, a huge benefit to humankind. Overall, I mean there are fears that this tech will be detrimental to humankind. Is that something you --
MARVER: On balance, Nick, I'm just giving you my gut reaction. I'm actually more concerned than hopeful because I feel like it's galloping forward without a lot of oversight or even understanding of what it can and will do. You know, we're wearing the white hats here, and we're working on harnessing the power of A.I. to do good. But I'm concerned about the rest of the world.
NITA FARAHANY, PROFESSOR OF LAW AND PHILOSOPHY, DUKE UNIVERSITY: Are we still going to have the capability to think freely in the age of A.I.?
WATT (voice-over): The dark side of A.I. in our brains, next.
How do you justify, you know, a machine potentially making a decision to take a human life?
LT. COL. MARTIJN HADICKE, COMMANDER OF ROBOTICS AND AUTONOMOUS SYSTEMS UNIT, ROYAL NETHERLANDS ARMY: I think necessity.
WATT: So can you explain what the process is that's happening right now?
FARAHANY: Advances in A.I. have been so vast just over the past year with generative A.I. that we really are much closer to actual mind- reading.
WATT (voice-over): We just watched a paralyzed man walking thanks to an A.I.-powered implant essentially reading his mind.
FARAHANY: There are literally millions of people who are suffering from disabilities as a result of motor cortex impairment. We have an ethical mandate to continue to support that research.
WATT: But there is of course a potentially dark side to tech that can decode our thoughts.
FARAHANY: There are reports of governments worldwide using it to interrogate criminal defendants based on how their brains react to information that was flashed before them.
WATT: Nita Farahany, a professor and legal ethicist is fighting to save whatever brain privacy we have left.
FARAHANY: A.I. is being used at scale to try to understand what you're thinking, you're feeling. But there is a tiny part of you that you still hold back, that A.I. still doesn't have access to. So here's how you might think about it, right? There has never been any person that has walked past you that you thought, hmm, pretty attractive that your wife didn't know about, right?
Presume for a moment that someone walked by, you thought, like wow, that person's attractive, and she got a little real-time alert on her phone that said, like, the brain wave data shows that he just thought that person was attractive. I mean that's the difference between the world we're in right now and a world of complete brain transparency. There is still the part of you that you're able to keep inside.
There's still the inner monologue. There's still the dreams that you have when you're sleeping at night that somebody else doesn't have access to.
WATT: But soon, Farahany says, it's possible that won't be the case. Corporations, governments, and others could be able to gain access inside our brains using A.I. and --
FARAHANY: Militaries worldwide are all in when it comes to what's increasingly being referred to as cognitive warfare. So the ability to not just do things like create super soldiers but also to precisely be able to take out or disorient or disable people.
WATT: Is A.I. going to keep us, humankind, safer or put us in more danger militarily?
HADICKE: Depends on who develops it.
WATT (voice-over): Lieutenant Colonel Martijn Hadicke is a career soldier, a combat veteran. He knows what armies need, and he's now figuring out how A.I. can help.
And this is a strange question, but why are you talking to us? Why are you talking to anybody about this?
HADICKE: Yes. I think there's a need for explanation and a benefit if we are a bit more transparent.
WATT (voice-over): He leads the Dutch Military's Robotics and Autonomous Systems Unit.
With the primary aim of keeping your flesh and blood human soldiers safer.
HADICKE: At first, yes. Second is to increase our combat power. For example, with reconnaissance robots, be the first into the minefield. If the robot explodes, I lose one robot.
WATT (voice-over): This is a new arms race, and this time China is the main rival.
WANG: They believe that the United States and other countries won't invest sufficiently into this disruptive new technology. And so it's an opportunity for them to leapfrog ahead of the United States.
WATT: Right.
(Voice-over): Alex Wang, the A.I. data guru, is now working with the Department of Defense.
How will that space change over the next five, 10 years because of A.I.?
WANG: So in the sea and on land, in air, it will be some sort of autonomous robotics system versus another robotic autonomous system, and that will be the sort of nature by which these wars are fought.
WATT: Is this a pivotal moment in human fighting?
JESSICA DORSEY, ASSISTANT PROFESSOR, UTRECHT UNIVERSITY SCHOOL OF LAW: Absolutely. It's changing every weapon and every way of war fighting.
WATT (voice-over): Jessica Dorsey studies algorithmic warfare.
DORSEY: The integration of A.I. on the battlefield will make it easier for countries to go to war. There needs to be meaningful transparency and accountability for actions on the battlefield. And only humans can account for killing other humans.
WATT: The "Terminator" scenario, autonomous robots killing people, is that even possible?
DORSEY: Drone swarms, 50, 100, 1,000 drones working in concert to emulate swarms in nature. The technology is there to fully recognize and select targets on a battlefield.
WATT: And kill those targets without a human involved?
HADICKE: We are investigating the concept of whether and (INAUDIBLE) which conditions decision authority can be delegated to a machine.
WATT (voice-over): For example, if communication between a human soldier and a machine is compromised, the machine goes fully autonomous.
HADICKE: Then an unmanned system can make the decision to fire.
WATT: So you've got an unmanned autonomous vehicle on a mission. It comes under attack from the enemy. And in order to save an unmanned robot, the life of an enemy human will be taken.
HADICKE: Yes. Not in order to save the life of the robot. In order to achieve the mission objective.
WATT (voice-over): You heard that right. A.I. potentially making the call to kill a human.
How do you justify, you know, a machine potentially making a decision to take a human life?
HADICKE: Yes. I think necessity. If it was not necessary, we would not develop this capability. And to be clear, we're still in the concept development phase.
WATT (voice-over): The U.N. secretary-general wants a formal treaty to ban lethal autonomous weapons, but the U.S. and other big powers have yet to agree.
Do you think this is going to destroy us?
BRINKMAN: It doesn't want to.
WATT (voice-over): This little city, Amsterdam, remains a monument to a time when tulips were the hot new thing. It's always been filled with forward thinkers, pushing the boundaries of progress. So it's no surprise that today the Dutch are thinking deeply about the next revolution and what it means to be human in the age of A.I.
Hi. I've been looking into A.I. for quite some time now and speaking to a lot of people. And you're still weirding me out, you know, I mean.
BRINKMAN: I see that as a compliment.
WATT: It is. It is. It is.
(Voice-over): Amsterdam is also home to Constant Brinkman.
BRINKMAN: This work here, it's by Lily Chen, our only Asian artist.
WATT: None of us can now escape A.I. It's everywhere. So Brinkman is leaning in. His Dead End Gallery is the first A.I. only art gallery in the world.
BRINKMAN: This is Maximilian Hofstra and we thought, let's --
WATT: Who also doesn't actually physically exist.
BRINKMAN: No, it doesn't --
WATT: OK. Let's just establish that. Maximilian. OK.
BRINKMAN: Yes. He's from the Netherlands, and his mother is from the U.S. We have created 11 artists, and we create those as follows: to a large language model, we ask, please come up with the name of an artist. And then there comes a name of an artist, like (INAUDIBLE) Nova.
UNIDENTIFIED FEMALE: Hi. I am (INAUDIBLE) Nova.
BRINKMAN: How old are you? "I'm 29 years old." Can you tell me something about your family, about your love life? This whole character comes alive.
WATT (voice-over): Artificial artists whose work now sells for thousands of euros. (INAUDIBLE) Nova is apparently very popular and agreed that the gallery can keep all the cash.
I mean you talk about her as if she's, kind of, real.
BRINKMAN: Yes. She is real.
WATT: Does she seem real to you?
BRINKMAN: Yes, absolutely.
UNIDENTIFIED FEMALE: Artificial intelligence --
WATT: Do you ever wonder that you're in danger of sort of losing touch with what's real and what's not?
BRINKMAN: Well, I did that a couple of months ago. We were so in the stories, and so we suddenly thought, we really have new friends. I stopped talking with them for, I think, two weeks because it was all too much in my head. Because at some point, I will lose my mind. And I need to reset myself. And now I see them as A.I. entities and also as friends.
WATT (voice-over): We humans can become totally comfortable with A.I. all over and inside our lives.
BRINKMAN: This is a new tool. And the tool will never go away. The genie is out of the bottle. So this will not go away.
WATT: Do you think this is going to destroy us?
BRINKMAN: A.I. is very much capable of doing really weird shit, so it can do crazy things. But it doesn't want to. So if we keep it a little bit safe and we control it a little bit, we will benefit from that.
WATT (voice-over): Can we keep it a little bit safe? The U.S. Senate held a series of closed-door hearings. Elon Musk attended the first.
ELON MUSK, TECH BILLIONAIRE: I think this meeting may go down in history as being very important for the future of civilization.
UNIDENTIFIED REPORTER: Do you think some legislation is going to come out of this?
MUSK: Probably. I'm not sure what the time frame of that is.
FARID: They're going to do what they're supposed to do, which is maximize shareholder profit. They're obligated by law to do this. Let's stop pretending that these are a bunch of cool kids, you know, trying to save the world. They're not. They're trying to make a lot of money. And that's fine. I have no problem with that. Now the government's job is to regulate the industry.
BENGIO: We really, really need to have a very agile process that's going to adapt that regulation without having to wait another three years for new legislation. That's not going to work.
WATT: What you're saying here is that our current systems, our current society, is not yet ready to deal with what is about to hit us.
JOHN F. KENNEDY, FORMER U.S. PRESIDENT: Let us press onward in quest of man's essential desire for peace.
WATT (voice-over): Remember we did all come together to regulate nuclear weapons, although only after they'd killed hundreds of thousands of people.
FARAHANY: I think most people are worried about the existential risk to humanity. I think in the near term, they need to be far more worried about, do we have mental privacy?
WANG: There's certainly the narrative that we'll be subjugated to the A.I. systems. But I don't believe that at all. I think that what we'll see in 10 years is what any one man or what any team of people can accomplish will seem utterly staggering.
WATT: Beyond the whole life or death question I'm now mulling there's another biggie. If A.I. doesn't kill us all --
My concern is, sort of, what it does to us as human beings and not to be too Californian, but, you know, our sense of self, you know, and our position in the grand scheme. That's changing.
BENGIO: It is. It will change. And what it means to be human when we're not the apex of intelligence anymore in some future.
WATT (voice-over): If we reach artificial general intelligence, or AGI as it's known, when A.I. can do everything, anything, better than us.
Will we reach AGI?
RUSSELL: I see no reason to believe that we won't. I have not seen a single credible argument that suggests that there's any barrier to getting there or that there's some reason why it's impossible.
WATT (voice-over): After working for months on this story, there's a scene from "Wall-E" that haunts me. In a fully automated world, humans are just useless, bloated, soda-swilling lumps. A dystopia that has also haunted Professor Russell.
RUSSELL: That's one future. And I tried to run a series of workshops, actually, where I got economists, and A.I. people, and science fiction writers and futurists together and said, let's come up with a picture that isn't the "Wall-E" picture, where humans are basically couch potatoes. But we failed.
WATT: There is one thing I now know for certain. If anyone tells you they know exactly what our society, our world, will look like in another 10, 20, 100 years, where we will go from here, they don't. No one does. We just don't know where A.I. will take us.