Fareed Zakaria GPS

Artificial Intelligence, Its Promise and Peril; Interview With Artificial Intelligence Pioneer Geoffrey Hinton; Interview With Academy Award-Winning Filmmaker James Cameron. Aired 10-11a ET

Aired September 03, 2023 - 10:00   ET

THIS IS A RUSH TRANSCRIPT. THIS COPY MAY NOT BE IN ITS FINAL FORM AND MAY BE UPDATED.


[10:00:21]

FAREED ZAKARIA, CNN HOST: This is GPS, the GLOBAL PUBLIC SQUARE. Welcome to all of you in the United States and around the world. I'm Fareed Zakaria.

(Voice-over): Today, a GPS special. "Artificial Intelligence, Its Promise and Peril." We'll bring you an in-depth look at the brave and frightening new world that faces us all today.

UNIDENTIFIED FEMALE: It's nice to meet you.

ZAKARIA: The startling powers of this technology exploded into the public consciousness late last year.

ABBY PHILLIP, CNN HOST: ChatGPT has basically exploded in popularity.

SCOTT PELLEY, "60 MINUTES" HOST: Machines that could teach themselves superhuman skills.

UNIDENTIFIED MALE: If this technology goes wrong, it can go quite wrong.

UNIDENTIFIED MALE: ChatGPT.

UNIDENTIFIED FEMALE: ChatGPT.

UNIDENTIFIED FEMALE: ChatGPT.

ZAKARIA: We start with a look at the past, the present and the unknown future of artificial intelligence with Eric Schmidt, the former CEO and chair of Google who has long been interested in AI.

ERIC SCHMIDT, FORMER EXECUTIVE CHAIRMAN AND CEO, GOOGLE: So what happens if it can build a pathogen and it ends up in the hands of an Osama bin Laden type of person and that pathogen can kill a million people?

ZAKARIA: Then the absolute worst-case scenario. AI run amok. The extinction of us all. The human race, that is. I'll talk to a man known as the godfather of AI who left a top job in tech so he could warn us of the risks.

GEOFFREY HINTON, ARTIFICIAL INTELLIGENCE PIONEER: And then there's the problem of if these things get smarter than us, which I believe they will, then the question is, will we be able to control it?

ZAKARIA: On the flip side, the beauty of what AI can help humans create. James Cameron made his AI-inspired "Terminator" films long before artificial intelligence became a buzzword. Now he uses AI to make his new productions ever better. He'll explain how it all works.

JAMES CAMERON, FILMMAKER: We're taking an actor's performance and translating it onto a CG character and we want it to be as accurate as possible but I think where it's going really feeds into one of our greatest social ills right now which is that we can't trust what we see.

ZAKARIA: Then, an unusual, fascinating application of AI. I've been mesmerized by a piece of artificial intelligence artwork at New York's Museum of Modern Art. A human artist fed the computer data from 200 years of paintings, drawings, sculptures and more from the museum's collections. And this is the ever-evolving result. I'll talk to the human beings behind the work.

UNIDENTIFIED MALE: Art will be for anyone, any culture, any background. But I'm trying to find this language of humanity.

ZAKARIA (on-camera): And let's get started.

In 1945, the brilliant computer scientist Alan Turing wrote that computers would someday be able to play very good chess. The term artificial intelligence hadn't even been coined yet. That would take another decade. Today the Oxford English Dictionary defines AI as the capacity of computers or other machines to exhibit or simulate intelligent behavior. And in the meantime, Turing's prophecy has come true, and then some.

In 1997, IBM's supercomputer Deep Blue beat the then-world chess champion Garry Kasparov in a six-game match. Twenty years later, in 2017, Google's AlphaGo beat the world's number one player of Go, an even more complex board game to master, especially for a computer.

But it's not all fun and games. The meteoric rise of ChatGPT and other AI programs in recent months has shown us that AI can pass graduate-level exams, score in the top 10 percent on the bar exam, write legal briefs, find cancerous growths better than radiologists, and much more.

I wanted Eric Schmidt to help us understand it all better. The former Google CEO and chairman joined forces with the legendary diplomat Henry Kissinger and the MIT computer scientist Daniel Huttenlocher to write a book titled "The Age of AI and Our Human Future." He's also been working to keep America at the forefront of the field by serving as chairman of the National Security Commission on Artificial Intelligence.

I should note I'm a senior adviser at Schmidt Futures, his philanthropic initiative. To begin, I wanted to understand what the next few years could bring for AI applications.

[10:05:02] SCHMIDT: Many people believe the computer will be able to recursively self-improve. In other words it'll start to get better on its own. That's a very, very big change in history. Up until now, the tools that we as humans have built have been under our control. Maybe poorly but they have been under our control. There is a point many people think that it will be within the next four to five years, some people think sooner, some people later, I think maybe five years, where the system will be able to learn something new and act on it.

This is called tool use. There is indeed a recent paper from DeepMind on extreme risks, and it goes through some speculation of what would happen. Imagine if one of these things learns how to get access to weapons. Clearly we don't want that.

ZAKARIA: Right. So the canonical example that people use when talking about the dangers of AI is you tell the machine to make paper clips and it says sure and it makes paper clips and runs out of material to make paper clips with and then it starts turning other things into material that can make paper clips. And eventually it runs out of that, and then it starts using human beings and killing human beings, because it is trying to just fulfill this one objective.

How -- is that too simplistic? How should we think about this?

SCHMIDT: It's an easy example of how you can make a mistake. That's called a wrong objective function. So the way you would actually work through the paper clip example, you could say: here are some rules. You can't use more energy than is available to you. You can't harm any people. You have to make money. And by the way, we want you, with those constraints, to make as many paper clips as possible.
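To make the "wrong objective function" point concrete, here is a minimal sketch of the difference between optimizing a bare objective and optimizing only over plans that satisfy hard rules. Every name and number below is invented for illustration; no real AI system is being described.

```python
# Toy sketch of a constrained objective, not any real AI system.
# The "wrong objective" maximizes paper clips alone; the constrained
# version refuses any plan that violates hard limits.

def paperclips_made(plan):
    # Hypothetical: output scales with material and energy used.
    return min(plan["material_kg"] * 50, plan["energy_kwh"] * 20)

def violates_constraints(plan, energy_budget_kwh=1000):
    return (
        plan["energy_kwh"] > energy_budget_kwh  # no more energy than is available
        or plan["harms_people"]                 # can't harm any people
        or plan["profit_usd"] < 0               # has to make money
    )

def best_plan(candidate_plans):
    # Optimize the objective only over plans that satisfy every rule.
    allowed = [p for p in candidate_plans if not violates_constraints(p)]
    return max(allowed, key=paperclips_made, default=None)

plans = [
    {"material_kg": 10, "energy_kwh": 500,  "harms_people": False, "profit_usd": 100},
    {"material_kg": 99, "energy_kwh": 5000, "harms_people": False, "profit_usd": 900},
    {"material_kg": 80, "energy_kwh": 900,  "harms_people": True,  "profit_usd": 999},
]
print(best_plan(plans))  # picks the first plan; the other two break the rules
```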

Now in human society, you start to think about all the rules you as a human have to follow to behave: there are laws, there are all these cultural norms, you have to use language, you have to stay within the limits of human behavior. One way to think about it is as a constitution. So one company, this is Anthropic, decided to write a constitution for the system, wrote its own view of what the constitution should be, and fed it in.
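Anthropic's published method, "Constitutional AI," trains a model with self-critique against written principles; the real procedure is far more involved than anything shown here. As a rough sketch only, with critique and revise as stubs standing in for model calls:

```python
# Minimal sketch of a constitution-style critique-and-revise loop,
# loosely in the spirit of what Schmidt describes. The principles and
# the "HARMFUL" marker are invented; critique and revise are stubs.

CONSTITUTION = [
    "Do not help anyone harm people.",
    "Stay within the limits of the law.",
]

def critique(draft, principle):
    # Stub: a real system would ask a model whether the draft
    # violates the principle. Here we just flag a marker string.
    return "HARMFUL" in draft

def revise(draft, principle):
    # Stub: a real system would ask a model to rewrite the draft
    # so that it complies with the principle.
    return draft.replace("HARMFUL", "safe")

def apply_constitution(draft):
    for principle in CONSTITUTION:
        if critique(draft, principle):
            draft = revise(draft, principle)
    return draft

print(apply_constitution("a HARMFUL plan"))  # -> "a safe plan"
```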

So there are ideas about essentially limiting either the knowledge or behavior of these models to keep them in a human space. I'll give you another example. Imagine you go and you say to the computer, and this is when it can recursively self-improve, this is maybe -- this is speculation maybe five years from now, you say work really hard, start right now, learn everything. And it goes through French and it goes through biology and it goes through science and so forth.

And at some point it starts asking itself questions it doesn't know the answer to, so it starts e-mailing physics professors and things like that. No problem. And then it realizes it needs more power, so it steals the power from the hospital next door. So, you know, in all of these cases, there's an implied permission set which has to be written down and controlled.

ZAKARIA: How would you respond to somebody who says, look, if AI is so great, how come we still don't have self-driving cars? Things can be very much more complicated to actually execute than they look.

SCHMIDT: The examples that we're celebrating right now all have errors in them. The fact that a reporter had the computer fall in love with him and try to convince him to leave his wife for the computer is very humorous. Right? No one died in that scenario. And by the way, he didn't leave his wife for the computer, so we're clear. Anything involving human health is very different. Right? You want a human flying the airplane, watching the autopilot.

It's going to be a while before we have universally self-driving cars. It's just a really hard problem, especially because of the mixing. But it's not because we don't know how to do it. It's because of the tolerance for risk, it's --

ZAKARIA: The error rate has to be close to zero.

SCHMIDT: It has to be really low.

ZAKARIA: What are you worried the most about? I mean, I've heard people talk about the dangers of AI in war. I've heard people talk about the dangers of AI in medicine. When you think about it, for you, what's the scary part?

SCHMIDT: I'm now convinced that what are called the frontier models, from the four big companies that are now spending billions of dollars on these things, are going to be regulated. They're too powerful, they're too visible, they're too dangerous. There'll be rules from the White House and the Congress and other countries. The E.U. is already doing this. Britain has done this. China has done this. They're going to be regulated for this reason, for safety.

What really worries me is that there is diffusion from these very, very powerful models to the next tier, which are called open source models. A famous example here is called LLaMA, L-L-A-M-A. And it's roughly 10 times smaller in size, cost and so forth. But it looks like if you do something in the frontier model, within two or three years the technologists can figure out a way to do it much more cheaply.

[10:10:07]

You're building a system where you have open source, which means anyone can get access to it, and you don't know what it can do. So what happens if it can build a pathogen and it ends up in the hands of an Osama bin Laden type of person, and that pathogen can kill a million people? So you say, no problem, we'll put what are called guardrails or alignment on that. We'll prevent it from being misused.

If you give me all the weights, that is, open source, and I'm evil, which hopefully I'm not, I can strip those constraints out and return it to its bad uses.

ZAKARIA: Next on GPS, more with Eric Schmidt. I'll ask him to explain how exactly he would rein in AI. Can you do it? His answer, when we return.

(COMMERCIAL BREAK)

ZAKARIA: And we are back with the former Google CEO Eric Schmidt.

What's the solution?

[10:15:01]

Is there -- you know, you talked about writing a constitution, essentially a kind of software architecture that puts constraints on AI. What do you think? What do you propose as a solution?

SCHMIDT: I am concerned about the extreme risk, that is, the existential risk, of this. I'm concerned that these systems have polymathic capabilities that will allow somebody who does not have a PhD in biology and who is evil to do something that could really harm people. That's my primary concern.

There are plenty of other things to complain about: the copyright issue, misinformation, the dog-ate-the-homework kinds of problems. These are real problems that people are concerned about. But those are not extreme risks. They won't hurt 10,000, 20,000 or 30,000 people. I think the initial threats are primarily in biology and cyber, and of course misinformation and so forth.

Eventually we're going to have a situation where these systems do what is called stepwise refinement. They can't do this now. But basically they can say: here are the steps to build a recipe. Here are the steps to solve a problem. Here are the steps to build a bomb. At the point at which it can do steps, at each step it's doing a little bit of thinking. Not our kind of thinking, but its own, to choose the next step.

That's the beginning of consciousness. At the point at which those steps are put together, you're going to have superintelligence. There is a scenario, many people believe, where once you have one superintelligence, it could find the others. And in that scenario, it can develop the ability to speak to itself in a language we can't understand. That is uncharted territory for humanity and we need to prevent that.
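A rough sketch of the "stepwise refinement" loop Schmidt describes, where the system does a little bit of "thinking" after each step to choose the next one. The planner here is a hard-coded stub standing in for a model call; it is illustrative, not any real system.

```python
# Minimal sketch of stepwise refinement: an agent loop in which, after
# each step, the system does a small planning pass to pick the next step.
# propose_next_step is a stub; a real system would query a model here.

def propose_next_step(goal, history):
    # Hypothetical planner: returns the next step, or None when done.
    script = ["gather ingredients", "follow recipe", "check result"]
    return script[len(history)] if len(history) < len(script) else None

def run(goal, max_steps=10):
    history = []
    for _ in range(max_steps):
        step = propose_next_step(goal, history)  # the per-step "thinking"
        if step is None:                         # planner decides it's done
            break
        history.append(step)
        print(f"step {len(history)}: {step}")
    return history

run("bake bread")
```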

ZAKARIA: Do we have the ability to put into this constitution or software architecture essentially a kill switch? Or is the "2001: A Space Odyssey" scenario correct, in which the computer figures out a way to override the kill switch?

SCHMIDT: You can always unplug them. The standard joke is, at the end of the day, this thing can be doing whatever it's doing and there's going to be a guard with a machine gun to protect the computer and another guard who has only one function, which is to turn it off on command from the president. And that's probably the eventual state.

There is good news before everyone gets too worried about this. The kind of damage that I'm talking about will be done by large teams in very large systems. There won't be very many of them. So my own view is that the militaries and national security around the world will be monitoring them.

Today if you launch a rocket of any kind, you know, for a satellite or what have you, there is a process to let every government know that it's going to happen, because that way they know you're launching a satellite or what have you, not a deadly missile. And then they use that information to tune their observation systems. You'll see something similar. These unfettered systems that are so powerful, unmanaged and unmonitored, would be too dangerous without the monitoring.

ZAKARIA: So in a sense, what you're describing is just as we developed a kind of framework of controls for nuclear weapons, and the president having that ultimate control with the football, there may be a second football as it were, a second set of constraints, this time on artificial intelligence?

SCHMIDT: There will be an internet monitoring group in every country and it will be monitoring for these things. There are many people who believe that the only way to fight offensive AI is with defensive AI, because they're so fast. So you can imagine lots of defensive network systems that are watching for this, making sure there's nothing awry and responding very quickly. You could imagine an automated kill switch in that moment, because turning it off is not necessarily offensive work.

ZAKARIA: Does this leave you excited or scared?

SCHMIDT: I forgot to say the most important thing. Can you imagine the development of an AI doctor for the world? An AI tutor for every person in the world? Can you imagine solving every problem in plastics, materials science, power, energy density, solving climate change? The overwhelming benefit of intelligence, we need to get there and not kill ourselves in the process.

But I want the benefits. I think that society will be so much richer, so much better educated, so much more powerful as humans because of these tools. We just have to make sure that these edge conditions such as the extreme risks are kept under control.

ZAKARIA: Eric Schmidt, a pleasure.

SCHMIDT: Thank you.

ZAKARIA: Eric Schmidt just told you all about the potential upsides of artificial intelligence. But my next guest who has been called the godfather of AI has deeper concerns about the existential risks that AI poses. Hear from him in just a moment.

(COMMERCIAL BREAK)

[10:24:10]

ZAKARIA: When ChatGPT burst into the public consciousness late last year, its abilities stunned the world. Headlines blared that it was able to pass the bar exam and hold human-like text conversations. It writes computer code, term papers and even Shakespearean iambic pentameter. And it is not just that one program. Google, Microsoft and many other companies have their own artificial intelligence software.

People have been fascinated and frightened. The fright was heightened in May when more than 350 computer scientists and tech executives signed on to a one-sentence statement that said, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

One of the signatories is the man who has been called the godfather of artificial intelligence, Geoffrey Hinton.

[10:25:06]

Hinton left his job at Google so he could freely discuss the risks of AI and I wanted to ask him about those.

Geoffrey, welcome.

HINTON: Thank you.

ZAKARIA: When did you start to go from being exhilarated about all this to worrying?

HINTON: Really only a few months ago. So I -- I mean, I was always worried about things like: what would happen to the people whose jobs were lost to an AI? And would there be battle robots? And what about all the fake news it was going to produce? And what about the echo chambers being produced by getting people to click on things that make them indignant?

All those worries I was worried about. But the idea that this stuff would get smarter than us and might actually replace us, I only got worried about a few months ago when I suddenly flipped my view. My view had been: I'm working on trying to make digital intelligence by trying to make it like the brain, and I assumed the brain was better. We're just trying to sort of catch up with the brain. I suddenly realized maybe the algorithm we've got is actually better than the brain already. And when we scale it up, we'll get things smarter than us.

ZAKARIA: So when you think about, you know, the concerns about AI, how would you describe them very simply to somebody? What is it that you worry about?

HINTON: So I would distinguish a bunch of different concerns. There is what I call the existential threat, which is about whether they will wipe out humanity. That's definitely a threat to humanity's existence. The other threats aren't existential in the same sense, however the word is used. They are very bad, like making a lot of jobs much more efficient by getting chatbots to do them instead of people.

There'll be a huge increase in productivity, and the big worry is that this huge increase in productivity, which should be good for us, will cause the rich to get richer and the poor to get poorer, and that's going to be very bad for society. Then there are things like battle robots, where obviously defense departments would like to have robots that replace soldiers. That's going to make it politically much easier to start wars. Then fake news, where it's going to be very hard to know what's true. And there's the division into these warring camps by the big companies trying to get you to click on stuff that will make you indignant, and so you get these two different echo chambers.

ZAKARIA: And these are the small problems.

HINTON: Those are the small problems. Those are more immediate and they're not small problems at all. They're huge problems, but they don't involve the end of humanity. So I don't call them existential. And then there is the problem of if these things get smarter than us, which I believe they will, in many areas, and in not too long. Like, you know, not in 100 years.

So I wish we had a simple solution. Like with climate change, there is a simple solution. You stop burning carbon and it will take a while but you'll end up OK. And it's politically unpalatable for the oil companies. But if you stop burning carbon, you'd solve the problem. Here there isn't anything like that. The best people can come up with, I think, is that you try and give these things strong ethics.

The one advantage we have is that they didn't evolve. We made them. We evolved, and we evolved in small warring tribes of hominins; we wiped out 21 other species of hominins because we're very competitive and aggressive. And these things don't have to be like that. We're creating them. Maybe we could build them with strong ethical principles wired in.

ZAKARIA: And you could do that with the algorithms? Because I noticed --

HINTON: Maybe.

ZAKARIA: I notice when you ask ChatGPT a question, say about homosexuality, it gives an answer that is clearly curated in a way to be thoughtful, to be, you know, not to reflect every crazy view about it. But, you know, kind of -- politically correct may be too strong but it's a sensitive answer.

HINTON: Yes.

ZAKARIA: So there is some shaping that takes place. If you ask it how do you build a nuclear weapon, it says I won't tell you that.

HINTON: But if you've ever written a computer program, you know that if you've got a program that's trying to do the wrong thing and you're trying to make it do the right thing by putting guardrails around it, it's a losing proposition, because you have to think of every way in which it might go wrong. It's much better to start with ethical principles and say: you're always going to follow these principles. But it's going to be hard because, for example, defense departments want robots that will kill people. So that seems to conflict a bit with putting ethical principles in. There is one piece of good news, which is that nuclear weapons were an existential threat, and so even during the Cold War, Russia and the United States could cooperate on trying to prevent a nuclear war because it was clearly bad for both of them. And with this existential threat, unlike all of the other threats, if you take the U.S. and China, and Europe and Japan and so on, they should all be able to agree: we don't want them to wipe us out. And so maybe you can get cooperation on that.
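Hinton's "losing proposition" can be illustrated with a toy deny-list guardrail: it blocks the phrasings its authors thought of and misses the rewording. The strings below are invented; real guardrail systems are far more sophisticated, but the structural weakness he describes is the same in spirit.

```python
# Toy illustration of Hinton's point: a deny-list guardrail has to
# anticipate every bad phrasing, so it keeps losing to rewordings.

BLOCKED = ["build a bomb", "make a weapon"]

def guardrail_allows(prompt):
    # Block only prompts containing a known-bad phrase.
    return not any(bad in prompt.lower() for bad in BLOCKED)

print(guardrail_allows("How do I build a bomb?"))           # False: caught
print(guardrail_allows("Steps to construct an explosive"))  # True: slips past
```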

[10:30:27]

ZAKARIA: All right. Well, such a pleasure.

HINTON: Thank you.

(END VIDEOTAPE)

ZAKARIA: Next on GPS, how will artificial intelligence change the way movies are made? Well, I'll ask the great director James Cameron of "Avatar" fame next.

(COMMERCIAL BREAK)

ZAKARIA: We've talked about A.I.'s effect on war and on society, but I wanted to delve deeper into an industry where it could have lasting seismic consequences, the movies. Consider this.

[10:35:00]

Tom Hanks is shooting a film right now that will reportedly use artificial intelligence to make him appear younger. The technology can sift through the proliferation of images of Tom Hanks from his youth and actually generate brand-new content. As Hanks himself said on a podcast, "I could get together and pitch a series of seven movies in which I would be 32 years old from now until kingdom come."

The impact of this technology on the film industry could truly be enormous. So, I wanted to talk about A.I.'s role in film with the director who has been for years at the forefront of not just using new technology in his films but also imagining what the future of technology has in store for society. I spoke with James Cameron, the director of the "Avatar" franchise and many other films.

(BEGIN VIDEOTAPE)

ZAKARIA: So, when you look at all this technology, because you're so immersed in it, does it excite you, does it scare you?

CAMERON: I'd say right now I'm a little more scared than I am excited. We've used a lot of smaller A.I. tools, very specifically targeted, in the development of our Avatar process to speed up our workflow and increase the accuracy of our facial pipeline, because we're taking an actor's performance and translating it onto a C.G. character and we want it to be as accurate as possible.

But I think where it's going really feeds into one of our greatest social ills right now which is that we can't trust what we see. You know, with deepfakes and so on and now with the chatbots we won't be able to trust our sources as much. But it's going to get harder and harder and harder as we go along because you'll actually see a piece of video that looks completely compelling and you can't believe it.

So now, unless you're physically present, how do you know that you're not being -- you know, it becomes this kind of epistemological crisis, right? I mean, you know, Socrates, you know, always said that we're in the back of a cave and we're seeing only the shadow of that which is real. And I think that's where we're going. We won't know if our feeds are accurate.

ZAKARIA: I mean, to me, you know, Henry Kissinger and Eric Schmidt wrote this book about A.I. One of the most haunting parts is where it talks about how, you know, the renaissance and the enlightenment -- really the enlightenment -- allowed human beings to use their reason --

CAMERON: Yes.

ZAKARIA: -- and to dispel what was before a kind of myth-making --

CAMERON: Yes.

ZAKARIA: -- and fantasy. And so, when you look at a phenomenon like the sun rising, people used to say, well, that's the sun god --

CAMERON: Yes, sure. The chariot of the sun, right?

ZAKARIA: -- going across the sky. And then reason took over and we were like, no, we can understand why this happens.

CAMERON: Yes.

ZAKARIA: With A.I., we're almost going back to that world where we know the answer but we don't know why it's the answer.

CAMERON: That's right.

ZAKARIA: So, the computer will tell you the answer.

CAMERON: Yes. And it can't tell you. It can't tell you.

ZAKARIA: And it can't tell you. Right. And so, we're -- we used to trust religion.

CAMERON: Yes.

ZAKARIA: Now we trust -- we will end up trusting A.I.

CAMERON: That's right.

ZAKARIA: We no longer will trust human reason --

CAMERON: That's right.

ZAKARIA: -- because we know how limited it is.

CAMERON: Yes. And people have asked me, because I did a film called "The Terminator" and "Terminator 2," where Skynet, the evil superintelligence from the future, was manipulating the past to get the outcome that it required, and it destroyed humanity in a nuclear war.

Well, I don't think an AGI, a superintelligent AGI, would need to use nuclear weapons. In fact, it wouldn't want to. The electromagnetic pulse would wipe out too much of its own electronic infrastructure. I think it would do exactly what's happening right now: get us addicted to our devices. Every phone that we -- first of all, if you look around anywhere, everybody is always on their phone. So, the cat has been belled, right? And so this is just handing the keys, in my mind, to a techno-dictatorship or authoritarian regime of some kind, which could easily be run by a supercomputer to its own ends.

And so, I see us in a new arms race. Whoever gets to that superintelligence first will have world dominance and that's what Putin has said. He was actually quoted as saying that as you reported.

ZAKARIA: But -- and what I'm -- so the way you see it happening, this is fascinating, is now, you know, you don't need to conquer people physically.

CAMERON: Yes.

ZAKARIA: You conquer them mentally.

CAMERON: Exactly.

ZAKARIA: You trap them mentally.

CAMERON: Yes. Just look around. In fact, I was, you know, at a speaking engagement the other day, and when asked this sort of question, I ended with: now, how do we know it hasn't already happened?

From sitting here and observing the world, nothing that's happening out there makes a whole lot of sense to me right now. How do we know we're not being manipulated by an emergent AGI that's already been developed? We wouldn't know. You know, because it's --

ZAKARIA: We may be -- we may be in a simulation.

CAMERON: Well, or we may be in the transitional state to a simulation. That's another question, whether we're already in a simulation.

[10:40:00]

ZAKARIA: Right, right.

CAMERON: Yes. Although it seems pretty good.

ZAKARIA: So where is the optimistic part? You said that you -- you know, you go between both. Is there a part of you that says, well, this is -- you know, AGI is going to cure cancer and all of that?

CAMERON: Yes. Look, I think that AGI can do a lot of things that we can't. I'm more interested in applications of just A.I. and taking --

ZAKARIA: We should explain. A.I. is artificial intelligence.

CAMERON: Right.

ZAKARIA: AGI is artificial general intelligence which is kind of like --

CAMERON: Which we don't --

ZAKARIA: -- superintelligence.

CAMERON: Yes.

ZAKARIA: We still haven't gotten.

CAMERON: We think we don't have it yet. We probably don't, you know? I mean, I talk to a lot of people in A.I. and that does scare me. Because to me, no really transformative technology that we've developed has escaped being weaponized, as we saw with, you know, nuclear energy and all of that.

But in terms of A.I., it is very powerful. You know, they've done studies, obviously, where they compare a panel of doctors analyzing scans to an A.I. that's trained to identify tumors and things like that. The A.I. scores better.

So, in some of these tasks of looking at very large data sets and coming to the right analytic conclusion, they're better than us at that. And we should rely on them there. But they should be these separated tools. The second we start with these integrated systems and become too reliant, it will become a new religion.

ZAKARIA: It will become like "2001: A Space Odyssey."

HAL 9000, FICTIONAL CHARACTER: This mission is too important for me to allow you to jeopardize it.

CAMERON: Yes. Right. Exactly.

ZAKARIA: When you try to turn it off it doesn't.

CAMERON: Yes. And you might find yourself locked out of the spaceship.

DAVE BOWMAN, FICTIONAL CHARACTER: Do you read me, HAL?

HAL 9000: Affirmative, Dave.

ZAKARIA: James Cameron, a pleasure to have you on.

CAMERON: Thanks.

(END VIDEOTAPE)

ZAKARIA: Next on GPS, from A.I. in film to A.I. in art. I'll show you one of the most interesting artworks I've seen in years and talk to the team behind it.

(COMMERCIAL BREAK)

[10:46:08]

ZAKARIA: You are looking at the oldest known figurative painting in the world. Located in a cave on an Indonesian island, this pig painted with red ocher pigment is estimated to be at least 45,500 years old.

Painting has come a long way since the days of cave art. In fact, my next guest uses no physical paint at all. No ochers or oils or watercolors or acrylics. He paints with data.

Turkish-born artist Refik Anadol set out to answer the question: what would a machine dream about if it could see a museum's art collection? The result is "Unsupervised," a 24-foot by 24-foot installation that dominates the lobby of the Museum of Modern Art in New York.

This dreamlike imagery is being generated using artificial intelligence. Anadol trained the A.I. model using data from more than 200 years' worth of MoMA's art collection, which included nearly 90,000 works of art from over 26,000 artists. It is always learning and changing, imagining art that could have existed in the collection as well as new art of the future.

If you watched forever, you would not see the same screen twice. I sat down with Anadol and the museum's curator of painting and sculpture, Michelle Kuo, to discuss this extraordinary work and the future of art and artificial intelligence.
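Anadol has not published his full pipeline here, but a common mechanism behind ever-changing generative imagery is a continuous walk through the latent space of a trained generator: each frame comes from a slightly different latent point, so no frame repeats. Below is a toy sketch with a stand-in generator; the real model would be a network trained on the collection, not this random projection.

```python
# Toy sketch of a latent-space walk, the common mechanism behind
# ever-changing generative art pieces. ToyGenerator is a stand-in,
# not Anadol's model, which has not been described in this transcript.

import numpy as np

rng = np.random.default_rng(0)

class ToyGenerator:
    """Stand-in for a generator network trained on a collection."""
    def __init__(self, latent_dim=64, size=32):
        self.w = rng.normal(size=(latent_dim, size * size))
        self.size = size

    def __call__(self, z):
        # Map a latent vector to one "frame" with values in [-1, 1].
        return np.tanh(z @ self.w).reshape(self.size, self.size)

def latent_walk(gen, latent_dim=64, step=0.05, frames=5):
    z = rng.normal(size=latent_dim)
    for _ in range(frames):
        # Drift the latent point a little each frame; because the walk
        # never revisits the same point, no two frames are identical.
        z = z + step * rng.normal(size=latent_dim)
        yield gen(z)

gen = ToyGenerator()
for i, frame in enumerate(latent_walk(gen)):
    print(f"frame {i}: mean={frame.mean():+.4f}")
```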

(BEGIN VIDEOTAPE)

ZAKARIA: Thank you both for joining us.

REFIK ANADOL, DIRECTOR, REFIK ANADOL STUDIO: Thank you.

MICHELLE KUO, CURATOR OF PAINTING AND SCULPTURE: Thank you.

ZAKARIA: So, Refik, if somebody were to ask you just very simply, what is this? What would your answer be?

ANADOL: Yes. So, this is an A.I. data sculpture running in real time, using, for me, the most inspiring art collection of humanity, MoMA's artworks and their metadata. And the A.I. is dreaming, or hallucinating, in real time: artworks that don't exist but may exist.

ZAKARIA: And to me the most interesting thing is that what you see on the screen second by second is a new image.

ANADOL: Yes.

ZAKARIA: It never repeats.

ANADOL: Yes.

ZAKARIA: How long can that go on?

ANADOL: As long as the museum and the people here together, it will go --

ZAKARIA: If we kept this on for 100 years there would never be a repetition.

ANADOL: Yes.

ZAKARIA: To me, what is fascinating about this, Refik, is that you fed in all these images but unlike the way we think of these images, there is no hierarchy. The computer does not know that the Picassos are supposed to be the most famous and the most expensive in the collection. Do you think in a sense this is -- you know, this kind of -- it shatters the hierarchy of art?

ANADOL: I think it's a great question. First of all, in my humble opinion, art will be for anyone, any culture, any background. I'm trying to find this language of humanity, and it's a really hard task and a hard challenge, to unify humanity in one beautiful idea.

But I think what A.I. does here, this is an experiment: when we were training this A.I. model, as we all know, A.I. needs data and A.I. needs to be trained. So, it's truly a human-machine collaboration. It's not that A.I. decides everything.

There is human agency. There are decisions, parameters, numbers. So, it's not just A.I. doing everything, because there's some misconception that A.I. does everything. Actually, it's really an institutional collaboration, an artistic collaboration.

A.I. is like a triangle of dialogues. But what inspired me so much here is we didn't worry about mediums of art, like painting, sculpture, photography, videography. We didn't think like that.

[10:50:00]

What happens if there is no category? What happens if everything becomes one concept? And what happens if you don't worry about those biased categories, to understand life and art better?

And that was a starting point. The name of the piece is "Unsupervised." Literally, we don't supervise this A.I. to direct it to these new or old worlds of understanding concepts in life. And I think it makes this serendipity much more powerful, much more inspiring. And change and control became part of the imagination, scientifically. I think that is where we found that this output, these everyday new worlds, creates new awe and inspiring moments that I hope reflect the future of humanity, when humans and machines hopefully collaborate safely and equally.

ZAKARIA: Do you think that there is a copyright or borrowing problem which is -- Michelle, when you look at this, there must be artists who still have copyright for their work. Is there a problem in any way with feeding it all this art?

KUO: In this case, what you're seeing is completely new. Refik and his team created this machine learning model. So, everything that's following from that is a human creation in tandem with this complex machine learning model they've created.

And so, what happens is it's really designed to steer as far away as possible from ever exactly recreating any specific work that exists in history. Even if it wanted to do that, I don't think A.I. is good enough to do that at this point. But this is steering away --

ZAKARIA: People refer to ChatGPT as sort of essentially very elaborate cut and paste. And is this like that?

KUO: This is not. It is actually different not only in degree but in kind. So, the best way to think about it, almost, is that this complex machine learning model that Refik's team has built has learned and learned and learned.

Imagine if you, as a superhuman, learned and learned and learned about hundreds of thousands of artworks in MoMA's collection for quite some time, and then you built your own map, maybe even your own imaginary museum of all these different works. You decided to put some of the works over here together, other works over there together.

ZAKARIA: Right, right.

KUO: And then you're wandering around that imaginary museum and saying, oh, well, there's a gap here. There's emptiness between these works that actually exist. So, what could exist there? And that is what this is constantly creating. So, at each step of the way, it is doing something new and not replicative, and it's actually about the farthest you could get from a kind of cut and paste.
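Kuo's "gap in the imaginary museum" maps naturally onto latent-space interpolation: take the latent codes of two existing works and generate from the points between them. A toy sketch with made-up embeddings follows; real systems often interpolate spherically rather than linearly, but the idea is the same.

```python
# Sketch of the "gap in the imaginary museum": interpolating between
# the latent codes of two existing works to imagine something that sits
# between them. Purely illustrative; the embeddings below are made up.

import numpy as np

def interpolate(z_a, z_b, steps=5):
    # Linear interpolation in latent space; each intermediate point
    # could be fed to a generator to render a "work in the gap".
    for t in np.linspace(0.0, 1.0, steps):
        yield (1 - t) * z_a + t * z_b

rng = np.random.default_rng(1)
z_painting = rng.normal(size=8)   # hypothetical embedding of one work
z_sculpture = rng.normal(size=8)  # hypothetical embedding of another

for i, z in enumerate(interpolate(z_painting, z_sculpture)):
    print(f"point {i} in the 'gap': first coords {z[:3].round(2)}")
```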

ZAKARIA: Michelle, do you think this is the future of art?

KUO: I think it's one future of art. And, I think, it's actually a kind of fascinating experiment that is really the beginning of something. It's not resolved. It's not the end or conclusion of something.

I think it's -- it's an experiment in every sense of the word and that's something that we really try to support artists in doing. Because so many works and artists in the history of modern art, for example, were devoted to abstraction, to abstract art, to not depicting the world as it is in a kind of realistic or photographic way but to speculate about forms and shapes that don't exist in the real world. This, to my mind, is squarely in line with those kinds of experiments.

ZAKARIA: Do you worry, Michelle, that it's -- it's art we can't explain? You know, it comes out of a black box, and that in the future the black box will get even more complex and more mysterious to us in a sense.

KUO: I think that it's a combination. There is a black box, but there is also an incredible amount of information that we do have, and an incredible amount of understanding that artists at the cutting edge like Refik are actually deploying and getting in there.

And I think, you know, one thing that always resonates is that at the beginning of photography, the camera was called the pencil of nature. Literally, that there would be some other entity that's not human controlling the images that you see. And yet we all know that that pencil of nature, which also involves a machine, a tool, has become one of the most incredible devices for artists and for humans to use.

So, that is just something that I think really allows us to get past knee-jerk reactions of either fear or euphoria. It allows us to actually, you know, get in there and confront the complexity that is all around us right now, and ask what we have to do to understand it.

ZAKARIA: Michelle, Refik, thank you so much.

(END VIDEOTAPE)

ZAKARIA: After the interview, I stood with Refik and Michelle in front of "Unsupervised" for a close-up look.

[10:55:06]

For a moment the viewer can watch on the screen as the machine pauses to calculate its next set of images before erupting in a new set of dazzling visuals. So, what you see there is an intelligence that is calculating or plotting or visualizing how it is going to produce art next. That is, if you will, a visual representation of A.I.

Thanks to all of you for watching this GPS special. We will keep watching this very important issue and continue to bring you the latest on it. And as always, you can find us right back here next Sunday.

(COMMERCIAL BREAK)