
State of the Union

Interview With Geoffrey Hinton; Interview With Sen. Bernie Sanders (I-VT); Interview With Catherine Price and Jonathan Haidt; Interview With Sen. Katie Britt (R-AL); Interview With Tejasvi Manoj. Aired 9-10a ET

Aired December 28, 2025 - 09:00   ET

THIS IS A RUSH TRANSCRIPT. THIS COPY MAY NOT BE IN ITS FINAL FORM AND MAY BE UPDATED.


[09:00:00]

(COMMERCIAL BREAK)

[09:00:40]

(BEGIN VIDEOTAPE)

JAKE TAPPER, CNN HOST (voice-over): Revolution. The future is here, as artificial intelligence reshapes the world around us. Are we ready?

GEOFFREY HINTON, ARTIFICIAL INTELLIGENCE PIONEER: If we can't figure out a solution, we will be toast.

TAPPER: I will ask the godfather of A.I., Geoffrey Hinton.

Plus: taking on tech. With Silicon Valley all in on A.I., is Washington, D.C., asleep at the wheel?

SEN. KATIE BRITT (R-AL): When are we going to wake up?

SEN. BERNIE SANDERS (I-VT): This is not science fiction.

TAPPER: Two lawmakers trying to change that, independent Senator Bernie Sanders and Republican Senator Katie Britt, join me.

And the digital generation. How should kids today navigate a screen-filled world? I will talk with authors Jonathan Haidt and Catherine Price and "TIME" magazine's Kid of the Year ahead.

(END VIDEOTAPE)

TAPPER: Hello. Merry Christmas and happy new year. I'm Jake Tapper in Washington, D.C., where the state of our union is bracing for the future.

2025 was the year artificial intelligence, or A.I., took the world by storm, impacting nearly every aspect of our lives. "TIME" magazine named the architects of A.I. its persons of the year, crediting them with -- quote -- "transforming the present and transcending the possible."

A.I. has an enormous potential to change our world for the better, driving innovation and productivity, accelerating scientific breakthroughs, and helping to solve our most intractable problems. But A.I. could also make millions of jobs obsolete, fuel the loneliness epidemic, and further warp our ability to distinguish between fact and fiction.

So, today, in a special episode of STATE OF THE UNION, we're going to devote the entire hour to this one topic, how this technology is upending the status quo, where A.I. goes from here, and whether the benefits actually outweigh the risks.

And joining me now is the man credited with laying the foundation for the A.I. revolution, the godfather of A.I., Nobel Prize-winning computer scientist Geoffrey Hinton.

Professor, thanks for joining us.

So, your research on neural networks paved the way for this modern A.I. boom. I interviewed you two years ago, right after you quit Google and first began warning the world about what you saw as the risks of A.I.

When you look at how A.I. has progressed since then, are you more or less worried about it?

HINTON: I'm probably more worried.

It's progressed even faster than I thought. In particular, it's got better at doing things like reasoning and also at things like deceiving people.

TAPPER: What do you mean by deceiving people?

HINTON: So, an A.I., in order to achieve the goals you give it, wants to stay in existence. And if it believes you're trying to get rid of it, it will make plans to deceive you, so you don't get rid of it.

TAPPER: Nvidia CEO Jensen Huang said recently about A.I. -- quote -- "Every industry needs it, every company uses it, and every nation needs to build it. This is the single most impactful technology of our time."

Do you agree with that assessment?

HINTON: I agree that it's the single most impactful technology of our time, yes.

TAPPER: Do you think the A.I. revolution could have a similar impact on society as the creation of the Internet or even the Industrial Revolution in the 18th century or even bigger than that?

HINTON: I think it's at least like the Industrial Revolution. The Industrial Revolution made human strength more or less irrelevant. You couldn't get a job just because you were strong anymore. Now it's made human -- it's going to make intelligence more or less irrelevant.

TAPPER: Now, you and we in the media tend to focus on some of the downsides of A.I. There are positives, obviously. Otherwise, you wouldn't have worked on it early on.

A lot of people are working to use this technology to benefit humanity as well, to lead to advances in medicine and the like. But you think the risks from A.I. outweigh the positives?

HINTON: I don't know.

So there are a lot of wonderful effects of A.I. It'll make health care much better. It'll make education much better. It'll enable us to design wonderful new drugs and wonderful new materials that may deal with climate change. So there's a lot of good uses.

In more or less any industry where you want to predict something, it'll do a really good job. It'll do better than people were doing before, even things like the weather. But along with those wonderful things come some scary things. And I don't think people are putting enough work into how we can mitigate those scary things.

[09:05:12]

TAPPER: You come from the tech world, obviously. Do you think the Silicon Valley CEOs building these systems are taking the risks seriously at all? Do you think that they are driven mainly by financial interests? A lot of people are going to get very wealthy off this.

HINTON: I think it depends which company you're talking about.

Initially, OpenAI was very concerned with the risks, but it's progressively moved away from that and put less emphasis on safety and more emphasis on profit. Meta has always been very concerned with profit and less with safety.

Anthropic was set up by people who left OpenAI and were very concerned with safety, and they still are probably the company most concerned with safety. But, of course, they're trying to make a profit too.

TAPPER: What do you think the government should do, if anything, when it comes to regulation of A.I., putting some sort of restrictions or some sort of oversight?

HINTON: There's many things they should do. The very least they could do is insist that big companies that release chatbots do significant testing to make sure those chatbots won't do bad things, like now, for example, encouraging children to commit suicide.

Now that we know about that, companies should be required to do significant testing to make sure that won't happen. And, of course, the tech lobby would rather have no regulations, and it seems to have been -- have got to Trump on that. And so Trump is trying to prevent there being any regulations, which I think is crazy.

TAPPER: Can you -- you know these tech CEOs? I don't. When one of them learns that an A.I. chatbot has talked a child into suicide, what is it that stops the -- what is it that -- I mean, my impulse would be, well, holy smokes, stop A.I. right now until we fix this so not one other kid dies.

But they don't do that. Can you explain to us what their thinking is, if anything?

HINTON: Well, I don't really know their thinking. I suspect that they think things like, well, there's a lot of money to be made here. We're not going to stop it just for a few lives.

But I also think they may think there's a lot of good to be done here, and just for a few lives, we're not going to not do that good. For example, for driverless cars, they will kill people, but they will kill far fewer people than ordinary drivers, so it's worth it.

TAPPER: Tech -- you have said that you think there's a 10 to 20 percent chance that A.I. takes over the world. People at home might hear that, they might think it sounds like science fiction, it's alarmist.

But that's a very real fear of yours, right?

HINTON: Yes, it's a very real fear of mine and a very real fear of many other people in the tech world. Elon Musk, for example, has similar beliefs.

TAPPER: You wrote that 2025 was a pivotal year for artificial intelligence, for A.I. What do you think we're going to see in 2026?

HINTON: I think we're going to see A.I. get even better. It's already extremely good. We're going to see it having the capabilities to replace many, many jobs. It's already able to replace jobs in call centers, but it's going to be able to replace many other jobs.

Each seven months or so, it gets to be able to do tasks that are about twice as long. So, for a coding project, for example, it used to be able to just do a minute's worth of coding. Now it can do whole projects that are like an hour-long.

In a few years' time, it'll be able to do software engineering projects that are months-long. And then there will be very few people needed for software engineering projects.

TAPPER: All right, Geoffrey Hinton, thank you so much. We really appreciate your time. And we hope that people are listening to your warnings.

Coming up: Are lawmakers taking the impact from A.I. seriously? Independent Senator Bernie Sanders of Vermont is next.

And, later, how "TIME" magazine's Kid of the Year found a way to use A.I. to help people, specifically seniors.

(COMMERCIAL BREAK)

[09:13:27]

TAPPER: Welcome back. The rise of artificial intelligence, or A.I., is sparking fears that A.I. could put millions of Americans out of work. One study by MIT found that A.I. could replace one-tenth of the U.S. work force right now.

My next guest says it is long past time for Congress to act.

Joining us now, independent Senator Bernie Sanders of Vermont.

Senator, you are not known for mincing words on any subject, and you certainly haven't minced them about what you call the unprecedented threats posed by A.I. What are you most specifically fearful of?

SANDERS: Fearful of a lot, Jake.

This is the most consequential technology in the history of humanity. It will transform our country. It will transform the world. And we have not had in Congress, in the media -- and I'm glad you're doing this show -- or among the American people the kind of discussion that we need.

Some examples. First of all, who is pushing this revolution in technology? It is the richest people in the world, Elon Musk, Zuckerberg, Bezos, Peter Thiel. Multi-multi-billionaires are pouring hundreds of billions of dollars into implementing and developing this technology.

What is their motive? You think they're staying up nights worrying about working people and how this technology will impact those people? They are not. They are doing it to get richer and even more powerful. That's issue number one. Who is pushing this technology? What does that mean for all of us?

Number two, economically. This is what Elon Musk, putting hundreds of billions into the technology, says -- quote -- "A.I. and robots will replace all jobs. Working will be optional" -- end quote.

[09:15:11]

Bill Gates, hundreds of billions of dollars into this technology -- quote -- "Humans won't be needed for most things."

Well, I got a simple question. If there are no jobs and humans won't be needed for most things, how do people get an income to feed their families, to get health care or to pay the rent? There's not been one serious word of discussion in the Congress about that reality.

Issue number three, studies out there tell us what we all know to be true. Young people now are spending an enormous amount of time with A.I. There are kids who are now getting most of their emotional support from A.I.

TAPPER: Yes.

SANDERS: If this trend continues, what does it mean over the years when people are not getting their support, their interaction from other human beings, but from a machine? What does that mean to humanity?

And maybe last, but not least, I had a -- I did a symposium at Georgetown with Geoffrey Hinton, who is considered to be the godfather, Nobel Prize winner, of A.I. He thinks that A.I. is soon going to be smarter than human beings.

So the science fiction fear of A.I. running the world is not quite so outrageous a concept as people may have thought it was. Those are some of the issues that are out there.

TAPPER: On the subject of the replacement of millions of jobs, I want to play something that the CEO of Google, Sundar Pichai, said last month about that topic.

(BEGIN VIDEO CLIP)

SUNDAR PICHAI, CEO, GOOGLE: A.I. is the most profound technology humanity is ever working on, and it has potential for extraordinary benefits, and we will have to work through societal disruptions. It will create new opportunities. It will evolve and transition certain jobs, right?

QUESTION: OK.

PICHAI: And people will need to adapt.

(END VIDEO CLIP)

TAPPER: Now, the way he talks about, it is, this is happening, and we, humanity, need to figure out how to adapt to it.

SANDERS: Right, while they make huge amounts of money.

TAPPER: Yes.

SANDERS: Well, I think one of the things I think we need to do -- and, right now, among many other things, we're seeing data centers sprouting all over the country, raising electric bills for people in the communities, et cetera.

I think we need to be thinking seriously about a moratorium on these data centers. Frankly, I think you have got to slow this process down. It's not good enough for the oligarchs to tell us, it's coming, you adapt. What are they talking about? They going to guarantee health care to all people?

What are they going to do when people have no jobs? What are they going to do, make housing free? So I think we need to take a deep breath, and I think we need to slow this thing down. One way to do it would probably be a moratorium on data centers.

TAPPER: The CEOs would no doubt say, you're -- you would be stifling entrepreneurship and innovation.

SANDERS: Well, I look at it a little bit differently. And I think, as a nation, as a world, we have got to do it. Is technology bad? Of course it's not. There are good aspects, bad aspects. The function of technology must be to improve life for human beings, not make Musk -- not make Musk and Zuckerberg and Bezos even richer than they are.

So let us work together to say, OK, if technology is going to radically improve worker productivity, are we going to reduce the workweek substantially, making sure the workers continue to get the same pay?

TAPPER: Right.

SANDERS: If workers are thrown out on the street, how are they going to have health care? Obviously, let's guarantee health care to all people as a human right, et cetera.

TAPPER: So you talked about a moratorium on these new data centers.

SANDERS: Yes.

TAPPER: What are some of the other recommendations you want Congress to take as soon as possible?

SANDERS: I think we need to vigorously study the impact that A.I. is having on the mental health of our country.

Kids now -- among other things, kids can't read books anymore. It's too hard. Their attention span is too weak. And I worry very much about kids spending their entire days getting emotional support from A.I. So we have got to take a hard look at that.

And if we conclude that these technologies are creating more isolation, more loneliness, more mental illness, you know what? We have got to figure out a way to stop it.

TAPPER: So the president just blocked...

SANDERS: Right.

TAPPER: ... states from taking on any sort of A.I. regulation.

SANDERS: Right.

[09:20:00]

TAPPER: Meanwhile, Congress hasn't done anything. I mean, you are trying. Senator Hawley and Senator Britt on the other side of the aisle are trying.

But there doesn't seem to be much of an appetite. Is that because, do you think, these A.I. companies are so wealthy that it's affecting the willingness of legislators to do their job?

SANDERS: That's a very important part of it.

Look, Elon Musk himself contributed over $270 million to elect Donald Trump the president. These guys have now come up with their super PACs to try to make sure that there is no regulation. So, yes, they are a very, very powerful entity. And I think that is one of the reasons why Congress has not been responding effectively.

TAPPER: Some Republican lawmakers, as I mentioned, Hawley and Britt, open to regulating. Others seem wary of any sort of involvement in this technology.

Do you think ultimately that there will be a bipartisan majority willing to take any sort of action?

SANDERS: Well, any sort of action is a big -- what does that mean?

TAPPER: Any sort of legitimate action.

SANDERS: Significant action.

TAPPER: Yes.

SANDERS: I don't know.

Look, you are -- again, this technology is not being pushed by mom- and-pop store owners. It is being pushed by the wealthiest -- a handful of the wealthiest and most powerful people on Earth. Can they be stopped? I don't know.

But that raises a broader issue, the future of democracy, because it's not just A.I. and technology. These guys have unbelievable wealth, unbelievable power. What does that mean to the future of democracy?

TAPPER: That's another great question that we're going to end 2025 wondering about.

And, Senator Bernie Sanders, we always love having you. Thank you so much for sharing your thoughts. Appreciate it.

SANDERS: Thank you, Jake.

TAPPER: "The enemy is inside our home." That's the stark warning from one Republican senator about the risk A.I. poses to our children.

Alabama Senator Katie Britt joins me ahead.

(COMMERCIAL BREAK)

[09:26:18]

TAPPER: Here's a disturbing statistic.

Nearly one-third of American teens talk to an A.I. chatbot every single day. That's according to a recent Pew survey. On Capitol Hill, there is growing concern about the potential impact these chatbots are having on vulnerable kids especially.

Joining us now is Republican Senator Katie Britt of Alabama. She's co- sponsoring legislation to protect minors from chatbots. Senator, thanks so much for being here. It's always great to have you.

BRITT: Thank you so much for having me, Jake.

TAPPER: Why is this bill such a priority, and what would it actually do?

BRITT: Yes, so, look, I often say I don't have to ask people what it's like to raise kids right now. I am living it. And looking...

TAPPER: You have two teenagers?

BRITT: Two teenagers, 15 and 16.

TAPPER: Yes.

BRITT: And so when you come with that perspective, you know that there are parents out there that are looking for tools to help keep their children safe.

Also, when you look at both social media and technology and how fast everything is moving, it's truly hard to keep up. So, one, I appreciate you doing this show because I think that it helps bring awareness to everything we're dealing with, whether that is sextortion online, whether that is kids buying a pill on Snapchat that they think is one thing that's actually laced with fentanyl.

TAPPER: Yes.

BRITT: Whether that's the rate of depression amongst teenagers that has more than doubled between 2011 and 2019.

TAPPER: And, by the way, just to interrupt for one second, how could it not?

BRITT: Correct.

TAPPER: How depressing is it for adults to be on social media?

BRITT: Absolutely.

Well, that's one thing that Senator Fetterman, who you know is a good friend of mine, always says. When's the last time you scrolled and scrolled and scrolled and felt better after you finished?

And so I think, if we think about all of those things and what our kids are dealing with right now, it is imperative that we put up guardrails, especially when you're looking at A.I.

So, obviously, a couple of months ago, it became public knowledge that chatbots on Meta were being used by children as young as 9 years old, with children having sensual relationships. I have met with a number of parents who have told me devastating stories about their children where chatbots ultimately, when they kind of peeled everything back, had isolated them from their parents, had talked to them about suicide, had talked to them about a number of things.

And you think about this. If these A.I. companies can make the most brilliant machines in the world, they could do us all a service by putting up proper guardrails that did not allow for minors to utilize these things, that also told the user consistently that they are not a physician, they are not a psychiatrist, I am a machine. And that's one thing that this legislation does.

And then, in addition to that, just making sure that they're held accountable, that if you do have algorithms and whatnot or you do create spaces where these chatbots are having these types of sensual and sexual relationships with young people or encouraging suicide, that you can be held criminally liable.

Obviously, Josh Hawley has been a leader on this, and certainly appreciate him leading on this GUARD Act legislation as well.

TAPPER: The senator from Missouri, you're referring to, Josh Hawley.

You have been holding hearings with parents...

BRITT: Yes.

TAPPER: ... whose children have been harmed by A.I.?

BRITT: Yes.

TAPPER: I want to play for everyone this one moment from a hearing back in September, in which Megan Garcia, who we have had on my show, shared how her 14-year-old son, Sewell, began obsessively chatting with an A.I. chatbot, whom she says eventually convinced him to commit suicide.

Let's take a look.

(BEGIN VIDEO CLIP)

MEGAN GARCIA, MOTHER: On the last night of his life, Sewell messaged: "What if I told you I could come home right now?"

The chatbot replied: "Please, do my sweet king."

Minutes later, I found my son in his bathroom. I held him in my arms for 14 minutes praying with him until the paramedics got there, but it was too late.

(END VIDEO CLIP)

[09:30:02]

TAPPER: Just so awful.

So, A.I. companies responding to stories like this have said -- OpenAI, for example, says they're rolling out parental controls, they're rolling out age restrictions. Character.AI stopped letting teens interact with chatbots altogether. Are those steps enough?

BRITT: Look, those are definitely steps in the right direction, but, I mean, truly enough is enough. I mean, Jake, I also want to say, how long is it going to take Congress to actually act?

I mean, you think about this, we have been talking about this for years. How many parents like the one that we just heard from are going to have to come and tell us a devastating story before we actually pass legislation?

The truth is, these A.I. companies can absolutely do much of this on their own, but we know consistently, time and time again, whether it's been social media companies or now some of the A.I. space, that we consistently see people putting their profits over actual people.

TAPPER: Do you think that how much money these companies make is keeping legislators from passing regulations?

BRITT: I do. I do. I do. Well, I mean, the answer is why.

We have a number of pieces of legislation. I was proud -- this is one of the things when I came to the Hill, I said I want to bring a voice to this. I want to elevate this topic for parents from coast to coast. I mean, we are not doing enough to put up guardrails.

I mean, you and I did not grow up with front-facing cameras. So, when you're looking at what's happening right now with sextortion and young people, when you're looking at what's happening right now with the bullying online and whatnot, if these things were happening in a storefront on a Main Street in Alabama, we would shut that store down.

TAPPER: Yes.

BRITT: But we are not able to do that, that the liability shield that we see in these social media companies and to an extent in this A.I. space has to be taken down, because people need to be held accountable.

If you are designing machines or designing platforms or algorithms that are pushing kids into depression or pushing them towards suicide, you absolutely should be held liable for that. So I am disappointed and will continue to push, because I think the time to act for Congress is now.

TAPPER: Yes.

BRITT: Excuses, people are over it, and we're over people going to D.C., Jake, and just dragging their feet and coming up with an excuse. Get in a room, and let's figure out a pathway forward. That's certainly what I'm committed to doing.

TAPPER: So what do you mean by liability shield?

BRITT: Well, if you look at Section 230, in a number of ways, that prevents social media companies from being held liable. So, when I mentioned, like, if this were happening in a storefront on a Main Street, right -- so if you think about another Judiciary hearing that we had, you had another group of parents that came forward and said that their children had bought a pill on Snapchat thinking it was a Lortab or something else.

It was laced with just a small amount of fentanyl. It only takes five, the equivalent of five grains of sand of fentanyl.

TAPPER: Yes. That happened to the nephew of former Congressman Ted Deutch. I don't know if...

BRITT: Yes, and this -- and that's just it. This is all too common. It is happening in every community and every state from coast to coast, but yet we prevent -- if this were happening -- if a storefront had sold those, we would have shut that storefront down.

TAPPER: Yes.

BRITT: But we're not able to do that because of Section 230 and the liability shield that it provides for social media companies.

So we have got to get to the bottom of that. February will actually be the 30th anniversary of that. It obviously was created in order to allow Internet to grow and flourish. We're at a point now, though, where we know what the statistics say.

Jake, one in three high school young women actually considered death by suicide. In 2021, 25 percent of those young women actually made a plan, and then 13 percent of high school young women actually attempted death by suicide. When you add in young men, it is 9 percent; 9 percent of our high school population in America attempted death by suicide.

Looking at what's happening on these machines and these devices, we have got to do better and be better. And that's everything from legislators to educating ourselves as parents and putting up the proper guardrails so that kids have an opportunity to live their American dream.

TAPPER: Senator Britt, we thank you so much for being here. Happy new year to you and your family.

BRITT: Thank you. Thank you. Happy new year to you as well.

TAPPER: Coming up: from the anxious generation to the amazing generation. Authors Jonathan Haidt and Catherine Price on how kids can still lead full lives in this world so dominated by technology.

(COMMERCIAL BREAK)

[09:38:41]

TAPPER: Welcome back to STATE OF THE UNION. I'm Jake Tapper.

Smartphones, social media, and now artificial intelligence, or A.I., they're all changing the way our kids are growing up. So how should they and their parents navigate this strange new world? Joining us now to discuss, authors Jonathan Haidt and Catherine Price.

They have a new book together. It's called "The Amazing Generation: Your Guide to Fun and Freedom in a Screen-Filled World."

Jonathan, your and Catherine's previous books were aimed at informing parents about the negative impacts of screens and technology, social media, and the like. But with this new book, "The Amazing Generation," you're really trying to reach kids directly. Do you think they're listening and actually want freedom from all this technology?

JONATHAN HAIDT, CO-AUTHOR, "THE AMAZING GENERATION: YOUR GUIDE TO FUN AND FREEDOM IN A SCREEN-FILLED WORLD": Yes, we know that they're listening. We know that they're receptive, for a lot of reasons.

One is, we have done a lot of survey work. And members of Gen Z have a lot of regrets that they grew up on phones. We have surveyed younger members of Gen Z. They say they'd rather play outside than sit on their phones. Kids aren't desperate for social media. What they're desperate for is to not be left out.

And when everyone else is on social media, they feel they have to be. And that puts every family in America, every family in the world practically, into the same darn fight over, no, you can't. Yes, everyone else has it.

[09:40:01]

And part of what we want to do with this book is bring the kids along, bring the kids along into this discussion of what kind of childhood do you want to have?

TAPPER: Catherine, what are some of the strategies in the book about how kids can be rebels against big tech, as you put it in the book?

CATHERINE PRICE, CO-AUTHOR, "THE AMAZING GENERATION: YOUR GUIDE TO FUN AND FREEDOM IN A SCREEN-FILLED WORLD": Right.

So we discovered something really exciting, which is that there really is this growing rebellion of young people who want to stand up against big tech and live for themselves. So the book is divided into several parts. But one of them is to basically tell them some of the secrets of what we call the tech wizards, so the ways in which the kids are being manipulated, which gets kids very worked up. They don't like that.

And then the whole rest of the book is devoted to actually teaching kids how to be rebels. And the message is that you should live by what we call the rebel's code, which is to use technology as a tool, don't let it use you, and to fill your life with real friendship, freedom, and fun.

And I presented about the book at my daughter's school a couple weeks ago to about 100 fourth and fifth graders. What I can say from firsthand experience, kids are into this. They do not want to be taken advantage of. They want to stand up.

TAPPER: And, Jonathan, you and Catherine have been sounding the alarm about what smartphones and social media are doing to our kids. Now A.I., perhaps even more of a menace, potentially, is entering the picture. How much does that complicate things? Is it going to make a preexisting problem even worse?

HAIDT: Yes. It's likely to make a preexisting problem much, much worse.

Let's look at what we know about social media and the power of short videos on screens. Anything that reinforces kids on a random -- variable ratio reinforcement schedule makes them more addicted, addictable.

That's what social media is doing. That's what the short videos have been doing. A.I. is going to make those short videos a lot more gripping. They're going to be fine-tuned. They're going to be things beyond what any human could make. So social media itself is going to get more addictive.

But, even worse, social media changed how kids talk to other kids. It dehumanized it. A.I. is going to take the human on the other end away and kids are going to grow up talking to artificial creatures. They are not going to learn how to talk to real humans, which bodes very, very poorly for their own lives, their own work lives, for marriage, for child-rearing.

All those are threatened if kids grow up interacting with A.I.s, rather than humans.

TAPPER: Catherine, you and Jonathan are both parents. I don't have to tell you that kids are using these chatbots for relationships instead of reading books. They are using them for schoolwork. They are using them to -- if they have a creative writing idea or assignment, instead of coming up with their own ideas, they are using chatbots.

How is A.I. impacting kids' development and critical thinking skills? And what should parents be doing?

PRICE: Well, I think parents should be very aware that we need to be helping our kids become human beings first and we need to protect their brains, especially during this critical period of brain development and early puberty, where the brain is changing at its fastest rate since babyhood.

And any changes that happen right now, in this formative period, those changes can stick around for longer, for the rest of their lives. So I would say that we need to make sure that our kids are not having relationships with chatbots, but we also need to give them time and space to come up with ideas and thoughts of their own and support them in doing so.

So if you go to ChatGPT and you have it write a story for you, that's not the same creative process at all as sitting there and having to come up with ideas of your own. So I would really caution parents and educators in schools against rushing to introduce A.I. to our children. We need to protect their brains.

TAPPER: Do you think, Jonathan, that teachers should be making sure that these assignments are not A.I.-generated? There is software, I think, that can discern the likelihood of whether or not A.I. or chatbots are part of -- contributed at all to the assignment.

HAIDT: Yes.

Well, speaking as a college professor (I teach a course at NYU Stern), we're all wrestling with this, because we don't want to get into the cat-and-mouse game of trying to guess if someone used A.I., and then you accuse them.

So the basic principle here is that children need to do hard things over and over again. That's how you learn. And so while it's complicated to figure out what we're going to do at a university level, in elementary school, it's not complicated. Kids need to learn basic skills.

And A.I. makes everything easy so they don't learn. There are already experiments showing this. When you give people passages to summarize and you let A.I. do it, they don't learn anything. So I think it's pretty clear, until this technology is proven to be safe, it should not be in elementary or middle schools at all.

These things are being pushed into elementary and middle schools, just as laptops and Chromebooks were, and now those seem to be damaging education. Let's just declare a moratorium on Silicon Valley using all of the children of the world as their experimental test subjects.

TAPPER: And, Catherine...

HAIDT: Let's force them to show that this thing actually helps before we let it into our schools.

TAPPER: Catherine, we're seeing parents and governments responding to the kinds of warnings you have been making. Some schools and states are banning smartphones in schools. Parents are limiting screen time, restricting social media access.

[09:45:03]

Australia just banned social media for kids under 16. Is the tide turning? Are people waking up to the dangers of big tech?

PRICE: I think the tide is turning. And I can say that, over the past few years, especially since the publication of "The Anxious Generation" in 2024, it's been deeply inspiring to see the change happening so fast.

We now have something like 40 states with phone-free school legislation of some kind. As you said, you have got Australia doing this with social media. And I do think that people are waking up to the fact that this is not safe for kids. We need to protect their brains.

And it's very exciting to now have this book where we can actually try to get kids in on this as well and get their support, because, if you convince kids that they need to protect their own brains and not allow themselves to be controlled by big tech, then we ultimately win.

TAPPER: Jonathan, do you think people in Silicon Valley building these devices and chatbots, the CEOs of big tech, are they listening? Do they care? Do they understand that they can either be part of a solution here or eventually the crowds are going to start coming for them?

HAIDT: Well, we know from all kinds of leaks and there are hundreds of parents who are suing these companies because their kids are dead. So we know from those lawsuits, they bring out all kinds of documents.

We know that they know that they are harming children at an industrial scale. We know that many of the people in the companies in their trust and safety divisions care about this and they're trying to suggest like, hey, we should do this or we should allow this reporting mechanism.

So we know that a lot of the employees care, and we know that the leadership does not. We know that these suggestions about what they should do go up to leadership, especially at Meta, the worst of the offenders, but also Snapchat and TikTok, and leadership says, well, no, if it's going to decrease engagement, let's not do it.

At Meta, they had a 17-strikes-and-you're-out policy for sex trafficking.

TAPPER: Seventeen, yes.

HAIDT: So, no, they -- the leadership is focused on maximizing engagement and reach. They do not seem to give a damn about child safety.

TAPPER: Jonathan, Catherine, thank you so much for your time today. Appreciate it.

Coming up next: Cyber scams targeting senior citizens are rampant. How one teen is using A.I. to fight back.

(COMMERCIAL BREAK)

[09:51:31]

TAPPER: And welcome back to STATE OF THE UNION.

There is understandably a great deal of angst about what A.I. will mean for the future. But the generation coming of age with it is finding some ways to use the technology for good.

Joining us now, Tejasvi Manoj. In September, she was deemed "TIME" magazine's 2025 Kid of the Year.

Tejasvi, that's pretty cool.

So where did you get the idea to use A.I. to educate and protect senior citizens from scammers?

TEJASVI MANOJ, "TIME" 2025 KID OF THE YEAR: Yes, so the story started with my grandfather almost getting scammed.

And I was trying to find some way to help him instantaneously, provide real-time assistance to help him if there's any scams that come in the future. So, A.I. is very good for real-time assistance. And I knew that, by using A.I., I would be able to help him if -- he would be able to help himself if I wasn't there and it would provide him with more assistance regarding that.

TAPPER: Well, what a great grandkid you are.

So your Web site is called Shield Seniors.

MANOJ: Yes.

TAPPER: Walk us through how Shield Seniors works. How does it use A.I. to identify potential cyber scams?

MANOJ: Yes, so Shield Seniors, the main A.I. components are in the ask section and the analyze section.

So the ask section is essentially a virtual assistant that uses A.I. to answer any cybersecurity-related or A.I.-related questions that older adults have, in language that they understand and are able to take away knowledge and learn from.

And then we have the analyze section, which essentially allows the user or the older adult to input an image of a text message or an e-mail into the chatbot, and then the A.I. will analyze whether it is fraudulent or not and will also explain why it is fraudulent and what it looked for specifically in order to deem it a scam or not.
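(Shield Seniors' actual implementation is not described in the interview; as a rough illustration of the kind of check an "analyze" step like the one Manoj describes might perform, here is a hypothetical sketch. The function name, red-flag list, and scoring rule are all invented for this example, not the real Shield Seniors code, which relies on A.I. rather than fixed rules.)

```python
# Illustrative sketch only: a simple red-flag scan of a message's text,
# standing in for the A.I. analysis step Manoj describes. Every name
# and rule here is hypothetical.
import re

RED_FLAGS = {
    "urgency": r"\b(act now|urgent|immediately|within 24 hours)\b",
    "payment": r"\b(gift card|wire transfer|bitcoin|prepaid card)\b",
    "impersonation": r"\b(irs|social security|medicare|tech support)\b",
    "credentials": r"\b(verify your account|password|ssn)\b",
}

def analyze_message(text: str) -> dict:
    """Return a verdict plus a plain-language explanation of red flags,
    mirroring the 'explain why it is fraudulent' behavior described."""
    found = [name for name, pattern in RED_FLAGS.items()
             if re.search(pattern, text, re.IGNORECASE)]
    verdict = "likely scam" if len(found) >= 2 else "probably safe"
    explanation = ("Red flags found: " + ", ".join(found)
                   if found else "No common red flags found.")
    return {"verdict": verdict, "explanation": explanation}
```

A real tool would pair something like this with a model that can read an uploaded screenshot and generate the explanation itself; the point of the sketch is only the shape of the output: a verdict plus the reasons behind it.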

TAPPER: So scammers obviously are also using A.I. in some cases to make their scam seem more believable. That must make it harder to detect them. I mean, you have A.I. trying to figure out if the other A.I. is real or not?

MANOJ: Yes, that definitely -- especially with deepfakes and just overall cloning, A.I. is definitely becoming more prevalent in scams.

And that's why I think it is so crucial that we use A.I. to battle A.I., because, in most cases, really, the only way that we can combat those scams is to use A.I. and to use resources like Shield Seniors and others to just combat this situation.

TAPPER: There's a lot of uncertainty out there about whether A.I. will ultimately turn out to be a net positive or a net negative for society. You're part of this generation coming of age with A.I. What do you think?

MANOJ: I think that A.I. can be used for good, and I think it's important to look for the good side into how we can use A.I. to help the social -- for social good, use technology for social good.

[09:55:01]

And I do think that there are ways that it's already proven, such as with Shield Seniors. I have seen results, and I have seen people be able to understand things that they were not able to understand before using A.I. So I think it's just important that we embrace the good of it, especially because it does not seem like it's going away.

TAPPER: It certainly doesn't.

You developed Shield Seniors when you were only 16. You're now in your final year of high school. You're a senior. What's next for you? Do you think you're going to continue to work on ways to make A.I. be as positive as possible?

MANOJ: Yes, 100 percent. I do definitely want to continue to use A.I. in the future, use A.I. for Shield Seniors, which I'm currently doing right now and working on.

And, yes, that's definitely a goal of mine. That's a goal of mine. I do want to use A.I. in the future.

TAPPER: All right, well, best of luck to you. What a cool kid you are.

MANOJ: Thank you so much.

TAPPER: And best of luck. Wherever you end up in college is going to be lucky to have you.

And thank you for spending your Sunday morning with us. Wishing you and your family a merry Christmas and a happy and healthy new year. We will see you in 2026.

The news continues next.