CNN Newsroom
Facebook CEO Mark Zuckerberg Testifies Before Congress. Aired 3-3:30p ET
Aired April 10, 2018 - 3:00 ET
THIS IS A RUSH TRANSCRIPT. THIS COPY MAY NOT BE IN ITS FINAL FORM AND MAY BE UPDATED.
[15:00:04]
SEN. CHARLES GRASSLEY (R), IOWA: If so, how many times has that happened and was Facebook only made aware of that transfer by some third party?
MARK ZUCKERBERG, CHAIRMAN AND CEO, FACEBOOK: Mr. Chairman, thank you.
As I mentioned, we're now conducting a full investigation into every single app that had access to a large amount of information before we locked down the platform, around 2014, to prevent developers from accessing this information.
We believe that we're going to be investigating many apps, tens of thousands of apps. And if we find any suspicious activity, we're going to conduct a full audit of those apps to understand how they're using their data and if they're doing anything improper.
And if we find that they're doing anything improper, we will ban them from Facebook and we will tell everyone affected.
As for past activity, I don't have all the examples of apps that we have banned here, but, if you would like, I can have my team follow up with you after this.
GRASSLEY: Have you ever required an audit to ensure the deletion of improperly transferred data, and, if so, how many times?
ZUCKERBERG: Mr. Chairman, yes we have. I don't have the exact figure on how many times we have, but, overall, the way we have enforced our platform policies in the past is, we have looked at patterns of how apps have used our APIs and accessed information, as well as looked into reports that people have made to us about apps that might be doing sketchy things.
Going forward, we're going to take a more proactive position on this and do much more regular spot checks and other reviews of apps, as well as increasing the amount of audits that we do.
And, again, I can make sure our team follows up with you on anything about the specific past stats that would be interesting.
GRASSLEY: I was going to assume that, sitting here today, you have no idea, and if I'm wrong on that, you're able -- you're telling me, I think, that you're able to supply these figures to us, at least as of this point?
ZUCKERBERG: Mr. Chairman, I will have my team follow up with you on what information we have.
GRASSLEY: OK. But right now you have no certainty of whether or not -- how much of that is going on. Right? OK.
Facebook collects massive amounts of data from consumers, including content, networks, contact lists, device information, location, and information from third parties.
Yet your data policy is only a few pages long and provides consumers with only a few examples of what is collected and how it might be used. The examples given emphasize benign uses, such as connecting with friends, but your policy gives no indication of the more controversial uses of such data.
My question is, why doesn't Facebook disclose to its users all the ways the data might be used by Facebook and other third parties? And what is Facebook's responsibility to inform users about that information?
ZUCKERBERG: Mr. Chairman, I believe it's important to tell people exactly how the information that they share on Facebook is going to be used.
That's why every single time you go to share something on Facebook, whether it's a photo in Facebook or a message in Messenger or WhatsApp, every single time, there's a control right there about who you're going to be sharing it with, whether it's your friends or public or a specific group, and you can change that and control that in line.
To your broader point about the privacy policy, this gets into an issue that I think we and others in the tech industry have found challenging, which is that long privacy policies are very confusing.
And if you make it long and spell out all the detail, then you're probably going to reduce the percentage of people who read it and find it accessible. So, one of the things that we have struggled with over time is to make something that is as simple as possible, so people can understand it, as well as giving them controls in line in the product in the context of when they're trying to actually use them, taking into account that we don't expect that most people will want to go through and read a full legal document.
GRASSLEY: Senator Nelson.
SEN. BILL NELSON (D), FLORIDA: Thank you, Mr. Chairman.
Yesterday, when we talked, I gave the relatively harmless example that I'm communicating with my friends on Facebook, and indicate that I love a certain kind of chocolate. And all of a sudden, I start receiving advertisements for chocolate.
What if I don't want to receive those commercial advertisements? So, your chief operating officer, Ms. Sandberg, suggested on the NBC "Today Show" that Facebook users who do not want their personal information used for advertising might have to pay for that protection, pay for it.
[15:05:16]
Are you actually considering having Facebook users pay for you not to use that information?
ZUCKERBERG: Senator, people have a control over how their information is used in ads in the product today. So, if you want to have an experience where your ads aren't targeted using all the information that we have available, you can turn off third-party information.
What we have found is that, even though some people don't like ads, people really don't like ads that aren't relevant. And while there is some discomfort, for sure, with using information in making ads more relevant, the overwhelming feedback that we get from our community is that people would rather have us show relevant content there than not.
So we offer this control that you're referencing. Some people use it. It's not the majority of people on Facebook. And I think that that's a good level of control to offer.
I think what Sheryl was saying was that, in order to not run ads at all, we would still need some sort of business model.
NELSON: And that is your business model. So I take it that -- and I used the harmless example of chocolate, but if it got into a more personal thing, communicating with friends, and I want to cut it off, I'm going to have to pay you in order for you not to use my personal information to send me something that I don't want? That, in essence, is what I understood Ms. Sandberg to say.
Is that correct?
ZUCKERBERG: Yes, Senator, although, to be clear, we don't offer an option today for people to pay to not show ads.
We think an ad-supported service is the most aligned with our mission of trying to help connect everyone in the world, because we want to offer a free service that everyone can afford.
NELSON: OK.
ZUCKERBERG: That's the only way that we can reach billions of people.
NELSON: So, therefore, you consider my personally identifiable data the company's data, not my data; is that it?
ZUCKERBERG: No, Senator.
Actually, the first line of our terms of service says that you control and own the information and content that you put on Facebook.
NELSON: Well, the recent scandal is obviously frustrating, not only because it affected 87 million, but because it seems to be part of a pattern of lax data practices by the company going back years.
So, back in 2011, there was a settlement with the FTC, and now we discover yet another incident where the data failed to be protected. When you discovered that Cambridge Analytica had fraudulently obtained all of this information, why didn't you inform those 87 million?
ZUCKERBERG: When we learned in 2015 that Cambridge Analytica had bought data from an app developer on Facebook -- data that people had shared with that developer -- we did take action. We took down the app.
And we demanded that both the app developer and Cambridge Analytica delete and stop using any data that they had. They told us that they did this.
In retrospect, it was clearly a mistake to believe them.
NELSON: Yes.
ZUCKERBERG: And we should have followed up and done a full audit then. And that's not a mistake that we will make.
NELSON: Yes. You did that and you apologized for it, but you didn't notify them. And do you think that you have an ethical obligation to notify 87 million Facebook users?
ZUCKERBERG: Senator, when we heard back from Cambridge Analytica that they had told us that they weren't using the data and deleted it, we considered it a closed case.
In retrospect, that was clearly a mistake. We shouldn't have taken their word for it. And we have updated our policies in how we're going to operate the company to make sure that we don't make that mistake again.
NELSON: Did anybody notify the FTC?
ZUCKERBERG: No, Senator, for the same reason, that we considered it a closed case.
GRASSLEY: Senator Thune?
SEN. JOHN THUNE (R), SOUTH DAKOTA: Yes.
And, Mr. Zuckerberg, would you do that differently today, presumably, in response to Senator Nelson's question?
ZUCKERBERG: Yes.
THUNE: If you had it to do over?
This may be your first appearance before Congress, but it's not the first time that Facebook has faced tough questions about its privacy policies.
[15:10:02]
"Wired" magazine recently noted that you have a 14-year history of apologizing for ill-advised decisions regarding user privacy, not unlike the one that you made just now in your opening statement.
After more than a decade of promises to do better, how is today's apology different? And why should we trust Facebook to make the necessary changes to ensure user privacy and give people a clearer picture of your privacy policies?
ZUCKERBERG: Thank you, Mr. Chairman.
So, we have made a lot of mistakes in running the company. I think it's pretty much impossible, I believe, to start a company in your dorm room and then grow it to be at the scale that we're at now without making some mistakes.
And because our service is about helping people connect and share information, those mistakes have been different in how they -- we try not to make the same mistake multiple times, but, in general, a lot of the mistakes are around how people connect to each other, just because of the nature of the service.
Overall, I would say that we're going through a broader philosophical shift in how we approach our responsibility as a company. For the first 10 or 12 years of the company, I viewed our responsibility as primarily building tools, that if we could put the tools in people's hands, then that would empower people to do good things.
What I think we have learned now, across a number of issues, not just data privacy, but also fake news and foreign interference in elections, is that we need to take a more proactive role and a broader view of our responsibility.
It's not enough to just build tools. We need to make sure that they're used for good. And that means that we need to now take a more active view in policing the ecosystem and in watching and kind of looking out and making sure that all of the members in our community are using these tools in a way that's going to be good and healthy.
So, at the end of the day, this is going to be something where people will measure us by our results. It's not that I expect anything I say here today to necessarily change people's view.
But I'm committed to getting this right, and I believe that over the coming years, once we fully work all the solutions through, people will see real differences.
THUNE: OK. Well, and I'm glad that you all have gotten that message.
As we discussed in my office yesterday, the line between legitimate political discourse and hate speech can sometimes be hard to identify, and especially when you're relying on artificial intelligence and other technologies for the initial discovery.
Can you discuss what steps Facebook currently takes when making these evaluations, the challenges that you face, and any examples of where you may draw the line between what is and what is not hate speech?
ZUCKERBERG: Yes, Mr. Chairman. I will speak to hate speech and then I will talk about enforcing our content policies more broadly.
So, actually, maybe, if you're OK with it, I will go in the other order.
So, from the beginning of the company, in 2004, I started it in my dorm room. It was me and my roommate. We didn't have A.I. technology that could look at the content that people were sharing, so we basically had to enforce our content policies reactively. People could share what they wanted, and then, if someone in the community found it to be offensive or against our policies, they'd flag it for us, and we'd look at it reactively.
Now, increasingly, we're developing A.I. tools that can identify certain classes of bad activity proactively and flag it for our team at Facebook. By the end of this year, by the way, we're going to have more than 20,000 people working on security and content review working across all these things.
So, when content gets flagged to us, we have those people look at it. And if it violates our policies, then we take it down. Some problems lend themselves more easily to A.I. solutions than others. So, hate speech is one of the hardest, because determining if something is hate speech is very linguistically nuanced. Right?
It's -- you need to understand what is a slur and whether something is hateful, and not just in English -- the majority of people on Facebook use it in languages that are different across the world.
Contrast that, for example, with an area like finding terrorist propaganda, which we actually have been very successful at deploying A.I. tools on already. Today, as we sit here, 99 percent of the ISIS and al Qaeda content that we take down on Facebook, our A.I. systems flag before any human sees it.
So, that's a success in terms of rolling out A.I. tools that can proactively police and enforce safety across the community.
Hate speech, I'm optimistic that over a five- to 10-year period, we will have A.I. tools that can get into some of the nuances, the linguistic nuances of different types of content to be more accurate in flagging things for our systems.
[15:15:01]
But, today, we're just not there on that. So, a lot of this is still reactive. People flag it to us. We have people look at it. We have policies to try to make it as not subjective as possible, but until we get it more automated, there's a higher error rate than I'm happy with.
THUNE: Thank you.
GRASSLEY: Senator Feinstein.
SEN. DIANNE FEINSTEIN (D), CALIFORNIA: Thanks, Mr. Chairman.
Mr. Zuckerberg, what is Facebook doing to prevent foreign actors from interfering in U.S. elections?
ZUCKERBERG: Thank you, Senator.
Getting this right is one of my top priorities in 2018. One of my greatest regrets in running the company is that we were slow in identifying the Russian information operations in 2016.
We expected them to do a number of more traditional cyber-attacks, which we did identify, and we notified the companies they were trying to hack into, but we were slow in identifying the new type of information operations.
FEINSTEIN: When did you identify the new operations?
ZUCKERBERG: It was right around the time of the 2016 election itself.
So, since then, we -- 2018 is an incredibly important year for elections, not just with the U.S. midterms, but around the world, there are important elections in India, in Brazil, in Mexico, in Pakistan, and in Hungary. And we want to make sure we do everything we can to protect the integrity of those elections.
Now, I have more confidence that we're going to get this right, because since the 2016 election, there have been several important elections around the world where we've had a better record. There's the French presidential election. There's the German election. There's the U.S. Senate Alabama special election last year.
FEINSTEIN: Explain what is better about the record.
ZUCKERBERG: So, we have deployed new A.I. tools that do a better job of identifying fake accounts that may be trying to interfere in elections or spread misinformation.
And between those three elections, we were able to proactively remove tens of thousands of accounts that -- before they could contribute significant harm.
And the nature of these attacks, though, is that there are people in Russia whose job it is to try to exploit our systems and other Internet systems as well. So, this is an arms race. Right? And they're going to keep on getting better at this, and we need to invest in keeping on getting better at this too, which is why one of the things I mentioned before is, we're going to have more than 20,000 people by the end of this year working on security and content review across the company.
FEINSTEIN: Speak for a moment about automated bots that spread disinformation. What are you doing to punish those who exploit your platform in that regard?
ZUCKERBERG: Well, you're not allowed to have a fake account on Facebook. Your content has to be authentic.
So we build technical tools to try to identify when people are creating fake accounts, especially large networks of fake accounts, like the Russians have, in order to remove all of that content. After the 2016 election, our top priority was protecting the integrity of other elections around the world.
But at the same time, we had a parallel effort to trace back to Russia the IRA activity, the Internet Research Agency activity. That was the part of the Russian government that did this activity in 2016. And just last week, we were able to determine that a number of Russian media organizations that were sanctioned by the Russian regulator were operated and controlled by this Internet Research Agency.
So, we took the step last week -- it was a pretty big step for us -- of taking down sanctioned news organizations in Russia as part of an operation to remove 270 fake accounts and pages, part of their broader network in Russia that was actually not targeting international interference, as much as -- I'm sorry -- let me correct that.
It was primarily targeting spreading misinformation in Russia itself, as well as certain Russian-speaking neighboring countries.
FEINSTEIN: How many accounts of this type have you taken down?
ZUCKERBERG: Across -- in the IRA specifically, the ones that we have pegged back to the IRA, we can identify 470 in the American elections and the 270 that we specifically went after in Russia last week.
There are many others that our systems catch which are more difficult to attribute specifically to Russian intelligence. But the number would be in the tens of thousands of fake accounts that we remove, and I'm happy to have my team follow up with you with more information, if that would be helpful.
FEINSTEIN: Would you please? I think this is very important.
If you knew in 2015 that Cambridge Analytica was using Professor Kogan's information, why didn't Facebook ban Cambridge Analytica in 2015? Why did you wait...
(CROSSTALK)
ZUCKERBERG: Senator, that's a great question.
Cambridge Analytica wasn't using our services in 2015, as far as we can tell. So this is clearly one of the questions that I asked our team as soon as I learned about this, is, why did we wait until we found out about the reports last month to ban them?
[15:20:08]
It's because, as of the time that we learned about their activity in 2015, they weren't an advertiser. They weren't running pages. So we actually had nothing to ban.
FEINSTEIN: Thank you.
Thank you, Mr. Chairman.
GRASSLEY: Thank you, Senator Feinstein.
Now Senator Hatch.
SEN. ORRIN HATCH (R), UTAH: Well, in my opinion, this is the most intense public scrutiny I have seen for a tech-related hearing since the Microsoft hearing that I chaired back in the late 1990s.
The recent stories about Cambridge Analytica and data-mining on social media have raised serious concerns about consumer privacy, and, naturally, I know you understand that. At the same time, these stories touch on the very foundation of the Internet economy and the way the Web sites that drive our Internet economy make money.
Some have professed themselves shocked, shocked that companies like Facebook and Google share user data with advertisers. Did any of these individuals ever stop to ask themselves why Facebook and Google don't charge for access? Nothing in life is free.
Everything involves trade-offs. If you want something without having to pay money for it, you're going to have to pay for it in some other way, it seems to me. And that's where -- what we're seeing here. And these great Web sites that don't charge for access, they extract value in some other way.
And there's nothing wrong with that, as long as they're up front about what they're doing. In my mind, the issue here is transparency. It's consumer choice. Do users understand what they're agreeing to when they access a Web site or agree to terms of service?
Are Web sites up front about how they extract value from users, or do they hide the ball? Do consumers have the information they need to make an informed choice regarding whether or not to visit a particular Web site?
To my mind, these are the questions that we should be asking and focusing on.
Now, Mr. Zuckerberg, I remember well your first visit to Capitol Hill back in 2010. You spoke to the Senate Republican High-Tech Task Force, which I chair. You said back then that Facebook would always be free. Is that still your objective?
ZUCKERBERG: Senator, yes. There will always be a version of Facebook that is free. It is our mission to try to help connect everyone around the world and to bring the world closer together. In order to do that, we believe that we need to offer a service that everyone can afford, and we're committed to doing that.
HATCH: Well, if so, how do you sustain a business model in which users don't pay for your service?
ZUCKERBERG: Senator, we run ads.
HATCH: I see. That's great.
Whenever a controversy like this arises, there's always a danger that Congress' response will be to step in and over-regulate. Now, that's been the experience that I have had in my 42 years here.
In your view, what sorts of legislative changes would help to solve the problem the Cambridge Analytica story has revealed? And what sorts of legislative changes would not help to solve this issue?
ZUCKERBERG: Senator, I think that there are a few categories of legislation that make sense to consider.
Around privacy specifically, there are a few principles that I think it would be useful to discuss and potentially codify into law. One is around having a simple and practical set of ways that you explain what you're doing with data.
And we talked a little bit earlier about the complexity of laying out these long privacy policies. It's hard to say that people fully understand something when it's only written out in a long legal document. This stuff needs to be implemented in a way where people can actually understand it, where consumers can understand it, but that can also capture all the nuances of how these services work in a way that is not overly restrictive on providing the services.
That's one.
The second is around giving people complete control. This is the most important principle for Facebook. Every piece of content that you share on Facebook, you own, and you have complete control over who sees it, and how you share it. And you can remove it at any time.
That's why, about 100 billion times a day, people come to one of our services and either post a photo or send a message to someone, because they know that they have that control and that whoever they say it's going to go to is who will see the content. And I think that that control is something that's important.
That, I think, should apply to every service. And...
HATCH: Go ahead.
ZUCKERBERG: The third point is just around enabling innovation, because some of these use cases, like face recognition, for example, are very sensitive.
[15:25:05]
And I think that there's a balance that's extremely important to strike here, where you obtain special consent for sensitive features like face recognition, but we still need to make it so that American companies can innovate in those areas, or else we're going to fall behind Chinese competitors and others around the world who have different regimes for new features like that.
GRASSLEY: Senator Cantwell?
SEN. MARIA CANTWELL (D), WASHINGTON: Thank you, Mr. Chairman.
Welcome, Mr. Zuckerberg.
Do you know who Palantir is?
ZUCKERBERG: I do.
CANTWELL: Some people have referred to them as a Stanford Analytica.
Do you agree?
ZUCKERBERG: Senator, I have not heard that.
CANTWELL: OK. Do you think Palantir taught Cambridge Analytica, as press reports are saying, how to do these tactics?
ZUCKERBERG: Senator, I don't know.
CANTWELL: Do you think that Palantir has ever scraped data from Facebook?
ZUCKERBERG: Senator, I'm not aware of that.
CANTWELL: OK.
Do you think that, during the 2016 campaign, as Cambridge Analytica was providing support to the Trump campaign under Project Alamo, were there any Facebook people involved in that sharing of technique and information?
ZUCKERBERG: Senator, we provided support to the Trump campaign, similar to what we provide to any advertiser or campaign who asks for it.
CANTWELL: So, that was a yes? Is that a yes?
ZUCKERBERG: Senator, can you repeat the specific question? I just want to make sure I get specifically what you're asking.
CANTWELL: During the 2016 campaign, Cambridge Analytica worked with the Trump campaign to refine tactics. Were Facebook employees involved in that?
ZUCKERBERG: Senator, I don't know that our employees were involved with Cambridge Analytica, although I know that we did help out the Trump campaign overall in sales support, in the same way that we do with other campaigns.
CANTWELL: So, they may have been involved and all working together during that time period? Maybe that's something your investigation will find out?
ZUCKERBERG: Senator, my -- I can certainly have my team get back to you on any specifics there that I don't know sitting here today.
CANTWELL: Have you heard of Total Information Awareness? Do you know what I'm talking about?
ZUCKERBERG: No, I do not.
CANTWELL: OK.
Total Information Awareness was, in 2003, John Ashcroft and others trying to do similar things to what I think is behind all of this: geopolitical forces trying to get data and information to influence a process.
So, when I look at Palantir and what they're doing, and I look at WhatsApp, which is another acquisition, and I look at where you are from the 2011 consent decree and where you are today, I'm thinking, is this guy outfoxing the foxes, or is he going along with what is a major trend in an information age, to try to harvest information for political forces?
And so my question to you is, do you see that those applications, that those companies, Palantir and even WhatsApp, are going to fall into the same situation that you have just fallen into over the last several years?
ZUCKERBERG: Senator, I'm not -- I'm not sure specifically.
Overall, I do think that these issues around information access are challenging. As to the specifics about those apps, I'm not really that familiar with what Palantir does. WhatsApp collects very little information, and I think it is less likely to have those kinds of issues because of the way that the service is architected.
But, certainly, I think that these are broad issues across the tech industry.
CANTWELL: Well, I guess, given the track record, where Facebook is and why you're here today, I guess people would say that they didn't act boldly enough.
And the fact that people like John Bolton basically were investors -- a "New York Times" article earlier, I guess it was actually last month, said that the Bolton PAC was obsessed with how America was becoming limp-wristed and spineless, and that it wanted research and messaging for national security issues.
So, the fact that there are a lot of people who are interested in this larger effort, and what I think my constituents want to know is, was this discussed at your board meetings?
And what are the applications and interests that are being discussed without putting real teeth into this? We don't want to come back to this situation again.
I believe you have all the talent. My question is whether you have all the will to help us solve this problem.
ZUCKERBERG: Yes, Senator.