At This Hour

Facebook Whistleblower Testifies Before Senate. Aired 11-11:30am ET

Aired October 05, 2021 - 11:00   ET

THIS IS A RUSH TRANSCRIPT. THIS COPY MAY NOT BE IN ITS FINAL FORM AND MAY BE UPDATED.


SEN. JOHN THUNE (R-SD): In my view, we should encourage employees in the tech sector, like you, to speak up about questionable practices of big tech companies, so we can, among other things, ensure that Americans are fully aware of how social media platforms are using artificial intelligence and opaque algorithms to keep them hooked on the platform.

So let me, Ms. Haugen, just ask you: we've learned from the information that you provided that Facebook conducts what's called engagement-based ranking, which you've described as very dangerous. Could you talk more about why engagement-based ranking is dangerous? And do you think Congress should seek to pass legislation like the Filter Bubble Transparency Act, which would give users the ability to avoid engagement-based ranking altogether?

FRANCES HAUGEN, FACEBOOK WHISTLEBLOWER: Facebook is going to say, you don't want to give up engagement-based ranking; you're not going to like Facebook as much if we're not picking out the content for you, your feed is going to have lots of spam. Like, let's say, imagine we ordered our feeds by time, like on iMessage, or on -- there are other forms of social media that are chronologically based. They're going to say, you're going to get spammed, like, you're not going to enjoy your feed.

The reality is that those experiences have a lot of permutations. There are ways that we can make those experiences where computers don't regulate what we see; we, together, socially regulate what we see. But they don't want us to have that conversation, because Facebook knows that when they pick out the content that we focus on using computers, we spend more time on their platform, and they make more money.

The dangers of engagement-based ranking are that Facebook knows that content that elicits an extreme reaction from you is more likely to get a click, a comment, or a reshare. And it's interesting, because those clicks and comments and reshares aren't even necessarily for your benefit. It's because they know that other people will produce more content if they get the likes and comments and reshares.

They prioritize content in your feed so that you will give little hits of dopamine to your friends, so they will create more content. And they have run experiments on people, producer-side experiments, where they have confirmed this.
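[Editor's note: a minimal sketch of the mechanism Ms. Haugen describes -- engagement-based ranking versus a chronological feed. The field names and scoring weights below are illustrative assumptions, not Facebook's actual system.]

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    created_at: float      # Unix timestamp
    pred_clicks: float     # model's estimate of clicks this post will draw
    pred_comments: float   # ... and comments
    pred_reshares: float   # ... and reshares

def chronological_feed(posts):
    """Time-ordered feed: newest first, no reaction prediction involved."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def engagement_ranked_feed(posts):
    """Engagement-based ranking: content predicted to elicit the strongest
    reactions rises to the top, regardless of recency. Weights are made up."""
    def score(p):
        return 1.0 * p.pred_clicks + 2.0 * p.pred_comments + 3.0 * p.pred_reshares
    return sorted(posts, key=score, reverse=True)

feed = [Post("calm", 2.0, 5, 1, 0), Post("outrage", 1.0, 50, 20, 9)]
print([p.author for p in chronological_feed(feed)])      # ['calm', 'outrage']
print([p.author for p in engagement_ranked_feed(feed)])  # ['outrage', 'calm']
```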

THUNE: So, from the information you provided to the Wall Street Journal, it's been found that Facebook altered its algorithm in an attempt to boost these meaningful social interactions, or MSI. But rather than strengthening bonds between family and friends on the platform, the algorithm instead rewarded more outrage and sensationalism.

And I think Facebook would say that its algorithms are used to connect individuals with other friends and family in ways that are largely positive. Do you believe that Facebook's algorithms make its platform a better place for most users? And should consumers have the option to use Facebook and Instagram without being manipulated by algorithms designed to keep them engaged on the platform?

HAUGEN: I strongly believe -- like, I've spent most of my career working on systems like engagement-based ranking. When I come to you and say these things, I'm basically damning 10 years of my own work, right? Engagement-based ranking -- Facebook says, we can do it safely because we have AI.

You know, the artificial intelligence will find the bad content that we know our engagement-based ranking is promoting. They've written blog posts on how they know engagement-based ranking is dangerous, but the AI will save us.

Facebook's own research says they cannot adequately identify dangerous content. And as a result, those dangerous algorithms that they admit are picking up the extreme sentiments, the division -- they can't protect us from the harms that they know exist in their own system. And so, I don't think it's just a question of saying, should people have the option of choosing to not be manipulated by their algorithms?

I think if we had appropriate oversight, or if we reformed Section 230 to make Facebook responsible for the consequences of their intentional ranking decisions, I think they would get rid of engagement-based ranking, because it is causing teenagers to be exposed to more anorexia content, it is pulling families apart.

And in places like Ethiopia, it's literally fanning ethnic violence. I encourage reform of these platforms, not picking and choosing individual ideas, but instead making the platforms themselves safer, less twitchy, less reactive, less viral, because that's how we scalably solve these problems.

THUNE: Thank you, Mr. Chair. I would simply say, let's, let's get to work. So, we've got some things we can do here. Thanks. I agree.

BLUMENTHAL: Thank you. Senator Schatz.

SEN. BRIAN SCHATZ (D-HI): Thank you, Mr. Chairman, Ranking Member, thank you for your courage in coming forward. Was there a particular moment when you came to the conclusion that reform from the inside was impossible and that you decided to be a whistleblower?

HAUGEN: There was a long series of moments where I became aware that Facebook, when faced with conflicts of interest between its own profits and the common good, public safety, consistently chose to prioritize its profits.

[11:05:16]

I think the moment which I realized we needed to get help from the outside, that the only way these problems would be solved is by solving them together, not solving them alone, was when civic integrity was dissolved following the 2020 election. It really felt like a betrayal of the promises that Facebook had made to people who had sacrificed a great deal to keep the election safe, by basically dissolving our community and integrating it into other parts of the company.

SCHATZ: And I know their response is that they've sort of distributed the duties. That's an excuse, right?

HAUGEN: I cannot see into the hearts of other men. And I don't know what --

SCHATZ: Well, let me say it this way, it won't work, right?

HAUGEN: I can tell you that when I left the company, the people who I worked with -- maybe 75% of my pod of seven people, product managers, program managers -- most of them came from civic integrity.

All of us left the inauthentic behavior pod, either for other parts of the company or the company entirely, over the same six-week period of time. So, six months after the reorganization, we had clearly lost faith that those changes were coming.

SCHATZ: You said in your opening statement that they know how to make Facebook and Instagram safer. So, a thought experiment: you are now the Chief Executive Officer and Chairman of the company. What changes would you immediately institute?

HAUGEN: I would immediately establish a policy of how to share information and research from inside the company with appropriate oversight bodies like Congress. I would give proposed legislation to Congress saying here's what an effective oversight agency would look like.

I would actively engage with academics to make sure that the people who are confirming whether Facebook's marketing messages are true have the information they need to confirm these things. And I would immediately implement the "soft interventions" that were identified to protect the 2020 election.

So, that's things like requiring someone to click on a link before resharing it, because other companies like Twitter have found that that significantly reduces misinformation. No one is censored by being forced to click on a link before resharing it.
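[Editor's note: a minimal sketch of the "click before reshare" soft intervention Ms. Haugen describes. The in-memory click log and function names are hypothetical, not any platform's real API.]

```python
clicked_urls = set()   # holds (user_id, url) pairs the user has actually opened

def record_click(user_id: str, url: str) -> None:
    clicked_urls.add((user_id, url))

def attempt_reshare(user_id: str, url: str) -> str:
    # Soft intervention: the reshare goes through only after the user has
    # opened the link. Nothing is removed or censored; the action is
    # simply gated behind one extra click.
    if (user_id, url) in clicked_urls:
        return f"{user_id} reshared {url}"
    return f"Open {url} before resharing it."

print(attempt_reshare("alice", "https://example.com/story"))  # prompted to open first
record_click("alice", "https://example.com/story")
print(attempt_reshare("alice", "https://example.com/story"))  # now allowed
```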

SCHATZ: Thank you, I want to pivot back to Instagram's targeting of kids. We all know that they announced a pause. But that reminds me of what they announced when they were going to issue a digital currency. And they got beat up by the U.S. Senate Banking Committee. And they said, never mind. And now they're coming back around hoping that nobody notices that they are going to try to issue a currency.

Now, let's set aside for the moment the sort of business model, which appears to be gobble up everything, do everything. That's the growth strategy. Do you believe that they're actually going to discontinue Instagram Kids, or are they just waiting for the dust to settle?

HAUGEN: I would be sincerely surprised if they do not continue working on Instagram Kids. And I would be amazed if a year from now we don't have this conversation again.

SCHATZ: Why?

HAUGEN: Facebook understands that if they want to continue to grow, they have to find new users. They have to make sure that the next generation is just as engaged with Instagram as the current one. And the way they'll do that is by making sure that children establish habits before they have good self-regulation.

SCHATZ: By hooking kids?

HAUGEN: By hooking kids. I would like to emphasize, one of the documents that we sent in, on problematic use, examines the rates of problematic use by age, and that peaks with 14-year-olds. It's just like cigarettes; teenagers don't have good self-regulation. They say explicitly, I feel bad when I use Instagram and yet I can't stop. We need to protect the kids.

SCHATZ: Just my final question. I have a long list of misstatements, misdirections and outright lies from the company. I don't have time to read them but you're as intimate with all of these deceptions as I am, so I will just jump to the end. If you were a member of this panel, would you believe what Facebook is saying?

HAUGEN: I would not. Facebook has not earned our right to just have blind trust in them. Last week, one of the most beautiful things that I heard in the Committee was that trust is earned, and Facebook has not earned our trust.

SCHATZ: Thank you.

[11:10:01]

BLUMENTHAL: Thanks, Senator Schatz. Senator Moran, and then we've been joined by the Chair, Senator Cantwell; she'll be next. We're going to break at about 11:30, if that's OK, because we have a vote, and then we'll reconvene.

HAUGEN: OK.

SEN. JERRY MORAN (R-KS): Mr. Chairman, thank you. The conversation so far reminds me that you and I ought to resolve our differences and introduce legislation. So, as Senator Thune said, let's go to work.

BLUMENTHAL: Our differences are very minor. Well, they seem very minor in the face of the revelations that we've now seen. So, I'm hoping we can move forward, Senator Moran.

MORAN: I share that view, Mr. Chairman, thank you. Thank you very much for your testimony. What examples do you know of -- we've talked about children, teenage girls specifically -- but what other examples do you know about where Facebook or Instagram knew its decisions would be harmful to its users but still proceeded with the plan and executed that harmful behavior?

HAUGEN: Facebook's internal research is aware that there are a variety of problems facing children on Instagram -- they know that severe harm is happening to children. For example, in the case of bullying, Facebook knows that Instagram dramatically changes the experience of high school. So, when we were in high school, when I was in high school, most kids have --

MORAN: You looked at me, you're reporting.

HAUGEN: Sorry. When I was in high school, you know, most kids had positive home lives. Like, it doesn't matter how bad it is at school, kids can go home and reset for 16 hours. Kids who are bullied on Instagram -- the bullying follows them home. It follows them into their bedrooms.

The last thing they see before they go to bed at night is someone being cruel to them, or the first thing they see in the morning is someone being cruel to them. Kids are learning that their own friends, people who care about them, are cruel to them. Like, think about how that's going to impact their domestic relationships when they become 20-somethings or 30-somethings, to believe that people who care about you are mean to you.

Facebook knows that parents today, because they didn't experience these things -- they never experienced this addictive experience with a piece of technology -- give their children bad advice. They say things like, why don't you just stop using it.

And so, Facebook's own research is aware that children express feelings of loneliness and struggle with these things, because they can't even get support from their own parents. I don't understand how Facebook can know all these things and not escalate it to someone like Congress for help and support in navigating these problems.

MORAN: Let me ask the question in a broader way: besides teenagers, or besides girls, or besides youth, are there other practices at Facebook or Instagram that are known to be harmful but yet are pursued?

HAUGEN: Facebook is aware of the choices it made in establishing meaningful social interactions: engagement-based ranking that didn't care if you bullied someone or committed hate speech in the comments -- that still counted as meaningful.

They know that that change directly changed publishers' behavior. Companies like BuzzFeed wrote in and said, the content that is most successful on our platform is some of the content we're most ashamed of; you have a problem with your ranking. And they did nothing. They know that politicians are being forced to take positions they know their own constituents don't like or approve of, because those are the ones that get distributed on Facebook.

That's a huge, huge negative impact. Facebook also knows -- they have admitted in public -- that engagement-based ranking is dangerous without integrity and security systems, but they have not rolled out those integrity and security systems to most of the languages in the world. And that's what's causing things like ethnic violence in Ethiopia.
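[Editor's note: a minimal sketch of the flaw Ms. Haugen describes in the MSI metric -- every comment, reaction, and reshare counts as "meaningful," whether it is supportive or abusive. The weights are illustrative assumptions.]

```python
def msi_score(post_interactions):
    """Count every reaction, comment, and reshare as 'meaningful',
    with no check on whether the interaction is kind or abusive."""
    weights = {"reaction": 1, "comment": 2, "reshare": 3}
    return sum(weights[kind] for kind, _text in post_interactions)

supportive = [("comment", "Congrats!"), ("reaction", "like")]
abusive = [("comment", "You're pathetic"), ("comment", "Nobody likes you"), ("reshare", "")]
print(msi_score(supportive))  # 3
print(msi_score(abusive))     # 7 -- the bullied post scores higher
```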

MORAN: Thank you for your answer. What is the magnitude of Facebook's revenues or profits that come from the sale of user data?

HAUGEN: Oh, I am sorry, I've never worked on that. I'm not aware.

MORAN: Thank you. What regulations or legal actions by Congress or by administrative action, do you think would have the most consequence or be feared most by Facebook, Instagram or allied companies?

HAUGEN: I strongly encourage reforming Section 230 to exempt decisions about algorithms, right? So, modifying 230 around content, I think, is very complicated, because user-generated content is something that companies have less control over. They have 100% control over their algorithms.

And Facebook should not get a free pass on choices it makes to prioritize growth and virality and reactiveness over public safety. They shouldn't get a free pass on that, because they're paying for their profits right now with our safety. So, I strongly encourage reform of 230 in that way.

[11:15:10]

I also believe there needs to be a dedicated oversight body, because right now the only people in the world who are trained to analyze these experiments, to understand what's happening inside of Facebook, are people who, you know, grew up inside of Facebook, or Pinterest, or another social media company. There needs to be a regulatory home where someone like me could do a tour of duty after working at a place like this, and have a place to work on things like regulation, to bring that information out to the oversight bodies that have the right to do oversight.

MORAN: Or a regulatory agency within the federal government?

HAUGEN: Yes.

MORAN: Thank you very much. Thank you, Chairman.

BLUMENTHAL: Senator Cantwell. Thank you, Senator Moran.

SEN. MARIA CANTWELL (D-WA): Thank you, Mr. Chairman, and thank you for holding this hearing. I think my colleagues have brought up a lot of important issues, and so I just want to continue in that vein. First of all, the Privacy Act that I introduced, along with several of my colleagues, actually does have FTC oversight of algorithm transparency in some instances. I'd hope you'd take a look at that and tell us what other areas you think we should add to that level of transparency.

But clearly, that's the issue at hand here, I think, in your coming forward. So, thank you again for your willingness to do that. The documentation that you say now exists is the level of transparency about what's going on that people haven't been able to see. And your information, which you say went up to the highest levels at Facebook, is that they purposely knew that their algorithms were continuing to carry misinformation and hate information.

And that when presented with information about this terminology -- you know, downstream MSI, meaningful social interactions -- they knew that it was this choice: you could curb this wrongheaded information, hate information about the Rohingya, or you could continue to get higher click-through rates.

And I know you said you don't know about profits, but I'm pretty sure you know that on a page, if you click through to the next page, there's a lot more ad revenue than if you didn't click through.

So, you're saying the documents exist showing that, at the highest level at Facebook, you had information discussing these two choices, and that people chose, even though they knew that it was misinformation, and hurtful, and maybe even costing people their lives -- they continued to choose profit.

HAUGEN: We have submitted documents to Congress outlining how Mark Zuckerberg was directly presented with a list of "soft interventions." So, a hard intervention is like taking a piece of content off Facebook, taking a user off Facebook.

Soft interventions are about making slightly different choices to make the platform less viral, less twitchy. Mark was presented with these options and chose to not remove downstream MSI in April of 2020, even just isolated in at-risk countries -- that's countries at risk of violence -- if it had any impact on the overall MSI metric. So, he chose --

CANTWELL: Which in translation means less money?

HAUGEN: Yeah, he said --

CANTWELL: Right? Was there another reason given why they would do it other than they thought it would really affect their numbers?

HAUGEN: I don't know for certain. Like, Jeff Horwitz, the reporter for the Wall Street Journal, and I struggled with this. We sat there and read these minutes, and we were like, how is this possible? Like, we've just read 100 pages on how downstream MSI expands hate speech, misinformation, violence-inciting content, graphic violent content -- why wouldn't you get rid of this?

And the best theory that we've come up with -- and I want to emphasize, this is just our interpretation -- is that people's bonuses are tied to MSI, right? Like, people stay or leave the company based on what they get paid, and if you hurt MSI, a bunch of people weren't going to get their bonuses.

CANTWELL: So, you're saying that this practice even still continues today -- we're still in this environment. I'm personally very frustrated by this, because we presented information to Facebook from one of my own constituents in 2018, talking about this issue with the Rohingya, pleading with the company. We pleaded with the company, and they continued to not address this issue.

Now you're pointing out that these same algorithms are being used, and they know darn well in Ethiopia that it's causing and inciting violence, and again, they are still today choosing profit over taking this information down. Is that correct?

HAUGEN: When rioting began in the United States in the summer of last year, they turned off downstream MSI only for when they detected content was health content, which is probably COVID, and civic content. But Facebook's own algorithms are bad at finding this content. It's still in the raw form for 80, 90% of even that sensitive content, in countries where they don't have integrity systems in the local language.

And in the case of Ethiopia, there are 100 million people in Ethiopia and six languages. Facebook only supports two of those languages for integrity systems. This strategy of focusing on language-specific, content-specific systems -- AI to save us -- is doomed to fail.
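[Editor's note: a minimal sketch of the partial rollback Ms. Haugen describes -- the downstream-MSI boost is removed only for content a classifier actually labels as health or civic, so content in unsupported languages, or anything the classifier misses, keeps the full boost. Labels, language codes, and multipliers are illustrative assumptions.]

```python
INTEGRITY_LANGUAGES = {"en", "am"}   # e.g., classifiers for only 2 of a country's 6 languages

def classify(text: str, language: str) -> str:
    """Stand-in content classifier. Per the testimony, real classifiers miss
    80-90% even of sensitive content, and don't exist for most languages."""
    if language not in INTEGRITY_LANGUAGES:
        return "unlabeled"           # no model for this language at all
    if "vaccine" in text.lower():
        return "health"
    return "unlabeled"

def downstream_msi_boost(text: str, language: str) -> float:
    # The boost is switched off only when sensitive content is detected;
    # everything the classifier misses keeps the full amplification.
    if classify(text, language) in {"health", "civic"}:
        return 1.0
    return 2.0   # hypothetical default downstream-MSI multiplier

print(downstream_msi_boost("vaccine news", "en"))  # 1.0 -- detected, boost off
print(downstream_msi_boost("vaccine news", "om"))  # 2.0 -- unsupported language, boost stays
```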

[11:20:24]

CANTWELL: I need to get to one of the -- first of all, I'm sending a letter to Facebook today; they better not delete any information as it relates to the Rohingya, or investigations about how they proceeded on this, particularly in light of your information and the documents.

But are we also now talking about advertising fraud? Are you selling something to advertisers that's not really what they're getting? We know about this because of the newspaper issues. Journalism basically has to meet a different standard, a public interest standard, and basically has to prove it every day, or they can be sued.

These guys are a social media platform that doesn't have to live with that, and then the consequences. They're telling their advertisers that this was right. We see it -- we see it, people are coming back to local journalism, because they're like, we want to be with the trusted brand; we don't want to be on, you know, your website. So, I think your filing with the SEC is an interesting one.

But I think that we also have to look at what the other issues here are, and one of them is: did they defraud advertisers by telling them this was the advertising content that you were going to be advertised against, when in reality it was something different? It was based on a different model.

HAUGEN: We have multiple examples of questions and answers for the advertising staff, the sales staff, where advertisers asked, after the riots last summer, should we come back to Facebook, or after the insurrection, should we come back to Facebook? And Facebook said in the talking points that they gave to advertisers, we're doing everything in our power to make this safer, or, we take down all the hate speech when we find it. But Facebook's own --

CANTWELL: And that was not true.

HAUGEN: That was not true. They get 3% to 5% of hate speech.

CANTWELL: Thank you. Thank you, Mr. Chairman.

BLUMENTHAL: Thanks, Senator Cantwell. And if you want to make your letter available to other members of the committee, I'd be glad to join you myself.

CANTWELL: Thank you. Thank you.

BLUMENTHAL: And thank you for suggesting it.

CANTWELL: Thank you.

BLUMENTHAL: Senator Lee.

SEN. MIKE LEE (R-UT): Thank you, Mr. Chairman. And thank you Ms. Haugen, for joining us this week. It's very, very helpful. We're grateful that you're willing to make yourself available.

Last week, we had another witness from Facebook, Ms. Davis. She came in and testified before this committee, and she focused on, among other things, the extent to which Facebook targets ads to children, including ads that are either sexually suggestive or geared toward adult-themed products or themes in general.

Now, I appreciated her willingness to be here, but I didn't get the clearest answers in response to some of those questions. And so, I'm hoping that you can help shed some light on some of those issues related to Facebook's advertising processes here today. As we get into this, I want to first read you a quote that I got from Ms. Davis last week.

Here's what she said during her questioning: "when we do ads to young people, there are only three things that an advertiser can target around: age, gender, location. We also prohibit certain ads to young people, including weight loss ads. We don't allow tobacco ads at all to young people; we don't allow them to children, we don't allow them to minors."

Now, since that exchange happened last week, a number of individuals and groups, including a group called the Technology Transparency Project, or TTP, have indicated that part of her testimony was inaccurate -- that it was false. TTP noted that it had conducted an experiment just last month; their goal was to run a series of ads that would be targeted to children ages 13 to 17, to users in the United States.

Now, I want to emphasize that TTP didn't end up running these ads; they stopped them from being distributed to users. But Facebook did, in fact, approve them, and as I understand it, Facebook approved them for an audience of up to 9.1 million users, all of whom were teens. So, I brought a few of these to show you today. This is the first one I wanted to showcase.

This first one has a colorful graphic encouraging kids to "throw a skittles party like no other," which, as the graphic indicates and as the slang jargon also independently suggests, involves kids getting together randomly to abuse prescription drugs. The second graphic displays an "ana tip" -- that is, a tip specifically designed to encourage and promote anorexia.

[11:25:12]

And it's on there. Now, the language of the ana tip itself independently promotes that; the ad also promotes it insofar as it was suggesting these are images you ought to look at when you need motivation to be more anorexic, I guess you could say.

Now, the third one invites children to find their partner online and to make a love connection: "you look lonely, find your partner now to make a love connection."

Now look, it'd be an entirely different kettle of fish if this were targeted to an adult audience. It is not; it's targeted to 13 to 17-year-olds.

Now, obviously, I don't support and TTP does not support these messages, particularly when targeted to impressionable children. And again, just to be clear, TTP did not end up pushing the ads out after receiving Facebook's approval, but it did in fact, receive Facebook's approval.

So, I think this says something -- one could argue that it proves -- that Facebook is allowing, and perhaps facilitating, the targeting of harmful adult-themed ads to our nation's children. So, could you please explain to me, Ms. Haugen, how these ads, with a target audience of 13 to 17-year-old children, could possibly be approved by Facebook? And is AI involved in that?

HAUGEN: I did not work directly on the ad approval system. What was resonant for me about your testimony is that Facebook has a deep focus on scale. So, scale is, can we do things very cheaply for a huge number of people, which is part of why they rely on AI so much. It is very possible that none of those ads were reviewed by a human.

And the reality, as we've seen from repeated documents within my disclosures, is that Facebook's AI systems only catch a very tiny minority of offending content. Best-case scenario, in the case of something like hate speech, at most they will ever get 10 to 20%. In the case of children, that means drug paraphernalia ads like that -- if they rely on computers and not humans, they will also likely never catch more than 10% to 20% of those ads.
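[Editor's note: a small simulation of the scale problem Ms. Haugen describes. If an ad queue is reviewed only by a model that catches 10-20% of violating ads, the large majority of violations are approved. The recall figure comes from her testimony; everything else is hypothetical.]

```python
import random

def automated_review(ads, recall=0.15):
    """Approve or block each ad using only a classifier with the given
    recall -- no human review anywhere in the loop."""
    approved, blocked = [], []
    for ad in ads:
        caught = ad["violating"] and random.random() < recall
        (blocked if caught else approved).append(ad)
    return approved, blocked

random.seed(0)
violating_ads = [{"id": i, "violating": True} for i in range(1000)]
approved, blocked = automated_review(violating_ads)
print(f"{len(approved)} of 1000 violating ads approved")  # roughly 850
```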

LEE: Understood. Mr. Chairman, I've got one minor follow-up question; it should be easy to answer.

BLUMENTHAL: Go ahead.

LEE: So, while Facebook may claim that it only targets ads based on age, gender, and location -- even though these things seem to counteract that, but let's set that aside for a minute -- and that they're not basing ads on specific interest categories, does Facebook still collect interest-category data on teenagers, even if they aren't, at that moment, targeting ads at teens based on those interest categories?

HAUGEN: I think it's very important to differentiate between what targeting advertisers are allowed to specify and what targeting Facebook may learn for an ad. Let's imagine you had some text on an ad; Facebook would likely extract out features that it thought were relevant for that ad. For example, in the case of something about partying, it would learn partying as a concept.

I'm very suspicious of the claim that personalized ads are no longer being delivered to teenagers on Instagram, because the algorithms learn correlations, they learn interactions, where your party ad may still go to kids interested in partying, because Facebook almost certainly has a ranking model in the background that says this person wants more party-related content.

LEE: Interesting. Thank you. That's very helpful. And what that suggests to me is that, while they're saying they're not targeting teens with those ads, the algorithm might do some of that work for them, which might explain why they collect the data even while claiming that they're not targeting those ads in that way.

HAUGEN: I can't speak to whether or not that's the intention. But the reality is, it's very, very, very difficult to understand these algorithms today, and over and over and over again, we saw these biases that the algorithms unintentionally learned. And so, yeah, it's very hard to disentangle these factors as long as you have engagement-based ranking.
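[Editor's note: a minimal sketch of the distinction Ms. Haugen draws -- the advertiser specifies only age, gender, and location, but the delivery system can still extract concepts from the ad text and route it to users whose learned interests match. The keyword table, profile fields, and thresholds are hypothetical.]

```python
CONCEPT_KEYWORDS = {"party": "partying", "skittles": "drug_slang"}

def extract_concepts(ad_text: str) -> set:
    """Step the advertiser never specifies: derive interest concepts from the ad itself."""
    words = set(ad_text.lower().split())
    return {concept for kw, concept in CONCEPT_KEYWORDS.items() if kw in words}

def would_deliver(ad_text: str, targeting: dict, user: dict) -> bool:
    # Explicit targeting: only age / gender / location, per Facebook's claim.
    if not targeting["min_age"] <= user["age"] <= targeting["max_age"]:
        return False
    # Implicit routing: the ranking model prefers users whose learned
    # interests overlap the concepts extracted from the ad.
    return bool(extract_concepts(ad_text) & user["learned_interests"])

teen = {"age": 15, "learned_interests": {"partying"}}
print(would_deliver("Throw a skittles party like no other",
                    {"min_age": 13, "max_age": 17}, teen))  # True
```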

LEE: Thank you, Ms. Haugen.

BLUMENTHAL: Thank you very much, Senator Lee. Senator Markey.

SEN. ED MARKEY (D-MA): Thank you, Mr. Chairman, very much. Thank you, Ms. Haugen. You are a 21st century American hero, warning our country of the danger for young people, for our democracy. And our nation owes you just a huge debt of gratitude.