
Facebook cracked down ahead of the Chauvin verdict. Why not always?

A demonstrator outside of the home of Facebook CEO Mark Zuckerberg in November 2020. The company is bracing for possible violence following a verdict in the trial of Derek Chauvin.
(Associated Press)

As lawyers for both sides offered their closing statements in the trial of Derek Chauvin on Monday, a thousand miles away, executives at Facebook were preparing for the verdict to drop.

Seeking to avoid incidents like the one last summer in which 17-year-old Kyle Rittenhouse shot and killed two protesters in Kenosha, Wis., the social media company said it would take actions aimed at “preventing online content from being linked to offline harm.”

(Chauvin is the former Minneapolis police officer found guilty Tuesday of the second-degree murder of George Floyd last May; the Kenosha shootings took place in August 2020 after a local militia group called on armed civilians to defend the city amid protests against the police shooting of another Black man, Jacob Blake.)


As precautions, Facebook said it would “remove Pages, groups, Events and Instagram accounts that violate our violence and incitement policy,” and would also “remove events organized in temporary, high-risk locations that contain calls to bring arms.” It also promised to take down content violating prohibitions on “hate speech, bullying and harassment, graphic violence, and violence and incitement,” as well as “limit the spread” of posts its system predicts are likely to later be removed for violations.

“Our teams are working around the clock to look for potential threats both on and off of Facebook and Instagram so we can protect peaceful protests and limit content that could lead to civil unrest or violence,” Monika Bickert, Facebook’s vice president of content policy, wrote in a blog post.


But in demonstrating the power it has to police problematic content when it feels a sense of urgency, Facebook invited its many critics to ask: Why not take such precautions all the time?


“Hate is an ongoing problem on Facebook, and the fact that Facebook, in response to this incident, is saying that it can apply specific controls to emergency situations means that there is more that they can do to address hate, and that … for the most part, Facebook is choosing not to do so,” said Daniel Kelley, associate director of the Anti-Defamation League’s Center for Technology and Society.

“It’s really disheartening to imagine that there are controls that they can put in place around so-called ‘emergency situations’ that would increase the sensitivity of their tools, their products, around hate and harassment [generally].”

This isn’t the only time Facebook has “turned up the dials” in anticipation of political violence. Just this year, it has taken similar steps around President Biden’s inauguration, the coup in Myanmar and India’s elections.

Facebook declined to discuss why these measures aren’t the platform’s default, or what the downside of keeping them in place permanently would be. In a 2018 essay, Chief Executive Mark Zuckerberg said content that flirts with violating site policies receives more engagement in the form of clicks, likes, comments and shares. Zuckerberg called it a “basic incentive problem” and said Facebook would reduce distribution of such “borderline content.”


Central to Facebook’s response seems to be its designation of Minneapolis as a temporary “high-risk location” — a status the company said may be applied to additional locations as the situation in Minneapolis develops. Facebook has previously described comparable moderation efforts as responses specifically geared toward “countries at risk of conflict.”

“They’re trying to get ahead of … any kind of outbreak of violence that may occur if the trial verdict goes one way or another,” Kelley said. “It’s a mitigation effort on their part, because they know that this is going to be … a really momentous decision.”

He said Facebook needs to make sure it doesn’t interfere with legitimate discussion of the Chauvin trial — a balance the company has more than enough resources to strike, he added.


Another incentive for Facebook to handle the Chauvin verdict with extreme caution is to avoid feeding into the inevitable criticism of its impending decision about whether former President Trump will remain banned from the platform. Trump was kicked off earlier this year for his role in the Jan. 6 Capitol riot; the case is now being decided by Facebook’s third-party oversight board.

Shireen Mitchell — founder of Stop Online Violence Against Women and a member of “The Real Facebook Oversight Board,” a Facebook-focused watchdog group — sees the steps being taken this week as an attempt to preemptively “soften the blow” of that decision.

Trump, “who has incited violence, including an insurrection; has targeted Black people and Black voters; is going to get back on their platform,” Mitchell predicted. “And they’re going to in this moment pretend like they care about Black people by caring about this case. That’s what we’re dealing with, and it’s such a false flag over decades of … the things that they’ve done in the past, that it’s clearly a strategic action.”

As public pressure mounts for web platforms to strengthen their moderation of user content, Facebook isn’t the only company that has developed powerful moderation tools and then faced questions as to why it only selectively deploys them.

Earlier this month, Intel faced criticism and mockery over “Bleep,” an artificially intelligent moderation tool aimed at giving gamers more granular control over what sorts of language they encounter via voice chat — including sliding scales for how much misogyny and white nationalism they want to hear, and a button to toggle the N-word on and off.

And this week, Nextdoor launched an alert system that notifies users if they try to post something racist — but doesn’t actually stop them from publishing it.


