Op-Ed: Elon Musk’s Twitter shows its dark side
For Twitter’s 200 million-plus daily active users, it’s been a long, strange week and a half. Elon Musk, billionaire, CEO and Twitter superfan, spent $44 billion on a much-contested, overvalued and at times dubious acquisition of his very favorite corner of the internet. He has fired Twitter’s top leadership and half the rest of the staff; beyond that, exactly what he plans to do with his new toy is murky at best.
Except for one thing: Among Musk’s many antics and public pronouncements since acquisition talks began in April, he has consistently cited the obliteration of content moderation on Twitter as a primary motive for the purchase.
For those in the social media industry and those, like me, who study it, what Musk views as content moderation appears remarkable for its narrowness. It focuses mostly on takedowns of controversial claims, language or account holders, ignoring the moderation work that removes destructive spam and bots. To social media experts, Musk’s disdain for Twitter’s rules comes off as naive, and his desire for near-absolute “free speech” on the website as a misguided impossibility.
Despite what critics like Musk seem to think, content moderation is far from a partisan tool of the woke mob.
Done well, content moderation requires an expansive, interrelated and cross-company system of people, policy and practices. It must adhere to legal mandates that differ country by country and that can carry costly fines. It must encourage the widest possible user participation and at the same time reduce the potential for user harm from that participation. And it must constantly refine computational tools, automation and the human judgment required to meet those ends.
The night before Musk formally took charge at Twitter, he crowed to his 115 million followers, “the bird is freed.” It was as good as announcing that he didn’t know what he didn’t know.
Twitter’s moderation standards already erred toward permissiveness, especially when weighed against its closest market peers. For example, unlike many other platforms, Twitter allows users to circulate consensual sexual content featuring adults, giving an outlet to people who enjoy such material while also taking seriously anything that crosses a legal line or violates policies against such things as gratuitous violence, threats, self-harm and the abuse or torture of animals.
Twitter’s former top lawyer and policy chief, Vijaya Gadde, had an outsize role in establishing and policing its expansive but still safe rules. She is as well-known for fighting in court for a user’s right to post as she is for banning @realDonaldTrump after the Jan. 6 attack on the U.S. Capitol.
Musk fired Gadde in his first round of axings.
In all, Twitter’s new chief executive hacked the staff in half by Friday morning. In a thread he posted, Yoel Roth, the head of content moderation who has now reportedly resigned, tried to reassure doubters that the site’s “core” practices were in place. His in-house “Trust & Safety” team, he tweeted, had been reduced by just 15%, and the front-line workers — the globally dispersed outside contractors who do the bulk of Twitter’s moderation work — by less than that.
Most of us wouldn’t be able to stomach what these human moderators see over and over, every day. It is a sad but universal truth that there are enough people interested in uploading and circulating this kind of stuff that a social media company needs to employ a small army of low-wage, low-status workers to deal with it. Twitter’s small army just got smaller.
Even before the corporate bloodletting, Musk’s Twitter began to show its dark side. Montclair State University researchers clocked an “immediate, visible and measurable spike” in hate speech on the site during the first 12 hours of Musk’s ownership.
Musk addressed an open letter to advertisers trying to allay their jitters over the possible reputational degradation of the site. Twitter, he said, would not descend into an unmoderated “free-for-all hellscape.” Nonetheless, major advertisers — General Mills, Volkswagen and General Motors among others — “paused” their participation.
Last Thursday, Musk tried again to calm advertisers’ fears. “Elon, Great chat yesterday,” marketer Lou Paskalis tweeted Friday. “As you heard overwhelmingly from senior advertisers on the call, the issue concerning us all is content moderation and its impact on BRAND SAFETY/SUITABILITY. You say you’re committed to moderation, but you just laid off 75% of the moderation team!”
Musk didn’t tweet a correction on that percentage; he just blocked Paskalis. He also threatened to “name and shame” specific brands that had pulled ads and he blamed “activist” groups for the loss of advertising dollars.
What’s the worst that could happen? What are the advertisers, users and others worried about? A Twitter where anything goes, where a mercurial and arrogant decision maker with no experience in running a social media site polls his fans for product ideas and moves policy and moderation boundaries at will.
Musk keeps pointing out that Twitter’s moderation hasn’t changed. Yet. He swears he’ll appoint a diverse moderation council to replace old Twitter’s system. But with Twitter bleeding $4 million a day, according to Musk’s tweets, will anyone left be willing to go to the mat the next time @kanyewest, for example, uses his account to project “death con 3” on Jews?
An apt analogy for the new, Elonian Twitter would be a car with iffy brakes, speeding down a road with no guardrails. But that might be lost on Musk; he has thus far been relatively unperturbed about spontaneously combusting Teslas and troubled autopilot programming that in one set of tests reportedly failed to recognize the shape of a moving child in its path.
Just before Musk’s takeover of Twitter was finalized, sharp-eyed users noted that he had changed his profile, anointing himself “Chief Twit.” After 12 days of staff bloodletting, revenue missteps, abrupt policy shifts and general Twitter chaos, we now can all say: Hail to the chief.
Sarah T. Roberts is an associate professor of gender studies, information studies and labor studies, and is faculty director of the Center for Critical Inquiry at UCLA. She is the author of “Behind the Screen: Content Moderation in the Shadows of Social Media.”