
Column: In another attack on Western society, Russian trolls sow doubt about vaccines


Efforts by Russian trolls online to influence the 2016 presidential election and the 2016 “Brexit” vote prompting Britain to leave the European Union have been widely reported. But another effort by the Russians to infect social media with toxic misinformation and propaganda may have an even more far-reaching effect.

That’s a troll campaign to sow doubt about vaccines. Two recent reports have documented the incursion of the notorious Internet Research Agency, a St. Petersburg-based “troll farm” linked to the Russian government, into the vaccine debate.

According to the Times of London, the disinformation included a spurious report of widespread deaths of children in Mexico after vaccinations, and online posts in support of Andrew Wakefield, the former British doctor whose discredited 1998 paper linking the MMR vaccine to autism has sown persistent and unfounded doubts about the vaccine ever since. Wakefield was stripped of his British medical license, and the paper was retracted after its flaws were uncovered.



In an analysis of social media accounts posting items on vaccines, the Times reported finding “scores of accounts, either known or suspected to be tied to Russia, that churned out anti-vaccine tweets, including support for Mr Wakefield.”

The newspaper’s findings parallel those of a team of researchers at George Washington University, the University of Maryland and Johns Hopkins who analyzed 1.8 million tweets posted on Twitter between July 14, 2014, and Sept. 26, 2017. Their paper, published last week in the American Journal of Public Health, is titled “Weaponized Health Communication: Twitter Bots and Russian Trolls Amplify the Vaccine Debate.”


It reports that a large percentage of tweets devoted to the vaccination issue came either from "bots" (automated accounts, some of them designed to deposit malware on users' computers) or from Russian trolls, including the Internet Research Agency. Malware refers to applications or programs that can secretly expose a computer user's personal information to third parties or otherwise interfere with a computer system for an unauthorized user's profit.

“Whereas bots that spread malware and unsolicited content disseminated anti-vaccine messages,” the paper observed, “Russian trolls promoted discord.” They often did so by falsely asserting that the anti-vaccine case is equal in scientific legitimacy to pro-vaccine orthodoxy. “Accounts masquerading as legitimate users,” the paper says, “create false equivalency, eroding public consensus on vaccination.”

The doubts these accounts sow are distinctly dangerous to public health. Outbreaks of measles, one of the diseases covered by the MMR (measles, mumps and rubella) vaccine, periodically occur in the United States, Britain and other developed countries. They’re typically associated with low vaccination rates in some communities that make their residents vulnerable to travelers carrying the infection from other lands; the U.S. outbreak of 2014-15 is thought to have begun with a traveler who infected unvaccinated parkgoers at Disneyland.


Concerns about a new outbreak have been voiced in Britain, where vaccination rates have remained in the inordinately low 80%-90% range ever since the Wakefield paper. As it happens, the anti-vaccine propaganda promoted in the West by Russian trolls hasn't taken root in Russia itself; that country, according to the World Health Organization, has a 100% vaccination rate for almost all vaccines, including measles, higher than in Britain or the U.S.

The academic researchers found that bots and trolls have different apparent motivations and techniques, but what they share is opportunism. Take the Internet Research Agency, which was indicted in February, along with 13 Russian individuals, including 12 associated with the organization, by special counsel Robert S. Mueller III. The indictment charged them with using stolen identities and U.S. social media platforms such as Facebook and Twitter to “sow discord in the U.S. political system, including the 2016 U.S. presidential election.”

For them, the conflict over vaccines, which pits the established science showing that vaccines are safe against unscientific supposition and emotional opposition to childhood vaccination, is a gift. The academic researchers focused partially on hundreds of tweets using the hashtag #VaccinateUS, which they found were invariably identified as products of the Internet Research Agency. Those tweets were actually fairly evenly divided between pro-vaccine and anti-vaccine messages, which the paper says seems aimed at muddying the waters of a scientific issue.

“Whether they’re specifically trying to make us distrust the medical system or just get us to fight more is unclear,” Mark Dredze, a computer scientist at Johns Hopkins and co-author of the paper, told me. “But certainly they’ve identified this issue as a contentious one, and they’re promoting that contention.”

The bots, by contrast, may be mostly looking for clicks, which they can exploit to deposit malware. They attract anti-vaccine social media users not necessarily because those users are unusually susceptible to the bots’ message, but because they spend more time engaged online than pro-vaxxers, Dredze says.


“The anti-vaccine community is talking about these things more, so you’re more likely to get clicks on a daily basis if the link is anti-vaccine,” Dredze says. “And if you’re trying to distribute malware, the name of the game is getting people to click on your links.”

What can be done about the outbreak of infectious tweets and online messages? Dredze and his co-authors suggest that public health officials try emphasizing that the sources of anti-vaccine messages have dubious credibility and may be out only to compromise users’ computers.

Dredze himself says he’s optimistic that managers of social media platforms will eventually refine their techniques for blocking and neutralizing bots and trolls, much as ever more sophisticated spam filters have reduced the burden of email spam for the average user.

“It’s only in the past year or two that Facebook and Twitter have gotten serious about this problem, so it’s going to take them a while to figure it out,” he told me. “But there are some behaviors that humans are susceptible to—some people will always fall for email scams, and that’s never going to change. There are some real core questions about how as a society we use these platforms and how we trust information. Those things are exacerbated by Twitter and Facebook, but they’re not caused by Twitter and Facebook.”

Keep up to date with Michael Hiltzik. Follow @hiltzikm on Twitter, see his Facebook page, or email [email protected].

