What happened when machines took over social media

By Mark Scott, Laura Kayali
10/19/2020 01:07 PM EDT
LONDON — Nobody appreciated the content moderators until they were gone.

As the coronavirus pandemic swept the world, social media giants like Facebook, Google and Twitter did what other companies did. They sent workers home — including the tens of thousands of people tasked with sifting through mountains of online material and weeding out hateful, illegal and sexually explicit content.

In their place, the companies turned to algorithms to do the job. It did not go well.

The Covid-driven experiment represented a real-world baptism of fire for something social media companies have long dreamed of: using machine-learning tools and artificial intelligence — not humans — to police posts on their platforms.

When the social media giants announced the changes, they acknowledged the algorithms might struggle to discriminate between legitimate and illegitimate content. And indeed, the effects were almost immediate.

While far more content was flagged and removed for allegedly breaking the companies’ rules on what could be posted online, in some areas dangerous and possibly illegal material was more likely to slip past the machines.

Facebook and Google roughly doubled the amount of potentially harmful material they removed in the second quarter of this year compared with the three months through March, according to the companies’ most recent transparency reports. Twitter has yet to provide figures for 2020.

In Syria, where campaigners and journalists rely on social media to document potential war crimes, scores of activists’ accounts were closed down overnight — often with no right to appeal those decisions. Other content, including news articles and health information linked to the coronavirus, was similarly scrubbed from the internet as the machines got to work.

And yet, a lot of questionable posts remained untouched. In France, campaigners fighting against racism and anti-Semitism noticed a more than 40 percent increase in hate speech on Twitter. Less than 12 percent of those posts were removed, the groups said.

In other high-profile areas, like child exploitation and self-harm, the number of removals fell by at least 40 percent in the second quarter of 2020 because of a lack of humans to make the tough calls about what broke the platforms’ rules, according to Facebook’s transparency report.

“Everything changed with Covid,” said Jeff Deutch, a researcher at the Syrian Archive, an organization that helps document atrocities in war-torn countries worldwide, often on social media. “It was concerning. We thought we were going back to the problems we had seen before in 2017, when the platforms removed content, out of an abundance of caution, that they believed was glorifying terrorism.”

The results of this impromptu experiment, conducted on a massive scale without notifying users, raise far-reaching questions about what happens to free speech when human arbiters are taken out of the equation, and algorithms become the final judges of what is and is not appropriate to post.

The experiment also reinforces a very nondigital truth: Even the most sophisticated algorithms and machine-learning tools still can’t replicate the judgment of an army of human content moderators.

“It’s very hard to automate certain things,” said Joris van Hoboken, a law professor at Vrije Universiteit Brussels who co-authored a recent study into the use of artificial intelligence to police online material. “A lot of what we see online is based on context. There are serious problems of transparency and accountability when you rely on machines over humans to make these decisions.”

The impact of Covid-19

Before the global pandemic forced tech giants to send home their content moderators, more than 30,000 people — often low-paid workers in places like Berlin and Austin, Texas — spent hours looking at some of the worst material to be found on the world’s largest social networks.

Facebook, Google and Twitter, almost in unison, announced their switch to automation in the middle of March.

There was a bittersweet reaction among many of those who found themselves temporarily out of a job, according to three current and former content moderators who spoke to POLITICO on the condition of anonymity because they did not want to jeopardize their current positions or had signed agreements not to speak publicly about their previous work.

Social media content moderators review thousands of explicit posts each day and are given little mental support to handle the graphic imagery they have to police. Their decisions are then pumped into the companies’ machine-learning tools, which require large datasets of removal decisions to learn from, according to Tarleton Gillespie, who works at Microsoft Research, an independent research unit of the tech company.
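To make that training step concrete, the sketch below shows, in rough outline, how past human removal decisions could be used to fit a simple text classifier that then scores new posts. It is purely illustrative: the toy data, the labels and the model choice are assumptions for this article, not a description of any platform’s actual system.

```python
# Illustrative sketch only: a toy classifier trained on past human moderation
# decisions, in the spirit of the pipeline described above. The example data,
# labels and model are assumptions, not any company's real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical history of moderator decisions: (post text, 1 = removed, 0 = kept)
past_decisions = [
    ("buy followers cheap, click this link now", 1),
    ("thanks for sharing, great article", 0),
    ("i will hurt you if you post that again", 1),
    ("does anyone have a source for this claim?", 0),
]
texts, labels = zip(*past_decisions)

# Train a simple text classifier on the human-labelled removals.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The trained model can then score new posts automatically.
new_post = "click this link to buy cheap followers"
print(model.predict_proba([new_post])[0][1])  # estimated probability of removal
```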

One content moderator told POLITICO that the monthslong break was a welcome relief to what he considered “back-breaking work.”

The tech giants initially said content moderators could not work from home because of concerns about how people’s data might be handled, though Facebook now allows some moderators to review material remotely, with the exception of the most explicit content. Google said it would follow suit, but has not given details about how.

The moderator said the unplanned experiment also confirmed something many contractors already suspected: the human content reviewers weren’t just making calls on questionable content; they were training the machines intended to eventually replace them.

“It just fast-tracked something that many of us saw coming,” the moderator said. “We are only here until the AI tools can catch up.”

For now, those tools remain anything but perfect.

Google’s YouTube, for instance, removed 11.4 million videos in the second quarter compared with 6 million videos in the three months through March, based on the company’s latest records. The number of successful appeals — when content creators believed their videos had been deleted incorrectly — also grew fourfold, to 160,000 videos, over the same period.

“Human review is critical for us,” said Marco Pancini, YouTube’s director of public policy for Europe, the Middle East and Africa.

It’s “not only necessary to train our machine learning systems, but it also serves as a check providing feedback that improves the accuracy of our systems over time,” Pancini added.

Facebook reported similar figures: the amount of content it removed as hate speech on its main platform more than doubled to 22.5 million pieces in the second quarter of 2020, according to the social network’s data. The rise was in part because the company expanded its content moderation tools to non-English language posts. Meanwhile, the number of successful content appeals was just 12,600, a steep drop from the 2.3 million recorded in the first three months of the year.

Appeals to overturn incorrect decisions also took longer or, in some cases, were not handled at all, because of an absence of people to handle them, according to the company’s latest transparency report.

That represents a “backslide” in the rights of people posting content online, said Jillian York, director for international freedom of expression at Electronic Frontier Foundation, a nonprofit group.

“Before the pandemic, people were getting some notice about bans and violations,” she added. “We were making more progress. I’m worried we’re back to 10 years ago.”

Stephen Turner, European director for public policy at Twitter, said in a statement that the company had widened its definition of what constitutes harmful content in response to the Covid-19 pandemic. He said the company will publish figures later this year on how much content its machines have so far taken down.

There’s a growing awareness within the social media giants — which have been grappling with how to deal with hateful and false content from high-profile figures like President Donald Trump — that it may never be possible to fully automate decisions on questionable content, especially when it comes to difficult calls that require local knowledge or cultural cues.

“Content enforcement is not an either-or approach, where we use either human reviewers or artificial intelligence,” Guy Rosen, Facebook’s vice president of integrity, told reporters in August. “Sophisticated systems need both people and technology.”
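One common way to combine the two, consistent with what Rosen describes, is to let software act automatically only on posts it scores with high confidence and to route borderline cases into a human review queue. The thresholds and the stand-in scoring function below are illustrative assumptions, not a depiction of Facebook’s real workflow.

```python
# Illustrative sketch of a hybrid "people and technology" workflow: the machine
# acts only on high-confidence cases and defers the rest to human moderators.
# The thresholds and the stand-in scorer are assumptions for illustration.

AUTO_REMOVE_THRESHOLD = 0.95   # machine removes without human review
AUTO_KEEP_THRESHOLD = 0.05     # machine leaves the post up without review

def route_post(post_text: str, score_fn) -> str:
    """Return 'remove', 'keep' or 'human_review' for a post.

    score_fn is any model that maps text to an estimated probability
    that the post violates the platform's rules.
    """
    p_violation = score_fn(post_text)
    if p_violation >= AUTO_REMOVE_THRESHOLD:
        return "remove"
    if p_violation <= AUTO_KEEP_THRESHOLD:
        return "keep"
    return "human_review"  # ambiguous cases go to the moderator queue

# Example with a stand-in scorer; in practice this would be a trained model.
print(route_post("obvious spam, click here", lambda text: 0.99))   # remove
print(route_post("context-dependent joke", lambda text: 0.50))     # human_review
```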

First automate, then legislate

The missteps in content moderation over the last six months have not gone unnoticed by policymakers in Brussels and Washington, who are pushing tech companies to take greater responsibility for illegal, hateful and false information.

Since 2018, officials in the European Union and the United States have begun asking increasingly tough questions about how tech giants handle online content. Some tech executives, like Facebook chief Mark Zuckerberg, have doubled down on automation, promising that sophisticated algorithms and as-yet unproven machine-learning tools will be able to put a stop to harmful, sexually explicit and false content.

Social media companies had already received criticism for using automation to remove posts and videos that breached global copyright rules. During the heated lobbying around the EU’s recent copyright reforms, campaigners had pushed back against what they called “censorship machines.”

The companies hoped to apply similar automated tactics to more complex areas of content moderation, an especially thorny prospect as both European and U.S. officials push for new rules to force firms to take more responsibility for what is published online.

Policymakers are already taking notice of how firms have used automation. An internal document seen by POLITICO, prepared for Europe’s Digital Services Act (a series of proposals expected in December to update how online content is regulated), warned that companies’ reliance on machines could lead to errors in how social media posts are handled.

Officials are also considering plans to increase transparency over how tech giants make decisions about what should be removed from their platforms, including around algorithms that decide what shows up in people’s social media feeds.

The events of the last six months have only highlighted the need for human moderation and clear rules — rather than opaque algorithms — on what kind of content should be taken down, said Dia Kayyali, program manager for tech and advocacy at Witness, a nonprofit organization that helps people use video and technology to protect and defend human rights.

Kayyali’s group, along with dozens of groups and academics, sent an open letter to the social media companies in April imploring them to keep detailed records on how they decided to automatically remove content.

“In 2020, with the state of automation as it now stands, it’s very important that humans do content moderation,” Kayyali said. “When they’re not involved, as we’ve seen in the last six months, we have seen a massive increase in takedowns.”

This article is part of a special report, The Essential Tech Worker.
