Tech Platforms Treat White Nationalism Differently From Islamic Terrorism

Critics say Facebook, YouTube, and Twitter are quicker to block content from ISIS than from white nationalists.
White supremacists clashed with protesters during a rally in Charlottesville, Virginia, in 2017. Samuel Corum/Getty Images

In January 2018, the top policy executives from YouTube, Facebook, and Twitter testified in a Senate hearing about terrorism and social media, touting their companies’ use of artificial intelligence to detect and remove terrorist content from groups like ISIS and al Qaeda. After the hearing, Muslim Advocates, a civil rights group that has been working with tech companies for five or six years, told executives in an open letter that it was alarmed to hear “almost no mention about violent actions by white supremacists,” calling the omission “particularly striking” in light of the murder of Heather Heyer at a white supremacist rally in Charlottesville, Virginia, and similar events.

More than a year later, Muslim Advocates has yet to receive a formal response to its letter. But concerns that Big Tech expends more effort to curb the spread of terrorist content from high-profile foreign groups, while applying fewer resources and less urgency toward terrorist content from white supremacists, resurfaced last week after the shootings at two mosques in Christchurch, New Zealand, which Prime Minister Jacinda Ardern called “the worst act of terrorism on our shores.”

In the US, some critics say law enforcement is hamstrung in combating white supremacists by inadequate tools, such as the lack of a domestic terrorism law. But the big tech companies are private corporations accustomed to shaping global public policy in their favor. For them, failure to police terrorist content by white supremacists is a business decision molded by political pressure, not a legal constraint.

Tech companies say that it is easier to identify content related to known foreign terrorist organizations such as ISIS and al Qaeda because of information-sharing with law enforcement and industry-wide efforts, such as the Global Internet Forum to Counter Terrorism, a group formed by YouTube, Facebook, Microsoft, and Twitter in 2017.

On Monday, for example, YouTube said on its Twitter account that it was harder for the company to stop the video of the shootings in Christchurch than to remove copyrighted content or ISIS-related content because YouTube’s tools for content moderation rely on “reference files to work effectively.” Movie studios and record labels provide reference files in advance, and “many violent extremist groups, like ISIS, use common footage and imagery,” YouTube wrote.
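In practice, “reference files” describes hash-based matching: a platform computes a compact fingerprint of known footage and compares new uploads against it. The minimal sketch below illustrates the general idea using perceptual image hashes; the library choice (Pillow and imagehash), the threshold, and the placeholder values are illustrative assumptions, not a description of YouTube’s actual system.

```python
# A minimal sketch of hash-based matching against known reference footage.
# Libraries, names, and values here are illustrative assumptions only.
from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes computed from known
# extremist reference footage (the hex value below is a made-up placeholder).
REFERENCE_HASHES = [
    imagehash.hex_to_hash("f0e1d2c3b4a59687"),
]

HAMMING_THRESHOLD = 8  # max number of differing bits to count as a match


def frame_matches_reference(frame_path: str) -> bool:
    """Return True if a video frame is visually similar to known reference content."""
    frame_hash = imagehash.phash(Image.open(frame_path))
    # imagehash overloads subtraction to return the Hamming distance between hashes
    return any(frame_hash - ref <= HAMMING_THRESHOLD for ref in REFERENCE_HASHES)
```

Because perceptual hashes tolerate only small changes such as recompression, the comparison uses a distance threshold rather than exact equality; footage shot from a new angle produces a very different hash, which is part of why a first-person livestream is harder to catch than recycled propaganda imagery.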

But as a voluntary organization, the Global Internet Forum can set its own priorities and collect content from white nationalists as well. Facebook noted that member companies have added “more than 800 visually distinct videos” related to the Christchurch attacks to the group’s database, “along with URLs and context on our enforcement approaches.”

Law professor Hannah Bloch-Wehba hasn’t seen any evidence that technology is inherently better at identifying ISIS-related content than right-wing extremist content. Rather, she says, tech platforms built these tools in response to pressure from regulators and engineered them to address a specific kind of terrorist threat.

“We just haven’t seen comparable pressure for platforms to go after white violence,” and if they do, companies face “political blowback from the right,” says Bloch-Wehba. “It feeds into a narrative about who terrorists are, who is seen as a threat, and what kinds of violent content is presumed to be risky.”

Bloch-Wehba says tech companies' definitions of terrorism tend to be vague, but ISIS and al Qaeda are typically the only groups named in their transparency reports, which reveals their priorities.

The cycle is self-reinforcing: The companies collect more data on what ISIS content looks like based on law enforcement’s myopic and under-inclusive views, and that skewed data is then fed into surveillance systems, she says. Meanwhile, consumers don’t have enough visibility into the process to know whether these tools are proportionate to the threat, whether they filter too much content, or whether they discriminate against certain groups, she says.

If platforms are now having a harder time automating the process of identifying content from white nationalists or white supremacists, “it’s going to be hard for them to play catch-up,” Bloch-Wehba says.

Madihha Ahussain, special counsel for anti-Muslim bigotry at Muslim Advocates, says it’s not just a matter of expanding guidelines around terrorist content; tech companies also fail to enforce the community standards they already have. “We believe there’s a lot of content generated from white nationalist groups generally that would violate” tech platform guidelines, but “it takes a lot on the part of advocacy groups to see some action.”

For years, Muslim Advocates took it as a good sign that tech executives would meet with the group and appeared responsive. “But then we realized that nothing was actually changing,” Ahussain says.

In a statement to WIRED, a YouTube spokesperson said, “Over the last few years we have heavily invested in human review teams and smart technology that helps us quickly detect, review, and remove this type of content. We have thousands of people around the world who review and counter abuse of our platforms and we encourage users to flag any videos that they believe violate our guidelines.”

YouTube says its guidelines prohibiting violent or graphic content that incites violence are not limited to foreign terrorist organizations and go beyond just ISIS and al Qaeda. The company estimates that the Global Internet Forum’s shared database contained 100,000 hashes of known terrorist content at the end of 2018.

YouTube also says it’s taking a stricter approach to videos flagged by users that contain controversial religious or supremacist content, even if they don’t violate the company’s guidelines. In such cases, YouTube says it will not allow the videos to run ads, will exclude them from its recommendations, and will disable features like comments, suggested videos, and likes.

In a statement, a spokesperson for Twitter said, “As per our Hateful Conduct Policy, we prohibit behavior that targets individuals based on protected categories including race, ethnicity, national origin or religious affiliation. This includes references to violent events where protected groups have been the primary targets or victims.”

Facebook pointed to a company blog post on Monday about its response to the New Zealand tragedy. The company said the original Facebook Live video was removed and hashed “so that other shares that are visually similar to that video are then detected and automatically removed from Facebook and Instagram.” Because screen-recorded variants of the stream were harder to detect visually, Facebook also used audio-matching technology to find additional copies.
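Audio matching works on the same fingerprint-and-compare principle, applied to the soundtrack rather than the pixels, which is why it can catch re-encoded screen recordings that look different frame by frame. The toy sketch below shows one simplified way to build such a fingerprint; the parameters and function names are assumptions for illustration, not Facebook’s actual technology.

```python
# A toy illustration of audio fingerprint matching. Parameters and names
# are assumptions for illustration, not any platform's real system.
import numpy as np


def audio_fingerprint(samples: np.ndarray, frame_size: int = 2048, bands: int = 16) -> np.ndarray:
    """Return a coarse binary fingerprint from a 1-D array of mono audio samples."""
    n_frames = len(samples) // frame_size
    frames = samples[: n_frames * frame_size].reshape(n_frames, frame_size)
    spectrum = np.abs(np.fft.rfft(frames, axis=1))
    # Collapse each frame's spectrum into a handful of energy bands,
    # then keep only the sign of band-to-band differences.
    band_energies = np.stack(
        [band.sum(axis=1) for band in np.array_split(spectrum, bands, axis=1)], axis=1
    )
    return (np.diff(band_energies, axis=1) > 0).flatten()


def likely_same_audio(fp_a: np.ndarray, fp_b: np.ndarray, max_mismatch: float = 0.15) -> bool:
    """Compare two fingerprints by bit-error rate over their overlapping length."""
    n = min(len(fp_a), len(fp_b))
    return bool(np.mean(fp_a[:n] != fp_b[:n]) <= max_mismatch)
```

Because a soundtrack survives cropping, watermarking, and re-recording of a screen largely intact, an audio fingerprint can flag copies that visual hashing misses.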

Tech platforms have a financial interest in promoting their own version of “free expression,” Bloch-Wehba says. “Any attempt to move away comes laden with this set of assumptions about consumer rights, but those aren’t really legal rights—or, at least, they’re very unsettled legal rights,” she says. Nonetheless, “it plays into the same conversation, mostly coming from the right wing, that we should all be able to say whatever we want.”

Ahussain says meaningful change will come only if tech platforms want to address the issue, but the lack of diversity within tech companies has led to a lack of understanding about the complexities and nuances of the threats Muslims face. To address that, Muslim Advocates and other groups want tech companies to hear directly from the communities that have been impacted. “We’ve recognized the need to have conversations in a neutral space,” one where those communities have a chance to set the tone, agenda, and guest list, she says.

