Twitch—the popular game-streaming site acquired by Amazon in 2014—has been inundated in recent months by “hate raids,” which dump vulgar and hateful speech into the site’s prominent chat feeds. So far, the raiders’ racist slurs and bigoted references have largely outpaced the site’s moderation, but a leaked interface update suggests that Twitch might finally take meaningful steps to squash its toxic chat feeds.
On Sunday, streaming-industry reporter Zach Bussey shared a series of screenshots, apparently captured from Twitch’s German-language site, that point to a new type of user verification system coming to the chat-heavy service. As pictured and described, this system would let Twitch users opt into verifying their email address, their phone number, or both. (A version of email verification already exists, but currently, Twitch users can reuse a single address to bulk-verify multiple accounts at once.)
The incentive for opting into this process will come from individual Twitch channel moderators, who might only allow people to chat if they’ve verified one or both of those credentials.
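In practice, that kind of gate boils down to a simple permission check. Here is a minimal Python sketch, assuming hypothetical per-user verification flags and per-channel requirements; none of these names come from Twitch’s actual API or the leaked screenshots.

```python
# Minimal sketch of a per-channel verification gate as described in the
# leaked screenshots. All names here (Chatter, ChannelPolicy, the flags)
# are hypothetical, not Twitch's real schema.
from dataclasses import dataclass

@dataclass
class Chatter:
    email_verified: bool = False
    phone_verified: bool = False

@dataclass
class ChannelPolicy:
    require_email: bool = False  # moderator opts in to email-verified chat
    require_phone: bool = False  # moderator opts in to phone-verified chat

def may_chat(user: Chatter, policy: ChannelPolicy) -> bool:
    """Allow chat only if the user holds every credential the channel requires."""
    if policy.require_email and not user.email_verified:
        return False
    if policy.require_phone and not user.phone_verified:
        return False
    return True
```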
The leaked interface also points to a long-requested Twitch moderation feature finally coming online: the ability to silence accounts based on how long they’ve existed. Should your channel be “mobbed” by hundreds of newly created accounts, all run by an automated bot system bent on flooding your chat, this teased system would block them with a rule such as “accounts must be older than one week” (or an even longer minimum, if a host prefers).
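Such a rule is easy to picture as a timestamp comparison. The sketch below assumes a chat event exposes the sender’s account-creation time; the field name and default threshold are illustrative.

```python
# Sketch of the teased account-age rule; the timestamp field and the
# one-week threshold are assumptions for illustration.
from datetime import datetime, timedelta, timezone

MIN_ACCOUNT_AGE = timedelta(weeks=1)  # "accounts must be older than one week"

def allow_by_age(account_created_at: datetime) -> bool:
    """Silence chat from accounts younger than the channel's minimum age.

    Assumes account_created_at is a timezone-aware UTC timestamp.
    """
    return datetime.now(timezone.utc) - account_created_at >= MIN_ACCOUNT_AGE
```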
A month after #ADayOffTwitch
Without these systems in place, Twitch users have had to hunt for unofficial add-on moderation tools to turn back an increasingly aggressive network of hate raiders, some of whom organize on a constantly shifting collection of outside platforms such as Discord. In a late-August report, Washington Post reporter Nathan Grayson described much of the hate-raid ecosystem. A report from The Mary Sue that same month directly quoted and cited some of the most hateful language and tactics used by hate raiders ahead of September 1’s #ADayOffTwitch, an effort led by affected streamers to bring more public attention to the platform’s problems.
However, since the community’s moderation tools rely on public-facing information, as opposed to Twitch’s full control of the new-user pipeline, they’re only so effective. Hate raids are typically generated with a mix of automated bot systems and Twitch’s lenient, free account-creation interface. (The latter continues to lack any form of CAPTCHA challenge, making it a prime candidate for bot exploitation.) While Twitch includes built-in tools to block or flag messages that trigger a dictionary full of vulgar and hateful terms, many of the biggest hate-raid perpetrators have turned to dictionary-combing tools of their own.
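At its core, that kind of term matching is a blocklist lookup. Here’s a toy Python version, with placeholder entries standing in for the real list, which Twitch doesn’t publish:

```python
# Toy version of a dictionary-based chat filter; the blocklist entries are
# placeholders, and Twitch's real matching rules are more elaborate.
import re

BLOCKLIST = {"slur1", "slur2"}  # stand-ins for actual banned terms

def is_flagged(message: str) -> bool:
    """Flag a message if any lowercase Latin token matches the blocklist exactly."""
    tokens = re.findall(r"[a-z]+", message.lower())
    return any(token in BLOCKLIST for token in tokens)
```

Because the lookup keys on exact Latin spellings, swapping a single look-alike character into a banned word slips past it entirely, which is precisely the weakness those tools exploit.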
These tools let hate-raid perpetrators evade basic moderation by constructing words out of non-Latin characters; by mixing and matching look-alike glyphs, they can generate thousands of facsimiles of notorious slurs, each reading close enough to the original word. The potential for harm grows further when context turns otherwise innocuous words into targeted insults, depending on the marginalized group they’re aimed at. Twitch has since pushed updates to its dictionary-based moderation systems that, among other things, look for floods of non-Latin characters. But these, too, have proven insufficient for some affected Twitch hosts.
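Both halves of that cat-and-mouse game can be sketched in a few lines. In the Python below, the confusable map is a tiny invented sample (production systems lean on far larger tables, such as Unicode’s confusables.txt), and both functions are illustrations rather than Twitch’s actual checks:

```python
# Two countermeasures of the kind described above: folding look-alike
# characters to Latin before blocklist matching, and measuring how much
# of a message is written in non-Latin letters. The homoglyph map is a
# tiny invented sample, not a production table.
import unicodedata

HOMOGLYPHS = {"а": "a", "е": "e", "о": "o", "і": "i", "ѕ": "s"}  # Cyrillic look-alikes

def normalize(message: str) -> str:
    """Map look-alike characters to Latin so blocklist matching can see them."""
    decomposed = unicodedata.normalize("NFKD", message)
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in decomposed)

def non_latin_ratio(message: str) -> float:
    """Fraction of letters outside the Latin script; high values suggest evasion."""
    letters = [ch for ch in message if ch.isalpha()]
    if not letters:
        return 0.0
    non_latin = sum(1 for ch in letters if "LATIN" not in unicodedata.name(ch, ""))
    return non_latin / len(letters)
```

Running the earlier `is_flagged(normalize(message))` check catches Cyrillic-spelled variants the raw lookup misses, while a high `non_latin_ratio` could feed a rate limiter or a hold-for-review queue.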
Earlier this month, Twitch sued two users whom it identified as repeat perpetrators of hate raids. Yet even though that suit targeted the creators of thousands of accounts, the ensuing game of hate-speech whack-a-mole has left affected users scrambling to come up with tools and systems that can beat back a flood of toxicity. And streamers who want to make a living from subscriber support have few better platforms to turn to; affected channels are usually hosted by smaller streamers, sometimes with viewer counts in the tens or hundreds. In the West, neither YouTube Gaming nor Facebook Gaming offers noticeably more robust auto-moderation tools, and neither enjoys an audience anywhere near Twitch’s massive numbers. The latter is a particular sticking point for any host hoping to grow their viewership organically while also flagging their channel with tags like “LGBTQIA+” or “African American.”
When reached with questions from Ars Technica about Bussey’s report and about any other built-in tools that the network may roll out to creators in the face of hate mobs, a Twitch representative declined to comment.