Internet users have become increasingly creative in their efforts to circumvent AI-powered content moderation of offensive or banned expressions online, according to the results of a survey conducted by TELUS International, a digital customer experience (CX) innovator. The survey found that more than half of respondents (51%) said they have seen “algospeak” used on social media, moderated forums and brand websites as well as in gaming communities, with 42% saying its use has increased since they first noticed it.
A combination of ‘algorithm’ and ‘speak,’ algospeak is the collection of codewords, slang, deliberate typos, emojis and substitute words that sound like, or mean something close to, the intended term. For example, “unalive” is a regularly used algospeak term for “dead,” and “The Vid” stands in for COVID-19. While algospeak can help individuals – including those in marginalized communities – discuss topics perceived by some to be controversial without having their content automatically flagged for removal, it also can be used by those wanting to intimidate, harass and cyberbully others.
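To see why these substitutions defeat automated flagging, consider a minimal Python sketch. The blocklist and the algospeak lexicon below are illustrative assumptions, not the actual terms or systems used by any moderation provider: an exact-match filter misses “unalive,” while a lookup table of known algospeak terms can normalize the text before checking it.

```python
# Illustrative only: a toy blocklist and a hypothetical algospeak
# lexicon, showing why coded terms slip past exact-match filters.

BLOCKLIST = {"dead", "covid-19"}

# Hypothetical mapping from known algospeak terms to canonical forms.
ALGOSPEAK_LEXICON = {"unalive": "dead", "the vid": "covid-19"}

def is_flagged(text: str, use_lexicon: bool = False) -> bool:
    """Return True if the text contains a blocked term (optionally
    after normalizing known algospeak to its canonical form)."""
    normalized = text.lower()
    if use_lexicon:
        for code, canonical in ALGOSPEAK_LEXICON.items():
            normalized = normalized.replace(code, canonical)
    return any(term in normalized for term in BLOCKLIST)

print(is_flagged("he was unalived"))        # False: blocklist alone misses it
print(is_flagged("he was unalived", True))  # True: lexicon normalizes it first
```

The catch, as the survey's findings suggest, is that such lexicons must be continuously updated as new coded terms emerge, which is one reason purely automated approaches struggle.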
While 30% of Americans have used algospeak, the behavior is most common among the digital natives of Gen Z (aged 18-24) with nearly three-quarters (72%) saying they have recognized and been exposed to this type of behavior, and 41% saying they have used it themselves.
“The wide variety of digital platforms where people can express their thoughts and opinions combined with the cultural normalization of society documenting the world around us on a daily or even hourly basis, means that today’s brands are facing an even steeper uphill battle when it comes to quickly reviewing and accurately removing harmful, abusive or inappropriate materials that violate their guidelines,” said Siobhan Hanna, Managing Director, AI Data Solutions, TELUS International. “With the use of algospeak becoming more and more prevalent, brands must establish a robust content moderation practice that leverages both AI and a diverse team of content moderators and data annotators. Depending on a given site’s particular content guidelines, a human-in-the-loop approach can help brands either remove all instances of algospeak or ensure that context is properly considered in these instances to minimize the flagging of content by marginalized groups that does not contradict community guidelines. An established content moderation strategy is no longer a nice-to-have for brands, but a must-have in order to maintain safe online environments.”
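The human-in-the-loop approach Hanna describes can be sketched as a simple routing policy: an AI model scores each post, high-confidence violations are removed automatically, borderline cases go to human moderators who can weigh context (such as reclaimed algospeak within marginalized communities), and the rest are published. The thresholds and score source below are assumptions for illustration, not TELUS International's actual pipeline.

```python
# Illustrative human-in-the-loop routing; thresholds are assumed.

REMOVE_THRESHOLD = 0.9   # auto-remove above this model confidence
REVIEW_THRESHOLD = 0.5   # route to a human moderator above this

def route(violation_score: float) -> str:
    """Decide what happens to a post given a model's violation score."""
    if violation_score >= REMOVE_THRESHOLD:
        return "auto-remove"
    if violation_score >= REVIEW_THRESHOLD:
        return "human-review"  # context is judged by a moderator here
    return "publish"

print(route(0.95))  # auto-remove
print(route(0.70))  # human-review
print(route(0.10))  # publish
```

Tuning the two thresholds is how a site trades off false removals of benign algospeak against the volume of content its human moderators must review.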
Other findings from the survey of 1,000 U.S. consumers included:
- Words/text are the top method of communicating on Facebook (65%), Instagram (35%), Twitter (40%), moderated forums/online communities (49%) and branded websites (47%) over other forms of communication such as emojis or video.
- More than one in five (22%) said they immediately see an uptick in the use of algospeak and emojis to circumvent banned terms when a polarizing societal event occurs.
Meeting Customer Expectations for Content Moderation
Consumers entering a brand’s online environment expect it to be free of offensive or hateful content, and for many, that includes instances of algospeak. In fact, 38% of respondents said that brands should be able to identify and remove these nuanced phrases and emojis immediately. Just under a third (32%) of survey respondents believed that brands are currently doing a good job moderating this type of language, which puts brands at high risk of losing consumers’ trust and their business.
“These survey findings indicate that today’s discerning consumers have put the onus on brands to ensure their content moderation strategies are keeping pace with evolving digital behaviors,” added Hanna. “Inevitably, the complexity and quantity of content will continue to grow over time, and now more than ever, brands need to consider working with an experienced partner that has the insight to foresee these shifts in trends, the agility to quickly pivot and the global resources to scale if they are going to successfully build and maintain consumer loyalty over the long term.”
TELUS International partners with global brands to protect the safety and well-being of their user communities. To learn about TELUS International’s content moderation capabilities, visit https://www.telusinternational.com/solutions/trust-safety-security/content-moderation-solutions.
Survey Methodology: The survey findings are based on a Pollfish survey that was conducted on Aug. 11, 2022, and included responses from 1,000 Americans.