
Facebook's AI is Getting Better at Detecting (and Removing) Hate Speech

Whether you love them or hate them, there's no denying that the team at Facebook is always working to improve its social media network. One of its latest efforts is a concentrated push that focuses specifically on hate speech.

According to recent reports, Facebook's AI-driven algorithms are capable of detecting 94.7% of the hate speech that is posted on the network. In the most recent quarter, the team reports having removed 22.1 million pieces of hate-related content from the site. This is a dramatic increase from the 6.9 million pieces of hate-related content reported in 2019.

It's important to note that the high figure pertains specifically to hate-related content within the last quarter. Generally speaking, the Facebook dev team says their tools are capable of detecting and automatically flagging 88.8% of all hate-related content posted on the site.

Regardless of how you look at it, this is a substantial improvement from several years ago. In 2017, it was reported that Facebook's AI-driven algorithms were only able to detect 24% of the content that was eventually flagged as hate speech.

Facebook's dev team cites two key breakthroughs as major players in its new and improved detection system:

- Gaining a deeper semantic understanding of natural language, which lets their system better detect subtle, complex, and hidden meanings behind certain words, phrases, and sentences

- Expanding how AI tools interact with the content, including all images, text, and individual comments to a specific post

There have been other breakthroughs as well. According to a recent blog post, Facebook's team has implemented XLM technology.

The post explains the technology this way: "For semantic understanding of language, we’ve recently deployed new technologies such as XLM, Facebook AI’s method of self-supervised pretraining across multiple languages. Further, we are working to advance these systems by leveraging new state-of-the-art models such as XLM-R, which incorporates RoBERTa, Facebook AI’s state-of-the-art self-supervised pretraining method."
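To give a sense of what "self-supervised pretraining" means in practice: methods like XLM and RoBERTa train on unlabeled text by hiding (masking) a token and asking the model to predict it from context, so no human labels are needed. The sketch below is a deliberately tiny stand-in for that objective, using a bigram frequency table instead of a transformer; it is an illustration of the idea only, not Facebook's implementation.

```python
from collections import Counter, defaultdict

# Toy illustration of the masked-language-modeling objective behind
# self-supervised pretraining: hide a token, predict it from context.
# Real systems (XLM, RoBERTa) use large transformers trained on huge
# corpora; here a simple "what word follows what" table stands in.

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat slept on the mat",
]

# Count which word follows each word (a crude stand-in for learned context).
next_word = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for left, right in zip(tokens, tokens[1:]):
        next_word[left][right] += 1

def predict_masked(tokens, mask_index):
    """Predict the token at mask_index from the word immediately before it."""
    left = tokens[mask_index - 1]
    candidates = next_word.get(left)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# "the cat sat on the [MASK]" -> guess the hidden word from context
tokens = "the cat sat on the mat".split()
print(predict_masked(tokens, 5))  # prints "mat"
```

The key point is that the "label" (the masked word) comes from the data itself, which is what lets these models pretrain on text in many languages without manual annotation.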

But that's not all. Facebook's team also had to work out a way for its new tools to properly analyze, digest, and understand the content that is posted on the site.

To this end, the blog post goes on to say: "To broaden how our tools understand content, we have also built a pre-trained universal representation of content for integrity problems. This whole entity understanding system is now used at scale to analyze content to help determine whether it contains hate speech. More recently, we have further improved the whole entity understanding system by using post-level, self-supervised learning."

It's important to keep in mind that Facebook is available in hundreds of languages across many countries and regions. While the dev team admits that it's undertaking a monumental task, and while it understands that it will never be able to detect and remove all hate speech posted on the site, it remains committed to protecting users however it can.

That's why you can expect to see even more updates in the coming weeks, months, and years from the team at Facebook.
