Most social media users have heard the news surrounding Facebook and its new parent company, Meta. With not only Facebook but also Instagram, Oculus VR, and WhatsApp falling under the same umbrella, it’s safe to say that the team at Meta has its hands full. Couple this with a sharp increase in harmful content, including hate speech, and some experts are left wondering how Meta will deal with such posts.
Thankfully, it seems that the team at Meta is already making moves to remove such content and even prevent these posts from reaching the general public. Details on a brand-new, AI-driven system, known as the Few-Shot Learner, or FSL, have already been made available. As expected, they’re nothing short of impressive.
What is the FSL?
The FSL is Meta’s answer to online hate speech and other harmful types of content. It represents a significant improvement over some of the company’s past systems, many of which had difficulty adjusting to new patterns and trends. According to Meta, its newest tool can identify these patterns and adapt to new habits in online content within a matter of weeks, where previous systems took months.
Like most modern AI and machine learning systems, FSL has to be trained on various datasets. In this case, the team at Meta used actual examples of hateful content to train the system.
According to a recent blog post: “We’ve built and recently deployed a new AI technology called Few-Shot Learner (FSL) that can adapt to take action on new or evolving types of harmful content within weeks instead of months. It not only works in more than 100 languages, but it also learns from different kinds of data, such as images and text, and it can strengthen existing AI models that are already deployed to detect other types of harmful content.”
While the few-shot learning method isn’t Meta’s invention, it is a relatively recent technique. Training starts by building broad, generalized knowledge within the AI system. From there, the model is refined on various types of harmful-content data, including labeled examples. Finally, it’s trained on new policies expressed as condensed text.
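To make the idea concrete, here is a deliberately simplified toy sketch, not Meta's actual implementation: the core intuition of matching a post against a condensed policy description can be imitated with a bag-of-words "embedding" and cosine similarity. In the real system a large learned encoder would produce the embeddings; the `embed` function, the sample policy text, and the 0.3 threshold below are all hypothetical stand-ins.

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy stand-in for a learned text encoder: a lowercase word-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[word] * b[word] for word in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def flag_post(post, policy_text, threshold=0.3):
    """Flag a post whose similarity to the condensed policy text crosses a threshold."""
    return cosine(embed(post), embed(policy_text)) >= threshold

# Hypothetical condensed policy text and posts, for illustration only.
policy = "posts that discourage covid vaccination with false claims"
print(flag_post("false claims say the covid vaccination is dangerous", policy))  # True
print(flag_post("lovely weather for a picnic today", policy))  # False
```

The appeal of this policy-text approach is that updating the "policy" string immediately changes what gets flagged, without retraining on months of newly labeled examples.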
As a result, the FSL method doesn’t rely strictly on pattern-matching or trend identification. Instead, its training is based on actual language, established policies, and specific content examples. This is far more effective than previous technologies, and it’s made possible by a series of recent breakthroughs, including self-supervised learning techniques and highly efficient infrastructure.
Meta’s new FSL system has already been tested in the field. Although it’s been applied multiple times since its original launch, most social media users are familiar with FSL through Meta’s recent COVID-19 fact-checking campaign, in which FSL identified false or misleading information and automatically flagged it on the platform.
Comprehensive testing was also performed on the system, including offline and online A/B testing. These tests help to gauge the accuracy of the current FSL system, and further tests are planned to ensure its long-term integrity and effectiveness.
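For readers curious what an A/B test of a moderation system can look like, here is a minimal sketch. The setup and every number below are entirely hypothetical, not Meta's data: it compares how often harmful content surfaces in a control group (old models only) versus a treatment group (FSL enabled), using a standard two-proportion z-test.

```python
from math import sqrt

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Z statistic for the difference between two observed proportions,
    using the pooled standard error."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: harmful views per 100,000 impressions in each group.
z = two_proportion_z(hits_a=520, n_a=100_000,   # control: old models only
                     hits_b=430, n_b=100_000)   # treatment: FSL also running
print(round(z, 2))
# A |z| above roughly 1.96 suggests the drop is unlikely to be chance alone.
```

An online A/B test runs this comparison on live traffic; the offline variant replays the same logic against a held-out labeled dataset before anything ships.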