Facebook is finally starting to take a proactive approach to the spread of fake news. As unverifiable claims and, in some cases, outright misinformation are continually pushed across the popular social media site, it's important to have some amount of moderation.
Flagging Posts with False or Misleading Information
You may have already seen their new system in action. While posts in the past have been left unconfirmed, unverified, and unedited, this is no longer the case. Now, when a post is made that contains dubious or disputable facts, it is immediately checked against a comprehensive internal database of information.
If a post's legitimacy is ever in question, the post is still displayed on Facebook's site. However, it comes with an embedded warning to let users know that the post's facts cannot be verified. The team at Facebook is taking its efforts one step further by automatically notifying users who have engaged with fake or misleading news in the past.
Specifically, Facebook is automatically notifying users who have seen fake news regarding the recent coronavirus pandemic. With so much information being thrown around about COVID-19 in 2020, much of which is entirely unverifiable, it's encouraging to see Facebook stepping up to lead the fight against misinformation.
But how exactly is their information obtained in the first place? Perhaps even more importantly, who is actually checking these facts?
Taking A Deeper Look at the Facebook Fact-Checkers
As it turns out, it's an entire team of human fact-checkers employed directly by Facebook. Since they are required to sign standard non-disclosure agreements that prevent them from talking about the more sensitive aspects of their work, it's difficult to find any concrete information about them.
They also work with a number of external organizations, approximately 43 to date, that provide various services across 24 different languages. Moreover, the human fact-checkers employed by Facebook aren't relying solely on their own judgment to scan and verify facts. Instead, they use a proprietary tool created by the development team.
That said, the new Facebook fact-checkers aren't reviewing or moderating every single post made across the entire social media channel. With so many posts made on a daily basis, such a task would be nearly impossible.
Instead, they rely on a series of advanced algorithms in tandem with posts that have been reported by the site's actual user base. While the results certainly aren't perfect, many see these efforts as a strong step in the right direction.
Not everyone agrees with the Facebook algorithms or their fact-checkers. In fact, researchers at MIT recently called attention to the "implied truth effect," which occurs in environments where some posts are fact-checked and disputed while others are not. Unfortunately, this causes modern consumers to place more trust in posts that weren't fact-checked, even if those posts contained false or misleading information to begin with. As a result, some experts believe that the Facebook algorithms and fact-checkers may actually cause more damage in the long run.
How Facebook is Fighting Against the Spread of Fake News