Will Facebook’s deepfake ban be enough?

Amid rising concerns about fake content on social media and deep mistrust surrounding the 2020 elections, the tech company is making a bid for users’ trust.


Deepfakes are a problem for social media companies.

The technology, which lets users create lifelike fake videos of another person, could have dire consequences for trust in digital media and for the country’s ability to hold open, honest discussions about the problems it faces. It could also give foreign actors a powerful tool for misinformation and propaganda.

And it’s a big problem for communicators tasked with crisis and brand management, developing and preserving trust, and telling authentic stories.

Facebook is introducing new guidelines about these kinds of videos in an attempt to address the problem.

Facebook now says it will flag videos that have been manipulated or edited, rather than remove them entirely. Leaving flagged videos up, the company says, gives people context when the same videos surface elsewhere on the internet.

It wrote in a blog post:
