
Facebook’s refusal to label or prohibit Donald Trump’s posts promoting police violence, despite its own policies and legal precedent prohibiting speech that incites violence, is prompting massive protests from its employees, the media, and social media users. Twitter’s decision to label his tweets about mail-in ballots and voter fraud as misleading is likewise controversial.

Recently, a coalition of organizations under the hashtag #StopHateforProfit has urged global corporations to boycott Facebook advertising to show they will not support a company that puts profit over safety.

Meanwhile, Facebook founder and CEO Mark Zuckerberg continues to resist removing hate-filled content or labeling misleading and potentially harmful posts, on the precept that Americans don’t want social media companies censoring their content.

There are two major issues here that impact the practice of communications. One has to do with the free speech element of the First Amendment. The second, related to the first, introduces a relatively new concept: accountability for determining that published information is true.

What Does the Law Say?

In 1987, the FCC (Federal Communications Commission) repealed the Fairness Doctrine, which had required broadcasters to present issues of public importance in a balanced and honest way. Its repeal cleared the path for one-sided, fantastical, and controversial broadcasts and articles; one example is the rise of the conservative talk show phenomenon. This divergence began long before the advent of social media and its ubiquitous position as America’s primary news source. Although many Americans may still assume that news media are required to be fair and objective, that fairness has not been legally enforceable for a generation.

Now, social media is the primary source of news for most Americans. Of particular relevance here is Section 230 of the Communications Decency Act, which makes clear that content is the responsibility of the writer, not of the platform that publishes it. The “publishers” in this case are social media companies and their platforms. These companies, like traditional publishers, support free speech in the content they host, allowing for a wide and diverse range of opinions.

However, courts currently apply the “imminent lawless action” standard established by the United States Supreme Court in Brandenburg v. Ohio (1969). Under this test, advocacy of force or criminal activity is not protected by the First Amendment if it is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.

Recommending that police use force against peaceful demonstrators could certainly be considered likely to result in violence; the use of police force is itself violent. Encouraging violence toward minorities likewise violates this prohibition.

Clearly, social media companies should not allow content that promotes violence, whether by police, military, or vigilantes. So what is the problem?

There shouldn’t be one. Inciting violence is not protected under the doctrine of free speech. That goes for everyone, from denizens of the White House to shadowy groups and individuals promoting harm against others because of race, religion, or sexual orientation. It goes for the cyberbullies who threaten harm to their victims online. This kind of speech should not be permitted on social media or in any other media channel.

Going Deeper into What Free Speech Does Not Mean

Physical violence is easy to understand. However, words can convey and cause harm indirectly as well as by promoting physical attacks. There is, unfortunately, such an epidemic of mendacity in America today that this harm occurs frequently. Because it is not as visible as bodily harm, it often slips under the radar.

Speech is powerful. It influences opinion, which influences action. It can be very destructive. Look at the tragic tales of teen suicide provoked by online bullying. Untrue content published online can destroy reputations, businesses, relationships, and even peace of mind.

A person who is libeled can, of course, seek legal action. However, this is expensive and often far beyond the reach of the average individual. And what if the person who libeled them has no assets? The suit becomes an exercise in futility: the damage is done and cannot be undone.

Clearly, free speech can be destructive. However, we are reluctant to question such a fundamental right, lest we fall into the trap of becoming a nation of censorship, which can lead us down a path to totalitarianism.

Let us introduce a new element in this debate: it’s called accountability.

Did our forefathers intend that the right of free speech could be manipulated into harming citizens? No. They were reacting to a world in which people were put to death for speaking the truth.

So how did free speech come to be interpreted as “freedom to lie”? This is a corruption of the last generation. We have gone too far in allowing platforms that reach billions, and individuals who reach those multitudes, the freedom to lie with no accountability.

Is it that hard to verify a claim? Really?

Zuckerberg has said he doesn’t think people want a technology company policing the content they receive. And it is daunting to think of verifying the truth of the billions of comments published daily. Then again, writers are responsible for the truth of their content, remember?

However, unlike in the past, there is no “filter” for this content anymore. On social media, it is ubiquitous. Even if people block certain senders, it is still almost impossible to avoid inaccurate or damaging content.

The platforms on which these writers publish CAN hold them accountable. Twitter is beginning to exercise these responsibilities. It is not hard – and should be required – for public officials and those with huge followings to verify the truth of what they post. It should not be hard to identify those who publish lies that damage others.

With accountability, public figures are responsible for the truth, or lack of it, in what they say, write, and publish. With accountability, cyberbullies are responsible for what they post and for the damage done to their targets. With accountability, anyone who communicates with a large audience answers for the veracity of their claims.

Back to Zuckerberg’s comment that people don’t want social media companies deciding what is true or not. He’s missing the point. No one decides what is true; it is or it isn’t. There are no alternative facts. They are myths – like the unicorn and Bigfoot. If a public figure wants to make a claim, he or she is responsible for providing facts to back it up. If people defame or harass others – suggesting someone is involved in a murder, for example – they are responsible for the reputational damage that is done. Those who bully others via social media are responsible for what they do.

Here’s how this could work in practice:

  • Public figures who make claims unsupported by facts have those statements labeled, as Twitter did.
  • Public figures who consistently make such statements are fined. Not censured by their peers. Fined. Or dropped from the platform.
  • Cyberbullies are also fined.
  • Those who incite violence are immediately dropped from the platform.
  • Those who commit libel are also fined. It is not up to individuals to bear the legal costs of a civil suit, or the injustices and influence peddling that characterize our legal system today.

Yes, it is impossible to police everyone. But public officials have a responsibility to their constituents to behave honorably. After a few instances of fines being levied and individuals being dropped from platforms, they will begin to police themselves.

The same goes for cyberbullies. With consequences comes restraint.

And as for dropping those who incite violence: such speech is already unprotected by law. Why is there a minute’s hesitation in taking away their bully pulpit?

So, who oversees this process? There needs to be a body set up to administer the levying and paying of fines, with legal clout behind it. Here Zuckerberg is right: this is not the purview of social media companies. We have such bodies for many industries – the FDA, the EPA, and the FAA, to name a few. Their purpose is to protect the public. Until now, we as a society have not recognized that the public also needs protection against lies. It’s time to face reality and act.

And for immediate actions that should occur now, social media companies can enforce the policies that they currently have in place. Face reality and step up.

This is a lofty goal and a revolutionary position, advocating some extraordinary changes in our thinking and our policies. But it’s high time to make these changes.

If you’re in healthcare, insurance, technology or other professional services industries, and need help with a PR, marketing or social media campaign, contact Scott Public Relations.

Download our e-book, “The C-Suite Asks, We Answer: The Top 6 Questions About Healthcare PR.”


Learn more about healthcare PR, insurance PR, and technology PR in Scott PR’s Einsight blog, and follow Scott Public Relations on LinkedIn, Twitter, and Facebook.

Sign up to receive our monthly advice on healthcare, insurance and technology PR: Scott Public Relations.