In a world of disinformation, can anyone make informed political decisions? With all the anti-vaxxers, QAnon devotees, and 9/11 truthers out there, how out of touch can we get? Can we ever come together as a nation if we can’t even agree on what’s true?
Unless we do a better job of cracking down on disinformation, democracy is in danger. You might think that democracy just means that everybody gets a vote, but it’s about more than that: it means that everyone gets to make meaningful decisions about who runs things and how. And you can’t make a meaningful decision if you don’t know what’s going on.
But is it possible to crack down on disinformation without silencing people and infringing on their free speech? In the US, our right to free speech is protected by the First Amendment, which means that the government cannot silence political speech, and can’t selectively withhold resources from speakers based on political disagreements. But private social media companies, like Twitter or Facebook, have the legal right to delete any speech they want.
And sometimes hiding or deleting information is justified. If a person were to tack a Nazi diatribe or a misleading anti-fluoridation rant to a grocery store bulletin board, it would be reasonable for the grocery store to take their flier down. Can’t we say the same for Facebook when it comes to anti-Semitic conspiracy theories, or misleading op-eds that raise unfounded doubts about the severity of the COVID-19 pandemic?
But how exactly should social media companies decide what to crack down on, and how to get rid of it? One obvious place to start is factually incorrect claims about matters of serious practical importance. Even this poses a significant practical challenge: it took serious effort from Twitter to crack down on election misinformation in 2020, and Facebook’s attempts to remove misinformation about Covid have met with only partial success.
Even if they succeed at suppressing blatant factual inaccuracies, social media companies will have a harder time identifying more subtle types of dishonesty and manipulation. Still, I’m glad that social media companies are putting in effort; who knows how much worse the problem would be without it?
One thing I’m not in favor of is government oversight of social media platforms. We know that the US government cannot directly restrict speech on social media, because of the First Amendment. But the government also can’t indirectly restrict speech by making social media companies liable for what their users post, thanks to Section 230 of the 1996 Communications Decency Act. If you post damaging lies about someone on Instagram, the person you lied about can sue you, but they can’t sue Instagram. I think that this is overall a good thing (and that recent attempts to roll back Section 230 have had bad effects).
But if the government can’t make internet companies behave, who will? I don’t think we can trust them to self-regulate. We, the users of social media, need to take some responsibility for developing shared expectations, and for reporting posts that violate them. We also can and should regulate our own behavior: fact-checking articles before posting them, pointing out when others share factually inaccurate information, acknowledging and accepting corrections, and seeking out new information from a wide variety of sources.
Ray Briggs
https://www.philosophytalk.org/blog/cracking-down-disinformation