A Way Forward?

A recent column from the Boston Globe began with a paragraph that captures a discussion we’ve had numerous times on this blog.

Senator Daniel Patrick Moynihan once said, “Everyone is entitled to his own opinion, but not his own facts.” These days, though, two out of three Americans get their news from social media sites like Facebook, its subsidiary Instagram, Google’s YouTube, and Twitter. And these sites supply each of us with our own facts, showing conservatives mostly news liked by other conservatives, feeding liberals mostly liberal content.

The author, Josh Bernoff, explained why reimposing the Fairness Doctrine isn't an option; that doctrine was a quid pro quo of sorts. It required certain behaviors in return for permission to use broadcast frequencies controlled by the government. It never applied to communications that didn't use those frequencies–and there is no comparable leverage that would allow the government to require a broader application.

That said, policymakers are not entirely at the mercy of the social networking giants that have become the most significant purveyors of news and information–as well as propaganda and misinformation.

As the column points out, social media sites are making efforts–the author calls them “baby steps”–to control the worst content, like hate speech. But they've made only token efforts to alter the algorithms that generate clicks and profits by feeding users material that keeps them engaged with the site. Unfortunately, those algorithms also intensify American tribalism.

These algorithms keep users on the site longer by sustaining their preferred worldviews, irrespective of the factual basis of those preferences–and thus far, social media sites have not been held accountable for the damage this causes.

Their shield is Section 230 of the Communications Decency Act. Section 230 is

a key part of US media regulation that enables social networks to operate profitably. It creates a liability shield so that sites like Facebook that host user-generated content can’t be held responsible for defamatory posts on their sites and apps. Without it, Facebook, Twitter, Instagram, YouTube, and similar sites would get sued every time some random poster said that Mike Pence was having an affair or their neighbor’s Christmas lights were part of a satanic ritual.

Removing the shield entirely isn't the answer. Full repeal would drastically curb free expression–not just on social media, but in other places, like the comment sections of newspapers. But that doesn't mean we can't take a leaf from the Fairness Doctrine's book and make Section 230 a quid pro quo–something that could be done without eroding the protections of the First Amendment.

Historically, Supreme Court opinions regarding First Amendment protections for problematic speech have taken the position that the correct remedy is not shutting it down but stimulating “counterspeech.” Justice Oliver Wendell Holmes wrote in a 1919 opinion, “The ultimate good desired is better reached by free trade in ideas — that the best test of truth is the power of the thought to get itself accepted in the competition of the market.” And in 1927, Justice Louis Brandeis wrote, “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”….

Last year, Facebook generated $70 billion in advertising revenue; YouTube, around $15 billion; and Twitter, $3 billion. Now the FCC should require them to set aside 10 percent of their total ad space to expose people to diverse sources of content. They would be required to show free ads for mainstream liberal news sources to conservatives, and ads for mainstream conservative news sites to liberals. (They already know who’s liberal and who’s conservative — how do you think they bias the news feed in the first place?) The result would be sort of a tax, paid in advertising, to compensate for the billions these companies make under the government’s generous Section 230 liability shield and counteract the toxicity of their algorithms.

Sounds good to me. 