Information Silos And The First Amendment

The First Amendment contemplates and protects a “marketplace of ideas.” We have no precedent for an information environment in which there is no marketplace–no “agora” where different ideas and perspectives contend with each other for acceptance.

What we have instead are information “silos.” A column in the New York Times recently quoted Robert Post, a Yale professor, for the observation that people have always been crazy, but the internet has allowed them to find each other.

In those silos, they talk only to each other.

Social media has enabled the widespread and instantaneous transmission of lies in the service of political gain, and we are seeing the results. The question is: what should we do?

One set of scholars has concluded that the damage being done by misinformation and propaganda outweighs the damage of censorship. Rick Hasen, perhaps the pre-eminent scholar of election law, falls into that category:

Change is urgent to deal with election pathologies caused by the cheap speech era, but even legal changes as tame as updating disclosure laws to apply to online political ads could face new hostility from a Supreme Court taking a libertarian marketplace-of-ideas approach to the First Amendment. As I explain, we are experiencing a market failure when it comes to reliable information voters need to make informed choices and to have confidence in the integrity of our electoral system. But the Court may stand in the way of necessary reform.

I don’t know what Hasen considers “necessary reform,” but I’m skeptical.

I have always been a First Amendment purist, and I still agree with the balance struck by the Founders, who understood that–as pernicious and damaging as bad ideas can be–allowing government to determine which ideas get voiced is likely to be much more dangerous. (As a former ACLU colleague memorably put it, “Poison gas is a great weapon until the wind shifts.”)

That said, social media platforms aren’t government. Like brick-and-mortar private businesses, they can insist on certain behaviors by their customers. And like other private businesses, they can and should be regulated in the public interest. (At the very least, they should be required to apply their own rules consistently. People expressing concern/outrage over Twitter’s ban of Trump should be reminded that he would have encountered that ban much earlier had he been an ordinary user. Trump had flouted Twitter and Facebook rules for years.)

The Times column suggests we might learn from European approaches to issues of speech, including falsehoods and hate speech. Hate speech can only be banned in the U.S. if it is intended to incite imminent violence and is actually likely to do so. Europeans have decided that hate speech isn’t valuable public discourse–that racism isn’t an idea; it’s a form of discrimination.

The underlying philosophical difference here is about the right of the individual to self-expression. Americans value that classic liberal right very highly — so highly that we tolerate speech that might make others less equal. Europeans value the democratic collective and the capacity of all citizens to participate fully in it — so much that they are willing to limit individual rights.

The First Amendment was crafted for a political speech environment markedly different from today’s, as Tim Wu has argued. Government censorship was then the greatest threat to free speech. Today, those, including Trump, “who seek to control speech use new methods that rely on the weaponization of speech itself, such as the deployment of ‘troll armies,’ the fabrication of news, or ‘flooding’ tactics” that humiliate, harass, discourage, and even destroy targeted speakers.

Wu argues that Americans can no longer assume that the First Amendment is an adequate guarantee against malicious speech control and censorship. He points out that the marketplace of ideas has become corrupted by technologies “that facilitate the transmission of false information.”

American courts have long held that the best test of truth is the power of an idea to get itself accepted in the competition that characterizes a marketplace. They haven’t addressed what happens when there is no longer a functioning market–when citizens confine their communicative interactions to sites that depend for their profitability on confirming the biases of carefully targeted populations.

I certainly don’t think the answer is to dispense with–or water down–the First Amendment. But that Amendment was an effort to keep those with power from controlling information. In today’s information environment, platforms like Twitter, Facebook, etc. are as powerful and influential as government. Our challenge is to somehow rein in intentional propaganda and misinformation without throwing the baby out with the bathwater.

Any ideas how we do that?


A Way Forward??

A recent column from the Boston Globe began with a paragraph that captures a discussion we’ve had numerous times on this blog.

Senator Daniel Patrick Moynihan once said, “Everyone is entitled to his own opinion, but not his own facts.” These days, though, two out of three Americans get their news from social media sites like Facebook, its subsidiary Instagram, Google’s YouTube, and Twitter. And these sites supply each of us with our own facts, showing conservatives mostly news liked by other conservatives, feeding liberals mostly liberal content.

The author, Josh Bernoff, explained why reimposing the Fairness Doctrine isn’t an option; that doctrine was a quid pro quo of sorts. It required certain behaviors in return for permission to use broadcast frequencies controlled by the government. It never applied to communications that didn’t use those frequencies–and there is no leverage that would allow government to require a broader application.

That said, policymakers are not entirely at the mercy of the social networking giants who have become the most significant purveyors of news and information–as well as propaganda and misinformation.

As the column points out, social media sites are making efforts–the author calls them “baby steps”–to control the worst content, like hate speech. But they’ve made only token efforts to alter the algorithms that generate clicks and profits by feeding users materials that increase involvement with the site. Unfortunately, those algorithms also intensify American tribalism.

These algorithms keep users on the site longer by sustaining their preferred worldviews, irrespective of the factual basis of those preferences–and thus far, social media sites have not been held accountable for the damage that causes.

Their shield is Section 230 of the Communications Decency Act. Section 230 is

a key part of US media regulation that enables social networks to operate profitably. It creates a liability shield so that sites like Facebook that host user-generated content can’t be held responsible for defamatory posts on their sites and apps. Without it, Facebook, Twitter, Instagram, YouTube, and similar sites would get sued every time some random poster said that Mike Pence was having an affair or their neighbor’s Christmas lights were part of a satanic ritual.

Removing the shield entirely isn’t the answer. Full repeal would drastically curb free expression–not just on social media, but in other places, like the comment sections of newspapers. But that doesn’t mean we can’t take a leaf from the Fairness Doctrine book, and make Section 230 a quid pro quo–something that could be done without eroding the protections of the First Amendment.

Historically, Supreme Court opinions regarding First Amendment protections for problematic speech have taken the position that the correct remedy is not shutting it down but stimulating “counterspeech.” Justice Oliver Wendell Holmes wrote in a 1919 opinion, “The ultimate good desired is better reached by free trade in ideas — that the best test of truth is the power of the thought to get itself accepted in the competition of the market.” And in 1927, Justice Louis Brandeis wrote, “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”

Last year, Facebook generated $70 billion in advertising revenue; YouTube, around $15 billion; and Twitter, $3 billion. Now the FCC should require them to set aside 10 percent of their total ad space to expose people to diverse sources of content. They would be required to show free ads for mainstream liberal news sources to conservatives, and ads for mainstream conservative news sites to liberals. (They already know who’s liberal and who’s conservative — how do you think they bias the news feed in the first place?) The result would be sort of a tax, paid in advertising, to compensate for the billions these companies make under the government’s generous Section 230 liability shield and counteract the toxicity of their algorithms.

Sounds good to me. 


Constitutional Rights At The Schoolhouse Door

As regular readers of this blog and my former students know, I approach my course on “Law and Public Affairs” through a constitutional lens. There are some obvious reasons for that focus: many of my students will work for government agencies, and will be legally obliged to adhere to what I have sometimes called “the Constitutional Ethic.” Due to the apparent lack of civic education in the nation’s high schools, a troubling number of graduate students come to class with very hazy understandings of the country’s legal foundations.

Freedom of speech seems particularly susceptible to misunderstanding.

The first problem is that a significant number of Americans don’t “get” that the Bill of Rights only restrains government. Walmart or the Arts and Entertainment Channel or (as one angry caller insisted when I was at the ACLU) White Castle cannot be sued for denying you your First Amendment Right to express yourself.

The most difficult concept for my students, however, has been the principle of content neutrality. Government can–within reasonable limits– regulate the time, place and manner of citizens’ communication, but it cannot favor some messages over others. (I used to illustrate that rule by explaining that city ordinances could prohibit sound trucks from operating in residential neighborhoods between the hours of 10 pm and 7 am, but could not allow trucks advocating for candidate Smith while banning those for candidate Jones. I had to discontinue that example when I realized that none of today’s students had the slightest idea what a sound truck was…)

One example I did continue to use was public school efforts to control T-shirts with messages on them. Private schools can do what they wish–they aren’t government–but public schools cannot constitutionally favor some messages over others. This is evidently a lesson that many Indiana schools have yet to learn. A brief article from the Indianapolis Star reports that the ACLU is suing a school in Manchester, Indiana, after a student was forced by administrators to go home for wearing a T-shirt with the text “I hope I don’t get killed for being Black today.”

According to the Complaint, students at the school are allowed to wear T-shirts with Confederate flags and “Blue Lives Matter” slogans. It describes the plaintiff, who is identified only by his initials, as one of the few Black students at the school.

“Schools cannot selectively choose which social issues students can support through messages on their clothing,” Ken Falk, the ACLU of Indiana’s legal director, said in a prepared statement on Monday. “Students do not lose their constitutional rights at the schoolhouse doors. The refusal of the school to allow D.E. to wear his t-shirt is a violation of his right to free speech.”

The school would be within its rights to ban all “message” T-shirts (although I can hear the grumbling now). Favoring certain messages over others, however, violates the principle of content neutrality–a core precept of the Free Speech Clause.

The courts give school administrators a good deal more leeway than other government actors, on the theory that providing an educational environment requires a larger measure of control than would be appropriate for adults. But there are limits; as Ken Falk noted, and the Supreme Court affirmed in Tinker v. Des Moines, students do not leave their constitutional rights at the schoolhouse door.

Far too many school administrators are more focused on exerting control than on modeling or transmitting basic constitutional values. Too many public schools are operated as totalitarian regimes–environments that stress compliance and group-think, rather than teaching critical thinking, acquainting young people with the values of a democratic society, and encouraging civic debate and engagement.

When school officials themselves routinely break the rules, is it any wonder so many young people graduate still unaware of them?


It Depends And It’s Complicated

Every so often, intellectual luminaries initiate what the rest of us might call a “pissing match.”

One such match was triggered by a letter published in Harper’s, warning that the spread of “censoriousness” is leading to “an intolerance of opposing views” and “a vogue for public shaming and ostracism.” Some of my favorite authors–and some not-so-favorite–are signatories (and no, I’m not identifying either category).

The letter approves of the “powerful protests for racial and social justice” that it says are leading to “overdue demands for police reform, along with wider calls for greater equality and inclusion across our society.” But it goes on to disapprove–strongly–of what it calls “a new set of moral attitudes and political commitments” leading to the delivery of “hasty and disproportionate punishments instead of considered reforms,” and it charges that this disproportionate response tends “to weaken our norms of open debate and toleration of differences in favor of ideological conformity.”

As an aside, I’m not so certain those norms ever existed outside certain rarefied circles. I sure haven’t seen much evidence of a genteel “toleration of differences”–and such courtesies certainly haven’t characterized social media.

I find myself agreeing with a remark attributed to US senator Brian Schatz (D. Hawaii), that “lots of brainpower and passion is being devoted to a problem that takes a really long time to describe, and is impossible to solve, and meanwhile we have mass preventable death”.

A Guardian article reported the reactions of some of that paper’s columnists, at least two of whom pointed out that the letter was a bit fuzzy in its definition of “cancel culture.” Zoe Williams, for example, wrote

This reminds me a lot of the arguments we used to have about religious tolerance in the 90s. Toleration was a good and necessary thing; but what if it meant you had to tolerate people who themselves wouldn’t tolerate you?

One of the Guardian commenters was Samuel Moyn, a professor of law and history at Yale, who had signed the original letter. He explained that he’d signed on, not because he is a free speech absolutist–a status he disclaims–but because he believes that,

If it is true that hierarchies are in part maintained – not just undone – by speech, and that speech can harm and not just help, it doesn’t follow that more free speech for more people isn’t generally a good cause. It is.

A few people sent me the original letter, and asked my opinion. With the caveat that I am no more equipped to weigh in than anyone else, here are my reactions:

Free speech has always been contested. It has also always been misunderstood: we have the right to “speak our piece” without interference by government. We have never had–and never will have–the right to speak our piece without repercussions, without hearing from people who disagree with what we have said.

Do extreme negative responses intimidate people, and deter others from speaking out–suppressing, rather than encouraging, productive debate? Yes. Isn’t that regrettable? Usually–although not always.

Is the extreme sort of blowback that the letter excoriates often unfair, and even unhelpful to the cause of those engaging in the disproportionate reaction? Yes–often.

Have the Internet and social media amplified both hateful speech and over-the-top censorious responses to it? Yes. Does that reality make civil, productive discussion and debate more difficult? You betcha.

None of this, however, qualifies as “breaking news.”

A number of people critical of the letter point to signatories who–they say–are guilty of the very behavior they criticize. That doesn’t make the criticism wrong, of course, but it does point to the fact that whether a reaction is proportionate to the offense is very much a subjective determination.

I tell my students at the beginning of each semester that my goal is for them to leave my class using two phrases far more frequently than they did previously: It depends and it’s more complicated than that.

Meanwhile, Senator Schatz has a point.


Facebook, Disinformation And The First Amendment

These are tough times for Free Speech purists–of whom I am one.

I have always been persuaded by the arguments that support freedom of expression. In a genuine marketplace of ideas, I believe–okay, I want to believe–that better ideas will drive out worse ones. More compelling is the argument that, while some ideas may be truly dangerous, giving government the right to decide which ideas get expressed and which ones don’t would be much more dangerous.

But Facebook and other social media sites are really testing my allegiance to unfettered, unregulated–and amplified–expression. Recently, The Guardian reported that more than 3 million followers and members support the crazy QAnon conspiracy on Facebook, and their numbers are growing.

For those unfamiliar with QAnon, it

is a movement of people who interpret as a kind of gospel the online messages of an anonymous figure – “Q” – who claims knowledge of a secret cabal of powerful pedophiles and sex traffickers. Within the constructed reality of QAnon, Donald Trump is secretly waging a patriotic crusade against these “deep state” child abusers, and a “Great Awakening” that will reveal the truth is on the horizon.

Brian Friedberg, a senior researcher at the Harvard Shorenstein Center, is quoted as saying that Facebook is a “unique platform for recruitment and amplification,” and that he doubts QAnon would have been able to happen without the “affordances of Facebook.”

Facebook isn’t just providing a platform to QAnon groups–its algorithms are actively recommending them to users who may not otherwise have been exposed to them. And it isn’t only QAnon. According to the Wall Street Journal, Facebook’s own internal research in 2016 found that “64% of all extremist group joins are due to our recommendation tools.”

If the problem were limited to QAnon and other conspiracy theories, it would be troubling enough, but it isn’t. A recent essay by a Silicon Valley insider named Roger McNamee in Time Magazine began with an ominous paragraph:

If the United States wants to protect democracy and public health, it must acknowledge that internet platforms are causing great harm and accept that executives like Mark Zuckerberg are not sincere in their promises to do better. The “solutions” Facebook and others have proposed will not work. They are meant to distract us.

McNamee points to a predictable cycle: platforms are pressured to “do something” about harassment, disinformation or conspiracy theories. They respond by promising to improve their content moderation. But–as the essay points out–none have been successful at limiting the harm from third-party content, and so the cycle repeats. (As he notes, banning Alex Jones removed his conspiracy theories from the major sites, but did nothing to stop the flood of similar content from other people.)

The article identifies three reasons content moderation cannot work: scale, latency, and intent. Scale refers to the hundreds of millions of messages posted each day. Latency is the time it takes for even automated moderation to identify and remove a harmful message. The most important obstacle, however, is intent–a/k/a the platform’s business model.

The content we want internet platforms to remove is the content most likely to keep people engaged and online–and that makes it exceptionally valuable to the platforms.

As a result, the rules for AI and human moderators are designed to approve as much content as possible. Alone among the three issues with moderation, intent can only be addressed with regulation.

McNamee argues we should not have to accept disinformation as the price of access, and he offers a remedy:

At present, platforms have no liability for the harms caused by their business model. Their algorithms will continue to amplify harmful content until there is an economic incentive to do otherwise. The solution is for Congress to change incentives by implementing an exception to the safe harbor of Section 230 of the Communications Decency Act for algorithm amplification of harmful content and guaranteeing a right to litigate against platforms for this harm. This solution does not impinge on first amendment rights, as platforms are free to continue their existing business practices, except with liability for harms.

I’m not sure I share McNamee’s belief that his solution doesn’t implicate the First Amendment.

The (relative) newness of the Internet and social media creates uncertainty. What, exactly, are these platforms? How should they be classified? They aren’t traditional publishers–and third parties’ posts aren’t their “speech.” 

As 2020 campaigns heat up, more attention is being paid to how Facebook promotes propaganda. Its refusal to remove or label clear lies from the Trump campaign has prompted advertisers to temporarily boycott the platform. Facebook may react by tightening some moderation, but ultimately, McNamee is right: that won’t solve the problem.

One more conundrum of our Brave New World…

Happy 4th!
