Hard Cases…

As I used to tell my students, cases rarely make it to the Supreme Court unless they’re difficult–unless there are persuasive arguments on both (or several) sides of the issue or issues involved. That admonition has actually become debatable as the current Court, dominated by religious “originalists,” has accepted cases that previous Courts wouldn’t have agreed to hear, but it remains largely true.

And hard cases, as the old legal precept warns, make bad law.

Which brings me to a First Amendment Free Speech case currently pending at the U.S. Supreme Court.

The question before the Court is the constitutionality of laws passed by Florida and Texas that restrict the ability of social media giants to remove certain political or controversial posts–in other words, to moderate the content posted to their platforms. As the Washington Post reported,

During almost four hours of argument Monday, the Supreme Court justices considered whether state governments can set the rules for how social media platforms curate content in a major First Amendment case with implications for the future of free speech online.

The laws being litigated are an effort to prevent social media companies from removing “conservative” viewpoints. The laws would impose strict limits on whether and when firms can block or take down content on their platforms.

At the heart of the matter is the issue highlighted by an exchange between Justice Alito and lawyer Paul Clement.

Justice Samuel Alito pressed NetChoice — a group representing the tech industry — to define the term “content moderation,” asking whether the term was “anything more than a euphemism for censorship.” “If the government’s doing it, then content moderation might be a euphemism for censorship,” said Paul Clement, an attorney representing NetChoice. “If a private party is doing it, content moderation is a euphemism for editorial discretion.”

I’ve frequently posted about Americans’ widespread lack of civic literacy–especially about censorship and freedom of speech. It is depressing how few citizens understand that the Bill of Rights is essentially a list of things that government is forbidden to do. Government is prohibited from dictating our beliefs, censoring our communications, searching or seizing us without probable cause, etc. Those restrictions do not apply to private actors, and for many years, courts have recognized the right of newspapers and other print media to decide what they will, and will not, print, in the exercise of their Free Speech rights.

Perhaps the most important question posed by the recent First Amendment challenges to Texas and Florida’s new social media laws is whether platforms exercise a constitutionally protected right to “editorial discretion” when they moderate speech. The platforms’ central challenge to both laws is that their must-carry and transparency obligations infringe on that right by interfering with the platforms’ ability to pick and choose what speech they host on their sites. It’s the same right, they argue, that newspapers exercise when they pick and choose what speech appears in their pages.

In other words, whose First Amendment rights will we protect? Does the First Amendment give all of us a right to have our opinions disseminated by the social media platform of our choice? Or, if the First Amendment protects speech, does it also protect the right of powerful social media companies to suppress the speech of some of the people who use their platforms?

The Knight Foundation argues:

The First Amendment is not concerned solely—or perhaps even primarily—with the maximization of speech per se. Instead, what it protects and facilitates is the kind of information ecosystem in which free speech values can flourish. Courts have recognized that protecting the right of speech intermediaries to choose what they do and do not publish—in other words, protecting their right to editorial discretion—is a necessary means of creating that kind of environment.

Most of us have concerns about the content moderation policies of these enormously influential and powerful sites. The question before the Court is–once again–who decides? Are those who run those sites entitled to decide what appears on them, or can government control their decisions?

Elon Musk’s takeover of Twitter (now the ridiculous “X”) and his idiosyncratic definition of “free speech” have turned that site into a cesspool of anti-Semitism and conspiracy theories. The First Amendment currently gives him the right to make the site odious, just as Facebook has the right to remove racist and other objectionable posts. We the People decide which platforms we will patronize.

As I used to tell my students, the Bill of Rights addresses a deceptively simple question: who has the right to make this decision?

Flooding The Zone

Times are tough for us Free Speech defenders ….

It’s bad enough that so few Americans understand either the protections or the limitations of the First Amendment’s Free Speech provisions. Fewer still can distinguish between hate speech and hate crimes. And even lawyers dedicated to the protection of our constitutional right to publicly opine and debate recognize the existence of grey zones.

When the Internet first became ubiquitous, I celebrated this new mechanism for expression. I saw it as a welcome development in the “marketplace of ideas.” What I didn’t see was its potential for the spread of deliberate propaganda.

Color me disabused.

Steve Bannon coined the phrase that explains what we are seeing: “flooding the zone with shit.” Rather than inventing a story to counter explanations with which one disagrees, the new approach–facilitated by bots and AI–simply produces immense amounts of conflicting and phony “information” which is then uploaded to social media and other sites. The goal is no longer to make people believe “story A” rather than “story B.” The goal is to create a population that no longer knows what to believe.

It’s a tactic that has infected American politics and made governing close to impossible–but it is not a tactic confined to the U.S. It’s global.

Heather Cox Richardson has summed up the resulting threat:

A report published last week by the European Commission, the body that governs the European Union, says that when X, the company formerly known as Twitter, got rid of its safety standards, Russian disinformation on the site took off. Lies about Russia’s war against Ukraine spread to at least 165 million people in the E.U. and allied countries like the U.S., and garnered at least 16 billion views. The study found that Telegram, along with Instagram and Facebook, both owned by Meta, also spread pro-Kremlin propaganda that uses hate speech and boosts extremists.

The report concluded that “the Kremlin’s ongoing disinformation campaign not only forms an integral part of Russia’s military agenda, but also causes risks to public security, fundamental rights and electoral processes” in the E.U. The report’s conclusions also apply to the U.S., where the far right is working to undermine U.S. support for Ukraine by claiming—falsely—that U.S. aid to Ukraine means the Biden administration is neglecting emergencies at home, like the fires last month in Maui.

Russian operatives famously flooded social media with disinformation to influence the 2016 U.S. election, and by 2022 the Federal Bureau of Investigation (FBI) warned that China had gotten into the act. Today, analyst Clint Watts of Microsoft reported that in the last year, China has honed its ability to generate artificial images that appear to be U.S. voters, using them to stoke “controversy along racial, economic, and ideological lines.” It uses social media accounts to post divisive, AI-created images that attack political figures and iconic U.S. symbols.

Once upon a time, America could depend upon two large oceans to protect us from threats from abroad. Those days are long gone, and our contemporary isolationists–who refuse to understand, for example, how Russia’s invasion of Ukraine could affect us–utterly fail to recognize that denying our new global reality doesn’t make it go away.

The internet makes it possible to deliver disinformation on a scale never previously available–or imagined. And it poses a very real problem for those of us who defend freedom of speech, because most of the proposed “remedies” I’ve seen would make things worse.

This nation’s Founders weren’t naive; they understood that ideas are powerful, and that bad ideas can do real harm. They opted for freedom of speech–defined in our system as freedom from government censorship–because they also recognized that allowing government to decide which ideas could be exchanged would be much more harmful.

I still agree with the Founders’ decision, but even if I didn’t, current communication technology has largely made government control impossible. (I still recall a conversation I had with two students at a Chinese university that had invited me to speak. I asked them about China’s control of the Internet and they laughed, telling me that any “tech savvy” person could evade state controls–and that many did. And that was some 18 years ago.)

At this point, we have to depend upon those who manage social media platforms to monitor what their users post, which is why egomaniacs like Elon Musk–who champions a “free speech” he clearly doesn’t understand–are so dangerous.

Ultimately, we will have to depend upon the ability of the public to separate the wheat from the chaff–and the ability to do that requires a level of civic literacy that has thus far eluded us….


The New Gatekeepers?

Speaking of media and information failures…

Any competent historian will confirm that propaganda and misinformation have always been with us. (Opponents of Thomas Jefferson warned that Bibles would be burned if he were elected.) The difference between that history and the world we now occupy is, of course, the Internet, and its ability to spread mis- and disinformation worldwide with the click of a computer key.

As a recent column in the New York Times put it, the Internet has caused misinformation to metastasize.

The column noted that on July 8, Trump had taken to Truth Social, his pathetic social media platform, to claim that he had really won the 2020 presidential vote in Wisconsin, despite all evidence to the contrary. Barely 8,000 people shared that “Truth.” And yet

Within 48 hours of Mr. Trump’s post, more than one million people saw his claim on at least a dozen other sites. It appeared on Facebook and Twitter, from which he has been banished, but also YouTube, Gab, Parler and Telegram, according to an analysis by The New York Times.

The spread of Mr. Trump’s claim illustrates how, ahead of this year’s midterm elections, disinformation has metastasized since experts began raising alarms about the threat. Despite years of efforts by the media, by academics and even by social media companies themselves to address the problem, it is arguably more pervasive and widespread today.

It isn’t just Facebook and Twitter. The number of platforms has proliferated. Some 69 million people have joined platforms like Parler, Gab, Truth Social, Gettr and Rumble, sites that brag about being “conservative alternatives” to Big Tech. And even though many of those who have flocked to such platforms have been banned from larger sites, “they continue to spread their views, which often appear in screen shots posted on the sites that barred them.”

When the Internet was in its infancy, I was among those who celebrated the diminished–actually, the obliterated–role of the gatekeeper. Previously, editors at traditional news sources–our local newspapers and television news stations–had decided what was newsworthy, what their audiences needed to know, and imposed certain rules that dictated whether even those chosen stories could be reported. The most important of those rules was verification; could the reporter confirm the accuracy of whatever was being alleged? 

True, the requirement that news be verified slowed down reporting, and often prevented an arguably important story from being published at all. Much depended upon the doggedness of the reporter. But professional journalists– purveyors of that much derided “lame stream” journalism–were gatekeepers preventing the widespread dissemination of unsubstantiated rumors, conspiracies and outright lies.

Today, anyone with a computer and the time to use it can spread a story, whether that story is verifiable or an outright invention. We no longer have gatekeepers. Even the larger and presumably more responsible platforms are intent upon generating “clicks” and increasing “engagement,” the time users spend on their sites. Accuracy is a minor concern, if it is a concern at all.

The Wild West of today’s information environment is enormously dangerous to civil society and democratic self-government. But now, an even more ominous threat looms: Billionaires are buying social media platforms. Elon Musk, currently the world’s richest man, now owns Twitter, “a social media network imbued with so much political capital it could fracture nations.”

It’s a trend years in the making. From the political largess of former Facebook executives like Sheryl Sandberg and Joel Kaplan to the metapolitics of Peter Thiel, tech titans have long adopted an inside/outside playbook for conducting politics by other means.

 But recent developments, including Donald Trump’s investment in Twitter clone Truth Social and Kanye West’s supposed agreement to buy the ailing social network Parler, illustrate how crucial these new technologies have become in politics. More than just communication tools, platforms have become the stage on which politics is played.

The linked article was written by Joan Donovan, research director of Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy, and it details the multiple ways in which these billionaires can deploy the power of social media to the detriment of American democracy. As she concludes:

In many ways, the infamous provocateur journalist Andrew Breitbart was right: politics are downstream of culture. To this I’d add that culture is downstream of infrastructure. The politics we get are the ones that sprout from our technology, so we should cultivate a digital public infrastructure that does not rely on the whims of billionaires. If we do not invest in building an online public commons, our speech will only be as free as our hopefully benevolent dictators say it is.

A world in which Peter Thiel and Elon Musk are informational gatekeepers is a dystopian world I don’t want to inhabit.


Restraining Power

The growing concerns about social media–especially platforms’ moderation of users’ posts–are just the most recent and visible examples of an older conundrum: how do we define and restrain the misuse of power?

When the U.S. Constitution was drafted, concerns about the infringement of individual rights focused almost entirely on government, because only government entities had the power to prescribe and proscribe individual behaviors and punish those who failed to conform. Accordingly, the Bill of Rights restrained only government (initially, only the federal government, which was seen as a greater threat than the state and local units of government that were included in its prohibitions after passage of the 14th Amendment.)

To state the glaringly obvious, in the 200+ years since passage of the original Bill of Rights, a lot of things have changed.

Governments aren’t the only entities exercising considerable authority over our lives–major corporations, a number of them global in scope, not only influence government but engage in negative behaviors that directly affect millions of people, from polluting the environment to exploiting third-world labor. Scholars have belatedly come to question whether the Bill of Rights shouldn’t be applied more broadly–to restrain all entities large enough or powerful enough to invade individual rights.

I have absolutely no idea how that might work. (It probably wouldn’t.) That said, we are at a point where we absolutely must contend with the inordinate power exercised by private, non-governmental organizations, and especially by Facebook, Twitter, et al.

Robert Reich addressed that problem in a recent essay for the Guardian.

Twitter and Instagram just removed antisemitic posts from Kanye West and temporarily banned him from their platforms. It just goes to show … um, what?

How good these tech companies are at content moderation? Or how irresponsible they are for “muzzling” controversial views from the extreme right? (Defenders of West, such as the Indiana attorney general, Todd Rokita, are incensed that he’s been banned.) Or how arbitrary these giant megaphones are in making these decisions? (What would Elon Musk do about Kanye West?)

Call it the Kanye West paradox: do the social media giants have a duty to take down noxious content or a duty to post it? And who decides?

As Reich quite accurately notes, these platforms, with their huge size and extraordinary power over what’s communicated, exert enormous sway over the American public. And they are utterly unaccountable to that public.

Two cases pending before the Supreme Court illustrate the underlying dilemma:

One case involves Section 230 of the Communications Decency Act of 1996, the provision that shields social media platforms from liability for what’s posted on them. There, the plaintiffs claim that social media (YouTube in one case, Twitter in the other) led to the deaths of family members at the hands of terrorists. In the other case, the plaintiffs argue that the First Amendment forbids these platforms from being more vigilant. That case arises from a Texas law that allows Texans and the state’s attorney general to sue social media giants for “unfairly” banning or censoring them based on political ideology.

It’s an almost impossible quandary – until you realize that these questions arise because of the huge political and social power of these companies, and their lack of accountability.

In reality, they aren’t just for-profit companies. By virtue of their size and power, their decisions have enormous public consequences.

Reich is betting that the Court will treat them as common carriers, like railroads or telephone lines. Common carriers can’t engage in unreasonable discrimination in who uses them, must charge just and reasonable prices, and must provide reasonable care to the public.

But is there any reason to trust the government to do a better job of content moderation than the giants do on their own? (I hate to imagine what would happen under a Republican FCC.)

So are we inevitably locked into the Kanye West paradox?

Or is there a third and better alternative to the bleak choice between leaving content moderation up to the giant unaccountable firms or to a polarized government?

The answer is yes. It’s to address the underlying problem directly: the monopoly power possessed by the giant social media companies.

The way to do this is to apply the antitrust laws – and break the companies up.

My guess is that this is where we’ll end up, eventually. There’s no other reasonable choice. As Winston Churchill is reputed to have said: “Americans can always be trusted to do the right thing, once all other possibilities have been exhausted.”

It’s hard to disagree. And actually, a far more aggressive approach to antitrust would solve more problems than those we are experiencing with social media…


That Misunderstood First Amendment

I know that my constant yammering about the importance of civic education can seem pretty tiresome–especially in the abstract–so I was initially gratified to read a Brookings Institution article focusing on a very tangible example.

Emerging research confirms the damage being done by misinformation disseminated via social media, and that research has led to a sometimes acrimonious debate over what can be done to ameliorate the problem. One especially troubling argument has been over content that isn’t, as the article recognizes, “per se illegal” but is nevertheless likely to cause significant harm.

Many on the left insist digital platforms haven’t done enough to combat hate speech, misinformation, and other potentially harmful material, while many on the right argue that platforms are doing far too much—to the point where “Big Tech” is censoring legitimate speech and effectively infringing on Americans’ fundamental rights.

There is considerable pressure on policymakers to pass laws addressing the ways in which social media platforms operate–and especially how those platforms moderate incendiary posts. As the article notes, the electorate’s incorrect beliefs about the First Amendment add to “the political and economic challenges of building better online speech governance.”

What far too many Americans don’t understand about freedom of speech–and for that matter, not only the First Amendment but the entire Bill of Rights–is that the liberties being protected are freedom from government action. If the government isn’t involved, neither is the Constitution.

I still remember a telephone call I received when I directed Indiana’s ACLU. A young man wanted the ACLU to sue White Castle, which had refused to hire him because they found the tattoos covering him “unappetizing.” He was sure they couldn’t do that, because he had a First Amendment right to express himself. I had to explain to him that White Castle also had a First Amendment right to control its messages. Had the legislature or City-County Council forbidden citizens to communicate via tattooing, that would be government censorship, and would violate the First Amendment.

That young man’s belief that the right to free speech is somehow a free-floating right against anyone trying to restrict his communication is a widespread and pernicious misunderstanding, and it complicates discussion of the available approaches to content moderation on social media platforms. Facebook, Twitter and the rest are, like newspaper and magazine publishers, private entities–like White Castle, they have their own speech rights. As the author of the Brookings article writes,

Nonetheless, many Americans erroneously believe that the content-moderation decisions of digital platforms violate ordinary people’s constitutionally guaranteed speech rights. With policymakers at all levels of government working to address a diverse set of harms associated with platforms, the electorate’s mistaken beliefs about the First Amendment could add to the political and economic challenges of building better online speech governance.

The author conducted research into three related questions: How common is this inaccurate belief? Does it correlate with lower support for content moderation? And if it does, does education about the actual scope of First Amendment speech protection increase support for platforms to engage in content moderation?

The results of that research were, as academics like to say, “mixed,” especially for proponents of more and better civic education.

Fifty-nine percent of participants answered the constitutional question incorrectly, and those participants were less likely to support decisions by platforms to ban particular users. As the author noted, misunderstanding of the First Amendment was both very common and linked to lower support for content moderation. Theoretically, then, educating people about the First Amendment should increase support for content moderation.

However, it turned out that such training actually lowered support for content moderation. (Interestingly, that decrease in support was “linked to Republican identity.”)

Why might that be? The author speculated that respondents might reduce their support for content moderation once they realized that there is less legal recourse than expected when they find such moderation uncongenial to their political preferences.

In other words, it is reasonable to be more skeptical of private decisions about content moderation once one becomes aware that the legal protections for online speech rights are less than one had previously assumed. …

Republican politicians and the American public alike express the belief that platform moderation practices favor liberal messaging, despite strong empirical evidence to the contrary. Many Americans likely hold such views at least in part due to strategically misleading claims by prominent politicians and media figures, a particularly worrying form of misinformation. Any effort to improve popular understandings of the First Amendment will therefore need to build on related strategies for countering widespread political misinformation.

Unfortunately, when Americans inhabit alternative realities, even civic education runs into a wall….