Tag Archives: social media

That Misunderstood First Amendment

I know that my constant yammering about the importance of civic education can seem pretty tiresome–especially in the abstract–so I was initially gratified to read a Brookings Institution article focusing on a very tangible example.

Emerging research confirms the damage being done by misinformation disseminated via social media, and that research has led to a sometimes acrimonious debate over what can be done to ameliorate the problem. One especially troubling argument has been over content that isn’t, as the article recognizes, “per se illegal” but is nevertheless likely to cause significant harm.

Many on the left insist digital platforms haven’t done enough to combat hate speech, misinformation, and other potentially harmful material, while many on the right argue that platforms are doing far too much—to the point where “Big Tech” is censoring legitimate speech and effectively infringing on Americans’ fundamental rights.

There is considerable pressure on policymakers to pass laws addressing the ways in which social media platforms operate–and especially how those platforms moderate incendiary posts. As the article notes, the electorate’s incorrect beliefs about the First Amendment add to “the political and economic challenges of building better online speech governance.”

What far too many Americans don’t understand about freedom of speech–and for that matter, not only the First Amendment but the entire Bill of Rights–is that the liberties being protected are freedom from government action. If the government isn’t involved, neither is the Constitution.

I still remember a telephone call I received when I directed Indiana’s ACLU. A young man wanted the ACLU to sue White Castle, which had refused to hire him because they found the tattoos covering him “unappetizing.” He was sure they couldn’t do that, because he had a First Amendment right to express himself. I had to explain to him that White Castle also had a First Amendment right to control its messages. Had the legislature or City-County Council forbidden citizens to communicate via tattooing, that would have been government censorship, and would have violated the First Amendment.

That young man’s belief that the right to free speech is somehow a free-floating right against anyone trying to restrict his communication is a widespread and pernicious misunderstanding, and it complicates discussion of the available approaches to content moderation on social media platforms. Facebook, Twitter and the rest are, like newspaper and magazine publishers, private entities–like White Castle, they have their own speech rights. As the author of the Brookings article writes,

Nonetheless, many Americans erroneously believe that the content-moderation decisions of digital platforms violate ordinary people’s constitutionally guaranteed speech rights. With policymakers at all levels of government working to address a diverse set of harms associated with platforms, the electorate’s mistaken beliefs about the First Amendment could add to the political and economic challenges of building better online speech governance.

The author conducted research into three related questions: How common is this inaccurate belief? Does it correlate with lower support for content moderation? And if it does, does education about the actual scope of First Amendment speech protection increase support for platforms to engage in content moderation?

The results of that research were, as academics like to say, “mixed,” especially for proponents of more and better civic education.

Fifty-nine percent of participants answered the Constitutional question incorrectly, and were less likely to support decisions by platforms to ban particular users. As the author noted, misunderstanding of the First Amendment was both very common and linked to lower support for content moderation. Theoretically, then, educating about the First Amendment should increase support for content moderation.

However, it turned out that such training actually lowered support for content moderation. (Interestingly, that decrease in support was “linked to Republican identity.”)

Why might that be? The author speculated that respondents might reduce their support for content moderation once they realized that there is less legal recourse than expected when they find such moderation uncongenial to their political preferences.

In other words, it is reasonable to be more skeptical of private decisions about content moderation once one becomes aware that the legal protections for online speech rights are less than one had previously assumed. …

 Republican politicians and the American public alike express the belief that platform moderation practices favor liberal messaging, despite strong empirical evidence to the contrary. Many Americans likely hold such views at least in part due to strategically misleading claims by prominent politicians and media figures, a particularly worrying form of misinformation. Any effort to improve popular understandings of the First Amendment will therefore need to build on related strategies for countering widespread political misinformation.

Unfortunately, when Americans inhabit alternative realities, even civic education runs into a wall….

A Compelling Read

Jonathan Haidt is a well-regarded scholar who has written a compelling article for the Atlantic, titled  “Why The Past Ten Years Of American Life Have Been Uniquely Stupid.” He begins by referencing the biblical story of Babel:

What would it have been like to live in Babel in the days after its destruction? In the Book of Genesis, we are told that the descendants of Noah built a great city in the land of Shinar. They built a tower “with its top in the heavens” to “make a name” for themselves. God was offended by the hubris of humanity and said:

Look, they are one people, and they have all one language; and this is only the beginning of what they will do; nothing that they propose to do will now be impossible for them. Come, let us go down, and confuse their language there, so that they will not understand one another’s speech.

The text does not say that God destroyed the tower, but in many popular renderings of the story he does, so let’s hold that dramatic image in our minds: people wandering amid the ruins, unable to communicate, condemned to mutual incomprehension.

Babel, according to Haidt, is not a story about tribalism. Instead, he insists it’s a story about the “fragmentation of everything.” And he makes a point that is often overlooked:  this fragmentation isn’t just happening between those who see themselves as red or blue, but within both left and right, and “within universities, companies, professional associations, museums, and even families.”

How have we come to this point? Haidt blames social media. The early Internet seemed to promise an expansion of cooperation and global democracy.

Myspace, Friendster, and Facebook made it easy to connect with friends and strangers to talk about common interests, for free, and at a scale never before imaginable. By 2008, Facebook had emerged as the dominant platform, with more than 100 million monthly users, on its way to roughly 3 billion today. In the first decade of the new century, social media was widely believed to be a boon to democracy. What dictator could impose his will on an interconnected citizenry? What regime could build a wall to keep out the internet?

The high point of techno-democratic optimism was arguably 2011, a year that began with the Arab Spring and ended with the global Occupy movement. That is also when Google Translate became available on virtually all smartphones, so you could say that 2011 was the year that humanity rebuilt the Tower of Babel. We were closer than we had ever been to being “one people,” and we had effectively overcome the curse of division by language. For techno-democratic optimists, it seemed to be only the beginning of what humanity could do.

Then, he writes, it all fell apart.

Haidt references the three major forces that social scientists have identified as collectively necessary to the cohesion of successful democracies: social capital–defined as extensive social networks with high levels of trust–strong institutions, and shared stories. And he points out that social media has weakened all three, as the platforms morphed from a new form of communication into a mechanism for performing–for what Haidt characterizes as the management of one’s “personal brand.” Communication became a method for impressing others, rather than a sharing that might deepen friendships and understanding. He blames the introduction of the “like” and “share” buttons–which allowed the platforms to gauge users’ engagement–as a critical turning point.

As a social psychologist who studies emotion, morality, and politics, I saw this happening too. The newly tweaked platforms were almost perfectly designed to bring out our most moralistic and least reflective selves. The volume of outrage was shocking.

I encourage you to click through and read the entire, lengthy article, but if you don’t have time to do so, I’ll end this recap with the paragraph that struck me as a description of the most troubling consequences of our current use of these social media platforms.

It’s not just the waste of time and scarce attention that matters; it’s the continual chipping-away of trust. An autocracy can deploy propaganda or use fear to motivate the behaviors it desires, but a democracy depends on widely internalized acceptance of the legitimacy of rules, norms, and institutions. Blind and irrevocable trust in any particular individual or organization is never warranted. But when citizens lose trust in elected leaders, health authorities, the courts, the police, universities, and the integrity of elections, then every decision becomes contested; every election becomes a life-and-death struggle to save the country from the other side.

Haidt’s very troubling conclusion: If we do not make major changes soon, then our institutions, our political system, and our society may collapse.

I’m very afraid he’s right.

Social Media, Tribalism, And Craziness

If we are ever going to emerge from pandemic hell or semi-hell, we have to get a handle on two of the most dangerous aspects of contemporary life: the use of social media to spread disinformation, and the politicization of science–including, especially now, medical science.

Talking Points Memo recently ran a column (behind the paywall, so no link–sorry) from an expert in social media. That column made several points:

  • Fake news spreads faster than verified and validated news from credible sources. We also know that items and articles connecting vaccines and death are among the content people engage with most.
  • The algorithms used by social media platforms are primed for engagement, creating a “rabbit-hole effect” that pushes users who click on anti-vaccine messages toward more anti-vaccine content. The people spreading medical misinformation know this, and know how to exploit the weaknesses of the engagement-driven systems on social media platforms.
  • “Social media is being manipulated on an industrial scale, including by a Russian campaign pushing disinformation about COVID-19 vaccines.” Research tells us that people who rely on Facebook for their news about the coronavirus are less likely to be vaccinated than people who get their coronavirus news from any other source.
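The “rabbit-hole effect” described above can be illustrated with a toy sketch. This is purely hypothetical–the function names, item fields, and scoring weights are my own illustrative assumptions, not any platform’s actual algorithm–but it shows the basic feedback structure: once a user engages with a topic, similar items get boosted, so each click surfaces more of the same.

```python
# Toy illustration of engagement-driven ranking (hypothetical fields and
# weights -- not any real platform's algorithm).

def rank_feed(items, engaged_topics):
    """Order candidate items by a simple engagement score.

    items: list of dicts with 'title', 'topic', and 'base_engagement'
    engaged_topics: set of topics the user has previously clicked on
    """
    def score(item):
        s = item["base_engagement"]
        # Boost items matching topics the user already engaged with --
        # this feedback loop is the "rabbit-hole effect".
        if item["topic"] in engaged_topics:
            s *= 3.0
        return s

    return sorted(items, key=score, reverse=True)

items = [
    {"title": "Local weather report", "topic": "weather", "base_engagement": 0.5},
    {"title": "Vaccine conspiracy post", "topic": "anti-vax", "base_engagement": 0.4},
    {"title": "Recipe video", "topic": "food", "base_engagement": 0.6},
]

# Before any anti-vax clicks, the conspiracy post ranks last.
print([i["title"] for i in rank_feed(items, engaged_topics=set())])
# After a single click on anti-vax content, it jumps to the top.
print([i["title"] for i in rank_feed(items, engaged_topics={"anti-vax"})])
```

Real recommender systems are vastly more complex, of course, but the structural point survives the simplification: a system optimizing purely for predicted engagement will amplify whatever the user last engaged with, true or not.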

According to the column, the problem is exacerbated by the way in which vaccine-related misinformation fits into people’s preexisting beliefs.

I was struck by the observation that acceptance of wild and seemingly obvious inaccuracies requires a certain “pre-existing” belief system. That, not surprisingly, gets us to America’s current, extreme political tribalism.
 
Let me share some very troubling data: To date, some 86% of Democrats have received at least one COVID-19 vaccine shot–compared with only 45% of Republicans. A Washington Post survey found that only 6% of Democratic respondents reported an intent to decline the vaccine, while 47% of Republicans said they would refuse to be inoculated. 

Not to put too fine a point on it,  this is insane.

Aside from people with genuine medical conditions that make vaccination unwise, the various justifications offered for denying the vaccine range from hypocritical (“pro-life” politicians suddenly defending the right of individuals to control of their own bodies) to legally inaccurate (“freedom” has never included the right to endanger others—if it did, we’d have the “freedom” to drive drunk and ignore red lights), to conspiratorial (COVID is a “hoax” perpetrated by those hated liberals).

Now, America has always had citizens willing to make decisions that endanger others; what is truly mystifying, however, is why such people overwhelmingly inhabit red states— including Indiana. 

Every state with large numbers of people who have refused vaccination is predominantly Republican. In several of those states, hospitalizations of unvaccinated COVID patients threaten to overwhelm health care systems. New York, a blue state, has five COVID patients hospitalized per 100,000 people, while red state Florida, where Governor Ron DeSantis has actually barred businesses from requiring patrons to show proof of vaccination, has 34 per 100,000.

DeSantis’ Trumpian approach is an excellent example of just how dramatically the GOP has departed from the positions that used to define it. Whatever happened to the Republican insistence that business owners have the right to determine the rules for their own employees and patrons? (They still give lip service to those rules when the issue is whether to serve LGBTQ customers, but happily abandon them when the decision involves the health and safety of those same patrons.)

And what happened to the GOP’s former insistence on patriotism? Surely protecting others in one’s community from a debilitating and frequently deadly disease is patriotic.

Tribalism has clearly triumphed over logic and self-interest. As Amanda Marcotte recently wrote in Salon,

getting the vaccine would be an admission for conservatives that they were wrong about COVID-19 in the first place, and that liberals were right. And for much of red-state America, that’s apparently a far worse fate than death.

Making vaccine refusal a badge of political affiliation makes absolutely no sense. It does, however, correspond to the precipitous decline of rationality in what was once the “Grand Old Party”—a party now characterized by the anti-science, anti-logic, anti-intellectualism of officials like Marjorie Taylor Greene, Lauren Boebert, Jim Jordan, Paul Gosar, and Louie Gohmert (who was memorably described by Charlie Pierce as “the dumbest mammal to enter a legislative chamber since Caligula’s horse”).

These mental giants (cough, cough) are insisting that vaccination will “magnetize” the body and make keys stick to you, and that Bill Gates is sneaking “tracking chips” into the vaccine doses. (As a friend recently queried, don’t most of those people warning against “tracking devices” own cell phones?? Talk about tracking…)

Talk about buffoonery.

The problem is, these sad, deranged people are endangering the rest of us.

The Age Of Misinformation

Political scientists often study the characteristics and influence of those they dub “high information voters.” Although that cohort is relatively small, it accounts for a significant amount–probably a majority–of America’s political discourse.

Research has suggested that these more informed voters, who follow politics closely, are just as likely–perhaps even more likely– to exhibit confirmation bias as are Americans less invested in the daily political news. But their ability to spread both information and misinformation is far greater than it was before the Internet and the ubiquity of social media.

As Max Fisher recently wrote in a column for the New York Times, 

There’s a decent chance you’ve had at least one of these rumors, all false, relayed to you as fact recently: that President Biden plans to force Americans to eat less meat; that Virginia is eliminating advanced math in schools to advance racial equality; and that border officials are mass-purchasing copies of Vice President Kamala Harris’s book to hand out to refugee children.

All were amplified by partisan actors. But you’re just as likely, if not more so, to have heard it relayed from someone you know. And you may have noticed that these cycles of falsehood-fueled outrage keep recurring.

Fisher attributes this phenomenon to a number of factors, but especially to an aspect of identity politics; we live in an age where political identity has become central to the self-image held by many Americans.

Fisher cites research attributing the prevalence of misinformation to three main elements of our time. Perhaps the most important of the three is a social environment in which individuals feel the need for what he terms “in-grouping,” and I would call tribalism—identification with like-minded others as a source of strength and (especially) superiority. As he says,

In times of perceived conflict or social change, we seek security in groups. And that makes us eager to consume information, true or not, that lets us see the world as a conflict putting our righteous ingroup against a nefarious outgroup.

American political polarization promotes the sharing of disinformation. The hostility between Red and Blue America feeds a pervasive distrust, and when people are distrustful, they become much more prone to engage in and accept rumor and falsehood. Distrust also encourages people to see the world as “us versus them”–and that’s a world in which we are much more apt to believe information that bolsters “us” and denigrates “them.” We know that individuals with more polarized views are more likely to believe falsehoods.

And of course, the emergence of high-profile political figures who prey on these tribal instincts exacerbates the situation.

Then there is the third factor — a shift to social media, which is a powerful outlet for composers of disinformation, a pervasive vector for misinformation itself and a multiplier of the other risk factors.

“Media has changed, the environment has changed, and that has a potentially big impact on our natural behavior,” said William J. Brady, a Yale University social psychologist.

“When you post things, you’re highly aware of the feedback that you get, the social feedback in terms of likes and shares,” Dr. Brady said. So when misinformation appeals to social impulses more than the truth does, it gets more attention online, which means people feel rewarded and encouraged for spreading it.

It isn’t surprising that people who get positive feedback when they post inflammatory or false statements are more likely to do so again–and again. In one particularly troubling analysis, researchers found that when a fact-check revealed that information in a post was wrong, the response of partisans wasn’t to revise their thinking or get upset with the purveyor of the lie.

Instead, it was to attack the fact checkers.

“The problem is that when we encounter opposing views in the age and context of social media, it’s not like reading them in a newspaper while sitting alone,” the sociologist Zeynep Tufekci wrote in a much-circulated MIT Technology Review article. “It’s like hearing them from the opposing team while sitting with our fellow fans in a football stadium. Online, we’re connected with our communities, and we seek approval from our like-minded peers. We bond with our team by yelling at the fans of the other one.”

In an ecosystem where that sense of identity conflict is all-consuming, she wrote, “belonging is stronger than facts.”

We’re in a world of hurt…..

 

Mandating Fairness

Whenever one of my posts addresses America’s problem with disinformation, at least one commenter will call for re-institution of the Fairness Doctrine–despite the fact that, each time, another commenter (usually a lawyer) will explain why that doctrine wouldn’t apply to social media or most other Internet sites causing contemporary mischief.

The Fairness Doctrine was contractual. Government owned the broadcast channels that were being auctioned for use by private media companies, and thus had the right to require certain undertakings from responsive bidders. In other words, in addition to the payments being tendered, bidders had to promise to operate “in the public interest,” and the public interest included an obligation to give contending voices a fair hearing.

The government couldn’t have passed a law requiring newspapers and magazines to be “fair,” and it cannot legally require fair and responsible behavior from cable channels and social media platforms, no matter how much we might wish it could.

So–in this era of QAnon and Fox News and Rush Limbaugh clones– where does that leave us?

The Brookings Institution, among others, has wrestled with the issue.

The violence of Jan. 6 made clear that the health of online communities and the spread of disinformation represents a major threat to U.S. democracy, and as the Biden administration takes office, it is time for policymakers to consider how to take a more active approach to counter disinformation and form a public-private partnership aimed at identifying and countering disinformation that poses a risk to society.

Brookings says that a non-partisan public-private effort is required because disinformation crosses platforms and transcends political boundaries. They recommend a “public trust” that would provide analysis and policy proposals intended to defend democracy against the constant stream of disinformation and the illiberal forces at work disseminating it. It would identify emerging trends and methods of sharing disinformation, and would support data-driven initiatives to improve digital media literacy.

Frankly, I found the Brookings proposal unsatisfactorily vague, but there are other, more concrete proposals for combatting online and cable propaganda. Dan Mullendore pointed to one promising tactic in a comment the other day. Fox News income isn’t–as we might suppose–dependent mostly on advertising; significant sums come from cable fees. And one reason those fees are so lucrative is that Fox gets bundled with other channels, meaning that many people pay for Fox who wouldn’t pay for it if it weren’t a package deal. A few days ago, on Twitter, a lawyer named Pam Keith pointed out that a simple regulatory change ending bundling would force Fox and other channels to compete for customers’ eyes, ears and pocketbooks.

Then there’s the current debate over Section 230 of the Communications Decency Act, with many critics advocating its repeal, and others, like the Electronic Frontier Foundation, defending it.

Section 230 says that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230). In other words, online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do. The protected intermediaries include not only regular Internet Service Providers (ISPs), but also a range of “interactive computer service providers,” including basically any online service that publishes third-party content. Though there are important exceptions for certain criminal and intellectual property-based claims, CDA 230 creates a broad protection that has allowed innovation and free speech online to flourish.

Most observers believe that an outright repeal of Section 230 would destroy social networks as we know them (the linked article explains why, as do several others), but there is a middle ground between total repeal and naive calls for millions of users to voluntarily leave platforms that fail to block hateful and/or misleading posts.

Fast Company has suggested that middle ground.

One possibility is that the current version of Section 230 could be replaced with a requirement that platforms use a more clearly defined best-efforts approach, requiring them to use the best technology and establishing some kind of industry standard they would be held to for detecting and mediating violating content, fraud, and abuse. That would be analogous to standards already in place in the area of advertising fraud….

Another option could be to limit where Section 230 protections apply. For example, it might be restricted only to content that is unmonetized. In that scenario, you would have platforms displaying ads only next to content that had been sufficiently analyzed that they could take legal responsibility for it. 

A “one size fits all” reinvention of the Fairness Doctrine isn’t going to happen. But that doesn’t mean we can’t make meaningful, legal improvements that would make a real difference online.