The Challenges Of Modern Life

The Supreme Court’s docket this year has two cases that will require the Court to confront a thorny challenge of modern life–to adapt (or not) to the novel realities of today’s communication technologies.

Given the fact that at least five of the Justices cling to the fantasy that they are living in the 1800s, I’m not holding my breath.

The cases I’m referencing are two that challenge Section 230, social media’s “safe space.”

As Time Magazine explained on February 19th,

The future of the federal law that protects online platforms from liability for content uploaded on their site is up in the air as the Supreme Court is set to hear two cases that could change the internet this week.

The first case, Gonzalez v. Google, which is set to be heard on Tuesday, argues that YouTube’s algorithm helped ISIS post videos and recruit members —making online platforms directly and secondarily liable for the 2015 Paris attacks that killed 130 people, including 23-year-old American college student Nohemi Gonzalez. Gonzalez’s parents and other deceased victims’ families are seeking damages related to the Anti-Terrorism Act.

Oral arguments for Twitter v. Taamneh—a case that makes similar arguments against Google, Twitter, and Facebook and centers around another ISIS terrorist attack that killed 29 people in Istanbul, Turkey—will be heard on Wednesday.

The cases will decide whether online platforms can be held liable for the targeted advertisements or algorithmic content spread on their platforms.

Re-read that last sentence, because it accurately reports the question the Court must address. Much of the media coverage of these cases misstates that question. These cases are not about determining whether the platforms can be held responsible for posts by the individuals who upload them. The issue is whether they can be held responsible for the algorithms that promote those posts–algorithms that the platforms themselves developed.

Section 230, which passed in 1996, is a part of the Communications Decency Act.

The law explicitly states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” meaning online platforms are not responsible for the content a user may post.

Google argues that websites like YouTube cannot be held liable as the “publisher or speaker” of the content users created, because Google does not have the capacity to screen “all third-party content for illegal or tortious material.” The company also argues that “the threat of liability could prompt sweeping restrictions on online activity.”

It’s one thing to insulate tech platforms from liability for what users post–it’s another to allow them free rein to select and/or promote certain content–which is what their algorithms do. In recognition of that distinction, in 2021, Senators Amy Klobuchar and Ben Ray Lujan introduced a bill that would remove tech companies’ immunity from lawsuits if their algorithms promoted health misinformation.
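That distinction–hosting a post versus deciding which posts to push–is easy to see in code. Below is a minimal, hypothetical sketch of an engagement-driven ranker; the field names, weights, and topics are invented for illustration, and real platform algorithms are vastly more complex:

```python
# Toy sketch of an engagement-driven ranking algorithm.
# All names and weights are hypothetical illustrations,
# not any actual platform's system.

def rank_feed(posts, user_interests):
    """Order posts by predicted engagement, boosting topics the
    user has engaged with before. The platform, not the user,
    authors this promotion logic."""
    def score(post):
        base = post["likes"] + 2 * post["shares"]   # raw engagement
        boost = 3.0 if post["topic"] in user_interests else 1.0
        return base * boost
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "topic": "cooking", "likes": 50, "shares": 5},
    {"id": 2, "topic": "conspiracy", "likes": 20, "shares": 10},
]
# A user who has engaged with conspiracy content sees that post
# first, even though the cooking post has more total engagement.
ranked = rank_feed(posts, user_interests={"conspiracy"})
```

The user wrote none of this logic; the platform did–which is precisely the argument for treating the promotion, as distinct from the post itself, as the platform’s own conduct.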

As a tech journalist wrote in a NYT opinion essay,

The law, created when the number of websites could be counted in the thousands, was designed to protect early internet companies from libel lawsuits when their users inevitably slandered one another on online bulletin boards and chat rooms. But since then, as the technology evolved to billions of websites and services that are essential to our daily lives, courts and corporations have expanded it into an all-purpose legal shield that has acted similarly to the qualified immunity doctrine that often protects police officers from liability even for violence and killing.

As a journalist who has been covering the harms inflicted by technology for decades, I have watched how tech companies wield Section 230 to protect themselves against a wide array of allegations, including facilitating deadly drug sales, sexual harassment, illegal arms sales and human trafficking — behavior that they would have likely been held liable for in an offline context….

There is a way to keep internet content freewheeling while revoking tech’s get-out-of-jail-free card: drawing a distinction between speech and conduct.

In other words, continue to offer tech platforms immunity for the defamation cases that Congress had in mind when Section 230 passed, but impose liability for illegal conduct that their own technology enables and/or promotes. (For example, the author confirmed that advertisers could easily use Facebook’s ad targeting algorithms to violate the Fair Housing Act.)

Arguably, the creation of an algorithm is an action–not the expression or communication of an opinion or idea. When that algorithm demonstrably encourages and/or facilitates illegal behavior, its creator ought to be held liable.

It’s like that TV auto ad that proclaims “this isn’t your father’s Oldsmobile.” The Internet isn’t your mother’s newspaper, either. Some significant challenges come along with the multiple benefits of modernity–how to protect free speech without encouraging the barbarians at the gate is one of them.



Speech Versus Action

A recent report on an upcoming Supreme Court case from the New Republic made me think–definitely not for the first time–about the widespread misconceptions around the First Amendment.

Most of the people who read this blog are aware of many of those misconceptions. Probably the most annoying is the most basic–it constantly amazes me (okay, irritates the heck out of me) how many Americans don’t know that the First Amendment, like the rest of the Bill of Rights, protects only against government action.

I still remember a call I got when I was with Indiana’s ACLU; the caller had applied for a position with White Castle, and had been told that his extensive tattoos were incompatible with their customer service standards. He demanded we sue White Castle for infringing his Free Speech rights. I had to explain that–had the City Council passed an ordinance against tattoos, that would have violated his First Amendment rights, but White Castle is private–and has its own First Amendment right to determine the manner of its own communication.

The case described in the linked article isn’t that clear-cut. It involves an often-contested “gray area.”

The Supreme Court will hear Counterman v. Colorado in April to decide whether prosecutors must prove that a defendant meant to threaten someone with harm, or if they can opt for the lower threshold of whether a reasonable person might interpret a defendant’s actions or statements as a threat. Where the high court ultimately comes down on this distinction could be consequential in an age when it’s easier than ever for Americans to threaten not just each other, but also election workers, FBI agents, members of Congress, and even Supreme Court justices. How far does the First Amendment go to protect them?

In my classes, I took a rather unorthodox approach to this question, and a number of similar issues. While you won’t find my distinction in legal treatises, it seemed to help students understand the purpose–and limits– of the Free Speech clause. The fundamental distinction I drew was between speech (defined as communication of a message) and action.

The distinction doesn’t rely on whether there was verbal communication.

If I tell you that this cubic zirconium ring I’m selling is really a diamond, and charge you accordingly, I have engaged in fraud–a behavior. The First Amendment won’t protect me.

If I text and telephone you every hour and call you names, that’s harassment–a behavior. The First Amendment won’t protect me.

If I burn an American flag, I am sending a message (we know it’s a message, because most Americans understand it and find it offensive). That message is protected by the First Amendment.

The problem for law enforcement arises when it is unclear whether we’re dealing with behavior–a genuine threat–or the expression of an opinion. (As lawyers like to say, it’s a “fact-sensitive” inquiry.) Social media trolling has vastly complicated this determination.

At the heart of this case is a campaign of harassment that seems all too familiar. The plaintiff, Billy Counterman, used multiple Facebook accounts to send hostile messages to an unidentified local musician in Colorado. Among the numerous messages that Counterman sent her were ones that read, especially in the context of the years-long barrage, as threats. “Fuck off permanently,” Counterman said in one of the messages. “You’re not being good for human relations,” read another. “Die. Don’t need you.” The target, who never responded to him and blocked him multiple times, ultimately contacted Colorado police, who charged Counterman for violating the state’s anti-stalking statutes.

Colorado law defines the offense to describe anyone who “repeatedly follows, approaches, contacts, places under surveillance, or makes any form of communication with another person … in a manner that would cause a reasonable person to suffer serious emotional distress and does cause that person … to suffer serious emotional distress.” Notably, under the rulings of Colorado courts, prosecutors aren’t required to prove that the defendant intended to threaten a person. They instead must only show that a reasonable person would have taken the statements as threats, which is a much easier threshold to clear at trial.

In the lower courts, the troll was handed a sentence of four years under the state’s anti-stalking statute.

This is one of those “hard cases” that–as the saying goes–sometimes make bad law. Four years seems pretty excessive for being an online asshole; on the other hand, such trolling far too frequently becomes a “heckler’s veto”–defined as behavior that allows people who disagree with a speaker’s message to shut that message down.

It remains to be seen how the Court will treat online harassment, but it sure seems like it falls on the “behavior” side of my explanatory line…


The Fox Effect

There’s clearly a lot that could be said about former President Trump’s lunch with one full-fledged Neo-Nazi and one wanna-be Nazi, and most of it has been said or written. I won’t add my two cents to the reactions, except to say that I agree with the two most common ones: Trump’s anti-Semitism is disgusting but hardly a surprise to anyone who follows the news even superficially; and the most telling element of this whole sordid story was the lack of pushback–or even comment–from most Republicans.

Far and away the best comment I’ve come across, and the impetus for this post, was an observation by the Daily Show’s Trevor Noah.

Everyone agrees that Nick Fuentes should not be having dinner with former president Donald Trump. He’s much better suited to be a host on Fox News.

The Daily Show followed up with an absolutely devastating “mash up” of speeches by Nick Fuentes, the Neo-Nazi, and various Fox News personalities, including its most reliable and prominent bigot, Tucker Carlson. You really need to click through and watch it, and then consider the effect of Fox’s poison on its (largely elderly) audience.

There is a reason President Biden has identified Fox as one of the most destructive forces in the world, and Rupert Murdoch as the most dangerous man in America.

As the linked report shows, four elements make Fox News “a uniquely damaging part of the American news landscape: its strength on the political right, the demonstrated way in which it shapes its viewers’ beliefs, its grip on Republican power and the views of its leadership.”

A national poll conducted by the Washington Post and the University of Maryland looked at where people with varying political ideologies get their news about politics and government. Researchers found that Democrats and Democrat-leaning independents consulted a reasonably wide variety of essentially mainstream sources. At least three out of ten of that group identified CNN, CBS, NBC, MSNBC, NPR, the Times, and/or The Post as their main sources of news.

Among most Republicans, though, only two sources were identified: local television and Fox News.

Cable-news viewership skews toward demographics that are more Republican in the first place, and CNN and MSNBC are fighting for a similar base of viewers — viewers who also partake of news from other outlets. Fox News’s strength with 43 percent of the country (the percentage that is Republican or Republican-leaning independent, according to Gallup) gives it a distinct advantage in ratings.

Most Americans don’t care about ratings, of course. So it’s important to put this in a more useful context: Fox News has a larger audience than its competitors — an audience that is largely politically homogeneous. And new research reinforces that this homogeneity is not solely a function of Republicans choosing Fox News but of the network filtering what it shows its viewers.

In other words, Fox chooses what it presents as “news”–and what it omits.

Another recent study paid a group of regular Fox viewers to watch CNN, then compared how those viewers understood news events with how a control group of Fox News viewers understood them. The study found “large effects on attitudes and policy preferences about COVID-19” and in “evaluations of Donald Trump and Republican candidates and elected officials.”

Participants in the experiment even grew to recognize the way in which Fox News presents reality: “group participants became more likely to agree that if Donald Trump made a mistake, Fox News would not cover it — i.e., that Fox News engages in partisan coverage filtering.”

Researchers also found that much of what Fox News did show was exaggerated or untrue.

There is a growing body of research confirming that Fox is a propaganda outlet serving the GOP, and not a real news organization–a conclusion brilliantly supported in the Daily Show mash-up.

To belabor the point: where people get their news matters–which explains the considerable concern generated by Elon Musk’s acquisition of Twitter. In pursuit of his profound misunderstanding of the First Amendment’s Free Speech clause, Musk has opened the Twitter floodgates–the frequency of racist tweets and hate speech has grown significantly.

Twitter has thus joined Fox in normalizing bigotry and incivility. Those of us who were already worried that Twitter was shortening attention spans and increasing Americans’ tendency to substitute bumper-sticker memes for thoughtful debate, now see the platform exacerbating racial and religious divisions and reinforcing pernicious stereotypes. 

The social media admonition not to feed the trolls seems appropriate here. In a very real sense, both Fox News and Twitter are America’s trolls. Somehow, we need to figure out how to keep people from feeding them.

Given the undeniable lure of confirmation bias, it won’t be easy.


Ron “Contempt For The Constitution” DeSantis

Yesterday’s blog post noted that Florida man Ron DeSantis is a favorite of the New Right. A recent judicial opinion, striking down one of his many outrageous attacks on the Constitutional rights of Florida citizens explains why.

A federal judge on Thursday halted a key piece of the “Stop-WOKE” Act touted by Republican Gov. Ron DeSantis, blocking state officials from enforcing what he called a “positively dystopian” policy restricting how lessons on race and gender can be taught in colleges and universities.

The 138-page order from Chief U.S. District Judge Mark Walker is being heralded as a major win for campus free speech by the groups who challenged the state.

Among other “dystopian” provisions of DeSantis’ anti-woke law were rules about what university professors could–and could not–say in the classroom. As the Judge noted in his opinion, the law gave the state “unfettered authority to muzzle its professors in the name of ‘freedom.'”

Florida legislators passed DeSantis’ “Individual Freedom Act” earlier this year (a label reminiscent of George W. Bush’s anti-environmental “Clear Skies” Act). The law prohibits schools and private companies from

leveling guilt or blame to students and employees based on race or sex, takes aim at lessons over issues like “white privilege” by creating new protections for students and workers, including that a person should not be instructed to “feel guilt, anguish, or any other form of psychological distress” due to their race, color, sex or national origin.

The judge ruled that such policies violate both First Amendment free speech protections and 14th Amendment due-process rights on college campuses.

“The law officially bans professors from expressing disfavored viewpoints in university classrooms while permitting unfettered expression of the opposite viewpoints,” wrote Walker. “Defendants argue that, under this Act, professors enjoy ‘academic freedom’ so long as they express only those viewpoints of which the State approves. This is positively dystopian.”

This particular lawsuit challenged the application of the anti-Woke law to colleges and universities; other pending challenges assert that the law is illegal and unconstitutional when applied to K-12 schools and to the workplace.

In a column discussing the law and the ruling, Jennifer Rubin noted,

The law, for example, bars discussion of the concept that a person “by virtue of his or her race, color, national origin, or sex should be discriminated against or receive adverse treatment to achieve diversity, equity, or inclusion.” During oral arguments, when asked if this would bar professors from supporting affirmative action in classroom settings, attorneys for the state government answered, “Your Honor, yes.”

Walker cited that admission, finding:

Thus, Defendants assert the idea of affirmative action is so “repugnant” that instructors can no longer express approval of affirmative action as an idea worthy of merit during class instruction. … What does this mean in practical terms? Assuming the University of Florida Levin College of Law decided to invite Supreme Court Justice Sonia Sotomayor to speak to a class of law students, she would be unable to offer this poignant reflection about her own lived experience, because it endorses affirmative action.

The law so blatantly violates the concept of free speech that one wonders if remedial constitutional education should be a requirement for Florida officeholders.

No wonder the so-called intellectuals of the New Right see DeSantis as one of their own. He has consistently used his position and the power of the state to suppress the expression of views he dislikes. Rubin reminds readers of DeSantis’ “don’t say gay” law, his statute banning “critical race theory” in schools and his attempt to fire an elected county prosecutor who criticized his abortion policies. To which I would add his attacks on voting rights and his (successful) gerrymandering efforts.

DeSantis has also regularly flexed his power as governor: excluding media from events, taking public proceedings behind closed doors (including the selection of the University of Florida’s president) and exacting revenge on supposedly woke corporations such as Disney.

DeSantis’s contempt for dissent and his crackdown on critics should not be discounted. This is the profile of a constitutional ignoramus, a bully and a strongman. Voters should be forewarned.

DeSantis, Trump and the New Right sure don’t look anything like the libertarian, limited-government GOP I once knew…The only part of Rubin’s critique with which I disagree is her labeling of DeSantis as a “constitutional ignoramus.” It’s much worse than that.

Unlike Trump, who is an ignoramus, DeSantis knows better. He just doesn’t care.


Is Design Censorship?

We live in a world where seemingly settled issues are being reframed. A recent, fascinating discussion on the Persuasion podcast focused on the role of social media in spreading both misinformation and what Renee DiResta, the expert being interviewed, labeled “rumors.”

As she explained, using the term “misinformation” (a use to which I plead guilty) isn’t a particularly useful way of framing the problem we face, because so many of the things that raise people’s hackles aren’t statements of fact; they aren’t falsifiable. And even when they are, even when what was posted or asserted was demonstrably untrue, and is labeled untrue, a lot of people simply won’t believe it is false. As she says, “if you’re in Tribe A, you distrust the media of Tribe B and vice versa. And so even the attempt to correct the misinformation, when it is misinformation, is read with a particular kind of partisan valence: ‘Is this coming from somebody in my tribe, or is this more manipulation from the bad guys?’”

If we aren’t dealing simply in factual inaccuracies or even outright lies, how should we describe the problem?

One of the more useful frameworks for what is happening today is rumors: people are spreading information that can maybe never be verified or falsified, within communities of people who really care about an issue. They spread it amongst themselves to inform their friends and neighbors. There is a kind of altruistic motivation. The platforms find their identity for them based on statistical similarity to other users. Once the network is assembled and people are put into these groups or these follower relationships, the way that information is curated is that when one person sees it, they hit that share button—it’s a rumor, they’re interested, and they want to spread it to the rest of their community. Facts are not really part of the process here. It’s like identity engagement: “this is a thing that I care about, that you should care about, too.” This is rewarmed media theory from the 1960s: the structure of the system perpetuates how the information is going to spread. Social media is just a different type of trajectory, where the audience has real power as participants. That’s something that is fundamentally different from all prior media environments. Not only can you share the rumor, but millions of people can see in aggregate the sharing of that rumor.

Her explanation of how social media algorithms work is worth quoting at length:

When you pull up your Twitter feed, there’s “Trends” on the right hand side, and they’re personalized for you. And sometimes there’s a very, very small number of participants in the trend, maybe just a few hundred tweets. But it’s a nudge, it says you are going to be interested in this topic. It’s bait: go click this thing that you have engaged with before that you are probably going to be interested in, and then you will see all of the other people’s tweets about it. Then you engage. And in the act of engagement, you are perpetuating that trend.

Early on, I was paying attention to the anti-vaccine movement. I was a new mom, and I was really interested in what people were saying about this on Facebook. I was kind of horrified by it, to be totally candid. I started following some anti-vaccine groups, and then Facebook began to show me Pizzagate, and then QAnon. I had never typed in Pizzagate, and I had never typed in QAnon. But through the power of collaborative filtering, it understood that if you were an active participant in a conspiracy theory community that fundamentally distrusts the government, you are probably similar to these other people who maybe have a different flavor of the conspiracy. And the recommendation engine didn’t understand what it was doing. It was not a conscious effort. It just said: here’s an active community, you have some similarities, you should go join that active community. Let’s give you this nudge. And that is how a lot of these networks were assembled in the early and mid-2010s.
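The “collaborative filtering” DiResta describes can be sketched in a few lines. This is a toy model–the group names, the Jaccard similarity measure, and the threshold are all illustrative assumptions, not Facebook’s actual system:

```python
# Toy sketch of collaborative filtering as DiResta describes it:
# recommend a group because statistically similar users joined it.
# Group names, similarity measure, and threshold are illustrative.

def jaccard(a, b):
    """Overlap between two users' group memberships (0..1)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend_groups(user_groups, other_users, threshold=0.3):
    """Suggest groups joined by users whose memberships overlap
    with this user's. The engine never inspects what the groups
    are about -- only membership similarity."""
    suggestions = set()
    for groups in other_users:
        if jaccard(user_groups, groups) >= threshold:
            suggestions |= groups - user_groups
    return suggestions

me = {"anti-vaccine"}
others = [
    {"anti-vaccine", "pizzagate"},
    {"pizzagate", "qanon"},
    {"gardening"},
]
# Joining one conspiracy-adjacent group surfaces the neighbors'
# other groups -- communities this user never searched for.
recs = recommend_groups(me, others)
```

Note the feedback loop: once the user accepts the “pizzagate” suggestion, their overlap with the second user rises above the threshold and “qanon” surfaces next–without the engine ever examining what any of these groups are about. That is the “it was not a conscious effort” dynamic DiResta describes.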

Then DiResta posed what we used to call the “sixty-four thousand dollar question”: are changes to the design of an algorithm censorship?

Implicit in that question, of course, is another: what about the original design of an algorithm? Those mechanisms have been designed to respond to certain inputs in certain ways, to “nudge” the user to visit X rather than Y. Is that censorship? And if the answer to either of those questions is “yes,” is the First Amendment implicated?

To say that we are in uncharted waters is an understatement.
