The Challenges Of Modern Life

The Supreme Court’s docket this year includes two cases that will require the Court to confront a thorny challenge of modern life–to adapt (or not) to the novel realities of today’s communication technologies.

Given that at least five of the Justices cling to the fantasy that they are living in the 1800s, I’m not holding my breath.

The cases I’m referencing are the two challenging Section 230, social media’s “safe space.”

As Time Magazine explained on February 19th,

The future of the federal law that protects online platforms from liability for content uploaded on their site is up in the air as the Supreme Court is set to hear two cases that could change the internet this week.

The first case, Gonzalez v. Google, which is set to be heard on Tuesday, argues that YouTube’s algorithm helped ISIS post videos and recruit members—making online platforms directly and secondarily liable for the 2015 Paris attacks that killed 130 people, including 23-year-old American college student Nohemi Gonzalez. Gonzalez’s parents and other deceased victims’ families are seeking damages related to the Anti-Terrorism Act.

Oral arguments for Twitter v. Taamneh—a case that makes similar arguments against Google, Twitter, and Facebook, and centers on another ISIS terrorist attack, one that killed 39 people in Istanbul, Turkey—will be heard on Wednesday.

The cases will decide whether online platforms can be held liable for the targeted advertisements or algorithmic content spread on their platforms.

Re-read that last sentence, because it accurately reports the question the Court must address. Much of the media coverage of these cases misstates that question. These cases are not about determining whether the platforms can be held responsible for posts by the individuals who upload them. The issue is whether they can be held responsible for the algorithms that promote those posts–algorithms that the platforms themselves developed.
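To make that distinction concrete, here is a minimal sketch of what a recommendation algorithm does–written in Python, with invented names and weights, and not resembling any platform’s actual code. The posts come from users; the ranking logic that decides which posts get amplified is authored entirely by the platform.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    watch_time: float  # hours viewers have spent on this post
    shares: int

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts by an engagement score that the platform alone defines."""
    def engagement(post: Post) -> float:
        # The weights are the platform's editorial choice: favoring shares
        # over watch time determines which posts get pushed to more users.
        return 0.4 * post.watch_time + 0.6 * post.shares
    return sorted(posts, key=engagement, reverse=True)

feed = rank_feed([
    Post("user_a", "cooking video", watch_time=3.0, shares=4),
    Post("user_b", "extremist recruiting clip", watch_time=1.5, shares=40),
])
# feed[0] is the recruiting clip. Users supplied both posts, but the
# platform's own engagement() function chose which one to promote.
```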

Section 230, which passed in 1996, is part of the Communications Decency Act.

The law explicitly states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” meaning online platforms are not responsible for the content a user may post.

Google argues that websites like YouTube cannot be held liable as the “publisher or speaker” of the content users created, because Google does not have the capacity to screen “all third-party content for illegal or tortious material.” The company also argues that “the threat of liability could prompt sweeping restrictions on online activity.”

It’s one thing to insulate tech platforms from liability for what users post–it’s another to allow them free rein to select and/or promote certain content–which is what their algorithms do. In recognition of that distinction, in 2021, Senators Amy Klobuchar and Ben Ray Lujan introduced a bill that would remove tech companies’ immunity from lawsuits if their algorithms promoted health misinformation.

As a tech journalist wrote in a New York Times opinion essay,

The law, created when the number of websites could be counted in the thousands, was designed to protect early internet companies from libel lawsuits when their users inevitably slandered one another on online bulletin boards and chat rooms. But since then, as the technology evolved to billions of websites and services that are essential to our daily lives, courts and corporations have expanded it into an all-purpose legal shield that has acted similarly to the qualified immunity doctrine that often protects police officers from liability even for violence and killing.

As a journalist who has been covering the harms inflicted by technology for decades, I have watched how tech companies wield Section 230 to protect themselves against a wide array of allegations, including facilitating deadly drug sales, sexual harassment, illegal arms sales and human trafficking — behavior that they would have likely been held liable for in an offline context….

There is a way to keep internet content freewheeling while revoking tech’s get-out-of-jail-free card: drawing a distinction between speech and conduct.

In other words, continue to offer tech platforms immunity for the defamation cases that Congress had in mind when Section 230 passed, but impose liability for illegal conduct that their own technology enables and/or promotes. (For example, the author confirmed that advertisers could easily use Facebook’s ad targeting algorithms to violate the Fair Housing Act.)
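To illustrate that speech/conduct line with the author’s Fair Housing example, here is a purely hypothetical sketch–the field names and functions are invented for illustration and are not Facebook’s real ad-targeting tools. The ad copy is the advertiser’s speech; the audience-filtering machinery the platform supplies is conduct.

```python
from dataclasses import dataclass

@dataclass
class AdAudience:
    interests: list[str]        # whom the ad is shown to
    excluded_groups: list[str]  # whom the targeting tool screens out

def place_ad(ad_text: str, audience: AdAudience) -> dict:
    # The ad text is the advertiser's speech; assembling an audience that
    # excludes a protected class is an action the platform's system performs.
    return {"ad": ad_text,
            "show_to": audience.interests,
            "hide_from": audience.excluded_groups}

listing = place_ad(
    "Spacious two-bedroom apartment for rent",
    # Screening families with children out of a housing ad is exactly the
    # conduct the Fair Housing Act prohibits offline.
    AdAudience(interests=["apartment hunting"],
               excluded_groups=["families with children"]),
)
```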

Arguably, the creation of an algorithm is an action–not the expression or communication of an opinion or idea. When that algorithm demonstrably encourages and/or facilitates illegal behavior, its creator ought to be held liable.

It’s like that TV auto ad that proclaims “this isn’t your father’s Oldsmobile.” The Internet isn’t your mother’s newspaper, either. Some significant challenges come along with the multiple benefits of modernity–how to protect free speech without encouraging the barbarians at the gate is one of them.
