The Challenges Of Modern Life

The Supreme Court’s docket this year has two cases that will require the Court to confront a thorny challenge of modern life–to adapt (or not) to the novel realities of today’s communication technologies.

Given that at least five of the Justices cling to the fantasy that they are living in the 1800s, I’m not holding my breath.

The cases I’m referencing are two that challenge Section 230, social media’s “safe space.”

As Time Magazine explained on February 19th,

The future of the federal law that protects online platforms from liability for content uploaded on their site is up in the air as the Supreme Court is set to hear two cases that could change the internet this week.

The first case, Gonzalez v. Google, which is set to be heard on Tuesday, argues that YouTube’s algorithm helped ISIS post videos and recruit members —making online platforms directly and secondarily liable for the 2015 Paris attacks that killed 130 people, including 23-year-old American college student Nohemi Gonzalez. Gonzalez’s parents and other deceased victims’ families are seeking damages related to the Anti-Terrorism Act.

Oral arguments for Twitter v. Taamneh—a case that makes similar arguments against Google, Twitter, and Facebook and centers on another ISIS terrorist attack, which killed 29 people in Istanbul, Turkey—will be heard on Wednesday.

The cases will decide whether online platforms can be held liable for the targeted advertisements or algorithmic content spread on their platforms.

Re-read that last sentence, because it accurately reports the question the Court must address. Much of the media coverage of these cases misstates that question. These cases are not about determining whether the platforms can be held responsible for posts by the individuals who upload them. The issue is whether they can be held responsible for the algorithms that promote those posts–algorithms that the platforms themselves developed.

Section 230, which passed in 1996, is a part of the Communications Decency Act.

The law explicitly states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” meaning online platforms are not responsible for the content a user may post.

Google argues that websites like YouTube cannot be held liable as the “publisher or speaker” of the content users created, because Google does not have the capacity to screen “all third-party content for illegal or tortious material.” The company also argues that “the threat of liability could prompt sweeping restrictions on online activity.”

It’s one thing to insulate tech platforms from liability for what users post–it’s another to allow them free rein to select and/or promote certain content–which is what their algorithms do. In recognition of that distinction, in 2021, Senators Amy Klobuchar and Ben Ray Lujan introduced a bill that would remove tech companies’ immunity from lawsuits if their algorithms promoted health misinformation.

As a tech journalist wrote in a NYT opinion essay,

The law, created when the number of websites could be counted in the thousands, was designed to protect early internet companies from libel lawsuits when their users inevitably slandered one another on online bulletin boards and chat rooms. But since then, as the technology evolved to billions of websites and services that are essential to our daily lives, courts and corporations have expanded it into an all-purpose legal shield that has acted similarly to the qualified immunity doctrine that often protects police officers from liability even for violence and killing.

As a journalist who has been covering the harms inflicted by technology for decades, I have watched how tech companies wield Section 230 to protect themselves against a wide array of allegations, including facilitating deadly drug sales, sexual harassment, illegal arms sales and human trafficking — behavior that they would have likely been held liable for in an offline context….

There is a way to keep internet content freewheeling while revoking tech’s get-out-of-jail-free card: drawing a distinction between speech and conduct.

In other words, continue to offer tech platforms immunity for the defamation cases that Congress had in mind when Section 230 passed, but impose liability for illegal conduct that their own technology enables and/or promotes. (For example, the author confirmed that advertisers could easily use Facebook’s ad targeting algorithms to violate the Fair Housing Act.)

Arguably, the creation of an algorithm is an action–not the expression or communication of an opinion or idea. When that algorithm demonstrably encourages and/or facilitates illegal behavior, its creator ought to be held liable.

It’s like that TV auto ad that proclaims “this isn’t your father’s Oldsmobile.” The Internet isn’t your mother’s newspaper, either. Some significant challenges come along with the multiple benefits of modernity–how to protect free speech without encouraging the barbarians at the gate is one of them.

 


Who’s Talking?

I finally got around to reading an article about Facebook by Professor Scott Galloway, sent to me by a reader. In it, Galloway was considering the various “fixes” that have been suggested in the wake of continuing revelations about the degree to which Facebook and other social media platforms have facilitated America’s divisions.

There have been a number of similar articles, but what Galloway did better than most was explain the origin of Section 230 of the Communications Decency Act in language we non-techie people can understand.

In most industries, the most robust regulator is not a government agency, but a plaintiff’s attorney. If your factory dumps toxic chemicals in the river, you get sued. If the tires you make explode at highway speed, you get sued. Yes, it’s inefficient, but ultimately the threat of lawsuits reduces regulation; it’s a cop that covers a broad beat. Liability encourages businesses to make risk/reward calculations in ways that one-size-fits-all regulations don’t. It creates an algebra of deterrence.

Social media, however, is largely immunized from such suits. A 1996 law, known as “Section 230,” erects a fence around content that is online and provided by someone else. It means I’m not liable for the content of comments on the No Mercy website, Yelp isn’t liable for the content of its user reviews, and Facebook, well, Facebook can pretty much do whatever it wants.

There are increasing calls to repeal or reform 230. It’s instructive to understand this law, and why it remains valuable. When Congress passed it — again, in 1996 — it reasoned online companies were like bookstores or old-fashioned bulletin boards. They were mere distribution channels for other people’s content and shouldn’t be liable for it.

Seems reasonable. So–why the calls for its repeal? Galloway points to the multiple ways in which the information and communication environments have changed since 1996.

In 1996, 16% of Americans had access to the Internet, via a computer tethered to a phone cord. There was no Wi-Fi. No Google, Facebook, Twitter, Reddit, or YouTube — not even Friendster or MySpace had been birthed. Amazon sold only books. Section 230 was a fence protecting a garden plot of green shoots and untilled soil.

Today, as he points out, some 3 billion individuals use Facebook, and fifty-seven percent of the world population uses some sort of social media. Those are truly astonishing numbers.

I have previously posted about externalities–the ability of manufacturers and other providers to compete more successfully in the market by “offloading” certain of their costs to society at large. When it comes to social media, Galloway tells us that its externalities have grown as fast as the platforms’ revenues–and thanks to Section 230, society has borne the costs.

In sum, behind the law’s liability shield, tech platforms have morphed from Model UN members to Syria and North Korea. Only these Hermit Kingdoms have more warheads and submarines than all other nations combined.

As he points out, today’s social media has the resources to play by the same rules as other powerful media. Bottom line: We need a new fence. We need to redraw Section 230 so that it protects society from the harms of social media companies without destroying their usefulness or economic vitality.

What we have learned since 1996 is that Facebook and other social media companies are not neutral platforms. They aren’t bulletin boards. They are rigorously managed–personalized for each user, actively boosting or suppressing certain content. Galloway calls that “algorithmic amplification,” and it didn’t exist in 1996.

There are evidently several bills pending in Congress that purport to address the problem by targeting the ways in which social media platforms weaponize these algorithms. Any such approach will have to avoid raising credible concerns about chilling free expression.

Reading the essay gave me some hope that we can deal–eventually–with the social damage being inflicted by social media. It didn’t, however, suggest a way to counter the propaganda spewed daily by Fox News or Sinclair or their clones…


Section 230

These are hard times for free speech advocates. The Internet–with its capacity for mass distribution of lies, misinformation, bigotry and incitement to violence–cries out for reform, but it is not apparent (certainly not to me) what sort of reforms might curb the dangers without also stifling free expression.

One approach is focused on a law that is older than Google: Section 230 of the Communications Decency Act. 

What is Section 230? Is it really broken? Can it be fixed without inadvertently doing more damage? 

The law is just 26 words that allow online platforms to make rules about what people can or can’t post without being held legally responsible for the content. (There are some exceptions, but not many.) As a recent newsletter on technology put it (sorry, for some reason the link doesn’t work),

If I accuse you of murder on Facebook, you might be able to sue me, but you can’t sue Facebook. If you buy a defective toy from a merchant on Amazon, you might be able to take the seller to court, but not Amazon. (There is some legal debate about this, but you get the gist.)

The law created the conditions for Facebook, Yelp and Airbnb to give people a voice without being sued out of existence. But now Republicans and Democrats are asking whether the law gives tech companies either too much power or too little responsibility for what happens under their watch.


Republicans mostly worry that Section 230 gives internet companies too much power to suppress online debate and discussion, while Democrats mostly worry that it lets those companies ignore or even enable dangerous incitements and/or illegal transactions. 

The fight over Section 230 is really a fight over the lack of control exercised by Internet giants like Facebook and Twitter. In far too many situations, the law allows people to lie online without consequence–let’s face it, that high school kid who is spreading lewd rumors about a girl who turned him down for a date isn’t likely to be sued, no matter how damaging, reprehensible and untrue his posts may be. The recent defamation suits brought by the voting machine manufacturers were salutary and satisfying, but most people harmed by the bigotry and disinformation online are not in a position to pursue such remedies.

The question being debated among techies and lawyers is whether Section 230 is too protective; whether it reduces incentives for platforms like Facebook and Twitter to make and enforce stronger measures that would be more effective in curtailing obviously harmful rhetoric and activities. 

Several proposed “fixes” are currently being considered. The Times article described them.


Fix-it Plan 1: Raise the bar. Some lawmakers want online companies to meet certain conditions before they get the legal protections of Section 230.

One example: A congressional proposal would require internet companies to report to law enforcement when they believe people might be plotting violent crimes or drug offenses. If the companies don’t do so, they might lose the legal protections of Section 230 and the floodgates could open to lawsuits.

Facebook this week backed a similar idea, which proposed that it and other big online companies would have to have systems in place for identifying and removing potentially illegal material.

Another proposed bill would require Facebook, Google and others to prove that they hadn’t exhibited political bias in removing a post. Some Republicans say that Section 230 requires websites to be politically neutral. That’s not true.

Fix-it Plan 2: Create more exceptions. One proposal would restrict internet companies from using Section 230 as a defense in legal cases involving activity like civil rights violations, harassment and wrongful death. Another proposes letting people sue internet companies if child sexual abuse imagery is spread on their sites.

Also in this category are legal questions about whether Section 230 applies to the involvement of an internet company’s own computer systems. When Facebook’s algorithms helped circulate propaganda from Hamas, as David detailed in an article, some legal experts and lawmakers said that Section 230 legal protections should not have applied and that the company should have been held complicit in terrorist acts.


Slate has an article describing all of the proposed changes to Section 230.

I don’t have a firm enough grasp of the issues involved–let alone the technology needed to accomplish some of the proposed changes–to have a favored “fix” to Section 230.

I do think that this debate foreshadows others that will arise in a world where massive international companies–online and not–in many cases wield more power than governments. Constraining these powerful entities will require new and very creative approaches.


It Isn’t Just Media Literacy

Americans today face some unprecedented challenges–and as I have repeatedly noted, our information environment makes those challenges far more difficult to meet.

The Internet, which has brought us undeniable benefits and conveniences, also allows us to occupy “filter bubbles”—to inhabit different realities. One result has been a dramatic loss of trust, as even people of good will, inundated with misinformation, spin, and propaganda, don’t know what to believe, or how to determine which sources are credible.

Fact-checking sites are helpful, but they only help those who seek them out. The average American scrolling through her Facebook feed during a lunch break is unlikely to stop and check the veracity of most of what her friends have posted.

There is general agreement that Americans need to develop media literacy and policy tools to discourage the transmittal of propaganda. But before we can teach media literacy in our schools or consider policy interventions to address propaganda, we need to consider what media literacy requires, and what the First Amendment forbids.

Think about that fictional person scrolling through her Facebook or Twitter feed. She comes across a post berating her Congressman for failing to block the zoning of a liquor store in her neighborhood. If our person is civically literate—if she understands federalism and separation of powers– she knows that her Congressman has no authority in such matters, and that the argument is bogus.

In other words, basic knowledge of how government works is a critical component of media literacy.

It isn’t just civic knowledge, of course. People who lack a basic understanding of the difference between a scientific theory and the way we use the term “theory” in casual conversation are much more likely to dismiss evolution and climate change as “just theories,” and to be taken in by efforts to discredit both.

To be blunt about it, people fortified with basic civic and scientific knowledge are far more likely to recognize disinformation when they encounter it. That knowledge is just as important as information on how to detect “deep fakes” and similar counterfeits.

There are also policy steps we can take to diminish the power of propaganda without doing violence to the First Amendment. The Brookings Institution has suggested establishment of a “public trust” to provide analysis and generate policy proposals that would defend democracy “against the constant stream of disinformation and the illiberal forces at work disseminating it.”

In too many of the discussions of social media and media literacy, we overlook the fact that disinformation isn’t encountered only online. Cable news has long been a culprit. (One study found that Americans who got their news exclusively from Fox knew less about current events than people who didn’t follow news at all.)  Any effort to reduce the flow of propaganda must include measures aimed at cable television as well as online media.

Many proposals that are aimed at online disinformation address the social media protections offered by Section 230 of the Communications Decency Act.  I reviewed them here.

Bottom line: we can walk and chew gum at the same time.

If and when we get serious about media literacy, we need to do two things. We need to ensure that America’s classrooms have the resources—curricular and financial—to teach civic, scientific and media literacy. (Critical thinking and logic would also be very helpful…) And policymakers must devise regulations that will deter propaganda without eviscerating the First Amendment. Such regulations are unlikely to totally erase the problem, but well-considered tweaks can certainly reduce it.


Mandating Fairness

Whenever one of my posts addresses America’s problem with disinformation, at least one commenter will call for re-institution of the Fairness Doctrine–despite the fact that, each time, another commenter (usually a lawyer) will explain why that doctrine wouldn’t apply to social media or most other Internet sites causing contemporary mischief.

The Fairness Doctrine was contractual. Government owned the broadcast channels that were being auctioned for use by private media companies, and thus had the right to require certain undertakings from responsive bidders. In other words, in addition to the payments being tendered, bidders had to promise to operate “in the public interest,” and the public interest included an obligation to give contending voices a fair hearing.

The government couldn’t have passed a law requiring newspapers and magazines to be “fair,” and it cannot legally require fair and responsible behavior from cable channels and social media platforms, no matter how much we might wish it could.

So–in this era of QAnon and Fox News and Rush Limbaugh clones– where does that leave us?

The Brookings Institution, among others, has wrestled with the issue.

The violence of Jan. 6 made clear that the health of online communities and the spread of disinformation represents a major threat to U.S. democracy, and as the Biden administration takes office, it is time for policymakers to consider how to take a more active approach to counter disinformation and form a public-private partnership aimed at identifying and countering disinformation that poses a risk to society.

Brookings says that a non-partisan public-private effort is required because disinformation crosses platforms and transcends political boundaries. They recommend a “public trust” that would provide analysis and policy proposals intended to defend democracy against the constant stream of disinformation and the illiberal forces at work disseminating it. It would identify emerging trends and methods of sharing disinformation, and would support data-driven initiatives to improve digital media literacy.

Frankly, I found the Brookings proposal unsatisfactorily vague, but there are other, more concrete proposals for combatting online and cable propaganda. Dan Mullendore pointed to one promising tactic in a comment the other day. Fox News’ income isn’t–as we might suppose–dependent mostly on advertising; significant sums come from cable fees. And one reason those fees are so lucrative is that Fox gets bundled with other channels, meaning that many people pay for Fox who wouldn’t if it weren’t a package deal. A few days ago, on Twitter, a lawyer named Pam Keith pointed out that a simple regulatory change ending bundling would force Fox and other channels to compete for customers’ eyes, ears and pocketbooks.

Then there’s the current debate over Section 230 of the Communications Decency Act, with many critics advocating its repeal, and others, like the Electronic Frontier Foundation, defending it.

Section 230 says that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230). In other words, online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do. The protected intermediaries include not only regular Internet Service Providers (ISPs), but also a range of “interactive computer service providers,” including basically any online service that publishes third-party content. Though there are important exceptions for certain criminal and intellectual property-based claims, CDA 230 creates a broad protection that has allowed innovation and free speech online to flourish.

Most observers believe that an outright repeal of Section 230 would destroy social networks as we know them (the linked article explains why, as do several others), but there is a middle ground between total repeal and naive calls for millions of users to voluntarily leave platforms that fail to block hateful and/or misleading posts.

Fast Company has suggested that middle ground.

One possibility is that the current version of Section 230 could be replaced with a requirement that platforms use a more clearly defined best-efforts approach, requiring them to use the best technology and establishing some kind of industry standard they would be held to for detecting and mediating violating content, fraud, and abuse. That would be analogous to standards already in place in the area of advertising fraud….

Another option could be to limit where Section 230 protections apply. For example, it might be restricted only to content that is unmonetized. In that scenario, you would have platforms displaying ads only next to content that had been sufficiently analyzed that they could take legal responsibility for it. 

A “one size fits all” reinvention of the Fairness Doctrine isn’t going to happen. But that doesn’t mean we can’t make meaningful, legal improvements that would make a real difference online.
