
Who’s Talking?

I finally got around to reading an article about Facebook by Professor Scott Galloway, sent to me by a reader. In it, Galloway considered the various “fixes” that have been suggested in the wake of continuing revelations about the degree to which Facebook and other social media platforms have facilitated America’s divisions.

There have been a number of similar articles, but what Galloway did better than most was explain the origin of Section 230 of the Communications Decency Act in language we non-techie people can understand.

In most industries, the most robust regulator is not a government agency, but a plaintiff’s attorney. If your factory dumps toxic chemicals in the river, you get sued. If the tires you make explode at highway speed, you get sued. Yes, it’s inefficient, but ultimately the threat of lawsuits reduces the need for regulation; it’s a cop that covers a broad beat. Liability encourages businesses to make risk/reward calculations in ways that one-size-fits-all regulations don’t. It creates an algebra of deterrence.

Social media, however, is largely immunized from such suits. A 1996 law, known as “Section 230,” erects a fence around content that is online and provided by someone else. It means I’m not liable for the content of comments on the No Mercy website, Yelp isn’t liable for the content of its user reviews, and Facebook, well, Facebook can pretty much do whatever it wants.

There are increasing calls to repeal or reform 230. It’s instructive to understand this law, and why it remains valuable. When Congress passed it — again, in 1996 — it reasoned online companies were like bookstores or old-fashioned bulletin boards. They were mere distribution channels for other people’s content and shouldn’t be liable for it.

Seems reasonable. So–why the calls for its repeal? Galloway points to the multiple ways in which the information and communication environments have changed since 1996.

In 1996, 16% of Americans had access to the Internet, via a computer tethered to a phone cord. There was no Wi-Fi. No Google, Facebook, Twitter, Reddit, or YouTube — not even Friendster or MySpace had been birthed. Amazon sold only books. Section 230 was a fence protecting a garden plot of green shoots and untilled soil.

Today, as he points out, some 3 billion individuals use Facebook, and fifty-seven percent of the world population uses some sort of social media. Those are truly astonishing numbers.

I have previously posted about externalities–the way manufacturers and other providers compete more successfully in the market by “offloading” certain of their costs onto society at large. When it comes to social media, Galloway tells us that its externalities have grown as fast as the platforms’ revenues–and thanks to Section 230, society has borne the costs.

In sum, behind the law’s liability shield, tech platforms have morphed from Model UN members to Syria and North Korea. Only these Hermit Kingdoms have more warheads and submarines than all other nations combined.

As he points out, today’s social media has the resources to play by the same rules as other powerful media. Bottom line: We need a new fence. We need to redraw Section 230 so that it protects society from the harms of social media companies without destroying their usefulness or economic vitality.

What we have learned since 1996 is that Facebook and other social media companies are not neutral platforms. They aren’t bulletin boards. They are rigorously managed–personalized for each user, and actively boosting or suppressing certain content. Galloway calls that “algorithmic amplification,” and it didn’t exist in 1996.
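To make the concept concrete, here is a minimal, purely hypothetical sketch of engagement-based ranking. The feature names and weights are my own inventions for illustration; nothing here reflects any platform’s actual code:

```python
# Hypothetical illustration of "algorithmic amplification": a feed ranker
# that orders posts by predicted engagement rather than by recency or accuracy.
# All names and weights are invented for this sketch.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float    # model's estimate of click probability
    predicted_comments: float  # comment activity correlates with outrage
    predicted_shares: float

def engagement_score(post: Post) -> float:
    # Comments and shares are weighted most heavily because they keep
    # users on the platform longest -- regardless of whether a post is
    # true, civil, or good for the user.
    return (1.0 * post.predicted_clicks
            + 3.0 * post.predicted_comments
            + 2.0 * post.predicted_shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # High-engagement posts rise to the top of the feed;
    # low-engagement posts effectively disappear.
    return sorted(posts, key=engagement_score, reverse=True)
```

The point of the sketch is that “boosting” and “suppressing” need not be deliberate editorial choices about particular posts; they fall out automatically once the sort key is predicted engagement.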

There are evidently several bills pending in Congress that purport to address the problem by targeting the ways in which social media platforms weaponize these algorithms. Because such approaches aim at amplification rather than at the speech itself, they seem less likely to raise credible concerns about chilling free expression.

Reading the essay gave me some hope that we can deal–eventually–with the social damage being inflicted by social media. It didn’t, however, suggest a way to counter the propaganda spewed daily by Fox News or Sinclair or their clones…

Free Speech And Online Propaganda

The recent revelations about Facebook have crystallized a growing–and perhaps insoluble–problem for free speech purists like yours truly.

I have always been convinced by the arguments first advanced in John Stuart Mill’s On Liberty and by the considerable scholarship supporting the basic philosophy underlying the First Amendment: yes, some ideas are dangerous, but allowing government to determine which ideas can be expressed would be far more dangerous.

I still believe that to be true when it comes to the exchange of ideas in what we like to call the “marketplace of ideas”–everything from private conversations, to public and/or political pronouncements, to the publication of books, pamphlets, newspapers and the like–even to broadcast “news.” 

But surely we are not without tools to regulate social media behemoths like Facebook–especially in the face of overwhelming evidence that its professed devotion to “free speech” is merely a smokescreen for the platform’s real devotion–to a business plan that monetizes anger and hate.

We currently occupy a legal free-speech landscape that I am finding increasingly uncomfortable: Citizens United and its ilk basically endorsed a theory of “free” speech that gave rich folks megaphones with which to drown out ordinary participants in that speech marketplace. Fox News and its clones–business enterprises that identified an “underserved market” of angry reactionaries–were already protected under traditional free speech doctrine. (My students would sometimes ask why outright lying couldn’t be banned, and I would respond by asking them how courts would distinguish between lying and wrongheadedness, and to consider just how chilling lawsuits for “lying” might be…They usually got the point.) 

Americans were already dealing–none too successfully– with politically-motivated distortions of our information environment before the advent of the Internet. Now we are facing what is truly an unprecedented challenge from a platform used by billions of people around the globe–a platform with an incredibly destructive business model. In brief, Facebook makes more money when users are more “engaged”–when we stay on the platform for longer periods of time. And that engagement is prompted by negative emotions–anger and hatred.

There is no historical precedent for the sheer scale of the damage being done. Yes, we have had popular books and magazines, propaganda films and the like in the past, and yes, they’ve been influential. Many people read or viewed them. But nothing in the past has been remotely as powerful as the (largely unseen and unrecognized) algorithms employed by Facebook–algorithms that aren’t even pushing a particular viewpoint, but simply stirring mankind’s emotional pot and setting tribe against tribe.

The question is: what do we do? (A further question is: have our political structures deteriorated to a point where government cannot do anything about anything…but I leave consideration of that morose possibility for another day.)

The Brookings Institution recently summarized legislative efforts to amend Section 230–the provision of communications law that provides platforms like Facebook with immunity for what users post. Whatever the merits or dangers of those proposals, none of them would seem to address the elephant in the room: the business model built into the platforms’ algorithms. So long as the priority is engagement, and so long as engagement requires a degree of rage (unlikely with pictures of adorable babies and cute kittens), Facebook and other social media sites operating on the same business plan will continue to strengthen divisions and atomize communities. A toy comparison below makes the point.
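The numbers in this little sketch are invented, but they show why rage wins under a pure engagement objective:

```python
# Toy comparison (all numbers invented): under an engagement-maximizing
# objective, the post that provokes the strongest reactions wins placement.

posts = {
    "cute kitten photo":        {"clicks": 0.20, "comments": 0.01, "shares": 0.05},
    "enraging political claim": {"clicks": 0.15, "comments": 0.30, "shares": 0.25},
}

def engagement(p: dict) -> float:
    # Comments and shares weighted most: they keep users arguing on-site.
    return p["clicks"] + 3 * p["comments"] + 2 * p["shares"]

for name, p in posts.items():
    print(f"{name}: {engagement(p):.2f}")

# The kitten scores 0.33; the enraging claim scores 1.55 and tops the
# feed, even though fewer users actually click on it.
```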

The men who crafted America’s Constitution were intent on preventing any one part of the new government from amassing too much power–hence separation of powers and federalism. They could not have imagined a time when private enterprises had the ability to exercise more power than government, but that is the time we occupy.

If government should be prohibited from using its power to censor or mandate or otherwise control expression, shouldn’t Facebook be restrained from–in effect–preferring and amplifying intemperate speech?

I think the answer is yes, but I don’t have a clue how we do that while avoiding unanticipated negative consequences. 


Regulating Facebook et al

Over the past few years, as my concerns about the media environment we inhabit have grown, I have found Tom Wheeler’s columns and interviews invaluable. Wheeler–for those of you unfamiliar with him–chaired the Federal Communications Commission from 2013 to 2017, and is currently both a senior fellow at the Shorenstein Center at Harvard’s Kennedy School and a visiting fellow at the Brookings Institution.

He’s also a clear writer and thinker.

In a recent article for Time Magazine, Wheeler proposes the establishment of a new federal agency that would be empowered to regulate Internet giants like Facebook. He began the article by noting Mark Zuckerberg’s apparent willingness to be regulated–a willingness expressed in advertisements and testimony to Congress. As he notes, however,

A tried-and-true lobbying strategy is to loudly proclaim support for lofty principles while quietly working to hollow out the implementation of such principles. The key is to move beyond embracing generic concepts to deal with regulatory specifics. The headline on Politico’s report of the March 25 House of Representatives hearing, “D.C.’s Silicon Valley crackdown enters the haggling phase,” suggests that such an effort has begun. Being an optimist, I want to take Facebook at its word that it supports updated internet regulations. Being a pragmatist and former regulator, though, I believe we need to know exactly what such regulations would provide.

Wheeler proceeds to explain why he favors the creation of a separate agency that would be charged with regulating “Big Tech.” As he notes, most proposals in Congress would give that job to the Federal Trade Commission (FTC). Wheeler has nothing negative to say about the FTC, but points out that the agency is already “overburdened with an immense jurisdiction.” (Companies have even been known to seek transfer of their oversight to the agency, believing that such a transfer would allow their issues to get lost among the extensive and pressing other matters for which the agency is responsible.) Furthermore, oversight of digital platforms “should not be a bolt-on to an existing agency but requires full-time specialized focus.”

So how should a new agency approach its mission?

Digital companies complain (not without some merit) that current regulation with its rigid rules is incompatible with rapid technology developments. To build agile policies capable of evolving with technology, the new agency should take a page from the process used in developing the technology standards that created the digital revolution. In that effort, the companies came together to agree on exactly how things would work. This time, instead of technical standards, there would be behavioral standards.

The subject matter of these new standards should be identified by the agency, which would convene industry and public stakeholders to propose a code, much like electric codes and fire codes. Ultimately, the agency would approve or modify the code and enforce it. While there is no doubt that such a new approach is ambitious, the new challenges of the digital giants require new tools.

Wheeler proceeds to outline how the proposed agency would approach issues such as misinformation and privacy, and to describe how it might promote and protect competition in the digital marketplace.

It’s a truism among policy wonks that government’s efforts to engage with rapidly changing social realities lag the development of those realities. The Internet has changed dramatically from the first days of the World Wide Web; the social media sites that are so ubiquitous now didn’t exist before 1997, and blogs like the one you are reading first emerged in 1999–a blink of the eye in historical terms. In the next twenty years, there will undoubtedly be digital innovations we can’t yet imagine or foresee. A specialized agency to oversee our digital new world makes a lot of sense.

I’m usually leery of creating new agencies of government, given the fact that once they appear on the scene, they tend to outlive their usefulness. But Wheeler makes a persuasive case.

And the need for thoughtful, informed regulation gets more apparent every day.

Section 230

These are hard times for free speech advocates. The Internet–with its capacity for mass distribution of lies, misinformation, bigotry and incitement to violence–cries out for reform, but it is not apparent (certainly not to me) what sort of reforms might curb the dangers without also stifling free expression.

One approach is focused on a law that is older than Google: Section 230 of the Communications Decency Act. 

What is Section 230? Is it really broken? Can it be fixed without inadvertently doing more damage? 

The law is just 26 words that allow online platforms to make rules about what people can or can’t post without being held legally responsible for the content. (There are some exceptions, but not many.) As a recent newsletter on technology put it (sorry, for some reason the link doesn’t work),

If I accuse you of murder on Facebook, you might be able to sue me, but you can’t sue Facebook. If you buy a defective toy from a merchant on Amazon, you might be able to take the seller to court, but not Amazon. (There is some legal debate about this, but you get the gist.)

The law created the conditions for Facebook, Yelp and Airbnb to give people a voice without being sued out of existence. But now Republicans and Democrats are asking whether the law gives tech companies either too much power or too little responsibility for what happens under their watch.

Republicans mostly worry that Section 230 gives internet companies too much power to suppress online debate and discussion, while Democrats mostly worry that it lets those companies ignore or even enable dangerous incitements and/or illegal transactions. 

The fight over Section 230 is really a fight over the lack of control exercised by Internet giants like Facebook and Twitter. In far too many situations, the law allows people to lie online without consequence–let’s face it, the high school kid who is spreading lewd rumors about a girl who turned him down for a date isn’t likely to be sued, no matter how damaging, reprehensible and untrue his posts may be. The recent defamation suits brought by the voting machine manufacturers were salutary and satisfying, but most people harmed by the bigotry and disinformation online are not in a position to pursue such remedies.

The question being debated among techies and lawyers is whether Section 230 is too protective; whether it reduces incentives for platforms like Facebook and Twitter to adopt and enforce stronger measures that would be more effective in curtailing obviously harmful rhetoric and activities.

Several proposed “fixes” are currently being considered. The Times newsletter described them.

Fix-it Plan 1: Raise the bar. Some lawmakers want online companies to meet certain conditions before they get the legal protections of Section 230.

One example: A congressional proposal would require internet companies to report to law enforcement when they believe people might be plotting violent crimes or drug offenses. If the companies don’t do so, they might lose the legal protections of Section 230 and the floodgates could open to lawsuits.

Facebook this week backed a similar idea, which proposed that it and other big online companies would have to have systems in place for identifying and removing potentially illegal material.

Another proposed bill would require Facebook, Google and others to prove that they hadn’t exhibited political bias in removing a post. Some Republicans say that Section 230 requires websites to be politically neutral. That’s not true.

Fix-it Plan 2: Create more exceptions. One proposal would restrict internet companies from using Section 230 as a defense in legal cases involving activity like civil rights violations, harassment and wrongful death. Another proposes letting people sue internet companies if child sexual abuse imagery is spread on their sites.

Also in this category are legal questions about whether Section 230 applies to the involvement of an internet company’s own computer systems. When Facebook’s algorithms helped circulate propaganda from Hamas, as David detailed in an article, some legal experts and lawmakers said that Section 230 legal protections should not have applied and that the company should have been held complicit in terrorist acts.


Slate has an article describing all of the proposed changes to Section 230.

I don’t have a firm enough grasp of the issues involved–let alone the technology needed to accomplish some of the proposed changes–to have a favored “fix” to Section 230.

I do think that this debate foreshadows others that will arise in a world where massive international companies–online and not– in many cases wield more power than governments. Constraining these powerful entities will require new and very creative approaches.

Falsely Shouting “Fire” In The Digital Theater

Tom Wheeler is one of the savviest observers of the digital world.

Now at the Brookings Institution, Wheeler headed the FCC during the Obama administration, and recently authored an essay titled “The Consequences of Social Media’s Giant Experiment.” That essay–like many of his other publications–considered legally private enterprises whose conduct has enormous public consequences.

The “experiment” Wheeler considers is the shutdown of Trump’s disinformation megaphones: most consequential, of course, were the Facebook and Twitter bans of Donald Trump’s accounts, but it was also important that Parler–a site for rightwing radicalization and conspiracy theories–was effectively shut down for a time by Amazon’s decision to cease hosting it and by the decisions of both Google and Apple to remove it from their app stores. (I note that, since Wheeler’s essay appeared, Parler has found a new hosting service–one that is Russian-owned.)

These actions are better late than never. But the proverbial horse has left the barn. These editorial and business judgements do, however, demonstrate how companies have ample ability to act conscientiously to protect the responsible use of their platforms.

Wheeler addresses the conundrum created by a subsection of the law that insulates social media companies from responsibility for the sorts of editorial judgments that publishers of traditional media make every day. As he says, these 26 words are the heart of the issue: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

As he points out,

If you are insulated from the consequences of your actions and make a great deal of money by exploiting that insulation, then what is the incentive to act responsibly?…

The social media companies have put us in the middle of a huge and explosive lab experiment where we see the toxic combination of digital technology, unmoderated content, lies and hate. We now have the answer to what happens when these features and large profits are blended together in a connected world. The result not only has been unproductive for civil discourse, it also represents a danger to democratic systems and effective problem-solving.

Wheeler repeats what most observers of our digital world have recognized: these platforms have the technological capacity to exercise the same sort of responsible moderation that we expect of traditional media. What they lack is the will–because more responsible moderating algorithms would eat into their currently large–okay, obscene–profits.

The companies’ business model is built around holding a user’s attention so that they may display more paying messages. Delivering what the user wants to see, the more outrageous the better, holds that attention and rings the cash register.

Wheeler points out that we have mischaracterized these platforms–they are not, as they insist, tech enterprises. They are media, and should be required to conform to the rules and expectations that govern media sources. He has other suggestions for tweaking the rules that govern these platforms, and they are worth consideration.

That said, the rise of these digital giants raises a bigger question and implicates what is essentially a philosophical dilemma.

The U.S. Constitution was intended to limit the exercise of power; it was crafted at a time in human history when governments held a clear monopoly on that power. That is arguably no longer the case–and it isn’t simply social media giants. Today, multiple social and economic institutions have the power to pose credible threats both to individual liberty and to social cohesion. How we navigate the minefield created by that reality–how we restrain the power of theoretically “private” enterprises– will determine the life prospects of our children and grandchildren.

At the very least, we need rules that will limit the ability of miscreants to falsely shout fire in our digital environments.