The recent revelations about Facebook have crystallized a growing–and perhaps insoluble–problem for free speech purists like yours truly.
I have always been convinced by the arguments first advanced in John Stuart Mill’s On Liberty and the considerable scholarship supporting the basic philosophy underlying the First Amendment: yes, some ideas are dangerous, but allowing government to determine which ideas can be expressed would be far more dangerous.
I still believe that to be true when it comes to the exchange of ideas in what we like to call the “marketplace of ideas”–everything from private conversations, to public and/or political pronouncements, to the publication of books, pamphlets, newspapers and the like–even to broadcast “news.”
But surely we are not without tools to regulate social media behemoths like Facebook–especially in the face of overwhelming evidence that its professed devotion to “free speech” is merely a smokescreen for the platform’s real devotion–to a business plan that monetizes anger and hate.
We currently occupy a legal free-speech landscape that I am finding increasingly uncomfortable: Citizens United and its ilk basically endorsed a theory of “free” speech that gave rich folks megaphones with which to drown out ordinary participants in that speech marketplace. Fox News and its clones–business enterprises that identified an “underserved market” of angry reactionaries–were already protected under traditional free speech doctrine. (My students would sometimes ask why outright lying couldn’t be banned, and I would respond by asking them how courts would distinguish between lying and wrongheadedness, and to consider just how chilling lawsuits for “lying” might be. They usually got the point.)
Americans were already dealing–none too successfully–with politically motivated distortions of our information environment before the advent of the Internet. Now we are facing a truly unprecedented challenge from a platform used by billions of people around the globe–a platform with an incredibly destructive business model. In brief, Facebook makes more money when users are more “engaged”–when we stay on the platform for longer periods of time. And that engagement is prompted by negative emotions–anger and hatred.
There is no historical precedent for the sheer scale of the damage being done. Yes, we have had popular books and magazines, propaganda films and the like in the past, and yes, they’ve been influential. Many people read or viewed them. But nothing in the past has been remotely as powerful as the (largely unseen and unrecognized) algorithms employed by Facebook–algorithms that aren’t even pushing a particular viewpoint, but simply stirring mankind’s emotional pot and setting tribe against tribe.
The question is: what do we do? (A further question is: have our political structures deteriorated to a point where government cannot do anything about anything…but I leave consideration of that morose possibility for another day.)
The Brookings Institution recently summarized legislative efforts to amend Section 230–the provision of communications law that provides platforms like Facebook with immunity for what users post. Whatever the merits or dangers of those proposals, none of them would seem to address the elephant in the room, which is the basic business model built into the algorithms employed. So long as the priority is engagement, and so long as engagement requires a degree of rage (unlikely with pictures of adorable babies and cute kittens), Facebook and other social media sites operating on the same business plan will continue to strengthen divisions and atomize communities.
The men who crafted America’s Constitution were intent on preventing any one part of the new government from amassing too much power–hence separation of powers and federalism. They could not have imagined a time when private enterprises would have the ability to exercise more power than government, but that is the time we occupy.
If government should be prohibited from using its power to censor or mandate or otherwise control expression, shouldn’t Facebook be restrained from–in effect–preferring and amplifying intemperate speech?
I think the answer is yes, but I don’t have a clue how we do that while avoiding unanticipated negative consequences.