Meta Goes Vichy

The term “Vichy” refers to the shameful, collaborationist government in World War II France, during that country’s Nazi occupation. In the run-up to the Trump/MAGA occupation of the United States, Mark Zuckerberg just announced Vichy Meta.

Meta won’t be even a small part of the resistance.

Zuckerberg has announced that Facebook will end its longstanding fact-checking program. Third-party fact-checking was originally instituted to curtail the spread of misinformation on Facebook and Meta’s other social media apps.

The change was the latest sign of how companies owned by multi-zillionaires are “repositioning” (aka groveling) in preparation for the Trump presidency.

The Bulwark headlined the move “Mark Zuckerberg is a Surrender Monkey,” pointing out that he’d recently named Joel Kaplan as the company’s head of public policy. Kaplan isn’t just a Republican in good standing; he’s a close friend of Brett Kavanaugh and–according to the article–“somewhere between friendly-toward and horny-for Trumpism.” Zuckerberg also appointed Dana White, head of something called the Ultimate Fighting Championship, to Meta’s board of directors. That background is arguably irrelevant to Meta’s business, but White’s usefulness clearly doesn’t lie in any expertise he possesses; his “value” lies in being one of Donald Trump’s closest friends and top endorsers.

Add to that Zuckerberg’s one-million-dollar donation to Trump’s inaugural fund.

Kaplan went on Fox & Friends (of course) to explain that Facebook is killing its fact-checking program in order to make its content moderation strategy more like Elon Musk’s Twitter/X regime.  

As all sentient Americans are aware, when Musk purchased Twitter (which he awkwardly re-named X), he promised unfettered free speech. He proceeded to invite back users who had previously been banned for bad behavior, fired the content moderation teams, and replaced them with crowdsourced “community notes” appended below disputed content. That is the model Meta is adopting.
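For what it’s worth, X has actually published the ranking algorithm behind community notes, and its core is a “bridging” matrix factorization: a note is displayed only when its helpfulness can’t be explained by raters’ viewpoints–that is, when people who usually disagree both rate it helpful. The Python sketch below is my own radically simplified toy version, with invented ratings, invented parameters and an invented display threshold; it illustrates the mechanism, not the production system.

```python
import numpy as np

# Toy ratings matrix: rows are raters, columns are notes.
# Raters 0-1 belong to one ideological "camp," raters 2-3 to the other.
# 1 = rated "helpful", 0 = rated "not helpful".
R = np.array([
    [1.0, 1.0, 0.0],   # camp A likes notes 0 and 1
    [1.0, 1.0, 0.0],
    [1.0, 0.0, 1.0],   # camp B likes notes 0 and 2
    [1.0, 0.0, 1.0],
])
n_raters, n_notes = R.shape
rng = np.random.default_rng(0)

# Model: rating ~ mu + rater_bias + note_bias + rater_factor * note_factor.
# A note's *bias* (intercept) is helpfulness NOT explained by viewpoint
# alignment; the factor product soaks up purely partisan agreement.
mu, b_rater, b_note = 0.0, np.zeros(n_raters), np.zeros(n_notes)
f_rater = rng.normal(0, 0.1, n_raters)
f_note = rng.normal(0, 0.1, n_notes)

lr, reg = 0.05, 0.03
for _ in range(2000):                      # plain SGD over all ratings
    for u in range(n_raters):
        for n in range(n_notes):
            err = R[u, n] - (mu + b_rater[u] + b_note[n] + f_rater[u] * f_note[n])
            mu += lr * err
            b_rater[u] += lr * (err - reg * b_rater[u])
            b_note[n] += lr * (err - reg * b_note[n])
            f_rater[u], f_note[n] = (
                f_rater[u] + lr * (err * f_note[n] - reg * f_rater[u]),
                f_note[n] + lr * (err * f_rater[u] - reg * f_note[n]),
            )

THRESHOLD = 0.2  # invented cutoff; the real system uses its own calibration
for n in range(n_notes):
    verdict = "display" if b_note[n] > THRESHOLD else "hold back"
    print(f"note {n}: intercept {b_note[n]:+.2f} -> {verdict}")
```

In the toy data, note 0 is rated helpful by both “camps” and earns a high intercept; notes 1 and 2 are each loved by one camp and panned by the other, so the viewpoint factors absorb their apparent helpfulness and they stay hidden. Whether that mechanism can replace professional fact-checkers is, of course, the question.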

So–how are things going at X?

Numerous studies have documented the enormous amounts of false and hateful content now on X. Antisemitic, racist and misogynistic posts rose sharply immediately after Musk’s takeover, and have continued to proliferate. It hasn’t only been the bigotry. Disinformation about issues like climate change and migration has exploded, and users are spending greater amounts of time liking and reposting items from authoritarian governments and terrorist groups like the Islamic State and Hamas. 

There’s a reason so many advertisers have fled and former users of the platform have decamped for Bluesky.

The Bulwark reproduced Zuckerberg’s tweets announcing the change, including one jaw-dropping post explaining that the company would move its “trust and safety and content moderation teams” out of California and send them to Texas, to allay concerns about content-moderation bias. (If just being located in a Blue state creates bias, what sort of bias can we expect from people located in and influenced by Greg Abbott’s Red, retrograde Texas?)

All this to pander to an incoming autocrat whose continuing mental decline becomes more obvious every day. In his most recent press conference, Trump once again threatened to invade Greenland–a territory of our ally Denmark–and to recapture the Panama Canal (which he inexplicably explained was necessary to counter China). He also announced his intention to make Canada part of the U.S., and to rename the Gulf of Mexico.

Well, I’m sure those measures will bring down the price of eggs….

This is the buffoon who will soon occupy the Oval Office. The fact that a (slim) plurality of Americans voted for this mentally-ill ignoramus is depressing enough. But recognizing that we have large numbers of citizens who vote their White Christian Nationalism is one thing; watching people who clearly know better surrender their integrity in advance, in order to stay in the good graces of the lunatic-in-charge, is appalling.

Facebook has already morphed from a useful platform allowing us to interact with family and friends into a site where advertisements vastly outnumber real posts. Its content moderators were already bending over backwards to accommodate Rightwing worldviews. How many users have the time or energy–or interest–to rebut blatant falsehoods and conspiracy theories? For that matter, on a platform increasingly segmented into “bubbles”–where we interact mostly with people who already agree with us–will we even see the sorts of misinformation and disinformation that will be posted and enthusiastically shared by people who desperately want to believe that vaccines are a liberal plot and Jews have space lasers?

As Timothy Snyder wrote in “On Tyranny,” this is how democracies die: by surrendering in advance.


Restraining Power

The growing concerns about social media–especially platforms’ moderation of users’ posts–are just the most recent and visible examples of an older conundrum: how do we define and restrain the misuse of power?

When the U.S. Constitution was drafted, concerns about the infringement of individual rights focused almost entirely on government, because only government entities had the power to prescribe and proscribe individual behaviors and punish those who failed to conform. Accordingly, the Bill of Rights restrained only government–initially, only the federal government, which was seen as a greater threat than state and local governments; those units were brought within its prohibitions after passage of the 14th Amendment.

To state the glaringly obvious, in the 200+ years since passage of the original Bill of Rights, a lot of things have changed.

Governments aren’t the only entities exercising considerable authority over our lives–major corporations, a number of them global in scope, not only influence government but engage in negative behaviors that directly affect millions of people, from polluting the environment to exploiting third-world labor. Scholars have belatedly begun to ask whether the Bill of Rights should be applied more broadly–to restrain all entities large enough or powerful enough to invade individual rights.

I have absolutely no idea how that might work. (It probably wouldn’t.) That said, we are at a point where we absolutely must contend with the inordinate power exercised by private, non-governmental organizations, and especially by Facebook, Twitter, et al.

Robert Reich addressed that problem in a recent essay for the Guardian.

Twitter and Instagram just removed antisemitic posts from Kanye West and temporarily banned him from their platforms. It just goes to show … um, what?

How good these tech companies are at content moderation? Or how irresponsible they are for “muzzling” controversial views from the extreme right? (Defenders of West, such as the Indiana attorney general, Todd Rokita, are incensed that he’s been banned.) Or how arbitrary these giant megaphones are in making these decisions? (What would Elon Musk do about Kanye West?)

Call it the Kanye West paradox: do the social media giants have a duty to take down noxious content or a duty to post it? And who decides?

As Reich quite accurately notes, these platforms, with their huge size and extraordinary power over what’s communicated, exert enormous sway over the American public. And they are utterly unaccountable to that public.

Two cases pending before the Supreme Court illustrate the underlying dilemma:

One case involves Section 230 of the Communications Decency Act of 1996, the section that gives social media platforms protection from liability for what’s posted on them. The plaintiffs there claim that social media (YouTube in one case, Twitter in the other) led to the deaths of family members at the hands of terrorists. In the other case, the plaintiffs argue, in effect, that the First Amendment forbids these platforms from being more vigilant; it arises from a Texas law that allows Texans and the state’s attorney general to sue social media giants for “unfairly” banning or censoring them based on political ideology.

It’s an almost impossible quandary – until you realize that these questions arise because of the huge political and social power of these companies, and their lack of accountability.

In reality, they aren’t just for-profit companies. By virtue of their size and power, their decisions have enormous public consequences.

Reich is betting that the Court will treat them as common carriers, like railroads or telephone lines. Common carriers can’t engage in unreasonable discrimination in deciding who uses them, must charge just and reasonable prices, and must provide reasonable care to the public.

But is there any reason to trust the government to do a better job of content moderation than the giants do on their own? (I hate to imagine what would happen under a Republican FCC.)

So are we inevitably locked into the Kanye West paradox?

Or is there a third and better alternative to the bleak choice between leaving content moderation up to the giant unaccountable firms or to a polarized government?

The answer is yes. It’s to address the underlying problem directly: the monopoly power possessed by the giant social media companies.

The way to do this is to apply the antitrust laws–and break them up.

My guess is that this is where we’ll end up, eventually. There’s no other reasonable choice. As Winston Churchill is reputed to have said: “Americans can always be trusted to do the right thing, once all other possibilities have been exhausted.”

It’s hard to disagree. And actually, a far more aggressive approach to antitrust would solve more problems than those we are experiencing with social media…


Who’s Talking?

I finally got around to reading an article about Facebook by Professor Scott Galloway, sent to me by a reader. In it, Galloway considered the various “fixes” that have been suggested in the wake of continuing revelations about the degree to which Facebook and other social media platforms have facilitated America’s divisions.

There have been a number of similar articles, but what Galloway did better than most was explain the origin of Section 230 of the Communications Decency Act in language we non-techie people can understand.

In most industries, the most robust regulator is not a government agency, but a plaintiff’s attorney. If your factory dumps toxic chemicals in the river, you get sued. If the tires you make explode at highway speed, you get sued. Yes, it’s inefficient, but ultimately the threat of lawsuits reduces the need for regulation; it’s a cop that covers a broad beat. Liability encourages businesses to make risk/reward calculations in ways that one-size-fits-all regulations don’t. It creates an algebra of deterrence.

Social media, however, is largely immunized from such suits. A 1996 law, known as “Section 230,” erects a fence around content that is online and provided by someone else. It means I’m not liable for the content of comments on the No Mercy website, Yelp isn’t liable for the content of its user reviews, and Facebook, well, Facebook can pretty much do whatever it wants.

There are increasing calls to repeal or reform 230. It’s instructive to understand this law, and why it remains valuable. When Congress passed it — again, in 1996 — it reasoned online companies were like bookstores or old-fashioned bulletin boards. They were mere distribution channels for other people’s content and shouldn’t be liable for it.

Seems reasonable. So–why the calls for its repeal? Galloway points to the multiple ways in which the information and communication environments have changed since 1996.

In 1996, 16% of Americans had access to the Internet, via a computer tethered to a phone cord. There was no Wi-Fi. No Google, Facebook, Twitter, Reddit, or YouTube — not even Friendster or MySpace had been birthed. Amazon sold only books. Section 230 was a fence protecting a garden plot of green shoots and untilled soil.

Today, as he points out, some 3 billion individuals use Facebook, and 57 percent of the world’s population uses some sort of social media. Those are truly astonishing numbers.

I have previously posted about externalities–the ability of manufacturers and other providers to compete more successfully in the market by “offloading” certain of their costs to society at large. When it comes to social media, Galloway tells us that its externalities have grown as fast as the platforms’ revenues–and thanks to Section 230, society has borne the costs.

In sum, behind the law’s liability shield, tech platforms have morphed from Model UN members to Syria and North Korea. Only these Hermit Kingdoms have more warheads and submarines than all other nations combined.

As he points out, today’s social media has the resources to play by the same rules as other powerful media. Bottom line: We need a new fence. We need to redraw Section 230 so that it protects society from the harms of social media companies without destroying their usefulness or economic vitality.

What we have learned since 1996 is that Facebook and other social media companies are not neutral platforms. They aren’t bulletin boards. They are rigorously managed–personalized for each user, and actively boosting or suppressing certain content. Galloway calls that “algorithmic amplification,” and it didn’t exist in 1996.
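To make “algorithmic amplification” concrete, here is a deliberately crude sketch in Python of an engagement-ranked feed. To be clear, none of this is Meta’s actual code; the signals and weights are invented for illustration. The structural point survives the simplification: when a feed is sorted by predicted engagement, and outrage reliably produces engagement, inflammatory posts rise to the top without anyone ever writing “promote outrage” into the program.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    comments: int      # engagement signals observed so far
    reshares: int
    angry_reacts: int
    likes: int

def engagement_score(p: Post) -> float:
    # Invented weights: signals that predict longer sessions (comments,
    # reshares, angry reactions) count for more than a passive like.
    return 5.0 * p.comments + 4.0 * p.reshares + 3.0 * p.angry_reacts + 1.0 * p.likes

def rank_feed(posts: list[Post]) -> list[Post]:
    # The "editorial" decision is nothing but a sort on the objective.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Adorable kitten learns to fetch", comments=3, reshares=2, angry_reacts=0, likes=400),
    Post("THEY are coming for your children!", comments=250, reshares=180, angry_reacts=300, likes=40),
])
print([p.text for p in feed])   # the outrage post ranks first
```

The amplification is just the sort order; the divisiveness is an emergent property of the objective function, which is precisely Galloway’s point.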

There are evidently several bills pending in Congress that purport to address the problem, aiming at the ways in which social media platforms weaponize these algorithms. Because such approaches target the amplification machinery rather than the speech itself, they should avoid raising credible concerns about chilling free expression.

Reading the essay gave me some hope that we can deal–eventually–with the social damage being inflicted by social media. It didn’t, however, suggest a way to counter the propaganda spewed daily by Fox News or Sinclair or their clones…


Free Speech And Online Propaganda

The recent revelations about Facebook have crystallized a growing–and perhaps insoluble–problem for free speech purists like yours truly.

I have always been convinced by the arguments first advanced in John Stuart Mill’s On Liberty  and the considerable scholarship supporting the basic philosophy underlying the  First Amendment: yes, some ideas are dangerous, but allowing government to determine which ideas can be expressed would be far more dangerous.

I still believe that to be true when it comes to the exchange of ideas in what we like to call the “marketplace of ideas”–everything from private conversations, to public and/or political pronouncements, to the publication of books, pamphlets, newspapers and the like–even to broadcast “news.” 

But surely we are not without tools to regulate social media behemoths like Facebook–especially in the face of overwhelming evidence that its professed devotion to “free speech” is merely a smokescreen for the platform’s real devotion–to a business plan that monetizes anger and hate.

We currently occupy a legal free-speech landscape that I am finding increasingly uncomfortable: Citizens United and its ilk basically endorsed a theory of “free” speech that gave rich folks megaphones with which to drown out ordinary participants in that speech marketplace. Fox News and its clones–business enterprises that identified an “underserved market” of angry reactionaries–were already protected under traditional free speech doctrine. (My students would sometimes ask why outright lying couldn’t be banned, and I would respond by asking them how courts would distinguish between lying and wrongheadedness, and to consider just how chilling lawsuits for “lying” might be…They usually got the point.) 

Americans were already dealing–none too successfully– with politically-motivated distortions of our information environment before the advent of the Internet. Now we are facing what is truly an unprecedented challenge from a platform used by billions of people around the globe–a platform with an incredibly destructive business model. In brief, Facebook makes more money when users are more “engaged”–when we stay on the platform for longer periods of time. And that engagement is prompted by negative emotions–anger and hatred.

There is no historical precedent for the sheer scale of the damage being done. Yes, we have had popular books and magazines, propaganda films and the like in the past, and yes, they’ve been influential. Many people read or viewed them. But nothing in the past has been remotely as powerful as the (largely unseen and unrecognized) algorithms employed by Facebook–algorithms that aren’t even pushing a particular viewpoint, but simply stirring mankind’s emotional pot and setting tribe against tribe.
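The dynamic described above is a feedback loop, and a few lines of code can show how quickly such a loop compounds. The sketch below is a cartoon model, not a claim about Facebook’s actual parameters: it assumes only that divisive posts are clicked slightly more often when shown, and that tomorrow’s feed shows more of whatever was clicked today.

```python
# Cartoon feedback loop: exposure is re-allocated each day in proportion
# to the engagement earned the day before. The rates are invented; the
# only assumption that matters is that divisive content engages better.
ENGAGE_RATE = {"divisive": 0.12, "benign": 0.08}

share = {"divisive": 0.5, "benign": 0.5}   # start with an even feed

for day in range(1, 31):
    engagement = {k: share[k] * ENGAGE_RATE[k] for k in share}
    total = sum(engagement.values())
    share = {k: engagement[k] / total for k in engagement}
    if day % 10 == 0:
        print(f"day {day:2d}: divisive share of feed = {share['divisive']:.0%}")
```

A four-point engagement gap is enough to crowd the benign material almost entirely out of the feed within a month, even though the code never looks at what anyone is actually saying. That, and not any particular editorial viewpoint, is the pot-stirring.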

The question is: what do we do? (A further question is: have our political structures deteriorated to a point where government cannot do anything about anything…but I leave consideration of that morose possibility for another day.)

The Brookings Institution recently summarized legislative efforts to amend Section 230–the provision of communications law that provides platforms like Facebook with immunity for what users post. Whatever the merits or dangers of those proposals, none of them would seem to address the elephant in the room, which is the basic business model built into the algorithms employed. So long as the priority is engagement, and so long as engagement requires a degree of rage (unlikely with pictures of adorable babies and cute kittens), Facebook and other social media sites operating on the same business plan will continue to strengthen divisions and atomize communities.

The men who crafted America’s constitution were intent on preventing any one part of the new government from amassing too much power–hence separation of powers and federalism. They could not have imagined a time when private enterprises had the ability to exercise more power than government, but that is the time we occupy. 

If government should be prohibited from using its power to censor or mandate or otherwise control expression, shouldn’t Facebook be restrained from–in effect–preferring and amplifying intemperate speech?

I think the answer is yes, but I don’t have a clue how we do that while avoiding unanticipated negative consequences. 


Regulating Facebook et al

Over the past few years, as my concerns about the media environment we inhabit have grown, I have found Tom Wheeler’s columns and interviews invaluable. Wheeler–for those of you unfamiliar with him– chaired the Federal Communications Commission from 2013 to 2017, and is currently both a senior fellow at Harvard’s Kennedy School Shorenstein Center and a visiting fellow at the Brookings Institution.

He’s also a clear writer and thinker.

In a recent article for Time Magazine, Wheeler proposes the establishment of a new federal agency that would be empowered to regulate Internet giants like Facebook. He began the article by noting Mark Zuckerberg’s apparent willingness to be regulated–a willingness expressed in advertisements and testimony to Congress. As he notes, however,

A tried-and-true lobbying strategy is to loudly proclaim support for lofty principles while quietly working to hollow out the implementation of such principles. The key is to move beyond embracing generic concepts to deal with regulatory specifics. The headline on Politico’s report of the March 25 House of Representatives hearing, “D.C.’s Silicon Valley crackdown enters the haggling phase,” suggests that such an effort has begun. Being an optimist, I want to take Facebook at its word that it supports updated internet regulations. Being a pragmatist and former regulator, though, I believe we need to know exactly what such regulations would provide.

Wheeler proceeds to explain why he favors the creation of a separate agency charged with regulating “Big Tech.” As he notes, most proposals in Congress would give that job to the Federal Trade Commission (FTC). Wheeler has nothing negative to say about the FTC, but points out that the agency is already “overburdened with an immense jurisdiction.” (Companies have even been known to seek transfer of their oversight to the agency, believing that such a transfer would allow their issues to get lost among the extensive and pressing other matters for which the agency is responsible.) Furthermore, oversight of digital platforms “should not be a bolt-on to an existing agency but requires full-time specialized focus.”

So how should a new agency approach its mission?

Digital companies complain (not without some merit) that current regulation with its rigid rules is incompatible with rapid technology developments. To build agile policies capable of evolving with technology, the new agency should take a page from the process used in developing the technology standards that created the digital revolution. In that effort, the companies came together to agree on exactly how things would work. This time, instead of technical standards, there would be behavioral standards.

The subject matter of these new standards should be identified by the agency, which would convene industry and public stakeholders to propose a code, much like electric codes and fire codes. Ultimately, the agency would approve or modify the code and enforce it. While there is no doubt that such a new approach is ambitious, the new challenges of the digital giants require new tools.

Wheeler proceeds to outline how the proposed agency would approach issues such as misinformation and privacy, and to describe how it might promote and protect competition in the digital marketplace.

It’s a truism among policy wonks that government’s efforts to engage with rapidly changing social realities lag the development of those realities. The Internet has changed dramatically from the first days of the World Wide Web; the social media sites that are so ubiquitous now didn’t exist before 1997, and blogs like the one you are reading first emerged in 1999–a blink of the eye in historical terms. In the next twenty years, there will undoubtedly be digital innovations we can’t yet imagine or foresee. A specialized agency to oversee our digital new world makes a lot of sense.

I’m usually leery of creating new agencies of government, given the fact that once they appear on the scene, they tend to outlive their usefulness. But Wheeler makes a persuasive case.

And the need for thoughtful, informed regulation gets more apparent every day.
