Meta Goes Vichy

The term “Vichy” refers to the shameful, collaborationist government in World War II France, during that country’s Nazi occupation. In the run-up to the Trump/MAGA occupation of the United States, Mark Zuckerberg just announced Vichy Meta.

Meta won’t be even a small part of the resistance.

Zuckerberg has announced that Facebook will end its longstanding fact-checking program. Third-party fact-checking was originally instituted to curtail the spread of misinformation on Facebook and Meta’s other social media apps.

The change was the latest sign of how companies owned by multi-zillionaires are “repositioning” (aka groveling) in preparation for the Trump presidency.

The Bulwark headlined the move "Mark Zuckerberg is a Surrender Monkey," pointing out that he'd recently named Joel Kaplan as the company's head of public policy. Kaplan isn't just a Republican in good standing; he's a close friend of Brett Kavanaugh and, according to the article, "somewhere between friendly-toward and horny-for Trumpism." Zuckerberg also appointed Dana White, head of the Ultimate Fighting Championship, to Meta's board of directors. White's background is arguably irrelevant to Meta's business, and his usefulness rather clearly isn't grounded in any expertise he possesses; his "value" lies in being one of Donald Trump's closest friends and top endorsers.

Add to that Zuckerberg's $1 million donation to Trump's inaugural fund.

Kaplan went on Fox & Friends (of course) to explain that Facebook is killing its fact-checking program in order to make its content moderation strategy more like Elon Musk’s Twitter/X regime.  

As all sentient Americans are aware, when Musk purchased Twitter (which he awkwardly renamed X), he promised unfettered free speech. He proceeded to invite back users who had previously been banned for bad behavior, fired the content moderation teams, and replaced them with crowdsourced "community notes" below disputed content. That is the model Meta is adopting.

So–how are things going at X?

Numerous studies have documented the enormous amounts of false and hateful content now on X. Antisemitic, racist and misogynistic posts rose sharply immediately after Musk’s takeover, and have continued to proliferate. It hasn’t only been the bigotry. Disinformation about issues like climate change and migration has exploded, and users are spending greater amounts of time liking and reposting items from authoritarian governments and terrorist groups like the Islamic State and Hamas. 

There’s a reason so many advertisers have fled and former users of the platform have decamped for Bluesky.

The Bulwark reproduced Zuckerberg’s tweets announcing the change, including one jaw-dropping post explaining that the company would move its “trust and safety and content moderation teams” out of California and send them to Texas, to allay concerns about content-moderation bias. (If just being located in a Blue state creates bias, what sort of bias can we expect from people located in and influenced by Greg Abbott’s Red, retrograde Texas?)

All this to pander to an incoming autocrat whose continuing mental decline becomes more obvious every day. In his most recent press conference, Trump once again threatened to invade Greenland–an autonomous territory of our ally Denmark–and to recapture the Panama Canal (which he inexplicably explained was necessary to counter China). He also announced his intention to make Canada part of the U.S., and to rename the Gulf of Mexico.

Well, I’m sure those measures will bring down the price of eggs….

This is the buffoon who will soon occupy the Oval Office. The fact that a (slim) majority of Americans voted for this mentally-ill ignoramus is depressing enough. Recognizing that we have large numbers of citizens who vote their White Christian Nationalism is one thing; the fact that people who clearly know better are willing to surrender their integrity in advance in order to stay in the good graces of the lunatic-in-charge is quite another, and it is appalling.

Facebook has already morphed from a useful platform allowing us to interact with family and friends into a site where advertisements vastly outnumber real posts. Its content moderators were already bending over backwards to accommodate Rightwing worldviews. How many users have the time or energy–or interest–to rebut blatant falsehoods and conspiracy theories? For that matter, in a platform increasingly occupied by “bubbles”–where we interact mostly with people who already agree with us–will we even see the sorts of misinformation and disinformation that will be posted and enthusiastically shared by people who desperately want to believe that vaccines are a liberal plot and Jews have space lasers?

As Timothy Snyder wrote in “On Tyranny,” this is how democracies die: by surrendering in advance.


That Misunderstood First Amendment

I know that my constant yammering about the importance of civic education can seem pretty tiresome–especially in the abstract–so I was initially gratified to read a Brookings Institution article focusing on a very tangible example.

Emerging research confirms the damage done by misinformation disseminated on social media, and that research has led to a sometimes acrimonious debate over what can be done to ameliorate the problem. One especially troubling argument has been over content that isn't, as the article recognizes, "per se illegal" but is nevertheless likely to cause significant harm.

Many on the left insist digital platforms haven’t done enough to combat hate speech, misinformation, and other potentially harmful material, while many on the right argue that platforms are doing far too much—to the point where “Big Tech” is censoring legitimate speech and effectively infringing on Americans’ fundamental rights.

There is considerable pressure on policymakers to pass laws addressing the ways in which social media platforms operate–and especially how those platforms moderate incendiary posts. As the article notes, the electorate's incorrect beliefs about the First Amendment add to "the political and economic challenges of building better online speech governance."

What far too many Americans don’t understand about freedom of speech–and for that matter, not only the First Amendment but the entire Bill of Rights–is that the liberties being protected are freedom from government action. If the government isn’t involved, neither is the Constitution.

I still remember a telephone call I received when I directed Indiana's ACLU. A young man wanted the ACLU to sue White Castle, which had refused to hire him because they found the tattoos covering him "unappetizing." He was sure they couldn't do that, because he had a First Amendment right to express himself. I had to explain to him that White Castle also had a First Amendment right to control its messages. Had the legislature or City-County Council forbidden citizens to communicate via tattooing, that would have been government censorship, and would have violated the First Amendment.

That young man’s belief that the right to free speech is somehow a free-floating right against anyone trying to restrict his communication is a widespread and pernicious misunderstanding, and it complicates discussion of the available approaches to content moderation on social media platforms. Facebook, Twitter and the rest are, like newspaper and magazine publishers, private entities–like White Castle, they have their own speech rights. As the author of the Brookings article writes,

Nonetheless, many Americans erroneously believe that the content-moderation decisions of digital platforms violate ordinary people’s constitutionally guaranteed speech rights. With policymakers at all levels of government working to address a diverse set of harms associated with platforms, the electorate’s mistaken beliefs about the First Amendment could add to the political and economic challenges of building better online speech governance.

The author conducted research into three related questions: How common is this inaccurate belief? Does it correlate with lower support for content moderation? And if it does, does education about the actual scope of First Amendment speech protection increase support for platforms to engage in content moderation?

The results of that research were, as academics like to say, “mixed,” especially for proponents of more and better civic education.

Fifty-nine percent of participants answered the constitutional question incorrectly, and those participants were less likely to support decisions by platforms to ban particular users. As the author noted, misunderstanding of the First Amendment was both very common and linked to lower support for content moderation. Theoretically, then, educating people about the First Amendment should increase support for content moderation.

However, it turned out that such training actually lowered support for content moderation (interestingly, that decrease in support was "linked to Republican identity").

Why might that be? The author speculated that respondents might reduce their support for content moderation once they realized that there is less legal recourse than expected when they find such moderation uncongenial to their political preferences.

In other words, it is reasonable to be more skeptical of private decisions about content moderation once one becomes aware that the legal protections for online speech rights are less than one had previously assumed. …

Republican politicians and the American public alike express the belief that platform moderation practices favor liberal messaging, despite strong empirical evidence to the contrary. Many Americans likely hold such views at least in part due to strategically misleading claims by prominent politicians and media figures, a particularly worrying form of misinformation. Any effort to improve popular understandings of the First Amendment will therefore need to build on related strategies for countering widespread political misinformation.

Unfortunately, when Americans inhabit alternative realities, even civic education runs into a wall….
