Flooding The Zone

Times are tough for us Free Speech defenders ….

It’s bad enough that so few Americans understand either the protections or the limitations of the First Amendment’s Free Speech provisions. Fewer still can distinguish between hate speech and hate crimes. And even lawyers dedicated to the protection of our constitutional right to publicly opine and debate recognize the existence of grey zones.

When the Internet first became ubiquitous, I celebrated this new mechanism for expression. I saw it as a welcome new development in the “marketplace of ideas.”  What I didn’t see was its potential for the spread of deliberate propaganda.

Color me disabused.

Steve Bannon coined the phrase that explains what we are seeing: “flooding the zone with shit.” Rather than inventing a story to counter explanations with which one disagrees, the new approach–facilitated by bots and AI–simply produces immense amounts of conflicting and phony “information,” which is then uploaded to social media and other sites. The goal is no longer to make people believe “story A” rather than “story B.” The goal is to create a population that no longer knows what to believe.

It’s a tactic that has infected American politics and made governing close to impossible–but it is not a tactic confined to the U.S. It’s global.

Heather Cox Richardson has summed up the resulting threat:

A report published last week by the European Commission, the body that governs the European Union, says that when X, the company formerly known as Twitter, got rid of its safety standards, Russian disinformation on the site took off. Lies about Russia’s war against Ukraine spread to at least 165 million people in the E.U. and allied countries like the U.S., and garnered at least 16 billion views. The study found that Facebook and Instagram, both owned by Meta, as well as Telegram, also spread pro-Kremlin propaganda that uses hate speech and boosts extremists.

The report concluded that “the Kremlin’s ongoing disinformation campaign not only forms an integral part of Russia’s military agenda, but also causes risks to public security, fundamental rights and electoral processes” in the E.U. The report’s conclusions also apply to the U.S., where the far right is working to undermine U.S. support for Ukraine by claiming—falsely—that U.S. aid to Ukraine means the Biden administration is neglecting emergencies at home, like the fires last month in Maui.

Russian operatives famously flooded social media with disinformation to influence the 2016 U.S. election, and by 2022 the Federal Bureau of Investigation (FBI) warned that China had gotten into the act. Today, analyst Clint Watts of Microsoft reported that in the last year, China has honed its ability to generate artificial images that appear to be U.S. voters, using them to stoke “controversy along racial, economic, and ideological lines.” It uses social media accounts to post divisive, AI-created images that attack political figures and iconic U.S. symbols.

Once upon a time, America could depend upon two large oceans to protect us from threats from abroad. Those days are long gone, and our contemporary isolationists–who refuse to understand, for example, how Russia’s invasion of Ukraine could affect us–utterly fail to recognize that denying our new global reality doesn’t make it go away.

The internet makes it possible to deliver disinformation on a scale never previously available–or imagined. And it poses a very real problem for those of us who defend freedom of speech, because most of the proposed “remedies” I’ve seen would make things worse.

This nation’s Founders weren’t naive; they understood that ideas are powerful, and that  bad ideas can do real harm. They opted for freedom of speech–defined in our system as freedom from government censorship– because they also recognized that allowing government to decide which ideas could be exchanged would be much more harmful.

I still agree with the Founders’ decision, but even if I didn’t, current communication technology has largely made government control impossible. (I still recall a conversation I had with two students at a Chinese university that had invited me to speak. I asked them about China’s control of the Internet and they laughed, telling me that any “tech savvy” person could evade state controls–and that many did. And that was some 18 years ago.)

At this point, we have to depend upon those who manage social media platforms to monitor what their users post, which is why egomaniacs like Elon Musk–who champions a “free speech” he clearly doesn’t understand–are so dangerous.

Ultimately, we will have to depend upon the ability of the public to separate the wheat from the chaff–and the ability to do that requires a level of civic literacy that has thus far eluded us….


Computational Propaganda

[Sorry to clutter your inboxes; I published this in error. Consider it an “extra.”]

I am now officially befuddled. Out of my depth. And very worried.

Politico has published the results of an investigation that the magazine conducted into the popularity (in social-media jargon, the “virality”) of the hashtag “release the memo.” It found that the House Intelligence Committee’s vote to release the memo

marked the culmination of a targeted, 11-day information operation that was amplified by computational propaganda techniques and aimed to change both public perceptions and the behavior of American lawmakers….Computational propaganda—defined as “the use of information and communication technologies to manipulate perceptions, affect cognition, and influence behavior”—has been used, successfully, to manipulate the perceptions of the American public and the actions of elected officials.

I’ve been struggling just to understand what “bots” are. The New York Times’ recent lengthy look at these artificial “followers”–you can evidently buy followers to pump up your perceived popularity–helped to an extent, but left me thinking that these “fake” followers were mostly a form of dishonest puffery by celebrities and would-be celebrities.

Politico disabused me.

The publication’s analysis showed how the #releasethememo campaign had been fueled by computational propaganda. As the introduction says, “It is critical that we understand how this was done and what it means for the future of American democracy.”

I really encourage readers to click through and read the article in its entirety. If you are like me, the technical aspects require slow and careful reading. Here, however, are a few of the findings that particularly worry me–and should worry us all.

Whether it is Republican or Russian or “Macedonian teenagers”—it doesn’t really matter. It is computational propaganda—meaning artificially amplified and targeted for a specific purpose—and it dominated political discussions in the United States for days. The #releasethememo campaign came out of nowhere. Its movement from social media to fringe/far-right media to mainstream media was so swift that both the speed and the story itself became impossible to ignore. The frenzy of activity spurred lawmakers and the White House to release the Nunes memo, which critics say is a purposeful misrepresentation of classified intelligence meant to discredit the Russia probe and protect the president.

And this, ultimately, is what everyone has been missing in the past 14 months about the use of social media to spread disinformation. Information and psychological operations being conducted on social media—often mischaracterized by the dismissive label “fake news”—are not just about information, but about changing behavior. And they can be surprisingly effective.

An original tweet from a right-wing conspiracy buff with few followers was amplified by an account named KARYN.

The KARYN account is an interesting example of how bots lay a groundwork of information architecture within social media. It was registered in 2012, tweeting only a handful of times between July 2012 and November 2013 (mostly against President Barack Obama and in favor of the GOP). Then the account goes dormant until June 2016—the period that was identified by former FBI Director James Comey as the beginning of the most intense phase of Russian operations to interfere in the U.S. elections. The frequency of tweets builds from a few a week to a few a day. By October 11, there are dozens of posts a day, including YouTube videos, tweets to political officials and influencers and media personalities, and lots of replies to posts by the Trump team and related journalists. The content is almost entirely political, occasionally mentioning Florida, another battleground state, and sometimes posting what appear to be personal photos (which, if checked, come from many different phones and sources and appear “borrowed”). In October 2016, KARYN is tweeting a lot about Muslims/radical Islam attacking democracy and America; how Bill Clinton had lots of affairs; alleged financial wrongdoing on Clinton’s part; and, of course, WikiLeaks.

There’s much more evidence that KARYN is a bot—a bot that follows a random Republican guy in Michigan with 70-some followers. Why?

It would be fair to say that if you were setting up accounts to track views representative of a Trump-supporter, @underthemoraine would be a pulse to keep a finger on—the virtual Michigan “man in the diner” or “taxi driver” that journalists are forever citing as proof of conversations with real, nonpolitical humans in swing states. KARYN follows hundreds of such accounts, plus conservative media, and a lot of other bots.

KARYN triggers other bots and political operatives, and they combine to create a “tweet storm” or viral message. Many of these accounts are “organizers and amplifiers”—accounts with “human conductors” that are partly automated and linked to networks that automatically amplify content.
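To make the idea of “partly automated” amplification a bit more concrete, here is a minimal sketch–my own illustration, not anything from the Politico analysis–of the sort of behavioral heuristic researchers use to flag accounts like KARYN: a long dormant stretch followed by a sudden, sustained burst of posting. The `looks_automated` function, its thresholds, and the account data are all invented for the example.

```python
from datetime import datetime, timedelta

# A toy heuristic, not a real bot detector: flag an account whose history
# shows a long dormant gap followed by a sustained high-frequency burst of
# posts -- the pattern Politico describes for the KARYN account.
# The threshold numbers are illustrative assumptions.

def looks_automated(post_times, dormant_days=900, burst_rate=20):
    """Return True if the account went quiet for a long stretch and then
    averaged `burst_rate` or more posts per day over its last 30 days."""
    if len(post_times) < 2:
        return False
    times = sorted(post_times)
    # Longest gap (in days) between consecutive posts -- the "dormant" period.
    longest_gap = max((b - a).days for a, b in zip(times, times[1:]))
    # Posts falling within the final 30 days of the account's activity.
    recent = [t for t in times if t >= times[-1] - timedelta(days=30)]
    return longest_gap >= dormant_days and len(recent) / 30 >= burst_rate

# Invented example: a handful of posts in 2012-2013, years of silence,
# then roughly 40 posts a day through October 2016.
sparse = [datetime(2012, 7, 1), datetime(2013, 11, 1)]
burst = [datetime(2016, 10, 1) + timedelta(minutes=36 * i) for i in range(1200)]
print(looks_automated(sparse + burst))  # True
```

Real analyses weigh many more signals than this–coordinated timing across accounts, recycled or “borrowed” profile photos, networks of accounts that retweet one another automatically–but the underlying idea is the same: it is behavior, not content, that gives the bots away.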

The article is very long, and very detailed–and I hope many of you will read it in its entirety. For now, I will leave you with the concluding paragraphs:

So what are the lessons of #releasethememo? Regardless of how much of the campaign was American and how much was Russian, it’s clear there was a massive effort to game social media and put the Nunes memo squarely on the national agenda—and it worked to an astonishing degree. The bottom line is that the goals of the two overlapped, so the origin—human, machine or otherwise—doesn’t actually matter. What matters is that someone is trying to manipulate us, tech companies are proving hopelessly unable or unwilling to police the bad actors manipulating their platforms, and politicians are either clueless about what to do about computational propaganda or—in the case of #releasethememo—are using it to achieve their goals. Americans are on their own.

And, yes, that also reinforces the narrative the Russians have been pushing since 2015: You’re on your own; be angry, and burn things down. Would that a leader would step into this breach, and challenge the advancing victory of the bots and the cynical people behind them.


Weaponizing Speech

A couple of weeks ago, I came across a provocative article by Tim Wu, a law professor at Columbia University, titled “How Twitter Killed the First Amendment.” He began with the question:

You need not be a media historian to notice that we live in a golden age of press harassment, domestic propaganda and coercive efforts to control political debate. The Trump White House repeatedly seeks to discredit the press, threatens to strip broadcasters of their licenses and calls for the firing of journalists and football players for speaking their minds. A foreign government tries to hack our elections, and journalists and public speakers are regularly attacked by vicious, online troll armies whose aim is to silence opponents.

In this age of “new” censorship and blunt manipulation of political speech, where is the First Amendment?

Where, indeed? As Wu notes, the First Amendment was written for a different set of problems in a very different world, and much of the jurisprudence it has spawned deals with issues far removed from the ones that bedevil us today.

As my students are all too often surprised to learn, the Bill of Rights protects us against government misbehavior–in the case of our right to free speech, the First Amendment prohibits government censorship. For the most part, in this age of Facebook and Twitter and other social media, the censors come from the private sector–or in some cases, from governments other than our own, through various internet platforms.

The Russian government was among the first to recognize that speech itself could be used as a tool of suppression and control. The agents of its “web brigade,” often called the “troll army,” disseminate pro-government news, generate false stories and coordinate swarm attacks on critics of the government. The Chinese government has perfected “reverse censorship,” whereby disfavored speech is drowned out by “floods” of distraction or pro-government sentiment. As the journalist Peter Pomerantsev writes, these techniques employ information “in weaponized terms, as a tool to confuse, blackmail, demoralize, subvert and paralyze.”

It’s really difficult for most Americans to get our heads around this new form of warfare. We understand many of the negative effects of our fragmented and polarized media environment, the ability to live in an information bubble, to “choose our news”–and we recognize the role social media plays in constructing and reinforcing that bubble. It’s harder to visualize how Russia’s infiltration of Facebook and Twitter might have influenced our election.

Wu wants law enforcement to do more to protect journalists from cyber-bullying and threats of violence. And he wants Congress to step in to regulate social media (lots of luck with that in this anti-regulatory age). For example, he argues that much too little is being done to protect American politics from foreign attack.

The Russian efforts to use Facebook, YouTube and other social media to influence American politics should compel Congress to act. Social media has as much impact as broadcasting on elections, yet unlike broadcasting it is unregulated and has proved easy to manipulate. At a minimum, new rules should bar social media companies from accepting money for political advertising by foreign governments or their agents. And more aggressive anti-bot laws are needed to fight impersonation of humans for propaganda purposes.

When Trump’s White House uses Twitter to encourage people to punish Trump’s critics — Wu cites the President’s demand that the N.F.L., on pain of tax penalties, censor players — “it is wielding state power to punish disfavored speech. There is precedent for such abuses to be challenged in court.”

It is hard to argue with Wu’s conclusion that

no defensible free-speech tradition accepts harassment and threats as speech, treats foreign propaganda campaigns as legitimate debate or thinks that social-media bots ought to enjoy constitutional protection. A robust and unfiltered debate is one thing; corruption of debate itself is another.

The challenge will be to craft legislation that addresses these unprecedented issues effectively–without inadvertently limiting the protections of the First Amendment.

We have some time to think about this, because the current occupants of both the White House and the Congress are highly unlikely to act. In the meantime, Twitter is the weapon and tweets are the “incoming.”
