Flooding The Zone

Times are tough for us Free Speech defenders….

It’s bad enough that so few Americans understand either the protections or the limitations of the First Amendment’s Free Speech provisions. Fewer still can distinguish between hate speech and hate crimes. And even lawyers dedicated to the protection of our constitutional right to publicly opine and debate recognize the existence of grey zones.

When the Internet first became ubiquitous, I celebrated this new mechanism for expression, seeing it as a welcome addition to the “marketplace of ideas.” What I didn’t see was its potential for the spread of deliberate propaganda.

Color me disabused.

Steve Bannon coined the phrase that explains what we are seeing: “flooding the zone with shit.” Rather than inventing a story to counter explanations with which one disagrees, the new approach–facilitated by bots and AI–simply produces immense amounts of conflicting and phony “information” which is then uploaded to social media and other sites. The goal is no longer to make people believe “story A” rather than “story B.” The goal is to create a population that no longer knows what to believe.

It’s a tactic that has infected American politics and made governing close to impossible–but it is not a tactic confined to the U.S. It’s global.

Heather Cox Richardson has summed up the resulting threat:

A report published last week by the European Commission, the body that governs the European Union, says that when X, the company formerly known as Twitter, got rid of its safety standards, Russian disinformation on the site took off. Lies about Russia’s war against Ukraine spread to at least 165 million people in the E.U. and allied countries like the U.S., and garnered at least 16 billion views. The study found that Instagram and Facebook, both owned by Meta, as well as Telegram, also spread pro-Kremlin propaganda that uses hate speech and boosts extremists.

The report concluded that “the Kremlin’s ongoing disinformation campaign not only forms an integral part of Russia’s military agenda, but also causes risks to public security, fundamental rights and electoral processes” in the E.U. The report’s conclusions also apply to the U.S., where the far right is working to undermine U.S. support for Ukraine by claiming—falsely—that U.S. aid to Ukraine means the Biden administration is neglecting emergencies at home, like the fires last month in Maui.

Russian operatives famously flooded social media with disinformation to influence the 2016 U.S. election, and by 2022 the Federal Bureau of Investigation (FBI) warned that China had gotten into the act. Today, analyst Clint Watts of Microsoft reported that in the last year, China has honed its ability to generate artificial images that appear to be U.S. voters, using them to stoke “controversy along racial, economic, and ideological lines.” It uses social media accounts to post divisive, AI-created images that attack political figures and iconic U.S. symbols.

Once upon a time, America could depend upon two large oceans to protect us from threats from abroad. Those days are long gone, and our contemporary isolationists–who refuse to understand, for example, how Russia’s invasion of Ukraine could affect us–utterly fail to recognize that denying our new global reality doesn’t make it go away.

The internet makes it possible to deliver disinformation on a scale never previously available–or imagined. And it poses a very real problem for those of us who defend freedom of speech, because most of the proposed “remedies” I’ve seen would make things worse.

This nation’s Founders weren’t naive; they understood that ideas are powerful, and that bad ideas can do real harm. They opted for freedom of speech–defined in our system as freedom from government censorship–because they also recognized that allowing government to decide which ideas could be exchanged would be much more harmful.

I still agree with the Founders’ decision, but even if I didn’t, current communication technology has largely made government control impossible. (I still recall a conversation I had with two students at a Chinese university that had invited me to speak. I asked them about China’s control of the Internet and they laughed, telling me that any “tech savvy” person could evade state controls–and that many did. And that was some 18 years ago.)

At this point, we have to depend upon those who manage social media platforms to monitor what their users post, which is why egomaniacs like Elon Musk–who champions a “free speech” he clearly doesn’t understand–are so dangerous.

Ultimately, we will have to depend upon the ability of the public to separate the wheat from the chaff–and the ability to do that requires a level of civic literacy that has thus far eluded us….


It Isn’t Just In MAGA-World

Let’s be honest: believing the people who tell you what you want to hear is a trait shared by all humans–Left, Right and Center. There’s a reason researchers study confirmation bias–the current terminology for what we used to call “cherry-picking the facts.”

Just one recent example: MAGA folks who are frantic to believe that Joe Biden is just as corrupt as Donald Trump (okay, maybe not quite that corrupt…) have latched onto a report issued by James Comer, a Republican House member determined to find something to support that accusation. Unfortunately, as TNR (among many other media outlets) has reported, there just isn’t anything that we might call “evidence” to support that desired belief.

The House GOP accused Joe Biden and his family on Wednesday of engaging in business with foreign entities—but were unable to provide any actual evidence linking the president to any wrongdoing.

House Oversight Committee Chair James Comer released a 65-page memo detailing a sprawling investigation into Biden and some of his relatives, particularly his son Hunter Biden. Nowhere in the massive document was there a specific allegation of a crime committed by Biden or any of his relatives.

During a press conference explaining the investigation, Comer was asked if he had evidence directly linking Biden to corruption. The Kentucky Republican hemmed and hawed but ultimately admitted he didn’t.

It’s easy enough to see confirmation bias at work when a commenter to this blog “cites” to Comer, a lawmaker who has publicly admitted that he “intuited” misbehavior by the Biden family, despite the fact that even Fox “News” personalities have admitted that there’s no there there. But it isn’t only folks on the Right who engage in confirmation bias–and the strength of that human impulse to cherry-pick is about to get a test on steroids.

Researchers have recently warned about the likely misuse of AI–artificial intelligence–in producing misleading and dishonest political campaigns.

Computer engineers and tech-inclined political scientists have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video and audio realistic enough to fool voters and perhaps sway an election.

The synthetic images that emerged were often crude, unconvincing and costly to produce, especially when other kinds of misinformation were so inexpensive and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away.

No more.

Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low.

The implications for the 2024 campaigns and elections are as large as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts or videos, it also could be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen.

“We’re not prepared for this,” warned A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. “To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it’s going to have a major impact.”

Some of the ways in which AI can mislead voters include automated robocall messages that use a (simulated) candidate’s voice and instruct voters to cast their ballots on the wrong date, and phony audio recordings that make it sound as if a candidate were expressing racist views. AI can easily produce video footage showing someone giving a speech or interview that they never gave, and it would be especially simple for AI to fake images designed to look like local news reports making a variety of false claims…. The possibilities are endless. And what happens if an international entity–a cybercriminal or a nation state–impersonates someone?

AI-generated political disinformation already has gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children supposedly learning satanism in libraries.

If we have trouble now knowing who or what to believe, today’s confusion over what constitutes “fact” and what doesn’t is about to be eclipsed by a world largely invented by digital liars employing tools we’ve never before encountered.

If regulators can’t figure out how to address the dangers inherent in this new technology–and quickly!–artificial intelligence plus confirmation bias may just put an end to whatever remains of America’s rational self-government.


We’re in Sci-Fi Territory…

Time on the treadmill goes faster when you listen to a podcast, but the other day, I should have listened to music. Instead, I listened to Ezra Klein and his guest discuss AI (Artificial Intelligence).

In case you’ve missed the mountain of reporting, recriminating, pooh-poohing and dark prophesying, let me share the podcast’s introduction.

OpenAI last week released its most powerful language model yet: GPT-4, which vastly outperforms its predecessor GPT-3.5 on a variety of tasks.

GPT-4 can pass the bar exam in the 90th percentile, while the previous model struggled, around the 10th percentile. GPT-4 scored in the 88th percentile on the LSAT, up from GPT-3.5’s 40th percentile. And on the advanced sommelier theory test, GPT-4 performed better than 77 percent of test takers. (GPT-3.5 hovered around 46 percent.) These are stunning results — not just what the model can do but also the rapid pace of progress. And OpenAI’s ChatGPT and other chatbots are just one example of what recent A.I. systems can achieve.

Every once in a while, a commenter to this blog will say “I’m glad I’m old.” Given the enormity of change we are likely to see over the next decade, I understand the appeal of the sentiment. You really need to listen to the entire podcast to understand both the potential benefits and the huge dangers, but the observation that really took me aback was the claim that, right now, AI can do any job that humans can do remotely.

Think about that.

In 2018, researchers reported that nine out of ten manufacturing job losses since 2000 were attributable to automation. That same year, Pew asked some 1,900 experts to predict the impact of emerging technologies on employment; half predicted large-scale replacement of both white- and blue-collar workers by robots and “digital agents,” and scholars at Oxford warned that half of all American jobs were at risk.

It would be easy to dismiss those findings and predictions–after all, where are those self-driving cars we were promised? But those warnings were issued before the accelerated development of AI, and before AI systems could be used to develop subsequent AI generations without human programmers.

Many others who’ve been following the trajectory of AI progress describe the technology’s uses–and potential misuses–in dramatic terms.

In his op-eds, Tom Friedman usually conveys an “I’m on top of it” attitude (one I find somewhat off-putting), but that sense was absent from his recent essay on AI. 

I had a most remarkable but unsettling experience last week. Craig Mundie, the former chief research and strategy officer for Microsoft, was giving me a demonstration of GPT-4, the most advanced version of the artificial intelligence chatbot ChatGPT, developed by OpenAI and launched in November. Craig was preparing to brief the board of my wife’s museum, Planet Word, of which he is a member, about the effect ChatGPT will have on words, language and innovation.

“You need to understand,” Craig warned me before he started his demo, “this is going to change everything about how we do everything. I think that it represents mankind’s greatest invention to date. It is qualitatively different — and it will be transformational.”

Large language models like ChatGPT will steadily increase in their capabilities, Craig added, and take us “toward a form of artificial general intelligence,” delivering efficiencies in operations, ideas, discoveries and insights “that have never been attainable before across every domain.”

The rest of the column described the “demo.” It was gobsmacking.

What happens if and when very few humans are required to run the world– when most jobs (not just those requiring manual labor, but jobs we haven’t previously thought of as threatened) disappear?

The economic implications are staggering enough, but a world where paid labor is rare would require a significant paradigm shift for the millions of humans who find purpose and meaning in their work. Somehow, I doubt that they will all turn to art, music or other creative pursuits to fill the void…

I lack the capacity to envision the changes that are barreling down on (unsuspecting, unprepared) us–changes that will require my grandchildren to occupy (and hopefully thrive in) a world I can’t even imagine.

If we’re entering a world previously relegated to science fiction, maybe we need to consider applying and adapting Asimov’s three laws of robotics:

1) A robot (or any AI) may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot (or any AI) must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot (or any AI) must protect its own existence as long as such protection does not conflict with the First or Second Law.
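
For readers who think in code, the priority ordering is the whole point of those laws, so here is a minimal, purely illustrative Python sketch of them as ordered rules. Everything in it is hypothetical–no real AI system exposes predicates like these, and deciding whether an action “harms a human” is precisely the unsolved problem:

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool          # Hypothetical: would this injure a person?
    prevents_human_harm: bool  # Hypothetical: would refusing it allow a person to come to harm?
    ordered_by_human: bool     # Hypothetical: was this an explicit human instruction?
    destroys_self: bool        # Hypothetical: would this destroy the AI itself?

def permitted(action: Action) -> bool:
    # First Law: never injure a human being...
    if action.harms_human:
        return False
    # ...or, through inaction, allow a human being to come to harm.
    if action.prevents_human_harm:
        return True
    # Second Law: obey human orders (First Law conflicts were already vetoed above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two Laws.
    return not action.destroys_self

The sketch shows that the laws are a strict hierarchy rather than three independent rules–and that all the hard moral work is hidden inside those boolean flags.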

Or maybe it’s already too late…..


Messing With Our Minds

As if the websites peddling conspiracy theories and political propaganda weren’t enough, we now have to contend with “deepfakes.” Deepfakes, according to the Brookings Institution, are

videos that have been constructed to make a person appear to say or do something that they never said or did. With artificial intelligence-based methods for creating deepfakes becoming increasingly sophisticated and accessible, deepfakes are raising a set of challenging policy, technology, and legal issues.

Deepfakes can be used in ways that are highly disturbing. Candidates in a political campaign can be targeted by manipulated videos in which they appear to say things that could harm their chances for election. Deepfakes are also being used to place people in pornographic videos that they in fact had no part in filming.

Because they are so realistic, deepfakes can scramble our understanding of truth in multiple ways. By exploiting our inclination to trust the reliability of evidence that we see with our own eyes, they can turn fiction into apparent fact. And, as we become more attuned to the existence of deepfakes, there is also a subsequent, corollary effect: they undermine our trust in all videos, including those that are genuine. Truth itself becomes elusive, because we can no longer be sure of what is real and what is not.

The linked article notes that researchers are trying to devise technologies to detect deepfakes, but until there are apps or other tools that will identify these very sophisticated forgeries, we are left with “legal remedies and increased awareness,” neither of which is very satisfactory.

We already inhabit an information environment that has done more damage to social cohesion than any previous effort to divide and mislead us. Thanks to the ubiquity of the Internet and social media (and the demise of media that can genuinely be considered “mass”), we are all free to indulge our confirmation biases–free to engage in what a colleague dubs “motivated reasoning.” It has become harder and harder to separate truth from fiction, moderate spin from outright propaganda.

One result is that thoughtful people–people who want to be factually accurate and intellectually honest–are increasingly unsure of what they can believe.

What makes this new fakery especially dangerous is that, as the linked article notes, most of us do think that “seeing is believing.” We are far more apt to accept visual evidence than other forms of information. There are already plenty of conspiracy sites that offer altered photographic “evidence”–of the aliens who landed at Roswell, of purportedly criminal behavior by public figures, etc. Now people intent on deception have the ability to make those alterations virtually impossible to detect.

Even if technology is developed that can detect fakery, will “motivated” reasoners rely on it?

Will people be more likely to believe a deepfake or a detection algorithm that flags the video as fabricated? And what should people believe when different detection algorithms—or different people—render conflicting verdicts regarding whether a video is genuine?
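
To make that dilemma concrete: even averaging conflicting verdicts requires deciding, in advance, how much to trust each detector. A tiny Python sketch–with invented scores and no real detection library–makes the point:

def combined_fake_probability(scores, weights=None):
    # Weighted average of detector scores in [0, 1], where 1.0 means "fake".
    if weights is None:
        weights = [1.0] * len(scores)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Three hypothetical detectors disagree about the same video:
verdicts = [0.92, 0.15, 0.60]  # one says fake, one says genuine, one is unsure

print(combined_fake_probability(verdicts))             # equal trust: about 0.56
print(combined_fake_probability(verdicts, [3, 1, 1]))  # trust the first most: about 0.70

The toy example’s verdict depends entirely on the weights–and nothing in the video itself tells a “motivated” viewer what those weights should be.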

We are truly entering a new and unsettling “hall of mirrors” version of reality.
