Let’s be honest: believing the people who tell you what you want to hear is a trait shared by all humans–Left, Right and Center. There’s a reason researchers study confirmation bias–the current terminology for what we used to call “cherry-picking the facts.”
Just one recent example: MAGA folks who are frantic to believe that Joe Biden is just as corrupt as Donald Trump (okay, maybe not quite that corrupt…) have latched onto a report issued by James Comer, a Republican House member determined to find something to support that accusation. Unfortunately, as TNR (among many other media outlets) has reported, there just isn’t anything that we might call “evidence” to support that desired belief.
The House GOP accused Joe Biden and his family on Wednesday of engaging in business with foreign entities—but were unable to provide any actual evidence linking the president to any wrongdoing.
House Oversight Committee Chair James Comer released a 65-page memo detailing a sprawling investigation into Biden and some of his relatives, particularly his son Hunter Biden. Nowhere in the massive document was there a specific allegation of a crime committed by Biden or any of his relatives.
During a press conference explaining the investigation, Comer was asked if he had evidence directly linking Biden to corruption. The Kentucky Republican hemmed and hawed but ultimately admitted he didn’t.
It’s easy enough to see confirmation bias at work when a commenter to this blog “cites” Comer, a lawmaker who has publicly admitted that he “intuited” misbehavior by the Biden family, despite the fact that even Fox “News” personalities have admitted that there’s no there there. But it isn’t only folks on the Right who engage in confirmation bias–and the strength of that human impulse to cherry-pick is about to get a test on steroids.
Researchers have recently warned about the likely misuse of AI–artificial intelligence–in producing misleading and dishonest political campaigns.
Computer engineers and tech-inclined political scientists have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video and audio that were realistic enough to fool voters and perhaps sway an election.
The synthetic images that emerged were often crude, unconvincing and costly to produce, especially when other kinds of misinformation were so inexpensive and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away.
Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low.
The implications for the 2024 campaigns and elections are as large as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts or videos, it also could be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen.
“We’re not prepared for this,” warned A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. “To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it’s going to have a major impact.”
Some of the ways in which AI can mislead voters include the production of automated robocall messages that use a (simulated) candidate’s voice and instruct voters to cast their ballots on the wrong date, or phony audio recordings that sound as if a candidate were expressing racist views. AI can easily produce video footage showing someone giving a speech or interview that they never gave. It would be especially simple for AI to fake images designed to look like local news reports making a variety of false claims…. The possibilities are endless. And what happens if an international entity — a cybercriminal or a nation state — impersonates someone?
AI-generated political disinformation already has gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children supposedly learning satanism in libraries.
If we have trouble now knowing who or what to believe, today’s confusion over what constitutes “fact” and what doesn’t is about to be eclipsed by a world largely invented by digital liars employing tools we’ve never before encountered.
If regulators can’t figure out how to address the dangers inherent in this new technology–and quickly!–artificial intelligence plus confirmation bias may just put an end to whatever remains of America’s rational self-government.