As we head into what promises (threatens?) to be a pivotal year for American democratic governance, we do so in an environment unlike any we have previously occupied. The “disinformation” industry has come into its own over the past several years–filling the void created by the near-demise of local journalism and taking advantage of the enormous reach of social media.
The most recent weapons against facts and accuracy are visual: “deepfakes,” in which the alterations are nearly impossible to detect. How those fabrications will influence people who have lived in a world where “seeing is believing” is difficult to predict.
One possibility considered by the scholars quoted below: technology might “erode the evidentiary value of video and audio,” with the result that we begin seeing them the way we now see drawings or paintings, rather than as factual records. In that case, all bets are off.
As the article put it,
Normally, when you receive new information, you decide whether or not to believe it in part based on how much you trust the person telling you.
“But there are cases where evidence for something is so strong that it overrides these social effects,” says Cailin O’Connor, a philosopher at UC Irvine. For decades, those cases have included video and audio evidence.
These recordings have been “backstops,” Rini says. But we’re hurtling toward a crisis that could quickly erode our ability to rely on them, leaving us leaning only on the reputation of the messenger.
One huge implication is that people may be more willing to engage in bad behavior if they know they can later disavow any recording of their mischief.
Just think how advances in deepfake technology can affect political campaigns.
Just in time for the presidential election, the Brookings Institution shares news about a new technique for making deepfakes, invented by Israeli researchers. It creates highly realistic videos by substituting the face of another individual for that of the person who is actually speaking.
Unlike previous methods, this one works on any two people without lengthy training on their particular faces, cutting hours or even days from earlier deepfake processes and requiring no expensive hardware. Because the Israeli researchers have released their model publicly–a move they justify as essential for developing defenses against it–the proliferation of this cheap and easy deepfake technology appears inevitable.
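To see how little machinery such a pipeline requires, consider this rough Python sketch. It is illustrative only, not the researchers’ released code: the input file name is invented, the face detector is OpenCV’s standard pretrained cascade, and the actual identity swap–the hard part, performed in the real system by a pretrained neural network–is stood in for by a placeholder that simply blurs each detected face.

```python
import cv2  # OpenCV (pip install opencv-python)

# OpenCV ships this pretrained frontal-face detector with the library.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def identity_swap(face_region):
    """Placeholder for the subject-agnostic swap model: in the real
    pipeline, a pretrained network re-renders this region with the
    target person's identity while keeping the speaker's pose and
    expression. Here we just blur it to mark where that happens."""
    return cv2.GaussianBlur(face_region, (51, 51), 0)

reader = cv2.VideoCapture("speech.mp4")   # hypothetical input clip
fps = reader.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS is unreadable
writer = None
while True:
    ok, frame = reader.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Find every face in the frame and substitute it, frame by frame.
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.3,
                                                  minNeighbors=5):
        frame[y:y + h, x:x + w] = identity_swap(frame[y:y + h, x:x + w])
    if writer is None:
        height, width = frame.shape[:2]
        writer = cv2.VideoWriter("fake.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"),
                                 fps, (width, height))
    writer.write(frame)
reader.release()
if writer is not None:
    writer.release()
```

The point of the sketch is structural: once a pretrained, subject-agnostic model exists, producing a fake is just a frame-by-frame loop on commodity hardware.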
Can videos of Joe Biden using the “n-word” or Bernie Sanders vowing fidelity to communism be far behind? As the Brookings article notes,
If AI is reaching the point where it will be virtually impossible to detect audio and video representations of people saying things they never said (and even doing things they never did), seeing will no longer be believing, and we will have to decide for ourselves—without reliable evidence—whom or what to believe. Worse, candidates will be able to dismiss accurate but embarrassing representations of what they say as fakes, an evasion that will be hard to disprove.
In our incredibly polarized political environment, the temptation to “cherry-pick” information–to indulge the very human impulse toward confirmation bias–is already strong. We are rapidly approaching a time when technology will hand partisans a plausible reason to disbelieve inconvenient news about a preferred candidate, while giving them the “evidence” they want of an opponent’s flaws.
We can also predict that a political party willing to employ gerrymandering, voter suppression and a wide variety of political “dirty tricks” will not hesitate to use these tools.
Uncharted territory, indeed…