What Pictures Do And Don’t Tell Us

As we head into what promises (threatens?) to be a pivotal year for American democratic governance, we do so in an environment unlike any we have previously occupied. The “disinformation” industry has really come into its own over the past several years–filling the void created by the near-demise of local journalism and taking advantage of the enormous influence of social media.

The most recent weapons against facts and accuracy are visual: “deep fakes” whose alterations are nearly impossible to detect. How those fabrications will influence people who have lived in a world where “seeing is believing” is difficult to predict.

In a recent article from Axios Future, philosophers considered the challenge presented by deep fakes.

One possibility they considered: Technology might “erode the evidentiary value of video and audio,” with the result that we begin seeing them the way we now see drawings or paintings — rather than as factual records. In that case, all bets are off.

As the article put it,

Normally, when you receive new information, you decide whether or not to believe it in part based on how much you trust the person telling you.

“But there are cases where evidence for something is so strong that it overrides these social effects,” says Cailin O’Connor, a philosopher at UC Irvine. For decades, those cases have included video and audio evidence.

These recordings have been “backstops,” [philosopher Regina] Rini says. But we’re hurtling toward a crisis that could quickly erode our ability to rely on them, leaving us leaning only on the reputation of the messenger.

One huge implication is that people may be less likely to avoid bad behavior if they know they can later disavow a recording of their mischief.

Just think how technological advances in deep fakes can affect political campaigns.

Just in time for the presidential election, the Brookings Institution shares news about a new technique for making deep fakes, invented by Israeli researchers. It creates highly realistic videos by substituting the face of another individual for the person who is really speaking.

Unlike previous methods, this one works on any two people without extensive, iterated focus on their faces, cutting hours or even days from previous deepfake processes without the need for expensive hardware. Because the Israeli researchers have released their model publicly—a move they justify as essential for defense against it—the proliferation of this cheap and easy deep fake technology appears inevitable.
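To make the face-substitution idea concrete, here is a toy sketch in Python using OpenCV. To be clear, this is not the researchers’ method–their system uses trained generative networks to produce realistic video–it is just a crude, single-frame cut-and-paste showing what “substituting one face for another” means mechanically. The file names are hypothetical placeholders.

import cv2
import numpy as np

# Load OpenCV's bundled frontal-face Haar cascade detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_face(image):
    """Return the bounding box (x, y, w, h) of the first face found."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face detected")
    return faces[0]

# Hypothetical input files: the real speaker and the face to paste in.
speaker = cv2.imread("speaker.jpg")
donor = cv2.imread("donor.jpg")

sx, sy, sw, sh = first_face(speaker)
dx, dy, dw, dh = first_face(donor)

# Resize the donor's face to fit the speaker's face box.
patch = cv2.resize(donor[dy:dy + dh, dx:dx + dw], (sw, sh))

# Poisson (seamless) cloning blends the pasted face into the frame.
mask = np.full(patch.shape, 255, dtype=np.uint8)
center = (sx + sw // 2, sy + sh // 2)
swapped = cv2.seamlessClone(patch, speaker, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("swapped.jpg", swapped)

The gulf between this clumsy paste job and the seamless, works-on-any-two-people video swaps described above is exactly what makes the new technique so alarming.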

Can videos of Joe Biden using the “n word” or Bernie Sanders vowing fidelity to communism be far behind? As the Brookings article notes,

If AI is reaching the point where it will be virtually impossible to detect audio and video representations of people saying things they never said (and even doing things they never did), seeing will no longer be believing, and we will have to decide for ourselves—without reliable evidence—whom or what to believe. Worse, candidates will be able to dismiss accurate but embarrassing representations of what they say as fakes, an evasion that will be hard to disprove.

In our incredibly polarized political environment, the temptation to “cherry pick” information–to give in to the very human impulse to engage in confirmation bias–is already strong. We are rapidly approaching a time when technology will be able to hand partisans a plausible reason to disbelieve inconvenient news about a preferred candidate, while giving others desired “evidence” about an opponent’s flaws.

We can also predict that a political party willing to employ gerrymandering, vote suppression and a wide variety of political “dirty tricks” will not hesitate to use these tools.

Uncharted territory, indeed…


More Of This…

I don’t know about all of you, but I get positively desperate for good news. The American political landscape is so bleak–every day, it seems there is a new report of really egregious wrongdoing: trashing the environment, screwing over students and public education, kicking hungry children off food stamps, the President’s corruption and conflicts of interest…the list is endless, and it’s all aided and abetted by the propaganda that litters the Internet.

As we head into 2020, the effectiveness of that propaganda has been enhanced by “deep fakes”–doctored videos and images so realistic that the distortions are difficult to detect.

Rather than merely sighing and wondering how effective this new method of disinformation will prove to be, a couple of universities are doing something about it, Governing Magazine reports.

If you were under any illusion that online hooey peaked with the 2016 election, brace yourself for the era of “deepfakes” — fabricated videos so realistic they can put words in the mouths of politicians or anyone else that they never said.

As the 2020 election approaches, a new University of Washington initiative aims to combat the wave of increasingly sophisticated digital counterfeiting and misinformation coursing through social media and give the public tools to sort fact from fakery.

The Center for an Informed Public (CIP) has been seeded with $5 million from the John S. and James L. Knight Foundation, part of a $50 million round of grants awarded this year to 11 U.S. universities and research institutions to study how technology is transforming democracy.

The mission is to use the new research to help everyone vulnerable to being fooled by online manipulation — whether it’s schoolkids unsure about which news sites are trustworthy or baby boomers uncritically sharing fraudulent news stories on Facebook.

Kate Starbird is a UW associate professor and one of the CIP’s principal researchers. She has spent years studying the spread of conspiracy theories and deliberate misinformation in the wake of crisis events like school shootings and the 2013 Boston Marathon bombing, and she says this is “not a K-12 problem. It’s a K-99 problem.”

Starbird and other researchers have examined millions of tweets and discovered how various actors, including foreign intelligence operatives, have worked to intensify political divisions in America.

In 2016, for example, Twitter accounts associated with Russia’s Internet Research Agency impersonated activists supportive and critical of the #BlackLivesMatter movement. Tweets from those accounts became some of the most widely shared. “Russian agents did not create political division in the United States, but they were working to encourage it,” Starbird recounted in a Medium post about the research.

Fighting the bots and trolls and pervasive propaganda is essential–but it won’t be easy.

The CIP grew in part out of the UW’s popular course, “Calling BS in the Age of Big Data,” created two years ago by [Information School professor Jevin] West and biology professor Carl Bergstrom. The course is in such demand that its 160 seats filled within one minute of registration opening this quarter, West said.

Sam Gill, who leads community and national initiatives for the Knight Foundation, said he sees the new UW center as “sort of like the first public health school in the country for the Internet.”

The link between quality information and public health is not merely metaphorical, as Internet-fueled misconceptions about vaccines have contributed to outbreaks of measles and other diseases once thought eradicated. An ongoing measles outbreak in Samoa has killed 50 children.

Similarly, misinformation has made it harder for the U.S. to combat climate change, which scientists predict will wreak havoc in the coming decades unless big cuts are made in greenhouse-gas emissions. Emma Spiro, an assistant professor in the Information School and another CIP researcher, said there is already talk of collaboration with the UW’s EarthLab research institute to address climate knowledge.

I don’t think it is hyperbole to say that a war is being fought between fact and deliberate fiction. We need new weapons to win that war.

I hope this very promising effort to create those weapons will be joined by many others.
