Those Dueling Realities

News literacy matters more than ever–and we live at a time when it is harder and harder to tell truth from fiction.

One example comes from the swamps of the Internet. The link will take you to a doctored photo of actor Sylvester Stallone wearing a t-shirt that says “4 Useless Things: woke people, COVID-19 vaccines, Dr. Anthony Fauci and President Joe Biden.” In the original, authentic photo, Stallone is wearing a plain dark t-shirt.

The News Literacy Project, which issues ongoing reports on this sort of visual misrepresentation, says this about the Stallone t-shirt:

Digitally manipulating photos of celebrities to make it look like they endorse a provocative political message — often on t-shirts — is extremely common. Such posts are designed to resonate with people who have strong partisan views and may share the image without pausing to consider whether it’s authentic. It’s also likely that some of these fakes are marketing ploys to boost sales of t-shirts that are easily found for sale online. For example, this reply to an influential Twitter account includes the same doctored image and a link to a product page where the shirt can be purchased.

It’s bad enough that there are literally thousands of sites using text to promote lies. But people have a well-known bias toward visual information (“Who am I going to believe, you or my lying eyes?” “Seeing is believing.” Etc.). With the availability of “deepfake” technologies, doctored photographs have become easier to produce, more widespread, and much harder to detect.

The Guardian recently reported on the phenomenon, beginning with a definition.

Have you seen Barack Obama call Donald Trump a “complete dipshit”, or Mark Zuckerberg brag about having “total control of billions of people’s stolen data”, or witnessed Jon Snow’s moving apology for the dismal ending to Game of Thrones? Answer yes and you’ve seen a deepfake. The 21st century’s answer to Photoshopping, deepfakes use a form of artificial intelligence called deep learning to make images of fake events, hence the name deepfake. Want to put new words in a politician’s mouth, star in your favourite movie, or dance like a pro? Then it’s time to make a deepfake.

As the article noted, a fair percentage of deepfake videos are pornographic. A firm called Deeptrace identified 15,000 altered videos online in September 2019, and a “staggering 96%” were pornographic. Ninety-nine percent of those “mapped faces from female celebrities on to porn stars.”

As new techniques allow unskilled people to make deepfakes with a handful of photos, fake videos are likely to spread beyond the celebrity world to fuel revenge porn. As Danielle Citron, a professor of law at Boston University, puts it: “Deepfake technology is being weaponised against women.” Beyond the porn there’s plenty of spoof, satire and mischief.

But it isn’t just about videos. Deepfake technology can evidently create convincing phony photos from scratch. The report noted that “Maisy Kinsley”, a supposed Bloomberg journalist who was in fact a deepfake, had even been given profiles on LinkedIn and Twitter.

Another LinkedIn fake, “Katie Jones”, claimed to work at the Center for Strategic and International Studies, but is thought to be a deepfake created for a foreign spying operation.

Audio can be deepfaked too, to create “voice skins” or “voice clones” of public figures. Last March, the chief of a UK subsidiary of a German energy firm paid nearly £200,000 into a Hungarian bank account after being phoned by a fraudster who mimicked the German CEO’s voice. The company’s insurers believe the voice was a deepfake, but the evidence is unclear. Similar scams have reportedly used recorded WhatsApp voice messages.

No wonder levels of trust have declined so precipitously! The Guardian addressed the all-important question: how can you tell whether a visual image is real or fake? It turns out, it’s very hard–and getting harder.

In 2018, US researchers discovered that deepfake faces don’t blink normally. No surprise there: the majority of images show people with their eyes open, so the algorithms never really learn about blinking. At first, it seemed like a silver bullet for the detection problem. But no sooner had the research been published than deepfakes appeared with blinking. Such is the nature of the game: as soon as a weakness is revealed, it is fixed.
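
The blinking heuristic is simple enough to sketch. Below is a minimal illustration in Python of the underlying idea: given eye-landmark coordinates for each video frame (from any facial-landmark detector), compute an “eye aspect ratio” and count how often it dips below a blink threshold. The landmark layout, threshold, and helper names here are illustrative assumptions, not the researchers’ actual code.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Eye aspect ratio (EAR) from six (x, y) landmarks around one eye.

    EAR is roughly the eye's height divided by its width, so it drops
    sharply when the eye closes (the standard blink measure from
    Soukupova & Cech's blink-detection work).
    """
    a = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    b = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal distance
    return (a + b) / (2.0 * c)

def blinks_per_minute(eye_frames: list, fps: float, threshold: float = 0.2) -> float:
    """Count open-to-closed EAR transitions as blinks (threshold is a guess)."""
    closed = [eye_aspect_ratio(eye) < threshold for eye in eye_frames]
    blinks = sum(1 for prev, cur in zip(closed, closed[1:]) if cur and not prev)
    minutes = len(eye_frames) / fps / 60.0
    return blinks / minutes if minutes else 0.0

# Humans blink roughly 15-20 times a minute; a face that never blinks
# across a long clip was, for a while, a usable red flag.
```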

Governments, universities and tech firms are currently funding research to detect deepfakes, and we can only hope that research is successful–and soon. The truly insidious consequence of a widespread inability to tell whether an image is or is not authentic would be the creation of a “zero-trust society, where people cannot, or no longer bother to, distinguish truth from falsehood.”

Deepfakes are just one more element of an information environment that encourages us to construct, inhabit and defend our own preferred “realities.”

A Picture Is Worth A Thousand Words

One of the most significant ways today’s protests differ from uprisings in the 60s is the ubiquity of cellphone cameras. It’s one thing to hear verbal descriptions of improper behavior–quite another to see it.

Historians tell us that it wasn’t until the Vietnam War was televised that American public revulsion ended it.

When there’s video, when there are pictures, it’s no longer possible to dismiss accusations as overheated, and harder to tell yourself there must have been more to the story… The widespread outrage we are seeing right now is in reaction to appalling behaviors that are shared daily on social media and the evening news.

Unfortunately, propagandists also understand how visual evidence shapes public opinion. Case in point: Fox News. As the Washington Post reported:

Fox News on Friday removed manipulated images that had appeared on its website as part of the outlet’s coverage of protests over the killing of George Floyd…

The misleading material ran alongside stories about a small expanse of city blocks in Seattle that activists have claimed as the Capitol Hill Autonomous Zone. That occupation had until then been peaceful–with people coming and going to hear political speeches and concerts and enjoy free food. Fox’s coverage, however, was designed to give the appearance of armed unrest.

The misleading material spliced a June 10 photograph of an armed man at the Seattle protests with different photographs — one also from June 10, of a sign reading, “You Are Now Entering Free Cap Hill,” and others from images captured May 30 of a shattered storefront and other unrest downtown.

The conservative news site, in coverage that labeled Seattle “CRAZY TOWN” and called the city “helpless,” also displayed an image of a city block set ablaze that was actually taken in St. Paul, Minn.

It wasn’t until the Seattle Times called Fox out for the misleading photographs that Fox removed them and “apologized,” saying “a recent slide show depicting scenes from Seattle mistakenly included a picture from St. Paul, Minnesota. Fox News regrets these errors.”

Sure they do.

Rolling Stone had yet another report of Fox’s “editing.”

A local Fox affiliate ran a story about a family flagging down law enforcement to protect their business from looters, only to have the police come and handcuff them. Fox News removed footage showing police drawing their guns and putting the family in handcuffs, and selectively edited out the police’s mistakes and aggressive tactics.

It isn’t just television. The Internet is awash with deceptive sites; just this week, I read about a site run by a Trump supporter with the URL JoeBiden.info, featuring out-of-context quotes from the former vice president and GIFs of him touching women in ways that would make them uncomfortable.

Now, we face the prospect of even more massive disinformation campaigns via so-called “deepfakes.” As Forbes recently warned, deepfakes are going to create havoc–and we are not prepared.

Last month during ESPN’s hit documentary series The Last Dance, State Farm debuted a TV commercial that has become one of the most widely discussed ads in recent memory. It appeared to show footage from 1998 of an ESPN analyst making shockingly accurate predictions about the year 2020.

As it turned out, the clip was not genuine: it was generated using cutting-edge AI. The commercial surprised, amused and delighted viewers.

What viewers should have felt, though, was deep concern.

Deepfake technology allows anyone with a modicum of skill and a computer to create realistic photos and videos showing people saying and doing things that they didn’t actually say or do. The technology is powered by something called “generative adversarial networks (GANs).”

Several deepfake videos have gone viral recently, giving millions around the world their first taste of this new technology: President Obama using an expletive to describe President Trump, Mark Zuckerberg admitting that Facebook’s true goal is to manipulate and exploit its users, Bill Hader morphing into Al Pacino on a late-night talk show.

The counterfeits are already hard to detect, and the technology continues to improve; meanwhile, its use is growing at a rapid pace.

It does not require much imagination to grasp the harm that could be done if entire populations can be shown fabricated videos that they believe are real. Imagine deepfake footage of a politician engaging in bribery or sexual assault right before an election; or of U.S. soldiers committing atrocities against civilians overseas; or of President Trump declaring the launch of nuclear weapons against North Korea. In a world where even some uncertainty exists as to whether such clips are authentic, the consequences could be catastrophic.

Because of the technology’s widespread accessibility, such footage could be created by anyone: state-sponsored actors, political groups, lone individuals.
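
A brief technical aside: the “generative adversarial networks” the Forbes piece mentions are pairs of neural networks trained against each other–a generator that fabricates images and a discriminator that tries to tell fakes from real ones, each improving in response to the other. The toy PyTorch sketch below shows one adversarial training step; it is a minimal illustration of the idea under assumed sizes and architecture, nowhere near a real deepfake system.

```python
import torch
import torch.nn as nn

NOISE_DIM, IMG_DIM = 64, 28 * 28  # illustrative sizes, not a real system

# Generator: random noise in, fake (flattened) image out.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: image in, real-vs-fake logit out.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial round on a batch of flattened real images."""
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, NOISE_DIM))

    # 1) Teach the discriminator: real images -> 1, fakes -> 0.
    opt_d.zero_grad()
    loss_d = (bce(discriminator(real_images), torch.ones(batch, 1))
              + bce(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    loss_d.backward()
    opt_d.step()

    # 2) Teach the generator to fool the discriminator (fakes -> 1).
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake_images), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()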

The potential for chaos and political mischief boggles the mind. Given the reluctance of platforms like Facebook to alert users to even obvious lies, they’re unlikely to identify deepfakes, even if they develop technology enabling them to do so.

It’s already difficult to counter much of the disinformation disseminated through cyberspace–for one thing, we don’t know who has seen it, so we don’t know where to send corrections.

If a picture is worth a thousand words, what’s a fake picture worth?

Messing With Our Minds

As if the websites peddling conspiracy theories and political propaganda weren’t enough, we now have to contend with “deepfakes.” Deepfakes, according to the Brookings Institution, are

videos that have been constructed to make a person appear to say or do something that they never said or did. With artificial intelligence-based methods for creating deepfakes becoming increasingly sophisticated and accessible, deepfakes are raising a set of challenging policy, technology, and legal issues.

Deepfakes can be used in ways that are highly disturbing. Candidates in a political campaign can be targeted by manipulated videos in which they appear to say things that could harm their chances for election. Deepfakes are also being used to place people in pornographic videos that they in fact had no part in filming.

Because they are so realistic, deepfakes can scramble our understanding of truth in multiple ways. By exploiting our inclination to trust the reliability of evidence that we see with our own eyes, they can turn fiction into apparent fact. And, as we become more attuned to the existence of deepfakes, there is also a subsequent, corollary effect: they undermine our trust in all videos, including those that are genuine. Truth itself becomes elusive, because we can no longer be sure of what is real and what is not.

The linked article notes that researchers are trying to devise technologies to detect deepfakes, but until there are apps or other tools that will identify these very sophisticated forgeries, we are left with “legal remedies and increased awareness,” neither of which is very satisfactory.

We already inhabit an information environment that has done more damage to social cohesion than previous efforts to divide and mislead. Thanks to the ubiquity of the Internet and social media (and the demise of media that can genuinely be considered “mass”), we are all free to indulge our confirmation biases–free to engage in what a colleague dubs “motivated reasoning.” It has become harder and harder to separate truth from fiction, moderate spin from outright propaganda.

One result is that thoughtful people–people who want to be factually accurate and intellectually honest–are increasingly unsure of what they can believe.

What makes this new fakery especially dangerous is that, as the linked article notes, most of us do think that “seeing is believing.” We are far more apt to accept visual evidence than other forms of information. There are already plenty of conspiracy sites that offer altered photographic “evidence”–of the aliens who landed at Roswell, of purportedly criminal behavior by public figures, etc. Now people intent on deception have the ability to make those alterations virtually impossible to detect.

Even if technology is developed that can detect fakery, will “motivated” reasoners rely on it?

Will people be more likely to believe a deepfake or a detection algorithm that flags the video as fabricated? And what should people believe when different detection algorithms—or different people—render conflicting verdicts regarding whether a video is genuine?

We are truly entering a new and unsettling “hall of mirrors” version of reality.

As If We Needed Another Looming Threat

If I didn’t have a platform bed, I’d just crawl under my bed and hide.

I’m frantic about the elections. I’m depressed about climate change and our government’s unwillingness to confront it. The last issue of The Atlantic had several lengthy stories about technologies that will disrupt our lives and could conceivably end them. (Did you know that the government is doing research on the “weaponizing” of our brains? That Alexa is becoming our best friend and confidant?)

And now there are “deepfakes.”

Senator Ben Sasse (you remember him–he talks a great game, but then folds like a Swiss Army knife and votes the GOP party line) has written a truly terrifying explanation of what’s on the horizon.

Flash forward two years and consider these hypotheticals. You’re seated at your desk, having taken your second sip of coffee and just beginning to contemplate the breakfast sandwich steaming in the bag in front of you. You click on your favorite news site, one you trust. “Unearthed Video Shows President Conspiring with Putin.” You can’t resist.

The video, in ultrahigh definition, shows then-presidential candidate Donald Trump and Vladimir Putin examining an electoral map of the United States. They are nodding and laughing as they appear to discuss efforts to swing the election to Trump. Jared Kushner and Ivanka Trump smile wanly in the background. The report notes that Trump’s movements on the day in question are difficult to pin down.

Alternate scenario: Same day, same coffee and sandwich. This time, the headline reports the discovery of an audio recording of Democratic presidential candidate Hillary Clinton and Attorney General Loretta E. Lynch brainstorming about how to derail the FBI investigation of Clinton’s use of a private server to handle classified emails. The recording’s date is unclear, but its quality is perfect; Clinton and Lynch can be heard discussing the attorney general’s airport tarmac meeting with former president Bill Clinton in Phoenix on June 27, 2016.

The recordings in these hypothetical scenarios are fake — but who are you going to believe? Who will your neighbors believe? The government? A news outlet you distrust?

Sasse writes that these deepfakes–defined as seemingly authentic video or audio recordings of events that never happened–are likely to send American politics into an even deeper tailspin, and he warns that Washington isn’t paying nearly enough attention to them. (Well, of course not. The moral midgets who run our government have power to amass, and a public to fleece–that doesn’t leave them time or energy to address the actual issues facing us.)

Consider: In December 2017, an amateur coder named “DeepFakes” was altering porn videos by digitally substituting the faces of female celebrities for the porn stars’. Not much of a hobby, but it was effective enough to prompt news coverage. Since then, the technology has improved and is readily available. The word deepfake has become a generic noun for the use of machine-learning algorithms and facial-mapping technology to digitally manipulate people’s voices, bodies and faces. And the technology is increasingly so realistic that the deepfakes are almost impossible to detect.

Creepy, right? Now imagine what will happen when America’s enemies use this technology for less sleazy but more strategically sinister purposes.

I’m imagining. And you’ll forgive me if I find Sasse’s solution–Americans have to stop distrusting each other–pretty inadequate, if not downright fanciful. On the other hand, I certainly don’t have a better solution to offer.

Maybe if I lose weight I can squeeze under that platform bed…
