News literacy matters more than ever, and we live at a time when it is harder and harder to tell truth from fiction.
One example from the swamps of the Internet: the link will take you to a doctored photo of actor Sylvester Stallone wearing a t-shirt that says “4 Useless Things: woke people, COVID-19 vaccines, Dr. Anthony Fauci and President Joe Biden.” In the original, authentic photo, Stallone is wearing a plain dark t-shirt.
The News Literacy Project, which issues ongoing reports on this sort of visual misrepresentation, says this about the Stallone t-shirt:
Digitally manipulating photos of celebrities to make it look like they endorse a provocative political message — often on t-shirts — is extremely common. Such posts are designed to resonate with people who have strong partisan views and may share the image without pausing to consider whether it’s authentic. It’s also likely that some of these fakes are marketing ploys to boost sales of t-shirts that are easily found for sale online. For example, this reply to an influential Twitter account includes the same doctored image and a link to a product page where the shirt can be purchased.
It’s bad enough that there are literally thousands of sites using text to promote lies. But people have a well-known bias toward visual information (“Who am I going to believe, you or my lying eyes?” “Seeing is believing.” Etc.) With the availability of “deepfake” technologies, doctored photographs have become easier to produce, more widespread, and much harder to detect.
The Guardian recently reported on the phenomenon, beginning with a definition.
Have you seen Barack Obama call Donald Trump a “complete dipshit”, or Mark Zuckerberg brag about having “total control of billions of people’s stolen data”, or witnessed Jon Snow’s moving apology for the dismal ending to Game of Thrones? Answer yes and you’ve seen a deepfake. The 21st century’s answer to Photoshopping, deepfakes use a form of artificial intelligence called deep learning to make images of fake events, hence the name deepfake. Want to put new words in a politician’s mouth, star in your favourite movie, or dance like a pro? Then it’s time to make a deepfake.
As the article noted, a fair percentage of deepfake videos are pornographic. A firm called Deeptrace identified 15,000 altered videos online in September 2019, and a “staggering 96%” were pornographic. Ninety-nine percent of those “mapped faces from female celebrities on to porn stars.”
As new techniques allow unskilled people to make deepfakes with a handful of photos, fake videos are likely to spread beyond the celebrity world to fuel revenge porn. As Danielle Citron, a professor of law at Boston University, puts it: “Deepfake technology is being weaponised against women.” Beyond the porn there’s plenty of spoof, satire and mischief.
But it isn’t just about videos. Deepfake technology can evidently create convincing phony photos from scratch. The report noted that one supposed Bloomberg journalist, “Maisy Kinsley”, was in fact a deepfake and had even been given profiles on LinkedIn and Twitter.
Another LinkedIn fake, “Katie Jones”, claimed to work at the Center for Strategic and International Studies, but is thought to be a deepfake created for a foreign spying operation.
Audio can be deepfaked too, to create “voice skins” or “voice clones” of public figures. Last March, the chief of a UK subsidiary of a German energy firm paid nearly £200,000 into a Hungarian bank account after being phoned by a fraudster who mimicked the German CEO’s voice. The company’s insurers believe the voice was a deepfake, but the evidence is unclear. Similar scams have reportedly used recorded WhatsApp voice messages.
No wonder levels of trust have declined so precipitously! The Guardian addressed the all-important question: how can you tell whether a visual image is real or fake? It turns out it’s very hard, and getting harder.
In 2018, US researchers discovered that deepfake faces don’t blink normally. No surprise there: the majority of images show people with their eyes open, so the algorithms never really learn about blinking. At first, it seemed like a silver bullet for the detection problem. But no sooner had the research been published than deepfakes appeared with blinking. Such is the nature of the game: as soon as a weakness is revealed, it is fixed.
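To make the blinking cue concrete, here is a minimal sketch of how such a check might work. It assumes that some face-landmark model has already extracted six eye landmark points per video frame (the common 68-point landmark convention); the eye-aspect-ratio idea is usually credited to Soukupová and Čech (2016), and the threshold and frame counts below are illustrative assumptions, not tuned values.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of shape (6, 2) of landmark coordinates; a low value means a closed eye."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.2, min_closed_frames=2):
    """Count blinks as runs of consecutive frames in which the eye appears closed."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_threshold:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    if run >= min_closed_frames:
        blinks += 1
    return blinks

if __name__ == "__main__":
    # Toy landmark sets standing in for real detector output (hypothetical values).
    open_eye = np.array([[0, 0], [2, 3], [4, 3], [6, 0], [4, -3], [2, -3]], dtype=float)
    closed_eye = np.array([[0, 0], [2, 0.4], [4, 0.4], [6, 0], [4, -0.4], [2, -0.4]], dtype=float)
    frames = [open_eye] * 60 + [closed_eye] * 3 + [open_eye] * 60  # one simulated blink
    ears = [eye_aspect_ratio(f) for f in frames]
    print("blinks detected:", count_blinks(ears))  # -> 1
```

A real talking-head video should register a blink every few seconds; a long clip with essentially no blinks was the tell those researchers relied on, until deepfake generators learned to add blinking and the cue stopped working.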
Governments, universities and tech firms are currently funding research into detecting deepfakes, and we can only hope that research succeeds, and soon. The truly insidious consequence of a widespread inability to tell whether an image is authentic would be the creation of a “zero-trust society, where people cannot, or no longer bother to, distinguish truth from falsehood.”
Deepfakes are just one more element of an information environment that encourages us to construct, inhabit and defend our own preferred “realities.”