Those Dueling Realities

News literacy matters more than ever, and we live at a time when it is harder and harder to tell truth from fiction.

One example from the swamps of the Internet. The link will take you to a doctored photo of actor Sylvester Stallone wearing a t-shirt that says “4 Useless Things: woke people, COVID-19 vaccines, Dr. Anthony Fauci and President Joe Biden.” In the original, authentic photo, Stallone is wearing a plain dark t-shirt.

The News Literacy Project, which issues ongoing reports on these sorts of visual misrepresentations, says this about the Stallone t-shirt:

Digitally manipulating photos of celebrities to make it look like they endorse a provocative political message — often on t-shirts — is extremely common. Such posts are designed to resonate with people who have strong partisan views and may share the image without pausing to consider whether it’s authentic. It’s also likely that some of these fakes are marketing ploys to boost sales of t-shirts that are easily found for sale online. For example, this reply to an influential Twitter account includes the same doctored image and a link to a product page where the shirt can be purchased.

It’s bad enough that there are literally thousands of sites using text to promote lies. But people have a well-known bias toward visual information (“Who am I going to believe, you or my lying eyes?” “Seeing is believing.” Etc.). With the availability of “deep fake” technologies, doctored photographs have become easier to produce, more widespread, and much harder to detect.

The Guardian recently reported on the phenomenon, beginning with a definition.

Have you seen Barack Obama call Donald Trump a “complete dipshit”, or Mark Zuckerberg brag about having “total control of billions of people’s stolen data”, or witnessed Jon Snow’s moving apology for the dismal ending to Game of Thrones? Answer yes and you’ve seen a deepfake. The 21st century’s answer to Photoshopping, deepfakes use a form of artificial intelligence called deep learning to make images of fake events, hence the name deepfake. Want to put new words in a politician’s mouth, star in your favourite movie, or dance like a pro? Then it’s time to make a deepfake.

As the article noted, a fair percentage of deep-fake videos are pornographic. A firm called “Deeptrace” identified 15,000 altered videos online in September 2019, and a “staggering 96%” were pornographic. Ninety-nine percent of those “mapped faces from female celebrities on to porn stars.”

As new techniques allow unskilled people to make deepfakes with a handful of photos, fake videos are likely to spread beyond the celebrity world to fuel revenge porn. As Danielle Citron, a professor of law at Boston University, puts it: “Deepfake technology is being weaponised against women.” Beyond the porn there’s plenty of spoof, satire and mischief.

But it isn’t just about videos. Deepfake technology can evidently create convincing phony photos from scratch. The report noted that a supposed Bloomberg journalist, “Maisy Kinsley”, who was a deepfake, had even been given profiles on LinkedIn and Twitter.

Another LinkedIn fake, “Katie Jones”, claimed to work at the Center for Strategic and International Studies, but is thought to be a deepfake created for a foreign spying operation.

Audio can be deepfaked too, to create “voice skins” or “voice clones” of public figures. Last March, the chief of a UK subsidiary of a German energy firm paid nearly £200,000 into a Hungarian bank account after being phoned by a fraudster who mimicked the German CEO’s voice. The company’s insurers believe the voice was a deepfake, but the evidence is unclear. Similar scams have reportedly used recorded WhatsApp voice messages.

No wonder levels of trust have declined so precipitously! The Guardian addressed the all-important question: how can you tell whether a visual image is real or fake? It turns out it’s very hard, and getting harder.

In 2018, US researchers discovered that deepfake faces don’t blink normally. No surprise there: the majority of images show people with their eyes open, so the algorithms never really learn about blinking. At first, it seemed like a silver bullet for the detection problem. But no sooner had the research been published than deepfakes appeared with blinking. Such is the nature of the game: as soon as a weakness is revealed, it is fixed.
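
To give a concrete sense of how blink-based detection can work, here is a minimal sketch of one common blink-detection heuristic, the eye aspect ratio (EAR): an open eye looks “tall,” and the ratio of its height to its width drops sharply when the eye closes, so a dip in EAR across video frames signals a blink, and a clip with implausibly few blinks raises a red flag. The landmark coordinates, threshold, and frame series below are illustrative assumptions, not the researchers’ actual pipeline.

```python
from math import dist

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """Eye aspect ratio (EAR) from six eye-landmark points:
    p1/p4 are the eye corners; p2, p3, p5, p6 sit on the upper
    and lower lids. EAR falls toward zero as the eye closes."""
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series, threshold=0.2):
    """Count blinks as distinct runs of frames where EAR dips
    below the threshold (illustrative value)."""
    blinks = 0
    below = False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks += 1
            below = True
        elif ear >= threshold:
            below = False
    return blinks

# Hypothetical landmarks for an open eye (width 6, lid height 2):
open_eye = eye_aspect_ratio((0, 0), (2, 1), (4, 1), (6, 0), (4, -1), (2, -1))
print(round(open_eye, 3))  # → 0.333

# A per-frame EAR series with one dip below the threshold = one blink.
series = [0.30, 0.31, 0.12, 0.10, 0.29, 0.30]
print(count_blinks(series))  # → 1
```

A detector in this spirit would compare a video’s blink rate against the normal human range; as the article notes, newer deepfakes simply learned to blink, defeating this particular cue.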

Governments, universities and tech firms are currently funding research aimed at detecting deepfakes, and we can only hope that research is successful, and soon. The truly insidious consequence of a widespread inability to tell whether an image is or is not authentic would be the creation of a “zero-trust society, where people cannot, or no longer bother to, distinguish truth from falsehood.”

Deepfakes are just one more element of an information environment that encourages us to construct, inhabit and defend our own, preferred “realities.” 
 

11 Comments

  1. If our legacy news outlets are propaganda, it only worsens when you start sharing photos of celebrity memes.

    As consumers, we must beware that marketing is explicitly used to separate dollars from our possession. For example, McDonald’s uses yellow and red in its branding for a purpose.

    Same with communication, religion, etc., etc.

    The age of mass consumerism started in the 50s and is slowly coming to an end, but when the masses are controlled by fear and unmet wants, the manipulators will continue pressing the right buttons for votes or dollars.

    Buyer beware!

  2. I honestly wondered if the ‘you’re a stupid son of a bitch’ from Biden to Doocy was a deepfake! (but then after thinking it over, Biden speaks truth….)

  3. Sheila, another thoughtful deep dive. But Smekens! For the first time, I agree with you! 🤗

  4. Buyer beware indeed. I often wonder what is the purpose of all this in the human universe. Why are there so many pathetic wretches who waste their intelligence, money, time and technology to promote idiot partisan displays? Why? Where is the reward? How do they honor themselves by doing these stupid things?

    The answer, I think, centers around people feeling they have no purpose in life other than to create attacks on other humans. Is it soft aggression? Perhaps. But it’s clear that tribal attacks in 2022 use electrons instead of flint spear points. Different weapons. Same “idea”. The 25% who haven’t made the intellectual leap out of cave-dwelling will never go away.

  5. Newsmax, OANN and Fox have more than likely shared deepfake photos and videos. They are truly despicable power and money hungry ba@$&rds.

    An app called ‘Informable’ was created to help people learn how to spot false news. Even though it doesn’t address deepfake pics or videos, it can teach people how to question the validity of spoken or written words and encourage digging deeper to confirm or invalidate stories. Sure wish politically far right leaning people would be required to use it. It would also be a valuable teaching resource for grades one – twelve.

  6. Apparently, lying to each other has become the biggest trend in marketing nowadays. It’s the other side of the mistrust each other coin.

    We thought that communications was a huge opportunity to improve the average life but, now that we can communicate between literally all of us, instantly, we find that we use that power not to inform, but to misinform.

    Now that we all are living in a dystopian metaverse of the imaginary, what’s next?

  7. I’ve always had the suspicion that Donald Trump was a deepfake.

    I’m waiting for the genetic engineering that will bring us better humans.

    The current batch seems 50% defective.

  8. What Todd, Vernon and Pete said and, unbelievably, as even more enhanced technology and its marriage to disinformation plays out in this eternal brawl for political power, we may become so inured to such tomfoolery that it isn’t tomfoolery anymore but fact in the ultimate GoebbelsWorld we will have created. We got trouble; right here in River City. . .

Comments are closed.