Fake Nudes Could Become a Big Problem. News Organisations Don’t Have to Publish Them.

An image of an apparently unclothed Taylor Swift sat atop a story published by Vice at the end of June. It was created with an app called DeepNude, which the headline described as “a horrifying app” that “undresses a photo of any woman with a single click”.

The breasts in the photo (which Vice censored) didn’t really belong to Swift: DeepNude had used artificial intelligence to pluck them from a database and superimpose them onto an image of the pop star. Other DeepNude images in the piece appeared to strip a bikini off of Tyra Banks and a pink gown off of Natalie Portman. But was including the pictures themselves, rather than just describing them, really necessary? Clearly not. Following criticism from readers and colleagues, Vice removed the photos and apologised for publishing them.

The entire episode should stand as a warning – not just of the ease with which falsified images can be created, but of the fraught choices facing newsrooms as they cover such technology. As doctored images and videos become more convincing, and as we head into an election season featuring many powerful women and a climate of misogyny, editors may find themselves regularly making decisions like the one Vice initially botched.


In the Vice story, the faux-nudes were certainly effective at showing what the app – now taken down by its creator – could do. But they also victimised the women they depicted. For me, they instilled not a horror of the future but an all-too-familiar queasiness. Women are regularly objectified without their consent. Nudes of female celebs are leaked all the time. Did we really need to see this slightly different example to understand the stakes of the DeepNude app?

By now, we’re all pretty familiar with the concept of a photoshopped image. The description in the article text, of an app that worked well but not perfectly, seemed to do the necessary lifting. And there are less invasive ways to demonstrate the technology: In a story published Wednesday, The Verge included a DeepNude example in which the woman’s face is mostly covered (though it’s unclear whether she consented to being included at all).

The afternoon after publication, Vice replaced the lead image with a screenshot from DeepNude of a woman’s torso (with no identifying features), naked and clothed portions spliced together. A note appended to the article explains:

This story originally included five side-by-side images of various celebrities and DeepNude-manipulated images of those celebrities. While the images were redacted to not show explicit nudity, after hearing from our readers, academic experts, and colleagues, we realised that those images could do harm to the real people in them. We think it’s important to show the real consequences that new technologies unleashed on the world without warning have on people, but we also have to make sure that our reporting minimises harm. For that reason, we have removed the images from the story, and regret the error.

(The Vice reporter did not respond to a request for additional comment.)

This issue doesn’t just apply to apps like DeepNude or the more widely known practice of making deepfakes (realistic-looking but fabricated videos constructed with A.I.). Just this week, another outlet chose to keep up a humiliating, fake picture for the sake of its news value. A ProPublica piece published on Monday, "Inside the Secret Border Patrol Facebook Group Where Agents Joke About Migrant Deaths and Post Sexist Memes," includes a screenshot of one such meme, in which President Donald Trump appears to force New York representative Alexandria Ocasio-Cortez’s head down on his crotch.


Why publish that? “Our general view is that this is one of those occasions when readers need a visceral understanding of what is going on in this Facebook group that a description in words cannot adequately convey,” said Richard Tofel, the president of ProPublica, when I reached out for comment.

Alexandria Ocasio-Cortez. Photo: Twitter/@AOC

This makes sense, to an extent: In a dark, gross world, reporters’ stories will be filled with dark, gross things, some of which they will choose to convey directly with visual examples. But while ProPublica published the meme on its website, it didn’t insist on keeping it on every platform: Tofel told me the organisation agreed to a request from Apple News to publish the story through that app without the image.

ProPublica seems to have made the choice to include the Ocasio-Cortez meme thoughtfully: Tofel noted that the organisation cropped the image to reduce how much questionable material it displayed, chose not to include other images from the Facebook group because of their potential harm, and takes comfort in “the fact that among the leading drivers of readers to the story were three tweets linking to it from Representative Ocasio-Cortez herself.”

(Though it’s certainly possible that Ocasio-Cortez disagreed with the image choice, but not enough to speak up about it; or that she wanted to prioritise keeping focus on the border crisis over her own interests.)

News organisations choose to post graphic or offensive material all the time because of its news value, but in these cases, it’s iffy that the journalistic merits outweigh the harm. The equation here is tricky: While a screenshot of a demeaning meme could help readers understand the nature of the group’s misogyny, it could also hurt not only the subject it depicts but also readers who have experienced misogyny and sexual harassment in their own lives.

Sometimes, as when this dichotomy of inform versus disturb arises in photojournalism of horrific and newsworthy events, newsrooms will rightly choose to publish images that convey what they feel words cannot. But in the cases of the Ocasio-Cortez meme and the DeepNude creations, photos are not helping us grasp something for which we otherwise lack language. There may be cases in which screenshots of misogynistic images prove inextricable from the story. The bar should be extremely high, just as it is with photos of dead bodies.

In the case of memes and deepfakes, I worry that spreading the images is exactly what their creators want. A skim of the article means ingesting the misogyny without much other context. It also sends a clear message to other young women interested in gaining power: Your enemies will contort your likeness in the most degrading way they know how, and even the best, most rational gatekeepers in the news media will put it on display.

This piece was originally published on Future Tense, a partnership between Slate magazine, Arizona State University, and New America.
