Anonymous group uses AI to add 'modest' clothing to women in photos, igniting debate over policing women's bodies online

In the wake of explicit AI-generated photos of Taylor Swift spreading on social media, and conversations about the lack of resources for deepfake victims, another group of people is using artificial intelligence to manipulate women's bodies in a different way — in this case, to cover them up.

The hashtag movement called #DignifAI was born on 4chan, an anonymous online message board with little moderation or regulation on what users can post. The AI Swift photos allegedly also originated on 4chan.

A post from Jan. 31 describing #DignifAI has since been taken down but has been copied and pasted elsewhere. In the post, titled “Operation/TradAI,” an anonymous 4chan user asked participants to use “the power of AI” to “clothe” women in photos, “purify them of their tattoos” and “lengthen their skirts.” Yahoo News will not be linking out to any of the posts.

By "TradAI," the 4chan user is referring to "trad" or "tradwife," slang shorthand for "traditional" that was also coined on 4chan. The "trad" ideology romanticizes a return to a stereotypical archetype of womanhood: the woman as homemaker, doting on her husband and kids.

“Give them modest+stylish clothing such as loose sweaters, dresses that cover shoulders/chest/back/knees, blue jeans/flannel long sleeve for country girl aesthetic, angelic/righteous theme,” an anonymous 4chan user detailed. “Put them in a feminine and wholesome role, such as gardening, cleaning, being a caregiver, etc.”

Nic Coppage, a social worker at the National Alliance for Eating Disorders who studies the correlation between mental health and body acceptance, told Yahoo News that regardless of whether someone is adding or taking away clothes on photos of women, it "reduces women to sexual objects."

“[It] brings up the question of bodily autonomy and takes power away from the woman in the image to decide who can see her body and how much of her body she wants [to be] shown,” Coppage explained.

While #DignifAI isn’t about revealing more when it comes to women’s bodies, the behavior is still relevant to the overarching concern that AI is helping strangers manipulate and police women’s bodies online.

4chan strongly emphasizes anonymity; unlike other online message boards such as Reddit, 4chan users don’t need to make an official account or pick a username. They can’t be directly messaged and, unless they use the same name across several posts or reveal something personal about themselves, it’s difficult to trace them.

That anonymity is what makes pinpointing the origins of #DignifAI to one user challenging. It started on 4chan's /pol/ board, short for "politically incorrect" and one of the platform's most popular boards. An investigation presented at the 2017 International AAAI (Association for the Advancement of Artificial Intelligence) Conference on Web and Social Media found that "hate speech is predominant on /pol/," with "many of its posters subscribing to the alt-right and exhibiting characteristics of xenophobia, social conservatism, racism and, generally speaking, hate."

Is this part of a larger problem seen on 4chan? Or is #DignifAI a result of ‘trolling’ and trying to incite a reaction?

4chan is commonly associated with incels. The term "incel," short for "involuntarily celibate," is ascribed to people who share a belief in male supremacy and dominion over female bodies.

Andrew G. Thomas, an integrative psychotherapist and senior lecturer at the Swansea University School of Psychology who has studied the incel phenomenon, told Yahoo News that there is no clear-cut explanation for why 4chan users are participating in #DignifAI. He believes it can be traced to a desire to lash out at women whom participants perceive as having hurt them by posting provocative photos online.

“The women who want to show off their bodies don’t want to be covered up or have their tattoos removed, and [Swift] doesn’t want AI images generated,” Thomas said. “I see this as, ‘Well, you are hurting me, so I’m going to hurt you back.’”

But some 4chan users claim #DignifAI is one big joke meant to deliberately provoke criticism. Recent comments seem to revel in the current media coverage and share links to articles, calling the outrage “hilarious.”

Kat Tenbarge, a tech and culture reporter for NBC News Digital who has reported on #DignifAI, told Yahoo News that "4chan posits itself as a community that engages to troll" so it can appear as if participants don't have any stake in what they're doing.

“The action that they're taking still has meaning and subscribes to some form of ideology,” Tenbarge explained. “And oftentimes, it does reflect on the ideology that they truly have. And I think that's the case here.”

There’s an argument several 4chan commenters make that #DignifAI “is not harming” the women “in any way.” But, Tenbarge points out, creating “non-consensual images” of women is harmful.

“A lot of women who are now being attacked with this #DignifAI tool appear to be the ones who are in charge of their own image,” Tenbarge explained. “The message that's being sent is that [#DignifAI is] trying to humiliate you for your choices, whatever those choices may be.”

#DignifAI targets aren’t always celebrities or influencers. What can the average person do to protect themselves?

Emily Poler, a Brooklyn, N.Y.-based litigator specializing in intellectual property, told Yahoo News that the issue that lawmakers need to figure out quickly with regard to AI is what to do about non-commercial users.

Copyrighting personal photos is possible — and easier if they haven’t been posted online. But even copyrighting photos doesn’t guarantee people won’t manipulate them. The copyright holder has to find unauthorized copies or versions of the photo and then file a “takedown notice.”

Overall, Poler said, lawmakers need to move quickly to figure out how to protect people from non-consensual AI edits, especially as 4chan users in the #DignifAI forum are warning the public that “they haven’t seen anything yet.”