- Many were outraged this week over a desktop app called DeepNude, which allowed users to remove clothing from pictures of women to make them appear naked.
- Vice’s Motherboard published an article in which it tested the app’s capabilities on pictures of celebrities and found that it only works on women.
- Motherboard described the app as “easier to use, and more easily accessible than deepfakes have ever been.”
- The app’s developers later pulled it from sale after much criticism, but the new technology has reignited debate about the need for social media companies and lawmakers to regulate and moderate deepfakes.
The New Deepfake App
Developers pulled a new desktop app called DeepNude that used deepfake technology to remove clothing from pictures of women, making them appear naked.
The app was removed after an article published by Vice’s tech publication Motherboard expressed concerns over the technology.
Motherboard downloaded and tested the app on more than a dozen pictures of both men and women. They found that while the app does work on women who are fully clothed, it works best on images where people are already showing more skin.
“The results vary dramatically,” the article said. “But when fed a well lit, high resolution image of a woman in a bikini facing the camera directly, the fake nude images are passably realistic.”
The article also contained several of the images Motherboard tested, including photos of celebrities like Taylor Swift, Tyra Banks, Natalie Portman, Gal Gadot, and Kim Kardashian. The pictures were later removed from the article.
Motherboard reported that the app explicitly only works on women. “When Motherboard tried using an image of a man,” they wrote, “it replaced his pants with a vulva.”
Motherboard emphasized how frighteningly accessible the app is. “DeepNude is easier to use, and more easily accessible than deepfakes have ever been,” they reported.
Anyone can get the app for free, or they can purchase a premium version. Motherboard reported that the premium version costs $50, but a screenshot published in The Verge indicated that it was $99.
In the free version, the output image is partly covered by a watermark. In the paid version, the watermark is removed but there is a stamp that says “FAKE” in the upper-left corner.
However, as Motherboard notes, it would be extremely easy to crop out the “FAKE” stamp or remove it with Photoshop.
On Thursday, the day after Motherboard published the article, DeepNude announced on its Twitter account that it had pulled the app.
“Despite the safety measures adopted (watermarks) if 500,000 people use it, the probability that people will misuse it is too high,” the statement said. “We don’t want to make money this way. Surely some copies of DeepNude will be shared on the web, but we don’t want to be the ones who sell it.”
“The world is not yet ready for DeepNude,” the statement concluded. The DeepNude website has now been taken down.
Where Did it Come From?
According to the Twitter account for DeepNude, the developers launched downloadable software for the app for Windows and Linux on June 23.
After a few days, the app’s developers had to take the website offline because it was receiving too much traffic, according to DeepNude’s Twitter.
Currently, it is unclear who these developers are or where they are from. Their Twitter account lists their location as Estonia, but does not provide more information.
Motherboard was able to reach the anonymous creator by email, who requested to go by the name Alberto. Alberto told them that the app’s software is based on an open source algorithm called pix2pix that was developed by researchers at UC Berkeley back in 2017.
That algorithm is similar to the ones used for deepfake videos and, notably, to the technology self-driving cars use to generate driving scenarios.
Alberto told Motherboard that the algorithm only works on women because “images of nude women are easier to find online,” but he said he wants to make a male version too.
Alberto also told Motherboard that during his development process, he asked himself if it was morally questionable to make the app, but ultimately decided it was not because he believed that the invention of the app was inevitable.
“I also said to myself: the technology is ready (within everyone’s reach),” Alberto told Motherboard. “So if someone has bad intentions, having DeepNude doesn’t change much… If I don’t do it, someone else will do it in a year.”
The Need for Regulation
This inevitability argument is one that has been discussed often in the debates surrounding deepfakes.
It also goes along with the idea that even if deepfakes are banned by Pornhub and Reddit, they will just pop up in other places. These kinds of arguments are also an important part of the discussion of how to detect and regulate deepfakes.
Motherboard showed the DeepNude app to Hany Farid, a computer science professor at UC Berkeley who is an expert on deepfakes. Farid said that he was shocked by how easily the app created the fakes.
Usually, deepfake videos take hours to make. By contrast, DeepNude only takes about 30 seconds to render these images.
“We are going to have to get better at detecting deepfakes,” Farid told Motherboard. “In addition, social media platforms are going to have to think more carefully about how to define and enforce rules surrounding this content.”
“And, our legislators are going to have to think about how to thoughtfully regulate in this space.”
The Role of Social Media
The need for social media platforms and politicians to regulate this kind of content has become increasingly prevalent in the discussion about deepfakes.
Over the last few years, deepfakes have become widespread internationally, but laws and regulations have been unable to keep up with the technology.
On Wednesday, Facebook CEO Mark Zuckerberg said that his company is looking into ways to deal with deepfakes during a conversation at the Aspen Ideas Festival.
He did not say exactly how Facebook is doing this, but he did say that the problem from his perspective was how deepfakes are defined.
“Is it AI-manipulated media or manipulated media using AI that makes someone say something they didn’t say?” Zuckerberg said. “I think that’s probably a pretty reasonable definition.”
However, that definition is also exceptionally narrow. Facebook recently received significant backlash after it decided not to take down a controversial video of Nancy Pelosi that had been slowed down, making her appear drunk or impaired.
Zuckerberg argued that the video should be left up because it is better to show people fake content than to hide it. However, experts worry that this kind of thinking could set a dangerous precedent for deepfakes.
The Role of Lawmakers
On Monday, lawmakers in California proposed a bill that would ban deepfakes in the state. The assemblymember who introduced the bill said he did so because of the Pelosi video.
On the federal level, similar efforts to regulate deepfake technology have been stalled.
Separate bills have been introduced in both the House and the Senate to criminalize deepfakes, but both have only been referred to committees, and it is unclear whether lawmakers have even discussed them.
However, even if these bills do move forward, they face significant legal hurdles. Carrie Goldberg, an attorney whose law firm specializes in revenge porn cases, spoke to Motherboard about these issues.
“It’s a real bind,” said Goldberg. “Deepfakes defy most state revenge porn laws because it’s not the victim’s own nudity depicted, but also our federal laws protect the companies and social media platforms where it proliferates.”
However, the article’s author, Samantha Cole, also argued that the political narratives around deepfakes leave out the women victimized by them.
“Though deepfakes have been weaponized most often against unconsenting women, most headlines and political fear of them have focused on their fake news potential,” she wrote.
That idea of deepfakes being “fake news” or disinformation seems to be exactly how Zuckerberg and Facebook are orienting their policies.
Moving forward, many feel that policy discussions about deepfakes should also consider how the technology disproportionately affects women and can be tied to revenge porn.
See what others are saying: (Vice) (The Verge) (The Atlantic)
Meta Reinstates Trump on Facebook and Instagram
The company, which banned the former president two years ago for his role in inciting the Jan. 6 insurrection, now says the risk to public safety has “sufficiently receded.”
Meta Ends Suspension
Meta announced Wednesday that it will reinstate the Facebook and Instagram accounts of former President Donald Trump, just two years after he was banned for using the platforms to incite a violent insurrection.
In a blog post, the company said the suspensions would be lifted “in the coming weeks” but with “new guardrails in place to deter repeat offenses.”
Specifically, Meta stated that due to Trump’s violations of its Community Standards, he will face “heightened penalties for repeat offenses” under new protocols for “public figures whose accounts are reinstated from suspensions related to civil unrest.”
“In the event that Mr. Trump posts further violating content, the content will be removed and he will be suspended for between one month and two years, depending on the severity of the violation,” the blog post continued.
The company also noted its updated protocols address content that doesn’t violate its Community Standards but “contributes to the sort of risk that materialized on January 6, such as content that delegitimizes an upcoming election or is related to QAnon.”
However, unlike direct violations, that content would have its distribution limited, but it would not be taken down. As a penalty for repeat offenses, Meta says it “may temporarily restrict access to […] advertising tools.”
As for why the company is doing this, it explained that it assessed whether to extend the “unprecedented” two-year suspension it placed on Trump in January 2021 and determined that the risk to public safety had “sufficiently receded.”
Meta also argued that social media is “rooted in the belief that open debate and the free flow of ideas are important values” and it does not want to “get in the way of open, public and democratic debate.”
“The public should be able to hear what their politicians are saying — the good, the bad and the ugly — so that they can make informed choices at the ballot box,” the tech giant added.
Meta’s decision prompted widespread backlash from many people who argue the former president has clearly not learned from the past because he continues to share lies about the election, conspiracy theories, and other incendiary language on Truth Social.
“Trump incited an insurrection. And tried to stop the peaceful transfer of power,” Rep. Adam Schiff (D-Ca.) tweeted. “He’s shown no remorse. No contrition. Giving him back access to a social media platform to spread his lies and demagoguery is dangerous. @facebook caved, giving him a platform to do more harm.”
According to estimates last month by the advocacy groups Accountable Tech and Media Matters for America, over 350 of Trump’s posts on the platform would have explicitly violated Facebook’s policies against QAnon content, election claims, and harassment of marginalized groups.
“Mark Zuckerberg’s decision to reinstate Trump’s accounts is a prime example of putting profits above people’s safety,” NAACP President Derrick Johnson told NPR.
“It’s quite astonishing that one can spew hatred, fuel conspiracies, and incite a violent insurrection at our nation’s Capitol building, and Mark Zuckerberg still believes that is not enough to remove someone from his platforms.”
However, on the other side, many conservatives and Trump supporters have cheered the move as a win for free speech.
Others, like Rep. Jim Jordan (R-Ohio), asserted that Trump “shouldn’t have been banned in the first place. Can’t happen again.”
Trump himself echoed that point in a post on Truth Social, where he claimed Facebook has lost billions of dollars by removing and reinstating him.
“Such a thing should never again happen to a sitting President, or anybody else who is not deserving of retribution! THANK YOU TO TRUTH SOCIAL FOR DOING SUCH AN INCREDIBLE JOB. YOUR GROWTH IS OUTSTANDING, AND FUTURE UNLIMITED!!!” he continued.
The question that remains, however, is whether Trump will actually go back to Facebook or Instagram. As many have noted, the two were never his main platforms. Twitter has always been his preferred outlet, and while Elon Musk reinstated his account some time ago, Trump has not been posting on the site.
There is also the question of how Truth Social — which Trump created and put millions of dollars into — would survive if he went back to Meta’s platforms. The company is already struggling financially, and as Axios notes, a move back would signal to investors that Trump is not confident in Truth Social.
On the other hand, Trump’s lawyers formally petitioned Meta to reinstate him, which could indicate that this goes beyond just a symbolic win and is something he actually wants. Additionally, if he were to start engaging on Facebook and Instagram again, it would immediately give him access to his over 57 million followers across the two platforms while he continues his 2024 presidential campaign.
See what others are saying: (NPR) (Axios) (The New York Times)
Meta Encouraged to Change Nudity Policy in Potential Win for Free the Nipple Movement
The company’s oversight board said Meta’s current rules are too confusing to follow, and new guidelines should be developed to “respect international human rights standards.”
Rules Based in “A Binary View of Gender”
In a move many have described as a big step for Free the Nipple advocates, Meta’s oversight board released a decision Tuesday encouraging the company to modify its nudity and sexual activity policies so that social media users are treated “without discrimination on the basis of sex or gender.”
The board—which consists of lawyers, journalists, and academics—said the parent company of Facebook and Instagram should change its guidelines “so that it is governed by clear criteria that respect international human rights standards.”
Its decision came after a transgender and nonbinary couple had two different posts removed for alleged violations of Meta’s Sexual Solicitation Community Standard. Both posts included images of the couple bare-chested with their nipples covered along with captions discussing transgender healthcare, as they were fundraising for one of them to undergo top surgery.
Both posts, one from 2021 and another from 2022, were taken down after users reported them and Meta’s own automated system flagged them. The posts were restored after an appeal, but the oversight board stated that their initial removal highlights faults in the company’s policies.
“Removing these posts is not in line with Meta’s Community Standards, values or human rights responsibilities,” the board said in its decision.
According to the board, Meta’s sexual solicitation policy is too broad and creates confusion for social media users. The board also said the policy is “based on a binary view of gender and a distinction between male and female bodies.”
“Such an approach makes it unclear how the rules apply to intersex, non-binary and transgender people, and requires reviewers to make rapid and subjective assessments of sex and gender, which is not practical when moderating content at scale,” the decision continued.
Free the Nipple Movement
The board stated that the rules get especially confusing regarding female nipples, “particularly as they apply to transgender and non-binary people.”
While there are exceptions to Meta’s rules, including posts in medical or health contexts, the board said that these exceptions are “often convoluted and poorly defined.”
“The lack of clarity inherent in this policy creates uncertainty for users and reviewers, and makes it unworkable in practice,” the decision said.
The board recommended that Meta change how it manages nudity on its platforms. The group also requested that Meta provide more details regarding what content specifically violates its Sexual Solicitation Community Standard.
For over a decade, Meta’s nudity policies have been condemned by many activists and users for strictly censoring female bodies. The Free the Nipple movement was created to combat rules that prevent users from sharing images of a bare female chest, but still allow men to freely post topless photos of themselves.
Big names including Rihanna, Miley Cyrus, and Florence Pugh have advocated for Free the Nipple.
Meta now has 60 days to respond to the board’s recommendations. In a statement to The New York Post, a spokesperson for the company said Meta is “constantly evaluating our policies to help make our platforms safer for everyone.”
See what others are saying: (Mashable) (The New York Post) (Oversight Committee Decision)
Amazon Labor Union Receives Official Union Certification
The company already plans to appeal the decision.
Amazon Labor Union’s Victory
The National Labor Relations Board on Wednesday certified the results of the Amazon Labor Union’s (ALU) April election at a Staten Island warehouse, despite Amazon’s objections.
After Staten Island staffers won the vote to unionize by 500 votes in the spring of 2022, Amazon quickly filed a slew of objections, claiming that the ALU had improperly influenced the election. Amazon pushed for the results to be overturned.
Now, the National Labor Relations Board has dismissed Amazon’s allegations and certified the election. This certification gives legitimacy to the ALU and puts Amazon in a position to be penalized should it decline to bargain with the union in good faith.
“We’re demanding that Amazon now, after certification, meet and bargain with us,” ALU attorney Seth Goldstein said to Motherboard regarding the certification. “We’re demanding bargaining, and if we need to, we’re going to move to get a court order enforcing our bargaining rights. It’s outrageous that they’ve been violating federal labor [law] while they continue to do so.”
Negotiate or Appeal
Amazon has until Jan. 25 to begin bargaining with the ALU, or the online retailer can appeal the decision by the same deadline. The company has already announced its plan to appeal.
“As we’ve said since the beginning, we don’t believe this election process was fair, legitimate, or representative of the majority of what our team wants,” Amazon spokesperson Kelly Nantel said in a statement.
This win comes after two recent defeats in ALU’s unionization efforts. The union lost an election at a facility in Albany and another in Staten Island.
ALU’s director Chris Smalls told Yahoo! Finance that he is unconcerned about these losses.
“For us, whatever campaign is ready to go, the Amazon Labor Union is going to throw their support behind it, no matter what…We know that it’s going to take collective action for Amazon to come to the table,” he told the outlet. “So, for us, it’s never unsuccessful. These are growing pains, and we’re going to fight and continue to grow.”