- Many were outraged this week over a desktop app called DeepNude, which allowed users to remove clothing from pictures of women to make them appear naked.
- Vice’s Motherboard published an article in which it tested the app’s capabilities on pictures of celebrities and found that it only works on women.
- Motherboard described the app as “easier to use, and more easily accessible than deepfakes have ever been.”
- The app’s developers later pulled it from sale after much criticism, but the new technology has reignited debate about the need for social media companies and lawmakers to regulate and moderate deepfakes.
The New Deepfake App
Developers pulled a new desktop app called DeepNude that let users apply deepfake technology to remove clothing from pictures of women to make them appear naked.
The app was removed after an article published by Vice News’ tech publication Motherboard raised concerns over the technology.
Motherboard downloaded and tested the app on more than a dozen pictures of both men and women. They found that while the app does work on women who are fully clothed, it works best on images where people are already showing more skin.
“The results vary dramatically,” the article said. “But when fed a well lit, high resolution image of a woman in a bikini facing the camera directly, the fake nude images are passably realistic.”
The article also contained several of the images Motherboard tested, including photos of celebrities like Taylor Swift, Tyra Banks, Natalie Portman, Gal Gadot, and Kim Kardashian. The pictures were later removed from the article.
Motherboard reported that the app explicitly only works on women. “When Motherboard tried using an image of a man,” they wrote, “it replaced his pants with a vulva.”
Motherboard emphasized how frighteningly accessible the app is. “DeepNude is easier to use, and more easily accessible than deepfakes have ever been,” they reported.
Anyone can get the app for free, or they can purchase a premium version. Motherboard reported that the premium version costs $50, but a screenshot published in The Verge indicated that it was $99.
In the free version, the output image is partly covered by a watermark. In the paid version, the watermark is removed but there is a stamp that says “FAKE” in the upper-left corner.
However, as Motherboard notes, it would be extremely easy to crop out the “FAKE” stamp or remove it with Photoshop.
On Thursday, the day after Motherboard published the article, DeepNude announced on their Twitter account that they had pulled the app.
“Despite the safety measures adopted (watermarks) if 500,000 people use it, the probability that people will misuse it is too high,” the statement said. “We don’t want to make money this way. Surely some copies of DeepNude will be shared on the web, but we don’t want to be the ones who sell it.”
“The world is not yet ready for DeepNude,” the statement concluded. The DeepNude website has now been taken down.
Where Did it Come From?
According to the Twitter account for DeepNude, the developers launched downloadable software for the app for Windows and Linux on June 23.
After a few days, the app’s developers had to take the website offline because it was receiving too much traffic, according to DeepNude’s Twitter.
Currently, it is unclear who these developers are or where they are from. Their Twitter account lists their location as Estonia, but does not provide more information.
Motherboard was able to reach the app’s anonymous creator, who requested to go by the name Alberto, by email. Alberto told them that the app’s software is based on an open-source algorithm called pix2pix that was developed by researchers at UC Berkeley in 2017.
That algorithm is similar to the ones used for deepfake videos and, oddly enough, to the technology that self-driving cars use to formulate driving scenarios.
Alberto told Motherboard that the algorithm only works on women because “images of nude women are easier to find online,” but he said he wants to make a male version too.
Alberto also told Motherboard that during his development process, he asked himself if it was morally questionable to make the app, but ultimately decided it was not because he believed that the invention of the app was inevitable.
“I also said to myself: the technology is ready (within everyone’s reach),” Alberto told Motherboard. “So if someone has bad intentions, having DeepNude doesn’t change much… If I don’t do it, someone else will do it in a year.”
The Need for Regulation
This inevitability argument is one that has been discussed often in the debates surrounding deepfakes.
It also goes along with the idea that even if these deepfakes are banned by Pornhub and Reddit, they will just pop up in other places. These kinds of arguments are also an important part of the discussion of how to detect and regulate deepfakes.
Motherboard showed the DeepNude app to Hany Farid, a computer science professor at UC Berkeley who is an expert on deepfakes. Farid said that he was shocked by how easily the app created the fakes.
Usually, deepfake videos take hours to make. By contrast, DeepNude only takes about 30 seconds to render these images.
“We are going to have to get better at detecting deepfakes,” Farid told Motherboard. “In addition, social media platforms are going to have to think more carefully about how to define and enforce rules surrounding this content.”
“And, our legislators are going to have to think about how to thoughtfully regulate in this space.”
The Role of Social Media
The need for social media platforms and politicians to regulate this kind of content has become increasingly prevalent in the discussion about deepfakes.
Over the last few years, deepfakes have become widespread internationally, but laws and regulations have been unable to keep up with the technology.
On Wednesday, during a conversation at the Aspen Ideas Festival, Facebook CEO Mark Zuckerberg said that his company is looking into ways to deal with deepfakes.
He did not say exactly how Facebook is doing this, but he did say that the problem from his perspective was how deepfakes are defined.
“Is it AI-manipulated media or manipulated media using AI that makes someone say something they didn’t say?” Zuckerberg said. “I think that’s probably a pretty reasonable definition.”
However, that definition is also exceptionally narrow. Facebook recently received significant backlash after it decided not to take down a controversial video of Nancy Pelosi that had been slowed down, making her appear drunk or impaired.
Zuckerberg argued that the video should be left up because it is better to show people fake content than to hide it. However, experts worry that this kind of thinking could set a dangerous precedent for deepfakes.
The Role of Lawmakers
On Monday, lawmakers in California proposed a bill that would ban deepfakes in the state. The assemblymember who introduced the bill said he did so because of the Pelosi video.
On the federal level, similar efforts to regulate deepfake technology have been stalled.
Separate bills have been introduced in both the House and the Senate to criminalize deepfakes, but both have only been referred to committees, and it is unclear whether they have even been discussed by lawmakers.
However, even if these bills do move forward, they face significant legal hurdles. Carrie Goldberg, an attorney whose law firm specializes in revenge porn cases, spoke to Motherboard about these issues.
“It’s a real bind,” said Goldberg. “Deepfakes defy most state revenge porn laws because it’s not the victim’s own nudity depicted, but also our federal laws protect the companies and social media platforms where it proliferates.”
However, the article’s author, Samantha Cole, also argued that the political narratives around deepfakes leave out the women victimized by them.
“Though deepfakes have been weaponized most often against unconsenting women, most headlines and political fear of them have focused on their fake news potential,” she wrote.
That idea of deepfakes being “fake news” or disinformation seems to be exactly how Zuckerberg and Facebook are orienting their policies.
Moving forward, many feel that policy discussions about deepfakes should also consider how the technology disproportionately affects women and can be tied to revenge porn.
See what others are saying: (Vice) (The Verge) (The Atlantic)
Uber Forks Over $19 Million Fine for Misleading Australian Riders
The penalty is just the latest in a string of lawsuits going back years.
Uber Gets Fined
Uber has agreed to pay a $19 million fine after being sued by the Australian Competition and Consumer Commission for making false or misleading statements in its app.
The first offense stems from a company policy that allows users to cancel their ride at no cost up to five minutes after the driver has accepted the trip. Despite the terms, between at least December 2017 and September 2021, more than two million Australians who wanted to cancel their rides were nevertheless warned that they might be charged a small fee for doing so.
Uber said in a statement that almost all of those users decided to cancel their trips despite the warnings.
The cancellation message has since been changed to: “You won’t be charged a cancellation fee.”
The second offense, occurring between June 2018 and August 2020, involved the company showing customers in Sydney inflated estimates of taxi fares on the app.
The commission said that Uber did not ensure the algorithm used to calculate the prices was accurate, leading to actual fares almost always being higher than estimated ones.
The taxi fare feature was removed in August 2020.
A Troubled Legal History
Uber has been sued for misleading its users or unfairly charging customers in the past.
In 2016, the company paid California-based prosecutors up to $25 million for misleading riders about the safety of its service.
An investigation at the time found that at least 25 of Uber’s approved drivers had serious criminal convictions including identity theft, burglary, child sex offenses and even one murder charge, despite background checks.
In 2017, the company also settled a lawsuit by the Federal Trade Commission (FTC) for $20 million after it misled drivers about how much money they could earn.
In November 2021, the Justice Department sued the company for allegedly charging disabled customers a wait-time fee even though they needed more time to get into the car, then refusing to refund them.
Later the same month, a class-action lawsuit in New York alleged that Uber charged riders a final price higher than the upfront price listed when they ordered the ride.
See what others are saying: (ABC) (NASDAQ) (Los Angeles Times)
Report Finds That Instagram Promotes Pro-Eating Disorder Content to 20 Million Users, Including Children
According to the study, even users hoping to recover were given eating disorder content because they were “still in Instagram’s algorithmically curated bubble.”
Instagram Promotes Eating Disorder Content
Instagram promotes pro-eating disorder content to millions of its users, including children as young as nine years old, according to a Thursday report from the child advocacy non-profit group Fairplay.
The report, titled “Designing for Disorder: Instagram’s Pro-eating Disorder Bubble,” studied what it called an eating disorder “bubble,” which consisted of nearly 90,000 accounts that reached 20 million unique users. The average age within the bubble was 19, but researchers found users as young as nine and 10 years old who followed three or more of these accounts. Roughly one-third of those in the bubble were underage.
According to Fairplay, Instagram’s parent company Meta derives $2 million in revenue a year from the bubble and another $228 million from those who follow it.
“In addition to being profitable, this bubble is also undeniably harmful,” the report said. “Algorithms are profiling children and teens to serve them images, memes and videos encouraging restrictive diets and extreme weight loss.”
“Meta’s pro-eating disorder bubble is not an isolated incident nor an awful accident,” it continued. “Rather it is an example of how, without appropriate checks and balances, Meta systematically puts profit ahead of young people’s safety and wellbeing.”
Researchers identified the bubble by first looking at 153 seed accounts with over 1,000 followers that posted content celebrating eating disorders. Some used phrases like “thinspiration” or other slang terms like “ana” and “mia” to refer to specific eating disorders. Others included an underweight body mass index in their bios.
Those seed accounts alone had roughly 2.3 million collective followers, 1.6 million of which were unique. Of those unique users, researchers looked at how many seed accounts each followed to determine that nearly 90,000 accounts were part of the eating disorder bubble. Those accounts totaled over 28 million followers, 20 million of which were unique.
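The counting approach described above can be sketched with a few set operations. This is a hypothetical illustration, not Fairplay's actual code; the three-seed threshold mirrors the "three or more" figure mentioned in the report, and the follower lists are toy data.

```python
# Hypothetical sketch of the report's counting approach: given each seed
# account's follower list, tally collective followers, unique followers,
# and users who follow enough seeds to count as part of the "bubble."
from collections import Counter

def bubble_stats(seed_followers, threshold=3):
    """seed_followers: list of follower-ID lists, one per seed account."""
    counts = Counter(uid for followers in seed_followers for uid in followers)
    total = sum(len(f) for f in seed_followers)   # collective followers
    unique = len(counts)                          # unique followers
    bubble = {uid for uid, n in counts.items() if n >= threshold}
    return total, unique, bubble

# Toy data: three seed accounts with overlapping followers
seeds = [["a", "b", "c"], ["b", "c", "d"], ["c", "d", "e"]]
total, unique, bubble = bubble_stats(seeds)
print(total, unique, sorted(bubble))  # → 9 5 ['c']
```

The same distinction between collective and unique followers explains how 2.3 million collective followers of the seed accounts reduce to 1.6 million unique users.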
These pages posted content ranging from memes and photos of extreme thinness to screenshots of progress on calorie counting apps. One user said they were on their third day of eating just 300 calories.
Others, including children under the age of 13, put their current weights and goal weights in their account bios. Some wrote that they “hate food” or were “starving for perfection.”
Content’s Impact on Children
Fairplay claimed that many of those in the bubble wanted to recover but were essentially trapped in Instagram’s algorithm.
“Many of the biographies of users in the bubble talk about wanting to or being in recovery, wanting to get ‘better’, to ‘heal’ or being aware of how unwell they were,” the report said. “However, these users are still in Instagram’s algorithmically curated bubble. They will still be feeding content from other accounts in the bubble, including the seed accounts, that normalizes, glamorizes or promotes eating disorders.”
The report also showcased the firsthand account of a 17-year-old eating disorder survivor and activist identified as Kelsey. Kelsey wrote that it was impossible to “imagine a time when the app didn’t have the sort of content that promotes disordered eating behavior.”
“I felt like my feed was always pushed towards this sort of content from the moment I opened my account,” Kelsey continued.
“That type of content at one point even got so normalized that prominent figures such as the Kardashians and other female and male influencers were openly promoting weight loss supplements and diet suppressors in order to help lose weight.”
Kelsey said Instagram delivered that content without any relevant searches, but posts about body positivity needed to be actively sought out.
The report concluded by arguing that there needs to be legislation that regulates platforms like Instagram by requiring them to prioritize user safety, particularly for children.
Meta and Instagram have long been accused of disregarding child safety. Last year, a whistleblower unveiled documents that revealed the company knew of the harm it posed to young people, specifically regarding body image. A Meta spokesperson told The Hill that they were unable to address the most recent allegations in Fairplay’s report.
“We’re not able to fully address this report because the authors declined to share it with us, but reports like this often misunderstand that completely removing content related to peoples’ journeys with or recovery from eating disorders can exacerbate difficult moments and cut people off from community,” the spokesperson said.
Etsy Sellers Strike Amid Increased Transaction Fees and Mandatory Offsite Advertising
“What began as an experiment in marketplace democracy has come to resemble a dictatorial relationship between a faceless tech empire and millions of exploited, majority-women craftspeople,” an Etsy seller wrote in a petition.
Thousands of Etsy Sellers Shut Down Shops
Roughly 15,000 Etsy sellers are closing their online shops starting Monday to protest several grievances they have with the platform, including a new fee increase.
Starting Monday, transaction fees on the platform are increasing from 5% to 6.5%. CEO Josh Silverman sent a memo claiming that the hike will allow the company to “make significant investments in marketing, seller tools, and creating a world-class customer experience,” but sellers have been frustrated by the change.
“Etsy’s last fee increase was in July 2018. If this new one goes through, our basic fees to use the platform will have more than doubled in less than four years,” seller Kristi Cassidy wrote in a petition calling for a strike. As of Monday morning, over 50,000 Etsy sellers, customers, and employees had signed the petition.
“These basic fees do not include additional fees for Offsite ads – which started during the first wave of the pandemic,” Cassidy continued.
Offsite ads allow Etsy to advertise sellers’ products on other websites like Google. Sellers who make over $10,000 a year reportedly have no way of opting out of the program and Etsy takes at least 12% of sales generated through the promotions.
“Etsy fees are an unpredictable expense that can take more than 20% of each transaction,” Cassidy wrote. “We have no control over how these ads are administered, or how much of our money is spent.”
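As a rough illustration of how the percentages quoted above stack up, here is a hypothetical calculation (not Etsy's actual billing logic) using the 6.5% transaction fee and the 12% offsite-ad fee; payment-processing and listing fees, which would push the total toward the 20% the petition cites, are not modeled.

```python
# Hypothetical illustration of how the quoted fees stack on a single sale.
def etsy_cut(sale, transaction_fee=0.065, offsite_ad_fee=0.12, ad_attributed=True):
    """Total platform cut on one sale, using the percentages cited in the petition."""
    cut = sale * transaction_fee               # 6.5% transaction fee
    if ad_attributed:                          # sale attributed to an offsite ad
        cut += sale * offsite_ad_fee           # at least 12% more
    return round(cut, 2)

print(etsy_cut(100.00))                        # → 18.5, i.e. 18.5% of the sale
print(etsy_cut(100.00, ad_attributed=False))   # → 6.5
```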
Etsy became a pandemic success story as online shopping rose amid lockdowns. Many turned to the platform to purchase masks and other goods, prompting its stock, sales, and number of sellers to rise.
“It’s really obnoxious to tell us sellers, ‘Hey, we made record profits last year and we’re gonna celebrate by raising your fees a whole bunch,’” Bella Stander, a maps and guidebooks publisher who sells on Etsy, told the Wall Street Journal.
What Etsy Sellers Are Demanding
Currently, there are over five million sellers on Etsy. Cassidy hopes that if enough of them unite, the company will have to respond.
“As individual crafters, makers and small businesspeople, we may be easy for a giant corporation like Etsy to take advantage of,” she wrote. “But as an organized front of people, determined to use our diverse skills and boundless creativity to win ourselves a fairer deal, Etsy won’t have such an easy time shoving us around.”
The petition’s list of demands asks that Etsy cancel the transaction fee increase, allow sellers to opt out of offsite ads, and provide a transparent plan to crack down on resellers who take up space on the platform.
It also demanded that Etsy end its “Star Seller Program,” which impacts how sellers can interact with their buyers.
“Etsy was founded with a vision of ‘keeping commerce human’ by ‘democratizing access to entrepreneurship.’ As a result, people who have been marginalized in traditional retail economies — women, people of color, LGBTQ people, neurodivergent people, etc. — make up a significant proportion of Etsy’s sellers,” Cassidy wrote.
“But as Etsy has strayed further and further from its founding vision over the years, what began as an experiment in marketplace democracy has come to resemble a dictatorial relationship between a faceless tech empire and millions of exploited, majority-women craftspeople.”
In a statement to Yahoo Finance, an Etsy spokesperson claimed that sellers were the company’s “top priority.”
“We are always receptive to seller feedback and, in fact, the new fee structure will enable us to increase our investments in areas outlined in the petition, including marketing, customer support, and removing listings that don’t meet our policies,” the spokesperson said. “We are committed to providing great value for our 5.3 million sellers so they are able to grow their businesses while keeping Etsy a beloved, trusted, and thriving marketplace.”
The strike was a trending topic on Twitter Monday morning. Many sellers took to the social media site to pledge their support to the movement.
Many sellers are urging buyers to refrain from using the site for the remainder of the week, which is how long the protest is currently scheduled to last.