- Many were outraged this week over a desktop app called DeepNude, which allowed users to remove clothing from pictures of women to make them appear naked.
- Vice’s Motherboard published an article in which it tested the app’s capabilities on pictures of celebrities and found that it only works on women.
- Motherboard described the app as “easier to use, and more easily accessible than deepfakes have ever been.”
- The app’s developers later pulled it from sale after much criticism, but the technology has reignited debate about the need for social media companies and lawmakers to regulate and moderate deepfakes.
The New Deepfake App
Developers have pulled a new desktop app called DeepNude that used deepfake technology to remove clothing from pictures of women to make them appear naked.
The app was removed after an article published by Vice News’ tech publication Motherboard raised concerns about the technology.
Motherboard downloaded and tested the app on more than a dozen pictures of both men and women. They found that while the app does work on women who are fully clothed, it works best on images where people are already showing more skin.
“The results vary dramatically,” the article said. “But when fed a well lit, high resolution image of a woman in a bikini facing the camera directly, the fake nude images are passably realistic.”
The article also contained several of the images Motherboard tested, including photos of celebrities like Taylor Swift, Tyra Banks, Natalie Portman, Gal Gadot, and Kim Kardashian. The pictures were later removed from the article.
Motherboard reported that the app explicitly only works on women. “When Motherboard tried using an image of a man,” they wrote, “it replaced his pants with a vulva.”
Motherboard emphasized how frighteningly accessible the app is. “DeepNude is easier to use, and more easily accessible than deepfakes have ever been,” they reported.
Anyone can get the app for free or purchase a premium version. Motherboard reported that the premium version costs $50, but a screenshot published in The Verge indicated that it was $99.
In the free version, the output image is partly covered by a watermark. In the paid version, the watermark is removed but there is a stamp that says “FAKE” in the upper-left corner.
However, as Motherboard notes, it would be extremely easy to crop out the “FAKE” stamp or remove it with Photoshop.
On Thursday, the day after Motherboard published the article, DeepNude announced on its Twitter account that it had pulled the app.
“Despite the safety measures adopted (watermarks) if 500,000 people use it, the probability that people will misuse it is too high,” the statement said. “We don’t want to make money this way. Surely some copies of DeepNude will be shared on the web, but we don’t want to be the ones who sell it.”
“The world is not yet ready for DeepNude,” the statement concluded. The DeepNude website has now been taken down.
Where Did it Come From?
According to the Twitter account for DeepNude, the developers launched downloadable software for the app for Windows and Linux on June 23.
After a few days, the app’s developers had to take the website offline because it was receiving too much traffic, according to DeepNude’s Twitter account.
Currently, it is unclear who these developers are or where they are from. Their Twitter account lists their location as Estonia, but does not provide more information.
Motherboard was able to reach the anonymous creator by email, who requested to go by the name Alberto. Alberto told them that the app’s software is based on an open source algorithm called pix2pix that was developed by researchers at UC Berkeley back in 2017.
That algorithm is similar to the ones used for deepfake videos and, interestingly, to the technology self-driving cars use to generate driving scenarios.
Alberto told Motherboard that the algorithm only works on women because “images of nude women are easier to find online,” but he said he wants to make a male version too.
Alberto also told Motherboard that during his development process, he asked himself if it was morally questionable to make the app, but ultimately decided it was not because he believed that the invention of the app was inevitable.
“I also said to myself: the technology is ready (within everyone’s reach),” Alberto told Motherboard. “So if someone has bad intentions, having DeepNude doesn’t change much… If I don’t do it, someone else will do it in a year.”
The Need for Regulation
This inevitability argument is one that has been discussed often in the debates surrounding deepfakes.
It also goes along with the idea that even if deepfakes are banned by Pornhub and Reddit, they will just pop up in other places. These kinds of arguments are also an important part of the discussion of how to detect and regulate deepfakes.
Motherboard showed the DeepNude app to Hany Farid, a computer science professor at UC Berkeley who is an expert on deepfakes. Farid said that he was shocked by how easily the app created the fakes.
Usually, deepfake videos take hours to make. By contrast, DeepNude only takes about 30 seconds to render these images.
“We are going to have to get better at detecting deepfakes,” Farid told Motherboard. “In addition, social media platforms are going to have to think more carefully about how to define and enforce rules surrounding this content.”
“And, our legislators are going to have to think about how to thoughtfully regulate in this space.”
The Role of Social Media
The need for social media platforms and politicians to regulate this kind of content has become an increasingly prominent part of the discussion about deepfakes.
Over the last few years, deepfakes have spread internationally, but laws and regulations have been unable to keep up with the technology.
During a conversation at the Aspen Ideas Festival on Wednesday, Facebook CEO Mark Zuckerberg said that his company is looking into ways to deal with deepfakes.
He did not say exactly how Facebook is doing this, but he did say that the problem from his perspective was how deepfakes are defined.
“Is it AI-manipulated media or manipulated media using AI that makes someone say something they didn’t say?” Zuckerberg said. “I think that’s probably a pretty reasonable definition.”
However, that definition is also exceptionally narrow. Facebook recently received significant backlash after it decided not to take down a controversial video of Nancy Pelosi that had been slowed down to make her appear drunk or impaired.
Zuckerberg argued that the video should be left up because it is better to show people fake content than to hide it. However, experts worry that such thinking could set a dangerous precedent for deepfakes.
The Role of Lawmakers
On Monday, lawmakers in California proposed a bill that would ban deepfakes in the state. The assemblymember who introduced the bill said he did so because of the Pelosi video.
On the federal level, similar efforts to regulate deepfake technology have been stalled.
Separate bills have been introduced in both the House and the Senate to criminalize deepfakes, but both bills have only been referred to committees, and it is unclear whether they have even been discussed by lawmakers.
However, even if these bills do move forward, they face numerous legal hurdles. Carrie Goldberg, an attorney whose law firm specializes in revenge porn cases, spoke to Motherboard about these issues.
“It’s a real bind,” said Goldberg. “Deepfakes defy most state revenge porn laws because it’s not the victim’s own nudity depicted, but also our federal laws protect the companies and social media platforms where it proliferates.”
However, Samantha Cole, the author of the Motherboard article, also argued that the political narratives around deepfakes leave out the women victimized by them.
“Though deepfakes have been weaponized most often against unconsenting women, most headlines and political fear of them have focused on their fake news potential,” she wrote.
That idea of deepfakes being “fake news” or disinformation seems to be exactly how Zuckerberg and Facebook are orienting their policies.
Moving forward, many feel that policy discussions about deepfakes should also consider how the technology disproportionately affects women and can be tied to revenge porn.
See what others are saying: (Vice) (The Verge) (The Atlantic)
TikTok Suppressed Content From “Ugly,” Poor, and Disabled Users, Report Says
- A report from The Intercept claimed that in an effort to attract new users, TikTok had policies in place for its moderators to suppress content from users deemed “ugly,” poor, or disabled.
- The documents also showed that TikTok outlined bans to be placed on users who criticized “political or religious leaders” or “endangered national honor.”
- Sources said the policies were created last year and were in use as recently as the end of 2019.
- A TikTok spokesperson said the majority of the guidelines were never in use or are no longer in use, but the ones targeting users’ appearances were aimed at preventing bullying.
- However, the documents reviewed by The Intercept do not explicitly mention anti-bullying efforts.
Newly released documents reveal that TikTok directed its moderators to censor posts from users believed to be poor, disabled, or “ugly,” among other guidelines.
The leaked policies were first reported by The Intercept on Monday, exposing an inconsistency within the highly popular video-sharing app, whose tagline is “Real People. Real Videos.” Based on this recently exposed information, it seems TikTok only wants to funnel certain types of “real people” onto the “For You” feed, its page dedicated to promoting select content to its millions of users.
The Intercept noted that the documents appear to have originally been printed in Chinese — the language of the app’s home country — but had been translated into sometimes-choppy English for global distribution. Of the multiple pages of policies the news outlet posted, one outlines characteristics that the app considers undesirable such as “abnormal body shape, chubby, have obvious beer belly, obese, or too thin.”
The rules also encourage restrictions of “ugly facial looks” including wrinkles, noticeable scars, and physical disabilities. Criteria for the backgrounds of videos were also included in the policies, discouraging “shabby and dilapidated” environments including slums, dirty and messy settings, and old decorations.
As far as the reasoning for these guidelines, TikTok wrote: “If the character’s appearance or the shooting environment is not good, the video will be much less attractive, not [worthy] to be recommended to new users.”
A spokesperson for the app told The Verge that the guidelines reported by The Intercept are regional and “were not for the U.S. market.”
The other policies that The Intercept released detail more types of content that should be banned across the platform, including defamation or criticism of “civil servants, political or religious leaders,” as well as family members of these leaders. Moderators were instructed to punish any users who “endang[er] national honor” or distort “local or other countries’ history,” citing the May 1998 riots in Indonesia, the Cambodian genocide, and the Tiananmen Square incident as examples.
The Intercept reported that sources told them the policies were created last year and were in use until at least late 2019.
A spokesperson for the app told The Intercept that “most of” these exposed rules “are either no longer in use, or in some cases appear to never have been in place.”
The spokesperson also told the outlet that the policies geared toward suppressing disabled, seemingly impoverished, or unattractive users “represented an early blunt attempt at preventing bullying, but are no longer in place, and were already out of use when The Intercept obtained them.”
The platform has pointed to such intentions before: in December, TikTok admitted that at one point it prevented the spread of videos from disabled, LGBTQ, or overweight users, claiming it was an attempt to curb bullying.
A TikTok spokesperson told The Intercept that these newly released policies “appear to be the same or similar” to the ones revealed in December, but the guidelines published this week are notably different: they don’t mention anti-bullying motives and instead focus on how to appeal to more users.
Criticism of TikTok’s Moderation and App’s Response
TikTok has faced scrutiny in the past for appearing to censor certain content, including pro-democracy protests in Hong Kong and criticism of the Chinese government.
It’s also worth noting that the app has been under fire for its data-sharing policies; the U.S. government has even suggested it poses a national security threat.
TikTok said this week that it will stop using China-based moderators to review overseas content, noting that these employees hadn’t been monitoring content in U.S. regions.
And in further attempts to counter the criticism of their moderation tactics, TikTok announced last week that it plans to open a “transparency center” in Los Angeles in May. This center will allow outside observers to better understand how the platform moderates its content.
See what others are saying: (The Intercept) (The Verge) (Business Insider)
Expect Increased Post Removals While Social Media Sites Combat Coronavirus Misinformation
- Major tech companies like Google, Twitter, Reddit, and Facebook have pledged to work together to combat the spread of coronavirus misinformation.
- But as thousands of their employees shift to working from home, sites like YouTube and Twitter said they are relying more on automated enforcement systems.
- Because of this, users should expect delays in responses from support teams and a potential increase in posts removed by mistake.
Top social media and technology companies are teaming up to help fight off the online spread of fake news about the coronavirus.
As you’ve probably noticed, the internet has been heavily saturated with information about COVID-19 in recent weeks, some of it accurate and some of it not. The World Health Organization has labeled this phenomenon an “infodemic,” an over-abundance of information that makes it hard for people to find trustworthy sources and reliable guidance when they need it.
So to face this pressing issue, Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube released a joint statement Monday saying they are working closely together in their response efforts.
“We’re helping millions of people stay connected while also jointly combating fraud and misinformation about the virus, elevating authoritative content on our platforms, and sharing critical updates in coordination with government healthcare agencies around the world,” the companies said.
“We invite other companies to join us as we work to keep our communities healthy and safe.”
How Are They Doing This?
As far as how they plan to tackle misinformation, over the past few weeks, each company has announced and updated its own individual strategies.
Facebook and Instagram, for instance, already banned ads and listings selling medical face masks, with product director Robert Leathern promising more action if the company sees “people trying to exploit this public health emergency.”
On top of that, the sites rolled out automatic pop-up messages featuring information from the World Health Organization and other health authorities, among other measures.
Facebook COO Sheryl Sandberg even said that Facebook – which has a policy of not fact-checking political ads – would remove coronavirus misinformation shared by politicians, celebrities, and private groups.
Meanwhile, Reddit has set up a banner on its site linking to the r/coronavirus community for timely discussions and information from the Centers for Disease Control and Prevention. Reddit said it will hold AMA (Ask Me Anything) chats with public health experts but warned that it may also “apply a quarantine to communities that contains hoax or misinformation content.” A quarantine will remove the community from search results, warn the user that it may contain misinformation, and require an explicit opt-in.
Expect Issues, Especially on Twitter and YouTube
Twitter, on the other hand, said it will monitor tweets during the outbreak, but warned that it’s relying more on automated systems to help enforce rules while its employees practice social distancing and work from home.
“This might result in some mistakes,” the company said. “We’re meeting daily to see what changes we need to make.”
The platform stressed that it will not permanently suspend accounts based solely on automated enforcement systems. It also said it would review its rules in the context of COVID-19 and consider “the way in which they may need to evolve to account for new account behavior.”
Similarly, Google warned customers to expect some changes while its employees work remotely. In a blog post, it said all of its products will be active, but “some users, advertisers, developers and publishers may experience delays in some support response times for non-critical services, which will now be supported primarily through our chat, email, and self-service channels.”
YouTube specifically warned that there may actually be an increase in videos that are removed for policy violations because, like Twitter, they are depending more on automated systems.
“As a result of the new measures we’re taking, we will temporarily start relying more on technology to help with some of the work normally done by reviewers,” YouTube said in its blog post.
“This means automated systems will start removing some content without human review, so we can continue to act quickly to remove violative content and protect our ecosystem, while we have workplace protections in place.”
However, YouTube explained that it will only issue “strikes” against uploads where it has “high confidence” that the video violates its terms. Creators can still appeal content they feel was removed by error, but again, they should expect delays in responses.
The company also noted that it will be more cautious about what content gets promoted, including live streams. And in some cases, it said unreviewed content “may not be available via search, on the homepage, or in recommendations.”
See what others are saying: (CNBC) (TechCrunch) (Business Insider)
Internet Reacts to “Fleets,” Tweets that Disappear After 24 Hours
- On Wednesday, Twitter announced that it is testing a new feature in Brazil that allows users to publish content that will disappear after 24 hours.
- The temporary posts, called “Fleets,” were created in hopes that users will share more of their “fleeting thoughts.” Fleets may be rolled out in other countries later on, depending on how the test goes.
- Some are mocking the feature’s name, which matches the brand name of an enema. Others are disappointed that Twitter is rolling out this change as opposed to others.
- But some are excited about the new addition and think it is a good idea.
Twitter announced on Wednesday that it is testing a new feature that allows content to disappear after 24 hours, similar to the “stories” component across other social media platforms.
The temporary posts — called “fleets” — are text-based but can be accompanied by photos, videos, and GIFs. Fleets can be viewed by tapping on somebody’s profile picture, but they cannot be retweeted. Similar to Instagram stories, any replies or reactions to fleets are sent as direct messages to the creator instead of posted publicly.
Currently, the test is only available for Twitter users in Brazil. It was introduced there first because Brazil is one of the countries where people talk the most on the platform, according to Twitter product manager Mo Al Adham. Depending on how the test goes, it’s possible that fleets will be made available in other countries.
Kayvon Beykpour, the company’s product lead, revealed the rationale behind the new feature in a series of tweets on Wednesday.
“People often tell us that they don’t feel comfortable Tweeting because Tweets can be seen and replied to by anybody, feel permanent and performative,” Beykpour wrote.
“We’re hoping that Fleets can help people share the fleeting thoughts that they would have been unlikely to Tweet,” Beykpour added. “This is a substantial change to Twitter, so we’re excited to learn by testing it (starting with the rollout today in Brazil) and seeing how our customers use it.”
Fleets have the potential to ease users’ worries about what they post online, as old tweets have led people to lose jobs and be publicly slammed, but it’s still unclear exactly how low-risk these posts are. After reaching the end of their 24-hour life cycle, fleets will be kept by Twitter for a limited time in case of any rule violations.
“We’ll maintain a copy of fleets for a limited time after they are deleted to enforce any rule violations and so people can appeal enforcement actions,” Aly Pavela, a communications manager at Twitter, told Wired.
After this review period, Fleets will be deleted from the company’s systems, according to CNN. But this still raises the question of whether the disappearing content can simply be screenshotted and saved in that way, a detail Twitter hasn’t formally addressed yet.
Upon hearing of Twitter’s test, some were quick to crack jokes about the new feature’s name, which happens to match the brand name of a widely-used enema.
“Tw*tter moments are gonna be called fleets? like the enemas? why? cuz it’s shitty??? LOL,” one user wrote.
“Fleets? LoL That’s the brand name for an enema,” Sherree Worrell (@Sherree_W) tweeted on March 4, 2020.
As Fleet enemas are widely recognized among the LGBTQ community, several users questioned how the new feature’s name was greenlighted.
Twitter was quick to respond to the mockery, and a message from its communications team revealed that the company is indeed familiar with the name.
“Yes we know what fleets means. thanks – gay intern,” the team tweeted from their official account.
Others had a more serious response to the temporary posts feature, expressing their disappointment in the company for rolling this out instead of other changes that users have been requesting for years. On Wednesday night, the hashtag #RIPTwitter was trending.
However, some thought the idea was a good one that will boost the company’s success and engagement.