- Many were outraged this week over a desktop app called DeepNude, which allowed users to remove clothing from pictures of women to make them look naked.
- Vice’s Motherboard published an article in which it tested the app’s capabilities on pictures of celebrities and found that it only works on women.
- Motherboard described the app as “easier to use, and more easily accessible than deepfakes have ever been.”
- The app’s developers later pulled it from sale after much criticism, but the new technology has reignited debate about the need for social media companies and lawmakers to regulate and moderate deepfakes.
The New Deepfake App
Developers have pulled a new desktop app called DeepNude that used deepfake technology to remove clothing from pictures of women, making them look naked.
The app was removed after an article published by Vice’s tech publication Motherboard expressed concerns over the technology.
Motherboard downloaded and tested the app on more than a dozen pictures of both men and women. They found that while the app does work on women who are fully clothed, it works best on images where people are already showing more skin.
“The results vary dramatically,” the article said. “But when fed a well lit, high resolution image of a woman in a bikini facing the camera directly, the fake nude images are passably realistic.”
The article also contained several of the images Motherboard tested, including photos of celebrities like Taylor Swift, Tyra Banks, Natalie Portman, Gal Gadot, and Kim Kardashian. The pictures were later removed from the article.
Motherboard reported that the app explicitly only works on women. “When Motherboard tried using an image of a man,” they wrote, “it replaced his pants with a vulva.”
Motherboard emphasized how frighteningly accessible the app is. “DeepNude is easier to use, and more easily accessible than deepfakes have ever been,” they reported.
Anyone can get the app for free, or they can purchase a premium version. Motherboard reported that the premium version costs $50, but a screenshot published by The Verge indicated that it was $99.
In the free version, the output image is partly covered by a watermark. In the paid version, the watermark is removed but there is a stamp that says “FAKE” in the upper-left corner.
However, as Motherboard notes, it would be extremely easy to crop out the “FAKE” stamp or remove it with Photoshop.
On Thursday, the day after Motherboard published the article, DeepNude announced on their Twitter account that they had pulled the app.
“Despite the safety measures adopted (watermarks) if 500,000 people use it, the probability that people will misuse it is too high,” the statement said. “We don’t want to make money this way. Surely some copies of DeepNude will be shared on the web, but we don’t want to be the ones who sell it.”
“The world is not yet ready for DeepNude,” the statement concluded. The DeepNude website has now been taken down.
Where Did it Come From?
According to the Twitter account for DeepNude, the developers launched downloadable software for the app for Windows and Linux on June 23.
A few days later, the app’s developers had to take the website offline because it was receiving too much traffic, according to DeepNude’s Twitter.
Currently, it is unclear who these developers are or where they are from. Their Twitter account lists their location as Estonia, but does not provide more information.
Motherboard was able to reach the anonymous creator by email, who requested to go by the name Alberto. Alberto told them that the app’s software is based on an open source algorithm called pix2pix that was developed by researchers at UC Berkeley back in 2017.
That algorithm is similar to those used for deepfake videos and, interestingly, to the technology that self-driving cars use to generate driving scenarios.
Alberto told Motherboard that the algorithm only works on women because “images of nude women are easier to find online,” but he said he wants to make a male version too.
Alberto also told Motherboard that during his development process, he asked himself if it was morally questionable to make the app, but ultimately decided it was not because he believed that the invention of the app was inevitable.
“I also said to myself: the technology is ready (within everyone’s reach),” Alberto told Motherboard. “So if someone has bad intentions, having DeepNude doesn’t change much… If I don’t do it, someone else will do it in a year.”
The Need for Regulation
This inevitability argument is one that has been discussed often in the debates surrounding deepfakes.
It also goes along with the idea that even if deepfakes are banned by Pornhub and Reddit, they will just pop up in other places. These kinds of arguments are also an important part of the discussion of how to detect and regulate deepfakes.
Motherboard showed the DeepNude app to Hany Farid, a computer science professor at UC Berkeley who is an expert on deepfakes. Farid said that he was shocked by how easily the app created the fakes.
Usually, deepfake videos take hours to make. By contrast, DeepNude only takes about 30 seconds to render these images.
“We are going to have to get better at detecting deepfakes,” Farid told Motherboard. “In addition, social media platforms are going to have to think more carefully about how to define and enforce rules surrounding this content.”
“And, our legislators are going to have to think about how to thoughtfully regulate in this space.”
The Role of Social Media
The need for social media platforms and politicians to regulate this kind of content has become increasingly prevalent in the discussion about deepfakes.
Over the last few years, deepfakes have spread internationally, but laws and regulations have been unable to keep up with the technology.
On Wednesday, during a conversation at the Aspen Ideas Festival, Facebook CEO Mark Zuckerberg said that his company is looking into ways to deal with deepfakes.
He did not say exactly how Facebook is doing this, but he did say that the problem from his perspective was how deepfakes are defined.
“Is it AI-manipulated media or manipulated media using AI that makes someone say something they didn’t say?” Zuckerberg said. “I think that’s probably a pretty reasonable definition.”
However, that definition is also exceptionally narrow. Facebook recently received significant backlash after it decided not to take down a controversial video of Nancy Pelosi that had been slowed down, making her appear drunk or impaired.
Zuckerberg argued that the video should be left up because it is better to show people fake content than to hide it. However, experts worry that kind of thinking could set a dangerous precedent for deepfakes.
The Role of Lawmakers
On Monday, lawmakers in California proposed a bill that would ban deepfakes in the state. The assemblymember who introduced the bill said he did so because of the Pelosi video.
On the federal level, similar efforts to regulate deepfake technology have been stalled.
Separate bills to criminalize deepfakes have been introduced in both the House and the Senate, but both have only been referred to committees, and it is unclear whether lawmakers have even discussed them.
However, even if these bills do move forward, they face a number of legal hurdles. Carrie Goldberg, an attorney whose law firm specializes in revenge porn cases, spoke to Motherboard about these issues.
“It’s a real bind,” said Goldberg. “Deepfakes defy most state revenge porn laws because it’s not the victim’s own nudity depicted, but also our federal laws protect the companies and social media platforms where it proliferates.”
However, the article’s author, Samantha Cole, also argued that the political narratives around deepfakes leave out the women victimized by them.
“Though deepfakes have been weaponized most often against unconsenting women, most headlines and political fear of them have focused on their fake news potential,” she wrote.
That idea of deepfakes being “fake news” or disinformation seems to be exactly how Zuckerberg and Facebook are orienting their policies.
Moving forward, many feel that policy discussions about deepfakes should also consider how the technology disproportionately affects women and can be tied to revenge porn.
See what others are saying: (Vice) (The Verge) (The Atlantic)
Tinder Plans to Roll Out Panic Button and Other Safety Features
- The popular dating app Tinder plans to unveil new in-app safety features for users who feel threatened during face-to-face meetups.
- Match Group, Tinder’s parent company, is investing in a safety platform called Noonlight, which tracks users’ locations and alerts local authorities if any issues arise.
- The safety tools are free to use and will be introduced to U.S. Tinder users at the end of the month.
- Match Group’s other dating apps will see the new features later this year.
Tinder’s New Features
Tinder is planning to add free in-app safety features for users whose dates go awry, including a panic button that can be pressed if something goes wrong, security check-ins, and an option to call authorities if needed.
Match Group, the Tinder parent company that also owns Hinge and OkCupid, is making these features possible by investing in the safety platform Noonlight. Noonlight tracks users’ locations and alerts local authorities if any concerns arise.
“I think a lot about safety, especially on our platforms, and what we can do to curtail bad behavior,” Match Group CEO Mandy Ginsberg told The Wall Street Journal, who first reported the story. “There are a lot of things we tell users to do. But if we can provide tools on top of that, we should do that as well.”
Before in-person dates, Tinder users will have the option to manually enter information into a tool linked to Noonlight, such as details about the other party and the timing of the date.
If at any point a user feels unsafe, they can press the alert button. Noonlight will then send a code for the user to enter. If the code isn’t entered, Noonlight will send a text. If the text goes unanswered, Noonlight will call the user. If the call is not answered or if the user confirms that they need assistance, Noonlight will alert local authorities and share the information previously entered with them, as well as the user’s location.
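The escalation sequence described above can be sketched as a simple decision chain. This is a hypothetical model for illustration only; the function and parameter names are assumptions, not Noonlight’s actual API.

```python
from enum import Enum, auto

class Outcome(Enum):
    """Possible end states of an alert."""
    CANCELED = auto()             # alert resolved as a false alarm
    AUTHORITIES_ALERTED = auto()  # local authorities are contacted

def escalate(code_entered: bool, text_answered: bool,
             call_answered: bool, needs_help: bool) -> Outcome:
    """Hypothetical model of the escalation chain: code prompt -> text -> call -> dispatch.

    Each step in the chain is reached only if the previous one went unanswered.
    """
    if code_entered:
        # Entering the code confirms the user is safe; alert canceled.
        return Outcome.CANCELED
    if text_answered:
        # User responded to the follow-up text; no dispatch.
        return Outcome.CANCELED
    if call_answered and not needs_help:
        # User answered the call and confirmed they are fine.
        return Outcome.CANCELED
    # Call went unanswered, or the user confirmed they need assistance.
    return Outcome.AUTHORITIES_ALERTED
```

The key design point the article describes is that every unanswered check-in escalates rather than times out silently, so a user who is incapacitated still gets help.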
Once the Noonlight tool is activated, Tinder users will also be able to add an emblem to their profiles indicating the additional protection they have opted into.
The new security measures will be introduced to U.S. Tinder users at the end of January, while other Match Group dating apps will see the features in the next few months.
Tinder is also currently testing a feature aimed at eliminating “catfishing,” in which users will be required to take photos in certain poses to prove that they look like the images they upload. Profiles that pass the test will get a blue checkmark to indicate they have been verified.
New Wave of Safety for Tech Platforms
While Tinder has previously monitored abusive language and images in in-app conversations, this is the first step it has taken toward regulating in-person interactions once users decide to meet up.
This step comes after multiple cases of sexual assault and other crimes that users have traced back to connections made through the app.
The dating app is following the lead of other platforms like Uber and Lyft, which have both rolled out additional security features in the wake of criticism for not doing enough to protect users from safety threats.
See what others are saying: (Wall Street Journal) (CNN) (The Verge)
Facial Recognition Technology on College Campuses
Facial recognition technology, better known by its acronym, FRT, has been a hot topic for nearly a decade. FRT has found its way into many fields, from Taylor Swift using it to identify stalkers at her concerts to police making quicker arrests by matching suspects’ faces against a database of mugshots. Every application of FRT has been contested in one way or another, but some of the most controversial places it is being used are college campuses.
Recently, an anti-FRT group named Fight for the Future launched the largest nationwide student campaign to demand that universities never use FRT on their campuses. There are multiple reasons why people love and despise FRT, and in this video, we’re going to show you both sides of the argument and why its use on college campuses is so controversial.
Angled Toilet Designed to Shorten Employees’ Bathroom Breaks Met With Criticism
- A British company, StandardToilet, has filed a patent for a toilet fixture designed with a downward-sloping seat.
- The product is meant to be uncomfortable to sit on for more than five minutes, in an effort to reduce bathroom breaks and increase employee productivity.
- StandardToilet also says their product will reduce bathroom lines in public spaces and serve better for people’s health.
- The company’s idea has been supported by some, but largely slammed by others who claim it promotes an unhealthy expectation of workplace productivity and is inconsiderate to a range of users with differing needs.
A New Type of Toilet
A British startup has developed a toilet designed to be uncomfortable to sit on for longer than five minutes in an effort to increase workplace productivity.
StandardToilet has filed a patent for a toilet fixture with a seating surface that slopes forward at an angle of 11 to 13 degrees. The company claims that this design will cut the time employees spend on bathroom breaks, allowing them to devote more minutes to work.
“In modern times, the workplace toilet has become private texting and social media usage space,” StandardToilet says on their website.
The company estimates that about £16 billion ($20.8 billion) is lost annually in the U.K. to time people spend using the bathroom at work. It claims that reducing time spent sitting on the toilet would save about £4 billion of that sum.
Mahabir Gill, the founder of StandardToilet, told Wired that sitting on the angled fixture for more than five minutes will cause strain on the legs, but “not enough to cause health issues.”
“Anything higher than that would cause wider problems,” Gill said. “Thirteen degrees is not too inconvenient, but you’d soon want to get off the seat quite quickly.”
StandardToilet says that in addition to increasing employee productivity, their design will shorten bathroom lines in public places such as shopping malls and train stations.
They also claim studies have suggested that today’s flat-surfaced toilets can cause medical issues, like swollen haemorrhoids and weakened pelvic muscles. The company says its product can reduce musculoskeletal disorders “through promoting the engagement of upper leg muscles.”
Response to StandardToilet
While news of the proposed time-saving toilet has won support from some, like the British Toilet Association (BTA), an organization that campaigns for better toilet facilities, it has largely been met with criticism. Jennifer Kaufmann-Buhler, an assistant professor of design history at Purdue University in Indiana, said the idea struck her as controlling.
“In an office, the one space you have where you can find privacy is often the toilet,” Kaufmann-Buhler told Wired. “So, god forbid that we want to make the one place where workers should have at least some autonomy – the toilet – another place where people impose the very capitalist idea that people should always be working.”
Kaufmann-Buhler’s sentiment was echoed across Twitter, where people were upset by StandardToilet’s motive.
Pls explain to me how this isn’t abuse of employees. I’m actually a manager and I don’t see how taking a 7 or 8 minute dump is a problem. Also what if your sick? Or on a break?— don capone (@ucantcme13) December 18, 2019
Hey gotta squeeze every second of productivity out of your worker bees. God forbid they should have a few moments to themselves.— second nature (@second_nature) December 19, 2019
Others pointed out the discomfort StandardToilet’s design would bring to those with physical disabilities.
The company told HuffPost in an email that the product isn’t designed to take the place of toilets for people with disabilities. StandardToilet’s website also notes that another benefit of the slanted toilet is “reduction in overspill usage of disabled facilities.”
Nadine Vogel is the CEO of Springboard Consulting, a company that works with other businesses on how to serve workers with disabilities. She noted to HuffPost that there are other kinds of hindrances that might justify more time in the bathroom.
Vogel brought up examples of diabetic people testing their glucose levels or others simply needing a break for their mental health.
“The fact that the concern is extended employee breaks ― well, what about people that have some kind of mental health situation that actually need that kind of longer break?” Vogel said.