- Many were outraged this week over a desktop app called DeepNude, which allowed users to remove clothing from pictures of women to make them look naked.
- Vice’s Motherboard published an article in which it tested the app’s capabilities on pictures of celebrities and found that it only works on women.
- Motherboard described the app as “easier to use, and more easily accessible than deepfakes have ever been.”
- The app’s developers later pulled it from sale after much criticism, but the new technology has reignited debate about the need for social media companies and lawmakers to regulate and moderate deepfakes.
The New Deepfake App
Developers pulled a new desktop app called DeepNude that let users employ deepfake technology to remove clothing from pictures of women to make them look naked.
The app was removed after an article published by Vice’s tech publication Motherboard expressed concerns over the technology.
Motherboard downloaded and tested the app on more than a dozen pictures of both men and women. They found that while the app does work on pictures of fully clothed women, it works best on images where subjects are already showing more skin.
“The results vary dramatically,” the article said. “But when fed a well lit, high resolution image of a woman in a bikini facing the camera directly, the fake nude images are passably realistic.”
The article also contained several of the images Motherboard tested, including photos of celebrities like Taylor Swift, Tyra Banks, Natalie Portman, Gal Gadot, and Kim Kardashian. The pictures were later removed from the article.
Motherboard reported that the app explicitly only works on women. “When Motherboard tried using an image of a man,” they wrote, “it replaced his pants with a vulva.”
Motherboard emphasized how frighteningly accessible the app is. “DeepNude is easier to use, and more easily accessible than deepfakes have ever been,” they reported.
Anyone can get the app for free, or they can purchase a premium version. Motherboard reported that the premium version costs $50, but a screenshot published by The Verge indicated that it was $99.
In the free version, the output image is partly covered by a watermark. In the paid version, the watermark is removed but there is a stamp that says “FAKE” in the upper-left corner.
However, as Motherboard notes, it would be extremely easy to crop out the “FAKE” stamp or remove it with Photoshop.
On Thursday, the day after Motherboard published the article, DeepNude announced on their Twitter account that they had pulled the app.
“Despite the safety measures adopted (watermarks) if 500,000 people use it, the probability that people will misuse it is too high,” the statement said. “We don’t want to make money this way. Surely some copies of DeepNude will be shared on the web, but we don’t want to be the ones who sell it.”
“The world is not yet ready for DeepNude,” the statement concluded. The DeepNude website has now been taken down.
Where Did it Come From?
According to DeepNude’s Twitter account, the developers launched downloadable versions of the app for Windows and Linux on June 23.
A few days later, the app’s developers had to take the website offline because it was receiving too much traffic.
Currently, it is unclear who these developers are or where they are from. Their Twitter account lists their location as Estonia, but does not provide more information.
Motherboard was able to reach the anonymous creator by email, who requested to go by the name Alberto. Alberto told them that the app’s software is based on an open source algorithm called pix2pix that was developed by researchers at UC Berkeley back in 2017.
That algorithm is similar to the ones used for deepfake videos and, notably, to the technology self-driving cars use to simulate driving scenarios.
Alberto told Motherboard that the algorithm only works on women because “images of nude women are easier to find online,” but he said he wants to make a male version too.
Alberto also told Motherboard that during his development process, he asked himself if it was morally questionable to make the app, but ultimately decided it was not because he believed that the invention of the app was inevitable.
“I also said to myself: the technology is ready (within everyone’s reach),” Alberto told Motherboard. “So if someone has bad intentions, having DeepNude doesn’t change much… If I don’t do it, someone else will do it in a year.”
The Need for Regulation
This inevitability argument is one that has been discussed often in the debates surrounding deepfakes.
It also goes along with the idea that even if deepfakes are banned by Pornhub and Reddit, they will just pop up in other places. These kinds of arguments are also an important part of the discussion of how to detect and regulate deepfakes.
Motherboard showed the DeepNude app to Hany Farid, a computer science professor at UC Berkeley who is an expert on deepfakes. Farid said that he was shocked by how easily the app created the fakes.
Usually, deepfake videos take hours to make. By contrast, DeepNude only takes about 30 seconds to render these images.
“We are going to have to get better at detecting deepfakes,” Farid told Motherboard. “In addition, social media platforms are going to have to think more carefully about how to define and enforce rules surrounding this content.”
“And, our legislators are going to have to think about how to thoughtfully regulate in this space.”
The Role of Social Media
The need for social media platforms and politicians to regulate this kind of content has become an increasingly prominent part of the discussion about deepfakes.
Over the last few years, deepfakes have spread internationally, but laws and regulations have been unable to keep up with the technology.
On Wednesday, during a conversation at the Aspen Ideas Festival, Facebook CEO Mark Zuckerberg said that his company is looking into ways to deal with deepfakes.
He did not say exactly how Facebook is doing this, but he did say that the problem from his perspective was how deepfakes are defined.
“Is it AI-manipulated media or manipulated media using AI that makes someone say something they didn’t say?” Zuckerberg said. “I think that’s probably a pretty reasonable definition.”
However, that definition is also exceptionally narrow. Facebook recently received significant backlash after it decided not to take down a controversial video of Nancy Pelosi that had been slowed down to make her appear drunk or impaired.
Zuckerberg argued that the video should be left up because it is better to show people fake content than to hide it. However, experts worry that this kind of thinking could set a dangerous precedent for deepfakes.
The Role of Lawmakers
On Monday, lawmakers in California proposed a bill that would ban deepfakes in the state. The assemblymember who introduced the bill said he did so because of the Pelosi video.
On the federal level, similar efforts to regulate deepfake technology have been stalled.
Separate bills have been introduced in the House and the Senate to criminalize deepfakes, but both have only been referred to committees, and it is unclear whether they have even been discussed by lawmakers.
However, even if these bills do move forward, they face significant legal hurdles. Carrie Goldberg, an attorney whose law firm specializes in revenge porn cases, spoke to Motherboard about these issues.
“It’s a real bind,” said Goldberg. “Deepfakes defy most state revenge porn laws because it’s not the victim’s own nudity depicted, but also our federal laws protect the companies and social media platforms where it proliferates.”
However, the Motherboard article’s author, Samantha Cole, also argued that the political narratives around deepfakes leave out the women victimized by them.
“Though deepfakes have been weaponized most often against unconsenting women, most headlines and political fear of them have focused on their fake news potential,” she wrote.
That idea of deepfakes being “fake news” or disinformation seems to be exactly how Zuckerberg and Facebook are orienting their policies.
Moving forward, many feel that policy discussions about deepfakes should also consider how the technology disproportionately affects women and can be tied to revenge porn.
See what others are saying: (Vice) (The Verge) (The Atlantic)
Facebook to Pay $550 Million to Settle Facial Recognition Suit
- Facebook agreed to pay $550 million to settle a class-action lawsuit in Illinois claiming that its “Tag Suggestions” feature illegally harvested facial data from millions of users without their permission.
- Facebook disclosed the settlement while also announcing it made $21 billion last quarter.
- Some championed the settlement as a victory for consumer privacy rights.
- Others argued that no matter how much Facebook pays in lawsuits and settlements, the company has continued to grow and has not fundamentally changed its business practices.
Facebook Announces Settlement
Facebook announced Wednesday that it had agreed to pay $550 million to settle a class-action lawsuit involving facial recognition technology.
The lawsuit was filed in Illinois in 2015 and claimed that Facebook’s “Tag Suggestions” feature violated the state’s 2008 Biometric Information Privacy Act (BIPA).
The “Tag Suggestions” tool uses facial recognition software to scan users’ faces and then suggest the names of other users who might be in a picture.
The lawsuit alleged that Facebook used the tool to illegally harvest facial data from millions of users in Illinois without their permission and without telling them how the data was kept.
Illinois is one of three states with its own biometric privacy law, and BIPA is arguably the strongest of the three.
Under BIPA, companies that collect biometric data, which includes data from finger, face, and iris scans, must get prior consent from consumers and detail how the data will be used and how long the company will keep it. BIPA also allows private citizens to sue.
The lawsuit accused Facebook of failing to comply with those restrictions.
Facebook, for its part, argued that the people whose data it collected without consent could not prove that they experienced any concrete harm, like financial losses. However, the company still ultimately decided to settle.
Once the federal judge overseeing the case approves the settlement, people eligible to claim money are expected to receive a couple hundred dollars.
Other Settlements & Controversies
Many privacy experts and advocates applauded the settlement and said it was a victory for consumer privacy rights.
But others argued that the settlement does not really change anything because it is not a big deal for Facebook. While $550 million might seem like a lot, for Facebook it’s basically pocket change.
Even the way Facebook announced the settlement seemed to emphasize that point. The tech giant disclosed the settlement while announcing its financial results for 2019, reporting that revenue rose 25% to $21 billion in the last quarter alone.
Not only did that indicate how minor the Illinois settlement was for the company financially, it also showcased Facebook’s remarkable ability to weather scandals and controversy.
Over the last few years, Facebook has received a lot of backlash, largely over privacy concerns and the spread of misinformation on the platform.
Most recently Facebook has been under fire for its decision to essentially let politicians lie in political ads.
In July, the Federal Trade Commission (FTC) fined Facebook $5 billion over privacy violations, by far the largest fine the FTC has ever imposed on a tech company.
Facebook’s Continued Growth
But even in the face of massive financial costs and prominent controversies, Facebook still continues to grow.
In an article published by Axios, writer Sara Fischer described Facebook’s ability to keep growing despite those obstacles.
“Facebook closed out the second decade of the millennium stronger than ever,” she wrote. “Facebook’s continued ability to post double-digit revenue growth every year speaks to how well it has been able to innovate and adapt, even in the face of regulatory headwinds and increased competition.”
Fischer cited the example of North America and Europe, where Facebook has earned more money per user each year even though its user growth in those regions has stayed relatively stagnant.
She also mentioned the Illinois case, the FTC fine, and other growing privacy and advertising concerns that Facebook has warned its investors about.
“So far these fines have proven moot in getting the tech giant to fundamentally change its business, which continues to grow substantially,” she said.
While Facebook did agree to be more transparent about how it uses facial recognition technology as part of the FTC settlement, many are skeptical that the Illinois case will bring about any substantive change.
However, in an investor call following the release of Facebook’s earnings report Wednesday, CEO and founder Mark Zuckerberg said that he wanted to be more transparent about the company’s values.
“One critique of our approach for much of the last decade is that because we wanted to be liked, we didn’t want to communicate our views as clearly, because we worried about offending people,” he said.
“Our goal for the next decade isn’t to be liked, but understood. In order to be trusted, people need to know what we stand for.”
See what others are saying: (Axios) (The Verge) (The New York Times)
New 2020 Emoji Include Transgender Flag and More Gender-Inclusive Options
- Over 100 new emoji were revealed on Wednesday, set to be released sometime in 2020.
- The new additions will consist of 62 brand-new emoji as well as 55 gender and skin-tone variants.
- The transgender flag, a woman in a tuxedo, and a more gender-inclusive alternative to Mr. and Mrs. Santa Claus will be among the new options.
- Other emoji introduced include boba tea, a dodo bird, a smiley face with a tear, and an anatomical heart.
More than 100 new emoji will be available for mobile phone users this year, providing both fun new icons as well as more inclusive and diverse options.
The list was unveiled on Wednesday by the Unicode Consortium, an organization devoted to developing and maintaining software internationalization standards and data.
There will be 62 brand-new emoji as well as 55 gender and skin-tone variants, reflecting a push toward a more inclusive collection. Among the new icons will be the transgender symbol as well as the transgender pride flag, an idea proposed by advocates and artists with the help of Google and Microsoft.
In the same vein, more gender-inclusive options will arrive with this new wave. Both a woman and a non-binary figure in a tuxedo will soon be available, as well as a man and a non-binary figure in a wedding veil.
To complement the existing Mr. and Mrs. Claus options, a more gender-inclusive alternative, Mx. Claus, will be included as well.
There will also be new emoji depicting parents feeding a baby.
Other new emoji include a smiley face with a tear, two figures hugging, boba tea, and an anatomical heart. The animal section is getting a boost too, as a beaver, a seal, a polar bear, and even a dodo bird will be introduced.
The release date of the new emoji depends on each individual vendor, but Unicode Consortium noted that typically the new icons are rolled out in the fall.
Praise for New Emoji
After the new additions were revealed, many took to Twitter to express their joy about the more inclusive options.
“Incredible power in the new 2020 emojis,” one person wrote.