Deepfake App Pulled After Widespread Backlash

  • Many were outraged this week over a desktop app called DeepNude, which let users digitally remove clothing from pictures of women to make them appear naked.
  • Vice’s tech publication Motherboard published an article in which it tested the app’s capabilities on pictures of celebrities and found that it only works on women.
  • Motherboard described the app as “easier to use, and more easily accessible than deepfakes have ever been.”
  • The app’s developers later pulled it from sale after much criticism, but the new technology has reignited debate about the need for social media companies and lawmakers to regulate and moderate deepfakes.

The New Deepfake App

Developers have pulled DeepNude, a new desktop app that used deepfake technology to remove clothing from pictures of women and make them appear naked.

The app was removed after an article published by Vice’s tech publication Motherboard raised concerns over the technology.

Motherboard downloaded and tested the app on more than a dozen pictures of both men and women. The outlet found that while the app does work on fully clothed women, it works best on images where people are already showing more skin.

“The results vary dramatically,” the article said. “But when fed a well lit, high resolution image of a woman in a bikini facing the camera directly, the fake nude images are passably realistic.”

The article also contained several of the images Motherboard tested, including photos of celebrities like Taylor Swift, Tyra Banks, Natalie Portman, Gal Gadot, and Kim Kardashian. The pictures were later removed from the article. 

Motherboard reported that the app explicitly only works on women. “When Motherboard tried using an image of a man,” the outlet wrote, “it replaced his pants with a vulva.”

Motherboard emphasized how frighteningly accessible the app is. “DeepNude is easier to use, and more easily accessible than deepfakes have ever been,” they reported. 

Anyone can get the app for free, or they can purchase a premium version. Motherboard reported that the premium version costs $50, but a screenshot published by The Verge indicated that it was $99.

In the free version, the output image is partly covered by a watermark. In the paid version, the watermark is removed but there is a stamp that says “FAKE” in the upper-left corner.

However, as Motherboard notes, it would be extremely easy to crop out the “FAKE” stamp or remove it with Photoshop.

On Thursday, the day after Motherboard published the article, DeepNude announced on its Twitter account that it had pulled the app.

“Despite the safety measures adopted (watermarks) if 500,000 people use it, the probability that people will misuse it is too high,” the statement said. “We don’t want to make money this way. Surely some copies of DeepNude will be shared on the web, but we don’t want to be the ones who sell it.”

“The world is not yet ready for DeepNude,” the statement concluded. The DeepNude website has now been taken down.

Where Did It Come From?

According to the Twitter account for DeepNude, the developers launched downloadable versions of the app for Windows and Linux on June 23.

After a few days, the app’s developers had to take the website offline because it was receiving too much traffic, according to DeepNude’s Twitter account.

Currently, it is unclear who these developers are or where they are from. Their Twitter account lists their location as Estonia, but does not provide more information.

Motherboard was able to reach the app’s anonymous creator, who asked to go by the name Alberto, by email. Alberto said that the app’s software is based on pix2pix, an open-source algorithm developed by researchers at UC Berkeley in 2017.

That algorithm is similar to the ones used to make deepfake videos, and, oddly enough, it is also similar to the technology that self-driving cars use to generate simulated driving scenarios.

Alberto told Motherboard that the algorithm only works on women because “images of nude women are easier to find online,” but he said he wants to make a male version too.

Alberto also told Motherboard that during development, he asked himself whether it was morally questionable to make the app, but he ultimately decided it was not because he believed the app’s invention was inevitable.

“I also said to myself: the technology is ready (within everyone’s reach),” Alberto told Motherboard. “So if someone has bad intentions, having DeepNude doesn’t change much… If I don’t do it, someone else will do it in a year.”

The Need for Regulation

This inevitability argument is one that has been discussed often in the debates surrounding deepfakes.

It also goes along with the idea that even if these deepfakes are banned by Pornhub and Reddit, they will simply pop up elsewhere. These kinds of arguments are also an important part of the discussion of how to detect and regulate deepfakes.

Motherboard showed the DeepNude app to Hany Farid, a computer science professor at UC Berkeley who is an expert on deepfakes. Farid said that he was shocked by how easily the app created the fakes.

Usually, deepfake videos take hours to make. By contrast, DeepNude only takes about 30 seconds to render these images.

“We are going to have to get better at detecting deepfakes,” Farid told Motherboard. “In addition, social media platforms are going to have to think more carefully about how to define and enforce rules surrounding this content.”

“And, our legislators are going to have to think about how to thoughtfully regulate in this space.”

The Role of Social Media

The need for social media platforms and politicians to regulate this kind of content has become an increasingly prominent theme in the discussion about deepfakes.

Over the last few years, deepfakes have spread internationally, but laws and regulations have been unable to keep up with the technology.

On Wednesday, during a conversation at the Aspen Ideas Festival, Facebook CEO Mark Zuckerberg said that his company is looking into ways to deal with deepfakes.

He did not say exactly how Facebook is doing this, but he did say that the problem from his perspective was how deepfakes are defined.

“Is it AI-manipulated media or manipulated media using AI that makes someone say something they didn’t say?” Zuckerberg said. “I think that’s probably a pretty reasonable definition.”

However, that definition is also exceptionally narrow. Facebook recently received significant backlash after it decided not to take down a controversial video of Nancy Pelosi that had been slowed down to make her appear drunk or impaired.

Zuckerberg argued that the video should be left up because it is better to show people fake content than to hide it. However, experts worry that this kind of thinking could set a dangerous precedent for deepfakes.

The Role of Lawmakers

On Monday, lawmakers in California proposed a bill that would ban deepfakes in the state. The assemblymember who introduced the bill said he did so because of the Pelosi video.

On the federal level, similar efforts to regulate deepfake technology have stalled.

Separate bills to criminalize deepfakes have been introduced in both the House and the Senate, but both have only been referred to committees, and it is unclear whether lawmakers have even discussed them.

However, even if these bills do move forward, they face numerous legal hurdles. Carrie Goldberg, an attorney whose law firm specializes in revenge porn cases, spoke to Motherboard about these issues.

“It’s a real bind,” said Goldberg. “Deepfakes defy most state revenge porn laws because it’s not the victim’s own nudity depicted, but also our federal laws protect the companies and social media platforms where it proliferates.”

However, the article’s author, Samantha Cole, also argued that the political narratives around deepfakes leave out the women victimized by them.

“Though deepfakes have been weaponized most often against unconsenting women, most headlines and political fear of them have focused on their fake news potential,” she wrote.

That idea of deepfakes being “fake news” or disinformation seems to be exactly how Zuckerberg and Facebook are orienting their policies.

Moving forward, many feel that policy discussions about deepfakes should also consider how the technology disproportionately affects women and can be tied to revenge porn.

See what others are saying: (Vice) (The Verge) (The Atlantic)

FDA Recalls 11,000 Ice Cream Containers and Sportmix Pet Food Products

  • Over 11,000 cartons of Weis Markets ice cream were recalled after a customer discovered an “intact piece of metal equipment” inside a 48-ounce container of the brand’s Cookies and Cream flavor. 
  • The FDA also expanded a recall of Sportmix pet food over concerns that the products may contain potentially fatal levels of aflatoxins.
  • So far, more than 70 dogs have died and more than 80 pets have become sick after eating Sportmix food. The agency recommends taking your pet to a veterinarian if they have eaten the recalled products, even if they aren’t showing symptoms.

Metal Pieces in Weis Ice Cream Cause Massive Recall

The Food and Drug Administration announced two major product recalls this week following serious consumer complaints.

The first came Sunday when the agency revealed that over 11,000 cartons of Weis Markets ice cream were recalled. “The products may be contaminated with extraneous material, specifically metal filling equipment parts,” the FDA’s statement explained.

At least one customer discovered an “intact piece of metal equipment” inside a 48-ounce container of the brand’s Cookies and Cream flavor.

Those containers were available in 197 Weis Markets grocery stores, but they have already been pulled from shelves. The products have a sell-by date of October 21, 2020, and customers who purchased the product can return it for a full refund.

Along with removing 10,869 units of the Cookies and Cream containers, the brand also recalled 502 three-gallon bulk containers of Klein’s Vanilla Dairy Ice Cream.

Those bulk containers were not for retail sale, but were instead sold to one retail establishment in New York and have since been removed.

Sportmix Recall Follows 70 Pet Deaths, 80 Illnesses

The second major recall came Tuesday when the FDA expanded a recall of Sportmix dog food.

According to the agency, the product may contain potentially fatal levels of aflatoxins – toxins produced by the Aspergillus flavus mold, which can grow on corn and other grains used as ingredients in pet food.

As of Tuesday, more than 70 pets have died and more than 80 have gotten sick after eating Sportmix pet food. Not all the cases have been officially confirmed as aflatoxin poisoning at this time, and this count may not reflect the total number of pets affected.

For now, the FDA is asking pet owners and veterinary professionals to stop using the affected Sportmix products that have an expiration date on or before July 9, 2022, and “05” in the date or lot code.

More detailed information about the recalled products can be found on the FDA’s announcement page.

Pets experiencing aflatoxin poisoning may have symptoms like sluggishness, loss of appetite, vomiting, jaundice, and/or diarrhea. In some cases, this toxicity can cause long-term liver issues without showing any symptoms. Because of this, pet owners are being advised to take their animals to a veterinarian if they have eaten the recalled products, even if they aren’t showing symptoms.

There is currently no evidence that pet owners who have handled the affected food are at risk of aflatoxin poisoning. Still, the FDA recommends washing your hands after handling pet food.

See what others are saying: (CNN) (USA TODAY) (PEOPLE)

Signal and Telegram Downloads Surge After WhatsApp Announces It Will Share Data With Facebook

  • Downloads for Signal and Telegram have skyrocketed in the last week, with the encrypted messaging apps boasting 7.5 million and 9 million new users, respectively.
  • The growth comes after WhatsApp said it will require almost all users to share personal data with its parent company Facebook.
  • It also comes after Parler’s shutdown and bans against President Trump from Twitter and Facebook, which prompted his supporters to turn specifically to Telegram.

Telegram and Signal See Big Boost

Downloads for the encrypted messaging apps Signal and Telegram have surged in the last week after WhatsApp announced that it will start forcing all users outside the E.U. and U.K. to share personal data with Facebook.

Last week, WhatsApp, which is owned by Facebook, told users that they must allow Facebook and its subsidiaries to collect their phone numbers, locations, and the phone numbers of their contacts, among other things.

Anyone who does not agree to the new terms by Feb. 8 will lose access to the messaging app. The move prompted many to call for people to delete WhatsApp and start using other services like Signal or Telegram.

Now, it appears those calls to use other encrypted messaging apps have been heard. According to data from app analytics firm Sensor Tower, Signal saw 7.5 million installs globally through the App Store and Google Play from Jan. 6 to Jan. 10 alone, marking a 4,200% increase from the previous week.

Meanwhile, Telegram saw even more downloads. Over the same period, it gained 9 million users, up 91% from the previous week. It was also the most downloaded app in the U.S.

WhatsApp responded to the exodus by attempting to clarify its new policy in a statement Monday.

“We want to be clear that the policy update does not affect the privacy of your messages with friends or family in any way,” the company said. “Instead, this update includes changes related to messaging a business on WhatsApp, which is optional, and provides further transparency about how we collect and use data.”

Other Causes of App Growth

Notably, some of the spike in Telegram downloads specifically comes from supporters of President Donald Trump flocking to alternative platforms after Parler was shut down and Trump was banned from Twitter and Facebook.

Far-right chat room membership on the platform has increased significantly in recent days, NBC News reported. Conversations in pre-existing chat rooms where white supremacist content has been shared for months have also increased since the pro-Trump insurrection at the U.S. Capitol last week.

According to the outlet, many of the president’s supporters have moved their operations to the app in large part because it has very lax community guidelines. Companies like Facebook and Twitter have recently cracked down on groups and users sharing incendiary content, known conspiracy theories, and attempting to organize events that could lead to violence.

There have been several documented instances of Trump supporters now using Telegram channels to discuss planned events and urge acts of direct violence. Per NBC, in one channel named “fascist,” users have called on others to “shoot politicians” and “encourage armed struggle.” A post explaining how to radicalize Trump supporters to become neo-Nazis also made the rounds on the “fascist” channel, among others.

Membership in one channel frequently used by members of the Proud Boys has grown by more than 10,000 in recent days, seemingly drawing users directly from Parler.

“Now that they forced us off the main platforms it doesn’t mean we go away, it just means we are going to go to places they don’t see,” a user posted in the chatroom, according to NBC.

See what others are saying: (NBC News) (Business Insider) (CNBC)

Pornhub Removes All Unverified User Uploads, Taking Down Most of Its Videos

  • Pornhub is now removing all videos that were not uploaded by verified users.
  • Before the massive purge, the site hosted around 13.5 million videos. As of Monday morning, there were only 2.9 million videos left. 
  • The move is part of a series of sweeping changes the company made days after The New York Times published a shocking op-ed detailing numerous instances of abuse on the site, including nonconsensual uploads of underage girls.
  • Following the article, numerous businesses cut ties with the company, including Mastercard and Visa, which both announced Thursday that they will not process any payments on the site.

Pornhub Purges Videos

Pornhub removed the vast majority of its existing videos Monday, just hours after the company announced that it would take down all existing videos uploaded by unverified users.

Before the new move was announced Sunday night, Pornhub hosted about 13.5 million videos, according to the number displayed on the site’s search bar. As of this writing, that search bar shows just over 2.9 million videos.

The decision comes less than a week after the company announced it would only allow video uploads from content partners and members of its Model program.

At the time, Pornhub claimed it made the decision following an independent review launched in April to eliminate illegal content. However, many speculated that it was actually in large part due to an op-ed published in The New York Times just days before. That piece, among other things, found that the site had been hosting videos of young girls uploaded without their consent, including content in which minors were raped or assaulted.

The article prompted a wave of backlash against Pornhub and calls for other businesses to cut ties with the company. On Thursday, both Visa and Mastercard announced that they would stop processing all payments on the site.

“Our investigation over the past several days has confirmed violations of our standards prohibiting unlawful content on their site,” Mastercard said in a statement.

Less than an hour later, Visa tweeted that it would also be suspending payments while it completed its own investigation.

Pornhub Claims It’s Being Targeted

However, in its blog post announcing the most recent decision, Pornhub claimed that it was being unfairly targeted.

Specifically, the company noted that Facebook’s own transparency report found 84 million instances of child sexual abuse content over the last three years. By contrast, a report by the third-party Internet Watch Foundation found 118 similar instances on Pornhub in the same time period.

Notably, the author of The Times piece, Nicholas Kristof, specifically said the Internet Watch Foundation’s findings represented a massive undercount, and that he was able to find hundreds of these kinds of videos on Pornhub in just half an hour.

Still, the site used the disputed numbers to point a finger at others.

“It is clear that Pornhub is being targeted not because of our policies and how we compare to our peers, but because we are an adult content platform,” the statement continued.

“Every piece of Pornhub content is from verified uploaders, a requirement that platforms like Facebook, Instagram, TikTok, YouTube, Snapchat and Twitter have yet to institute,” the company added. 

However, Pornhub’s implication that it is somehow more responsible because it only lets verified users post content is a flawed comparison. First of all, Pornhub is a platform created exclusively for porn, content that the social media companies it name-checked explicitly prohibit.

Second of all, the vast majority of people who use those platforms are not verified, and it would be impossible for a company like Facebook or YouTube to limit content to verified users without entirely undermining its own purpose.

Verification Concerns

Even beyond that, there are still questions about Pornhub’s verification process. According to its site, all someone needs to do to become verified is have a Pornhub account with an avatar and upload a selfie of themselves holding a piece of paper with their username and “Pornhub.com” written on it.

While the company did tell reporters the process would be made more thorough sometime next year, it did not provide any specific details, prompting questions about how exhaustive the verification process will ultimately be.

That question is highly important because, at least under its current policies, verification makes users eligible to monetize their videos through the ModelHub program.

If the new verification process is still weak or has loopholes, people could easily slip through the cracks and continue to profit. On the other side, however, there are also significant concerns among sex workers that if the process is too restrictive, they will be unable to make money on the platform.

That concern has already been exacerbated by some of the other actions taken since The Times article was published. For example, after Mastercard and Visa made their announcements, numerous sex workers and activists condemned the decision, saying it would seriously hurt porn performers’ ability to collect income, not just on Pornhub but on other platforms as well.

“By targeting Pornhub and successfully destroying the ability for independent creators to monetize their content, they have made it easier to remove payment options from smaller platforms too,” model Avalon Fey told Motherboard last week. “This has nothing to do with helping abused victims, and everything to do with hurting online adult entertainers to stop them from creating and sharing adult content.”  

Other performers also expressed similar concerns that the move could spill over to smaller platforms.

“I am watching to see if my OnlyFans will be their next target and sincerely hoping not,” amateur performer Dylan Thomas also told the outlet.

“Sex workers are scared by this change, despite not having uploaded any illegal content,” Fey continued, “because we have seen these patterns before and have had sites and payment processors permanently and unexpectedly shut down.”

See what others are saying: (Motherboard) (The Verge) (Bloomberg)
