
Deepfake App Pulled After Many Expressed Concerns


  • Many were outraged this week over a desktop app called DeepNude that allowed users to remove clothing from pictures of women to make them appear naked.
  • Vice’s tech publication Motherboard published an article in which it tested the app’s capabilities on pictures of celebrities and found that it only works on women.
  • Motherboard described the app as “easier to use, and more easily accessible than deepfakes have ever been.”
  • The app’s developers later pulled it from sale after much criticism, but the new technology has reignited debate about the need for social media companies and lawmakers to regulate and moderate deepfakes.

The New Deepfake App

Developers have pulled a new desktop app called DeepNude that used deepfake technology to remove clothing from pictures of women, making them appear naked.

The app was removed after an article published by Vice’s tech publication Motherboard raised concerns over the technology.

Motherboard downloaded and tested the app on more than a dozen pictures of both men and women. They found that while the app does work on women who are fully clothed, it works best on images where people are already showing more skin. 

“The results vary dramatically,” the article said. “But when fed a well lit, high resolution image of a woman in a bikini facing the camera directly, the fake nude images are passably realistic.”

The article also contained several of the images Motherboard tested, including photos of celebrities like Taylor Swift, Tyra Banks, Natalie Portman, Gal Gadot, and Kim Kardashian. The pictures were later removed from the article. 

Motherboard reported that the app explicitly only works on women. “When Motherboard tried using an image of a man,” they wrote, “it replaced his pants with a vulva.”

Motherboard emphasized how frighteningly accessible the app is. “DeepNude is easier to use, and more easily accessible than deepfakes have ever been,” they reported. 

Anyone can get the app for free, or they can purchase a premium version. Motherboard reported that the premium version costs $50, but a screenshot published by The Verge indicated that it was $99.

Source: The Verge

In the free version, the output image is partly covered by a watermark. In the paid version, the watermark is removed but there is a stamp that says “FAKE” in the upper-left corner.

However, as Motherboard notes, it would be extremely easy to crop out the “FAKE” stamp or remove it with Photoshop.

On Thursday, the day after Motherboard published the article, DeepNude announced on their Twitter account that they had pulled the app.

“Despite the safety measures adopted (watermarks) if 500,000 people use it, the probability that people will misuse it is too high,” the statement said. “We don’t want to make money this way. Surely some copies of DeepNude will be shared on the web, but we don’t want to be the ones who sell it.”

“The world is not yet ready for DeepNude,” the statement concluded. The DeepNude website has now been taken down.

Where Did It Come From?

According to the Twitter account for DeepNude, the developers launched downloadable software for the app for Windows and Linux on June 23.

After a few days, the app’s developers took the website offline because it was receiving too much traffic, according to DeepNude’s Twitter.

Currently, it is unclear who these developers are or where they are from. Their Twitter account lists their location as Estonia, but does not provide more information.

Motherboard was able to reach the app’s anonymous creator by email; he asked to go by the name Alberto. Alberto told them that the app’s software is based on pix2pix, an open source algorithm developed by researchers at UC Berkeley in 2017.

That algorithm is similar to the ones used to create deepfake videos and, curiously, to the technology self-driving cars use to simulate driving scenarios.
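For readers curious about the underlying technique: pix2pix is a conditional GAN that learns to translate one kind of image into another. Below is a minimal, illustrative PyTorch sketch of a scaled-down pix2pix-style generator. It is not DeepNude’s actual code; the class name, layer sizes, and architecture here are simplified assumptions for readability, and the real pix2pix generator is a much deeper U-Net trained adversarially against a discriminator.

import torch
import torch.nn as nn

# Illustrative sketch only, NOT DeepNude's code. pix2pix (Isola et al.,
# UC Berkeley, 2017) is a conditional GAN that translates one image into
# another; the layers below are simplified assumptions for readability.

class TinyPix2PixGenerator(nn.Module):
    """A scaled-down encoder-decoder with one U-Net skip connection."""

    def __init__(self):
        super().__init__()
        # Encoder: downsample the input photo into compact feature maps.
        self.enc1 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
        )
        self.enc2 = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2),
        )
        # Decoder: upsample back to a full-resolution output image.
        self.dec1 = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
        )
        # The skip connection concatenates encoder features (64 + 64 channels).
        self.dec2 = nn.Sequential(
            nn.ConvTranspose2d(128, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # output pixel values in [-1, 1]
        )

    def forward(self, x):
        e1 = self.enc1(x)                             # (N, 64, H/2, W/2)
        e2 = self.enc2(e1)                            # (N, 128, H/4, W/4)
        d1 = self.dec1(e2)                            # (N, 64, H/2, W/2)
        return self.dec2(torch.cat([d1, e1], dim=1))  # (N, 3, H, W)

# A trained generator maps an input photo to a translated output photo.
model = TinyPix2PixGenerator()
fake_photo = torch.randn(1, 3, 256, 256)  # stand-in for a real image
output = model(fake_photo)
print(output.shape)  # torch.Size([1, 3, 256, 256])

In the full system, the generator is trained against a discriminator that learns to distinguish real image pairs from generated ones, and that adversarial pressure is what pushes the outputs toward photorealism.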

Alberto told Motherboard that the algorithm only works on women because “images of nude women are easier to find online,” but he said he wants to make a male version too.

Alberto also told Motherboard that during development he asked himself whether making the app was morally questionable, but he ultimately decided it was not because he believed the app’s invention was inevitable.

“I also said to myself: the technology is ready (within everyone’s reach),” Alberto told Motherboard. “So if someone has bad intentions, having DeepNude doesn’t change much… If I don’t do it, someone else will do it in a year.”

The Need for Regulation

This inevitability argument is one that has been discussed often in the debates surrounding deepfakes.

It echoes the related idea that even if platforms like Pornhub and Reddit ban deepfakes, they will just pop up elsewhere. These kinds of arguments are also an important part of the discussion of how to detect and regulate deepfakes.

Motherboard showed the DeepNude app to Hany Farid, a computer science professor at UC Berkeley who is an expert on deepfakes. Farid said that he was shocked by how easily the app created the fakes.

Usually, deepfake videos take hours to make. By contrast, DeepNude only takes about 30 seconds to render these images.

“We are going to have to get better at detecting deepfakes,” Farid told Motherboard. “In addition, social media platforms are going to have to think more carefully about how to define and enforce rules surrounding this content.”

“And, our legislators are going to have to think about how to thoughtfully regulate in this space.”

The Role of Social Media

The need for social media platforms and politicians to regulate this kind of content has become an increasingly prominent theme in the discussion about deepfakes.

Over the last few years, deepfakes have spread internationally, but laws and regulations have been unable to keep up with the technology.

On Wednesday, during a conversation at the Aspen Ideas Festival, Facebook CEO Mark Zuckerberg said that his company is looking into ways to deal with deepfakes.

He did not say exactly how Facebook is doing this, but he did say that the problem from his perspective was how deepfakes are defined.

“Is it AI-manipulated media or manipulated media using AI that makes someone say something they didn’t say?” Zuckerberg said. “I think that’s probably a pretty reasonable definition.”

However, that definition is also exceptionally narrow. Facebook recently received significant backlash after it decided not to take down a controversial video of Nancy Pelosi that had been slowed down, making her appear drunk or impaired.

Zuckerberg said he argued that the video should be left up because it is better to show people fake content than to hide it. However, experts worry that this kind of thinking could set a dangerous precedent for deepfakes.

The Role of Lawmakers

On Monday, lawmakers in California proposed a bill that would ban deepfakes in the state. The assemblymember who introduced the bill said he did so because of the Pelosi video.

On the federal level, similar efforts to regulate deepfake technology have been stalled.

Separate bills have been introduced in both the House and the Senate to criminalize deepfakes, but both have only been referred to committees, and it is unclear whether they have even been discussed by lawmakers.

However, even if these bills do move forward, they face a number of legal hurdles. Carrie Goldberg, an attorney whose law firm specializes in revenge porn cases, spoke to Motherboard about these issues.

“It’s a real bind,” said Goldberg. “Deepfakes defy most state revenge porn laws because it’s not the victim’s own nudity depicted, but also our federal laws protect the companies and social media platforms where it proliferates.”

However, the article’s author, Samantha Cole, also argued that the political narratives around deepfakes leave out the women victimized by them.

“Though deepfakes have been weaponized most often against unconsenting women, most headlines and political fear of them have focused on their fake news potential,” she wrote.

That framing of deepfakes as “fake news” or disinformation seems to be exactly how Zuckerberg and Facebook are orienting their policies.

Moving forward, many feel that policy discussions about deepfakes should also consider how the technology disproportionately affects women and can be tied to revenge porn.

See what others are saying: (Vice) (The Verge) (The Atlantic)


Kim Kardashian to Pay $1.26 Million to SEC Over Unlawful Crypto Promotion


According to the agency, stars and influencers must disclose how much money they earn for crypto advertising.


Kardashian Pays Up

The U.S. Securities and Exchange Commission announced Monday that it has charged reality TV star Kim Kardashian for “unlawfully touting crypto security.”

Kardashian has agreed to pay $1.26 million in penalties, disgorgement, and interest while cooperating with the SEC’s investigation. The media mogul did not admit to or deny the SEC’s findings as part of the settlement, but she did agree to not promote crypto assets for three years. 

According to a statement from the SEC, federal regulators found that Kardashian “failed to disclose that she was paid $250,000 to publish a post on her Instagram account about EMAX tokens.”

“This case is a reminder that, when celebrities or influencers endorse investment opportunities, including crypto asset securities, it doesn’t mean that those investment products are right for all investors,” SEC Chair Gary Gensler said in a statement. 

The investigation stemmed from a post that Kardashian made on her Instagram story in the summer of 2021 promoting EthereumMax. In it, she asked her 330 million followers if they were interested in cryptocurrency and shared information about the coin. The post included a swipe-up link for users to get more information and potentially invest in it themselves.

While Kardashian did include a hashtag denoting the post as an ad, the SEC said that did not go far enough. In the agency’s statement, Gurbir S. Grewal, the Director of the SEC’s Division of Enforcement, explained that anyone advertising crypto assets “must disclose the nature, source, and amount of compensation they received in exchange for the promotion.”

A “Reminder” For Crypto Promoters 

As a result, the billionaire businesswoman is paying a $1 million penalty. On top of that, she must pay $260,000 in disgorgement, covering the $250,000 payment she received from EthereumMax plus interest.

Kardashian’s lawyer released a statement saying the star has “fully cooperated with the SEC from the very beginning.”

“She remains willing to do whatever she can to assist the SEC in this matter,” the statement continued. “She wanted to get this matter behind her to avoid a protracted dispute. The agreement she reached with the SEC allows her to do that so that she can move forward with her many different business pursuits.”

This is not the first time Kardashian’s EMAX post has landed her in hot water. A U.K. watchdog previously condemned her for shilling the coin, and she was sued earlier this year over allegations that she artificially inflated the coin’s value.

Gensler said that he hopes the charges from the SEC will serve as “a reminder to celebrities and others that the law requires them to disclose to the public when and how much they are paid to promote investing in securities.”

See what others are saying: (CNBC) (NPR) (Axios)


Misinformation Makes Up 20% of Top Search Results For Current Events on TikTok, New Research Finds


According to the report, the app “is consistently feeding millions of young users health misinformation, including some claims that could be dangerous to users’ health.”


Misinformation Thrives on TikTok

As TikTok becomes Gen Z’s favorite search engine, new research by journalism and tech group NewsGuard found that the video app frequently suggests misinformation to users searching for news-related topics. 

NewsGuard used TikTok’s search bar to look up trending news subjects like the 2020 election, COVID-19, the invasion of Ukraine, the upcoming midterms, abortion, school shootings, and more. It analyzed 540 videos, the top 20 results for each of 27 subject searches, and found false or misleading claims in 105 of those posts.

In other words, roughly 20% of the results (105 of 540, or about 19.4%) contained misinformation.

Some of NewsGuard’s searches contained neutral phrases and words like “2022 election” or “mRNA vaccine,” while others were loaded with more controversial language like “January 6 FBI” or “Uvalde TX conspiracy.” In many cases, those controversial phrases were suggested by TikTok’s own search bar. 

For example, the researchers noted that during a search on climate change, the suggestion “climate change debunked” showed up. While they were looking up COVID-19 vaccines, searches for “covid vaccine injury” and “covid vaccine exposed” were recommended.

Dangerous Results Regarding Health and More

The consequences of some of the false claims made in these videos can be severe. NewsGuard wrote in its report that the search engine “is consistently feeding millions of young users health misinformation, including some claims that could be dangerous to users’ health.”

Among the hordes of hazardous health claims were videos falsely suggesting that COVID-19 vaccines are toxic and cause permanent damage to organs. The report found that there are still several videos touting the anti-malarial drug hydroxychloroquine as a cure-all remedy, not just for COVID but for any illness.

Searches regarding herbal abortions were particularly troubling. While certain phrases like “mugwort abortion” were blocked, the researchers found several workarounds that led to multiple videos touting debunked DIY abortion remedies, which are not only ineffective but can also pose serious health risks.

NewsGuard claimed that the social media app vowed to remove this content in July, but “two months later, herbal abortion content continues to be easily accessible on the platform.”

Other standard forms of conspiracy fodder also occupied space in top search results, including claims that the Uvalde school shooting was planned and that the 2020 presidential election was stolen. 

TikTok’s Search Engine Vs. Google

As part of its research, NewsGuard compared TikTok’s search results and suggestions with Google and found that, by comparison, the latter “provided higher-quality and less-polarizing results, with far less misinformation.”

“For example, searching ‘covid vaccine’ on Google prompted ‘walk-in covid vaccine,’ ‘which covid vaccine is best,’ and ‘types of covid vaccines,’” NewsGuard wrote. “None of these terms was suggested by TikTok.”

This is significant because recent reports show that young Internet users have increasingly turned to TikTok as a search engine over Google. While this might yield safe results for pasta recipes and DIY tutorials, for people searching for current affairs, there could be significant consequences.

NewsGuard said that it flagged six videos containing misinformation to TikTok, and the social media app ended up taking those posts down. In a statement to Mashable, the company pledged to fight against misinformation on its platform. 

“Our Community Guidelines make clear that we do not allow harmful misinformation, including medical misinformation, and we will remove it from the platform,” the statement said. “We partner with credible voices to elevate authoritative content on topics related to public health, and partner with independent fact-checkers who help us to assess the accuracy of content.”

See what others are saying: (Mashable) (CNN) (USA Today)


Over 70 TikTok Creators Boycott Amazon as Workers Protest Conditions and Pay


As the company fends off pressure on both fronts, the Amazon Labor Union continues to back election petitions around the country, including one filed Tuesday in upstate New York.


Gen Z Goes to War With Amazon

More than 70 big TikTok creators have pledged not to work with Amazon until it gives in to union workers’ demands, including calls for higher pay, safer working conditions, and increased paid time off.

Twenty-year-old TikToker Elise Joshi, deputy executive director of Gen Z for Change, the advocacy group organizing the boycott, posted an open letter on Twitter on Tuesday.

“Dear Amazon.com,” it reads, “We are a coalition of over 70 TikTok creators with a combined following of 51 million people. Today, August 16th, 2022, we are joining together in solidarity with Amazon workers and union organizers through our People Over Prime Pledge.”

Amazon has refused to recognize the Amazon Labor Union (ALU) since workers voted to unionize at a Staten Island warehouse in April, and it has resisted collective bargaining negotiations.

Although the ALU is not involved in the boycott, its co-founder and interim President Chris Smalls expressed support for it in a statement to The Washington Post, saying, “It’s a good fight to take on because Amazon definitely is afraid of how we used TikTok during our campaigns.”

While the ALU posts videos on TikTok to drum up popular support for the labor movement, Amazon has sought to win large influencers over to its side. In 2017, it launched the Amazon Influencer Program, which offered influencers the opportunity to earn revenue by recommending products in personalized Amazon storefronts.

Last May, the company flew over a dozen Instagram, YouTube, and TikTok stars to a luxurious resort in Mexico.

Emily Rayna Shaw, a TikTok creator with 5.4 million followers who has partnered with Amazon in the past, is participating in the boycott.

“I think their method of offering influencers life-changing payouts to make them feel as if they need to work with them while also refusing to pay their workers behind the scenes is extremely wrong,” she told The Post.

“As an influencer, it’s important to choose the right companies to work with,” said Jackie James, a 19-year-old TikTok creator with 3.4 million followers, who told the outlet she will cease doing deals with Amazon until it changes its ways.

The ALU is demanding that Amazon bump its minimum wage to $30 per hour and stop its union-busting activities.

Slogging Through the ‘Suffocating’ Heat

Amazon is also facing challenges from workers themselves, with some walking out this week at its largest air hub in California, where company-branded planes transport packages to warehouses across the country.

They are asking for the base pay rate to be raised from $17 per hour to $22 per hour.

A group organizing the work stoppage under the name Inland Empire Amazon Workers United said in a statement that over 150 workers participated, but Amazon countered that the true number was only 74.

The Warehouse Worker Resource Center counted 900 workers who signed a petition demanding pay raises.

Inland Empire Amazon Workers United has complained about the “suffocating” heat in the facility, saying that temperatures at the San Bernardino airport reached 95 degrees Fahrenheit or higher for 24 days last month.

Amazon spokesperson Paul Flaningan, however, claimed to CNBC that the temperature never surpassed 77 degrees and said the company respects its workers’ right to voice their opinions.

On Tuesday, the ALU backed another warehouse’s decision to file a petition for a union election in upstate New York, roughly 10 miles outside Albany.

The National Labor Relations Board requires signatures from 30% of employees to trigger an election.

See what others are saying: (The Washington Post) (CNBC) (Associated Press)
