Deepfake App Pulled After Many Expressed Concerns

  • Many were outraged this week over a desktop app called DeepNude, which allowed users to remove clothing from pictures of women to make them appear naked.
  • Vice’s Motherboard published an article in which it tested the app’s capabilities on pictures of celebrities and found that the app only works on women.
  • Motherboard described the app as “easier to use, and more easily accessible than deepfakes have ever been.”
  • The app’s developers later pulled it from sale after much criticism, but the new technology has reignited debate about the need for social media companies and lawmakers to regulate and moderate deepfakes.

The New Deepfake App

Developers have pulled a new desktop app called DeepNude that used deepfake technology to remove clothing from pictures of women to make them appear naked.

The app was removed after an article published by Vice News’ tech publication Motherboard expressed concerns over the technology.

Motherboard downloaded and tested the app on more than a dozen pictures of both men and women. They found that while the app does work on women who are fully clothed, it works best on images where people are already showing more skin. 

“The results vary dramatically,” the article said. “But when fed a well lit, high resolution image of a woman in a bikini facing the camera directly, the fake nude images are passably realistic.”

The article also contained several of the images Motherboard tested, including photos of celebrities like Taylor Swift, Tyra Banks, Natalie Portman, Gal Gadot, and Kim Kardashian. The pictures were later removed from the article. 

Motherboard reported that the app explicitly only works on women. “When Motherboard tried using an image of a man,” they wrote, “it replaced his pants with a vulva.”

Motherboard emphasized how frighteningly accessible the app is. “DeepNude is easier to use, and more easily accessible than deepfakes have ever been,” they reported. 

Anyone could get the app for free or purchase a premium version. Motherboard reported that the premium version costs $50, but a screenshot published in The Verge indicated that it was $99.

Source: The Verge

In the free version, the output image is partly covered by a watermark. In the paid version, the watermark is removed but there is a stamp that says “FAKE” in the upper-left corner.

However, as Motherboard noted, it would be extremely easy to crop out the “FAKE” stamp or remove it with Photoshop. 

On Thursday, the day after Motherboard published the article, DeepNude announced on their Twitter account that they had pulled the app.

“Despite the safety measures adopted (watermarks) if 500,000 people use it, the probability that people will misuse it is too high,” the statement said. “We don’t want to make money this way. Surely some copies of DeepNude will be shared on the web, but we don’t want to be the ones who sell it.”

“The world is not yet ready for DeepNude,” the statement concluded. The DeepNude website has now been taken down.

Where Did it Come From?

According to the Twitter account for DeepNude, the developers launched downloadable software for the app for Windows and Linux on June 23.

After a few days, the app’s developers took the website offline because it was receiving too much traffic, according to DeepNude’s Twitter.

Currently, it is unclear who these developers are or where they are from. Their Twitter account lists their location as Estonia, but does not provide more information.

Motherboard was able to reach the app’s anonymous creator by email; he requested to go by the name Alberto. Alberto told them that the app’s software is based on an open source algorithm called pix2pix that was developed by researchers at UC Berkeley back in 2017.

That algorithm is similar to the ones used for deepfake videos and, notably, to the technology that self-driving cars use to simulate driving scenarios.

Alberto told Motherboard that the algorithm only works on women because “images of nude women are easier to find online,” but he said he wants to make a male version too.

Alberto also told Motherboard that during his development process, he asked himself whether it was morally questionable to make the app, but he ultimately decided it was not because he believed the app’s invention was inevitable.

“I also said to myself: the technology is ready (within everyone’s reach),” Alberto told Motherboard. “So if someone has bad intentions, having DeepNude doesn’t change much… If I don’t do it, someone else will do it in a year.”

The Need for Regulation

This inevitability argument is one that has been discussed often in the debates surrounding deepfakes.

It also goes along with the idea that even if these deepfakes are banned by Pornhub and Reddit, they will just pop up in other places. These kinds of arguments are also an important part of the discussion of how to detect and regulate deepfakes.

Motherboard showed the DeepNude app to Hany Farid, a computer science professor at UC Berkeley who is an expert on deepfakes. Farid said that he was shocked by how easily the app created the fakes.

Usually, deepfake videos take hours to make. By contrast, DeepNude only takes about 30 seconds to render these images.

“We are going to have to get better at detecting deepfakes,” Farid told Motherboard. “In addition, social media platforms are going to have to think more carefully about how to define and enforce rules surrounding this content.”

“And, our legislators are going to have to think about how to thoughtfully regulate in this space.”

The Role of Social Media

The need for social media platforms and politicians to regulate this kind of content has become increasingly prevalent in the discussion about deepfakes.

Over the last few years, deepfakes have become widespread internationally, but laws and regulations have been unable to keep up with the technology.

On Wednesday, during a conversation at the Aspen Ideas Festival, Facebook CEO Mark Zuckerberg said that his company is looking into ways to deal with deepfakes.

He did not say exactly how Facebook is doing this, but he did say that the problem from his perspective was how deepfakes are defined.

“Is it AI-manipulated media or manipulated media using AI that makes someone say something they didn’t say?” Zuckerberg said. “I think that’s probably a pretty reasonable definition.”

However, that definition is also exceptionally narrow. Facebook recently received significant backlash after it decided not to take down a controversial video of Nancy Pelosi that had been slowed down, making her appear drunk or impaired.

Zuckerberg argued that the video should be left up because it is better to show people fake content than to hide it. However, experts worry that this kind of thinking could set a dangerous precedent for deepfakes.

The Role of Lawmakers

On Monday, lawmakers in California proposed a bill that would ban deepfakes in the state. The assemblymember who introduced the bill said he did so in response to the Pelosi video.

On the federal level, similar efforts to regulate deepfake technology have been stalled.

Separate bills have been introduced in both the House and the Senate to criminalize deepfakes, but both have only been referred to committees, and it is unclear whether they have even been discussed by lawmakers.

However, even if these bills do move forward, they would face significant legal hurdles. Carrie Goldberg, an attorney whose law firm specializes in revenge porn, spoke to Motherboard about these issues.

“It’s a real bind,” said Goldberg. “Deepfakes defy most state revenge porn laws because it’s not the victim’s own nudity depicted, but also our federal laws protect the companies and social media platforms where it proliferates.”

However, the article’s author, Samantha Cole, also argued that the political narratives around deepfakes leave out the women victimized by them.

“Though deepfakes have been weaponized most often against unconsenting women, most headlines and political fear of them have focused on their fake news potential,” she wrote.

That idea of deepfakes being “fake news” or disinformation seems to be exactly how Zuckerberg and Facebook are orienting their policies.

Moving forward, many feel that policy discussions about deepfakes should also consider how the technology disproportionately affects women and can be tied to revenge porn.

See what others are saying: (Vice) (The Verge) (The Atlantic)

Amazon Warehouse Workers in New York File Petition To Hold Unionization Vote

A similar unionization effort among Amazon warehouse workers in Alabama failed earlier this year amid allegations that the company engaged in illegal union-busting tactics.


Staten Island Unionization Efforts Advance

Workers at a group of Amazon warehouses in Staten Island, New York, filed a petition with the National Labor Relations Board (NLRB) Monday to hold a unionization vote after collecting the necessary number of signatures.

The latest push is not affiliated with a national union but is instead organized by a grassroots worker group called the Amazon Labor Union, which is self-organized and financed via GoFundMe. 

The group is run by Chris Smalls, a former Amazon warehouse worker who led a walkout at the beginning of the pandemic to protest the lack of protective gear and other conditions. Smalls was fired the same day.

For months now, Smalls and the other organizers have been forming a committee and collecting signatures from workers to back their push for a collective bargaining group, as well as pay raises, more paid time off, longer breaks, less mandatory overtime, and the ability to cancel shifts in dangerous weather conditions.

On Monday, the leader said he had collected over 2,000 signatures from the four Staten Island facilities, which employ roughly 7,000 people, meeting the NLRB requirement that organizers get support from at least 30% of the workers they wish to represent.

Amazon’s Anti-Union Efforts Continue

The campaign faces an uphill battle: Amazon, the second-largest private employer in the U.S., has fought hard against unionization efforts for decades and won.

This past spring, Amazon warehouse workers in Alabama held a vote for unionization that ultimately failed by a wide margin.

However, the NLRB is now considering whether to hold another vote after a top agency official found in August that Amazon’s anti-union tactics interfered with the election so much that the results should be scrapped and another one should be held.

Amazon, for its part, is already trying to undermine the new effort in Staten Island. Since the walkout led by Smalls at the beginning of the pandemic, workers have filed 10 labor complaints claiming that Amazon has interfered with their organizing efforts. 

The NLRB has said that its attorneys have found merit in at least three of those claims and are continuing to look into the others.

Meanwhile, Smalls told NPR last week that the company has ramped up those efforts recently by putting up anti-union signs around the warehouses and installing barbed wire to limit the organizers’ space. 

Representatives for Amazon did not comment on those allegations, but in a statement Monday, a spokesperson attempted to cast doubt on the number of signatures Smalls and his group have collected.

“We’re skeptical that a sufficient number of legitimate employee signatures has been secured to warrant an election,” the spokesperson said. “If there is an election, we want the voice of our employees to be heard and look forward to it.”

The labor board disputed that claim in a statement from the agency’s press secretary on Monday, stressing that the group submitted enough signatures.

See what others are saying: (The New York Times) (NPR) (The Washington Post)

Zuckerberg Says He’s “Retooling” Facebook To Attract Younger Adults

The Facebook CEO made the remarks one day before the Senate expanded its questioning of how social media apps, in general, are protecting kids online.


Focus on Younger, Not Older

In an earnings call Monday, Facebook CEO Mark Zuckerberg assured investors that he’s “retooling” the company’s platforms to make serving “young adults the North Star, rather than optimizing for the larger number of older people.”

Zuckerberg’s comments came the same day a consortium of 17 major news organizations published multiple articles detailing thousands of internal documents that were handed over to the Securities and Exchange Commission earlier this year.

Several outlets, including Bloomberg and The Verge, reported that Facebook’s own research shows it is hemorrhaging growth with teen users, as well as stagnating with young adults — something that reportedly shocked investors. 

Amid his attempts to control the fallout, Zuckerberg said the company will specifically shift focus to appeal to users between 18 and 29. As part of that, he said the company is planning to ramp up Instagram’s Reels feature to more strongly compete with TikTok. 

He also defended Facebook amid the leaks, saying, “Good faith criticism helps us get better. But my view is that what we are seeing is a coordinated effort to selectively use leaked documents to paint a false picture of our company.”

But the information reaped from the leaked documents is nothing short of damning, touching on everything from human trafficking to the Jan. 6 insurrection, as well as Facebook’s inability to moderate hate speech and terrorism among non-English languages. 

Other Social Media Platforms Testify

On Tuesday, a Congressional subcommittee led by Sen. Richard Blumenthal (D-Conn.) directly addressed representatives from Snapchat, TikTok, and YouTube over child safety concerns on their platforms.

Facebook’s controversies have dominated social media news coverage since mid-September, when The Wall Street Journal published six internal slide decks that showed Facebook researchers presenting data on the effect the company’s platforms have on minors’ mental health.

Now, Tuesday’s hearing marks a significant shift to grilling the whole of social media. Notably, this is also the first time Snap and TikTok have testified before Congress.

While the companies’ representatives generally told senators they support legislation to boost online protections for kids, they didn’t commit to supporting any specific proposals currently on the table. 

In fact, at one point, Sen. Ed Markey (D-Mass.) criticized a Snapchat executive after she said she wanted to “talk a bit more” before the company would support updates to his Children’s Online Privacy Protection Act, which was passed in 1998.

“Look, this is just what drives us crazy,” he said. “‘We want to talk, we want to talk, we want to talk.’ This bill’s been out there for years and you still don’t have a view on it. Do you support it or not?”

See what others are saying: (Business Insider) (CNBC) (The Washington Post)

Key Takeaways From the Explosive “Facebook Papers”

Among the most startling revelations, The Washington Post reported that CEO Mark Zuckerberg personally agreed to silence dissident users in Vietnam after the country’s ruling Communist Party threatened to block access to Facebook.


“The Facebook Papers” 

A coalition of 17 major news organizations published a series of articles known as “The Facebook Papers” on Monday in what some are now calling Facebook’s biggest crisis ever. 

The papers are a collection of thousands of redacted internal documents that were originally turned over to the U.S. Securities and Exchange Commission by former product manager Frances Haugen earlier this year. 

The outlets that published pieces Monday reportedly first obtained the documents at the beginning of October and spent weeks sifting through their contents. Below is a breakdown of many of their findings.

Facebook Is Hemorrhaging Teens 

Both Bloomberg and The Verge reported that Facebook is struggling to retain its hold over teens.  

For example, The Verge said the internal documents it reviewed showed that since 2019, the number of teen users on Facebook’s app has fallen by 13%, with the company expecting another staggering falloff of 45% over the next two years. Meanwhile, the company reportedly expects its app usage among 20- to 30-year-olds to decline by 4% in the same timeframe.

Facebook also found that fewer teens are signing up for new accounts. Similarly, the age group is moving away from using Facebook Messenger.

In an internal presentation, Facebook data scientists directly told executives that the “aging up issue is real” and warned that if the app’s average age continues to increase at its current rate, it could disengage younger users “even more.”

“Most young adults perceive Facebook as a place for people in their 40s and 50s,” they explained. “Young adults perceive content as boring, misleading, and negative. They often have to get past irrelevant content to get to what matters.” 

The researchers added that users under 18 additionally seem to be migrating from the platform because of concerns related to privacy and its impact on their wellbeing.

Facebook Opted Not To Remove “Like” and “Share” Buttons

In its article, The New York Times cited documents that indicated Facebook wrestled with whether or not it should remove the “like” and “share” buttons.

The original argument for getting rid of the buttons was multi-faceted. There was a belief that their removal could decrease the anxiety teens feel, since social media pressures many to want to achieve a certain number of likes per post. There was also the hope that a decrease in this pressure could lead to teens posting more. Beyond that, Facebook additionally needed to tackle growing concerns about the lightning-quick spread of misinformation.

Ultimately, those hypotheses failed. According to the documents reviewed by The Times, hiding the “like” button didn’t alleviate the social anxiety teens feel. It also didn’t lead them to post more. 

In fact, it actually led to users engaging with posts and ads less, and as a result, Facebook decided to keep the buttons. 

Despite that, in 2019, researchers for Facebook still asserted that the platform’s “core product mechanics” were allowing misinformation and hate to flourish.

“The mechanics of our platform are not neutral,” they said in the internal documents.

Facebook Isn’t Really Regulating International Hate

The Atlantic, WIRED, and The Associated Press all reported that terrorist content and hate speech continue to spread with ease on Facebook.

That’s largely because Facebook does not employ a significant number of moderators who speak the languages of many countries where the platform is popular. As a result, its current moderators are largely unable to understand cultural context. 

Theoretically, Facebook could deploy an AI-driven solution to catch harmful content spreading across different languages, but it still hasn’t been able to perfect that technology. 

“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” Eliza Campbell, director of the Middle East Institute’s Cyber Program, told the AP. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”

According to The Atlantic, as little as 6% of Arabic-language hate content on Instagram was detected by Facebook’s systems as recently as late last year. Another document detailed by the outlet found that “of material posted in Afghanistan that was classified as hate speech within a 30-day range, only 0.23 percent was taken down automatically by Facebook’s tools.”

According to The Atlantic, “employees blamed company leadership for insufficient investment” in both instances.

Facebook Was Lackluster on Human Trafficking Crackdowns Until Revenue Threats

In another major revelation, The Atlantic reported that these documents appear to confirm that the company only took strong action against human trafficking after Apple threatened to pull Facebook and Instagram from its App Store. 

Initially, the outlet said employees participated in a concerted and successful effort to identify and remove sex trafficking-related content; however, the company did not disable or take down associated profiles. 

Because of that, the BBC in 2019 later uncovered a broad network of human traffickers operating an active ring on the platform. In response, Facebook took some additional action, but according to the internal documents, “domestic servitude content remained on the platform.”

Later in 2019, Apple finally issued its threat. After reviewing the documents, The Atlantic said that threat alone — and not any new information — is what finally motivated Facebook to “[kick it] into high gear.” 

“Was this issue known to Facebook before BBC enquiry and Apple escalation? Yes,” one internal message reportedly reads. 

Zuckerberg Personally Made Vietnam Decision

According to The Washington Post, CEO Mark Zuckerberg personally made the decision last year to have Facebook agree to demands set forth by Vietnam’s ruling Communist Party.

The party had previously threatened to disconnect Facebook in the country if it didn’t silence anti-government posts.

“In America, the tech CEO is a champion of free speech, reluctant to remove even malicious and misleading content from the platform,” the article’s authors wrote. “But in Vietnam, upholding the free speech rights of people who question government leaders could have come with a significant cost in a country where the social network earns more than $1 billion in annual revenue.” 

“Zuckerberg’s role in the Vietnam decision, which has not been previously reported, exemplifies his relentless determination to ensure Facebook’s dominance, sometimes at the expense of his stated values,” they added.

In the coming days and weeks, there will likely be more questions regarding Zuckerberg’s role in the decision, as well as inquiries into whether the SEC will take action against him directly. 

Still, Facebook has already started defending its reasoning for making the decision. It told The Post that the choice to censor was justified “to ensure our services remain available for millions of people who rely on them every day.”

In the U.S., Zuckerberg has repeatedly claimed to champion free speech while testifying before lawmakers.

Other Revelations

Among other findings, the Financial Times reported that Facebook employees urged management not to exempt notable figures such as politicians and celebrities from moderation rules. 

Meanwhile, reports from Politico, CNN, NBC, and a host of other outlets cover documents related to Facebook’s market dominance, how much it downplayed its role in the insurrection, and more.  

Outside of these documents, a second whistleblower, following Haugen’s example, submitted an affidavit to the SEC on Friday alleging that Facebook allows hate to go unchecked.

As the documents leaked, Haugen spent Monday testifying before a committee of the British Parliament.

See what others are saying: (Business Insider) (Axios) (Protocol)
