
Deepfake App Pulled After Many Expressed Concerns


  • Many were outraged this week over a desktop app called DeepNude that allows users to remove clothing from pictures of women to make them look naked.
  • Vice’s Motherboard published an article in which it tested the app’s capabilities on pictures of celebrities and found that it only works on women.
  • Motherboard described the app as “easier to use, and more easily accessible than deepfakes have ever been.”
  • The app’s developers later pulled it from sale after much criticism, but the new technology has reignited debate about the need for social media companies and lawmakers to regulate and moderate deepfakes.

The New Deepfake App

Developers have pulled a new desktop app called DeepNude that used deepfake technology to remove clothing from pictures of women, making them appear naked.

The app was removed after an article published by Vice News’ tech publication Motherboard raised concerns over the technology.

Motherboard downloaded and tested the app on more than a dozen pictures of both men and women. They found that while the app does work on women who are fully clothed, it works best on images where people are already showing more skin. 

“The results vary dramatically,” the article said. “But when fed a well lit, high resolution image of a woman in a bikini facing the camera directly, the fake nude images are passably realistic.”

The article also contained several of the images Motherboard tested, including photos of celebrities like Taylor Swift, Tyra Banks, Natalie Portman, Gal Gadot, and Kim Kardashian. The pictures were later removed from the article. 

Motherboard reported that the app explicitly only works on women. “When Motherboard tried using an image of a man,” they wrote, “it replaced his pants with a vulva.”

Motherboard emphasized how frighteningly accessible the app is. “DeepNude is easier to use, and more easily accessible than deepfakes have ever been,” they reported. 

Anyone can get the app for free, or they can purchase a premium version. Motherboard reported that the premium version costs $50, but a screenshot published in The Verge indicated that it was $99.

Source: The Verge

In the free version, the output image is partly covered by a watermark. In the paid version, the watermark is removed but there is a stamp that says “FAKE” in the upper-left corner.

However, as Motherboard notes, it would be extremely easy to crop out the “FAKE” stamp or remove it with Photoshop.

On Thursday, the day after Motherboard published the article, DeepNude announced on their Twitter account that they had pulled the app.

“Despite the safety measures adopted (watermarks) if 500,000 people use it, the probability that people will misuse it is too high,” the statement said. “We don’t want to make money this way. Surely some copies of DeepNude will be shared on the web, but we don’t want to be the ones who sell it.”

“The world is not yet ready for DeepNude,” the statement concluded. The DeepNude website has now been taken down.

Where Did It Come From?

According to the Twitter account for DeepNude, the developers launched downloadable versions of the app for Windows and Linux on June 23.

After a few days, the app’s developers had to take the website offline because it was receiving too much traffic, according to DeepNude’s Twitter.

Currently, it is unclear who these developers are or where they are from. Their Twitter account lists their location as Estonia, but does not provide more information.

Motherboard was able to reach the app’s anonymous creator by email; he requested to go by the name Alberto. Alberto told them that the app’s software is based on pix2pix, an open-source algorithm developed by researchers at UC Berkeley in 2017.

That algorithm is similar to the ones used for deepfake videos and, interestingly, also similar to the technology that self-driving cars use to simulate driving scenarios.
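For readers curious about the mechanics, the rough idea behind pix2pix is an encoder-decoder network that takes one image in and produces a translated image of the same size out, trained against a second network that judges how realistic the translation looks. Below is a deliberately tiny, hypothetical PyTorch sketch of that image-in, image-out shape; it is not DeepNude’s or pix2pix’s actual code, and the layer sizes are illustrative assumptions only.

    # Toy sketch of the image-to-image translation idea behind pix2pix.
    # NOT DeepNude's code; layer sizes are illustrative assumptions only.
    import torch
    import torch.nn as nn

    class TinyTranslator(nn.Module):
        """Minimal encoder-decoder: 3-channel image in, 3-channel image out."""
        def __init__(self):
            super().__init__()
            self.encode = nn.Sequential(
                nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            )
            self.decode = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
            )

        def forward(self, x):
            return self.decode(self.encode(x))

    net = TinyTranslator()
    out = net(torch.randn(1, 3, 256, 256))  # one 256x256 RGB image in
    print(out.shape)                        # same shape out: (1, 3, 256, 256)

In the full pix2pix system, the discriminator scores input-output pairs during training and pushes the generator toward realistic results; that adversarial setup is what makes it similar to the methods used for deepfake videos.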

Alberto told Motherboard that the algorithm only works on women because “images of nude women are easier to find online,” but he said he wants to make a male version too.

Alberto also told Motherboard that during development, he asked himself whether it was morally questionable to make the app, but he ultimately decided it was not, because he believed the app’s invention was inevitable.

“I also said to myself: the technology is ready (within everyone’s reach),” Alberto told Motherboard. “So if someone has bad intentions, having DeepNude doesn’t change much… If I don’t do it, someone else will do it in a year.”

The Need for Regulation

This inevitability argument is one that has been discussed often in the debates surrounding deepfakes.

It also goes along with the idea that even if these deepfakes are banned by Pornhub and Reddit, they will just pop up in other places. These kinds of arguments are also an important part of the discussion of how to detect and regulate deepfakes.

Motherboard showed the DeepNude app to Hany Farid, a computer science professor at UC Berkeley who is an expert on deepfakes. Farid said that he was shocked by how easily the app created the fakes.

Usually, deepfake videos take hours to make. By contrast, DeepNude only takes about 30 seconds to render these images.

“We are going to have to get better at detecting deepfakes,” Farid told Motherboard. “In addition, social media platforms are going to have to think more carefully about how to define and enforce rules surrounding this content.”

“And, our legislators are going to have to think about how to thoughtfully regulate in this space.”

The Role of Social Media

The need for social media platforms and politicians to regulate this kind of content has become an increasingly prominent part of the discussion about deepfakes.

Over the last few years, deepfakes have spread internationally, but laws and regulations have been unable to keep up with the technology.

On Wednesday, during a conversation at the Aspen Ideas Festival, Facebook CEO Mark Zuckerberg said that his company is looking into ways to deal with deepfakes.

He did not say exactly how Facebook is doing this, but he did say that the problem from his perspective was how deepfakes are defined.

“Is it AI-manipulated media or manipulated media using AI that makes someone say something they didn’t say?” Zuckerberg said. “I think that’s probably a pretty reasonable definition.”

However, that definition is also exceptionally narrow. Facebook recently received significant backlash after it decided not to take down a controversial video of Nancy Pelosi that had been slowed down to make her appear drunk or impaired.

Zuckerberg argued that the video should be left up because it is better to show people fake content than to hide it. However, experts worry that this kind of thinking could set a dangerous precedent for deepfakes.

The Role of Lawmakers

On Monday, lawmakers in California proposed a bill that would ban deepfakes in the state. The assemblymember who introduced the bill said he did so because of the Pelosi video.

On the federal level, similar efforts to regulate deepfake technology have stalled.

Separate bills to criminalize deepfakes have been introduced in both the House and the Senate, but both bills have only been referred to committees, and it is unclear whether lawmakers have even discussed them.

However, even if these bills do move forward, they face many legal hurdles. Carrie Goldberg, an attorney whose law firm specializes in revenge porn cases, spoke to Motherboard about these issues.

“It’s a real bind,” said Goldberg. “Deepfakes defy most state revenge porn laws because it’s not the victim’s own nudity depicted, but also our federal laws protect the companies and social media platforms where it proliferates.”

However, the article’s author, Samantha Cole, also argued that the political narratives around deepfakes leave out the women victimized by them.

“Though deepfakes have been weaponized most often against unconsenting women, most headlines and political fear of them have focused on their fake news potential,” she wrote.

That idea of deepfakes being “fake news” or disinformation seems to be exactly how Zuckerberg and Facebook are orienting their policies.

Moving forward, many feel that policy discussions about deepfakes should also consider how the technology disproportionately affects women and can be tied to revenge porn.

See what others are saying: (Vice) (The Verge) (The Atlantic)


Uber and Lyft Drivers Offered Incentives to Fight Bill That Targets the Gig Economy


  • On July 9, hundreds of Uber and Lyft drivers gathered outside the California State Capitol for a rally about Assembly Bill 5, which would change how the state determines whether a worker is an employee or an independent contractor.
  • On Monday, the Los Angeles Times reported that the I’m Independent Coalition, a group that works closely with Uber and Lyft, offered to pay drivers to attend the rally against Assembly Bill 5.
  • Drivers say they also received emails and in-app offers from Uber and Lyft for attending the rally against the bill.

The Rally

Drivers for Uber and Lyft say the ride-share companies offered incentives to workers who lobbied against a proposed bill that would classify drivers as employees instead of independent contractors.

On Monday, the Los Angeles Times reported that drivers for Uber and Lyft who attended the July 9 rally outside California’s State Capitol were compensated for their “travel, parking and time.” 

According to the report, the I’m Independent Coalition sent drivers an email offering them anywhere from $25 to $100 if they rallied on the group’s behalf. I’m Independent is a coalition funded by the California Chamber of Commerce that works to change the proposed legislation. According to its website, both Uber and Lyft are supporters of I’m Independent.

Following the rally, the LA Times reported, another email was sent out reassuring workers that their compensation would be sent soon.

“We want to thank you again for taking time to attend the State Capitol Rally on July 9,” the email states. “Your voice had an impact and the Legislature heard loud and clear that you want to keep your flexibility and control over your work! Please expect a driver credit in the next five business days for your travel, parking, and time.”

I’m Independent later confirmed to the paper that the drivers who attended the rally had been paid.

However, the report says the coalition was not the only group offering vouchers and compensation for attending the rally. A Lyft spokesperson confirmed that the company had offered drivers $25 to help cover parking, while Uber sent a $15 lunch voucher through their app and told drivers it was for them, their families, “and anyone you know who also has a stake in maintaining driver flexibility.”

The Bill

The rally outside the State Capitol was held ahead of a Senate labor hearing for Assembly Bill 5, a bill that states it “would provide that the factors of the ‘ABC’ test be applied in order to determine the status of a worker as an employee or independent contractor.”

The “ABC” test comes from an April 2018 California Supreme Court case, Dynamex Operations West, Inc. v. Superior Court. In that case, the Court ruled that for a worker to be classified as an independent contractor, three qualifications must be met. According to court documents, those requirements are:

“(A) that the worker is free from the control and direction of the hirer in connection with the performance of the work, both under the contract for the performance of such work and in fact; (B) that the worker performs work that is outside the usual course of the hiring entity’s business; and (C) that the worker is customarily engaged in an independently established trade, occupation, or business of the same nature as the work performed for the hiring entity.”
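In plain terms, the Dynamex test is conjunctive: a worker counts as an independent contractor only if all three prongs hold, and failing any single prong makes the worker an employee. A toy Python sketch of that logic (hypothetical function and argument names; not legal advice):

    # Conjunctive "ABC" test: ALL three prongs must hold for contractor status.
    # Hypothetical illustration only; not legal advice.
    def is_independent_contractor(free_from_control: bool,
                                  outside_usual_business: bool,
                                  independent_trade: bool) -> bool:
        return free_from_control and outside_usual_business and independent_trade

    # A ride-share driver arguably performs work inside the company's usual
    # business, so prong (B) fails and the worker would be an employee.
    print(is_independent_contractor(True, False, True))  # False -> employee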

Under AB5, drivers for both Uber and Lyft would no longer be classified as independent contractors but instead employees. The main difference between an independent contractor and an employee is the regulations and requirements their employer must follow. If a worker is determined to be an employee, they receive things like sick pay, a required minimum wage, and a limit on the hours they can work. 

However, Assembly Bill 5 exempts certain occupations from the “ABC” test, including health care professionals such as doctors and dentists.

In May, the bill passed the State Assembly in a 59-15 vote. Earlier this month, the State Senate Committee on Labor, Public Employment, and Retirement voted the bill through.

Uber and Lyft on AB5 

Uber has previously said the company will not take a side on the bill, but it believes there are better solutions than Assembly Bill 5.

However, at the beginning of June, Uber sent an email to their drivers saying the bill could “threaten your access to flexible work with Uber.” 

Lyft has taken a similar approach and also sent an email to its drivers, telling the workers that the ride-share company is trying to “protect” their jobs. 

“Legislators are considering changes that could cause Lyft to limit your hours and flexibility, resulting in scheduled shifts,” the email, which was later shared by Lyft, states. “We’re advocating to protect your flexibility with Lyft, in addition to establishing an earning minimum, offering protections and benefits and giving drivers representation so that you have a voice in the company.”

Previous Responses to AB5

In May, Uber and Lyft drivers around the world went on strike to demand benefits similar to those employees receive, such as a minimum hourly wage. The strikes took place just three weeks before the State Assembly voted on and passed Assembly Bill 5.


Even though the strikes did not bring about any major change at the companies, a June 2019 Ipsos study found that the majority of drivers for both Uber and Lyft still want “the same workers’ rights as those in more traditional employment positions.”

Assembly Bill 5 advanced to the Appropriations Committee earlier this month, but the committee is currently in summer recess.

See what others are saying: (Los Angeles Times) (International Business Times) (SF Gate)


DeepNude App Banned on GitHub After Spreading to Multiple Platforms


  • Reuploaded replicas of the app DeepNude have been popping up on social media platforms including Twitter, YouTube, and Reddit.
  • The app, which removed clothing from pictures of women to make them look naked, had previously been pulled by its creator after an article published by Vice’s technology publication Motherboard sparked backlash.
  • Discord and GitHub have since banned replica versions of the app after it was spread on their sites.
  • Over the last week, dozens of women in Singapore have had pictures from their social media accounts doctored and put on porn websites. Those pictures are believed to have been made with a version of the DeepNude App.

DeepNude App Explained

The open source software platform GitHub has banned all code from the controversial deepfake app known as DeepNude, a desktop application that removes clothing from pictures of women and generates a new photo of them appearing naked.

The app was originally released last month, but it did not gain notoriety until Vice’s tech publication Motherboard broke the story several days after it launched. The day after Motherboard’s exposé, the DeepNude creators announced they were pulling the app.

“The probability that people will misuse it is too high,” the creators said in a statement on Twitter. “Surely some copies of DeepNude will be shared on the web, but we don’t want to be the ones who sell it.”

“The world is not yet ready for DeepNude,” the statement concluded.

GitHub Bans DeepNude Replicas

Apparently, the world thought otherwise, because copies of the DeepNude app were shared and still are being shared all over the internet. 

The program was meant to be downloaded and used offline, and as a result, it could easily be replicated by anyone who had it on their hard drive.

That is exactly what happened. People who replicated the software reuploaded it to various platforms, including GitHub, which banned the app for violating its community guidelines.

“We do not proactively monitor user-generated content, but we do actively investigate abuse reports,” a GitHub spokesperson told Motherboard. “In this case, we disabled the project because we found it to be in violation of our acceptable use policy. We do not condone using GitHub for posting sexually obscene content and prohibit such conduct in our Terms of Service and Community Guidelines.”

According to The Verge, the DeepNude team itself actually uploaded the core algorithm of the app to GitHub.

“The reverse engineering of the app was already on GitHub. It no longer makes sense to hide the source code,” The Verge said the team wrote on a now-deleted page. “DeepNude uses an interesting method to solve a typical AI problem, so it could be useful for researchers and developers working in other fields such as fashion, cinema, and visual effects.”

However, Rogue Rocket was still able to find at least one GitHub repository that claimed to have DeepNude software for Android.

“Deep nudes for android. this is the age of FREEDOM, NOT CENSORSHIP! hackers rule the future!” the page’s description said. 

DeepNude Spreads

GitHub was not the only platform that the replicated app was shared on. 

Even with just a cursory search on Twitter, Rogue Rocket was able to locate two Twitter accounts that provided links to replicated versions of the app. One of the accounts links to a website called Deep Nude Pro, which bills itself as “the official update to the original DeepNude,” and sells the app for $39.99.

The other account links to a DeepNude Patreon where people can either download the app or send the account holder pictures they want converted and then buy the results.

When Rogue Rocket searched YouTube, there appeared to be multiple videos explaining how to download new versions of the app, many of which had links to download the app in the description.

Others have also shared links on Reddit, and The Verge reported that links to downloads were being shared on Telegram channels and message boards like 4chan.

To make matters worse, many of the replicated versions claim to have removed the watermarks included in the original app, which denoted that the generated pictures were fake.

While many of the links to the reuploaded software have reportedly turned out to be malware, download links to the new versions are still incredibly easy to find.

GitHub is also not the only platform to ban the app. According to Motherboard, last week Discord banned a server that was selling what was described as an updated version of the app, where customers could pay $20 in Bitcoin or Amazon gift cards to get “lifetime access.”

Source: VICE

The server and its users were removed for violating Discord’s community guidelines.

“The sharing of non-consensual pornography is explicitly prohibited in our terms of service and community guidelines,” a spokesperson for Discord told Motherboard in a statement.

“We will investigate and take immediate action against any reported terms of service violation by a server or user. Non-consensual pornography warrants an instant shut down on the servers and ban of the users whenever we identify it, which is the action we took in this case.”

DeepNude App Used in Singapore

The rapid spread of the app across numerous social media platforms has now become an international problem.

On Wednesday, The Straits Times reported that over the past week “dozens of women in Singapore” have had pictures of them taken from their social media accounts and doctored to look like they are naked, then uploaded to pornographic sites.

Those photos are believed to have been doctored using a version of the DeepNude app, which has been shared via download links on a popular sex forum in Singapore.

Lawyers told The Straits Times that doctoring photos to make people look naked is considered a criminal offense in Singapore.

Even though the artificial intelligence aspect is new, one lawyer said that the broad definitions under the law could allow people to be prosecuted for doing so.

Another lawyer backed that up, saying that under Singapore’s Films Act, people who make DeepNude pictures can be jailed for up to two years and fined up to $40,000. They can also be charged with insult of modesty and face a separate fine and jail term of up to a year. 

Legal Efforts in the U.S.

The legal precedent in Singapore raises questions about laws regulating deepfakes in the United States. While these efforts appear stalled on the federal level, several states have taken action to address the issue.

On July 1, a new amendment to Virginia’s law against revenge porn, which classifies deepfakes as nonconsensual pornography, went into effect. Under that amendment, anyone caught spreading deepfakes could face up to 12 months in prison and up to $2,500 in fines.

Amending existing revenge porn laws to include deepfakes could prove a promising approach. According to The New York Times, as of early this year, 41 states had banned revenge porn.

At the same time, lawmakers in New York state have also proposed a bill that would ban the creation of “digital replicas” of individuals without their consent. 

However, the Motion Picture Association of America has opposed the bill, arguing that it would “restrict the ability of our members to tell stories about and inspired by real people and events,” in violation of the First Amendment.

The opposition to the law in New York indicates that even as states take the lead with deepfake regulation, there are still many legal hurdles to overcome.

See what others are saying: (VICE) (The Verge) (The Straits Times)


Blue Bell Says It’s Working With Authorities to Track Down Viral Ice Cream Licker


  • Footage has gone viral that shows a woman in a grocery store opening a container of ice cream, licking the top, and then placing it back into a store freezer for another customer to purchase. 
  • The ice cream brand Blue Bell now says they are working with authorities to track down the woman. 
  • The incident also prompted many to ask why the company does not have protective seals on its products.
  • Blue Bell says a “natural seal” is created when the ice cream hardens upside down during production.

Viral Video 

Blue Bell Ice Cream is looking for the woman seen in a viral clip opening a container of ice cream, licking the top, and putting it back into the store freezer.

Footage of the incident went viral on Twitter after a user who goes by the screen name Optimus Primal shared the clip on Saturday with the caption, “What kinda psychopathic behavior is this?!”

In the clip, which has over 11 million views, an individual offscreen can be heard encouraging the woman to lick the company’s Tin Roof flavored ice cream. 

Who Is She?

In a statement to Time Magazine, the user who shared the clip said he doesn’t know the woman in it and actually found the video on Instagram as part of a story shared by the actress Alexis Fields-Jackson. 

The actress apparently doesn’t know the woman either. In a post on her Instagram, she wrote: “I am not the girl in the disgusting ice cream video. Leave me alone.” She explained that, like many others, she had reposted the video, which Instagram later removed, and she promised to report those who have been defaming or threatening her over the confusion.

Some social media users have identified the ice cream licker as a woman from San Antonio, Texas named Asia. According to a report from Heavy, she was allegedly the owner of the Instagram account “xx.asiaaaa.xx” while it was still active. Twitter users say she boasted about becoming famous over the video and said she recently had the flu, which sparked even more outrage.

“Now you can call it Flu Bell ice cream ’cause I was a lil sick last week,” a screenshot of one comment by the account reads. The post goes on to encourage others to follow her lead and use the hashtag #TinRoofChallenge writing, “Let’s see if we can start an epidemic (literally).”

Blue Bell Responds 

People on Twitter have also made sure the ice cream brand is aware of the situation. Blue Bell responded to several users, saying they “take the issue very seriously.”

In a statement on their website, the company said they are “currently working with law enforcement, retail partners, and social media platforms” to investigate. 

“This type of incident will not be tolerated. Food safety is a top priority, and we work hard to provide a safe product and maintain the highest level of confidence from our consumers,” the company added.

Many users are calling for police to take action over the food tampering incident.

If the incident did happen in Texas, state law makes it a felony to tamper with consumer products if someone could be injured as a result. Depending on the degree of the charge, punishments can range from fines to jail time. 

One viral tweet said the ice cream licker was charged with a felony. However, as of now police have not confirmed any arrest. 

In fact, San Antonio police told KSAT that they cannot confirm the incident even happened in their jurisdiction and said it does not appear that the girl in the video lives in San Antonio. So as of now, where the incident happened and who is responsible remain unclear.

Protective Seals

Aside from the outrage directed at the woman in the video, the incident also prompted many social media users to question why Blue Bell does not have protective seals on its ice cream containers.

Some even say the woman’s actions don’t warrant an arrest but show that the company has a safety issue.

The company responded to those complaints in their statement by saying, “During production, our half gallons are flipped upside down and sent to a hardening room where the ice cream freezes to the lid creating a natural seal.” 

“The lids are frozen tightly to the carton,” the statement continued. “Any attempt at opening the product should be noticeable.”

See what others are saying: (Time) (Heavy) (Insider)  

