Business

DeepNude App Banned on GitHub After Spreading to Multiple Platforms

  • Reuploaded replicas of the app DeepNude have been popping up on social media platforms including Twitter, YouTube, and Reddit.
  • The app, which removed clothing from pictures of women to make them look naked, had previously been removed by its creator after an article published by Vice’s technology publication Motherboard created backlash. 
  • Discord and GitHub have since banned replica versions of the app after it was spread on their sites.
  • Over the last week, dozens of women in Singapore have had pictures from their social media accounts doctored and put on porn websites. Those pictures are believed to have been made with a version of the DeepNude App.

DeepNude App Explained

The open source software platform GitHub has banned all code from the controversial deepfake app known as DeepNude, a desktop application that removes clothing from pictures of women and generates a new photo of them appearing naked.

The app was originally released last month, but it did not gain notoriety until Vice’s tech publication Motherboard broke the story several days after it launched. The day after Motherboard’s exposé, the DeepNude creators announced they were pulling the app.

“The probability that people will misuse it is too high,” the creators said in a statement on Twitter. “Surely some copies of DeepNude will be shared on the web, but we don’t want to be the ones who sell it.”

“The world is not yet ready for DeepNude,” the statement concluded.

GitHub Bans DeepNude Replicas

Apparently, the world thought otherwise, because copies of the DeepNude app were, and still are, being shared all over the internet.

The program was a desktop app meant to be downloaded and used offline, and as a result, it could be easily replicated by anyone who had it on their hard drive.

That is exactly what happened. People who replicated the software reuploaded it on various platforms, including GitHub, which banned the app for violating its community guidelines.

“We do not proactively monitor user-generated content, but we do actively investigate abuse reports,” a GitHub spokesperson told Motherboard. “In this case, we disabled the project because we found it to be in violation of our acceptable use policy. We do not condone using GitHub for posting sexually obscene content and prohibit such conduct in our Terms of Service and Community Guidelines.”

According to The Verge, the DeepNude team itself actually uploaded the core algorithm of the app to GitHub.

“The reverse engineering of the app was already on GitHub. It no longer makes sense to hide the source code,” The Verge said the team wrote on a now-deleted page. “DeepNude uses an interesting method to solve a typical AI problem, so it could be useful for researchers and developers working in other fields such as fashion, cinema, and visual effects.”

However, Rogue Rocket was still able to find at least one GitHub repository that claimed to have DeepNude software for Android.

“Deep nudes for android. this is the age of FREEDOM, NOT CENSORSHIP! hackers rule the future!” the page’s description said. 

DeepNude Spreads

GitHub was not the only platform that the replicated app was shared on. 

With just a cursory search, Rogue Rocket was able to locate two Twitter accounts that provided links to replicated versions of the app. One of the accounts links to a website called Deep Nude Pro, which bills itself as “the official update to the original DeepNude” and sells the app for $39.99.

The other account links to a DeepNude Patreon where people can either download the app or send the account holder pictures they want doctored and then purchase the results.

When Rogue Rocket searched YouTube, there appeared to be multiple videos explaining how to download new versions of the app, many of which had links to download the app in the description.

Others have also shared links on Reddit, and The Verge reported that links to downloads were being shared on Telegram channels and message boards like 4chan.

To make matters even worse, many of the replicated versions claim to have removed the watermarks included in the original app, which were used to denote that the generated pictures were fake.

While many of the links to the reuploaded software have been reported to contain malware, download links to the new versions are still incredibly easy to find.

GitHub is also not the only platform to ban the app. According to Motherboard, last week Discord banned a server that was selling what was described as an updated version of the app, where customers could pay $20 in Bitcoin or Amazon gift cards to get “lifetime access.”

The server and its users were removed for violating Discord’s community guidelines.

“The sharing of non-consensual pornography is explicitly prohibited in our terms of service and community guidelines,” a spokesperson for Discord told Motherboard in a statement.

“We will investigate and take immediate action against any reported terms of service violation by a server or user. Non-consensual pornography warrants an instant shut down on the servers and ban of the users whenever we identify it, which is the action we took in this case.”

DeepNude App Used in Singapore

The rapid diffusion of the app on numerous social media platforms has now become an international problem.

On Wednesday, The Straits Times reported that over the past week “dozens of women in Singapore” have had pictures of them taken from their social media accounts and doctored to look like they are naked, then uploaded to pornographic sites.

Those photos are believed to have been doctored using a version of the DeepNude app, which has been shared via download links on a popular sex forum in Singapore.

Lawyers told The Straits Times that doctoring photos to make people look naked is considered a criminal offense in Singapore.

Even though the artificial intelligence aspect is new, one lawyer said that the broad definitions under the law could allow people to be prosecuted for doing so.

Another lawyer backed that up, saying that under Singapore’s Films Act, people who make DeepNude pictures can be jailed for up to two years and fined up to $40,000. They can also be charged with insult of modesty and face a separate fine and jail term of up to a year. 

Legal Efforts in the U.S.

The legal precedent in Singapore raises questions about laws that regulate deepfakes in the United States. While these efforts appear stalled on the federal level, several states have taken actions to address the issue.

On July 1, a new amendment to Virginia’s law against revenge porn, which classifies deepfakes as nonconsensual pornography, went into effect. Under that amendment, anyone caught spreading deepfakes could face up to 12 months in prison and up to $2,500 in fines.

Amending existing revenge porn laws to include deepfakes could be a promising approach. According to The New York Times, as of early this year, 41 states have banned revenge porn.

At the same time, lawmakers in New York state have also proposed a bill that would ban the creation of “digital replicas” of individuals without their consent. 

However, the Motion Picture Association of America has opposed the bill, arguing that it would “restrict the ability of our members to tell stories about and inspired by real people and events,” which would violate the First Amendment.

The opposition to the law in New York indicates that even as states take the lead with deepfake regulation, there are still many legal hurdles to overcome.

See what others are saying: (VICE) (The Verge) (The Straits Times)


Uber and Lyft Drivers Offered Incentives to Fight Bill That Targets the Gig Economy

  • On July 9 hundreds of Uber and Lyft drivers gathered outside the California State Capitol for a rally about Assembly Bill 5, which would impact how the state determines if a worker is an employee or an independent contractor. 
  • On Monday, the Los Angeles Times reported that the I’m Independent Coalition, a group that works closely with Uber and Lyft, offered to pay drivers to attend the rally against Assembly Bill 5. 
  • Drivers say they also received emails and in-app offers from Uber and Lyft if they attended the rally against the bill.  

The Rally

Drivers for Uber and Lyft say the ride-share companies offered incentives to workers who lobbied against a proposed bill that would allow drivers to be classified as employees instead of independent contractors.

On Monday, the Los Angeles Times reported that drivers for Uber and Lyft who attended the July 9 rally outside California’s State Capitol were compensated for their “travel, parking and time.” 

According to the report, an email from the I’m Independent Coalition was sent to drivers, offering them anywhere from $25 to $100 if they rallied on the group’s behalf. I’m Independent is a coalition that is funded by the California Chamber of Commerce and works to change the proposed legislation. According to their website, both Uber and Lyft are supporters of I’m Independent.

Following the rally, the LA Times says that another email was sent out, reassuring workers that their compensation would be sent over soon. 

“We want to thank you again for taking time to attend the State Capitol Rally on July 9,” the email states. “Your voice had an impact and the Legislature heard loud and clear that you want to keep your flexibility and control over your work! Please expect a driver credit in the next five business days for your travel, parking, and time.”

I’m Independent later confirmed to the paper that the drivers who attended the rally had been paid.

However, the report says the coalition was not the only group offering vouchers and compensation for attending the rally. A Lyft spokesperson confirmed that the company had offered drivers $25 to help cover parking, while Uber sent a $15 lunch voucher through their app and told drivers it was for them, their families, “and anyone you know who also has a stake in maintaining driver flexibility.”

The Bill

The rally outside of the state capitol was held ahead of a Senate labor hearing for Assembly Bill 5, a bill that states it “would provide that the factors of the ‘ABC’ test be applied in order to determine the status of a worker as an employee or independent contractor.”

The “ABC” test comes from an April 2018 California Supreme Court case, Dynamex Operations West, Inc. v. Superior Court. During that case, the Court ruled that in order to determine if a worker was an independent contractor, three qualifications must be met. According to court documents, those requirements are: 

“(A) that the worker is free from the control and direction of the hirer in connection with the performance of the work, both under the contract for the performance of such work and in fact;

“(B) that the worker performs work that is outside the usual course of the hiring entity’s business; and

“(C) that the worker is customarily engaged in an independently established trade, occupation, or business of the same nature as the work performed for the hiring entity.”

Under AB5, drivers for both Uber and Lyft would no longer be classified as independent contractors but instead as employees. The main difference between the two classifications is the set of regulations and requirements the employer must follow: if a worker is determined to be an employee, they receive things like sick pay, a required minimum wage, and a limit on the hours they can work.

However, Assembly Bill 5 states that certain occupations are exempted from the “ABC” test, such as health care professionals like doctors and dentists, among others.  

In May, the bill passed in the state assembly in a 59 to 15 vote. Earlier this month the State Senate Committee on Labor, Public Employment, and Retirement voted the bill through. 

Uber and Lyft on AB5 

Uber has previously said it will not take a side on the bill but believes there are better solutions than Assembly Bill 5.

However, at the beginning of June, Uber sent an email to their drivers saying the bill could “threaten your access to flexible work with Uber.” 

Lyft has taken a similar approach and also sent an email to its drivers, telling the workers that the ride-share company is trying to “protect” their jobs. 

“Legislators are considering changes that could cause Lyft to limit your hours and flexibility, resulting in scheduled shifts,” the email, which was later shared by Lyft, states. “We’re advocating to protect your flexibility with Lyft, in addition to establishing an earning minimum, offering protections and benefits and giving drivers representation so that you have a voice in the company.”

Previous Responses to AB5

In May, Uber and Lyft drivers around the world went on strike, advocating for the kinds of protections employees receive, such as a minimum hourly wage. The strikes took place just three weeks before the state assembly voted on and passed Assembly Bill 5.


Even though the strikes did not create any massive change to the companies, according to a June 2019 Ipsos study, the majority of drivers from both Uber and Lyft still want “the same workers’ rights as those in more traditional employment positions.”

Assembly Bill 5 advanced to the appropriations committee earlier this month, but the committee is currently in summer recess.

See what others are saying: (Los Angeles Times) (International Business Times) (SF Gate)



Blue Bell Says It’s Working With Authorities to Track Down Viral Ice Cream Licker

  • Footage has gone viral that shows a woman in a grocery store opening a container of ice cream, licking the top, and then placing it back into a store freezer for another customer to purchase. 
  • The ice cream brand Blue Bell now says they are working with authorities to track down the woman. 
  • The incident also prompted many to ask why the company does not have protective seals on its products.
  • Blue Bell says a “natural seal” is created when the ice cream hardens upside down during production.

Viral Video 

Blue Bell Ice Cream is looking for the woman seen in a viral clip opening a container of ice cream, licking the top, and putting it back into the store freezer.

Footage of the incident went viral on Twitter after a user who goes by the screen name Optimus Primal shared the clip on Saturday with the caption, “What kinda psychopathic behavior is this?!”

In the clip, which has over 11 million views, an individual offscreen can be heard encouraging the woman to lick the company’s Tin Roof flavored ice cream. 

Who is she?

In a statement to Time Magazine, the user who shared the clip said he doesn’t know the woman in it and actually found the video on Instagram as part of a story shared by the actress Alexis Fields-Jackson. 

The actress apparently doesn’t know the woman either. In a post on her Instagram, she wrote: “I am not the girl in the disgusting ice cream video. Leave me alone.” She also explained that she reposted the video like many others have and said it was removed by Instagram. The actress also promised to report those who have been defaming or threatening her over the confusion. 

Some social media users have identified the ice cream licker as a woman from San Antonio, Texas named Asia. According to a report from Heavy, she was allegedly the owner of the Instagram account “xx.asiaaaa.xx” while it was still active. Twitter users say she boasted about becoming famous over the video and said she recently had the flu, which sparked even more outrage.

“Now you can call it Flu Bell ice cream ’cause I was a lil sick last week,” a screenshot of one comment by the account reads. The post goes on to encourage others to follow her lead and use the hashtag #TinRoofChallenge writing, “Let’s see if we can start an epidemic (literally).”

Blue Bell Responds 

People on Twitter have also made sure the ice cream brand is aware of the situation. Blue Bell responded to several users on Twitter, saying they “take the issue very seriously.”

In a statement on their website, the company said they are “currently working with law enforcement, retail partners, and social media platforms” to investigate. 

“This type of incident will not be tolerated. Food safety is a top priority, and we work hard to provide a safe product and maintain the highest level of confidence from our consumers,” the company added.

Many users are calling for police to take action over the food tampering incident.

If the incident did happen in Texas, state law makes it a felony to tamper with consumer products if someone could be injured as a result. Depending on the degree of the charge, punishments can range from fines to jail time. 

One viral tweet said the ice cream licker was charged with a felony. However, as of now police have not confirmed any arrest. 

In fact, San Antonio police told KSAT that they can’t confirm that the incident even happened in their jurisdiction and say it does not appear that the girl in the video lives in San Antonio. So as of now, where the incident happened and who is responsible remains unclear.

Protective Seals

Aside from the outrage directed at the woman in the video, the incident also prompted many social media users to question why Blue Bell does not have protective seals on its ice cream containers.

Some even say the woman’s actions don’t warrant an arrest but instead show that the company has a safety issue.

The company responded to those complaints in their statement by saying, “During production, our half gallons are flipped upside down and sent to a hardening room where the ice cream freezes to the lid creating a natural seal.” 

“The lids are frozen tightly to the carton,” the statement continued. “Any attempt at opening the product should be noticeable.”

See what others are saying: (Time) (Heavy) (Insider)  




Twitter to Label Rule-Breaking Tweets by Political Leaders

  • Twitter announced that it would add a warning notice to political leaders’ tweets that break the site’s rules.
  • While Twitter believes it is important to convey messages from people in power, users will now have to click or tap on a notice to see rule-breaking content.
  • Many have pointed out that this could potentially impact President Donald Trump, as his use of Twitter has long faced public scrutiny.

Twitter Announces Warning Notice

Twitter announced that it will be placing a warning on tweets posted by political leaders that violate the site’s community standards.

In a Thursday blog post, the social media giant said that, in order to “protect the health of the public conversation,” it would keep tweets from government officials on its site even if the content of those tweets breaks its rules.

While that content will be allowed to stay on the site, it will now come with a warning notice to provide users with more context. Trust and Safety, Legal, Public Policy and regional teams will all have a role in deciding if the rule-violating content should be allowed to stay up. Those teams will consider several factors in the decision process.

They will look at the immediacy and severity of potential harm of the tweet. Additionally, the teams will consider whether preserving a tweet will allow others to hold officials accountable. They will also look into whether or not the information provided in the tweet can be accessed elsewhere.

Users will have the option to tap or click the notice to see the content of the affected tweets. 

According to their post, the notice will read, “The Twitter Rules about abusive behavior apply to this Tweet. However, Twitter has determined that it may be in the public’s interest for the Tweet to remain available.”


In addition to having this notice attached to the tweet, Twitter also said that they will “also take steps to make sure the Tweet is not algorithmically elevated on our service.” 

As part of this, tweets of this nature will not appear on the explore page, safe search, the notifications tab, live events pages, or in a timeline’s top tweets. 

Impact on President Donald Trump

This notice will apply to government officials, those running for or being considered for a political position, and those representing government officials. The person must also have a verified account with over 100,000 followers. It would notably apply to President Donald Trump, whose use of Twitter has sparked public debate about political figures and social media.

Twitter has faced backlash for allowing some of Trump’s content to stay online, even though many argue that some of his past tweets have violated their policies when it comes to matters like bullying. In a January 2018 blog post titled “World Leaders,” the company addressed this issue. 

“Blocking a world leader from Twitter or removing their controversial Tweets would hide important information people should be able to see and debate,” Twitter said, in defense of its choice to not remove posts from leaders like Trump. “It would also not silence that leader, but it would certainly hamper necessary discussion around their words and actions.”

Trump has not responded to Twitter’s announcement. However, during a Wednesday interview with Fox Business, he did criticize the company.

“I mean, what they did to me on Twitter is incredible,” he said during a phone interview with Maria Bartiromo. “You know, I have millions and millions of followers, but I will tell you, they make it very hard for people to join me in Twitter, and they make it very much harder for me to get out the message.” 

Twitter said that the warning will only be applied to future tweets and that it does not anticipate having to use the feature often.

See what others are saying: (NPR) (CNBC) (Wired)
