
Creators File Lawsuit Against YouTube Over Alleged LGBTQ+ Discrimination


  • A group of LGBTQ+ creators has filed a lawsuit against YouTube and Google claiming that YouTube flags, suppresses, and demonetizes LGBTQ+ videos.
  • The lawsuit claims YouTube restricts content featuring certain LGBTQ+ tags such as “gay,” “lesbian,” “bisexual,” or “transgender.”
  • YouTube has denied such claims in the past but has not responded specifically to the lawsuit. 

The Lawsuit Against YouTube and Google

Several LGBTQ+ creators are suing YouTube and its parent company Google for allegedly discriminating against LGBTQ+ content on YouTube. 

Among the accusations, the creators claim YouTube restricts recommendations for LGBTQ+ videos, demonetizes them, and alters their thumbnails. 

Creators Bria Kam and Chrissy Chambers of BriaAndChrissy, Amp Somers of Watts The Safeword, Chase Ross, Lindsay Amer, Chris Knight, Celso Dulay, and Cameron Stiehl all filed the class-action lawsuit Tuesday in San Jose, California.

“Our LGBTQ+ content is being demonetized, restricted, and not sent out to viewers which has highly affected our ability to reach the community we strongly want to help,” Chambers said in a video posted the same day.

In the suit, Kam and Chambers argue that their channel previously earned about $3,500 each month but now generates only about $400 to $500. 

After they posted a music video called “Face Your Fears,” Kam and Chambers said, the video was placed under Restricted Mode. The video was filmed as a tribute to the victims of the 2016 Orlando Pulse shooting, and it features the two kissing in front of anti-gay protesters.

“They flagged our pride,” YouTuber Chase Ross said. “They did not allow us to buy ads. They restricted us, they demonetized us, and they did not stand up for us.” 

Last year, Ross, who often posts about trans issues, accused YouTube of age-gating his videos for including the word “transgender” in the titles.

“Growing up, I was in a very religious household,” said Amp Somers of the sex education channel Watts The Safeword. “I didn’t get any sort of gay education, let alone queer education, that applied to me and the sex I was going to have. I created content on the internet that I wish I would have had growing up, but we’re finding it harder and harder to create content on this platform. Google and YouTube continue to censor us and tell us that we’re not breaking any rules but that our content is still not allowed and going to be restricted on this platform.” 

YouTube Content Selection and Enforcement

The creators also claim YouTube restricts LGBTQ+ content featuring words like “gay,” “lesbian,” “bisexual,” “transgender,” or “queer.” Notably, YouTube does not publish its algorithm, which can make it hard for creators to tell whether their content is actually being suppressed. 

A YouTube spokesperson replied “no comment” when asked about the lawsuit, but YouTube has denied similar claims in the past. Last week, YouTube CEO Susan Wojcicki pushed back against claims that videos are demonetized for falling under LGBTQ+ categories.

In an interview with vlogger Alfie Deyes, she said, “We do not automatically demonetize LGBTQ content… We work incredibly hard to make sure that our systems are fair.”

She also said YouTube does not have a policy of demonetizing a video because it has a certain word in the title, and that the processes for recommending videos and determining ads are independent of each other.

On Wednesday morning, after news of the lawsuit spread, Wojcicki posted Deyes’ Aug. 4 video on Twitter, though it’s unclear if the timing is related.

Another part of the lawsuit argues that because YouTube is the largest video streaming website, it holds a near-monopoly.

The suit states YouTube “used their monopoly power over content regulation to selectively apply their rules and restrictions in a manner that allowed them to gain an unfair advantage to profit from their own content to the detriment of its consumers.”

The creators use that argument to claim YouTube “goes easy” on some of its biggest creators, citing content from James Charles. Similar concerns have been raised in the past about YouTubers like Logan Paul and Felix Kjellberg, also known as PewDiePie.

“[YouTube] continue[s] to restrain the innocuous travel videos of Watts The Safeword under its Restricted Mode, age restrictions, and demonetization rules and practices, while allowing objectively and sexually explicit content that Google/YouTube sponsor and/or profit from to run unrestricted on the YouTube platform,” the suit alleges.

It continues by citing a recent video on the beauty YouTuber’s channel showing him wearing a G-string and spanking a woman’s bare butt at Coachella.

Even though Watts The Safeword features more mature content, the channel says it voluntarily applies the Restricted Mode filter to its more sexually explicit videos. 

According to the Washington Post, “eleven current and past moderators, who have worked on the front lines of content decisions, believe that popular creators often get special treatment in the form of looser interpretations of YouTube’s guidelines prohibiting demeaning speech, bullying and other forms of graphic content.”

YouTube has also denied those claims.

Response

Following this lawsuit, many online said they were standing with the creators suing YouTube and Google.

Some on Twitter even shared their own experiences trying to create LGBTQ+ content on YouTube.

See what others are saying: (The Verge) (Washington Post) (Business Insider)


Influencers Exposed for Posting Fake Private Jet Photos


  • A viral tweet showed a studio set in Los Angeles, California that is staged to look like the inside of a private jet.
  • Some influencers were called out for using that very same studio to take social media photos and videos.
  • While some slammed them for faking their lifestyles online, others poked fun at the behavior and noted that this is something stars like Bow Wow have been caught doing before.
  • Others have even gone so far as to buy and pose with empty designer shopping bags to pretend they went on a massive spending spree.

A tweet went viral over the weekend exposing the secret behind some influencer travel photos.

“Nahhhhh I just found out LA ig girlies are using studio sets that look like private jets for their Instagram pics,” Twitter user @maisonmelissa wrote Thursday.

“It’s crazy that anything you’re looking at could be fake. The setting, the clothes, the body… idk it just kinda of shakes my reality a bit lol,” she continued in a tweet that quickly garnered over 100,000 likes.

The post included photos of a private jet setup that’s actually a studio in California, which you can rent for $64 an hour on the site Peerspace.

As the tweet picked up attention, many began calling out influencers who they noticed have posted photos or videos in that very same studio.

[Embedded TikTok from @the7angels: “Come fly with the angels 👼,” set to “Hugh Hefner” by ppcocaine]

Perhaps the most notable influencers to be called out were the Mian Twins, who eventually edited their Instagram captions to admit they were on a set.

While a ton of people were upset about this, others pointed out that it’s not exactly a new idea. Bow Wow was famously called out in 2017 for posting a private plane photo on social media before being spotted on a commercial flight. 

Twitter users even noted other ridiculous things some people do for the gram, like buying empty designer shopping bags and posing with them to pretend they’ve gone on a shopping spree.

Meanwhile, others poked fun at the topic, like Lil Nas X, who is never one to miss out on a viral internet moment. He photoshopped himself into the fake private jet, sarcastically writing, “thankful for it all,” in his caption.

So ultimately, it seems like the moral of this story is: don’t believe everything you see on social media.

See what others are saying: (LADBible) (Dazed Digital) (Metro UK)


South Korea’s Supreme Court Upholds Rape Case Sentences for Korean Stars Jung Joon-young and Choi Jong-hoon


  • On Thursday morning, the Supreme Court in Seoul upheld the sentences of Jung Joon-young and Choi Jong-hoon for aggravated rape and related charges.
  • Jung will serve five years in prison, while Choi will serve two and a half.
  • Videos of Jung, Choi, and others raping women were found in group chats uncovered during investigations into Seungri, of the k-pop group BigBang, as part of the Burning Sun Scandal.
  • The two stars claimed some of the sex was consensual, but the courts ultimately found the survivors’ testimony trustworthy. The courts did, however, have trouble finding victims willing to come forward over fears of social stigma.

Burning Sun Scandal Fallout

South Korea’s Supreme Court upheld the rape verdicts against Jung Joon-young and Choi Jong-hoon on Thursday, after multiple appeals by the two stars and their co-defendants.

Both Jung and Choi were involved in an ever-growing scandal over the rapes and sexual assaults of multiple women, crimes that were filmed and distributed to chatrooms without the victims’ consent.

The entire scandal came to light in March 2019, when Seungri of the k-pop group BigBang was embroiled in what’s now known as the Burning Sun Scandal. As part of an investigation into the scandal, police found a chatroom in which some stars appeared to be engaging in non-consensual sex with various women. Police found that many of the messages in the KakaoTalk chatroom (the major messaging app in South Korea) from between 2015 and 2016 were sent by Jung and Choi.

A Year of Court Proceedings

Jung, Choi, and five other defendants found themselves in court in November 2019, facing charges related to filming and distributing their acts without the victims’ consent, as well as charges of aggravated rape, which in South Korea means rape involving two or more perpetrators.

The court found them all guilty of the rape charge. Jung was sentenced to six years behind bars, while Choi and the others were sentenced to five years. Jung received the harsher sentence because he was also found guilty of filming and distributing the videos of their acts without the victims’ consent.

During proceedings, the court had trouble getting victims to tell their stories. Many feared being shamed or judged because of the incidents and didn’t want the possibility of that information going public. Compounding the court’s problems was the fact that other victims were hard to find.

The defendants argued that the sexual acts with some of the victims were consensual, though that would not have ruled out the possibility that there were still victims of their crimes. However, the court found that the survivors’ testimony was trustworthy and contradicted the defendants’ claims.

Jung and Choi appealed the decision, which led to more court proceedings. In May 2020, the Seoul High Court upheld their convictions but reduced their sentences to five years for Jung and two and a half years for Choi.

Choi’s sentence was reduced because the court found that he had reached a settlement with a victim.

The decision was appealed a final time to the Supreme Court. This time, the two argued that most of the evidence against them, notably the KakaoTalk chatroom messages and videos, was illegally obtained by police.

On Thursday morning, the Supreme Court ultimately disagreed with Jung and Choi and said their revised sentences would stand.

Jung, Choi, and the other defendants will also still have to complete 80 hours of sexual violence treatment courses and are banned from working with children for five years.

See what others are saying: (ABC) (Yonhap News) (Soompi)


YouTube Says It Will Use AI to Age-Restrict Content


  • YouTube announced Tuesday that it would expand its use of machine learning to handle age-restricting content.
  • The decision has been controversial, especially after news that other AI systems employed by the company took down videos at nearly double the normal rate.
  • The decision likely stems both from legal responsibilities in some parts of the world and from practical concerns about the amount of content uploaded to the site.
  • It might also help with moderator burnout, since the platform is currently understaffed and struggles with extremely high turnover.
  • In fact, the platform still faces a lawsuit from a former moderator claiming the job gave her Post Traumatic Stress Disorder and that the company offered few resources for coping with the content moderators are required to watch.

AI Age Restrictions

YouTube announced Tuesday that it will use AI and machine learning to automatically apply age restrictions to videos.

In a recent blog post, the platform wrote, “our Trust & Safety team applies age-restrictions when, in the course of reviewing content, they encounter a video that isn’t appropriate for viewers under 18.”

“Going forward, we will build on our approach of using machine learning to detect content for review, by developing and adapting our technology to help us automatically apply age-restrictions.”

Flagged videos would effectively be blocked for anyone who isn’t signed into an account or whose account indicates they are under 18. YouTube stated these changes were a continuation of its efforts to make YouTube a safer place for families: it initially rolled out YouTube Kids as a dedicated platform for those under 13, and now it wants to sanitize the platform site-wide. Notably, though, it doesn’t plan to turn the entire platform into a new YouTube Kids.
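
In outline, the gate YouTube describes is a simple check. Here is a minimal, hypothetical sketch of that logic (the names and types are ours for illustration, not YouTube’s actual code):

```python
# Hypothetical sketch of the age gate described above -- illustrative only,
# not YouTube's actual implementation.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Viewer:
    signed_in: bool
    age: Optional[int] = None  # None if the account has no age on file


def can_view(age_restricted: bool, viewer: Viewer) -> bool:
    """A flagged video is viewable only by a signed-in viewer aged 18 or older."""
    if not age_restricted:
        return True
    return viewer.signed_in and viewer.age is not None and viewer.age >= 18


# A signed-out viewer is blocked from a restricted video; a signed-in adult is not.
assert can_view(True, Viewer(signed_in=False)) is False
assert can_view(True, Viewer(signed_in=True, age=21)) is True
```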

It’s also not a coincidence that this move helps YouTube better fall in line with regulations around the world. In Europe, for instance, if YouTube can’t confirm a user’s age, the user may face additional verification steps on top of the AI age restrictions, such as providing a government ID or credit card to prove they are over 18.

YouTube did say that if a video is age-restricted, there will be an appeals process that puts the video in front of an actual person for review.

On that note, just days before announcing the AI age restrictions, YouTube said it would be expanding its moderation team, which had largely been on hiatus because of the pandemic.

It’s hard to say how much these changes will actually affect creators or how much money they can make from the platform. The only assurances YouTube gave were to creators who are part of the YouTube Partner Program.

“For creators in the YouTube Partner Program, we expect these automated age-restrictions to have little to no impact on revenue as most of these videos also violate our advertiser-friendly guidelines and therefore have limited or no ads.”

In other words, most of the videos likely to be auto-restricted already make little or nothing from ads, and that’s unlikely to change.

Community Backlash

Every time YouTube makes a big change there are a lot of reactions, especially when the change involves handing processes over to AI. Tuesday’s announcement was no different.

On YouTube’s tweet announcing the changes, common responses included complaints like, “what’s the point in an age restriction on a NON kids app. That’s why we have YouTube kids. really young kids shouldn’t be on normal youtube. So we don’t realistically need an age restriction.”

“Please don’t implement this until you’ve worked out all the kinks,” one user pleaded. “I feel like this might actually hurt a lot of creators, who aren’t making stuff for kids, but get flagged as kids channels because of bright colors and stuff like that”

Users commonly worried about hiccups in the rollout of the new system, though it’s possible that YouTube’s Sept. 20 announcement that it would bring human moderators back to the platform was made in part to offset how much damage a new AI could do.

In a late-August transparency report, YouTube found that AI moderation was far more restrictive. When the moderation team was first downsized between April and June, YouTube’s AI largely took over and removed around 11 million videos, double the normal rate.

YouTube did allow creators to appeal those decisions: about 300,000 videos were appealed, and about half of those were reinstated. Facebook had a similar problem and will likewise bring back moderators to handle both restricted content and the upcoming election.

Other Reasons for the Changes

YouTube’s decision to expand its use of AI not only falls in line with various laws regarding age verification and what content can be widely available to the public, but also likely stems from practical considerations.

The site gets over 400 hours of content uploaded every minute. Even with staggered schedules across time zones, YouTube would need to employ over 70,000 people just to watch everything uploaded to the site.
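
That figure holds up as a back-of-the-envelope calculation. The sketch below assumes moderators watch uploads in real time and work 8-hour shifts around the clock; those assumptions are ours, not YouTube’s:

```python
# Back-of-the-envelope check of the "over 70,000" staffing figure.
# Assumptions (ours, not YouTube's): moderators watch uploads in real time,
# work 8-hour shifts, and coverage is needed 24 hours a day.

upload_rate = 400                            # hours of video uploaded per minute (per the article)
hours_uploaded_per_hour = upload_rate * 60   # 24,000 hours of video arrive every hour

concurrent_viewers = hours_uploaded_per_hour # one person can watch one hour per hour
shifts_per_day = 24 // 8                     # three 8-hour shifts for 24/7 coverage

total_staff = concurrent_viewers * shifts_per_day
print(total_staff)  # 72000 -- consistent with "over 70,000"
```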

Outlets like The Verge have done a series about how YouTube, Google, and Facebook moderators are dealing with depression, anger, and Post Traumatic Stress Disorder because of their job. These issues were particularly prevalent among people working in what YouTube calls the “terror” or “violent extremism” queue.

One moderator told The Verge, “Every day you watch someone beheading someone, or someone shooting his girlfriend. After that, you feel like wow, this world is really crazy. This makes you feel ill. You’re feeling there is nothing worth living for. Why are we doing this to each other?”

That same individual noted that since working there, he began to gain weight, lose hair, have a short temper, and experience general signs of anxiety.

On top of these claims, YouTube is also facing a lawsuit filed Monday in a California court by a former content moderator.

The complaint states that Jane Doe “has trouble sleeping and when she does sleep, she has horrific nightmares. She often lays awake at night trying to go to sleep, replaying videos that she has seen in her mind.

“She cannot be in crowded places, including concerts and events, because she fears mass shootings. She has severe and debilitating panic attacks,” it continued. “She has lost many friends because of her anxiety around people. She has trouble interacting and being around kids and is now scared to have children.”

These issues weren’t limited to people working the “terror” queue; they affected anyone training to become a moderator.

“For example, during training, Plaintiff witnessed a video of a smashed open skull with people eating from it; a woman who was kidnapped and beheaded by a cartel; a person’s head being run over by a tank; beastiality; suicides; self-harm; children being rapped [sic]; births and abortions,” the complaint alleges.

“As the example was being presented, Content Moderators were told that they could step out of the room. But Content Moderators were concerned that leaving the room would mean they might lose their job because at the end of the training new Content Moderators were required to pass a test applying the Community Guidelines to the content.”

During their three-week training, moderators allegedly don’t receive much resilience training or wellness resources.

These kinds of lawsuits aren’t unheard of. Facebook faced a similar suit in 2018, where a woman claimed that during her time as a moderator she developed PTSD as a result of “constant and unmitigated exposure to highly toxic and extremely disturbing images at the workplace.”

That case hasn’t yet been decided in court; currently, Facebook and the plaintiff have agreed to settle for $52 million, pending approval from the court.

The settlement would only apply to U.S. moderators.

See what others are saying: (CNET) (The Verge) (Vice)
