PewDiePie Asks Fans to End “Subscribe to PewDiePie” Meme

  • In a video posted Sunday, massive YouTuber Felix Kjellberg, also known as PewDiePie, called for the end of the “Subscribe to PewDiePie” movement.
  • The phrase was popularized by fans during his battle with the Indian media company T-Series for the most subscribers on YouTube.
  • According to PewDiePie, the movement was once a light-hearted meme, but it has since been used in acts of hate and violence, which he does not condone.

End of “Subscribe to PewDiePie”

PewDiePie addressed the recent negative uses of the “Subscribe to PewDiePie” meme in a video posted Sunday, calling for an end to the movement.

For months now, Felix Kjellberg, known online as PewDiePie, and the Indian media company T-Series have been facing off for the title of most-subscribed channel on YouTube. PewDiePie fans have gone to great lengths to encourage people to “Subscribe to PewDiePie” in an effort to protect his spot. However, the YouTuber is now calling for the meme to stop.

In a video titled “Ending the Subscribe to Pewdiepie Meme,” Kjellberg says that the movement started out as something fun and positive, but it took a turn when someone defaced a World War II memorial with the phrase “Subscribe to PewDiePie.”

Kjellberg disavowed the act and donated to the memorial, saying that he hoped that would be the end of it. Unfortunately for PewDiePie, it wasn’t.

In March, Kjellberg made headlines when a shooter said “Subscribe to PewDiePie” on a live stream before killing 50 people at two mosques in Christchurch, New Zealand.

Kjellberg disavowed the act on Twitter when it happened, but has since deleted that message.

PewDiePie’s now-deleted tweet about the attack in Christchurch.

“I didn’t want hate to win”

In his latest video, he addressed the attack for the first time on camera saying, “to have my name associated with something so unspeakably vile has affected me in more ways than I’ve let show.”

Kjellberg went on to say that he waited to comment on the situation to avoid giving the shooter more attention, but he now says that it is clear the movement should have ended after the Christchurch attack. “I didn’t want to make it about me because I don’t think it has anything to do with me. To put it plainly I didn’t want hate to win,” said Kjellberg.

Response to India’s Diss Track Ban

PewDiePie also addressed the Indian High Court’s decision to block two of his diss tracks aimed at T-Series, “Bitch Lasagna” and “Congratulations.”

Kjellberg says the songs were all meant to be fun and were never meant to be taken seriously. “It’s clearly not fun anymore. It’s clearly gone too far and out of respect for that I’m going to keep the videos blocked,” the YouTuber said.

He also addresses those who have accused the “Subscribe to PewDiePie” movement of being focused on race or politics, saying, “I don’t agree with that at all and I want that to stop. This negative rhetoric is something I don’t agree with at all.”

“To make it perfectly clear: No I’m not racist. I don’t support any form of racist comments or hate toward anyone.”

He closes the video by saying that he does not want to make the milestone of reaching 100 million subscribers focused on beating another channel.

“This movement started out of love and support, so let’s end it with that.”

See what others are saying: (The Verge) (Business Insider) (Engadget)

Influencers Exposed for Posting Fake Private Jet Photos

  • A viral tweet showed a studio set in Los Angeles, California, that is staged to look like the inside of a private jet.
  • Some influencers were called out for using that very same studio to take social media photos and videos.
  • While some slammed them for faking their lifestyles online, others poked fun at the behavior and noted that this is something stars like Bow Wow have been caught doing before.
  • Others have even gone so far as to buy and pose with empty designer shopping bags to pretend they went on a massive spending spree.

A tweet went viral over the weekend exposing the secret behind some influencer travel photos.

“Nahhhhh I just found out LA ig girlies are using studio sets that look like private jets for their Instagram pics,” Twitter user @maisonmelissa wrote Thursday.

“It’s crazy that anything you’re looking at could be fake. The setting, the clothes, the body… idk it just kinda of shakes my reality a bit lol,” she continued in a tweet that quickly garnered over 100,000 likes.

The post included photos of a private jet setup that’s actually a studio in California, which you can rent for $64 an hour on the site Peerspace.

As the tweet picked up attention, many began calling out influencers who they noticed have posted photos or videos in that very same studio.

An embedded TikTok from @the7angels, captioned “Come fly with the angels 👼,” set to the song “Hugh Hefner” by ppcocaine.

Perhaps the most notable influencers to be called out were the Mian Twins, who eventually edited their Instagram captions to admit they were on a set.

While a ton of people were upset about this, others pointed out that it’s not exactly a new idea. Even Bow Wow was famously called out in 2017 for posting a private plane photo on social media before being spotted on a commercial flight.

Twitter users even noted other ridiculous things some people do for the gram, like buying empty designer shopping bags to pretend they’ve gone on a shopping spree.

Meanwhile, others poked fun at the topic, like Lil Nas X, who is never one to miss out on a viral internet moment. He photoshopped himself into the fake private jet, sarcastically writing, “thankful for it all,” in his caption.

So ultimately, it seems like the moral of this story is: don’t believe everything you see on social media.

See what others are saying: (LADBible) (Dazed Digital) (Metro UK)

South Korea’s Supreme Court Upholds Rape Case Sentences for Korean Stars Jung Joon-young and Choi Jong-hoon

  • On Thursday morning, the Supreme Court in Seoul upheld the sentences of Jung Joon-young and Choi Jong-hoon for aggravated rape and related charges.
  • Jung will serve five years in prison, while Choi will serve two and a half.
  • Videos of Jung, Choi, and others raping women were found in group chats that stemmed from investigations into Seungri, of the k-pop group BigBang, as part of the Burning Sun Scandal.
  • The two stars tried to claim that some of the sex was consensual, but the courts ultimately found testimony from survivors trustworthy. Courts did, however, have trouble finding victims who were willing to come forward over fears of social stigma.

Burning Sun Scandal Fallout

South Korea’s Supreme Court upheld the rape verdicts against stars Jung Joon-young and Choi Jong-hoon on Thursday after multiple appeals by the stars and their co-defendants.

Both Jung and Choi were caught up in an ever-growing scandal involving the rapes and sexual assaults of multiple women. Those crimes were filmed and distributed to chatrooms without the victims’ consent.

The entire scandal came to light in March of 2019, when Seungri of the k-pop group BigBang was embroiled in what’s now known as the Burning Sun Scandal. As part of an investigation into the scandal, police found a chatroom in which some stars appeared to be engaging in non-consensual sex with various women. Police found that many of the messages in the KakaoTalk chatroom (KakaoTalk is the major messaging app in South Korea), sent between 2015 and 2016, came from Jung and Choi.

A Year of Court Proceedings

Jung, Choi, and five other defendants found themselves in court in November 2019, facing charges related to filming and distributing their acts without the consent of the victims, as well as aggravated rape, which in South Korea means a rape involving two or more perpetrators.

The court found them all guilty of the rape charge. Jung was sentenced to six years behind bars, while Choi and the others were sentenced to five. Jung was given a harsher sentence because he was also found guilty of filming and distributing the videos of their acts without the victims’ consent.

During proceedings, the court had trouble getting victims to tell their stories. Many feared being shamed or judged because of the incidents and didn’t want the possibility of that information going public. Compounding the court’s problems was the fact that other victims were hard to find.

For their part, the defendants argued that the sexual acts with some of the victims were consensual, though that did not rule out the possibility that there were still victims of their crimes. However, the court found that the testimony of survivors was trustworthy and contradicted the defendants’ claims.

Jung and Choi appealed the decision, which led to more court proceedings. In May 2020, the Seoul High Court upheld their convictions but reduced their sentences to five years for Jung and two and a half years for Choi.

Choi’s sentence was reduced because the court found that he had reached a settlement with a victim.

The decision was appealed a final time to the Supreme Court. This time, Jung and Choi argued that most of the evidence against them, notably the KakaoTalk chatroom messages and videos, was illegally obtained by police.

On Thursday morning, the Supreme Court ultimately disagreed with Jung and Choi and said their revised sentences would stand.

Jung, Choi, and the other defendants will also still have to complete 80 hours of sexual violence treatment courses and are banned from working with children for five years.

See what others are saying: (ABC) (Yonhap News) (Soompi)

YouTube Says It Will Use AI to Age-Restrict Content

  • YouTube announced Tuesday that it would be expanding its use of machine learning to handle age-restricting content.
  • The decision has been controversial, especially after news that other AI systems employed by the company took down videos at nearly double the usual rate.
  • The decision likely stems from both legal responsibilities in some parts of the world and practical concerns about the sheer amount of content uploaded to the site.
  • It might also help with moderator burnout, since the platform is currently understaffed and struggles with extremely high turnover.
  • In fact, the platform still faces a lawsuit from a moderator claiming the job gave them Post Traumatic Stress Disorder and that the company offered few resources to cope with the content they were required to watch.

AI Age Restrictions

YouTube announced Tuesday that it will use AI and machine learning to automatically apply age restrictions to videos.

In a recent blog post, the platform wrote, “our Trust & Safety team applies age-restrictions when, in the course of reviewing content, they encounter a video that isn’t appropriate for viewers under 18.”

“Going forward, we will build on our approach of using machine learning to detect content for review, by developing and adapting our technology to help us automatically apply age-restrictions.”

Flagged videos would effectively be blocked for anyone who isn’t signed into an account, or whose account indicates they are below the age of 18. YouTube stated these changes were a continuation of its efforts to make YouTube a safer place for families. It initially rolled out YouTube Kids as a dedicated platform for those under 13, and now it wants to try to sanitize the main platform site-wide, although, notably, it doesn’t plan to turn the entire platform into a new YouTube Kids.
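
As a rough illustration of that gating rule, here is a minimal sketch in Python. Everything in it (the Viewer type, field names, and function) is a hypothetical stand-in for how such a check might work, not YouTube’s actual code or API:

```python
# A minimal sketch of the gating rule described above. The Viewer type
# and every name here are hypothetical, not YouTube's actual code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Viewer:
    signed_in: bool
    age: Optional[int] = None  # age on the account, None if unknown

def can_watch_age_restricted(viewer: Viewer) -> bool:
    """Blocked unless the viewer is signed in to an account that
    indicates they are 18 or older."""
    return viewer.signed_in and viewer.age is not None and viewer.age >= 18

# Examples matching the rule in the article:
print(can_watch_age_restricted(Viewer(signed_in=False)))         # False: logged out
print(can_watch_age_restricted(Viewer(signed_in=True, age=15)))  # False: under 18
print(can_watch_age_restricted(Viewer(signed_in=True, age=21)))  # True
```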

It’s also not a coincidence that this move helps YouTube better fall in line with regulations across the world. In Europe, in addition to the AI age restrictions, users may face extra steps if YouTube can’t confirm their age, such as providing a government ID or credit card to prove they are over 18.

The company did say that if a video is age-restricted, there will be an appeals process that gets the video in front of an actual person to check it.

On that note, just days before announcing that it would implement AI to age-restrict, YouTube also said it would be expanding its moderation team, which had largely been on hiatus because of the pandemic.

It’s hard to say how much these changes will actually affect creators or how much money they can make from the platform. The only assurances YouTube gave were to creators who are part of the YouTube Partner Program.

“For creators in the YouTube Partner Program, we expect these automated age-restrictions to have little to no impact on revenue as most of these videos also violate our advertiser-friendly guidelines and therefore have limited or no ads.”

In other words, most of the videos that get age-restricted already make little or nothing from ads, and that’s unlikely to change.

Community Backlash

Every time YouTube makes a big change there are a lot of reactions, especially when it involves handing processes over to AI. Tuesday’s announcement was no different.

On YouTube’s tweet announcing the changes, common responses included complaints like, “what’s the point in an age restriction on a NON kids app. That’s why we have YouTube kids. really young kids shouldn’t be on normal youtube. So we don’t realistically need an age restriction.”

“Please don’t implement this until you’ve worked out all the kinks,” one user pleaded. “I feel like this might actually hurt a lot of creators, who aren’t making stuff for kids, but get flagged as kids channels because of bright colors and stuff like that”

Worries about hiccups in the rollout of the new system were common among users, though it’s possible that YouTube’s Sept. 20 announcement that it would bring human moderators back to the platform was made to help balance out how much damage a new AI could do.

In a late-August transparency report, YouTube found that AI moderation was far more restrictive. When the moderation team was first downsized between April and June, YouTube’s AI largely took over and removed around 11 million videos, double the normal rate.

YouTube did allow creators to appeal those decisions; about 300,000 videos were appealed, and about half were reinstated. Facebook had a similar problem and will also bring back moderators to handle both restricted content and the upcoming election.

Other Reasons for the Changes

YouTube’s decision to expand its use of AI not only falls in line with various laws regarding the verification of a user’s age and what content is widely available to the public, but was also likely made for practical reasons.

The site gets over 400 hours of content uploaded every minute. Even accounting for different time zones and staggered schedules, YouTube would need to employ over 70,000 people just to check what’s uploaded to the site: 400 hours of video per minute means roughly 24,000 people watching at any given moment, or about 72,000 across three eight-hour shifts.
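
As a back-of-the-envelope check on that figure, here is a short Python sketch. The 400-hours-per-minute upload rate comes from the article; the eight-hour shift length is an assumption for illustration:

```python
# Back-of-the-envelope staffing estimate based on the article's figure
# of ~400 hours of video uploaded to YouTube every minute. The 8-hour
# shift length is an assumption for illustration.
UPLOAD_HOURS_PER_MINUTE = 400       # from the article
SHIFT_HOURS = 8                     # assumed shift length
SHIFTS_PER_DAY = 24 // SHIFT_HOURS  # round-the-clock coverage

# Each real-world minute yields 400 hours (24,000 minutes) of video,
# so that many people must be watching at any given moment to keep up.
concurrent_reviewers = UPLOAD_HOURS_PER_MINUTE * 60

# Covering every hour of the day multiplies that by the number of shifts.
total_staff = concurrent_reviewers * SHIFTS_PER_DAY

print(f"Reviewers needed at any moment: {concurrent_reviewers:,}")    # 24,000
print(f"Total staff across {SHIFTS_PER_DAY} shifts: {total_staff:,}") # 72,000
```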

Outlets like The Verge have done a series about how YouTube, Google, and Facebook moderators are dealing with depression, anger, and Post Traumatic Stress Disorder because of their job. These issues were particularly prevalent among people working in what YouTube calls the “terror” or “violent extremism” queue.

One moderator told The Verge, “Every day you watch someone beheading someone, or someone shooting his girlfriend. After that, you feel like wow, this world is really crazy. This makes you feel ill. You’re feeling there is nothing worth living for. Why are we doing this to each other?”

That same individual noted that since working there, he began to gain weight, lose hair, have a short temper, and experience general signs of anxiety.

On top of these claims, YouTube is also facing a lawsuit filed in a California court Monday by a former content moderator.

The complaint states that Jane Doe “has trouble sleeping and when she does sleep, she has horrific nightmares. She often lays awake at night trying to go to sleep, replaying videos that she has seen in her mind.”

“She cannot be in crowded places, including concerts and events, because she fears mass shootings. She has severe and debilitating panic attacks,” it continued. “She has lost many friends because of her anxiety around people. She has trouble interacting and being around kids and is now scared to have children.”

These issues weren’t just reported by people working on the “terror” queue, but by anyone training to become a moderator.

“For example, during training, Plaintiff witnessed a video of a smashed open skull with people eating from it; a woman who was kidnapped and beheaded by a cartel; a person’s head being run over by a tank; beastiality; suicides; self-harm; children being rapped [sic]; births and abortions,” the complaint alleges.

“As the example was being presented, Content Moderators were told that they could step out of the room. But Content Moderators were concerned that leaving the room would mean they might lose their job because at the end of the training new Content Moderators were required to pass a test applying the Community Guidelines to the content.”

During their three-week training, moderators allegedly don’t receive much resilience training or wellness resources.

These kinds of lawsuits aren’t unheard of. Facebook faced a similar suit in 2018, in which a woman claimed that during her time as a moderator she developed PTSD as a result of “constant and unmitigated exposure to highly toxic and extremely disturbing images at the workplace.”

That case hasn’t yet been decided. Currently, Facebook and the plaintiff have agreed to settle for $52 million, pending approval from the court.

The settlement would only apply to U.S. moderators.

See what others are saying: (CNET) (The Verge) (Vice)
