YouTube Updates Harassment Policy to Curb Threats and Personal Attacks


  • YouTube announced new bullying and harassment policies that will prohibit implied threats and malicious insults based on a person’s sexuality, race, or gender expression.
  • Under the new policy, channels that show a pattern of harassing behavior by repeatedly making remarks that come close to violating the harassment policy could also face consequences.
  • These changes come several months after a public controversy in which former Vox host Carlos Maza accused conservative commentator Steven Crowder of harassing him in videos on Crowder's channel. While Crowder did repeatedly call Maza names like "lispy queer," YouTube said this was not a violation of its policy.
  • Many were not happy with YouTube’s new policy, resulting in #YouTubeIsOverParty trending on Twitter. Some creators say they have already been impacted by the guidelines.

YouTube’s New Policy

YouTube announced new policy changes that will prohibit implied threats and malicious insults based on a person’s sexuality, race, or gender expression.

In a Wednesday blog post, the company announced that it was tightening its bullying and harassment guidelines. The changes follow months of review with creators, experts from anti-bullying organizations, free speech proponents, and advisers from across the political spectrum.

“Harassment hurts our community by making people less inclined to share their opinions and engage with each other,” YouTube’s post said. “We heard this time and again from creators, including those who met with us during the development of this policy update.”

The company's first major change aims to take "a stronger stance against threats and personal attacks." YouTube's guidelines previously covered only videos containing explicit threats; the new policy extends enforcement to videos with veiled or implied threats.

“This includes content simulating violence toward an individual or language suggesting physical violence may occur,” the post explains.  

Beyond outright threats, the policy will also cover demeaning language that YouTube feels crosses the line, including "content that maliciously insults someone based on protected attributes such as their race, gender expression, or sexual orientation."

YouTube also addressed consequences for a "pattern of harassing behavior." The company's post says creators found that harassment sometimes stemmed from remarks made repeatedly over the course of a series of videos or comments. Even though those videos or comments may not violate YouTube's policy on their own, the company still has a plan to combat the pattern.

“Channels that repeatedly brush up against our harassment policy will be suspended from [YouTube Partner Program], eliminating their ability to make money on YouTube,” YouTube said. The platform added that this content could be removed, and channels could receive strikes or be terminated. 

YouTube clarified that these changes would also apply to the platform's comment sections, not just to posted videos. The company expects the number of comments removed from the site to increase, noting that 16 million were removed in the third quarter.

YouTube also outlined recently added tools that give creators some control over their comment sections.

“When we’re not sure a comment violates our policies, but it seems potentially inappropriate, we give creators the option to review it before it’s posted on their channel,” YouTube said.

In the early stages of the rollout, YouTube saw a 75% reduction in user flags on comments. Most creators now have this setting enabled but can opt out of it if they would like. They can also simply ignore the held comments.

“We expect there will continue to be healthy debates over some of the decisions and we have an appeals process in place if creators believe we’ve made the wrong call on a video,” the company said of this new update. 

Why Did YouTube Change Its Policy?

Many believe these changes were prompted by the controversy between Carlos Maza, who hosted a series for Vox, and Steven Crowder, who hosts a series called Louder with Crowder on YouTube. Back in May, Maza tweeted a thread calling Crowder out for repeatedly insulting him on his show, where Crowder referred to Maza as "Mr. Gay Vox," a "lispy queer," and a "gay Latino from Vox" in a mocking tone.

Crowder defended himself, saying this should not count as bullying because he made the comments while criticizing Maza's series. YouTube ultimately responded to Maza, saying that Crowder's comments, while potentially offensive, did not violate its policy.

Maza continued to call YouTube out for the decision, saying it "gives bigots free license," and accused the site of using its gay creators. Many criticized YouTube's response, which came in June as the company celebrated Pride Month. Some found it hypocritical for the company to publicly celebrate the LGBTQ community while allowing comments some perceived as homophobic to stay on its site.

Amid the backlash, YouTube ended up suspending monetization on Crowder's channel. That decision was also met with outrage.

Maza and Crowder React

Maza tweeted a thread about the new policy on Wednesday morning, arguing that the real question is whether YouTube will enforce it against all creators, which he considers unlikely.

“YouTube loves to manage PR crises by rolling out vague content policies they don’t actually enforce,” he wrote. “These policies only work if YouTube is willing to take down its most popular rule-breakers. And there’s no reason, so far, to believe that it is.”

Before YouTube made its official announcement, Crowder posted a video titled "Urgent. The YouTube 'Purge' Is Coming." The video was uploaded Tuesday and is based largely on murmurs about what was to come. He said the policies could silence and negatively impact his channel and others like it.

“Obviously my heart goes out to any future conservative or any future independent voices that are affected because people got their feelings hurt,” he said. 

Policy Gets Negative Feedback

Other creators also shared their reactions, with some saying they were already being impacted by the new changes. Ian Carter, known online as iDubbbz, tweeted a screenshot of an email from YouTube saying his video “Content Cop: Leafy” was taken down for violating guidelines.

He uses vulgar and antagonistic language in the video and jokes about bullying being okay. Many, however, don't think the video should have been removed, as it was meant to call out someone else's bad behavior.

Another creator, Gokanaru, said his video critiquing h3h3 Productions was removed.

Some online were frustrated with this, arguing that these creators' videos should not be taken down while someone like Onision, who has been accused of predatory behavior and grooming, still has videos online.

#YouTubeIsOverParty was trending on Twitter by late Wednesday morning. Many used the hashtag to say the policy could stifle creativity on the platform and that YouTube should not frame it as a policy creators asked for.

Even though the trending topic gained a lot of traction, YouTuber Taylor Harris said that day-to-day use of the site will likely be unaffected.

See what others are saying: (Axios) (TechCrunch) (Vox)


South Korea’s Supreme Court Upholds Rape Case Sentences for Korean Stars Jung Joon-young and Choi Jong-hoon


  • On Thursday morning, the Supreme Court in Seoul upheld the sentences of Jung Joon-young and Choi Jong-hoon for aggravated rape and related charges.
  • Jung will serve five years in prison, while Choi will serve two and a half.
  • Videos of Jung, Choi, and others raping women were found in group chats uncovered during investigations into Seungri, of the K-pop group BigBang, as part of the Burning Sun Scandal.
  • The two stars claimed that some of the sex was consensual, but the courts ultimately found the survivors' testimony trustworthy. The courts did, however, have trouble finding victims willing to come forward over fears of social stigma.

Burning Sun Scandal Fallout

South Korea's Supreme Court upheld the rape verdicts against Jung Joon-young and Choi Jong-hoon on Thursday after multiple appeals by the two stars and their co-defendants.

Both Jung and Choi were involved in an ever-growing scandal over the rapes and sexual assaults of multiple women, crimes that were filmed and distributed to chatrooms without the victims' consent.

The entire scandal came to light in March 2019, when Seungri of the K-pop group BigBang became embroiled in what's now known as the Burning Sun Scandal. As part of an investigation into that scandal, police found a chatroom containing videos of stars engaging in what appeared to be non-consensual sex with various women. Police found that many of the messages in the chatroom on KakaoTalk (the major messaging app in South Korea) sent between 2015 and 2016 came from Jung and Choi.

A Year of Court Proceedings

Jung, Choi, and five other defendants found themselves in court in November 2019, facing charges of filming and distributing their acts without the victims' consent, as well as aggravated rape, which in South Korea means rape involving two or more perpetrators.

The court found them all guilty of the rape charge. Jung was sentenced to six years behind bars, while Choi and the others were sentenced to five. Jung was given a harsher sentence because he was also found guilty of filming and distributing the videos of their acts without the victims' consent.

During proceedings, the court had trouble getting victims to tell their stories. Many feared being shamed or judged because of the incidents and didn’t want the possibility of that information going public. Compounding the court’s problems was the fact that other victims were hard to find.

The defendants argued that the sexual acts with some of the victims were consensual, though that claim would not rule out other victims of their crimes. The court, in any case, found the survivors' testimony trustworthy and concluded that it contradicted the defendants' claims.

Jung and Choi appealed the decision, which led to more court proceedings. In May 2020, the Seoul High Court upheld their convictions but reduced their sentences to five years for Jung and two and a half years for Choi.

Choi’s sentence was reduced because the court found that he had reached a settlement with a victim.

The decision was appealed a final time to the Supreme Court. This time, the pair argued that most of the evidence against them, notably the KakaoTalk chatroom messages and videos, was illegally obtained by police.

On Thursday morning, the Supreme Court ultimately disagreed with Jung and Choi and said their revised sentences would stand.

Jung, Choi, and the other defendants will also still have to complete 80 hours of sexual violence treatment courses, and they are banned from working with children for five years.

See what others are saying: (ABC) (Yonhap News) (Soompi)


YouTube Says It Will Use AI to Age-Restrict Content


  • YouTube announced Tuesday that it would be expanding its use of machine learning to automatically age-restrict content.
  • The decision has been controversial, especially after news that AI systems employed by the company had taken down videos at nearly double the normal rate.
  • The decision likely stems both from legal obligations in some parts of the world and from practical concerns about the amount of content uploaded to the site.
  • It might also help with moderator burnout, since the platform is currently understaffed and struggles with extremely high turnover.
  • In fact, the platform still faces a lawsuit from a former moderator claiming the job gave her post-traumatic stress disorder (PTSD) and that the company offered few resources for coping with the content moderators are required to watch.

AI Age Restrictions

YouTube announced Tuesday that it will use AI and machine learning to automatically apply age restrictions to videos.

In a recent blog post, the platform wrote, “our Trust & Safety team applies age-restrictions when, in the course of reviewing content, they encounter a video that isn’t appropriate for viewers under 18.”

“Going forward, we will build on our approach of using machine learning to detect content for review, by developing and adapting our technology to help us automatically apply age-restrictions.”

Flagged videos would effectively be blocked from anyone who isn't signed into an account or whose account indicates they are under 18. YouTube stated these changes continue its efforts to make the platform a safer place for families. It initially rolled out YouTube Kids as a dedicated platform for those under 13, and it now wants to extend protections site-wide, though notably, it doesn't plan to turn the entire platform into a new YouTube Kids.
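The gate described here amounts to a simple access check at playback time. As a rough illustration only, here is a minimal sketch of that rule in Python; the `Video` and `User` types and their fields are hypothetical stand-ins, not YouTube's actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Video:
    # Set by a human reviewer or, under the new system, the ML classifier.
    age_restricted: bool

@dataclass
class User:
    # None if the account has no confirmed age on file.
    age: Optional[int]

def can_view(video: Video, user: Optional[User]) -> bool:
    """Block signed-out viewers and under-18 accounts from restricted videos."""
    if not video.age_restricted:
        return True
    return user is not None and user.age is not None and user.age >= 18

# A signed-out viewer (user=None) cannot watch a restricted video.
assert can_view(Video(age_restricted=True), None) is False
assert can_view(Video(age_restricted=True), User(age=21)) is True
assert can_view(Video(age_restricted=False), None) is True
```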

It's also not a coincidence that this move helps YouTube better fall in line with regulations around the world. In Europe, on top of the AI age restrictions, users may face additional steps if YouTube can't confirm their age, such as providing a government ID or credit card to prove they are over 18.

YouTube did say that if a video is age-restricted, an appeals process will put it in front of an actual person for review.

On that note, just days before announcing that it would use AI to age-restrict content, YouTube also said it would expand its moderation team, which had largely been on hiatus because of the pandemic.

It's hard to say how much these changes will actually affect creators or how much money they can make from the platform. The only assurances YouTube gave were to creators in the YouTube Partner Program.

“For creators in the YouTube Partner Program, we expect these automated age-restrictions to have little to no impact on revenue as most of these videos also violate our advertiser-friendly guidelines and therefore have limited or no ads.”

In other words, videos likely to be age-restricted generally earn little or nothing from ads already, and that's unlikely to change.

Community Backlash

Every time YouTube makes a big change there are strong reactions, especially when AI is brought in to automate processes. Tuesday's announcement was no different.

On YouTube’s tweet announcing the changes, common responses included complaints like, “what’s the point in an age restriction on a NON kids app. That’s why we have YouTube kids. really young kids shouldn’t be on normal youtube. So we don’t realistically need an age restriction.”

“Please don’t implement this until you’ve worked out all the kinks,” one user pleaded. “I feel like this might actually hurt a lot of creators, who aren’t making stuff for kids, but get flagged as kids channels because of bright colors and stuff like that”

Worries about hiccups in the rollout were common among users, though it's possible that YouTube's Sept. 20 announcement that it would bring human moderators back to the platform was made partly to limit how much damage a new AI could do.

In a late-August transparency report, YouTube found that AI moderation was far more restrictive. When the moderation team was first downsized between April and June, YouTube's AI largely took over and removed around 11 million videos, double the normal rate.

YouTube did allow creators to appeal those decisions; about 300,000 removals were appealed, and roughly half of the appealed videos were reinstated. Facebook ran into a similar problem and will likewise bring back moderators to handle both restricted content and the upcoming election.

Other Reasons for the Changes

YouTube's decision to expand its use of AI not only falls in line with various laws on verifying users' ages and on what content is widely available to the public; it also likely has practical motivations.

The site gets over 400 hours of content uploaded every minute. Even with staggered schedules across time zones, YouTube would need to employ over 70,000 people just to check what's uploaded to the site.
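That headcount holds up as a back-of-envelope calculation. The sketch below assumes one-to-one real-time review and standard eight-hour shifts; these assumptions are ours for illustration, not YouTube's stated math:

```python
# Rough estimate of the staff needed to watch all uploads in real time.
UPLOAD_HOURS_PER_MINUTE = 400                            # reported upload rate
hours_arriving_per_hour = UPLOAD_HOURS_PER_MINUTE * 60   # 24,000 hours of video per hour

# Keeping pace in real time means 24,000 people watching at any given moment.
simultaneous_reviewers = hours_arriving_per_hour

SHIFT_HOURS = 8
shifts_per_day = 24 // SHIFT_HOURS                       # 3 shifts to cover a full day

total_staff = simultaneous_reviewers * shifts_per_day
print(total_staff)  # 72000, consistent with the "over 70,000" figure
```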

Outlets like The Verge have published a series on how YouTube, Google, and Facebook moderators are dealing with depression, anger, and post-traumatic stress disorder because of their jobs. These issues were particularly prevalent among people working what YouTube calls the "terror" or "violent extremism" queue.

One moderator told The Verge, “Every day you watch someone beheading someone, or someone shooting his girlfriend. After that, you feel like wow, this world is really crazy. This makes you feel ill. You’re feeling there is nothing worth living for. Why are we doing this to each other?”

That same individual noted that since working there, he began to gain weight, lose hair, have a short temper, and experience general signs of anxiety.

On top of these claims, YouTube is also facing a lawsuit filed in a California court Monday by a former content moderator.

The complaint states that Jane Doe, “has trouble sleeping and when she does sleep, she has horrific nightmares. She often lays awake at night trying to go to sleep, replaying videos that she has seen in her mind.

“She cannot be in crowded places, including concerts and events, because she fears mass shootings. She has severe and debilitating panic attacks,” it continued. “She has lost many friends because of her anxiety around people. She has trouble interacting and being around kids and is now scared to have children.”

These issues allegedly weren't limited to people working the "terror" queue; they extended to anyone training to become a moderator.

“For example, during training, Plaintiff witnessed a video of a smashed open skull with people eating from it; a woman who was kidnapped and beheaded by a cartel; a person’s head being run over by a tank; beastiality; suicides; self-harm; children being rapped [sic]; births and abortions,” the complaint alleges.

“As the example was being presented, Content Moderators were told that they could step out of the room. But Content Moderators were concerned that leaving the room would mean they might lose their job because at the end of the training new Content Moderators were required to pass a test applying the Community Guidelines to the content.”

During their three-week training, moderators allegedly receive little resilience training and few wellness resources.

These kinds of lawsuits aren’t unheard of. Facebook faced a similar suit in 2018, where a woman claimed that during her time as a moderator she developed PTSD as a result of “constant and unmitigated exposure to highly toxic and extremely disturbing images at the workplace.”

That case hasn't yet been decided in court. Currently, Facebook and the plaintiff have agreed to settle for $52 million, pending approval from the court. The settlement would apply only to U.S. moderators.

See what others are saying: (CNET) (The Verge) (Vice)


Chinese State Media Calls TikTok-Oracle Deal “Reasonable” as Trump Signals Approval


  • On Friday, the United States Commerce Department issued an order that would ban U.S. downloads of TikTok and WeChat starting Sunday night.
  • The order for TikTok was delayed for one week on Saturday after President Donald Trump gave his preliminary approval on a deal between TikTok and the software company Oracle.
  • A federal judge also issued a temporary injunction Sunday against the WeChat ban, which would have largely destroyed the app’s functionality.
  • Oracle and Walmart have since released more details of the deal, including that TikTok Global will likely pay $5 billion in U.S. taxes. This does not appear to be the same as the commission Trump has suggested the U.S. would receive from the deal.
  • On Monday, Chinese state media called the deal "unfair" to ByteDance, TikTok's parent company. However, it also described it as "reasonable," suggesting the Chinese government may approve the deal.

U.S. and China Signal Support for Deal

What began as a tumultuous weekend for TikTok ended with both the U.S. and Chinese governments potentially signaling approval of its deal with Oracle. 

Last week, TikTok’s parent company, ByteDance, struck a deal with Oracle to avoid a U.S. ban. On Monday, Chinese state media called the deal “more reasonable to ByteDance,” and said it’s less costly than a shutdown.

“The plan shows that ByteDance’s moves to defend its legitimate rights have, to some extent, worked,” it added.

While not officially confirmed, this seems to suggest that the Chinese government may approve the deal. 

It also came on the heels of Saturday, when President Donald Trump, after suggesting unhappiness with the deal last week, said he had given his approval "in concept." He will still need to officially sign off on it before the deal is set into motion.

Because of that, the U.S. Commerce Department staved off a download ban that was set for Sunday, pushing it back a week to Sunday, Sept. 27.

Some Republicans, such as Senator Marco Rubio (R-Fla.), have still expressed concern because ByteDance won't be handing over its secretive algorithm as part of the deal.

What’s in the Deal?

On Saturday, Oracle released more details of its deal with TikTok. Under it, Oracle and Walmart would take a combined 20% stake in TikTok Global.

Still, there's been much back and forth over how much control ByteDance will have under the agreement. For his part, Trump has claimed that TikTok Global will "be a brand new company… It will have nothing to do with China."

However, ByteDance has maintained that it will retain 80% of the stake. The discrepancy seems to arise because 40% of ByteDance is owned by U.S. venture capital firms, so Trump could technically claim that TikTok Global will be majority-owned by U.S. money, as the quick arithmetic below shows.
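Here is that back-of-envelope ownership math. It treats U.S. venture capital firms' share of ByteDance as passing through to TikTok Global, which is an assumption of this illustration rather than an official breakdown:

```python
# Nominal U.S. ownership of TikTok Global under the reported structure.
oracle_walmart_stake = 0.20        # direct U.S. stake in TikTok Global
bytedance_stake = 0.80             # ByteDance's claimed retained stake
us_vc_share_of_bytedance = 0.40    # portion of ByteDance owned by U.S. VC firms

# Indirect U.S. share of TikTok Global held through ByteDance.
indirect_us_stake = bytedance_stake * us_vc_share_of_bytedance  # 0.32

total_us_ownership = oracle_walmart_stake + indirect_us_stake
print(f"{total_us_ownership:.0%}")  # 52%, a nominal U.S. majority
```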

Trump doubled down Monday and said that he would not approve the deal if ByteDance retained ownership. He added that the Chinese-owned company will “have nothing to do with it, and if they do, we just won’t make the deal.”

Later, Oracle announced that ByteDance will not have any stake in TikTok Global, though this statement heavily conflicts with what is being reported in China.

“Upon creation of TikTok Global, Oracle/Walmart will make their investment and the TikTok Global shares will be distributed to their owners, Americans will be the majority and ByteDance will have no ownership in TikTok Global,” the company said.

According to Walmart and Oracle, if this deal goes through, TikTok Global will pay $5 billion in new tax dollars to the U.S. Treasury over the next few years. As both companies noted, this is just a projection of future corporate taxes, and that estimate could change.

The waters around that $5 billion figure were later muddied when Trump claimed that TikTok Global would be donating "$5 billion into a fund for education so we can educate people as to [the] real history of our country — the real history, not the fake history."

To be clear, Trump is referring to his plans to establish a “patriotic education” commission.

On Sunday, ByteDance said in a statement that this was the first it had heard about a $5 billion education fund.

In fact, TikTok Global never promised to start an education fund. Instead, it promised to create an “educational initiative to develop and deliver an AI-driven online video curriculum to teach children from inner cities to the suburbs a variety of courses from basic reading and math to science, history and computer engineering.” 

That initiative doesn't seem to have anything to do with the $5 billion tax figure. Since he began pursuing a ban, Trump has vowed that the U.S. will receive some form of commission from a deal with TikTok. As far as is known, the $5 billion figure is not that commission either.

As previously reported, this deal will allow Oracle to host TikTok’s user data on its cloud service and review TikTok’s code for security. According to Treasury Secretary Steven Mnuchin, it would also shift TikTok’s global headquarters from China to the U.S.

On top of that, TikTok’s board members would reportedly have to be approved by the U.S. government, with one being an expert in data security. That person would also hold a top-secret security clearance.

Commerce Department Announces Download Ban

Friday seemed like the beginning of the end for TikTok. That morning, the Commerce Department issued an order that would ban U.S. downloads of not only TikTok but also WeChat starting Sunday night.

Both bans were a result of concerns the Trump administration has that ByteDance and WeChat’s parent company, Tencent, are either already giving or could give U.S. user data to the Chinese government.

The Trump administration has repeatedly said that both apps pose a national security threat.

TikTok and ByteDance have consistently denied these claims, saying that U.S. user data is stored domestically with a backup in Singapore. WeChat, for its part, has also made similar statements.

The download ban was announced in response to two Aug. 6 executive orders from Trump. Those orders ban any U.S.-based transactions with TikTok and WeChat starting on Sept. 20, which is why the Commerce Department set the deadline for this past Sunday.

While this ban would have been much more restrictive for WeChat, since a large part of its functionality relies on in-app transactions, for TikTok it would only affect new downloads and updates to the app.

“So if that were to continue over a long period of time, there might be a gradual degradation of services, but the basic TikTok will stay intact until Nov. 12,” Commerce Secretary Wilbur Ross told Fox Business on Friday.

“If there’s not a deal by Nov. 12, under the provisions of the old order, then TikTok would also be, for all practical purposes, shut down.” 

What Happens on Nov. 12?

Ross is referring to another executive order, this one signed on Aug. 14. Notably, it gives ByteDance 90 days to divest from its American assets and any data that TikTok had gathered in the U.S. As Ross pointed out, that requirement could be satisfied if a deal is reached before the deadline.

If that doesn’t happen, the TikTok app could begin to see lags, lack of functionality, and sporadic outages.

U.S. approval isn't the only hurdle, however. One of the big questions that loomed after Oracle and ByteDance confirmed their deal last week was whether China would also need to approve it. ByteDance later confirmed that it will need the Chinese government's approval, even though the deal doesn't involve a technology transfer.

Downloads Soar and TikTok Sues

On Friday, downloads for both apps soared. TikTok was downloaded nearly a quarter of a million times that day, up 12% from the previous day. WeChat was downloaded 10,000 times, up 150%.

That same Friday, TikTok itself criticized the Commerce Department order, saying the company had already committed to "unprecedented levels of additional transparency."

TikTok added that the order “threatens to deprive the American people and small businesses across the US of a significant platform for both a voice and livelihoods.”

Later Friday, TikTok sued the Trump Administration to stop the download ban. 

On Sunday, a federal judge also halted the download ban for WeChat with a preliminary injunction. The injunction additionally blocks the Commerce Department’s attempt to bar transactions on the app.  

The Commerce Department responded by saying that it’s preparing for a long legal battle.

TikTokers: “Scared, angry, and confused”

“I’ve mostly just been feeling scared, angry, and confused,” TikToker Isabella Avila, known online as onlyjayus, told Rogue Rocket on Monday. “Those are just the main things.” 

Avila has amassed 8.7 million followers on TikTok in a relatively short amount of time. She's also gained about half a million followers each on YouTube and Instagram.

Avila said that a couple of months ago she thought a potential ban was all just talk, but as the situation progressed, she became more worried.

While she said that she personally thought her career could survive a TikTok ban (thanks in part to a Netflix podcast deal), she added, “The people in-between a 100,000 to a million [followers], they have a platform right now, and if TikTok’s were to be gone, their platform’s pretty much gone if they haven’t built an audience on anything else. 

“This is where we go to express ourselves,” she said. “This is where we go to make videos. I don’t know, TikTok gave everybody a chance to kind of get famous and have a following. That’s what people liked about it. YouTube, it’s really hard to get followers and subscribers. TikTok was a lot easier.” 

Avila also expressed that a ban wouldn’t just be detrimental to creators. 

“I feel like my generation needed an app,” Avila said. “There was Instagram and Twitter, but it was kind of like for the millennials. Gen Z didn’t really have an app, and TikTok kind of fit that spot, so if TikTok’s gone, I don’t know, I feel like Gen Z isn’t really going to have a place.” 

Avila now says she is largely hopeful that TikTok will not be banned in the U.S.

See what others are saying: (The Washington Post) (NBC News) (Axios)
