- YouTube announced Tuesday that it would expand its use of machine learning to automatically age-restrict content.
- The decision has been controversial, especially after news that AI systems the company relied on earlier this year took down videos at nearly double the normal rate.
- The move likely stems from both legal obligations in some parts of the world and practical concerns about the sheer volume of content uploaded to the site.
- It might also ease moderator burnout, since the platform is currently understaffed and struggles with extremely high turnover.
- In fact, the platform still faces a lawsuit from a former moderator who claims the job gave her post-traumatic stress disorder and that the company offered few resources to cope with the content she was required to watch.
YouTube announced Tuesday that it will use AI and machine learning to automatically apply age restrictions to videos.
In a recent blog post, the platform wrote, “our Trust & Safety team applies age-restrictions when, in the course of reviewing content, they encounter a video that isn’t appropriate for viewers under 18.”
“Going forward, we will build on our approach of using machine learning to detect content for review, by developing and adapting our technology to help us automatically apply age-restrictions.”
Flagged videos would effectively be blocked from anyone who isn’t signed into an account or whose account indicates they are under 18. YouTube stated these changes were a continuation of its efforts to make YouTube a safer place for families. Initially, it rolled out YouTube Kids as a dedicated platform for those under 13, and now it wants to clean up the platform site-wide. Notably, though, it doesn’t plan to turn the entire platform into a new YouTube Kids.
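In effect, the restriction works like a simple gate. Here is a minimal sketch of the rule as described, with illustrative names rather than YouTube’s actual code:

```python
# Minimal sketch of the age-gate rule described above; the function name and
# structure are illustrative assumptions, not YouTube's implementation.
from typing import Optional

def can_view_restricted_video(signed_in: bool, age: Optional[int]) -> bool:
    """A flagged video is viewable only by signed-in users known to be 18+."""
    return signed_in and age is not None and age >= 18

assert not can_view_restricted_video(signed_in=False, age=None)  # logged out
assert not can_view_restricted_video(signed_in=True, age=16)     # under 18
assert can_view_restricted_video(signed_in=True, age=25)         # allowed
```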
It’s also no coincidence that this move helps YouTube better fall in line with regulations across the world. In Europe, beyond the AI-applied age restrictions, users may face additional steps if YouTube can’t confirm their age, such as providing a government ID or credit card to prove they are over 18.
YouTube did say that if a video is age-restricted, there will be an appeals process that puts the video in front of an actual person for review.
On that note, just days before announcing that it would use AI to age-restrict videos, YouTube also said it would expand its moderation team, which had largely been scaled back because of the pandemic.
It’s hard to say how much these changes will actually affect creators or how much money they can make from the platform. The only assurances YouTube gave were to creators who are part of the YouTube Partner Program.
“For creators in the YouTube Partner Program, we expect these automated age-restrictions to have little to no impact on revenue as most of these videos also violate our advertiser-friendly guidelines and therefore have limited or no ads.”
In other words, most of the videos likely to be age-restricted already earn little or nothing from ads, and that’s unlikely to change.
Every time YouTube makes a big change there are a lot of reactions, especially when it hands a process over to AI. Tuesday’s announcement was no different.
On YouTube’s tweet announcing the changes, common responses included complaints like, “what’s the point in an age restriction on a NON kids app. That’s why we have YouTube kids. really young kids shouldn’t be on normal youtube. So we don’t realistically need an age restriction.”
“Please don’t implement this until you’ve worked out all the kinks,” one user pleaded. “I feel like this might actually hurt a lot of creators, who aren’t making stuff for kids, but get flagged as kids channels because of bright colors and stuff like that.”
Worries about hiccups in the rollout of the new system were common among users, though it’s possible that YouTube’s Sept. 20 announcement that it would bring human moderators back to the platform was made to help limit how much damage a new AI could do.
In a late-August transparency report, YouTube found that AI moderation was far more restrictive. When the moderation team was first downsized between April and June, YouTube’s AI largely took over and removed around 11 million videos, double the normal rate.
YouTube did allow creators to appeal those decisions; about 300,000 videos were appealed, and roughly half of those were reinstated. Facebook had a similar problem and will likewise bring back moderators to handle both restrictive content and the upcoming election.
Other Reasons for the Changes
YouTube’s decision to expand its use of AI not only falls in line with various laws regarding age verification and what content is widely available to the public; it also likely has practical motivations.
The site gets over 400 hours of content uploaded every minute. Even accounting for time zones and staggered work schedules, YouTube would need to employ over 70,000 people just to watch everything uploaded to the site.
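For a rough sense of where a figure like that comes from, here is a back-of-the-envelope sketch (the upload rate is from the paragraph above; the eight-hour reviewing shift is our assumption):

```python
# Back-of-the-envelope check of the "over 70,000 people" figure.
# Assumption (ours, not YouTube's): a moderator can watch at most
# 8 hours of footage per 8-hour shift.
hours_uploaded_per_day = 400 * 60 * 24   # 400 hrs/min -> 576,000 hours daily
hours_watched_per_moderator = 8          # one full shift of viewing, best case

moderators_needed = hours_uploaded_per_day / hours_watched_per_moderator
print(f"{moderators_needed:,.0f} moderators")  # 72,000 moderators
```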
Outlets like The Verge have published a series of reports about how YouTube, Google, and Facebook moderators are dealing with depression, anger, and post-traumatic stress disorder (PTSD) because of their jobs. These issues were particularly prevalent among people working in what YouTube calls the “terror” or “violent extremism” queue.
One moderator told The Verge, “Every day you watch someone beheading someone, or someone shooting his girlfriend. After that, you feel like wow, this world is really crazy. This makes you feel ill. You’re feeling there is nothing worth living for. Why are we doing this to each other?”
That same individual noted that since taking the job, he has gained weight, lost hair, developed a short temper, and experienced general signs of anxiety.
On top of these claims, YouTube is also facing a lawsuit filed in a California court Monday by one of its former content moderators.
The complaint states that Jane Doe “has trouble sleeping and when she does sleep, she has horrific nightmares. She often lays awake at night trying to go to sleep, replaying videos that she has seen in her mind.”
“She cannot be in crowded places, including concerts and events, because she fears mass shootings. She has severe and debilitating panic attacks,” it continued. “She has lost many friends because of her anxiety around people. She has trouble interacting and being around kids and is now scared to have children.”
These issues allegedly affected not just people working the “terror” queue, but anyone training to become a moderator.
“For example, during training, Plaintiff witnessed a video of a smashed open skull with people eating from it; a woman who was kidnapped and beheaded by a cartel; a person’s head being run over by a tank; beastiality; suicides; self-harm; children being rapped [sic]; births and abortions,” the complaint alleges.
“As the example was being presented, Content Moderators were told that they could step out of the room. But Content Moderators were concerned that leaving the room would mean they might lose their job because at the end of the training new Content Moderators were required to pass a test applying the Community Guidelines to the content.”
During their three-week training, moderators allegedly receive little resilience training and few wellness resources.
These kinds of lawsuits aren’t unheard of. Facebook faced a similar suit in 2018, where a woman claimed that during her time as a moderator she developed PTSD as a result of “constant and unmitigated exposure to highly toxic and extremely disturbing images at the workplace.”
That case hasn’t been decided in court. Instead, Facebook and the plaintiffs agreed to settle for $52 million, pending approval from the court. The settlement would only apply to U.S. moderators.
Schools Across the U.S. Cancel Classes Friday Over Unverified TikTok Threat
Officials in multiple states said they haven’t found any credible threats but are taking additional precautions out of an abundance of caution.
Schools in no fewer than 10 states either canceled classes or increased their police presence Friday after a series of TikToks warned of imminent shooting and bomb threats.
Despite that, officials said they found little evidence to suggest the threats are credible. It’s possible no real threat was ever made, as it’s unclear whether the supposed threats originated on TikTok, another social media platform, or elsewhere.
“We handle even rumored threats with utmost seriousness, which is why we’re working with law enforcement to look into warnings about potential violence at schools even though we have not found evidence of such threats originating or spreading via TikTok,” TikTok’s Communications team tweeted Thursday afternoon.
Still, given the uptick in school shootings in the U.S. in recent years, many school districts across the country decided to respond to the rumors. According to The Verge, some districts in California, Minnesota, Missouri, and Texas shut down Friday.
“Based on law enforcement interviews, Little Falls Community Schools was specifically identified in a TikTok post related to this threat,” one school district in Minnesota said in a letter Thursday. “In conversations with local law enforcement, the origins of this threat remain unknown. Therefore, school throughout the district is canceled tomorrow, Friday, December 17.”
In Gilroy, California, one high school that closed its doors Friday said it would push final exams originally scheduled for that day to January.
According to the Associated Press, several other districts in Arizona, Connecticut, Illinois, Montana, New York, and Pennsylvania stationed more police officers at their schools Friday.
Viral Misinformation or Legitimate Warnings?
As The Verge notes, “The reports of threats on TikTok may be self-perpetuating.”
For example, many of the videos online may have been created in response to initial warnings as more people hopped onto the trend. Amid school cancellations, videos have continued to sprout up — many awash with both rumors and factual information.
“I’m scared off my ass, what do I do???” one TikTok user said in a now-deleted video, according to People.
“The post is vague and not directed at a specific school, and is circulating around school districts across the country,” Chicago Public Schools said in a letter, though it did not identify any specific post. “Please do not re-share any suspicious or concerning posts on social media.”
According to Dr. Amy Klinger, the director of programs for the nonprofit Educator’s School Safety Network, “This is not [a] 2021 phenomenon.”
Instead, she told The Today Show that her network has been tracking school shooting threats since 2013, and she noted that in recent years, they’ve become more prominent on social media.
“It’s not just somebody in a classroom of 15 people hearing someone make a threat,” she said. “It’s 15,000 people on social media, because it gets passed around and it becomes larger and larger and larger.”
Jake Paul Says He “Can’t Get Cancelled” as a Boxer
The controversial YouTuber opened up about what it has been like to go from online fame to professional boxing.
The New Yorker Profiles Jake Paul
YouTuber and boxer Jake Paul talked about his career switch, reputation, and cancel culture in a profile published Monday in The New Yorker.
While Paul rose to fame as the Internet’s troublemaker, he now spends most of his time in the ring. He told the outlet that one difference between YouTube and boxing is that his often controversial reputation lends itself better to his new career.
“One thing that is great about being a fighter is, like, you can’t get cancelled,” Paul said. The profile noted that the sport often rewards and even encourages some degree of bad behavior.
“I’m not a saint,” Paul later continued. “I’m also not a bad guy, but I can very easily play the role.”
Paul also said another difference between his time online and his time in boxing is the workload. While he says he trains hard, he confessed that there was something more challenging about making regular YouTube content.
“Being an influencer was almost harder than being a boxer,” he told The New Yorker. “You wake up in the morning and you’re, like, Damn, I have to create fifteen minutes of amazing content, and I have twelve hours of sunlight.”
Jake Paul Vs. Tommy Fury
The New Yorker profile came just after it was announced over the weekend that Paul will fight boxer Tommy Fury in an eight-round cruiserweight bout on Showtime in December.
“It’s time to kiss ur last name and ur family’s boxing legacy goodbye,” Paul tweeted. “DEC 18th I’m changing this wankers name to Tommy Fumbles and celebrating with Tom Brady.”
Both Paul and Fury are undefeated, according to ESPN. Like Paul, Fury has found fame outside of the sport. He has become a reality TV star in the U.K. after appearing on the hit show “Love Island.”
See what others are saying: (The New Yorker) (Dexerto) (ESPN)
Hackers Hit Twitch Again, This Time Replacing Backgrounds With Image of Jeff Bezos
The hack appears to be a form of trolling, though it’s possible that the infiltrators were able to uncover a security flaw while reviewing Twitch’s newly leaked source code.
Hackers targeted Twitch for a second time this week, but rather than leaking sensitive information, the infiltrators chose to deface the platform on Friday by swapping multiple background images with a photo of former Amazon CEO Jeff Bezos.
According to those who saw the replaced images firsthand, the hack appears to have mostly — and possibly only — affected game directory headers. Though the incident, seemingly a jab at Twitch owner Amazon, appears to be nothing more than a surface-level prank, it could potentially signal deeper security flaws.
For example, it’s possible the hackers could have used leaked internal security data from earlier this week to discover a network vulnerability and sneak into the platform.
The latest jab at the platform came after Twitch assured its users it has seen “no indication” that their login credentials were stolen during the first hack. Still, concerns remain about the potential for others to now spot cracks in Twitch’s security systems.
It’s also possible the Bezos hack resulted from what’s known as “cache poisoning,” which, in this case, would refer to a more limited form of hacking that allowed the infiltrators to manipulate similar images all at once. If true, the hackers likely would not have been able to access Twitch’s back end.
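To illustrate the idea in general terms (a simplified sketch of how shared caches behave, not Twitch’s actual infrastructure): a cache stores one copy of an asset under a key, such as its URL, and serves that copy to every visitor. If an attacker can get a manipulated response stored under that key, everyone sees the attacker’s version until the entry is purged, even though the origin servers were never breached:

```python
# Simplified sketch of why cache poisoning can swap an image for everyone at
# once. The cache, key, and URL here are hypothetical, not Twitch's systems.

cache = {}  # shared cache: a single stored response is served to all visitors

def fetch_from_origin(url: str) -> str:
    return f"legitimate image bytes for {url}"

def fetch(url: str) -> str:
    """Return the cached copy if present; otherwise pull from the origin."""
    if url not in cache:
        cache[url] = fetch_from_origin(url)  # normal path
    return cache[url]

# An attacker tricks the cache into storing a manipulated response under the
# same key that ordinary game-directory-header requests resolve to.
cache["/directory/header.png"] = "attacker-supplied Bezos image"

# Every subsequent visitor now receives the poisoned copy, even though the
# back end (origin) was never compromised.
print(fetch("/directory/header.png"))
```

If that is what happened, simply purging the poisoned entries would have restored the images, which would be consistent with the swap lasting only a few hours.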
The swapped photos lasted only several hours before being restored.
First Twitch Hack
Despite suspicions and concerns, it’s unclear whether the Bezos hack is related to the major leak of Twitch’s internal data that was posted to 4chan on Wednesday.
That leak exposed Twitch’s full source code — including its security tools — as well as data on how much Twitch has individually paid every single streamer on the platform since August 2019.
It also revealed Amazon’s at least partially developed plans for a cloud-based gaming library, codenamed Vapor, which would directly compete with the massively popular library known as Steam.
Even though Twitch has said its login credentials appear to be secure, it announced Thursday that it has reset all stream keys “out of an abundance of caution.” Users are still being urged to change their passwords and update or implement two-factor authentication if they haven’t already.