
Industry

YouTube Says It Will Use AI to Age-Restrict Content


  • YouTube announced Tuesday that it would expand its use of machine learning to handle age-restricting content.
  • The decision has been controversial, especially after news that other AI systems employed by the company took down videos at nearly double the normal rate.
  • The move likely stems both from legal obligations in some parts of the world and from practical concerns about the sheer volume of content uploaded to the site.
  • It might also help with moderator burnout, since the platform is currently understaffed and struggles with extremely high turnover.
  • In fact, the platform still faces a lawsuit from a moderator claiming the job gave her Post Traumatic Stress Disorder and that the company offered few resources to cope with the content moderators are required to watch.

AI Age Restrictions

YouTube announced Tuesday that it will use AI and machine learning to automatically apply age restriction to videos.

In a recent blog post, the platform wrote, “our Trust & Safety team applies age-restrictions when, in the course of reviewing content, they encounter a video that isn’t appropriate for viewers under 18.”

“Going forward, we will build on our approach of using machine learning to detect content for review, by developing and adapting our technology to help us automatically apply age-restrictions.”

Flagged videos would effectively be blocked from being viewed by anyone who isn’t signed into an account or whose account indicates they are under 18. YouTube said these changes are a continuation of its efforts to make the platform a safer place for families. It initially rolled out YouTube Kids as a dedicated platform for those under 13, and now it wants to sanitize the platform site-wide. Notably, though, it doesn’t plan to turn the entire platform into a new YouTube Kids.

It’s also no coincidence that this move helps YouTube fall in line with regulations around the world. In Europe, in addition to the AI age restrictions, users may face extra verification steps if YouTube can’t confirm their age, such as providing a government ID or credit card to prove they are over 18.

YouTube did say that if a video is age-restricted, there will be an appeals process that gets the video in front of an actual person for review.

On that note, just days before announcing that it would implement AI to age-restrict, YouTube also said it would be expanding its moderation team after it had largely been on hiatus because of the pandemic.

It’s hard to say how much these changes will actually affect creators or how much money they can make from the platform. The only assurances YouTube gave were to creators who are part of the YouTube Partner Program.

“For creators in the YouTube Partner Program, we expect these automated age-restrictions to have little to no impact on revenue as most of these videos also violate our advertiser-friendly guidelines and therefore have limited or no ads.”

In other words, most of these videos already earn their creators in the YouTube Partner Program little or nothing from ads, and that’s unlikely to change.

Community Backlash

Every time YouTube makes a big change there are a lot of reactions, especially if it involves AI to automatically handle processes. Tuesday’s announcement was no different.

On YouTube’s tweet announcing the changes, common responses included complaints like, “what’s the point in an age restriction on a NON kids app. That’s why we have YouTube kids. really young kids shouldn’t be on normal youtube. So we don’t realistically need an age restriction.”

“Please don’t implement this until you’ve worked out all the kinks,” one user pleaded. “I feel like this might actually hurt a lot of creators, who aren’t making stuff for kids, but get flagged as kids channels because of bright colors and stuff like that”

Worries about hiccups in the rollout of the new system were common among users. It’s possible that YouTube’s Sept. 20 announcement that it would bring back human moderators was made to help offset how much damage a new AI could do.

In a late-August transparency report, YouTube found that AI moderation was far more restrictive. When the moderation staff was first downsized between April and June, YouTube’s AI largely took over and removed around 11 million videos, double the normal rate.

YouTube did allow creators to appeal those decisions; about 300,000 videos were appealed, and roughly half were reinstated. Facebook had a similar problem and will likewise bring back moderators to handle both restrictive content and the upcoming election.

Other Reasons for the Changes

YouTube’s decision to expand its use of AI not only falls in line with various laws regarding age verification and what content is widely available to the public, but was also likely made for practical reasons.

The site gets over 400 hours of content uploaded every minute. Even accounting for different time zones and staggered work schedules, YouTube would need to employ over 70,000 people just to watch everything uploaded to the site.
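That staffing figure checks out as a back-of-envelope calculation. The sketch below assumes reviewers watch uploads in real time on standard eight-hour shifts; both are illustrative assumptions, not numbers from YouTube.

```python
# Back-of-envelope check of the moderation staffing claim.
# Assumptions (illustrative, not from YouTube): uploads are watched
# in real time, and each moderator works an 8-hour shift per day.

UPLOAD_HOURS_PER_MINUTE = 400   # figure cited in the article
SHIFT_HOURS = 8                 # assumed workday length

# 400 hours/minute * 60 minutes * 24 hours = 576,000 hours per day
hours_uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24

# 576,000 viewing-hours / 8-hour shifts = 72,000 moderators
moderators_needed = hours_uploaded_per_day / SHIFT_HOURS

print(f"{hours_uploaded_per_day:,} hours uploaded per day")
print(f"~{moderators_needed:,.0f} moderators needed to watch it all")
```

The result, roughly 72,000 people, lines up with the article’s “over 70,000” estimate.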

Outlets like The Verge have done a series about how YouTube, Google, and Facebook moderators are dealing with depression, anger, and Post Traumatic Stress Disorder because of their job. These issues were particularly prevalent among people working in what YouTube calls the “terror” or “violent extremism” queue.

One moderator told The Verge, “Every day you watch someone beheading someone, or someone shooting his girlfriend. After that, you feel like wow, this world is really crazy. This makes you feel ill. You’re feeling there is nothing worth living for. Why are we doing this to each other?”

That same individual noted that since working there, he began to gain weight, lose hair, have a short temper, and experience general signs of anxiety.

On top of these claims, YouTube is also facing a lawsuit filed in a California court Monday by a former content moderator at YouTube.

The complaint states that Jane Doe, “has trouble sleeping and when she does sleep, she has horrific nightmares. She often lays awake at night trying to go to sleep, replaying videos that she has seen in her mind.

“She cannot be in crowded places, including concerts and events, because she fears mass shootings. She has severe and debilitating panic attacks,” it continued. “She has lost many friends because of her anxiety around people. She has trouble interacting and being around kids and is now scared to have children.”

These issues weren’t just for people working on the “terror” queue, but anyone training to become a moderator.

“For example, during training, Plaintiff witnessed a video of a smashed open skull with people eating from it; a woman who was kidnapped and beheaded by a cartel; a person’s head being run over by a tank; bestiality; suicides; self-harm; children being rapped [sic]; births and abortions,” the complaint alleges.

“As the example was being presented, Content Moderators were told that they could step out of the room. But Content Moderators were concerned that leaving the room would mean they might lose their job because at the end of the training new Content Moderators were required to pass a test applying the Community Guidelines to the content.”

During their three-week training, moderators allegedly don’t receive much resilience training or wellness resources.

These kinds of lawsuits aren’t unheard of. Facebook faced a similar suit in 2018, where a woman claimed that during her time as a moderator she developed PTSD as a result of “constant and unmitigated exposure to highly toxic and extremely disturbing images at the workplace.”

That case hasn’t yet been decided in court. Currently, Facebook and the plaintiff agreed to settle for $52 million, pending approval from the court.

The settlement would only apply to U.S. moderators.

See what others are saying: (CNET) (The Verge) (Vice)


Hackers Hit Twitch Again, This Time Replacing Backgrounds With Image of Jeff Bezos


The hack appears to be a form of trolling, though it’s possible that the infiltrators were able to uncover a security flaw while reviewing Twitch’s newly-leaked source code.


Bezos Prank

Hackers targeted Twitch for a second time this week, but rather than leaking sensitive information, the infiltrators chose to deface the platform on Friday by swapping multiple background images with a photo of former Amazon CEO Jeff Bezos. 

According to those who saw the replaced images firsthand, the hack appears to have mostly — and possibly only — affected game directory headers. Though the incident appears to be nothing more than a surface-level prank, as Amazon owns Twitch, it could potentially signal greater security flaws. 

For example, it’s possible the hackers could have used leaked internal security data from earlier this week to discover a network vulnerability and sneak into the platform. 

The latest jab at the platform came after Twitch assured its users it had seen “no indication” that their login credentials were stolen during the first hack. Still, concerns remain that others may now be able to spot cracks in Twitch’s security systems.

It’s also possible the Bezos hack resulted from what’s known as “cache poisoning,” which, in this case, would refer to a more limited form of hacking that allowed the infiltrators to manipulate similar images all at once. If true, the hackers likely would not have been able to access Twitch’s back end. 

The swapped photos lasted only several hours before being restored to their previous state.

First Twitch Hack 

Despite suspicions and concerns, it’s unclear whether the Bezos hack is related to the major leak of Twitch’s internal data that was posted to 4chan on Wednesday.

That leak exposed Twitch’s full source code — including its security tools — as well as data on how much Twitch has individually paid every single streamer on the platform since August 2019. 

It also revealed Amazon’s at least partially developed plans for a cloud-based gaming library, codenamed Vapor, which would directly compete with the massively popular library known as Steam.

Even though Twitch has said users’ login credentials appear to be secure, it announced Thursday that it has reset all stream keys “out of an abundance of caution.” Users are still being urged to change their passwords and update or implement two-factor authentication if they haven’t already.

See what others are saying: (The Verge) (Forbes) (CNET)



Twitch Blames Server Configuration Error for Hack, Says There’s No Indication That Login Info Leaked


The platform also said full credit card numbers were not obtained by hackers, as that data is stored externally.


Login and Credit Card Info Secure

Twitch released a security update late Wednesday claiming it had seen “no indication” that users’ login credentials were stolen by hackers who leaked the entire platform’s source code earlier in the day.

“Full credit card numbers are not stored by Twitch, so full credit card numbers were not exposed,” the company added in its announcement.

The leaked data, uploaded to 4chan, includes code related to the platform’s security tools, as well as exact totals of how much it has individually paid every single streamer on the platform since August 2019. 

Early Thursday, Twitch also announced that it has now reset all stream keys “out of an abundance of caution.” Streamers can find their new keys on a dashboard set up by the platform, though depending on their broadcasting software, they may need to manually update it with the new key before streaming again.

As far as what led to the hackers being able to steal the data, Twitch blamed an error in a “server configuration change that was subsequently accessed by a malicious third party,” confirming that the leak was not the work of a current employee who used internal tools. 

Will Users Go to Other Streaming Platforms?

While no major creators have said they are leaving Twitch for a different streaming platform because of the hack, many small users have either announced their intention to leave Twitch or have said they are considering such a move. 

It’s unclear if the leak, coupled with other ongoing Twitch controversies, will ultimately lead to a significant user exodus, but there’s little doubt that other platforms are ready and willing to leverage this hack in the hopes of attracting new users. 

At least one big-name streamer has already done as much, even if largely presenting the idea as a playful jab rather than a serious pitch.

“Pretty crazy day today,” YouTube’s Valkyrae said on a stream Wednesday while referencing a tweet she wrote earlier that day.

“YouTube is looking to sign more streamers,” that tweet reads. 

“I mean, they are! … No shade to Twitch… Ah! Well…” Valkyrae said on stream before interrupting herself to note that she was not being paid by YouTube to make her comments.

See what others are saying: (Engadget) (BBC) (Gamerant)



The Entirety of Twitch Has Been Leaked Online, Including How Much Top Creators Earn


The data dump, which could be useful for some of Twitch’s biggest competitors, may represent one of the most encompassing platform leaks ever.


Massive Collection of Data Leaked 

Twitch’s full source code was uploaded to 4chan Wednesday morning after it was obtained by hackers.

Among the 125 GB of stolen data is information revealing that Amazon, which owns Twitch, has at least partially developed plans for a cloud-based gaming library. That library, codenamed Vapor, would directly compete with the massively popular library known as Steam.

With Amazon being the all-encompassing giant that it is, it’s not too surprising that it would try to develop a Steam rival, but it’s eye-catching news nonetheless considering how much the release of Vapor could shake up the market.

The leaked data also showcased exactly how much Twitch has paid its creators, including the platform’s top accounts, such as the group CriticalRole, as well as streamers xQcOW, Tfue, Ludwig, Moistcr1tikal, Shroud, HasanAbi, Sykkuno, Pokimane, Ninja, and Amouranth.

These figures only represent payouts directly from Twitch. Each creator mentioned has made additional money through donations, sponsorships, and other off-platform ventures. Still, the information could be massively useful for competitors like YouTube Gaming, which is shelling out big bucks to ink deals with creators.

Data related to Twitch’s internal security tools, as well as code related to software development kits and its use of Amazon Web Services, was also released with the hack. In fact, so much data was made public that it could constitute one of the most encompassing platform dumps ever.

Creators Respond

Streamer CDawgVA, who has just under 500,000 subscribers on Twitch, tweeted about the severity of the data breach on Wednesday.

“I feel like calling what Twitch just experienced as “leak” is similar to me shitting myself in public and trying to call it a minor inconvenience,” he wrote. “It really doesn’t do the situation justice.”

Despite that, many of the platform’s top streamers have been quite casual about the situation.

“Hey, @twitch EXPLAIN?” xQc tweeted. Amouranth replied with a laughing emoji and the text, “This is our version of the Pandora papers.”

Meanwhile, Pokimane tweeted, “at least people can’t over-exaggerate me ‘making millions a month off my viewers’ anymore.”

Others, such as Moistcr1tikal and HasanAbi, argued that their Twitch earnings are already public information, given that they can be easily estimated with simple calculations.

Could More Data Come Out?

This may not be the end of the leak, which was labeled as “part one.” If so, there’s no reason to think the leakers wouldn’t publish a part two.

The leakers don’t seem to be too fond of Twitch and said they hope this data dump “foster[s] more disruption and competition in the online video streaming space.”

They added that the platform is a “disgusting toxic cesspool” and included the hashtag #DoBetterTwitch, which has been used in recent weeks to drive boycotts against the platform as smaller creators protest the ease at which trolls can use bots to spam their chats with racist, sexist, and homophobic messages.

Still, this leak does appear to lack one notable set of data: password and address information of Twitch users.

That doesn’t necessarily mean the leakers don’t have it. It could just mean they are only currently interested in sharing Twitch’s big secrets. 

Regardless, Twitch users and creators are being strongly urged to change their passwords as soon as possible and enable two-factor authentication.

See what others are saying: (The Verge) (Video Games Chronicle) (Kotaku)
