
Industry

YouTube Says It Will Use AI to Age-Restrict Content

  • YouTube announced Tuesday that it would expand its use of machine learning to automatically age-restrict content.
  • The decision has been controversial, especially after news that other AI systems employed by the company took down videos at nearly double the normal rate.
  • The decision likely stems from legal responsibilities in some parts of the world, as well as practical concerns about the sheer amount of content uploaded to the site.
  • It might also help with moderator burnout, since the platform is currently understaffed and struggles with extremely high turnover.
  • In fact, the platform still faces a lawsuit from a moderator claiming the job gave her Post Traumatic Stress Disorder and that the company offered few resources to cope with the content she was required to watch.

AI-Age Restrictions

YouTube announced Tuesday that it will use AI and machine learning to automatically apply age restrictions to videos.

In a recent blog post, the platform wrote, “our Trust & Safety team applies age-restrictions when, in the course of reviewing content, they encounter a video that isn’t appropriate for viewers under 18.”

“Going forward, we will build on our approach of using machine learning to detect content for review, by developing and adapting our technology to help us automatically apply age-restrictions.”

Flagged videos would effectively be blocked from anyone who isn’t signed into an account or whose account indicates they are under 18. YouTube stated these changes were a continuation of its efforts to make YouTube a safer place for families. It initially rolled out YouTube Kids as a dedicated platform for those under 13, and now it wants to clean up the platform site-wide. Notably, though, it doesn’t plan to turn the entire platform into a new YouTube Kids.

It’s also not a coincidence that this move helps YouTube better fall in line with regulations around the world. In Europe, in addition to the AI age restrictions, users may face further verification steps if YouTube can’t confirm their age, such as providing a government ID or credit card to prove they are over 18.

If a video is age-restricted, YouTube said there will be an appeals process that gets the video in front of an actual person to check it.

On that note, just days before announcing that it would implement AI to age-restrict, YouTube also said it would be expanding its moderation team after it had largely been on hiatus because of the pandemic.

It’s hard to say how much these changes will actually affect creators or how much money they can make from the platform. The only assurances YouTube gave were to creators who are part of the YouTube Partner Program.

“For creators in the YouTube Partner Program, we expect these automated age-restrictions to have little to no impact on revenue as most of these videos also violate our advertiser-friendly guidelines and therefore have limited or no ads.”

This means that most creators in the YouTube Partner Program don’t make much, or anything, from ads already, and that’s unlikely to change.

Community Backlash

Every time YouTube makes a big change there are a lot of reactions, especially when it involves using AI to automate processes. Tuesday’s announcement was no different.

On YouTube’s tweet announcing the changes, common responses included complaints like, “what’s the point in an age restriction on a NON kids app. That’s why we have YouTube kids. really young kids shouldn’t be on normal youtube. So we don’t realistically need an age restriction.”

“Please don’t implement this until you’ve worked out all the kinks,” one user pleaded. “I feel like this might actually hurt a lot of creators, who aren’t making stuff for kids, but get flagged as kids channels because of bright colors and stuff like that”

Concerns about hiccups in the rollout of the new system were common among users. It’s possible that YouTube’s Sept. 20 announcement that it would bring human moderators back to the platform was made to help balance out how much damage a new AI could do.

In a late-August transparency report, YouTube found that AI moderation was far more restrictive. When moderation teams were first downsized between April and June, YouTube’s AI largely took over and removed around 11 million videos, double the normal rate.

YouTube did allow creators to appeal those decisions; about 300,000 videos were appealed, and about half were reinstated. Facebook had a similar problem and will likewise bring back moderators to handle both restrictive content and the upcoming election.

Other Reasons for the Changes

YouTube’s decision to expand its use of AI not only falls in line with various laws regarding the verification of a user’s age and what content is widely available to the public, but also likely serves practical purposes.

The site gets over 400 hours of content uploaded every minute. Even with staggered schedules spread across time zones, YouTube would need to employ over 70,000 people just to watch everything uploaded to the site in real time.
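That 70,000 figure follows from simple arithmetic; here is a rough back-of-envelope sketch, assuming (as the article implies) that moderators watch uploads in real time and work standard 8-hour shifts:

```python
# Back-of-envelope staffing estimate based on the article's figures.
# Assumptions: one moderator reviews content in real time, 8-hour shifts.
upload_hours_per_minute = 400                          # stated upload rate
hours_uploaded_per_hour = upload_hours_per_minute * 60  # 24,000 hours of video per hour
shifts_per_day = 24 / 8                                 # three 8-hour shifts cover a day
moderators_needed = hours_uploaded_per_hour * shifts_per_day
print(int(moderators_needed))  # 72000, i.e. "over 70,000 people"
```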

Outlets like The Verge have done a series about how YouTube, Google, and Facebook moderators are dealing with depression, anger, and Post Traumatic Stress Disorder because of their job. These issues were particularly prevalent among people working in what YouTube calls the “terror” or “violent extremism” queue.

One moderator told The Verge, “Every day you watch someone beheading someone, or someone shooting his girlfriend. After that, you feel like wow, this world is really crazy. This makes you feel ill. You’re feeling there is nothing worth living for. Why are we doing this to each other?”

That same individual noted that since working there, he began to gain weight, lose hair, have a short temper, and experience general signs of anxiety.

On top of these claims, YouTube is also facing a lawsuit filed Monday in a California court by one of its former content moderators.

The complaint states that Jane Doe, “has trouble sleeping and when she does sleep, she has horrific nightmares. She often lays awake at night trying to go to sleep, replaying videos that she has seen in her mind.

“She cannot be in crowded places, including concerts and events, because she fears mass shootings. She has severe and debilitating panic attacks,” it continued. “She has lost many friends because of her anxiety around people. She has trouble interacting and being around kids and is now scared to have children.”

These issues weren’t limited to people working the “terror” queue; they allegedly affected anyone training to become a moderator.

“For example, during training, Plaintiff witnessed a video of a smashed open skull with people eating from it; a woman who was kidnapped and beheaded by a cartel; a person’s head being run over by a tank; beastiality; suicides; self-harm; children being rapped [sic]; births and abortions,” the complaint alleges.

“As the example was being presented, Content Moderators were told that they could step out of the room. But Content Moderators were concerned that leaving the room would mean they might lose their job because at the end of the training new Content Moderators were required to pass a test applying the Community Guidelines to the content.”

During their three-week training, moderators allegedly don’t receive much resilience training or wellness resources.

These kinds of lawsuits aren’t unheard of. Facebook faced a similar suit in 2018, where a woman claimed that during her time as a moderator she developed PTSD as a result of “constant and unmitigated exposure to highly toxic and extremely disturbing images at the workplace.”

That case hasn’t yet been decided in court; instead, Facebook and the plaintiff agreed to settle for $52 million, pending approval from the court.

The settlement would only apply to U.S. moderators.

See what others are saying: (CNET) (The Verge) (Vice)

Industry

Twitter CEO Jack Dorsey Says Trump Ban Was the “Right Decision” But Sets “Dangerous” Precedent

  • While defending Twitter’s decision to permanently ban President Donald Trump, CEO Jack Dorsey noted the “dangerous” precedent such a move set.
  • “Having to take these actions fragment the public conversation,” Dorsey said in a lengthy Twitter thread on Wednesday. “They divide us. They limit the potential for clarification, redemption, and learning.”
  • Dorsey’s message came the same day Twitter fully reinstated Rep. Lauren Boebert’s (R-Colo.) account, hours after locking it for violating Twitter rules. A Twitter spokesperson later described the lock as an “incorrect enforcement action.”

Dorsey Describes Trump Ban as a Double-Edged Sword

In a lengthy Twitter thread published Wednesday, CEO Jack Dorsey defended his platform’s decision to permanently ban President Donald Trump, while also noting the “dangerous” precedent such a unilateral move sets.

Twitter made the decision to ban Trump on Jan. 8, two days after pro-Trump insurrectionists stormed the U.S. Capitol complex in an assault that left multiple people dead.

“I do not celebrate or feel pride in our having to ban [Trump] from Twitter, or how we got here,” Dorsey said in the first of 13 tweets. 

Nonetheless, Dorsey described Trump’s ban as “the right decision for Twitter.”

“Offline harm as a result of online speech is demonstrably real, and what drives our policy and enforcement above all,” he added.

“That said, having to ban an account has real and significant ramifications,” Dorsey continued.

“[It] sets a precedent I feel is dangerous: the power an individual or corporation has over a part of the global public conversation.”

Dorsey described most bans as a failure of Twitter to “promote healthy conversation,” though he noted that exceptions to such a mindset also exist. Among other failures, Dorsey said extreme actions like a ban can “fragment public conversation,” divide people, and limit “clarification, redemption, and learning.”

Dorsey: Trump Bans Were Not Coordinated

Dorsey continued his thread by addressing claims and criticism that Trump’s ban on Twitter violated free speech.

“A company making a business decision to moderate itself is different from a government removing access, yet can feel much the same,” he said.

Indeed, multiple legal experts have stated that Trump’s ban on social media does not amount to First Amendment violations, as the First Amendment only addresses government censorship. 

“If folks do not agree with our rules and enforcement, they can simply go to another internet service,” Dorsey added. However, Dorsey noted that such a concept has been challenged over the past week. 

Trump has now been banned or suspended from a number of platforms, including Facebook, Instagram, and YouTube. On Wednesday, Snapchat announced plans to terminate Trump’s account in the “interest of public safety.” Previously, Snapchat had only suspended his account, but as of Jan. 20, it will be permanently banned. 

Addressing criticism of the swift bans handed down by these platforms in the wake of the Capitol attack, Dorsey said he doesn’t believe Trump’s bans on social media were coordinated.

“More likely: companies came to their own conclusions or were emboldened by the actions of others,” he said.

Twitter Reverses Course of Locking Rep. Lauren Boebert’s Account

Dorsey’s thread regarding the fragile nature of regulating users’ privileges on the platform seemed to play out earlier the same day.

On Wednesday, newly elected Rep. Lauren Boebert (R-Colo.) posted a screenshot to Instagram showing that her Twitter account had been locked for six days. The screenshot stated that she had violated Twitter’s rules and would be unable to tweet, retweet, or like until her account was unlocked. 

Hours later, Twitter reversed course and fully reinstated her account. 

“In this instance, our teams took the incorrect enforcement action. The Tweet in question is now labeled in accordance with our Civic Integrity Policy. The Tweet will not be required to be removed and the account will not be temporarily locked,” a spokesperson for the platform told Insider.

It is unknown what tweet caused the initial lock, as Twitter refused to say. 

The latest tweet from Boebert’s account to be tagged with a fact-check warning is from Sunday. In that tweet, she baselessly and falsely accuses the DNC of rigging the 2020 election, a claim that largely inspired the Capitol attack. 

See what others are saying: (Business Insider) (CNN) (Associated Press)

Industry

Uber and Lyft Drivers Sue To Overturn California’s Prop 22

  • A group of Uber and Lyft drivers filed a lawsuit Tuesday against California’s controversial Prop 22, a ballot measure that was approved by nearly 59% of state voters in the 2020 election. 
  • While Prop 22 does promise drivers wage guarantees and health insurance stipends, it also eliminated some protections as well as benefits like sick pay and workers’ compensation.
  • In their lawsuit, the drivers argue that Prop 22 “illegally” prevents them from being able to access the state’s workers’ compensation program. 

What’s in the Lawsuit?

In a lawsuit filed Tuesday, a group of Uber and Lyft drivers asked California’s Supreme Court to overturn the state’s controversial Prop 22 ballot measure.

The drivers behind the lawsuit, along with Service Employees International Union, allege that Prop 22 “illegally” bars them from being able to participate in the state’s workers’ compensation program. 

Additionally, they argue that the measure violates California’s constitution by “stripping” the state legislature of its ability to protect workers who unionize. 

“Every day, rideshare drivers like me struggle to make ends meet because companies like Uber and Lyft prioritize corporate profits over our wellbeing,” Plaintiff Saori Okawa said in a statement. 

Conversely, Uber driver and Prop 22 activist Jim Pyatt denounced the lawsuit, saying, “Voters across the political spectrum spoke loud and clear, passing Prop 22 in a landslide. Meritless lawsuits that seek to undermine the clear democratic will of the people do not stand up to scrutiny in the courts.”

California ballot measures have been occasionally repealed in the past; however, most of the time, they’ve only been repealed following subsequent ballot measures. If this lawsuit fails, such an initiative would likely be the last option for overturning Prop 22.

What is Prop 22?

Prop 22, which was approved by 59% of state voters in the 2020 Election, exempts app-based transportation and delivery companies from having to classify their drivers as employees. Rather, those drivers are listed as “independent contractors,” also known as gig workers. 

Notably, Prop 22 was supported by major industry players like DoorDash, Uber, Lyft, and Instacart, which launched a massive $200 million lobbying and advertising campaign.

While those companies did promise wage guarantees and health insurance stipends for drivers, Prop 22 also eliminated a number of protections and benefits drivers would have seen under an “employee” status, including sick pay and workers’ compensation. 

Because of that, many opponents have argued that the measure incentivizes companies to lay off their employees in favor of cheaper labor options.

Last week, it was reported that grocery stores like Albertsons, Vons, and Pavilions began laying off their delivery workers in favor of switching to “third-party logistics providers.” According to Albertsons, unionized delivery workers were not included in the layoffs. 

In recent coverage from KPBS, one San Diego Vons delivery worker detailed a situation in which he and other delivery workers were called into a meeting with management. 

“I thought they were going to give us a bonus or a raise or something like that,” he said. 

Ultimately, that employee was told he would be losing his job in late February, even though he had been with the company for two-and-a-half years. 

“I didn’t want to tell them,” the employee said of his parents, one of whom is disabled. “I’m the breadwinner for the family.”

See what others are saying: (The Verge) (The Washington Post) (CNN)

Industry

Daniel Silva Blames Corey La Barrie for His Own Death in New Legal Filing

  • Popular tattoo artist Daniel Silva said the death of YouTuber Corey La Barrie was due to La Barrie’s “own negligence,” in response to a wrongful death lawsuit from his family.
  • La Barrie died last May after Silva lost control of the sports car they were in, crashing into a street sign and a tree. 
  • La Barrie’s family has accused Silva of negligence, saying his excessive speeding caused the crash. They also claim he was under the influence, though he was never formally charged with a DUI. 
  • According to TMZ, Silva filed documents saying La Barrie “assumed the risk of death when he jumped into Daniel’s car that fateful night back in May.”

Corey La Barrie’s Death

Popular tattoo artist Daniel Silva has blamed YouTuber Corey La Barrie for his own death in response to a wrongful death lawsuit from La Barrie’s family, according to TMZ.

The tabloid says he filed legal documents saying, “the car crash that led to Corey’s death was due to his own negligence, and he assumed the risk of death when he jumped into Daniel’s car that fateful night back in May.”

La Barrie died on May 10, his 25th birthday, after Silva was speeding and lost control of the sports car they were in, crashing into a street sign and tree.

Police say Silva tried to leave the scene but was stopped by witnesses. He was later arrested and charged with murder. Silva eventually reached an agreement with prosecutors to plead no contest to vehicular manslaughter with gross negligence.

In August, Silva was sentenced to 364 days in jail, with credit for 216 days served. Under California sentencing guidelines, the 108 days he had spent in custody since the crash counted double.

He also received five years of probation, 250 hours of community service, and a suspended prison sentence of four years, which would be imposed if he violates the terms of his probation.

Wrongful Death Suit

Silva still faces the family’s lawsuit, which they filed the same month their son died.

In it, La Barrie’s family has accused Silva of negligence, saying his excessive speeding caused the crash. They also claim he was driving under the influence.

It’s worth noting that people close to Silva have disputed that claim, and he was never charged with a DUI. However, the first police statement about the crash labeled it a “DUI Fatal Traffic Collision.” Witnesses have said, though, that the two were partying earlier that night.

See what others are saying: (TMZ) (USA Today) (Variety)
