

YouTube Says It Will Use AI to Age-Restrict Content


  • YouTube announced Tuesday that it would be expanding its machine learning to handle age-restricting content.
  • The decision has been controversial, especially after news that the company’s AI systems had been taking down videos at roughly double the normal rate.
  • The move likely stems from both legal obligations in some parts of the world and practical concerns about the sheer volume of content uploaded to the site.
  • It might also help with moderator burnout, since the platform is currently understaffed and struggles with extremely high turnover.
  • In fact, the platform still faces a lawsuit from a moderator claiming the job gave them Post Traumatic Stress Disorder, and that the company offered few resources to cope with the content they were required to watch.

AI Age Restrictions

YouTube announced Tuesday that it will use AI and machine learning to automatically apply age restriction to videos.

In a recent blog post, the platform wrote, “our Trust & Safety team applies age-restrictions when, in the course of reviewing content, they encounter a video that isn’t appropriate for viewers under 18.”

“Going forward, we will build on our approach of using machine learning to detect content for review, by developing and adapting our technology to help us automatically apply age-restrictions.”

Flagged videos would effectively be blocked from being viewed by anyone who isn’t signed into an account or whose account indicates they are under 18. YouTube framed these changes as a continuation of its efforts to make the platform a safer place for families. It initially rolled out YouTube Kids as a dedicated platform for those under 13, and now it wants to sanitize the platform site-wide, though notably, it doesn’t plan to turn all of YouTube into a new YouTube Kids.

It’s also no coincidence that this move helps YouTube fall in line with regulations around the world. In Europe, beyond the AI-driven age restrictions, users may face additional verification steps if YouTube can’t confirm their age, such as providing a government ID or credit card to prove they are over 18.

YouTube did say that if a video is age-restricted, there will be an appeals process that puts the video in front of an actual person for review.

On that note, just days before announcing the AI-driven age restrictions, YouTube also said it would be expanding its human moderation team, which had largely been on hiatus because of the pandemic.

It’s hard to say how much these changes will actually affect creators or how much money they can make from the platform. The only assurances YouTube gave were to creators in the YouTube Partner Program.

“For creators in the YouTube Partner Program, we expect these automated age-restrictions to have little to no impact on revenue as most of these videos also violate our advertiser-friendly guidelines and therefore have limited or no ads.”

In other words, most of the videos likely to be auto-restricted already earn little or nothing from ads, and that’s unlikely to change.

Community Backlash

Every time YouTube makes a big change there is a wave of reactions, especially when the change hands a process over to AI. Tuesday’s announcement was no different.

On YouTube’s tweet announcing the changes, common responses included complaints like, “what’s the point in an age restriction on a NON kids app. That’s why we have YouTube kids. really young kids shouldn’t be on normal youtube. So we don’t realistically need an age restriction.”

“Please don’t implement this until you’ve worked out all the kinks,” one user pleaded. “I feel like this might actually hurt a lot of creators, who aren’t making stuff for kids, but get flagged as kids channels because of bright colors and stuff like that”

Worries about hiccups in the rollout of the new system were common among users, though it’s possible that YouTube’s Sept. 20 announcement that it would bring human moderators back to the platform was made partly to balance out how much damage a new AI could do.

In a late-August transparency report, YouTube found that AI moderation was far more restrictive. When moderation staff was first downsized between April and June, YouTube’s AI largely took over and removed around 11 million videos, double the normal rate.

YouTube did allow creators to appeal those decisions; about 300,000 videos were appealed, and roughly half were reinstated. Facebook faced a similar problem and will likewise bring back human moderators to handle both restricted content and the upcoming election.

Other Reasons for the Changes

YouTube’s decision to expand its use of AI not only falls in line with various laws on verifying users’ ages and controlling what content is widely available to the public; it is also likely driven by practical concerns.

The site gets over 400 hours of content uploaded every minute, or 24,000 minutes of video for every minute that passes. Even spreading the work across time zones and staggered shifts, YouTube would need over 70,000 people working eight-hour days just to watch everything uploaded to the site in real time.

Outlets like The Verge have published a series of reports about how YouTube, Google, and Facebook moderators are dealing with depression, anger, and Post Traumatic Stress Disorder because of their jobs. These issues were particularly prevalent among people working in what YouTube calls the “terror” or “violent extremism” queue.

One moderator told The Verge, “Every day you watch someone beheading someone, or someone shooting his girlfriend. After that, you feel like wow, this world is really crazy. This makes you feel ill. You’re feeling there is nothing worth living for. Why are we doing this to each other?”

That same individual noted that since working there, he began to gain weight, lose hair, have a short temper, and experience general signs of anxiety.

On top of these claims, YouTube is also facing a lawsuit filed in a California court on Monday by a former content moderator.

The complaint states that Jane Doe, “has trouble sleeping and when she does sleep, she has horrific nightmares. She often lays awake at night trying to go to sleep, replaying videos that she has seen in her mind.

“She cannot be in crowded places, including concerts and events, because she fears mass shootings. She has severe and debilitating panic attacks,” it continued. “She has lost many friends because of her anxiety around people. She has trouble interacting and being around kids and is now scared to have children.”

These issues allegedly weren’t limited to people working the “terror” queue; they extended even to people training to become moderators.

“For example, during training, Plaintiff witnessed a video of a smashed open skull with people eating from it; a woman who was kidnapped and beheaded by a cartel; a person’s head being run over by a tank; beastiality; suicides; self-harm; children being rapped [sic]; births and abortions,” the complaint alleges.

“As the example was being presented, Content Moderators were told that they could step out of the room. But Content Moderators were concerned that leaving the room would mean they might lose their job because at the end of the training new Content Moderators were required to pass a test applying the Community Guidelines to the content.”

During their three-week training, moderators allegedly receive little resilience training and few wellness resources.

These kinds of lawsuits aren’t unheard of. Facebook faced a similar suit in 2018, where a woman claimed that during her time as a moderator she developed PTSD as a result of “constant and unmitigated exposure to highly toxic and extremely disturbing images at the workplace.”

That case hasn’t been decided in court. Instead, Facebook and the plaintiff agreed to settle for $52 million, pending approval from the court. The settlement would apply only to U.S. moderators.

See what others are saying: (CNET) (The Verge) (Vice)


TikTok and Twitter Are Now Deleting Videos That Expose Closeted Olympians on Grindr


On top of outing people who may not be ready to have their sexuality revealed to the world, these videos could have endangered LGBTQ+ athletes from countries where homosexuality is illegal.


Closeted Olympians Being Doxxed

Openly LGBTQ+ Olympians are currently more visible than they have ever been before, but unfortunately, so are closeted ones.

That’s because some people have been using the LGBTQ+ dating app Grindr to try and find Olympians. They’ve been doing so by using the app’s “Explore” feature, which allows people to search and see users in specific locations (i.e., the Olympic Village).

But some aren’t content with just discovering which athletes belong to the LGBTQ+ community. They’re also sharing that information on platforms like TikTok and Twitter. 

“I used Grindr’s explore feature to find myself [an] Olympian boyfriend,” one TikTok user said in a post that had been viewed 140,000 times, according to Insider.

That video reportedly went on to show the poster scrolling through Grindr to expose over 30 users’ full faces. 

As many have argued, not only does this potentially out already-stressed Olympians who may not yet be comfortable sharing their sexuality, it also could put some users at serious risk if they live in countries where being LGBTQ+ is illegal. 

In fact, the video cited by Insider seemingly did just that, as it reportedly shows the face of a user who appears to be from a country “known for its anti-LGBTQ policies.”

Grindr Responds, TikTok and Twitter Take Action

In response, Grindr said the posts violate its rules against “publicly displaying, publishing, or otherwise distributing any content or information” from the app. It then asked the posters to remove the content.

Ultimately, it was TikTok and Twitter themselves that largely took action, with the two deleting at least 14 posts scattered across their platforms.

A Highly-Visible LGBTQ+ Presence at the Games 

According to Outsports, at least 172 of around 11,000 Olympians are openly LGBTQ+. While that number is still well below the statistical average, it’s triple the number of LGBTQ+ athletes that attended Rio’s 2016 Games.

In fact, if they were their own country, openly LGBTQ+ athletes would reportedly rank 11th in medals, according to an Outsports report published Tuesday. 

Among those winners is British diver Tom Daley, who secured his first gold medal on Monday and used his platform to send a hopeful message to LGBTQ+ youth by telling them, “You are not alone.”

After winning a silver medal on Wednesday, U.S. swimmer Erica Sullivan talked about her experience as both a member of the LGBTQ+ community and a person of color. 

Still, the Olympics has faced criticism for its exclusion of intersex individuals, particularly those like South African middle-distance runner Caster Semenya, who won gold medals in both 2012 and 2016. Rules implemented in 2019 now prevent Semenya from competing as a woman without the use of medication to suppress her testosterone levels. 

See what others are saying: (Insider) (Pink News) (Out)



Jake Paul Launches Anti-Bullying Charity


The charity, called Boxing Bullies, aims to use the sport to give kids confidence and courage.


Jake Paul Launches Boxing Bullies Foundation

YouTuber Jake Paul — best known as the platform’s boxer, reckless partier, and general troublemaker — has seemingly launched a non-profit to combat bullying.

The charity is called Boxing Bullies. According to a mission statement posted on Instagram, it aims to “instill self confidence, leadership, and courage within the youth through the sport of boxing while using our platform, voice, and social media to fight back against bullying.”

If the notion of a Paul-founded anti-bullying charity called “Boxing Bullies” was not already begging to be compared to former First Lady Melania Trump’s “Be Best” initiative, maybe the group’s “Boxing Bullies Commandments” will help connect the dots. Those commandments use an acronym for the word “BOX” to spell out the charity’s golden rules:

“Be kind to everyone; Only defend, never initiate; X-out bullying.”

Paul Hopes To “Inspire” Kids To Stand Up For Themselves

Paul first said he was launching Boxing Bullies during a July 13 interview following a press conference for his upcoming fight against Tyron Woodley.

“I know who I am at the end of the day, which is a good person,” he told reporters. “I’m trying to change this sport, bring more eyeballs. I’m trying to support other fighters, increase fighter pay. I’m starting my charity, I’m launching that in 12 days here called Boxing Bullies and we’re helping to fight against cyberbullying.”

It has not yet been 12 days since the interview, so more information about the organization is likely coming soon. Currently, the group is most active on Instagram, where it has around 1,200 followers. It has posted once to Twitter, where it has 32 followers, and has a TikTok account that has yet to publish any content. It also has a website, though there is not much on it as of yet.

On its Instagram, one post introducing Paul as the founder claims the rowdy YouTuber started this charity because he has been on the receiving end of bullying.

“Having been a victim of bullying himself, Jake experienced firsthand the impact it has on a person’s life,” the post says. “Jake believes that this is a prevailing issue in society that isn’t talked about enough. Boxing gave Jake the confidence to not care about what others think and he wants to share the sport and the welfare it’s had on him with as many kids as possible.”

It adds that he hopes his group can “inspire the next generation of kids to be leaders, be athletes, and to fight back against bullying.”

Paul Previously Accused of Being a Bully

While fighting against bullying is a noble cause, it is an ironic project for Paul to start, as he has faced no shortage of bullying accusations. While Paul previously sang about “stopping kids from getting bullied” in the lunchroom, some have alleged he himself was actually a classic high school bully who threw kids’ backpacks into garbage cans. 

This behavior allegedly continued into his adulthood, as a New York Times report from earlier this year claimed he ran his Team 10 house with a culture of toxicity and bullying. Among other things, sources said he involved others in violent pranks, pressured people into doing dangerous stunts, and destroyed people’s personal property to make content.

Earlier this year, Paul was also accused of sexual assault, though he denied those allegations.

See what others are saying: (Dexerto)



Director Defends Recreating Anthony Bourdain’s Voice With AI in New Documentary


The film’s director claims he received permission from Bourdain’s estate and literary agent, but on Thursday, Bourdain’s widow publicly denied ever giving that permission. 


Bourdain’s Voice Recreated

“You are successful, and I am successful, and I’m wondering: Are you happy?” Anthony Bourdain says in a voiceover featured in “Roadrunner,” a newly released documentary about the late chef — except Bourdain never actually said those words aloud.

Instead, it’s one of three lines in the film, which features frequent voiceovers from Bourdain, that were created through the use of artificial intelligence technology.

That said, the words are Bourdain’s own. In fact, they come from an email Bourdain reportedly wrote to a friend prior to his 2018 suicide. Nonetheless, many have now questioned whether recreating Bourdain’s voice was ethical, especially since documentaries are meant to reflect reality.

Director Defends Use of AI Voice

The film’s director, Academy Award winner Morgan Neville, has defended his use of the synthetic voice, telling Variety that he received permission from Bourdain’s estate and literary agent before inserting the lines into the film. 

“There were a few sentences that Tony wrote that he never spoke aloud,” Neville said. “It was a modern storytelling technique that I used in a few places where I thought it was important to make Tony’s words come alive.” 

Bourdain’s widow — Ottavia Bourdain, who is the executor of his estate — later denied Neville’s claim on Twitter, saying, “I certainly was NOT the one who said Tony would have been cool with that.”

In another interview with GQ, Neville described the process, saying the film’s creators “fed more than ten hours of Tony’s voice into an AI model.”

“The bigger the quantity, the better the result,” he added. “We worked with four companies before settling on the best.”

“If you watch the film,” Neville told The New Yorker, “you probably don’t know what the other lines are that were spoken by the AI, and you’re not going to know. We can have a documentary-ethics panel about it later.” 

The Ethics Debate Isn’t Being Tabled

But many want to have that discussion now.

Boston-based film critic Sean Burns, who gave the film a rare negative review, later criticized it again for its unannounced use of AI, saying he wasn’t aware that Bourdain’s voice had been recreated until after he watched the documentary.  

Meanwhile, The New Yorker’s Helen Rosner wrote that the “seamlessness of the effect is eerie.”

“If it had been a human voice double I think the reaction would be ‘huh, ok,’ but there’s something truly unsettling about the idea of it coming from a computer,” Rosner later tweeted.

Online, many others have criticized the film’s use of AI, with some labeling it as a “deepfake.”

Others have offered more mixed criticism, saying that while the documentary highlights the need for posthumous AI use to be disclosed, it should not be ruled out altogether. 

“In a world where the living could consent to using AI to reproduce their voices posthumously, and where people were made aware that such a technology was being used, up front and in advance, one could envision that this kind of application might serve useful documentary purposes,” David Leslie, ethics lead at the Alan Turing Institute, told the BBC.

Celebrities Recreated After Death

The posthumous use of celebrity likeness in media is not a new debate. In 2012, a hologram of Tupac took the stage 15 years after his death. In 2014, the Billboard Music Awards brought a hologram of Michael Jackson onstage five years after his death. Meanwhile, the Star Wars franchise digitally recreated actor Peter Cushing in 2016’s “Rogue One,” and unused footage of actress Carrie Fisher was later incorporated into “The Rise of Skywalker,” though a digital version of Fisher was never used.

In recent years, it has become almost standard for filmmakers to say that they will not create digital versions of characters whose actors die unexpectedly. For example, several months after Chadwick Boseman’s death last year, “Black Panther: Wakanda Forever” executive producer Victoria Alonso confirmed Boseman would not be digitally recreated for his iconic role as King T’Challa.

See what others are saying: (BBC) (Yahoo! News) (Variety)
