
YouTube Updates Harassment Policy to Curb Threats and Personal Attacks


  • YouTube announced new bullying and harassment policies that will prohibit implied threats and malicious insults based on a person’s sexuality, race, or gender expression.
  • Under the new policy, channels that show a pattern of harassing behavior, such as repeatedly making remarks that come close to violating the harassment policy, could also face consequences.
  • These changes come several months after a public controversy in which former Vox host Carlos Maza accused conservative commentator Steven Crowder of harassing him in videos on Crowder’s channel. While Crowder did repeatedly call Maza names like “lispy queer,” YouTube said this did not violate its policy.
  • Many were not happy with YouTube’s new policy, resulting in #YouTubeIsOverParty trending on Twitter. Some creators say they have already been impacted by the guidelines.

YouTube’s New Policy

YouTube announced new policy changes that will prohibit implied threats and malicious insults based on a person’s sexuality, race, or gender expression.

In a Wednesday blog post, the company announced that it was tightening its bullying and harassment guidelines. The changes come after months of consultation with creators, experts from anti-bullying organizations, free speech proponents, and advisers from across the political spectrum.

“Harassment hurts our community by making people less inclined to share their opinions and engage with each other,” YouTube’s post said. “We heard this time and again from creators, including those who met with us during the development of this policy update.”

The company’s first major change aims to take “a stronger stance against threats and personal attacks.” YouTube’s guidelines previously covered only explicit threats; the new policy extends enforcement to videos containing veiled or implied threats.

“This includes content simulating violence toward an individual or language suggesting physical violence may occur,” the post explains.  

Beyond threats, the policy also covers demeaning language that YouTube feels crosses the line, including “content that maliciously insults someone based on protected attributes such as their race, gender expression, or sexual orientation.”

YouTube also addressed consequences for a “pattern of harassing behavior.” According to the post, creators told the company that harassment sometimes takes the form of remarks repeated across a series of videos or comments. Even when the individual videos or comments do not violate YouTube’s policy on their own, the company now has a plan to combat the cumulative behavior.

“Channels that repeatedly brush up against our harassment policy will be suspended from [YouTube Partner Program], eliminating their ability to make money on YouTube,” YouTube said. The platform added that this content could be removed, and channels could receive strikes or be terminated. 

YouTube clarified that these changes also apply to the platform’s comment sections, not just uploaded videos. The company expects the number of comments removed from the site to increase, noting that 16 million were removed in the third quarter.

YouTube also outlined recently added tools that give creators more control over their comment sections.

“When we’re not sure a comment violates our policies, but it seems potentially inappropriate, we give creators the option to review it before it’s posted on their channel,” YouTube said.

In the early stages of the rollout, YouTube saw a 75% reduction in user flags on comments. Most creators now have this setting enabled but can opt out if they would like; they can also simply ignore the held comments.

“We expect there will continue to be healthy debates over some of the decisions and we have an appeals process in place if creators believe we’ve made the wrong call on a video,” the company said of this new update. 

Why Did YouTube Change Its Policy?

Many believe these changes were prompted by the controversy between Carlos Maza, who hosted a series for Vox, and Steven Crowder, who hosts the YouTube show Louder with Crowder. Back in May, Maza tweeted a thread calling Crowder out for repeatedly mocking him on his show, where Crowder had referred to Maza as “Mr. Gay Vox,” a “lispy queer,” and a “gay Latino from Vox.”

Crowder defended himself, saying the remarks should not count as bullying because he made them while criticizing Maza’s series. YouTube eventually responded to Maza, saying that Crowder’s comments, while potentially offensive, did not violate its policy.

Maza continued to call YouTube out over the decision, saying it “gives bigots free license” and accusing the site of using its gay creators. Many criticized YouTube’s response, which came in June as the company was celebrating Pride Month. Some found it hypocritical for the company to publicly celebrate the LGBTQ community while allowing comments that some perceived as homophobic to stay on its site.

Amid the backlash, YouTube ultimately suspended monetization on Crowder’s channel, a decision that was also met with outrage.

Maza and Crowder React

Maza tweeted a thread about the new policy on Wednesday morning, arguing that the real question is whether YouTube will enforce it against all creators, something he thinks is unlikely.

“YouTube loves to manage PR crises by rolling out vague content policies they don’t actually enforce,” he wrote. “These policies only work if YouTube is willing to take down its most popular rule-breakers. And there’s no reason, so far, to believe that it is.”

Before YouTube made its official announcement, Crowder posted a video titled “Urgent. The YouTube ‘Purge’ Is Coming.” The video, uploaded Tuesday, was based largely on murmurs about what was to come. In it, he said the policies could silence and negatively impact his channel and others like it.

“Obviously my heart goes out to any future conservative or any future independent voices that are affected because people got their feelings hurt,” he said. 

Policy Gets Negative Feedback

Other creators also shared their reactions, with some saying they were already being impacted by the new changes. Ian Carter, known online as iDubbbz, tweeted a screenshot of an email from YouTube saying his video “Content Cop: Leafy” was taken down for violating guidelines.

Carter uses vulgar and antagonistic language in the video and jokes about bullying being okay. Many, however, don’t think the video should have been removed, arguing it was meant to call out someone else’s bad behavior.

Another creator, Gokanaru, said his video critiquing h3h3Productions was removed.

Some online were frustrated by this, arguing that those videos should not be taken down while someone like Onision, who has been accused of predatory behavior and grooming, still has videos on the platform.

#YouTubeIsOverParty was trending on Twitter by late Wednesday morning. Many used the hashtag to argue that the policy could stifle creativity on the platform and that YouTube should not frame the change as something creators asked for.

Even though the hashtag gained a lot of traction, YouTuber Taylor Harris said that actual use of the site will likely be unaffected.

See what others are saying: (Axios) (TechCrunch) (Vox)


Black Mirror or Reality? Microsoft Granted Patent for Tech That Lets It Create Chatbots of Dead People


  • Microsoft has been granted a patent that would allow it to create artificial intelligence chatbots of dead people using “voice data, social media posts, electronic messages, written letters, etc.”
  • As Microsoft noted in its patent proposal, chatbots could also be created to imitate living people — opening the door for users to train a digital version of themselves to be used after they die. 
  • In the patent filing, Microsoft also suggested creating 2D or 3D models of chatbot subjects by studying images and videos of them.
  • Online, many noted the similarities between Microsoft’s patent and a 2013 episode of Black Mirror in which a woman creates an AI version of her deceased boyfriend. 

Microsoft Granted Controversial Patent

The United States Patent and Trademark Office has granted Microsoft a patent for technology that would allow it to digitally revive dead people.

If implemented, Microsoft would use information like “voice data, social media posts, electronic messages, written letters, etc.,” to create artificial intelligence chatbots meant to replicate the person.

In its filing, Microsoft noted that the person could be “a friend, a relative, an acquaintance, a celebrity, a fictional character, a historical figure, a random entity, etc.”

Microsoft also noted, “the specific person may also correspond to oneself (e.g., the user creating/training the chat bot), or a version of oneself (e.g., oneself at a particular age or stage of life).”

As The Independent pointed out, that opens the door for living users to “train a digital replacement in the event of their death.”

But it doesn’t stop there. Microsoft has also suggested creating 2D or even 3D models of the person by studying images and videos of them.

Has Life Finally Become an Episode of Black Mirror?

Online, many noted the similarities between Microsoft’s patent and the 2013 Black Mirror episode “Be Right Back,” in which a character played by Hayley Atwell revives her recently deceased boyfriend through an AI chatbot. As the episode progresses, that AI, played by Domhnall Gleeson, eventually becomes a lifelike android replica of her boyfriend.

“More people need to remember Black Mirror is a warning sign, not a product manual,” said Tama Leaver, an internet studies professor at Curtin University in Australia.

Indeed, many critics have interpreted the episode, which focuses on the grief felt by Atwell’s character because of her loss, as an examination of “our own mortality and our desire to play God.” 

“It shines a spotlight on our desperate need to reverse a natural and necessary part of life without considering the consequences on our emotional well-being,” Roxanne Sancto said in a review for Paste Magazine.

In fact, series creator Charlie Brooker has said part of his inspiration for the episode came from Twitter and the question: “What if these people were dead and it was software emulating their thoughts?”

See what others are saying: (The Independent) (IGN) (IndieWire)


JoJo Siwa Fans Caution Against Labeling the Star’s Sexuality


  • JoJo Siwa was featured in two TikTok videos Wednesday that many felt signaled her as a member of the LGBTQ+ community.
  • One showed her dancing and lip-syncing to Paramore’s “Ain’t It Fun,” along with members of the TikTok group Pride House LA. Siwa specifically mouthed the lyric “Now you’re one of us,” which is also the caption of the post. 
  • The second video showed her lip-syncing to Lady Gaga’s “Born This Way,” a song that has long been heralded as an LGBTQ+ anthem.  
  • The 17-year-old entertainer has not directly addressed speculation about her sexuality, prompting many to caution against labeling her.

JoJo Siwa TikToks Trigger Sexuality Speculations

JoJo Siwa fans are urging the public not to label the 17-year-old entertainer’s sexuality, especially when she has not explicitly done so herself.

The request came after Siwa became a trending topic Wednesday when many speculated that she had come out as a member of the LGBTQ+ community.

The speculation stems from two TikTok videos she was featured in. The first was posted on choreographer Kent Boyd’s account and features him and other members of the TikTok group Pride House LA, which includes several stars from Disney Channel’s “Teen Beach Movie.”

It showed them all lip-syncing and dancing along to Paramore’s hit song “Ain’t It Fun.” Siwa specifically mouthed the lyric “Now you’re one of us,” which was also the caption of the post.

[Embedded TikTok from @kentboyd_, captioned: “Now your one of us!! @itsjojosiwa @molleegray @garrettclayton91 @jekajane #pridehousela,” set to the sound “now ur one of us” by Mia Mugavero]

Later in the day, Siwa posted a video on her personal TikTok account that featured her lip-syncing to Lady Gaga’s “Born This Way,” a song that has long been heralded as an LGBTQ+ anthem. 

Part of the lyrics she sang along to were: “No matter gay, straight or bi, lesbian transgender life / I’m on the right track baby, I was born to survive.” 

Reactions

These posts fueled the rumors online, and speculation picked up further when influencers like James Charles and Bretman Rock expressed their support.

Many fans also left comments on the videos saying they were proud of her, and journalist Yashar Ali tweeted, “This feels like a big deal if it is what I think it is…JoJo Siwa is hugely popular with kids.”

“And as someone just pointed out, if it is what I think it is, she’s doing it at the height of her fame when she’s selling out arenas,” he continued.

Despite the wave of praise, other fans feel that it’s inappropriate and harmful to speculate about anyone’s sexuality.

Many have even shared their own experiences coming out, reminding people not to label Siwa as anything until she explicitly chooses to share that information herself. 

While Siwa hasn’t directly addressed any of the responses yet, she has retweeted a post featuring her video, the pride flag emoji, and the caption, “@itsjojosiwa is on the right track, she was born this way.”

Still, others noted that she has publicly asked Lady Gaga to collaborate with her in the past, so the video could simply be a hint that a collaboration is coming.

Others believe it could also be Siwa’s way of signaling that she is an ally of the LGBTQ+ community.

See what others are saying: (Insider) (Metro) (Teen Vogue)


Google Investigates Top AI Researcher Who Was Looking Into a Previous Firing


  • Google is investigating the co-leader of its Ethical AI team, Margaret Mitchell.
  • While Mitchell has not been fired, her account has been locked because Google said she “exfiltrated thousands of files” and shared them with people outside of the company. 
  • In a tweet, Mitchell indicated that she had been “documenting current critical issues” related to the firing of another Google AI Ethicist in December.
  • Sources reportedly told Axios that Mitchell had been specifically looking for messages that showed discriminatory treatment of that fired researcher.

Google Investigates Margaret Mitchell

On Tuesday, Google stated that it is now investigating the co-leader of its Ethical AI team, Margaret Mitchell.

Mitchell has reportedly not been fired, but her company email account has been locked.

According to Google, its security systems automatically lock employee accounts “when they detect that the account is at risk of compromise due to credential problems or when an automated rule involving the handling of sensitive data has been triggered.”

In this case, Google said Mitchell “exfiltrated thousands of files” and then shared them with people outside of the company. 

Why Did Mitchell Begin Looking Through Files?

Mitchell’s investigation is related to the ousting of another top AI ethicist at Google, Timnit Gebru, who was fired at the beginning of December. 

Before Gebru was fired, managers reportedly instructed her to withdraw an unpublished research paper upon her return from vacation. In an email to the internal listserv Google Brain Women and Allies, Gebru then voiced frustration at managers for allegedly making the decision without her input. 

“You are not worth having any conversations about this, since you are not someone whose humanity (let alone expertise recognized by journalists, governments, scientists, civic organizations such as the electronic frontiers foundation etc) is acknowledged or valued in this company,” Gebru said in a critique of the decision. 

Gebru’s firing led to such a massive outcry from Google employees that Google CEO Sundar Pichai pledged to investigate the situation. 

On Friday, Mitchell indicated in a tweet that she was also looking into Gebru’s firing, saying that she was “documenting current critical issues from [Gebru’s] firing, point by point, inside and outside work.”

According to Axios, sources said that Mitchell used automated scripts to search through messages that potentially document discriminatory treatment of Gebru.

See what others are saying: (Axios) (The Verge) (Bloomberg)
