
YouTube Tightens Policies Around Election-Related Content


  • In a blog post, YouTube said it would ban misinformation and some misleading election content while also raising up political creators and authoritative voices such as major news outlets.
  • Notably, videos taken out of context will not be removed unless they violate a different rule.
  • A YouTube spokesperson said deepfakes will be deleted if they display “malicious intent,” but some deepfakes, such as parody videos, may be allowed on the platform.

YouTube to Ban Misleading Election-Related Content

YouTube is tightening its policies on election-related content, the platform announced on the same day the 2020 primary season kicked off in Iowa.

In a blog post, Leslie Miller, Google’s Vice President of Government Affairs and Public Policy, laid out the new policies the platform will follow, including one aimed at removing manipulated and doctored content.

For example, that includes videos that make a government official appear to be dead. Speaking to The New York Times, YouTube spokesperson Ivy Choi cited another example: a 2019 video in which Speaker of the House Nancy Pelosi appeared to slur her words. That video had been doctored by slowing it down, making it intentionally misleading, Choi said.

However, content that is simply taken out of context will not be removed, per YouTube guidelines. To explain the distinction, Choi pointed to a recent clip of former Vice President Joe Biden that was edited to make it seem as if he had made a racist remark at a campaign event.

In the blog post, Miller also says YouTube will remove content that misleads people about voting and census processes. For example, that includes videos that give incorrect voting dates.

In another move that is reminiscent of the Barack Obama birther conspiracy, Miller announced the platform would remove content that “advances false claims related to the technical eligibility requirements for current political candidates and sitting elected government officials to serve in office.”

Additionally, YouTube will continue to terminate channels that impersonate another person or channel, as well as channels that artificially increase views, likes, and comments. 

Recognizing Reputable News Outlets and Creators 

While implementing the aforementioned bans, Miller said the platform also aims to “raise up authoritative election news.”

Essentially, Miller is referring to major news outlets like CNN and Fox News, which will be more likely to show up in search results and “watch next” panels. That, however, is less a new announcement than a continuation of an existing goal on YouTube’s part.

YouTube discussed curating reputable news content in a separate blog post in December. The platform has also been making changes in this area over the last couple of years, with Miller saying that, because of those changes, consumption of content from authoritative news sources grew by 60% last year.

Finally, Miller said YouTube will “recognize and reward campaigns, candidates, and political creators.”

“YouTube remains committed to maintaining the balance of openness and responsibility, before, during and after the 2020 U.S. election,” Miller said at the end of the post. “We’ll have even more to share on this work in the coming months.”

How Will YouTube Handle Deepfakes?

Miller’s post doesn’t come as much of a surprise, especially as other major social media platforms like Facebook and Twitter tighten their policies around political content.

Still, that doesn’t mean YouTube is free of scrutiny. The platform will still face critics as it rolls out the new policies, especially as reviewers filter through the more than 500 hours of content uploaded every minute. And with out-of-context videos comprising a sizeable portion of misleading content, YouTube may also draw criticism for opting not to remove those videos.

The new policy has also raised the question of how YouTube will treat political deepfakes. The answer? It depends. According to Choi, if a deepfake video was created with malicious intent, then it would be taken down. Parody videos, however, could remain up depending on their content and context.

“The best way to quickly remove content is to stay ahead of new technologies and tactics that could be used by malicious actors, including technically-manipulated content,” Miller said in the post. “We also heavily invest in research and development. In 2018, we formed an Intelligence Desk to detect new trends surrounding inappropriate content and problematic behaviors, and to make sure our teams are prepared to address them before they become a larger issue.”

See what others are saying: (The Washington Post) (Mashable) (Yahoo News)


Twitch Sues Two Users for Creating Hate Raid Bots That Targeted Black and LGBTQ+ Streamers


Twitch said the two users were so relentless in their racist, sexist, and anti-LGBTQ+ hate raids that they forced some creators to stop streaming.


Twitch Sues Two Users

Twitch has filed a lawsuit against two of its users for allegedly creating hate raid bots that targeted Black and LGBTQ+ streamers with racist, sexist, and anti-LGBTQ+ content. 

The users named in the lawsuit, filed late Thursday, are CruzzControl and CreatineOverdose. While their legal names are currently unknown, Twitch said it traced one to the Netherlands and the other to Austria. It added that it will amend the suit to include their real names once it learns them. 

Twitch said both users began using bots to flood streamers’ chats with hate-filled messages in August. Despite multiple suspensions and bans, Twitch said the two continually created new accounts to continue their hate raid crusades. 

According to the lawsuit, CruzzControl operated nearly 3,000 bots that were used to spam the discriminatory and harassing content. Meanwhile, CreatineOverdose used “their bot software to demonstrate how it could be used to spam Twitch channels with racial slurs, graphic descriptions of violence against minorities, and claims that the hate raiders are the KKK.”

Twitch didn’t stop at accusations of hateful actions and rule-breaking. It also claimed the two users were so forceful in their efforts to attack creators that they pressured some to stop streaming altogether, “eliminating an important source of revenue for them.”

Twitch Users Demand Change

Twitch creators have long complained about hate raids, but a number of small creators began organizing a cohesive movement in early August following what appeared to be a growing number of hate raids. 

Many demanded that Twitch address the situation by holding round tables with affected creators and enabling different features that would give them the ability to shut down incoming raids. Critics also called on the platform to provide detailed information about how it plans to protect creators moving forward. While Twitch did promise to implement fixes, a large portion of users weren’t satisfied with its messaging. 

The bulk of users’ efforts culminated on Sept. 1, when various creators participated in #ADayOffTwitch, a one-day walkout designed to reduce traffic on the platform.

Despite Twitch’s lawsuit, a number of users have still said they won’t be completely satisfied with the platform’s actions until more is accomplished. For now, their primary goal is to have Twitch directly outline what steps it’s taking to prevent hate raids.

In its lawsuit, Twitch does make a cursory mention of several changes it says it has introduced recently, including “implementing stricter identity controls with accounts, machine learning algorithms to detect bot accounts that are used to engage in harmful chat, and augmenting the banned word list.”

“Twitch mobilized its communications staff to address the community harm flowing from the hate raids and assured its community that it was taking proactive measures to stop them,” it added. “Twitch also worked with impacted streamers to educate them on moderation toolkits for their chats and solicited and responded to streamers’ and users’ comments and concerns.”

See what others are saying: (The Washington Post) (BuzzFeed News) (Kotaku)


Streamers Protest Racist and Homophobic Hate Raids With #ADayOffTwitch


The creators participating in the walkout want Twitch to implement policies that actively combat hate-raiding.


#ADayOffTwitch

Numerous Twitch streamers went dark on the platform Wednesday as part of a movement called #ADayOffTwitch, which participants have described as a way to stand “in solidarity with marginalized creators under attack by botting & hate-raids.” 

The protest was organized last month after a smaller creator by the name of RekItRaven, who is Black and uses they/them pronouns, had their streams flooded with racist messages twice.  

“This channel now belongs to the KKK,” dozens of users commented during the streams. 

Source: @RekItRaven

For RekItRaven, those messages also came at a particularly painful time, as they had just finished talking about how several traumatic experiences had shaped their life.

Following the stream, RekItRaven began using #TwitchDoBetter, saying, “I love Twitch. I love the community that I built there… BUT THAT DOES NOT MEAN I HAVE TO ACCEPT BEING TREATED LIKE SHIT ON THE PLATFORM.”

Soon, RekItRaven’s concerns gained traction, prompting a number of other smaller creators to step forward with their own experiences of being on the receiving end of hate raids. Eventually, that momentum morphed into Wednesday’s #ADayOffTwitch protest, which has been spearheaded by RekItRaven and two other small creators known as ShineyPen and Lucia Everblack.

Protesters’ Demands

The protesters are demanding that Twitch make several concessions moving forward. Those demands include the platform:

  • Holding round-tables with affected creators to assist with the creation of tools that combat abuse on the platform.
  • Enabling creators to set a minimum account age for prospective chatters.
  • Allowing creators the ability to deny incoming raids.
  • Removing the ability to attach more than three Twitch accounts to one email address since hate-raiders can currently use a single email to register unlimited accounts. 
  • Providing transparency into the actions being taken to protect creators, including giving a timeframe for that implementation.

For its part, Twitch has already promised to implement fixes, saying on Aug. 20, “Hate spam attacks are the result of highly motivated bad actors, and do not have a simple fix.”

“We’ve been building channel-level ban evasion detection and account improvements to combat this malicious behavior for months,” it added. “However, as we work on solutions, bad actors work in parallel to find ways around them—which is why we can’t always share details.” 

However, for now, creators must still deal with potentially being hate-raided while streaming, which is why their anger toward Twitch has persisted.

Do Small Creators Have a Big Enough Voice?

The protest, led mostly by smaller creators, is also almost entirely composed of them. Because of this, the silence from large creators, who hold a disproportionate amount of influence on the platform, has also led to frustration.

Many have pointed out that large creators will publicly show their support for minority causes during events such as Black History Month and Pride Month, but smaller users said they feel abandoned when those same creators don’t also actively participate in causes that directly combat hate against minorities.

“Nobody gives a fuck if you take the day off. Nobody knows who you are. That’s the truth,” streamer Asmongold, who has 2.4 million followers on Twitch, said on a stream last month. “If people got together and they said, we’re all going to collectively do it, I would do it in a heartbeat. Right, I would do it. I’ve got no problem because I do believe in power in numbers, I absolutely do, which is why I don’t believe in this. Like, you can’t get a bunch of 20 Andys together and think that you’re going to do anything. Nobody gives a fuck.”

That said, some influential streamers have added their voices to #ADayOffTwitch. For example, both Rhymestyle and Meg Turney participated in Wednesday’s protest; however, both creators have hundreds of thousands more followers outside of Twitch than on it.

A number of smaller creators have also argued that it’s not feasible for them to take a day off even though they want to support the cause. For example, taking a day off could jeopardize their affiliate or partner status, which could, in turn, jeopardize their channels.

Meanwhile, others have argued that outcomes such as those are exactly what hate-raiders want to achieve, so logging off Twitch for a day could be playing into their hands. 

Others still said they wanted to participate but are contractually obligated to stream every day either because of sponsorships or other deals.

See what others are saying: (The Verge) (Engadget) (NBC News)


CallMeCarson Announces Return to Streaming Following Grooming Allegations


In his return announcement, the YouTuber promised to donate 100% of his proceeds to charity in hopes that he can turn “a negative situation with a lot of eyes on it into something positive.”


CallMeCarson Returns

Popular “Minecraft” YouTuber and streamer Carson King, known online as CallMeCarson, announced Wednesday that he will return to streaming following accusations he faced earlier this year of grooming and sexting underage fans.

In a video titled “Moving Forward,” King said he would begin streaming on Twitch again on Sept. 1 as part of what he is calling a “Year of Charity.” For the next 12 months, King plans to donate 100% of his proceeds to different charities, selecting a new one each month. 

“Before you start looking at this as an excuse to sweep things under the rug, that’s not what this is,” he explained in his video. “I’m doing this to turn a negative situation with a lot of eyes on it into something positive that can help a lot of people.” 

King did not address the details of the allegations that have been leveled against him. Instead, he said he wanted to focus on what he can do in the future.

“I’ve learned a lot this past year,” King said. “I’m not seeking forgiveness nor am I looking to make excuses.”

Grooming Allegations Made Against CallMeCarson

In January, members of his YouTube group The Lunch Club told “DramaAlert” that in March of 2020, King had admitted to grooming underage fans. They claimed to not know many details but stated that his confession ultimately led to the group disbanding. One former member, known as “Slimecicle,” even said he reported Carson to authorities.

The victims themselves ended up coming forward online. One, who identified herself as Sam, said Carson sent her sexually suggestive messages in 2019 when he was 19 and she was 17. She also posted Discord messages the two exchanged where King said he could not “control” himself and asked when she turned 18. 

Another girl, who went by CopiiCatt, said King sent her nude photos when she was 17 and he was 20. 

Following this, King took a hiatus online, and now, his return has been met with mixed reactions.

His “Moving Forward” video has been viewed over 1.2 million times, receiving 252,000 likes and just 14,000 dislikes. 

On Twitter, however, more people expressed frustration with his return and were upset by the swell of support for King despite the accusations against him. 

See what others are saying: (Dexerto) (Dot Esports) (HITC)
