YouTube Tightens Policies Around Election-Related Content
- In a blog post, YouTube said it would ban misinformation and some misleading election content while also raising up political creators and authoritative voices such as major news outlets.
- Notably, videos taken out of context will not be removed unless they violate a different rule.
- A YouTube spokesperson said deepfakes will be deleted if they display “malicious intent,” but some deepfakes, such as parody videos, may be allowed on the platform.
YouTube to Ban Misleading Election-Related Content
YouTube announced it will be tightening its policies on election-related content, an announcement that came the same day the 2020 primary season kicked off in Iowa.
Leslie Miller, Google’s Vice President of Government Affairs and Public Policy, laid out new policies the platform will follow in a blog post, including one policy which aims to remove manipulated and doctored content.
For example, that includes videos that make a government official appear to be dead. Speaking to The New York Times, YouTube spokesperson Ivy Choi cited another example: a 2019 video in which Speaker of the House Nancy Pelosi appeared to slur her words. That video had been slowed down to make Pelosi seem impaired, an intentionally misleading edit, Choi said.
However, content that is merely taken out of context will not be removed under YouTube’s guidelines. To illustrate the distinction, Choi pointed to a recent clip of former Vice President Joe Biden that was edited to make it seem as if he had made a racist remark at a campaign event.
In the blog post, Miller also says YouTube will remove content that misleads people about voting and census processes. For example, that includes videos that give incorrect voting dates.
In another move that is reminiscent of the Barack Obama birther conspiracy, Miller announced the platform would remove content that “advances false claims related to the technical eligibility requirements for current political candidates and sitting elected government officials to serve in office.”
Additionally, YouTube will continue to terminate channels that impersonate another person or channel, as well as channels that artificially increase views, likes, and comments.
Recognizing Reputable News Outlets and Creators
While implementing the aforementioned bans, Miller said the platform also aims to “raise up authoritative election news.”
Essentially, Miller is referring to major news outlets like CNN and Fox News, which will be more likely to show up in search results and “watch next” panels. That move, however, is less a new announcement than a continuation of an existing goal on the part of YouTube.
YouTube talked about curating reputable news content in a different blog post in December. The platform has also been making changes in this area over the last couple of years, with Miller saying that because of those changes, consumption of content from authoritative news grew by 60% last year.
Finally, Miller said YouTube will “recognize and reward campaigns, candidates, and political creators.”
“YouTube remains committed to maintaining the balance of openness and responsibility, before, during and after the 2020 U.S. election,” Miller said at the end of the post. “We’ll have even more to share on this work in the coming months.”
How Will YouTube Handle Deepfakes?
Miller’s post doesn’t come as much of a surprise, especially as other major social media platforms like Facebook and Twitter tighten their policies around political content.
Still, that doesn’t mean YouTube is free of scrutiny. The platform will still face critics as it rolls out the new policies, especially as reviewers filter through more than 500 hours of content uploaded every minute. With out-of-context videos comprising a sizeable portion of misleading content, YouTube may also face criticism for opting not to remove those videos.
The new policy has also raised the question of how YouTube will treat political deepfakes. The answer? It depends. According to Choi, if a deepfake video was created with malicious intent, then it would be taken down. Parody videos, however, could remain up depending on their content and context.
“The best way to quickly remove content is to stay ahead of new technologies and tactics that could be used by malicious actors, including technically-manipulated content,” Miller said in the post. “We also heavily invest in research and development. In 2018, we formed an Intelligence Desk to detect new trends surrounding inappropriate content and problematic behaviors, and to make sure our teams are prepared to address them before they become a larger issue.”
See what others are saying: (The Washington Post) (Mashable) (Yahoo News)
Schools Across the U.S. Cancel Classes Friday Over Unverified TikTok Threat
Officials in multiple states said they haven’t found any credible threats but are taking additional precautions out of an abundance of caution.
Schools in no fewer than 10 states either canceled classes or increased their police presence on Friday after a series of TikToks warned of imminent shooting and bomb threats.
Despite that, officials said they found little evidence to suggest the threats were credible. It’s possible no real threat was ever made; it’s also unclear whether the supposed threats originated on TikTok, another social media platform, or elsewhere.
“We handle even rumored threats with utmost seriousness, which is why we’re working with law enforcement to look into warnings about potential violence at schools even though we have not found evidence of such threats originating or spreading via TikTok,” TikTok’s Communications team tweeted Thursday afternoon.
Still, given the uptick in school shootings in the U.S. in recent years, many school districts across the country decided to respond to the rumors. According to The Verge, some districts in California, Minnesota, Missouri, and Texas shut down Friday.
“Based on law enforcement interviews, Little Falls Community Schools was specifically identified in a TikTok post related to this threat,” one school district in Minnesota said in a letter Thursday. “In conversations with local law enforcement, the origins of this threat remain unknown. Therefore, school throughout the district is canceled tomorrow, Friday, December 17.”
In Gilroy, California, one high school that closed its doors Friday said it would push final exams scheduled for that day to January.
According to the Associated Press, several other districts in Arizona, Connecticut, Illinois, Montana, New York, and Pennsylvania stationed more police officers at their schools Friday.
Viral Misinformation or Legitimate Warnings?
As The Verge notes, “The reports of threats on TikTok may be self-perpetuating.”
For example, many of the videos online may have been created in response to initial warnings as more people hopped onto the trend. Amid school cancellations, videos have continued to sprout up — many awash with both rumors and factual information.
“I’m scared off my ass, what do I do???” one TikTok user said in a now-deleted video, according to People.
“The post is vague and not directed at a specific school, and is circulating around school districts across the country,” Chicago Public Schools said in a letter, though it did not identify any specific post. “Please do not re-share any suspicious or concerning posts on social media.”
According to Dr. Amy Klinger, the director of programs for the nonprofit Educator’s School Safety Network, “This is not [a] 2021 phenomenon.”
Instead, she told The Today Show that her network has been tracking school shooting threats since 2013, and she noted that in recent years, they’ve become more prominent on social media.
“It’s not just somebody in a classroom of 15 people hearing someone make a threat,” she said. “It’s 15,000 people on social media, because it gets passed around and it becomes larger and larger and larger.”
See what others are saying: (The Verge) (Associated Press) (People)
Jake Paul Says He “Can’t Get Cancelled” as a Boxer
The controversial YouTuber opened up about what it has been like to go from online fame to professional boxing.
The New Yorker Profiles Jake Paul
YouTuber and boxer Jake Paul talked about his career switch, reputation, and cancel culture in a profile published Monday in The New Yorker.
While Paul rose to fame as the Internet’s troublemaker, he now spends most of his time in the ring. He told the outlet that one difference between YouTube and boxing is that his often controversial reputation lends itself better to his new career.
“One thing that is great about being a fighter is, like, you can’t get cancelled,” Paul said. The profile noted that the sport often rewards and even encourages some degree of bad behavior.
“I’m not a saint,” Paul later continued. “I’m also not a bad guy, but I can very easily play the role.”
Paul also said another difference between his time online and his time in boxing is the level of work. While he says he trains hard, he confessed that making regular YouTube content was in some ways more challenging.
“Being an influencer was almost harder than being a boxer,” he told The New Yorker. “You wake up in the morning and you’re, like, Damn, I have to create fifteen minutes of amazing content, and I have twelve hours of sunlight.”
Jake Paul Vs. Tommy Fury
The New Yorker profile came just after it was announced over the weekend that Paul will fight boxer Tommy Fury in an eight-round cruiserweight bout on Showtime in December.
“It’s time to kiss ur last name and ur family’s boxing legacy goodbye,” Paul tweeted. “DEC 18th I’m changing this wankers name to Tommy Fumbles and celebrating with Tom Brady.”
Both Paul and Fury are undefeated, according to ESPN. Like Paul, Fury has found fame outside of the sport. He has become a reality TV star in the U.K. after appearing on the hit show “Love Island.”
See what others are saying: (The New Yorker) (Dexerto) (ESPN)
Hackers Hit Twitch Again, This Time Replacing Backgrounds With Image of Jeff Bezos
The hack appears to be a form of trolling, though it’s possible that the infiltrators were able to uncover a security flaw while reviewing Twitch’s newly-leaked source code.
Hackers targeted Twitch for a second time this week, but rather than leaking sensitive information, the infiltrators chose to deface the platform on Friday by swapping multiple background images with a photo of former Amazon CEO Jeff Bezos.
According to those who saw the replaced images firsthand, the hack appears to have mostly, and possibly only, affected game directory headers. Though the incident appears to be nothing more than a surface-level prank, it could signal deeper security flaws at the Amazon-owned platform.
For example, it’s possible the hackers could have used leaked internal security data from earlier this week to discover a network vulnerability and sneak into the platform.
The latest jab at the platform came after Twitch assured its users it had seen “no indication” that their login credentials were stolen during the first hack. Still, concerns remain that others could now spot cracks in Twitch’s security systems.
It’s also possible the Bezos hack resulted from what’s known as “cache poisoning,” which, in this case, would refer to a more limited attack that let the infiltrators swap out similar cached images all at once. If true, the hackers likely never gained access to Twitch’s back end.
The swapped photos lasted only several hours before the images were restored.
First Twitch Hack
Despite suspicions and concerns, it’s unclear whether the Bezos hack is related to the major leak of Twitch’s internal data that was posted to 4chan on Wednesday.
That leak exposed Twitch’s full source code — including its security tools — as well as data on how much Twitch has individually paid every single streamer on the platform since August 2019.
It also revealed Amazon’s at least partially developed plans for a cloud-based gaming library, codenamed Vapor, which would directly compete with the massively popular library known as Steam.
Even though Twitch has said users’ login credentials appear to be secure, it announced Thursday that it had reset all stream keys “out of an abundance of caution.” Users are still being urged to change their passwords and to update or enable two-factor authentication if they haven’t already.