- A new report from the New York Times says YouTube’s recommendation algorithm sent researchers from videos with sexual content to videos of children.
- The report has reignited concerns over predators abusing the platform, an issue that first came to light in February.
- YouTube responded by pointing to measures it has already taken, including restricting live features for children, disabling comments on videos featuring children, and limiting recommendations.
- The researchers suggest that YouTube’s recommendations should not feature videos with children at all, but YouTube says it fears a change like that would hurt creators.
Researchers say YouTube’s recommendation algorithm sent them from videos with sexual content to videos featuring minors, increasing the concern about pedophiles and predators abusing the platform, The New York Times reported Monday.
In the article, the Times interviewed a Brazilian mother whose 10-year-old daughter uploaded a video to YouTube. The video, which featured the girl and a friend swimming in a backyard pool, racked up 400,000 views.
“I saw the video again and I got scared by the number of views,” the mother said.
The Times noted that the video was promoted by YouTube months after February, when the company was alerted about issues with pedophiles and predators on the platform.
At the time, YouTuber MattsWhatItIs published a video highlighting predators who would leave timecodes for compromising moments in the comments under videos of children, directing other pedophiles to those specific moments.
YouTube responded by saying it would disable comments on videos featuring minors.
According to this latest report, the majority of videos on YouTube are viewed through the company’s recommendation algorithm. The algorithm creates a playlist of suggested videos YouTube believes the user should watch next. The researchers found that once they began viewing content with sexual themes, YouTube would start to recommend videos featuring children.
“Users do not need to look for videos of children to end up watching them,” the Times states. “The platform can lead them there through a progression of recommendations.”
Often the videos featuring children were uploaded as innocent fun, such as parents sharing home movies or a film made by their child. The concern comes when those innocent videos are recommended to users looking for sexual content.
The Times reports that researchers from Harvard’s Berkman Klein Center tested the recommendation algorithm for themselves while in Brazil. The researchers would start with sexually themed videos and follow the first recommended video shown, eventually landing on content that disturbed them.
The article explains that “videos of women discussing sex, for example, sometimes led to videos of women in underwear or breast-feeding, sometimes mentioning their age: 19, 18, even 16.”
As the researchers continued, YouTube then began suggesting that they watch videos of women seeking “sugar daddies” and adults in children’s clothing.
“From there,” the Times wrote, “YouTube would suddenly begin recommending videos of young and partially clothed children, then a near-endless stream of them drawn primarily from Latin America and Eastern Europe.”
When the Times told YouTube about its findings, the company removed some, but not all, of the videos in question. The article states that the recommendation system changed immediately and no longer linked some of the videos together. According to YouTube, this was likely the result of routine adjustments rather than a deliberate policy change.
YouTube also published a blog post laying out the actions it has already taken to combat pedophiles on the platform. These include restricting live features for children, disabling comments on videos featuring children, and further limiting recommendations.
While these steps may help to combat the problem, according to researchers, the one thing that would truly make children safe is turning off the recommendation system for videos of children.
When the Times pressed the company, YouTube said it was wary of making the change because of how it could hurt creators, since recommendations drive 70 percent of views. However, it did say it would limit recommendations on videos that put children at risk.
See what others are saying: (New York Times) (MIT Technology Review) (Gizmodo)
Schools Across the U.S. Cancel Classes Friday Over Unverified TikTok Threat
Officials in multiple states said they haven’t found any credible threats but are taking additional precautions out of an abundance of caution.
Schools in no fewer than 10 states either canceled classes or increased their police presence on Friday after a series of TikToks warned of imminent shooting and bomb threats.
Despite that, officials said they found little evidence to suggest the threats were credible. It’s possible no real threat was ever made, as it’s unclear whether the supposed threats originated on TikTok, another social media platform, or elsewhere.
“We handle even rumored threats with utmost seriousness, which is why we’re working with law enforcement to look into warnings about potential violence at schools even though we have not found evidence of such threats originating or spreading via TikTok,” TikTok’s Communications team tweeted Thursday afternoon.
Still, given the uptick in school shootings in the U.S. in recent years, many school districts across the country decided to respond to the rumors. According to The Verge, some districts in California, Minnesota, Missouri, and Texas shut down Friday.
“Based on law enforcement interviews, Little Falls Community Schools was specifically identified in a TikTok post related to this threat,” one school district in Minnesota said in a letter Thursday. “In conversations with local law enforcement, the origins of this threat remain unknown. Therefore, school throughout the district is canceled tomorrow, Friday, December 17.”
In Gilroy, California, one high school that closed its doors Friday said it would reschedule final exams that were expected to take place the same day to January.
According to the Associated Press, several other districts in Arizona, Connecticut, Illinois, Montana, New York, and Pennsylvania stationed more police officers at their schools Friday.
Viral Misinformation or Legitimate Warnings?
As The Verge notes, “The reports of threats on TikTok may be self-perpetuating.”
For example, many of the videos online may have been created in response to initial warnings as more people hopped onto the trend. Amid school cancellations, videos have continued to sprout up — many awash with both rumors and factual information.
“I’m scared off my ass, what do I do???” one TikTok user said in a now-deleted video, according to People.
“The post is vague and not directed at a specific school, and is circulating around school districts across the country,” Chicago Public Schools said in a letter, though it did not identify any specific post. “Please do not re-share any suspicious or concerning posts on social media.”
According to Dr. Amy Klinger, the director of programs for the nonprofit Educator’s School Safety Network, “This is not a 2021 phenomenon.”
Instead, she told The Today Show that her network has been tracking school shooting threats since 2013, and she noted that in recent years, they’ve become more prominent on social media.
“It’s not just somebody in a classroom of 15 people hearing someone make a threat,” she said. “It’s 15,000 people on social media, because it gets passed around and it becomes larger and larger and larger.”
Jake Paul Says He “Can’t Get Cancelled” as a Boxer
The controversial YouTuber opened up about what it has been like to go from online fame to professional boxing.
The New Yorker Profiles Jake Paul
YouTuber and boxer Jake Paul talked about his career switch, reputation, and cancel culture in a profile published Monday in The New Yorker.
While Paul rose to fame as the Internet’s troublemaker, he now spends most of his time in the ring. He told the outlet that one difference between YouTube and boxing is that his often controversial reputation lends itself better to his new career.
“One thing that is great about being a fighter is, like, you can’t get cancelled,” Paul said. The profile noted that the sport often rewards and even encourages some degree of bad behavior.
“I’m not a saint,” Paul later continued. “I’m also not a bad guy, but I can very easily play the role.”
Paul also said another difference between his time online and his time in boxing is the level of work. While he says he trains hard, he confessed that there was something more challenging about making regular YouTube content.
“Being an influencer was almost harder than being a boxer,” he told The New Yorker. “You wake up in the morning and you’re, like, Damn, I have to create fifteen minutes of amazing content, and I have twelve hours of sunlight.”
Jake Paul Vs. Tommy Fury
The New Yorker profile came just after it was announced over the weekend that Paul will fight boxer Tommy Fury in an eight-round cruiserweight bout on Showtime in December.
“It’s time to kiss ur last name and ur family’s boxing legacy goodbye,” Paul tweeted. “DEC 18th I’m changing this wankers name to Tommy Fumbles and celebrating with Tom Brady.”
Both Paul and Fury are undefeated, according to ESPN. Like Paul, Fury has found fame outside of the sport. He has become a reality TV star in the U.K. after appearing on the hit show “Love Island.”
See what others are saying: (The New Yorker) (Dexerto) (ESPN)
Hackers Hit Twitch Again, This Time Replacing Backgrounds With Image of Jeff Bezos
The hack appears to be a form of trolling, though it’s possible that the infiltrators were able to uncover a security flaw while reviewing Twitch’s newly-leaked source code.
Hackers targeted Twitch for a second time this week, but rather than leaking sensitive information, the infiltrators chose to deface the platform on Friday by swapping multiple background images with a photo of former Amazon CEO Jeff Bezos.
According to those who saw the replaced images firsthand, the hack appears to have mostly — and possibly only — affected game directory headers. Though the incident appears to be nothing more than a surface-level prank, it could signal deeper security flaws, a concern heightened by the fact that Amazon owns Twitch.
For example, it’s possible the hackers could have used leaked internal security data from earlier this week to discover a network vulnerability and sneak into the platform.
The latest jab at the platform came after Twitch assured its users it has seen “no indication” that their login credentials were stolen during the first hack. Still, concerns have remained regarding the potential for others to now spot cracks in Twitch’s security systems.
It’s also possible the Bezos hack resulted from what’s known as “cache poisoning,” which, in this case, would refer to a more limited form of hacking that allowed the infiltrators to manipulate similar images all at once. If true, the hackers likely would not have been able to access Twitch’s back end.
The swapped images lasted only several hours before being restored to their previous state.
First Twitch Hack
Despite suspicions and concerns, it’s unclear whether the Bezos hack is related to the major leak of Twitch’s internal data that was posted to 4chan on Wednesday.
That leak exposed Twitch’s full source code — including its security tools — as well as data on how much Twitch has individually paid every single streamer on the platform since August 2019.
It also revealed Amazon’s at least partially developed plans for a cloud-based gaming library, codenamed Vapor, which would directly compete with the massively popular library known as Steam.
Even though Twitch has said users’ login credentials appear to be secure, it announced Thursday that it has reset all stream keys “out of an abundance of caution.” Users are still being urged to change their passwords and update or implement two-factor authentication if they haven’t already.