#BroomChallenge Debunked: You Can Stand a Broom Up Any Day of the Year

  • A viral tweet claimed NASA said February 10 would be the only day a broom could stand up on its own because of “the gravitational pull.”
  • Internet users, celebrities, and influencers posted photos and videos of themselves testing out the theory.
  • But the trick can actually be done any day and is a gimmick that usually appears around this time of year. 

#BroomChallenge Sweeps the Internet 

By now you’ve probably seen people all over the internet posting pictures and videos showing their brooms standing up all on their own. 

That’s because a viral tweet shared Monday claimed NASA had said it was the only day a broom could do this “because of the gravitational pull.”

Though there isn’t actually any information from NASA that supports this, the internet ran with it anyway. Thousands of people took to social media to post themselves trying the trick, thrusting #BroomChallenge and related terms onto Twitter’s trending page.

Celebrities like Paula Abdul, Future, DJ Khaled, and tons of others joined in on the fun with posts of their own. Even social media stars like Colleen Ballinger, Austin McBroom, and LaurDIY tested the theory out.

The trend eventually morphed into more than just brooms, with people posting videos of everything from Roombas to chicken wings standing upright. 

The trend even got so annoying that people like Chrissy Teigen pointed out how dumb it was, though she later backtracked after seeing how much joy it brought people. 

Myth Debunked 

Well, we hate to break it to you, but the truth is you can make your broom stand up on its own any day of the year. And it has nothing to do with the Earth’s gravitational pull on a particular day, planetary alignments, or a full moon, despite what other internet users might tell you.

It’s actually just a matter of balance. A broom’s center of gravity is low, sitting directly over the bristles. So if you can get the bristles positioned right (like a tripod), your broom will stand upright any time of the year.
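For anyone who wants the physics spelled out, this is the standard static-balance condition (a rough sketch; the dimensions below are made up for illustration, not measured from any real broom). A resting object stays upright as long as the vertical line through its center of gravity falls inside its base of support, so a broom with its center of gravity at height \(h\) and a bristle base of half-width \(r\) can tolerate tilts of up to about

\[
\theta_{\max} = \arctan\!\left(\frac{r}{h}\right), \qquad r = 4\,\text{cm},\; h = 12\,\text{cm} \;\Rightarrow\; \theta_{\max} \approx 18^\circ.
\]

Nothing in that condition depends on the date, which is why the trick works in July just as well as on February 10.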

However, the success of this challenge also depends on the kind of broom you have. If it wasn’t already obvious, flat-bottomed brooms are more likely to stand upright. So if this challenge didn’t work for you, you might just have a broom that isn’t the right shape for it. Still, the failed attempts shared across the internet made for some pretty hilarious posts as well.

Recurring Trend

And if this whole trend seems familiar to you, that’s because it’s not exactly new. The gimmick pops up almost every year around the spring equinox, which this year doesn’t fall until March 19.

It actually spread among Twitter users in Mexico and Brazil earlier this month, according to the BBC. The fact-checking site Snopes even noted that the same challenge appeared in February 2012 and also exists in another form known as the balancing egg trick.

There’s even a YouTube clip from that year of a CNN meteorologist debunking the broom myth. 


Dr. Becky Smethurst, an astrophysicist at the University of Oxford, told the BBC she could not believe the misinformation being spread online. “Broom balancing itself is not that impressive. It’s a good party trick. The broom is wide at the bottom and at the right angle can be balanced,” she said.

“We feel the same gravitational pull at all times of the year, so no matter whether it’s the spring equinox or not, the way the Earth is tilted would never be the cause of ordinary objects just balancing.”

Smethurst also said she was surprised by how far the false theory spread. “When I saw this today on social media and couldn’t believe what I was seeing in terms of the misinformation that was spreading. It highlights the importance of social media verification and using trusted sources from the scientific community.”

But LA-based meteorologist Cory Smith found the trend humorous and felt it provided an opportunity to talk about science. 

Smith told the BBC, “While it is discouraging to see people believe a false premise for something like this, it still makes for a fun and easy social media challenge and a nice little experiment to talk about physics and the centre of gravity.”

Eventually, even NASA tweeted about the challenge, showing people that basic physics works every day of the year.

So don’t be surprised if you see another appearance of the #BroomChallenge in the near future, and if you do, at least now you know not to be fooled. 

See what others are saying: (CNN) (The Daily Dot) (BBC)

Influencers Exposed for Posting Fake Private Jet Photos

  • A viral tweet showed a studio set in Los Angeles, California, that is staged to look like the inside of a private jet.
  • Some influencers were called out for using that very same studio to take social media photos and videos.
  • While some slammed them for faking their lifestyles online, others poked fun at the behavior and noted that this is something stars like Bow Wow have been caught doing before.
  • Others have even gone so far as to buy and pose with empty designer shopping bags to pretend they went on a massive spending spree.

A tweet went viral over the weekend exposing the secret behind some influencer travel photos.

“Nahhhhh I just found out LA ig girlies are using studio sets that look like private jets for their Instagram pics,” Twitter user @maisonmelissa wrote Thursday.

“It’s crazy that anything you’re looking at could be fake. The setting, the clothes, the body… idk it just kinda of shakes my reality a bit lol,” she continued in a tweet that quickly garnered over 100,000 likes.

The post included photos of a private jet setup that’s actually a studio in California, which you can rent for $64 an hour on the site Peerspace.

As the tweet picked up attention, many began calling out influencers who they noticed had posted photos or videos in that very same studio.

Perhaps the most notable influencers to be called out were the Mian Twins, who eventually edited their Instagram captions to admit they were on a set.

While a ton of people were upset about this, others pointed out that it’s not exactly a new idea. Even Bow Wow was famously called out in 2017 for posting a private plane photo on social media before being spotted on a commercial flight.

Twitter users even noted other ridiculous things some people do for the gram, like buying empty designer shopping bags to pretend they’ve gone on a shopping spree.

Meanwhile, others poked fun at the topic, like Lil Nas X, who is never one to miss out on a viral internet moment. He photoshopped himself into the fake private jet, sarcastically writing, “thankful for it all,” in his caption.

So ultimately, it seems like the moral of this story is: don’t believe everything you see on social media.

See what others are saying: (LADBible) (Dazed Digital) (Metro UK)

South Korea’s Supreme Court Upholds Rape Case Sentences for Korean Stars Jung Joon-young and Choi Jong-hoon

  • On Thursday morning, the Supreme Court in Seoul upheld the sentences of Jung Joon-young and Choi Jong-hoon for aggravated rape and related charges.
  • Jung will serve five years in prison, while Choi will serve two and a half.
  • Videos of Jung, Choi, and others raping women were found in group chats uncovered during investigations into Seungri, of the k-pop group BigBang, as part of the Burning Sun Scandal.
  • The two stars tried to claim that some of the sex was consensual, but the courts ultimately found testimony from survivors trustworthy. Courts did, however, have trouble finding victims who were willing to come forward over fears of social stigma.

Burning Sun Scandal Fallout

South Korea’s Supreme Court upheld the rape verdicts against stars Jung Joon-young and Choi Jong-hoon on Thursday after multiple appeals by the stars and their co-defendants.

Both Jung and Choi were caught up in an ever-growing scandal over the rapes and sexual assaults of multiple women, crimes that were filmed and distributed to chatrooms without the victims’ consent.

The entire scandal came to light in March of 2019, when Seungri of the k-pop group BigBang was embroiled in what’s now known as the Burning Sun Scandal. As part of an investigation into the scandal, police found a chatroom that featured some stars engaging in what seemed to be non-consensual sex with various women. Police found that many of the messages in the KakaoTalk chatroom (the major messaging app in South Korea) from between 2015 and 2016 were sent by Jung and Choi.

A Year of Court Proceedings

Jung, Choi, and five other defendants found themselves in court in November 2019 facing charges related to filming and distributing their acts without the consent of the victims, as well as aggravated rape charges, which in South Korea means rape involving two or more perpetrators.

The court found them all guilty of the rape charge. Jung was sentenced to six years behind bars, while Choi and the others were sentenced to five years. Jung was given a harsher sentence because he was also found guilty of filming and distributing the videos of their acts without the victims’ consent.

During proceedings, the court had trouble getting victims to tell their stories. Many feared being shamed or judged over the incidents and didn’t want to risk that information going public. Compounding the court’s problems was the fact that other victims were hard to find.

For their part, the defendants argued that the sexual acts with some of the victims were consensual, though this did not rule out the possibility that there were still victims of their crimes. However, the court found that the survivors’ testimony was trustworthy and contradicted the defendants’ claims.

Jung and Choi appealed the decision, which led to more court proceedings. In May 2020, the Seoul High Court upheld their convictions but reduced their sentences to five years for Jung and two and a half years for Choi.

Choi’s sentence was reduced because the court found that he had reached a settlement with a victim.

The decision was appealed a final time to the Supreme Court. This time, Jung and Choi argued that most of the evidence against them, notably the KakaoTalk chatroom messages and videos, was illegally obtained by police.

On Thursday morning, the Supreme Court ultimately disagreed with Jung and Choi and said their revised sentences would stand.

Jung, Choi, and the other defendants will also still have to complete 80 hours of sexual violence treatment courses and are banned from working with children for five years.

See what others are saying: (ABC) (Yonhap News) (Soompi)

YouTube Says It Will Use AI to Age-Restrict Content

  • YouTube announced Tuesday that it would be expanding its machine learning to handle age-restricting content.
  • The decision has been controversial, especially after news that other AI systems employed by the company took down videos at nearly double the normal rate.
  • The decision likely stems from both legal responsibilities in some parts of the world and practical concerns about the amount of content uploaded to the site.
  • It might also help with moderator burnout, since the platform is currently understaffed and struggles with extremely high turnover.
  • In fact, the platform still faces a lawsuit from a moderator claiming the job gave them Post Traumatic Stress Disorder. The suit also claims the company offered few resources to cope with the content moderators are required to watch.

AI Age Restrictions

YouTube announced Tuesday that it will use AI and machine learning to automatically apply age restrictions to videos.

In a recent blog post, the platform wrote, “our Trust & Safety team applies age-restrictions when, in the course of reviewing content, they encounter a video that isn’t appropriate for viewers under 18.”

“Going forward, we will build on our approach of using machine learning to detect content for review, by developing and adapting our technology to help us automatically apply age-restrictions.”

Flagged videos would effectively be blocked from being viewed by anyone who isn’t signed into an account or whose account indicates they are under 18. YouTube stated these changes were a continuation of its efforts to make the platform a safer place for families. It initially rolled out YouTube Kids as a dedicated platform for those under 13, and now it wants to sanitize the main site more broadly. Notably, though, it doesn’t plan to turn the entire platform into a new YouTube Kids.

It’s also not a coincidence that this move helps YouTube better fall in line with regulations across the world. In addition to the AI age restrictions, users in Europe may face extra steps if YouTube can’t confirm their age, such as providing a government ID or credit card to prove they are over 18.

YouTube did say there will be an appeals process for age-restricted videos, which puts each appealed video in front of an actual person to check it.

On that note, just days before announcing the AI age restrictions, YouTube also said it would be expanding its moderation team, which had largely been on hiatus because of the pandemic.

It’s hard to say how much these changes will actually affect creators or how much money they can make from the platform. The only assurances YouTube gave were to creators who are part of the YouTube Partner Program.

“For creators in the YouTube Partner Program, we expect these automated age-restrictions to have little to no impact on revenue as most of these videos also violate our advertiser-friendly guidelines and therefore have limited or no ads.”

In other words, most of the videos likely to be restricted already make little or nothing from ads, and that’s unlikely to change.

Community Backlash

Every time YouTube makes a big change there are a lot of reactions, especially when it hands processes over to AI. Tuesday’s announcement was no different.

Under YouTube’s tweet announcing the changes, common responses included complaints like, “what’s the point in an age restriction on a NON kids app. That’s why we have YouTube kids. really young kids shouldn’t be on normal youtube. So we don’t realistically need an age restriction.”

“Please don’t implement this until you’ve worked out all the kinks,” one user pleaded. “I feel like this might actually hurt a lot of creators, who aren’t making stuff for kids, but get flagged as kids channels because of bright colors and stuff like that”

Worries about hiccups in the new system’s rollout were common among users, though it’s possible that YouTube’s Sept. 20 announcement that it would bring human moderators back to the platform was made to help balance out how much damage a new AI could do.

In a late-August transparency report, YouTube found that AI moderation was far more restrictive. When the moderation team was first downsized between April and June, YouTube’s AI largely took over and removed around 11 million videos, double the normal rate.

YouTube did allow creators to appeal those decisions; about 300,000 videos were appealed, and about half of those were reinstated. Facebook ran into a similar problem and will likewise bring back moderators to handle both restricted content and the upcoming election.

Other Reasons for the Changes

YouTube’s decision to expand its use of AI not only falls in line with various laws regarding the verification of a user’s age and what content is widely available to the public, but was also likely made for practical reasons.

The site gets over 400 hours of content uploaded every minute. Even accounting for different time zones and staggered work schedules, YouTube would need to employ over 70,000 people just to check what’s uploaded to the site, as the rough math below shows.
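To see where a figure like that comes from (a back-of-the-envelope sketch: the 400 hours per minute is the figure above, while the eight-hour viewing shift per moderator is an assumption):

\[
400\ \tfrac{\text{hours}}{\text{minute}} \times 60 \times 24 = 576{,}000\ \tfrac{\text{hours}}{\text{day}}, \qquad \frac{576{,}000\ \text{hours}}{8\ \text{hours per moderator}} = 72{,}000\ \text{moderators}.
\]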

Outlets like The Verge have published a series of reports about how YouTube, Google, and Facebook moderators are dealing with depression, anger, and Post Traumatic Stress Disorder because of their jobs. These issues were particularly prevalent among people working in what YouTube calls the “terror” or “violent extremism” queue.

One moderator told The Verge, “Every day you watch someone beheading someone, or someone shooting his girlfriend. After that, you feel like wow, this world is really crazy. This makes you feel ill. You’re feeling there is nothing worth living for. Why are we doing this to each other?”

That same individual noted that since working there, he began to gain weight, lose hair, have a short temper, and experience general signs of anxiety.

On top of these claims, YouTube is also facing a lawsuit filed Monday in a California court by a former content moderator.

The complaint states that Jane Doe, “has trouble sleeping and when she does sleep, she has horrific nightmares. She often lays awake at night trying to go to sleep, replaying videos that she has seen in her mind.

“She cannot be in crowded places, including concerts and events, because she fears mass shootings. She has severe and debilitating panic attacks,” it continued. “She has lost many friends because of her anxiety around people. She has trouble interacting and being around kids and is now scared to have children.”

These issues allegedly weren’t limited to people working the “terror” queue; they extended to anyone training to become a moderator.

“For example, during training, Plaintiff witnessed a video of a smashed open skull with people eating from it; a woman who was kidnapped and beheaded by a cartel; a person’s head being run over by a tank; beastiality; suicides; self-harm; children being rapped [sic]; births and abortions,” the complaint alleges.

“As the example was being presented, Content Moderators were told that they could step out of the room. But Content Moderators were concerned that leaving the room would mean they might lose their job because at the end of the training new Content Moderators were required to pass a test applying the Community Guidelines to the content.”

During their three-week training, moderators allegedly don’t receive much resilience training or wellness resources.

These kinds of lawsuits aren’t unheard of. Facebook faced a similar suit in 2018, where a woman claimed that during her time as a moderator she developed PTSD as a result of “constant and unmitigated exposure to highly toxic and extremely disturbing images at the workplace.”

That case hasn’t yet been decided in court, but Facebook and the plaintiff have agreed to settle for $52 million, pending approval from the court.

The settlement would only apply to U.S. moderators.

See what others are saying: (CNET) (The Verge) (Vice)
