Twitter to Investigate Auto-Crop Algorithm After Accusations of Racial Bias

  • Twitter users believe they discovered a racial bias in an algorithm the platform uses to automatically select which part of an image it shows in a photo preview.
  • Many argued that the auto-cropping tool showed a white bias after testing the theory with photos of Black and white people, cartoon characters, and even dogs. 
  • However, others who tested the theory generated results that did not support this idea. Regardless, most users admit that these experiments have their limitations and agree that the current results at least show that this is something worth looking into.
  • The company released a statement saying it tested its system for bias in the past but admitted it needs to conduct further analysis of it. Online, Twitter employees seemed to welcome the public discourse and the company promised to share its results as well as further actions it may take.

Potential White Bias 

Twitter responded to concerns over its automatic cropping algorithm Sunday after users believed they discovered a racial bias in the tool.

In 2018, Twitter began auto-cropping photos in its timeline previews to prevent them from taking up too much space in the main feed and to allow multiple photos to appear in the same tweet. To do this, the company uses several algorithmic tools that focus on the most important part of the picture, like faces or text. 
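Twitter has not published its model, but the general technique, saliency-based cropping, can be sketched in a few lines. Below is a minimal illustration in Python using OpenCV’s classical spectral-residual saliency detector (from the opencv-contrib-python package) as a stand-in for Twitter’s learned model; the function name and crop dimensions are illustrative, not Twitter’s actual code.

```python
# A minimal sketch of saliency-based cropping, NOT Twitter's model:
# find the most "interesting" point in an image and crop around it.
import cv2

def saliency_crop(image, crop_w, crop_h):
    """Crop `image` to (crop_w, crop_h), centered on its most salient point."""
    h, w = image.shape[:2]
    crop_w, crop_h = min(crop_w, w), min(crop_h, h)

    # Spectral-residual saliency: a classical stand-in for the learned
    # saliency model Twitter describes.
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = detector.computeSaliency(image)
    if not ok:
        raise RuntimeError("saliency computation failed")

    # Center the crop window on the peak of the saliency map, clamped
    # so the window stays inside the image.
    _, _, _, (x, y) = cv2.minMaxLoc(saliency_map)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    return image[top:top + crop_h, left:left + crop_w]

# Usage: preview = saliency_crop(cv2.imread("photo.jpg"), 600, 335)
```

Whatever detector is used, the preview simply follows the saliency peak, which is why a biased peak produces a biased crop.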

However, users recently began to spot issues with the algorithm. The first person credited with highlighting a potential problem was PhD student Colin Madland, who made the discovery while documenting a different racial bias he believed he had found in the video-conferencing service Zoom.

Madland tweeted that when his Black colleague used a virtual background on Zoom, the software erased his head. When Madland uploaded examples showing this happening to his colleague but not to himself, he noticed that Twitter’s preview showed only his own face.

Soon after, others followed up with more targeted experiments. Cryptography and infrastructure engineer Tony Arcieri, for example, tweeted two long images featuring Senate Majority Leader Mitch McConnell and former President Barack Obama.

The two photos have the politicians stacked on top of each other in different orders but with white space in between them. The experiment showed that Twitter would focus on McConnell, no matter what order the photos were stacked in.
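For anyone who wants to run the same kind of test, the setup is simple to reconstruct. Below is a rough sketch using the Pillow imaging library; the file names and gap size are placeholders, not values from the original tweets.

```python
# A hypothetical reconstruction of the stacked-image test: two photos
# placed far apart on a white canvas, in both orderings.
from PIL import Image

def stack_with_gap(top_path, bottom_path, gap=1500, bg="white"):
    """Stack two photos vertically with a large blank gap between them."""
    top, bottom = Image.open(top_path), Image.open(bottom_path)
    width = max(top.width, bottom.width)
    canvas = Image.new("RGB", (width, top.height + gap + bottom.height), bg)
    canvas.paste(top, ((width - top.width) // 2, 0))
    canvas.paste(bottom, ((width - bottom.width) // 2, top.height + gap))
    return canvas

# Build both orderings, then tweet them and compare which face each
# preview crop centers on.
stack_with_gap("mcconnell.jpg", "obama.jpg").save("test_a.jpg")
stack_with_gap("obama.jpg", "mcconnell.jpg").save("test_b.jpg")
```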

Another user found that the algorithm even focused on McConnell when two photos of Obama were present in a single stack.

A similar white preference appeared in examples of Black and white men in suits, The Simpsons characters Lenny and Carl, and even black and white dogs.

Examples That Don’t Support White Bias Theory

Others looking into this theory of a white bias found results that did not support the idea. 

For example, one user found that photos of Obama were cropped for the preview over photos of Donald Trump. 

Still, some researching the trend noted that these experiments have their limitations and are likely influenced by many other factors. Some believe the algorithm recognizes high-profile figures or weighs photo elements such as brightness and contrast.

Twitter’s Chief Design Officer (CDO), Dantley Davis, even suggested that the cropping choice sometimes takes the brightness of the background into consideration.

However, others found examples that contradicted that idea. Regardless, the tests did a lot to convince people that there was something worth looking into, including Davis, who has been experimenting himself.

He’s not alone in his research. In fact, plenty of other Twitter users have been going to great lengths to track their results as they try to study what is going on.

Twitter Promises to Investigate 

On Sunday, a Twitter spokesperson eventually released a statement admitting that the company had work to do.

“Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing,” the company explained.

“But it’s clear from these examples that we’ve got more analysis to do. We’ll continue to share what we learn, what actions we take, and will open source our analysis so others can review and replicate.”

Davis also isn’t the only employee who has appeared to welcome the public discourse. The company’s Chief Technology Officer, Parag Agrawal, tweeted, “This is a very important question. To address it, we did analysis on our model when we shipped it, but needs continuous improvement. Love this public, open, and rigorous test — and eager to learn from this.”

See what others are saying: (The Next Web) (The Guardian) (Mashable)

FDA Recalls 11,000 Ice Cream Containers and Sportmix Pet Food Products

  • Over 11,000 cartons of Weis Markets ice cream were recalled after a customer discovered an “intact piece of metal equipment” inside a 48-ounce container of the brand’s Cookies and Cream flavor. 
  • The FDA also expanded a recall of Sportmix pet food over concerns that the products may contain potentially fatal levels of aflatoxins.
  • So far, more than 70 dogs have died and more than 80 pets have become sick after eating Sportmix food. The agency recommends taking your pet to a veterinarian if they have eaten the recalled products, even if they aren’t showing symptoms.

Metal Pieces in Weis Ice Cream Cause Massive Recall

The Food and Drug Administration announced two major product recalls this week following serious consumer complaints.

The first came Sunday when the agency revealed that over 11,000 cartons of Weis Markets ice cream were recalled. “The products may be contaminated with extraneous material, specifically metal filling equipment parts,” the FDA’s statement explained.

At least one customer discovered an “intact piece of metal equipment” inside a 48-ounce container of the brand’s Cookies and Cream flavor.

Those containers were available in 197 Weis Markets grocery stores, but they have already been pulled from shelves. The products have a sell-by date of October 21, 2020, and customers who purchased the product can return it for a full refund.

Along with removing 10,869 units of the Cookies and Cream containers, the brand also recalled 502 three-gallon bulk containers of Klein’s Vanilla Dairy Ice Cream.

Those bulk containers were not for retail sale, but were instead sold to one retail establishment in New York and have since been removed.

Sportmix Recall Follows 70 Pet Deaths, 80 Illnesses

The second major recall came Tuesday when the FDA expanded a recall of Sportmix dog food.

According to the agency, the product may contain potentially fatal levels of aflatoxins – toxins produced by the Aspergillus flavus mold, which can grow on corn and other grains used as ingredients in pet food.

As of Tuesday, more than 70 pets have died and more than 80 have gotten sick after eating Sportmix pet food. Not all the cases have been officially confirmed as aflatoxin poisoning at this time, and the count may not reflect the total number of pets affected.

For now, the FDA is asking pet owners and veterinary professionals to stop using the impacted Sportmix products that have an expiration date on or before July 9, 2022, and “05” in the date or lot code.

More detailed information about the recalled products can be found on the FDA’s announcement page.

Pets experiencing aflatoxin poisoning may have symptoms like sluggishness, loss of appetite, vomiting, jaundice, and/or diarrhea. In some cases, this toxicity can cause long-term liver issues without showing any symptoms. Because of this, pet owners are being advised to take their animals to a veterinarian if they have eaten the recalled products, even if they aren’t showing symptoms.

There is currently no evidence that pet owners who have handled the affected food are at risk of aflatoxin poisoning. Still, the FDA recommends washing your hands after handling pet food.

See what others are saying: (CNN) (USA TODAY) (PEOPLE)

Signal and Telegram Downloads Surge After WhatsApp Announces It Will Share Data With Facebook

  • Downloads for Signal and Telegram have skyrocketed in the last week, with the encrypted messaging apps gaining 7.5 million and 9 million new users, respectively.
  • The growth comes after WhatsApp said it will require almost all users to share personal data with its parent company Facebook.
  • It also comes after Parler’s shutdown and bans against President Trump from Twitter and Facebook, which prompted his supporters to turn specifically to Telegram.

Telegram and Signal See Big Boost

Downloads for the encrypted messaging apps Signal and Telegram have surged in the last week after WhatsApp announced that it will start forcing all users outside the E.U. and U.K. to share personal data with Facebook.

Last week, WhatsApp, which is owned by Facebook, told users that they must allow Facebook and its subsidiaries to collect their phone numbers, locations, and the phone numbers of their contacts, among other things.

Anyone who does not agree to the new terms by Feb. 8 will lose access to the messaging app. The move prompted many to call for people to delete WhatsApp and start using other services like Signal or Telegram.

Now, it appears those calls to use other encrypted messaging apps have been heard. According to data from app analytics firm Sensor Tower, Signal saw 7.5 million installs globally through the App Store and Google Play from Jan. 6 to Jan. 10 alone, marking a 4,200% increase from the previous week.

Meanwhile, Telegram saw even more downloads. During the same time, it gained 9 million users, up 91% from the previous week. It was also the most downloaded app in the U.S.

WhatsApp responded to the exodus by attempting to clarify its new policy in a statement Monday.

“We want to be clear that the policy update does not affect the privacy of your messages with friends or family in any way,” the company said. “Instead, this update includes changes related to messaging a business on WhatsApp, which is optional, and provides further transparency about how we collect and use data.”

Other Causes of App Growth

Notably, some of the spike in Telegram downloads specifically also comes from supporters of President Donald Trump flocking to alternative platforms after Parler was shut down and Trump was banned from Twitter and Facebook.

Far-right chat room membership on the platform has increased significantly in recent days, NBC News reported. Conversations in pre-existing chat rooms where white supremacist content has been shared for months have also increased since the pro-Trump insurrection at the U.S. Capitol last week.

According to the outlet, many of the president’s supporters have moved their operations to the app in large part because it has very lax community guidelines. Companies like Facebook and Twitter have recently cracked down on groups and users that share incendiary content and conspiracy theories or attempt to organize events that could lead to violence.

There have been several documented instances of Trump supporters now using Telegram channels to discuss planned events and urge acts of direct violence. Per NBC, in one channel named “fascist,” users have called on others to “shoot politicians” and “encourage armed struggle.” A post explaining how to radicalize Trump supporters into becoming neo-Nazis also made the rounds on the “fascist” channel, among others.

Membership in one channel frequently used by members of the Proud Boys has grown by more than 10,000 in recent days, seemingly attracting users directly from Parler.

“Now that they forced us off the main platforms it doesn’t mean we go away, it just means we are going to go to places they don’t see,” a user posted in the chatroom, according to NBC.

See what others are saying: (NBC News) (Business Insider) (CNBC)

Pornhub Removes All Unverified User Uploads, Taking Down Most of Its Videos

  • Pornhub is now removing all videos that were not uploaded by verified users.
  • Before the massive purge, the site hosted around 13.5 million videos. As of Monday morning, there were only 2.9 million videos left. 
  • The move is part of a series of sweeping changes the company made days after The New York Times published a shocking op-ed detailing numerous instances of abuse on the site, including nonconsensual uploads of underage girls.
  • Following the article, numerous businesses cut ties with the company, including Mastercard and Visa, which both announced Thursday that they will not process any payments on the site.

Pornhub Purges Videos

Pornhub removed the vast majority of its existing videos Monday, just hours after the company announced that it would take down all existing videos uploaded by non-verified users.

Before the new policy was announced Sunday night, Pornhub reportedly hosted about 13.5 million videos, according to the count displayed on the site’s search bar. As of writing, that search bar shows just over 2.9 million videos.

The decision comes less than a week after the company announced it would only allow video uploads from content partners and members of its Model program.

At the time, Pornhub claimed it made the decision following an independent review launched in April to eliminate illegal content. However, many speculated that it was actually in large part due to an op-ed published in The New York Times just days before. That piece, among other things, found that the site had been hosting videos of young girls uploaded without their consent, including some content where minors were raped or assaulted.

The article prompted a wave of backlash against Pornhub and calls for other businesses to cut ties with the company. On Thursday, both Visa and Mastercard announced that they would stop processing all payments on the site.

“Our investigation over the past several days has confirmed violations of our standards prohibiting unlawful content on their site,” Mastercard said in a statement.

Less than an hour later, Visa tweeted that it would also be suspending payments while it completed its own investigation.

Pornhub Claims It’s Being Targeted

However, in its blog post announcing the most recent decision, Pornhub claimed that it was being unfairly targeted.

Specifically, the company noted that Facebook’s own transparency report found 84 million instances of child sexual abuse content over the last three years. By contrast, a report by the third-party Internet Watch Foundation found 118 similar instances on Pornhub in the same time period.

Notably, the author of The Times report, Nicholas Kristof, specifically said the Internet Watch Foundation’s findings represented a massive undercount, and that he was able to find hundreds of these kinds of videos on Pornhub in just half an hour.

Still, the site used the disputed numbers to point a finger at others.

“It is clear that Pornhub is being targeted not because of our policies and how we compare to our peers, but because we are an adult content platform,” the statement continued.

“Every piece of Pornhub content is from verified uploaders, a requirement that platforms like Facebook, Instagram, TikTok, YouTube, Snapchat and Twitter have yet to institute,” the company added. 

However, Pornhub’s implication that it is somehow more responsible because it only lets verified users post content is a flawed comparison. First, Pornhub is a platform created exclusively for porn, content that the social media companies it name-checked explicitly prohibit.

Second, the vast majority of people who use those platforms are not verified, and it would be impossible for a company like Facebook or YouTube to limit content to verified users without entirely undermining its own purpose.

Verification Concerns

Even beyond that, there are still questions about Pornhub’s verification process. According to the site, all someone needs to do to become verified is have a Pornhub account with an avatar and upload a selfie of themselves holding a piece of paper with their username and Pornhub.com written on it.

While the company did tell reporters the process would be made more thorough sometime next year, it did not provide any specific details, prompting questions about how exhaustive the verification process will ultimately be.

That question is highly important because, at least under Pornhub’s current policies, verification makes users eligible to monetize their videos as part of the ModelHub program.

If the new verification process is still weak or has loopholes, people could easily slip through the cracks and continue to profit. On the other side, there are also big concerns among sex workers that if the process is too restrictive, they will not be able to make money on the platform.

That concern has already been exacerbated by some of the other actions taken since The Times article was published. For example, after Mastercard and Visa made their announcements, numerous sex workers and activists condemned the decision, saying it would seriously hurt how porn performers collect income — not just on Pornhub, but on other platforms as well.

“By targeting Pornhub and successfully destroying the ability for independent creators to monetize their content, they have made it easier to remove payment options from smaller platforms too,” model Avalon Fey told Motherboard last week. “This has nothing to do with helping abused victims, and everything to do with hurting online adult entertainers to stop them from creating and sharing adult content.”  

Other performers also expressed similar concerns that the move could spill over to smaller platforms.

“I am watching to see if my OnlyFans will be their next target and sincerely hoping not,” amateur performer Dylan Thomas also told the outlet.

“Sex workers are scared by this change, despite not having uploaded any illegal content,” Fey continued, “because we have seen these patterns before and have had sites and payment processors permanently and unexpectedly shut down.”

See what others are saying: (Motherboard) (The Verge) (Bloomberg)
