

EU Rules Facebook Can Be Forced to Remove Content Worldwide


  • The EU’s highest court has ruled that if one EU-member country decides content posted on Facebook is illegal, Facebook can be forced to remove specific content worldwide.
  • Facebook and other critics argued the rule will conflict with freedom of expression laws in other countries, because content that one country deems illegal might be protected as free speech in another.
  • Some critics also claimed the rule will allow authoritarian leaders to justify censorship and stifle political dissent.

European Court of Justice Ruling

The European Union’s highest court ruled Thursday that Facebook can be ordered to remove specific content worldwide if one EU-member country finds it illegal.

In a statement, the European Court of Justice said that if the national court of one EU country decides a post on Facebook is illegal, Facebook will be required to remove all duplicates of that post: not just in that EU country, but everywhere in the world.

The ruling also says that in some cases, posts that are merely similar to the one deemed illegal will have to be removed.

The ECJ made the decision after Austrian politician Eva Glawischnig-Piesczek sued Facebook in Austrian courts demanding that the company remove a defamatory comment someone posted about her, as well as any “equivalent” comments disparaging her.

Reportedly, the post in question was made by a Facebook user who shared a link to a news article that called Glawischnig-Piesczek a “lousy traitor of the people,” a “corrupt oaf,” and a member of a “fascist party.”

Facebook initially refused to remove the post, which in many countries would still be considered acceptable political speech. However, Austrian courts ruled that the post was intended to damage her reputation, and the Austrian Supreme Court referred the case to the ECJ.

In the ECJ statement, the high court did clarify that Facebook and other social media companies are not liable for illegal content posted on their platforms as long as they were unaware it was illegal or removed it quickly once they became aware.

Regardless, the ruling is still a major blow to Facebook and a significant change, placing far more responsibility on the tech giant to police content on its platform.

Facebook’s Response

It should not come as a surprise that Facebook is not happy with the decision.

Before the high court’s decision, Facebook and others critical of the rule argued that allowing one country to force a platform to remove material globally limits free speech. Facebook also argued that the decision would most likely force it to rely on automated content filters.

Some activists have claimed automated filters could cause legitimate posts to be taken down because the filters cannot necessarily tell whether a post is ironic, satirical, or a meme, a problem most grandparents also seem to have on Facebook.

Facebook condemned the ECJ ruling in a statement, where it argued that internet companies should not be responsible for monitoring and removing speech that might be illegal in one specific country.

“It undermines the long-standing principle that one country does not have the right to impose its laws on speech on another country,” the statement said. “It also opens the door to obligations being imposed on internet companies to proactively monitor content and then interpret if it is ‘equivalent’ to content that has been found to be illegal.”

“In order to get this right national courts will have to set out very clear definitions on what ‘identical’ and ‘equivalent’ means in practice,” Facebook continued. “We hope the courts take a proportionate and measured approach, to avoid having a chilling effect on freedom of expression.”

Free Speech Debate

Facebook’s concerns have been echoed by some experts in the field. Thomas Hughes, executive director of the UK rights group Article 19, told Reuters that one country’s decision to remove content that is illegal within its borders could lead to the removal of content that should be protected as free speech in other countries.

“Compelling social media platforms like Facebook to automatically remove posts regardless of their context will infringe our right to free speech and restrict the information we see online,” Hughes said. 

“This would set a dangerous precedent where the courts of one country can control what internet users in another country can see. This could be open to abuse, particularly by regimes with weak human rights records.”

Touching on that point, Eline Chivot, an analyst at the Center for Data Innovation, told the Financial Times that the ruling could open a “Pandora’s box” in which the global removal of content deemed illegal in one country gives authoritarian governments and dictators more tools for censorship.

“Expanding content bans worldwide will undermine internet users’ right to access information and freedom of expression in other countries,” she said. “This precedent will embolden other countries, including those with little respect for free speech, to make similar demands.”

EU’s Role in Tech Company Regulation

Ben Wagner, the director of the Privacy and Sustainable Computing Lab at Vienna University, also argued that the decision raises concerns about restricting political speech.

“We’re talking about a politician who is being insulted in a political context, that’s very different than a normal citizen,” he told The New York Times. “There needs to be a greater scope for freedom of opinion and expression.”

The possibility of stifling political speech is a recurring concern in debates about regulating content on social media.

On Wednesday, Singapore enacted a “fake news” law that will basically let the government decide what is and is not fake news on social media, leading many to believe the law is simply a tool to limit free speech and suppress political dissent.

Discussions about the regulation of political speech are especially pertinent right now.

Just last week, Facebook announced that posts by politicians will be exempt from the platform’s rules and that it will not remove or label such posts, even if they are disparaging or contain false information.

Now it seems like that will change.

The ruling also speaks to a broader issue of global enforcement for these kinds of rules. As many have pointed out, the EU has increasingly set the standard for tougher regulation of social media and tech companies.

But creating consistent standards for enforcement and oversight has been challenging, especially when attempting to enforce a rule globally. 

At the end of September, the ECJ decided to limit the reach of a privacy law called “the right to be forgotten,” which lets European citizens request that personal data be removed from Google’s search results. 

The ECJ decided that Google could not be required to remove the links globally, only in EU-member states.

Before that decision, Google also claimed the law could be abused by authoritarian governments trying to cover up human rights abuses.

Facebook, however, should not expect the ruling to change, as ECJ decisions cannot be appealed.

See what others are saying: (The New York Times) (Reuters) (Forbes)


FDA Recalls 11,000 Ice Cream Containers and Sportmix Pet Food Products


  • Over 11,000 cartons of Weis Markets ice cream were recalled after a customer discovered an “intact piece of metal equipment” inside a 48-ounce container of the brand’s Cookies and Cream flavor. 
  • The FDA also expanded a recall of Sportmix pet food over concerns that the products may contain potentially fatal levels of aflatoxins.
  • So far, more than 70 dogs have died and more than 80 pets have become sick after eating Sportmix food. The agency recommends taking your pet to a veterinarian if they have eaten the recalled products, even if they aren’t showing symptoms.

Metal Pieces in Weis Ice Cream Cause Massive Recall

The Food and Drug Administration announced two major product recalls this week following serious consumer complaints.

The first came Sunday when the agency revealed that over 11,000 cartons of Weis Markets ice cream had been recalled. “The products may be contaminated with extraneous material, specifically metal filling equipment parts,” the FDA’s statement explained.

At least one customer discovered an “intact piece of metal equipment” inside a 48-ounce container of the brand’s Cookies and Cream flavor.

Those containers were available in 197 Weis Markets grocery stores, but they have already been pulled from shelves. The products have a sell-by date of October 21, 2020, and customers who purchased them can return them for a full refund.

Along with removing 10,869 units of the Cookies and Cream containers, the brand also recalled 502 3-gallon bulk containers of Klein’s Vanilla Dairy Ice Cream.

Those bulk containers were not for retail sale, but were instead sold to one retail establishment in New York and have since been removed.

Sportmix Recall Follows 70 Pet Deaths, 80 Illnesses

The second major recall came Tuesday when the FDA expanded a recall of Sportmix dog food.

According to the agency, the product may contain potentially fatal levels of aflatoxins – toxins produced by the Aspergillus flavus mold, which can grow on corn and other grains used as ingredients in pet food.

As of Tuesday, more than 70 pets had died and more than 80 had gotten sick after eating Sportmix pet food. Not all of the cases have been officially confirmed as aflatoxin poisoning, and the count may not reflect the total number of pets affected.

For now, the FDA is asking pet owners and veterinary professionals to stop using the impacted Sportmix products that have an expiration date on or before July 9, 2022, and “05” in the date or lot code.

More detailed information about the recalled products can be found on the FDA’s announcement page.

Pets experiencing aflatoxin poisoning may have symptoms like sluggishness, loss of appetite, vomiting, jaundice, and/or diarrhea. In some cases, this toxicity can cause long-term liver issues without showing any symptoms. Because of this, pet owners are being advised to take their animals to a veterinarian if they have eaten the recalled products, even if they aren’t showing symptoms.

There is currently no evidence that pet owners who have handled the affected food are at risk of aflatoxin poisoning. Still, the FDA recommends washing your hands after handling pet food.

See what others are saying: (CNN) (USA TODAY) (PEOPLE)



Signal and Telegram Downloads Surge After WhatsApp Announces It Will Share Data With Facebook


  • Downloads for Signal and Telegram have skyrocketed in the last week, with the encrypted messaging apps gaining 7.5 million and 9 million new users, respectively.
  • The growth comes after WhatsApp said it will require almost all users to share personal data with its parent company Facebook.
  • It also comes after Parler’s shutdown and Twitter and Facebook’s bans of President Trump, which prompted his supporters to turn to Telegram in particular.

Telegram and Signal See Big Boost

Downloads for the encrypted messaging apps Signal and Telegram have surged in the last week after WhatsApp announced that it will start forcing all users outside the E.U. and U.K. to share personal data with Facebook.

Last week, WhatsApp, which is owned by Facebook, told users that they must allow Facebook and its subsidiaries to collect their phone numbers, locations, and the phone numbers of their contacts, among other things.

Anyone who does not agree to the new terms by Feb. 8 will lose access to the messaging app. The move prompted many to call for people to delete WhatsApp and start using other services like Signal or Telegram.

Now, it appears those calls to use other encrypted messaging apps have been heard. According to data from app analytics firm Sensor Tower, Signal saw 7.5 million installs globally through the App Store and Google Play from Jan. 6 to Jan. 10 alone, marking a 4,200% increase from the previous week.

Meanwhile, Telegram saw even more downloads. During the same period, it gained 9 million users, up 91% from the previous week. It was also the most downloaded app in the U.S.

WhatsApp responded to the exodus by attempting to clarify its new policy in a statement Monday.

“We want to be clear that the policy update does not affect the privacy of your messages with friends or family in any way,” the company said. “Instead, this update includes changes related to messaging a business on WhatsApp, which is optional, and provides further transparency about how we collect and use data.”

Other Causes of App Growth

Notably, some of the spike in Telegram downloads specifically comes from supporters of President Donald Trump flocking to alternative platforms after Parler was shut down and Trump was banned from Twitter and Facebook.

Far-right chat room membership on the platform has increased significantly in recent days, NBC News reported. Conversations in pre-existing chat rooms where white supremacist content has been shared for months have also increased since the pro-Trump insurrection at the U.S. Capitol last week.

According to the outlet, many of the president’s supporters have moved their operations to the app in large part because it has very lax community guidelines. Companies like Facebook and Twitter have recently cracked down on groups and users who share incendiary content and conspiracy theories or attempt to organize events that could lead to violence.

There have been several documented instances of Trump supporters now using Telegram channels to discuss planned events and urge acts of direct violence. Per NBC, in one channel named “fascist,” users have called on others to “shoot politicians” and “encourage armed struggle.” A post explaining how to radicalize Trump supporters to become neo-Nazis also made the rounds on the “fascist” channel, among others.

Membership in one channel frequently used by members of the Proud Boys has grown by more than 10,000 in recent days, apparently drawing users directly from Parler.

“Now that they forced us off the main platforms it doesn’t mean we go away, it just means we are going to go to places they don’t see,” a user posted in the chatroom, according to NBC.

See what others are saying: (NBC News) (Business Insider) (CNBC)



Pornhub Removes All Unverified User Uploads, Taking Down Most of Its Videos


  • Pornhub is now removing all videos that were not uploaded by verified users.
  • Before the massive purge, the site hosted around 13.5 million videos. As of Monday morning, there were only 2.9 million videos left. 
  • The move is part of a series of sweeping changes the company made days after The New York Times published a shocking op-ed detailing numerous instances of abuse on the site, including nonconsensual uploads of underage girls.
  • Following the article, numerous businesses cut ties with the company, including Mastercard and Visa, which both announced Thursday that they will not process any payments on the site.

Pornhub Purges Videos

Pornhub removed the vast majority of its existing videos Monday, just hours after the company announced that it would take down all existing videos uploaded by non-verified users.

Before the new move was announced Sunday night, Pornhub hosted about 13.5 million videos, according to the count displayed on the site’s search bar. As of writing, that counter shows just over 2.9 million videos.

The decision comes less than a week after the company announced it would only allow video uploads from content partners and members of its Model program.

At the time, Pornhub claimed it made the decision following an independent review launched in April to eliminate illegal content. However, many speculated that it was actually in large part due to an op-ed published in The New York Times just days before. That piece, among other things, found that the site had been hosting videos of young girls uploaded without their consent, including some content where minors were raped or assaulted.

The article prompted a wave of backlash against Pornhub and calls for other businesses to cut ties with the company. On Thursday, both Visa and Mastercard announced that they would stop processing all payments on the site.

“Our investigation over the past several days has confirmed violations of our standards prohibiting unlawful content on their site,” Mastercard said in a statement.

Less than an hour later, Visa tweeted that it would also be suspending payments while it completed its own investigation.

Pornhub Claims It’s Being Targeted

However, in its blog post announcing the most recent decision, Pornhub claimed that it was being unfairly targeted.

Specifically, the company noted that Facebook’s own transparency report found 84 million instances of child sexual abuse content over the last three years. By contrast, a report by the third-party Internet Watch Foundation found 118 similar instances on Pornhub in the same time period.

Notably, the author of The Times report, Nicholas Kristof, specifically said the Internet Watch Foundation’s findings represented a massive undercount, and that he was able to find hundreds of these kinds of videos on Pornhub in just half an hour.

Still, the site used the disputed numbers to point a finger at others.

“It is clear that Pornhub is being targeted not because of our policies and how we compare to our peers, but because we are an adult content platform,” the statement continued.

“Every piece of Pornhub content is from verified uploaders, a requirement that platforms like Facebook, Instagram, TikTok, YouTube, Snapchat and Twitter have yet to institute,” the company added. 

However, Pornhub’s implication that it is somehow more responsible because it only lets verified users post content is a flawed comparison. First of all, Pornhub is a platform created exclusively for porn, content that the social media companies it name-checked explicitly prohibit.

Second, the vast majority of people who use those platforms are not verified, and it would be impossible for a company like Facebook or YouTube to limit content to verified users without entirely undermining its own purpose.

Verification Concerns

Even beyond that, there are still questions about Pornhub’s verification process. According to the site, all someone needs to do to become verified is have a Pornhub account with an avatar and upload a selfie holding a piece of paper with their username and Pornhub.com written on it.

While the company did tell reporters the process would be made more thorough sometime next year, it did not provide any specific details, prompting questions about how exhaustive the verification process will ultimately be.

That question is important because, at least under Pornhub’s current policies, verification makes users eligible to monetize their videos through the ModelHub program.

If the new verification process is still weak or has loopholes, people could easily slip through the cracks and continue to profit. On the other hand, there are also major concerns among sex workers that if the process is too restrictive, they will not be able to make money on the platform.

That concern has already been exacerbated by some of the other actions taken since The Times article was published. For example, after Mastercard and Visa made their announcements, numerous sex workers and activists condemned the decision, saying it would seriously hurt porn performers’ ability to collect income, not just on Pornhub but on other platforms as well.

“By targeting Pornhub and successfully destroying the ability for independent creators to monetize their content, they have made it easier to remove payment options from smaller platforms too,” model Avalon Fey told Motherboard last week. “This has nothing to do with helping abused victims, and everything to do with hurting online adult entertainers to stop them from creating and sharing adult content.”  

Other performers expressed similar concerns that the move could spill over to smaller platforms.

“I am watching to see if my OnlyFans will be their next target and sincerely hoping not,” amateur performer Dylan Thomas also told the outlet.

“Sex workers are scared by this change, despite not having uploaded any illegal content,” Fey continued, “because we have seen these patterns before and have had sites and payment processors permanently and unexpectedly shut down.”

See what others are saying: (Motherboard) (The Verge) (Bloomberg)
