- The EU’s highest court has ruled that if one EU-member country decides content posted on Facebook is illegal, Facebook can be forced to remove specific content worldwide.
- Facebook and other critics argued the rule could violate freedom of expression laws in other countries, because content that one country deems illegal might be protected as free speech in another.
- Some critics also claimed the rule will allow authoritarian leaders to justify censorship and stifle political dissent.
European Court of Justice Ruling
The European Union’s highest court ruled Thursday that Facebook can be ordered to remove specific content worldwide if one EU-member country finds it illegal.
In a statement, the European Court of Justice said that if the national court of one EU country decides a post on Facebook is illegal, Facebook will be required to remove all duplicates of that post: not just in that EU country, but everywhere in the world.
The ruling also says that in some cases, even posts that are similar to the post deemed illegal will also have to be removed.
The ECJ made the decision after Austrian politician Eva Glawischnig-Piesczek sued Facebook in Austrian courts demanding that the company remove a defamatory comment someone posted about her, as well as any “equivalent” comments disparaging her.
Reportedly, the post in question was made by a Facebook user who shared a link to a news article that called Glawischnig-Piesczek a “lousy traitor of the people,” a “corrupt oaf” and member of a “fascist party.”
Facebook initially refused to remove the post, which in many countries would still be considered acceptable political speech. However, Austrian courts ruled that the post was intended to hurt her reputation, and the Austrian Supreme Court referred the case to the ECJ.
In the ECJ statement, the court did clarify that Facebook and other social media companies are not liable for illegal content posted on their platforms as long as they were unaware the content was illegal or removed it quickly upon learning that it was.
Regardless, the ruling still comes as a massive blow for Facebook, placing far more responsibility on the tech giant to police the content on its platform.
It should not come as a surprise that Facebook is not happy with the decision.
Before the high court’s decision, Facebook and others critical of the rule argued that allowing one country to force a platform to remove material globally limits free speech. Facebook also argued that the decision would most likely force it to use automated content filters.
Some activists have claimed automated filters could cause legitimate posts to be taken down because the filters cannot necessarily tell if a post is ironic or satirical or a meme—a problem most grandparents also seem to have on Facebook.
Facebook condemned the ECJ ruling in a statement, where it argued that internet companies should not be responsible for monitoring and removing speech that might be illegal in one specific country.
“It undermines the long-standing principle that one country does not have the right to impose its laws on speech on another country,” the statement said. “It also opens the door to obligations being imposed on internet companies to proactively monitor content and then interpret if it is ‘equivalent’ to content that has been found to be illegal.”
“In order to get this right national courts will have to set out very clear definitions on what ‘identical’ and ‘equivalent’ means in practice,” Facebook continued. “We hope the courts take a proportionate and measured approach, to avoid having a chilling effect on freedom of expression.”
Free Speech Debate
Facebook’s concerns have been echoed by some experts in the field, like Thomas Hughes, the executive director of the UK rights group Article 19, who told Reuters that one country’s decision to remove content illegal within its borders could lead to the removal of content that should be protected as free speech in another country.
“Compelling social media platforms like Facebook to automatically remove posts regardless of their context will infringe our right to free speech and restrict the information we see online,” Hughes said.
“This would set a dangerous precedent where the courts of one country can control what internet users in another country can see. This could be open to abuse, particularly by regimes with weak human rights records.”
Touching on that point, Eline Chivot, an analyst at the Center for Data Innovation, told the Financial Times that the ruling could open a “Pandora’s box” whereby the global removal of content deemed illegal in one country could give authoritarian governments and dictators more tools for censorship.
“Expanding content bans worldwide will undermine internet users’ right to access information and freedom of expression in other countries,” she said. “This precedent will embolden other countries, including those with little respect for free speech, to make similar demands.”
EU’s Role in Tech Company Regulation
Ben Wagner, the director of the Privacy and Sustainable Computing Lab at Vienna University, also argued that the decision raises concerns about restricting political speech.
“We’re talking about a politician who is being insulted in a political context, that’s very different than a normal citizen,” he told The New York Times. “There needs to be a greater scope for freedom of opinion and expression.”
The possibility of stifling political speech is a common debate regarding the regulation of content on social media.
On Wednesday, Singapore enacted a “fake news” law that will effectively let the government decide what is and is not fake news on social media, leading many to believe the law is simply a tool to limit free speech and suppress political dissent.
Discussions about the regulation of political speech are especially pertinent right now.
Just last week, Facebook announced that posts by politicians will be exempt from the platform’s rules and that it will not remove or label posts by politicians, even if they are disparaging or contain false information.
Now it seems like that will change.
The ruling also speaks to a broader issue of global enforcement for these kinds of rules. As many have pointed out, the EU has increasingly set the standard for tougher regulation of social media and tech companies.
But creating consistent standards for enforcement and oversight has been challenging, especially when attempting to enforce a rule globally.
At the end of September, the ECJ decided to limit the reach of a privacy law called “the right to be forgotten,” which lets European citizens request that personal data be removed from Google’s search results.
The ECJ decided that Google could not be required to remove the links globally, only in EU member states.
Before that decision, Google also claimed the law could be abused by authoritarian governments trying to cover up human rights abuses.
Facebook, however, should not expect the ruling to change, as ECJ decisions cannot be appealed.
See what others are saying: (The New York Times) (Reuters) (Forbes)
Amazon Warehouse Workers in New York File Petition To Hold Unionization Vote
A similar unionization effort among Amazon warehouse workers in Alabama failed earlier this year amid allegations that the company engaged in illegal union-busting tactics.
Staten Island Unionization Efforts Advance
Workers at a group of Amazon warehouses in Staten Island, New York, filed a petition with the National Labor Relations Board (NLRB) Monday to hold a unionization vote after collecting the necessary number of signatures.
The latest push is not affiliated with a national union; it is instead organized by a grassroots worker group called the Amazon Labor Union, which is financed via GoFundMe.
The group is run by Chris Smalls, a former Amazon warehouse worker who led a walkout at the beginning of the pandemic to protest the lack of protective gear and other conditions. Smalls was fired the same day.
For months now, Smalls and the other organizers have been forming a committee and collecting signatures from workers to back their push for a collective bargaining group, as well as pay raises, more paid time off, longer breaks, less mandatory overtime, and the ability to cancel shifts in dangerous weather conditions.
On Monday, Smalls said he had collected over 2,000 signatures from the four Staten Island facilities, which employ roughly 7,000 people, meeting the NLRB requirement that organizers get support from at least 30% of the workers they wish to represent.
Amazon’s Anti-Union Efforts Continue
The campaign faces an uphill battle because Amazon — the second-largest private employer in the U.S. — has fought hard against unionization efforts for decades and won.
This past spring, Amazon warehouse workers in Alabama held a vote for unionization that ultimately failed by a wide margin.
However, the NLRB is now considering whether to hold another vote after a top agency official found in August that Amazon’s anti-union tactics interfered with the election so much that the results should be scrapped and another one should be held.
Amazon, for its part, is already trying to undermine the new effort in Staten Island. Since the walkout Smalls led at the beginning of the pandemic, workers have filed 10 labor complaints claiming that Amazon has interfered with their organizing efforts.
The NLRB has said that its attorneys have found merit in at least three of those claims and are continuing to look into the others.
Meanwhile, Smalls told NPR last week that the company has ramped up those efforts recently by putting up anti-union signs around the warehouses and installing barbed wire to limit the organizers’ space.
Representatives for Amazon did not comment on those allegations, but in a statement Monday, a spokesperson attempted to cast doubt on the number of signatures Smalls and his group have collected.
“We’re skeptical that a sufficient number of legitimate employee signatures has been secured to warrant an election,” the spokesperson said. “If there is an election, we want the voice of our employees to be heard and look forward to it.”
The labor board disputed that claim in a statement from the agency’s press secretary on Monday, stressing that the group submitted enough signatures.
See what others are saying: (The New York Times) (NPR) (The Washington Post)
Zuckerberg Says He’s “Retooling” Facebook To Attract Younger Adults
The Facebook CEO made the remarks one day before the Senate expanded its questioning of how social media apps, in general, are protecting kids online.
Focus on Younger, Not Older
In an earnings call Monday, Facebook CEO Mark Zuckerberg assured investors that he’s “retooling” the company’s platforms to make serving “young adults the North Star, rather than optimizing for the larger number of older people.”
Zuckerberg’s comments came the same day a consortium of 17 major news organizations published multiple articles detailing thousands of internal documents that were handed over to the Securities and Exchange Commission earlier this year.
Several outlets, including Bloomberg and The Verge, reported that Facebook’s own research shows it is hemorrhaging teen users and stagnating with young adults, a finding that reportedly shocked investors.
Amid his attempts to control the fallout, Zuckerberg said the company will specifically shift focus to appeal to users between 18 and 29. As part of that, he said the company is planning to ramp up Instagram’s Reels feature to more strongly compete with TikTok.
He also defended Facebook amid the leaks, saying, “Good faith criticism helps us get better. But my view is that what we are seeing is a coordinated effort to selectively use leaked documents to paint a false picture of our company.”
But the information reaped from the leaked documents is nothing short of damning, touching on everything from human trafficking to the Jan. 6 insurrection, as well as Facebook’s inability to moderate hate speech and terrorism in non-English languages.
Other Social Media Platforms Testify
On Tuesday, a Congressional subcommittee led by Sen. Richard Blumenthal (D-Conn.) directly addressed representatives from Snapchat, TikTok, and YouTube over child safety concerns on their platforms.
Facebook’s controversies have dominated social media news coverage since mid-September, when The Wall Street Journal published six internal slide presentations showing Facebook researchers presenting data on the effect the company’s platforms have on minors’ mental health.
Now, Tuesday’s hearing marks a significant shift to grilling the whole of social media. Notably, this is also the first time Snap and TikTok have testified before Congress.
While representatives of each company generally told senators they support legislation to boost online protections for kids, they did not commit to supporting any specific proposals currently on the table.
In fact, at one point, Sen. Ed Markey (D-Mass.) criticized a Snapchat executive after she said she wanted to “talk a bit more” before the company would support updates to his Children’s Online Privacy Protection Act, which was passed in 1998.
“Look, this is just what drives us crazy,” he said. “‘We want to talk, we want to talk, we want to talk.’ This bill’s been out there for years and you still don’t have a view on it. Do you support it or not?”
See what others are saying: (Business Insider) (CNBC) (The Washington Post)
Key Takeaways From the Explosive “Facebook Papers”
Among the most startling revelations, The Washington Post reported that CEO Mark Zuckerberg personally agreed to silence dissident users in Vietnam after the country’s ruling Communist Party threatened to block access to Facebook.
“The Facebook Papers”
A coalition of 17 major news organizations published a series of articles known as “The Facebook Papers” on Monday in what some are now calling Facebook’s biggest crisis ever.
The papers are a collection of thousands of redacted internal documents that were originally turned over to the U.S. Securities and Exchange Commission by former product manager Frances Haugen earlier this year.
The outlets that published pieces Monday reportedly first obtained the documents at the beginning of October and spent weeks sifting through their contents. Below is a breakdown of many of their findings.
Facebook Is Hemorrhaging Teens
For example, The Verge said the internal documents it reviewed showed that since 2019, teen users on Facebook’s app have fallen by 13%, with the company expecting another staggering falloff of 45% over the next two years. Meanwhile, the company reportedly expects its app usage among 20- to 30-year-olds to decline by 4% in the same timeframe.
Facebook also found that fewer teens are signing up for new accounts. Similarly, the age group is moving away from using Facebook Messenger.
In an internal presentation, Facebook data scientists directly told executives that the “aging up issue is real” and warned that if the app’s average age continues to increase as it’s doing right now, it could disengage younger users “even more.”
“Most young adults perceive Facebook as a place for people in their 40s and 50s,” they explained. “Young adults perceive content as boring, misleading, and negative. They often have to get past irrelevant content to get to what matters.”
The researchers added that users under 18 additionally seem to be migrating from the platform because of concerns related to privacy and its impact on their wellbeing.
Facebook Opted Not To Remove “Like” and “Share” Buttons
In its article, The New York Times cited documents that indicated Facebook wrestled with whether or not it should remove the “like” and “share” buttons.
The original argument for getting rid of the buttons was multi-faceted. There was a belief that their removal could decrease the anxiety teens feel, since social media pressures many to chase a certain number of likes per post. There was also hope that easing this pressure would lead teens to post more. Beyond that, Facebook needed to tackle growing concerns about the lightning-quick spread of misinformation.
Ultimately, its hypotheses failed. According to the documents reviewed by The Times, hiding the “like” button didn’t alleviate the social anxiety teens feel. It also didn’t lead them to post more.
In fact, it actually led to users engaging with posts and ads less, and as a result, Facebook decided to keep the buttons.
Despite that, in 2019, researchers for Facebook still asserted that the platform’s “core product mechanics” were allowing misinformation and hate to flourish.
“The mechanics of our platform are not neutral,” they said in the internal documents.
Facebook Isn’t Really Regulating International Hate
Facebook is failing to moderate hate speech abroad largely because it does not employ a significant number of moderators who speak the languages of many countries where the platform is popular. As a result, its current moderators are widely unable to understand cultural contexts.
Theoretically, Facebook could catch harmful content spreading across different languages with an AI-driven system, but it has not yet been able to perfect that technology.
“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” Eliza Campbell, director of the Middle East Institute’s Cyber Program, told the AP. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”
According to The Atlantic, as little as 6% of Arabic-language hate content on Instagram was detected by Facebook’s systems as recently as late last year. Another document detailed by the outlet found that “of material posted in Afghanistan that was classified as hate speech within a 30-day range, only 0.23 percent was taken down automatically by Facebook’s tools.”
According to The Atlantic, “employees blamed company leadership for insufficient investment” in both instances.
Facebook Was Lackluster on Human Trafficking Crackdowns Until Revenue Threats
In another major revelation, The Atlantic reported that these documents appear to confirm that the company only took strong action against human trafficking after Apple threatened to pull Facebook and Instagram from its App Store.
Initially, the outlet said, employees participated in a concerted and successful effort to identify and remove sex trafficking-related content; however, the company did not disable or take down the associated profiles.
Because of that, the BBC in 2019 later uncovered a broad network of human traffickers operating an active ring on the platform. In response, Facebook took some additional action, but according to the internal documents, “domestic servitude content remained on the platform.”
Later in 2019, Apple finally issued its threat. After reviewing the documents, The Atlantic said that threat alone — and not any new information — is what finally motivated Facebook to “[kick it] into high gear.”
“Was this issue known to Facebook before BBC enquiry and Apple escalation? Yes,” one internal message reportedly reads.
Zuckerberg Personally Made Vietnam Decision
According to The Washington Post, CEO Mark Zuckerberg personally made the decision last year to have Facebook agree to demands set forth by Vietnam’s ruling Communist Party.
The party had previously threatened to disconnect Facebook in the country if it didn’t silence anti-government posts.
“In America, the tech CEO is a champion of free speech, reluctant to remove even malicious and misleading content from the platform,” the article’s authors wrote. “But in Vietnam, upholding the free speech rights of people who question government leaders could have come with a significant cost in a country where the social network earns more than $1 billion in annual revenue.”
“Zuckerberg’s role in the Vietnam decision, which has not been previously reported, exemplifies his relentless determination to ensure Facebook’s dominance, sometimes at the expense of his stated values,” they added.
In the coming days and weeks, there will likely be more questions regarding Zuckerberg’s role in the decision, as well as inquiries into whether the SEC will take action against him directly.
Still, Facebook has already started defending its reasoning for making the decision. It told The Post that the choice to censor was justified “to ensure our services remain available for millions of people who rely on them every day.”
In the U.S., Zuckerberg has repeatedly claimed to champion free speech while testifying before lawmakers.
Among other findings, the Financial Times reported that Facebook employees urged management not to exempt notable figures such as politicians and celebrities from moderation rules.
Outside of these documents, a second whistleblower, following Haugen’s lead, submitted an affidavit to the SEC on Friday alleging that Facebook allows hate to go unchecked.
As the documents leaked, Haugen spent Monday testifying before a committee of the British Parliament.