- The European Parliament passed the European Union Copyright Directive on Tuesday, giving member states two years to implement the law before it goes into effect.
- The directive includes the highly contentious Article 13, also called the “upload filter,” which makes media platforms liable for copyright infringements committed by their users.
- Tech companies that lobbied against the bill have condemned its passage, while others in the music, publishing, and film industries have applauded the new law.
European Parliament Passes EUCD
The European Parliament gave the final approval to the sweeping copyright reform known as the European Union Copyright Directive (EUCD) on Tuesday, sparking backlash from large tech companies that have repeatedly lobbied against the bill.
The decision comes after the final version of the directive was approved by the different branches of the EU in February, with a final vote in the European Parliament set for the following month.
The decision on Tuesday came as members of the European Parliament voted 348 in favor of the directive and 274 against. A last-minute proposal to remove the controversial Article 13, also called the “upload filter,” was rejected by only five votes.
The EUCD will now be passed on to EU member states, which will have two years to implement the law in their countries.
Member states will decide the details of the legislation individually, but the law is still likely to have a major impact on how the internet works in Europe.
The most contentious provisions from the drafts of the directive, Articles 11 and 13, still remain in the final version of the bill, though Article 13 has been renamed Article 17.
Article 11 & Article 13
Article 11, also called the “link tax,” mandates that platforms obtain a license before sharing links to and snippets of web pages and articles.
While there are some exceptions, Article 11 is expected to hit news aggregators like Google News especially hard, because it will let publishers charge them for displaying snippets of news stories.
Google has said that if publishers do decide to charge licenses for their material, it will be forced to scale down the content shown on Google News and potentially shut the service down altogether.
While Article 11 has received a lot of criticism, the real heavy hitter is Article 13, now Article 17, which has also been dubbed the “upload filter.”
Article 13 requires platforms like YouTube to be responsible for copyright infringements committed by their users. The language in the law is vague, but many think that it will force these platforms to monitor and block copyrighted content from being uploaded, or else they will be liable.
People have argued that this provision could lead to automated “upload filters,” hence the nickname. These filters would scan all user content before it is uploaded and remove copyrighted material.
The law does not explicitly require automated filters, but many think they are inevitable. So much content is uploaded to YouTube every second that it is essentially impossible for companies to manually review every video for copyright violations.
To make matters worse, experts have said that these filters are not ready for the market and are likely to be error-prone or ineffective, as well as expensive.
While large tech companies like Facebook and YouTube could afford that technology, its cost would create a barrier to entry for smaller companies trying to break into the market.
This, in turn, would further solidify big tech companies’ market dominance.
That outcome is especially ironic, because advocates of the directive have argued that it will level the playing field between big U.S. tech companies and smaller European content creators by giving copyright holders more power over how their content is distributed.
The argument that smaller content creators will have more power under the EUCD is one that has been reiterated by its supporters over and over again. Despite the predominantly negative reaction to the passage of EUCD, groups from the music, publishing, and film industries have applauded the passage of the law.
“This is a vote against content theft,” said Xavier Bouckaert, president of the European Magazine Media Association. “Publishers of all sizes and other creators will now have the right to set terms and conditions for others to re-use their content commercially, as is only fair and appropriate.”
Helen Smith, the head of the Independent Music Companies Association, called the move “a landmark day for Europe’s creators and citizens, and a significant step towards a fairer internet.”
“Platforms facilitate a unique relationship between artists and fans, and this will be given a boost as a result of this directive. It will have a ripple effect worldwide,” Smith said.
On the other side, critics of the directive argue that it is vague and will end up censoring online content, hurting free speech, and stifling innovation.
In response to the bill’s passage, YouTube thanked the creators who spoke out against Article 13 in a tweet.
A spokesperson for Google made a similar point, stating:
“The Copyright Directive is improved, but will still lead to legal uncertainty and will hurt Europe’s creative and digital economies […] The details matter, and we look forward to working with policy makers, publishers, creators, and rights holders as EU member states move to implement these new rules.”
With the passage of the law, many people in the U.S. are wondering if the directive will affect them.
While no one is entirely sure exactly how the law will affect people outside of the EU, there is precedent for EU data protection laws influencing U.S. policy. Back in 2016, the EU passed the General Data Protection Regulation (GDPR), which set new rules for how companies manage and share personal data.
Theoretically, the GDPR applies only to data belonging to EU citizens, but because the internet is global, nearly every online service was affected when the law was fully implemented last year.
The GDPR mandated that companies get consent before obtaining personal data, and it explicitly extended to companies outside the EU. It also imposed stricter penalties on companies for violating data privacy.
Those regulations in turn resulted in significant changes for U.S. users and forced U.S. companies to adapt. In response, companies like Google and Slack moved quickly to update their terms and contracts, and roll out new personal data tools.
The regulations have already taken a toll on U.S. tech companies.
In January, a French data protection authority announced that it fined Google $57 million for not properly disclosing how user data is collected for personalized advertisements across its services, including Google Maps and YouTube.
However, as of now, it is unclear if the EUCD will be as far-reaching as the GDPR.
See what others are saying: (The Verge) (Fortune) (Venture Beat)
Amazon Warehouse Workers in New York File Petition To Hold Unionization Vote
A similar unionization effort among Amazon warehouse workers in Alabama failed earlier this year amid allegations that the company engaged in illegal union-busting tactics.
Staten Island Unionization Efforts Advance
Workers at a group of Amazon warehouses in Staten Island, New York, filed a petition with the National Labor Relations Board (NLRB) Monday to hold a unionization vote after collecting the necessary number of signatures.
The latest push is not affiliated with a national union but is instead organized by a grassroots worker group called the Amazon Labor Union, which is self-organized and financed via GoFundMe.
The group is run by Chris Smalls, a former Amazon warehouse worker who led a walkout at the beginning of the pandemic to protest the lack of protective gear and other conditions. Smalls was fired later that same day.
For months now, Smalls and the other organizers have been forming a committee and collecting signatures from workers to back their push for a collective bargaining group, as well as pay raises, more paid time off, longer breaks, less mandatory overtime, and the ability to cancel shifts in dangerous weather conditions.
On Monday, the leader said he had collected over 2,000 signatures from the four Staten Island facilities, which employ roughly 7,000 people, meeting the NLRB requirement that organizers get support from at least 30% of the workers they wish to represent.
Amazon’s Anti-Union Efforts Continue
The campaign faces an uphill battle because Amazon — the second-largest private employer in the U.S. — has fought hard against unionization efforts for decades and won.
This past spring, Amazon warehouse workers in Alabama held a vote for unionization that ultimately failed by a wide margin.
However, the NLRB is now considering whether to hold another vote after a top agency official found in August that Amazon’s anti-union tactics interfered with the election so much that the results should be scrapped and another one should be held.
Amazon, for its part, is already trying to undermine the new effort in Staten Island. Dating back to the walkout led by Smalls at the beginning of the pandemic, workers have filed 10 labor complaints claiming that Amazon has interfered with their organizing efforts.
The NLRB has said that its attorneys have found merit in at least three of those claims and are continuing to look into the others.
Meanwhile, Smalls told NPR last week that the company has ramped up those efforts recently by putting up anti-union signs around the warehouses and installing barbed wire to limit the organizers’ space.
Representatives for Amazon did not comment on those allegations, but in a statement Monday, a spokesperson attempted to cast doubt on the number of signatures Smalls and his group have collected.
“We’re skeptical that a sufficient number of legitimate employee signatures has been secured to warrant an election,” the spokesperson said. “If there is an election, we want the voice of our employees to be heard and look forward to it.”
The labor board disputed that claim in a statement from the agency’s press secretary on Monday, stressing that the group submitted enough signatures.
See what others are saying: (The New York Times) (NPR) (The Washington Post)
Zuckerberg Says He’s “Retooling” Facebook To Attract Younger Adults
The Facebook CEO made the remarks one day before the Senate expanded its questioning of how social media apps, in general, are protecting kids online.
Focus on Younger, Not Older
In an earnings call Monday, Facebook CEO Mark Zuckerberg assured investors that he’s “retooling” the company’s platforms to make serving “young adults the North Star, rather than optimizing for the larger number of older people.”
Zuckerberg’s comments came the same day a consortium of 17 major news organizations published multiple articles detailing thousands of internal documents that were handed over to the Securities and Exchange Commission earlier this year.
Several outlets, including Bloomberg and The Verge, reported that Facebook’s own research shows it is hemorrhaging teen users and stagnating with young adults, something that reportedly shocked investors.
Amid his attempts to control the fallout, Zuckerberg said the company will specifically shift focus to appeal to users between 18 and 29. As part of that, he said the company is planning to ramp up Instagram’s Reels feature to more strongly compete with TikTok.
He also defended Facebook amid the leaks, saying, “Good faith criticism helps us get better. But my view is that what we are seeing is a coordinated effort to selectively use leaked documents to paint a false picture of our company.”
But the information reaped from the leaked documents is nothing short of damning, touching on everything from human trafficking to the Jan. 6 insurrection, as well as Facebook’s inability to moderate hate speech and terrorism among non-English languages.
Other Social Media Platforms Testify
On Tuesday, a Senate subcommittee led by Sen. Richard Blumenthal (D-Conn.) directly addressed representatives from Snapchat, TikTok, and YouTube over child safety concerns on their platforms.
Facebook’s controversies have dominated social media news coverage since mid-September, when The Wall Street Journal published six internal slide decks that showed Facebook researchers presenting data on the effect the company’s platforms have on minors’ mental health.
Now, Tuesday’s hearing marks a significant shift to grilling the whole of social media. Notably, this is also the first time Snap and TikTok have testified before Congress.
While each of the companies before senators generally said they support legislation to boost online protections for kids, they didn’t commit to supporting any specific proposals currently on the table.
In fact, at one point, Sen. Ed Markey (D-Mass.) criticized a Snapchat executive after she said she wanted to “talk a bit more” before the company would support updates to his Children’s Online Privacy Protection Act, which was passed in 1998.
“Look, this is just what drives us crazy,” he said. “‘We want to talk, we want to talk, we want to talk.’ This bill’s been out there for years and you still don’t have a view on it. Do you support it or not?”
See what others are saying: (Business Insider) (CNBC) (The Washington Post)
Key Takeaways From the Explosive “Facebook Papers”
Among the most startling revelations, The Washington Post reported that CEO Mark Zuckerberg personally agreed to silence dissident users in Vietnam after the country’s ruling Communist Party threatened to block access to Facebook.
“The Facebook Papers”
A coalition of 17 major news organizations published a series of articles known as “The Facebook Papers” on Monday in what some are now calling Facebook’s biggest crisis ever.
The papers are a collection of thousands of redacted internal documents that were originally turned over to the U.S. Securities and Exchange Commission by former product manager Frances Haugen earlier this year.
The outlets that published pieces Monday reportedly first obtained the documents at the beginning of October and spent weeks sifting through their contents. Below is a breakdown of many of their findings.
Facebook Is Hemorrhaging Teens
For example, The Verge said the internal documents it reviewed showed that since 2019, teen users on Facebook’s app have fallen by 13%, with the company expecting another staggering falloff of 45% over the next two years. Meanwhile, the company reportedly expects its app usage among 20- to 30-year-olds to decline by 4% in the same timeframe.
Facebook also found that fewer teens are signing up for new accounts. Similarly, the age group is moving away from using Facebook Messenger.
In an internal presentation, Facebook data scientists directly told executives that the “aging up issue is real” and warned that if the app’s average age continues to increase as it’s doing right now, it could disengage younger users “even more.”
“Most young adults perceive Facebook as a place for people in their 40s and 50s,” they explained. “Young adults perceive content as boring, misleading, and negative. They often have to get past irrelevant content to get to what matters.”
The researchers added that users under 18 also seem to be migrating away from the platform over concerns related to privacy and its impact on their wellbeing.
Facebook Opted Not To Remove “Like” and “Share” Buttons
In its article, The New York Times cited documents that indicated Facebook wrestled with whether or not it should remove the “like” and “share” buttons.
The original argument behind getting rid of the buttons was multi-faceted. There was a belief that their removal could decrease the anxiety teens feel, since social media pressures many to want to achieve a certain number of likes per post. There was also the hope that a decrease in this pressure could lead to teens posting more. Beyond that, Facebook additionally needed to tackle growing concerns about the lightning-quick spread of misinformation.
Ultimately, its hypotheses failed. According to the documents reviewed by The Times, hiding the “like” button didn’t alleviate the social anxiety teens feel. It also didn’t lead them to post more.
In fact, it actually led to users engaging with posts and ads less, and as a result, Facebook decided to keep the buttons.
Despite that, in 2019, researchers for Facebook still asserted that the platform’s “core product mechanics” were allowing misinformation and hate to flourish.
“The mechanics of our platform are not neutral,” they said in the internal documents.
Facebook Isn’t Really Regulating International Hate
That’s largely because Facebook does not employ a significant number of moderators who speak the languages of many countries where the platform is popular. As a result, its current moderators are widely unable to understand cultural contexts.
Theoretically, Facebook could solidify an AI-driven solution to catching harmful content spreading among different languages, but it still hasn’t been able to perfect that technology.
“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” Eliza Campbell, director of the Middle East Institute’s Cyber Program, told the AP. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”
According to The Atlantic, as little as 6% of Arabic-language hate content on Instagram was detected by Facebook’s systems as recently as late last year. Another document detailed by the outlet found that “of material posted in Afghanistan that was classified as hate speech within a 30-day range, only 0.23 percent was taken down automatically by Facebook’s tools.”
According to The Atlantic, “employees blamed company leadership for insufficient investment” in both instances.
Facebook Was Lackluster on Human Trafficking Crackdowns Until Revenue Threats
In another major revelation, The Atlantic reported that these documents appear to confirm that the company only took strong action against human trafficking after Apple threatened to pull Facebook and Instagram from its App Store.
Initially, the outlet said employees participated in a concerted and successful effort to identify and remove sex trafficking-related content; however, the company did not disable or take down associated profiles.
Because of that, the BBC in 2019 later uncovered a broad network of human traffickers operating an active ring on the platform. In response, Facebook took some additional action, but according to the internal documents, “domestic servitude content remained on the platform.”
Later in 2019, Apple finally issued its threat. After reviewing the documents, The Atlantic said that threat alone — and not any new information — is what finally motivated Facebook to “[kick it] into high gear.”
“Was this issue known to Facebook before BBC enquiry and Apple escalation? Yes,” one internal message reportedly reads.
Zuckerberg Personally Made Vietnam Decision
According to The Washington Post, CEO Mark Zuckerberg personally made the decision last year to have Facebook agree to demands set forth by Vietnam’s ruling Communist Party.
The party had previously threatened to disconnect Facebook in the country if it didn’t silence anti-government posts.
“In America, the tech CEO is a champion of free speech, reluctant to remove even malicious and misleading content from the platform,” the article’s authors wrote. “But in Vietnam, upholding the free speech rights of people who question government leaders could have come with a significant cost in a country where the social network earns more than $1 billion in annual revenue.”
“Zuckerberg’s role in the Vietnam decision, which has not been previously reported, exemplifies his relentless determination to ensure Facebook’s dominance, sometimes at the expense of his stated values,” they added.
In the coming days and weeks, there will likely be more questions regarding Zuckerberg’s role in the decision, as well as inquiries into whether the SEC will take action against him directly.
Still, Facebook has already started defending its reasoning for making the decision. It told The Post that the choice to censor was justified “to ensure our services remain available for millions of people who rely on them every day.”
In the U.S., Zuckerberg has repeatedly claimed to champion free speech while testifying before lawmakers.
Among other findings, the Financial Times reported that Facebook employees urged management not to exempt notable figures such as politicians and celebrities from moderation rules.
Outside of these documents, a second whistleblower, following Haugen’s example, submitted an affidavit to the SEC on Friday alleging that Facebook allows hate speech to go unchecked.
As the documents leaked, Haugen spent Monday testifying before a committee of the British Parliament.