- The European Parliament passed the European Union Copyright Directive on Tuesday, giving member states two years to implement the law before it goes into effect.
- The directive includes the highly contentious Article 13, also called the “upload filter,” which will make media platforms liable for copyright infringements committed by their users.
- Tech companies that lobbied against the bill have condemned its passage, while others in the music, publishing, and film industries have applauded the new law.
European Parliament Passes EUCD
The European Parliament gave the final approval to the sweeping copyright reform known as the European Union Copyright Directive (EUCD) on Tuesday, sparking backlash from large tech companies that have repeatedly lobbied against the bill.
The decision comes after the final version of the directive was approved by the different branches of the EU in February, with a final vote in the European Parliament set for the following month.
The decision on Tuesday came as members of the European Parliament voted 348 in favor of the directive and 274 against. A last-minute proposal to remove the controversial Article 13, also called the “upload filter,” was rejected by only five votes.
The EUCD will now be passed on to EU member states, who will have two years to implement the law in their countries.
Member states get to decide the details of the legislation individually, but the law is still likely to have a major impact on how the internet works in Europe.
The most contentious provisions from the drafts of the directive, Articles 11 and 13, still remain in the final version of the bill, though Article 13 has been renamed Article 17.
Article 11 & Article 13
Article 11, also called the “link tax,” mandates that links to web pages and articles can only be posted or shared on other platforms with a license.
While there are some exceptions, Article 11 is expected to hit news aggregators like Google News hard, because it will let publishers charge them for displaying snippets of news stories.
Google has said that if publishers do decide to charge license fees for their material, it will be forced to scale down the content it shows on Google News and potentially shut the service down altogether.
While Article 11 has received a lot of criticism, the real heavy hitter is Article 13, now Article 17, which has also been called the “upload filter.”
Article 13 makes platforms like YouTube responsible for copyright infringements committed by their users. The language in the law is vague, but many think it will force these platforms to monitor uploads and block copyrighted content, or else be held liable.
People have argued that this provision could lead to automated “upload filters,” hence the nickname. These filters would scan all user content at the point of upload and block copyrighted material.
The law does not explicitly require automated filters, but many think they are inevitable. So much content is uploaded to YouTube every second that it would be essentially impossible for companies to manually review every video for copyright violations.
To make matters worse, experts have said that these filters are not ready for the market, and are likely to be error-prone or ineffective. They have also said that the technology is expensive.
While large tech companies like Facebook and YouTube could afford that technology, it would create a barrier for smaller companies trying to enter the market, because they could not.
This, in turn, would further solidify big tech companies’ market dominance.
That outcome is especially ironic, because advocates of the directive have argued that it will level the playing field between big U.S. tech companies and smaller European content creators by giving copyright holders more power over how their content is distributed.
Supporters have repeatedly argued that smaller content creators will have more power under the EUCD. Despite the predominantly negative reaction to its passage, groups from the music, publishing, and film industries have applauded the new law.
“This is a vote against content theft,” said Xavier Bouckaert, president of the European Magazine Media Association. “Publishers of all sizes and other creators will now have the right to set terms and conditions for others to re-use their content commercially, as is only fair and appropriate.”
Helen Smith, the head of the Independent Music Companies Association, called the move “a landmark day for Europe’s creators and citizens, and a significant step towards a fairer internet.”
“Platforms facilitate a unique relationship between artists and fans, and this will be given a boost as a result of this directive. It will have a ripple effect worldwide,” Smith said.
On the other side, critics of the directive argue that it is vague and will end up censoring online content, hurting free speech, and stifling innovation.
In response to the bill’s passage, YouTube thanked the creators who spoke out against Article 13 in a tweet.
A spokesperson for Google made a similar point, stating:
“The Copyright Directive is improved, but will still lead to legal uncertainty and will hurt Europe’s creative and digital economies […] The details matter, and we look forward to working with policy makers, publishers, creators, and rights holders as EU member states move to implement these new rules.”
With the passage of the law, many people in the U.S. are wondering if the directive will affect them.
While no one is entirely sure exactly how the law will affect people outside of the EU, there is a precedent for EU data protection laws influencing U.S. policy. Back in 2016, the EU passed the General Data Protection Regulation (GDPR), which set new rules for how companies manage and share personal data.
Theoretically, the GDPR would only apply to data belonging to EU citizens, but because the internet is global, nearly every online service was affected when the law was fully implemented last year.
The GDPR mandated that companies get consent before obtaining personal data, and it explicitly extended to companies outside the EU. It also imposed stricter penalties on companies for violating data privacy.
Those regulations in turn resulted in significant changes for U.S. users and forced U.S. companies to adapt. In response, companies like Google and Slack moved quickly to update their terms and contracts, and roll out new personal data tools.
The regulations have already taken a toll on U.S. tech companies.
In January, a French data protection authority announced that it fined Google $57 million for not properly disclosing how user data is collected for personalized advertisements across its services, including Google Maps and YouTube.
However, as of now, it is unclear if the EUCD will be as far-reaching as the GDPR.
See what others are saying: (The Verge) (Fortune) (Venture Beat)
Twitter Users Bash Prince Harry Over Social Media and Fortnite Comments
- A day after breaking records with the launch of his official Instagram account, Prince Harry said that social media is more addictive than drugs and alcohol.
- He also said that Fortnite “shouldn’t be allowed,” which prompted a flood of backlash from social media users.
What Did Harry Say?
Prince Harry found himself on Twitter’s bad side this week over comments he made about the dangers of social media and the popular video game Fortnite.
The comments were made on Wednesday during Harry’s visit to the YMCA in London, where he met with mental health organizations working with teens and young adults. At the conference, he said, “Growing up in today’s world, social media is more addictive than drugs and alcohol.”
“Yet it’s more dangerous because it’s normalized and there are no restrictions to it. We are in a mind-altering time,” he added.
Along with his comments about social media, Harry also angered Fortnite fans by saying, “A game like Fortnite, for instance, may not be so good for children.”
“Parents have got their hands up—they don’t know what to do about it. It’s like waiting for the damage to be done and kids turning up on your doorsteps and families being broken. That game shouldn’t be allowed.”
Video Game Addiction
The topic of video game addiction is not new by any means. Just last year, the World Health Organization added “gaming disorder” to the list of mental health conditions in its International Classification of Diseases. The listing was added in an effort to help clinical professionals define the point at which the hobby of playing video games becomes an issue.
The ICD defines the disorder as a “pattern of gaming behavior characterized by impaired control over gaming, increasing priority given to gaming over other activities to the extent that gaming takes precedence over other interests and daily activities, and continuation or escalation of gaming despite the occurrence of negative consequences.”
For gaming disorder to be diagnosed, the behavior pattern must be clear for at least one year and must be “of sufficient severity to result in significant impairment in personal, family, social, educational, occupational or other important areas of functioning,” according to the definition.
However, the WHO did note that studies “suggest that gaming disorder affects only a small proportion of people who engage in digital- or video-gaming activities.”
Because of its massive popularity, Fortnite has become the face of the conversation around gaming addiction in recent months. Earlier this month, a doctor made headlines for prescribing an 11-year-old boy a two-week ban from computer games like Fortnite and Minecraft.
According to Divorce Online, a U.K. company that offers divorce services and resources, 200 divorce petitions filed in the U.K. from January to September 2018 mentioned addiction to Fortnite and other online games as a reason for the relationship breakdown.
However, not all researchers believe the game is addictive. Andrew Reid, a doctoral researcher of serious games at Glasgow Caledonian University, told the BBC that people found the game hard to stop playing, but he warned against using the term “addictive.”
Reid argued that using that term could stigmatize regular video game players. He also added that some research showed “positive characteristics of play.”
Social media users did not react well to Harry’s claims. Many also found the timing to be odd since Harry had just joined Instagram a day earlier.
On Tuesday, Kensington Palace announced it had launched a verified Instagram account, under the handle @sussexroyal, on behalf of the Duke and Duchess of Sussex. That account went on to set a world record as the fastest to reach one million followers, doing so in five hours and 45 minutes, according to Guinness World Records.
DOJ Warns Academy About Rule Change Banning Netflix From Oscars
- The Department of Justice sent the Academy of Motion Picture Arts and Sciences a warning about a potential rule change that would limit the Oscar eligibility of Netflix and other streaming services.
- The DOJ says that the move could be a violation of antitrust law.
The Justice Department warned the Academy of Motion Picture Arts and Sciences that any attempts to prevent Netflix and other streaming services from receiving Oscar eligibility could be considered a violation of antitrust law.
Variety reported the news Tuesday, along with a copy of the DOJ’s message to Academy CEO Dawn Hudson. In the letter, dated March 21, 2019, the DOJ’s Antitrust Division chief Makan Delrahim said he was concerned the new rules would be written in a way that would “suppress competition.”
“In the event that the Academy — an association that includes multiple competitors in its membership — establishes certain eligibility requirements for the Oscars that eliminate competition without procompetitive justification, such conduct may raise antitrust concerns,” Delrahim wrote.
Delrahim specifically says that a rule change like this could violate Section 1 of the Sherman Act, which “prohibits anticompetitive agreements among competitors.”
“Accordingly, agreements among competitors to exclude new competitors can violate the antitrust laws when their purpose or effect is to impede competition by goods or services that consumers purchase and enjoy but which threaten the profits of incumbent firms,” Delrahim wrote.
Delrahim’s warning follows reports that Steven Spielberg, an Academy board member, was preparing to propose a rule change that would stop films that debut on streaming services or have limited theatrical releases from obtaining Oscar consideration.
Spielberg has been vocal about his views on streaming services and Oscar eligibility. He told ITV News last year that Netflix and other streaming services have boosted the quality of television. However, he added, “Once you commit to a television format, you’re a TV movie. … If it’s a good show—deserve an Emmy, but not an Oscar.”
“I don’t believe films that are just given token qualifications in a couple of theaters for less than a week should qualify for the Academy Award nomination,” he continued.
Netflix, in particular, grabbed a lot of attention at the Oscars this year with “Roma,” which won awards for best director, best foreign language film, and best cinematography. The company responded to word of potential rule changes on Twitter last month, without naming Spielberg.
According to Variety, an Academy spokesperson said, “We’ve received a letter from the Dept. of Justice and have responded accordingly.”
The spokesperson said that the Academy’s Board of Governors will meet on April 23 for its annual awards rules meeting. At that meeting, all branches will submit possible updates for consideration.
Read the full DOJ letter here.
See what others are saying: (Variety) (The Wall Street Journal) (Rolling Stone)
Facebook Bans White Nationalist Content on its Platform
- Facebook announced that it is banning white nationalist and separatist content on its platforms.
- Some have applauded the company, while others question whether the ban will be effective, and why this was not its policy in the first place.
- Its previous policy made a distinction between white supremacy, which was banned, and white nationalism and separatism, which was allowed.
What’s Facebook’s New Policy?
Facebook announced Wednesday that it will ban all white nationalist and separatist content from its platform.
A post, titled “Standing Against Hate,” informed users that the policy will take effect starting next week on both Facebook and Instagram. In it, the company acknowledged that other forms of hate speech, like white supremacy, were already banned. However, it explained that it had previously viewed white nationalism and separatism differently.
“Our policies have long prohibited hateful treatment of people based on characteristics such as race, ethnicity or religion — and that has always included white supremacy,” the post read. “We didn’t originally apply the same rationale to expressions of white nationalism and white separatism because we were thinking about broader concepts of nationalism and separatism — things like American pride and Basque separatism, which are an important part of people’s identity.”
According to the post, the company has spent the last several months speaking with organizations, academics, and other experts on race relations, who all said that the ideologies behind white nationalism and separatism were tied closely to white supremacy, and that a line can’t truly be drawn between them. This prompted Facebook to change its policy.
“Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and white separatism,” the company said in their post.
In addition, the tech giant said it is working to remove hateful content more quickly and efficiently. It also said that any search on white nationalism will direct users to Life After Hate, an organization founded by former extremists that provides outreach, education, and crisis intervention.
What Prompted This Change in Policy?
In the past, Facebook has received no shortage of criticism for the way it monitors hate speech. In 2018, Motherboard published leaked Facebook training documents on its hate speech policies. Those documents specifically okayed white nationalism and separatism, while drawing a line at white supremacy. Many civil rights groups disagreed with this, likely prompting the company to start the discussions that led to Wednesday’s policy update.
While they didn’t mention it in their post, the timing of this announcement also follows the recent tragedy in New Zealand that left 50 dead. In that attack, a gunman used Facebook to stream himself killing people inside two mosques. The gunman was identified as a white nationalist, and after this incident many called for Facebook to do more about the hateful and harmful rhetoric on its site.
What Do People Think About This?
Facebook’s decision has been met with as much praise as skepticism.
New Zealand’s Prime Minister, Jacinda Ardern, called it “positive” in a press conference, while also noting that these ideas have always been hate speech.
“Arguably these categories should always fall within the community guidelines of hate speech,” Ardern said. “But nevertheless it’s positive the clarification has now been made in the wake of the attack in Christchurch.”
Kristen Clarke, president of the Lawyers’ Committee for Civil Rights Under Law, an organization that lobbied Facebook on the matter, congratulated the company in a tweet, calling the change an “important victory.”
On the other side, Vera Eidelman, an attorney for the ACLU, thinks Facebook has the right sentiment but is concerned about potential unintended consequences.
“White supremacist, nationalist and separatist views are repugnant, and Facebook as a private company is well within its rights to remove such hate and bigotry from its platform,” Eidelman wrote to NPR.
“In its attempts to police the speech of over two billion people, Facebook runs the risk of censoring those that attack white nationalism, too. Further, every time Facebook makes the choice to remove content, a single company is exercising an unchecked power to silence individuals and remove them from what has become an indispensable platform.”