- The Federal Trade Commission has fined Facebook $5 billion for violating consumers’ privacy and imposed new accountability measures and restrictions on Facebook, WhatsApp, and Instagram.
- The fine is the largest the FTC has ever imposed on a tech company and the biggest penalty ever brought against a company for privacy violations. It comes after a yearlong investigation into Facebook’s involvement in the Cambridge Analytica data breach.
- The FTC found that Facebook “deceived” its users by allowing their data to be accessed by apps their friends used, despite telling the public it had stopped that practice.
- The FTC also alleges that Facebook enforced data sharing policies based “on whether Facebook benefited financially from its arrangements with the developer.”
The U.S. Federal Trade Commission (FTC) announced Wednesday that it was fining Facebook a record-breaking $5 billion for privacy violations as well as instituting sweeping privacy restrictions and oversight measures.
The penalty is by far the largest fine the FTC has ever imposed on a tech company. It is also the biggest penalty ever levied on a company for privacy violations, according to the FTC announcement.
The announcement comes after a yearlong investigation of Facebook over privacy violations.
That investigation began shortly after The New York Times and The Observer of London reported that Facebook had allowed the British political consulting firm Cambridge Analytica to harvest the data of millions of Facebook users without their knowledge and to build voter profiles from those users’ data without their consent.
Cambridge Analytica obtained the data through a third-party personality quiz app called “This Is Your Digital Life.”
Although it has been estimated that only around 270,000 people used the app, the users who gave the app permission to access their data also gave it permission to do the same for all of their Facebook friends.
Because each app user exposed an average of more than 300 friends’ accounts, Cambridge Analytica ended up collecting the personal information of nearly 87 million Facebook users, the vast majority of whom had never given the firm permission to access their information, or even used the app.
Beyond Cambridge Analytica, the FTC’s investigation also expanded to other privacy concerns, such as the tech giant’s data-sharing arrangements with third-party apps and device makers, which Facebook users might not have understood or been aware of.
All of that culminated in the report and announcement released Wednesday by the FTC.
In addition to the $5 billion fine, the FTC’s announcement also stated that Facebook must “submit to new restrictions and a modified corporate structure that will hold the company accountable for the decisions it makes about its users’ privacy.”
That requirement, the FTC says, is mandated “to settle Federal Trade Commission charges that the company violated a 2012 FTC order by deceiving users about their ability to control the privacy of their personal information.”
The FTC goes on to describe the 2012 order in question, saying that it explicitly “prohibited Facebook from making misrepresentations about the privacy or security of consumers’ personal information, and the extent to which it shares personal information.”
The 2012 FTC order also required that Facebook “maintain a reasonable privacy program that safeguards the privacy and confidentiality of user information.”
Violations of the 2012 Order
The FTC goes on to outline how Facebook specifically violated the 2012 order. The statement describes numerous instances, but the most significant examples center on privacy disclosures to users.
For example, in 2012, Facebook put a disclosure on its Privacy Settings page telling users that the information they shared with their friends could also be shared with the third-party apps their friends used.
The FTC claims that four months later, Facebook removed the disclosure “even though it was still sharing data from an app user’s Facebook friends with third-party developers.”
Then in 2014, Facebook announced it would stop letting third-party developers collect data about the friends of app users. However, the FTC says that Facebook separately told those developers they could continue to access that data until April 2015.
Even then, Facebook still waited “until at least June 2018 to stop sharing user information with third-party apps used by their Facebook friends,” the FTC said.
The statement then goes on to say, “Facebook did not screen the developers or their apps before granting them access to vast amounts of user data.”
Facebook also claimed there were consequences for policy violations by third parties, but it “did not enforce such policies consistently and often based enforcement of its policies on whether Facebook benefited financially from its arrangements with the developer,” the FTC alleged.
New Restrictions & Overhauls
In addition to spelling out Facebook’s privacy violations, the FTC announcement also included some of the new restrictions and oversight measures that Facebook will have to comply with under the settlement.
To ensure accountability at the board level, the order creates “an independent privacy committee of Facebook’s board of directors,” removing “unfettered control by Facebook’s CEO Mark Zuckerberg over decisions affecting user privacy.”
The settlement also requires the company to “designate compliance officers who will be responsible for Facebook’s privacy program,” and gives a third-party assessor more power to evaluate Facebook’s privacy programs.
Regarding the restrictions the settlement imposes, Facebook will now have to conduct a privacy review of any new or modified product or service before it can be implemented.
It will also be required to document any data breach involving 500 or more users.
The FTC statement goes on to include a laundry list of additional requirements, such as exercising more oversight over third-party apps and encrypting user passwords.
Notably, it also requires Facebook to “establish, implement, and maintain a comprehensive data security program.”
Significantly, these new restrictions and accountability measures will also apply to the Facebook-owned companies WhatsApp and Instagram.
The decision was approved by the FTC’s commissioners in a 3-to-2 vote earlier this month, with the three Republican commissioners voting to approve the settlement and the two Democratic commissioners voting to oppose it.
In a statement to The New York Times, the three Republican commissioners, including agency chairman Joseph Simons, said the settlement “will provide significant deterrence not just to Facebook, but to every other company that collects or uses consumer data.”
However, the two Democratic commissioners argued that the settlement did not go far enough. They called the $5 billion fine a slap on the wrist for Facebook, which took in $55.8 billion in revenue last year alone; the fine amounts to less than a tenth of that figure.
They also pointed out that the settlement did nothing to change or restrict Facebook’s ability to collect and share its users’ personal information.
“The proposed settlement does little to change the business model or practices that led to the recidivism,” Democratic Commissioner Rohit Chopra wrote in his dissenting statement. “Nor does it include any restrictions on the company’s mass surveillance or advertising tactics.”
The Democratic commissioners also reportedly objected to the settlement because they wanted to take the case to court and felt that Facebook executives should have been held personally accountable.
The Republican commissioners, however, have said that the agency did not have a strong enough case to take to court.
See what others are saying: (The Chicago Tribune) (The Washington Post) (The New York Times)
TikTok Suppressed Content From “Ugly,” Poor, and Disabled Users, Report Says
- A report from The Intercept claimed that in an effort to attract new users, TikTok had policies in place for its moderators to suppress content from users deemed “ugly,” poor, or disabled.
- The documents also showed that TikTok outlined bans to be placed on users who criticized “political or religious leaders” or “endangered national honor.”
- Sources said the policies were created last year and were in use as recently as the end of 2019.
- A TikTok spokesperson said the majority of the guidelines were never in use or are no longer in use, but the ones targeting users’ appearances were aimed at preventing bullying.
- However, the documents reviewed by The Intercept do not explicitly mention anti-bullying efforts.
Newly released documents reveal that TikTok directed its moderators to suppress posts from users believed to be poor, disabled, or “ugly,” among other guidelines.
The leaked policies were first reported by The Intercept on Monday, exposing an inconsistency within the highly popular video-sharing app, whose tagline is “Real People. Real Videos.” Based on this newly exposed information, however, it seems TikTok only wants to funnel certain types of “real people” onto its “For You” feed, the page dedicated to promoting select content to its millions of users.
The Intercept noted that the documents appear to have originally been printed in Chinese — the language of the app’s home country — but had been translated into sometimes-choppy English for global distribution. Of the multiple pages of policies the news outlet posted, one outlines characteristics that the app considers undesirable such as “abnormal body shape, chubby, have obvious beer belly, obese, or too thin.”
The rules also encourage restrictions of “ugly facial looks” including wrinkles, noticeable scars, and physical disabilities. Criteria for the backgrounds of videos were also included in the policies, discouraging “shabby and dilapidated” environments including slums, dirty and messy settings, and old decorations.
As for the reasoning behind these guidelines, TikTok wrote: “If the character’s appearance or the shooting environment is not good, the video will be much less attractive, not [worthy] to be recommended to new users.”
A spokesperson for the app told The Verge that the guidelines reported by The Intercept are regional and “were not for the U.S. market.”
The other policies that The Intercept released detail more types of content that should be banned across the platform, including defamation or criticism of “civil servants, political or religious leaders,” as well as family members of those leaders. Moderators were instructed to punish any users who “endang[er] national honor” or distort “local or other countries’ history,” using the May 1998 riots in Indonesia, the Cambodian genocide, and the Tiananmen Square incident as examples.
The Intercept reported that sources told them the policies were created last year and were in use until at least late 2019.
A spokesperson for the app told The Intercept that “most of” these exposed rules “are either no longer in use, or in some cases appear to never have been in place.”
The spokesperson also told the outlet that the policies geared toward suppressing disabled, seemingly impoverished, or unattractive users “represented an early blunt attempt at preventing bullying, but are no longer in place, and were already out of use when The Intercept obtained them.”
The platform has offered this justification before: in December, TikTok admitted that at one point it prevented the spread of videos from disabled, LGBTQ, or overweight users, claiming the practice was an attempt to curb bullying.
A TikTok spokesperson told The Intercept that these newly-released policies “appear to be the same or similar” as the ones revealed in December, but the guidelines published this week are notably different — they don’t mention anti-bullying motives and instead focus on how to appeal to more users.
Criticism of TikTok’s Moderation and App’s Response
TikTok has faced scrutiny in the past for appearing to censor certain content, including pro-democracy protests in Hong Kong and criticism of the Chinese government.
It’s also worth noting that the app has been under fire for its data-sharing policies, and the U.S. government has even suggested it poses a national security threat.
TikTok said this week that it will stop using China-based moderators to review overseas content, noting that these employees hadn’t been monitoring content in U.S. regions.
And in a further attempt to counter criticism of its moderation tactics, TikTok announced last week that it plans to open a “transparency center” in Los Angeles in May. The center will allow outside observers to better understand how the platform moderates its content.
See what others are saying: (The Intercept) (The Verge) (Business Insider)
Expect Increased Post Removals While Social Media Sites Combat Coronavirus Misinformation
- Major tech companies like Google, Twitter, Reddit, and Facebook have pledged to work together to combat the spread of coronavirus misinformation.
- But as thousands of their employees shift to working from home, sites like YouTube and Twitter said they are relying more on automated enforcement systems.
- Because of this, users should expect delays in responses from support teams and a potential increase in posts removed by mistake.
Top social media and technology companies are teaming up to help fight off the online spread of fake news about the coronavirus.
As you’ve probably noticed, the internet has been heavily saturated with information about COVID-19 in recent weeks, some of it accurate and some of it not. The World Health Organization has labeled this phenomenon an “infodemic”: an overabundance of information that makes it hard for people to find trustworthy sources and reliable guidance when they need it.
To address this pressing issue, Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube released a joint statement Monday saying they are working closely together on their response efforts.
“We’re helping millions of people stay connected while also jointly combating fraud and misinformation about the virus, elevating authoritative content on our platforms, and sharing critical updates in coordination with government healthcare agencies around the world,” the companies said.
“We invite other companies to join us as we work to keep our communities healthy and safe.”
How Are They Doing This?
Over the past few weeks, each company has announced and updated its own strategy for tackling misinformation.
Facebook and Instagram, for instance, already banned ads and listings selling medical face masks, with product director Robert Leathern promising more action if the company sees “people trying to exploit this public health emergency.”
On top of that, the sites rolled out automatic pop-up messages featuring information from the World Health Organization and other health authorities, among other measures.
Facebook COO Sheryl Sandberg even said that Facebook – which has a policy of not fact-checking political ads – would remove coronavirus misinformation shared by politicians, celebrities, and private groups.
Meanwhile, Reddit has set up a banner on its site linking to the r/coronavirus community for timely discussions and information from the Centers for Disease Control and Prevention. Reddit said it will hold AMA (Ask Me Anything) chats with public health experts but warned that it may also “apply a quarantine to communities that contains hoax or misinformation content. A quarantine will remove the community from search results, warn the user that it may contain misinformation, and require an explicit opt-in.”
Expect Issues, Especially on Twitter and YouTube
Twitter, on the other hand, said it will monitor tweets during the outbreak but warned that it is relying more on automated systems to help enforce its rules while its employees practice social distancing and work from home.
“This might result in some mistakes,” the company said. “We’re meeting daily to see what changes we need to make.”
The platform stressed that it will not permanently suspend accounts based solely on automated enforcement systems. It also said it would review its rules in the context of COVID-19 and consider “the way in which they may need to evolve to account for new account behavior.”
Similarly, Google warned customers to expect some changes while its employees work remotely. In a blog post, it said all of its products will be active, but “some users, advertisers, developers and publishers may experience delays in some support response times for non-critical services, which will now be supported primarily through our chat, email, and self-service channels.”
YouTube specifically warned that there may actually be an increase in videos removed for policy violations because, like Twitter, it is depending more on automated systems.
“As a result of the new measures we’re taking, we will temporarily start relying more on technology to help with some of the work normally done by reviewers,” YouTube said in its blog post.
“This means automated systems will start removing some content without human review, so we can continue to act quickly to remove violative content and protect our ecosystem, while we have workplace protections in place.”
However, YouTube explained that it will only issue “strikes” against uploads where it has “high confidence” that the video violates its terms. Creators can still appeal content they feel was removed in error, but again, they should expect delays in responses.
The company also noted that it will be more cautious about what content gets promoted, including live streams. And in some cases, it said unreviewed content “may not be available via search, on the homepage, or in recommendations.”
See what others are saying: (CNBC) (TechCrunch) (Business Insider)
Internet Reacts to “Fleets,” Tweets that Disappear After 24 Hours
- On Wednesday, Twitter announced that it is testing a new feature in Brazil that allows users to publish content that will disappear after 24 hours.
- The temporary posts, called “Fleets,” were created in the hope that users will share more of their “fleeting thoughts.” Depending on how the test goes, Fleets may be rolled out in other countries later on.
- Some are mocking the feature’s name, which matches the brand name of an enema. Others are disappointed that Twitter is rolling out this change rather than others.
- But some are excited about the new addition and think it is a good idea.
Twitter announced on Wednesday that it is testing a new feature that allows content to disappear after 24 hours, similar to the “stories” component across other social media platforms.
The temporary posts — called “fleets” — are text-based but can also be accompanied by photos, videos, and GIFs. Fleets can be viewed by tapping on somebody’s profile picture, but they cannot be retweeted. Similar to Instagram stories, any replies or reactions to fleets are sent as direct messages to the creator rather than posted publicly.
Currently, the test is only available to Twitter users in Brazil. It was introduced there first because Brazil is one of the countries where people talk the most on the platform, according to Twitter product manager Mo Al Adham. Depending on how the test goes, fleets may be made available in other countries.
Kayvon Beykpour, the company’s product lead, revealed the rationale behind the new feature in a series of tweets on Wednesday.
“People often tell us that they don’t feel comfortable Tweeting because Tweets can be seen and replied to by anybody, feel permanent and performative,” Beykpour wrote.
“We’re hoping that Fleets can help people share the fleeting thoughts that they would have been unlikely to Tweet,” Beykpour added. “This is a substantial change to Twitter, so we’re excited to learn by testing it (starting with the rollout today in Brazil) and seeing how our customers use it.”
Fleets have the potential to ease users’ worries about what they post online, as old tweets have cost people jobs and drawn public backlash, but it’s still unclear exactly how low-risk these posts are. After reaching the end of their 24-hour life cycle, fleets will be kept by Twitter for a limited time in case of any rule violations.
“We’ll maintain a copy of fleets for a limited time after they are deleted to enforce any rule violations and so people can appeal enforcement actions,” Aly Pavela, a communications manager at Twitter, told Wired.
After this review period, fleets will be deleted from the company’s systems, according to CNN. But that still raises the question of whether the disappearing content can simply be screenshotted and saved that way, a detail Twitter hasn’t formally addressed yet.
Upon hearing of Twitter’s test, some were quick to crack jokes about the new feature’s name, which happens to match the brand name of a widely-used enema.
“Tw*tter moments are gonna be called fleets? like the enemas? why? cuz it’s shitty??? LOL,” one user wrote.
“Fleets? LoL That’s the brand name for an enema,” Sherree Worrell (@Sherree_W) tweeted on March 4, 2020 (pic.twitter.com/2p3ST2UuE0).
As Fleet enemas are widely recognized among the LGBTQ community, several users questioned how the new feature’s name was greenlighted.
Twitter was quick to respond to the mockery, and a message from its communications team made clear that the company is indeed familiar with the name.
“Yes we know what fleets means. thanks – gay intern,” the team tweeted from their official account.
Others had a more serious response to the temporary posts feature, expressing their disappointment in the company for rolling this out instead of other changes that users have been requesting for years. On Wednesday night, the hashtag #RIPTwitter was trending.
However, some thought the idea was a good one that would boost the company’s success and engagement.