- Controversial data gathering company Palantir Technologies went public Wednesday. Leading up to this move, many reports detailed why the group is so heavily criticized.
- One of the biggest reports came from BuzzFeed News, which obtained documents that show how officers in the Los Angeles Police Department are trained to use Palantir Gotham.
- This tool allows officers to create a database of people who may or may not be suspected of crimes, and then search that database by name, race, gender, tattoos, people they know, and more.
- Within the last few weeks, both Amnesty International and Rep. Alexandria Ocasio-Cortez have also lodged concerns of their own about Palantir, ranging from human rights violations to a lack of transparency with the public.
BuzzFeed News Report
As Palantir Technologies went public on Wednesday, so did numerous reports detailing why the data gathering and analysis company is so controversial.
One of the biggest reports came from BuzzFeed News, which obtained documents revealing how the Los Angeles Police Department used Palantir Gotham, a highly contentious law enforcement tool, to create a sweeping database. According to their report, this includes information like the names of those who have been arrested, convicted, or suspected of a crime, but goes much further.
“Maybe a police officer was told a person knew a suspected gang member. Maybe an officer spoke to a person who lived near a crime “hot spot,” or was in the area when a crime happened. Maybe a police officer simply had a hunch. The context is immaterial,” reporter Caroline Haskins wrote. “Once the LAPD adds a name to Palantir’s database, that person becomes a data point in a massive police surveillance system.”
The LAPD uses this system in an effort to quickly search for and find criminals, but it has unsurprisingly faced backlash from those who see Palantir as a privacy overreach. Some believe that, especially as the country debates shrinking police budgets, tools like this cost taxpayers too much money. Others believe the lack of transparency between police departments and the public about the use of Palantir and other forms of data surveillance is dangerous.
According to BuzzFeed News, LAPD’s use of Palantir has little to no public oversight or regulation. The program “helped the LAPD construct a vast database that indiscriminately lists the names, addresses, phone numbers, license plates, friendships, romances, jobs of Angelenos — the guilty, innocent, and those in between.”
The LAPD has been using Palantir for ten years and, between 2015 and 2016, paid for it with money it received from the federal government, but it’s unclear whether that is how it has always been funded.
Palantir collects information from multiple sources, including the DMV and photos collected at traffic lights and toll booths. The database has one billion pictures of license plates from those locations so that police can see where and when your car was photographed, then click to learn more about you.
On top of this, the report notes that dozens of California police departments, sheriff’s offices, airport police, universities, and school districts signed onto data sharing agreements with the LAPD between 2012 and 2017. As a result, these agencies have had to send daily copies of their police records, license plate readings, and dispatch information to the LAPD so officers can put that data into Palantir.
A document of user metrics obtained by BuzzFeed shows that as of 2017, there were 5,000 registered LAPD user accounts on Palantir, which is over 40% of the department’s officers. In 2016, the LAPD ran more than 60,000 searches in support of more than 10,000 cases.
The outlet also obtained training documents that detail specifically how officers are being instructed to use Palantir. Police can search for people not only by name, but by race, gender, gang membership, tattoos, scars, friends and family. These searches will return a list of names along with associated addresses, emails, vehicles, warrants, mugshots, surveillance pictures, and even personal connections like friends, family members, neighbors and coworkers.
Criticisms of Palantir and Policing
One of the largest criticisms of Palantir comes from those who fear it will exacerbate the racism that already exists in policing.
“The federal government shouldn’t be spending money on unproven surveillance software or crime prediction programs that target Black and Hispanic Americans and don’t actually reduce crime,” Sen. Ron Wyden (D-Ore.) said.
Many of these concerns here are backed up by sociologist Sarah Brayne, who studied and observed how the LAPD uses data surveillance over the course of seven years. In July, she wrote about Palantir and the LAPD for the Los Angeles Times and said officers had built a “sprawling database of information.”
“In the digital age, data are a form of capital. If only the police and tech companies have access to the data and analytic software, independent evaluation of how this capital is being leveraged in law enforcement is impossible,” Brayne wrote. She also believed that racial bias issues can easily arise in the application of these tools.
“Analytic software also can exacerbate inequalities under the veneer of objectivity,” she said. “Surveillance tools such as license plate readers are deployed based on past department crime statistics, which means that “predictive policing” data systems disproportionately point to Black and brown people and neighborhoods for heavier policing and future data gathering.”
Controversies as Palantir Goes Public
Palantir was started by its CEO Alex Karp and a handful of other founders, including Peter Thiel. In addition to being used by the LAPD, it has also been used by the New York Police Department, the CIA, Immigration and Customs Enforcement, and most recently, by the Department of Health and Human Services for data processing during the COVID-19 pandemic.
BuzzFeed’s report exposing its use in the LAPD is just the latest piece of criticism it has faced heading into its move to go public. Amnesty International put out a release accusing Palantir of human rights abuses, specifically citing its relationship with ICE.
“Palantir touts its ethical commitments, saying it will never work with regimes that abuse human rights abroad. This is deeply ironic, given the company’s willingness stateside to work directly with ICE, which has used its technology to execute harmful policies that target migrants and asylum-seekers,” wrote Michael Kleinman, the Director of Amnesty International’s Silicon Valley Initiative.
According to Amnesty International, ICE has used Palantir to arrest parents and caregivers of unaccompanied children and to plan mass raids, leading to children being separated from their caregivers. Palantir allows ICE to identify, share information on, investigate, and track migrants and asylum seekers, which aids operations like these.
Palantir also faced criticism earlier this month from Rep. Alexandria Ocasio-Cortez (D-N.Y.), who wrote to the Securities and Exchange Commission detailing her concerns about Palantir going public. In the letter, she claimed that the company was not transparent enough with the public.
“Palantir reports several pieces of information about its company – and omits others – that we believe require further disclosure and examination, as they present material risks of which potential investors should be aware and national security concerns of which the public should be aware,” she wrote.
Ocasio-Cortez highlighted several areas of concern, including the fact that Palantir has worked with foreign governments known to engage in corrupt practices and human rights violations, its failure to provide adequate information about one of its board members, and the potential data security implications of its relationship with HHS.
“Palantir must provide greater transparency to potential investors about the data protections or lack thereof associated with its government contracts, and further information about the U.S. and non-U.S. government entities for which it is working on data related to the COVID-19 crisis,” she wrote. “This is of paramount importance to investors and the public, as Palantir Chief Operating Officer Shyam Sankar recently characterized the company’s work for multiple governments to manage and process data in response to the COVID-19 crisis as the new ‘driving thrust of the company.’”
See what others are saying: (BuzzFeed News) (Business Insider) (Washington Post)
Key Takeaways From the Explosive “Facebook Papers”
Among the most startling revelations, The Washington Post reported that CEO Mark Zuckerberg personally agreed to silence dissident users in Vietnam after the country’s ruling Communist Party threatened to block access to Facebook.
“The Facebook Papers”
A coalition of 17 major news organizations published a series of articles known as “The Facebook Papers” on Monday in what some are now calling Facebook’s biggest crisis ever.
The papers are a collection of thousands of redacted internal documents that were originally turned over to the U.S. Securities and Exchange Commission by former product manager Frances Haugen earlier this year.
The outlets that published pieces Monday reportedly first obtained the documents at the beginning of October and spent weeks sifting through their contents. Below is a breakdown of many of their findings.
Facebook Is Hemorrhaging Teens
The Verge said the internal documents it reviewed showed that teen users on Facebook’s app have fallen by 13% since 2019, with the company expecting another staggering falloff of 45% over the next two years. Meanwhile, the company reportedly expects its app usage among 20- to 30-year-olds to decline by 4% in the same timeframe.
Facebook also found that fewer teens are signing up for new accounts. Similarly, the age group is moving away from using Facebook Messenger.
In an internal presentation, Facebook data scientists directly told executives that the “aging up issue is real” and warned that if the app’s average age continues to increase as it’s doing right now, it could disengage younger users “even more.”
“Most young adults perceive Facebook as a place for people in their 40s and 50s,” they explained. “Young adults perceive content as boring, misleading, and negative. They often have to get past irrelevant content to get to what matters.”
The researchers added that users under 18 also appear to be migrating from the platform because of concerns related to privacy and its impact on their wellbeing.
Facebook Opted Not To Remove “Like” and “Share” Buttons
In its article, The New York Times cited documents that indicated Facebook wrestled with whether or not it should remove the “like” and “share” buttons.
The original argument behind getting rid of the buttons was multifaceted. There was a belief that their removal could decrease the anxiety teens feel, since social media pressures many to want to achieve a certain number of likes per post. There was also the hope that a decrease in this pressure could lead to teens posting more. Beyond that, Facebook needed to address growing concerns about the lightning-quick spread of misinformation.
Ultimately, its hypotheses failed. According to the documents reviewed by The Times, hiding the “like” button didn’t alleviate the social anxiety teens feel. It also didn’t lead them to post more.
In fact, it actually led to users engaging with posts and ads less, and as a result, Facebook decided to keep the buttons.
Despite that, in 2019, researchers for Facebook still asserted that the platform’s “core product mechanics” were allowing misinformation and hate to flourish.
“The mechanics of our platform are not neutral,” they said in the internal documents.
Facebook Isn’t Really Regulating International Hate
That’s largely because Facebook does not employ a significant number of moderators who speak the languages of many countries where the platform is popular. As a result, its moderators are largely unable to understand cultural contexts.
Theoretically, Facebook could solidify an AI-driven solution to catching harmful content spreading among different languages, but it still hasn’t been able to perfect that technology.
“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” Eliza Campbell, director of the Middle East Institute’s Cyber Program, told the AP. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”
According to The Atlantic, as little as 6% of Arabic-language hate content on Instagram was detected by Facebook’s systems as recently as late last year. Another document detailed by the outlet found that “of material posted in Afghanistan that was classified as hate speech within a 30-day range, only 0.23 percent was taken down automatically by Facebook’s tools.”
According to The Atlantic, “employees blamed company leadership for insufficient investment” in both instances.
Facebook Was Lackluster on Human Trafficking Crackdowns Until Revenue Threats
In another major revelation, The Atlantic reported that these documents appear to confirm that the company only took strong action against human trafficking after Apple threatened to pull Facebook and Instagram from its App Store.
Initially, the outlet said employees participated in a concerted and successful effort to identify and remove sex trafficking-related content; however, the company did not disable or take down associated profiles.
As a result, in 2019 the BBC uncovered a broad network of human traffickers operating an active ring on the platform. In response, Facebook took some additional action, but according to the internal documents, “domestic servitude content remained on the platform.”
Later in 2019, Apple finally issued its threat. After reviewing the documents, The Atlantic said that threat alone — and not any new information — is what finally motivated Facebook to “[kick it] into high gear.”
“Was this issue known to Facebook before BBC enquiry and Apple escalation? Yes,” one internal message reportedly reads.
Zuckerberg Personally Made Vietnam Decision
According to The Washington Post, CEO Mark Zuckerberg personally made the decision last year to have Facebook agree to demands set forth by Vietnam’s ruling Communist Party.
The party had previously threatened to disconnect Facebook in the country if it didn’t silence anti-government posts.
“In America, the tech CEO is a champion of free speech, reluctant to remove even malicious and misleading content from the platform,” the article’s authors wrote. “But in Vietnam, upholding the free speech rights of people who question government leaders could have come with a significant cost in a country where the social network earns more than $1 billion in annual revenue.”
“Zuckerberg’s role in the Vietnam decision, which has not been previously reported, exemplifies his relentless determination to ensure Facebook’s dominance, sometimes at the expense of his stated values,” they added.
In the coming days and weeks, there will likely be more questions regarding Zuckerberg’s role in the decision, as well as inquiries into whether the SEC will take action against him directly.
Still, Facebook has already started defending its reasoning for making the decision. It told The Post that the choice to censor was justified “to ensure our services remain available for millions of people who rely on them every day.”
In the U.S., Zuckerberg has repeatedly claimed to champion free speech while testifying before lawmakers.
Among other findings, the Financial Times reported that Facebook employees urged management not to exempt notable figures such as politicians and celebrities from moderation rules.
Outside of these documents, similar to Haugen, another whistleblower submitted an affidavit to the SEC on Friday alleging that Facebook allows hate to go unchecked.
As the documents leaked, Haugen spent Monday testifying before a committee of the British Parliament.
See what others are saying: (Business Insider) (Axios) (Protocol)
NFL Reaches Agreement to End Race-Norming, New Testing Formula Remains Unclear
The practice, which was adopted by the league in the ’90s, assumes that Black players operate with a lower cognitive function than players of other races.
NFL Ends Race-Norming
The U.S. District Court in Philadelphia uploaded a confidential proposed settlement between the NFL and former players on Wednesday that confirms the league’s plans to abolish race-norming.
The NFL previously halted the use of race-norming in June as part of a $1 billion settlement with retirees Kevin Henry and Najeh Davenport, but details of the deal weren’t supposed to be released until it underwent review by a federal judge.
In fact, it currently seems as if someone in the court accidentally uploaded the document, as it was deleted hours later.
Among the details reaped from the settlement, it was revealed that the league plans to modify cognitive tests over the next year as part of a short-term change regarding how it verifies dementia-related brain injury claims. Previously, it used race-norming — the practice of assuming Black players have a lower cognitive function than players of other races — to test whether retirees seeking financial compensation had sustained brain injuries from the sport.
If the settlement goes through, Black retirees who were originally denied compensation will also have their tests automatically re-evaluated over the course of the next year.
The NFL has additionally agreed to develop a long-term replacement system with the help of experts and players’ lawyers.
Still, the exact formula behind these new testing metrics, which will be designed as race-neutral per the agreement, is unknown. For example, retirees don’t know how the new changes will affect their scores or if they might potentially need to take additional tests before becoming eligible for compensation.
The Issue With Race-Norming
Race-norming was first adopted by the league back in the ’90s, and in theory, it was meant to help offer better treatment to Black retirees who had developed dementia from brain injuries related to football.
Essentially, the thought process was to take socioeconomic factors into account since Black people come from disadvantaged communities at higher rates; however, that quickly became a major issue since Black players were held to a higher standard of proof than players of other races.
For example, since the tests assumed Black people have less cognitive skill, Black retirees seeking claims needed to score lower to be granted compensation. That then led to many having their claims denied because they tested too high — even if they would have tested within the range to receive compensation had they been white.
See what others are saying: (Associated Press) (The Washington Post) (ABC News)
Facebook Plans Name Change as Part of Rebrand
News of the alleged rebrand came the same day Facebook was fined nearly $70 million for breaching U.K. orders related to the company’s 2020 acquisition of Giphy, as well as the same day it reached a $14 million discrimination settlement with the U.S. Justice Department.
Facebook Allegedly Plans To Debut New Name
Facebook, Inc. is planning to announce a new company name next week, according to a Tuesday report from The Verge.
The rebrand would reportedly align with CEO Mark Zuckerberg’s vision of building the company around a full-fledged “metaverse,” a virtual reality space where users can interact with one another in real time.
The new name is currently unknown, but it would likely not affect the social media platform Facebook. Instead, the change would target its parent company, Facebook, Inc. — similar to how Alphabet became the parent company of Google following a 2015 restructure.
On Monday, Facebook said it is currently planning to hire 10,000 people in the European Union to help make its metaverse goal a reality.
Still, plans for the metaverse have not gone uncriticized, especially given the recent weeks of increased scrutiny regarding Facebook’s dominance over people’s daily lives. “Metaverse” was first coined in 1992 by American author Neal Stephenson in his novel “Snow Crash,” which depicts a corporate-owned virtual world.
Twitter CEO Jack Dorsey even cited one user who referenced the novel, agreeing that Stephenson was right in his prediction of “a dystopian corporate dictatorship.”
Facebook To Pay Fine and Settlement
Also on Tuesday, regulators in the United Kingdom fined Facebook nearly $70 million for breaching orders related to its 2020 acquisition of Giphy.
While that’s only a fraction of the $400 million it paid to purchase Giphy, UK regulators warned that they could eventually order Facebook to sell off Giphy if they find proof the acquisition has damaged competition.
In the U.S., the Justice Department said the same day that Facebook has agreed to pay up to $14.25 million to settle discrimination allegations brought by the agency under the Trump administration.
In December, the department accused the company of favoring foreign workers with temporary work visas over what it described as thousands of qualified U.S. workers.
“Facebook is not above the law and must comply with our nation’s federal civil rights laws, which prohibit discriminatory recruitment and hiring practices,” Kristen Clarke, an assistant attorney general at the department, said.
Notably, this settlement is the largest ever collected by the department’s Civil Rights Division.