

Expect Increased Post Removals While Social Media Sites Combat Coronavirus Misinformation


  • Major tech companies like Google, Twitter, Reddit, and Facebook have pledged to work together to combat the spread of coronavirus misinformation. 
  • But as thousands of their employees shift to working from home, sites like YouTube and Twitter said they are relying more on automated enforcement systems. 
  • Because of this, users should expect delays in responses from support teams and a potential increase in posts removed by mistake.

Companies Unite 

Top social media and technology companies are teaming up to help fight off the online spread of fake news about the coronavirus.

As you’ve probably noticed, the internet has been heavily saturated with information about COVID-19 in recent weeks, some of it accurate and some of it not. The World Health Organization has labeled this phenomenon an “infodemic,” an over-abundance of information that makes it hard for people to find trustworthy sources and reliable guidance when they need it.

So to face this pressing issue, Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube released a joint statement Monday saying they are working closely together in their response efforts. 

“We’re helping millions of people stay connected while also jointly combating fraud and misinformation about the virus, elevating authoritative content on our platforms, and sharing critical updates in coordination with government healthcare agencies around the world,” the companies said. 

“We invite other companies to join us as we work to keep our communities healthy and safe.”

How Are They Doing This?

As for how they plan to tackle misinformation, each company has announced and updated its own strategy over the past few weeks.

Facebook and Instagram, for instance, already banned ads and listings selling medical face masks, with product director Robert Leathern promising more action if the company sees “people trying to exploit this public health emergency.” 

On top of that, the sites rolled out automatic pop-up messages featuring information from the World Health Organization and other health authorities, among other measures. 

[Image: Facebook’s COVID-19 information pop-up. Source: Facebook Newsroom]

Facebook COO Sheryl Sandberg even said that Facebook – which has a policy of not fact-checking political ads – would remove coronavirus misinformation shared by politicians, celebrities, and private groups. 

Meanwhile, Reddit has set up a banner on its site linking to the r/coronavirus community for timely discussions and information from the Centers for Disease Control and Prevention. Reddit said it will hold AMA (Ask Me Anything) chats with public health experts but warned that it may also “apply a quarantine to communities that contain hoax or misinformation content. A quarantine will remove the community from search results, warn the user that it may contain misinformation, and require an explicit opt-in.”

Expect Issues, Especially on Twitter and YouTube 

Twitter, on the other hand, said it will monitor tweets during the outbreak, but warned that it’s relying more on automated systems to help enforce its rules while its employees practice social distancing and work from home.

“This might result in some mistakes,” the company said. “We’re meeting daily to see what changes we need to make.” 

The platform stressed that it will not permanently suspend accounts based solely on automated enforcement systems. It also said it would review its rules in the context of COVID-19 and consider “the way in which they may need to evolve to account for new account behavior.” 

Similarly, Google warned customers to expect some changes while its employees work remotely. In a blog post, it said all of its products will be active, but “some users, advertisers, developers and publishers may experience delays in some support response times for non-critical services, which will now be supported primarily through our chat, email, and self-service channels.” 

YouTube specifically warned that there may actually be an increase in videos removed for policy violations because, like Twitter, it is depending more on automated systems.

“As a result of the new measures we’re taking, we will temporarily start relying more on technology to help with some of the work normally done by reviewers,” YouTube said in its blog post. 

“This means automated systems will start removing some content without human review, so we can continue to act quickly to remove violative content and protect our ecosystem, while we have workplace protections in place.”

However, YouTube explained that it will only issue “strikes” against uploads where it has “high confidence” that the video violates its terms. Creators can still appeal content they feel was removed in error, but again, they should expect delays in responses.

The company also noted that it will be more cautious about what content gets promoted, including live streams. And in some cases, it said unreviewed content “may not be available via search, on the homepage, or in recommendations.”

See what others are saying: (CNBC) (TechCrunch) (Business Insider)


Key Takeaways From the Explosive “Facebook Papers”


Among the most startling revelations, The Washington Post reported that CEO Mark Zuckerberg personally agreed to silence dissident users in Vietnam after the country’s ruling Communist Party threatened to block access to Facebook.


“The Facebook Papers” 

A coalition of 17 major news organizations published a series of articles known as “The Facebook Papers” on Monday in what some are now calling Facebook’s biggest crisis ever. 

The papers are a collection of thousands of redacted internal documents that were originally turned over to the U.S. Securities and Exchange Commission by former product manager Frances Haugen earlier this year.

The outlets that published pieces Monday reportedly first obtained the documents at the beginning of October and spent weeks sifting through their contents. Below is a breakdown of many of their findings.

Facebook Is Hemorrhaging Teens 

Both Bloomberg and The Verge reported that Facebook is struggling to retain its hold over teens.  

For example, The Verge said the internal documents it reviewed showed that since 2019, teen users on Facebook’s app have fallen by 13%, with the company expecting another staggering falloff of 45% over the next two years. Meanwhile, the company reportedly expects its app usage among 20- to 30-year-olds to decline by 4% in the same timeframe.

Facebook also found that fewer teens are signing up for new accounts. Similarly, the age group is moving away from using Facebook Messenger.

In an internal presentation, Facebook data scientists directly told executives that the “aging up issue is real” and warned that if the app’s average user age continues to increase at its current rate, the platform could disengage younger users “even more.”

“Most young adults perceive Facebook as a place for people in their 40s and 50s,” they explained. “Young adults perceive content as boring, misleading, and negative. They often have to get past irrelevant content to get to what matters.” 

The researchers added that users under 18 also seem to be migrating from the platform because of concerns about privacy and the platform’s impact on their wellbeing.

Facebook Opted Not To Remove “Like” and “Share” Buttons

In its article, The New York Times cited documents indicating that Facebook wrestled with whether to remove the “like” and “share” buttons.

The original argument for getting rid of the buttons was multi-faceted. There was a belief that their removal could ease the anxiety teens feel, since social media pressures many to chase a certain number of likes per post. There was also the hope that easing this pressure could lead teens to post more. Beyond that, Facebook needed to tackle growing concerns about the lightning-quick spread of misinformation.

Ultimately, its hypotheses failed. According to the documents reviewed by The Times, hiding the “like” button didn’t alleviate the social anxiety teens feel. It also didn’t lead them to post more. 

In fact, it actually led to users engaging with posts and ads less, and as a result, Facebook decided to keep the buttons. 

Despite that, in 2019, researchers for Facebook still asserted that the platform’s “core product mechanics” were allowing misinformation and hate to flourish.

“The mechanics of our platform are not neutral,” they said in the internal documents.

Facebook Isn’t Really Regulating International Hate

The Atlantic, WIRED, and The Associated Press all reported that terrorist content and hate speech continue to spread with ease on Facebook.

That’s largely because Facebook does not employ a significant number of moderators who speak the languages of many countries where the platform is popular. As a result, its moderators often cannot grasp the cultural context of the posts they review.

In theory, Facebook could build an AI-driven system to catch harmful content across different languages, but it has yet to perfect that technology.

“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” Eliza Campbell, director of the Middle East Institute’s Cyber Program, told the AP. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”

According to The Atlantic, as little as 6% of Arabic-language hate content on Instagram was detected by Facebook’s systems as recently as late last year. Another document detailed by the outlet found that “of material posted in Afghanistan that was classified as hate speech within a 30-day range, only 0.23 percent was taken down automatically by Facebook’s tools.”

According to The Atlantic, “employees blamed company leadership for insufficient investment” in both instances.

Facebook Was Lackluster on Human Trafficking Crackdowns Until Revenue Threats

In another major revelation, The Atlantic reported that these documents appear to confirm that the company only took strong action against human trafficking after Apple threatened to pull Facebook and Instagram from its App Store. 

Initially, the outlet said employees participated in a concerted and successful effort to identify and remove sex trafficking-related content; however, the company did not disable or take down associated profiles. 

Because of that, in 2019 the BBC uncovered a broad network of human traffickers operating an active ring on the platform. In response, Facebook took some additional action, but according to the internal documents, “domestic servitude content remained on the platform.”

Later in 2019, Apple finally issued its threat. After reviewing the documents, The Atlantic said that threat alone — and not any new information — is what finally motivated Facebook to “[kick it] into high gear.” 

“Was this issue known to Facebook before BBC enquiry and Apple escalation? Yes,” one internal message reportedly reads. 

Zuckerberg Personally Made Vietnam Decision

According to The Washington Post, CEO Mark Zuckerberg personally made the decision last year to have Facebook agree to demands set forth by Vietnam’s ruling Communist Party.

The party had previously threatened to disconnect Facebook in the country if it didn’t silence anti-government posts.

“In America, the tech CEO is a champion of free speech, reluctant to remove even malicious and misleading content from the platform,” the article’s authors wrote. “But in Vietnam, upholding the free speech rights of people who question government leaders could have come with a significant cost in a country where the social network earns more than $1 billion in annual revenue.” 

“Zuckerberg’s role in the Vietnam decision, which has not been previously reported, exemplifies his relentless determination to ensure Facebook’s dominance, sometimes at the expense of his stated values,” they added.

In the coming days and weeks, there will likely be more questions regarding Zuckerberg’s role in the decision, as well as inquiries into whether the SEC will take action against him directly. 

Still, Facebook has already started defending its reasoning for making the decision. It told The Post that the choice to censor was justified “to ensure our services remain available for millions of people who rely on them every day.”

In the U.S., Zuckerberg has repeatedly claimed to champion free speech while testifying before lawmakers.

Other Revelations

Among other findings, the Financial Times reported that Facebook employees urged management not to exempt notable figures such as politicians and celebrities from moderation rules. 

Meanwhile, reports from Politico, CNN, NBC, and a host of other outlets cover documents related to Facebook’s market dominance, how much it downplayed its role in the January 6 insurrection at the U.S. Capitol, and more.

Beyond these documents, another whistleblower followed Haugen’s lead and submitted an affidavit to the SEC on Friday alleging that Facebook allows hate to go unchecked.

As the documents leaked, Haugen spent Monday testifying before a committee of the British Parliament.

See what others are saying: (Business Insider) (Axios) (Protocol)



NFL Reaches Agreement to End Race-Norming, New Testing Formula Remains Unclear


The practice, which was adopted by the league in the ’90s, assumes that Black players operate with a lower cognitive function than players of other races. 


NFL Ends Race-Norming

The U.S. District Court in Philadelphia uploaded a confidential proposed settlement between the NFL and former players on Wednesday that confirms the league’s plans to abolish race-norming.

The NFL previously halted the use of race-norming in June as part of a $1 billion settlement with retirees Kevin Henry and Najeh Davenport, but details of the deal weren’t supposed to be released until it underwent review by a federal judge.

In fact, it currently seems as if someone at the court accidentally uploaded the document, as it was deleted hours later.

Among the details gleaned from the settlement, it was revealed that the league plans to modify cognitive tests over the next year as part of a short-term change to how it verifies dementia-related brain injury claims. Previously, it used race-norming — the practice of assuming Black players have a lower cognitive function than players of other races — to test whether retirees seeking financial compensation had sustained brain injuries from the sport.

If the settlement goes through, Black retirees whose claims were originally denied will also have their tests automatically re-evaluated over the next year.

The NFL has additionally agreed to develop a long-term replacement system with the help of experts and players’ lawyers.

Still, the exact formula behind these new testing metrics, which will be designed as race-neutral per the agreement, is unknown. For example, retirees don’t know how the new changes will affect their scores or if they might potentially need to take additional tests before becoming eligible for compensation.

The Issue With Race-Norming

Race-norming was first adopted by the league back in the ’90s, and in theory, it was meant to help offer better treatment to Black retirees who had developed dementia from brain injuries related to football.

Essentially, the thought process was to take socioeconomic factors into account since Black people come from disadvantaged communities at higher rates; however, that quickly became a major issue since Black players were held to a higher standard of proof than players of other races. 

For example, since the tests assumed Black people have lower cognitive function, Black retirees seeking claims needed to score lower to be granted compensation. That led to many having their claims denied because they tested too high, even though they would have scored within the qualifying range had they been white.

See what others are saying: (Associated Press) (The Washington Post) (ABC News)



Facebook Plans Name Change as Part of Rebrand


News of the alleged rebrand came the same day Facebook was fined nearly $70 million for breaching U.K. orders related to the company’s 2020 acquisition of Giphy, as well as the same day it reached a $14 million discrimination settlement with the U.S. Justice Department.


Facebook Allegedly Plans To Debut New Name

Facebook, Inc. is planning to announce a new company name next week, according to a Tuesday report from The Verge. 

The rebrand would reportedly align with CEO Mark Zuckerberg’s vision to shape the company into a full-fledged “metaverse” — AKA a virtual reality space where users can interact with one another in real time.

The new name is currently unknown, but it would likely not affect the social media platform Facebook. Instead, the change would target its parent company, Facebook, Inc. — similar to how Alphabet became the parent company of Google following a 2015 restructure. 

On Monday, Facebook said it is currently planning to hire 10,000 people in the European Union to help make its metaverse goal a reality. 

Still, plans for the metaverse have not gone uncriticized, especially amid the recent weeks of increased scrutiny over Facebook’s dominance in people’s daily lives. The term “metaverse” was coined in 1992 by American author Neal Stephenson in his novel “Snow Crash,” which depicts a corporate-owned virtual world.

Twitter CEO Jack Dorsey even cited one user who referenced the novel, agreeing that Stephenson was right in his prediction of “a dystopian corporate dictatorship.”

Facebook To Pay Fine and Settlement

Also on Tuesday, regulators in the United Kingdom fined Facebook nearly $70 million for breaching orders related to its 2020 acquisition of Giphy. 

While that’s only a fraction of the $400 million Facebook paid to purchase Giphy, U.K. regulators warned that they could eventually order the company to sell off Giphy if they find proof the acquisition has damaged competition.

In the U.S., the Justice Department said the same day that Facebook has agreed to pay up to $14.25 million to settle discrimination allegations brought by the agency under the Trump administration. 

In December, the department accused the company of favoring foreign workers with temporary work visas over what it described as thousands of qualified U.S. workers. 

“Facebook is not above the law and must comply with our nation’s federal civil rights laws, which prohibit discriminatory recruitment and hiring practices,” Kristen Clarke, an assistant attorney general at the department, said. 

Notably, this settlement is the largest ever collected by the department’s Civil Rights Division.

See what others are saying: (The Verge) (Engadget) (The New York Times)
