Expect Increased Post Removals While Social Media Sites Combat Coronavirus Misinformation

  • Major tech companies like Google, Twitter, Reddit, and Facebook have pledged to work together to combat the spread of coronavirus misinformation. 
  • But as thousands of their employees shift to working from home, sites like YouTube and Twitter said they are relying more on automated enforcement systems. 
  • Because of this, users should expect delays in responses from support teams and a potential increase in posts removed by mistake.

Companies Unite 

Top social media and technology companies are teaming up to help fight off the online spread of fake news about the coronavirus.

As you’ve probably noticed, the internet has been heavily saturated with information about COVID-19 in recent weeks, some of it accurate and some of it not. The World Health Organization has labeled this phenomenon an “infodemic”: an overabundance of information that makes it hard for people to find trustworthy sources and reliable guidance when they need it.

So to face this pressing issue, Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube released a joint statement Monday saying they are working closely together in their response efforts. 

“We’re helping millions of people stay connected while also jointly combating fraud and misinformation about the virus, elevating authoritative content on our platforms, and sharing critical updates in coordination with government healthcare agencies around the world,” the companies said. 

“We invite other companies to join us as we work to keep our communities healthy and safe.”

How Are They Doing This?

Over the past few weeks, each company has announced and updated its own strategy for tackling misinformation.

Facebook and Instagram, for instance, already banned ads and listings selling medical face masks, with product director Robert Leathern promising more action if the company sees “people trying to exploit this public health emergency.” 

On top of that, the sites rolled out automatic pop-up messages featuring information from the World Health Organization and other health authorities, among other measures. 

[Image: Facebook’s pop-up message featuring health authority information. Source: Facebook Newsroom]

Facebook COO Sheryl Sandberg even said that Facebook – which has a policy of not fact-checking political ads – would remove coronavirus misinformation shared by politicians, celebrities, and private groups. 

Meanwhile, Reddit has set up a banner on its site linking to the r/coronavirus community for timely discussions and information from the Centers for Disease Control and Prevention. Reddit said it will hold AMA (Ask Me Anything) chats with public health experts but warned that it may also “apply a quarantine to communities that contains hoax or misinformation content.” A quarantine removes the community from search results, warns users that it may contain misinformation, and requires an explicit opt-in.

Expect Issues, Especially on Twitter and YouTube 

Twitter, on the other hand, said it will monitor tweets during the outbreak, but warned that it’s relying more on automated systems to help enforce rules while its employees practice social distancing and work from home.

“This might result in some mistakes,” the company said. “We’re meeting daily to see what changes we need to make.” 

The platform stressed that it will not permanently suspend accounts based solely on automated enforcement systems. It also said it would review its rules in the context of COVID-19 and consider “the way in which they may need to evolve to account for new account behavior.” 

Similarly, Google warned customers to expect some changes while its employees work remotely. In a blog post, it said all of its products will be active, but “some users, advertisers, developers and publishers may experience delays in some support response times for non-critical services, which will now be supported primarily through our chat, email, and self-service channels.” 

YouTube specifically warned that there may actually be an increase in videos removed for policy violations because, like Twitter, it is depending more on automated systems.

“As a result of the new measures we’re taking, we will temporarily start relying more on technology to help with some of the work normally done by reviewers,” YouTube said in its blog post. 

“This means automated systems will start removing some content without human review, so we can continue to act quickly to remove violative content and protect our ecosystem, while we have workplace protections in place.”

However, YouTube explained that it will only issue “strikes” against uploads where it has “high confidence” that the video violates its terms. Creators can still appeal content they feel was removed in error, but again, they should expect delays in responses.

The company also noted that it will be more cautious about what content gets promoted, including live streams. And in some cases, it said unreviewed content “may not be available via search, on the homepage, or in recommendations.”

See what others are saying: (CNBC) (TechCrunch) (Business Insider)

Amazon and Instacart Workers Launch Strike, Demanding Safer Conditions During Pandemic

  • Amazon workers in Staten Island are staging a walkout, demanding that the warehouse be thoroughly cleaned. This comes after workers said not enough sanitary measures were taken when a coronavirus case was confirmed at the facility, though Amazon says it has increased deep cleanings.
  • Instacart workers nationwide are also striking, saying they will not fulfill orders until they receive sanitation supplies, hazard pay, and better access to paid sick leave.
  • Delivery service workers have been facing uphill battles when it comes to sick leave, with many companies offering two weeks of paid leave only if an employee tests positive for the virus, despite tests being few and far between.

N.Y. Amazon Workers Strike

Amazon and Instacart workers are striking, demanding their respective companies give them tools to work in safer and cleaner conditions as they become essential figures during coronavirus lockdowns.

Frustrations at an Amazon facility in Staten Island, New York, grew after one of the workers there tested positive for coronavirus. Employees, concerned that not enough safety measures were taken afterward, staged a Monday walkout demanding that the building be thoroughly cleaned while they are not present.

“The plan is to cease all operations until the building is closed and sanitized,” employee Christian Smalls, who is actually in a 14-day precautionary quarantine recommended by Amazon, told CNN. “We’re not asking for much. We’re asking the building to be closed and sanitized, and for us to be paid [during that process].”

Early counts suggest that around 100 workers attended the walkout. Videos show participants carrying signs, with many standing apart from one another to practice social distancing. Some signs contained phrases like “Our Health Is Also Essential.”

Smalls also told CNN that Amazon is not being transparent with the public about how many workers at the Staten Island warehouse have tested positive. He believes that the facility, which he called “breeding grounds for this pandemic,” could have as many as seven cases. 

An Amazon spokesperson told CNBC that Smalls’ claims were “misleading” and that the facility was being deep cleaned on an increased basis. Amazon as a whole is also giving workers who are diagnosed with the virus, or who come into contact with someone diagnosed, an extra two weeks of paid sick leave so they can quarantine. Workers are also seeing a pay boost of $2 an hour through April.

However, the Staten Island facility is just one of many Amazon locations seeing issues amid the coronavirus outbreak. At least 13 Amazon warehouses have reported confirmed cases of the coronavirus, and a warehouse in Queens was temporarily closed after a case was confirmed there. According to CNBC, workers at numerous facilities have been forced to ration essentials like hand sanitizer and disinfectant wipes, if any are available at all.

Instacart Strike and Delivery Workers

Workers at Instacart are staging a nationwide strike of their own starting Monday. Contractors for the grocery delivery service say they want increased hazard pay of $5 per order, a better tipping system, more paid sick leave, and to be provided sanitation supplies like disinfecting wipes and hand sanitizer. Some of these gig workers say they will not fulfill orders until their demands are met.

In response, Instacart said it will distribute hand sanitizer and change its tipping settings. For its workers, who go into crowded grocery stores every day so people in lockdowns can stay inside, this is still not enough.

“Actions speak louder than words,” Instacart worker Sarah Polito told NPR. “You can tell us that we’re these household heroes and that you appreciate us. But you’re not actually, they’re not showing it. They’re not taking these steps to give us the precautions. They’re not giving us hazard pay.”

Instacart workers are among many delivery service workers who do not feel their employer is properly responding to the coronavirus. While companies like DoorDash, Postmates, and Uber have given two weeks of paid sick leave to workers diagnosed with the coronavirus, employees are still left in a tricky place because there are just not enough tests. Those who think they might have COVID-19 but cannot access a test are out of luck.

One DoorDash worker told the L.A. Times that after he felt shortness of breath and had a cough, a doctor wrote him a note saying he should quarantine for two weeks.

“Patient may return to work on April 3, 2020 pending management of pain and symptoms,” the note read. “Patient is instructed to self quarantine to avoid acquiring viral illness or exposure to others.”

Upon receiving this note, DoorDash denied his sick pay request because the doctor did not outright mention the coronavirus. He was then suspended for two weeks without pay for safety reasons.

Support for Strikes

Because so many workers feel they are not getting the benefits they deserve during this outbreak, the striking Amazon and Instacart workers have drawn widespread support. Online, many encouraged people not to use those services in solidarity with the workers.

Rep. Alexandria Ocasio-Cortez also tweeted about it. “One of the best ways to thank essential workers is to support the fight to improve their lives,” she wrote. 

See what others are saying: (Forbes) (Reuters) (Vice)

Zoom’s Sudden Popularity Draws Attention to App’s Privacy Risks

  • As more and more people use Zoom for virtual gatherings, several have raised concerns about privacy issues in the app.
  • One issue is that meeting hosts have the ability to save meetings to a cloud and monitor some behavior of attendees.
  • Many using the app have also experienced “zoombombers,” trolls who make their way into calls and show graphic or explicit content.
  • Zoom has responded to one major criticism: its ability to share data with Facebook. Vice’s Motherboard reported Thursday that the app could do so, and by Friday Zoom had removed the code responsible.

Host Capabilities

As video chatting app Zoom increases in popularity while students and employees work from home, critics are afraid the app may have glaring privacy issues that users are unaware of. 

Zoom has become widely used since millions of people across the country were forced inside because of the coronavirus. From meetings to lectures to virtual boozy Sunday brunches, it has become the app of choice for video chatting in quarantine. Even Prime Minister Boris Johnson has used it to conduct government meetings in the U.K.

Calls on the app can be set up by a “host” who initiates scheduling the call, but many allege that these hosts are given too much power on Zoom. The app offers tools that, depending on the subscription tier one belongs to, allow hosts to access what some may consider private information.

One feature called “attention tracking” lets the host of a meeting see if an attendee does not have Zoom in focus for more than 30 seconds. This means that if an attendee is active in a window other than Zoom, whether to look at other documents, message a colleague, or watch the world collapse live on Twitter, the host is made aware of it. The host doesn’t see what the attendee is specifically doing, just that the Zoom window has become inactive.

Still, the idea of this happening while you could be completely unaware has made a lot of people uneasy. Justin Brookman, director of privacy and technology policy at Consumer Reports, said this kind of feature should not exist.

“If you’re teleworking on a home computer, your boss shouldn’t be able to monitor what’s on your screen,” he said in an article on Consumer Reports. “Zoom should get rid of attention tracking mode, or at the very least make participants aware when it’s on.” 

And this isn’t the only thing hosts can do that some see as potentially dangerous. There are several options that allow Zoom meetings to be recorded. One that some find particularly concerning is cloud recording, which is exclusively for paid subscribers and can only be done by hosts. It allows the video, audio, and a transcription of the meeting to be stored in the Zoom cloud. From there it can be accessed and downloaded by authorized employees at a company so that people who were not part of the meeting can read or watch it back. 

“Zoombombing”

Zoom’s issues extend past the powers a host has. There have also been reports of trolls barging into Zoom meetings, something that has been called “zoombombing.” According to a report from TechCrunch, zoombombers are hopping into meetings and showing graphic content like pornography or violent imagery.

In one case, a public Zoom Work From Home Happy Hour was attacked with sexually explicit video and images. Despite the hosts’ many attempts to boot the zoombomber out of the meeting, the intruder was able to re-enter under a new name. To stop this from happening, the hosts had to end the call.

That’s not the only time something like this has happened. NBC talked to a couple that read children’s books to kids stuck at home via Zoom. Ruha Benjamin, an associate professor of African American studies at Princeton University, was leading the call and told NBC that while she was reading to the kids, an image of a “chubby white man in a thong” popped up.

At first, she did not know if everyone could see it, but then a male voice began to repeatedly say the n-word for all 40 kids on the call to hear. She then had to shut the call down and told the outlet, “We knew it was a malicious, targeted thing. My husband and I are both African American.”

Virtual classrooms, religious services, and various other gatherings have also been targets of this kind of harassment. Zoombombers are able to do this for a couple of reasons. First, if a Zoom call is public or its link has been made public, anyone who wants to join can. Second, Zoom’s default settings allow anyone in a call to share their screen; a host does not need to grant an attendee access. Some of this can be changed in Zoom’s advanced settings if a user knows to look for it, but otherwise these defaults stay in place.

Entrepreneur Alex Miller shared a Twitter thread giving tips on how best to protect your Zoom calls from intrusions like this.

You can disable the “join before host” feature so that no one can enter a chat and do something inappropriate without the host knowing. Zoom users can also add a co-host so that multiple people can remain on guard. Screen sharing can also be changed to host only.

On top of this, users can also disable file transfers and prevent removed people from joining the call again.
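
For hosts who set up meetings through Zoom’s developer tools rather than the app, several of the same protections can be applied when a meeting is created. Below is a minimal sketch against Zoom’s v2 REST API create-meeting endpoint; the token is a placeholder, the co-host address is hypothetical, and the exact setting names should be double-checked against Zoom’s current API reference.

```python
# Sketch: create a Zoom meeting with some of the hardening tips above applied.
# Assumes Zoom's v2 REST API and a valid API token (placeholder below);
# verify setting names against Zoom's API reference before relying on this.
import requests

API_TOKEN = "YOUR_ZOOM_API_TOKEN"  # placeholder credential

meeting = {
    "topic": "Team check-in",
    "type": 2,  # a scheduled (non-instant) meeting
    "settings": {
        "join_before_host": False,  # no one enters before the host arrives
        "waiting_room": True,       # host admits each participant manually
        "mute_upon_entry": True,    # participants join muted
        "alternative_hosts": "cohost@example.com",  # hypothetical co-host
    },
}

resp = requests.post(
    "https://api.zoom.us/v2/users/me/meetings",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=meeting,
)
resp.raise_for_status()
print("Join URL:", resp.json()["join_url"])
```

Host-only screen sharing and file-transfer restrictions live in the account and in-meeting settings rather than on the meeting object itself, so those still need to be toggled in Zoom’s settings dashboard.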

Info Sharing With Facebook

Zoom has also responded to another issue found within the app. A Thursday report from Vice’s Motherboard found that Zoom could send data to a company perhaps best known for data privacy controversies: Facebook. This could happen even if you don’t have a Facebook account.

One day after the report came out, Zoom removed the code that allowed this. According to Motherboard, Zoom would connect to Facebook’s Graph API, the main way developers get data in or out of Facebook. Zoom would then notify Facebook when a user opened the app and give details about the device being used, including the model, location, phone carrier, and a “unique advertiser identifier created by the user’s device which companies can use to target a user with advertisements.” Nothing in Zoom’s privacy policy explicitly addressed this.
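
For context, the Graph API itself is the ordinary channel through which apps read and write Facebook data with a user’s permission. The sketch below shows what a routine, sanctioned Graph API read looks like; the access token is a placeholder, and it is meant only to illustrate the interface, not the specific traffic Motherboard observed.

```python
# Sketch: a routine Facebook Graph API read, to illustrate the interface.
# The access token is a placeholder; nothing here reproduces the payload
# Motherboard observed coming from Zoom's app.
import requests

ACCESS_TOKEN = "YOUR_FACEBOOK_ACCESS_TOKEN"  # placeholder credential

resp = requests.get(
    "https://graph.facebook.com/v6.0/me",
    params={"fields": "id,name", "access_token": ACCESS_TOKEN},
)
resp.raise_for_status()
print(resp.json())  # e.g. {"id": "...", "name": "..."}
```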

When Zoom told Motherboard it was removing this code, it explained that the issue had to do with its SDK, or software development kit: a bundle of prewritten code that developers use to implement app features, but which can also send data to third parties.

“Zoom takes its users’ privacy extremely seriously,” they said in a statement to Motherboard. “We originally implemented the ‘Login with Facebook’ feature using the Facebook SDK in order to provide our users with another convenient way to access our platform. However, we were recently made aware that the Facebook SDK was collecting unnecessary device data.”

Zoom also confirmed that the information being collected was not personal user information, but device information, which lined up with Motherboard’s findings. 

See what others are saying: (The Guardian) (Forbes) (BBC)

TikTok Suppressed Content From “Ugly,” Poor, and Disabled Users, Report Says

  • A report from The Intercept claimed that in an effort to attract new users, TikTok had policies in place for its moderators to suppress content from users deemed “ugly,” poor, or disabled.
  • The documents also showed that TikTok outlined bans to be placed on users who criticized “political or religious leaders” or “endangered national honor.”
  • Sources said the policies were created last year and were in use as recently as the end of 2019.
  • A TikTok spokesperson said the majority of the guidelines were never in use or are no longer in use, but the ones targeting users’ appearances were aimed at preventing bullying.
  • However, the documents reviewed by The Intercept do not explicitly mention anti-bullying efforts.

Leaked Policies

Newly released documents reveal that TikTok directed its moderators to censor posts from users believed to be poor, disabled, or “ugly,” among other guidelines.

The leaked policies were first reported by The Intercept on Monday, exposing an inconsistency within the highly popular video-sharing app, whose tagline is “Real People. Real Videos.” However, based on this recently exposed information, it seems TikTok only wants to funnel certain types of “real people” onto the “For You” feed, its page dedicated to promoting select content to its millions of users.

The Intercept noted that the documents appear to have originally been printed in Chinese — the language of the app’s home country — but had been translated into sometimes-choppy English for global distribution. Of the multiple pages of policies the news outlet posted, one outlines characteristics that the app considers undesirable such as “abnormal body shape, chubby, have obvious beer belly, obese, or too thin.” 

The rules also encourage restrictions of “ugly facial looks” including wrinkles, noticeable scars, and physical disabilities. Criteria for the backgrounds of videos were also included in the policies, discouraging “shabby and dilapidated” environments including slums, dirty and messy settings, and old decorations. 

As far as the reasoning for these guidelines, TikTok wrote: “If the character’s appearance or the shooting environment is not good, the video will be much less attractive, not [worthy] to be recommended to new users.” 

A spokesperson for the app told The Verge that the guidelines reported by The Intercept are regional and “were not for the U.S. market.”

The other policies that The Intercept released detail more types of content that should be banned across the platform, including defamation or criticism of “civil servants, political or religious leaders,” as well as family members of these leaders. Moderators were instructed to punish any users who “endang[er] national honor” or distort “local or other countries’ history,” using the May 1998 riots in Indonesia, the Cambodian genocide, and the Tiananmen Square incidents as examples.

The Intercept reported that its sources said the policies were created last year and were in use until at least late 2019.

TikTok’s Response

A spokesperson for the app told The Intercept that “most of” these exposed rules “are either no longer in use, or in some cases appear to never have been in place.”

The spokesperson also told the outlet that the policies geared toward suppressing disabled, seemingly impoverished, or unattractive users “represented an early blunt attempt at preventing bullying, but are no longer in place, and were already out of use when The Intercept obtained them.”

These intentions have been pushed by the platform in the past — in December, TikTok admitted that at one point they prevented the spread of videos from disabled, LGBTQ, or overweight users, claiming it was an attempt to curb bullying. 

A TikTok spokesperson told The Intercept that these newly-released policies “appear to be the same or similar” as the ones revealed in December, but the guidelines published this week are notably different — they don’t mention anti-bullying motives and instead focus on how to appeal to more users. 

Criticism of TikTok’s Moderation and App’s Response

TikTok has faced scrutiny in the past for appearing to censor certain content, including pro-democracy protests in Hong Kong and criticism of the Chinese government.  

It’s also worth noting that the app has been under fire for its data-sharing policies, and the U.S. government has even suggested it poses a national security threat.

TikTok said this week that it will stop using China-based moderators to review overseas content, noting that these employees hadn’t been monitoring content in U.S. regions. 

And in further attempts to counter the criticism of their moderation tactics, TikTok announced last week that it plans to open a “transparency center” in Los Angeles in May. This center will allow outside observers to better understand how the platform moderates its content.

See what others are saying: (The Intercept) (The Verge) (Business Insider)
