- As more people use Zoom for virtual gatherings, several have raised concerns about privacy issues in the app.
- One issue is that meeting hosts can save meetings to the cloud and monitor some attendee behavior.
- Many users have also experienced “zoombombers”: trolls who force their way into calls and display graphic, explicit content.
- Zoom has responded to one major criticism: its ability to share data with Facebook. Vice’s Motherboard reported on Thursday that the app could do so, and by Friday, Zoom had removed the responsible code.
As video chatting app Zoom increases in popularity while students and employees work from home, critics are afraid the app may have glaring privacy issues that users are unaware of.
Zoom has become widely used since millions of people across the country were forced inside by the coronavirus. From meetings to lectures to boozy virtual Sunday brunches, it has become the app of choice for video chatting in quarantine. Even Prime Minister Boris Johnson has used it to conduct government meetings in the U.K.
Calls on the app can be set up by a “host,” who schedules the call, but many allege that these hosts are given too much power on Zoom. The app offers tools that, depending on the subscription tier one belongs to, allow hosts to access what some may consider private information.
One feature called “attention tracking” lets the host of a meeting see if an attendee does not have Zoom in focus for more than 30 seconds. This means that if an attendee is active in a window other than Zoom (to look at other documents, message a colleague, or watch the world collapse live on Twitter) for 30 seconds, the host is made aware. The host doesn’t see what the attendee is specifically doing, just that the Zoom window has become inactive.
Still, the idea that this could happen while you are completely unaware has made a lot of people uneasy. Justin Brookman, director of privacy and technology policy at Consumer Reports, said this kind of feature should not exist.
“If you’re teleworking on a home computer, your boss shouldn’t be able to monitor what’s on your screen,” he said in an article on Consumer Reports. “Zoom should get rid of attention tracking mode, or at the very least make participants aware when it’s on.”
And this isn’t the only thing hosts can do that some see as potentially dangerous. There are several options that allow Zoom meetings to be recorded. One that some find particularly concerning is cloud recording, which is exclusively for paid subscribers and can only be done by hosts. It allows the video, audio, and a transcription of the meeting to be stored in the Zoom cloud. From there it can be accessed and downloaded by authorized employees at a company so that people who were not part of the meeting can read or watch it back.
Zoom’s issues extend past the powers a host has. There have also been reports about trolls being able to hack into Zoom meetings, something that has been called “zoombombing.” According to a report from TechCrunch, zoombombers are hopping into meetings and showing graphic content like pornography or violent imagery.
In one case, a public Zoom Work From Home Happy Hour was attacked with sexually explicit video and images. Despite the hosts’ many attempts to boot the zoombomber out of the meeting, the troll was able to re-enter under a new name. To stop the abuse, the hosts had to end the call.
That’s not the only time something like this has happened. NBC talked to a couple that read children’s books to kids stuck at home via Zoom. Ruha Benjamin, an associate professor of African American studies at Princeton University, was leading the call and told NBC that while she was reading to the kids, an image of a “chubby white man in a thong” popped up.
At first, she did not know if everyone could see it, but then a male voice began to repeatedly say the n-word for all 40 kids on the call to hear. She then had to shut the call down and told the outlet, “we knew it was a malicious, targeted thing. My husband and I are both African American.”
Virtual classrooms, religious services, and various other gatherings have also been targets of this kind of harassment. Zoombombers are able to do this for a couple of reasons. First, if a Zoom call is public, or if its link has been made public, anyone who wants to join can. Second, Zoom’s default settings allow anyone in a call to share their screen; a host does not need to grant an attendee access. Some of this can be changed in Zoom’s advanced settings if a user knows to look for it, but otherwise these defaults remain in place.
Entrepreneur Alex Miller shared a Twitter thread giving tips on how to best protect your Zoom calls from attacks like this.
You can disable the “join before host” feature so that no one can enter a chat and do something inappropriate without the host knowing. Zoom users can also add a co-host so that multiple people can remain on guard. Screen sharing can also be changed to host only.
On top of this, users can also disable file transfers and prevent removed people from joining the call again.
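For teams that schedule meetings programmatically rather than through the desktop client, the same protections can be expressed in a meeting-settings payload. The sketch below is a minimal illustration, not Zoom’s official client behavior: `join_before_host` and `waiting_room` mirror settings Zoom exposes in its REST API, while `screen_sharing_host_only` is a placeholder field name we made up for this example, since the article does not specify the API field.

```python
import json

def build_lockdown_settings():
    """Return a hypothetical meeting-settings payload applying the
    protections described above. Field names are illustrative."""
    return {
        "settings": {
            "join_before_host": False,        # attendees wait until the host arrives
            "waiting_room": True,             # host admits each participant manually
            "screen_sharing_host_only": True, # placeholder: restrict sharing to host
        }
    }

payload = build_lockdown_settings()
print(json.dumps(payload, indent=2))
```

A payload like this would typically be sent when creating or updating a meeting; the point is simply that each tip in the thread corresponds to a single boolean toggle.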
Info Sharing With Facebook
Zoom has also responded to another issue that was found within the app. A Thursday report from Vice’s Motherboard found that Zoom could send data to a company that is perhaps best known for data privacy controversies: Facebook. This could happen even if you don’t have a Facebook account.
When Zoom told Motherboard it was removing this code, the company explained that the issue stemmed from its use of Facebook’s SDK, or software development kit: a collection of code that can be used to implement app features but can also send data to third parties.
“Zoom takes its users’ privacy extremely seriously,” they said in a statement to Motherboard. “We originally implemented the ‘Login with Facebook’ feature using the Facebook SDK in order to provide our users with another convenient way to access our platform. However, we were recently made aware that the Facebook SDK was collecting unnecessary device data.”
Zoom also confirmed that the information being collected was not personal user information, but device information, which lined up with Motherboard’s findings.
See what others are saying: (The Guardian) (Forbes) (BBC)
Facebook Is Reviewing More Than 2,200 Hours of Footage for Next-Gen AI
The project, which could prove to be revolutionary, is already raising some big privacy concerns.
Facebook’s Next-Gen AI
Facebook announced Thursday that it has captured more than 2,200 hours of first-person video that it will use to train next-gen AI models.
The company said it aims to make the AI, called Ego4D, capable of understanding and identifying both real and virtual objects through a first-person perspective using smart glasses or VR headsets. In effect, that capability could enable everything from helping users remember where they placed forgotten items to recording others in secret.
Facebook listed five key scenarios the project aims to tackle and gave real-world examples of how each may look for people who will eventually use the AI.
- “What happened when?” With that scenario, Facebook gave the example, “Where did I leave my keys?”
- “What am I likely to do next?” There, Facebook gave the example, “Wait, you’ve already added salt to this recipe.”
- “What am I doing?” For example, “What was the main topic during class?”
- “Who said what when?” For example, “What was the main topic during class?”
- “Who is interacting with whom?” For example, “Help me better hear the person talking to me at this noisy restaurant.”
Facebook said the amount of footage it has collected is 20 times greater than any other data set used by the company.
In the wake of recent controversy surrounding Facebook, it’s important to note that the footage wasn’t harvested from users. Instead, the company said it and 13 university partners compiled the footage from more than 700 participants around the world.
Still, that hasn’t alleviated all privacy concerns.
In an article titled, “Facebook is researching AI systems that see, hear, and remember everything you do,” The Verge writer James Vincent said that although the project’s guidelines seem practical, “the company’s interest in this area will worry many.”
Vincent pointed out that the AI announcement says nothing about privacy or about removing data for people who may not want to be recorded.
A Facebook spokesperson later assured Vincent that privacy safeguards will be introduced to the public in the future.
“For example, before AR glasses can enhance someone’s voice, there could be a protocol in place that they follow to ask someone else’s glasses for permission, or they could limit the range of the device so it can only pick up sounds from the people with whom I am already having a conversation or who are in my immediate vicinity,” the spokesperson said.
The reception hasn’t been all negative: some believe the tech could be revolutionary for helping people around the house, as well as for teaching robots to learn about their surroundings more quickly.
FDA Issues Its First E-Cigarette Authorization Ever
The authorization only applies to tobacco-flavored products, as the FDA simultaneously rejected several sweet and fruit-flavored e-cigarette cartridges.
FDA Approves E-Cigarette
The U.S. Food and Drug Administration approved an e-cigarette pen sold under the brand name Vuse on Tuesday, as well as two tobacco-flavored cartridges that can be used with the pen.
This marks the first time the FDA has ever authorized the use of vaping products. In a news release, the agency said it made the decision because “the authorized products’ aerosols are significantly less toxic than combusted cigarettes based on available data.”
“The manufacturer’s data demonstrates its tobacco-flavored products could benefit addicted adult smokers who switch to these products — either completely or with a significant reduction in cigarette consumption — by reducing their exposure to harmful chemicals,” the agency added.
The company that owns Vuse, R.J. Reynolds Vapor Company, also submitted several sweet and fruit-flavored pods for review; however, those were all rejected. While the FDA did not specify which flavors it rejected, it did note that it has yet to make a decision on whether to allow menthol-flavored e-cigarettes, including ones sold under Vuse.
FDA Is Reviewing All Vape Products Still on the Market
In January 2020, the FDA banned the sale of pre-filled pods with sweet and fruity flavors. While other e-cigarette-related products, including some forms of flavored vapes, were allowed to stay on the market for the time being, they could only do so if they were submitted for FDA review.
The FDA’s primary issue with fruity cartridges stems from statistics showing that those pods more easily hook new smokers, particularly underage smokers.
In fact, in its approval of the Vuse products, the FDA said it only authorized them because it “determined that the potential benefit to smokers who switch completely or significantly reduce their cigarette use, would outweigh the risk to youth, provided the applicant follows post-marketing requirements aimed at reducing youth exposure and access to the products.”
While some have cheered the FDA’s decision, not everyone was enthusiastic. Many critics cited a joint FDA-CDC study in which nearly 11% of teens who said they vape also indicated regularly using Vuse products.
See what others are saying: (Business Insider) (Wall Street Journal) (The Washington Post)
Kaiser Permanente Health Workers Vote To Authorize Strike Over Pay, Staffing, and Safety
The vote could inspire unionized Kaiser workers in other states to eventually approve strikes of their own.
Workers Approve Strike
More than 24,000 unionized nurses and other healthcare workers at Kaiser Permanente hospitals in California and Oregon voted Monday to authorize strikes against the company.
The tens of thousands of workers who cast a ballot make up 86% of the Kaiser-based healthcare professionals represented by either the United Nurses Associations of California/Union of Health Care Professionals (UNAC/UHCP) or the Oregon Federation of Nurses and Health Professionals. An overwhelming 96% voted to approve the strike.
According to both unions, the list of workers includes nurses, pharmacists, midwives, and physical therapists.
The vote itself does not automatically initiate a strike; rather, it gives the unions the power to call a strike amid stalled contract negotiations between Kaiser and the unions. If the unions ultimately tell their members to begin striking, they will need to give a 10-day warning.
The California and Oregon contracts expired Sept. 30, but several more Kaiser-based union contracts are rapidly approaching their expiration dates as well. That includes contracts for more than 50,000 workers in Colorado, Georgia, Hawaii, Maryland, Virginia, Washington state, and D.C. Notably, the demands from those workers echo many of the demands made by California and Oregon’s union members.
At the center of this potential strike are three issues: staffing problems, safety concerns, and proposed revisions to Kaiser’s payment system. For months, nurses have been publicly complaining about long shifts spurred by the COVID-19 pandemic, staffing shortages, and an over-reliance on contract nurses.
Because of that, they’re seeking to force Kaiser to commit to hiring more staff, as well as boost retention.
But the main catalyst for any looming strikes is pay. According to UNAC/UHCP, Kaiser wants to implement a two-tier payment system, which would decrease earnings by 26% to 39% for employees hired from 2023 onward. On top of that, those new employees would see fewer health protections.
The unions and their members worry such a system could lead to an increased feeling of resentment among workers since they would be paid different rates for performing the same job. They also worry it could exacerbate retention and hiring issues already faced by the hospital system.
Additionally, the workers want to secure 4% raises for each of the next three years, but Kaiser is currently willing to offer only 1%, citing a need to reduce labor costs to remain competitive.