- Apple has removed several apps from its App Store that allowed parents to track and limit their children’s screen time.
- The New York Times published a report featuring complaints from makers of competing apps, who suggested Apple was engaging in anti-competitive behavior.
- Apple denied this claim and said the removal was because the apps were abusing technology that enables third-party access to sensitive information.
Apple said Sunday that it has removed several parental control apps from its App Store because they put user privacy and security at risk, not because the company wants to stifle competition.
Apple removed apps that it says were abusing a technology called Mobile Device Management (MDM), which gives app developers access to sensitive information. “The information includes things like user location, browsing history, saved images and video, and more,” the company said in a statement on its website.
Apple released its statement in response to a New York Times report published Saturday, which suggested that the company removed the apps for anti-competitive reasons. “Apple has always supported third-party apps on the App Store that help parents manage their kids’ devices,” Apple said.
“Contrary to what The New York Times reported over the weekend, this isn’t a matter of competition. It’s a matter of security. In this app category, and in every category, we are committed to providing a competitive, innovative app ecosystem.”
The company went on to say that it had given the affected apps 30 days to submit updated software that did not violate its policies. Many of the companies made the required changes and kept their apps on the store; those that did not were removed.
New York Times Report
On Saturday, the New York Times released a report that included statements from several app makers who complained that Apple had removed their products from the App Store after it launched its own similar tools.
The latest version of Apple’s iOS mobile operating system includes a feature called Screen Time, which allows users to see how much time they spend on their phones. The tool also offers content and privacy restrictions that parents often use to control how much time their children can spend on certain apps and websites.
According to the Times, over the past year Apple has removed or restricted at least 11 of the 17 most downloaded screen-time and parental-control apps. The report also says Apple has cracked down on several other lesser-known apps and, in some cases, forced companies to remove parental-control features from their apps.
The newspaper noted that on Thursday, the apps Kidslox and Qustodio filed a complaint with the European Union’s competition office. The two companies were among the most popular parental-control apps on the market, but Kidslox says business has dropped drastically since Apple forced changes that made its app less useful than Apple’s own Screen Time tool.
Apple is also facing an antitrust complaint in Russia from Kaspersky Lab, a Russian cybersecurity firm that American security officials say has ties to the Russian government. The firm says Apple removed important features from its parental control app, and in addition to its antitrust complaint in Russia, it is now exploring a similar complaint in Europe.
Accusations that Apple engages in anti-competitive behavior are not new, nor are they exclusive to the Screen Time tool. In fact, Spotify and other streaming services have long complained that Apple gives its Apple Music service an unfair advantage over competitors. In March, Spotify filed an antitrust complaint against Apple with the European Union, alleging that Apple is hurting innovation and consumer choice with its Apple “tax” and restrictive rules in its App Store.
Mental Health Startup Cerebral May Have Harmed Hundreds of Patients, Leaked Documents Reveal
The company is being investigated by multiple federal agencies for its questionable practices, which have come under increasing scrutiny in recent weeks.
Over 2,000 Incident Reports Shed Light on Recklessness
A Silicon Valley mental health startup called Cerebral may have harmed hundreds of patients by flagrantly disregarding medical standards, according to a cache of documents reviewed by Insider, as well as over 30 interviews with current or former employees by the outlet.
Founded in 2020, Cerebral provides mental health treatment to customers through talk therapy and medication for conditions such as depression, anxiety, insomnia, and ADHD.
With people quarantined during the pandemic, it became one of the largest virtual therapy firms in the United States, attracting some $462 million from investors.
Cerebral employees filed at least 2,060 incident reports during seven months in 2021, according to Insider. They show that the company enrolled patients with complex conditions like bipolar disorder, then assigned them to clinicians and other staff members with insufficient training, oversight, and support to treat such cases.
It also put dozens of patients on questionable treatment plans and misdiagnosed many others, the reports say, with company medical providers prescribing potentially lethal combinations of drugs or addictive drugs to patients with histories of addiction.
Additionally, many patients were left stranded without care for extended periods due to technology issues or the company’s failure to retain clinicians.
As a result, Cerebral shuffled patients from one provider to the next and even bungled their prescriptions, sometimes leading them to suffer drug withdrawal or take the wrong medication.
Patients Tell Their Stories
One patient reportedly spent two weeks waiting for a referral to a clinician, later saying she spent eight days in a psychiatric ward.
Another patient told CBS News she was prescribed a drug for her anxiety but afterward could not reach her prescriber for instructions on how to switch to the new medication safely.
“Any time I needed help, she was never available,” she said.
After she did not get a response for six days, she began taking the drug anyway, which caused her to break out in a rash.
“I messaged back,” she said, “letting them know it was spreading and getting worse, and they said that they were still trying to get a hold of that prescriber… They make it seem like they want to help, and then they get you, and then they’re gone.”
A Cerebral spokesperson told Insider that the reports did not highlight enough patients to accurately reflect the company.
“Any incident reports you obtained show Cerebral’s dedication to quality,” the spokesperson said. “You can’t take a relatively small group of incident reports and draw conclusions about our care.”
Two former senior employees told the outlet that those reports were monitored by just a couple of people who had other responsibilities at the company, adding that leadership frequently put off addressing the systemic issues they flagged.
Cerebral’s practices are currently being investigated by the Drug Enforcement Administration, the Department of Justice and the Federal Trade Commission.
See what others are saying: (Business Insider) (CBS News) (Fierce Healthcare)
Instagram Testing New Tools To Verify Users Are Over 18
The new tools include AI software that analyzes video footage of a person’s face to verify their age.
Instagram Cracks Down on Underage Users
Instagram is testing new features in the United States to verify the age of users who claim to be over 18 years old.
According to a statement from Instagram’s parent company, Meta, the tools will only apply to users who seek to change their age from under 18 to over 18. The platform previously asked users to upload an ID for verification in this process, but on Thursday it announced two new methods for confirming age.
One of the strategies was referred to as “social vouching.” Using this option, people can request that three mutual Instagram followers over the age of 18 confirm their age on the platform.
The other method allows users to upload a video selfie of themselves to be analyzed by Yoti, a third-party age-verification service. Yoti then estimates a person’s age based on their facial features, sends that estimate to Meta, and both companies delete the recording.
According to Meta, Yoti cannot recognize or identify a face based on the recording and only looks at the pixels to determine an age. Meta said that Yoti “is the leading age verification provider for several industries around the world,” as it has been used and promoted by social media companies and governmental organizations.
Still, some question how effective it will be for this specific use. According to The Verge, while the software does have a high accuracy rate among certain age groups and demographics, data also shows it is less precise for female faces and faces with darker skin tones.
Issues With Kids on Instagram
Meta argues that it is important for Instagram to be able to discern who is and is not 18, as it impacts what version of the app users have access to.
“We’re testing this so we can make sure teens and adults are in the right experience for their age group,” the company’s statement said.
“When we know if someone is a teen (13-17), we provide them with age-appropriate experiences like defaulting them into private accounts, preventing unwanted contact from adults they don’t know and limiting the options advertisers have to reach them with ads,” it continued.
These changes come as Instagram has been facing increased pressure to address the way its app impacts younger users.
Only children 13 and older are allowed to have Instagram accounts, but the service has faced criticism for not doing enough to enforce this. A 2021 survey of high school students found that nearly half of the respondents had created a social media account of some kind before they were 13.
The company also recently came under fire after The Wall Street Journal published internal Meta documents revealing that the company knew Instagram harmed teens, including by worsening body image issues for young girls and women.
See what others are saying: (The Verge) (The Wall Street Journal) (Axios)
Elon Musk Threatens to Fire Employees Unless They Work in Person Full-Time
The world’s richest man previously suggested that the popularity of remote work has “tricked people into thinking that you don’t actually need to work hard.”
“If You Don’t Show up, We Will Assume You Have Resigned”
On Wednesday, Electrek published two leaked emails apparently sent from Elon Musk to Tesla’s executive staff threatening to fire them if they don’t return to work in person.
“Anyone who wishes to do remote work must be in the office for a minimum (and I mean *minimum*) of 40 hours per week or depart Tesla,” he wrote. “This is less than we ask of factory workers.”
“If there are particularly exceptional contributors for whom this is impossible, I will review and approve those exceptions directly,” he continued.
Musk then clarified that the “office” must be a main office, not a “remote branch office unrelated to the job duties.”
“There are of course companies that don’t require this, but when was the last time they shipped a great new product? It’s been a while,” he wrote in the second email.
Later on Wednesday, a Twitter user asked Musk to comment on the idea that coming into work is an antiquated concept.
He replied, “They should pretend to work somewhere else.”
The Billionaire Pushes People to Work Harder
Musk has a history of pressuring his employees and criticizing them for not working hard enough.
“All the Covid stay-at-home stuff has tricked people into thinking that you don’t actually need to work hard. Rude awakening inbound,” he tweeted last month.
Three economists told Insider that remote work during the pandemic did not damage productivity.
“Most of the evidence shows that productivity has increased while people stayed at home,” Natacha Postel-Vinay, an economic and financial historian at the London School of Economics, told the outlet.
Musk is notorious for criticizing lockdown mandates and went so far as to call them “fascist” during a Tesla earnings call in April 2020.
Not long before that, Tesla announced that it would keep its Fremont, California plant open in defiance of shelter-in-place orders across the state.
In an interview with The Financial Times last month, Musk blasted American workers for trying to stay home, comparing them to their Chinese counterparts whom he said work harder.
“They won’t just be burning the midnight oil. They will be burning the 3 a.m. oil,” he said. “They won’t even leave the factory type of thing, whereas in America people are trying to avoid going to work at all.”
That same day, Fortune published an article detailing how Tesla workers in Shanghai work 12-hour shifts, six days out of the week, sometimes sleeping on the factory floor.