- Apple is finally fixing an iOS bug that blocks searches featuring the word “Asian” on phones with adult-content restrictions enabled.
- iPhone users have long complained about the issue, and in an interview with Mashable, one man claimed Apple “said nothing officially” after he first reported the problem in December 2019.
- The fix comes amid growing calls for the U.S. to address violence against the Asian community and after an Atlanta shooting stoked conversations about Western culture’s hypersexualization of Asian women.
- While the fix is already live in the iOS 14.5 Beta, it will not become available to the full public until sometime this spring.
iOS Update Will Fix “Asian” Adult Content Block
Apple’s upcoming iOS update is set to fix a bug that currently blocks searches featuring the word “Asian” as adult content.
That includes terms like “Asian food,” “Asian Americans,” “Asian culture,” “local Asian market,” and even simply the word “Asian” itself.
For those terms to be blocked, iOS users must be searching in Safari with the “Limit Adult Websites” option enabled under “Content Restrictions.”
While millions of Americans never enable this content setting, for those who do, as one Twitter user noted, it “means a 12 y/o Chinese-American girl might Google ‘Asian hairstyles’ and find out that her culture is blocked as ‘adult content.’”
Like many others, that person also noted the filter is likely blocking these searches because it assumes “Asian” is a “porn-related” word.
Notably, that tweet isn’t recent. In fact, it’s from February of last year, and according to Mashable, people have been trying to get Apple to fix this bug since December 2019. In an interview, one man even told the outlet that Apple “said nothing officially” after he first reported the issue.
The bug is now resolved in the iOS 14.5 Beta; however, most users will have to wait until later this spring for the wider rollout of iOS 14.5.
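Apple has not disclosed how Safari’s filter classifies terms, but a naive substring blocklist is one plausible cause of this kind of over-blocking. As an illustration only, with an entirely hypothetical `BLOCKED_TERMS` list, such a filter catches every benign query containing a flagged word:

```python
# Illustrative sketch only: Apple has not said how the "Limit Adult
# Websites" filter works, and BLOCKED_TERMS here is hypothetical.
BLOCKED_TERMS = {"asian"}  # a single over-broad entry

def is_blocked(query: str) -> bool:
    """Return True if any blocklisted term appears anywhere in the query."""
    q = query.lower()
    return any(term in q for term in BLOCKED_TERMS)

for query in ["Asian food", "local Asian market", "Stop Asian Hate"]:
    print(query, "->", "blocked" if is_blocked(query) else "allowed")
```

Because the check is a bare substring match with no context, every one of those innocuous searches is rejected, which matches the behavior users reported.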
Change Comes Amid Increasing Awareness of Violence Against Asian Americans
So why fix the bug only now? As many outlets have reported, the timing is likely a response to the growing conversation around violence against Asian Americans.
Criticism against the bug also resurfaced when users who had these parental settings turned on found themselves unable to search for the term “Stop Asian Hate.”
That led Danny Sullivan, Google’s public liaison for Search, to stress that this was an issue on Apple’s end. Still, Sullivan said he would raise the issue with his contacts to try to get Apple to fix the bug.
The fix also comes after the March 16 shooting in Atlanta, Georgia, where eight people — including six Asian women — were murdered. According to police, the shooter claimed he was motivated by a sex addiction. As a result, Western culture’s hypersexualization and fetishization of Asian women has been thrust into the national spotlight.
See what others are saying: (Mashable) (Business Insider) (HypeBeast)
Mental Health Startup Cerebral May Have Harmed Hundreds of Patients, Leaked Documents Reveal
The company is being investigated by multiple federal agencies for its questionable practices, which have come under increasing scrutiny in recent weeks.
Over 2,000 Incident Reports Shed Light on Recklessness
A Silicon Valley mental health startup called Cerebral may have harmed hundreds of patients by flagrantly disregarding medical standards, according to a cache of documents reviewed by Insider, as well as over 30 interviews with current or former employees by the outlet.
Founded in 2020, Cerebral provides mental health treatment to customers through talk therapy and medication for conditions such as depression, anxiety, insomnia, and ADHD.
With people quarantined during the pandemic, it became one of the largest virtual therapy firms in the United States, attracting some $462 million from investors.
Cerebral employees filed at least 2,060 incident reports during seven months in 2021, according to Insider. They show that the company enrolled patients with complex conditions like bipolar disorder, then assigned them to clinicians and other staff members with insufficient training, oversight, and support to treat such cases.
It also put dozens of patients on questionable treatment plans and misdiagnosed many others, the reports say, with company medical providers prescribing potentially lethal drug combinations, or prescribing addictive drugs to patients with histories of addiction.
Additionally, many patients were left stranded without care for extended periods due to technology issues or the company’s failure to retain clinicians.
As a result, Cerebral shuffled patients from one provider to the next and even bungled their prescriptions, sometimes leading them to suffer drug withdrawal or take the wrong medication.
Patients Tell Their Stories
One patient reportedly spent two weeks waiting for a referral to a clinician and later said she spent eight days in a psychiatric ward.
Another patient told CBS News she was prescribed a drug for her anxiety but afterward could not reach her prescriber for instructions on how to switch to the new medication safely.
“Any time I needed help, she was never available,” she said.
After she did not get a response for six days, she began taking the drug anyway, which caused her to break out in a rash.
“I messaged back,” she said, “letting them know it was spreading and getting worse, and they said that they were still trying to get a hold of that prescriber… They make it seem like they want to help, and then they get you, and then they’re gone.”
A Cerebral spokesperson told Insider that the reports covered too few patients to accurately reflect the company’s care.
“Any incident reports you obtained show Cerebral’s dedication to quality,” the spokesperson said. “You can’t take a relatively small group of incident reports and draw conclusions about our care.”
Two former senior employees told the outlet those reports were monitored by just a couple of people who had other responsibilities at the company, adding that leadership frequently put off addressing the systemic issues the reports flagged.
Cerebral’s practices are currently being investigated by the Drug Enforcement Administration, the Department of Justice, and the Federal Trade Commission.
See what others are saying: (Business Insider) (CBS News) (Fierce Healthcare)
Instagram Testing New Tools To Verify Users Are Over 18
The new tools include AI software that analyzes video footage of a person’s face to verify their age.
Instagram Cracks Down on Underage Users
Instagram is testing new features in the United States to verify the age of users who claim to be over 18 years old.
According to a statement from Instagram’s parent company, Meta, the tools will only apply to users who seek to change their age from under 18 to over 18. The platform previously asked users to upload their ID for verification in this process, but on Thursday, it announced there will be two new methods for confirming age.
One of the strategies was referred to as “social vouching.” Using this option, people can request that three mutual Instagram followers over the age of 18 confirm their age on the platform.
The other method allows users to upload a video selfie of themselves to be analyzed by Yoti, a third-party age verification service. Yoti then estimates a person’s age based on their facial features, sends that estimate to Meta, and both companies delete the recording.
According to Meta, Yoti cannot recognize or identify a face based on the recording and only looks at the pixels to determine an age. Meta said that Yoti “is the leading age verification provider for several industries around the world,” as it has been used and promoted by social media companies and governmental organizations.
Still, some question how effective it will be for this specific use. According to The Verge, while the software does have a high accuracy rate among certain age groups and demographics, data also shows it is less precise for female faces and faces with darker skin tones.
Issues With Kids on Instagram
Meta argues that it is important for Instagram to be able to discern who is and is not 18, as it impacts what version of the app users have access to.
“We’re testing this so we can make sure teens and adults are in the right experience for their age group,” the company’s statement said.
“When we know if someone is a teen (13-17), we provide them with age-appropriate experiences like defaulting them into private accounts, preventing unwanted contact from adults they don’t know and limiting the options advertisers have to reach them with ads,” it continued.
These changes come as Instagram has been facing increased pressure to address the way its app impacts younger users.
Only children 13 and older are allowed to have Instagram accounts, but the service has faced criticism for not doing enough to enforce this. A 2021 survey of high school students found that nearly half of the respondents had created a social media account of some kind before they were 13.
The company also recently came under fire after The Wall Street Journal published internal Meta documents revealing that the company knew the app harmed teens, including by worsening body image issues for young girls and women.
See what others are saying: (The Verge) (The Wall Street Journal) (Axios)
Elon Musk Threatens to Fire Employees Unless They Work in Person Full-Time
The world’s richest man previously suggested that the popularity of remote work has “tricked people into thinking that you don’t actually need to work hard.”
“If You Don’t Show up, We Will Assume You Have Resigned”
On Wednesday, Electrek published two leaked emails apparently sent by Elon Musk to Tesla’s executive staff threatening to fire them if they don’t return to work in person.
“Anyone who wishes to do remote work must be in the office for a minimum (and I mean *minimum*) of 40 hours per week or depart Tesla,” he wrote. “This is less than we ask of factory workers.”
“If there are particularly exceptional contributors for whom this is impossible, I will review and approve those exceptions directly,” he continued.
Musk then clarified that the “office” must be a main office, not a “remote branch office unrelated to the job duties.”
“There are of course companies that don’t require this, but when was the last time they shipped a great new product? It’s been a while,” he wrote in the second email.
Later on Wednesday, a Twitter user asked Musk to comment on the idea that coming into work is an antiquated concept.
He replied, “They should pretend to work somewhere else.”
The Billionaire Pushes People to Work Harder
Musk has a history of pressuring his employees and criticizing them for not working hard enough.
“All the Covid stay-at-home stuff has tricked people into thinking that you don’t actually need to work hard. Rude awakening inbound,” he tweeted last month.
However, three economists told Insider that remote work during the pandemic did not damage productivity.
“Most of the evidence shows that productivity has increased while people stayed at home,” Natacha Postel-Vinay, an economic and financial historian at the London School of Economics, told the outlet.
Musk is notorious for criticizing lockdown mandates and went so far as to call them “fascist” during a Tesla earnings call in April 2020.
Not long before that, Tesla announced that it would keep its Fremont, California plant open in defiance of shelter-in-place orders across the state.
In an interview with The Financial Times last month, Musk blasted American workers for trying to stay home, comparing them to their Chinese counterparts whom he said work harder.
“They won’t just be burning the midnight oil. They will be burning the 3 a.m. oil,” he said. “They won’t even leave the factory type of thing, whereas in America people are trying to avoid going to work at all.”
That same day, Fortune published an article detailing how Tesla workers in Shanghai work 12-hour shifts six days a week, sometimes sleeping on the factory floor.