- Social media users are downloading a popular Russian-owned app called FaceApp to alter their photos with features like its aging filter.
- However, many have raised concerns about the app’s privacy policies and terms of service, accusing the company of collecting user data to sell to third parties or share with Russia.
- The company released a statement saying it does not do either of those things.
- However, other concerns about the app and what it specifically does with user data still exist.
FaceApp Challenge Goes Viral
FaceApp is a Russian app that uses artificial intelligence to alter photos of people’s faces. The app is two years old, but a recent FaceApp challenge has prompted the app to trend again. Users are posting photos of themselves with an aging filter that adds a few decades of wrinkles to their faces.
The trend has caught on with celebrities, many of whom have posted their own photos. Drake showed us what a promo for his farewell tour might look like.
The Jonas Brothers gave us a glimpse of the year 3000.
Scooter Braun showed the damage a Taylor Swift controversy might do to your skin.
Here’s what Lil Nas X might look like after severe back pain stops him from taking his horse down the old town road.
We also got a peek at what Piers Morgan might look like in a month or so.
Celebrity photos and jokes aside, there is actually a big controversy surrounding FaceApp and the access it has to information on users’ phones. Many voiced their concerns on Twitter, though many of the fears turned out to be speculation.
Developer Joshua Nozzi said that he believed the app might be “uploading all your photos.”
Others brought up the app’s Russian ownership.
Much of the scrutiny focused on the app’s terms of service, which grant FaceApp sweeping rights to user content.
“You grant FaceApp a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you,” the policy reads.
This essentially means the app can take your photos and use them however it wants. Many worry that this content could be used for marketing purposes.
“We use third-party analytics tools to help us measure traffic and usage trends for the Service,” the policy states. “These tools collect information sent by your device or our Service, including the web pages you visit, add-ons, and other information that assists us in improving the Service.”
The policy also says that while it will not sell your data to third parties, it can “share certain information such as cookie data with third-party advertising partners.”
FaceApp Addresses Concerns
FaceApp gave a statement to TechCrunch on Wednesday about some of the app’s policies to clarify some of the rumors spreading online.
FaceApp said that photos are processed in the cloud, but it debunked Nozzi’s theory that it was uploading all of the photos from your camera roll.
“FaceApp performs most of the photo processing in the cloud,” their statement reads. “We only upload a photo selected by a user for editing. We never transfer any other images from the phone to the cloud.”
At the bottom of the statement, the company linked to Nozzi’s tweet, which has since been deleted, specifically to drive its point home.
“We don’t do that. We upload only a photo selected for editing. You can quickly check this with any of network sniffing tools available on the internet.”
The statement went on to say: “We might store an uploaded photo in the cloud. The main reason for that is performance and traffic: we want to make sure that the user doesn’t upload the photo repeatedly for every edit operation. Most images are deleted from our servers within 48 hours from the upload date.”
The statement said that the company accepts requests from users to remove all their data from its servers. It also added that the app’s features are available without logging in and that 99% of users don’t log in, meaning that in most cases, the company doesn’t have access to any data that could identify a person.
In its final points, the company confirmed that it does not sell data to third parties, and does not transfer information to Russia.
Many don’t think the statement answered enough questions. For example, it did not address the app’s right to use your data, which is mentioned in the terms of service, or other marketing concerns.
However, outlets like the Independent have noted that this is “fairly standard within such apps.”
Back in 2017, the chair of the Australian Privacy Foundation, David Vaile, spoke to the Australian Broadcasting Corporation about this lack of transparency.
“They ask for way more rights than they need to offer the service to you,” Vaile said. “It is impossible to tell from this what happens when you upload it, that is the problem. The licence is so lax.”
See what others are saying: (The Independent) (Mashable) (TechCrunch)
TikTok and Twitter Are Now Deleting Videos That Expose Closeted Olympians on Grindr
On top of outing people who may not be ready to have their sexuality revealed to the world, these videos could have endangered LGBTQ+ athletes from countries where homosexuality is illegal.
Closeted Olympians Being Doxxed
Openly LGBTQ+ Olympians are currently more visible than they have ever been before, but unfortunately, so are closeted ones.
That’s because some people have been using the LGBTQ+ dating app Grindr to try and find Olympians. They’ve been doing so by using the app’s “Explore” feature, which allows people to search and see users in specific locations (i.e., the Olympic Village).
But some aren’t content with just discovering which athletes belong to the LGBTQ+ community. They’re also sharing that information on platforms like TikTok and Twitter.
“I used Grindr’s explore feature to find myself [an] Olympian boyfriend,” one TikTok user said in a post that had been viewed 140,000 times, according to Insider.
That video reportedly went on to show the poster scrolling through Grindr to expose over 30 users’ full faces.
As many have argued, not only does this potentially out already-stressed Olympians who may not yet be comfortable sharing their sexuality, it could also put some users at serious risk if they live in countries where being LGBTQ+ is illegal.
In fact, the video cited by Insider seemingly did just that, as it reportedly shows the face of a user who appears to be from a country “known for its anti-LGBTQ policies.”
Grindr Responds, TikTok and Twitter Take Action
In response, Grindr said the posts violate its rules against “publicly displaying, publishing, or otherwise distributing any content or information” from the app. It then asked the posters to remove the content.
Ultimately, it was TikTok and Twitter themselves that largely took action, with the two deleting at least 14 posts scattered across their platforms.
“Twitter says it’s taking steps to remove the posts flagged by Insider showing Grindr’s explore page at the Olympic Village. TikTok has yet to give an on the record response.” — Benjamin Goggin (@BenjaminGoggin) July 28, 2021
A Highly-Visible LGBTQ+ Presence at the Games
According to Outsports, at least 172 of around 11,000 Olympians are openly LGBTQ+. While that number is still well below the statistical average, it’s triple the number of LGBTQ+ athletes that attended Rio’s 2016 Games.
In fact, if they were their own country, openly LGBTQ+ athletes would reportedly rank 11th in medals, according to an Outsports report published Tuesday.
Among those winners is British diver Tom Daley, who secured his first gold medal on Monday and used his platform to send a hopeful message to LGBTQ+ youth by telling them, “You are not alone.”
After winning a silver medal on Wednesday, U.S. swimmer Erica Sullivan talked about her experience as both a member of the LGBTQ+ community and a person of color.
Still, the Olympics has faced criticism for its exclusion of intersex individuals, particularly those like South African middle-distance runner Caster Semenya, who won gold medals in both 2012 and 2016. Rules implemented in 2019 now prevent Semenya from competing as a woman without the use of medication to suppress her testosterone levels.
Jake Paul Launches Anti-Bullying Charity
The charity, called Boxing Bullies, aims to use the sport to give kids confidence and courage.
Jake Paul Launches Boxing Bullies Foundation
YouTuber Jake Paul — best known as the platform’s boxer, reckless partier, and general troublemaker — has seemingly launched a non-profit to combat bullying.
The charity is called Boxing Bullies. According to a mission statement posted on Instagram, it aims to “instill self confidence, leadership, and courage within the youth through the sport of boxing while using our platform, voice, and social media to fight back against bullying.”
If the notion of a Paul-founded anti-bullying charity called “Boxing Bullies” was not already begging to be compared to former First Lady Melania Trump’s “Be Best” initiative, maybe the group’s “Boxing Bullies Commandments” will help connect the dots. Those commandments use an acronym for the word “BOX” to spell out the charity’s golden rules.
“Be kind to everyone; Only defend, never initiate; X-out bullying.”
Paul Hopes To “Inspire” Kids To Stand Up For Themselves
Paul first said he was launching Boxing Bullies during a July 13 interview following a press conference for his upcoming fight against Tyron Woodley.
“I know who I am at the end of the day, which is a good person,” he told reporters. “I’m trying to change this sport, bring more eyeballs. I’m trying to support other fighters, increase fighter pay. I’m starting my charity, I’m launching that in 12 days here called Boxing Bullies and we’re helping to fight against cyberbullying.”
It has not been quite 12 days since the interview, so more information about the organization is likely coming soon. Currently, the group has been most active on Instagram, where it has just around 1,200 followers. It has posted once to Twitter, where it has 32 followers, and has a TikTok account that has yet to publish any content. It also has a website, though there is not much on it as of yet.
On its Instagram, one post introducing Paul as the founder claims the rowdy YouTuber started this charity because he has been on the receiving end of bullying.
“Having been a victim of bullying himself, Jake experienced firsthand the impact it has on a person’s life,” the post says. “Jake believes that this is a prevailing issue in society that isn’t talked about enough. Boxing gave Jake the confidence to not care about what others think and he wants to share the sport and the welfare it’s had on him with as many kids as possible.”
It adds that he hopes his group can “inspire the next generation of kids to be leaders, be athletes, and to fight back against bullying.”
Paul Previously Accused of Being a Bully
While fighting against bullying is a noble cause, it is an ironic project for Paul to start, as he has faced no shortage of bullying accusations. While Paul previously sang about “stopping kids from getting bullied” in the lunchroom, some have alleged he himself was actually a classic high school bully who threw kids’ backpacks into garbage cans.
This behavior allegedly continued into his adulthood, as a New York Times report from earlier this year claimed he ran his Team 10 house with a culture of toxicity and bullying. Among other things, sources said he involved others in violent pranks, pressured people into doing dangerous stunts, and destroyed people’s personal property to make content.
See what others are saying: (Dexerto)
Director Defends Recreating Anthony Bourdain’s Voice With AI in New Documentary
The film’s director claims he received permission from Bourdain’s estate and literary agent, but on Thursday, Bourdain’s widow publicly denied ever giving that permission.
Bourdain’s Voice Recreated
“You are successful, and I am successful, and I’m wondering: Are you happy?” Anthony Bourdain says in a voiceover featured in “Roadrunner,” a newly released documentary about the late chef — except Bourdain never actually said those words aloud.
Instead, it’s one of three lines in the film, which features frequent voiceovers from Bourdain, that were created through the use of artificial intelligence technology.
That said, the words are Bourdain’s own. In fact, they come from an email Bourdain reportedly wrote to a friend prior to his 2018 suicide. Nonetheless, many have now questioned whether recreating Bourdain’s voice was ethical, especially since documentaries are meant to reflect reality.
Director Defends Use of AI Voice
The film’s director, Academy Award winner Morgan Neville, has defended his use of the synthetic voice, telling Variety that he received permission from Bourdain’s estate and literary agent before inserting the lines into the film.
“There were a few sentences that Tony wrote that he never spoke aloud,” Neville said. “It was a modern storytelling technique that I used in a few places where I thought it was important to make Tony’s words come alive.”
Bourdain’s widow — Ottavia Bourdain, who is the executor of his estate — later denied Neville’s claim on Twitter, saying, “I certainly was NOT the one who said Tony would have been cool with that.”
In another interview with GQ, Neville described the process, saying the film’s creators “fed more than ten hours of Tony’s voice into an AI model.”
“The bigger the quantity, the better the result,” he added. “We worked with four companies before settling on the best.”
“If you watch the film,” Neville told The New Yorker, “you probably don’t know what the other lines are that were spoken by the AI, and you’re not going to know. We can have a documentary-ethics panel about it later.”
The Ethics Debate Isn’t Being Tabled
But many want to have that discussion now.
Boston-based film critic Sean Burns, who gave the film a rare negative review, later criticized it again for its unannounced use of AI, saying he wasn’t aware that Bourdain’s voice had been recreated until after he watched the documentary.
Meanwhile, The New Yorker’s Helen Rosner wrote that the “seamlessness of the effect is eerie.”
“If it had been a human voice double I think the reaction would be ‘huh, ok,’ but there’s something truly unsettling about the idea of it coming from a computer,” Rosner later tweeted.
Online, many others have criticized the film’s use of AI, with some labeling it as a “deepfake.”
Others have offered more mixed criticism, saying that while the documentary highlights the need for posthumous AI use to be disclosed, it should not be ruled out altogether.
“In a world where the living could consent to using AI to reproduce their voices posthumously, and where people were made aware that such a technology was being used, up front and in advance, one could envision that this kind of application might serve useful documentary purposes,” David Leslie, ethics lead at the Alan Turing Institute, told the BBC.
Celebrities Recreated After Death
The posthumous use of celebrity likeness in media is not a new debate. In 2012, a hologram of Tupac took the stage 15 years after his death. In 2014, the Billboard Music Awards brought a hologram of Michael Jackson onstage five years after his death. Meanwhile, the Star Wars franchise digitally recreated actor Peter Cushing in 2016’s “Rogue One,” and unused footage of actress Carrie Fisher was later incorporated into “The Rise of Skywalker,” though a digital version of Fisher was never used.
In recent years, it has become almost standard for filmmakers to say that they will not create digital versions of characters whose actors die unexpectedly. For example, several months after Chadwick Boseman’s death last year, “Black Panther: Wakanda Forever” executive producer Victoria Alonso confirmed Boseman would not be digitally recreated for his iconic role as King T’Challa.