
Zuckerberg Used Facebook User Data to Help Friends and Hurt Competitors

  • Facebook CEO Mark Zuckerberg once considered pursuing 100 deals with app developers to sell access to user data in an attempt to learn the “real market value” of that information, according to an NBC News report.
  • The report cites around 4,000 pages of leaked internal Facebook documents showing that the company instead opted to use the data as a bargaining chip to reward apps that purchased ads, were run by close friends of executives, or shared their own data with Facebook in return.

The Report

Facebook CEO Mark Zuckerberg once considered selling the company’s user data to third-party app developers to find out just how much that data was worth, all while publicly claiming to be protecting it.

NBC News released a report Tuesday saying it had obtained around 4,000 pages of leaked company documents spanning 2011 to 2015. The documents contained emails, web chats, presentations, spreadsheets, and meeting summaries that reportedly showed Zuckerberg and his team finding ways to leverage access to Facebook user data in deals with partner companies.

It’s not uncommon for companies to share customer information with partners. However, Facebook holds sensitive data that many other companies do not, such as information about users’ friends, relationships, and photos.

In some cases, NBC News reported, Facebook would reward favored companies by giving them access to its users’ data. It would then deny that same data to rival companies or apps that were not considered “strategic partners.”

For instance, Facebook gave Amazon extended access to user data because Amazon had invested heavily in Facebook advertising and partnered with the company for the launch of the Fire smartphone.

By contrast, Facebook reportedly discussed cutting the app MessageMe off from user data access. Facebook’s reasoning was that the app had grown too popular and had become a competitor.

Protecting User Data

All the while, Facebook was publicly crafting a narrative around its concern for user trust, promising to prioritize data protections.

Private communication between users is “increasingly important,” Zuckerberg said in a 2014 New York Times interview. “Anything we can do that makes people feel more comfortable is really good.”

However, the documents show that behind the scenes, the company was formulating ways to require third-party applications to compensate Facebook for access to user data, whether through direct payment, spending on advertising, or data-sharing agreements.

Facebook Wants to Maintain Its Dominance

Zuckerberg reportedly talked about pursuing 100 deals to sell data access to developers, “as a path to figuring out the real market value” of Facebook user data and then “setting a public rate” for developers, NBC reported.

“The goal here wouldn’t be the deals themselves, but that through the process of negotiating with them we’d learn what developers would actually pay (which might be different from what they’d say if we just asked them about the value), and then we’d be better informed on our path to set a public rate,” Zuckerberg wrote in a message.

In the end, Facebook decided against selling data directly and instead opted to share it with app developers who were considered “friends” of Zuckerberg, or who invested heavily in Facebook advertising and shared their own valuable data in return.

According to NBC, Zuckerberg “noted that though Facebook could charge developers to access user data, the company stood to benefit more from requiring developers to compensate Facebook in kind — with their own data — and by pushing those developers to pay for advertising on Facebook’s platform.”

The company’s ultimate goal was to ensure that Facebook held onto its dominant position in the market.

Facebook Calls Documents Cherry-Picked

Facebook has denied giving any developers or partners preferential treatment because of their ad spending or personal relationships with executives. Instead, the company told NBC News that its focus on “full reciprocity” was meant to let users share their experiences from outside apps with their Facebook friends.

The company also did not question the authenticity of the documents, which stem from a California court case between Facebook and Six4Three.

Six4Three developed an app called Pikinis, which let people pay to find pictures of users in swimsuits. Six4Three’s app was shut down in 2015 after Facebook changed its policies around the sharing of user data with third-party app developers.

Facebook said the documents are “cherry-picked” and misleading.

“As we’ve said many times, Six4Three — creators of the Pikinis app — cherry picked these documents from years ago as part of a lawsuit to force Facebook to share information on friends of the app’s users,” Paul Grewal, vice president and deputy general counsel at Facebook, said in a statement released by the company.

“The set of documents, by design, tells only one side of the story and omits important context. We still stand by the platform changes we made in 2014/2015 to prevent people from sharing their friends’ information with developers like the creators of Pikinis. The documents were selectively leaked as part of what the court found was evidence of a crime or fraud to publish some, but not all, of the internal discussions at Facebook at the time of our platform changes. But the facts are clear: we’ve never sold people’s data.”

See what others are saying: (NBC News) (CNBC) (The Street)

Twitter to Investigate Auto-Crop Algorithm After Accusations of Racial Bias

  • Twitter users believe they discovered a racial bias in an algorithm the platform uses to automatically select which part of an image it shows in a photo preview.
  • Many argued that the auto-cropping tool showed a white bias after testing the theory with photos of Black and white people, cartoon characters, and even dogs. 
  • However, others who tested the theory generated results that did not support this idea. Regardless, most users admit that these experiments have their limitations and agree that the current results at least show that this is something worth looking into.
  • The company released a statement saying it tested its system for bias in the past but admitted it needs to conduct further analysis of it. Online, Twitter employees seemed to welcome the public discourse and the company promised to share its results as well as further actions it may take.

Potential White Bias 

Twitter responded to concerns over its automatic cropping algorithm Sunday after users believed they discovered a racial bias in the tool.

In 2018, Twitter began auto-cropping photos in its timeline previews to prevent them from taking up too much space in the main feed and to allow multiple photos to appear in the same tweet. To do this, the company uses several algorithmic tools that focus on the most important part of the picture, like faces or text. 
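
Twitter has not published the model itself, but the general technique it has described is saliency-based cropping: score each region of an image for visual importance, then center the preview crop on the highest-scoring spot. A minimal sketch of that idea, using OpenCV’s off-the-shelf spectral-residual saliency detector purely for illustration (an assumption for this example, not Twitter’s actual system), might look like this:

    # Illustrative saliency-based auto-crop (not Twitter's actual model).
    # Requires opencv-contrib-python for the cv2.saliency module.
    import cv2

    def saliency_crop(image, crop_w, crop_h):
        """Crop `image` to (crop_w, crop_h), centered on its most salient pixel."""
        # Score every pixel for visual "importance" (values in [0, 1]).
        detector = cv2.saliency.StaticSaliencySpectralResidual_create()
        ok, saliency_map = detector.computeSaliency(image)
        if not ok:
            raise RuntimeError("saliency computation failed")

        # Center the crop window on the single most salient point...
        _, _, _, (cx, cy) = cv2.minMaxLoc(saliency_map)
        h, w = image.shape[:2]

        # ...then clamp the window so it stays inside the image bounds.
        x0 = min(max(cx - crop_w // 2, 0), max(w - crop_w, 0))
        y0 = min(max(cy - crop_h // 2, 0), max(h - crop_h, 0))
        return image[y0:y0 + crop_h, x0:x0 + crop_w]

    # Hypothetical usage:
    # preview = saliency_crop(cv2.imread("photo.jpg"), 600, 335)

Whatever scoring model sits in that middle step, any systematic skew in its scores carries straight through to which subject survives the crop, which is what the experiments described below were designed to probe.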

However, users recently began to spot issues with the algorithm. The first person credited with highlighting a potential problem was PhD student Colin Madland. He made his discovery while highlighting a different racial bias he believes he found in the video-conferencing service Zoom.

Madland tweeted that when his Black colleague used a virtual background on Zoom, the software erased his head. When Madland uploaded examples to show this happening to his colleague and not to himself, he noticed that Twitter’s preview showed only his own face.

Soon after, others followed up with more targeted experiments. Cryptography and infrastructure engineer Tony Arcieri, for example, tweeted out two long images featuring Senate Majority Leader Mitch McConnell and former President Barack Obama.

The two photos have the politicians stacked on top of each other in different orders but with white space in between them. The experiment showed that Twitter would focus on McConnell, no matter what order the photos were stacked in.

Another user found that the algorithm even focused on McConnell when two photos of Obama were present in a single stack.

A similar white preference appeared in examples of Black and white men in suits, The Simpsons characters Lenny and Carl, and even black and white dogs.

Examples That Don’t Support White Bias Theory

Others looking into this theory of a white bias found results that did not support the idea. 

For example, one user found that the preview favored photos of Obama over photos of Donald Trump.

Still, some researching the trends noted that these experiments have their limitations and are likely influenced by many other factors. Some believe the algorithm recognizes high-profile figures or weighs brightness and contrast, among other photo elements.

Twitter’s Chief Design Officer (CDO), Dantley Davis, even suggested that the choice of cropping sometimes takes brightness of the background into consideration.

However, others found examples that contradicted that idea. Regardless, all these tests did a lot to convince people that there was something worth examining, including Davis, who has been experimenting himself.

He’s not alone in his research. In fact, plenty of other Twitter users have been going to great lengths to track their results as they try to study what is going on.

Twitter Promises to Investigate 

On Sunday, a Twitter spokesperson released a statement admitting that the company had more work to do.

“Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing,” the company explained.

“But it’s clear from these examples that we’ve got more analysis to do. We’ll continue to share what we learn, what actions we take, and will open source our analysis so others can review and replicate.”

Davis also isn’t the only employee who has appeared to welcome all of this public discourse. The company’s Chief Technology Officer, Parag Agrawal, tweeted, “This is a very important question. To address it, we did analysis on our model when we shipped it, but needs continuous improvement. Love this public, open, and rigorous test — and eager to learn from this.”

See what others are saying: (The Next Web) (The Guardian) (Mashable)

Perfume Brand Apologizes for Replacing John Boyega in the Chinese Version of an Ad He Directed

  • Jo Malone London, a perfume and candle brand, apologized to its global brand ambassador John Boyega after it reshot his personal advert without him for the Chinese market.
  • Last year, Boyega conceived, starred in, and directed a commercial for the brand, which showcased his friends and family and was shot in his diverse hometown of Peckham, London.
  • Without Boyega’s knowledge, the company replicated the concept with Chinese actor Liu Haoran and did not feature a single Black person in the remake. 
  • After backlash, Jo Malone London apologized and said, “The concept for the film was based on John’s personal experiences and should not have been replicated.”

Boyega’s Commercial 

The perfume and candle brand Jo Malone London apologized to actor John Boyega after it replicated the personal advert he made for the company without him for the Chinese market.

In 2019, the brand named the Star Wars actor its first male global ambassador. Under the role, Boyega shot an advert for the company based on his roots and personal experiences. 

The short film was called “A London Gent,” and according to several reports, it was his creative concept and a project he directed. It showcased him enjoying time with his real-life friends and family in his diverse hometown of Peckham, London.

“There’s a mixture of things you see me do in the film, you see me in a professional environment on a film set, then with family and it’s about breaking free of the concept of ‘going back or returning to your roots’ but more about the roots existing with this new side of my life,” he said of the commercial last year in an interview with Women’s Wear Daily.

Chinese Remake

The commercial was well received and actually won Best Media Campaign at The Fragrance Foundation Awards this year. Still, the brand decided to essentially replicate the commercial for the Chinese market without Boyega’s knowledge or participation.

Instead of just using Boyega’s original ad, it replaced him with Chinese actor Liu Haoran, star of the hugely popular Detective Chinatown film franchise. Boyega’s friends and family were replaced as well, which means there was not a single Black person included in the Chinese ad.

Though it’s not totally identical, it’s clear the commercial reused the same concept, minus the diversity elements. It even replicates some specific scenes, like one where the camera zooms into Boyega’s eye and another where he rides a horse while his friends ride bikes.

On top of all that, the Chinese ad is also called “A London Gent,” and according to The Hollywood Reporter, Boyega only found out about it after it was posted on Twitter.

Boyega hasn’t officially commented on the issue, but he’s definitely aware of the backlash. He retweeted one user who shared his ad saying, “Now, this man needs to be properly compensated for the thievery! No apology is good enough.”

He also retweeted a post showing the Chinese ad for comparison, as well as an article from The Hollywood Reporter on the topic. 

That article includes a statement from the brand which reads: “We deeply apologize for what, on our end, was a mistake in the local execution of the John Boyega campaign. John is a tremendous artist with great personal vision and direction. The concept for the film was based on John’s personal experiences and should not have been replicated.”

Jo Malone London also apologized to Haoran, saying he was not involved in the conception of the Chinese ad.

“While we immediately took action and removed the local version of the campaign, we recognize that this was painful and that offense was caused,” it continued.

“We respect John, and support our partners and fans globally. We are taking this misstep very seriously and we are working together as a brand to do better moving forward.”

Boyega’s Past Experiences

This is not the first time Boyega has sparked discussions about racism in China and the entertainment industry. In 2015, when “Star Wars: The Force Awakens” was released, Boyega’s character was resized to be significantly smaller on the Chinese version of the movie poster.  

In a recent GQ interview, Boyega also criticized Disney, saying nonwhite characters were pushed aside in the Star Wars franchise while white characters were given more nuance. 

“What I would say to Disney is do not bring out a Black character, market them to be much more important in the franchise than they are and then have them pushed to the side. It’s not good. I’ll say it straight up,” he said at the time.  

As for Jo Malone, it has pulled the Chinese advert, but it’s unclear if Boyega’s relationship with the brand will continue. 

See what others are saying: (Insider) (Variety) (The Hollywood Reporter)

AmazonBasics Products Dangerous, Start Fires & Explode: Report

  • A report by CNN has found that dozens of AmazonBasics items are dangerously flawed, leading to fires and explosions.
  • Some 1,500 reviews across 70 items cited dangerous flaws in the products between 2016 and 2020, despite Amazon saying many of these items were investigated and found to be safe.
  • Dozens of items that users have flagged as dangerous and potential fire hazards are still available on the site.

AmazonBasics Burn

Ever seen a listing for a common everyday item on Amazon and thought, “that price is too good to be true”? Well, that may be the case. CNN reported on Thursday that at least 70 items in Amazon’s AmazonBasics line are dangerously flawed, particularly electronics, which are reported to have started hundreds of fires.

One story from Wethersfield, Connecticut features a young man who was burned after waking to find a chair in his bedroom on fire. Firefighters determined that a white AmazonBasics USB cord used to charge his phone had shorted and started the fire. Other items sold under the AmazonBasics label, which was launched in 2009 and sells thousands of inexpensive everyday items, have also been reported in reviews to catch fire. A microwave sold under the label has over 150 reviews describing safety concerns, notably its tendency to catch fire.

CNN obtained a few defective devices from customers and sent them to a lab in Maryland to be tested to find out why the failures happen so often. That research was cut short because of the COVID-19 pandemic, but in the case of the burning microwaves, initial findings revealed a flawed design in a panel covering a heating element that could start fires.

Other common items reported to have caught fire include power strips and car chargers. Overall, according to CNN, 1,500 reviews posted by US customers between 2016 and 2020 identified safety concerns about AmazonBasics products, with 10% of those reviews specifically mentioning items catching fire.

“Safe to Use”

Amazon’s initial response to the report was that some of the items identified were investigated and found to be “safe to use.”

“We take several steps to ensure our products are safe including rigorous testing by our safety teams and third party labs,” the company said in a statement to The Hill. “The appliance continues to meet or exceed all certification requirements established by the FDA, UL, FCC, Prop 65, and others for safety and functionality.”

“We’re continuously refining our processes and leveraging new technologies to ensure that AmazonBasics products are safe for their intended use. We want customers to shop our products with confidence, and if there’s ever a concern, you can contact our customer service team and we’ll promptly investigate,” the company added in a blog post responding to the CNN report.

Currently, about 30 items with three or more reviews identifying dangerous flaws remain on the site. This could lead to significant legal problems for the company. In the past, various courts have ruled that Amazon is not liable for defective items sold by third-party vendors on the platform. However, AmazonBasics products are in-house branded items (although Amazon does not manufacture them itself).

Because these are in-house items, Amazon may not be shielded by the same protections that apply to third-party vendors and could be liable for damage caused by the defective devices.

See what others are saying: (CNN) (The Hill) (The Verge)