- Anna Sorokin, the woman who pretended to be a German heiress to swindle banks, restaurants, hotels, and others out of thousands, agreed to a deal with Netflix to make a series about her crimes.
- The deal would give her $100,000 for her story, as well as a $15,000-per-episode consulting fee and $7,500 in royalties per episode.
- New York state is now working to stop Netflix from paying her, pointing to the “Son of Sam” law which was created to prevent criminals from profiting off their crimes.
Who is Anna Sorokin?
The state of New York is working to stop Netflix from paying fake heiress Anna Sorokin more than $100,000 to use her story for an upcoming series about her notorious scam.
Sorokin, who was known in social circles as “Anna Delvey,” moved to New York City in 2013, claiming to be a German heiress with a $60 million trust fund. She lived in luxurious hotels for months at a time, ate at swanky restaurants, attended exclusive parties, and wore designer clothes.
But Sorokin, who was actually born to a middle-class family in Russia, defrauded her way through life. According to prosecutors, she forged financial statements, made up accountants, and lied about wire transfers to get out of paying money that she owed to businesses, friends, and other socialites.
The fake heiress, dubbed by the media as the “SoHo Scammer,” was arrested in 2017 and sentenced in May 2019 to four to 12 years in prison for multiple counts of theft and grand larceny.
According to court documents, she was also ordered to pay $198,956.19 in restitution to the victims of her scam. Victims included hotels like The Beekman and the W New York, a private jet and helicopter service called Blade, and even City National Bank, which she managed to dupe into giving her a $100,000 loan to launch a private art club in Manhattan.
Sorokin’s story picked up widespread attention in the summer of 2018 when Vanity Fair and The Cut published stories about her. HBO and Netflix later began working on projects about her as well, with Lena Dunham behind the HBO project and Shonda Rhimes behind the Netflix series.
According to a new report by the New York Post, Netflix acquired the rights to Sorokin’s life story in June of 2018, months after her arrest, but before her trial began. The New York Times also reported that this was part of a larger deal to buy the rights to information detailed in an article published by New York Magazine’s Jessica Pressler in May 2018.
Netflix’s contract with Sorokin allegedly gives her $100,000 for her story, along with a $15,000-per-episode consultant fee and $7,500 in royalties per episode, the Post reported, citing court documents.
New York State Gets Involved
The Post also reported that the first payout was $30,000 that went directly to Sorokin’s lawyer. Now New York State is trying to stop Sorokin from getting any money from Netflix for herself.
In late May, the office of the New York State attorney general filed a request to block a $70,000 payment from Netflix that Sorokin was set to receive in June. The state cited the “Son of Sam” law, which is designed to stop criminals from profiting off publicity around their crimes. That legislation passed in 1977, after many speculated that a notorious serial killer might sell his story to a writer or filmmaker.
Along with blocking the $70,000 payment, Attorney General Letitia James is also working to stop Sorokin from earning the consultant and royalty fees. On top of that, a judge in Albany temporarily ordered Netflix to not pay Sorokin until the matter is settled through litigation, except for the $30,000 for her attorney’s unpaid legal fees, according to court records obtained by the Times.
“The monies sought to be preserved herein, constitute ‘profits from a crime,’” Assistant Attorney General Adele Durand wrote in recently filed court papers cited by the Post.
Instead, Durand said the proceeds of Sorokin’s Netflix deal should be donated to the New York State Office of Victim Services, for redistribution to the people impacted by her crimes.
Todd Spodek, Sorokin’s lawyer, told the Times: “It has always been Ms. Sorokin’s intention to pay back her victims.”
“I anticipate resolving the issue without further litigation,” he added.
This is somewhat similar to what Sorokin said to the Times in a jailhouse interview from May. According to the newspaper, she said she always had the intention to pay the money back and had been trying to raise millions for a social club she thought would be a lucrative investment.
However, in that same interview, she admitted that she was not actually sorry for duping her victims. “I’d be lying to you and to everyone else and to myself if I said I was sorry for anything,” she said. “I regret the way I went about certain things.”
The Times also reported: “Ms. Sorokin was asked if, given the chance, she would do the same things again. Ms. Sorokin shrugged. ‘Yes, probably so,’ she said, laughing.”
As of now, the Netflix series is still in development. As for the HBO production, that deal was struck with one of Sorokin’s victims, former Vanity Fair photo editor Rachel Williams, whom Sorokin stuck with a $62,000 bill for a trip to Morocco. Williams also published a book about her experience with Sorokin that was released on Tuesday.
See what others are saying: (The New York Times) (The New York Post) (Business Insider)
Today in Awesome
Check out https://phil.chrono.gg/ for 70% OFF “Forged Battalion” only available until 9 AM!
Cosmopolitan: Charlize Theron & Nick Kroll Piss Off Some Spirits
Vanity Fair: Nick Kroll Improvises 7 New Cartoon Voices
Netflix: Prank Encounters
First We Feast: Liza Koshy Meets Her Future Self While Eating Spicy Wings
Singapore “Fake News” Law Goes Into Effect
- A new law has gone into effect in Singapore that aims to stop the spread of fake news by allowing members of the government to single-handedly decide what is and is not fake news and whether or not that content should be removed.
- Critics have argued that the law is a blatant attempt to suppress free speech and stifle political dissent ahead of an election.
- Big tech companies like Facebook and Google have also vocally opposed the law, and others have noted that one of the most concerning aspects is that it also applies to private messages sent on encrypted apps like WhatsApp.
- Now, individuals can face up to 10 years in jail for sharing whatever the government deems “false information.”
“Fake News” Law
A controversial bill widely known as the “fake news” law officially went into effect in Singapore Wednesday.
The new law will aim to stop the spread of disinformation, or fake news, in the city-state. The legislation, which is officially called the Protection from Online Falsehoods and Manipulation Act, was passed by Singapore’s Parliament back in May.
According to reports, it will now be illegal to spread any “false statements of fact” that could potentially pose a threat to “public tranquility” or the “friendly relations of Singapore with other countries.”
That may seem straightforward, but the law is controversial because it gives government ministers the sole power to determine what is and is not fake news, with the threshold for that determination also being quite low.
According to Channel News Asia, a minister simply needs to decide if something is a “falsehood,” which is defined as “a statement of fact that is false or misleading.”
Then, if that minister says it is in the public interest to take action against the “falsehood,” they can order whatever content they determine to be fake news to be taken down or have a correction put up next to it.
Government ministers can also force tech companies like Facebook and Google to block accounts or websites they say are spreading false information.
While the government has said that anyone impacted by the law can file an appeal and that the appeals process will be quick and cheap, the consequences of being found guilty of posting false information are severe.
Under the law, companies that are found guilty of spreading fake news can face fines up to $1 million in Singapore dollars—which is about $722,000 in U.S. dollars—while individuals who are found guilty can face up to 10 years in prison.
Singapore’s Prime Minister Lee Hsien Loong has said that the law is necessary “to hold online news sources and platforms accountable if they proliferate deliberate online falsehoods.”
“If we do not protect ourselves, hostile parties will find it a simple matter to turn different groups against one another and cause disorder in our society,” he added.
Free Speech Concerns
However, critics of the law have said that it is a clear attempt to stifle free speech and dissent, with many arguing that it gives way too much power and authority to the government without providing oversight for government abuse.
To that point, opponents have pointed to Singapore’s mixed record on protecting press freedoms and political dissent.
In the 2019 World Press Freedom Index, Reporters Without Borders ranked Singapore 151st out of 180 countries for press freedom, one of the worst positions for a country that considers itself a democracy.
Notably, that also placed it below countries that are well-known for censoring any kind of political opposition, like Russia and Myanmar.
As a result, the activists, experts, and rights groups who have openly criticized the law worry that it will be used as a political tool for censorship.
Speaking to CNN, the Deputy Director of Human Rights Watch, Phil Robertson, said the bill will be used for “political purposes,” noting that it comes right before elections set to happen in the next few months.
“The Singapore government has a long history of calling everything they disagree with as false and misleading,” he added.
“Singapore’s leaders have crafted a law that will have a chilling effect on internet freedom throughout south-east Asia, and likely start a new set of information wars as they try to impose their narrow version of ‘truth’ on the wider world,” Robertson wrote in a tweet Wednesday.
The International Commission of Jurists, a group of judges and lawyers, also echoed Robertson’s sentiment in a statement before the law passed, where they argued that the law would create “a real risk that the law will be misused to clamp down on opinions or information critical of the government.”
Even members of Parliament have spoken out against the bill, arguing it is an overextension of government power.
“To introduce such a bill is not what the government claims to defend democracy and public interest, it is more like the actions of a dictatorial government that will resort to any means to hold on to absolute power,” opposition lawmaker Low Thia Khiang said before the bill’s passage in May.
Tech Companies’ Opposition
Others have also argued the law will give Singapore too much power over big tech firms that have a large presence in Singapore. For example, Facebook, Twitter, and Google all have their Asian headquarters in the city-state.
“This law would give Singapore overwhelming leverage over the likes of Facebook and Twitter to remove whatever the government determines is ‘misleading,’” Amnesty International’s Regional Director for East and Southeast Asia Nicholas Bequelin said in a statement.
“This is an alarming scenario. While tech firms must take all steps to make digital spaces safe for everyone, this does not provide governments an excuse to interfere with freedom of expression— or rule over the news feed,” he added.
Google and Facebook both opposed the law when it was being debated in Parliament. After it was passed, Google said that the law will “hurt innovation and the growth of the digital information ecosystem.”
Others have also noted that one of the most concerning parts of the law is that it does not just apply to posts made publicly on Facebook or Twitter, but that it can also be applied to closed private messaging apps and chat groups like WhatsApp, which is extremely popular in Singapore.
That, in turn, means the government can not only read its citizens’ private messages but also potentially jail them for up to 10 years for content sent privately, perhaps even to just one other person.
See what others are saying: (CNN) (VICE) (The Guardian)
Nerd City & Other Creators Call Out YouTube Bots for Demonetizing LGBTQ+ Content
- A group of YouTubers said they have worked since June to compile evidence that certain words or phrases within video titles lead to automatic demonetization by the platform’s machine learning program.
- As a result, those YouTubers also claim the platform’s bots are routinely demonetizing LGBTQ+ content.
- A day after the videos documenting this evidence were posted, YouTube directly responded to them and said that “the right teams are reviewing your concerns in detail,” also promising to follow up on the claims.
YouTubers Create Monetization/Demonetization Word List
In a series of videos released Sunday, a group of YouTubers detailed 15,000 keywords that they tested against YouTube bots and claimed many of those words—including some LGBTQ+ terms—lead to automatic demonetization.
Specifically, the project looks at those keywords and determines whether each caused a video to be demonetized when used in its title. The research, which was conducted from June to July, was a collaboration between creators Nerd City, YouTube Analyzed (who does not work for YouTube), and Sealow.
“Robot law enforcement on YouTube just resulted in two years of gay people being treated like it’s the 1300’s,” Nerd City said in his video.
The report, published as a Google spreadsheet, classifies words in one of two categories: green meaning monetized and yellow meaning demonetized. However, YouTube Analyzed said the way monetization is decided is more like a 0-1 scale.
Thus, certain words near the middle of that scale might be green one day and yellow the next. To provide context, he placed an asterisk next to words that yielded mixed results.
To create the list, they uploaded two-second clips they said had no demonetizable audio or video. Then, they experimented with keywords, replacing demonetized words with “happy” or “friend” to see if that would monetize the video.
As such, they found a grab bag of results. For example, “antivaxx” sometimes resulted in demonetization, but never “antivax” or “anti-vaxxer.”
Additionally, “North Carolina” was demonetizable but not “North Korea.” YouTube Analyzed actually explained this by saying that if a word has too much negative association with it, the bot might be prone to flagging the word. He argued “North Carolina” might have been flagged because news surrounding transgender bathroom laws made headlines in July as he was compiling the list.
Other words like “restaurant,” “you,” “sunglasses,” “photos,” “profit,” and even “Shrek” reportedly caused their videos to get demonetized.
While more expected terms like slurs, cuss words, and words like “Hitler” were also flagged, other controversial words like “incel” and phrases like “how to murder” weren’t demonetized. YouTube Analyzed suggests that, unlike in the “North Carolina” example, if the bots haven’t seen a word or phrase used enough, they might not catch it.
LGBTQ+ Video Demonetization
The creators also found that common LGBTQ+ terminology tended to be demonetized, and some media outlets have called this project the most conclusive evidence that YouTube is demonetizing LGBTQ+ videos.
Again, however, the system yielded highly variable results. For example, “gay” was demonetizable, but YouTube Analyzed noted the word is context-sensitive. The term “lesbian” was sometimes green but “lesbians” was always yellow. Also, “transgender” was monetizable but not always “trans.”
Additionally, the word “homophobia” was ad-friendly, but not “homosexual,” while terms like “straight” and “heterosexual” were both always green.
Some of the titles they tried included “Lesbian princess” and “Kids Explain Gay Marriage,” a reference to a Jimmy Kimmel skit posted on YouTube. Both were demonetized but later monetized when replacing “lesbian” and “gay” with “happy.”
As to why these videos are being demonetized, Sealow posits a couple of possible reasons. The first is similar to the “North Carolina” example, where politics and negative press could influence certain words. In the case of LGBTQ+ content, bots could interpret certain terms negatively if they are regulating a high volume of homophobic or hateful content.
Sealow also worries that if videos with words like “gay” are manually demonetized by people with biases, then bots will also develop the tendency to demonetize those videos regardless of the content.
According to Nerd City, YouTube is possibly outsourcing some 10,000 workers from a company called Lionbridge, which employs people from a number of countries that have anti-LGBTQ+ laws, including Somalia, Afghanistan, and Indonesia.
He then asks: if there’s no standardized policy in place for LGBTQ+ content, could reviewers keep a video demonetized based on their own bias?
It is unclear how many workers—if any—are from those countries or whether such a bias actually comes into play; however, former workers with Lionbridge have reportedly complained of unclear guidelines.
Past Accusations Against LGBTQ+ Creators
Some YouTubers like Petty Paige have now resorted to censoring words like “trans” and “homosexual” to stay monetized, and a wide range of LGBTQ+ creators have called this trend an open secret.
In December, Mexican YouTuber Luisito Comunica asked YouTube Chief Product Officer Neal Mohan about this directly, saying three of his videos with LGBTQ+ titles were demonetized.
“I can just tell you categorically that there is no list of words or keywords or terms or anything like that that is going to go into our classifiers making an a priori decision on whether our videos are monetized or not,” Mohan said.
“There’s nothing in terms of how our monetization algorithms work that should be based on any kind of predescribed or predetermined list,” he continued.
In his video, Sealow refutes that point, saying, “Given our testing results, it’s made clear that these comments are not accurate.” He notes that while the current situation for LGBTQ+ creators may be improved from two years ago, most would still call it unacceptable.
He also said he finds Mohan’s comments troubling because as CPO, Mohan has the power to fix this problem.
Later, in August, Alfie Deyes posed a similar question to YouTube’s CEO Susan Wojcicki.
“We do not automatically demonetize LGBTQ content,” she said, later adding, “There’s no policies that say if you put certain words in the title that that will be demonetized.”
Deyes then reiterated his question, asking if any words specifically from the LGBTQ+ community are flagged, to which she said, “There shouldn’t be.”
Nerd City then focused on the word “policy” in his video, saying Wojcicki lied by omission.
“It’s sneaky language from a very smart woman who talks to a lot of lawyers,” he said. “There’s no policy to demonetize gay words, but there is a protocol where bots are doing exactly that.”
Also in August, a group of YouTubers sued the platform and claimed, among other things, that YouTube is demonetizing their content.
In 2018, YouTube took steps to expand its reviewing process, adding those previously mentioned 10,000 workers to combat what Wojcicki called “bad actors,” or people who attempt to exploit the platform’s monetization system. Those “bad actors” are actually part of why YouTube says it hasn’t released its algorithm data.
YouTube’s Mystery Algorithm
The report represents an attempt to better warn creators about why their videos may be demonetized, but demonetization involves other factors, as well. As they continue to attempt to learn more about the mysterious algorithm, that list changes every day.
Because of that, all of them note the information they presented is not necessarily complete. Nerd City has argued that YouTube should publish details on how its algorithm works, saying more openness could allow creators to make more money because they would then be able to see what does and does not get monetized.
He also deconstructs the “bad actors” argument, saying people would just report misleading content anyway.
Notably, the FairTube Campaign is urging YouTube to at least send creators a reason why their specific videos were demonetized, so that they can learn from it and take steps to make sure future videos are ad-friendly.
Monday, the YouTube Team Twitter account responded to the series of videos, saying, “Wanted to let you know that we’ve watched your video and the right teams are reviewing your concerns in detail. We want to make sure that we give you some clear answers, so we’ll follow back up when the teams have been able to take a good, hard look.”
Later, a YouTube spokesperson then released a statement saying there is no list of words that deem a video not ad-friendly.
“We’re proud of the incredible LGBTQ+ voices on our platform and take concerns like these very seriously,” the spokesperson said. “We do not have a list of LGBTQ+ related words that trigger demonetization and we are constantly evaluating our systems to help ensure that they are reflecting our policies without unfair bias.”
That spokesperson also said YouTube tests samples of LGBTQ+ content whenever there are new monetization classifiers to make sure LGBTQ+ videos aren’t more likely to be demonetized.