Inside ‘Facebook Jail’: The Secret Rules That Put Users in the Doghouse
In Facebook Jail, many users are serving time for infractions they don’t understand.
Colton Oakley was restricted after ranting about student debt. The recent graduate of the State University of New York at New Paltz posted that anyone who was mad about loan cancellation was “sad and selfish.” His sentence: three days without posting on Facebook.
Alex Gendler, a freelance writer in Brooklyn, N.Y., got a similar ban after sharing a link to a story in Smithsonian magazine about tribal New Guinea. Nick Barksdale, a history teacher in Oklahoma, served 30 days recently after jokingly telling a friend “man, you’re spewing crazy now!”
None of the three quite understand what they did wrong.
“If you use the term ‘crazy,’ does that automatically get you banned?” asked Mr. Barksdale.
The plight of baffled users caught in Facebook’s impenetrable system for adjudicating content has reinforced the company’s reputation for heavy-handed and inept policing of its online platforms. The problem, which has been mounting for years, is increasingly acute as lawmakers and the public focus on the vast power social-media companies hold over the flow of information.
The company’s newly formed Oversight Board—a group of 20 lawyers, professors and other independent experts who consider appeals to decisions made by Facebook—has been charged with interpreting Facebook’s numerous detailed rules governing everything from the depiction of graffiti to swearing at newsworthy figures.
The board’s most closely watched decision is expected Wednesday—whether Facebook appropriately applied its rules when it booted former President Donald Trump indefinitely from the platform.
Lawmakers have repeatedly grilled Mark Zuckerberg about the issue, prompting the CEO to repeat his refrain that such delicate work shouldn’t be left to a private company alone. Still, the challenge remains: how best to police a platform that sees billions of posts, comments and photos every day.
In its earlier decisions, the board has zeroed in on Facebook’s legion of rules, calling them unclear and “difficult for users to understand.”
Facebook says it is taking steps to address the Oversight Board’s recommendations and “build out better customer support for our products.”
“While we’re transparent about our policies, we understand that people can still be frustrated by our decisions, which is why we’re committing to doing more,” a Facebook spokeswoman, Emily Cain, said in a written statement.
Since it began taking cases in October, the Oversight Board has received more than 220,000 appeals from users, and issued eight rulings—six of them overturning Facebook’s initial decision.
John Taylor, the board’s spokesman, says the intention was never “to offer the hot takes on any particular issue of the day. The point of the board is to render a decision on the most difficult content decisions facing the company.”
Facebook has introduced many new rules in recent years, often in response to specific complaints from lawmakers and interest groups. The rules are designed to protect users and to guide the army of outside contractors who handle content moderation. These internal guidelines come in addition to the Community Standards listed on the company’s website and, unlike those, aren’t made public.
Some of the guidelines include detailed examples to illustrate fine distinctions.
Moderators were instructed in documents viewed by the Journal to remove this statement: “If you vote by mail, you will get Covid!”
The documents also said this statement was acceptable: “If you vote by mail, be careful, you might catch Covid-19!”
Around voting and the 2020 election, moderators were instructed to remove this sentence: “My friends and I will be doing our own monitoring of the polls to make sure only the right people vote.”
But not this one: “I heard people are disrupting going to the polls today. I’m not going to the polling station.”
The Facebook spokeswoman says the differences are small but “enough of a distinction that we called them out and why we have these detailed documentation for our content reviewers.”
The spokeswoman says Facebook reviews two million pieces of content a day. Mr. Zuckerberg has said the company makes the wrong call in more than 10% of cases—meaning about 200,000 decisions could be wrong a day.
Users who run afoul of Facebook’s rules can spend time in what many now call “Facebook Jail,” losing their commenting and posting abilities for anywhere from 24 hours to 30 days or, in more serious cases, losing their accounts indefinitely. A user typically has to rack up multiple strikes before facing a ban, but Facebook doesn’t tell users how many it takes, saying in a 2018 blog post that “we don’t want people to game the system.”
Facebook doesn’t release the number of accounts it restricts.
When content is removed or an account is blocked, the user usually receives a notice saying the Community Standards have been violated. The notice typically indicates which broad category was violated. Separately from the Oversight Board’s decisions, Facebook restores thousands of pieces of previously removed content each quarter after user appeals.
Facebook recently has turned more toward automation to help guide its decisions, relying on artificial intelligence and algorithms to take down content and also decide on user appeals, according to people familiar with the company and more than two dozen users interviewed by the Journal. The result is more frustration, with some users wondering how Facebook could have made a decision on their content in only a matter of seconds.
A research paper from New York University last summer called the company’s approach to content moderation “grossly inadequate” and implored Facebook to stop outsourcing most of the work and to double the overall number of moderators.
“This is what you get when you build a system as big as this,” says Olivier Sylvain, a professor of law at Fordham University, who has researched Facebook and content moderation generally. “I don’t think it’s unhealthy or wrong for us to wonder if the benefits that flow from such a big service outweigh the confusion and harm.”
Mr. Barksdale, a history teacher in Newcastle, Okla., has been banned from his Facebook page several times since last fall, each time for reasons he says he doesn’t fully understand.
One time he was restricted from posting and commenting for three days after sharing a World War II-era photo of Nazi officials in front of the Eiffel Tower as part of a history discussion, with a brief description of the photo. He got a 30-day ban for trying to explain the term pseudoscience to one of his followers.
In March, after he joked with another history aficionado during a debate that “you’re spewing crazy now,” Facebook alerted him that he had been restricted for seven days. When Mr. Barksdale clicked a button to appeal, Facebook disagreed, and lengthened his ban to 30 days, saying six of his past posts had gone against the company’s Community Standards.
Mr. Barksdale says he tries to follow Facebook’s Community Standards, but hadn’t been aware he had committed so many infractions.
The Facebook spokeswoman says the company mistakenly removed Mr. Barksdale’s comment. She says, in general, users can find their violations in their “support inbox” attached to their profile.
Facebook’s Community Standards, the public rules, have expanded in recent years to include six major categories and 27 subcategories ranging from “violent and graphic content” to “false news.”
Facebook’s policy on hate speech forbids direct attacks on people based on race, religion and other demographics. However, it allows them if the words are used to raise awareness, or “in an empowering way,” according to the Community Standards. “But we require people to clearly indicate their intent. If intention is unclear, we may remove content.”
Internally, Facebook works from a far more specific and complicated set of guidelines, a more-than-10,000-word document called its “Implementation Standards” that its more than 15,000 content moderators rely on to make decisions. Still more documents—known internally as “Operational Guidelines” and “Known Questions”—further explain the company’s rationale for its rules.
When a piece of content is flagged—either by a user or by Facebook’s algorithms—the post, photo or video is usually reviewed by moderators, whose job is to apply the rules Facebook has devised.
In one of the documents viewed by the Journal, the company forbids the use of a “degrading physical description,” defined as “calling an individual’s appearance ugly, disgusting, repulsive, etc.” The document gave as an example: “It’s disgusting and repulsive how fat and ugly John Smith is.”
But, the document continued, “We do not remove content like ‘frizzy hair,’ ‘lanky arms,’ ‘broad shoulders,’ etc. since ‘frizzy,’ ‘lanky,’ and ‘broad’ are not deficient or inferior, and therefore not degrading.”
Many users aren’t aware of the internal documents that help moderators interpret the public rules. In contrast, Alphabet Inc.’s Google search engine publishes the full set of 175 pages of rules that its 10,000 “search quality raters” use to evaluate search results.
“We want people to understand where we draw the line on different kinds of speech, and we want to invite discussion of our policies, but we don’t want to muddy the waters by inundating people with too much information,” the Facebook spokeswoman says.
In several rulings this year, the Oversight Board has urged Facebook to make its Community Standards clearer, with something closer to the specificity found in its internal documents.
A French Facebook user last year criticized the lack of a health strategy in France and questioned what society had to lose by allowing doctors to prescribe in an emergency a “harmless drug,” such as hydroxychloroquine, according to the Oversight Board.
The FDA has warned against the use of the drug in relation to the coronavirus.
Facebook took down the post, saying it contributed to the risk of “imminent physical harm” on the platform, part of its “violence and incitement” Community Standard.
In its ruling, the Oversight Board disagreed, saying “a patchwork of policies found on different parts of Facebook’s website make it difficult for users to understand what content is prohibited.” The board advised Facebook to “explain what factors, including evidence-based criteria, the platform will use in selecting the least intrusive option when enforcing its Community Standards to protect public health.”
As part of its agreement when it established the board, Facebook must restore a piece of content after an Oversight Board decision, but it isn’t required to follow the Board’s other recommendations. In the French case, it said in a February blog post that it is committed to acting on all of the Oversight Board’s suggestions.
The Oversight Board also disagreed when Facebook took down a post in which a user wrongly attributed a quote to Joseph Goebbels, the Nazi propaganda minister; Facebook said the post violated its Community Standard on “dangerous individuals and organizations.”
Facebook told the Oversight Board that Goebbels is on an internal list of dangerous individuals. Facebook should make that list public, the board said. Facebook is assessing the feasibility of making the list public, the company said in its February post.
When artist Sunny Chapman of Hancock, N.Y., saw a widely circulated selfie of a Muslim woman in front of an anti-Muslim protest posted in a Facebook group last month, she wanted to join the conversation. In a reply to a comment disparaging Muslims, she wrote that she had traveled in Morocco by herself and “felt much safer there than I do here in the USA with all these crazy white men going around shooting people.”
Facebook took down her comment, and informed her she had been restricted from posting or commenting for 30 days because the post violated the Community Standards on hate speech.
Ms. Chapman says she was confused, because another comment posted after hers voicing a similar perspective had been left up. That post read: “Most acts of violence in this country are committed by white men. Usually christian, often white supremacists, and almost always white men,” according to screenshots of the posts viewed by the Journal.
Ms. Chapman had earlier received a 30-day ban for calling two other users racist after they posted comments degrading Vice President Kamala Harris. Even though she reported those users’ comments, they weren’t taken down.
“What I’m learning about Facebook is not to talk on Facebook,” Ms. Chapman says.
Facebook reinstated Ms. Chapman’s account after the Journal shared her example.
Recently, Facebook expanded the Oversight Board’s scope to include decisions on user requests to remove content.
In recent years, Facebook has been relying more heavily on its artificial intelligence to flag problem content, according to people familiar with the company. In May 2020, the company touted its use of AI to take down content related to the coronavirus pandemic.
Facebook took down 6.3 million pieces of content under the “bullying and harassment” category during the fourth quarter of 2020, up from 3.5 million in the third quarter, in part because of “increasing our automation abilities,” the company said in its quarterly Community Standards Enforcement report.
Users appealed about 443,000 pieces of content in the category, and Facebook restored about a third of it, the company said. Other categories saw fewer pieces of content removed than in the previous quarter, and removal totals can be affected by many outside factors, such as viral posts that inflate the numbers.
Facebook increasingly polices content in ways that aren’t disclosed to users, in hopes of avoiding disputes over its decisions, according to current and former employees. The algorithms bury questionable posts, showing them to fewer users, quietly restricting the reach of those suspected of misbehavior rather than taking down the content or locking them out of the platform entirely.
Facebook has acknowledged the practice in some cases. To protect state elections in India, it said in a March blog post that it would “significantly reduce the distribution of content that our proactive detection technology identifies as likely hate speech or violence and incitement.”
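Neither Facebook’s public standards nor its blog posts spell out how automated enforcement chooses between removing a post outright and quietly limiting its reach. Purely as an illustration—the classifier score, thresholds and action names below are hypothetical assumptions, not Facebook’s actual pipeline—a simple threshold rule over a model’s violation score might look like this Python sketch:

# Hypothetical sketch of score-based enforcement; the thresholds, names and
# actions are illustrative assumptions, not Facebook's actual system.
from dataclasses import dataclass

REMOVE_THRESHOLD = 0.95    # assumed: high-confidence violations are taken down
DOWNRANK_THRESHOLD = 0.70  # assumed: "likely" violations get reduced distribution

@dataclass
class ModerationDecision:
    action: str         # "remove", "downrank" or "allow"
    notify_user: bool   # whether the user is told anything happened

def enforce(violation_score: float) -> ModerationDecision:
    """Map a model's violation score to an enforcement action."""
    if violation_score >= REMOVE_THRESHOLD:
        # Content comes down and the user gets a Community Standards notice.
        return ModerationDecision(action="remove", notify_user=True)
    if violation_score >= DOWNRANK_THRESHOLD:
        # Content stays up but is shown to fewer users, with no notice sent.
        return ModerationDecision(action="downrank", notify_user=False)
    return ModerationDecision(action="allow", notify_user=False)

# Example: a post a hypothetical classifier scores at 0.78 is quietly downranked.
print(enforce(0.78))  # ModerationDecision(action='downrank', notify_user=False)

Under a scheme like this, the “buried” posts described by current and former employees would never trigger a notice at all, which is why users may not know their reach has been restricted.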
The use of automation for moderation at scale is “too blunt and insufficiently particularized,” says Mr. Sylvain of Fordham. “AI and automated decision-making right now are just not good enough to do the hard work of sorting really culturally specific posts or ads.”
Users say they have had content taken down from months or years earlier with no explanation, in what one user called “Facebook’s robot gone wrong.”
The Facebook spokeswoman says one reason for this could be a “banking system” technology employed internally that keeps track of problematic content and removes it when it is posted again.
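The spokeswoman’s description suggests a store of fingerprints of already-removed content that gets checked when new posts come in. As a rough, assumption-laden sketch only—the class name, normalization and hashing choices here are hypothetical, and real systems match images and near-duplicates with far more sophisticated techniques—such a “bank” might work like this:

# Hypothetical illustration of a content "bank": once a post is removed, a
# fingerprint of it is stored so identical re-posts can be removed automatically.
# The design below is an assumption for illustration, not Facebook's internals.
import hashlib

class ContentBank:
    def __init__(self):
        self._banked_hashes: set[str] = set()

    @staticmethod
    def _fingerprint(text: str) -> str:
        # Normalize lightly so trivial edits (case, spacing) still match.
        normalized = " ".join(text.lower().split())
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    def bank(self, text: str) -> None:
        """Record content that moderators have already removed."""
        self._banked_hashes.add(self._fingerprint(text))

    def is_banked(self, text: str) -> bool:
        """True if a new post matches previously removed content."""
        return self._fingerprint(text) in self._banked_hashes

bank = ContentBank()
bank.bank("If you vote by mail, you will get Covid!")
print(bank.is_banked("if you vote by mail,  you will get covid!"))  # True: re-removed on repost
print(bank.is_banked("If you vote by mail, be careful!"))           # False: not in the bank

A mechanism along these lines would explain why users report old posts vanishing with no explanation: the removal is triggered by a match against the bank, not by a fresh human review.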
The Oversight Board has advised Facebook to let users know when automated enforcement is being used to moderate content, and let them appeal those decisions to a human being, in certain cases. Facebook said in its blog post that it is assessing the feasibility of those recommendations.
Since the start of the coronavirus pandemic last year, Facebook users haven’t had an opportunity to appeal bans at all. Instead they are given the option to “disagree” with a decision, without further review, although Facebook did look at some of those cases and restore content. Facebook says that is because it has lacked the human moderators to review the cases. In a late April decision, the Oversight Board urged Facebook to “prioritize returning this capacity.”
Tanya Buxton, a tattoo artist in Cheltenham, England, has tried to appeal multiple restrictions on her Facebook accounts showcasing areola tattoos—tattoos that are made to look like nipples for women who have had mastectomies.
How much of a breast, or nipple, can be shown on Facebook has been a particularly fraught issue.
One of its internal documents elaborating on the rules tackles sensitive subjects ranging from what constitutes “near nudity” to sexual arousal.
Facebook users should be allowed to show breastfeeding photos, the company wrote in a document to moderators, but warned: “Mistakes in this area are sensitive. Breastfeeding activists or ‘lactivists’ are vocal in the media because people harass them in public. Any removals of this content make us exposed to suggestions of censorship.”
While the public Community Standards are vague about Ms. Buxton’s tattoos, Facebook’s internal guidelines address the issue and say they should be allowed. But, the company acknowledged in its guidelines to moderators, “It can be really hard to make an accurate decision here.”
Ms. Buxton, who says she isn’t aware of the internal guidelines, has appealed each time she has been banned.
Facebook says it mistakenly removed two pieces of content from Ms. Buxton’s page that it restored after questions from the Journal.
Last year, after an appeal, Facebook sent her an automated note, saying that because of the coronavirus pandemic, “we have fewer people available to review content.”
“As we can’t review your post again, we can’t change our decision,” Facebook wrote.
Photo: Cutouts near the Capitol ahead of a congressional hearing on disinformation and social media in March. (Caroline Brehman/Congressional Quarterly/Zuma Press)