As illegal immigration debate intensifies, some question Facebook hate speech standards


FILE - In this June 7, 2013, file photo, the Facebook "like" symbol is on display on a sign outside the company's headquarters in Menlo Park, Calif. (AP Photo/Marcio Jose Sanchez, File)

At a press conference last week, Stanislaus County, California Sheriff Adam Christianson contrasted the life of slain Corporal Ronil Singh, who came to the United States legally from Fiji, with that of his alleged killer, Gustavo Perez Arriaga, who entered the country illegally.

"This suspect, unlike Ron – who immigrated to this country lawfully and legally to pursue his lifelong career of public safety, public service and being a police officer – this suspect is in our country illegally. He doesn't belong here. He's a criminal," Christianson said.

Some Facebook users have been dismayed to discover their posts blocked or their accounts suspended for making the same comparison, and they say the social media platform’s efforts to protect users from hate speech are stifling an honest debate about immigration policy and censoring mainstream conservative opinions.

“Stating FACT is no longer aloud on Facebook. This is the second time in less than a month FB has suspended my account. Here’s the latest as I posted on the ILLEGAL who murdered the California officer,” one user complained on Twitter after being suspended for a post that stated in all caps, “ALL HERE ILLEGALLY ARE CRIMINALS AND ONLY CRIMINALS.”

Several Facebook users told the Blue Lives Matter blog their posts describing Singh as a legal immigrant and Arriaga as an illegal immigrant had been flagged for violating the site’s hate speech standards. The notices they received did not specifically say how they violated standards, but some believe the objection was to their use of terms like “illegal immigrant” and “illegal alien” to refer to Arriaga.

A Facebook spokesperson said Wednesday the use of these words does not inherently violate its Community Standards, but the company acknowledged some enforcement errors were made regarding posts that called Arriaga an illegal immigrant and an apology has been sent to at least one user.

“We do make mistakes in this space. Which is why earlier this year we introduced appeals – this gives people the option to let us know when they think we’ve made a mistake on content removed for hate speech, among a variety of other policies. We’re working on expanding the appeals offering so that it applies to all content types,” said Facebook spokeswoman Ruchika Budhraja.

Conservative users also complained recently that evangelist Franklin Graham’s account was briefly suspended over a two-year-old post in which he expressed support for legislation that would require transgender individuals to use bathrooms that correspond to their biological sex. According to Graham, Facebook apologized for that decision Sunday.

“It looks like we made a mistake and removed something you posted on Facebook that didn’t go against our Community Standards,” a message sent to Graham stated.

These incidents underscore the enormity of the challenge social media giants face in policing political speech on a global scale, and the risk of alienating users who believe content decisions expose underlying biases against them.

Facebook has long acknowledged this is a difficult task and its implementation of hate speech policies will never be flawless. With billions of posts a day around the world, though, even a minuscule error rate could leave thousands of users facing unjust punishments.

“Our approach, like those of other platforms, has evolved over time and continues to change as we learn from our community, from experts in the field, and as technology provides us new tools to operate more quickly, more accurately, and precisely at scale,” Facebook executive Richard Allan wrote in a lengthy 2017 post attempting to explain some of the factors involved.

According to Nicholas Bowman, a research associate at West Virginia University’s Interaction Lab and editor of Communication Research Reports, the outrage over errors in this process is understandable, but the extent of the problem is hard to gauge because instances of alleged social media censorship ironically get amplified by social media.

“No one likes to be called a racist, and nobody wants to be called a hate-monger. None of us think what we’re saying is hateful,” he said.

Facebook has adopted a much broader definition of hate speech than the U.S. Supreme Court has, and, in the process, it has established standards that are difficult to enforce and unlikely to ever please all users.

“From the get-go, hate speech doesn’t have a great definition. No matter how you define it, it’s always going to be objectionable to some group of people online,” Bowman said.

Social media offers users unprecedented access to a global community, but platforms have struggled to police offensive content without watering down restrictions to the point where they’re meaningless or placing stiff limits on what users can discuss or say.

“They’re obviously weighing, is it worth it to upset those people because we think it’s important to maintain a better social experience for people,” said Mike Horning, an expert on the social and psychological effects of communications technologies at Virginia Tech University.

Facebook’s Community Standards officially define “hate speech” as “a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. We also provide some protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.”

There are three tiers of speech that are barred under this policy:

  • Tier 1: violent or dehumanizing speech based on protected characteristics or immigration status, such as comparisons to insects, filth, bacteria, intellectually or physically inferior animals, or criminals.
  • Tier 2: statements of physical, mental, or moral inferiority, expressions of contempt like “I hate” or “I don’t like,” or expressions of disgust including “gross,” “vile,” and “disgusting.”
  • Tier 3: calls to exclude or segregate a person or group based on protected characteristics or content that targets people with slurs, which are “defined as words commonly used as insulting labels for the above-listed characteristics.”

“Discussing controversial topics or espousing a debated point of view is not at odds with our Community Standards. Immigration, for example, is a subject that people have different points of view on, and we think it’s important for people to be able to share their opinions and beliefs on Facebook,” Budhraja said.

Under Tier 3, the standards explicitly note, “We do allow criticism of immigration policies and arguments for restricting those policies.” That is a significant caveat, but it is also a nebulous one open to interpretation.

“They don’t have in their community standards a mechanism to identify speech that is part of a broader political policy debate versus speech that is intended to diminish or disparage an individual,” Horning said.

In August, Facebook banned conspiracy theorist Alex Jones and his Infowars news site for violating these standards because of dehumanizing and violent content. Twitter did the same a month later citing abusive behavior by Jones against other users, leaving many to wonder why his past violent and abusive posts had not gotten him removed from either site sooner.

In a front-page story last week, The New York Times revealed details from 1,400 pages of internal rulebooks used to guide around 15,000 Facebook moderators around the world in judging the permissibility of content, and they describe a process far thornier than the public Community Standards suggest.

The documents were leaked to The Times by an employee who “feared that the company was exercising too much power, with too little oversight—and making too many mistakes.”

According to the article, a team composed mostly of lawyers and young engineers regularly meets to create PowerPoint slides that distill controversial content questions into yes-or-no decisions intended to eliminate moderators’ personal biases. Moderators told The New York Times they are under pressure to review 1,000 pieces of content a day, taking only about 10 seconds for each, but Facebook executives insisted there are no quotas.

Elizabeth Cohen, a professor of communication studies at West Virginia University who specializes in media psychology, said this approach is consistent with Facebook’s attempts to avoid being seen as a media company with editorial oversight of what it publishes, but she is skeptical human biases can ever be completely removed from the process of identifying hate speech.

“You have to make a lot of complex decisions about the context and that’s something humans have to do, and humans are inherently biased. Essentially what they’re trying to do is ask people to act like algorithms,” she said.

In his 2017 post, Allan explained Facebook is experimenting with artificial intelligence that can identify toxic language, but its monitoring efforts rely heavily on users in the community reporting millions of possible cases of hate speech every week.

“With billions of posts on our platform — and with the need for context in order to assess the meaning and intent of reported posts — there’s not yet a perfect tool or system that can reliably find and distinguish posts that cross the line from expressive opinion into unacceptable hate speech,” he wrote.

In the U.S., false positives and questionable judgment calls have given rise to claims social media companies are biased against conservatives, an allegation the companies strenuously reject. Testifying before Congress, Facebook CEO Mark Zuckerberg acknowledged many Silicon Valley employees are liberal but he insisted content is never banned for political reasons.

Immigration has become a particularly challenging subject for social media platforms to moderate as the public policy debate intensifies, and Facebook users complaining about Gustavo Arriaga are not the first to encounter complications. Last year, Twitter temporarily prevented the Center for Immigration Studies from promoting tweets that referred to “criminal aliens” and “illegal aliens” because they were deemed “hateful content.”

In the days before the midterm elections, Facebook joined most television news networks in rejecting an ad produced by President Donald Trump’s campaign that linked a cop-killer to a caravan of Central American migrants. At the time, Trump’s campaign manager complained the “#PaloAltoMafia” was trying to control people’s thoughts, but Fox News was among the outlets that distanced itself from the ad.

For several years, mainstream media outlets have shied away from describing those in the country illegally as “illegal immigrants,” but federal laws often refer to them as “illegal aliens.” Some who use that language are accurately reflecting the legal terminology, but others are undoubtedly racist.

“I would be highly suspicious that Facebook flags every use of ‘illegal alien’ as hate speech. There’s no way they do that,” Bowman said.

Bowman observed he has used the term in a legal context in Facebook discussions without any trouble, and he added Facebook might be better served by offering more transparency regarding what is being flagged and why. It may be the source of the content or other material a user has posted that set off an alarm rather than the specific words they used.

“Facebook needs to make it clear if your post is being reported by a follower or triggered by an algorithm,” he said.

Horning pointed to polls showing most Americans object to silencing what they deem to be “politically incorrect” speech. In theory, the blowback over Facebook’s definition of hate speech could drive some conservative users away to other platforms, but in practice, there is nowhere comparable for them to go.

“At this point, Facebook is so massive, I don’t think they’re particularly worried about that,” he said.

Critics have accused Facebook and Twitter of censorship and demanded all views be provided an equal platform, but experts stressed social media platforms are under no obligation to do so. By signing onto Facebook, users agree to live with whatever content rules engineers and lawyers in Menlo Park want to impose.

“Facebook is a private company,” Cohen said. “Our First Amendment rights apply to the things our government can do. Facebook does not have a constitution guaranteeing us a right to free speech.”