
Facebook and Instagram have instituted a temporary policy change that allows users in some countries to post content that’s usually forbidden, including calls for harm to, or even the death of, Russian soldiers or politicians. The change first surfaced in a Reuters report citing internal emails to moderators. According to those emails, moderators are told that calls for the death of Russian President Vladimir Putin or Belarusian President Alexander Lukashenko will be allowed, as long as the posts don’t contain threats toward others or “indicators of credibility,” like saying where or how the act will take place.

In a statement sent to The Verge, Meta spokesperson Andy Stone said, “As a result of the Russian invasion of Ukraine we have temporarily made allowances for forms of political expression that would normally violate our rules like violent speech such as ‘death to the Russian invaders.’ We still won’t allow credible calls for violence against Russian civilians.”

The New York Times confirmed this policy applies to people using the service from Ukraine, Russia, Poland, Latvia, Lithuania, Estonia, Slovakia, Hungary, and Romania. The Times also notes that in 2021, Vice reported that Facebook moderators received similar temporary instructions about “death to Khamenei” content, citing a spokesperson who said Facebook had made that particular exception in certain earlier cases as well.

The Facebook community standards regarding hate speech and violence and incitement have continued to receive updates since the company began publishing them publicly in 2018. This change is just the latest example of how platforms have modified their treatment of content from or about the invading countries since the fighting began.

An update to the Reuters report includes the content of the message sent to moderators, which reads as follows:

We are issuing a spirit-of-the-policy allowance to allow T1 violent speech that would otherwise be removed under the Hate Speech policy when: (a) targeting Russian soldiers, EXCEPT prisoners of war, or (b) targeting Russians where it’s clear that the context is the Russian invasion of Ukraine (e.g., content mentions the invasion, self-defense, etc.).

Typically, moderation guidelines would dictate removing language that dehumanizes or attacks a particular group based on its identity. But the emails cited by Reuters say the context of the current situation requires reading posts from the listed countries about generic Russian soldiers as a proxy for the Russian military as a whole, and, absent credible threats, moderators are directed not to take action on them.

Still, it’s unclear whether the posts would be removed even without this guidance. The policy already includes many carve-outs and exceptions, stating explicitly that additional information or context is needed before enforcement in several cases, including:

Content attacking concepts, institutions, ideas, practices, or beliefs associated with protected characteristics, which are likely to contribute to imminent physical harm, intimidation or discrimination against the people associated with that protected characteristic. Facebook looks at a range of signs to determine whether there is a threat of harm in the content. These include but are not limited to: content that could incite imminent violence or intimidation; whether there is a period of heightened tension such as an election or ongoing conflict; and whether there is a recent history of violence against the targeted protected group. In some cases, we may also consider whether the speaker is a public figure or occupies a position of authority.

The Russian government’s reaction to the report is unknown, and there haven’t been any updates from its censorship agency, Roskomnadzor, which banned Facebook earlier this month.
