What content moderation policies on big platforms have the greatest impact on political content in the MENA region?
Policies on major platforms, including Meta, TikTok, Twitter (X), and YouTube, related to graphic content, hate speech, material classified as terrorist content or incitement to violence, and sometimes misinformation have a significant impact in the region. Ad and other business account policies can also have an impact.
Reports from 7amleh, SMEX’s Helpdesk, and the work and experience of many MENA Alliance for Digital Rights (MADR) members indicate that Instagram is a key platform for political content, although Facebook is still popular, and Twitter/X and YouTube are still in use by some. TikTok is increasing in importance, to the degree that it has even been banned in several places in the region. This FAQ focuses on Meta because of its prominence in the region and because it is still where most content moderation failures take place. Additionally, myriad leaks, the Oversight Board, and even Meta’s transparency practices have offered some insights into its content moderation processes. However, as more information about content moderation on TikTok becomes available, this FAQ will be updated if possible.
We want to make it clear that despite the focus on Meta, many other platforms and their policies have a similarly harmful impact on human rights in the region and elsewhere. One key area that frequently affects the region involves payment platforms like PayPal and fundraising platforms like GoFundMe. Payments and fundraisers may be delayed or simply deleted, even when they do not ostensibly violate any terms of service. While those issues are related to content moderation, and sometimes have the same root causes, they are beyond the scope of this FAQ. For more information on financial censorship, check out 7amleh’s “Paypal 4 Palestine” Campaign.
How could the changes announced by Meta and other platforms in response to the election of Donald Trump impact the MENA region?
Donald Trump has a long history of relying heavily on social media platforms, and he appears to have made control of these platforms a key part of his second-term strategy. This is bad news for the MENA region. Trump has made it clear that he is biased against Muslims (and likely people from the MENA region more broadly) and unconditionally supports Israel. Since his election, Meta has loosened restrictions on hate speech, TikTok has thanked President Trump for saving it after he instigated the TikTok ban himself, and Google has dropped its commitment not to use AI for weapons and surveillance.
It’s unclear how the updates to Meta’s content policy and enforcement announced on January 7, 2025 will impact the MENA region, but Mr. Zuckerberg’s announcement noted an eagerness to work with Trump. The announcement also emphasized that Meta will continue to moderate “drugs, terrorism, and child exploitation.” “Terrorism” in the United States has always been political code for “Arabs, Muslims, and Southwest Asians,” and has not included far-right violence. Despite a focus on “free speech” in the announcement, it is unlikely there will be any great improvement in the MENA region. Worse, it is very likely that, as noted in SMEX’s analysis, hate speech against women, migrants, and LGBTQ people in the region will increase.
TikTok users can also clearly expect changes. The US Supreme Court upheld a law banning the platform, and TikTok even shut down with a notice to users before the ban went into effect. The notice stated that the platform was working with President Trump to restore service, and once the temporary disruption ended, the notice thanked President Trump for saving the platform. As noted by The Intercept, some US lawmakers believed the platform was too pro-Gaza. Since service was restored, many users say they are no longer seeing Gaza-related content.
Why do these platforms seem to fail in the region?
Content from war and conflict zones, and even protests, can be graphic in nature. However, the underlying issue is often that, as with many other parts of the Global Majority, platforms fail to properly resource moderation in the MENA region and moderation of the Arabic language. 2024 research from MADR member Meedan, for example, shows that LLMs that were “at the time of evaluation, state-of-the-art,” largely performed very poorly with Arabic. These failures were particularly notable in the trove of leaked internal Meta documents released in 2021, dubbed the “Facebook Papers.” Several documents outlined insistent warnings from Meta employees, pointing out the complexity of the Arabic language, the need for proper dialect expertise, and Meta’s consistent failures. For example, in Afghanistan Meta removed only 1% of hate speech, and “terrorist content” algorithms “incorrectly deleted nonviolent Arabic content 77 percent of the time.” Additionally, platforms fail to properly apply newsworthiness or political content exceptions to policies even where clearly applicable. Lastly, platforms apply biased definitions of terrorism and violent extremism to content across the region, including by relying on the United States’ terrorist designation lists.
How do platforms moderate content under these policies?
The most common issues people face are removal of their content, suspension of accounts, or the more insidious suppression of their content, such as demotion or shadowbanning. Shadowbanning is when content is “covertly being hidden or taken down.” Hashtags can also face similar artificial suppression.
Another, more recent practice is for platforms to place a warning label on content, sometimes in such a way that users have to click through to view it. This is known as an “interstitial.” For example, Meta regularly places a “graphic content” interstitial, or clickable warning label, on images from the region, sometimes incorrectly.
Removal of content can lead to permanent account suspension. Content removals can usually be appealed; shadowbanning, warning labels, and account suspensions generally cannot.
How are these moderation issues impacting political content?
Several organizations monitor content moderation and support users with content moderation issues, including 7amleh (content related to Palestine), SMEX (content in Arabic-speaking countries), and Mnemonic (human rights documentation). These organizations have documented significant removal of content reporting on or discussing human rights violations and mourning lives lost. Many international NGOs and coalitions have also reported on and documented content removal, including MADR members Access Now, Electronic Frontier Foundation, and Article 19. High-profile activists and individuals have also had their content removed. The Oversight Board (discussed in greater detail here) has taken several cases related to Palestine, as well as Iran and many Arabic-speaking countries, and has made numerous recommendations related to moderation in Arabic and moderation of human rights-related and newsworthy content.
The impacts of the removal of this content are myriad. Several UN mechanisms, the International Criminal Court, the International Court of Justice, and European courts with universal jurisdiction all have ongoing cases and investigations related to Palestine, Syria, Iraq, Sudan, South Sudan, Libya, and Iran. These investigations can be aided by content posted on social media, and international mechanisms are increasingly interested in accessing this important evidentiary content. Unfortunately, reams of evidence are being removed by overzealous and flawed content moderation. Advocates have been trying to create a pathway to request that platforms preserve such content, but so far, efforts have yielded only ad-hoc preservation in limited circumstances, such as the war in Ukraine, and not for Palestine, Syria, or the rest of the MENA region.
Furthermore, in the places impacted most severely, such as Gaza and all of Palestine, designated international and local press have been intentionally silenced, and people are left to rely on social media to get important updates. For example, Instagram accounts have provided the most up-to-date information about where in Lebanon Israeli bombs are dropping. This kind of on-the-ground coverage from everyday users and independent media is essential.
Are there other impacts besides removal of important political content?
Unfortunately, everyday users in the region are also impacted. People mourning lost loved ones and discussing events, even events being covered in mainstream media, are seeing the impact of these policies as well.
Furthermore, not only do platforms over-remove political content, they also fail to remove hate speech targeting Palestinians, LGBTQI+ people, and other vulnerable users throughout the region. This online hate and violence mirrors real-world violence experienced by those communities. Even when platforms do have sufficient policies, they often have not invested in the appropriate language and cultural expertise or technology to implement them. For example, as discussed in greater detail here, a 2022 review of Meta’s content moderation in Palestine found an absence of any Hebrew hate speech classifiers. Reporting from 2024 shows that this continues to be a problem. Additionally, platforms allow harmful paid content, such as ads approved by Meta that include calls to inflict violence on Palestinians or to encourage their migration from the West Bank.
How does content moderation impact already marginalized groups?
As with any other region, there are some users on social media in the MENA region who are particularly vulnerable, such as women, LGBTQI+ people, and migrant workers, as well as human rights defenders. The impacts faced by these communities serve as a disturbing reminder of how social media platforms’ shortcomings and content moderation failures can link to offline harm perpetrated by the military, law enforcement, or non-state actors. As Article 19 points out, the methods of tech-facilitated state policing used against LGBTQI+ users in the region, including social media monitoring, “are also being used against other marginalised communities – as well as against the wider population.”
Article 19 research has detailed the entrapment of LGBTQI+ users with fake profiles and the gathering of digital evidence, including social media posts. More recently, the organization conducted the most extensive review of tech-facilitated harms against LGBTQI+ people ever carried out in the region. The report revealed some disturbing statistics: in addition to extremely high levels of police violence and targeting of devices, 25% of respondents reported experiences of state or police entrapment, including through Facebook.
SMEX has documented myriad issues with sexual and reproductive health and rights information and LGBTQI+ advocacy being improperly removed in the region, despite the presence of policies that should protect such content. Their research found that “Arabic content is met with harsher restrictions than similar English-language content.”
In addition to content removals and entrapment, MADR members have documented how women (especially women human rights defenders) and LGBTQI+ users in the region can face high levels of hate speech, outing, and blackmail. 88% of Article 19’s interviewees had directly experienced hate speech online, and 51% of interviewees said that reporting it got them nowhere. MADR member Jordan Open Source Association recently released an AI tool called Nuha to help researchers detect and classify hate speech. It has been trained on “a dataset obtained by monitoring 20 trending hashtags related to women and the feminist movement in Jordan, as well as 83 names of women activists and women influencers in Jordan.”
Migrant workers also face hate speech that is often under-moderated on social media platforms. At the same time, despite years of advocacy, it appears that in some countries in the region domestic workers are illegally auctioned on social media sites, including major platforms like Instagram.
EFF has also documented the way in which cybercrime provisions in various countries across the region impact free expression on online platforms, particularly speech related to LGBTQI+ issues.
What alternatives exist to major social media platforms?
There are many alternatives to Meta, TikTok, YouTube, and X, although unfortunately none of these alternatives allows users to reach as large an audience.
Things to consider about alternative platforms include:
- Who owns the platform? What are their financial interests and political leanings?
- Who owns the data on the platform and where is it stored? Is the data held in a central server or in servers across the world? Do any of those jurisdictions present security concerns for you?
- What are the rules and enforcement strategies for content moderation on the platform? Is the platform centrally moderated or moderated by the community? Are the rules clear?
- Is data from the platform portable to other platforms or is it hard to extract?
- Is the platform part of the “fediverse”?
Many open Internet advocates are particularly focused on the fediverse, “an interconnected social platform ecosystem based on an open protocol called ActivityPub, which allows you to port your content, data, and follower graph between networks.” Many people have started to access the fediverse using Mastodon.
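For readers who want to see what that portability looks like in practice, below is a minimal sketch, in Python, of how any fediverse tool can look up an account over ActivityPub: a WebFinger query resolves a handle to its public “actor” document, which lists the inbox, outbox, and follower collection that other servers use to deliver and import content. This is an illustrative sketch only; it assumes the requests library is installed, and the handle and server names are hypothetical placeholders, not real accounts.

```python
# Minimal sketch of an ActivityPub account lookup.
# Assumptions: the `requests` library is available, and HANDLE/SERVER below
# are hypothetical placeholders rather than a real account.
import requests

HANDLE = "someuser"          # hypothetical account name
SERVER = "mastodon.example"  # hypothetical fediverse server

# Step 1: A WebFinger query resolves the handle to its ActivityPub actor URL.
webfinger = requests.get(
    f"https://{SERVER}/.well-known/webfinger",
    params={"resource": f"acct:{HANDLE}@{SERVER}"},
    timeout=10,
).json()

actor_url = next(
    link["href"]
    for link in webfinger["links"]
    if link.get("rel") == "self"
    and link.get("type") == "application/activity+json"
)

# Step 2: The actor document is plain JSON listing the account's inbox,
# outbox, and followers collection -- the data other fediverse servers read
# to deliver posts and let users move between networks.
actor = requests.get(
    actor_url,
    headers={"Accept": "application/activity+json"},
    timeout=10,
).json()

print(actor["inbox"], actor["outbox"], actor["followers"])
```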
The Oversight Board
The Oversight Board is a body that was created by Meta to decide individual content moderation cases and issue advisory recommendations. The Board is independently incorporated and funded by a trust created by Meta: essentially, a pool of money that Meta has set aside and cannot control. However, the trustees include several who were originally appointed with Meta’s approval, and most of the Board members were also appointed while Meta still had a final say. Nonetheless, the Board has made many recommendations that reflect demands made by MENA civil society.
The Board makes decisions about individual cases from Facebook, Instagram, and Threads in which Meta has decided to take down or leave up content. The Board also responds to requests for policy advice from Meta (“policy advisory opinions”) and decides “summary” cases.
What rules is the Board applying?
The Board makes its decisions by interpreting international human rights law, in particular Articles 19 and 20 of the International Covenant on Civil and Political Rights (ICCPR), as well as Meta’s own policies. You can read more about applicable Meta policies here in this FAQ (internal link). The Board generally uses a standard case format that applies the three-part test for restrictions on freedom of expression, which must meet the requirements of legality, legitimate aim, and necessity and proportionality. The Board has relied on and cited myriad UN decisions and reports, including Special Rapporteur reports focused on content moderation.
In general, the Board has interpreted legality, in line with UN guidance, to mean that rules must be written precisely enough that users know how to follow them, and must be published somewhere accessible to users. The Board has recommended that content moderation policies be easily accessible, easy to understand, contextualized for regions and languages where necessary, and supported with examples where necessary as well. The change in Meta’s policies has been noticeable since the Board started taking cases, if only because changes are now documented with track changes in Meta’s Transparency Center. The Board has also pointed out other legality issues for Meta, for example its failure to indicate clearly that Instagram is governed by Facebook policies.
Legitimate aims include protecting “the rights or reputations of others, or national security, public order, or public health or morals.” Legitimate aims, as the Board has emphasized, do not include avoiding offense, but do include counterterrorism and removing incitement to violence. The Board has a contradictory record when it comes to content that may not be imminent incitement to violence but that may still be linked to violence in less measurable ways, for example content targeting immigrants in France, blackface in the Netherlands, and even antisemitic content (three cases that all drew sizable minority opinions).
Necessity and proportionality are not as well defined, but generally the Board has looked at alternative measures like warning screens, and has also often asked Meta for public or confidential reports on certain practices to better understand their impact. Some alternative measures have been thoroughly explored with regard to “terrorist content,” and increasingly other types of content such as hate speech, for example limiting comments, adding warning screens, or making posts impossible to share. Many users in the MENA region complain of another measure that Meta denies using: shadowbanning, or limiting the reach of a user’s posts without notifying them.
The Board has less regularly considered the six factors of the Rabat Plan of Action. Those factors are:
- Social and political context
  - For example, speech related to elections or speech taking place during a genocide.
- The speaker
  - For this factor, a particularly influential speaker would weigh heavily; for example, speeches by elected officials encouraging violence against ethnic minorities.
- Intent
  - This factor would consider whether the speaker intended to incite violence.
- Content
  - This would look at, naturally, the actual content of the speech, such as the kinds of words and imagery used.
- Extent of the speech
  - This factor would consider how widespread the speech is; for example, whether it is a viral video or a post on a private page with 10 followers.
- Likelihood of causing harm, including imminence.
For more about the Rabat Plan of Action, read Article 19’s take here, and the analysis of David Kaye, former UN Special Rapporteur on Freedom of Expression, on how it could be applied to content moderation here.
How does one file an appeal with the Board?
The Oversight Board website has instructions here, but here is a tl;dr of the top points users should know if they are considering filing an appeal:
- The likelihood that the Board will take on any case is incredibly low. However, the broader the issue presented by your case, the more likely the Board is to take it.
- The Board does not have authority over account suspensions or ads.
- You must first appeal any content decision to Meta, and then receive an Oversight Board ID number from Meta.
- Once you have that ID, you can enter it into the Board website. You will have to allow the Board some access to potentially private information; you can read the Board’s privacy policy here.
In order to have the best chances with your appeal, you should understand whether the decision was actually incorrect under Meta’s existing policies, or whether there is a gap in the policies you think the Board should address. You will have the opportunity to file a statement with the Board, and you can assist them by making your reasoning clear.
The Board will not purposefully reveal your identity or the specifics of your content, but be aware that the Board taking on a case can present security risks, and it is worth considering those.
Once the Board has taken a case, it will accept public comments. Should you wish to, you can share the case announcement and encourage others to submit comments. You can take a look at comments submitted in other cases, for example the public comments in the Policy Advisory Opinion on Referring to Designated Individuals as “Shaheed” and the comments in the Shared Al Jazeera case. For more about the Shaheed PAO, read SMEX’s analysis of the decision and of Meta’s inadequate response. You can read more about the Shared Al Jazeera case here and here.
If the Oversight Board has accepted your appeal, please feel free to reach out to MADR for support. MADR can help you understand the process and think about how you want to participate.
Specific platform policies
In addition to the following policies, it’s worth noting that content related to sexual health and reproductive rights is often impacted by poorly designed policies. As noted by SMEX, “Platforms do not have SRHR-specific policies; instead, regulations are generally scattered around different policies, such as community guidelines and advertising policies. These regulations often fall under ‘adult’ or ‘sexual’ content, leaving room for poor and summary regulation.”
Meta
All of Meta’s policies can be found in its Transparency Center, and Meta includes far more detail than any other platform both about its policies and how it enforces those policies. The Center discusses how Meta uses automation, as well as how it applies “strikes” to accounts and limits or suspends them. The Transparency Center also explains how Meta determines when to keep up “newsworthy” content that would otherwise be taken down, taking into account “whether that content surfaces an imminent threat to public health or safety, or gives voice to perspectives currently being debated as part of a political process.” That being said, these explanations are often worded in ways that may obscure key pieces of information.
Out of all the applicable policies on Meta, the Dangerous Organizations and Individuals policy has the most impact on content from the MENA region.
Dangerous Organizations and Individuals
Meta’s policy relies on a list of “designated dangerous organizations and individuals.” The list is divided into Tier 1 and Tier 2, with Tier 1 being reserved for entities and individuals that target civilians. Meta’s policy prohibits 1) “glorification,” 2) support, or 3) representation of designated entities and individuals. It also includes a provision explaining that users can share content “reporting on, neutrally discussing or condemning dangerous organizations and individuals or their activities.”
Tier 1 includes all entities designated by the United States as “specially designated narcotics trafficking kingpins (SDNTKs),” “foreign terrorist organizations (FTOs),” and “specially designated global terrorists.”
The policy was updated extensively in late 2023 and early 2024. Changes in December 2023 added a prohibition on “glorification” and removed a previous prohibition on “praise.” In practice, the differences between glorification and praise are not extensive. The December 2023 update also added a prohibition on “unclear references.” The policy does not provide examples of such content, but Meta agreed to provide them in response to an Oversight Board recommendation. The news reporting explanation was added on February 8, 2024.
Issues:
Bias in what and who is covered: US designation lists do not include many categories of groups that target civilians, including domestic far-right organizations in the United States such as the Proud Boys. Meta’s policy also doesn’t appear to cover many types of misinformation or conspiracy theories directly linked to offline harm, for example the “Great Replacement” theory, which was at the root of the 2019 Christchurch massacre shooter’s manifesto and has been replicated ad nauseam in manifestos since then. Finally, state-sponsored terrorism is not addressed at all by the DOI policy, or by Meta’s policies in general.
Newsworthiness and news reporting: The DOI policy specifically allows “content reporting on, neutrally discussing or condemning dangerous organizations and individuals or their activities.” However, it’s unclear whether this provision, or Meta’s “newsworthiness” exception, has been applied to content related to October 7, 2023. Anecdotal evidence appears to indicate otherwise: the Malaysian prime minister’s Facebook post conveying his condolences after the assassination of Hamas political chief Ismail Haniyeh was removed by the platform. Meta responded that it removed the prime minister’s content in “error” and proceeded to restore the post with a newsworthy label indicating: “This post is allowed for public awareness.” However, other similar content removals were not reversed, despite the fact that this type of content clearly meets Meta’s newsworthiness standard.
Hateful conduct
Previously, Meta had a hate speech policy that prohibited 1) direct attacks on the basis of 2) protected characteristics. Attacks include “dehumanizing speech; statements of inferiority, expressions of contempt or disgust; cursing; and calls for exclusion or segregation,” as well as the use of “harmful stereotypes” such as blackface.
Protected characteristics align largely with UN categories and include “race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and serious disease.” Refugees, migrants, immigrants, and asylum seekers are protected only from the most severe attacks, and, per the policy, “Sometimes, based on local nuance, we consider certain words or phrases as frequently used proxies for PC groups.”
Meta’s new “Hateful conduct” policy retains most of the same language, but it now has provisions that allow for more speech targeting transgender people, women, and immigrants. As explained here (link to “already marginalized groups” portion of FAQ), these groups are already disproportionately impacted by content moderation policies.
In the MENA region, the central issue with these policies is local nuance and the proper resourcing of Arabic and Hebrew moderation. As the BSR review discovered, Meta did not have any hate speech classifiers for Hebrew, allowing extremely violent speech to remain up. On the other hand, LGBTQI+ people reclaiming anti-LGBTQI+ slurs in Arabic saw their content removed, a decision later reversed by the Oversight Board.
Meta’s “Zionist” policy
In July 2024, Meta announced that it would now remove content that targets “Zionists” with dehumanizing comparisons, calls for harm, or denials of existence, on the basis that “Zionist” in those instances often appears to be a proxy for Jewish or Israeli people.
A letter signed by 73 civil society organizations before Meta made the change emphasized that “The proposed policy would too easily mischaracterize conversations about Zionists — and by extension, Zionism — as inherently antisemitic, harming Meta users and undermining efforts to dismantle real antisemitism and all forms of racism, extremism and oppression. Treating ‘Zionist’ as a proxy will also encourage the incorrect and harmful conflation of criticism of the acts of the state of Israel with antisemitism.” Despite these serious concerns, repeatedly expressed to the company by civil society, Meta went ahead with the change. Undoubtedly, it will be implemented with the use of automation, which, as discussed above, consistently displays technical issues and bias with Arabic-language moderation, leading to a high number of false positives.
Even where the policy is applied accurately, it will almost certainly lead to content removals that violate international human rights standards. Political discourse is given the highest level of protection in, and is in many ways the reason for, the freedom of expression protections in Articles 19 and 20 of the ICCPR. Importantly, the scope of this right “embraces even expression that may be regarded as deeply offensive,” though such speech can appropriately be regulated for other reasons, for example to prevent incitement to violence. (General Comment No. 34, para. 11).
Violence and Incitement
This policy prohibits “language that incites or facilitates violence and credible threats to public or personal safety.” Content discussing political events that are related to offline unrest may be removed under this policy even when the content itself does not directly encourage violence.
Graphic Content
Meta’s graphic content policy prohibits a long list of specific types of gory still images and videos of humans and animals. In some cases, Meta will allow such content with a warning screen. The prohibited categories absolutely include the kind of content a user might post from a conflict zone, for example violent death, a person being threatened with death, or acts of brutality.
Importantly, the policy notes the following: “We recognize that users may share content in order to shed light on or condemn acts such as human rights abuses or armed conflict. Our policies consider when content is shared in this context and allow room for discussion and awareness raising accordingly.”
Content that is clearly labeled should fall under this provision, but content from the MENA region often still gets removed under this policy.
Telegram
Unlike the other platforms listed here, the messaging app Telegram does not have any presence in or legal connection to the United States. In fact, its founder Pavel Durov holds several citizenships, including a UAE passport.
Telegram is used throughout the MENA to organize and share information. Unfortunately it is also exploited by bad actors to foment hatred against marginalized groups and plan vigilante violence, and by governments to spread misinformation and surveil activists by exploiting security deficiencies.
Telegram has played a particularly big role in Iran; at one point it was so popular that, like Facebook in Myanmar, it was “the Internet.” Telegram was a key platform for organizing the 2017-18 protests, and despite the government’s ban on Telegram, activists continue to access it using VPNs. Like elsewhere in the region, Telegram’s moderation has favored the government while harming activists. During the 2017-18 protests, Telegram agreed to an Iranian government request to block a channel the government said was fomenting violence. More recently, Telegram refused to remove channels being used by the government to collect data on and harass human rights defenders, and agreed to shut down purported Hamas channels.
At a global level, Telegram has served as a place for neo-nazi and far right organizing, drug and weapon sales, and more, leading to bans and mounting pressure to increase content moderation.
That may be why, in September 2024, Durov announced that Telegram would be cracking down on “problematic content” such as fraud and terrorism using “cutting edge AI tools,” and that the company will now share user details, including IP addresses and phone numbers, with law enforcement “in response to valid legal requests.”
YouTube
YouTube’s content moderation faces the same issues with automation and bias that Meta’s does, but it is used differently. It has had a particularly negative impact over the years on vast archives of human rights documentation from Syria, as documented extensively by MADR member Mnemonic. More recently, YouTube has demonetized and removed content from Palestine.
“Violent or Dangerous Content” policies, including
Twitter/X
Research from many MADR members has also documented significant content removals on Twitter over the years, but more recently the issue has
TikTok
“Safety and Civility” policies
“Shocking and graphic content” policies
This FAQ was prepared by Dia Kayyali. Dia Kayyali (they/them) is a 2025 Tech Policy Press Fellow, a technology and human rights consultant, and a community organizer. As a leader in the content moderation and platform accountability space, Dia’s work has focused on the real-life impact of policy decisions made by lawmakers and technology companies, with a particular focus on impacts in global majority countries. They have cultivated global solidarity to push back and improve the impact of policies on vulnerable communities, from LGBTQIA+ people to religious minorities. They have also advocated for human rights extensively directly with policymakers in the United States, European Union, and globally. They previously served as a Senior Case and Policy Officer at the Oversight Board (aka the Facebook Oversight Board), a Policy Director at Mnemonic, a Tech + Advocacy Program Manager at WITNESS, and as an activist at the Electronic Frontier Foundation.