This paper was written by two Tunisian researchers who prefer to remain anonymous.
In the wake of the presidential elections of October 2024, the first since the coup d'état of July 2021, Tunisia has seen a resurgence of suspiciously sponsored political content, both pro- and anti-regime, on online social media networks. This included advertisements denouncing the refusal of the president of the electoral body, Farouk Bouasker, to reinstate certain candidates. We also saw "locked" profiles with Egyptian-sounding names reacting with likes or "laugh" emojis to publications on the Facebook page of the Presidency of the Republic of Tunisia. With the proliferation of anti-Saied pages garnering thousands of likes in the space of a few days, pro-regime influencers have multiplied their videos denouncing the spread of these pages and content, calling on their audiences to witness the plot hatched by dark forces, which is the mainstay of the new regime's narrative.
It is hard to overlook the importance of Facebook in Tunisia. It remains the most widely used network and continues to be a major platform for political life. While it has long been the site of disinformation campaigns in Tunisia, in recent years the issue has taken on a whole new dimension: the sector has grown from a local cottage industry into a full-fledged business run by specialized companies operating on an international scale. This industrialization of disinformation goes hand in hand with opinion manipulation, taking the form of troll profiles dictating the political agenda, or fake profiles creating a false sense of popularity for certain ideas. Both disinformation and manipulation raise questions about the future of democracy, in Tunisia and globally, in a context where, for many, these networks continue to represent a faithful reflection of reality.
This paper seeks to provide an overview of the dynamics of disinformation in the Tunisian digital space, exploring the different narratives conveyed, the forms of manipulation, and the role of social media platforms in their amplification. The paper also shows that certain disinformation narratives circulate between different countries in the region. The aim is to broaden reflection on these forms of manipulation while proposing a regional research and action agenda that can help reduce the impact of these activities, known in the Arab world by the general name of "electronic flies".
Manipulation or Disinformation? Untangling the Concepts
It is important to clarify the concepts used when talking about misinformation or online manipulation. As the two are often intertwined, it is easy to use these words interchangeably. Claire Wardle and Hossein Derakhshan, in their report "Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking", present information manipulation in terms of three types of information disorder: disinformation (false information deliberately created to harm), misinformation (false information shared with no intention of harm), and malinformation (true information used to harm).
These disorders follow a three-phase cycle of content creation, production, and distribution, carried out by various agents whose motivations range from the financial to the social or political. These agents, whether official (state actors, political parties) or unofficial (groups of individuals, isolated trolls), play a central role in the dissemination of messages that can take a variety of forms (texts, videos, images, memes). The messages themselves vary in durability: some are designed to have a long-term impact, while others are temporary and target specific events, such as an election.
Jonathan Corpus Ong and Jason Vincent A. Cabañes describe digital manipulators as "architects of misinformation" who oversee and design campaigns based on fake news and orchestrated interactions. Often with backgrounds in advertising and public relations, these "architects" collaborate with anonymous influencers, who manage several fake accounts with large numbers of followers. These influencers create attractive, shareable content, mixing true and false information to amplify campaign messages. Further down the hierarchy, fake account operators at the community level generate "illusions of engagement" through content shares and likes.
The 2019 Presidential Election: First Trial Balloons
The snap presidential election of 2019 was the first wake-up call to the rising tide of disinformation in Tunisia. Several investigations revealed the massive use of Facebook pages with suspicious content. The Oxford Internet Institute, in its global inventory of organized manipulation on social networks, documented the large-scale use of social networks to manipulate public opinion during that election. The manipulative tactics presented in its country case studies report included the creation and management of unaffiliated Facebook pages, which disseminated false polls, unfounded rumors, and deliberately misleading content to influence voters. It also documented political parties' use of fake human-run accounts to disseminate pro-government or pro-party narratives, as well as to attack opponents.
For example, an investigation published in 2019 by the Digital Forensic Research Lab (DFRLab), the online disinformation and manipulation research center of the American think-tank The Atlantic Council, revealed that an Israeli company, Archimedes Group, had run disinformation campaigns in a number of African countries, including Tunisia. An analysis by the Tunisian investigative outlet Inkyfada concluded that the content of the 11 pages concerned benefited future presidential candidate Nabil Karoui, suggesting that he was the sponsor. The pages, which gave themselves the appearance of "ranting" pages against politicians, mysteriously spared Karoui, even lauding him. The pages in question were removed by Facebook, which had detected "coordinated" and "deceptive" behavior. They were characterized by growth too rapid to be plausible given their creation dates: the most popular of the deactivated pages was also the most recent, garnering 104,000 likes in the space of two months. Often, such pages are liked by thousands of bots, creating an appearance of popularity that pushes real profiles to join what looks like a massive movement.
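The growth anomaly that gave these pages away (a very young page accumulating likes far faster than organic pages do) can be illustrated with a minimal, hypothetical heuristic. The threshold, page name, and dates below are invented for illustration; they are not drawn from the DFRLab or Inkyfada investigations, which relied on richer behavioral signals.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PageSnapshot:
    name: str
    created: date        # page creation date
    observed: date       # date on which likes were counted
    likes: int

def likes_per_day(page: PageSnapshot) -> float:
    """Average daily like growth since the page was created."""
    age_days = max((page.observed - page.created).days, 1)
    return page.likes / age_days

def flag_suspicious(pages: list[PageSnapshot], threshold: float = 500.0) -> list[str]:
    """Flag pages whose sustained growth rate exceeds a chosen threshold.

    The threshold is arbitrary here; in practice it would be calibrated
    against organic pages of comparable topic and audience size.
    """
    return [p.name for p in pages if likes_per_day(p) > threshold]

# A page like the most popular deactivated one described above: roughly
# 104,000 likes in about two months (~1,700 likes per day).
page = PageSnapshot("example_page", date(2019, 3, 1), date(2019, 4, 30), 104_000)
print(flag_suspicious([page]))  # ['example_page']
```

A real detection pipeline would combine this with other signals Facebook itself cites, such as coordinated posting times and shared administrators, since fast growth alone can also reflect genuine virality.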
A second investigation, again conducted by DFR Lab and published in 2020, revealed the existence of other pages, again for the benefit of Nabil Karoui, attacking his opponents. This time, it was a Tunisian company, Ureputation, that was at the helm. These sponsored pages often passed themselves off as neutral journalistic information channels (for example, by boasting that they specialized in fact-checking) and relied on fake "news media", created online to give them greater credibility. Today, this practice seems to have been abandoned, as it was often these news websites, whose domain names could easily be traced back to their origin, that made it possible to identify the perpetrators.
In addition, the Court of Auditors' report on the 2019 elections noted the existence of hundreds of unofficial pages for the various presidential candidates. The example of Kais Saied is the most interesting: of all the candidates, he was the only one without an official page, yet he also had the most unofficial support pages (30), with 120 administrators and just over three million likes. His runoff opponent, by contrast, had just three unofficial pages and some 586,000 likes.
25 July 2021: Suspicious Online Mobilization?
25 July 2021 marked both a political and a digital turning point. The suspension of parliamentary activities by President Kais Saied was preceded by widespread mobilization on social media, with some pages gathering over 700,000 subscribers, such as the Facebook group "NO to compensation for Nahdhaouis", now inaccessible. However, groups and pages of the same name still exist, with substantial numbers of subscribers. Some groups had commercial functions and saw their names changed a few days after 25 July. According to researcher Larbi Sadiki, these pages helped amplify popular anger from 12 July onward, in a context of deep crisis caused by the Mechichi government's haphazard management of the pandemic. It is important to note that, although these pages certainly played an amplifying role in this movement, it is not possible to ascertain without concrete evidence that they were manipulative; they may have reflected genuine mobilization.
Mobilization on social media was not limited to the Tunisian digital space and local players. The measures taken by the President after 25 July gave rise to a diversity of content in the media space and on social networks. On the one hand, anti-Ennahdha messages were disseminated via bots, influencers, and widely shared hashtags, such as "Tunisians revolt against the brotherhood". According to researcher Marc Owen Jones, much of this activity came from accounts linked to the United Arab Emirates-Egypt-Saudi Arabia axis.
On the other hand, a false report published by Middle East Eye was widely relayed, claiming that former head of government Hichem Mechichi had been physically assaulted at Carthage Palace by Egyptian officers, and accusing the Egyptian army and the President of the United Arab Emirates of having directly supported the coup.
From Political Manipulation to Migrant Chasing
The manipulation of Tunisia's digital space, which until then had mainly targeted the country's various political factions, took a sinister turn with the hate campaign against migrants from sub-Saharan Africa. This campaign, which according to Falso, an independent fact-checking platform, was launched in 2021 by the Parti Nationaliste Tunisien (a three-member micro-party), would culminate in the president's 21 February 2023 speech on the "criminal plan to change the composition of the demographic landscape" that sub-Saharan immigration supposedly represented.
In the months leading up to this speech, the Tunisian digital space was flooded with videos, memes, and testimonials, often from anonymous accounts and groups, spreading hate speech and false rumors about the sub-Saharan community. The main rumor concerned the disappearance of street cats, and even domestic cats, which migrants were said to have eaten. Curiously, the same rumor has since been used in the United States by Donald Trump and his supporters against Haitian migrants.
This campaign, which began in restricted Facebook groups, then spread to various large-audience Facebook groups and TikTok. Although Facebook has suspended some of these groups and accounts, following recommendations from Tunisian partner organizations, they remain active elsewhere, notably on Instagram.
By spring 2023, over 50 anti-migrant Facebook groups and pages had emerged, with subscribers from various Maghreb countries. Several X accounts also appeared, systematically sharing content focused on Maghrebi supremacy and relaying hate speech and rumors against migrants. It is interesting to note that, unlike the movement on Facebook and TikTok, which was aimed at a local audience, the anti-migrant accounts on X are almost exclusively French-speaking and seem to be aimed more at the Maghreb diaspora.
Nor is this movement limited to Tunisia. In Morocco, ultranationalist movements have been spreading on social media since 2019. They use the same rhetoric, relying on memes borrowed from the American alt-right depicting migrants as criminals, and on historical Moroccan symbols, such as the Marinid flag. In Egypt, a similar movement emerged in 2022, with two distinct groups: the first, the "Sons of Kemet", claims Egyptian racial purity, seeing its members as heirs to ancient Egyptian civilization, and advocates the expulsion of those who do not share these alleged genetic characteristics. The second group, "Egyptian Nationalism", adopts a nationalist vision with anti-refugee rhetoric similar to that of the Western far right.
These movements draw on similar rhetoric and symbols: nostalgia for a glorious past, be it the Pharaohs, the Moroccan Empire, or Carthaginian civilization; the desire to restore a strong state, characterized by xenophobic, anti-minority nationalism, both ethnic and religious; strong opposition to NGOs that defend minorities and migrants; and finally, the re-appropriation of memes from the American alt-right. Although it is not clear who is behind them, these commonalities show that we are dealing with a well-rehearsed, large-scale industry.
In Tunisia, these movements have gained in sophistication, as X and Instagram accounts have emerged posing as simple pages run by history enthusiasts, sharing content on Carthaginian history and civilization, mostly using AI-generated images to illustrate the nation's past grandeur. Their feeds are punctuated from time to time by racist posts, which become the majority whenever migrants return to the news agenda, only to disappear again, as if nothing had happened. "These pages try to bypass some of the platforms' content moderation mechanisms, by posting ephemeral stories, for example, or by quickly deleting hateful content, before it is moderated by the platforms or reported by other users. The administrators of these pages are therefore aware that their content is in breach of these platforms' rules. There is also the use of Facebook's live functionality, using pre-recorded videos, which allows them to use the advantages of the functionality in terms of audience, algorithms, and notifications sent to their followers," explains Rima Sghaier, a digital rights defender with whom we spoke.
Moderation continues to be a blind spot in the management of this content. Indeed, much of the work is automated by the major platforms, leaving them largely unable to detect hate content in Arabic or Arabic dialects. An internal Facebook study in 2020 revealed that in the Middle East and North Africa region, only 6% of hate content was detected on Instagram, compared with 40% on Facebook. What is more, "a person who wants to report content is almost never going to come across a real human on the other side", points out Rima Sghaier. Only in the case of a massive campaign and reporting by local trusted partners are pages pulled down, and even that takes a long time. Marc Owen Jones, a specialist in online disinformation in the region, talks in the podcast Afikra about the effect of "data imperialism", whereby GAFAMs (Google, Amazon, Facebook, Apple, Microsoft) take advantage of user data from the Global South while concentrating their moderation resources almost exclusively on countries in the Global North, where the reputational risk is greater.
Troll Farms: A Flourishing, Under-Researched Practice
In the region, manipulation for disinformation purposes has become a flourishing, if hidden, industry. In addition to the creation of propaganda pages and Facebook groups, hundreds if not thousands of people are employed in troll farms that take the guise of digital communications companies to run fake personal accounts. In a published article, researchers Marina Ayeb and Tiziano Bonini interviewed a number of troll farm workers based in Egypt and Iraq. Reproducing a model notably exposed around the 2016 US presidential election, the workers explain that their job is to animate dozens of fake profiles that react to current events according to directives given to them upstream. Thousands of accounts are animated in this way, all sounding alike and attacking the same targets without appearing to be "bots". This creates a mass effect whose sole aim is to give the impression that the opinions these accounts defend are majority opinions. Some of these workers, notably in Egypt, explained that they had noticed the same "talking points" they were told to spread on the networks reproduced identically in the broadcast media.
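The "mass effect" described above leaves a detectable fingerprint: many nominally independent accounts posting near-identical wording at the same moment. Below is a minimal, hypothetical sketch of how researchers might cluster such posts; the word-level Jaccard similarity and the 0.7 threshold are illustrative choices, not a method taken from the article cited.

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two posts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def coordinated_clusters(posts: list[str], threshold: float = 0.7) -> list[list[int]]:
    """Group indices of posts whose pairwise similarity exceeds the threshold.

    Near-identical wording across many 'independent' accounts is one
    signal (among others) of a coordinated talking-point campaign.
    """
    parent = list(range(len(posts)))  # union-find over post indices
    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i, j in combinations(range(len(posts)), 2):
        if jaccard(posts[i], posts[j]) >= threshold:
            parent[find(i)] = find(j)      # merge the two clusters
    clusters: dict[int, list[int]] = {}
    for i in range(len(posts)):
        clusters.setdefault(find(i), []).append(i)
    return [c for c in clusters.values() if len(c) > 1]

posts = [
    "The opposition serves foreign agendas",
    "the opposition serves foreign agendas",
    "Great match tonight",
]
print(coordinated_clusters(posts))  # [[0, 1]]
```

Real investigations, such as those by DFRLab, combine textual similarity with metadata (posting times, account creation dates, shared infrastructure), since identical wording alone can also come from ordinary resharing.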
In the article "Social Media Manipulation in the MENA: Inauthenticity, Inequality, and Insecurity", Leber and Abrahams identify several types of actors involved in social network manipulation in the MENA region. Authoritarian regimes often run centralized campaigns, using "bot armies" to spread pro-government narratives. However, decentralized actors, such as coordinated user groups or co-opted influencers, also participate in these manipulative campaigns. These individuals or entities may operate independently while promoting state objectives. Private companies also provide automated engagement services, blurring the distinction between state manipulation and private initiative.
In a region where unemployment, particularly among young people, is structural, these unfulfilling and precarious jobs are difficult to turn down or quit: they remain jobs. In the absence of any regulation at either the government or GAFAM level, the industry has a bright future ahead of it. Worse still, with Elon Musk's takeover of Twitter and the introduction of a paid verification system, disinformation accounts have gained considerable leeway, since they can now adopt an official appearance.
What Does This Mean for The Future?
Online disinformation is now a global practice. Both techniques and memes circulate from country to country, contributing to the general rise of renewed forms of authoritarianism shaking the world. If social media networks were, in the early 2010s, a driving force for the mass mobilization of social movements, they have gradually become, as GAFAMs found their profitability model, a space where mass manipulation, as long as it is paid for, has become widespread.
Industrialized disinformation practices, coupled with a lack of moderation, pose a serious threat to democracy. Countries like Tunisia, where trust in institutions is low and democracy is young and fragile, have few safeguards against such practices. This is not to say that Facebook campaigns brought the country down, but they have contributed to poisoning the political environment. Beyond the campaigns, it is the very centrality of a social media network whose engagement is fueled by clashes and conflict that needs to be questioned.
Measured against the threat these new practices pose to public debate and democracy, the current level of regulation is insufficient, when it is not simply a tool for imprisoning opponents. Decree-Law 54 on the fight against disinformation, which came into force in Tunisia in 2022, has become the preferred tool for repressing the opinions of citizens and journalists, who are accused of spreading false news when they criticize the country's situation or the regime. Meanwhile, the law does nothing to address the industrialized disinformation that is rife in the country.
Companies such as Meta, X, and TikTok must be made to face up to their responsibilities in terms of content moderation, which also involves regulation. Arabic, and more specifically the various dialects, continue to be very poorly moderated, allowing hate speech, particularly racist speech, to reach the general public.
At the same time, the issue calls for greater investment in research. Research into disinformation is still in its infancy, and there are significant gaps in the region, particularly in understanding its real impact on political and social behavior. What influence do these campaigns exert? How do they create their narratives, and to what end? Who are their sponsors? Why such frequent recourse to a supposedly glorious history? What influence does the Western far right have on these movements?
Available studies focus mainly on manipulation campaigns carried out in the USA or Europe, which limits their relevance in other contexts, such as Tunisia. It is therefore imperative to develop a research framework specific to the region, one that takes into account its political and social specificities. But even before focusing on narratives and their impacts, it is the very capacity to research this content that needs to be strengthened. Yet this is made increasingly difficult by the growing closure of APIs and their commercialization at prices beyond the means of many institutions in the Global South. "Even when certain access is granted to a handful of researchers, as is the case with Meta, a filtering system favors Western researchers. Research tools have become inaccessible and too expensive, like CrowdTangle, which has been shut down, or X, which has made its APIs available at prohibitive rates, creating a barrier for institutions in our region," explains Rima Sghaier.
Fact-checking, while present in Tunisia, is not enough to stem the spread of disinformation on a large scale. According to Jon Bateman and Dean Jackson, in their report Countering Disinformation Effectively, fact-checking is effective in correcting erroneous beliefs but does not necessarily bring about lasting behavioral change. It is therefore essential to complement fact-checking with critical media education that builds users' capacity to resist manipulation.
Finally, the industrialization of online manipulation, and the all-too-weak response to it, raises a broader question for democracy and social justice activists in the Arab world: how should we invest in social media networks today? What role can they play in movements for emancipation, when the ground seems undermined in advance by the stranglehold of the far right, encouraged by algorithms and, in the case of X, by the company's owner himself? Should we desert these platforms and create other spaces, or industrialize our own online presence, at the risk of working against algorithms stronger than we are?
The views represented in this paper are those of the author(s) and do not necessarily reflect the views of the Arab Reform Initiative, its staff, or its board.