The Struggle for Trust Online

Freedom on the Net
Written by
Allie Funk
Kian Vesteinsson
Grant Baker

Key Findings

Global internet freedom declined for the 14th consecutive year. Protections for human rights online diminished in 27 of the 72 countries covered by Freedom on the Net (FOTN), while 18 earned improvements. Kyrgyzstan received this year’s sharpest downgrade, as President Sadyr Japarov intensified his efforts to silence digital media and suppress online organizing. China shared its designation as the world’s worst environment for internet freedom with Myanmar, where the military regime imposed a new censorship system that ratcheted up restrictions on virtual private networks (VPNs). At the other end of the spectrum, Iceland maintained its status as the freest online environment, and Zambia secured the largest score improvement. In 2024, FOTN assessed conditions in Chile and the Netherlands for the first time; both showcased strong safeguards for human rights online.
Free expression online was imperiled by severe prison terms and escalating violence. In three-quarters of the countries covered by FOTN, internet users faced arrest for nonviolent expression, at times leading to draconian prison sentences exceeding 10 years. People were physically attacked or killed in retaliation for their online activities in a record high of at least 43 countries. Internet shutdowns and reprisals for online speech created even more perilous environments for people affected by several major armed conflicts around the world.
Censorship and content manipulation were combined to sway elections, undermining voters’ ability to make informed decisions, fully participate in the electoral process, and have their voices heard. Voters in at least 25 of the 41 FOTN countries that held or prepared for nationwide elections during the coverage period contended with a censored information space. In many countries, technical censorship was used to constrain the opposition’s ability to reach voters, reduce access to reliable reporting, or quell concerns about voting irregularities. In at least 21 of the 41 countries, progovernment commentators manipulated online information, often stoking doubt about the integrity of the forthcoming results and seeding long-term mistrust in democratic institutions. In addition, interference from governments and a reduction in transparency mechanisms on major social media platforms chilled the efforts of independent researchers and media groups to shed light on election-related influence operations.
In more than half of the FOTN countries that held or prepared for elections, governments took steps aimed at addressing information integrity, with mixed results for human rights online. The interventions included enforcing rules related to online content, supporting fact-checking and digital literacy initiatives, and passing new guidelines to limit the use of generative artificial intelligence (AI) in campaigning. The impact on internet freedom depended on the extent to which each effort prioritized transparency, civil society expertise, democratic oversight, and international human rights standards. Examples from South Africa, Taiwan, and the European Union served as the most promising models.
Building a trustworthy online environment requires a renewed and sustained commitment to internet freedom. This year, FOTN indicators assessing limits on content dropped to their lowest average score in more than a decade, excluding the two countries covered in this edition for the first time—an indication that online censorship and manipulation are growing ever more extreme. The lack of access to a high-quality, reliable, and diverse information space has impeded people’s ability to form and express their views, engage productively in their communities, and advocate for government and company accountability. Policy interventions designed to protect information integrity can help build confidence in the online environment, provided they are anchored in free expression and other fundamental rights. Responses that fail to incorporate those principles will only hasten the global decline in internet freedom and democracy more broadly.

The Struggle for Trust Online

A rapid series of consequential elections has reshaped the global information environment over the past year. Technical censorship curbed many opposition parties’ ability to reach supporters and suppressed access to independent reporting about the electoral process. False claims of voter fraud and a rise in harassment of election administrators threatened public confidence in the integrity of balloting procedures. Partisan efforts to delegitimize independent fact-checkers and researchers chilled their essential work. As a result, more than a billion voters had to make major decisions about their future while navigating a censored, distorted, and unreliable information space.
These trends contributed to the 14th consecutive year of decline in global internet freedom. Of the 72 countries covered by Freedom on the Net 2024, conditions for human rights online deteriorated in 27, and 18 countries registered overall gains. The year’s largest decline occurred in Kyrgyzstan, followed by Azerbaijan, Belarus, Iraq, and Zimbabwe. Conversely, Zambia earned the largest improvement, as space for online activism opened. In more than three-fourths of the countries covered by the project, people faced arrest for expressing their political, social, and religious views online, while people were met with physical violence related to their online activities in a record high of at least 43 countries.


Wiping out online dissent

For the first time in 10 years, China shared its designation as the world’s worst environment for internet freedom with a second country: Myanmar. Conditions there deteriorated to their lowest point in the history of FOTN. Since seizing power in a 2021 coup, Myanmar’s military has conducted a brutally violent crackdown on dissent and imprisoned thousands of people in retaliation for their online speech, all while building a mass censorship and surveillance regime to suppress the activities of civilian prodemocracy activists and armed resistance groups. In May 2024, the military introduced new censorship technology to block most VPNs, cutting residents off from tools they had relied on to safely and securely bypass internet controls. At the same time, Beijing has persisted in its effort to isolate China’s domestic internet from the rest of the world, blocking international traffic to some government websites and imposing huge fines on people using VPNs. The Chinese government also continued to systematically repress dissent, for example by censoring online discussion about activist and journalist Sun Lin, who died in November 2023 after police beat him in apparent retaliation for his social media posts about protests against Chinese Communist Party (CCP) leader Xi Jinping.
Well beyond the world’s worst environments, many people faced harsh repercussions for expressing themselves online. In at least 56 of the 72 countries covered by FOTN, internet users were arrested due to their political, social, or religious expression. A Thai prodemocracy activist was sentenced to 25 years in prison in March 2024, having been convicted under the country’s repressive lèse-majesté law for 18 posts about the monarchy on the social media platform X. Cuban authorities sentenced a woman to 15 years in prison for sedition and “enemy propaganda” after she shared images of protests on social media, including a video recording of police attacking demonstrators. In Pakistan, a court sentenced a 22-year-old student to death on blasphemy charges for preparing pictures and videos that denigrated the prophet Muhammad, and sentenced a 17-year-old to life in prison for sharing them on WhatsApp.
Authorities around the world limited access to online spaces that people used to consume news, connect with loved ones, and mobilize for political and social change. Governments in at least 41 countries blocked websites that hosted political, social, and religious speech during the report’s June 2023 to May 2024 coverage period. In Kyrgyzstan, the government blocked the website of the independent media outlet Kloop after it reported on an imprisoned opposition figure’s allegations of torture in detention. Authorities later ordered a full liquidation of the umbrella organization that runs the outlet, further reducing people’s access to investigative reporting on government corruption and rights abuses. In at least 25 countries, governments restricted access to entire social media and communication platforms. French authorities ordered the blocking of TikTok in the French Pacific territory of New Caledonia to curb protests by members of the Kanak community, the island’s Indigenous people, that grew violent in May 2024 amid dissatisfaction with proposed electoral reforms.

Internet freedom under fire

Retaliatory violence for online expression from both state and nonstate actors, as well as deteriorating conditions during armed conflicts, drove several score declines during the coverage period. In a record 43 countries, people were physically attacked or killed in reprisal for their online activities. In Iraq, where journalists, activists, and bloggers face routine violence, kidnappings, and even assassinations in retaliation for online speech, a prominent civil society activist was murdered in October 2023 by an unknown assailant after his Facebook posts encouraged Iraqis to engage in protests. A Belarusian online journalist reported being tortured by authorities in December 2023 due to his connection to one of the hundreds of independent news outlets that the government deems “extremist.”
Armed conflicts were made even more dangerous by a lack of access to information and essential services. Internet shutdowns during fighting in Sudan, Ethiopia, Myanmar, the Gaza Strip, and Nagorno-Karabakh plunged people into information vacuums, prevented journalists from sharing reports about human rights abuses, and hampered the provision of desperately needed humanitarian aid. Amid the civil war in Sudan between the paramilitary Rapid Support Forces (RSF) and the regular Sudanese Armed Forces (SAF), the RSF captured internet service providers’ data centers in Khartoum and cut off internet access across the country in February 2024. The shutdown disrupted humanitarian groups’ ability to deliver food, medicine, and medical equipment.
Forces vying for control during wartime also retaliated directly against people who reported on or discussed the conflicts online. Both the RSF and the SAF in Sudan tortured journalists and other civilians in response to perceived criticism on digital platforms. During the Azerbaijani military’s offensive in Nagorno-Karabakh in September 2023, authorities in Baku detained several people for a month, including former diplomat Eman Ibrahimov, because of social media posts that criticized the operation or called for a peaceful resolution of the conflict.
Palestinian journalists in the Gaza Strip attempt to connect to the internet. Armed conflicts around the world were made even more dangerous by restrictions on connectivity. (Photo credit: Said Khatib/AFP via Getty Images)
The impact of the devastating war between Israel and the militant group Hamas reverberated around the world. (Israel and the Israeli-occupied Palestinian territories are not among the 72 countries covered by FOTN.) People in several countries, including Bahrain, Saudi Arabia, and Singapore, faced repercussions for expressing their views about the conflict online. In Jordan, dozens of users were arrested between October and November 2023 under the country’s repressive new cybercrime law for their posts criticizing the Jordanian government’s relationship with Israel or calling for protests in support of the Palestinian cause. More broadly, independent researchers documented a surge of antisemitic and anti-Muslim hate speech online, a proliferation of false and misleading content about the conflict, and an increase in disproportionate restrictions on pro-Palestinian and other Palestine-related content by Facebook’s and Instagram’s moderation systems.

Elections focus attention on a trust deficit in the information space

In 2024, FOTN’s indicators assessing limits on content—including website blocking, disproportionate content removal, censorship laws, self-censorship practices, content manipulation, and constraints on information diversity—dropped to their lowest average score in more than 10 years, excluding the two countries that were covered for the first time in this edition. Today’s information space contributes to and is degraded by many of the same challenges affecting human society more broadly: rising political polarization, a chilling of civic participation, partisan efforts to undermine confidence in elections, and a long-term erosion of trust in democratic institutions. These problems have interfered with people’s fundamental rights to seek, receive, and impart diverse information, form opinions, and express themselves online.
As voters around the world headed to the polls in 2024, the preexisting threats to the information space only grew more acute. Freedom House and other commentators had warned that a perfect storm of challenges could prove disastrous for information integrity during the year. Generative AI has become more accessible, lowering the barrier to entry for those seeking to create false and misleading information. Many social media companies have laid off the very teams that were dedicated to advancing trust, safety, and human rights online. These warning signs served as a catalyst for efforts aimed at rebuilding confidence in online information during the coverage period. Policymakers, tech companies, and civil society groups experimented with ways to strengthen platform governance, boost digital literacy, and incentivize more responsible online behavior. Some initiatives showed promise, though it is still too early to assess their efficacy. Others failed to adequately protect internet freedom while attempting to address false, misleading, and incendiary content. To foster an online environment that offers high-quality, diverse, and trustworthy information, successful policies must include robust protections for free expression and other fundamental rights.

Controlling Information to Tilt an Election

Many governments sought to control electoral outcomes while still claiming the political legitimacy that only a free and fair election can confer. Their curation of the online information space through censorship and content manipulation often reinforced offline efforts to plant seeds of doubt in or rig the voting itself. For example, a number of incumbents restricted access to content about the opposition, reducing their opponents’ ability to persuade and mobilize voters, or simply boosted their own preferred narratives about the election results. Censorship and content manipulation frequently began well before an electoral period, disrupting the crucial discussion and debate necessary for voters to form and express their views.

Obstruction of access to diverse information

In 25 of the 41 FOTN countries that held or prepared for nationwide elections during the coverage period, governments blocked websites hosting political, social, and religious speech; restricted access to social media platforms; or cut off internet connectivity altogether. Blocking websites, the most common form of election-related censorship, allowed authorities to selectively restrict content that they deemed objectionable, such as reporting on corruption or evidence of voting irregularities, while maintaining access to information that worked in their favor. Internet shutdowns were the least common election-related censorship tactic, suggesting that authorities are more reluctant to impose such extreme and unpopular restrictions during balloting. When they did occur, shutdowns were most often aimed at reducing opposition parties’ ability to communicate with voters ahead of an election or at quashing postelection protests over alleged fraud.

Technical censorship limits independent information and reduces electoral competition

Technical censorship was often used to suppress access to independent reporting, criticism of the government, and civil society websites, mirroring a given state’s broader offline restrictions over news media. Officials in Cambodia ordered internet service providers to block access to independent news websites a week before the July 2023 elections, further tightening media controls during a balloting process that was thoroughly engineered to suppress challenges to the ruling Cambodian People’s Party.
Governments also deployed technical censorship to stymie the opposition’s ability to engage with voters. Ahead of Bangladesh’s January 2024 elections, authorities temporarily restricted internet connectivity when the main opposition Bangladesh Nationalist Party (BNP) held a large rally in October 2023, limiting online discussion of the event and impeding the party’s digital outreach to supporters. In the run-up to Belarus’s openly rigged parliamentary elections in February 2024, officials partially restricted access to YouTube to prevent Belarusians from watching exiled opposition leader Sviatlana Tsikhanouskaya’s New Year address. To mock the government’s own information controls, however, the Belarusian opposition created Yas Gaspadar, a fictional AI-generated candidate, and claimed that he could speak freely to voters online without risking arrest.
Authorities in Belarus restricted access to YouTube in an attempt to prevent people from watching opposition leader Sviatlana Tsikhanouskaya deliver a speech ahead of the election. (Photo credit: Freedom House)
Repressive regimes that faced strong opposition challengers resorted to the most brazen forms of censorship in their bids to maintain power. During Pakistan’s February 2024 general elections, the military used harsh offline methods to suppress support for former prime minister Imran Khan and his Pakistan Tehreek-e-Insaf (PTI) party, imprisoning Khan and other party leaders, barring Khan from running, and forcing the PTI to field its candidates as independents. To bypass the crackdown, the PTI organized virtual rallies and deployed a generative AI avatar of Khan to deliver speeches that he wrote behind bars. The military intensified its censorship in response, with users reporting difficulty accessing the internet and social media platforms during the virtual rallies. On election day, authorities restricted mobile connectivity, and some voters stated that the restrictions limited their ability to locate polling stations. After the vote, as results showed a strong performance by PTI-linked candidates and the party’s supporters gathered on X to allege voting irregularities, authorities blocked the platform, as well as websites created by the party to document purported vote rigging.
In Venezuela, ahead of an independently organized opposition primary in October 2023, the authoritarian regime of Nicolás Maduro ordered the blocking of sites that allowed voters to locate polling stations. The move aligned with Maduro’s offline interference, including a ruling by the politicized Supreme Tribunal of Justice that barred the primary’s winner, María Corina Machado, from running in the July presidential election, held after FOTN’s coverage period. In July, when vote tallies collected by the opposition showed that Maduro had been soundly defeated by a Machado ally, former diplomat Edmundo González Urrutia, the regime ratcheted up its censorship apparatus to support Maduro’s claims of victory. Authorities blocked Signal, X, and a host of media and civil society websites as part of their drive to quell mass protests, cut the opposition leadership off from its supporters, and reduce access to independent news about the election results and the state’s offline crackdown.

Censorship laws threaten electoral speech

Authorities in many countries enacted stricter laws and regulations governing online content, effectively deterring people from reporting on elections and expressing their views about candidates and policies. Ahead of an early presidential election in June and July, authorities in Iran—the world’s third most repressive internet freedom environment—criminalized any content that encouraged election boycotts or protests, or that criticized candidates. The rules were, at least in part, aimed at garnering higher voter turnout to make the election seem legitimate, despite the arbitrary disqualification of most candidates. Iran’s judiciary also warned that the electoral law prohibited candidates and their supporters from using foreign social media platforms, almost all of which are blocked in the country.
In the run-up to Russia’s sham March 2024 presidential election, the Kremlin enacted a slew of laws that further smothered what was already a heavily restricted information environment. One law criminalized the advertisement of VPNs, advancing the government’s existing efforts to limit the use of such tools to access uncensored information. A February 2024 law banned Russians from advertising on websites and social media channels that were labeled as “foreign agents,” forcing the country’s few remaining independent media channels, largely active on Telegram and YouTube, to downsize their operations and lay off staff.

Distortion of the information space

Progovernment commentators who used deceitful or covert tactics to manipulate online information were active in at least 21 of the 41 FOTN countries that held or prepared for nationwide elections during the coverage period. Content manipulation campaigns warped online discussion by perpetuating falsehoods about the democratic process, manufacturing inauthentic support for official narratives, or discrediting those who presented a threat to the leadership’s political dominance. These networks often worked in tandem with state-controlled or -aligned news media, deployed bot accounts across social media, generated fake websites to spoof real news outlets, and harnessed genuine enthusiasm from political loyalists. Such content manipulation is a less visible form of control than outright censorship, and it may trigger less political blowback, making it a lower-risk tactic that can yield the high reward of reshaping an online environment and even winning an election.

An evolution of players and tactics

The actors involved in disinformation campaigns, as well as their incentives and the technology they employ, have evolved in recent years. To gain plausible deniability regarding their involvement, political leaders have increasingly outsourced content manipulation to social media influencers and shady public-relations firms that benefit from lucrative contracts or political connections. Influencers who participate in content manipulation leverage the trust and loyalty they have built with their followers to promote false, misleading, or divisive messages. In Taiwan, for example, fashion and makeup influencers posted false claims about vote rigging ahead of the country’s January 2024 elections. The claims mirrored an influence campaign originating in China that aimed to discourage voting in Taiwan.
The purveyors of false and misleading information have adapted to a proliferation of platforms and their varying policies and practices on content moderation. As a given narrative pinballs across different video-, image-, and text-based applications, mitigation measures carried out by any single company have less effect. Companies that have grown their user base significantly in recent years, such as Twitch, are playing catch-up when it comes to countering false and misleading information at scale. Others, like Telegram, have become breeding grounds for such campaigns because of their explicitly hands-off approach to content moderation. Research has also pointed to a rise in false, misleading, and hateful content on X after the company drastically relaxed its approach to content moderation, cut staffing on a number of teams, and introduced other concerning policies.
Freedom on the Net 2023 documented the early adoption of generative AI as a means of distorting narratives on political and social topics. During this coverage period, generative AI was frequently used to create false and misleading content. Ahead of Rwanda’s July 2024 elections, a network of accounts spread AI-generated messages and images in support of incumbent president Paul Kagame. Chatbot models offered by major tech companies also spewed inaccurate or partially accurate information about registering to vote, voting by mail, or other procedures in several elections, demonstrating how poorly they are equipped to provide high-quality election information.
However, generative AI does not yet seem to have dramatically enhanced the impact of influence operations. Available evidence from civil society, academia, and media investigations suggests that generative AI–assisted disinformation campaigns have had a minimal impact on electoral outcomes. OpenAI reported in May that the company had disrupted attempts by actors linked to China, Iran, Israel, and Russia to use ChatGPT as a component in more conventional influence campaigns, which failed to generate much reach or engagement. It takes time for governments and those they employ to effectively incorporate new techniques into influence operations, and generative AI is just one of many tools at their disposal. There is also a major research gap in terms of detecting these campaigns in general and identifying their use of generative AI specifically, limiting public knowledge on the impact of the technology.

Sowing doubt in the integrity of elections

Disinformation campaigns during the coverage period commonly broadcast false and misleading narratives that painted electoral institutions and processes as rigged, alleged foreign interference, or, in the most authoritarian states, claimed that a fraudulent election was legitimate. While such campaigns are partisan by definition, their effect extends beyond a particular candidate or party, causing voters to distrust the outcome of the balloting itself. Left unchecked, they seed long-lasting skepticism or even cynicism about elections and can undermine public trust in democratic institutions over time.
Several campaigns over the past year attempted to delegitimize electoral institutions, intimidate election officials, or falsely claim that electoral processes were rigged in the opposition’s favor. In Zimbabwe, supporters of the ruling party harassed independent election observers during the August 2023 elections, maligning them as biased against the government. Ahead of Serbia’s December 2023 elections, progovernment tabloids published false and misleading information about the opposition and independent media, including a fake video purporting to show the political opposition buying votes. These campaigns disproportionately targeted women who play a prominent role in political processes. In South Africa, a fusillade of online attacks was directed at electoral commission member Janet Love, with many accusing her of rigging the vote; the attacks largely came from supporters of Jacob Zuma, a former president who sought to delegitimize the commission as part of his effort to stage a political comeback in the May 2024 elections.
Narratives asserting that politicians were influenced by foreign interests were also common. Fueled in part by comments from officials of the then-ruling Awami League about foreign pressure on the elections, progovernment Bangladeshi bloggers painted the opposition BNP as a tool of US interests. In the lead-up to the June 2024 European Parliament elections, influencers who support Hungary’s ruling Fidesz party published videos characterizing the Hungarian political opposition as the “dollar left” and independent news outlets as the “dollar media,” implying that they do the bidding of foreign donors.
In authoritarian countries, progovernment commentators mobilized to depict sham elections as free and fair. Azerbaijan’s regime enlisted content creators from around the world—compensated with free travel and accommodation in Baku—to acclaim the integrity of the February 2024 elections, which were heavily manipulated to favor incumbent president Ilham Aliyev. The efforts built on Azerbaijani officials’ long-running attempts to legitimize their rigged elections, including the funding of ersatz election-observation missions.

Attempts to delegitimize fact-checkers

Government actors in several countries launched direct attacks—in the form of disinformation campaigns, online harassment, or other forms of political interference—on the work of independent researchers and fact-checkers who are dedicated to unmasking influence operations and boosting trustworthy information. As a result, some initiatives were forced to shutter or reduce their operations, leaving voters in the dark about attempts to spread false information and undermining societal resilience in the face of electoral manipulation. Governments also established more friendly alternatives to independent fact-checkers, seeking to harness the trusted practice of fact-checking for their own political benefit.
In some of the most repressive environments, governments have long worked to delegitimize or co-opt fact-checking. On the day of Egypt’s December 2023 presidential election, the country’s media authority launched an investigation into the fact-checking platform Saheeh Masr. The site had reported that the state-owned conglomerate United Media Services ordered affiliated outlets to suppress election reporting, including stories that showed low turnout or voters facing pressure to choose a particular candidate.
Democracies were not immune to this trend during the coverage period. Weeks before balloting began in India’s general elections, the central government sought to stand up a fact-checking unit that would “correct” purportedly false reporting on official business. Indian journalists and civil society groups criticized the project as ripe for abuse, and the country’s highest court temporarily paused the creation of the unit. Similarly, the Washington Post reported that an Indian disinformation research hub was in fact linked to the national intelligence services, finding that it laundered talking points in support of the ruling Bharatiya Janata Party (BJP) alongside fact-based research.
South Korean president Yoon Suk-yeol and his People Power Party employed the rhetoric of “fake news” to justify a campaign against independent media ahead of April 2024 legislative elections. Authorities raided and blacklisted independent media outlets that had reported critically on the government, and People Power Party legislators launched a campaign to tar South Korea’s primary fact-checking platform, the nonprofit SNUFactCheck, as biased. The accusations reportedly caused a major sponsor to withdraw funding from SNUFactCheck, which had operated through a partnership between Seoul National University and dozens of prominent media outlets. The funding crisis led the center to suspend activities indefinitely as of August 2024, depriving residents of a crucial service that helped them distinguish fact from fiction online.
Similar pressure on independent experts in the United States has left people less informed about influence operations ahead of the November elections. A coalition of researchers known as the Election Integrity Partnership (EIP), which had conducted analysis of—and at times notified social media companies about—false electoral information during the 2020 campaign period, faced intense pressure and scrutiny. False allegations about the EIP’s work, including that it fueled government censorship, prompted a wave of litigation, subpoenas from top Republicans on the US House of Representatives’ Judiciary Committee, and online harassment aimed at EIP participants. This concerning campaign has raised the cost of working on information integrity and produced a chilling effect in the broader community of US experts on the topic. Individual experts and institutions have reported scaling down their activities and limiting public discussion of their work to avoid similar hostility or hefty legal fees.
Companies also reduced access to data about activities on their platforms, hampering the ability of fact-checkers and independent researchers to study the information space. In August 2024, Meta shut down CrowdTangle, a critical tool that allowed real-time analysis of content across Facebook and Instagram, and replaced it with a far more limited alternative. In September 2023, X banned nearly all scraping on its site, cutting off a primary source of data for researchers. The move built on an earlier change that locked access to X’s interface for researchers behind an exorbitantly expensive paywall. Researchers’ access to platform data allows them to uncover harassment and disinformation campaigns, unmask the actors behind them, and flag key trends on social media. Limiting access to this information makes it more difficult to design effective policies and technical interventions for strengthening internet freedom.

Developing Remedies That Protect Internet Freedom

In more than half of the 41 FOTN countries that held or prepared for nationwide elections during the coverage period, governments took steps aimed at making the information space more reliable. Common interventions included engaging with technology companies to boost authoritative information from election commissions or to address false and misleading information; supporting fact-checking initiatives led by local media and civil society; and setting rules for how political campaigns can use generative AI. The measures often varied within a given country, with regulatory bodies taking different—and at times conflicting—approaches based on their mandate, legal authority, structural independence, and political incentives.
To determine whether these efforts strengthened or undermined internet freedom, Freedom House assessed them based on four criteria: transparency in their decision-making and related processes, meaningful engagement with local civil society, independent implementation and democratic oversight, and adherence to international human rights standards. These features, when present, helped guard against government overreach and company malfeasance, fostered trust and legitimacy with the public, allowed for open debate about how to address false and misleading content, and facilitated the incorporation of diverse expertise that leads to more informed and effective actions. The most promising approaches were found in South Africa, the European Union (EU), and Taiwan, whose interventions largely met all four criteria.
Myriad factors limited assessment of whether the actions explored in this report were effective at fostering a high-quality, diverse, and trustworthy information environment. For one, the utility of a given remedy depended on its setting’s unique context, such as the country’s political dynamics or legal framework. The same fact-checking initiative that proves effective in an established democracy may flounder in an environment where the state exercises control over online media. Several policies were simply too new to assess, as countering false and misleading information is generally a long-term endeavor. The voluntary or nontransparent nature of many interventions also made enforcement difficult to track. Finally, research gaps, created in part by government pressure that has chilled the work of fact-checkers and by company decisions to roll back access to platform data, hamper understanding of how false, misleading, and incendiary content spreads and the extent to which interventions are addressing the problem.

When state regulators overstep

In the periods surrounding major elections, many governments attempted to address false, misleading, or incendiary content by enforcing content-removal rules among technology companies. The most problematic efforts lacked transparency and robust oversight, failed to involve civil society, and unduly restricted free expression and access to information. The risks of overreach were most profound in settings where some forms of protected online speech were already criminalized, the rule of law was weak, and regulatory bodies lacked independence.
Ahead of Indonesia’s February 2024 elections, authorities launched efforts to address purportedly illegal content online, but the initiative was marred by opacity that raised concerns about abuse. The elections oversight agency Bawaslu, the communications regulator Kominfo, and the national police established a joint election desk to identify and request the removal of “illegal” content by platforms, in part reportedly due to frustrations that tech companies had failed to adequately act on complaints during the 2019 elections. The likelihood of overreach was increased by the fact that decision-making was left in the hands of regulatory and law enforcement bodies, rather than an independent judiciary with a better record of protecting free expression. Kominfo has previously used the country’s broad definitions of “illegal” speech to censor LGBT+ content, criticism of Islam, and expressions of support for self-determination in the Papua region.
In India, partisan officials forced tech companies to toe a favorable line ahead of the 2024 elections, displacing the more independent Election Commission of India (ECI) from its role overseeing election-related online information. The ECI declined to strengthen its Voluntary Code of Ethics, a 2019 agreement with platforms that sets out some brief but inadequate commitments regarding online content for the campaign period, and then enforced it sparingly and inconsistently. The ECI’s soft touch created space for intervention by the far more politicized Ministry of Information and Broadcasting, which censored BJP critics, independent media, and opposition activists during the campaign. For example, pursuant to orders from the ministry in early 2024, X and Instagram restricted India-based users from viewing accounts that had mobilized as part of a farmers’ protest movement to advocate for a stronger social safety net.
As Brazil prepared for countrywide municipal elections in October 2024, the efforts of the Superior Electoral Court (TSE) to safeguard election integrity demonstrated the complexity of upholding internet freedom while countering disinformation campaigns. In February, the TSE issued new rules requiring social media platforms to immediately remove posts that could undermine election integrity if they are “notoriously false,” “seriously out of context,” or present “immediate threats of violence or incitement” against election officials. Platforms that fail to comply face escalating civil penalties. Such problematic content can reduce people’s access to reliable voting information, chill the work of election administrators, and contribute to offline violence. However, the guidelines’ vague categorization and tight removal deadlines risk incentivizing excessive content removal, potentially affecting speech that should be protected under international human rights standards. Greater transparency from the TSE on its legal justification for content restrictions and associated orders to companies would provide much-needed insight into these rules’ impact on free expression and allow civil society to hold the TSE accountable when it oversteps.
In addition, Brazil’s Supreme Court has pursued clearly disproportionate restrictions on free expression in a parallel effort to address false, misleading, and incendiary content that has contributed to offline violence in the country. Supreme Court justice Alexandre de Moraes, who led the TSE from August 2022 to June 2024, ordered the blocking of X in August 2024, after the coverage period, as part of a months-long dispute over the platform’s refusal to comply with court orders restricting far-right accounts that were accused of spreading false and misleading information. The blocking order, which severed millions of Brazilians from the platform and was upheld by a panel of Supreme Court justices in early September, also concerningly threatened fines for people using anticensorship tools like VPNs to access X. The dispute between Moraes and X escalated into displays of brinksmanship in which X owner Elon Musk launched invectives and insults at the justice and flouted rules requiring foreign companies to have a local presence, while Moraes extended his enforcement efforts to Starlink and its parent company SpaceX, of which Musk is the chief executive and largest shareholder.
A view of the election results announcement hosted by the Electoral Commission of South Africa, which worked with civil society to address problematic online content during the May election. (Photo credit: Michele Spatari/AFP via Getty Images)

A more rights-respecting way to address problematic content

Some countries have developed more promising efforts to deal with false, misleading, or incendiary content, emphasizing transparency, the involvement of local civil society, democratic oversight, and adherence to international human rights standards.
South Africa’s approach surrounding its May 2024 elections is one such positive example. The Real411 portal, led by the Electoral Commission of South Africa (IEC) and the civil society group Media Monitoring Africa (MMA), allowed the public to report cases of false information, harassment, hate speech, and incitement to violence, which were then assessed by media, legal, and technology experts to determine whether they met a set of narrow definitions for each category of content. If they did, the IEC could refer the content to the Electoral Court to determine whether it violated election laws, to platforms to determine whether it violated their terms of service, or to the media to raise awareness about or debunk false narratives. The IEC and MMA also created PADRE, an online repository designed to catalog and increase transparency regarding political parties’ spending on and placement of political advertisements. Independent experts’ involvement in the IEC’s initiatives helped ensure that decisions about online content were proportionate, specific, and protective of free expression.
South African civil society served as a bulwark against another regulator’s more disproportionate efforts to mitigate electoral misinformation. Proposed rules from the Film and Publication Board, which were withdrawn after civil society challenged their constitutionality, would have required companies to restrict access to vaguely defined “misinformation, disinformation, and fake news,” and imposed criminal penalties—including prison terms of up to two years—for people who spread allegedly prohibited content.
Ahead of European Parliament elections in June, the EU used its unique market size and regulatory toolkit to compel social media platforms and search engines to increase transparency and mitigate electoral risks. The Digital Services Act (DSA), which entered into full force in February 2024, requires large platforms and search engines to provide detailed transparency reports, risk assessments, and researcher access to platform data, among other stipulations. In April 2024, the European Commission produced election guidelines that laid out the measures these companies should adopt under the DSA, such as labeling political ads and AI-generated content and ensuring that internal election-related teams were adequately resourced. Invoking the DSA, the commission opened formal proceedings against Meta and X for a host of possible violations, including Meta’s suspected noncompliance on limiting deceptive electoral advertising and X’s deficiencies in mitigating election-related risks.
The EU’s nonobligatory Code of Practice on Disinformation served as a separate mechanism to strengthen information integrity. The code enlists signatories, including major platforms and advertising companies, to preemptively debunk and clearly label “digitally altered” content, set up transparency centers, and demonetize false and misleading information. These steps can help supply voters with the reliable information they need to make informed electoral decisions and fully participate in balloting. However, the voluntary nature of the code makes its effectiveness unclear and hard to track.
With robust oversight and safeguards for free expression, information sharing between democratic governments and tech companies can improve users’ ability to access authoritative and reliable information. Government agencies may be privy to information about foreign actors, for example, that could provide context to companies as they seek to combat cyberattacks or coordinated inauthentic behavior. Federal agencies in the United States rolled back cooperation with platforms in a critical period leading up to the November 2024 elections, as they navigated legal challenges from state officials in Louisiana and Missouri. The two states, joined by private plaintiffs, had sued the federal government in 2022, claiming that its interactions with tech companies during the 2020 election period and the COVID-19 pandemic amounted to “censorship.” The Supreme Court dismissed the case in June 2024, ruling that the plaintiffs did not prove harm and noting that a lower court’s judgment in their favor had relied on “clearly erroneous” facts. The high court did not issue more detailed guidance on how agencies should communicate with platforms in alignment with constitutional free speech protections. As a result of the proceedings, the Federal Bureau of Investigation disclosed plans to increase transparency and set clearer guardrails around its engagement with platforms.

Support for fact-checking and digital literacy

The coverage period featured several positive initiatives aimed at facilitating voters’ access to authoritative information, such as through fact-checking programs, centralized hubs of resources, or digital literacy training.
Taiwan’s civil society has established a transparent, decentralized, and collaborative approach to fact-checking and disinformation research that stands as a global model. Ahead of and during the country’s January 2024 elections, these fact-checking programs helped build trust in online information across the political spectrum and among diverse constituencies. The Cofacts platform allowed people to submit claims they encountered on social media or messaging platforms for fact-checking by Cofacts contributors, who include both professional fact-checkers and nonprofessional community members. During the election period, Cofacts found that false narratives about Taiwan’s foreign relations, particularly with the United States, were dominant on the messaging platform Line. Other local civil society organizations, such as IORG and Fake News Cleaner, also cultivated resistance to disinformation campaigns by conducting direct outreach and programming in their communities.
Ahead of India’s elections, more than 50 fact-checking groups and news publishers launched the Shakti Collective, the largest coalition of its kind in the country’s history. The consortium worked to identify false information and deepfakes, translate fact-checks into India’s many languages, and build broader capacity for fact-checking and detection of AI-generated content. The diversity of members in the Shakti Collective allowed it to reach varied communities of voters and identify emerging trends, such as an increase in false claims in regional languages that electronic voting machines were rigged.
Indonesian fact-checkers worked to debunk false posts about the February 2024 election. (Photo credit: Bay Ismoyo/AFP via Getty Images)
Governments in some countries supported the implementation of these sorts of programs. The independently run European Digital Media Observatory (EDMO), established in 2018 by the EU, conducted research and collaborated with fact-checking and media literacy organizations during the European Parliament election period. EDMO uncovered a Russia-linked influence network that was running fake websites in several EU languages, and also found that generative AI was used in only about 4 percent of the false and misleading narratives it detected in June. Mexico’s National Electoral Institute (INE) launched Certeza INE 2024, a multidisciplinary project to counter electoral disinformation, ahead of the country’s June elections. As part of the program, voters could ask questions about how to vote and report articles, imagery, and audio clips to “Ines,” a virtual assistant on WhatsApp. Content flagged by voters would then be fact-checked through a partnership that included Meedan, Agence France-Presse, Animal Político, and Telemundo.
Fact-checkers are often among the first to identify trends in false narratives, the actors responsible, and the technology they use. Their insights can inform effective policy, programmatic, and technological interventions that will advance internet freedom. However, while academic research has found fact-checking to be effective in certain contexts, it may not always lead to broader behavioral shifts by users. There also remains a fundamental structural imbalance between fact-checkers and the purveyors of disinformation campaigns: it takes far more time and effort to prove that a claim is false than it does to create and spread it. These initiatives may face particular difficulties in highly polarized environments, as voters who already lack trust in independent media groups will be unlikely to believe their fact-checking work.

Regulations on generative AI in political campaigns

Spurred by concerns that generative AI would blur the line between fact and fiction during consequential voting, regulators in at least 11 of the 41 FOTN countries that held or prepared for nationwide elections during the coverage period issued new rules or official guidance to limit how the technology could be used in electoral contexts. Prohibiting problematic uses of generative AI, such as impersonation, can compel political campaigns and candidates to adopt more responsible behavior. Rules that require labeling provide voters with the transparency they need to distinguish between genuine and fabricated content.
Ahead of South Korea’s elections, legislators banned the use of deepfakes in campaign materials starting 90 days before the balloting, with offenders subject to penalties of up to seven years in prison or fines of 50 million won ($39,000). The law also required the labeling of AI-generated materials that were published before the 90-day period, and empowered election regulators to order takedowns of offending content. Taiwanese policymakers took a more proportionate approach, passing a June 2023 law that allows candidates to report misleading deepfakes of themselves to social media companies for removal, if technical experts at law enforcement agencies confirm that the content was generated by AI.
In the United States, while no federal rules were adopted, at least 19 state legislatures passed laws to address generative AI in electoral contexts as of July 2024, according to the Brennan Center for Justice. A Michigan law enacted in November 2023 requires labeling of political advertisements generated by AI and introduces criminal penalties for using the technology, without appropriate labels, to “deceive” voters in the 90 days ahead of an election. A Florida law passed in March 2024 amends the state’s campaign finance framework to require the labeling of AI-generated content in political ads.
Electoral campaigns in a number of countries deployed generative AI during the coverage period, underscoring the need for clear rules as this technology becomes enmeshed in the ordinary practice of modern politics. Successful Indonesian presidential candidate Prabowo Subianto used an AI-generated avatar to rebrand himself as a cuddly and cat-obsessed figure, appealing to younger voters and effectively papering over credible allegations that he had committed human rights abuses as a military commander before the country’s transition to democracy. During Argentina’s November 2023 presidential runoff, candidates Javier Milei and Sergio Massa integrated AI-generated memes into their campaigning, most notably when the Massa camp posted an AI-manipulated video that depicted Milei speaking about a private market for the sale of organs, effectively mocking a previous statement he had made.

Internet freedom as a pillar of modern democracy

It is no coincidence that the most effective and frequently recommended means for reversing the global decline in internet freedom are also potent safeguards for restoring confidence in the electoral information space. For example, internet regulations that mandate transparency around content moderation systems and provide platform data to vetted researchers can help equip voters with a more informed understanding of influence operations during balloting. Long-term support for civil society groups can empower them with the necessary resources to collaborate with election commissions to boost authoritative voting information and protect free expression. The best solutions also go beyond technology, calling for reinvestment in civic education, modernization of election rules, and accountability for powerful figures who engage in antidemocratic behavior.
Ultimately, a healthy 21st-century democracy cannot function without a trustworthy online environment, in which freedom of expression and access to diverse information prevail. Defending these foundational rights allows people to safely and freely use the internet to engage in discussion, form civic movements, scrutinize government and company performance, and debate and build consensus around key social challenges. The protection of democracy writ large therefore requires a renewed and sustained commitment to upholding internet freedom around the world.

Test Your Knowledge

Test your knowledge of some of the key findings from our 2024 report by taking our quiz.

Report Resources

Download the complete Freedom on the Net 2024 PDF booklet.

Policy Recommendations

Learn how governments and companies can protect internet freedom.

Report Data


Individual Country Reports

Visit our Countries in Detail page to view all of this year's scores and read reports from each of the 72 countries we cover.

Research Methodology

The Freedom on the Net index measures each country’s level of internet freedom based on a set of 21 questions.

Acknowledgements

Freedom on the Net is a collaborative effort between Freedom House staff and a network of more than 85 researchers covering 72 countries.

Join the Conversation

Around the world, voters have been forced to make major decisions about their future while navigating a censored, distorted, and unreliable information space. Spread the word about these troubling trends and help ward off the further decline of internet freedom.