In the modern era of instantaneous digital communication, the unchecked proliferation of misinformation presents significant challenges, with its impact varying widely across jurisdictions due to diverse socio-political, cultural, and linguistic landscapes. While global platforms have initiated measures to address this pervasive issue, their reliance on universal frameworks often proves inadequate, particularly within complex and pluralistic societies such as India. This accentuates the critical need for a departure from a uniform approach in favour of tailored strategies that address localized realities.

Against this backdrop, I make two submissions:

First, I argue that global platforms consistently implement standardized policies and community guidelines predominantly influenced by Western jurisprudential norms, particularly those rooted in American conceptions of free speech. These frameworks, while effective in their original jurisdictions, fail to account for the intricate socio-cultural dynamics and regional variances inherent to India. Consequently, such policies lack the requisite specificity to address the unique manifestations of misinformation in this jurisdiction, leaving significant gaps in their enforcement mechanisms.

Second, I argue that platforms must engage in robust consultations with local stakeholders, including civil society, subject-matter experts, and affected communities, to co-develop policies that are contextually appropriate and culturally sensitive. This participatory process would ensure that platform policies align with the ground realities of their user base, thereby enhancing their efficacy and legitimacy. The integration of localized frameworks with global standards is not merely a desirable paradigm but an imperative for effectively countering the multifaceted challenges posed by misinformation in India.

Global Standards, Local Realities: The Disconnect

Mark Zuckerberg has openly acknowledged that the speech policies of major platforms are deeply rooted in American ideas of free speech. While these principles work well in their original context, they often fall short in addressing the diverse realities of a country like India. Despite some attempts to move beyond this one-size-fits-all approach, the way Community Standards are enforced still follows a rigid “command-and-control” style. This means that the rules are created and implemented by platform officials who are far removed from the everyday experiences of Indian users, leaving little room for local voices to influence these decisions.

India’s rich tapestry of religions, languages, and cultures further complicates this disconnect. Many platforms fail to reflect this diversity in their policies, often drafting terms of service that are overly complex and difficult for the average person to understand, especially for those who are new to the internet. Recognizing this gap, civil society groups and other stakeholders have advocated for the translation of platform policies into local languages. Doing so would make these rules more accessible and meaningful, enabling a broader range of users to navigate and engage with the digital world more effectively.

Additionally, content moderators typically lack a nuanced understanding of the specific contexts surrounding online speech in various regions. Research highlights the distinction between “global” and “local” aspects of hate speech, revealing that platforms often overlook “hyper-local harmful speech.” Such localized expressions of harmful speech may be entirely ignored by global corporations, which underscores the urgent need for initiatives tailored to the realities of specific communities.

Towards a Participatory Governance Model

To combat misinformation, platforms like Facebook have collaborated with experts and fact-checking organizations to detect fake news. However, despite these efforts, they lack consistency in engaging with communities when creating speech guidelines and interpreting them in specific contexts. To address this, some propose a “contextualized participatory governance model” for platform policies, which would involve diverse communities in the rule-making process. This collaborative approach would empower users to contribute to defining community standards, much like an elected Parliament. A Facebook white paper has suggested introducing procedural accountability regulations to give users tools to challenge speech-related decisions. Meanwhile, other experts advocate for platforms to outsource certain policymaking responsibilities to independent third parties. Platforms have already begun utilizing independent entities for specific functions, but these efforts remain limited in scope.

To this end, platforms are beginning to delegate certain moderation tasks, including appeals processes, to external independent entities. For instance, Facebook’s Oversight Board is designed to review and assess moderation decisions, aiming to ensure they are justifiable and transparent. However, while the Oversight Board plays a crucial role in decision-making, its focus is primarily on individual cases rather than on establishing comprehensive standards for future content moderation, which limits its effectiveness as a broader governance mechanism.

Given the limited mandate of such solutions, there is a strong need for platforms to collaborate with users and communities and to create proactive policies that respond to the localized and contextual nature of misinformation. In this context, it is suggested that platforms adopt “contextualized participatory governance models” by involving citizens, specialists, and civil society in creating community standards. To address misinformation challenges effectively, this approach can be implemented in two main ways:

First, specialized civic associations can be established to address misinformation challenges specific to India, particularly those that are high-risk or sensitive. For example, false online information about State and local elections in India poses a significant danger to voter interests and electoral integrity.

Specialized civic associations, election officials, fact-checkers, human rights experts, and platform representatives can collaborate to address specific challenges in various types of elections. These groups can respond to such misinformation by applying a structured approach informed by frameworks like that of the ACMA (Australian Communications and Media Authority), which categorizes misinformation risks into two types: short-term or imminent risks, and long-term or systemic risks. Short-term risks include public health outbreaks, financial fraud, and threats to election integrity, all of which require immediate, targeted interventions.

During elections, a dedicated civic group could team up with platforms like Facebook and Twitter to tackle misinformation, such as false claims about electronic voting machines (EVMs) being rigged. This type of misinformation poses an immediate threat and needs a swift response to prevent harm. By tapping into their local expertise, the group could monitor trending posts, flagging misleading or false information for prompt action by the platforms. At the same time, they could work with these platforms to create clear, straightforward messages in regional languages, explaining voting procedures in a way that everyone can understand. For example, imagine scrolling through your social media feed and seeing a banner that reassures you about the security of the voting process—crafted with input from trusted local experts. This small but meaningful step could ensure that accurate information reaches voters when it’s needed most.

While short-term interventions like these focus on immediate challenges, the association could also address long-term risks such as communal hate speech and systemic cultural misrepresentation. By guiding platforms on recognizing harmful regional trends—like coded language unique to specific areas—and collaborating on culturally sensitive content moderation policies, the association could help foster a safer and more inclusive online space. Through ongoing research and strategic recommendations, they would work to counteract the deep-rooted impacts of misinformation while ensuring platforms remain responsive to immediate threats during critical times like elections or public health crises.

Moreover, entrusting platform policy decision-making to outside organizations can foster independent thinking and decisions that are not shaped by the platforms’ own agendas. A methodical strategy would ensure that those who offer unbiased viewpoints on platform decisions, such as members of civil society, experts, and academics, are included in the standard-setting process from the start. This collaborative approach not only strengthens the credibility of the platforms’ responses but also ensures that the unique challenges of misinformation in specific contexts, such as Indian elections, are effectively addressed with tailored solutions.

A second approach to combating misinformation could involve forming community-focused digital coalitions. These coalitions would bring together diverse voices, including everyday platform users, local residents, policymakers, journalists, researchers, academics, and others, who are uniquely positioned to address misinformation challenges specific to their region. By organizing workshops and awareness campaigns, these groups could help people better identify trustworthy information and spot fake news. The strength of these coalitions lies in their deep understanding of local contexts, cultural nuances, and language-specific cues that global platforms often overlook. For example, during the COVID-19 pandemic, a small village in Tamil Nadu faced a wave of misinformation about traditional remedies like drinking neem water or turmeric-infused milk as “cures” for the virus. Although these messages seemed harmless, they discouraged people from seeking proper medical care, ultimately putting lives at risk. A coalition rooted in the village’s language and customs would have been far better placed than a distant moderation team to recognize and counter such claims early.

These coalitions could also help platforms implement community standards by offering input on local contexts, crafting clear guidelines for identifying misinformation, and establishing penalties for repeat offenders. Additionally, they could set up local fact-checking teams to verify claims circulating within their communities. By collaborating with local governments, these elected coalitions could conduct research, investigate the spread of misinformation, and develop region-specific policies. Platforms, state actors, and civil society could jointly define the formation, autonomy, and funding of these coalitions, creating a sustainable framework for tackling misinformation while ensuring trust, accountability, and inclusivity in the digital ecosystem.

Conclusion

It is essential to ensure that these organizations operate with impartiality and inclusivity, and preserving their independence should be a priority for researchers who study them. Implementing a transparent selection process managed by cross-platform entities, such as an oversight body, can facilitate this goal. The success of such arrangements hinges on mutual support between governments and platforms, as illustrated by the challenges encountered by the Delhi Assembly Committee during its inquiry into Facebook’s role in the communal riots of February 2020. The Committee struggled to conduct its investigation primarily because of Facebook’s lack of cooperation and its own limited authority to compel responses from the platform. A ‘community-centric digital coalition’ could have played a pivotal role by assessing Facebook’s handling of fake news, collecting feedback from residents, and providing recommendations for addressing misinformation and hate speech.

These recommendations require substantial further development before they can be operationalized, and they should be evaluated as they evolve. While there is no reason to delay laying the groundwork for these solutions, requirements for platforms to engage with such communities should be considered cautiously, and should not be enshrined in legislation until their utility and efficacy are better understood.

Tanmay Durani, Rajiv Gandhi National University of Law, Punjab