Navigating the Minefield: Misinformation and disinformation in Indian elections

Author: Shruti Kapil, Associate, Security & Mutual Dependence

 

Summary: The 2024 general elections in India have been labeled the ‘AI elections.’ There is growing evidence of both opportunities for political parties and threats to the information ecosystem, requiring a careful balance between government regulation, innovation, and the fostering of individual responsibility through education.

The 2024 general elections in India are being labeled the ‘AI elections,’ with artificial intelligence (AI) playing a significant role in campaign strategies. With nearly 986 million voters, 751 million internet users, and a digital literacy rate of 61 percent in urban areas but only 25 percent in rural regions, the impact of AI presents an unprecedented challenge. The World Economic Forum has identified misinformation and disinformation as India’s top threat for 2024. Additionally, a survey conducted by the digital rights organization Social & Media Matters found that nearly 80 percent of India’s first-time voters are bombarded with fake news on prominent social media platforms. With 462 million active social media users in India, concerns regarding the dissemination of misleading information are profound. Such content holds the power to influence voting behavior, compromise electoral integrity, and even incite civil unrest.

Numerous instances have highlighted the impact of AI on elections, presenting both opportunities and threats. From AI-generated calls and translated political speeches to manipulated videos targeting political figures, the spectrum of AI applications in elections is vast. The central question remains: how can we harness AI for constructive purposes while mitigating its potential negative repercussions on democratic processes?

Generative AI has demonstrated significant potential in voter outreach, particularly through telephone communication. For instance, Polymath Solutions, an AI firm based in Ajmer, is conducting a pilot project wherein local politicians interact with voters through AI-generated calls, addressing their concerns in real-time. Similarly, the Bharatiya Janata Party (BJP) utilized an AI tool named Bhashini to dub and translate Prime Minister Narendra Modi’s speech for Tamil-speaking audiences, highlighting AI’s positive impact in overcoming language barriers. Bhashini functions as an AI-powered language translation system, enabling conversations among speakers of diverse Indian languages. This tool has received mixed reactions, with concerns raised about the potential manipulation of content. 

While AI undeniably offers significant advantages in political campaigns, such as cost reduction, labor savings, and broader reach, its potential for facilitating misinformation, disinformation, and deepfakes cannot be ignored. Instances of fake news and deepfakes targeting politicians and celebrities, such as fabricated videos of actors Aamir Khan and Ranveer Singh criticizing PM Modi, underscore the profound impact of AI-driven threats on elections. Similarly, a video purportedly featuring Home Minister Amit Shah announcing changes to reservations stirred controversy, only to be later exposed as edited. There have also been instances where deceased politicians were digitally resurrected using AI for political campaigns, misleading the voters who received these messages. Despite their deceptive nature, these videos went viral and garnered millions of views. Misinformation not only misleads people and undermines trust in the information they encounter but also serves as a convenient excuse for individuals to dismiss authentic content as fabricated or AI-generated.

In response to these challenges, the Election Commission of India (ECI) warned political parties against using AI to create deepfake content, mandating its removal within three hours of notification. However, delays in removal underscore the need for specific laws to address AI and deepfake technology and to deter misinformation. The Ministry of Electronics and Information Technology (MeitY) has issued its first formal guidance on AI models and tools. On March 15, 2024, MeitY retracted a contentious advisory that had required AI firms to obtain government approval before making their products available online in India. The new advisory eliminates this requirement, instead emphasizing transparency, content moderation, consent mechanisms, and the identification of deepfakes. The goal is to ensure responsible AI deployment, protect electoral integrity, and enhance user awareness and empowerment.

Many in the tech industry criticized the advisory for its ambiguity and its potential to hinder AI innovation, fearing that stringent regulations may prompt AI startups to relocate to countries with more favorable regulatory environments. While the advisories represent a positive step forward in a previously uncharted area, their ambiguity has sparked unease within the tech community. India currently lacks a dedicated legislative framework for overseeing the development and deployment of AI technologies, a necessity given the rapid and unpredictable evolution of AI. To address these concerns and provide much-needed clarity, the government is expected to unveil a draft AI regulation framework in July.

AI-driven threats such as misinformation, disinformation, and fake news transcend borders, impacting all countries and necessitating a transnational solution. The AI Safety Summit 2023 in the UK marked a significant step in uniting nations to understand and explore potential solutions. India and 27 other nations, including the UK, US, and EU, signed a joint declaration committing to collaborative efforts in assessing AI-related risks. Increased international collaboration is essential, not only for driving innovation and progress in AI but also for comprehending its effects on humanity and developing AI solutions to address them. Just as innovation in advancing AI models is encouraged, there should also be incentives for developing AI to mitigate threats posed by AI, thus contributing to a safer global environment. 

As 80 countries gear up for elections in 2024 amid the looming threats of misinformation and disinformation, empowering the public becomes imperative. Central to countering misleading content is fostering a psychological “herd immunity” through educational initiatives, nurturing critical thinking skills, and encouraging responsible sharing of information online. While governments and tech giants hold pivotal roles, individual users must also shoulder the responsibility for their actions in the digital realm. 
