John F. Kennedy famously said, “The ignorance of one voter in a democracy impairs the security of all.” In this age, voters are expected not just to know who the candidates are but also to have a critical understanding of a once-niche concept like artificial intelligence (AI) and its potential implications for elections. People’s trust in public institutions grows only when elections are free, fair and transparent, since this guarantees that the will of the people is fairly represented. Likewise, the integrity of the electoral process is an essential component of a resilient and functioning democracy. It is therefore vital that those in charge of conducting elections acknowledge and tackle the manipulation enabled by artificial intelligence.
Though AI is considered a revolutionary technological advance that offers hope for solving many international problems, it is also disrupting long-standing systems and causing disputes across a number of sectors. The creation of deepfakes is one of the most apparent counterproductive uses of artificial intelligence; a deepfake could, for instance, deliberately show a politician making an inflammatory speech and provoke a public outcry. Artificial intelligence techniques such as Generative Adversarial Networks (GANs) are used to generate or edit audio and video material; hence, the likelihood of AI being used to breach the credibility of voting systems through tampering is on the rise. According to a Pew Research Center survey (Shanay Gracia, conducted August 26 – September 2, 2024), around 57% of Americans said they were worried about AI being used to fraudulently manipulate political campaigns. The survey found that people were generally concerned that AI tools such as deepfakes could be used to sway voters’ preferences.
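To make the underlying technique concrete, the adversarial training idea behind GANs can be summarized in a minimal sketch. The code below is purely illustrative: it uses PyTorch and toy one-dimensional data rather than the large audio or video models discussed above, and every model and parameter name is an assumption chosen for the example.

```python
# Minimal illustrative sketch of a Generative Adversarial Network (GAN):
# a generator learns to mimic a target distribution while a discriminator
# learns to tell real samples from generated ones. Toy 1-D data stands in
# for the far larger audio/video models discussed in the text.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to fake "samples"
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake)
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def real_batch(n=64):
    # "Real" data: samples from a Gaussian centred at 3.0
    return torch.randn(n, 1) * 0.5 + 3.0

for step in range(2000):
    # Train the discriminator on real vs. generated samples
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator to fool the discriminator
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```

The same principle, a generator improving until a discriminator can no longer tell real from fake, is what makes convincing synthetic audio and video possible at scale.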
In a geopolitical context, AI is currently being employed as a tool of warfare to trigger political instability. A classic example is the People’s Republic of China’s long-running disinformation campaign to undermine political leaders who support Taiwan’s autonomy. In Taiwan’s 2024 presidential election, an attempt was made to influence public opinion and destabilize the entire electoral process by distorting Lai Ching-te’s original remarks, as reported by the Thomson Foundation of Taiwan. Similarly, just two days prior to the September 2023 parliamentary elections in Slovakia, a deepfake recording allegedly depicting Michal Šimečka, the leader of the pro-NATO Progressive Slovakia party, discussing how to manipulate the election was circulated extensively. The recording gained traction on social media before its authenticity was called into question. The incident serves as a stark reminder of the difficulty of maintaining electoral integrity in an era when artificial intelligence-generated material is deployed to influence voting patterns (The Brookings Review, 2024).
It is safe to say that the development of AI-enabled technology has assisted politicians across the world in running campaigns on digital and social media platforms like Meta, Google, Twitter and TikTok, facilitating a boost in voter turnout. A few months before this year’s parliamentary elections, Prime Minister Modi himself used an AI-based language translation tool called ‘Bhashini’ at the Kashi Tamil Sangamam to communicate with a sizable Tamil audience. Nevertheless, given the wide range of harms that have accompanied its widespread use, AI’s role in elections remains extremely contentious.
Owing to these challenges, a number of tech giants at the Munich Security Conference addressed the need to mitigate risky AI-generated content intended to mislead voters, which resulted in the “Tech Accord to Combat Deceptive Use of AI in Elections.” The ultimate purpose of this accord is to boost cooperation to stop the dissemination of misinformation and the making of deepfakes, and to protect people in the digital era. In India, a ‘Deepfakes Analysis Unit’ was established early this year to connect people with reliable information and to provide a fact-checking helpline on WhatsApp, in an effort to deter AI-generated media content that could deceive people. Around 19 US states have enacted laws to crack down on such content; Washington state’s legislation, for instance, allows candidates portrayed in AI-generated content to file a lawsuit for damages if the media’s synthetic origin is not disclosed. Similar laws mandating transparency when AI-generated content is used in political advertisements and campaign communications have been passed in New Mexico, Florida, Utah, Indiana and Wisconsin (Axios Research, 2024).
A regulatory framework can be established by implementing preventive measures against the misuse of AI in political campaigns, and strategies should be adopted to mitigate the use of deepfakes in elections and to prevent violent polarization among various factions during electoral processes. One of the most effective approaches is to use machine-learning models to detect AI-generated political content, moderate and censor sensitive information, and watermark and label such content. In anticipation of the 2024 U.S. presidential election, Meta declared that it would start tagging AI-generated photos posted on its Facebook and Instagram platforms. Additionally, criminal penalties should be imposed for using AI to deliberately mislead voters regarding political manifestos, election promises, voting procedures and the like. Fact-checking initiatives, media literacy campaigns and educational programs could all help ensure that voters are wary of the risks. Increased public awareness may result in calls for stricter legislation governing the use of AI in elections, such as laws on data privacy and consumer protection. Governments may propose new legislative reforms to prohibit certain forms of AI-driven political content, such as automated bots that disseminate specific political messages, and to mandate greater transparency in the use of AI in elections, particularly concerning personalized political advertisements and the algorithms that underlie targeted ads. Furthermore, a real-time task force could be established to assist individuals in identifying and verifying elements such as image-tampering techniques, reverse image searches, clone detection, and fake media content including deepfake images. The future of democracy will depend on the willingness of individuals to address the challenges posed by AI-driven election manipulation, while ensuring that modern technology upholds democratic values rather than compromising them.
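As one illustration of the kind of lightweight tooling such a task force might draw on, the sketch below shows perceptual (“average”) hashing for near-duplicate and clone detection. It is a minimal example assuming only NumPy and synthetic image arrays; the function names, sizes and data are illustrative and not taken from any deployed detection system.

```python
# Minimal illustrative sketch of perceptual ("average") hashing, one simple
# building block behind reverse image search and clone detection: visually
# similar images yield hashes that differ in only a few bits.
import numpy as np

def average_hash(image: np.ndarray, hash_size: int = 8) -> np.ndarray:
    """Downscale an image to hash_size x hash_size by block-averaging,
    then threshold each block against the overall mean (1 bit per block)."""
    h, w = image.shape
    bh, bw = h // hash_size, w // hash_size
    blocks = image[:bh * hash_size, :bw * hash_size] \
        .reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (blocks > blocks.mean()).astype(np.uint8).ravel()

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    # Number of differing hash bits; small distance suggests a near-clone
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(256, 256)).astype(float)
near_clone = np.clip(original + rng.normal(0, 5, original.shape), 0, 255)  # lightly altered copy
unrelated = rng.integers(0, 256, size=(256, 256)).astype(float)

h_orig = average_hash(original)
print("distance to near-clone:", hamming_distance(h_orig, average_hash(near_clone)))  # small
print("distance to unrelated:", hamming_distance(h_orig, average_hash(unrelated)))    # large
```

Real verification pipelines combine many such signals, including provenance metadata, watermarks and learned classifiers, but comparing compact fingerprints of this kind is a basic building block of reverse image search and clone detection.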
Amid all the din about AI’s growing influence on electioneering, one must not lose sight of the larger challenge its usage poses to the democratic ethos. The plausibility of our democratic practices being overshadowed by the dark clouds of “corporate imperialism” cannot be sidelined as a mere footnote to the coming age of artificial intelligence. We must remember that the analytical data of the citizenry is the most crucial commodity up for grabs during elections in democracies, and it has the potential to facilitate a potent stranglehold over the socio-political dynamics of nations.
Emerging concepts such as “algorithmic colonization” should not be dismissed as mere jargon or figments of intellectual gymnastics; rather, they must be pondered seriously. There is an urgent need for political philosophers to think deeply about the consequences of AI usage in elections and its long-term impact on democratic practices. Whether artificial intelligence strengthens or weakens our democratic ethos is for Indian political philosophers to decide, but they must engage in an intensive dialogue (referred to as “Vaad”) with a sense of urgency, true to Indian intellectual traditions.