
Unraveling the Deepfake dilemma: Balancing technology, ethics, and legislation in the digital age

The recent upheaval involving an AI-generated deepfake of actress Ms. Rashmika Mandanna has resonated profoundly with the public, highlighting shared vulnerabilities, pressing security challenges, and growing concerns about privacy infringement in the digital age. The incident underscored the stark reality that even public figures are not immune to the threat of deepfakes, emphasizing the alarming ease with which individuals can be targeted and manipulated through this advanced technology.

Understanding Deepfakes
Deepfakes, an application of artificial intelligence, pose a significant threat to society, from the spread of misinformation to malicious social engineering crimes. These sophisticated creations use machine learning to seamlessly graft one person's face and voice onto another's, producing videos in which individuals appear to say or do things they never did. This capability raises concerns about the misuse of deepfakes for political propaganda, blackmail, and the production of deceitful content across domains including pornography, politics, art, acting, advertising, entertainment, online memes, and social media.
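To make the underlying mechanism concrete, the sketch below illustrates, in simplified form, the shared-encoder, per-identity-decoder autoencoder design popularized by early open-source face-swap tools; the class names, layer sizes, and training details are illustrative assumptions, and real systems add face detection and alignment, adversarial losses, and blending of the generated face back into the frame.

```python
# Illustrative sketch (not a production system): the classic face-swap
# autoencoder uses ONE shared encoder and one decoder per identity.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses an aligned 64x64 face crop into an identity-agnostic code."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(128, 256, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one specific person's face from the shared code."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per person

# Training objective (sketch): each decoder learns to reconstruct its own person.
faces_a = torch.rand(4, 3, 64, 64)            # stand-in for aligned crops of person A
faces_b = torch.rand(4, 3, 64, 64)            # stand-in for aligned crops of person B
loss = (nn.functional.l1_loss(decoder_a(encoder(faces_a)), faces_a)
        + nn.functional.l1_loss(decoder_b(encoder(faces_b)), faces_b))

# The "swap": encode person A's expression and pose, decode with B's decoder,
# yielding person B's face performing person A's expressions.
swapped = decoder_b(encoder(faces_a))
```

Because the encoder is shared across both identities, it tends to capture expression and pose rather than identity, which is exactly what lets one person's performance be rendered with another person's face.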
In the realm of social engineering crimes, deepfakes become a powerful tool for manipulation and deception. Consider a fabricated video in which a high-ranking executive or government official solicits sensitive information or financial transactions; with its uncanny realism, such a video could trick unsuspecting individuals into compliance. Exploiting personal connections, attackers could create deepfake videos of friends or family members seeking financial assistance. Impersonation attacks, featuring fake videos or audio of celebrities or public figures, further amplify the risks, potentially granting unauthorized access to confidential information. Alarmingly, 71% of people globally are unaware of what a deepfake is, and just under a third of consumers say they are aware of the threat deepfakes pose.

Far-reaching implications of Deepfakes
The ramifications of deepfake technology extend beyond manipulated information and eroded trust in media, reaching into interpersonal consequences. Empirical studies, such as the one conducted by Vaccari and Chadwick, show that exposure to deepfakes, even when they do not fully deceive, heightens uncertainty about media and reduces overall trust in news, a phenomenon aligned with philosopher Don Fallis's concept of the epistemic threat of deepfakes. The interpersonal impact is also evident in the capacity of video deepfakes to implant false memories and shape attitudes toward the person depicted: one notable study found that exposure to a deepfake of a political figure significantly worsened viewers' opinions of that politician, and the negative effects were further amplified by microtargeting on social media, especially within specific demographic or political groups.
In a remarkable application of Sonantic's voice-cloning technology, renowned actor Val Kilmer, whose distinctive voice was silenced by throat cancer in 2015, has been able to 'speak' once again on screen. This raises a crucial question: does the authenticity of the recreated voice lead viewers to overlook the actor's real loss, regardless of whether they know the voice is synthetic?

Mitigating Deepfake Threats: A Holistic Approach
While these implications paint a concerning picture of how deepfake technology can be misused, it is essential to acknowledge the human capacity for adaptation and resilience. Parallels with earlier forms of deception, such as email spam, suggest that awareness and education can empower individuals to recognize and discount deepfakes. Even so, serious concerns remain, from the distortion of memory and trust to the ethics of non-consensual deepfakes, and addressing them demands the coordinated response from platforms, fact-checkers, and governments discussed below.

The creation of deepfakes depicting individuals engaged in fabricated and non-consensual acts raises profound ethical concerns, potentially leading to devastating consequences such as extortion, humiliation, and harassment. As the understanding of deepfake dynamics advances, addressing these ethical challenges becomes imperative to safeguard individuals from malicious exploitation.

The interplay between social media and the government is crucial in addressing the various challenges posed by deepfakes. Social media platforms can combat deceptive use by enabling users to employ face-generating AI systems to conceal their identity in shared photos. Although platforms like Facebook and Instagram provide some user control through tagging, the fight against deepfakes requires a more extensive approach. This collaborative effort should involve commercializing fact-checking services, creating regulatory frameworks, and promoting structured fact-checking data initiatives like ClaimReview. Regional collaboration is essential to strengthen defenses against the widespread global threats associated with deepfakes.
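For context, ClaimReview, referenced above, is a schema.org vocabulary that fact-checkers embed in their articles so that platforms and search engines can read verdicts programmatically. The snippet below constructs a minimal, hypothetical record of this kind; the outlet, URLs, claim, and rating are invented purely for illustration.

```python
# Minimal, hypothetical example of schema.org ClaimReview markup, the kind of
# structured fact-checking data that initiatives like ClaimReview standardize.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://factchecker.example.org/reviews/viral-deepfake",  # hypothetical fact-check page
    "datePublished": "2023-11-10",
    "author": {"@type": "Organization", "name": "Example Fact-Checker"},
    "claimReviewed": "A viral video shows a public figure making a controversial statement.",
    "itemReviewed": {
        "@type": "Claim",
        "appearance": {"@type": "CreativeWork", "url": "https://social.example.com/viral-video"},
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False: the video is an AI-generated deepfake",
    },
}

# Fact-checkers typically publish this as JSON-LD inside a
# <script type="application/ld+json"> tag on the fact-check page.
print(json.dumps(claim_review, indent=2))
```

Published alongside a fact-check, such structured data lets platforms and search engines attach the verdict to copies of a deepfake wherever it resurfaces, rather than relying on users to seek out the debunk themselves.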

India currently has no law that explicitly targets deepfakes. Existing provisions, such as Sections 67 and 67A of the Information Technology Act, 2000 and Section 500 of the Indian Penal Code, 1860, fall short of addressing the many forms that deepfake manipulation can take. Recognizing privacy as a fundamental right, the Personal Data Protection Bill, 2019 aims to safeguard personal data by imposing restrictions and penalties; once passed, it is anticipated to implicitly prohibit the use and circulation of deepfake videos.

Conclusion
As our society confronts the dynamic challenges posed by AI-driven threats, it becomes crucial to adopt proactive measures and engage in collaborative initiatives to minimize the potential repercussions of deepfake technology. Vigilance stands as the foundational defense against social engineering attacks fueled by deepfakes. Individuals and organizations must cultivate a mindset of careful scrutiny when faced with unexpected requests, verify the identities of those making such appeals, and promptly report any suspicious activities to the appropriate authorities.

Prof. Jyotirmoy Banerjee, Lecturer (IPL), Indian Institute of Management Rohtak
Prof. Prachilekha Sahoo, Assistant Professor of Law, Centurion University of Technology and Management
