
Deepfakes, Defamation And Digital Consent: The Legal Grey Zones

Imagine receiving a video that appears to show you or someone you know saying something you never said, doing something you never did. It spreads quickly online, catches fire across social platforms, and suddenly, your reputation is in tatters, your privacy breached, consent ignored. Welcome to the world of deepfakes and the legal grey zones that still characterise their regulation in India.



The rise of synthetic media technologies, popularly called deepfakes, has revolutionised how we consume, create and manipulate digital content. While they hold tremendous creative and educational potential, they also usher in a new wave of legal and ethical challenges, from defamation and impersonation to non-consensual exploitation of likeness. In India, the legal framework struggles to keep pace with the speed and reach of these technologies.


Under the Indian Penal Code, 1860 (IPC), Sections 499 and 500 dealt with defamation. Accordingly, if someone makes or publishes an imputation about another person intending to harm that person's reputation, they may be held liable. The Information Technology Act, 2000 and its amendments provide criminal liability for certain types of digital impersonation (Sections 66C, 66D), for transmission of images without consent (Section 66E), and for publishing or transmitting obscene or sexually explicit content (Sections 67, 67A, 67B). The newer Bharatiya Nyaya Sanhita, 2023 attempts to recast defamation (Section 356) and the offence of disseminating images of private acts (Section 77) in a digital context.


Yet the core problem remains. There is no statute in India that explicitly addresses deepfakes, i.e., AI-fabricated videos or audio tracks in which a person's likeness is manipulated to make it appear as if they said or did something. The existing provisions were crafted for earlier forms of speech or image misuse. As noted by the Indian Journal of Law, “these provisions are not tailored to AI-generated content and lack clarity regarding consent, manipulation, and accountability in deepfakes”.


One illustrative incident involved a deepfake video of a prominent actor in which he appeared to make communally inflammatory statements. The Bombay High Court intervened, noting “the morphing is so sophisticated and deceptive that it is virtually impossible to discern that the same are not genuine images/videos of the plaintiff.” Although this is not purely a defamation case, it demonstrates how deepfakes can gravely harm reputation, identity and public order. In such scenarios, the defamation framework might apply if one can prove publication, falsity, imputational harm and the requisite intention.


Consent as a concept has not been embedded in our defamation or impersonation jurisprudence when it comes to synthetic media. The misuse often involves replicating someone's face, voice or likeness without permission, generating content that the person never sanctioned. The legal framework invokes the right to privacy under Article 21, as recognised in K. S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1, along with allied rights of dignity, but there remains a gap between that constitutional theory and tailored legislation addressing deepfakes.


For instance, when a deepfake places a person in an explicit scene or attributes a fabricated statement to them, we see multiple wrongs at once: false representation (defamation/impersonation), lack of consent (privacy violation), and potential public harm (misinformation). The current remedies often force the victim to shoehorn their case into existing laws not designed for such harm.


Why does this remain a grey zone? Firstly, many perpetrators operate anonymously and across borders, making enforcement difficult. Secondly, deepfakes blur the line between what was said and what appears to have been said, complicating the defamation requirement of an imputation. Thirdly, some deepfake creators may claim parody or artistic expression, raising free speech questions. Further, defamation law requires intention to harm and publication, and it remains unclear whether dissemination of manipulated media outside those parameters is captured. Finally, although intermediary rules under the IT Act require takedowns of artificially morphed images (Rule 3(2)(b) of the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021), the takedown mechanism is reactive rather than preventive.


Deepfakes expose the fault lines of our current legal architecture. Defamation and impersonation statutes offer meaningful pathways for relief, but they were not crafted with AI-made synthetic media in mind. Digital consent, the notion that one must expressly authorise the use of one's face, voice or likeness in a created media piece, is still an evolving concept in Indian law. Without explicit legislation or updated jurisprudence, victims are often left navigating a fractured remedy ecosystem.


What must happen? Legislatures need to consider specific offences addressing the non-consensual creation and dissemination of deepfakes, with tailored definitions, faster takedown mechanisms and cross-border cooperation. Platforms must adopt proactive labelling and verification regimes. Law schools, legal practitioners and digital rights activists should collaborate to draft model provisions that balance innovation and safety.


For now, victims of deepfakes should act swiftly by issuing takedown notices under the intermediary rules, filing defamation suits, or lodging cyber complaints under Sections 66C, 66D and 66E of the IT Act. But society must recognise that the next frontier in reputation, identity and consent will be digital and synthetic, and our laws must catch up before the damage becomes irreparable.


References:

  1. Abhay Jain, Deepfakes and Misinformation: Legal Remedies and Legislative Gaps, Indian Journal of Law, DOI:10.36676/ijl.v3.i2.86. https://law.shodhsagar.com/index.php/j/article/view/86 

  2. Deepfake videos, images storming internet. What laws can come to your rescue?, India Today (2023) https://www.indiatoday.in/law/story/deepfake-videos-images-storming-internet-what-laws-can-come-to-your-rescue-2459655-2023-11-07 

  3. Deepfakes and the Law in India | Legal Protection for Privacy & Reputation in India, Licit360. https://licit360.in/deepfakes-and-the-law-protecting-reputation-and-privacy-in-india 

  4. Legal Implications of Deepfake Technology in India: A Detailed Analysis, Vidya Planet. https://vidyaplanet.org/deepfake-technology-in-india 

  5. Deepfake Regulation: Balancing Innovation and Abuse, SCC Online blog (2025). https://www.scconline.com/blog/post/2025/11/08/deepfake-regulation-rights  


This article is authored by Sreshta Ann John, who was among the Top 40 performers in the Quiz Competition on International Human Rights organized by Lets Learn Law.
