The Bombay High Court’s recent order directing the removal of AI-generated morphed images and videos of actor Shilpa Shetty Kundra may look like a case limited to a celebrity. But the issue it raises goes far beyond one individual. The ruling has important implications for how Indians understand privacy, consent, and ownership of their digital identity in an age where artificial intelligence is widely available.
AI tools that can alter faces, clone voices, and generate realistic videos are no longer restricted to experts or large organisations. Many such tools are cheap, easy to use, and accessible to almost anyone with an internet connection. This has made misuse easier and more frequent, often with serious consequences for individuals who have little protection or recourse.
The court described the content in question as “prima facie extremely disturbing,” recognising the harm that such manipulated material can cause. At the centre of the case is a simple but important idea: a person’s face, voice, and likeness cannot be used without consent. This applies whether the person is a public figure or a private citizen.
In today’s world, identity is no longer limited to physical presence or official documents. Our photos on social media, our videos, our voice notes, and even our mannerisms form part of who we are. When these are copied, altered, or misused using AI, it becomes a violation of personal privacy. The fact that such content may appear online does not mean it is free for others to manipulate or profit from.
Many people may assume that deepfakes only target celebrities. That assumption is misleading. The same technology used in high-profile cases is increasingly being used against ordinary people—students, professionals, and women in particular. Deepfakes have been used for harassment, blackmail, and non-consensual explicit content. In many cases, victims struggle to get content taken down or to receive legal support.
This is why the Shilpa Shetty order matters. By clearly stating that no person can be portrayed in a way that violates their right to privacy, the court has set a precedent that others facing similar abuse can rely on. It sends a message that the law recognises digital harm as real harm.
The case also raises questions about ownership. The suit alleged that AI was used to copy personal mannerisms for commercial purposes without permission. This highlights an emerging concern: our personality and likeness have value, and using them without consent is not acceptable. Just as we protect our financial assets, we must also protect our digital selves.
There is also a responsibility on internet users. Harmful content spreads because people click, share, and engage with it. While legal action is important, public behaviour also plays a role in limiting misuse. Choosing not to engage with manipulated or invasive content reduces its reach and impact.
The court’s direction to take down the offending URLs shows that online platforms are not beyond accountability. It reinforces the idea that digital spaces must follow basic standards of dignity and consent.
The Shilpa Shetty deepfake ruling is not just about one actor or one incident. It is about drawing a line on how technology can and cannot be used. As AI continues to develop, such legal clarity becomes essential. Protecting a person’s image, voice, and identity online is not a special privilege—it is a basic right that applies to every Indian using the internet.