The Persona Paradox: Personality Rights, Deepfakes & Identity in Indian Law

Introduction
In the digital age, a person’s name, image, voice, persona, signature, and distinctive mannerisms, once ephemeral and loosely controlled, are now fully commodified. The boundary between who a public figure is and how they are perceived, manipulated, or exploited has narrowed.
Across the world, AI and deepfake technologies make it easy to generate imitation voices, fake images, video clones, and synthetic personas. The danger is not only misattribution, defamation, or reputational harm; it is the undermining of autonomy over one’s identity.
In India, where there is no dedicated “personality rights statute,” courts are evolving principles to protect this space. This article maps that evolution, surveys recent judicial responses to deepfake and AI misuse, examines doctrinal tensions, and proposes the path ahead.
Foundations: Legal Basis of Personality Rights in India
1. Lack of Standalone Statute; Doctrinal Patchwork
Unlike some U.S. states or jurisdictions with explicit “right of publicity” laws, India lacks a single, standalone statute codifying personality rights. Instead, protection is constructed through:
- Constitutional rights (notably privacy and dignity under Article 21),
- Tort and common law analogues (passing off, defamation, misrepresentation, unjust enrichment),
- Intellectual property rights (moral rights under the Copyright Act, trademark protection), and
- Contractual remedies (licensing agreements, endorsements, consent clauses).
The Supreme Court’s landmark judgment in K.S. Puttaswamy v. Union of India (2017) affirmed privacy as a fundamental right. Many courts have since interpreted “informational privacy” and dignity to encompass control over one’s persona.
2. Moral Rights and Performer Protections
Under the Indian Copyright Act, 1957, authors hold moral rights to claim authorship and to restrain distortion, mutilation, or modification prejudicial to their reputation (Section 57), and performers enjoy analogous moral rights in their performances (Section 38B). Though narrower in scope than a general personality right, this protection provides a foothold: when a performance (an audio or video work) is manipulated without permission, the performer can invoke moral rights.
However, personality rights extend beyond “performances” to non-copyrightable features (voice cloning, likeness, gestures). Thus, courts have had to fill the gaps.
3. Passing Off, Unjust Enrichment & Misrepresentation
Courts have long treated unauthorized commercial use of a person’s persona (image, name) as a form of passing off, where consumers might believe endorsement or affiliation exists. This doctrine affords relief against misleading uses.
Additionally, the principle of unjust enrichment supports a remedy when a third party profits from the use of someone else’s identity without consent or compensation.
Indian Case Law: From Rajat Sharma to Deepfake Battles
1. Earlier Landmarks: Rajat Sharma, Anil Kapoor & Others
- In Rajat Sharma v. Ashok Venkatraman (Delhi HC), the court held that a disparaging advertisement implying that Sharma was no longer relevant infringed his publicity rights. The court emphasized that public personalities have a protectable interest in their name and reputation.
- Anil Kapoor v. Simply Life India & Ors. (2023) is often cited as a turning point: Kapoor successfully restrained defendants from using his name, image, voice, and signature on merchandise and promotional content. The decision recognized that mannerisms, catchphrases, and distinctive attributes also fall within the ambit of personality rights.
- In the Global Health / Dr. Naresh Trehan case (Delhi HC, CS(COMM) 6/2025), the court granted an injunction for misuse of Dr. Trehan’s name and image in a healthcare context, protecting his persona beyond the entertainment domain.
These precedents show that courts are increasingly willing to protect personality rights not just for celebrities but also for high-profile professionals in sensitive fields.
2. Deepfakes & AI: The New Battleground
Recent cases illustrate courts confronting the deepfake dilemma head-on:
- Delhi HC (2025): In a major ruling involving AI-generated defamatory content targeting activist Kamya Buch, the court issued wide-ranging interim orders. It directed large platforms (X, Meta, Google) to block or delist morphs and defamatory deepfakes, and held that courts cannot “turn a blind eye” to misuses of identity in the AI era.
- Delhi HC – Bachchan Cases (2025): Abhishek Bachchan and Aishwarya Rai Bachchan separately moved the court seeking relief against unauthorized AI images, impersonation, and misuse of their name and image. The court issued takedown orders and directed blocking of infringing sites.
- Bombay HC – Asha Bhosle: In one of the most recent developments, Bombay HC granted interim relief to legendary singer Asha Bhosle in a case against AI platforms accused of voice cloning and misuse of her persona. The court recognized that unauthorized use of her voice and style constitutes a violation of her personality rights.
- Deepfake impersonation of journalist Anjana Om Kashyap: Delhi HC directed removal of a YouTube channel impersonating the anchor through deepfake content.
Legal Challenges & Tensions
Freedom of Speech vs. Personality Control: One recurring tension is balancing the right to artistic, satirical, or journalistic expression with protection of identity. Overbroad injunctions may stifle parody, commentary, or political critique. Courts must calibrate relief so that it blocks direct impersonation or misleading commercial exploitation while preserving legitimate speech.
Defining the “Interest” Protected: Courts must delineate which elements of persona merit protection: name, voice, image, signature, gestures, catchphrases, style, and expressive persona. The boundaries are blurred: in Anil Kapoor, for instance, the catchphrase “jhakaas” and the actor’s distinctive walk were recognized as protectable. But can a voice mimic in a fictional context be restrained? Must the person show harm or confusion? These remain open issues.
The John Doe / Anonymous Defendant Problem: Many deepfake perpetrators are anonymous. Courts are issuing John Doe injunctions to block broad classes of defendants or URLs upstream. However, identifying and enforcing against mirror sites and evasions is a practical challenge.
Platform & Intermediary Liability
Platforms hosting infringing content (YouTube, Instagram, AI tool providers) often claim safe harbour under Section 79 of the IT Act. But courts are increasingly requiring a proactive response:
- Platforms may be directed to delist content, share takedown plans, or provide source data.
- Some judgments hold platforms accountable for failure to act swiftly upon notice, especially when harm is irreparable.
Remedial Complexity: Monetary Compensation, Injunctions & Other Relief
Courts must decide what relief suffices: interim takedowns or full blocking, damages or disgorgement (an account of profits), statutory versus equitable remedies, and the strength of the injunction (simple versus dynamic). In AI cases, additional reliefs (e.g., preventing the training of AI models on a celebrity’s persona) are being demanded.
Strategies for Protection & Enforcement
Pre-emptive Registration & Branding: Register trademarks or service marks over names, catchphrases, distinctive signatures. Some celebrities do this to anchor ownership. Maintain copyright in works (photos, voice recordings) and enforce moral rights.
Contractual Controls & Licensing: Any use of persona (images, voice, giveaways, merchandise) should be governed by clear license or assignment agreements with strict terms (scope, duration, territory, media, royalties). Insert audit and termination rights for misuse beyond license. In AI/ML contracts, include prohibitions on downstream model training without express permission.
Technical Safeguards & Monitoring: Use watermarking, metadata tagging, and digital rights management (DRM) to trace misuse; deploy monitoring tools, hash matching, and reverse search to detect deepfake clones or unauthorized re-uploads; and leverage notice-and-takedown automation across platforms.
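To make the hash-matching idea concrete, here is a minimal illustrative sketch in Python using the open-source Pillow and imagehash libraries; the file names and the distance threshold are assumptions for illustration, not a production monitoring pipeline.

```python
# Minimal sketch: flagging near-duplicate re-uploads with perceptual hashing.
# Assumes `pip install pillow imagehash`; file names and threshold are illustrative.
from PIL import Image
import imagehash

# Perceptual hash of an authorised reference image (e.g., an official portrait).
reference_hash = imagehash.phash(Image.open("official_portrait.jpg"))

def looks_like_reupload(candidate_path: str, max_distance: int = 8) -> bool:
    """Return True if the candidate's perceptual hash is close to the reference.

    Perceptual hashes survive re-encoding, resizing, and mild edits, so a small
    Hamming distance suggests the candidate is derived from the original image.
    """
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    return (reference_hash - candidate_hash) <= max_distance  # Hamming distance

if __name__ == "__main__":
    for path in ["suspect_upload_1.jpg", "suspect_upload_2.jpg"]:
        verdict = "possible match" if looks_like_reupload(path) else "no match"
        print(path, "->", verdict)
```

Note that wholly synthetic deepfakes will not necessarily match a perceptual hash of any original; such monitoring complements, rather than replaces, the provenance tagging discussed below.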
Judicial Strategy: Use interim and dynamic injunctions to block downstream replication and seek disclosure orders from platforms for mirror site information or user logs. Pursue John Doe orders targeting generalized classes. In claims, ask for disgorgement / account of profits rather than fixed damages.
Public Policy & Legislative Engagement: Advocate for a dedicated Personality Rights Act or amendments to existing IP, IT or media laws to explicitly include persona control and harm thresholds. Encourage legislation that mandates platform accountability, content provenance, or mandatory “digital identity tags” in AI-generated media.
Forward-Looking Proposals & Doctrine Reform
Express Statutory Protection for Persona
A modern statute should codify the right of publicity / personality, specifying:
- What attributes are covered (name, image, voice, style, mannerisms).
- The threshold for misuse (confusion, misrepresentation, commercial exploitation).
- Safe harbors for fair use, news, parody, academic works.
- Remedies: injunctive relief, damages, account of profits.
- Obligations on platforms and AI/ML model builders (transparency, provenance, watermarking).
AI & Model Training Regulation: Given the pushback in cases like Bachchan and Bhosle, courts are beginning to consider orders that bar platforms from allowing AI models to be trained on a person’s voice or image without consent. This doctrinal extension is nascent but may become mainstream.
Digital Identity Tags & Provenance Standards: Mandating “identity tags / digital passports” in AI-generated content could help trace origin and authenticity. Scholars have proposed embedding watermark metadata or digital signatures in deepfake media.
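As a purely illustrative sketch of that provenance idea, and not a description of any standard currently mandated in India, the snippet below builds a detached “identity tag” for a media file: a SHA-256 content hash signed with an Ed25519 key via the open-source cryptography package. Key management, tag formats, and file names here are assumptions.

```python
# Illustrative sketch: a detached "identity tag" (content hash + digital signature)
# that can travel alongside a media file. Assumes `pip install cryptography`;
# key handling and file names are hypothetical.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In practice the creator's key would live in secure storage; generated ad hoc here.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def make_identity_tag(media_path: str) -> tuple[bytes, bytes]:
    """Return (content_hash, signature) for the file at media_path."""
    with open(media_path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).digest()
    return content_hash, private_key.sign(content_hash)

def verify_identity_tag(media_path: str, signature: bytes) -> bool:
    """Check the file is unmodified and the tag was issued by the key holder."""
    with open(media_path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).digest()
    try:
        public_key.verify(signature, content_hash)
        return True
    except InvalidSignature:
        return False
```

A re-generated or tampered file fails verification, which would give platforms and courts a quick, mechanical way to distinguish authorised originals from unauthorised derivatives, provided the tag and the creator’s public key are distributed reliably.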
Freedom of Expression Safeguards: Statutory or judicial carve-outs should preserve parody, critique, political satire, academic use. The law must avoid becoming a tyranny of persona control.
International Harmonisation: Given the cross-border nature of digital media, treaties or harmonized standards (e.g., WIPO, EU, or US models) could help. For instance, India could examine the U.S. approach, where the federal Lanham Act addresses false endorsement and states such as California have codified the right of publicity, and adapt global best practice.
Illustrative Scenario: AI Deepfake & Persona Clash
Hypothetical: A global AI voice platform (VoiceSynthX) releases a voice model of a famous Bollywood singer, enabling users to generate songs in her voice. The model was derived from publicly available recordings, without permission. The singer claims:
- Infringement of her personality rights (voice, style).
- Violation of moral rights (derivation from her original performances).
- Unjust enrichment (platform monetises model).
The singer seeks:
- Immediate injunction against VoiceSynthX to block access to the model in India.
- Removal of derivatives and takedown of infringing content globally.
- Disclosure of users, revenue, logs.
- Prohibition of training of other models using her voice data.
- Account of profits and damages.
In court, issues arise:
- Does the use of publicly available recordings permit derivative training?
- Is the voice model a new creative work (transformative) or unauthorized derivative?
- Is an interim injunction justified given irreparable harm?
- What jurisdictional reach do Indian courts have over offshore AI providers?
- Should intermediary safe harbour shield the platform?
Such cases will test both the doctrine and the infrastructure (registries, enforcement, platform cooperation) of personality law in India.
Conclusion
Personality rights in India, once peripheral, now lie at the frontier of identity law in the digital era. As AI becomes capable of simulating people’s voices, faces, and mannerisms, the need to protect autonomy over persona becomes urgent. Indian courts are rising to the challenge, issuing bold interim injunctions, expanding reliefs, and recognizing that persona is not just a commercial asset but a facet of dignity and identity.
Yet serious gaps remain: a lack of statutory clarity, enforcement complexity, tension with free speech, and cross-border AI challenges. The path forward demands not only litigation but also legislative evolution, platform accountability, technical safeguards, and public policy engagement.
For legal practitioners, the task is not merely reactive. It is to craft future-proof protection for clients’ personas through trademarks, contracts, watermarking, and procedural innovations, so that in the age of digital doubles and synthetic recreations, clients remain the authors, not the victims, of their own image.