Artificial Intelligence in the Courtroom: The Growing Risk of AI-Generated Fake Judgments

Posted On - 5 May, 2026 • By - Nivedita Bhardwaj

Introduction

The integration of artificial intelligence (AI) into legal practice is transforming the way lawyers conduct research, draft pleadings, and analyse case law. AI-powered tools are increasingly being used to enhance efficiency, reduce research time, and improve productivity across litigation and advisory functions.

However, alongside these benefits, a significant risk has emerged: the use of AI-generated content containing fabricated or non-existent legal authorities. Indian courts have recently flagged the growing “menace” of citing fictitious judgments generated by AI tools, raising serious concerns about professional responsibility, accuracy, and the integrity of judicial proceedings.

This issue reflects a broader global challenge: how to responsibly integrate AI into legal systems without compromising foundational principles of justice.

Judicial Trigger: The Emergence of AI-Generated Fake Citations

The issue came into sharp focus in proceedings before the Supreme Court of India, arising out of observations made by the Bombay High Court in a matter under the Maharashtra Rent Control framework.

In the case, a party cited a judicial precedent that could not be traced in any recognised legal database. Upon scrutiny, the High Court noted clear indicators of AI-assisted drafting, including:

  • Absence of verifiable citations
  • Inconsistent formatting patterns
  • Non-existent case references

The Court observed that neither the judges nor their law clerks were able to locate the cited judgment, resulting in a wastage of judicial time and raising doubts about the authenticity of the submissions.

Consequently, the High Court imposed costs of ₹50,000 and strongly deprecated the practice, cautioning that such conduct could invite disciplinary action, including referral to the Bar Council.

On appeal, while the Supreme Court of India expunged certain adverse remarks as a matter of judicial discretion, it unequivocally acknowledged the increasing misuse of AI tools to generate fake legal citations, describing it as a growing concern not only in India but globally.

Key Judicial Observations

The Supreme Court’s observations signal an important shift in judicial thinking regarding technology in litigation:

  • AI tools may be used as assistive research aids, not as authoritative sources
  • The burden of verification lies entirely on the advocate or user
  • Reliance on unverified AI-generated content can undermine the administration of justice

This marks a clear judicial position: technology cannot dilute professional accountability.

1. Duty of Candour and Professional Responsibility

At the core of this issue lies the advocate’s duty of candour to the court, recognised under the rules framed by the Bar Council of India. Citing fabricated or unverifiable judgments, whether intentional or negligent, may amount to:

  • Professional misconduct
  • Misleading the court
  • Breach of ethical obligations

Importantly, the use of AI does not mitigate liability. On the contrary, given the known limitations of generative AI (including hallucinations), the duty to verify becomes even more stringent.

2. Impact on Judicial Efficiency

The misuse of AI-generated content imposes tangible costs on the justice system:

  • Wastage of judicial time in verifying non-existent authorities
  • Delay in adjudication
  • Increased burden on already strained court resources

Instead of enhancing efficiency, unverified AI use risks disrupting courtroom processes.

3. Reliability Concerns: The Problem of AI “Hallucinations”

Generative AI systems are known to produce “hallucinations”: outputs that appear authoritative but lack factual accuracy. In legal contexts, this presents a critical risk:

  • Fabricated case laws
  • Incorrect statutory interpretations
  • Misleading legal reasoning

Given that judicial decisions rely heavily on precedent, any inaccuracy can have serious legal consequences.

Comparative Perspective: A Global Challenge

The issue is not unique to India. Courts in the United States have already imposed sanctions on lawyers for citing AI-generated fictitious cases. In one widely reported instance, attorneys faced penalties after submitting briefs containing non-existent judicial precedents generated by AI tools.

These developments highlight a shared global concern: the need to regulate and standardise the use of AI in legal practice.

Regulatory and Institutional Response: The Road Ahead

The Indian judiciary’s acknowledgment of this issue suggests the possibility of formal regulatory intervention. Likely developments include:

1. Judicial Guidelines: Courts may issue practice directions governing the permissible use of AI in pleadings and submissions.

2. Enhanced Professional Standards: Bar Councils may introduce specific ethical guidelines on technology usage and verification obligations.

3. Sanctions for Non-Compliance: Stricter penalties including costs, adverse remarks, and disciplinary action may be imposed for negligent or misleading submissions.

4. Institutional Verification Protocols: Law firms and legal departments may adopt internal AI-use policies, including mandatory citation verification processes.

To mitigate risk, legal practitioners should adopt the following safeguards:

  • Verify all case laws through recognised legal databases (e.g., SCC, Manupatra)
  • Avoid relying on AI outputs as primary sources of authority
  • Use AI strictly for research assistance and drafting support
  • Implement multi-level review mechanisms before filing pleadings

Conclusion

The emergence of AI in the legal domain represents both an opportunity and a challenge. While it has the potential to significantly enhance efficiency, its misuse, particularly through the citation of fabricated judgments, poses a serious threat to the integrity of the justice system.

The recent observations of the Supreme Court of India serve as a timely reminder that technology cannot replace professional diligence. The credibility of the legal system ultimately depends on the authenticity and reliability of the material presented before courts.

As the legal profession adapts to technological advancements, the guiding principle must remain clear: AI may assist, but accountability rests solely with the human user.