Human Responsibility in the Age of Algorithms: AI Liability under India’s Evolving Regulatory Framework

Introduction
The rapid integration of artificial intelligence (AI) across industries has brought significant efficiency gains, but it has also raised complex legal questions around responsibility for AI-driven outcomes, particularly where systems generate errors, biased outputs, or misleading advice. In India, the question of AI liability is not governed by a standalone statute. Instead, it is addressed through a combination of established legal principles, sector-specific regulatory frameworks, and evolving policy guidance.
The existing legal landscape requires courts and regulators to adapt traditional doctrines such as negligence, product liability, and contractual responsibility to scenarios involving autonomous or semi-autonomous systems. At the same time, regulatory oversight by bodies such as the Reserve Bank of India, Securities and Exchange Board of India, Insurance Regulatory and Development Authority of India, and Telecom Regulatory Authority of India, along with policy initiatives from the Ministry of Electronics and Information Technology, reflects a growing emphasis on responsible AI deployment.
Against this backdrop, the issue is not whether AI can be held liable as an independent actor, but rather how liability should be attributed among the various stakeholders involved in its design, development, deployment, and use. This article examines the contours of AI accountability and liability within India’s evolving regulatory framework, while situating these developments within broader global trends.
AI Accountability and Liability: Distinct but Interconnected Concepts
AI accountability refers to the broader obligation of entities involved in the lifecycle of an AI system, such as developers, deployers, and operators, to ensure responsible design, deployment, and use. It encompasses governance, oversight, transparency, and risk management.
AI liability, by contrast, is a narrower legal concept that concerns the attribution of responsibility for harm under applicable civil, regulatory, or criminal frameworks. Where an AI system causes injury, financial loss, or other damage, liability determines which party may be held legally responsible and subject to remedies such as damages, penalties, or sanctions.
A foundational principle underlying both concepts is that AI systems do not possess independent legal personality. Consequently, responsibility must ultimately be traced back to human actors or legal entities. Depending on the circumstances, liability may arise under civil law, regulatory enforcement mechanisms, or, in certain cases, criminal law.
Legal Challenges Posed by AI Systems
AI systems introduce complexities that challenge traditional approaches to legal responsibility:
- Behavioural unpredictability: Advanced machine learning models may exhibit non-linear and difficult-to-predict behaviour.
- Opacity (“black box” problem): Limited explainability in certain AI models complicates the assessment of fault and causation.
- Multi-stakeholder ecosystem: Responsibility may be distributed across developers, data providers, system integrators, and end-users.
- Absence of legal personhood: AI systems cannot themselves be held liable under current legal frameworks.
These factors necessitate a fact-specific analysis in determining liability and often require adapting existing legal doctrines to new technological realities.
Illustrative Risk Scenarios
Practical scenarios demonstrate how AI-related liability issues may arise:
- Autonomous Vehicles: In the event of an accident involving an autonomous driving system, liability may be assessed across multiple parties, including the vehicle manufacturer, software developer, system integrator, and, where applicable, the human operator.
- AI in Medical Diagnosis: If an AI-based diagnostic tool produces an inaccurate output leading to delayed or incorrect treatment, questions may arise regarding medical negligence, duty of care, and the extent of reliance placed on automated systems by healthcare professionals.
- Robo-Advisory Services: Financial institutions deploying AI-driven advisory tools may face regulatory action or civil claims if such tools generate misleading or unsuitable investment advice, particularly in highly regulated sectors governed by the Securities and Exchange Board of India.
- AI Chatbots Providing Legal or Commercial Advice: Businesses relying on AI-generated outputs for contractual or legal decision-making may face claims based on misrepresentation, negligence, or breach of contract where such outputs result in economic loss.
These examples underscore that liability is highly context-dependent and must be assessed based on the roles, responsibilities, and degree of control exercised by each stakeholder.
Application of Existing Indian Laws
In the absence of a dedicated AI statute, liability is determined under existing legal frameworks:
1. Negligence and Duty of Care
Under Indian tort law, a claimant must establish:
- Existence of a duty of care
- Breach of that duty
- Causation (including proximate cause)
- Resulting damage
In AI-related contexts, the duty of care may extend to developers, deployers, or service providers, depending on their level of control and the foreseeability of harm.
2. Product Liability
The Consumer Protection Act, 2019 provides a structured framework for product liability claims against manufacturers, sellers, and service providers for defective products.
AI-enabled products and software may fall within its scope where they are supplied to consumers and cause harm. However, liability is subject to statutory defences, including misuse, compliance with standards, or absence of defect at the time of sale.
3. Contractual Liability
AI systems are typically deployed through contractual arrangements. Liability may arise where:
- The system fails to meet agreed performance standards
- Warranties or representations are breached
- Service-level obligations are not fulfilled
Careful contractual drafting, including limitation-of-liability clauses and risk allocation, is therefore critical.
4. Data Protection and Digital Regulation
Where AI systems process personal data, compliance with the Digital Personal Data Protection Act, 2023, becomes essential.
Additionally, obligations under the Information Technology Act, 2000, including intermediary due-diligence requirements, may apply depending on the nature of the AI system.
5. Sectoral Regulatory Compliance
Use of AI does not dilute existing regulatory obligations. For instance:
- Financial institutions must comply with prudential and conduct regulations issued by the Reserve Bank of India
- Securities market participants remain subject to oversight by the Securities and Exchange Board of India
Regulators have increasingly emphasised that entities remain fully responsible for outcomes generated through AI systems, including those developed by third-party vendors.
6. Criminal Liability
AI-related conduct may attract criminal liability where statutory thresholds such as negligence or intent are met. While largely untested in India, liability may arise against individuals exercising control or supervision over AI systems, particularly in cases involving gross negligence or failure to implement adequate safeguards.
Global Regulatory Developments
European Union: The EU AI Act (2024) introduces a risk-based regulatory framework, with phased application between 2025 and 2027. High-risk AI systems are subject to stringent requirements relating to transparency, risk management, and human oversight.
United States: The United States currently relies on existing legal frameworks such as product liability, consumer protection, and sector-specific regulation rather than a unified federal AI liability statute.
United Kingdom: The UK has adopted a principles-based, sector-led approach, with regulators issuing guidance tailored to their respective domains rather than implementing a comprehensive AI law.
India’s Governance-Oriented Approach
India has adopted a decentralised and governance-driven approach to AI regulation. While no comprehensive AI legislation has been enacted, policy initiatives under the IndiaAI Mission and advisories issued by the Ministry of Electronics and Information Technology emphasise responsible AI development.
Key elements of this approach include:
- Sectoral regulation: Domain-specific regulators are expected to address AI risks within their respective sectors.
- Risk-based governance: Higher-risk applications warrant stricter oversight and safeguards.
- Value chain accountability: Responsibility is distributed across developers, deployers, integrators, and users.
- Transparency and explainability: Systems should be designed to enable meaningful understanding of outputs where feasible.
- Human oversight: Critical decisions should not be fully automated without appropriate supervision.
- Self-regulation and standards: Industry-led frameworks and best practices are encouraged.
While Indian courts have yet to deliver a definitive ruling on AI-specific liability, judicial engagement with technology-related disputes is steadily evolving.
Practical Risk Management for Businesses
Organisations deploying AI systems should adopt robust governance frameworks, including:
- Conducting AI-specific risk and impact assessments
- Ensuring high-quality and representative training data
- Maintaining human oversight in high-impact decision-making
- Implementing transparency and explainability measures where feasible
- Establishing grievance redressal mechanisms
- Continuously monitoring and auditing AI system performance
Such measures not only support regulatory compliance but also help mitigate potential exposure to liability.
Conclusion
AI presents transformative opportunities for businesses but also introduces complex legal risks. In the absence of a dedicated statutory framework, Indian law addresses these risks through the application of existing legal principles, sectoral regulation, and emerging governance standards.
The central legal position remains clear: AI systems themselves are not legal actors. Responsibility must be traced to the human and organisational actors behind them. As regulatory frameworks continue to evolve, organisations must proactively embed accountability, transparency, and risk management into their AI strategies to ensure both compliance and sustainable innovation.