Criminal Liability in Artificial Intelligence: Can machines be held accountable under criminal law?

Abstract


Artificial Intelligence (AI) systems increasingly perform tasks with real-world legal implications. From autonomous vehicles to algorithmic decision-making tools, their actions challenge traditional concepts of criminal liability. This article assesses whether AI can or should be held criminally accountable, with focus on India, the United States, and the United Kingdom.


I. Introduction


AI is embedded in domains such as medicine, transportation, and criminal justice. But as machines grow more autonomous, can they be prosecuted for criminal offenses? Traditional criminal law presupposes mens rea (a guilty mind) and actus reus (a guilty act), both of which are problematic for AI systems that lack consciousness or intent.


II. The Doctrinal Dilemma: Mens Rea and Actus Reus


While AI systems can perform harmful acts (actus reus), they do not possess the intent or awareness necessary for mens rea. Gabriel Hallevy offers three theoretical models of AI criminal liability:

1. Perpetration-via-Another: Treats AI as an innocent agent used by a human actor.

2. Natural-Probable-Consequence: Holds programmers or users liable where the harm was a natural and probable (i.e., foreseeable) consequence of the AI's operation.

3. Direct Liability: Assigns legal personhood to AI entities.

These models disrupt the human-centered premise of criminal jurisprudence.


III. Comparative Legal Perspectives


A. India

India lacks explicit legal provisions addressing AI criminality. The Indian Penal Code, 1860 applies only to human conduct:

  • Section 299: Culpable homicide.

  • Section 304A: Causing death by negligence.

In Kunal Saha v. AMRI Hospital, the Supreme Court held the treating doctors liable for medical negligence, but imposed no liability on the diagnostic systems they used. NITI Aayog's 2018 National Strategy for Artificial Intelligence proposed AI regulation, but no statutory framework exists yet.


B. United States

The Model Penal Code requires both a voluntary act and culpable mental state. However, strict liability offenses exist in certain regulatory areas. In United States v. Athlone Industries, the court endorsed corporate criminal liability through “collective knowledge.”

In a 2018 incident in Tempe, Arizona, an autonomous Uber test vehicle struck and killed a pedestrian; the backup driver was charged, not the AI or the company. This exposes a significant legal vacuum. The Computer Fraud and Abuse Act (CFAA) and the proposed Algorithmic Accountability Act still treat AI as a tool, not a legal subject.


C. United Kingdom

UK law hinges on the “identification doctrine,” requiring fault to be traced to a directing mind. In Tesco v. Nattrass, the House of Lords held that corporations are liable only when culpable acts are attributable to top-level individuals.

 

The 2022 UK Law Commission report acknowledged AI’s legal challenges but declined to endorse direct criminal liability for machines pending further reform.

 

IV. Notable Case Summaries


1. Kunal Saha v. AMRI Hospital, (2014) 1 SCC 384 (India): Medical negligence—potential parallel to AI diagnostics.

2. United States v. Athlone Industries, 746 F.2d 977 (3d Cir. 1984): Corporate liability through collective knowledge.

3. Tesco Supermarkets Ltd v. Nattrass, [1972] AC 153: Identification doctrine blocks AI attribution.

4. State v. Rafaela Vasquez (2020): Autonomous Uber vehicle fatality; the backup driver was charged, not the AI.

5. People v. Ferrer (2017): Algorithm-induced harm in platform moderation.

 

V. Normative Considerations


  • Arguments For Liability

    Promotes safer AI design (deterrence).
    Closes accountability gaps.
    Discourages scapegoating of human actors.

  • Arguments Against Liability

    AI lacks intent or moral reasoning.
    It cannot be punished or rehabilitated.
    It risks weakening human responsibility.

 

VI. The Path Forward


1. Strict Liability Regimes: Especially for high-risk AI (e.g., health, transport).

2. Electronic Personhood: The EU has floated symbolic “electronic personality” status for civil accountability—not criminal.

3. Hybrid Liability: Combines vicarious and corporate liability.

4. Legislative Reform: Amend existing criminal statutes to bring autonomous AI agents within clearly defined provisions.


VII. Conclusion


AI challenges the fundamental assumptions of criminal law. India, the US, and the UK currently reject direct AI criminal liability, yet the legal gap is widening. While machines may never be moral agents, laws must evolve to ensure justice in an increasingly automated world.



This article is authored by Rishika Naha. She was among the Top 40 performers in the Quiz Competition on Mergers and Acquisitions organized by Lets Learn Law.

 
 
 
