EU Artificial Intelligence Act: Legal Overview

Posted On - 15 July 2024 • By - Jidesh Kumar

Overview

The Artificial Intelligence Act (AI Act) is a European Union (EU) regulation on artificial intelligence (AI). It establishes a comprehensive legal and regulatory framework for AI within the EU and enters into force on 1 August 2024.

The Act applies to all types of AI across various sectors, with exceptions for AI systems used exclusively for military, national security, research, and non-professional purposes. As a piece of product regulation, it regulates the providers of AI systems and entities using AI in a professional context but does not confer rights on individuals.

Summary

The AI Act categorizes AI systems into four risk levels (sketched in code below):

  1. Unacceptable Risk: Prohibited (e.g., social scoring, manipulative AI).
  2. High Risk: Subject to stringent regulation.
  3. Limited Risk: Subject to transparency obligations (e.g., informing users about AI interactions like chatbots).
  4. Minimal Risk: Unregulated (e.g., AI in video games, spam filters).
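
As a rough illustration only (the Act contains no such code, and assigning a real system to a tier is a legal assessment, not a lookup), the four tiers can be modeled as a simple enumeration, with hypothetical example mappings drawn from the list above:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative model of the AI Act's four risk tiers."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "stringent obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "unregulated"

# Hypothetical example systems mapped to tiers, per the list above.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "video-game NPC": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} ({tier.value})")
```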

The majority of obligations fall on developers/providers of high-risk AI systems. Providers outside the EU must comply if their AI is used within the EU. Users (deployers) of high-risk AI systems have fewer obligations but must ensure compliance when operating within the EU or where AI output is used in the EU.

General Purpose AI (GPAI)

GPAI model providers must:

  • Provide technical documentation and usage instructions.
  • Adhere to the Copyright Directive.
  • Publish a summary of training content.

Free and open-license GPAI models are exempt from some of these obligations but must still meet the copyright and training-content summary requirements; this exemption does not apply if the model poses a systemic risk. Systemic-risk GPAI models additionally require model evaluations, cybersecurity measures, and incident reporting.

Prohibited AI Systems (Chapter II, Art. 5)

Prohibited AI systems include:

  • Subliminal or manipulative techniques that materially distort behavior and impair informed decision-making.
  • Exploitation of vulnerabilities due to age, disability, or socio-economic status.
  • Biometric categorization inferring sensitive attributes, except under specific lawful conditions.
  • Social scoring causing adverse treatment.
  • Predicting the risk of a person committing a criminal offense based solely on profiling or personality traits.
  • Untargeted scraping for facial recognition databases.
  • Emotion inference in workplaces and educational institutions, except for medical or safety reasons.

High-Risk AI Systems (Chapter III)

High-risk AI systems (Art. 6) fall into two groups (see the sketch after this list):

  • AI used as a safety component of a product covered by the EU harmonization legislation listed in Annex I, where the product requires third-party conformity assessment.
  • Use cases listed in Annex III, unless the system performs only a narrow procedural task, improves the result of a prior human activity, or merely assists a human assessment without replacing it.
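
As a minimal sketch of this two-pronged test (the function name and parameters are illustrative, not statutory language):

```python
def is_high_risk(
    annex_i_safety_component: bool,   # safety component of an Annex I product
    third_party_assessment: bool,     # product requires third-party conformity assessment
    annex_iii_use_case: bool,         # use case listed in Annex III
    narrow_or_assistive_only: bool,   # narrow procedural task / merely assists a human
) -> bool:
    """Illustrative reading of the Art. 6 classification test."""
    if annex_i_safety_component and third_party_assessment:
        return True
    return annex_iii_use_case and not narrow_or_assistive_only

# e.g., a hiring-screening tool: Annex III use case, no exemption applies
print(is_high_risk(False, False, True, False))  # -> True
```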

Providers must:

  • Implement risk management and data governance systems.
  • Ensure training, validation, and testing datasets are relevant, sufficiently representative, and, to the best extent possible, free of errors.
  • Maintain technical documentation for compliance verification.
  • Facilitate human oversight and ensure accuracy, robustness, and cybersecurity.
  • Establish quality management systems.

Annex III Use Cases

High-risk applications cover:

  • Biometric systems that are not otherwise prohibited.
  • Critical infrastructure management.
  • Educational and vocational access and assessment.
  • Employment and worker management.
  • Access to public and private services.
  • Law enforcement and criminal justice.
  • Migration and border control.
  • Administration of justice and democratic processes.

General Purpose AI (GPAI) Obligations

Providers must:

  • Document training and testing processes.
  • Provide information for downstream integration.
  • Respect the Copyright Directive.
  • Publish training data summaries.

Systemic risk GPAI models require:

  • Model evaluations and adversarial testing.
  • Systemic risk mitigation.
  • Incident tracking and reporting.
  • Cybersecurity protections.

Compliance may be demonstrated through adherence to codes of practice or European harmonized standards once published.

Governance

The AI Act establishes the AI Office within the European Commission to monitor providers' compliance, handle complaints, and conduct evaluations of GPAI models.

Implementation Timeline

After entry into force on 1 August 2024, the AI Act applies in stages (the sketch after this list shows the schedule programmatically):

  • 6 months (2 February 2025): prohibitions on unacceptable-risk AI practices.
  • 12 months (2 August 2025): GPAI obligations.
  • 24 months (2 August 2026): Annex III high-risk AI systems.
  • 36 months (2 August 2027): Annex I high-risk AI systems.

Codes of practice must be ready within 9 months of entry into force, i.e., by 2 May 2025.
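
The staged schedule lends itself to a simple lookup; the following minimal Python sketch (illustrative only) encodes the application dates fixed in Art. 113 and reports which stages apply on a given date:

```python
from datetime import date

# Application dates as fixed in the Act (Art. 113); note they fall one day
# after a naive "+N months" calculation from the 1 August 2024 entry into force.
MILESTONES = [
    (date(2025, 2, 2), "Prohibited AI practices (6 months)"),
    (date(2025, 8, 2), "GPAI obligations (12 months)"),
    (date(2026, 8, 2), "Annex III high-risk AI systems (24 months)"),
    (date(2027, 8, 2), "Annex I high-risk AI systems (36 months)"),
]

def stages_in_force(on: date) -> list[str]:
    """Return the compliance stages already applicable on a given date."""
    return [label for start, label in MILESTONES if on >= start]

print(stages_in_force(date(2025, 3, 1)))
# -> ['Prohibited AI practices (6 months)']
```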

Contributed By – Abhilasha