AI Governance & Data Privacy
How Data Protection Laws Apply to Artificial Intelligence Systems
Introduction: The Intersection of AI and Privacy
Artificial intelligence systems are fundamentally reshaping how organisations collect, process, and derive value from personal data. From large language models trained on billions of data points to predictive analytics engines that score creditworthiness in milliseconds, AI has introduced a scale and complexity of data processing that existing privacy frameworks were never designed to address.
The privacy implications of AI are not merely incremental. They are qualitatively different from traditional data processing in several respects:
- Scale: AI systems routinely process datasets comprising millions or billions of records, often scraped from public sources or aggregated across multiple data controllers, making individual consent models impractical.
- Opacity: Deep learning models operate as functional "black boxes" where even their developers cannot fully explain how a specific input maps to a specific output, challenging fundamental transparency requirements in data protection law.
- Emergent capabilities: AI systems can derive sensitive personal information (health conditions, political beliefs, sexual orientation) from ostensibly non-sensitive data through inference and correlation, undermining the concept of data minimisation.
- Autonomous decision-making: AI-driven decisions about individuals — loan approvals, hiring recommendations, insurance pricing, content moderation — operate at a speed and volume that renders meaningful human oversight difficult.
- Persistent data use: Training data becomes embedded in model parameters in ways that make true deletion technically challenging, raising questions about data retention and the right to erasure.
For Chief Technology Officers, Data Protection Officers, and compliance professionals, the challenge is clear: AI systems must be governed within existing and emerging regulatory frameworks, while still enabling innovation. This guide examines how current data protection laws — particularly the EU General Data Protection Regulation (GDPR) and India's Digital Personal Data Protection Act, 2023 (DPDPA) — apply to AI systems, how the EU AI Act creates a new regulatory layer, and what practical steps organisations should take to build compliant AI governance programmes.
AI and Personal Data: Core Privacy Challenges
Before examining specific regulatory requirements, it is essential to understand the distinct privacy challenges that AI systems present. These challenges cut across jurisdictions and apply regardless of the specific legal framework in force.
Training Data and Lawful Basis
Machine learning models require vast training datasets. Where these datasets contain personal data — whether directly (names, emails, photographs) or indirectly (behavioural patterns, location histories, device fingerprints) — the organisation must establish a lawful basis for processing. This is complicated by the fact that training data is often collected for one purpose (e.g., providing a service) and repurposed for another (training an AI model), potentially violating purpose limitation principles.
Inference and Profiling
AI systems can generate new personal data through inference. A model that analyses purchasing patterns to predict pregnancy, or that infers political leanings from social media activity, creates personal data that the data subject never provided. The legal status of inferred data — particularly whether it constitutes "special category" data under the GDPR — remains one of the most contested questions in privacy law.
Re-identification Risk
Anonymisation and pseudonymisation are standard privacy-preserving techniques, but AI's pattern-recognition capabilities have substantially eroded their effectiveness. Research has demonstrated that machine learning models can re-identify individuals from supposedly anonymised datasets with high accuracy, particularly when combining multiple data sources. A dataset that qualifies as anonymous under traditional statistical methods may not remain anonymous when processed by a sufficiently capable AI system.
Algorithmic Bias and Discrimination
AI models trained on historical data inevitably absorb and can amplify existing societal biases. When these biases affect decisions about individuals — credit scoring, hiring, insurance underwriting, criminal justice risk assessment — they engage anti-discrimination principles embedded in data protection frameworks. Bias in AI is not merely an ethical concern; it creates concrete legal liability under multiple regulatory regimes.
Data Minimisation vs. Model Performance
Data protection laws universally require that personal data processing be limited to what is necessary for the stated purpose. AI development, however, generally follows the principle that more data yields better models. This creates a structural tension between legal requirements and engineering incentives that organisations must navigate through careful governance and documentation.
Important
AI systems can infer sensitive personal data (health status, political opinions, sexual orientation) from non-sensitive inputs. Organisations must assess not only the data they collect, but the data their models can derive through inference, and apply appropriate safeguards accordingly.
GDPR and AI
The GDPR, in force since May 2018, was drafted before the current wave of generative AI, but its principles-based architecture gives it significant reach into AI governance. Several provisions are directly relevant.
Article 22: Automated Individual Decision-Making
Article 22(1) provides that data subjects have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects or similarly significantly affects them. This provision applies directly to AI systems that make consequential decisions without meaningful human involvement.
Key interpretive points from the European Data Protection Board (EDPB) guidance:
- "Solely automated" means no meaningful human involvement — a human who rubber-stamps an AI recommendation without genuine discretion does not constitute adequate oversight.
- "Legal effects or similarly significant effects" encompasses decisions on creditworthiness, employment, insurance, access to essential services, and similar consequential outcomes.
- Exceptions exist where the decision is necessary for a contract, authorised by EU or Member State law, or based on explicit consent, but each exception carries additional safeguards.
Right to Explanation
Recital 71 of the GDPR states that data subjects should have the right to obtain "an explanation of the decision reached" after automated processing. While recitals are not directly binding, they inform interpretation. The EDPB has indicated that controllers must provide "meaningful information about the logic involved" (Articles 13(2)(f), 14(2)(g), and 15(1)(h)), which for AI systems means explaining how the model processes data, what factors are most influential, and why a particular outcome was reached for the individual.
Data Protection Impact Assessments (DPIAs)
Article 35 requires a DPIA where processing is "likely to result in a high risk to the rights and freedoms of natural persons." The Article 29 Working Party (now EDPB) identified several criteria that trigger mandatory DPIAs, and AI systems frequently meet multiple triggers:
| DPIA Trigger Criterion | AI Relevance |
|---|---|
| Evaluation or scoring | Credit scoring, employee performance models, risk assessment |
| Automated decision-making with legal or similar effect | Loan approvals, hiring decisions, insurance pricing |
| Systematic monitoring | Surveillance systems, employee monitoring tools |
| Sensitive data or data of highly personal nature | Health AI, biometric systems, inferring sensitive attributes |
| Data processed on a large scale | Nearly all production AI systems |
| Innovative use of new technologies | Generative AI, foundation models, novel ML architectures |
The EDPB guidance states that meeting two or more criteria generally triggers a mandatory DPIA. Most AI systems will meet at least three.
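Compliance teams screening many systems sometimes encode this rule as a simple triage check. The following Python sketch is a hypothetical illustration: the six criteria mirror the table above and the two-criteria threshold follows the EDPB guidance, but the function and its identifiers are our own construction, not regulatory tooling.

```python
# Illustrative DPIA triage check based on the trigger criteria summarised
# above. The criteria set and the two-criteria threshold follow EDPB
# guidance; names and structure are hypothetical.

DPIA_TRIGGERS = {
    "evaluation_or_scoring",
    "automated_decision_legal_effect",
    "systematic_monitoring",
    "sensitive_or_highly_personal_data",
    "large_scale_processing",
    "innovative_technology",
}

def dpia_required(criteria_met: set[str], threshold: int = 2) -> bool:
    """Return True if enough trigger criteria are met that a DPIA is
    generally considered mandatory under EDPB guidance."""
    unknown = criteria_met - DPIA_TRIGGERS
    if unknown:
        raise ValueError(f"Unrecognised criteria: {unknown}")
    return len(criteria_met) >= threshold

# Example: a credit-scoring model typically meets at least three triggers.
print(dpia_required({
    "evaluation_or_scoring",
    "automated_decision_legal_effect",
    "large_scale_processing",
}))  # True
```

The output of such a check is a prompt for legal review, not a substitute for it.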
Purpose Limitation and Legitimate Interest
Purpose limitation (Article 5(1)(b)) requires that personal data be collected for specified, explicit, and legitimate purposes and not further processed in a manner incompatible with those purposes. Organisations seeking to rely on legitimate interest (Article 6(1)(f)) as the lawful basis for AI training must conduct a rigorous balancing test, documenting the legitimate interest pursued, the necessity of processing, and the impact on data subjects' rights and freedoms.
The EDPB's 2024 opinion on AI model training confirmed that legitimate interest can serve as a lawful basis for training AI models on personal data, but only where the controller can demonstrate that the processing is genuinely necessary for the identified interest and that the data subjects' interests do not override it. This assessment must be conducted and documented before training begins.
Practical Tip
When deploying AI systems in the EU, always conduct a DPIA before launch. Document the specific lawful basis for each data processing activity within the AI pipeline — data collection, training, inference, and output storage may each require separate justification.
DPDPA and AI
India's Digital Personal Data Protection Act, 2023 (DPDPA) establishes a consent-centric framework that applies to automated processing of digital personal data. While the DPDPA does not contain AI-specific provisions equivalent to GDPR Article 22, several of its requirements have direct implications for AI systems.
Consent for Automated Processing
Sections 4 and 6 of the DPDPA together require that personal data be processed only for a lawful purpose for which the data principal has given consent, or for certain legitimate uses specified under Section 7. The consent must be "free, specific, informed, unconditional and unambiguous," with a clear affirmative action. For AI systems, this means:
- The consent notice must clearly describe that the data will be used for automated processing or AI-driven analysis.
- Bundled consent (combining AI processing consent with basic service consent) is unlikely to meet the "specific" requirement.
- Data principals must be able to withdraw consent, which may require organisations to implement mechanisms for excluding individuals' data from AI processing pipelines.
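On the last point, one practical pattern is to filter withdrawn data principals out of datasets before each training run. The minimal pandas sketch below illustrates the idea under assumed column names (`principal_id` is hypothetical); note that it governs only future processing — removing the influence of data already embedded in a trained model is the harder "machine unlearning" problem discussed under data principal rights below.

```python
# Hypothetical sketch: honouring consent withdrawal by excluding withdrawn
# data principals from a training dataset before each run. Column names
# are illustrative assumptions.

import pandas as pd

def exclude_withdrawn(records: pd.DataFrame, withdrawn_ids: set[str]) -> pd.DataFrame:
    """Drop rows belonging to data principals who have withdrawn consent."""
    return records[~records["principal_id"].isin(withdrawn_ids)].copy()

training_data = pd.DataFrame({
    "principal_id": ["u1", "u2", "u3"],
    "feature": [0.2, 0.7, 0.5],
})
clean = exclude_withdrawn(training_data, withdrawn_ids={"u2"})
print(clean["principal_id"].tolist())  # ['u1', 'u3']
```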
Data Principal Rights
The DPDPA grants data principals the right to access information about their data (Section 11), the right to correction and erasure (Section 12), and the right to grievance redressal (Section 13). For AI systems, the right to erasure is particularly challenging — removing an individual's data from a trained model (sometimes called "machine unlearning") is technically difficult and may require model retraining. Organisations must plan for how they will honour these rights in the context of AI systems.
Significant Data Fiduciary Obligations
Section 10 of the DPDPA empowers the Central Government to notify certain data fiduciaries as "Significant Data Fiduciaries" (SDFs) based on factors including the volume and sensitivity of personal data processed and the risk to data principal rights. Large AI companies processing substantial volumes of Indian personal data are likely candidates for SDF designation. SDFs face additional obligations:
- Appointing a Data Protection Officer (DPO) based in India.
- Appointing an independent data auditor to evaluate compliance.
- Conducting periodic Data Protection Impact Assessments.
- Meeting additional requirements as the Central Government may prescribe by rules.
The DPDPA rules, expected to be finalised in 2026, are anticipated to include further provisions on algorithmic transparency and AI-specific obligations for SDFs. Organisations should monitor developments closely.
KSK Insight
KSK has been actively tracking DPDPA rule-making and MeitY consultations on AI governance. Our team advises technology companies on structuring consent architectures and data processing agreements that account for AI-specific requirements under Indian law.
EU AI Act Overview
The EU AI Act (Regulation (EU) 2024/1689), which entered into force on 1 August 2024, is the world's first comprehensive AI-specific legislation. It establishes a risk-based regulatory framework that classifies AI systems into four tiers, with compliance obligations calibrated to the level of risk.
Risk-Based Classification
| Risk Tier | Examples | Regulatory Treatment |
|---|---|---|
| Unacceptable Risk | Social scoring by governments, real-time remote biometric identification in public spaces (with narrow exceptions), manipulation of vulnerable groups, emotion recognition in workplaces and schools | Prohibited outright |
| High Risk | AI in critical infrastructure, education, employment, credit scoring, law enforcement, migration, justice administration, biometric identification | Mandatory conformity assessment, risk management system, data governance, transparency, human oversight, accuracy/robustness requirements |
| Limited Risk | Chatbots, deepfake generators, emotion recognition systems (non-prohibited), AI-generated content | Transparency obligations (users must be informed they are interacting with AI or viewing AI-generated content) |
| Minimal Risk | Spam filters, AI-enabled video games, inventory management systems | No specific obligations (voluntary codes of conduct encouraged) |
Compliance Timeline
The AI Act's provisions take effect in stages:
- 2 February 2025: Prohibitions on unacceptable-risk AI systems take effect.
- 2 August 2025: Obligations for general-purpose AI (GPAI) models apply; national competent authorities must be designated; codes of practice for GPAI providers finalised.
- 2 August 2026: Full application of high-risk AI system requirements (Annex III systems), transparency obligations for limited-risk systems, and enforcement mechanisms.
- 2 August 2027: Requirements for high-risk AI systems that are safety components of products already regulated under specific EU sectoral legislation (Annex I).
General-Purpose AI (GPAI) Models
The AI Act introduces specific obligations for providers of GPAI models (such as large language models). All GPAI providers must maintain technical documentation, implement copyright compliance policies, and publish sufficiently detailed summaries of training data. GPAI models posing "systemic risk" (defined by a computational threshold of 10^25 FLOPs or by Commission designation) face additional obligations including model evaluation, adversarial testing, cybersecurity measures, and serious incident reporting.
Important
The prohibition on unacceptable-risk AI practices has been in force since February 2025. Organisations operating AI systems in the EU should have already assessed whether any of their systems fall within the prohibited categories. Non-compliance carries penalties of up to EUR 35 million or 7% of global annual turnover.
EU AI Act and Data Protection Interplay
The EU AI Act and the GDPR are complementary instruments, not alternatives. An AI system deployed in the EU must comply with both simultaneously. Understanding their interaction is critical for compliance planning.
Parallel Compliance Obligations
The AI Act explicitly states (Article 2(7)) that it does not affect the GDPR. This means:
- A high-risk AI system must conduct both a conformity assessment under the AI Act and a DPIA under the GDPR where applicable.
- Data quality requirements under the AI Act (Article 10) must be met alongside GDPR data minimisation and purpose limitation requirements.
- Transparency obligations under the AI Act supplement, rather than replace, GDPR information obligations under Articles 13 and 14.
- The AI Act's human oversight requirements (Article 14) complement, but are distinct from, GDPR Article 22's safeguards for automated decision-making.
Regulatory Sandboxes
Article 57 of the AI Act requires Member States to establish at least one AI regulatory sandbox by 2 August 2026. These sandboxes allow developers to test AI systems under regulatory supervision, with the possibility of processing personal data collected for other purposes where specific conditions are met — a significant concession given the GDPR's strict purpose limitation principle. However, sandbox participants must still implement privacy safeguards including access controls, time limitations, and deletion of data after testing.
Data Quality for High-Risk AI
Article 10 of the AI Act imposes specific data governance requirements on high-risk AI systems, including that training, validation, and testing datasets must be relevant, sufficiently representative, and free of errors to the extent possible. This creates a data quality obligation that goes beyond the GDPR's accuracy principle (Article 5(1)(d)) by requiring proactive measures to identify and address biases in training data. Organisations must document their data governance practices, including the statistical properties of datasets, bias detection methodologies, and gap-filling measures.
Enforcement Coordination
The AI Act designates national market surveillance authorities for enforcement, while the GDPR is enforced by data protection authorities (DPAs). Where an AI-related incident involves both AI Act non-compliance and a personal data breach, organisations may face parallel investigations. The AI Act includes provisions for cooperation between AI authorities and DPAs, but the practical mechanics of coordination are still being developed at the Member State level.
Practical Tip
Map your AI compliance obligations across both the AI Act and the GDPR in a single compliance register. This avoids duplication, identifies gaps, and ensures that documentation prepared for one regulation (e.g., a DPIA) can be leveraged to satisfy requirements under the other (e.g., AI Act risk management documentation).
India's AI Regulatory Approach
India has not enacted dedicated AI legislation as of February 2026. Instead, the regulatory approach has been principles-based and sector-specific, with the DPDPA providing the overarching data protection framework within which AI governance operates.
NITI Aayog and National AI Strategy
NITI Aayog's "Responsible AI" papers (2021) articulated seven principles for AI governance in India: safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and reinforcement of positive human values. While these principles are non-binding, they signal the direction of potential regulation and are increasingly referenced in sectoral guidance.
MeitY Advisories
The Ministry of Electronics and Information Technology (MeitY) issued advisories in March 2024 that required AI platforms deploying under-tested or unreliable models to seek government approval before making them available to Indian users. These advisories, while facing legal challenges regarding their enforceability, indicate the government's willingness to exercise oversight over AI deployments. MeitY has also established the IndiaAI mission with a budget allocation for compute infrastructure, datasets, and AI application development.
Sectoral Regulatory Approaches
India's financial regulators have been the most active in addressing AI:
- Reserve Bank of India (RBI): The RBI's guidelines on digital lending (2022) and the Master Direction on IT Governance, Risk, Data and Cybersecurity (2023) require regulated entities to ensure fairness, transparency, and accountability in AI/ML-based credit scoring and lending decisions. Algorithmic lending models must be auditable, and borrowers must be informed when AI contributes to adverse credit decisions.
- Securities and Exchange Board of India (SEBI): SEBI's framework on algorithmic trading already regulates AI-driven trading strategies. SEBI has also issued guidance on the use of AI in investment advisory and research analyst activities, requiring disclosure of AI involvement in recommendations.
- Insurance Regulatory and Development Authority of India (IRDAI): IRDAI has addressed the use of AI in claims processing and underwriting, requiring insurers to maintain transparency in algorithmic decision-making affecting policyholders.
Anticipated Developments
The DPDPA rules, when finalised, are expected to include provisions relevant to AI governance, particularly regarding Significant Data Fiduciary obligations, cross-border data transfer mechanisms (relevant for AI model training), and children's data processing (relevant for AI systems interacting with minors). India's approach is likely to remain sector-specific and principles-based rather than adopting an EU-style comprehensive AI regulation in the near term.
KSK Insight
KSK advises technology companies navigating India's evolving AI regulatory landscape. Our team tracks developments across MeitY, RBI, SEBI, and IRDAI to provide integrated compliance guidance that addresses sector-specific requirements alongside the DPDPA's overarching data protection obligations.
Responsible AI Frameworks
In the absence of binding AI-specific regulation in many jurisdictions, international frameworks and standards provide essential guidance for organisations building AI governance programmes. These frameworks increasingly inform regulatory expectations and can serve as evidence of due diligence.
OECD AI Principles (2019, updated 2024)
The OECD AI Principles, adopted by 46 countries including India, articulate five principles: inclusive growth and sustainable development; human-centred values and fairness; transparency and explainability; robustness, security, and safety; and accountability. The 2024 update addressed generative AI and foundation models, adding guidance on value chain responsibility and environmental impact. The OECD principles are referenced by the G7 Hiroshima AI Process and inform the AI governance approaches of multiple jurisdictions.
Singapore Model AI Governance Framework
Singapore's framework, now in its second edition, provides a practical, sector-agnostic approach structured around four principles: internal governance structures and measures; determining the AI decision-making model (human-in-the-loop, human-on-the-loop, or human-out-of-the-loop); operations management (data, model, and system management); and stakeholder interaction and communication. The framework's "Implementation and Self-Assessment Guide for Organisations" (ISAGO) offers a detailed checklist that organisations can use regardless of jurisdiction.
IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
The IEEE's "Ethically Aligned Design" provides technical standards for embedding ethical considerations into AI development. IEEE 7010-2020 (Wellbeing Impact Assessment) offers a structured methodology for assessing AI systems' impacts on human well-being, complementing legal compliance frameworks with a broader impact perspective.
ISO/IEC 42001:2023 — AI Management Systems
ISO/IEC 42001 is the first international management system standard for AI. Published in December 2023, it specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system (AIMS). Certification against ISO 42001 provides a recognised benchmark of AI governance maturity and can be particularly valuable for demonstrating compliance to regulators, clients, and partners. Key elements include:
- AI risk assessment and treatment processes.
- AI system impact assessment methodology.
- Roles and responsibilities for AI governance.
- Data management for AI (quality, provenance, bias assessment).
- Documentation and record-keeping requirements.
- Continuous monitoring and improvement mechanisms.
Practical Tip
Consider ISO/IEC 42001 certification as a strategic investment. It provides a structured framework that maps to multiple regulatory requirements, demonstrates governance maturity to clients and regulators, and creates a systematic basis for continuous improvement in AI governance.
AI Impact Assessments
AI impact assessments (AIIAs) are emerging as the cornerstone of AI governance practice, analogous to DPIAs under the GDPR but broader in scope. Whether mandated by regulation (as under the EU AI Act for high-risk systems) or adopted voluntarily as a governance best practice, AIIAs provide a structured methodology for identifying, evaluating, and mitigating the risks that AI systems pose to individuals and communities.
Methodology
A robust AIIA should follow a structured methodology encompassing the following stages:
1. Scoping and Context
- Define the AI system's purpose, intended use, and operational context.
- Identify the target population and potential affected groups.
- Determine the decision-making model: is the AI advisory, augmentative, or autonomous?
- Classify the system under applicable regulatory frameworks (e.g., AI Act risk tier).
2. Data Flow Mapping
- Document all data inputs, including training data sources, real-time data feeds, and user-provided data.
- Map data flows through the AI pipeline: collection, preprocessing, training, inference, output, and storage.
- Identify personal data at each stage and the applicable lawful basis.
- Assess data quality, representativeness, and potential sources of bias.
3. Bias Testing and Fairness Assessment
- Define relevant fairness metrics (demographic parity, equalised odds, predictive parity) appropriate to the use case.
- Test for disparate impact across protected characteristics (race, gender, age, disability, religion).
- Document bias mitigation measures (pre-processing, in-processing, and post-processing techniques).
- Establish ongoing monitoring for bias drift in production.
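To make the metrics in this step concrete, the sketch below computes a demographic parity difference and a disparate impact ratio from binary predictions and a binary protected attribute. It is illustrative only — production fairness assessments should use established, audited tooling and a metric set matched to the use case.

```python
# Minimal sketch of two fairness metrics named above, for binary
# predictions and a binary protected-group mask. Illustrative only.

import numpy as np

def selection_rate(y_pred: np.ndarray, group_mask: np.ndarray) -> float:
    """Share of positive predictions within one group."""
    return float(y_pred[group_mask].mean())

def demographic_parity_difference(y_pred, group_a, group_b) -> float:
    """Difference in selection rates between two groups (0 = parity)."""
    return selection_rate(y_pred, group_a) - selection_rate(y_pred, group_b)

def disparate_impact_ratio(y_pred, group_a, group_b) -> float:
    """Ratio of selection rates; values below ~0.8 are a common red flag."""
    return selection_rate(y_pred, group_a) / selection_rate(y_pred, group_b)

# Example: loan approvals (1 = approved) split across two groups
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group_a = np.array([True, True, True, True, False, False, False, False])
group_b = ~group_a
print(demographic_parity_difference(y_pred, group_a, group_b))  # 0.5
print(disparate_impact_ratio(y_pred, group_a, group_b))         # 3.0
```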
4. Proportionality and Necessity
- Assess whether the AI system is necessary and proportionate to the stated objective.
- Consider less intrusive alternatives that could achieve the same purpose.
- Evaluate the severity and likelihood of adverse impacts on individuals.
5. Stakeholder Consultation
- Engage data subjects, domain experts, and affected communities where appropriate.
- Document feedback received and how it influenced the assessment.
- Seek views of the Data Protection Officer and, where applicable, the supervisory authority.
6. Documentation and Review
- Document findings, risk mitigation measures, residual risks, and the rationale for proceeding.
- Establish review triggers (material changes to the model, new use cases, regulatory developments).
- Schedule periodic reassessments, at minimum annually for high-risk systems.
Important
An AI impact assessment conducted only at the design stage and never revisited is of limited value. AI systems evolve through retraining, data drift, and changing operational contexts. Build periodic review and triggered reassessment into your AIIA process from the outset.
Transparency and Explainability
Transparency is a foundational principle across virtually all AI governance frameworks. It operates at multiple levels: organisational transparency (disclosing that AI is being used), model transparency (documenting how the model works), and decision-level transparency (explaining specific outcomes to affected individuals).
Disclosure Obligations
The regulatory trend is toward mandatory disclosure of AI involvement in decision-making. Under the EU AI Act, limited-risk AI systems (such as chatbots and deepfake generators) must inform users that they are interacting with AI. Under the GDPR, data controllers must inform data subjects about the existence of automated decision-making and provide "meaningful information about the logic involved." India's sectoral regulators (RBI, SEBI) require disclosure of AI involvement in specific contexts such as credit decisions and investment advice.
Model Documentation
Comprehensive model documentation — sometimes called "model cards" (following the framework proposed by Mitchell et al., 2019) — has become an industry standard practice. Effective model documentation should include:
- Model architecture, training methodology, and key hyperparameters.
- Training data description, including sources, size, temporal scope, and known limitations.
- Performance metrics across relevant subgroups (disaggregated evaluation).
- Known limitations, failure modes, and conditions under which the model should not be used.
- Intended use cases and out-of-scope applications.
- Version history and change log.
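A model card can be as simple as a structured record mirroring the fields above. The skeleton below, a plain Python dict with placeholder values, is one hypothetical layout; teams commonly maintain model cards as YAML or Markdown templates under version control.

```python
# Skeleton model card mirroring the fields listed above. All values are
# placeholders for illustration.

model_card = {
    "model": {
        "name": "credit-risk-scorer",  # hypothetical model
        "version": "2.3.1",
        "architecture": "gradient-boosted trees",
        "training_methodology": "supervised learning, 5-fold cross-validation",
        "key_hyperparameters": {"n_estimators": 500, "max_depth": 6},
    },
    "training_data": {
        "sources": ["internal loan book, 2018-2024"],
        "size": 1_200_000,
        "known_limitations": ["under-represents thin-file applicants"],
    },
    "evaluation": {  # disaggregated across relevant subgroups
        "overall_auc": 0.81,
        "by_subgroup": {"age_under_25": 0.74, "age_25_and_over": 0.82},
    },
    "intended_use": "advisory input to human credit officers",
    "out_of_scope": ["fully automated rejection", "insurance pricing"],
    "changelog": ["2.3.1: retrained on 2024 H2 data"],
}
```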
Algorithmic Auditing
External algorithmic auditing is increasingly expected for high-risk AI systems. The EU AI Act requires third-party conformity assessments for certain high-risk systems (notably biometric identification). Beyond regulatory mandates, voluntary algorithmic auditing demonstrates governance maturity and can identify issues before they become regulatory or reputational problems. Effective auditing encompasses technical evaluation (model performance, bias, robustness), process evaluation (governance, documentation, incident response), and impact evaluation (effects on individuals and communities).
Communicating AI Decisions to Data Subjects
Explaining AI decisions to non-technical individuals remains one of the most practical challenges in AI governance. Best practices include:
- Providing a plain-language explanation of the key factors that influenced the decision.
- Offering the data subject the ability to contest the decision and request human review.
- Avoiding overly technical explanations that obscure rather than illuminate.
- Using counterfactual explanations where appropriate (“your application would have been approved if X factor had been different”).
- Distinguishing between global model explanations (how the model works generally) and local explanations (why this specific decision was reached for this individual).
Practical Tip
Invest in explainability tooling (such as SHAP, LIME, or Integrated Gradients) during model development, not as an afterthought. Retrofitting explainability onto a deployed model is significantly more difficult and expensive than building it in from the design stage.
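As a sketch of what building explainability in looks like, the snippet below generates local explanations with the open-source shap library for a scikit-learn tree model. It assumes a recent shap release exposing the high-level `shap.Explainer` interface; treat it as an illustration rather than a recommended production setup.

```python
# Hedged example: local SHAP explanations for a tree-based classifier.
# Assumes the shap package; API details can vary across versions.

import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.Explainer(model)  # picks a TreeExplainer for tree models
explanation = explainer(X[:5])     # local explanations for five individuals

# Per-feature attributions for the first individual (per class for
# classifiers) — the raw material for a plain-language explanation.
print(explanation.values[0])
```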
Vendor and Third-Party AI Due Diligence
Most organisations do not build AI systems from scratch. They procure AI capabilities from third-party vendors, integrate open-source models, or use AI-enabled SaaS platforms. Under data protection law, the deploying organisation typically remains the data controller and bears ultimate responsibility for compliance. Rigorous vendor due diligence is therefore essential.
Procurement Checklist
Before procuring any AI system that processes personal data, organisations should evaluate:
- Training data provenance: How was the model's training data sourced? Was consent obtained or another lawful basis established? Are there copyright or intellectual property risks?
- Data processing location: Where is data processed and stored? Does this trigger cross-border transfer obligations under the GDPR or DPDPA?
- Sub-processors: Does the vendor use sub-processors (e.g., cloud infrastructure providers)? What oversight mechanisms exist?
- Security measures: What technical and organisational measures does the vendor implement to protect personal data?
- Bias testing: Has the vendor conducted bias testing? Are results available for review?
- Explainability: Can the vendor provide meaningful explanations of AI decisions to enable the deployer to meet transparency obligations?
- Incident response: What is the vendor's process for identifying, reporting, and remediating AI-related incidents?
- Certification and audit rights: Does the vendor hold relevant certifications (ISO 27001, ISO 42001, SOC 2)? Does the contract include audit rights?
Contractual Safeguards
Data processing agreements for AI vendors should include, in addition to standard GDPR Article 28 or DPDPA-compliant terms:
- Restrictions on the vendor using the deployer's data to train or improve the vendor's own models (a common clause in AI SaaS agreements that organisations frequently overlook).
- Obligations to notify the deployer of material changes to the model (retraining, architecture changes, data updates).
- Cooperation obligations for DPIAs, algorithmic audits, and regulatory inquiries.
- Clear allocation of liability for AI-related harms, including bias, inaccurate outputs, and data breaches.
- Termination rights and data portability provisions ensuring the deployer is not locked into a vendor whose AI system becomes non-compliant.
Open-Source Model Risks
The proliferation of open-source AI models (Llama, Mistral, Stable Diffusion, and others) creates specific governance challenges. Deployers of open-source models bear full responsibility for compliance, without the contractual protections that commercial vendor relationships provide. Key risks include uncertain training data provenance, unknown biases, lack of support for explainability, and the absence of a vendor to share liability. Organisations using open-source AI models should conduct thorough independent assessment equivalent to, or exceeding, the due diligence they would perform on a commercial vendor.
Important
Review your existing AI vendor contracts for "model improvement" clauses that permit the vendor to use your data for training their models. This is a common default in AI SaaS terms that can create significant privacy and intellectual property risks. Negotiate explicit opt-outs.
Practical Compliance Framework
Building an effective AI governance programme requires organisational commitment, clear accountability structures, and systematic processes. The following framework provides a practical roadmap for organisations at any stage of AI governance maturity.
1. AI Governance Policy
Establish a board-approved AI governance policy that articulates the organisation's principles for AI use, defines acceptable and prohibited AI applications, and sets the governance structure. The policy should be a living document, reviewed at least annually and updated to reflect regulatory changes and evolving best practices.
2. Roles and Accountability
- AI Governance Committee: A cross-functional body (legal, technology, risk, business) responsible for strategic oversight, policy approval, and escalation decisions.
- AI Ethics Officer or Lead: An individual with day-to-day responsibility for AI governance, impact assessments, and coordination with the DPO.
- Data Protection Officer: The DPO's role naturally extends to AI governance given the overlap with data protection requirements. Ensure the DPO has sufficient AI literacy and resources.
- Model Owners: For each AI system, designate a model owner responsible for compliance, performance monitoring, and incident response.
3. AI System Inventory and Risk Register
Maintain a comprehensive inventory of all AI systems in use across the organisation, whether developed in-house, procured from vendors, or based on open-source models. For each system, record:
- Purpose and use case description.
- AI Act risk classification (if applicable).
- Personal data categories processed and lawful basis.
- Vendor details and contractual status.
- AIIA and DPIA status.
- Risk rating and mitigation measures.
- Model owner and governance committee review date.
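One way to keep inventory entries consistent is to type them. The dataclass below is a hypothetical sketch mirroring the fields listed above; field names, enum values, and the example record are illustrative assumptions.

```python
# Hypothetical typed record for an AI system inventory entry (Python 3.10+).

from dataclasses import dataclass
from datetime import date
from enum import Enum

class AIActTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"
    NOT_APPLICABLE = "n/a"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    ai_act_tier: AIActTier
    personal_data_categories: list[str]
    lawful_basis: str
    vendor: str | None          # None for systems built in-house
    aiia_completed: bool
    dpia_completed: bool
    risk_rating: str            # e.g. "low" / "medium" / "high"
    model_owner: str
    next_review: date

record = AISystemRecord(
    name="resume-screening-v2",  # hypothetical system
    purpose="shortlisting job applicants",
    ai_act_tier=AIActTier.HIGH,
    personal_data_categories=["employment history", "education"],
    lawful_basis="legitimate interest (documented balancing test)",
    vendor=None,
    aiia_completed=True,
    dpia_completed=True,
    risk_rating="high",
    model_owner="hr-analytics-lead",
    next_review=date(2026, 8, 1),
)
```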
4. Training and Awareness
AI governance is only as effective as the people implementing it. Develop role-specific training programmes: general awareness for all employees, technical governance training for AI developers and data scientists, and regulatory compliance training for legal and compliance teams. Training should be refreshed at least annually and supplemented with scenario-based exercises.
5. Monitoring and Continuous Improvement
- Implement automated monitoring for model performance, bias drift, and data quality.
- Establish KPIs for AI governance (e.g., percentage of AI systems with completed AIIAs, mean time to resolve AI incidents, training completion rates).
- Conduct periodic internal audits of AI governance compliance.
- Participate in industry benchmarking and peer learning initiatives.
6. Incident Response
Develop an AI-specific incident response plan that addresses scenarios including biased outputs affecting individuals, data breaches involving training data, model manipulation or adversarial attacks, and AI-generated content causing harm. The plan should integrate with existing data breach notification processes and include clear escalation paths, communication templates, and post-incident review procedures.
KSK Insight
KSK's technology and privacy practice helps organisations design and implement AI governance programmes tailored to their specific risk profile, regulatory obligations, and operational context. From policy drafting through to board-level training and regulatory engagement, we provide end-to-end support.
Key Compliance Checklist for AI Systems
The following checklist consolidates the requirements discussed in this guide into a practical assessment tool. Organisations should evaluate each AI system against these criteria before deployment and on an ongoing basis.
Legal Foundation
- Lawful basis established for each data processing activity in the AI pipeline (collection, training, inference, output storage).
- Consent mechanisms meet the specificity and granularity requirements for AI processing under applicable law (GDPR, DPDPA).
- Purpose limitation assessment completed — AI use is compatible with the purposes for which data was originally collected, or a separate lawful basis is established.
- Cross-border data transfer mechanisms in place for international data flows involved in AI training or processing.
Risk Assessment and Documentation
- AI system classified under the EU AI Act risk framework (if deploying in the EU).
- Data Protection Impact Assessment completed and documented.
- AI Impact Assessment completed, covering bias, fairness, proportionality, and stakeholder impact.
- Risk register entry created with mitigation measures and residual risk assessment.
Transparency and Rights
- Privacy notice updated to disclose AI involvement in data processing and decision-making.
- Mechanism in place for individuals to obtain meaningful explanations of AI-driven decisions.
- Process established for individuals to contest AI decisions and request human review.
- Data subject rights (access, correction, erasure, portability) operationalised for AI-processed data.
Technical Safeguards
- Bias testing conducted across relevant protected characteristics before deployment.
- Ongoing monitoring implemented for model performance, bias drift, and data quality.
- Security measures appropriate to the sensitivity of data and the risk level of the AI system.
- Data minimisation principles applied — only necessary data retained and processed.
- Model documentation (model card) created and maintained.
Governance and Oversight
- Model owner designated with clear accountability for compliance and performance.
- Human oversight mechanisms in place proportionate to the risk level of decisions.
- Vendor due diligence completed for third-party AI systems, including training data provenance review.
- Contractual safeguards in place with AI vendors (data processing agreement, model improvement restrictions, audit rights).
- AI incident response plan developed and tested.
- Staff training completed on AI governance policies and procedures.
- Periodic review schedule established (at minimum annually; triggered review for material changes).
Practical Tip
Use this checklist as the basis for an internal compliance scoring system. Assign each item a status (compliant, partially compliant, non-compliant, not applicable) and track progress toward full compliance. This provides a clear governance dashboard for reporting to the board and regulators.
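A minimal version of such a scoring system, under assumed statuses and equal weighting, might look like the sketch below; a real dashboard would typically weight items by risk and track movement over time.

```python
# Illustrative compliance scoring over checklist items. The statuses and
# equal-weight aggregation are assumptions, not a prescribed methodology.

from enum import Enum

class Status(Enum):
    COMPLIANT = 1.0
    PARTIAL = 0.5
    NON_COMPLIANT = 0.0
    NOT_APPLICABLE = None

def compliance_score(items: dict[str, Status]) -> float:
    """Average score across applicable checklist items, in [0, 1]."""
    applicable = [s.value for s in items.values() if s.value is not None]
    return sum(applicable) / len(applicable) if applicable else 1.0

checklist = {
    "lawful_basis_established": Status.COMPLIANT,
    "dpia_completed": Status.PARTIAL,
    "bias_testing_done": Status.NON_COMPLIANT,
    "eu_ai_act_classification": Status.NOT_APPLICABLE,
}
print(f"{compliance_score(checklist):.0%}")  # 50%
```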
Key Takeaways
- AI systems amplify privacy risks through scale, opacity, and the ability to infer sensitive personal data from non-sensitive inputs. Standard data protection principles apply, but require AI-specific implementation.
- The GDPR's Article 22 (automated decision-making), DPIA requirements, and transparency obligations apply directly to AI systems processing personal data of EU residents. Conduct DPIAs for all high-risk AI deployments.
- India's DPDPA applies to AI-driven processing of digital personal data. Significant Data Fiduciary obligations are likely to capture large AI companies, with additional AI-specific rules anticipated in 2026.
- The EU AI Act (in force since August 2024, phased application through 2027) creates a risk-based classification system that operates alongside, not instead of, the GDPR. Organisations must comply with both.
- International frameworks (OECD AI Principles, ISO/IEC 42001, Singapore Model Framework) provide structured governance guidance and can demonstrate regulatory due diligence across jurisdictions.
- AI impact assessments should be conducted before deployment and revisited periodically. They must cover data flows, bias, proportionality, and stakeholder impact.
- Vendor and third-party AI due diligence is critical. Deploying organisations remain responsible for compliance, regardless of whether the AI system was built in-house or procured.
- Build a comprehensive AI governance programme with clear policies, roles, a system inventory, risk register, training, monitoring, and incident response capabilities.