AI Restructuring in GCCs: Skill Obsolescence, Team Compression, and the Rise of “Prompt Pay”

Employment Law Risks in India’s Transforming Global Capability Centre Ecosystem
Global Capability Centres (“GCCs”) in India are at an inflection point. With over 1,800 centres employing nearly two million professionals and generating approximately USD 64.6 billion in revenue as of 2024, the sector accounts for roughly 35 percent of India’s technology services workforce and over 1.6 percent of national GDP. AI adoption is now restructuring the employment architecture that underpins this ecosystem, not through single, formally announced restructuring events, but through a gradual, often undocumented reconfiguration of roles, skills, teams, and compensation.
Unlike traditional restructuring exercises, this shift is not always accompanied by formal role redesign, redundancy planning, or compensation restructuring. Instead, organisations are witnessing: (i) gradual skill obsolescence in clearly identifiable functional categories; (ii) compression of supervisory hierarchies driven by AI-enabled productivity; and (iii) the emergence of non-traditional compensation models tied to AI-related capabilities such as prompt engineering and workflow automation, which are being awarded outside conventional job architecture.
This paper identifies emerging employment law risks arising from AI-led restructuring in the GCC context, with particular reference to the four Labour Codes implemented by the Government of India on 21 November 2025, the Digital Personal Data Protection Act, 2023 read with the Digital Personal Data Protection Rules, 2025 (“DPDPA framework”), and India’s evolving AI governance landscape. It examines potential areas of dispute relating to redundancy characterisation, algorithmic decision-making in workforce management, role restructuring, and compensation structures, and offers recommendations for GCCs seeking to align their employment frameworks with evolving operational realities.
Introduction
The adoption of artificial intelligence within GCCs is fundamentally altering traditional team structures and role definitions. Engineering teams that previously operated across layered delivery models are being replaced with leaner, AI-augmented configurations. Quality assurance functions, once reliant on headcount, are increasingly automated. Junior development roles are being compressed by code generation tools. Entire supervisory layers are becoming structurally redundant as individual contributors, supported by AI copilots, can absorb the coordination tasks previously distributed across teams.
This transition is not proceeding through formally managed restructuring exercises. Organisations are instead relying on performance management frameworks, informal role realignment, and discretionary compensation adjustments to manage a fundamentally structural shift in workforce composition. While operationally convenient, this approach creates material legal ambiguity: employment frameworks are not being updated to reflect operational reality, and the basis for consequential employment decisions from role changes to separations may not withstand scrutiny if challenged.
The challenge has become more acute following two significant regulatory developments in late 2025. First, the Government of India brought all four Labour Codes into force on 21 November 2025, replacing 29 central labour statutes and introducing a modernised compliance architecture that alters the legal treatment of wages, retrenchment, fixed-term employment, and dispute resolution. Second, the Digital Personal Data Protection Rules, 2025, were notified on 13 November 2025, advancing the operationalisation of the DPDPA, 2023 in a phased manner, and eventually creating new compliance obligations for organisations that use AI or automated systems to process employee data. Together, these developments raise the compliance stakes for GCCs undertaking informal AI-led workforce change. This paper examines the key risk vectors arising from this intersection of AI adoption and regulatory transition, and offers practical observations for GCCs navigating the path forward.
Skill Obsolescence And Role Displacement
AI deployment is reducing demand for specific functional roles across GCCs in identifiable patterns. According to the NLB Services Workforce 2.0 report (2025), the most affected categories include entry-level IT support functions (where AI copilots now handle first-level query resolution), manual quality assurance, legacy application development, and on-premises infrastructure management. The same report projects that by 2026, AI tools may automate up to 80 percent of routine operational tasks in these categories, compressing the FTE-based delivery model (the traditional staffing structure in which headcount is allocated and billed on the basis of full-time equivalent units across a defined hierarchy of roles) that has historically anchored GCC headcount at the lower levels of the organisational pyramid.
The practical consequence is that workforce displacement occurring within GCCs is not random: it is concentrated in clearly identifiable skill groups, seniority levels, and functions. This concentration is significant from a legal standpoint.
In many organisations, affected employees are not being managed through a formal redundancy exercise. Instead, they may be placed on performance improvement plans, moved to notionally adjacent roles, or exited through attrition-led mechanisms that do not trigger statutory redundancy obligations. This approach creates a mismatch between the true basis of separation (structural skill displacement) and the documented basis, which may reflect individual performance.
Under Indian employment law, the workman/non-workman distinction under the Industrial Disputes Act, 1947 (“IDA”), read with the Industrial Relations Code, 2020 (in force from 21 November 2025), remains central to determining the applicable protections. For employees classified as workmen, retrenchment, defined as termination of employment for reasons other than misconduct, attracts certain procedural obligations, including one month’s notice or wages in lieu, retrenchment compensation, and service of notice on the appropriate government authority. It is relevant to note, however, that courts have held the notice-to-government requirement to be directory rather than mandatory in nature; that is, a failure to serve such notice has generally been treated as a procedural irregularity that may not, by itself, invalidate a retrenchment, though it may expose the employer to challenge on other grounds.
A further and practically important distinction bears mention. The line of authority beginning with Punjab Land Development and Reclamation Corporation Ltd. v. Presiding Officer, Labour Court (1990)[1] clarifies that a termination grounded in an employee’s performance or conduct does constitute “retrenchment” within the meaning of the IDA, the definition of retrenchment being broad enough to encompass terminations for reasons other than misconduct or superannuation. However, the procedural obligations under Section 25G (the last-come-first-go principle) and Section 25H (re-employment preference) are, by their nature, incapable of application to performance-based exits, since those provisions presuppose a selection exercise among similarly situated employees rather than an assessment of individual performance. The procedural inapplicability of Sections 25G and 25H to performance-based retrenchments does not, however, resolve the underlying risk identified in this paper: where the true cause of separation is structural skill displacement driven by AI adoption, but the proximate mechanism employed is a performance improvement process, the sustainability of the performance-based characterisation may be questioned. In that context, the more defensible and organisationally sound approach would be to direct the focus away from performance management altogether and toward structured upskilling of the affected employee, which addresses the structural cause directly and reduces the prospect of a contested separation.
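Where Section 25G does apply, that is, in a structural retrenchment among similarly situated workmen rather than a performance-based exit, the default selection logic it presupposes can be sketched in code. The sketch below is a simplified illustration only: the statute permits departure from seniority order for reasons recorded in writing, and the `Workman` type and field names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Workman:
    name: str
    year_of_engagement: int  # year of joining, for illustration

def select_for_retrenchment(category: list[Workman], positions_cut: int) -> list[Workman]:
    """Default 'last come, first go' ordering under Section 25G:
    absent recorded reasons to deviate, the most recently engaged
    workmen within the same category are selected first."""
    by_recency = sorted(category, key=lambda w: w.year_of_engagement, reverse=True)
    return by_recency[:positions_cut]

team = [Workman("A", 2016), Workman("B", 2021), Workman("C", 2019)]
selected = select_for_retrenchment(team, 1)
# selected[0].name == "B" (the most recently engaged workman)
```

The point of the sketch is evidentiary rather than computational: a selection exercise of this kind generates an auditable, seniority-based record, which is precisely what informal AI-led compression tends to lack.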
A further dimension of risk arises from the use of AI tools in identifying employees for performance action. Where algorithmic systems flag employees for underperformance without accounting for structural context such as the fact that a role is being automated, or that productivity metrics are benchmarked against AI-assisted comparators, there is a risk that performance action is initiated on an arbitrary or discriminatory basis. AI-driven performance systems may embed algorithmic bias, and may disproportionately impact identifiable groups including older employees or employees whose roles are structurally disadvantaged by AI adoption. Indian courts, in assessing whether termination was justified, look to the principles of natural justice and require that performance action be grounded in a fair and transparent process; where AI has played an undisclosed or unexplained role in that process, an employer may face difficulty demonstrating compliance.
AI Team Compression And Managerial Redundancy
AI adoption is compressing traditional team hierarchies across GCCs in a manner that is both measurable and structurally significant. Tasks previously distributed across developers, testers, coordinators, and reviewers can increasingly be performed by smaller, AI-augmented pods. According to the NLB Services Workforce 2.0 Outlook 2025–2030, AI-led delivery models have already made certain GCC organisations approximately 30 percent flatter in structure, and projections suggest that AI-led pods may remove up to 50 percent of middle management layers within GCCs by 2026.
The employment law implications of this compression are underappreciated. As managerial roles become structurally redundant or are reconfigured into individual contributor positions, affected employees may have claims relating to demotion, unilateral alteration of service conditions, or constructive dismissal. In India, unilateral reduction of an employee’s responsibilities, reporting level, or grade, even absent a reduction in compensation, may constitute a change in conditions of service that requires the employee’s consent or adherence to applicable contractual and statutory provisions.
A further complication arises where managerial compression affects multiple roles simultaneously within a team or function. Where the restructuring has the character of a group exercise, questions of selection criteria become relevant. If an employer cannot articulate a principled, documented basis for which managerial roles were retained or reconfigured and which were not, affected employees may challenge those decisions as arbitrary. The gradual, undocumented nature of AI-led compression means that contemporaneous records of restructuring rationale often do not exist, leaving such decisions difficult to defend after the fact.
Compensation inversion, a scenario in which AI-skilled individual contributors receive compensation that exceeds that of their nominal supervisors, is an emerging feature of certain GCC environments, though its prevalence at scale is difficult to verify given the informal and discretionary manner in which AI skill premiums are currently being awarded. To the extent it occurs, it raises questions not only of employee relations but potentially of how roles are classified, what grade or band applies, and whether the supervisory relationship has been effectively dissolved in operational terms. These are questions that employment frameworks in most GCCs have not been designed to address.
The Emergence Of “Prompt Pay” And Skill-Based Compensation
Alongside structural changes to team composition, GCCs are increasingly rewarding employees for AI-native capabilities: prompt engineering, AI tool orchestration, workflow automation design, and GenAI product ownership. These capabilities are being recognised through discretionary increments, off-cycle retention adjustments, and informal skill premiums. According to the NLB Services Workforce 2.0 report (2025), hybrid AI-technical roles including AI trainers and automation architects are 1.3 times more prevalent across the GCC ecosystem than in prior periods, and the EY GCC Pulse Report 2025 identifies prompt engineers and AI governance specialists as among the highest-demand emerging roles in the sector.
The legal risks associated with informal skill-based compensation are real but require careful framing. The Wage Code, 2019, now in force, introduces a revised definition of “wages” that excludes certain allowances from the wage base for the purposes of computing statutory benefits such as provident fund contributions, gratuity, and bonus. These exclusions include, among others, house rent allowance, conveyance allowance, and other such components as may be prescribed. However, the statute incorporates an important limiting proviso: if the aggregate of all such excluded allowances exceeds 50 percent of an employee’s total remuneration, the excess amount is not excluded but is instead treated as “wages” for the purpose of statutory benefit computation.
The practical implication for AI skill premiums is as follows. Where an employer awards an AI-related skill premium as an allowance that sits within the excluded categories, and where the total quantum of excluded allowances across the employee’s compensation package thereby pushes past the 50 percent threshold, the portion of excluded allowances in excess of that threshold is counted back into the wage base. This increases the employer’s exposure on provident fund contributions, gratuity calculations, and other statutory benefit obligations. The risk is therefore not that employers are required to ensure basic pay constitutes 50 percent of total remuneration (that is not what the statute provides), but rather that informally structured, allowance-heavy compensation may inadvertently expand the statutory wage base and the financial obligations that flow from it.
As a forward-looking matter, GCCs that are building out AI skill premium frameworks should model the interaction between those premiums and the Wage Code’s proviso at the individual employee level, to understand whether and to what degree existing or proposed compensation structures may expand statutory computation bases. This analysis is particularly relevant where skill premiums are being awarded as additions to already allowance-heavy salary structures.
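The add-back mechanics described above can be modelled with simple arithmetic. The sketch below uses hypothetical figures and deliberately collapses the statutory definition into two buckets, components that count as wages versus excluded allowances; an actual computation would itemise each component as defined in the Wage Code.

```python
def statutory_wage_base(wage_components: float, excluded_allowances: float) -> float:
    """Illustrative model of the Wage Code proviso.

    'wage_components' stands in for components that count as wages
    (e.g. basic pay and dearness allowance); 'excluded_allowances'
    aggregates HRA, conveyance, and other excluded components.
    Hypothetical simplification for modelling purposes only.
    """
    total_remuneration = wage_components + excluded_allowances
    cap = 0.5 * total_remuneration  # excluded allowances capped at 50% of total
    excess = max(0.0, excluded_allowances - cap)
    return wage_components + excess  # the excess is counted back as wages

# Exactly at the 50% threshold: no add-back.
base = statutory_wage_base(40_000, 40_000)       # 40000.0
# An AI skill premium awarded in allowance form pushes past the cap:
inflated = statutory_wage_base(40_000, 60_000)   # 50000.0
```

In the second example, awarding a premium as an allowance raises the statutory wage base from 40,000 to 50,000, and provident fund, gratuity, and bonus obligations scale accordingly. Modelling this at the individual employee level, as recommended above, makes the exposure visible before the premium is granted.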
Beyond statutory computation, the discretionary and inconsistent nature of AI skill compensation creates exposure on pay parity grounds. Employees performing equivalent functions, whether prompt engineering, model evaluation, or AI workflow design, may receive materially different compensation depending on their manager, business unit, or geography within the GCC. In the absence of a structured AI job architecture with defined compensation bands, such disparities may be difficult to defend in the context of a grievance or dispute.
The absence of formal role definitions for AI-native capabilities also creates ambiguity around classification. Where an employee’s responsibilities have shifted substantially toward AI-related work without a formal role redesign, questions may arise as to the correct grade, band, or designation applicable to their employment. This ambiguity may affect promotion eligibility, performance evaluation criteria, and separation terms.
The New Regulatory Landscape
1. The Four Labour Codes (Effective since 21 November 2025)
The implementation of all four Labour Codes on 21 November 2025 represents the most significant structural reform of India’s labour law framework in the post-independence period. The Codes consolidate 29 central labour statutes into four instruments: the Wage Code, 2019; the Industrial Relations Code, 2020; the Code on Social Security, 2020; and the Occupational Safety, Health and Working Conditions Code, 2020. While Central and state-level rules under the Codes were still being finalised as of early 2026, all Codes are in force and employers are required to comply.
For GCCs undertaking AI-led workforce change, the most operationally relevant provisions are the following. The Wage Code revises the definition of “wages” and, through a proviso to the definition, ensures that the statutory wage base for benefit computation purposes cannot be artificially suppressed by loading compensation with excluded allowances beyond 50 percent of total remuneration. Employers should review existing salary structures, particularly those in which skill premiums or other variable components have been added in allowance form, to assess whether the proviso is engaged and what the consequential impact on provident fund, gratuity, and bonus obligations may be.
The Industrial Relations Code raises the threshold requiring prior government approval for retrenchment, layoffs, and closures from 100 to 300 workers, affording larger GCCs greater operational flexibility on the procedural approval dimension. However, it does not alter substantive obligations around notice, compensation, and the accurate characterisation of separations. The Code on Social Security reduces the gratuity eligibility period for fixed-term employees to one year of continuous service, a significant departure from the five-year threshold under the Payment of Gratuity Act, 1972. This change, which took effect on 21 November 2025, is particularly relevant for GCCs that have structured AI-skilled or project-based roles as fixed-term engagements: gratuity obligations that may not have been anticipated in original cost modelling will now apply. The Codes together also introduce enhanced dispute resolution mechanisms, including two-member Industrial Tribunals and a greater emphasis on conciliation before adjudication, which may alter the litigation trajectory of AI-related employment disputes going forward.
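For the cost-modelling point above, the conventional gratuity formula of 15 days’ wages per completed year of service (using the customary 26-working-day month) gives a first-order estimate of the newly arising liability. The sketch below is a simplification: it omits statutory caps, part-year rounding rules, and any pro-rata treatment specific to fixed-term engagements, and the figures are hypothetical.

```python
def gratuity_estimate(last_drawn_monthly_wages: float, completed_years: int) -> float:
    """First-order gratuity estimate: 15 days' wages per completed
    year of service, using the conventional 26-working-day month.
    Simplified sketch; statutory caps and rounding are omitted."""
    return (15 / 26) * last_drawn_monthly_wages * completed_years

# A one-year fixed-term engagement now attracts a gratuity liability
# that a five-year eligibility assumption would have priced at zero:
liability = gratuity_estimate(130_000, 1)  # 75000.0
```

Running this across a portfolio of fixed-term AI-skilled roles indicates, in rough terms, the liability that the reduced one-year threshold introduces into engagements priced under the earlier five-year assumption.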
2. The DPDPA Framework and AI-Driven HR Systems
The Digital Personal Data Protection Act, 2023 and the Digital Personal Data Protection Rules, 2025, notified on 13 November 2025, together constitute India’s first comprehensive data protection framework. It is important to note that the DPDPA framework is not yet fully operationalised. The Rules are being implemented in phases, with full compliance, including provisions relating to Significant Data Fiduciary obligations, audits, and data protection impact assessments, expected to become mandatory only by May 2027. In the interim, certain foundational provisions have come into force, including the establishment of the Data Protection Board of India, consent and notice requirements, and breach reporting obligations.
For GCCs deploying AI in workforce management, the DPDPA framework is directionally relevant even during this transitional phase. The processing of employee data through AI systems, including performance monitoring, productivity analytics, skill assessment, and redundancy modelling, constitutes the processing of personal data. While processing for employment purposes is recognised as a legitimate use under Section 7 of the DPDPA, this is not an open-ended exemption: where AI systems process employee data for purposes that extend meaningfully beyond standard HR functions, or where monitoring is more pervasive than the employment relationship would ordinarily contemplate, the scope of the exemption may be contested.
The DPDPA does not currently mandate a right to explanation for automated employment decisions, a gap that distinguishes the Indian framework from the GDPR. However, the absence of a specific statutory obligation does not eliminate legal risk. Employees adversely affected by AI-driven decisions may challenge those decisions on grounds of natural justice, arbitrariness, or absence of procedural fairness, particularly where the employer cannot explain the basis on which an AI system generated an output that affected their employment. Employers would be well-advised to begin building documentation and oversight practices now, in anticipation of fuller DPDPA compliance obligations crystallising over the coming two years.
3. Emerging AI Governance Framework
India does not yet have an enacted AI-specific law. The regulatory landscape, however, is in active formation. MeitY released AI Governance Guidelines under the IndiaAI Mission in November 2025, articulating a principle-based framework centred on seven foundational pillars including fairness, accountability, and transparency. While not legally binding, these guidelines represent the Government’s stated framework for responsible AI adoption and may be relevant in assessing the reasonableness of an employer’s AI governance practices if challenged before a court or tribunal.
In December 2025, the Artificial Intelligence (Ethics and Accountability) Bill, 2025 was introduced as a Private Member’s Bill in the Lok Sabha, proposing a statutory Ethics Committee for AI, mandatory ethical reviews for high-risk systems including those used in employment decisions, bias audit obligations, and developer accountability requirements. While the Bill has not been enacted and its legislative prospects remain uncertain, it reflects growing parliamentary attention to AI accountability in employment contexts and may signal the direction of future policy.
Convergence Of Risks: Informal AI-Led Restructuring
The employment law risks outlined in this paper do not arise in isolation. Skill obsolescence, team compression, informal compensation structures, and AI-driven performance management are interconnected features of a single underlying phenomenon: the informal restructuring of GCC workforces along AI capability lines, without a corresponding update to employment frameworks, compensation architectures, or documentation practices.
This creates a structural disconnect between operational reality and the documented employment relationship. The scale of this disconnect is beginning to manifest in measurable workforce corrections. As reported by BW People (February 2026), India’s GCCs experienced a workforce correction in 2025 in which an estimated 5,500 to 6,000 employees were impacted across sectors as a result of macroeconomic headwinds and AI-led restructuring, spanning engineering services, automotive, aerospace, retail, and technology functions. When disputes arising from such corrections proceed to adjudication, organisations will be required to justify consequential decisions before a labour court or tribunal on a record that may not accurately reflect the structural basis on which those decisions were made.
The newly established two-member Industrial Tribunals under the Industrial Relations Code are designed to expedite dispute resolution. A well-documented employer position will be correspondingly more important in proceedings before these Tribunals than it was under the prior regime, where protracted proceedings sometimes allowed gaps in documentation to be addressed over time.
The interaction of the regulatory landscape with informal AI-led restructuring intensifies overall exposure. Where an employer’s AI-driven performance system has processed employee data without adequate governance, or where a separation characterised as performance-based is in fact structural, the employer faces overlapping risk: under the industrial relations framework for improper separation or retrenchment, under the DPDPA framework for improper data processing, and potentially under the emerging AI governance framework for failure to audit or disclose AI decision-making in employment contexts.
Potential Employment Law Considerations
AI-led restructuring in GCCs may give rise to several employment law considerations, including:
- The distinction between retrenchment and performance-based termination, particularly where separation is driven by structural skill displacement rather than individual conduct, and the consequent risk of mischaracterisation.
- Selection criteria for role rationalisation and the absence of contemporaneous documentation of the justification for decisions made in compression exercises.
- Demotion or unilateral alteration of service conditions arising from role restructuring and reporting line changes.
- The interaction between informal AI skill premiums and the Wage Code’s proviso on excluded allowances, and the consequential impact on statutory benefit computation bases including provident fund and gratuity.
- The reduced gratuity eligibility threshold for fixed-term employees under the Code on Social Security, 2020, and its implications for roles structured as project-based or time-limited engagements.
- Algorithmic bias in performance management and redundancy selection, and the absence of mechanisms for employees to contest or seek explanation for AI-driven employment decisions.
- DPDPA framework obligations as they progressively come into force in relation to the processing of employee data through AI systems used for productivity monitoring, performance evaluation, and exit modelling.
- Natural justice requirements in the context of AI-assisted employment decisions, where the employer may be unable to articulate or disclose the reasoning behind an adverse outcome.
- Documentation obligations relating to organisational restructuring, which become more significant where informal AI-led changes are later challenged before a tribunal.
Recommendations For GCCs
To address the emerging legal and compliance risks associated with AI-led workforce transformation, GCCs may consider the following:
Workforce Architecture and Role Definition
- Define AI-related job families, role descriptions, and competency frameworks as part of a formal job architecture, distinguishing AI-native roles from transitional or hybrid configurations.
- Formally redesign roles where operational responsibilities have materially shifted as a result of AI deployment, and update employment contracts or offer letters accordingly.
- Document the business rationale for team compression and managerial reconfiguration contemporaneously, with clear articulation of the structural basis for decisions affecting individual roles.
Compensation Structure and Statutory Compliance
- Review compensation structures, particularly those in which AI skill premiums have been added in allowance form, to model whether the Wage Code’s proviso on excluded allowances is engaged, and to assess the consequential impact on statutory benefit computation.
- Assess the gratuity implications of the reduced one-year eligibility threshold for fixed-term employees under the Code on Social Security, 2020, particularly for AI-skilled roles structured as project-based engagements, and update cost projections accordingly.
- Establish structured, band-based compensation frameworks for AI-native roles to promote internal consistency and mitigate pay parity risks across teams and geographies.
Redundancy and Separation Governance
- Establish a clear protocol distinguishing structural redundancy from performance-based separation, with mandatory review before an AI-enabled performance outcome is used as the basis for a termination decision.
- Ensure that the characterisation of any separation whether as retrenchment or performance-based exit is grounded in contemporaneous documentation that accurately reflects the true basis for the decision, having regard to the legal consequences that flow from each characterisation.
- Review selection criteria used in any group restructuring exercise for consistency, objectivity, and freedom from algorithmic bias.
AI Governance and DPDPA Readiness
- Conduct a data mapping exercise to identify all AI systems that process employee personal data, including performance monitoring, productivity analytics, and exit modelling tools, and assess the applicable basis for processing under the DPDPA framework.
- Implement human oversight mechanisms for AI-assisted employment decisions, ensuring that consequential decisions including performance action and redundancy selection are reviewed and approved by a responsible manager before being actioned, and that the decision-making process is documented.
- Begin building DPDPA compliance infrastructure now, in anticipation of fuller obligations crystallising by May 2027, including governance frameworks, breach notification protocols, and data principal rights mechanisms.
- Monitor the emerging AI governance framework and consider voluntary adoption of bias audit and algorithmic transparency practices in advance of potential statutory obligations.
Retraining and Transition Pathways
- Offer structured retraining pathways for employees in roles identified as high-risk for AI displacement, and document the availability and uptake of such programmes. This documentation may be relevant in demonstrating good faith in the event of a dispute.
- Where employees are moved to adjacent or restructured roles rather than exited, ensure that the new role description is formally defined and that any material change in conditions of service is properly documented and, where required, consented to.
Conclusion
AI adoption within GCCs is not a discrete event. It is a structural transition, progressing incrementally across functions, seniority levels, and compensation frameworks. The GCC sector in India, now exceeding 1,800 centres and approaching two million employees, is at the centre of this transition, and the scale of change is only expected to deepen as agentic AI and full-scale automation become embedded in core delivery models.
The simultaneous implementation of the four Labour Codes, the advancing operationalisation of the DPDPA framework, and the emergence of India’s AI governance architecture create a new and more complex compliance environment for GCCs managing this transition. Employment frameworks that have not kept pace with operational change are increasingly exposed not only in the context of individual disputes but in the broader context of regulatory expectations around how AI-driven decisions affecting workers should be governed, documented, and, where necessary, explained.
Proactive alignment of role definitions, compensation structures, redundancy characterisation, and AI governance practices may assist GCCs in managing workforce transformation in a manner that is legally defensible, organisationally coherent, and equitable to the employees whose roles, skills, and livelihoods are most directly affected by the shift.
[1] (1990) 3 SCC 682