Beyond the Hype: Cybersecurity Challenges of Generative AI in India

Posted On - 15 May, 2024 • By - King Stubb & Kasiva

Generative Artificial Intelligence (“GenAI”) has emerged as a transformative technology with significant potential to revolutionize various sectors of the Indian economy. These advanced algorithms can generate entirely new material, ranging from realistic images and creative text to innovative product ideas. As GenAI adoption grows, the potential advantages for India are numerous, including advances in drug development, personalized education, and content production. However, alongside the promise of GenAI, there is rising concern about its cybersecurity risks. As with any sophisticated technology, there is a risk of misuse. This article examines the existing cybersecurity landscape in India, as well as the unique challenges that GenAI integration presents.

Current Cybersecurity Landscape in India: A Patchwork of Regulations

India’s cybersecurity landscape is currently characterized by a multi-layered approach, relying on a combination of legislation and regulations rather than a single, unified law. This approach, while evolving, presents both strengths and weaknesses in addressing the growing complexities of cybersecurity, particularly in the context of emerging technologies like GenAI. The following laws and regulations exist:

  1. The Information Technology Act, 2000 (“IT Act”): The IT Act serves as the cornerstone of India’s cybersecurity framework. Over the years, it has undergone several amendments to keep pace with the rapid advancement of technology and the evolving nature of cyber threats. These amendments have expanded the scope of the IT Act to address critical issues such as:
     * Cybercrimes: The Act defines and criminalizes various cybercrimes, including hacking, data breaches, and identity theft.
     * Electronic Transactions: The Act provides a legal framework for electronic transactions, promoting trust and security in online commerce.
     * Data Security: The Act empowers the government to establish mechanisms for data security practices, although specific details are often delegated to supplementary regulations.
  2. Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (“SPDI Rules”): These rules mandate that entities handling sensitive personal data (such as financial information or health records) implement specific security measures to protect this data from unauthorized access, disclosure, or modification.
  3. Information Technology (Information Security Practices and Procedures for Protected System) Rules, 2018: These rules, read in conjunction with the National Cyber Security Policy, 2013, focus on safeguarding critical information infrastructure (“CII”), meaning computer resources whose disruption could have a debilitating impact on national security, the economy, or public health. Entities operating such systems are required to implement robust security protocols and conduct regular audits.
  4. The Digital Personal Data Protection Act, 2023 (“DPDPA”): India’s recently enacted DPDPA represents a significant advancement in data privacy protection. While it does not explicitly address AI, its broad definition of personal data breaches encompasses incidents involving AI systems that process personal information. Organizations utilizing GenAI will therefore need to comply with the DPDPA’s data security and breach reporting requirements.
  5. The Indian Computer Emergency Response Team (“CERT-In”): CERT-In stands as the nation’s central authority for cybersecurity concerns. Its pivotal functions include analyzing cyber threats, issuing advisories to alert both organizations and individuals to potential attacks, and coordinating response efforts in the event of significant cyber incidents. CERT-In directions can mandate that certain entities report cyber incidents, including those involving AI and machine learning systems, within designated timeframes. This enables CERT-In to maintain a comprehensive view of the national cyber threat landscape and formulate targeted mitigation strategies.
  6. The Digital India Act (“DIA”): The forthcoming DIA is poised to supersede the IT Act as the primary legislation governing cybersecurity. The DIA is anticipated to establish protocols for regulating high-risk AI systems, potentially incorporating security testing and vulnerability assessments. Moreover, it aims to confront emerging cyber risks posed by technologies such as AI, blockchain, and IoT, while also streamlining the regulatory framework by consolidating existing cybersecurity regulations under a unified directive.

Cybersecurity Risks of GenAI

GenAI models represent a novel technological frontier, but their sophistication also introduces unique cybersecurity risks. Without adequate safeguards, GenAI systems can fall prey to various forms of attack, including data poisoning and manipulation of training data. These vulnerabilities undermine the integrity of the models, potentially leading to compromised outputs and unreliable results. Furthermore, cyberattacks on GenAI systems can have far-reaching consequences.

Biased outputs and manipulated content generation are among the primary concerns, as they can perpetuate misinformation, undermine trust in AI systems, and even contribute to social and political unrest. Moreover, applying conventional cybersecurity measures to GenAI poses challenges due to its distinctive characteristics. The dynamic nature of AI training data and the complexity of model architectures demand tailored security approaches to effectively mitigate risks.

Addressing the Cybersecurity Gap for GenAI

To secure GenAI systems, organizations must implement preventive measures and best practices. Personal data should be protected against unauthorized access, use, or disclosure through appropriate security controls.

Organizations can utilize encryption techniques and user access control rules, as well as privacy impact assessments (“PIAs”), to detect and mitigate any privacy concerns connected with GenAI technologies. Adopting Privacy by Design (“PbD”) principles during the development and deployment of GenAI products demonstrates a proactive commitment to user privacy.

This end-to-end “user-centric” strategy includes incorporating privacy considerations into every phase of a solution’s lifecycle, developing trust-based relationships with users, and avoiding reputational and legal risks in the long run.[1]

In tandem with industry efforts, government intervention is crucial in establishing specific cybersecurity guidelines tailored to GenAI development and deployment. Countries such as Japan, Australia, the United Kingdom, Germany, and France have demonstrated proactive approaches to addressing the cybersecurity risks associated with GenAI.[2] These regulations encompass data privacy standards, model validation procedures, and accountability frameworks to ensure responsible AI usage and foster a secure AI ecosystem.

Furthermore, coordination among stakeholders—government agencies, industry leaders, and academic institutions—is critical to tackling GenAI cyber concerns holistically. By creating an ecosystem of shared information, resources, and skills, stakeholders may work together to reduce dangers and support the safe and ethical evolution of GenAI technology.

This collaborative approach allows for the pooling of varied ideas and resources to handle complex cybersecurity concerns efficiently. One such example is the collaboration between the United Kingdom and India to establish joint policies for managing the cybersecurity risks of GenAI. The UK’s Office for Artificial Intelligence and India’s Ministry of Electronics and Information Technology have formed a bilateral working group to exchange best practices, research findings, and technical knowledge in this area. This partnership allows the two governments to identify emerging threats, develop mitigation strategies, and encourage the responsible development of AI systems.[3]

Looking Forward

As GenAI continues to advance in India, it brings immense innovation potential. However, it also introduces significant cybersecurity challenges. Data manipulation and biased outputs are key concerns, requiring robust security protocols and Privacy by Design principles. Government intervention is crucial in establishing specific cybersecurity guidelines tailored to GenAI.

Collaboration among stakeholders is essential to address these challenges comprehensively. By fostering a shared ecosystem of knowledge and resources, stakeholders can ensure the safe and ethical evolution of GenAI in India and beyond. It will be intriguing to observe the actions taken by the Indian Government in response, especially given reports of impending AI legislation in India.


[1]https://www.informationpolicycentre.com/uploads/cipl_building_accountable_ai_programs_23_feb_2024.pdf.

[2] https://www.medianama.com/2023/12/223-g7-inernational-guidelines-generative-ai/.

[3] https://www.business-standard.com/technology/tech-news/india-agrees-to-historic-international-ai-collaboration-pact-at-uk-summit-123110201508_1.html.

King Stubb & Kasiva,
Advocates & Attorneys


New Delhi | Mumbai | Bangalore | Chennai | Hyderabad | Mangalore | Pune | Kochi
Tel: +91 11 41032969 | Email: info@ksandk.com