AI Regulation in India: Draft Frameworks and Legal Gaps



Written by Mokshada Agarwal


Introduction

Artificial Intelligence (AI) is rapidly transforming sectors such as healthcare, education, agriculture, finance, and governance in India. Its applications—from predictive analytics and natural language processing to autonomous vehicles and facial recognition—have opened new frontiers for innovation, efficiency, and economic growth. However, these advancements come with critical risks: privacy violations, algorithmic bias, accountability gaps, cybersecurity threats, and potential job displacement. Recognizing both the promise and perils of AI, the need for a comprehensive regulatory framework in India has become increasingly urgent.

Unlike the European Union, which has already enacted the AI Act, India is still in the early stages of drafting formal legal frameworks for AI governance. While several policy papers and discussion documents have been released by government bodies, there is no unified or enforceable legislation dedicated exclusively to AI regulation. This article explores the current status of AI regulation in India, examines draft frameworks and policy initiatives, and identifies significant legal gaps that must be addressed to ensure safe, ethical, and responsible AI development.

AI in India: A Policy-First Approach

India’s approach to AI regulation so far has been characterized by a policy-led, innovation-friendly strategy aimed at fostering development while cautiously exploring the need for regulation.

1. National Strategy for AI (NSAI), 2018 – NITI Aayog

The National Strategy for Artificial Intelligence, released by NITI Aayog, was India’s first significant step toward shaping its AI ecosystem. Branded as “AI for All,” the strategy focuses on five key sectors: healthcare, agriculture, education, smart mobility, and smart cities.

Key highlights:

  • Emphasis on responsible AI and ethical usage
  • Proposal for a National AI Marketplace
  • Recommendation for a Data Protection Framework
  • Advocacy for a Centre of Research Excellence (CORE) in AI and International AI Alliances

Despite being visionary, the NSAI lacks legislative force and focuses more on enabling infrastructure than enforceable regulation.

2. Responsible AI for All: Part 1 – Principles for Responsible AI (2021)

In February 2021, NITI Aayog published a two-part discussion paper, the first titled “Principles for Responsible AI”. This paper outlined key principles such as:

  • Safety and reliability
  • Equality and non-discrimination
  • Privacy and data protection
  • Transparency and explainability
  • Accountability and legal remedy

These principles mirror global ethical standards but again stop short of proposing binding rules or compliance mechanisms.

3. Draft Data Protection Frameworks

AI regulation is closely tied to data protection, since AI systems depend heavily on personal and non-personal data. India’s Digital Personal Data Protection Act, 2023 (DPDP Act) provides a foundational data privacy regime. While not specific to AI, the Act sets important boundaries on data processing that will affect AI models and applications.

However, the DPDP Act:

  • Does not address automated decision-making rights (e.g., the right to explanation)
  • Lacks sector-specific obligations for AI-based profiling
  • Does not mandate impact assessments for high-risk AI systems

Legal Gaps in India's AI Regulatory Framework

Despite growing AI adoption, India's legal framework remains fragmented and insufficient to tackle the multi-dimensional risks posed by AI technologies.

1. Absence of a Comprehensive AI Law

India currently lacks an overarching AI-specific legislation or a central regulatory authority to govern AI development and deployment. Most AI-related concerns are managed under general laws such as the Information Technology Act, 2000 (IT Act), Consumer Protection Act, 2019, and DPDP Act—none of which fully address the unique challenges of AI.

2. Lack of Regulatory Sandbox or Compliance Standards

Unlike the financial technology sector, where regulatory sandboxes exist under the RBI and SEBI, there is no equivalent framework for AI. Startups and developers have no official mechanism for testing high-risk AI applications in a controlled environment. Moreover, there are no defined technical standards or certification mechanisms for ethical AI development in India.

3. Accountability and Liability Gaps

The black-box nature of AI systems makes it difficult to assign legal liability when harm occurs. Indian law has yet to answer basic questions:

  • Who is liable for an autonomous AI system’s actions?
  • Can an AI system be a legal entity?
  • What are the remedies for algorithmic discrimination or harm?

These questions are critical for future litigation and governance.

4. Algorithmic Bias and Discrimination

There are no legal safeguards or audit mandates to detect or prevent algorithmic bias, which could lead to unfair outcomes in areas like hiring, credit scoring, law enforcement, and healthcare. This poses a serious risk to equality and non-discrimination rights under Article 14 of the Indian Constitution.
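To make the idea of an audit mandate concrete, the check a regulator might require can be sketched in a few lines. The snippet below is purely illustrative: the function names, the sample data, and the 80% "four-fifths rule" threshold (a convention borrowed from US employment analytics, not anything prescribed by Indian law) are all assumptions.

```python
def selection_rate(outcomes):
    """Fraction of positive decisions (e.g. 'hired' = 1) in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group selection rate to the higher one.

    A ratio below 0.8 is the conventional 'four-fifths rule' red flag
    used in US employment analytics; it is used here only as an example
    of the kind of threshold an audit mandate could specify.
    """
    lo, hi = sorted((selection_rate(group_a), selection_rate(group_b)))
    return lo / hi if hi > 0 else 1.0

# Hypothetical audit data from a hiring model: 1 = selected, 0 = rejected
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # selection rate 0.8
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # selection rate 0.3

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.3 / 0.8 = 0.38
```

A statutory audit regime would of course involve far more than a single ratio, but even this minimal test shows that bias detection is technically inexpensive; what is missing in India is the legal obligation to run it.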

5. Explainability and Transparency Requirements

Current Indian laws do not mandate explainability for AI systems used in decision-making. The lack of transparency in algorithmic decisions can result in opacity, exclusion, and denial of justice, particularly when AI is used by government agencies.
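What a "right to explanation" could require in practice is easiest to see for a simple linear scoring model, where each feature's contribution to a decision can be reported exactly. The sketch below is a hypothetical credit-scoring example (the weights, feature names, and values are invented); for genuinely black-box models, approximation techniques such as SHAP pursue the same goal.

```python
def explain_linear_score(weights, features, bias=0.0):
    """Return a linear model's score and each feature's additive contribution.

    For a linear scorer (score = bias + sum(w_i * x_i)) the per-feature
    contributions are exact, so the decision is fully explainable.
    """
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical credit-scoring model (all numbers are made up)
weights = {"income": 0.5, "late_payments": -2.0, "account_age_years": 0.3}
applicant = {"income": 6.0, "late_payments": 2.0, "account_age_years": 4.0}

score, why = explain_linear_score(weights, applicant, bias=1.0)
for name, contrib in sorted(why.items(), key=lambda kv: kv[1]):
    print(f"{name:>18}: {contrib:+.1f}")
print(f"{'total score':>18}: {score:+.1f}")
```

An applicant denied credit by such a system could be told, feature by feature, why: here the two late payments outweigh the income contribution. A transparency mandate would turn this kind of disclosure from a design choice into a legal duty.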

Global Best Practices for AI Regulation

India can draw inspiration from emerging global regulatory approaches:

  • EU AI Act (proposed 2021, adopted 2024): Categorizes AI systems into risk tiers and imposes obligations such as conformity assessments, human oversight, and transparency requirements on high-risk systems.
  • OECD AI Principles (2019): Endorsed by the G20, including India, and adopted by dozens of countries, promoting inclusive growth, transparency, robustness, and accountability.
  • US Blueprint for an AI Bill of Rights (2022): Sets out voluntary guidelines focused on privacy, non-discrimination, and explainability.

These frameworks provide templates for India to develop a legally binding and sector-specific AI regulation.

Recommendations for a Future-Ready AI Law in India

  1. Establish a Central AI Regulatory Authority
    • To oversee AI risk classification, audit mechanisms, certification, and enforcement
    • To coordinate among ministries, industry, and academia
  2. Create a Legal Framework for High-Risk AI
    • Mandatory algorithmic audits and impact assessments
    • Registration and certification of high-risk AI systems (e.g., facial recognition, financial scoring)
  3. Introduce Rights-Based Protections
    • Right to explanation for automated decisions
    • Right to opt-out of algorithmic profiling
    • Right to redress for harm caused by AI
  4. Define Liability Frameworks
    • Clear rules for civil and criminal liability
    • Safe harbor provisions for compliant developers
    • Insurance models for autonomous systems
  5. Promote Responsible Innovation
    • Incentivize ethical AI research
    • Regulatory sandboxes for startups and testing
    • Capacity-building initiatives for developers and legal professionals
  6. Ensure Inclusivity and Accessibility
    • AI systems must be trained on diverse data sets
    • Focus on reducing digital divide and language barriers

Conclusion

India stands at a critical juncture in its AI journey. While the country has made significant progress through policy formulation and public discourse, the absence of enforceable laws and regulatory structures leaves a legal vacuum. With AI being deployed across sensitive domains—education, policing, healthcare, and employment—the stakes are too high for delayed regulation.

A balanced legal framework that promotes innovation while safeguarding individual rights, fairness, and accountability is the need of the hour. Drawing on global best practices and tailoring them to India’s socio-economic context, policymakers must act swiftly to build a comprehensive and forward-looking AI regulatory regime. Only then can India harness the full potential of artificial intelligence as a force for inclusive growth and responsible digital governance.