What the India AI Governance Framework Actually Says (And Why Practitioners Should Care)

  • Feb 19

This week, while most of the world's attention was fixed on Silicon Valley product launches and Brussels enforcement actions, India did something significant: it formally released a full national AI governance framework — seven principles, three new institutions, a phased implementation roadmap, and an explicit bid for global governance leadership.


What Just Happened — and When

The India AI Governance Guidelines for Enabling Safe and Trusted AI Innovation were published on February 15, 2026, just ahead of the India AI Impact Summit 2026 — a five-day event (February 16–20) convening policymakers, researchers, and industry leaders from over 100 countries in New Delhi. The timing was deliberate. India wanted to walk into that room with a document, not just a vision statement.


The framework itself has been in development since July 2025, when the Ministry of Electronics and Information Technology (MeitY) constituted a drafting committee under Professor Balaraman Ravindran. That committee reviewed existing laws, examined global best practices, and ran a substantive public consultation that received over 2,500 submissions before the final text was locked.


Then, at the summit itself, Prime Minister Modi went further: he unveiled "NAAIV" — the Human-Vision for Artificial Intelligence — a five-pillar global governance architecture that India is explicitly positioning as its contribution to shaping international AI norms. Speaking before representatives of more than 100 countries, Modi described NAAIV as a blueprint for responsible, inclusive, and sovereign AI development worldwide, warning against "techno-nationalism and machine-centric development that sidelines human values."


In the space of a week, India moved from having no formal AI governance framework to having two interlocking ones — one domestic, one global in ambition.


What the Framework Actually Says

The Guidelines are a structured, four-part document covering governing principles, institutional architecture, an action plan, and sector-specific guidance. Here is what sits at the center of it.


The Seven Sutras. India anchored the entire framework in seven guiding principles it calls sutras — a deliberate linguistic choice that signals this isn't simply a translation of EU or US governance thinking. The seven are:

  1. Trust as Foundation

  2. People First

  3. Fairness & Equity

  4. Accountability

  5. Understandable by Design

  6. Safety, Resilience & Sustainability

  7. Innovation over Restraint


That last one is the headline. "Innovation over Restraint" is India's explicit philosophical stake in the ground — a direct rebuttal to the EU's precautionary, risk-classification-first approach. The framework is unambiguous: AI is a catalyst for inclusive growth, and governance design should reflect that rather than choke it.


Three new institutions. The framework proposes an AI Governance Group (AIGG) as a permanent inter-agency policy coordination body, a Technology & Policy Expert Committee (TPEC) to provide technical and legal advisory support, and — most significantly — an AI Safety Institute (AISI) to handle model testing, safety standards, and international coordination. The AISI is India's formal entry into the emerging global network of national AI safety institutes, currently dominated by the UK and US.


A phased roadmap. Short term: stand up the institutions, issue voluntary commitments, and build an AI incidents database. Medium term: launch regulatory sandboxes, pass amendments to close legal gaps across the IT Act, copyright law, and DPDP interfaces, and establish common standards. Long term: continuous horizon scanning and governance evolution as the technology matures. The sequencing is deliberate: it avoids front-loading compliance burden before the ecosystem is ready to carry it.


Why This Is Different From What You've Seen Before

The temptation is to read India's framework as "EU-lite" — principles without enforcement, aspirations without teeth. That reading misses what is actually novel here.


The DPI integration is genuinely distinctive. India's governance model is inseparable from its Digital Public Infrastructure stack — Aadhaar, UPI, and the emerging AI-layer equivalents like AIKosh (which now hosts over 9,500 datasets and 273 sectoral models) and a subsidized national compute facility with over 38,000 GPUs onboarded. The "carrot" for responsible AI adoption isn't just reputational; it's access to state-backed infrastructure that most Indian startups couldn't afford otherwise. That is a governance lever the EU and US don't have in the same form.


The risk taxonomy is India-specific. Rather than importing EU-style risk classifications (unacceptable, high, limited, minimal), India's framework builds a risk taxonomy around its own social context — deepfakes targeting women, child safety risks, language bias across hundreds of languages, caste discrimination in algorithmic outputs. This isn't just cultural flavor; it reflects a genuine methodological argument that governance designed around abstract categories will fail in practice if those categories don't match the actual harm landscape.


Soft law as a feature, not a bug. India has explicitly chosen a "soft law" approach — voluntary commitments, self-certification, transparency reports, and third-party audits — before moving to binding obligations. Critics will call this toothless. But India's framing is that in a nascent ecosystem, hard mandates imposed prematurely will push development underground or offshore, not make it safer. The framework builds in escalation mechanisms; it is designed to harden as the industry matures. Notably, it also explicitly states that a standalone AI Act is not needed at this stage — a pointed contrast to the EU.



What Should AI Governance Practitioners Take Away?

A few things worth sitting with.


The seven sutras will travel. Whether or not India's institutions get fully stood up on schedule, the framing of "Innovation over Restraint" as a first-order governance principle — not a reluctant carve-out — will be influential. Expect to see this language echoed in other Global South governance frameworks over the next 18 months.


NAAIV is the one to watch globally. The domestic Guidelines are India's internal framework. NAAIV is India's attempt to export a governance philosophy. Five pillars, 100+ countries in the room, PM-level endorsement. It's early — the architecture hasn't been fully detailed publicly yet — but this is India's attempt to do for AI governance what it did with UPI: build something domestically, then offer it as a global public good.


The soft-law-to-hard-law pipeline is the real design question. India has made a bet that voluntary compliance with smart incentives, sandboxes, and escalation triggers can do the work the EU is doing with penalties. That hypothesis will be tested over the next few years. The AISI's actual resourcing and enforcement authority is where the bet will be won or lost.

DPI as governance infrastructure is undertheorized globally. The idea that you can steer AI adoption behavior through infrastructure access rather than regulation alone remains genuinely underexplored in governance scholarship. India may be running the world's largest experiment in that approach.


"Understandable by Design" has teeth if operationalized. Explainability as a design requirement — not just a disclosure requirement — is more demanding than most frameworks have attempted. If India develops concrete technical standards for what this means sector by sector, that would be a meaningful contribution to the global technical standards landscape.


The Real Test

India has produced a serious, thoughtful governance framework — and launched it onto the global stage with considerable political ambition. The committee that wrote the Guidelines is credible, the public consultation was substantive, and the NAAIV announcement signals that India intends to be a norm-setter, not just a norm-taker.


But frameworks are proposals. The AI Governance Group, the AISI, the incident database, the sandboxes — none of these exist yet. Resourcing, inter-ministry coordination, and political will to stand up institutions that can push back on powerful industry actors are the actual hard problems. India has a long history of excellent policy documents and uneven implementation.

The governance community is right to watch this closely. India at scale — 1.4 billion people, a massive developer ecosystem, AI deployment planned across agriculture, healthcare, and public services — is a governance laboratory that will produce real-world evidence about what works and what doesn't. That evidence will matter for everyone building AI governance frameworks, regardless of jurisdiction.

