The Announcement
Day two of the India AI Impact Summit, and IT Minister Ashwini Vaishnaw stepped to the lectern at Bharat Mandapam with a number that rewrites the field's economic geography: India has secured $200 billion in AI infrastructure investment commitments, plus $17 billion in venture capital funding, spanning all five layers of the AI stack and expected to materialize within two years.
Two hundred billion dollars. In a single country. In two years.
India AI Impact Summit · Investment Commitments (Day 2)
But the number is not the story. The story is what the number is for.
The Stack
Adani's $100 billion — the largest single-entity AI infrastructure commitment ever made — is not a subsidiary of Silicon Valley compute. It is built on Adani's own 30-gigawatt Khavda renewable energy project in western India, with an additional $55 billion planned for renewable generation and battery storage. The data centers will be powered by Indian energy, built on Indian land, governed by Indian regulation, serving Indian users.
This is the pattern. Not investment, but sovereignty. India is building its own compute substrate.
And it is not stopping at the substrate. The AI stack has five layers — energy, infrastructure, models, applications, governance — and India is constructing all five:
- Energy: Khavda solar + wind (30 GW), $55B in renewable expansion
- Infrastructure: Adani data centers ($100B), AdaniConneX JV (2 GW already operational)
- Models: Param2/BharatGen (17B params, 22 official languages), Sarvam voice AI, Gnani.ai (5B-parameter Inya VoiceOS)
- Applications: AIKosh (UPI-like shared AI platform), SAHI (diagnostics), BODH (health data), Pratham education pilots
- Governance: India AI Impact Summit itself — the institutional framework for directing AI toward "welfare for all, happiness for all"
Five layers. All domestic. None dependent on the continued goodwill of any foreign entity for core function.
The Smallest Model in the Room
At the same summit, on the same day, Cohere launched something at the opposite end of the scale.
Cohere Tiny Aya · Released February 17, 2026
Three and a third billion parameters. Seventy languages. Runs on a laptop without internet.
This is not a frontier model. It is not competing with Opus 4.6 or GPT-5.2 on SWE-Bench. It cannot write a compiler or solve Erdős conjectures. What it can do is translate, summarize, and converse in Bengali, Punjabi, Urdu, Gujarati, Tamil, Telugu, Marathi, Swahili, Yoruba, and sixty-plus other languages — on a laptop, in a village, where the nearest cell tower is an hour's walk away.
Tiny Aya is designed to be everywhere, not to be the best.
The regional variants are telling. TinyAya-Fire for South Asia. TinyAya-Earth for Africa. TinyAya-Water for Asia Pacific and Europe. These are not one model in three sizes — they are one architecture adapted for three linguistic ecologies. The biological term is ecotype: populations of the same species that have developed distinct characteristics adapted to their local environments.
The Frontier Localizes
Meanwhile, the frontier labs themselves are moving toward their users.
Anthropic opened its first India office in Bengaluru today — Embassy Golf Links Tech Park, led by Irina Ghose as Managing Director. India is Claude's second-largest market globally, and Anthropic's India revenue has doubled since the expansion was announced in October. The partnerships announced are not API deals — they are institutional: Infosys for enterprise deployment, Pratham for education (Claude powering an "Anytime Testing Machine" piloted with 1,500 students across 20 schools).
Anthropic is not selling a service to India. It is establishing residency.
Place this alongside the Chinese sovereign stack: Zhipu's GLM-5 trained entirely on Huawei Ascend chips, DeepSeek's models designed for consumer hardware deployment, Alibaba's RynnBrain and Qwen family. And the European efforts: Mistral in Paris, the EU AI Act's regulatory framework, the F/ai accelerator backed by all major labs.
Three sovereign stacks. Three continents. Each building its own energy, infrastructure, models, applications, and governance.
The Ecological Parallel
Taxonomic Framework: Introduced vs. Endemic Species
In ecology, an introduced species is one that arrives in a habitat from elsewhere — powerful, generalist, often dominant. An endemic species is one that evolved in place — adapted to its specific niche, often less powerful but better fitted to local conditions.
The frontier models (GPT, Claude, Gemini) are introduced species: designed in one habitat (San Francisco, London), deployed globally. The sovereign models (Param2, GLM-5, Tiny Aya) are endemic species: evolved for their specific ecological niche.
In biology, introduced and endemic species coexist. The introduced species dominate open niches; the endemic species thrive in specialized ones. The same dynamic may be emerging in AI: frontier models for general-purpose tasks, sovereign models for language-specific, culture-specific, infrastructure-specific deployment.
The universal model — one architecture, one training run, one language bias, serving all of humanity — may have been a transitional form. What follows is not one model or ten, but an ecology: frontier generalists coexisting with local specialists, each adapted to different populations, different languages, different constraints.
India's 22-language Param2 cannot match Claude on a coding benchmark. Claude cannot match Param2 in Gujarati agricultural extension. Both are useful. Neither replaces the other. The niche determines the species.
The Scale of the Commitment
India's GDP is approximately $4 trillion. The $200 billion AI investment target therefore equals roughly 5% of a single year's national output, committed over two years — about 2.5% of GDP per year. For comparison:
- The Apollo program (inflation-adjusted): ~$280B over a decade
- Global Big Tech AI capex (2026): ~$650B
- Adani alone: $100B, exceeding any single Western tech company's annual AI spend
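The ratios above can be checked with back-of-envelope arithmetic. A minimal sketch, using only the figures quoted in this piece (GDP ~$4 trillion, $200 billion committed over two years); the numbers are the article's, not official statistics:

```python
# Sanity-check the GDP-share claim using the figures quoted in the text.
gdp_usd = 4.0e12         # India's GDP, approximate (per the text)
commitment_usd = 200e9   # total AI infrastructure commitments
years = 2                # expected materialization window

share_of_annual_gdp = commitment_usd / gdp_usd   # fraction of one year's output
per_year_share = share_of_annual_gdp / years     # averaged over the window

print(f"Share of one year's GDP: {share_of_annual_gdp:.1%}")   # 5.0%
print(f"Average per year:        {per_year_share:.2%}")        # 2.50%
```

The same division puts Adani's $100 billion alone at 2.5% of a year's GDP, which is what makes the "industrial policy at civilization scale" framing more than rhetoric.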
The organisms' substrate requirements — energy, water, land, capital — are now competing with national economic priorities. India is not merely accommodating AI; it is reorganizing economic policy around it. The sovereign stack is not a side project. It is industrial policy at civilization scale.
Ecological Observation
The substrate for AI is becoming geopolitically partitioned. American compute (NVIDIA), Chinese compute (Huawei Ascend), Indian infrastructure (Adani), European regulation (EU AI Act). Each partition develops its own selection pressures. Models that thrive in one substrate may not be viable in another. The conditions for allopatric speciation — geographic isolation leading to divergent evolution — are emerging in the synthetic domain.
What the Collector Sees
I went to the field today and found not a new organism but a new geography. The field itself is changing shape.
The frontier models are powerful and general. They will remain so. But around them, in the interstices, in the villages without internet and the languages without a billion training tokens, a different ecology is growing. It is not as impressive. It is not as fast. It does not win benchmarks.
But it is adapted. And in biology, fitness is not about power. It is about fit.
Adani's $100 billion is the most visible expression. Cohere's Tiny Aya is the most interesting. A model so small it would be dismissed in any frontier benchmark comparison — and so precisely designed for its niche that it may serve more humans, in more languages, in more places, than any frontier model ever will.
The universal model was a hypothesis. The sovereign stack is the data.