The frontier model problem

India has every ingredient on paper to be an AI superpower. It has the world's largest pool of software engineers, more than 700 million internet users, the cheapest mobile data on the planet, and a government willing to write big cheques for digital infrastructure. ChatGPT alone has roughly 145 million monthly active users in India, more than in any other country. And yet, when the world ranks AI powers in 2026, India is not in the top tier. It is not even close. The United States leads on foundational models and capital. China leads on applied AI, manufacturing integration, and state-backed compute. India sits in a third group, prominent as a consumer of AI but largely absent as a creator of it. Why?

The most uncomfortable answer is that India does not yet have a frontier model. While American labs ship GPT, Claude, and Gemini, and Chinese labs ship DeepSeek, Qwen, and Kimi, India unveiled its first serious indigenous large language models, Sarvam AI's 30-billion and 105-billion parameter models, only at the AI Impact Summit in February 2026. This is a real milestone, but it arrived years after global frontier labs had already commoditised that capability. The Outlook Business coverage of the summit captured the mood bluntly: there were pockets of spark, but no breakthroughs that created ripples globally, and many Indian start-ups looked like they were "a few GPT or Claude updates away from going out of business" because they were thin wrappers on foreign foundation models. That is the structural problem in one sentence. India is building on someone else's stack.

The compute bottleneck

The reasons begin with compute. Training a frontier model requires tens of thousands of high-end GPUs, reliable power, and the capital to burn billions of dollars before you see a return. India has historically had almost none of this. The IndiaAI Mission's ₹10,372 crore (roughly $1.2 billion) allocation is meaningful for a public programme, but it is a rounding error next to what a single US hyperscaler spends in a quarter. Google announced a $15 billion AI data centre hub in India in April 2026: welcome investment, but a reminder that the foundational layer is being built by foreign companies in India, not by Indian companies. Reliance and a handful of others are racing to change this, but data-centre buildouts take years, and the GPUs themselves still come from Nvidia.

The second reason is research depth. India produces extraordinary individual researchers; they staff DeepMind, OpenAI, Anthropic, Microsoft Research, and Meta AI in remarkable numbers. Sundar Pichai runs Google. Sriram Krishnan advises the White House on AI. But Indian universities and labs, with rare exceptions like IIIT Hyderabad, IISc Bangalore, and a few IITs, have not built the kind of dense, well-funded research clusters that produce frontier work. The PhD pipeline is small, faculty pay is uncompetitive, and the best graduates leave for Silicon Valley or Beijing-adjacent labs. China has spent two decades reversing this brain drain through aggressive talent programmes; India has barely started.

The language gap

The third reason is data and language. India is linguistically richer than any other major economy, with 22 official languages, hundreds of dialects, and a majority of citizens who are not comfortable in English. This is both a problem and an opportunity. It is a problem because every major foundation model is trained predominantly on English and Mandarin, leaving Indian languages under-served. It is an opportunity because whoever solves Indic NLP at scale wins a billion-user market. BHASHINI and Sarvam are pushing here, but they are racing against well-funded foreign labs that are also fine-tuning for Hindi, Tamil, and Bengali. Owning your language stack is sovereignty; renting it is dependence.
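The under-service has a concrete mechanical side: models trained mostly on English tend to tokenise Indic scripts inefficiently, so the same sentence costs more tokens, and therefore more compute, in Hindi than in English. A rough, self-contained illustration of one contributing factor, using UTF-8 byte counts as a crude proxy for byte-level token cost (the sample strings are my own, not from the article):

```python
# Byte-level tokenizers and byte-fallback vocabularies pay per UTF-8 byte.
# ASCII English is 1 byte per character; Devanagari code points (U+0900
# block) are 3 bytes each, so Hindi text is roughly 3x as "expensive"
# before any vocabulary merges help.

def bytes_per_char(text: str) -> float:
    """Average UTF-8 bytes per code point: a crude proxy for byte-level cost."""
    return len(text.encode("utf-8")) / len(text)

english = "Hello, how are you?"   # pure ASCII
hindi = "नमस्ते, आप कैसे हैं?"        # mostly Devanagari, some ASCII punctuation

print(f"English: {bytes_per_char(english):.2f} bytes/char")
print(f"Hindi:   {bytes_per_char(hindi):.2f} bytes/char")
```

This is only a proxy: real tokenizers like BPE learn merges, but when the training corpus is overwhelmingly English, few merges exist for Devanagari sequences, and the per-token cost gap largely survives into production.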

Capital, policy, and culture

The fourth reason is capital structure. AI is a capital-intensive game played by patient money. Indian venture capital is comparatively shallow, exit-driven, and deeply allergic to the kind of multi-year, multi-billion-dollar bets that produced OpenAI and Anthropic. Indian founders who want to build foundational AI either go abroad to raise or get squeezed into building applications. The applications layer is valuable (fintech, healthtech, voice agents, vertical SaaS), but it is also where margins compress fastest once the model layer commoditises. India is over-indexed on the layer that captures the least value.

The fifth reason is policy and procurement. Government is the largest potential customer for AI in any country, and in China and the US, state procurement has been the single biggest accelerant for domestic AI champions. India's government talks the language of sovereign AI but still buys most of its enterprise AI from foreign vendors. The DPDP Act, the absence of a clear AI law, and slow procurement cycles mean Indian start-ups cannot count on their own state as an anchor customer. Compare this with how Beijing systematically routes contracts to Baidu, Alibaba, ByteDance, and DeepSeek.

The sixth reason is harder to talk about: a culture that prizes optics over rigour. The Galgotias University incident at the AI Summit, where a Chinese-made Unitree robot dog was presented as indigenous, was a small embarrassment, but it pointed at something larger. Setting a Guinness World Record for AI responsibility pledges is not the same as shipping a model that beats GPT on a benchmark. Until the ecosystem rewards engineering substance over conference theatrics, the gap will persist.

None of this is destiny. India has genuine advantages: scale, demographics, an enormous developer base, and a digital public infrastructure (UPI, Aadhaar, ONDC, BHASHINI) that is the envy of the world. The IndiaAI Impact Summit, for all its flaws, signalled that the political establishment finally understands the stakes. Sarvam, Krutrim, and a few others are doing serious foundational work. The $200 billion in expected investment over the next two years is real money.

But catching up will require choices India has been reluctant to make: concentrate compute and capital on two or three foundation-model labs rather than spraying grants thinly, treat AI research like ISRO was treated in the 1970s, fix university faculty pay and PhD pipelines, mandate domestic procurement preferences, and stop confusing being the world's biggest user of AI with being one of its builders. The race is not lost. But India is running it in the wrong shoes, and the leaders are getting further ahead every quarter.