AI Regulation: Building Guardrails Without Killing Innovation

Sanjeev Sanyal, Member of the Prime Minister's Economic Advisory Council (PM-EAC), has proposed a sectorally partitioned regulatory architecture for AI to prevent cascading failures across critical infrastructure.

In the ever-accelerating world of artificial intelligence (AI), India is at a crucial crossroads: how to regulate AI without stifling its potential. Sanjeev Sanyal, Member of the Economic Advisory Council to the Prime Minister (EAC-PM), has put forward a regulatory architecture built around the idea of "partition walls" across sectors. His concern is rooted in the cascading risk AI poses to interconnected systems: a failure in one area, say finance, could quickly spiral into power, transport, and beyond.

In his January 2024 paper, "A Complex Adaptive System Framework to Regulate Artificial Intelligence", Sanyal and his co-authors propose a framework that borrows heavily from the regulation of financial markets. Just as circuit breakers prevent stock market crashes from escalating, the paper proposes manual overrides, transparency mandates, accountability trails, and algorithm registries for AI. Sanyal argues that AI systems behave like complex adaptive systems (CAS), in which even small glitches can cause outsized harm, a butterfly effect. Regulatory partitions between sectors, he contends, can prevent a 'wildfire' from engulfing the entire digital ecosystem.
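To make the circuit-breaker analogy concrete, the sketch below shows one way an AI decision service could be wrapped with a trip threshold, a manual override, and an accountability trail. It is a minimal illustration in Python; the class, thresholds, and logging format are assumptions for this example and are not drawn from the paper.

```python
import time
from typing import Any, Callable

class AICircuitBreaker:
    """Wraps an AI decision function with a trip threshold,
    a manual override, and an append-only accountability trail."""

    def __init__(self, decide: Callable[[Any], Any], max_anomalies: int = 3):
        self.decide = decide
        self.max_anomalies = max_anomalies   # illustrative threshold
        self.anomaly_count = 0
        self.tripped = False
        self.audit_trail = []                # accountability trail

    def _log(self, event: str, payload: Any) -> None:
        self.audit_trail.append({"ts": time.time(), "event": event, "payload": payload})

    def run(self, request: Any, is_anomalous: Callable[[Any], bool]) -> Any:
        if self.tripped:
            self._log("blocked", request)
            raise RuntimeError("Circuit open: manual override required")
        decision = self.decide(request)
        self._log("decision", decision)
        if is_anomalous(decision):
            self.anomaly_count += 1
            self._log("anomaly", decision)
            if self.anomaly_count >= self.max_anomalies:
                self.tripped = True          # halt further automated decisions
                self._log("tripped", None)
        return decision

    def manual_reset(self, operator_id: str) -> None:
        """The 'manual override': a human explicitly resumes automation."""
        self.tripped = False
        self.anomaly_count = 0
        self._log("manual_reset", operator_id)

# Usage: wrap a toy scoring function and flag scores above 0.9 as anomalous.
breaker = AICircuitBreaker(decide=lambda req: {"score": len(req) % 10 / 10}, max_anomalies=2)
print(breaker.run("payment-batch-001", is_anomalous=lambda d: d["score"] > 0.9))
```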

AI: Innovation and regulation should go hand-in-hand

However, the CIOs FE CIO spoke to favour a balanced approach over Sanyal's 'partition walls' stance. Nikhil Prabhakar, Chief Information Officer (CIO), IndiaMART, says, "The future of AI lies not in choosing between innovation and regulation, but in architecting a framework where both can advance together. A one-size-fits-all approach would stifle the immense potential of AI. At the same time, ensuring the safe and responsible use of AI remains paramount. A combination of thoughtful oversight acting as circuit breakers, along with an innovative spirit, is the right path forward."

Shiv Kumar Bhasin, Senior Technologist and BFSI Industry Practitioner, is deeply skeptical of the cascading-catastrophe hypothesis that underpins Sanyal's partitioning logic. "It is just an imaginary fear, I guess," Bhasin said, dismissing the notion, aired by Sanyal in various media interactions, that a single AI failure in finance could take down power and transport networks. Bhasin added, "If I use a generic LLM in financial services and other services also use it, and there is some kind of flash point or glitch in the financial services sector, that the glitch will necessarily get propagated to other sectors, I think, is just an imaginary thing."

Bhasin further pointed out that AI models are context-dependent, making a cross-sectoral failure unlikely. "The other sector's context parameters will be very different. That AI will behave the same in both sectors is highly unlikely; I do not think so."

The idea of domain-specific LLMs

That said, Bhasin does endorse domain-specific LLMs, but only in a complementary role. "If you have a highly specialized finance model, or something like that, it still makes sense," he said. He supports a maker-checker approach: "Generic AI takes the same model logic, and that decision gets run by the specialized AI model, which validates it, or rather refines it after validation."

Further, Bhasin envisions a layered model that uses both generic and sector-specific systems without strict silos. "In sensitive sectors, you can have two models, like one generic, one sector-specific, but not curtailing, or not basically branching out one sector from another."
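As a rough illustration of the maker-checker and layered arrangement Bhasin describes, the sketch below pairs a generic 'maker' model with a sector-specific 'checker' that validates or refines its proposals. The interfaces, the confidence threshold, and the refinement rule are hypothetical, assumed only for this example.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float
    rationale: str

class GenericModel:
    """'Maker': a generic LLM-style component that proposes a decision."""
    def propose(self, request: str) -> Decision:
        # Placeholder logic standing in for a call to a general-purpose model.
        return Decision(action="approve_loan", confidence=0.72,
                        rationale=f"generic assessment of: {request}")

class FinanceChecker:
    """'Checker': a domain-specific model that validates or refines the proposal."""
    MIN_CONFIDENCE = 0.8   # illustrative sector-specific threshold

    def review(self, proposal: Decision) -> Decision:
        if proposal.confidence >= self.MIN_CONFIDENCE:
            return proposal
        # Refine rather than discard: tighten the action and record why.
        return Decision(action="refer_to_human", confidence=proposal.confidence,
                        rationale=proposal.rationale + " | below finance threshold")

def layered_decision(request: str) -> Decision:
    """Generic and sector-specific models cooperate; neither is walled off."""
    proposal = GenericModel().propose(request)
    return FinanceChecker().review(proposal)

print(layered_decision("SME working-capital application #4821"))
```

The point of such a design is that the two models cooperate within one pipeline rather than being partitioned away from each other.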

Finally, while he accepts the need for certain boundaries in AI deployment, he rejects rigid sectoral walls. “While you don't propose a wall, there should still be some restrictions in terms of how AI operates in between various sectors. But curtailing is not an answer.”

From an Indian, and perhaps global, standpoint, all eyes are now on the Artificial Intelligence Impact Summit 2026, to be organised by the Government of India. The global summit is expected to set the direction for how AI is regulated.
