In this episode of Interpreting India, host Shruti Mittal, research analyst in the Technology and Society Program at Carnegie India, speaks with Chinmayi Sharma, associate professor of law at Fordham Law School and nonresident fellow at the Strauss Center, the Center for Democracy and Technology, and the Atlantic Council. Together, they explore the evolving and often misunderstood debate on openness in artificial intelligence. Drawing from her forthcoming paper, Unbundling AI Openness, in the Wisconsin Law Review, Sharma explains why the traditional “open versus closed” framing oversimplifies the reality of modern AI development. She introduces the concept of “differential openness,” a framework that views AI systems as composed of multiple interdependent components—each existing along its own spectrum of openness and carrying distinct implications for innovation, safety, democratic accountability, and national security.
The episode challenges the familiar “open versus closed” framing of AI systems. Sharma argues that openness is not inherently good or bad—it is an instrumental choice that should align with specific policy goals. She introduces a seven-part taxonomy of AI components—compute, data, source code, model weights, system prompts, operational records and controls, and labor—to show how each component interacts differently with innovation, safety, and governance. Her central idea, differential openness, holds that each component sits somewhere along a spectrum rather than being entirely open or closed. For instance, a company might keep its training data private while making its system prompts partially accessible, allowing transparency without compromising competitive or national interests. Using the example of companion bots, Sharma highlights how tailored openness across components can enhance safety and oversight while protecting user privacy. She urges policymakers to adopt this nuanced approach, applying varying levels of openness based on context—whether in public services, healthcare, or defense. The episode concludes by emphasizing that understanding these layers is vital for shaping balanced AI governance that safeguards the public interest while supporting innovation.
How can regulators determine optimal openness levels for different components of AI systems? Can greater transparency coexist with innovation and competitive advantage? What governance structures can ensure that openness strengthens democratic accountability without undermining safety or national security?
Episode Contributors
Chinmayi Sharma is an associate professor of law at Fordham Law School in New York. She is a nonresident fellow at the Strauss Center, the Center for Democracy and Technology, and the Atlantic Council. She serves on Microsoft’s Responsible AI Committee and the program committees for the ACM Symposium on Computer Science and Law and the ACM Conference on Fairness, Accountability, and Transparency.
Shruti Mittal is a research analyst at Carnegie India. Her current research interests include artificial intelligence, semiconductors, compute, and data governance. She is also interested in studying the potential socio-economic value that open development and diffusion of technologies can create in the Global South.
Suggested Readings
Unbundling AI Openness by Parth Nobel, Alan Z. Rozenshtein, and Chinmayi Sharma.
Tragedy of the Digital Commons by Chinmayi Sharma.
India’s AI Strategy: Balancing Risk and Opportunity by Amlan Mohanty and Shatakratu Sahu.