In this episode of Interpreting India, Tejas Bharadwaj is joined by Almudena Azcárate Ortega, senior program analyst at Secure World Foundation, for a wide-ranging conversation on how emerging technologies, particularly AI, are reshaping the landscape of space security. A space lawyer and policy scholar with deep experience in multilateral processes, Almudena brings both technical nuance and diplomatic realism to questions that most space conversations still treat as hypothetical. This episode explores:
- What is space security, how is it different from space safety, and why does that distinction matter more than ever in the age of AI?
- How is AI being used in space domain awareness, debris management, and Earth observation, and what are the limits of relying on AI for high-stakes decisions in space?
- What happens to accountability and liability when an AI system integrated into a satellite causes damage, whether through malfunction or deliberate manipulation?
- Do we need new treaties to govern AI in space, or is the existing framework, built around the Outer Space Treaty, still fit for purpose?
Almudena opens with a distinction that anchors the entire conversation: space security, unlike space safety, is about intentional harm. It concerns deliberate attempts to disrupt, deny, or destroy space systems and the services they provide, and it is discussed not in Vienna at COPUOS but in forums such as the Conference on Disarmament in Geneva and the UN General Assembly's First Committee in New York. AI, she argues, is not new to space systems, having been slowly integrated since the late 1990s for data processing and autonomous operations, but its security implications are only beginning to surface in multilateral discussions.
On the opportunities AI presents, Almudena is clear: faster data processing for space situational awareness, smarter collision avoidance, more efficient Earth observation, and greater autonomy for robotic explorers in deep space. But she is equally clear about the risks. The black-box nature of AI systems adds a layer of opacity to operations that are already difficult to attribute, and in a geopolitically tense environment, opacity raises the risk of escalation. She walks through a scenario that captures the danger precisely: an adversary feeds incorrect data to an AI system managing satellite manoeuvres, causing the satellite to collide with another object rather than avoid it. The AI has not been weaponized in the traditional sense, but the satellite effectively has, and liability under existing frameworks is far from straightforward.
On governance, Almudena resists the temptation to call for an entirely new treaty architecture. The Outer Space Treaty, she argues, was always a treaty of principles, functioning more like a constitution than a rulebook, and its core provisions on non-discrimination, responsibility, and due regard remain relevant in the age of AI. What is needed is not a replacement but a layered approach: applying existing principles thoughtfully, developing non-legally binding norms where binding agreements are politically out of reach, and remaining flexible enough to adapt as the technology evolves. She also flags cyber as the technology deserving the most urgent attention in the near term, given how deeply software-dependent space systems have become and how difficult cyber-attacks are to attribute and deter.
Episode Contributors
Tejas Bharadwaj is a senior research analyst with the Technology and Society Program at Carnegie India. He works on space law and policy, tracking India's space sector developments as well as issues pertaining to space security and sustainability globally. He also works on AI in the military domain, including Lethal Autonomous Weapon Systems (LAWS), defense technology partnerships, and cybersecurity policies.
Almudena Azcárate Ortega is the lead researcher at UNIDIR's Space Security Programme. An experienced space lawyer and policy scholar, she has briefed UN member states on space security law and policy and has presented her research in multiple forums.