AI in Defence: Reconciling Sovereignty and Exportability


Artificial intelligence is gaining ground rapidly within the defence sector. Although large-scale integration remains in its infancy, the operational gains it promises are such that it will become indispensable over the coming decades. Yet high-performing AI systems depend on vast quantities of recent, high-quality data, which in turn presupposes sustained cooperation between exporting states, importing states and industrial partners. This raises an important question: how can we foster the collaboration that drives AI performance, while ensuring that every stakeholder maintains their independence and control?

Position paper - Reconciling AI sovereignty and exportability with purpose-built models and data strategies

For some, the notion of AI sovereignty may conjure up dystopian visions of an artificial intelligence operating independently, even beyond human control. This is a misconception. The sovereignty at stake here is not that of the machine itself, but that of the actors who design, deploy and operate it.

In AI, the richness and diversity of training data - sonar and radar imagery, photographs, satellite images, video, audio and documentation - are decisive factors in model performance. 


Partner nations therefore have, in principle, every interest in working together to develop robust and reliable models at lower cost. However, AI models are inherently exposed to specific forms of attack. These include adversarial examples - inputs deliberately crafted to mislead a model - and reverse engineering* techniques, which may enable an attacker to infer elements of the data used during training.

“Given the risks surrounding data, exporting countries, including France, are currently exercising considerable caution with respect to these technologies, for reasons of national defence,” explains Gabriel Rangoni, Vice President Strategy & Marketing at Thales and co-author of the position paper “Reconciling AI Sovereignty and Exportability”.

*Reverse engineering, when applied to AI models, refers to techniques aimed at analysing a model’s behaviour, outputs or parameters in order to infer the data used during its training.


Yet in the face of major powers such as the United States and China, which are investing heavily in AI systems, cooperation among allies - particularly within Europe - is essential if they are to remain competitive in the years ahead. This requires lifting barriers and fostering trust, to the benefit of all. As Fabien Flacher, Technical & Engineering Director at Thales CortAIx Factory, puts it: “We take a risk-based approach: we seek to understand what states are concerned about, clarify their requirements, and then address them through engineering solutions, which is our core business. The ultimate objective is to facilitate collaboration in AI while safeguarding each nation’s intellectual property and classified information.”

Enabling Secure Model Training

AI-based systems are distinguished by their capacity to evolve over time - provided they are regularly retrained on rich, diverse and up-to-date datasets. This explains why collaboration between manufacturers, exporting countries and importing countries is essential to achieve optimal performance. “An underwater mine detection model trained exclusively on images taken in the harbour of Toulon would perform less effectively than a system trained on data from a wide range of seabeds,” Gabriel Rangoni notes. “Environmental diversity is what confers robustness on the algorithm.”


To address States’ legitimate concerns about exposing some of their AI models, Thales advocates secure collaboration methods. These include transfer learning, whereby each actor improves a pre-trained AI model independently without sharing raw data, and collaborative learning, in which several actors jointly train a model without ever centralising sensitive information.
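As an illustration of the collaborative-learning idea, the following sketch implements simple federated averaging (the partners, data and numbers are hypothetical assumptions for illustration, not Thales's implementation): each partner improves the shared model on its own private data, and only the updated model weights are exchanged and averaged.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=20):
    """One partner refines the shared model on its private data.
    Only the resulting weights leave the site, never X or y."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

# Hypothetical setup: three partners, each holding private samples of
# the same underlying relationship y = 2*x0 - 1*x1.
true_w = np.array([2.0, -1.0])
parties = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    parties.append((X, X @ true_w + rng.normal(scale=0.05, size=50)))

w_global = np.zeros(2)
for _ in range(10):                          # federated averaging rounds
    local_ws = [local_update(w_global, X, y) for X, y in parties]
    w_global = np.mean(local_ws, axis=0)     # only weights are aggregated

print(np.round(w_global, 2))                 # close to the true weights
```

The raw datasets never leave their owners; the only artefact that crosses organisational boundaries is the model itself.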

These approaches can be reinforced through a range of trust-building mechanisms: data desensitisation and anonymisation, cryptographic guarantees during exchanges, and full traceability throughout the AI lifecycle. Together, these measures establish a clear and secure framework conducive to delivering the best possible product in the interests of all stakeholders.

Relevant and secure data: a cornerstone of trust


Data governance is the linchpin of AI sovereignty. Training data must first be classified and labelled, with precisely defined access controls. Data sanitisation may also be necessary for confidentiality reasons. As Gabriel Rangoni explains: “If an AI model is trained on imagery from a French military satellite, certain metadata revealing that satellite’s capabilities can be removed; the images can be slightly altered; resolution can be reduced.”
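A minimal sketch of such sanitisation might look as follows (the metadata field names and the blocking factor are purely illustrative assumptions, not an actual Thales pipeline): capability-revealing metadata is dropped and resolution is coarsened by block-averaging pixels.

```python
import numpy as np

def sanitise(image, metadata, factor=4):
    """Illustrative sanitisation step: drop sensitive metadata fields
    and coarsen resolution by averaging factor x factor pixel blocks."""
    safe_meta = {k: v for k, v in metadata.items()
                 if k not in {"sensor_id", "ground_resolution_m", "orbit"}}
    h, w = image.shape
    h, w = h - h % factor, w - w % factor    # crop to a multiple of factor
    coarse = image[:h, :w].reshape(h // factor, factor,
                                   w // factor, factor).mean(axis=(1, 3))
    return coarse, safe_meta

img = np.arange(64.0).reshape(8, 8)
meta = {"timestamp": "2024-01-01", "sensor_id": "SAT-X", "orbit": "LEO"}
low_res, safe = sanitise(img, meta)
print(low_res.shape, sorted(safe))   # (2, 2) ['timestamp']
```

In practice, which fields can be removed and how much resolution can be sacrificed would be decided case by case, since over-aggressive sanitisation erodes the training value of the data.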

To further protect original datasets, “noise” can be injected during training: deliberate perturbations or misleading elements designed to make model exploitation more difficult. As with data preparation and sanitisation, achieving the right balance is critical: excessive protective measures may degrade algorithmic efficiency and, consequently, the performance of the final system.
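The trade-off between protection and performance can be seen in a toy experiment, sketched here with Gaussian noise added to the training gradients in the spirit of differentially private training (an illustrative assumption, not the specific perturbation Thales applies):

```python
import numpy as np

rng = np.random.default_rng(1)

def train(X, y, noise_scale, lr=0.1, steps=200):
    """Gradient descent with noise injected into every gradient step,
    making the training data harder to infer from the final model."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * (grad + rng.normal(scale=noise_scale, size=w.shape))
    return w

X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0])
for s in (0.0, 0.5):
    w = train(X, y, noise_scale=s)
    err = np.linalg.norm(w - [2.0, -1.0])
    print(f"noise {s}: error {err:.3f}")  # more noise, less accuracy
```

More noise makes the underlying data harder to reconstruct, but the fitted model drifts further from the best achievable fit, which is exactly the balance the article describes.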


An AI toolchain tailored to sensitive data and cooperation

To strike this balance, Thales has developed a sovereign AI toolchain, AIOps, designed to ensure robust data governance, model integrity and continuous retraining - even in disconnected or highly sensitive environments.

These tools provide critical support at every stage, from data classification and cleansing to successive model retraining cycles. As Gabriel Rangoni explains: “Our tools enable key trade-offs to be assessed, such as whether the performance gains justify retraining a model, or how far datasets can be sanitised without impairing effectiveness.”

Most importantly, the AIOps toolchain helps allied nations work together through features such as traceability, watermarking (to detect unauthorised use of a model) and protection against unlearning attacks. These safeguards make it possible to manage shared information dynamically and securely. One such tool, SaferLearn, offers a flexible framework for secure collaborative learning, allowing distributed sensitive data to be exploited without the need for a central trusted authority.

“It is an important tool,” confirms Fabien Flacher. “It tackles a key weakness in current AI systems: the risk that someone could ask questions in a way that reveals parts of the model’s training data.” Such tools, among others, make it possible to reconcile performance, cooperation and sovereign control. 
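The weakness Fabien Flacher describes can be made concrete with a toy membership-inference sketch (entirely illustrative: the model, confidence function and threshold are hypothetical): a model that has memorised its training data responds with tell-tale high confidence when queried on points it was trained on.

```python
import numpy as np

rng = np.random.default_rng(2)

# A worst case for privacy: a model that memorises its training set
# outright, here represented by nearest-neighbour lookup.
train_pts = rng.normal(size=(20, 2))

def model_confidence(x):
    """Confidence that decays with distance to the nearest training
    point - a signal the model inadvertently leaks to any querier."""
    d = np.min(np.linalg.norm(train_pts - x, axis=1))
    return np.exp(-d)

def is_member(x):
    # Membership inference: threshold the leaked confidence signal.
    return model_confidence(x) > 0.99

member = train_pts[0]            # a point the model was trained on
non_member = rng.normal(size=2)  # a fresh point it never saw
print(is_member(member), is_member(non_member))
```

Defences of the kind the toolchain provides, such as output perturbation and query auditing, aim to blunt precisely this style of probing.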

In the race to master AI, a sovereign AIOps toolchain is not a luxury; it is a strategic necessity, notwithstanding the complexity it entails.

Working together while respecting each nation’s sovereignty

The position paper “Reconciling AI Sovereignty and Exportability” builds on decades of work by Thales in artificial intelligence, notably within its CortAIx entities, and places respect for national sovereignty at the forefront of its priorities.

“Over the coming decades, artificial intelligence will continue to gain momentum and play a major role in the defence domain,” Gabriel Rangoni predicts. “These AI systems will need to be high-performing and reliable, which requires significant resources and, in some cases, the sharing of certain models or operational data between States. In such a landscape, those who choose to partner and work together will be stronger. At Thales, our objective is to provide the technical means to enable this cooperation, while fully respecting each nation’s sovereignty.”

Discover more

Thales is accelerating the integration of Artificial Intelligence into critical systems through its trusted AI strategy, combining expertise in secure, reliable architectures with mastery of advanced AI technologies.
