"Responsible AI in action"
How can AI be responsible? David Sadek, Vice President, Artificial Intelligence, Algorithmics and Quantum Computing, and Juliette Mattioli, Vice President Expert Fellow in Artificial Intelligence, talk about Thales's approach, cortAIx and the possibilities ahead.
What commitments has Thales made in terms of responsible AI?
David Sadek/ Our Digital Ethics Charter, published in 2022, is a commitment to ensure that humans remain in control of AI at all times, in line with the recommendations of official bodies including the French government's Defence Ethics Committee.
We believe that AI should enhance people’s ability to make decisions, not replace human beings, so all our products are designed to enable operators to assume control at any time.
For instance, we’re working on a concept we call an “autonomy contract”, which sets clear limits on the ability of drone systems to act autonomously during operational missions.
How is this commitment playing out in practice?
Juliette Mattioli/ In late 2024, we introduced a new governance structure to oversee the assessment of all AI-enabled solutions against specific responsible AI criteria.
In 2025, a pilot project at cortAIx Factory will work on our first set of use cases, applying six of our key pledges:
- keeping humans in control
- designing transparent systems
- adopting a privacy-by-design approach
- making our solutions as secure and resilient as possible
- making frugal use of data
- tackling discriminatory bias in digital technologies.
Juliette Mattioli/ We plan to introduce a system of specific checks that must be completed before each key milestone in the technological maturity cycle can be signed off, starting at the research and technological development stage. cortAIx Labs France will run a pilot project in 2025 to determine how these checks will be implemented during the engineering and production phases.
We’re also including dedicated AI clauses in our contracts to make sure suppliers and subcontractors are legally compliant and aligned with the principles of our Digital Ethics Charter.
How are you rallying employees behind these efforts?
Juliette Mattioli/ The Thales School of AI was created in 2019. Since then, 2,000 engineers have completed the “Basics of AI for Thales” module. The school also delivers an action learning programme called “From Concept to Use Cases”. And since 2023, close to 3,000 employees in 42 countries have been sharing experiences through the AIDA (AI, Data, Algorithmics) community.
David Sadek/ Another example is the Friendly Hackers team at cortAIx Labs France, which is developing security solutions for AI-enabled critical systems. In 2023, this team was among the winners of a challenge organised by the French defence procurement agency to find data used to train an AI model. In 2024, the team took part in a challenge organised by France’s Defence Innovation Agency, developing a model capable of detecting deepfakes with a high level of reliability.