
Physical hacking: Thales warns of new challenge to AI systems

  • At the 30th edition of the ACM Conference on Computer and Communications Security, held in Copenhagen on 27-30 November 2023 and including a day-long workshop on artificial intelligence and security (AISec), Thales gave a world-first presentation of a technique for carrying out a physical attack on an AI system.
  • Companies and organisations are now well aware of the risk of software attacks on their AI models, but less information is available about potential attacks on AI hardware.
  • With the increasing use of AI in defence systems (drones, vehicles, fighter aircraft, etc.), the security of these highly critical assets needs to be taken into account from the earliest design stage in order to protect the intellectual property they contain and safeguard national security interests.
Side-channel attack on an embedded AI system by Thales ©Thales

In an AI system, software is embedded in physical hardware components. Physical attacks known as side-channel attacks attempt to gain access to sensitive information by observing indirect signals generated by physical components of the system. Thales's demonstration of a side-channel attack during the AISec workshop was intended to raise awareness of this new type of threat among members of the scientific and technical community and encourage them to take this potential vulnerability into account when developing their systems.

The heat, power consumption or electromagnetic radiation generated by a system can be a valuable source of information for hackers. They can use this information to exploit the physical characteristics of a system's electronic components, understand how the embedded software functions, access its sensitive data and compromise the confidentiality of the parameters influencing the decisions made by the AI.

Joan Mazenc, Director of the Thales ITSEF¹, said: "The risk of side-channel attacks on AI systems has not been fully addressed, and specific developments are needed to guarantee their reliability and resilience and to protect the intellectual property they contain. By presenting this new threat to the international scientific and technical community at the AISec conference, Thales seeks to alert industry to this emerging risk and the advanced solutions and techniques required to build a trusted AI system."

The attack presented at the AISec conference was conducted in two steps:

1. Attack by observation, in which the physical behaviour of the system was analysed during operation to reveal the secret parameters used by the targeted AI for classifying images.

Conventional laboratory techniques can measure the electromagnetic radiation generated by a system's electronic components. By reproducing this process thousands of times in a variety of conditions, an attack AI can be trained to recognise the observed radiation patterns and thereby identify the secret parameters of the target AI with no prior knowledge of the system being observed.
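Thales has not published the tooling behind this step, but the principle can be illustrated with a minimal Python sketch: simulate electromagnetic traces whose amplitude leaks the Hamming weight of a secret byte (a common leakage model in side-channel analysis), then train a classifier to recover that leaked value from the traces alone. The leakage model, signal strength and trace format below are illustrative assumptions, not Thales's actual measurement setup.

```python
# Illustrative sketch only -- NOT Thales's attack code, which is unpublished.
# Idea of "attack by observation": record many EM traces under varying secret
# values, then train a model to recover the secret from the traces alone.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def em_trace(secret_byte: int, n_samples: int = 200) -> np.ndarray:
    """Simulate one EM trace whose amplitude leaks the Hamming weight
    of a secret byte handled by the device (assumed leakage model)."""
    leak = bin(secret_byte).count("1")        # Hamming-weight leakage
    trace = rng.normal(0.0, 1.0, n_samples)   # measurement noise
    trace[100] += 0.3 * leak                  # leakage at one sample point
    return trace

# "Reproducing this process thousands of times in a variety of conditions":
secrets = rng.integers(0, 256, size=20_000)
traces = np.stack([em_trace(int(s)) for s in secrets])
labels = np.array([bin(int(s)).count("1") for s in secrets])

X_tr, X_te, y_tr, y_te = train_test_split(traces, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"Leaked-value recovery accuracy: {clf.score(X_te, y_te):.2f}")
```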

2. Active attack, in which the parameters influencing the target AI's decisions were exploited in order to deceive the model by creating adversarial examples.

Here, the operator forces the AI to identify the figure 7 as a 5. ©Thales

This attack used scripts capable of exploiting the secret parameters revealed in the first step in order to create an image that would be incorrectly classified by the AI.
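The article does not describe how these scripts worked. A standard technique for turning known model parameters into a misclassified image is the fast gradient sign method (FGSM), sketched below in PyTorch; the function name and the choice of FGSM are assumptions for illustration, not a description of Thales's scripts.

```python
# Illustrative sketch only -- the article does not disclose Thales's scripts.
# With the parameters recovered in step 1, gradients become computable, and
# a small targeted perturbation can make e.g. a "7" be classified as a "5".
import torch
import torch.nn.functional as F

def fgsm(model: torch.nn.Module, image: torch.Tensor,
         label: torch.Tensor, epsilon: float = 0.1) -> torch.Tensor:
    """Return an adversarial copy of `image` (fast gradient sign method)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # needs the secret weights
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```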

In addition, a hacker with physical access to the target system could achieve the same objective by physically disturbing the component while the AI is running. Creating a quick disturbance in the electronic component's power supply or exposing a circuit to a strong electromagnetic pulse could alter the AI's decisions, causing it to misclassify images. This type of "fault injection" attack also needs to be taken into consideration by embedded system designers in order to build a trusted AI.
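In software, this class of fault can be modelled as a single bit flip in a stored weight while the model runs; flipping an exponent bit of a float32 weight is often enough to change a classification. The sketch below does this for a PyTorch model. The helper names and the model/layer arguments are hypothetical, chosen only to illustrate the effect.

```python
# Illustrative sketch only: emulate a fault injection (voltage glitch or EM
# pulse) in software by flipping one bit of a stored float32 weight.
import struct
import torch

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0-31) of a float32 value, as a hardware glitch might."""
    (as_int,) = struct.unpack("I", struct.pack("f", value))
    (faulty,) = struct.unpack("f", struct.pack("I", as_int ^ (1 << bit)))
    return faulty

@torch.no_grad()
def inject_fault(model: torch.nn.Module, layer: str, index: int, bit: int) -> None:
    """Corrupt one weight of `layer` in place; inference run after this call
    may misclassify inputs, which is the effect described above."""
    weight = dict(model.named_parameters())[layer]
    flat = weight.view(-1)
    flat[index] = flip_bit(float(flat[index]), bit)
```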

Thales and AI

To counter these threats, Thales runs security evaluations at its ITSEF laboratory, which is accredited by the French National Cybersecurity Agency ANSSI, and draws on the Group's extensive expertise to propose customised solutions to mitigate any vulnerabilities that are detected.

As the Group's defence and security businesses address critical requirements, often with safety-of-life implications, Thales has developed an ethical and scientific framework for the development of trusted AI based on the four strategic pillars of validity, security, explainability and responsibility. Thales solutions combine the know-how of over 300 senior AI experts and more than 4,500 cybersecurity specialists with the operational expertise of the Group's aerospace, land defence, naval defence, space and other defence and security businesses.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies within three domains: Defence & Security, Aeronautics & Space, and Digital Identity & Security. It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key areas such as quantum technologies, Edge computing, 6G and cybersecurity.

Thales has 77,000 employees in 68 countries. In 2022, the Group generated sales of €17.6 billion.

¹ Information Technology Security Evaluation Facility

Contact
Marion Bonnet, Press and social media manager, Security and Cyber
+33 (0)6 60 38 48 92 marion.bonnet@thalesgroup.com