
How can we guarantee product resilience in the face of AI-assisted cyberattacks?

At its ITSEF facility in Toulouse, the Thales teams test products for their resistance to all types of cyberattacks — including attacks assisted by ever more powerful AI technologies. 

Artificial intelligence (AI) is a technological game-changer that upends many activities, including cybersecurity. State actors, terrorist groups and organised crime networks are already known to be using AI to carry out attacks.

“A cyberattacker with knowledge of artificial intelligence can really expand their capabilities, automatically solving complex problems that previously required expertise across multiple disciplines,” says security assessor Gabriel Zaid from Thales's Cyber Defence Solutions business.

Today, AI has become an effective way to compromise the defences of many types of products and systems. It also brings down the overall cost of mounting a cyberattack, which means more and more attacks are being carried out.

The systems embedded in smartphones, credit cards, biometric passports and other devices are a prime target for cyberattackers, who can turn to AI to force access to the sensitive data stored or shared by these systems, such as bank details and fingerprints.

The best way to deal with the threat is to anticipate it – and AI has a key role to play here too. Thales’s Information Technology Security Evaluation Facility (ITSEF) in Toulouse uses AI techniques to simulate real-life attacks and assess the security of physical products for its customers. 

 

A team of ethical hackers 

The ITSEF’s customers — critical national infrastructure providers, businesses, government agencies, etc. — can’t afford to cut corners when it comes to product and data security. They need to have absolute confidence in their technologies and new product developments before they go to market.

Working as ethical hackers, the ITSEF’s engineers scrutinise critical components of these products, putting themselves in the shoes of cyberattackers to better understand the threats and identify security vulnerabilities in the various types of systems.

“Our AI experts subject the products to known AI-assisted attacks and also devise new forms of attack – whatever it takes to test the inner workings of cryptographic protection!”
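
To give a concrete flavour of what an AI-assisted attack on a cryptographic implementation can look like, here is a minimal sketch of a profiling side-channel attack, in which a small neural network is trained to predict the power leakage of an S-box computation and is then used to rank key guesses. Everything in it (the toy substitution table, the Hamming-weight leakage model, the noise level and the trace counts) is an illustrative assumption made for the example, not a description of the ITSEF's actual tools or targets.

```python
# Illustrative sketch of a deep-learning profiling side-channel attack on
# SIMULATED traces. The substitution table, leakage model, noise level and
# trace counts are hypothetical assumptions for the example only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy 8-bit substitution box (a random permutation standing in for AES's S-box)
SBOX = rng.permutation(256).astype(np.uint8)
# Hamming weight of every byte value: a common power-leakage model
HW = np.array([bin(v).count("1") for v in range(256)])

NOISE = 1.0        # standard deviation of the simulated measurement noise
N_PROFILE = 5000   # traces captured on a clone device whose key is known
N_ATTACK = 200     # traces captured on the target device

def simulate(n_traces, key_byte):
    """Simulate traces that leak HW(SBOX[plaintext ^ key]) at one time sample."""
    pts = rng.integers(0, 256, n_traces, dtype=np.uint8)
    leakage = HW[SBOX[pts ^ key_byte]].astype(float)
    traces = rng.normal(0.0, NOISE, (n_traces, 8))   # 8 time samples per trace
    traces[:, 3] += leakage                          # leakage sits at sample 3
    return traces, pts

# Profiling phase: on the clone device the key is known, so each trace can be
# labelled with the Hamming weight of the S-box output and used for training.
clone_key = 0xA7
prof_traces, prof_pts = simulate(N_PROFILE, clone_key)
prof_labels = HW[SBOX[prof_pts ^ clone_key]]
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
model.fit(prof_traces, prof_labels)

# Attack phase: score every possible key byte by accumulating the model's
# log-probability of the leakage class that the key hypothesis predicts.
secret_key = 0x3C
atk_traces, atk_pts = simulate(N_ATTACK, secret_key)
log_probs = np.log(model.predict_proba(atk_traces) + 1e-40)
scores = np.zeros(256)
for guess in range(256):
    hyp_class = HW[SBOX[atk_pts ^ guess]]              # hypothesised leakage class
    cols = np.searchsorted(model.classes_, hyp_class)  # map class -> proba column
    scores[guess] = log_probs[np.arange(N_ATTACK), cols].sum()

best = int(np.argmax(scores))
print(f"recovered key byte: {best:#04x} (true key byte: {secret_key:#04x})")
```

On real measured traces the same approach is applied one key byte at a time; what changes in practice is mainly the trace preprocessing, the size of the network and the number of traces needed, not the overall logic.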

Products that successfully withstand this shock treatment are duly awarded high-level security certification by public agencies such as ANSSI, France’s national agency for information system security, and/or private certification bodies. 

AI technology is evolving rapidly, and so is the corresponding cyberthreat. “AI will soon be embedded in physical systems like robots that help with routine tasks at home, assembly robots in factories or unmanned systems for the armed forces,” says Gabriel Zaid. “Here at the ITSEF, the next step is to conceptualise future attacks on the AI of these systems so we can assess their robustness.”

The stakes are high. By disrupting future machines and the decisions they make, cyberattackers could do massive damage, especially in a military context. More than ever, the ITSEF's engineers and their specialised expertise are going to be a crucial asset in the fight against AI-assisted cyberthreats.