AI is making life easier for criminals. Why? And how can CISOs fight back?
Cybercriminals are enthusiastic early adopters of AI tools. In an edition of the Thales Security Sessions Podcast, three cybersecurity specialists discussed the best countermeasures…
Reid Hoffman, co-founder of LinkedIn, describes AI as a new kind of superpower. He says: “A car gives you superpowers for mobility. A phone gives you superpowers for connectivity and information. AI gives you superpowers for the entire world of information, navigation and decision-making.”
Regrettably, these superpowers are available to everyone, including criminals. In fact, bad actors are already using AI tools to improve existing scams and invent new ones. One study revealed that AI-enabled scams rose by 456 percent between May 2024 and April 2025, compared with the same period in 2023/2024.
So how exactly is AI changing the nature of cybercrime?
For Nadav Avital, Senior Director of Threat Research at Imperva (a Thales company), perhaps the most important new factor is the way AI widens access to tools, enabling non-technical attackers to commit more crime.
He says: “AI is lowering the bar for unskilled attackers, which is overloading defenders with many more attacks that they need to deal with.” Avital gives a few examples of these new capabilities.
• Creating better content. Criminals can use AI tools to access specific knowledge and to compose more convincing phishing emails and texts.
• Mapping target systems. Hackers can use AI to probe a target system, then find and exploit vulnerabilities. This process used to be manual. Now it’s automated.
• Discovering zero-day vulnerabilities. Google recently used AI to discover a previously unknown zero-day vulnerability in the SQLite database engine. It seems inevitable that attackers will copy this tactic.
Obviously, one way to tackle the content and deepfake problem is with better consumer education. But this is just the public-facing dimension. For cybersecurity specialists, the deeper question is how to protect AI systems themselves from attacks by hostile actors.
This demands a new approach, different from protecting against, say, viruses or malware. According to Asad Ali, Director of Technology and Innovation at Thales, defence starts with defining the four main AI attack phases.
• The data collection phase. This refers to the risk of the data being poisoned as it is collected (from inside or outside the organisation); a minimal integrity check for this phase is sketched after this list.
• The model training phase. At this stage, hackers can modify the behaviour of the model to affect its outputs. This is sometimes called backdoor insertion.
• The deployment phase. Once the model is created, it is typically deployed on a server or in an application. It is now ‘in the open’ and vulnerable to theft.
• The inference phase. This is where users come in. If the model is poisoned, the outputs will be corrupted and could be detrimental to the security of the company.
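To make the first of these phases concrete, the snippet below is a minimal sketch, not a Thales or Imperva implementation, of a tamper check on collected training data: it records a SHA-256 digest for every file at collection time and flags anything added, removed or modified before training. The directory layout, manifest file and function names are illustrative, and a check like this only catches changes made after collection, not poisoned sources.

```python
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir: Path) -> dict:
    """Record a SHA-256 digest for every file collected for training."""
    return {
        str(p.relative_to(data_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(data_dir.rglob("*")) if p.is_file()
    }

def verify_manifest(data_dir: Path, manifest_path: Path) -> list[str]:
    """Return files that were added, removed or modified since collection."""
    recorded = json.loads(manifest_path.read_text())
    current = build_manifest(data_dir)
    changed = [f for f, digest in current.items() if recorded.get(f) != digest]
    missing = [f for f in recorded if f not in current]
    return changed + missing

# At collection time (illustrative paths):
#   Path("manifest.json").write_text(json.dumps(build_manifest(Path("training_data"))))
# Just before training:
#   tampered = verify_manifest(Path("training_data"), Path("manifest.json"))
#   if tampered: raise RuntimeError(f"Possible poisoning, review: {tampered}")
```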
Of course, the usefulness of AI cuts both ways. CISOs are now learning to use AI tools to detect threats, lighten the load on their teams and help non-technical staff join the defence. Avital says: “AI can reduce alert fatigue by filtering out noise and false positives. It can allow less skilled people to use natural language to dive into complex security issues.”
A good example of this comes from Imperva. It built a model in which staff using the web application firewall can click on any security event to get readable answers to questions such as: What happened? Why did it happen? Which malicious part was flagged? How can future events be prevented?
Imperva is also working on an application security chatbot that can answer complex questions in human language. This way, employees don't need to study API documentation to understand and solve a problem.
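Neither Imperva feature is described in code in the podcast, but the general pattern is straightforward to sketch. The snippet below shows one hedged way a raw firewall event might be wrapped in a natural-language prompt so a model can answer the four questions above; the ask_llm callable is a placeholder for whichever model endpoint an organisation actually uses.

```python
import json

EXPLAIN_PROMPT = """You are a security analyst assistant.
Given this web application firewall event, answer in plain language:
1. What happened?
2. Why was it flagged?
3. Which part of the request was malicious?
4. How can similar events be prevented?

Event:
{event}
"""

def explain_event(event: dict, ask_llm) -> str:
    """Turn a raw WAF event into a readable explanation via an LLM.

    `ask_llm` is a placeholder: it takes a prompt string and returns
    the model's text response, using whatever endpoint is in place.
    """
    return ask_llm(EXPLAIN_PROMPT.format(event=json.dumps(event, indent=2)))
```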
Another important dimension of the AI threat is the way data and models are now distributed across multiple locations. In some deployments, the data might sit in a customer’s private cloud. In others, the model might be at the edge, running locally on an autonomous vehicle or even a PC. In all these cases, the model owner is not in control of the data.
Michael 'MiZu' Zunke, VP and CTO of Software Monetization at Thales, explains the challenge: “In these scenarios, the attack surface is the service interface,” he says. “A criminal can take it apart as much as they want, because they are the master of the machine…. So, this is really going beyond plain model security. What is needed is protection against reverse engineering, against analysis and modification of the application.”
A key defence method here is identity and access management. This means putting systems in place that ask: do you have permission to access the data, and to use the model?
Thales has extensive experience in this area, and is now applying its knowledge to the AI space. It is exploring how to protect RAG (Retrieval-Augmented Generation) implementations, and is also working on data discovery, classification and searchable encryption.
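As an illustration of what permission-gated RAG can look like, the sketch below checks both questions from the previous paragraph: is this identity allowed to use the model at all, and which retrieved documents is it allowed to see before they reach the prompt? The retriever and ask_llm callables, the role model and the Document structure are assumptions made for the example, not a description of any Thales product.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set[str]  # assigned at ingestion, e.g. from data classification

def authorised_context(user_roles: set[str], retrieved: list[Document]) -> list[Document]:
    """Keep only the retrieved chunks this identity is permitted to see."""
    return [doc for doc in retrieved if doc.allowed_roles & user_roles]

def answer(question: str, user_roles: set[str], model_roles: set[str],
           retriever, ask_llm) -> str:
    """Permission-gated RAG: check model access first, then filter the data."""
    if not (user_roles & model_roles):
        raise PermissionError("This identity is not authorised to use the model.")
    context = authorised_context(user_roles, retriever(question))
    prompt = (
        "Answer using only the context below.\n\n"
        + "\n---\n".join(doc.text for doc in context)
        + f"\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```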
Ali summarises the challenge as follows: “On the server side, you have full control – you can have physical restrictions on who can access it. But in the edge environment, when the model itself is on the edge, those restrictions are out of the window. How do you know it is protected? Anybody with a device can tamper with the model.”
Another identity and access management challenge arises from the use of AI agents – in other words, from bots that act on somebody's behalf. Further ahead, there’s the transparent encryption question posed by the use of AI GPUs.
Ali says: “Transparent encryption is one product where we encrypt everything that touches the IO layer of an OS, and that IO layer we handle through the CPUs. But AI models use GPUs. If the GPU can access the IO layer directly, then we have to change our models or implementations. We are actively looking into this, even though it's not mainstream right now.”
Ultimately, cybersecurity specialists expect organisations to protect their systems with a new kind of dedicated AI firewall. Like a traditional firewall, it will act as a gatekeeper and enforce security policies, but it will be tailored to the non-deterministic nature of LLMs. AI firewalls will employ natural language processing and contextual analysis, rather than focusing solely on network traffic patterns.
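No standard AI firewall design exists yet, so the sketch below is only a toy illustration of the gatekeeper idea: it screens each prompt against a small set of injection and policy checks before forwarding it to the model. A production system would replace the regular expressions with the NLP and contextual analysis described above; every pattern, topic and function name here is illustrative.

```python
import re

# Stand-ins for the NLP/intent classifiers a real AI firewall would use.
INJECTION_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"reveal (the )?system prompt",
]
BLOCKED_TOPICS = ["credential dump", "api key"]

def inspect_prompt(prompt: str) -> tuple[bool, str]:
    """Gatekeeper check run before a prompt reaches the model."""
    lowered = prompt.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            return False, f"possible prompt injection: '{pattern}'"
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return False, f"policy violation: '{topic}'"
    return True, "allowed"

def guarded_call(prompt: str, ask_llm) -> str:
    """Only forward prompts the firewall allows; block and explain the rest."""
    allowed, reason = inspect_prompt(prompt)
    if not allowed:
        return f"Request blocked by AI firewall ({reason})."
    return ask_llm(prompt)
```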
Nadav Avital says: “Gen AI introduces a new kind of attack surface, …so we need defence mechanisms that understand the context, the intent, the semantics. This is where the AI firewall comes in. We're working toward creating such a solution for our customers now… I think the sooner we have these kinds of security solutions in place, the more people will learn to trust and use AI safely at scale.”