
Bad Bots on the Rise: Internet Traffic Hits Record Levels

Estimated reading time: 5 minutes

More than half of all online traffic is generated by automated software programs known as bots. While some bots are beneficial, others perpetrate fraud and crime. So, how serious is the ‘bad bot’ problem? And how can we fight back?

Internet traffic is increasingly non-human. According to the 2025 Bad Bot Report by Imperva (a Thales company), automated traffic has surpassed human activity for the first time in a decade, and now accounts for 51 percent of all web traffic. Meanwhile, so-called ‘bad bots’ account for more than two-thirds of this automated activity. These rogue programs mimic human behaviour to carry out criminal actions such as theft and fraud.

Bad bots disrupt consumers and businesses across a range of sectors. Now, these attacks are becoming increasingly sophisticated. In the airline industry, for example, bad bots flood websites with fake search traffic, inflating search volumes without corresponding bookings. This skews the look-to-book ratio — a key metric that airlines rely on to forecast demand, set prices and make good business decisions.

Bot scams increasingly impact retailers too. Retail was the second most attacked industry in 2024, accounting for 15 percent of all bot attacks, driven by threats such as scalping, credential stuffing, gift card fraud, price scraping, and DDoS attacks.

All of these challenges are outlined in the 2025 Bad Bot Report. The study explores the topic in depth and addresses important questions such as:

•    What’s causing the bad bot problem?
•    How damaging is it?
•    Who's affected?
•    What can we do about it?

So, let’s review the landscape and provide some key data on this growing challenge.
 

What is a bot?

Simply put, bots are software applications that run automated tasks online. These are often mundane, repetitive and tedious tasks that would take humans or businesses much longer to perform. So-called ‘good’ bots have become an integral part of the internet and are now used for a wide array of use cases.

Take the example of a flight booking site. The customer arrives at the site and inputs their search criteria. The site's bot then crawls multiple airline and travel websites to gather and display the most up-to-date results for the user to browse.
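
To make the idea concrete, here is a minimal sketch of how such an aggregation bot might work. The provider endpoints, response shape and field names (fares, price) are assumptions invented for the example, not a real airline API:

```python
# A minimal sketch of a 'good bot' aggregator, assuming hypothetical
# provider endpoints that return JSON fare quotes.
from concurrent.futures import ThreadPoolExecutor
import requests

PROVIDERS = [
    "https://api.example-airline-a.test/fares",   # hypothetical endpoint
    "https://api.example-airline-b.test/fares",   # hypothetical endpoint
]

def fetch_fares(url: str, origin: str, dest: str, date: str) -> list[dict]:
    """Query one provider and return its fare quotes (empty list on error)."""
    try:
        resp = requests.get(url, params={"from": origin, "to": dest, "date": date}, timeout=5)
        resp.raise_for_status()
        return resp.json().get("fares", [])
    except requests.RequestException:
        return []

def search_flights(origin: str, dest: str, date: str) -> list[dict]:
    """Fan out to all providers in parallel and merge results, cheapest first."""
    with ThreadPoolExecutor(max_workers=len(PROVIDERS)) as pool:
        results = pool.map(lambda u: fetch_fares(u, origin, dest, date), PROVIDERS)
    merged = [fare for provider_fares in results for fare in provider_fares]
    return sorted(merged, key=lambda f: f.get("price", float("inf")))

if __name__ == "__main__":
    for fare in search_flights("LHR", "JFK", "2025-09-01"):
        print(fare)
```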

Of course, there are many different types of good bot such as:

•    Chatbots: Bots that simulate human conversation, usually by text but sometimes by voice
•    Web crawlers: Bots that scan and index content on webpages
•    Social bots: Bots that interact on social media platforms
 

Why do bots exist?

It’s really because of the connectedness of the digital economy, and the existence of APIs that let software programs access third-party systems. The web’s architects built in this connectedness to reap the benefit of the ‘network effect’. Regrettably, this open access is available to bad actors too.
 

What is a bad bot?

Bad bots perform automated tasks with malicious intent. Their purpose is usually financial gain, fraud and theft. But they can also be deployed for mischief or even sabotage. Bad bots scrape content, spread spam, generate clicks, impersonate human users, perform credential stuffing and more. 

Some bad bots are more malicious than others. The Open Web Application Security Project (OWASP) lists 21 automated threats to web applications in its latest Automated Threat Handbook. They are:

Account aggregation. Harvesting user account credentials for identity theft, financial fraud, or unauthorised access to sensitive information.
Account creation. Using automated scripts to create fake user accounts to flood the platform with spam content, advertisements, or malicious links.
Ad fraud. Falsifying the number of interactions with ads with automated clicks or impressions. 
CAPTCHA defeat. Using image recognition software to solve and bypass visual CAPTCHAs.   
Card cracking. Using brute-force attacks to guess missing payment card details, such as expiry dates and security codes, in order to make unauthorised purchases or commit financial fraud.
Carding. Mass testing stolen card details to identify the websites or applications that accept them.
Cashing out. Deploying bots to cash out stolen assets (gift cards, crypto etc) at scale. 
Credential cracking. Using brute force to try combinations of usernames and passwords to access user accounts.
Credential stuffing. Testing lists of username/password pairs bought on the dark web against other sites' login pages to see where the stolen credentials are reused.
Denial of inventory. Sabotaging sites by filling shopping carts without purchasing, so the site registers a stock-out even though the purchases are never completed. Also applies to hotel rooms, restaurant tables, or airline seats.
Denial of Service (DoS) and Distributed Denial of Service (DDoS). Overwhelming a site with traffic to render the service inaccessible. DDoS does the same thing but with a coordinated attack via a network of bots (botnet). 
Expediting. Using scripts to bypass normal restrictions on application processes. 
Fingerprinting. Collecting attributes of a user’s web browser or device to create a “fingerprint” and then tracking the user across the web.
Footprinting. Gathering information about a target web application’s security mechanisms to prepare an attack.
Scalping or inventory hoarding. Buying large quantities of limited-inventory goods for re-sale on secondary markets at a mark-up.
Scraping. This practice extracts data from websites or web applications for malicious purposes such as price manipulation. 
Skewing. Interacting with a site to distort the traffic data for misleading results. 
Sniping. Using automated bots or scripts to act at the last possible moment in time-restricted events such as online auctions or reservations, faster than any human could.
Spamming. Posting questionable content on social media, forums, sites and applications. Can include malware, pop-ups, images, videos, ads, etc.
Token Cracking. Harvesting coupons, voucher codes, discount tokens and so on.
Vulnerability Scanning. Using automated tools or scripts to identify vulnerabilities in web applications.

Why are bad bots so effective?

Bot operators are becoming more sophisticated. They are deploying techniques to evade detection such as human-like mouse movements and click patterns. Some use a “low and slow” approach to carry out attacks using fewer requests, which is difficult to detect. Meanwhile, advances in GenAI and LLMs have made it easier for non-technical users to write scripts, leading to a significant rise in the number of simple bots. At the other extreme, GenAI is also helping cybercriminals to create more sophisticated bots at an accelerated rate.  
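
To see why the "low and slow" approach works, consider a naive fixed-window rate limiter. The sketch below uses an assumed threshold of 30 requests per minute purely for illustration; a bot pacing itself at one request every ten seconds never trips it, yet still makes thousands of requests a day:

```python
# A simplified illustration (not production code) of why a 'low and slow'
# bot slips past a naive fixed-window rate limiter: the attacker simply
# stays under the per-minute threshold. The threshold and window size
# are arbitrary assumptions for this sketch.
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 30   # assumed threshold

request_log: dict[str, list[float]] = defaultdict(list)

def allow_request(client_ip: str, now: float = None) -> bool:
    """Return True if the client is under the per-window request limit."""
    now = time.time() if now is None else now
    window = request_log[client_ip]
    # Drop entries that have fallen outside the current window.
    window[:] = [t for t in window if now - t < WINDOW_SECONDS]
    window.append(now)
    return len(window) <= MAX_REQUESTS_PER_WINDOW

# A 'low and slow' bot making one request every 10 seconds never exceeds
# 6 requests per window, yet over a full day it still makes ~8,640 requests,
# enough to scrape a catalogue or quietly test stolen credentials.
start = 0.0
verdicts = [allow_request("203.0.113.7", now=start + i * 10) for i in range(1000)]
print(all(verdicts))   # True: every request is allowed
```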

The 2025 Bad Bot report also noted an increase in the use of residential proxies by bot operators. This practice makes it appear that the bot is a person, and that the traffic originates from a legitimate, ISP-assigned residential IP address. Bot traffic from residential ISPs now accounts for 21 percent of bad bot traffic.

The same impulse to emulate human users is driving a rise in bot traffic that appears to come from mobile browsers. This is accomplished through browser automation tools, once considered an advanced method of evasion but now a standard approach for many bad bots.

And now there is AI. Tools such as ChatGPT, ClaudeBot, Google Gemini, and Perplexity are revolutionising how users interact with their favourite brands. But they are also being used as a new attack vector. In 2024, AI-enabled cyberattacks exploded. Imperva blocked an average of two million AI-powered cyber-attacks every day.
 

The cost of bad bots

How much do bad bots cost the global economy? It’s impossible to give a precise figure. But for every individual business impacted, the cost includes lost revenue, reputational damage and expensive disruption.

According to a study conducted for Imperva by the Marsh McLennan Cyber Risk Intelligence Center, the sharp rise in bad bot traffic is a side effect of the growing API economy. The “Economic Impact of API and Bot Attacks” report analysed more than 161,000 unique cybersecurity incidents and concluded:

•    Vulnerable APIs and automated bot attacks cost businesses up to $186 billion a year
•    API insecurity and automated abuse by bots are responsible for 11.8 percent of all cyber events and losses
•    Bot-related security incidents rose 88 percent in 2022 and 28 percent in 2023
•    Insecure APIs resulted in up to $12 billion more in losses in 2023 than in 2021

But there’s more at stake for businesses than lost sales. There are also fines for failing to defend against attacks. In the European Union, under the GDPR, regulatory fines can reach up to 4 percent of a company’s annual global turnover or €20 million, whichever is greater. These fines can apply to any business that does not proactively prevent account takeover (ATO) attacks by bots, since such attacks compromise customers’ personal data. The potential damages can skyrocket into the millions.

It can get worse. Victims can file class action lawsuits, with all the attendant legal costs. 
 

Combating the bot epidemic

The rising tide of bad bot traffic has prompted businesses and cybersecurity experts to ramp up measures that can mitigate the threat. The Imperva report summarises them as follows:

Identify the risk
Businesses should use site traffic analysis, real-time bot detection tools and multi-factor authentication measures. They should also test for vulnerabilities, such as weaknesses that allow attackers to use stolen credentials to gain unauthorised access. This applies not just to sites and apps, but also to APIs, which can provide ‘back door’ access for potential attacks.

Secure exposed APIs and mobile applications
APIs and apps often serve as gateways to sensitive data, creating additional vectors for cyber threats. Businesses should enforce authentication best practices and strict access controls to prevent token abuse and unauthorised data scraping.
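As an illustration, here is a minimal sketch of token-based API access control. The HMAC-signed token format, client IDs, scopes and secret are all invented for the example; real deployments would more likely use an established standard such as OAuth 2.0 or JWTs.

```python
# A minimal sketch of API access control, assuming a hypothetical
# HMAC-signed token scheme ("<client_id>.<expiry>.<signature>").
import hmac
import hashlib
import time

SERVER_SECRET = b"replace-with-a-real-secret"   # assumption: server-side shared secret
SCOPES = {"mobile-app": {"read:fares"}, "partner-x": {"read:fares", "write:bookings"}}

def sign(client_id: str, expiry: int) -> str:
    msg = f"{client_id}.{expiry}".encode()
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()

def issue_token(client_id: str, ttl_seconds: int = 3600) -> str:
    expiry = int(time.time()) + ttl_seconds
    return f"{client_id}.{expiry}.{sign(client_id, expiry)}"

def authorise(token: str, required_scope: str) -> bool:
    """Reject expired, forged, or out-of-scope tokens."""
    try:
        client_id, expiry_str, signature = token.split(".")
        expiry = int(expiry_str)
    except ValueError:
        return False
    if time.time() > expiry:
        return False                                     # token expired
    if not hmac.compare_digest(signature, sign(client_id, expiry)):
        return False                                     # signature forged or tampered
    return required_scope in SCOPES.get(client_id, set())

token = issue_token("mobile-app")
print(authorise(token, "read:fares"))      # True
print(authorise(token, "write:bookings"))  # False: scope not granted
```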

Block outdated browsers 
Many bots send user-agent strings that advertise outdated browser versions. This helps differentiate them from humans, whose browsers are generally kept up to date automatically. Blocking access from outdated browser versions can therefore stop many bot attacks.
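
A simplified sketch of this check is shown below. The minimum version numbers and the regular expression are assumptions for illustration; production systems would use a maintained user-agent parsing library and keep the cut-offs current.

```python
# Block requests whose User-Agent advertises an outdated browser version.
import re

MINIMUM_MAJOR_VERSION = {"Chrome": 120, "Firefox": 115}   # assumed cut-offs

def is_outdated_browser(user_agent: str) -> bool:
    for browser, minimum in MINIMUM_MAJOR_VERSION.items():
        match = re.search(rf"{browser}/(\d+)", user_agent)
        if match and int(match.group(1)) < minimum:
            return True
    return False

print(is_outdated_browser("Mozilla/5.0 ... Chrome/58.0.3029.110 Safari/537.36"))  # True
print(is_outdated_browser("Mozilla/5.0 ... Chrome/127.0.0.0 Safari/537.36"))      # False
```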

Identify dubious proxies
Bots use proxy services to obscure their activities and mask their true origins. Businesses can restrict access from known bulk IP data centres associated with these proxy-enabled attacks.
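
A minimal sketch of this kind of screening follows. The CIDR blocks are documentation ranges (RFC 5737) standing in for a real, regularly updated list of hosting-provider and data-centre ranges:

```python
# Screen client IPs against known data-centre ranges.
import ipaddress

DATA_CENTRE_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),     # placeholder range
    ipaddress.ip_network("198.51.100.0/24"),  # placeholder range
]

def is_data_centre_ip(client_ip: str) -> bool:
    addr = ipaddress.ip_address(client_ip)
    return any(addr in network for network in DATA_CENTRE_RANGES)

print(is_data_centre_ip("198.51.100.23"))  # True: inside a listed range
print(is_data_centre_ip("203.0.113.9"))    # False: not in the list
```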

Look for patterns
High bounce rates, low conversion rates, sudden unexplained spikes, an unusually high number of requests targeting a specific URL – all of these can signal bot activity. Monitor for these anomalies and others to counter bot attacks, and protect your digital assets.
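
As a rough illustration, the sketch below flags an unusual spike in requests to a single URL using a simple ratio against the recent average. The window size and the 5x threshold are arbitrary assumptions; real systems combine many signals, including bounce rate, conversion rate and session behaviour.

```python
# Flag a sudden, unexplained spike in requests to one URL.
from statistics import mean

def is_traffic_spike(hourly_counts: list[int], threshold: float = 5.0) -> bool:
    """Flag the latest hour if it exceeds `threshold` times the prior average."""
    if len(hourly_counts) < 2:
        return False
    baseline = mean(hourly_counts[:-1])
    return baseline > 0 and hourly_counts[-1] > threshold * baseline

# Requests per hour for one URL: a quiet day, then a sudden surge.
history = [120, 95, 130, 110, 105, 2400]
print(is_traffic_spike(history))  # True: the last hour is ~20x the baseline
```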

Monitor login traffic in real-time 
Define your typical failed login attempt baseline, then monitor for anomalies or spikes. Set up alerts so you’re automatically notified if any occur. An increase in failures can be a signal that bots such as GiftGhostBot are attempting to steal gift card balances, for example.
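
A minimal sketch of such an alert is shown below. The three-standard-deviation rule and the sample counts are assumptions; in practice the baseline would come from historical logs and the alert would feed a SIEM or paging system rather than print().

```python
# Alert when failed logins spike far above a learned baseline.
from statistics import mean, stdev

def failed_login_alert(history: list[int], current: int, sigmas: float = 3.0) -> bool:
    """Alert if the current count exceeds the historical mean by `sigmas` std devs."""
    if len(history) < 2:
        return False
    return current > mean(history) + sigmas * stdev(history)

hourly_failed_logins = [14, 9, 17, 12, 11, 15, 13, 10]   # assumed typical baseline
current_hour = 240                                        # e.g. a credential-stuffing burst
if failed_login_alert(hourly_failed_logins, current_hour):
    print("ALERT: failed logins far above baseline -- possible bot attack")
```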

Deploy Multi-Factor Authentication (MFA)
Attackers frequently purchase credentials to conduct stuffing attacks and account takeovers (ATO). Stay informed about these breaches. And use MFA to add an additional layer of security to mitigate credential stuffing and token abuse attacks.
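
As one possible illustration, here is a minimal sketch of adding a time-based one-time password (TOTP) as a second factor, using the pyotp library. The in-memory user store is purely for the example:

```python
# Add a TOTP second factor after the password check passes (pip install pyotp).
import pyotp

# At enrolment: generate and store a per-user secret, and show it to the user
# (normally as a QR code for an authenticator app).
users = {"alice": {"totp_secret": pyotp.random_base32()}}

def verify_second_factor(username: str, submitted_code: str) -> bool:
    """Return True only if the submitted one-time code matches the user's TOTP."""
    secret = users[username]["totp_secret"]
    return pyotp.TOTP(secret).verify(submitted_code)

# Even with a stolen password, a bot without the user's authenticator
# cannot produce the current code.
current_code = pyotp.TOTP(users["alice"]["totp_secret"]).now()
print(verify_second_factor("alice", current_code))   # True for the legitimate user
print(verify_second_factor("alice", "000000"))       # False (almost certainly) for a guess
```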

Assess prevention solutions
Bad bots are getting better at evading traditional detection methods. So businesses need to deploy a multi-layered approach comprising user behaviour analysis, profiling, and fingerprinting. 
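
To illustrate just one of those layers, the toy sketch below combines a few request attributes into a stable fingerprint so repeat offenders can be grouped even when they rotate IP addresses. The chosen attributes are an assumption; commercial products use far richer signals, such as TLS fingerprints and behavioural data.

```python
# A toy request fingerprint built from a handful of header values.
import hashlib

def request_fingerprint(headers: dict[str, str]) -> str:
    parts = [
        headers.get("User-Agent", ""),
        headers.get("Accept-Language", ""),
        headers.get("Accept-Encoding", ""),
    ]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()[:16]

print(request_fingerprint({
    "User-Agent": "Mozilla/5.0 ... Chrome/127.0.0.0",
    "Accept-Language": "en-GB,en;q=0.9",
    "Accept-Encoding": "gzip, deflate, br",
}))
```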

Use the element of surprise 
Deploying every mitigation technique at once reveals your full defensive playbook, giving attackers time to study and adapt to it. Instead, hold some controls in reserve and adjust them at critical moments, such as product launches or sales events, when attacks are most likely.

Conclusion

Bots can be good. But the plain truth is that most of them, around two-thirds in fact, are bad. These malicious bots pose a significant threat to today's digital landscape, wreaking havoc on businesses and consumers alike.

This is a serious challenge. But on a more positive note, there are plenty of counter-measures available to digital businesses. They can assess where they are vulnerable, monitor traffic, deploy detection tools and bolster their onboarding and authentication methods.