Malicious bots make up nearly a quarter of all website traffic[1]. These bots are responsible for a whole host of problems, including account takeovers, spreading spam, and price and content scraping. The sheer scale of malicious bots crawling the web is as alarming as it is unsettling.
Detecting bots is increasingly challenging for businesses. It’s estimated that approximately 54% of bots are moderately sophisticated, around 26% are simple, and 20% are sophisticated[2]. Bot developers are adopting new technologies at a rapid rate in an attempt to circumvent bot detection, and distinguishing bots from humans is becoming increasingly complex. With this in mind, let’s take a look at the limitations of bot mitigation and where compromised credential screening can be highly complementary.
Most of the bot countermeasures that companies put in place can be easily worked around by determined attackers. This is one of the fundamental flaws of any rules-based approach to fraud prevention. Bot attacks will continue to evade antibot controls, resulting in an exhausting game of whack-a-mole.
Attackers understand how to leverage and scale technology, using automation to launch endless credential stuffing campaigns. By utilizing point-and-click attack credential tools like Sentry MBA and leveraging open source operational tools like Wget, they can run scripted web login sessions. Attackers create an army of these bots, and mayhem ensues. Remember, these tried and tested strategies are lucrative for bad actors, so they will always retool.
All companies are in the unenviable position of balancing stringent security with compelling user experiences in the digital age. The goal should always be to diminish the risk, but not the user experience. It sounds simple enough on paper, but it’s much harder in practice. Many of the tools introduced to detect bots create hurdles for real users.
Be particularly wary of CAPTCHAs. It should come as no surprise that 2FA and CAPTCHAs dramatically increase user friction. Human failure rates for CAPTCHAs range from 15% to 50%. Many users will simply try another website if they fail, and you’re left with a boosted bounce rate. Not only are CAPTCHAs frustrating for users, but there’s growing evidence that they don’t work. A San Francisco-based startup has claimed its algorithm can crack CAPTCHAs with 90% accuracy[3].
Even bot mitigation approaches that don’t directly require user interaction can be a source of friction. This is common with false positives: cases where a real human is incorrectly marked as a bot.
For example, today attackers can cheaply and easily rotate through thousands of different IPs thanks to proxy services. They also increasingly use residential IPs, which come with an excellent reputation. As a result, bot mitigation solutions that rely heavily on IP blacklisting are no longer as effective as they were in the past. Additionally, IPs are increasingly shared, so IP filtering carries a high risk of false positives. You risk blocking legitimate users from accessing your service.
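To see why IP filtering is such a blunt instrument, consider a minimal sketch of a blocklist check (the networks and addresses below are hypothetical, drawn from reserved documentation ranges). Listing an entire network range to stop one abusive bot also blocks every legitimate user who shares that range, which is exactly the false-positive problem described above:

```python
import ipaddress

# Hypothetical in-memory blocklist; real deployments typically pull
# ranges from threat-intelligence feeds. 203.0.113.0/24 is a reserved
# documentation range used here purely for illustration.
BLOCKED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]

def is_blocked(client_ip: str) -> bool:
    """Return True if the client IP falls inside any blocked network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

# One bot seen at 203.0.113.50 got the whole /24 listed, so a
# legitimate user behind the same shared (e.g. carrier-grade NAT)
# range is blocked too:
print(is_blocked("203.0.113.50"))  # the attacker
print(is_blocked("203.0.113.7"))   # an innocent neighbor on the same range
print(is_blocked("198.51.100.1"))  # an unrelated address
```

Shrinking the blocked ranges reduces the collateral damage but lets proxy-rotating attackers through; widening them does the reverse. That tension is inherent to the approach.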
Despite your best efforts, some advanced attackers will manage to bypass detection and slip through as false negatives. In bot mitigation, there is always a combination of false positives and false negatives, and the balancing act is inherently challenging: if you tighten too much, you risk false positives, but if you loosen too much, you get false negatives that compromise the effectiveness of your strategy. Often, you won’t become aware of this until you see the harm in action (ATOs or fraud).
There’s no perfect strategy for eliminating bypasses because attackers will always find a way. For example, if you block users based on geographic origin, you’re unlikely to stop bots. If anything, you’ve caused a minor inconvenience and potentially locked out a legitimate person. Attackers use bots from all over the world, so they’ll just pick a different location. Most credential stuffing tools also have extensive configuration options that allow attackers to use lists of proxies.
If persistent attackers hit a roadblock with automation, they might change tack and hire a manual fraud team to do the work instead. This team of bad actors will input credentials by hand using real browsers, thereby bypassing bot detection (because they’re not bots!).
Compromised credential screening is highly complementary to bot detection solutions. While bot detection helps protect you against bad login traffic before username and password authentication, credential screening can save you after authentication but before access is allowed.
Compromised credential screening allows companies to identify credentials that have been exposed in a third-party data breach, turning the attackers’ own tools into an effective ATO defense.
Unlike bot detection, compromised credential screening adds zero friction when credentials haven’t been compromised. Security measures only need to be applied when credentials are known to be unsafe. Credential screening is also not a rules-based approach, so there is nothing for hackers to retool against. The approach is simple: it takes away the hacker’s ability to use stolen keys.
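As an illustrative sketch (not a description of any particular vendor’s product), here is how a screening check might look using the public Have I Been Pwned “Pwned Passwords” range API. The API uses k-anonymity: only the first five hex characters of the password’s SHA-1 hash ever leave your server, and the matching is done locally against the returned bucket of suffixes:

```python
import hashlib
import urllib.request

def match_suffix(range_body: str, suffix: str) -> int:
    """Scan a range-API response ("SUFFIX:COUNT" per line) for our suffix.

    Returns the breach count, or 0 if the suffix is not in the bucket.
    """
    for line in range_body.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0

def password_breach_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora.

    Only the 5-character hash prefix is sent over the wire (k-anonymity);
    the server responds with every suffix sharing that prefix.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        return match_suffix(resp.read().decode("utf-8"), suffix)
```

In a login flow, you would call a check like this after the password verifies but before granting access: a nonzero count triggers a step-up (forced reset or MFA challenge), while a clean result lets the user straight through with no added friction.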
There are inherent limitations to bot detection, and it can only protect you so far. You’ll inevitably have to deal with attacker retooling, balancing security and friction, false positives and negatives, and manual fraud. And even when you have a robust bot detection solution in place, bad actors will still find a way to bypass it. This is why we recommend a two-pronged approach with both bot detection and compromised credential screening to ensure you have comprehensive protection. Use bot detection to keep the bulk of nefarious web activity away and use compromised credential screening as your last line of defense.
[1]https://www.infosecurity-magazine.com/news/imperva-bad-bot-report-2020/
[2]https://www.infosecurity-magazine.com/news/imperva-bad-bot-report-2020/
[3]https://mybroadband.co.za/news/internet/90435-captcha-cracked-by-artificial-intelligence.html