With APIs, Dealing with Bad Bots is the Cost of Success



Bots—they’re not intrinsically bad. As a consumer, you enjoy more personalized and relevant digital services because of bots, and in many cases, bots are central to how an organization partners with other companies or empowers its internal teams.

In fact, bots drive more internet traffic than humans, doing things such as crawling websites for search engines or refreshing information in your apps. Put simply, modern IT wouldn’t work without bots.

Part of the confusion is that when we hear about bots being used to influence the prices of popular holiday gifts, crash a website, manipulate the stock market, or commit some other nefarious act, we're hearing about a very particular subset of bots deployed for a very particular set of purposes. This skews our idea of what bots are and do, and our sense of how badly behaved bots should be addressed.

Bots Fuel Modern IT

Most companies have something they are world class at. A major courier service’s expertise is logistics, collecting and delivering packages, and so forth. A large retailer’s expertise includes supply chain management and selling goods directly to consumers.

But increasingly, as end users have become more digitally empowered, these proficiencies are not enough. A customer might not care about a retailer's huge in-store inventory if they prefer to purchase from a mobile device and make decisions based on personalized, data-driven recommendations. This means that to satisfy customers' needs, a business will likely need not only to be world class in its areas of specialty but also to be adept at leveraging IT to deliver those specialties to consumers when, where, and how they want them.

As companies have embraced this new reality, they’ve developed capabilities to automate the connection of systems and services, from legacy IT backends to modern microservices running in the cloud. This has led to digital ecosystems in which businesses use application programming interfaces (APIs) to encapsulate the complexity of these systems into a form developers can consume and leverage. APIs connect applications to one another and to the data and services that power them. When a bot executes its automated functions, whether it’s delivering the most current information to a user’s app or attempting to gain illicit access to a system, the bot calls APIs to do its job.

Because APIs automate access to valuable business capabilities, they enable businesses to partner more easily with one another, in part by making it simpler for bots to interact with their systems. This ability rewrites many market dynamics by allowing businesses to leverage one another's strengths: ridesharing companies build their apps on mapping APIs such as Google Maps, weather apps consume data services from providers such as AccuWeather, and retailers harness shipping services from FedEx.

See also: The API ecosystem — critical to your digital transformation

Additionally, within such ecosystems, enterprises seek symbiotic relationships. A business might integrate its IT systems with IT systems of partners who need what the business’s systems provide and who are world class at things the business needs. A new service might not develop its own payment mechanisms, for example, because it can rely on APIs from credit card companies, banks, and digital services such as Square, PayPal or Google Pay. Likewise, such API providers can rely on enterprises to drive users to their services, transferring the burden of customer acquisition to the ecosystem.

All of this means that business today gets done largely via the automation of information exchange between systems—and broadly speaking, that means it gets done largely with bots.

This raises a problem: it’s difficult for IT teams to make things easier for good bots that do good things without also enabling bad bots that try to crash services, manipulate markets or phish unsuspecting users.

Handling Bad Bots

For businesses, both enabling good bots and thwarting bad bots are non-negotiable skills—essentially the price of success in today’s world.

The first step in this process is to eliminate sloppy development and deployment processes. Relying on individual developers to build security mechanisms into their code, rather than applying a security layer over all of an organization's APIs, will sooner or later provide an entry point for attackers or rogue actors. Giving individual developers and teams more autonomy to innovate is one of the chief benefits of modern distributed IT architectures and the software development approaches they support, but businesses must still manage their systems so that nothing can be deployed or moved into production environments without the necessary encryption and authentication precautions.
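The idea of a single security layer, rather than per-developer security code, can be sketched as a gateway through which every API handler must be registered. This is a minimal illustration, not a real product; all names (`Gateway`, `VALID_KEYS`, the routes) are invented for the example, and a real deployment would use an API management platform with a proper secrets store.

```python
# Minimal sketch of a central security layer: handlers are only reachable
# through the gateway, which enforces authentication uniformly, so no
# individual handler can ship without it.
import hmac

VALID_KEYS = {"team-a": "s3cret-key-a"}  # stand-in for a secrets store


class Gateway:
    def __init__(self):
        self._routes = {}

    def route(self, path):
        """Register a handler; handlers are only exposed via the gateway."""
        def register(handler):
            self._routes[path] = handler
            return handler
        return register

    def call(self, path, api_key, **params):
        # Authentication happens here, once, for every route --
        # not inside each developer's handler code.
        if not any(hmac.compare_digest(api_key, k) for k in VALID_KEYS.values()):
            return 401, "unauthorized"
        handler = self._routes.get(path)
        if handler is None:
            return 404, "not found"
        return 200, handler(**params)


gw = Gateway()

@gw.route("/orders")
def list_orders(customer="anonymous"):
    return f"orders for {customer}"

print(gw.call("/orders", "s3cret-key-a", customer="alice"))
print(gw.call("/orders", "wrong-key"))
```

Because the auth check lives in `Gateway.call`, a handler that skips security simply cannot be deployed: the only path to production traffic runs through the policy layer.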

However, when bots connect to systems and issue thousands or even millions of requests per minute, only a computer can work at the speed and scale necessary to distinguish legitimate users from crooks. So while rigorous development and deployment processes address the problem proactively, automation, as close to real time as possible, is essential for protection.

The most effective and scalable way to defeat bad bots when they attack is to deploy good bots supported by machine learning algorithms that spot and block suspicious activity, such as a bot making more requests than expected over a short period or one that seems to be cycling through dictionary entries in a brute-force attempt to crack passwords. For this to work, blocking must happen inline with the API call. After all, once an API call has been answered, there is little value in learning downstream that the response should never have been sent. A mature API platform should include a policy engine that can apply defensive policies to live traffic without adding substantial latency.
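An inline check of the first kind described above, a caller making more requests than expected over a short period, can be sketched with a sliding-window counter that runs before each request is answered. The fixed threshold here is purely illustrative; the article's point is that a real system would learn this baseline rather than hard-code it.

```python
# Hedged sketch: an inline policy evaluated before an API call is served.
# It blocks a caller whose request count in a short sliding window
# exceeds a threshold (fixed here for illustration; learned in practice).
from collections import deque
import time


class InlineRatePolicy:
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = {}  # caller -> deque of request timestamps

    def allow(self, caller, now=None):
        now = time.monotonic() if now is None else now
        q = self.history.setdefault(caller, deque())
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # block inline, before the API responds
        q.append(now)
        return True


policy = InlineRatePolicy(max_requests=3, window_seconds=1.0)
results = [policy.allow("bot-1", now=t) for t in (0.0, 0.1, 0.2, 0.3)]
print(results)  # [True, True, True, False]
```

The decisive property is that `allow` is consulted on the request path itself, so the fourth burst request is refused rather than flagged after the response has already gone out.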

Cybercriminals are clever, of course, and this can lead to an endless game of cat and mouse. Each time a new defense policy is put in place, criminals roll out a new attack strategy that gets around these defenses. This is why it’s important to employ machine learning rather than static, rules-based algorithms that don’t have the ability to adapt to changes in attack behavior.
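The contrast between a static rule and a detector that adapts can be illustrated with a simple online estimator: an exponentially weighted mean and variance of observed request rates, flagging anything far above the current baseline. This is a toy statistical sketch, not the author's product; the parameters (`alpha`, the 3-sigma bound, the warm-up length) are all invented for illustration.

```python
# Sketch of an adaptive detector: the anomaly threshold (mean + 3*std of
# requests-per-minute) moves as traffic shifts, unlike a hard-coded rule.
import math


class AdaptiveDetector:
    def __init__(self, alpha=0.1):
        self.alpha = alpha  # how quickly old traffic is forgotten
        self.mean = 0.0
        self.var = 0.0
        self.n = 0

    def update(self, rate):
        """Fold a new observed rate into the running statistics."""
        self.n += 1
        if self.n == 1:
            self.mean = rate
            return
        diff = rate - self.mean
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)

    def is_anomalous(self, rate):
        if self.n < 10:  # warm-up: not enough data to judge yet
            return False
        return rate > self.mean + 3 * math.sqrt(self.var)


det = AdaptiveDetector()
for r in [10, 12, 11, 9, 10, 13, 11, 10, 12, 11]:
    det.update(r)
print(det.is_anomalous(500))  # True: far above the learned baseline
print(det.is_anomalous(12))   # False: consistent with normal traffic
```

If legitimate traffic later doubles, continued calls to `update` raise the baseline automatically, which is exactly what a static "block above N requests" rule cannot do without a human rewriting it.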

With tools that learn and evolve over time, enterprises can make their defenses more robust each time bad bots attempt something dodgy. In this continuous escalation, the winner is the one who survives the attrition. The ability to respond with less human effort, using statistical and machine learning techniques to tune and refine defense policies, can therefore be a decisive advantage.

For example, when a specific machine is identified as the origin of malicious traffic, a bad actor can simply abandon that machine and resume the attack from a different infected one. This effectively resets the clock for the business trying to protect its assets, unless the business can learn from the traffic patterns observed on the previous machine and transfer that knowledge to spot the new attack faster. In this way, each subsequent attack makes the protection stronger and, eventually, raises the cost the attacker bears in mounting it.
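One way to transfer that knowledge is to key the blocklist on a behavioral signature rather than on the source machine, so the same pattern from a new address is recognized immediately. The sketch below is deliberately crude (the signature is just the set of targeted paths plus average request spacing) and every name in it is invented for illustration.

```python
# Illustrative sketch: recognizing a known attack pattern regardless of
# which machine it comes from, by fingerprinting behavior, not IPs.

def signature(requests):
    """Reduce a (timestamp, path) request stream to a source-independent fingerprint."""
    paths = tuple(sorted({path for _, path in requests}))
    gaps = [b[0] - a[0] for a, b in zip(requests, requests[1:])]
    avg_gap = round(sum(gaps) / len(gaps), 2) if gaps else 0.0
    return (paths, avg_gap)


known_bad = set()


def mark_malicious(requests):
    """Record the behavioral fingerprint of a confirmed attack."""
    known_bad.add(signature(requests))


def observe(source_ip, requests):
    """Decide how to treat new traffic based on its fingerprint."""
    if signature(requests) in known_bad:
        return f"block {source_ip} immediately (known pattern)"
    return f"watch {source_ip}"


# Attack detected and fingerprinted on one machine...
attack = [(0.0, "/login"), (0.1, "/login"), (0.2, "/login")]
mark_malicious(attack)

# ...so the same behavior from a different machine is blocked at once.
print(observe("10.0.0.7", attack))
```

Because the fingerprint survives the change of source machine, the attacker's trick of moving to a fresh infected host no longer buys a fresh learning period.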

More Success Attracts More Bots

The more successful an enterprise becomes and the larger the digital ecosystem in which it participates, the more bots it will attract. This is an unavoidable dynamic—so again, business leaders must view dealing with bad bots and the battle scars that come with foiling them as both points of pride and one of the costs of running a thriving organization.

Bad bots aren't going to go away anytime soon, but with the right approach to API management and the right investments in machine learning, enterprises can better protect themselves with smarter, more responsive bots of their own. And if you are not yet being attacked, ask why: it may be a sign that your APIs are not delivering business-critical value at all.


About Sridhar Rajagopalan

Sridhar Rajagopalan is the Lead Data Architect for Apigee, now part of Google. He has published over 50 academic papers, has authored over 25 patents, and won several best paper awards.
