In the spring of 2017, Twitter published a series of guidelines for automated API users, commonly known as bots. These guidelines were created to help control the intent, actions, and results of bots on the service. Accordingly, there was some discussion about just what these guidelines did and didn't do, and how valuable such a set of guidelines was.

Bots are exceedingly powerful – and that is why they are used so much in modern service administration. Bots can take a hundred actions that would otherwise be monotonous and time-intensive and resolve them into a single "click to run" style interface. They are developed by API consumers who want to automate routine functions, by providers who want to enable certain ranges of activity, or by other API developers who wish to interact with outside systems.

While bots have great power to complete tasks, left unchecked they can abuse an API. With such powerful abilities comes an equally important need to secure these bots and ensure that their actions are permitted.

In this piece, we discuss the responsibilities that bots should be held to, and the guidelines that should govern their usage. While we identify some specific usage guidelines, do keep in mind that these are simply suggestions – every network is different, with its own unique intricacies.

Why Worry About Bots?

Are bots really that dangerous? After all, the only people who need bots are administrators, and they should be trusted, right?

First, a note on trust within administration teams — every interaction on the network should be treated with the same level of caution. Placing too much trust in the goodwill of others can have damaging consequences, and as such, even admin bots should be heavily regulated and controlled.

Secondly, bots are not just for administration — they can extend the functionality of your API for developers and end consumers alike without requiring them to develop extra tools. Without bots, if a user wants to delete all their posts on your service, download a mass amount of images, or retrieve all entries from a given date, they either have to tie into your API or utilize a pre-built bot.

As such, the value of bots cannot be ignored. That being said, some reports suggest that 52% of all internet traffic is due to bots, and without proper guidelines, this mass bot army can do some pretty horrible things extremely quickly. Thus, bots should be treated with the careful respect they deserve, and should be heavily regulated, monitored, and controlled if they are allowed to exist within an API ecosystem.

Guidelines for Bot Design

These are very general guidelines for API usage by bots – your specific needs and requirements may dictate special exceptions and rights that are not covered below. That being said, these guidelines are a solid starting point.

While some of these are style recommendations, others are born of security, user experience, and system safety considerations, and as such should be de facto requirements for any automated system on an API.

1: Clearly Establish Accepted Use Cases

As said before, a bot is exceedingly powerful – but that power does not arise naturally; it rests on a mandate transferred from the system owner to the bot owner. Establishing what this mandate specifically allows is a key guideline for bot usage.

One such guideline is to constrain high usage rates. Conceivably, this could be implemented using a daily rate limit, a per-user rate limit, or other such systems. One way to determine the cap is to consider what an average user could accomplish, given enough time and manpower.
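As a minimal sketch of the daily-cap idea, the check below keeps an in-memory per-client counter that resets at each UTC day boundary. The limit value and client identifiers are illustrative assumptions, not figures from any real service.

```python
import time
from collections import defaultdict
from typing import Optional

# Hypothetical daily cap: roughly what a diligent human user could
# manage by hand. Tune this figure for your own service.
DAILY_CALL_LIMIT = 5000

_counters = defaultdict(lambda: {"day": None, "calls": 0})

def allow_call(client_id: str, now: Optional[float] = None) -> bool:
    """Return True if this client may make another call today."""
    now = time.time() if now is None else now
    day = int(now // 86400)  # UTC day bucket
    counter = _counters[client_id]
    if counter["day"] != day:  # a new day has started: reset the count
        counter["day"] = day
        counter["calls"] = 0
    if counter["calls"] >= DAILY_CALL_LIMIT:
        return False
    counter["calls"] += 1
    return True
```

A production system would keep these counters in shared storage rather than process memory, but the shape of the decision is the same.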

Bots should also be limited to the same abilities as their corresponding users. When establishing what a bot can do, its privileges should be constrained by the rights of the user who operates it. Bots are meant for automation, not class circumvention.
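One way to enforce this inheritance is to grant a bot only scopes its owning user already holds, refusing any escalation outright. The sketch below assumes a simple set-based scope model; the scope names are illustrative.

```python
# Privilege inheritance sketch: a bot may never hold rights its
# owning user lacks. Scope names here are illustrative.
def grant_bot_scopes(user_scopes: set, requested: set) -> set:
    """Grant only scopes the owner already holds; refuse escalation."""
    escalation = requested - user_scopes
    if escalation:
        raise PermissionError(
            f"bot requested scopes beyond its owner's: {escalation}"
        )
    return set(requested)
```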

Guidelines for bot use cases must also restrict the access points. A bot should stick to the documented API surface at all times. Using undocumented endpoints or calls creates significant security issues: bots can take what might otherwise be a one-off security fault and propagate it over hundreds or thousands of calls in a minute. Bots should therefore only be allowed to carry out actions using well-documented and secured endpoints, verbs, and commands.
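In practice this can be an allowlist check against the published API surface. The routes below are hypothetical examples, not any real service's endpoints.

```python
# Endpoint allowlist sketch: bots may only touch the documented API
# surface. The method/route pairs listed here are hypothetical.
DOCUMENTED_ENDPOINTS = {
    ("GET", "/v1/posts"),
    ("POST", "/v1/posts"),
    ("DELETE", "/v1/posts/{id}"),
}

def bot_call_allowed(method: str, route_template: str) -> bool:
    """Reject any bot call that is not part of the published API."""
    return (method.upper(), route_template) in DOCUMENTED_ENDPOINTS
```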

2: Allow for a Failsafe

According to Isaac Asimov, there are Three Laws of Robotics. The Second Law states simply that "a robot must obey the orders given it by human beings […]". This is just as valid a requirement for bots on an API as it is for artificial intelligence, and it is a key component of securing bots that utilize APIs.

When designing a bot, a failsafe should be mandated and included to allow the bot to be overridden and excluded from the system at large. While this is certainly useful in cases where the bot is doing something disallowed, it is also very useful against a rogue bot, or a bot that is given a poorly formed command and carries out damaging functions as a result.

A bot is nothing more than a tool — and any tool needs an on/off button. Do note that this failsafe doesn't need to live only in the bot itself — you can identify likely bots using heuristics and force them onto a specific server, filter their traffic, or even disable their functionality altogether. That said, it's far easier to dictate that this functionality be included up front than to retrofit it externally to the bot.
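A minimal sketch of such a failsafe, assuming a cooperative model: operators trip a switch, and the bot's work loop checks it before every action and halts immediately once it is set.

```python
import threading

class KillSwitch:
    """Cooperative failsafe: operators trip it, the bot's loop checks it."""

    def __init__(self):
        self._stopped = threading.Event()

    def trip(self):
        """Signal the bot to stop all further work."""
        self._stopped.set()

    def active(self) -> bool:
        return not self._stopped.is_set()

def run_actions(actions, switch):
    """Execute actions in order, halting as soon as the switch is tripped."""
    results = []
    for action in actions:
        if not switch.active():  # stop immediately once overridden
            break
        results.append(action())
    return results
```

The same principle scales up: a distributed bot might poll a shared flag in a datastore instead of an in-process event, but every unit of work still checks before it acts.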

3: Document the Bot

Every single bot should be well documented and, even more importantly, well declared. Knowing what is on a network leads to greater security and easier fixes when issues arise — conversely, not knowing a bot is on the network is a recipe for disaster. Failure to document a bot creates serious security exposure and can drag out the time needed to resolve key problems.

This point can be easily and quickly summarized thus — do not masquerade as a human. A bot is a bot, and it has specific instructions, restrictions, and functions. Declaring bot status should be a key requirement for allowing any traffic on the network, and pretending to be a human should be punished by exclusion from the service.
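One common convention, borrowed from well-behaved web crawlers, is to declare bot status on every request. The sketch below builds such headers; the bot name, URL, and the `X-Bot` header are illustrative assumptions, not any real service's convention.

```python
# Sketch of an honest bot client: every request declares its bot status,
# much as well-behaved web crawlers do. All names here are illustrative.
BOT_USER_AGENT = "ExampleCleanupBot/1.0 (+https://example.com/bot-info)"

def build_headers(token: str) -> dict:
    """Assemble request headers that openly declare the bot."""
    return {
        "User-Agent": BOT_USER_AGENT,        # names the bot, links to its docs
        "Authorization": f"Bearer {token}",  # the bot's own credential
        "X-Bot": "true",                     # hypothetical explicit declaration
    }
```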

4: Comply with Rules and Regulations

A bot is a tool — accordingly, as with any tool, a bot is an extension of the user who has created it. Thus, a bot is subject to the same exact rules as the user creating it. All bots should strictly follow the terms of service and specific guidelines set out by the API owner, and should conform to the rules that the user creating the bot already works under, be they legal, data usage, or privacy oriented.

Circumventing these rules, which we’ll discuss later, is a huge security issue — this should be heavily monitored, tracked, and enforced.

5: Remember User Experience

This concept is somewhat of a “catch-all”, but it is a hugely important one; a bot should deliver a great user experience. A bot should not be intrusive, and should respect the privacy of all users on the system. A bot should have error reporting if it interacts with other users, as well as ample documentation to show what these errors mean. A bot should declare what it is, and stick to conventions of declaration and information delivery.

A bot should essentially be designed as if it were its own API, largely because in many ways, it is a link between the user and the system being worked upon. Thus, user experience is a hugely important consideration.

6: Create Proper Identity Control and Segregation

A bot is not a user. No matter how much the bot may behave like a user, use the same tools as a user, and consume the same resources as users do, bots are a unique and different type of role. Accordingly, they must be differentiated from the typical user base, both in ability and in established role.

As part of this, identity control can help establish segregation between "users" and "bots." While this does not necessarily mean giving bots their own authentication system, they should be demarcated as such. This can be achieved in a variety of ways, from requiring re-authentication at regular intervals to delegating rights from the user to their bot. However it is done, the identity control method employed should result in a class of "users" that is separate from a class of "bots."
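A simple way to sketch this separation is to attach an explicit role to every authenticated principal, so middleware can treat the two classes differently. The field names below are illustrative, as is the rule that every bot must name the user who delegated it.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch: every token carries an explicit role so middleware can treat
# "user" and "bot" as distinct classes. Field names are illustrative.
@dataclass(frozen=True)
class Principal:
    subject: str
    role: str                           # "user" or "bot"
    delegated_by: Optional[str] = None  # owning user; required for bots

def classify(p: Principal) -> str:
    """Separate bots from users, refusing undelegated bots outright."""
    if p.role == "bot":
        if p.delegated_by is None:
            raise ValueError("bots must be delegated by a user")
        return "bot"
    return "user"
```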

7: Don’t Circumvent Rate Limits

Rate limits are like speed limits — regardless of whether you agree with them or not, they are law, and they ostensibly exist for the betterment of the entire network. Accordingly, circumventing these rate limits should be prohibited and punished heavily. Circumventing limits breaks the terms of service and objectively makes the experience worse for everyone else on the network. It can also lead to spam, resource exhaustion, and authentication failures on servers that are not provisioned for such high traffic loads.

8: Don’t Utilize a Bot for Spamming or Abuse

This seems like it shouldn’t have to be said, but bots should never be used for spamming or abuse. A bot is powerful, and this power, when put to use for good, can make administration easier, quicker, more efficient, and more precise. When used for wrong, however, a bot can cause significant headaches and pains.

A bot that is used for spamming or abuse should immediately and permanently be terminated. It’s bad for user experience, bad for the brand of your API, and bad for everyone involved.

Conclusion: Secure Automated Systems That Use Your API

Bots are great tools, but as with any tool in the wrong hands, they can very quickly become a weapon. Understanding the bots on your platform, what they do, how they do what they do, and why they are doing it, is a first major step to securing automated systems that use your API.

Establishing proper guidelines and enforcing them as a set of rules and regulations can result in a stronger network, a more secure system, and a better user experience all around. What do you think the future of bots holds for the API space? Let us know in the comments section below.

About Kristopher Sandoval

Kristopher Sandoval is a web developer, author, and artist based out of Northern California with over six years of experience in Information Technology and Network Administration. He writes for Nordic APIs and blogs on his site, A New Sincerity.