Continuous delivery is a hallmark of the modern development world. As tools have matured and the needs of the consumer have evolved, constant development and deployment have become the norm rather than the exception.
With this increase in deployment frequency, security concerns have grown part and parcel. In this piece, we're going to discuss how to maintain security in such a unique deployment environment, and the challenges inherent therein.
What Is Continuous Delivery?
Continuous delivery is the process by which developers push consistent and timely updates via a deployment system. This is typically an automated system, wherein DevOps teams join ideation, initial development, and deployment into a single, agile development track.
There's a lot to be said for this kind of delivery model. For one, the system is far more responsive to the needs of the market — because delivery isn't tied to a long-term cycle, features can be rapidly developed and pushed through quality assurance as consumers notify the developer of their needs.
This cycle change also means that when errors and bugs arise, they're typically short-lived. Developers can rapidly address security concerns, bugs, and errors through additional patching and deployment, reducing the effective life of issues in an API.
With this shift to an automated and continuous development cycle come some caveats that rule out more traditional practices. Most importantly, the common practice of manual code auditing becomes unrealistic given the sheer speed of development.
Not everything is “sunshine and rainbows”, though. Rapid and continuous delivery has some caveats that developers need to manage.
Chief among these is the fact that rapid, continuous development can make feature creep easier to engage in. With ongoing incremental releases, the bigger picture is often lost, and feature creep becomes a legitimate issue. Likewise, constant continuous deployment can also proliferate bugs that would otherwise be caught over long-term testing and implementation.
These caveats are minor compared to the benefits granted by increased agility and consumer interaction, but they offer a unique perspective on development — continuous deployment inherently requires more frequent integrations, all of which need to be secured properly.
Thankfully, there are a number of ways an API provider can audit and secure their APIs in a continuous delivery environment. While each of these solutions is incredibly powerful, each is generally best suited to specific use cases — there is no "perfect" implementation.
Code Scanning and Review
Code scanning — the automated process by which code is checked for vulnerabilities — is an incredibly powerful auditing tool. One of its most powerful features is that, in most solutions, the code is checked against common and known vulnerabilities, removing many of the dependency-based issues that plague rapidly evolving codebases.
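As an illustration, the core of such a scanner can be sketched in a few lines of Python. The rule set below is hypothetical and deliberately tiny; real scanners such as Bandit or Semgrep ship far larger databases of known-vulnerable patterns.

```python
import re

# Hypothetical, minimal rule set: each rule maps a name to a pattern
# describing a known-vulnerable construct.
RULES = {
    "hardcoded-secret": re.compile(
        r"(password|api_key)\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE
    ),
    "eval-call": re.compile(r"\beval\s*\("),
}

def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) for every rule match in the source."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule_name, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, rule_name))
    return findings

sample = 'api_key = "s3cr3t"\nresult = eval(user_input)\n'
print(scan_source(sample))  # [(1, 'hardcoded-secret'), (2, 'eval-call')]
```

In a continuous delivery pipeline, a non-empty findings list would fail the build before the code ever reaches deployment.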
Implementing this as a development procedure makes sense, but even so, it's often overlooked. When you submitted a final paper in school, was it a first draft? Of course not. Most students passed the work through spell-check a hundred times, checked the grammar, checked every single fact, checked their periods and commas, and made sure everything flowed.
Accordingly, knowing how many people depend on the functionality within, why would an API developer release a product without first doing their own “spell-check”?
Many of these solutions are also open source. While there's been plenty of debate about open source security, and whether it's actually as powerful and useful as claimed, the power of collaboration makes a crowdsourced, open database of faults more valuable than a closed, limited list of possibly outdated and irrelevant references.
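The "open database of faults" idea can be sketched as a simple dependency check. The advisory feed and package names below are made up for illustration; real crowdsourced databases such as OSV or the GitHub Advisory Database apply the same principle at far greater scale.

```python
# Hypothetical advisory feed: package name -> set of versions with known flaws.
ADVISORIES = {
    "examplelib": {"1.0.2", "1.0.3"},
    "otherpkg": {"2.1.0"},
}

def vulnerable_dependencies(manifest: dict[str, str]) -> list[str]:
    """Flag pinned dependencies whose exact version appears in the feed."""
    return [
        f"{name}=={version}"
        for name, version in manifest.items()
        if version in ADVISORIES.get(name, set())
    ]

pinned = {"examplelib": "1.0.3", "otherpkg": "2.2.0"}
print(vulnerable_dependencies(pinned))  # ['examplelib==1.0.3']
```

Because the feed is community-maintained, a newly reported flaw can fail the next build automatically, with no manual audit in the loop.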
Creating a Clean Ecosystem
Code scanning can only go so far, however — for many development teams, the devil’s in the details. Establishing a secure and stable development and operations platform is just as important as scanning code for common issues.
There seems to be a disconnect in many DevOps systems, where the development and operations clusters are far removed from one another. The result is a system where hotfixes are applied to properly functioning code on one cluster just to get it to work on another.
While this satisfies the baseline requirement, it's terrible for security, as it often introduces new, unique errors and faults that would not exist without this cluster discrepancy.
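One lightweight guard against this discrepancy is an automated parity check between cluster configurations, run before each deployment. A minimal sketch, with illustrative config keys:

```python
def config_drift(dev: dict, ops: dict) -> dict:
    """Report keys whose values differ between the two environments.

    Each drifted key maps to a (dev_value, ops_value) pair; a key missing
    on one side shows up as None for that side.
    """
    drift = {}
    for key in dev.keys() | ops.keys():
        if dev.get(key) != ops.get(key):
            drift[key] = (dev.get(key), ops.get(key))
    return drift

# Illustrative settings: TLS matches, but pool size and debug mode drift.
dev_env = {"tls": True, "db_pool": 10, "debug": True}
ops_env = {"tls": True, "db_pool": 50}
print(config_drift(dev_env, ops_env))
```

A deployment gate that refuses to ship while this report is non-empty removes the temptation to hotfix working code around environmental differences.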
As crowdsourcing has become more accepted by the mainstream, there have been more and more tools introduced to the market that harness the power of the group to produce some amazing results.
One such tool in the security space, Evident.io, uses crowdsourced environment and protocol registers to intelligently analyze code, reducing the complexity of its analytics. These analytics are then used to pinpoint attack vectors, expose common issues, and clarify security problems that can be hard to see.
Adopting More Effective Development Strategies
The adoption of two-speed IT as a production principle is also incredibly powerful for both production and security. In this approach, two “lanes” are formed — rapid beta development and static release development.
In this approach, the rapid beta development is where new features are crafted and implemented, whereas the static release development track focuses on releasing products that meet need requirements and are stable.
Positioning separate tracks helps ensure security in a continuous environment because it provides an opt-in channel for experimental and beta features without impacting the stable track. Security for the opt-in track does not necessarily need to be as stringent as for the stable track, as the de facto principle there is certainly "caveat emptor".
That being said, implementing future features in a low security environment can help pinpoint the holes in the armor that might otherwise be obscured when implemented in a high security environment.
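The two-lane split can be sketched as a simple routing rule. The header name and handler functions below are assumptions for illustration, not a standard:

```python
# Illustrative handlers for the two tracks.
def handle_stable(path: str) -> str:
    return f"stable:{path}"

def handle_beta(path: str) -> str:
    return f"beta:{path}"

def route(path: str, headers: dict) -> str:
    """Send a request to the beta track only when the caller opts in."""
    # Beta features are strictly opt-in; everyone else stays on stable.
    if headers.get("X-Beta-Opt-In") == "true":
        return handle_beta(path)
    return handle_stable(path)

print(route("/convert", {"X-Beta-Opt-In": "true"}))  # beta:/convert
print(route("/convert", {}))                         # stable:/convert
```

Because the default is always the stable track, an experimental feature with a security hole never touches users who did not explicitly ask for it.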
Segmentation of Services
While creating a “unified” experience for developers has long been the rallying cry of most API proponents, in some cases, it is actually better to segment services, especially in the case of security and auditing.
Consider the following example. An API provider has created a “unified” API that combines data processing, media conversion, and large file transfer between servers, clients, and users. Each update to code requires a long-term audit, with multiple teams using the same code base.
What are the problems with this application? Well, first of all, we have multiple teams utilizing the same general codebase and applying specific solutions therein. The best operations schematic for the Media Conversion Team may not necessarily be best for the Data Processing Team, and certainly not for the Large File Transfer Team. With each new code fix, the code bloats, and different teams implement solutions that are contradictory in nature. Even with the teams conversing directly, this is inevitable.
What’s the solution? Segmentation. With segmentation, developers take functionality and divide the API along those lines. Essentially, a “main” API is developed to unify the functions in these other, disparate APIs, allowing individual APIs to be formed for specific use cases and functionalities.
In such a development process, the API, which formerly looked like this:
- Function API – Media Conversion, Data Processing, Large File Transfer
Turns into this:
- Function API – API with general-purpose calls, tying into:
  - Media Conversion API – API specifically designed to convert media for use in either Data Processing or Large File Transfer;
  - Data Processing API – API specifically designed for large data processing for use in either Large File Transfer or Media Conversion;
  - Large File Transfer API – API specifically designed to handle the transfer of large files, including those generated by the Media Conversion and Data Processing APIs.
By segmenting the API into various secondary APIs, each essentially becomes its own development segment. Security can then be audited for each function, as the security needs of each are drastically different.
Most importantly, segmentation results in secondary layers of security. Even if an attacker breaks through the "Function API", the additional gateways for each segment make it almost impossible to penetrate the rest of the security ecosystem.
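A minimal sketch of this layered setup, using hypothetical segment names and keys: the gateway dispatches requests, and each segment still verifies its own credentials.

```python
# Illustrative per-segment credentials; in practice each segment would
# hold its own secrets and auth policy.
SEGMENT_KEYS = {
    "media": "media-key",
    "data": "data-key",
    "transfer": "transfer-key",
}

# Stand-in handlers for the segmented sub-APIs.
def media_api(req: dict) -> str: return "converted"
def data_api(req: dict) -> str: return "processed"
def transfer_api(req: dict) -> str: return "transferred"

SEGMENTS = {"media": media_api, "data": data_api, "transfer": transfer_api}

def function_api(segment: str, api_key: str, req: dict) -> str:
    """The 'main' API: dispatch to a segment only if its own key matches."""
    # Breaching the gateway alone is not enough; each segment
    # independently verifies its own credentials.
    if SEGMENT_KEYS.get(segment) != api_key:
        raise PermissionError(f"invalid key for segment {segment!r}")
    return SEGMENTS[segment](req)

print(function_api("media", "media-key", {}))  # converted
```

The design choice here is defense in depth: compromising the gateway yields nothing without also compromising a segment's own credentials.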
Continuous delivery is an incredibly powerful practice, but it comes with its own issues and security concerns. While ensuring users always have the most up-to-date revision of a codebase makes for more powerful interactions with that codebase, it can also increase the chance of code failure or lax security. The solutions offered here are but a few of the many that can be implemented to mitigate the concerns raised by this development strategy.
While adoption of some or even all of these security solutions might seem a daunting prospect, the fact is that most API developers should implement them regardless — nothing but good can come from proper design approaches, segmentation, and code scanning.
Implementing Continuous Delivery is not only one of the best solutions for API developers facing large development lifecycles — it’s possibly the most powerful method for forming a strong userbase and ecosystem.
Do you have any suggestions for securing a Continuous Delivery environment? Comment below!