Ways to Harden New Platform Architecture

New software architecture styles emerge every year, promising greater flexibility, more power, and more freedom in how we compute. Unfortunately, with this greater power comes a greater responsibility to ensure security holes are addressed.

Popular architectural styles like virtual machines, serverless, microservices, and containers all come with new security concerns that architects can’t afford to overlook. Today, we take a DevSecOps approach to consider how new platform architectures may be vulnerable in new ways.

Let’s address fundamental issues these new styles are subject to, and consider methods we can adopt to harden their security.

Virtual Machines

A virtual machine is, in essence, a single virtualized instance running across multiple physical resources. For this reason, virtual machines present unique issues in terms of data access, redundancy, and holistic security when compared to other offerings. As we look at hardening techniques, it should be noted that these techniques have to be applied universally, across every instance. This is, of course, made much easier by setting up virtual instances from a common image, but it is nonetheless a potential weak point that should be addressed.

First and foremost, unless there is a valid and strong business case (such as the creation of a virtual network or virtual office), each virtual machine should function essentially in isolation. While communication to the outside world is of course expected, the core system itself should be isolated. If the virtual machine can speak with other virtual machines in the same cluster, privilege escalation can rapidly become a significant threat.

More specifically, when hardening a virtual machine, the ability of that machine to write to host memory, regardless of the reason, should be heavily limited and monitored. While there are some specific caveats (and indeed specific areas in which greater restriction is warranted), the simple fact is that a virtual machine is only as safe as it is controlled. When that machine is able to break free from its constraints, damage can be done at quite a large scale.

One example of the potential damage lies in the way virtual machines report and log data. As a virtual machine executes its functions, it will often write data to a local log so that remote administrators can track changes and contextualize processes. If this logging is not curtailed and its size is not limited, an attacker could flood these logs, consuming host storage in an internal DDoS-style attack that would then affect every other virtualized machine on the host.
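As a rough illustration of one mitigation, here is a minimal Python sketch that caps a guest application's local log growth so flooding the log can never exhaust shared storage. The log path and size limits are hypothetical examples, not recommendations.

```python
# A minimal sketch: bound a guest application's local log growth so that
# flooding the log cannot exhaust shared host storage. The path and the
# limits below are hypothetical examples.
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("guest-app")

# Rotate at 10 MB, keeping at most 5 backups; total disk usage is bounded
# at roughly 60 MB no matter how much the application (or an attacker) logs.
handler = RotatingFileHandler(
    "/var/log/guest-app.log", maxBytes=10_000_000, backupCount=5
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)
```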

Another major security hole in virtual machines is the virtualization of attached media. A virtual machine does not have a physical CD drive or USB port, but these can often be emulated quite readily, especially by mapping virtual drives to local physical media. Some hypervisors allow this by default, and in such cases, a gaping security hole is exposed, allowing users to mount malicious ISOs or load portable, sandboxed programs for privilege escalation, virus distribution, or any number of other attack vectors.
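To make this auditable, here is a sketch that flags guests with emulated CD-ROM drives attached, assuming a libvirt-based hypervisor with the libvirt-python bindings installed; any device it finds should be justified or removed.

```python
# A sketch that flags libvirt guests with emulated CD-ROM drives attached.
# Assumes a libvirt-based hypervisor and the libvirt-python bindings.
import xml.etree.ElementTree as ET
import libvirt

conn = libvirt.openReadOnly(None)  # connect to the local hypervisor
for dom in conn.listAllDomains():
    devices = ET.fromstring(dom.XMLDesc(0))
    cdroms = devices.findall("./devices/disk[@device='cdrom']")
    if cdroms:
        print(f"{dom.name()}: {len(cdroms)} emulated CD-ROM device(s) attached")
conn.close()
```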

No matter what, however, a virtual machine is like any other machine: it is fundamentally insecure, and efforts to make it more secure will always come up against more advanced attacks and escalating threats. Accordingly, one of the best things you can do to improve holistic virtual machine security is to remove virtual machines when they no longer serve a purpose. Because they are so cheap to spin up and maintain, virtual machines are often left running in perpetuity. This increases the attack surface of your entire underlying network and adds attack vectors to the system itself through poor maintenance and lack of oversight.
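Even a simple retirement policy helps here. The sketch below flags instances idle past a threshold; the inventory and its last_used field are hypothetical stand-ins for whatever your hypervisor's API or asset database actually reports.

```python
# A sketch of a "retire idle VMs" sweep. The inventory below is a
# hypothetical stand-in for data from your hypervisor's API or CMDB.
from datetime import datetime, timedelta

MAX_IDLE = timedelta(days=30)

inventory = [
    {"name": "build-agent-07", "last_used": datetime(2024, 1, 5)},
    {"name": "demo-env-old", "last_used": datetime(2023, 6, 1)},
]

for vm in inventory:
    if datetime.now() - vm["last_used"] > MAX_IDLE:
        print(f"Flagging {vm['name']} for decommissioning")
```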

MicroVMs

MicroVMs are, generally speaking, the same thing as standard virtual machines, with the distinction that each serves a single function or isolated purpose. Due to this, they carry some specific caveats around their security and around the nature of their use, interaction, and maintenance.

A microVM doesn’t really do anything on its own – instead, it leverages a powerful backend of integrations and resources to carry out its work as what is essentially a discrete compute unit. Unfortunately, this means that the underlying system of integrations and supporting resources forms a single, unified threat surface. Ensuring that each resource is trusted, updated, isolated, and dependable is a large task, but it must be undertaken so that the microVM can be safely utilized.

There is also the fact that, unlike a general-purpose virtual machine, microVMs often take in purpose-built data from a variety of sources. This, in turn, means that data input should be sanitized and validated, as malformed data fed into the system could incapacitate many of its safeguards.
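As a sketch of what that validation might look like, the following assumes the microVM expects a small JSON payload with a known shape; the field names, allowed actions, and size limit are all hypothetical.

```python
# A minimal input-validation sketch for a microVM workload. Field names,
# allowed actions, and the size limit are hypothetical examples.
import json

MAX_PAYLOAD_BYTES = 4096
ALLOWED_FIELDS = {"job_id", "action"}
ALLOWED_ACTIONS = {"encode", "thumbnail"}

def validate(raw: bytes) -> dict:
    if len(raw) > MAX_PAYLOAD_BYTES:
        raise ValueError("payload too large")
    data = json.loads(raw)
    if not isinstance(data, dict):
        raise ValueError("payload must be a JSON object")
    if set(data) - ALLOWED_FIELDS:
        raise ValueError("unexpected fields")
    if data.get("action") not in ALLOWED_ACTIONS:
        raise ValueError("unknown action")
    return data
```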

That being said, a microVM is by design so isolated from other compute units that each security threat is typically erased when the microVM reaches the end of its short lifespan. Still, microVMs present many of the same security concerns a traditional VM does, and should be treated as a variant of the same class of solutions.

Serverless

Serverless is an interesting approach; in essence, the platform manages resource allocation when and where it is needed. While this Function-as-a-Service approach is great for event-driven, high-availability, speed-dependent systems, it does come with its own security implications that should be properly contextualized.

The greatest security risk in the serverless world is the method and level of access granted to each resource request. By its nature, serverless must grant each function access to resources on request, but in doing so there is the twin danger of granting far too much access up front and of failing to manage that access properly later on. The result can be a function that is granted too many resources and is then allowed to requisition more and more, effectively initiating a denial-of-service attack from within.
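One concrete countermeasure is a hard ceiling on how far any single function can scale. As a sketch, assuming AWS Lambda and the boto3 SDK (the function name here is hypothetical):

```python
# A sketch of capping a single function's ability to requisition resources,
# assuming AWS Lambda and boto3. The function name is hypothetical.
import boto3

lam = boto3.client("lambda")

# Reserve a hard concurrency ceiling so a misbehaving or abused function
# cannot scale itself into a denial of service against the shared pool.
lam.put_function_concurrency(
    FunctionName="image-resizer",
    ReservedConcurrentExecutions=10,
)
```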

These sorts of attacks can largely be mitigated by sanitizing inputs. This is especially true for any serverless application that allows for scripting or database functionality, as something as simple as a SQL injection can dramatically change the resource allocation and capabilities of a simple, discrete compute system.
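The classic defense is to never build SQL out of raw input. Here is a minimal sketch using Python's built-in sqlite3 driver; the table and column names are hypothetical.

```python
# A sketch of parameterized queries: the driver binds job_id as data,
# never as SQL text, so injected input cannot rewrite the query. The
# schema is hypothetical.
import sqlite3

conn = sqlite3.connect("jobs.db")

def get_job(job_id: str):
    return conn.execute(
        "SELECT status FROM jobs WHERE id = ?", (job_id,)
    ).fetchone()
```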

There is also the obvious issue of putting all serverless requests on the same pool of resources. This, in turn, means that the libraries and integrations underlying that service must themselves be managed, updated, and secured. This was a problem with virtual machines as well, but in the serverless world, a custom-built function that exploits a flaw in a publicly facing library, without its access control properly managed, can essentially take over an entire system. With microVMs, you can simply shut down the instance; with serverless, recovery in such a case is considerably more complex.

There is also the reality that serverless introduces greater system complexity than traditional solutions. Within this complexity, security issues can become larger than they first appear simply because they are harder to detect, test, patch, and prevent. The attack surface also grows, and grows more complex, with each additional function spun up.

Microservices

Many of the core security threats underlying APIs are just as valid for microservices. External requests mean that each data package needs to be validated, sanitized, and within given parameters of form. Resources need to be limited, access should be regulated, and, generally speaking, microservices should speak to one another internally without exposing the functionality of the underlying core system.

That being said, a collection of microservices has many more moving parts than a traditional API, and as such, it bears some discussion. A single API split into ten microservices means that your attack surface, and the number of specific attack vectors, has increased dramatically. While this can be fought against with proper rate limiting and data validation, doing so starts bringing the collection of microservices closer to a singular, unified product – in other words, an API.
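For reference, this kind of rate limiting is often implemented as a per-client token bucket; a minimal sketch follows, with hypothetical rate and burst values.

```python
# A minimal per-client token-bucket rate limiter. Rate and burst capacity
# here are hypothetical values.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = TokenBucket(rate=5.0, capacity=10)  # ~5 requests/sec, burst of 10
```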

The unique nature of such a collection of microservices means that some attacks exist only in this specific configuration. For instance, with improper configuration, an attacker could craft a request that sets the microservices echoing back and forth to one another, constantly resending the same traffic to each node – akin to a self-inflicted DDoS attack. Even when requests are properly rate limited, if payload size is not taken into consideration, this can result in overflow conditions that do massive damage to the health of the underlying network and to the microservices' ability to do their jobs.
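One simple guard against this echo loop is a hop count that every service propagates and enforces, sketched below. The X-Hop-Count header is a convention invented for this example, not a standard.

```python
# A sketch of loop-breaking via a propagated hop count. The X-Hop-Count
# header is a hypothetical convention, not a standard.
MAX_HOPS = 5

def forward(request_headers: dict, call_next_service):
    hops = int(request_headers.get("X-Hop-Count", "0"))
    if hops >= MAX_HOPS:
        raise RuntimeError("possible request loop detected; dropping request")
    headers = dict(request_headers)
    headers["X-Hop-Count"] = str(hops + 1)
    return call_next_service(headers)
```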

There is also the very obvious fact that a microservice is essentially a service broken into parts, which brings the same redundant requirements to the forefront: libraries and resources must be upgraded, monitored, and so on, across every part. Much of this can be automated or even excluded – for instance, a microservice piece that never messages the client has no need for a messaging library – but in practice it all means added complexity.

Microservice hardening is best handled in a layered approach. Secure the microservice itself first: isolate any API issues, undocumented endpoints, and escalation threats. Then look to the underlying software and the transport layer; securing these ensures that data in transit is safe from manipulation and that your efforts at the microservice layer are not wasted. Finally, look to the hardware, since outdated core packages, underlying vulnerabilities, and even physical insecurities can essentially nullify your entire system's security. Hardening at each stage – implementing firewalls, preventing code injection, mitigating attacks using heuristics and baseline comparisons, and so on – is required for total microservice security.
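As one example of the transport-layer step, a service-to-service client can refuse anything older than TLS 1.2, as in this Python sketch (the internal hostname and port are hypothetical):

```python
# A sketch of hardening the transport layer between microservices by
# refusing TLS versions older than 1.2. Hostname and port are hypothetical.
import socket
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection(("orders.internal", 8443)) as sock:
    with context.wrap_socket(sock, server_hostname="orders.internal") as tls:
        print("negotiated", tls.version())
```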

Containerization

Containers are, by design, discrete systems. Their functions and data are meant to be separated from one another, and as such, there is a certain amount of inbuilt hardening. Unfortunately, this ethos also requires that each container talks to other containers and to the underlying systems, and as such, all of the issues inherent in virtual machines are also found with containers.

The biggest threat here, as with most virtualization products, is unrestricted data access. Data flow should also be limited: because data flowing from one container to another can impact the whole collection of containers, ensuring that data stays confined to the container in question until it has been sanitized can go a long way towards securing your network.
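As a sketch of locking down data flow at creation time, assuming the Docker SDK for Python, a container can be started with no network and a read-only root filesystem; the image and command here are purely illustrative.

```python
# A sketch of a locked-down container: no network, read-only root
# filesystem, and all Linux capabilities dropped. Assumes the Docker SDK
# for Python; the image and command are illustrative.
import docker

client = docker.from_env()
output = client.containers.run(
    "alpine:3.19",
    "echo isolated",
    network_mode="none",  # no container-to-container traffic
    read_only=True,       # root filesystem cannot be written
    cap_drop=["ALL"],     # drop all Linux capabilities
    remove=True,
)
print(output)
```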

Each container can also have its own set of updates and patches, especially across many clients. This can lead to security issues from an older version propagating to newer versions when code execution is not properly limited and communication is free and open.

It should also be noted that, unlike with VMs, the kernel is shared by all containers. This means that any kernel exposure or vulnerability is essentially global and can impact every single container on the host, so kernel security is a major issue. It's also possible for the images used to create containers to be exposed or poisoned, and when that occurs, poisoned images can spread to all areas and fundamentally make your network insecure.
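One mitigation against poisoned images is to pull by immutable digest rather than by mutable tag, so a tampered image no longer matches. A sketch, again assuming the Docker SDK for Python (the digest is a placeholder you would pin yourself):

```python
# A sketch of pulling a container image by pinned digest instead of a
# mutable tag. The digest is a placeholder; pin the real one you trust.
import docker

client = docker.from_env()
PINNED = "alpine@sha256:<expected-digest>"  # placeholder digest

image = client.images.pull(PINNED)
print("pulled", image.id)
```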

Efforts to Secure Virtualization

Almost all of the new platform architectures we've discussed in this piece are fundamentally different approaches to discrete virtualization. This is becoming more and more normal as enterprises adopt cloud computing and data rental strategies. Unfortunately, it also means that the core security risks traditional to physical systems are being replaced with more complex, sometimes fundamental issues – issues often overlooked in favor of simple suggestions such as “install firewalls on your emulated operating system,” which do very little, if anything, to stem systemic threats.

With this in mind, understanding the core limitations, threats, and basic exposures of the virtualization concept can help secure your systems, if not perfectly insulate them from potential threats.

Do you think we hit the biggest threats to this model of architecture design? Let us know in the comments below.