API security is a common subject, and for good reason — as the average user becomes more adept at utilizing ever more powerful systems to complete incredible tasks, the old methods of secure communication become less secure. Yesterday's crème de la crème of IT and API security has become tomorrow's vulnerability.

Due to the ever-shifting world of API development and the Internet of Things (IoT), fostering an internal culture of security is paramount to an organization's success. This means adopting API security and developer responsibility as cultural norms within your organization. In this piece, we talk about why this culture is so important, and what steps to take to improve your internal approach to promote sustainability and growth.

Holistic Security — Whose Responsibility?

There is a mindset amongst many novice developers (and, unfortunately, many seasoned veteran developers) that security is the responsibility of the user. After all, it is the user that holds the keys to the kingdom in the form of a username and password, along with authentication/authorization profiles and usage needs.

This is fundamentally flawed, however, if only for one reason — the user does not have complete access to the system they are requesting access to. By its very nature, an API is restrictive to those remotely accessing it when compared to those with physical access to its server. Thus, both the API user and the API developer have a large security responsibility.

Think of it this way — a person invites a friend to house sit for a week and gives them a key. At that moment, the key is the friend's responsibility. In an API environment, the username, password, or token are similarly the user's responsibility.

But whose responsibility is the front door? Who decided the type of material the door was made of? Who needed to change the lock when it stopped working properly? Who had the keys made? The homeowner did. In the API space, providers must similarly take responsibility to ensure security within their system.

A user can only be responsible for that which they have — the methods by which they authenticate and authorize themselves. Federation, delegation, physical security, and internal data security are within the purview of the API developer, for the simple reason that the developer is the one most able to ensure these systems are secure.

The Importance of CIA

An ideal system balances Confidentiality, Integrity, and Availability in harmony with security solutions and access requirements. With so many APIs functioning on a variety of platforms, and with many modern systems utilizing cloud computing and storage, internal security balanced with external security is of incredible importance.


An internal security culture restricts data to only those who have the rights to see it.

Confidentiality is the act of keeping information away from those who should not be accessing it. In the API space, a division needs to be made between external and internal confidentiality. External confidentiality is, obviously, the restriction of external access to confidential materials. This includes restricting access to API functionality not needed for the user's specific requirements, and restricting access to password databases.
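As a concrete illustration of that external restriction, access to API functionality can be denied by default and granted per scope. A minimal sketch in Python follows; the endpoint paths and scope names are hypothetical, not a real API:

```python
# Hypothetical sketch: expose each endpoint only to clients whose granted
# scopes cover it. Endpoint paths and scope names are illustrative.

ENDPOINT_SCOPES = {
    "/orders": "orders:read",
    "/orders/create": "orders:write",
    "/admin/users": "admin",
}

def is_allowed(endpoint, granted_scopes):
    """Deny by default: unknown endpoints and missing scopes are rejected."""
    required = ENDPOINT_SCOPES.get(endpoint)
    if required is None:
        return False
    return required in granted_scopes
```

The deny-by-default stance is the point: a client that only needs to read orders never even sees admin functionality, which is exactly the restriction of functionality "not needed for the user's specific requirements."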

While confidentiality is often handled by encrypting information, there is a great deal of information that cannot be encrypted — the knowledge held by the developers of the API themselves. A breach of this internal confidentiality is often far more dangerous than any external confidentiality issue could ever be.

As an example, assume a developer is using a flat database system for passwords that is protected by an internal authentication service. This service, hosted on a Linux server, requires a username and password on the root level to access the authentication tables.

A hacker is attempting to access a confidential server, and already has a direct connection to your systems and servers through an API. He calls the developer's office, states that he is the hardware provider for the server and is calling to issue a patch for a massive vulnerability, and asks for a private, unrestricted session with the server.

The developer creates a root username/password combination, which the hacker is able to use to enter the service unrestricted, and steal the authentication tables for nefarious purposes.

This is called phishing, and it's a huge risk that many people have fallen victim to. Promoting an internal culture of security in this realm means ensuring that data is kept secure, that developers follow policies ensuring the security of authentication and authorization protocols, and that attacks like phishing are avoided.

In addition to ensuring that a culture of security exists, make developers aware of these threats. Have a properly utilized and understood system of Authentication, Authorization, Federation, and Delegation to ensure unauthorized external access granted by internal developers becomes a non-threat.
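One internal safeguard worth pairing with those policies: even if an authentication table is stolen, as in the scenario above, salted password hashing keeps the raw credentials out of the attacker's hands. A minimal sketch using only Python's standard library (the iteration count and salt size are illustrative choices, not a vetted configuration):

```python
import hashlib
import hmac
import os
from typing import Optional, Tuple

def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    """Derive a salted hash; store (salt, digest) instead of the password."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

With this in place, a stolen table yields only salts and digests, which must be brute-forced one password at a time rather than read directly.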


An internal security culture ensures data is changed only by those authorized to do so.

Integrity in the API space means ensuring data accuracy and trustworthiness. By maintaining data streams and employing secure workstations, unauthorized changes to data in transit should not occur, and the alteration of hard-coded information becomes a non-threat.

Unlike confidentiality, threats in this category are often internal. In the corporate world, disgruntled employees, faulty servers, and even poor versioning can lead to the change of data during the transit cycle. Poorly coded software can return values that should not be returned, vulnerabilities that should be secured by software can be breached, and physical transmission of code can result in captured sessions and man-in-the-middle attacks.
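To make unauthorized changes to data in transit detectable, one common technique is to attach a message authentication code to each payload; if anything is altered along the way, verification fails. A hedged sketch with Python's standard library (the hard-coded key is for illustration only; real keys belong in a secret store):

```python
import hashlib
import hmac

SHARED_KEY = b"demo-key"  # illustrative only; never hard-code real keys

def sign(payload: bytes) -> bytes:
    """Attach an HMAC-SHA256 tag so the receiver can detect tampering."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(payload), tag)
```

A man-in-the-middle who alters the payload cannot forge a matching tag without the shared key, so tampered messages are simply rejected.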

One of the best ways to manage the integrity of a system is to educate developers on exactly what their API traffic should look like. This can be done with a range of API Metric solutions which can show the rate of traffic, the type of data requested, the average connection length, and more.
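The "know what your traffic should look like" idea can be reduced to a simple baseline check: record per-interval request counts, then flag counts that stray far from the historical mean. A minimal sketch (the three-sigma threshold is an assumed starting point, not a tuned value):

```python
from statistics import mean, stdev

def is_abnormal(history, current, sigma=3.0):
    """Flag a traffic count deviating more than `sigma` standard
    deviations from the historical per-interval baseline."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sd = mean(history), stdev(history)
    if sd == 0:
        return current != mu
    return abs(current - mu) > sigma * sd
```

Real metric products do far more (percentiles, seasonality, per-endpoint baselines), but even this crude check teaches developers what "normal" looks like for their API.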

Know which services are most often attacked and through what method, and take steps to secure these resources, such as bastion hosts (nodes purposely designed to withstand a tremendous amount of illicit traffic) or DMZs (zones that separate a secure internal network from an insecure external network). Prevent these problems by educating developers on secure data transmission, protocol requirements, and what constitutes a “normal” versus “abnormal” data stream.

Adopt a culture that places prime importance on risk management — especially when it comes to integrity, one of the harder things to maintain. Balance risk management with the effectiveness of the service, however, ensuring that integrity exists alongside ease of use and access for clients and users.

Linda Stutsman put it best in an interview with Andrew Briney of Information Security Magazine:

“It’s said time and time again, but it’s absolutely true: You have to get to the point where risk management becomes part of the way you work. That starts with good policies driven by the business — not by security. Communication is absolutely the top factor, through policies and training programs. Then it’s determining the few significant metrics that you need to measure.”

As an aside, the integrity of an API is not wholly dependent on the software or code transmission factors — a lot can be said for the physical network the API is planning on running through, and the limitations inherent therein.

For instance, if an API developer is creating an API for local area record transmission, such as a hospital setting, knowing whether the signal will be transmitted through coaxial or fibre-optic cable, whether these cables will be running near power transmission causing data loss, and even whether the data will be exiting to the wider Internet, will inform the developer as to error-checking, packet-loss mitigation, and integrity-increasing features that might be required and maintained.
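For the non-malicious side of that problem (corruption from noisy cable runs or lossy links), a lightweight integrity check such as a CRC lets the receiver detect damaged records and request retransmission. A sketch with Python's standard library; the four-byte framing format is invented for illustration:

```python
import zlib

def frame(record: bytes) -> bytes:
    """Prepend a CRC32 checksum so the receiver can detect corruption."""
    return zlib.crc32(record).to_bytes(4, "big") + record

def unframe(data: bytes) -> bytes:
    """Return the record, or raise if the checksum no longer matches."""
    expected = int.from_bytes(data[:4], "big")
    record = data[4:]
    if zlib.crc32(record) != expected:
        raise ValueError("record corrupted in transit")
    return record
```

Note that a CRC only catches accidental corruption; defeating deliberate tampering requires a keyed check like the HMAC approach described in the integrity section.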


An internal security culture focuses on high uptime and ease of access.

While it’s important to ensure that your API has great data confidentiality and integrity, perhaps the most important attribute of an effective culture of security is ensuring availability. After all, if an API cannot be used by its users, then is it really an API?

Whether an API is Private, Public or Partner-centric, ensuring your API is accessible is incredibly important. This can be balanced in a handful of ways, but all of these techniques can be broadly summed up in two categories — ensuring availability through developer activity and through user activity.

Let’s look at developer activity. First and foremost, developers should understand that every single thing they do to an API or the server an API runs on will affect the availability of the system. Updating server firmware, changing the way an API call functions, or even accidentally bumping into a power strip can result in the failure of availability for many users.

Additionally, some changes that are considered simple are actually catastrophic for the end-user. Consider versioning — while updating to the newest version of a dependency might deliver the most up-to-date content for your user, if this update does not support legacy systems or services, an API might be fundamentally broken. Changes should be balanced through the lifecycle of the API, regardless of whether the API in question is a First or Third Party API.

User activity is far easier to handle. Threats to availability from users often spring from poorly formed requests in the case of non-malicious threats, and from port-scanning or traffic flooding (especially UDP flooding) in the case of malicious threats. These threats can often be taken care of simply by choosing the correct architecture type and implementing solutions such as buffer overflow mitigation, memory registers, and error reporting.
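Traffic flooding in particular is often blunted at the edge with per-client rate limiting. A token-bucket sketch in Python (the capacity and refill rate are illustrative knobs; real deployments usually enforce this in a gateway or load balancer rather than application code):

```python
import time

class TokenBucket:
    """Per-client rate limiter: requests beyond the refill rate are
    rejected instead of being allowed to flood the backend."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)   # start full
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A well-behaved client never notices the limiter, while a flooding client quickly exhausts its bucket and is rejected cheaply, preserving availability for everyone else.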


4 Aspects of a Culture of Security

So far we’ve covered high-level concepts — let’s break down what an effective culture of security is in four bullet points. These points, when implemented fully, should not only create an effective culture of security, but lead to growth and stability over time.

A culture of security entails:

  • Awareness of Threats – developers should be aware of potential threats to their system, and code their APIs accordingly;
  • Awareness of Vulnerabilities – developers should recognize vulnerabilities inherent in their system, servers, or devices. This includes vulnerabilities arising from their own code and systems as well as those from third party vendors, services, or servers;
  • Awareness of Faults – developers should be conscious of their own personal faults. If a developer has a history of misplacing thumbdrives, sharing passwords, or worse, they should not be responsible for internally managed secure services;
  • Awareness of Limitations – developers should know the network that is being utilized for their API, and what limitations it presents. Security solutions for a closed intranet running on fibre-optic cable will be different than solutions for an Internet-facing connection running on coaxial or twisted-pair.

Considering “Culture”

It’s important to consider the function of culture within an organization. All of the topics discussed in this piece are applicable in a huge range of situations, environments, and organizations, due to the nature of security. Security concepts are universal, and scale directly with the size of the data being protected.

The way a culture of security is built and perpetuated is directly influenced by the type of organization which adopts it. For instance, in a governmental organization, this culture can be directly enforced through policy, law, and guidelines, whereas in a non-profit, this information must be disseminated through classes or instructional guidelines to workers who may be unfamiliar with such stringent policies.

In a corporate environment, much of this security can be managed directly through limiting privileges and abilities. Each server or service might have its own administrator, and by limiting powers, knowledge, and abilities to only those who need them to function, you maintain a culture of security.

In a small startup or non-profit, however, one person may need access to ten different services and servers. In this environment, where the success of the company directly controls the well-being of its employees in a very granular way, reaching out verbally or via email can be extremely effective, as there is a personal stake in security.

All Organizations Should Perpetuate an Internal Culture of Security

Fundamentally, fostering an internal culture of security is easiest to do in the earliest stages — beginning with a strong security-focused mindset ensures that you can revise, expand, and reiterate while staying safe against current attacks. Additionally, preparing your systems for known attacks and being aware of any vulnerabilities ensures that any system can stay secure long into the future against new, unforeseen attacks.

By acting on these points, you make your API a more effective service for the user. An insecure service might be functional, but a secure service is fundamentally useful.

About Kristopher Sandoval

Kristopher Sandoval is a web developer, author, and artist based out of Northern California with over six years of experience in Information Technology and Network Administration. He writes for Nordic APIs and blogs on his site, A New Sincerity.