LLM Security Hinges on API Security

The use of large language models (LLMs) has gained tremendous traction over a short time span. Generative AI is dramatically altering not only user experiences but also how software gets made. And paired with APIs, LLMs are easily integrated into all sorts of applications, putting impressive AI capabilities within reach.

At Austin API Summit 2024, Ankita Gupta, Co-founder and CEO of Akto, will share advice for LLM API security.

According to Ankita Gupta, Co-founder and CEO of Akto, APIs are the backbone of the current AI revolution, helping to collect data and to refine and integrate these powerful models. However, LLMs bring API security risks of their own. Most notably, LLM-related threats like prompt injection and data leakage should be top of mind for AI companies hoping to compete in a crowded market.

Before Austin API Summit 2024, we’re syncing with key speakers to learn a bit about what they’re working on and to gather their perspectives on the API economy at large. Gupta, who worked at VMware, LinkedIn, and JP Morgan before founding Akto, has a keen sense of the technical and business implications of poor LLM security.

I recently interviewed Gupta about securing LLM APIs, a theme of her upcoming session. Check out Gupta’s answers below, and be sure to attend Austin API Summit for insights on the future of AI and APIs, API security, and much more.

Why are APIs so crucial for LLMs to function?

In our experience, the average organization uses about ten LLMs. These models get their data indirectly through APIs. So, when we talk about LLMs, we’re really talking about the APIs that help us use and fine-tune these models.

Developers use APIs to add LLM features to their existing applications for operations such as fetching and processing data. Scale is another important aspect: we rely on APIs to ensure that our LLMs can handle more and more data. We also use APIs to update and improve LLMs and to integrate them into our application workflows. Basically, APIs are the way to use, scale, and customize LLMs.
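For example, a typical integration might look like the minimal Python sketch below, which calls an OpenAI-style chat completions endpoint over HTTP. The model name and payload shape are assumptions here; substitute whatever your provider documents.

```python
import os
import requests

# Minimal sketch: calling a hosted LLM over its HTTP API.
# Assumes an OpenAI-style chat completions endpoint; adapt the
# URL, model name, and payload shape to your provider.
API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # never hard-code credentials

def summarize(text: str) -> str:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-4o-mini",  # illustrative; use your provider's model
            "messages": [
                {"role": "system", "content": "Summarize the user's text."},
                {"role": "user", "content": text},
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```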

Why has LLM security risen in importance lately?

On March 20, 2023, ChatGPT suffered an outage caused by a vulnerability in an open-source library, which may have exposed some customers’ payment-related information. In September 2023, Gartner revealed that 34% of organizations are either already using or implementing AI application security tools to mitigate the accompanying LLM risks, and over half (56%) are exploring such solutions.

In the past year, approximately 77% of organizations have embraced or begun exploring generative AI (GenAI), driving the demand for streamlined and automated processes. As reliance on GenAI models and LLMs such as ChatGPT has grown exponentially over the last 12 months, securing these models has become a priority for organizations.

What are some specific API vulnerabilities that are unique to LLMs? How can they be avoided?

One of the biggest is prompt injection, where malicious inputs manipulate the LLM’s output; it has become a major concern. Another is denial of service (DoS), where the system is overloaded with requests, leading to service disruptions. A third is overreliance on LLM outputs without adequate verification mechanisms, which can lead to inaccuracies and data leaks.
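To give a flavor of mitigation, here is a minimal Python sketch of one basic hardening step against prompt injection: screening untrusted input for common injection phrasing and keeping it strictly in the user role. The patterns are illustrative assumptions, and pattern-matching alone is far from a complete defense.

```python
import re

# Basic hardening sketch (not a complete defense): screen user input
# for common prompt-injection phrasing before it reaches the model.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard .* (rules|instructions)",
    r"you are now",
]

def screen_input(user_text: str) -> str:
    lowered = user_text.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("Potential prompt injection detected")
    return user_text

def build_prompt(user_text: str) -> list[dict]:
    # Untrusted content goes only in the user role; the system prompt
    # never interpolates user text into its own instructions.
    return [
        {"role": "system", "content": "Answer based only on the user's question."},
        {"role": "user", "content": screen_input(user_text)},
    ]
```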

Is the onus on the LLM API provider or the consumer to plug these gaps?

It’s a shared responsibility, really.

The LLM API provider must ensure that the API is built to be used securely and complies with data protection laws. This includes strong authentication, encryption, and access controls. Providers should also make sure their APIs can handle a high volume of requests and have proper rate limiting in place to ward off DoS attacks.
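As a rough illustration of the rate-limiting piece, here is a minimal token-bucket sketch in Python. The RATE and BURST values are arbitrary assumptions, and production systems usually enforce this at the gateway or proxy layer rather than in application code.

```python
import time
from collections import defaultdict

RATE = 5    # tokens replenished per second (illustrative)
BURST = 20  # maximum bucket size (illustrative)

# Each API key maps to (tokens remaining, last refill timestamp).
_buckets: dict[str, tuple[float, float]] = defaultdict(
    lambda: (float(BURST), time.monotonic())
)

def allow_request(api_key: str) -> bool:
    """Return True if the caller may proceed, False if rate-limited (HTTP 429)."""
    tokens, last = _buckets[api_key]
    now = time.monotonic()
    # Refill the bucket in proportion to elapsed time, capped at BURST.
    tokens = min(BURST, tokens + (now - last) * RATE)
    if tokens < 1:
        _buckets[api_key] = (tokens, now)
        return False
    _buckets[api_key] = (tokens - 1, now)
    return True
```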

On the other hand, it’s the consumer’s responsibility to integrate the APIs properly, according to the provider’s guidelines and documentation. Consumers should also ensure they send data to the APIs securely, among other things.
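One concrete consumer-side habit is redacting obvious secrets and PII before a payload ever leaves your network for a third-party LLM API. A minimal sketch, with illustrative (and far from exhaustive) patterns:

```python
import re

# Consumer-side sketch: strip obvious secrets and PII from text
# before sending it to a third-party LLM API.
REDACTIONS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact("Customer jane@example.com paid with card 4111 1111 1111 1111"))
# -> Customer [REDACTED EMAIL] paid with card [REDACTED CARD]
```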

What are the negative business outcomes of not securing LLM APIs?

Well, there can be quite a few negative outcomes if LLM APIs are not properly secured. One major concern is the risk of customers’ data leaking. Just imagine being a provider of LLMs and having your customers’ sensitive data exposed because of vulnerabilities in your LLM APIs. You don’t want that situation. It can seriously damage your brand reputation. There’s a lot at stake, you know?

On top of that, the LLM industry is getting more and more competitive these days. It’s not just about having great functionality anymore. Security is becoming a crucial factor, especially when your consumers are enterprises. So, if you want to stand out in this crowded market, you need to prioritize the security of your LLM APIs. It’s not just a nice-to-have anymore — it’s a must-have.

Why are you excited to speak at Austin API Summit 2024?

Austin API Summit is a great place to meet fellow API designers, developers, and API security folks. I’m excited, as it’s my first time speaking at a Nordic APIs summit. I look forward to attending more sessions and learning from the incredible speakers.

Without giving away too much else, what do you hope attendees take away from your session?

The purpose of the talk is to educate developers, API designers, architects, and organizations about the potential security risks of deploying and managing LLM APIs, and about some basic measures to mitigate those risks.