Session

Security of LLM APIs

In this session, I will talk about API security of LLM APIs, addressing key vulnerabilities and attack vectors. The purpose is to educate developers, API designers, architects and organizations about the potential security risks when deploying and managing LLM APIs.

1. Overview of Large Language Models (LLMs) APIs
2. Understanding LLM Vulnerabilities:
– Prompt Injections
– Sensitive Data Leakage
– Inadequate Sandboxing
– Insecure Plugin Design
– Model Denial of Service
– Unauthorized Code Execution
– Input attacks
– Poisoning attacks
3. Best practices to secure LLM APIs from data breaches

I will explain all the above using real life examples.

This session will be held at at our upcoming event:

Austin API Summit 2024

Register Learn More
Smarter Tech Decisions Using APIs

Smarter Tech Decisions Using APIs

API blog

High impact blog posts and eBooks on API business models, and tech advice

API conferences

Connect with market leading platform creators at our events

API community

Join a helpful community of API practitioners

API Insights Straight to Your Inbox!

Can't make it to the event? Signup to the Nordic APIs newsletter for quality content. High impact blog posts on API business models and tech advice.

Join Our Thriving Community

Become a part of our global community of API practitioners and enthusiasts. Share your insights on the blog, speak at an event or exhibit at our conferences and create new business relationships with decision makers and top influencers responsible for API solutions.