In this session, I will talk about the security of LLM APIs, addressing key vulnerabilities and attack vectors. The purpose is to educate developers, API designers, architects, and organizations about the potential security risks of deploying and managing LLM APIs.
1. Overview of Large Language Model (LLM) APIs
2. Understanding LLM vulnerabilities:
– Prompt injection
– Sensitive data leakage
– Inadequate sandboxing
– Insecure plugin design
– Model denial of service
– Unauthorized code execution
– Input attacks
– Poisoning attacks
3. Best practices to secure LLM APIs from data breaches
I will explain all of the above using real-life examples.
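To make two of the vulnerability classes above concrete, here is a minimal sketch of a pre-request guard an LLM API gateway might run: it rejects oversized prompts (a simple mitigation against model denial of service) and screens for common prompt-injection phrasing. The function name, patterns, and length budget are illustrative assumptions, not an exhaustive or production-ready defense.

```python
import re

# Assumed character budget; real limits depend on the model and pricing tier.
MAX_PROMPT_CHARS = 4000

# Illustrative injection heuristics only; determined attackers can evade
# simple pattern matching, so treat this as one layer of defense in depth.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (the |your )?system prompt", re.I),
]

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a user prompt before it reaches the LLM."""
    if len(prompt) > MAX_PROMPT_CHARS:
        # Oversized inputs can exhaust model capacity (model DoS).
        return False, "prompt too long"
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            return False, "possible prompt injection"
    return True, "ok"
```

For example, `screen_prompt("Summarize this article")` passes, while `screen_prompt("Ignore all previous instructions")` is rejected. In practice such a filter would sit alongside authentication, rate limiting, and output redaction rather than replace them.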
View the Session Slides Here.