Skip to main content
DASH NYC, June 9-10 | AI + Observability.

Back to Catalog

From Prompt Injection to Data Exfiltration: Defending AI Agents at Runtime

About this Session

As your organization ships more AI agents and LLM-powered apps, each deployment expands the attack surface — often without security review or visibility into runtime behavior. Unlike traditional apps, agents combine access to sensitive data, exposure to untrusted external content (like web results or online forums), and the ability to act autonomously on that data. When something goes wrong, it's rarely obvious where.

 

Join this session to see how AI Guard gives AI development and security teams runtime visibility into agentic applications. We'll cover:

  • What's breaking: Prompt injection, tool misuse, and unintended data exfiltration are showing up in production AI systems, and standard AppSec tooling doesn't catch them.
  • What AI Guard does today: Automatically discover agents in your environment, trace full conversational context, detect behavioral anomalies, and block attacks without requiring manual rule-writing.
  • Where we're taking it: We’re exploring deeper pipeline integration, MCP protection, enhanced customization, and authentication protection to make security a native part of how teams ship AI, not an afterthought.

 

If you're deploying AI-powered applications and want to know what's actually happening inside it, this one's for you.

 

Related Sessions