LLM Observability in Action: Monitor, Evaluate, and Optimize Agentic AI
About this Session
Agentic AI introduces a new class of observability challenges. When a LangGraph workflow routes a single customer request across multiple agents, tools, RAG pipelines, and LLMs, traditional APM cannot explain why an agent hallucinated, why latency doubled after a model change, or why your LLM bill spiked 10x overnight.
In this hands-on workshop, you'll use Datadog LLM Observability to instrument SwagBot, a multi-agent ecommerce chatbot powered by LangGraph. With just three environment variables, you'll unlock end-to-end visibility into every agent decision, LLM call, token cost, and retrieval step, connecting the dots from frontend user experience to backend LLM execution.
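As a preview of that setup, a minimal sketch of the environment variables involved, based on ddtrace's documented LLM Observability configuration (the `swagbot` app name is an assumption for this workshop; the exact variables used in the lab may differ):

```shell
# Enable Datadog LLM Observability in a ddtrace-instrumented Python app.
export DD_LLMOBS_ENABLED=1                 # turn on LLM Observability tracing
export DD_LLMOBS_ML_APP=swagbot            # logical name grouping this app's traces
export DD_API_KEY=<your-datadog-api-key>   # placeholder: your org's API key

# Then launch the instrumented application, e.g.:
# ddtrace-run python app.py
```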
From there, you will extend that visibility into quality and security: configure managed evaluations for hallucination detection, failure to answer, and prompt injection, then build custom LLM-as-a-Judge evaluations tailored to your domain. When a new version of SwagBot is deployed and issues appear, you will use monitors, traces, and evaluation results to diagnose root causes and fix them with confidence. Finally, you will run LLM Experiments to compare models across latency, cost, and quality dimensions. Instead of guessing, you will make data-driven model decisions that balance performance, reliability, and business impact.
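To make the LLM-as-a-Judge idea concrete, a minimal sketch of the pattern in Python. Everything here is illustrative, not Datadog's API: `call_llm` is a stand-in stub for whatever model client you use, and the prompt wording and PASS/FAIL scale are assumptions.

```python
# Minimal LLM-as-a-Judge sketch: a second model grades a chatbot answer
# for grounding in the retrieved context, returning PASS or FAIL.

JUDGE_PROMPT = """You are grading a chatbot answer for factual grounding.
Question: {question}
Retrieved context: {context}
Answer: {answer}
Reply with exactly one word: PASS if the answer is supported by the context,
FAIL otherwise."""


def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call your model provider here.
    return "PASS"


def judge_answer(question: str, context: str, answer: str) -> bool:
    """Build the judge prompt, query the judge model, and parse its verdict."""
    prompt = JUDGE_PROMPT.format(question=question, context=context, answer=answer)
    verdict = call_llm(prompt).strip().upper()
    return verdict.startswith("PASS")


print(judge_answer("What sizes does the hoodie come in?",
                   "The hoodie is available in S, M, and L.",
                   "It comes in S, M, and L."))  # → True with the stub above
```

In production you would run a judge like this over sampled traces and push the verdicts back as evaluation metrics alongside each trace.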
By the end of this lab, you will know how to observe, evaluate, and continuously optimize any agentic AI application running in production.
Related Sessions
From Reactive to Proactive: How SREs Can Optimize Their Application Services Before Users Are Affected
Build with LLM Observability: From Setup to Signal
Datadog Core Skills for Developers - Pre-Day
Datadog Core Skills for Site Reliability Engineers (SREs) - Pre-Day
Serverless Observability on AWS
From Ingestion to AI: Ensuring Data Reliability Across the Full Lifecycle
How AI Is Redefining the Datadog Experience—and How to Make the Most of It
The AI Engineering Playbook: How to Evaluate & Iterate at Every Phase of Development