Datadog LLM Observability for Google Vertex AI
About this Session
Large Language Model (LLM) applications introduce new challenges in understanding performance, cost, and output quality across complex, multi-step workflows. Datadog LLM Observability provides end-to-end visibility into these systems, enabling teams to trace requests, evaluate responses, and correlate LLM activity with the rest of their application and infrastructure telemetry.
In this hands-on workshop, you'll explore Datadog LLM Observability using a pre-instrumented SwagBot application built on Vertex AI (Gemini). You'll analyze LLM traces, prompts, and responses, along with evaluations, token usage, and latency. Through guided exercises, you'll correlate LLM activity with APM, logs, metrics, and infrastructure data to gain full context into application behavior. You'll enable and customize LLM-as-a-Judge evaluations backed by Vertex AI alongside Datadog's managed evaluations, apply Sensitive Data Scanner to traces, and tune evaluation signals for better insights. Finally, you'll use LLM Experiments (datasets, evaluators, and comparison views) to identify optimization opportunities in Gemini model selection, helping you retain accuracy while reducing cost and latency.
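For context on what "pre-instrumented" means here, the sketch below shows one common way to enable Datadog LLM Observability for a Python application via environment variables and `ddtrace-run`, which auto-instruments supported LLM SDKs (including Vertex AI) so Gemini calls are traced without code changes. This is a configuration sketch, not the workshop's exact setup: the app name `swagbot`, the entrypoint `app.py`, and the use of agentless mode are illustrative assumptions.

```shell
# Datadog credentials and site (placeholders; use your own values).
export DD_API_KEY=<your-datadog-api-key>
export DD_SITE=datadoghq.com

# Turn on LLM Observability. Agentless mode sends data directly to
# Datadog, so no local Datadog Agent is required.
export DD_LLMOBS_ENABLED=1
export DD_LLMOBS_AGENTLESS_ENABLED=1

# Name under which traces appear in LLM Observability
# ("swagbot" is an assumed example).
export DD_LLMOBS_ML_APP=swagbot

# ddtrace-run wraps the app and auto-instruments supported libraries,
# capturing prompts, responses, token usage, and latency per LLM call.
ddtrace-run python app.py
```

Correlating LLM spans with APM, logs, and infrastructure telemetry, as the exercises do, works because `ddtrace-run` emits these LLM spans into the same trace context as the rest of the application's instrumentation.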