DASH NYC, June 9-10 | AI + Observability.


Deterministic Until Proven Otherwise: Building AI Agents That Ship

About this Session

Every engineering team is being asked to "build AI agents," but most guidance skips the hard part: how do you go from prototype to production in a way that actually delivers business value?


In this talk, Matthew Littlehale (Director of Engineering at Nomad Health) presents a framework for building AI agents deliberately, drawn from real experience at the healthcare staffing company, where trust and reliability aren't optional. He'll walk through the evolution from autocomplete to assistant to autonomous agent, and explain why skipping steps is how demos that impress leadership with hype become products that are useless to users and the business.


The core philosophy he uses is "deterministic until proven otherwise": start every piece of agent logic as a rule or structured API call, and only graduate to LLM inference where you can prove it's necessary. He'll cover the micro-service pattern for agent capabilities, human-in-the-loop as intentional UX design (not a safety net), and why AI agents demand a fundamentally different approach to observability—one where you're monitoring correctness, not just availability.
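To make the "deterministic until proven otherwise" idea concrete, here is a minimal sketch of what that routing could look like. All names and the rule table are illustrative assumptions, not the speaker's actual implementation: each agent capability is tried as a deterministic rule or structured lookup first, and the request only "graduates" to LLM inference when no rule can answer.

```python
from typing import Callable, Optional

def rule_lookup_shift_status(request: str) -> Optional[str]:
    """A deterministic capability: a structured lookup, not model inference."""
    # Hypothetical rule table; in practice this might be an API call or DB query.
    known_answers = {"shift status": "Shift 1042 is confirmed for June 9."}
    for trigger, answer in known_answers.items():
        if trigger in request.lower():
            return answer
    return None

def llm_fallback(request: str) -> str:
    """Placeholder for an LLM call; only reached when rules can't answer."""
    return f"[LLM] interpreting free-form request: {request!r}"

# Deterministic capabilities are tried in order before any model inference.
RULES: list[Callable[[str], Optional[str]]] = [rule_lookup_shift_status]

def handle(request: str) -> str:
    for rule in RULES:
        answer = rule(request)
        if answer is not None:
            return answer          # deterministic path: cheap, testable, auditable
    return llm_fallback(request)   # graduate to inference only when necessary
```

The design choice is that the deterministic path is the default and the LLM is the exception, so most traffic stays predictable and every fallback to inference is an explicit, observable event.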


Matthew will share real stories from production, including a hallucination caught only through LLM-specific observability, and explain why his team ended up demoing Datadog's LLM Observability to their entire company. When non-technical stakeholders can see the decision traces and confidence scores behind an agent's actions, it can turn your AI experiment into a trusted product. You'll walk away with a framework for shipping AI agents that earn trust from your users, your stakeholders, and your own team.
