
Mark Derdzinski
[introductory] From Prototype to Production: Evaluation Strategies for Agentic Applications
Summary
Evaluating Generative AI-powered agents raises unique challenges: defining metrics for subjective, dynamic outputs while ensuring alignment and reliability in real-world settings. This lecture series equips participants with actionable strategies for designing scalable evaluation frameworks and robust monitoring practices for production AI applications.
We will explore practical methods for evaluating LLM-based and Generative AI-powered agents, from prototype development through production deployment. Participants will learn to define meaningful metrics, navigate the challenges of drafting subjective evaluation criteria, and incorporate diverse inputs and human feedback into their development process. We will then cover a range of evaluation techniques, from assertion-based tests to heuristic-driven methods and LLM-as-a-judge approaches, examining how each contributes to aligning outputs with user needs and system goals. Finally, we will delve into post-deployment monitoring strategies that sustain long-term agent reliability and adaptability in real-world applications.
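To make the contrast between these technique families concrete, below is a minimal sketch in Python of an assertion-based check next to an LLM-as-a-judge rubric check. The `call_llm` helper, the disclaimer criterion, and the rubric wording are hypothetical placeholders for illustration, not part of the lecture material.

```python
from dataclasses import dataclass


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever model client the application uses."""
    raise NotImplementedError("wire this to your model provider")


@dataclass
class EvalResult:
    name: str
    passed: bool
    detail: str


def assertion_check(agent_output: str) -> EvalResult:
    """Assertion-based test: deterministic, cheap, objective, narrow in scope."""
    required_phrase = "consult your healthcare provider"  # example criterion
    passed = required_phrase in agent_output.lower()
    return EvalResult("contains_safety_disclaimer", passed,
                      f"looked for '{required_phrase}'")


def judge_check(agent_output: str, user_query: str) -> EvalResult:
    """LLM-as-a-judge: a second model grades subjective qualities against a rubric."""
    rubric = (
        "Rate the RESPONSE to the QUERY as PASS or FAIL.\n"
        "PASS only if the response is relevant, factually cautious, and actionable.\n"
        f"QUERY: {user_query}\nRESPONSE: {agent_output}\nVerdict:"
    )
    verdict = call_llm(rubric).strip().upper()
    return EvalResult("judge_relevance_rubric", verdict.startswith("PASS"), verdict)
```

The assertion check is easy to run in CI on every change but only covers criteria that can be stated exactly, while the judge check can score open-ended qualities at the cost of extra latency, expense, and its own alignment questions, a trade-off the lectures examine in detail.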
Syllabus
- Understanding the Role of Evaluation Across the AI Lifecycle
- Defining Metrics and Evaluation Criteria for Agentic Systems
- Aligning AI Systems Iteratively with Human Feedback and User Needs
- Exploring and Overcoming Challenges in Evaluation Techniques
- Ensuring Post-Deployment Reliability and Adaptability
References
To be added.
Pre-requisites
Basic understanding of AI/ML concepts and neural networks. Familiarity with large language models and generative AI principles. Some familiarity with AI agents and agentic pipelines (chain-of-thought, tool use, etc.) is a plus.
Short bio
Mark Derdzinski is the Senior Manager of the Data Products & AI team at Dexcom, where he focuses on developing practical AI applications, including agentic workflows and Large Language Model integrations, to support users in achieving their health goals. He collaborates with AI scientists, software engineers, and mobile developers to build production AI systems and advance applied AI research at Dexcom. His primary research interests include multi-modal models, AI alignment, and evaluation. Recently, his team contributed to the development of the first FDA-regulated generative AI platform in glucose biosensing (press release).
Mark holds a Ph.D. in Physics from the University of California, San Diego, and a B.A. in Physics and Mathematics from the University of California, Berkeley.