Product Updates

[ What's launched at LaunchDarkly ]
February 16, 2026
AI Engineering

Paul Loeb

Online Evaluations for AI Configs (GA)

Online Evaluations is now Generally Available for AI Configs, enabling automated, model-based evaluation of LLM completions in both production and pre-production environments. This GA release also introduces Customizable Judges, allowing you to define evaluation prompts and scoring criteria aligned with your domain-specific requirements.

Online Evaluations attaches AI Judges to AI Config variations and emits structured evaluation metrics (for example, accuracy, relevance, or toxicity) for each completion. Judge results are surfaced in real time on the AI Config Monitoring page and can be used to detect regressions when prompts, models, or parameters change.

Customizable Judges give you fine-grained control over evaluation logic while maintaining consistent metric output for monitoring and analysis.
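To make the model of a Customizable Judge concrete, here is a minimal sketch of the pattern described above: a judge pairs an evaluation prompt with a scoring rule and emits a structured metric per completion. The names (`CustomJudge`, `JudgeResult`, `score_fn`) are illustrative only and are not LaunchDarkly's SDK API; the stubbed `score_fn` stands in for the call to the judge model.

```python
from dataclasses import dataclass

@dataclass
class JudgeResult:
    """Structured metric emitted for one evaluated completion."""
    metric: str    # e.g. "accuracy", "relevance", "toxicity"
    score: float   # normalized to [0.0, 1.0]
    passed: bool   # score met the configured threshold

class CustomJudge:
    """Hypothetical judge: an evaluation prompt plus scoring criteria.

    This mirrors the concept only; it is not the LaunchDarkly SDK.
    """
    def __init__(self, metric: str, prompt_template: str, threshold: float = 0.7):
        self.metric = metric
        self.prompt_template = prompt_template
        self.threshold = threshold

    def evaluate(self, completion: str, score_fn) -> JudgeResult:
        # Render the domain-specific evaluation prompt for this completion,
        # then delegate scoring to the judge model (stubbed via score_fn).
        prompt = self.prompt_template.format(completion=completion)
        score = score_fn(prompt)
        return JudgeResult(self.metric, score, score >= self.threshold)

# Usage with a stubbed judge model that returns a fixed score:
judge = CustomJudge("relevance", "Rate the relevance of: {completion}")
result = judge.evaluate("Paris is the capital of France.", lambda p: 0.92)
```

Because every judge returns the same `JudgeResult` shape regardless of its prompt or criteria, downstream monitoring can aggregate and compare metrics across variations, which is the consistency the GA release emphasizes.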

Docs