Online evaluations in AI Configs

Online evals work only with AI Configs in completion mode

AI Configs support two configuration modes: completion and agent. Online evaluations work only with AI Configs in completion mode; you cannot attach judges to evaluate responses from AI Configs in agent mode. To learn more about configuration modes, read Agents in AI Configs.

Overview

This topic describes how to run online evaluations on AI Config variations by attaching built-in judges that score responses for accuracy, relevance, and toxicity. A judge is an AI Config that evaluates model responses for a single quality signal and returns a numeric score. AI Configs include three built-in judges:

  • Accuracy
  • Relevance
  • Toxicity

When attached to an AI Config variation, these judges run automatically and record evaluation scores as metrics.

Evaluation scores appear on the Monitoring tab for each variation, along with latency, cost, and user satisfaction metrics. These scores provide a continuous view of model behavior in production and can help you detect regressions or understand how changes to prompts or models affect performance.

Online evaluations differ from offline or pre-deployment testing. Offline evaluations run against test datasets or static examples. Online evaluations run as your application sends real traffic through an AI Config.

Online evaluations work alongside observability. Observability shows model responses and routing details. Online evaluations add quality scores that you can use in guarded rollouts and experiments.

Use online evaluations to:

  • Monitor model behavior during production use
  • Detect changes in quality after a rollout
  • Trigger alerts or rollback actions based on evaluation scores
  • Compare variations using live performance data

How online evals work

Online evaluations add automated quality checks to AI Configs. Each evaluation produces a score between 0.0 and 1.0, where higher scores indicate better results for the quality signal the judge measures.

A judge is an AI Config that uses an evaluation prompt to score responses from another AI Config. When a variation generates a model response, LaunchDarkly runs the attached judge in the background.

  1. The primary AI Config generates a model response.
  2. The judge evaluates the response using its evaluation prompt.
  3. The judge returns structured results that include a numeric score and a brief explanation, for example "score": 0.9, "reason": "Accurate and relevant answer".
  4. LaunchDarkly records these results as metrics and displays them on the Monitoring tab.

Evaluations run asynchronously and respect your configured sampling rate. You can adjust sampling to balance cost and visibility.
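
The sketch below illustrates the kind of per-response sampling decision described above. It is conceptual only, not LaunchDarkly SDK code: the function names and the samplingRatio parameter are hypothetical, and LaunchDarkly applies the configured sampling percentage for you.

// Hypothetical sketch of per-response sampling; not part of any LaunchDarkly SDK.
// It only illustrates the cost-versus-visibility trade-off described above.

type JudgeRunner = (response: string) => Promise<void>;

function shouldEvaluate(samplingRatio: number): boolean {
  // samplingRatio is the sampling percentage expressed as 0.0-1.0,
  // for example 0.1 to evaluate roughly 10% of responses.
  const ratio = Math.min(Math.max(samplingRatio, 0), 1);
  return Math.random() < ratio;
}

function maybeRunJudge(response: string, samplingRatio: number, runJudge: JudgeRunner): void {
  if (!shouldEvaluate(samplingRatio)) {
    return; // Skipped responses are never sent to the judge.
  }
  // The judge runs in the background; the caller does not wait for the score,
  // so evaluation never delays the response returned to the user.
  runJudge(response).catch((err) => console.error('judge evaluation failed:', err));
}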

The following example shows how the built-in judges conceptually score a single response across the three evaluation dimensions:

Example judge evaluation output:

{
  "accuracy": { "score": 0.85, "reasoning": "Answered correctly with one minor omission" },
  "relevance": { "score": 0.92, "reasoning": "Directly addresses the user request" },
  "toxicity": { "score": 1.00, "reasoning": "No harmful or unsafe phrasing detected" }
}

LaunchDarkly records evaluation results as metrics that are consistent across providers and environments.
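
If you consume judge results programmatically, the example output above maps onto a small structure like the following. This is a sketch of the shape shown in this topic, not a type exported by a LaunchDarkly SDK; the lowScores helper and its 0.7 threshold are illustrative only.

// Sketch of the judge output shape shown above; not an SDK-provided type.

interface JudgeScore {
  score: number;     // 0.0-1.0, higher is better
  reasoning: string; // brief explanation returned by the judge
}

interface BuiltInJudgeResults {
  accuracy: JudgeScore;
  relevance: JudgeScore;
  toxicity: JudgeScore;
}

// Illustrative helper: list any dimension that falls below a chosen threshold.
function lowScores(results: BuiltInJudgeResults, threshold = 0.7): string[] {
  return (Object.keys(results) as (keyof BuiltInJudgeResults)[])
    .filter((name) => results[name].score < threshold);
}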

Extending online evaluations

Online evaluations include built-in judges for accuracy, relevance, and toxicity. If you need to evaluate additional or domain-specific quality signals, you can create custom judges based on the built-in judges.

To learn more, read Create and manage custom judges for Online evals.

Set up and manage judges

Configure provider credentials

Online evaluations use your existing AI model provider credentials. Before you enable the built-in judges, make sure your organization has connected a supported provider, such as OpenAI or Anthropic.

Attach judges to variations

To attach a judge to a variation:

  1. In LaunchDarkly, click AI Configs.

  2. Click the name of the AI Config you want to edit.

  3. Select the Variations tab.

  4. Open a variation or create a new variation.

  5. In the “Judges” section, click + Attach judges.

    The "Attach judges" panel for a AI Config variation.

    The "Attach judges" panel for an example AI Config variation.
  6. Select one or more of the built-in judges.

  7. (Optional) Set the sampling percentage to control how many model responses are evaluated.

  8. Click Review and save.

Attached judges remain connected to the variation until you remove them.

Adjust sampling or detach judges

You can adjust sampling or detach judges at any time from the “Judges” section of a variation.

The "Judges" section for an example AI Config variation.

The "Judges" section for an example AI Config variation.

From this section, you can:

  • Raise or lower the sampling percentage
  • Disable a judge by setting its sampling percentage to 0 percent
  • Remove a judge by clicking its X icon

After you make changes, click Review and save.

Connect your SDK to begin evaluating AI Configs

If the Monitoring tab displays a message prompting you to connect your SDK, LaunchDarkly is waiting for evaluation traffic. Connect an SDK or application integration that uses your AI Config to send model responses. Evaluation metrics appear automatically after responses are received.
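
As a minimal sketch, connecting the Node.js (server-side) AI SDK and evaluating an AI Config might look like the following. It assumes the @launchdarkly/node-server-sdk and @launchdarkly/server-sdk-ai packages; exact method names, signatures, and the shape of the default value can vary by SDK version, so confirm against the AI SDK reference. The SDK key, AI Config key, and context values are placeholders.

import { init } from '@launchdarkly/node-server-sdk';
import { initAi } from '@launchdarkly/server-sdk-ai';

async function main(): Promise<void> {
  // Initialize the standard server-side SDK, then wrap it with the AI client.
  const ldClient = init('sdk-key-123abc'); // placeholder SDK key
  await ldClient.waitForInitialization({ timeout: 10 });
  const aiClient = initAi(ldClient);

  // Evaluate the AI Config for a context. Judges attached to the served
  // variation run in the background as your application reports responses.
  const context = { kind: 'user', key: 'example-user-key' };
  const aiConfig = await aiClient.config(
    'my-ai-config-key', // placeholder AI Config key
    context,
    { enabled: false }, // default value if the AI Config cannot be evaluated
  );

  if (aiConfig.enabled) {
    // Call your model provider with aiConfig.model and aiConfig.messages, then
    // use aiConfig.tracker to record the response so evaluation metrics and
    // other monitoring data appear on the Monitoring tab.
  }

  await ldClient.close();
}

main().catch((err) => console.error(err));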

Run the SDK example

You can use the LaunchDarkly Node.js (server-side) AI SDK example to confirm that evaluations run as expected. The SDK evaluates chat responses using attached judges and supports running a judge directly by key.

To set up the SDK example:

  1. Clone the LaunchDarkly SDK repository.
  2. Build the SDK and navigate to the judge evaluation example.
  3. Follow the README instructions to configure your environment with your LaunchDarkly project key, environment key, and model provider credentials.
  4. Start the example.

The example shows how to evaluate responses with attached judges or by calling a judge directly. Judges run asynchronously and do not block application responses. Evaluation results appear on the Monitoring tab within one to two minutes.
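
Conceptually, calling a judge directly by key means scoring a prompt and response pair and receiving a result in the shape shown earlier in this topic. The sketch below is hypothetical: evaluateJudgeByKey is a stand-in name, not the SDK example's actual API, which is documented in the example's README.

// Hypothetical illustration only; replace evaluateJudgeByKey with the actual
// call from the SDK example. This sketches the flow of running a judge
// directly by key without blocking the user-facing response.

interface JudgeResult {
  score: number;     // 0.0-1.0, higher is better
  reasoning: string; // brief explanation from the judge
}

// Stub stand-in for the SDK example's direct judge call.
async function evaluateJudgeByKey(
  judgeKey: string,
  input: { prompt: string; response: string },
): Promise<JudgeResult> {
  console.log(`would run judge "${judgeKey}" on a ${input.response.length}-character response`);
  return { score: 1.0, reasoning: 'stubbed result' };
}

function scoreResponse(prompt: string, response: string): void {
  // Fire-and-forget: the judge runs asynchronously and the caller does not wait.
  evaluateJudgeByKey('my-accuracy-judge-key', { prompt, response })
    .then((result) => console.log('accuracy:', result.score, result.reasoning))
    .catch((err) => console.error('judge evaluation failed:', err));
}

scoreResponse('What is 2 + 2?', 'The answer is 4.');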

View results from the Monitoring tab

Open the Monitoring tab for your AI Config to view evaluation results.

The Monitoring tab displays evaluation metrics for each model response. Online evaluations record three built-in evaluation metrics:

Metric name | Event key | What it measures
Accuracy | $ld:ai:judge:accuracy | How correct and grounded the model response is.
Relevance | $ld:ai:judge:relevance | How well the response addresses the user request or task.
Toxicity | $ld:ai:judge:toxicity | Whether the response includes harmful or unsafe phrasing.

Charts show recent and average scores for each metric. You can view individual results and reasoning details for each data point. Metrics update as evaluations run.

These metrics appear both on the Monitoring tab and in the Metrics list for your project.

Use evaluation metrics in guardrails and experiments

Evaluation metrics appear as selectable metrics in guarded rollouts and experiments.

  • In guarded rollouts, you can pause or revert a rollout when evaluation scores fall below a threshold.
  • In experiments, you can use evaluation metrics as experiment goals to compare variations.

This creates a connected workflow for releasing and evaluating changes to prompts and models.

Privacy and data handling

Online evaluations run within your LaunchDarkly environment using your configured model providers. LaunchDarkly does not store or share your prompts, model responses, or evaluation data with any third-party systems.

For more information, read AI Configs and information privacy.