Understand AI impact with AI insights

Overview

This topic explains how to use AI insights to understand the impact of AI Configs by analyzing metrics across your project. AI insights provides a project-level view across configurations, models, and targeting rules. Use it to identify changes, compare configurations, and determine which configurations, models, or variations are driving changes in performance and outcomes.

Use AI insights to:

  • Detect changes in cost and usage
  • Identify regressions in quality metrics after updates
  • Compare configurations to understand differences in performance and outcomes
  • Review experiment activity across AI Configs
  • Investigate specific AI Configs and recent changes

To analyze performance for a single AI Config, use the Monitoring tab.

AI insights page

The Insights page provides a unified view of metrics across your AI Configs. It includes time series charts, summary metrics, and a configuration-level table so you can review performance, identify changes, and understand their impact. All components reflect the selected time range and filters.

Use the controls at the top of the page to select a metric view, group results, filter configurations, and adjust the time range.

To open the Insights page:

  1. Navigate to your project.
  2. In the left navigation, expand AI, then select Insights.

The AI insights page showing trends, quick stats, and configurations table.

Use this page to monitor performance and investigate changes across your AI Configs. A typical workflow includes:

  1. Use the trends view to analyze changes over time and correlate them with updates.
  2. Review quick stats to identify changes in key metrics.
  3. Use the configs and variations table to compare configurations, identify which ones need further investigation, and decide whether to act.

Alerts highlight changes in key metrics so you can identify where to investigate without reviewing each configuration individually.

Trends view

The trends view displays metrics as time series charts so you can compare performance over time across configurations. You can group results by AI Config, model, provider, or agent graph, including multi-agent workflows.

The trends view showing time series charts and grouping controls.

Use the trends view to track changes over time and understand their impact on performance. Apply filters to focus your analysis on specific configurations or models.

You can also correlate performance changes with updates to prompts, models, or targeting. For example, changes in cost or satisfaction may align with a recent variation or model update.
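As a rough illustration of the grouping the trends view applies, the sketch below aggregates hypothetical generation events into a daily cost series grouped by model or by AI Config. The event fields, keys, and values are invented for illustration; this is a conceptual sketch, not LaunchDarkly code.

```python
from collections import defaultdict
from datetime import date

# Hypothetical generation events: (day, ai_config_key, model, cost_usd).
events = [
    (date(2025, 5, 1), "chatbot", "gpt-4o", 0.12),
    (date(2025, 5, 1), "chatbot", "gpt-4o-mini", 0.01),
    (date(2025, 5, 2), "chatbot", "gpt-4o", 0.15),
    (date(2025, 5, 2), "summarizer", "gpt-4o", 0.03),
    (date(2025, 5, 2), "summarizer", "gpt-4o-mini", 0.02),
]

def cost_series(events, group_by):
    """Sum daily cost per group, where group_by is 'model' or 'config'."""
    series = defaultdict(float)
    for day, config_key, model, cost in events:
        key = model if group_by == "model" else config_key
        series[(key, day)] += cost
    return dict(series)

by_model = cost_series(events, group_by="model")
by_config = cost_series(events, group_by="config")
```

Switching the grouping key is what lets the same events answer both "which model got more expensive?" and "which AI Config got more expensive?".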

Quick stats

Quick stats summarize key metrics, including active AI Configs and experiments, average satisfaction, and cost.

Quick stats showing active configs, experiments, satisfaction, and cost.

Use quick stats to spot changes in these metrics and decide where to investigate further.

Configs and variations

The configs and variations table shows metrics for each AI Config, including generations, token usage, satisfaction, latency, error rate, model and provider, and experiment status.

Table showing cost, generations, tokens, satisfaction, and model.

Use this table to compare configurations and identify differences in metrics that require further investigation.
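The rollups in this table can be pictured with a small sketch: given hypothetical per-generation records, compute generations, token usage, average latency, and error rate for each AI Config. Field names and values here are illustrative only; this is not LaunchDarkly's implementation.

```python
# Hypothetical per-generation records for two AI Configs.
generations = [
    {"config": "chatbot", "tokens": 512, "latency_ms": 840, "error": False},
    {"config": "chatbot", "tokens": 301, "latency_ms": 620, "error": True},
    {"config": "summarizer", "tokens": 128, "latency_ms": 210, "error": False},
]

def rollup(generations):
    """Build one table row per AI Config from raw generation records."""
    rows = {}
    for g in generations:
        row = rows.setdefault(g["config"], {
            "generations": 0, "tokens": 0, "latency_ms": 0, "errors": 0,
        })
        row["generations"] += 1
        row["tokens"] += g["tokens"]
        row["latency_ms"] += g["latency_ms"]
        row["errors"] += g["error"]
    for row in rows.values():
        n = row["generations"]
        row["avg_latency_ms"] = row["latency_ms"] / n
        row["error_rate"] = row["errors"] / n
    return rows

table = rollup(generations)
```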

Alerts

AI insights includes system-generated alerts for changes in key metrics. Use alerts to determine which configurations are driving a change and need further investigation.

Alerts provide a proactive way to monitor performance: each alert identifies the affected AI Configs and includes details about the change, so you can focus your investigation on the configurations that triggered it.

Instrumentation requirements

AI insights depends on metrics recorded from your application.

To populate insights, use a LaunchDarkly AI SDK to evaluate AI Configs and record generation metrics such as latency, token usage, success, and error. You can also record evaluation metrics using judges.

If your application does not record metrics, the Insights page has no data to display.
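To make the required metrics concrete, the sketch below times a model call and records latency, token usage, and success or error locally. The helper and its field names are hypothetical, written only to show the shape of the data; in practice you would use the tracking helpers provided by a LaunchDarkly AI SDK rather than collecting metrics yourself.

```python
import time

def record_generation(call, metrics):
    """Illustrative only: run a model call and append the kinds of
    generation metrics AI insights relies on (latency, token usage,
    success/error). Not part of any LaunchDarkly SDK."""
    start = time.monotonic()
    try:
        result = call()  # e.g. a function wrapping a model API request
        metrics.append({
            "latency_ms": (time.monotonic() - start) * 1000,
            "tokens": result.get("tokens", 0),
            "success": True,
        })
        return result
    except Exception:
        metrics.append({
            "latency_ms": (time.monotonic() - start) * 1000,
            "tokens": 0,
            "success": False,
        })
        raise
```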

Choose a view

Use the following guidance to select the appropriate view:

  • Use AI insights to monitor metrics and identify changes across configurations.
  • Use the Monitoring tab to analyze performance for a single AI Config and its variations.

These views support different levels of analysis, from investigating a single AI Config to understanding patterns across configurations.