Monitor AI Configs
Overview
This topic explains how to monitor the performance of your AI Configs. Performance metrics for AI Configs are available in the LaunchDarkly user interface when you track AI metrics with one of the LaunchDarkly AI SDKs.
Collect data for AI Config variations
Data appears on the Monitoring tab for an AI Config when you record metrics from your AI model generation. In each AI SDK, the metrics-recording function takes a completion from your AI model generation, so you can call your AI model provider and record metrics from the generation in one step. You can record duration, token usage, generation success and errors, time to first token, output satisfaction, and more.
To learn how, read Tracking AI metrics. For a detailed example, refer to Step 4 in the Quickstart for AI Configs.
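The one-step pattern described above can be sketched as follows. This is an illustrative stand-in, not the real LaunchDarkly AI SDK API: the `Tracker` class, `track_generation` method, and the shape of the completion dictionary are all invented for this example. Refer to Tracking AI metrics for the actual per-SDK function names.

```python
# Hypothetical sketch of recording metrics from a model generation in one
# step. All names here are illustrative stand-ins, not the real SDK API.
import time
from dataclasses import dataclass, field

@dataclass
class GenerationMetrics:
    duration_ms: float = 0.0
    input_tokens: int = 0
    output_tokens: int = 0
    success: bool = False

@dataclass
class Tracker:
    """Stand-in for an AI SDK metrics tracker."""
    recorded: list = field(default_factory=list)

    def track_generation(self, generate):
        """Call the model provider and record metrics from its completion."""
        start = time.monotonic()
        metrics = GenerationMetrics()
        try:
            completion = generate()
            # Pull token usage out of the completion returned by the provider
            metrics.input_tokens = completion["usage"]["input_tokens"]
            metrics.output_tokens = completion["usage"]["output_tokens"]
            metrics.success = True
            return completion
        finally:
            # Duration and success/error are recorded whether or not
            # the provider call raised an exception
            metrics.duration_ms = (time.monotonic() - start) * 1000
            self.recorded.append(metrics)

# Usage with a fake provider call standing in for your LLM provider:
tracker = Tracker()
completion = tracker.track_generation(
    lambda: {"text": "hi", "usage": {"input_tokens": 12, "output_tokens": 3}}
)
```

Because the tracker wraps the provider call, a failed generation is still recorded, which is what makes error-rate reporting possible on the Monitoring tab.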
You can also monitor variation rollouts
You can use a guarded rollout when you are releasing new variations to your customers. The Monitoring tab shows a guarded rollout’s progress and metric results. LaunchDarkly highlights regressions and pauses a guarded rollout automatically if needed.
Monitor an AI Config
To monitor the performance of an AI Config:
- Navigate to the detail page for the AI Config. Select the Monitoring tab.
- At the top of the tab, use the controls to specify the monitoring data you want to view:
  - Use the Environment dropdown to select the environment you want to monitor. Performance metrics are specific to each environment.
  - Select a date range.
  - Use the Charts dropdown to select which set of charts to view.
- Review charts of the available monitoring data. Each chart displays results for the selected environment and breaks out the data by variation:
  - The Tokens charts include the average input and output tokens used by the AI Config.
  - The Satisfaction chart shows the percentage of “thumbs up” ratings provided by end users who have encountered the AI Config.
  - The Generations chart shows the average number of successful generations completed using the AI Config.
  - The Time to generate chart shows the average duration per generation. This value is the total duration of calls to your LLM provider divided by the total number of generations completed.
  - The Error rate chart shows the percentage of errors out of the total number of generations attempted for the AI Config.
  - The Time to first token chart shows the average time it takes to generate the first token.
  - The Costs charts show the total cost of the input and output tokens used by the AI Config.
- Review the table of available monitoring data. The table includes the data displayed in the charts as well as additional data. For example, the table includes both the average and the total input and output tokens.
- (Optional) In the table, select the variations or versions for which you want to view data. The charts update based on your selection.
- (Optional) Click Export data as CSV to download a CSV file for further analysis.
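Two of the chart values described above follow directly from their definitions: "Time to generate" is total provider-call duration divided by completed generations, and "Error rate" is errors over attempted generations. The sketch below works through both with invented sample numbers, assuming that failed attempts are excluded from the time-to-generate calculation.

```python
# Invented sample of generation records: duration of each provider call
# and whether the attempt errored.
generations = [
    {"duration_ms": 820, "error": False},
    {"duration_ms": 1040, "error": False},
    {"duration_ms": 50, "error": True},   # failed attempt
    {"duration_ms": 910, "error": False},
]

attempted = len(generations)
completed = [g for g in generations if not g["error"]]

# Time to generate: total duration of provider calls divided by the
# number of generations completed
time_to_generate_ms = sum(g["duration_ms"] for g in completed) / len(completed)

# Error rate: errors as a percentage of attempted generations
error_rate_pct = 100 * sum(g["error"] for g in generations) / attempted
```

With these numbers, one error out of four attempts gives a 25% error rate, and the three completed generations average roughly 923 ms each.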
For the data on the Monitoring tab to appear, you must record metrics in your application using a track call from any of the LaunchDarkly AI SDKs. To learn more, read Tracking AI metrics.
The data on the Monitoring tab updates approximately every minute. For each metric, the results are broken out by AI Config variation. Metrics with no results display “No data.” If there is no data for a particular variation, that variation is not included in the total displayed at the top of the metric card.
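The per-variation breakdown described above can be sketched as follows. The variation names and sample values are invented; the point is the rule that a variation with no recorded results shows "No data" and is left out of the total at the top of the metric card.

```python
# Invented per-variation samples for a single metric (e.g. output tokens).
samples = {
    "variation-a": [12, 9, 15],
    "variation-b": [8, 11],
    "variation-c": [],  # no recorded results yet
}

# A variation with no results displays "No data" instead of a number
per_variation = {
    name: (sum(values) if values else "No data")
    for name, values in samples.items()
}

# The metric card's total only sums variations that actually have data
card_total = sum(v for v in per_variation.values() if isinstance(v, int))
```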
Here is a partial image of the Monitoring tab:

Monitoring versus trends explorer
The Monitoring tab displays performance metrics for a single AI Config in a selected environment. It is designed for detailed analysis of variation performance, guarded rollout progress, and version-level comparisons within that AI Config.
The Trends explorer tab aggregates performance data across multiple AI Configs, variations, models, providers, and targeting rules in a single view. Trends explorer offers:
- Time series charts that span multiple configs, variations, models, providers, and targeting rules
- Quick filtering to focus on specific variations, models, or rules
- Version change annotations to correlate updates with performance trends
- Quick stats summarizing costs, generations, token usage, and satisfaction across your system
- CSV export of aggregated performance data for multiple AI Configs
Use Monitoring to diagnose and optimize a specific AI Config. Use Trends explorer to identify trends across all your AI Configs.
To learn more, read View AI Config trends explorer.