# Run experiments with AI Configs
## Overview
This topic introduces the role of AI Configs in LaunchDarkly Experimentation. Experimentation lets you measure how AI Config variations affect end-user behavior using the metrics you define. By connecting metrics to your AI Configs, you can compare variations and decide which variation to serve.
## Monitoring and Experimentation
Each AI Config includes a Monitoring tab in the LaunchDarkly user interface (UI). This tab displays performance data when you track AI metrics from your SDK, such as input and output token counts or the total duration of calls to your LLM provider. To learn more, read Monitor AI Configs.
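To make the tracked data concrete, here is a minimal sketch of the kind of per-call metrics an application might collect around an LLM request. The class and method names are illustrative stand-ins, not the real LaunchDarkly SDK API; the config and variation keys are hypothetical.

```python
import time

class AIMetricsTracker:
    """Illustrative stand-in for an SDK-side metrics tracker: collects the
    kind of data the Monitoring tab displays (tokens, call duration)."""

    def __init__(self, config_key: str, variation_key: str):
        self.config_key = config_key
        self.variation_key = variation_key
        self.events = []

    def track_tokens(self, input_tokens: int, output_tokens: int) -> None:
        # Record token usage for one LLM call
        self.events.append({
            "kind": "tokens",
            "input": input_tokens,
            "output": output_tokens,
            "total": input_tokens + output_tokens,
        })

    def track_duration(self, duration_ms: float) -> None:
        # Record how long the provider call took
        self.events.append({"kind": "duration", "ms": duration_ms})

# Wrap a (hypothetical) provider call and record usage for the served variation:
tracker = AIMetricsTracker("chatbot-config", "concise-prompt")
start = time.monotonic()
# response = call_llm_provider(...)   # hypothetical provider call
elapsed_ms = (time.monotonic() - start) * 1000
tracker.track_tokens(input_tokens=250, output_tokens=90)
tracker.track_duration(elapsed_ms)
```

With data like this flowing in per variation, the Monitoring tab can surface comparisons such as which variation consumes the fewest output tokens.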
In contrast, Experimentation lets you measure how your application changes affect end-user behavior, based on signals like page views and clicks. For example, you might use the Monitoring tab of an AI Config to identify which variation consumes the fewest output tokens. But to determine which variation results in the most clicks in your chatbot, you need to run an experiment.
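The click-based comparison above can be sketched as follows. This is a self-contained illustration, not the real LaunchDarkly SDK: the bucketing function stands in for flag evaluation, the variation keys are hypothetical, and in production the click would be sent as a metric event (for example, via the SDK's `track` call) rather than counted locally.

```python
import hashlib
from collections import defaultdict

# Hypothetical variation keys for an AI Config experiment
VARIATIONS = ["friendly-tone", "formal-tone"]

def variation_for(user_key: str) -> str:
    # Deterministic bucketing: a stand-in for the SDK's flag evaluation
    digest = hashlib.sha256(user_key.encode()).digest()
    return VARIATIONS[digest[0] % len(VARIATIONS)]

exposures = defaultdict(int)  # users who saw each variation
clicks = defaultdict(int)     # click conversion events per variation

def serve_chatbot(user_key: str) -> str:
    variation = variation_for(user_key)
    exposures[variation] += 1  # experiment exposure
    return variation

def on_click(user_key: str) -> None:
    # In a real app, this would be a track() call with the metric's event key
    clicks[variation_for(user_key)] += 1

def conversion_rate(variation: str) -> float:
    # The per-variation rate an experiment compares statistically
    return clicks[variation] / exposures[variation] if exposures[variation] else 0.0
```

Experimentation aggregates these per-user events into conversion rates per variation and applies statistical analysis to tell you which variation wins.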
To get started with Experimentation in the context of AI Configs, explore the following resources:
- Experimentation reference
- Metrics reference
- Experimentation guides, including best practices and background on statistical methods
## Guarded rollouts and experiments cannot run at the same time
You cannot run a guarded rollout and an experiment on the same flag at the same time. To monitor a variation rollout for regressions, use a guarded rollout. To measure how a variation affects a metric, use an experiment.