Run experiments with AI Configs
Overview
This topic introduces the role of AI Configs in LaunchDarkly Experimentation. LaunchDarkly’s Experimentation feature lets you measure the effect of features on end users by tracking metrics your team cares about. By connecting metrics you create to AI Configs in your LaunchDarkly environment, you can measure changes in customer behavior based on the different AI Config variations your application serves. This helps you make more informed decisions and ensures that the features your development team ships align with your business objectives.
Monitoring and Experimentation
Each AI Config includes a Monitoring tab in the LaunchDarkly user interface (UI). This tab displays performance data if your application tracks AI metrics through the SDK, such as input and output tokens or the total duration of calls to your LLM provider. To learn more, read Monitor AI Configs.
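The data on the Monitoring tab comes from metric calls made in your application code around each LLM invocation. As a rough illustration only (the class and method names below are hypothetical stand-ins, not the LaunchDarkly SDK API), tracking token usage and call duration might look like:

```python
import time

class AIMetricsTracker:
    """Hypothetical stand-in for an SDK metrics tracker (illustrative only)."""

    def __init__(self):
        self.metrics = {"input_tokens": 0, "output_tokens": 0, "duration_ms": 0.0}

    def track_tokens(self, input_tokens, output_tokens):
        # Accumulate token counts reported in the LLM provider's response.
        self.metrics["input_tokens"] += input_tokens
        self.metrics["output_tokens"] += output_tokens

    def track_duration(self, fn, *args, **kwargs):
        # Time the call to the LLM provider and record the total duration.
        start = time.monotonic()
        result = fn(*args, **kwargs)
        self.metrics["duration_ms"] += (time.monotonic() - start) * 1000
        return result


def fake_llm_call(prompt):
    # Placeholder for a real provider call; returns a canned response.
    return {"text": "Hello!", "usage": {"input": len(prompt.split()), "output": 2}}


tracker = AIMetricsTracker()
response = tracker.track_tokens(
    *[0, 0]) or tracker.track_duration(fake_llm_call, "Say hello to the user")
tracker.track_tokens(response["usage"]["input"], response["usage"]["output"])
```

In a real integration, the SDK's tracker sends these values to LaunchDarkly, where they appear on the Monitoring tab per variation.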
In contrast, Experimentation lets you measure how your application changes affect end user behavior, based on signals like page views and clicks. For example, you might use the Monitoring tab of an AI Config to identify which variation consumes the fewest output tokens. But to determine which variation results in the most clicks in your chatbot, you need to run an experiment.
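Experiment metrics like clicks are driven by conversion events your application sends when the behavior you care about occurs. As a minimal sketch (the event key and helper below are hypothetical illustrations, not a specific SDK call), recording a chatbot click might look like:

```python
# Minimal in-memory stand-in for an event queue; a real SDK would
# buffer events and flush them to LaunchDarkly, where they feed the
# experiment's metric results.
events = []

def track_event(event_key, context_key):
    # Record one conversion event for the given end user context.
    events.append({"key": event_key, "context": context_key})

# Called when an end user clicks a suggestion in the chatbot UI.
# Both the event key and context key here are made-up examples.
track_event("chatbot-suggestion-clicked", context_key="user-123")
```

The event key is what you attach to a click or custom metric in LaunchDarkly; the experiment then compares how often each AI Config variation produces that event.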
To get started with Experimentation in the context of AI Configs, explore the following resources:
- Experimentation reference
- Metrics reference
- Experimentation guides, including best practices and background on statistical methods