Multi-armed bandits

This feature is for Early Access Program customers only

Multi-armed bandits are available only to members of LaunchDarkly’s Early Access Program (EAP). If you want access to this feature, join the EAP.

Overview

This section contains documentation on multi-armed bandits, a type of experiment that uses a decision-making algorithm to dynamically allocate traffic to the best-performing variation of a flag, based on a metric you choose.

Unlike traditional A/B experiments, which split traffic between variations and wait for performance results, multi-armed bandits continuously evaluate variation performance and automatically shift traffic toward the best-performing variation. Multi-armed bandits are useful when fast feedback loops are important, such as when optimizing calls to action, pricing strategies, or onboarding flows.
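This overview doesn't specify the allocation algorithm, but Thompson sampling is one common bandit approach and illustrates the idea: maintain a probability distribution over each variation's conversion rate, sample from each distribution per request, and serve the variation with the highest sample. The sketch below is a minimal, hypothetical Python simulation assuming Beta-Bernoulli arms; the variation names and conversion rates are made up and are not part of LaunchDarkly's API.

```python
import random

# Hypothetical Beta posteriors per variation, stored as [successes + 1, failures + 1].
# Variation names are illustrative only.
posteriors = {"control": [1, 1], "treatment-a": [1, 1], "treatment-b": [1, 1]}

def choose_variation():
    """Thompson sampling: draw a conversion-rate sample from each arm's
    Beta posterior and serve the arm with the highest sample."""
    samples = {
        name: random.betavariate(alpha, beta)
        for name, (alpha, beta) in posteriors.items()
    }
    return max(samples, key=samples.get)

def record_outcome(name, converted):
    """Update the served arm's posterior with the observed metric outcome."""
    posteriors[name][0 if converted else 1] += 1

# Simulated traffic with made-up true conversion rates: over time, the
# algorithm shifts most impressions toward the best-performing arm.
true_rates = {"control": 0.10, "treatment-a": 0.12, "treatment-b": 0.18}
for _ in range(10_000):
    arm = choose_variation()
    record_outcome(arm, random.random() < true_rates[arm])

for name, (a, b) in posteriors.items():
    print(f"{name}: served {a + b - 2} times, estimated rate {a / (a + b):.3f}")
```

With the rates above, treatment-b typically ends up receiving the large majority of the simulated traffic while the weaker arms keep getting occasional exploratory impressions. The Beta posteriors this loop maintains are the kind of per-variation distributions a probability density chart, like the one below, visualizes.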

A multi-armed bandit probability density chart.

To learn how to create multi-armed bandits and read their results, read Creating multi-armed bandits and Multi-armed bandit results.