Creating funnel optimization experiments
Overview
This topic explains how to set up and configure a funnel optimization experiment in LaunchDarkly.
Configuring a funnel optimization experiment requires several steps:
- Creating the flag or AI Config and its variations,
- Creating metrics for your funnel,
- Creating a funnel group,
- Building the experiment,
- Turning on the flag or AI Config, and
- Starting an iteration.
These steps are explained in detail below.
Prerequisites
Before you build an experiment, you should read about and understand the concepts this topic relies on, including flags, AI Configs, metrics, and metric groups.
Create flags or AI Configs
Before you begin an experiment, create a flag or AI Config with the variations whose performance you plan to test. You do not need to toggle the flag on to create an experiment, but you must toggle it on before you start an experiment iteration. AI Configs are on by default.
To learn more, read Creating new flags, Creating flag variations, Create AI Configs, and Create and manage AI Config variations.
You cannot run an experiment on a flag if:
- the flag has an active guarded rollout
- the flag has an active progressive rollout
- the flag is in a running Data Export experiment
- the flag is in a running warehouse native experiment
- the flag is a migration flag
You can build and run multiple funnel optimization and feature change experiments on the same flag or AI Config as long as there is only one running experiment per rule. You cannot run multiple experiments on the same rule at the same time.
Create metrics
Metrics measure audience behaviors affected by the flags in your experiments. Custom conversion binary metrics and clicked or tapped metrics are most often used with funnel optimization experiments. To learn more, read Choose a metric type.
Funnel optimization experiments only support metrics that use the “Average” analysis method. You cannot use metrics with a percentile analysis method in a funnel experiment. To learn more, read Analysis method.
If you want to learn which variation performs better, each metric must be able to measure something in every variation within the experiment. To learn more, read Metrics and flag variations.
To learn how to create your own new metric, read Metrics. LaunchDarkly also automatically creates metrics for AI Configs. To learn more, read Metrics generated from AI SDK events.
Create funnel metric groups
A metric group is a reusable, ordered list of metrics you can use to standardize metrics across multiple experiments. You must create a funnel metric group before you build a funnel optimization experiment. To learn how, read Metric groups.
To create a useful funnel metric group, each metric within the group should represent a mandatory step in the customer journey. Customers should not be able to skip steps in the funnel, or complete steps out of order. If they can, you should instead create a standard metric group to use with a feature change experiment. Or, if you have other metrics you want to measure in addition to an ordered funnel with required steps, you can add them as secondary metrics.
Build experiments
You can view all of the experiments in your environment on the Experiments list.
To build an experiment:
- Click Create and choose Experiment. The “Create experiment” dialog appears.
- Enter an experiment Name.
- Enter a Hypothesis.
- Click Create experiment. The experiment Design tab appears.
- Select the Funnel optimization experiment type.
- Choose a context kind to Randomize by.
- Select a funnel metric group from the Metrics list.
- A list of environments appears, showing which environments have received events for these metrics. If no environments are receiving events, check that your SDKs are configured correctly.
- Click Create to create and use a new funnel metric group.
- Choose a Flag or AI Config to use in the experiment.
- Click Create flag or Create AI Config to create and use a new flag or AI Config.
- Choose a targeting rule for the Experiment audience.
- If you want to restrict your experiment audience to only contexts with certain attributes, create a targeting rule on the flag or AI Config you include in the experiment and run the experiment on that rule.
- If you don’t want to restrict the audience for your experiment, run the experiment on the default rule. If the flag or AI Config doesn’t have any targeting rules, the default rule will be the only option.
- (Optional) If you want to exclude contexts in this experiment from certain other experiments, click Add experiment to exclusion layer and select a layer.
Layer options
A layer is a set of experiments that cannot share traffic with each other. All of the experiments within a layer are mutually exclusive, which means that if a context is included in one experiment, LaunchDarkly will exclude it from any other experiments in the same layer.
To add the experiment to an existing layer:
- Click Select layer.
- Search for and choose the layer you want to add the experiment to.
- Enter a Reservation amount. This is the percentage of the contexts within this layer you want LaunchDarkly to include in this experiment.
- Click Save layer.
If you need to create a new layer:
- Click Create layer.
- Add a Name and Description.
- Click Create layer.
- Enter a Reservation amount. This is the percentage of the contexts within this layer you want LaunchDarkly to include in this experiment.
- Click Save layer.
- Choose the Variation served to users outside this experiment. Contexts that match the selected targeting rule but are not in the experiment will receive this variation.
- Select the Sample size for the experiment. This is the percentage of the contexts matching the experiment’s targeting rule that you want to include in the experiment. For example, a sample size of 10% includes one in ten matching contexts.
- (Optional) Click Advanced to edit variation reassignment. For most experiments, we recommend leaving this option on its default setting. To learn more, read Carryover bias and variation reassignment.
- (Optional) Click Edit to update the variation split for contexts that are in the experiment.
- You can Split equally between variations, or assign a higher percentage of contexts to some variations than others.
- Click Save audience split.
- Select a variation to serve as the Control.
- Select a Statistical approach of Bayesian or frequentist.
- If you selected a statistical approach of Bayesian, select a preset or Custom success threshold.
- If you selected a statistical approach of frequentist, select:
- a Significance level.
- a one-sided or two-sided Direction of hypothesis test.
Statistical approach options
You can select a statistical approach of Bayesian or frequentist. Each approach includes one or more analysis options.
We recommend the Bayesian approach when you have a small sample size of fewer than a thousand contexts, and the frequentist approach when you have a larger sample size of a thousand or more.
The Bayesian options include:
- Threshold:
- 90% probability to beat control is the standard success threshold, but you can raise the threshold to 95% or 99% if you want to be more confident in your experiment results.
- You can lower the threshold to less than 90% using the Custom option. We recommend a lower threshold only when you are experimenting on non-critical parts of your app and are less concerned with determining a clear winning variation.
The frequentist options include:
- Significance level:
- 0.05 p-value is the standard significance level, but you can lower the level to 0.01 or raise it to 0.10, depending on whether you need to be more or less confident in your results. A lower significance level means you can be more confident in your winning variation.
- You can raise the significance level to more than 0.10 using the Custom option. We recommend a higher significance level only when you are experimenting on non-critical parts of your app and are less concerned with determining a clear winning variation.
- Direction of hypothesis test:
- Two-sided: We recommend two-sided when you’re in doubt about whether the difference between the control and the treatment variations will be negative or positive, and want to look for indications of statistical significance in both directions.
- One-sided: We recommend one-sided when you feel confident that the difference between the control and treatment variations will be either negative or positive, and want to look for indications of statistical significance only in one direction.
To learn more, read Bayesian versus frequentist statistics.
- (Optional) If you want to include the experiment in a holdout, click Advanced, then select a Holdout name.
Experiments cannot be in a holdout and in a layer at the same time
Experiments can either be in a holdout or in a layer, but not both. If you added the experiment to a layer, you will not see the option to add it to a holdout.
- (Optional) If you want to be able to filter your experiment results by attribute, click Advanced, then select up to five context attributes to filter results by.
- Scroll to the top of the page and click Save.
If needed, you can save your in-progress experiment design to finish later. To save your design, click Save at the top of the creation screen. Your in-progress experiment design is saved and appears on the Experiments list. To finish building the experiment, click on the experiment’s name and continue editing.
The next step is to toggle on the flag. AI Configs are on by default. Then, you can start an iteration.
You can also use the REST API: Create experiment
Turn on flags or AI Configs
For an experiment to begin recording data, the flag or AI Config used in the experiment must be on. Targeting rules for AI Configs are on by default. To learn how to turn targeting rules on for flags, read Turning flags on and off.
Start experiment iterations
After you create an experiment and the flag or AI Config is toggled on, you can start an experiment iteration in one or more environments.
To start an experiment iteration:
- Navigate to the Experiments list.
- Click on the environment section containing the experiment you want to start.
- If the environment you need isn’t visible, click the + next to the list of environment sections. Search for the environment you want, and select it from the list.
- Click on the name of the experiment you want to start an iteration for. The Design tab appears.
- Click Start.
- Repeat steps 1-4 for each environment you want to start an iteration in.
Experiment iterations allow you to record experiments in individual blocks of time. To ensure accurate experiment results, when you make changes that impact an experiment, LaunchDarkly starts a new iteration of the experiment.
To learn more about starting and stopping iterations, read Starting and stopping experiment iterations.
You can also use the REST API: Create iteration