Managing guarded rollouts

Overview

This topic explains how to manage guarded rollouts.

Guarded rollouts availability

All LaunchDarkly accounts include a limited trial of guarded rollouts. Use this to evaluate the feature in real-world releases.

Monitor a guarded rollout

You can monitor a guarded rollout on a flag’s Monitoring tab. The Monitoring tab shows the rollout progression, how many contexts have been served the new variation, and how each metric is performing.

From the Monitoring tab, you can track how each metric is performing, dismiss regression alerts, stop monitoring early, and roll back the release.

Monitoring charts

In a guarded rollout, each metric appears in its own tile. Each tile contains a difference chart.

Each metric tile includes:

  • Header: Identifies the metric and shows the current difference estimate. Hover over the metric name to view its definition, description, environment, and last-seen timestamp.
  • Chart: Shows the absolute difference between the new and original variations over time. The dark grey line represents the observed difference. The shaded grey area represents the confidence interval. If sequential testing determines that the observed absolute difference represents a statistically significant negative impact on a monitored metric, LaunchDarkly identifies a regression. When this occurs, the tile highlights the regression, and if automatic rollback is enabled, LaunchDarkly rolls back the release.
  • Footer: Summarizes the most recent results and displays the latest values for both the new and original variations.

On the Targeting tab, metric tiles appear in up to three columns in a single row. On the Monitoring tab, each metric appears on its own tab. Hover over a chart to view the new and original variation estimates, the confidence interval, and the number of contexts served at that point in time.

Difference charts availability

Difference charts are available only for guarded rollouts. Progressive rollouts and experiments use other visualization methods.

In January 2026, LaunchDarkly began using sequential testing to evaluate mean and percentile metrics in guarded rollouts.

If all data points in a rollout were collected after this change, which includes all newly created guarded rollouts, LaunchDarkly displays results using the frequentist sequential testing approach.

If any data points in the rollout were collected before this change, LaunchDarkly displays results using the original Bayesian method. In that legacy method, results were based on estimating whether the treatment performed worse than the control using a probability-based model.

As a result, metric behavior and hover details may differ across rollouts depending on when the underlying data was recorded.

Interval of evaluation

During a guarded rollout, LaunchDarkly recalculates metric results on a regular schedule called the interval of evaluation.

At each interval, LaunchDarkly processes recent metric events, recalculates the absolute difference between the new and original variations, and evaluates whether the result is statistically significant using sequential testing. These evaluations occur roughly every minute. If metric events arrive more frequently, LaunchDarkly aggregates them into the next interval.

These regular updates ensure that the monitoring charts reflect the most recent metric data throughout the rollout.
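The aggregation step described above can be sketched as follows. This is an illustrative model only: the event shape, a `(timestamp_seconds, value)` tuple, and the one-minute interval are assumptions, not LaunchDarkly's internal representation.

```python
from collections import defaultdict

def bucket_events(events, interval_seconds=60):
    """Group (timestamp_seconds, value) metric events into fixed
    evaluation intervals, so each recalculation sees every event
    that arrived since the previous interval. Event shape and
    interval length are assumptions for illustration."""
    buckets = defaultdict(list)
    for ts, value in events:
        buckets[ts // interval_seconds].append(value)
    return dict(buckets)

events = [(0, 1.2), (30, 0.9), (61, 1.5), (65, 1.1), (130, 0.8)]
print(bucket_events(events))
# the events at t=61s and t=65s are aggregated into the same interval
```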

Monitoring window

Each guarded rollout has a monitoring window that spans the full rollout duration. LaunchDarkly uses all metric data collected within that window to evaluate performance.

If performance degrades late in the rollout, LaunchDarkly weighs that new data against earlier data. Larger degradations are detected more quickly, while smaller changes may require additional data before statistical significance is reached. To improve responsiveness, use shorter stage durations or metrics that emit data more frequently.
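The trade-off between degradation size and detection speed follows from basic sampling statistics: halving the detectable effect roughly quadruples the data required. The sketch below uses a simplified fixed-horizon two-sided z-test, not LaunchDarkly's actual sequential test, and the numbers are hypothetical.

```python
import math

def samples_needed(effect, std_dev, z=1.96):
    """Rough per-variation sample size for a two-sided z-test to
    resolve a given absolute effect. A simplification of sequential
    testing, used only to show how effect size drives detection speed."""
    return math.ceil(2 * (z * std_dev / effect) ** 2)

# A large degradation is resolvable with far less data than a small one.
print(samples_needed(effect=5.0, std_dev=10.0))   # 31
print(samples_needed(effect=0.5, std_dev=10.0))   # 3074
```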

New variation estimates

New variation estimates are point estimates of the metric value for the new variation. A point estimate uses sample data to calculate a single value that approximates the metric’s true value.

For metrics that use a percentile analysis method, such as latency at the 99th percentile, the "Estimate of new variation" cell shows the estimated percentile value for contexts served the new variation.

For metrics that use the average analysis method, the "Estimate of new variation" cell shows the average metric value for contexts served the new variation.
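Both kinds of point estimate can be sketched from sample data. The nearest-rank percentile rule below is an assumption for illustration; the document does not specify LaunchDarkly's exact estimator.

```python
import math
import statistics

def point_estimates(samples, percentile=99):
    """Point estimates for a metric under one variation: the sample
    average (for mean metrics) and a nearest-rank percentile (for
    percentile metrics such as p99 latency). The nearest-rank rule
    is an assumption for illustration."""
    ordered = sorted(samples)
    rank = max(0, math.ceil(percentile / 100 * len(ordered)) - 1)
    return statistics.mean(ordered), ordered[rank]

latencies_ms = list(range(1, 101))  # hypothetical samples: 1 ms .. 100 ms
mean_ms, p99_ms = point_estimates(latencies_ms)
print(mean_ms, p99_ms)  # 50.5 99
```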

Difference from original variation

The difference from the original variation measures how much the metric value for the new variation differs from the original variation.

LaunchDarkly calculates this value as the absolute difference between the two variations using the metric’s unit:

Absolute difference = (new variation estimate − original variation estimate)

For binary metrics, such as conversion rate or error rate, absolute difference is expressed in percentage points (pp). Percentage points represent the arithmetic difference between two percentage values.
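The formula and the percentage-point convention amount to a simple subtraction in the metric's own unit. The conversion-rate values below are hypothetical.

```python
def absolute_difference(new_estimate, original_estimate):
    """The absolute difference in the metric's own unit:
    new variation estimate minus original variation estimate."""
    return new_estimate - original_estimate

# A binary metric such as conversion rate, expressed as percentages:
# 4.5% on the new variation vs. 5.0% on the original is a
# -0.5 percentage-point (pp) difference, not a -10% relative change.
print(absolute_difference(4.5, 5.0))  # -0.5
```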

The confidence interval below the difference shows the range of values within which the true difference is likely to fall across repeated measurements.

When sequential testing determines that the absolute difference is statistically significant and indicates a negative impact based on the metric’s success criteria, LaunchDarkly identifies a regression.
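The regression decision can be sketched as a check on whether the confidence interval excludes zero in the direction that violates the metric's success criteria. This is a simplified fixed-interval stand-in: actual sequential testing uses bounds that account for repeated evaluation over the rollout.

```python
def is_regression(ci_low, ci_high, higher_is_better):
    """Flag a regression when the confidence interval for the absolute
    difference (new minus original) excludes zero in the direction that
    violates the metric's success criteria. Simplified illustration;
    not LaunchDarkly's actual sequential test."""
    if higher_is_better:
        return ci_high < 0   # the entire interval indicates a drop
    return ci_low > 0        # the entire interval indicates a rise

# Error rate (lower is better): +0.2pp to +0.9pp excludes zero -> regression.
print(is_regression(0.2, 0.9, higher_is_better=False))   # True
# Conversion rate (higher is better): interval spans zero -> inconclusive.
print(is_regression(-0.3, 0.4, higher_is_better=True))   # False
```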

Roll back releases

To manually roll back a release after LaunchDarkly has detected a regression:

  1. Navigate to the flag’s Targeting or Monitoring tab.
  2. Click Roll back. The “Stop rollout early” dialog appears.
  3. Choose which Variation to serve to all contexts after you stop monitoring. The field defaults to the original variation.
  4. Click Stop.

If you are using a guarded rollout on a prerequisite flag and you roll back the change, LaunchDarkly will not also roll back any changes on the dependent flags. You must roll back changes on dependent flags separately.

Automatic rollback behavior

LaunchDarkly automatically rolls back a guarded rollout in these scenarios:

  • If LaunchDarkly detects a regression at any time during the rollout and you have automatic rollback enabled.
  • If LaunchDarkly detects a sample ratio mismatch (SRM) at any time during the rollout. LaunchDarkly rolls back a release with an SRM whether or not you have automatic rollback enabled. To learn how to fix SRMs, read Sample ratio mismatch.
  • If the new variation is not served to enough contexts by the end of the rollout, LaunchDarkly cannot detect a regression and rolls back the release, even if automatic rollback is off. To improve results, try the rollout again with a longer duration.

When LaunchDarkly automatically rolls back a rollout, it sends an email and, if you use the LaunchDarkly Slack or Microsoft Teams app, a Slack or Teams notification.

A guarded rollout with insufficient contexts served.

If you are using a guarded rollout on a prerequisite flag and LaunchDarkly automatically rolls back the change, it will not also roll back any changes on the dependent flags. You must roll back changes on dependent flags separately.

Dismiss regression alerts

If LaunchDarkly detects a regression on a metric but you want to continue with the release, you can dismiss the regression alert for that metric. You can have multiple regressions on the same rollout if you have more than one metric attached to the rollout.

To dismiss an alert:

  1. Navigate to the flag’s Targeting or Monitoring tab.
  2. Next to the metric with the alert you want to dismiss, click Dismiss. The “Dismiss regression” dialog appears.
  3. Click Dismiss regression.

If LaunchDarkly detects another regression, you will receive another alert. If LaunchDarkly does not detect any additional regressions, the release will continue.

Stop monitoring early

To stop monitoring before the monitoring window is over:

  1. Navigate to the flag’s Targeting or Monitoring tab.
  2. Click Stop monitoring. The “Stop monitoring” dialog appears.
  3. Choose which Variation to serve to all contexts after you stop monitoring. The field defaults to the original variation.
  4. Click Stop.

You can also use the REST API: Update feature flag