Product Updates
[ What's launched at LaunchDarkly ]
December 09, 2025
Feature Flags
Qualitative Feedback
You can now collect qualitative user feedback directly from inside your LaunchDarkly feature flags. The new Feedback tab on any flag lets teams gather lightweight, contextual input from users during rollouts, right where you already monitor release health.
What’s new
- A guided setup flow that provides all the backend and frontend snippets needed to embed a feedback widget in your application (a rough sketch of the shape follows below)
- Automatic attachment of flag metadata, including which variation each user saw
- Real-time feedback inside LaunchDarkly, enriched with sentiment signals
- Seamless debugging workflows for Observability customers, including direct links from feedback to Session Replay
This brings user sentiment into the same workflow where you plan, monitor, and validate releases, helping teams quickly understand how changes landed and make fast, informed decisions.
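The exact snippets come from the guided setup flow, but a rough sketch of what a client-side embed might look like follows. The flag evaluation uses the real JavaScript SDK; the `/api/feedback` endpoint and `sendFeedback` helper are hypothetical placeholders, not LaunchDarkly APIs.

```typescript
// Hypothetical sketch of a feedback embed. The real snippets come from the
// guided setup flow; `sendFeedback` and `/api/feedback` are placeholders.
import * as LDClient from 'launchdarkly-js-client-sdk';

const context = { kind: 'user', key: 'user-123' };
const client = LDClient.initialize('YOUR_CLIENT_SIDE_ID', context);

await client.waitForInitialization();
const showNewCheckout = client.variation('new-checkout', false);

// Placeholder: POST the user's comment to your own backend, which forwards it
// along with the flag key and the variation this user saw.
async function sendFeedback(comment: string): Promise<void> {
  await fetch('/api/feedback', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      flagKey: 'new-checkout',
      variation: showNewCheckout,
      contextKey: context.key,
      comment,
    }),
  });
}
```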
Learn more
Tutorial
December 08, 2025
Feature Flags
Lifecycle Settings
Lifecycle Settings gives you more control over how LaunchDarkly identifies stale flags. You can now customize the “ready to archive” and “ready for code removal” calculations on a per-project basis, allowing your team to define flag cleanup rules that match your organization’s own standards.
This update also includes a redesigned archive workflow that provides clearer visibility into a flag’s lifecycle state and whether it’s safe to archive, helping teams clean up with confidence.
Why it matters
Stale flag cleanup is critical for long-term flag management, but every organization defines “stale” differently. Lifecycle Settings lets teams tailor LaunchDarkly’s status calculations to their own workflows so they can more easily organize, filter, and operationalize flag cleanup.
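LaunchDarkly's REST API already supports JSON-patch updates to projects, so assuming lifecycle thresholds are exposed as project-level settings, an update could look roughly like the sketch below. The `PATCH /api/v2/projects/{projectKey}` endpoint is real, but the `/lifecycleSettings/...` path is illustrative only; check the docs for the actual schema.

```typescript
// Hypothetical sketch: tighten a project's "ready to archive" threshold.
// The endpoint and JSON-patch format are real; the field path is illustrative.
const response = await fetch('https://app.launchdarkly.com/api/v2/projects/my-project', {
  method: 'PATCH',
  headers: {
    Authorization: 'YOUR_API_ACCESS_TOKEN',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify([
    // e.g. treat a flag as "ready to archive" after 30 days without evaluations
    { op: 'replace', path: '/lifecycleSettings/readyToArchiveDays', value: 30 },
  ]),
});
console.log(response.status);
```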
Learn more
Docs
Video overview
December 01, 2025
AI Engineering
Playgrounds for AI Configs
You can now use Playgrounds in LaunchDarkly to quickly test and compare AI Configs without writing any custom code. Playgrounds let teams define reusable evaluations that bundle prompts, models, parameters, and variables, then run them on demand to generate completions and inspect results in a structured, repeatable way.
Playgrounds also support automatic scoring: attach a separate LLM to evaluate each completion using your own rubric (for example, correctness, relevance, or toxicity). This shortens the iteration loop and makes it easier to understand which configuration performs best before you roll it out.
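Automatic scoring is an instance of the familiar LLM-as-judge pattern. As a conceptual sketch of that pattern only (not LaunchDarkly's implementation), where `complete` stands in for a call to whatever judge model you attach:

```typescript
// Conceptual LLM-as-judge sketch: ask a judge model to grade a completion
// against a rubric and return a numeric score. `complete` is a stand-in.
type Completion = { prompt: string; output: string };

async function scoreCompletion(
  completion: Completion,
  rubric: string,
  complete: (judgePrompt: string) => Promise<string>,
): Promise<number> {
  const judgePrompt = [
    `Rubric: ${rubric}`,
    `Prompt: ${completion.prompt}`,
    `Response: ${completion.output}`,
    'Score the response against the rubric from 0 to 10. Reply with only the number.',
  ].join('\n');
  const raw = await complete(judgePrompt);
  return Number.parseFloat(raw.trim());
}
```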
Future updates will include bulk evaluations, dataset uploads, and more advanced comparison tools, all powered by the same evaluation service underlying Playgrounds.
Learn more
Docs
December 01, 2025
Guarded Release
Flag Audiences
Flag Audiences gives you a clear view of who evaluated a feature flag and which variation they received. You can now see unique contexts for each variation, filter by time range, and search by key to understand exactly who saw a new feature or who was impacted during a regression.
When the Observability SDK is installed, audience entries also link to Session Replay, allowing you to watch user sessions to understand behavior, troubleshoot issues, and validate releases.
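Because audience entries are keyed by context, what you see in the Audience view reflects the contexts your client-side SDK evaluates with. A minimal sketch using the JavaScript client SDK (the flag key and context values are illustrative):

```typescript
// Each evaluation like this contributes an audience entry for 'new-dashboard',
// searchable by the context key ('user-456') and filterable by variation.
import * as LDClient from 'launchdarkly-js-client-sdk';

const context = { kind: 'user', key: 'user-456', name: 'Jane Example' };
const client = LDClient.initialize('YOUR_CLIENT_SIDE_ID', context);

await client.waitForInitialization();
const variation = client.variation('new-dashboard', false);
console.log(`user-456 saw variation: ${variation}`);
```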
What’s new
- Audience view on supported client-side flags
- Unique contexts per variation
- Search by context key
- Filter by time range and variation
- Session Replay links when observability data is available
- Shows up to 250 audience members at once (from the most recent 1,000 evaluations)
Availability
Available to Guardian customers and Guardian trial accounts. Requires supported client-side SDKs; Session Replay links additionally require the Observability SDK.
Learn more
Docs
November 19, 2025
Guarded Release
Introducing Vega — the LaunchDarkly Observability AI (Early Access)
Vega is your AI-powered debugging companion inside LaunchDarkly.
It helps developers understand, debug, and fix issues directly within Observability views — logs, traces, errors, and sessions — by gathering relevant flag and telemetry context to explain what happened, why, and how to fix it.
Vega includes two integrated capabilities:
- Vega Agent: An AI debugging assistant that summarizes errors, identifies root causes, and suggests code fixes (with GitHub integration).
- Vega Search Assistant: A natural-language search tool that lets you ask questions across your observability data, like “Which traces increased error rates?”
Unlike off-the-shelf AI copilots, Vega is context-aware and built specifically for LaunchDarkly, combining observability data, recent flag changes, and AI reasoning for faster triage and smarter fixes.
Availability
Early Access: available to self-serve customers who signed up on or after July 7, 2024. Enterprise availability will follow at a later date.
Learn more