Product Updates
[ What's launched at LaunchDarkly ]
December 01, 2025
AI Engineering
Playgrounds for AI Configs
You can now use Playgrounds in LaunchDarkly to quickly test and compare AI Configs without writing any custom code. Playgrounds let teams define reusable evaluations that bundle prompts, models, parameters, and variables, then run them on demand to generate completions and inspect results in a structured, repeatable way.
Playgrounds also support automatic scoring: attach a separate LLM to evaluate each completion using your own rubric (for example, correctness, relevance, or toxicity). This shortens the iteration loop and makes it easier to understand which configuration performs best before you roll it out.
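The automatic-scoring flow described above follows the common LLM-as-judge pattern: a separate model grades each completion against your rubric. The sketch below is a conceptual illustration of that pattern, not the Playgrounds API; the judge is a stub so the example runs offline, where a real setup would prompt your configured evaluation LLM.

```python
# Conceptual sketch of rubric-based automatic scoring (LLM-as-judge).
# The judge here is a stub; Playgrounds attaches a real LLM you configure.

RUBRIC = {
    "correctness": "Does the answer address the question accurately?",
    "relevance": "Does the answer stay on topic?",
}

def stub_judge(completion: str, criterion: str, description: str) -> float:
    """Stand-in for an LLM judge returning a score in [0, 1].
    A real judge would prompt a model with the rubric and completion."""
    # Trivial heuristic so the sketch is runnable without a model.
    return 1.0 if completion.strip() else 0.0

def score_completion(completion: str, rubric: dict[str, str]) -> dict[str, float]:
    """Score one completion on every rubric criterion."""
    return {name: stub_judge(completion, name, desc) for name, desc in rubric.items()}

scores = score_completion("Feature flags decouple deploy from release.", RUBRIC)
```

Running each candidate configuration's completions through the same rubric yields structured, comparable scores, which is what makes the iteration loop repeatable.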
Future updates will include bulk evaluations, dataset uploads, and more advanced comparison tools, all powered by the same evaluation service underlying Playgrounds.
November 19, 2025
Guarded Release
Introducing Vega — the LaunchDarkly Observability AI (Early Access)
Vega is your AI-powered debugging companion inside LaunchDarkly.
It helps developers understand, debug, and fix issues directly within Observability views — logs, traces, errors, and sessions — by gathering relevant flag and telemetry context to explain what happened, why, and how to fix it.
Vega includes two integrated capabilities:
- Vega Agent: An AI debugging assistant that summarizes errors, identifies root causes, and suggests code fixes (with GitHub integration).
- Vega Search Assistant: A natural-language search tool that lets you ask questions across your observability data, such as "Which traces show increased error rates?"
Unlike off-the-shelf AI copilots, Vega is context-aware and built specifically for LaunchDarkly, combining observability data, recent flag changes, and AI reasoning for faster triage and smarter fixes.
Availability
Early Access — Available for self-serve customers who signed up on or after July 7, 2024.
Enterprise availability will follow at a later date.
November 19, 2025
Feature Flags
Introducing Feature Delivery v2 (FDv2) for the Python SDK (Early Access)
Feature Delivery v2 (FDv2) is now available in Early Access for the LaunchDarkly Python SDK v9.13.0, bringing faster initialization, smarter data handling, and improved resiliency to feature flag delivery.
FDv2 introduces several key enhancements:
- Two-phase initialization: Start quickly from a polling data source, then synchronize from a long-lived streaming connection.
- Data-saving mode: Resume from a previously saved state during reconnects to reduce bandwidth and processing time.
- File-based initialization: Initialize from a local file first, then connect to LaunchDarkly APIs for updates.
- Automatic failover: Seamlessly switch to polling if the streaming connection fails.
- Improved flag caching: When using a persistent store in non-daemon mode, flag data is no longer limited by cache TTL.
Together, these improvements make flag delivery faster, more reliable, and more efficient across all environments.
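To make the two-phase model above concrete, here is a minimal sketch of the idea (not the SDK's actual internals, and all names are illustrative): the store becomes ready as soon as a one-shot poll returns a full snapshot, and a long-lived stream then applies incremental updates on top of it.

```python
# Conceptual sketch of two-phase initialization: serve flags from a fast
# polled snapshot first, then apply updates from a streaming connection.
# Class and method names are hypothetical, for illustration only.

class FlagStore:
    def __init__(self) -> None:
        self.flags: dict[str, object] = {}
        self.initialized = False

    def init_from_snapshot(self, snapshot: dict[str, object]) -> None:
        """Phase 1: a one-shot poll yields a full snapshot; ready immediately."""
        self.flags = dict(snapshot)
        self.initialized = True

    def apply_stream_update(self, key: str, value: object) -> None:
        """Phase 2: the streaming connection delivers incremental changes."""
        self.flags[key] = value

store = FlagStore()
store.init_from_snapshot({"new-checkout": False, "dark-mode": True})  # fast poll
store.apply_stream_update("new-checkout", True)  # later, via the stream
```

The same shape explains automatic failover: if the stream drops, the client can fall back to polling fresh snapshots without losing its initialized state.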
Availability
Early Access — Data-saving mode is only available to members of LaunchDarkly’s Early Access Program (EAP). If you want access to this feature, join the EAP.
November 18, 2025
Feature Flags
Improved Feature Flag Creation Workflow
We’ve redesigned one of the most used workflows in LaunchDarkly — flag creation — to make it cleaner, simpler, and faster.
The new experience includes:
- Fewer steps to create and configure flags.
- A modern sidebar interface that surfaces configuration options more intuitively.
- Smarter defaults: new flags are now available to client-side and mobile SDKs by default for new accounts, helping prevent silent integration issues.
This update streamlines the process of getting started with flags, reducing complexity and setup time for both new and experienced users.
November 14, 2025
Experimentation
Tags for Experiments
You can now add and manage tags on LaunchDarkly experiments, making it easier to organize and scale your experimentation program across products, teams, and environments.
With this update, teams can:
- Add custom tags to any experiment — label by team, goal, product area, or any category that fits your workflow.
- Filter experiments by one or more tags to quickly find, compare, or manage related tests.
- Apply consistent tags across other LaunchDarkly resources (flags, segments, metrics, projects, environments) to align experiments with your broader release taxonomy.
Learn more: Tags documentation
Explore the docs
What makes experimentation with LaunchDarkly stand out among other tools on the market?