All Blog Posts

Feb 27
Beyond feature flags: LaunchDarkly vs. other release management tools

Not all release management platforms are built for production-grade control.

LaunchDarkly

Feb 25
Why "free" tools aren't free

DIY tools are great at the start...

LaunchDarkly

Risk mitigation
Feb 23
A false sense of security: Guardrails don’t prevent incidents

Tools alone aren't enough.

LaunchDarkly

Experimentation
Feb 21
Introducing sequential testing for LaunchDarkly Experimentation

Sequential testing lets you adapt quickly and check results as you go.

Cameron Savage

AI
Feb 20
AI-generated code ships fast, but runtime control hasn’t kept up

AI is speeding up code generation, but control in production is lagging behind.

Experimentation
Feb 04
Metric Data Sources: import multiple tables for warehouse-native experimentation

Bring your own warehouse tables and schemas to power experimentation.

Eric Wang

AI
Jan 22
Introducing LLM Playground for AI Configs

Test, compare, and trace LLM prompt and model variations before they reach production.

Kelvin Yap

AI
Jan 06
LLM Evaluation: Tutorial & Best Practices

Learn how to properly evaluate large language models in various applications and contexts.

LaunchDarkly

Experimentation
Dec 18
Introducing stratified sampling for LaunchDarkly Experimentation

Support fair, reliable experiment outcomes by eliminating hidden sample bias.

Neha Julka

Product Updates
Dec 16
Meet the new navigation in LaunchDarkly

A cleaner, more focused navigation reduces noise and helps you move faster.

Sruthy Kumar

AI
Dec 12
Creating better runtime control with LaunchDarkly and AWS

Ship bold AI changes without the guesswork.

Neha Julka

Product Updates
Dec 01
Introducing Audiences: See who your flags are really impacting

Instantly trace who saw your flag and what happened next.

Rachel Groberman

AI
Nov 26
Online evals: LLM-as-a-Judge

Online evals in AI Configs give teams quality signals to successfully ship AI changes.

Kelvin Yap

Feature Flags
Nov 04
Join us at AWS re:Invent 2025

Visit us at booth #1339!

Neha Julka

Experimentation
Nov 04
New Experimentation tools for PMs who test, learn, and move fast

Test, learn, and ship faster with new Experimentation tools.

Allison Rogers

AI
Oct 31
Delivering adaptive AI with LaunchDarkly and Snowflake Cortex

LaunchDarkly & Snowflake enable AI delivery with real-time config and runtime safety.

Neha Julka

Feature Flags
Oct 30
Less clutter, more control: Manage flag permissions at scale

Preset Role Scope and Flag Lifecycle Settings help you ship cleaner, faster releases.

Bhargav Brahmbhatt

AI
Oct 30
Understanding AI behavior: LLM observability in AI Configs

Get deeper visibility into model behavior and impact with LLM observability.

Kelvin Yap

Experimentation
Oct 30
The Metrics glow-up: Smoother, smarter, simpler

Experience a faster, simpler way to build, manage, and trust your metrics.

Sruthy Kumar

AI
Oct 30
Prompt Engineering Best Practices

Poorly written prompts can throw entire AI projects off track.

LaunchDarkly

Oct 28
Our October 20 Service Disruption: What Happened, What We Learned, and How We’re Improving

Sonesh Surana

Feature Flags
Oct 27
Accelerating release safety with LaunchDarkly Vega and GitHub

Using Vega and GitHub together strengthens your release control plane.

Neha Julka

Experimentation
Oct 23
Multiple Multi-Armed Bandits

Run multiple MABs on one flag to optimize experiences in parallel.

Scott Shindeldecker

Feature Flags
Oct 08
The developer's guide to free feature flagging services

Feature flags let you deploy code safely, test in production, and roll back instantly.

Jesse Sumrak