Day 3 | 🔔 Jingle All the Way to Zero-Config Observability

Published December 10, 2025


by Alexis Roberson

For years, auto-instrumentation promised effortless observability but kept falling short. You’d still end up manually adding spans to business logic, hunting down missing metadata, or trying to piece together how a feature rollout was affecting customers.

That finally shifted in 2025. With OTel auto-instrumentation maturing and LaunchDarkly adding built-in OTel support to server-side SDKs, teams started getting feature flag context baked into their traces without writing instrumentation code. The zero-config promise actually started delivering.

Auto-instrumentation has always had a blind spot: it shows you what happened, but not why. You'd see a latency spike but have no idea which feature flag was active, which users hit it, or what experiment was running.

Without that context, you’re doing detective work. Digging through logs, matching up timestamps, guessing at what caused what. Manual instrumentation helped, but you paid for it in engineering time, inconsistent coverage, and mounting technical debt.

Auto-instrumentation that actually knows about your features

The game changed when OTel auto-instrumentation actually got good. Instead of just capturing basic HTTP calls, it now handles all of the following (there's a quick Python example after the list):

  • Framework-level request tracing.
  • Automatic context propagation across services.
  • Runtime metadata and environment details.
  • Errors and exceptions without manual try-catch blocks.
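In Python, for instance, most of that comes from installing the relevant instrumentation packages and instrumenting once at startup, rather than hand-writing spans. Here's a minimal sketch with Flask, assuming opentelemetry-instrumentation-flask and opentelemetry-instrumentation-requests are installed and an exporter is configured elsewhere:

from flask import Flask
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from opentelemetry.instrumentation.requests import RequestsInstrumentor

app = Flask(__name__)

# Framework-level request tracing: every incoming request becomes a span.
FlaskInstrumentor().instrument_app(app)

# Outgoing HTTP calls get traced and carry context to downstream services.
RequestsInstrumentor().instrument()

@app.route("/checkout")
def checkout():
    # Handler code runs inside an auto-created server span; unhandled
    # exceptions are recorded on that span automatically.
    return "ok"

The opentelemetry-instrument CLI from opentelemetry-distro gets you similar coverage without touching application code at all.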

LaunchDarkly takes this further by injecting flag evaluation data straight into OTel spans. Every time you evaluate a flag, you automatically get the flag key, the user context, which variation was served, and the targeting rule that fired. That data feeds into your existing OTel pipeline, so your traces finally show which features were active and who was affected, not just database queries and API calls.

So how do you actually set this up?

To get started with OTel trace hooks and feature flag data in the Python SDK, add the hook to your LaunchDarkly client config.

import ldclient
from ldclient import Config
from ldotel.tracing import Hook

# Register the OTel tracing hook when configuring the SDK client
config = Config('YOUR_SDK_KEY', hooks=[Hook()])
ldclient.set_config(config=config)

client = ldclient.get()

This flows into your existing OpenTelemetry pipeline, enriching every trace with feature-aware context.

The tracing hook automatically decorates your OpenTelemetry spans with flag evaluation events. When your application evaluates flags during a request, those evaluations become part of the trace, along with the full context about what was evaluated and for whom.
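Concretely, here's a sketch of what that looks like with the Python setup above; the span name, flag key, and user key are just illustrative:

from opentelemetry import trace
import ldclient
from ldclient import Context

tracer = trace.get_tracer(__name__)
client = ldclient.get()

# Any flag evaluated while this span is active is recorded on it
with tracer.start_as_current_span("checkout"):
    user = Context.builder("user-key-123").kind("user").build()
    # The hook adds an evaluation event (flag key, context, variation)
    # to the current span; no extra instrumentation code needed.
    use_new_flow = client.variation("new-checkout-flow", user, False)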

You can also configure your OpenTelemetry collector or exporter to point to LaunchDarkly’s OTLP endpoint, and you’re done.

For HTTP:

https://otel.observability.app.launchdarkly.com:4318

For gRPC:

https://otel.observability.app.launchdarkly.com:4317
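If you're configuring the exporter in code rather than through a collector, a minimal Python sketch might look like this. It assumes the opentelemetry-sdk and OTLP HTTP exporter packages are installed; check LaunchDarkly's documentation for any authentication headers you may need to add, since they're not shown here:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Point the OTLP HTTP exporter at LaunchDarkly's endpoint.
# (The HTTP traces path is conventionally /v1/traces.)
exporter = OTLPSpanExporter(
    endpoint="https://otel.observability.app.launchdarkly.com:4318/v1/traces",
)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)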

This feature is also available in .NET, Go, Java, Node.js, and Ruby.

Auto-instrumentation handles the rest: HTTP spans, database calls, framework-level tracing, error capture, and now, feature flag context.

What auto-instrumentation unlocks

When you ship a new feature variant, you immediately see how it performs per cohort. If there’s a latency spike in the “new-checkout-flow” variation, you’ll know within minutes, before it affects user experience.

That same visibility matters during incidents. When an outage hits, filter traces by flag evaluation to see which features were active when errors occurred. The trace shows you whether it was the new recommendation engine, the optimized query path, or something else entirely.

This is especially powerful for experimentation. LaunchDarkly processes your OTel traces into metrics automatically, so when you run an A/B test, you get latency, error rate, and throughput calculated per variation without extra config. The same telemetry powering your dashboards powers your experiments.

The best part of this setup is that it scales without additional work. As teams ship more features behind flags, the telemetry gets more valuable without getting more expensive to maintain. New services inherit feature-aware tracing just by initializing the SDK.

When to add custom spans

Zero-config doesn’t mean never-config. You’ll still want custom spans for:

  • Business logic milestones. If you need to measure time-to-recommendation or search-to-purchase, custom spans make that explicit.
  • ML pipeline stages. Feature extraction, model inference, and post-processing often warrant their own spans for detailed performance analysis.
  • Cross-service boundaries. Queue producers, stream processors, and async workers may need manual context propagation and span creation.
  • Experiment-specific KPIs. If your A/B test measures “items added to cart” or “video completion rate,” you’ll instrument those as custom metrics.

The important part is you’re writing these spans to capture business value, not to patch holes in your instrumentation.
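For example, a business-logic span around recommendation generation might look like the sketch below; the span name, attributes, and ranking function are all hypothetical, not part of any SDK:

from opentelemetry import trace

tracer = trace.get_tracer("recommendations")

def rank_candidates(user_id: str, candidates: list[str]) -> list[str]:
    # Placeholder for your actual ranking logic.
    return sorted(candidates)

def recommend(user_id: str, candidates: list[str]) -> list[str]:
    # Custom span for a business-logic milestone: time-to-recommendation.
    with tracer.start_as_current_span("generate_recommendations") as span:
        span.set_attribute("recommendations.candidate_count", len(candidates))
        ranked = rank_candidates(user_id, candidates)
        span.set_attribute("recommendations.returned_count", len(ranked))
        return ranked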

Delivering real value

Combining mature auto-instrumentation with feature-aware enrichment changes how teams approach observability. It’s no longer a separate investment that competes with feature development. It’s a byproduct of how you ship features.

When you evaluate a flag, you get telemetry. When you roll out a feature, you get performance data segmented by variation. When you run an experiment, you get metrics derived from production traces. The instrumentation you would have written manually is now embedded in the tools you already use.

Observability stops being something you retrofit after launch and becomes something you inherit by default. Which means teams spend less time debugging instrumentation gaps and more time acting on insights.

That’s the promise of zero-config, finally delivered.

Ready to try it? Explore LaunchDarkly’s OpenTelemetry integration documentation or sign up for a free trial account.

Enable Observability or Experimentation in your LaunchDarkly dashboard and start seeing feature-aware telemetry from your existing traces.