AI Configs

Overview

The topics in this category explain how to use LaunchDarkly to manage your AI Configs. You can use AI Configs to customize, test, and roll out new large language models (LLMs) within your generative AI applications.

An AI Config is a single resource that you create in LaunchDarkly to control how your application uses large language models. It lets teams manage model configuration outside of application code, enabling safer iteration, experimentation, and releases without redeploying.

AI Configs support two configuration modes:

  • Completion mode: Use messages and roles to configure prompts for single-step model responses. You can attach judges to completion-mode AI Config variations in the LaunchDarkly UI. To learn more, read Create and manage AI Config variations.
  • Agent mode: Use instructions to configure multi-step workflows. For agent-based variations, invoke a judge programmatically using the AI SDK. Agent mode does not create a separate resource. To learn more, read Agents in AI Configs.

Both completion mode and agent mode can integrate with external tools or APIs. Tool usage depends on how your application and SDK are implemented, not on the AI Config mode. Agent mode is optimized for structured, multi-step workflows, but using tools is not limited to agent mode.

Both modes use the same AI Config resource type and share core concepts such as variations, targeting rules, monitoring, experimentation, and lifecycle management.
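To make the two modes concrete, here is an illustrative sketch of how a completion-mode variation and an agent-mode variation differ in shape. The key names and values are assumptions for this example only, not LaunchDarkly's exact schema:

```python
# Illustrative only: hypothetical shapes for the two AI Config modes.
# Key names are assumptions for this sketch, not LaunchDarkly's exact schema.

completion_variation = {
    "model": {"provider": "openai", "name": "gpt-4o", "parameters": {"temperature": 0.7}},
    "messages": [  # completion mode: messages with roles, for single-step responses
        {"role": "system", "content": "You are a helpful support assistant."},
        {"role": "user", "content": "Summarize this ticket."},
    ],
}

agent_variation = {
    "model": {"provider": "anthropic", "name": "claude-sonnet", "parameters": {"temperature": 0.2}},
    # agent mode: instructions for a multi-step workflow instead of messages
    "instructions": "Plan the steps, call tools as needed, then draft a reply.",
}

def mode_of(variation: dict) -> str:
    """Classify a variation by which configuration style it carries."""
    return "completion" if "messages" in variation else "agent"

print(mode_of(completion_variation))  # completion
print(mode_of(agent_variation))       # agent
```

Both shapes share the same model-configuration block; only the prompt-style field differs, which is why the two modes can share targeting, monitoring, and lifecycle features.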

With AI Configs, you can:

  • Manage your model configuration outside of your application code. This means you can update model details and messages at runtime, without deploying changes. Teammates who have access to LaunchDarkly but are unfamiliar with the codebase can still collaborate and iterate on messages.
  • Upgrade to new model versions as soon as they are available and roll out changes gradually and safely.
  • Add new model providers and progressively shift production traffic between them.
  • Compare variations to determine which performs better based on cost, latency, satisfaction, or other metrics.
  • Run experiments to measure the impact of generative AI features on end user behavior.

AI Configs support advanced use cases such as retrieval-augmented generation, integration with external tools or APIs, and evaluation in production. You can:

  • Track which knowledge base or vector index is active for a given model or audience.
  • Experiment with different chunking strategies, retrieval sources, or prompt and instruction structures.
  • Evaluate outputs using side-by-side comparisons or online evaluations with judges in completion mode, or, for other variations, invoke a judge programmatically using the AI SDK.
  • Build guardrails into runtime configuration using targeting rules to block risky generations or switch to fallback behavior.
  • Apply different safety filters by user type, geography, or application context.
  • Use live metrics, including satisfaction and quality signals you define, to guide rollouts.
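One of the items above, building guardrails into runtime configuration with fallback behavior, can be sketched as follows. The risk check, term list, and fallback text here are hypothetical placeholders for whatever rules your application serves:

```python
# Illustrative only: a runtime guardrail that switches to fallback behavior
# when a risk check fails. The term list and fallback text are hypothetical.

RISKY_TERMS = {"password", "ssn"}

def guarded_generate(prompt: str, generate, fallback="I can't help with that."):
    """Serve a canned fallback instead of calling the model for risky prompts."""
    if any(term in prompt.lower() for term in RISKY_TERMS):
        return fallback
    return generate(prompt)

# A stand-in for a real model call:
model_call = lambda prompt: "model output"

print(guarded_generate("What's my SSN?", model_call))   # I can't help with that.
print(guarded_generate("Write a haiku", model_call))    # model output
```

In practice, the guardrail conditions would live in your AI Config's targeting rules rather than in application code, so you can tighten or relax them at runtime.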

These capabilities let you evaluate model behavior in production, run targeted experiments, and adopt new models safely without being locked into a single provider or manual workflow.

If you are using an AI agent to create and manage AI Configs, you can use LaunchDarkly agent skills to help AI coding agents execute common tasks safely and consistently.

Availability

AI Configs is an add-on feature. Access depends on your organization’s LaunchDarkly plan. If AI Configs does not appear in your project, your organization may not have access to it.

To enable AI Configs for your organization, contact your LaunchDarkly account team. They can confirm eligibility and assist with activation.

For information about pricing, visit the LaunchDarkly pricing page or contact your LaunchDarkly account team.

How AI Configs work

Every AI Config contains one or more variations. Each variation includes a model configuration and uses messages in completion mode or instructions in agent mode. You can define targeting rules to control which variations are served to specific contexts.
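Conceptually, targeting rules map a context to a variation key. The sketch below illustrates that idea only; real evaluation happens inside the LaunchDarkly SDK, and the rule format here is a simplification invented for this example:

```python
# Illustrative only: how targeting rules conceptually map a context to a
# variation. Real evaluation happens inside the LaunchDarkly SDK; this rule
# format is a simplification invented for this sketch.

def choose_variation(context: dict, rules: list, default: str) -> str:
    """Return the key of the first variation whose rule matches the context."""
    for attribute, expected, variation_key in rules:
        if context.get(attribute) == expected:
            return variation_key
    return default

rules = [
    ("plan", "enterprise", "gpt-4o-variation"),   # enterprise users get the larger model
    ("region", "eu", "eu-hosted-variation"),      # EU traffic stays on an EU-hosted provider
]

print(choose_variation({"plan": "enterprise"}, rules, "baseline"))  # gpt-4o-variation
print(choose_variation({"plan": "free"}, rules, "baseline"))        # baseline
```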

You can attach judges to completion-mode AI Config variations in the LaunchDarkly UI. For other variations, invoke a judge programmatically using the AI SDK. To learn more, read Online evaluations in AI Configs.

Then, within your application, you use one of LaunchDarkly’s AI SDKs. The SDK determines which variation your application should serve to each context, including its model configuration and messages or instructions. The SDK can also customize messages using context attributes and other variables that you provide, so the variation each end user receives is tailored at runtime. You can update messages, specific to each end user, without redeploying your application.
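The runtime customization step can be pictured as template interpolation. This sketch uses Python's standard-library `string.Template` as a stand-in for the SDK's behavior; the attribute names and message text are assumptions for this example:

```python
# Illustrative only: interpolating context attributes into a message at
# runtime. string.Template stands in for the AI SDK's customization step;
# the attribute names and message text are invented for this sketch.
from string import Template

message = "Hello $name, you are on the $plan plan. How can I help?"
context_attributes = {"name": "Ada", "plan": "enterprise"}

rendered = Template(message).substitute(context_attributes)
print(rendered)  # Hello Ada, you are on the enterprise plan. How can I help?
```

Because the message text lives in the AI Config rather than in code, editing the template in LaunchDarkly changes what every user sees on the next evaluation, with no deployment.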

After you use this customized config in your AI model generation, you can use the SDK to record various metrics, including generation count, tokens, and satisfaction rate. These appear in the LaunchDarkly user interface for each AI Config variation.
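As a rough mental model of what gets recorded per variation, here is a minimal stand-in for those metrics. The class and method names are invented for this sketch and are not the AI SDK's API:

```python
# Illustrative only: a minimal stand-in for the per-variation metrics the
# AI SDKs can record. Class and method names are invented for this sketch.
from collections import defaultdict

class VariationMetrics:
    def __init__(self):
        self.generations = 0  # how many times this variation produced output
        self.tokens = 0       # total tokens consumed
        self.satisfied = 0    # positive satisfaction signals
        self.rated = 0        # total satisfaction signals

    def record_generation(self, tokens: int):
        self.generations += 1
        self.tokens += tokens

    def record_satisfaction(self, satisfied: bool):
        self.rated += 1
        self.satisfied += int(satisfied)

    @property
    def satisfaction_rate(self) -> float:
        return self.satisfied / self.rated if self.rated else 0.0

metrics = defaultdict(VariationMetrics)
metrics["variation-a"].record_generation(tokens=128)
metrics["variation-a"].record_satisfaction(True)
metrics["variation-a"].record_satisfaction(False)
print(metrics["variation-a"].satisfaction_rate)  # 0.5
```

In LaunchDarkly, these signals roll up per AI Config variation, which is what lets you compare variations on cost, latency, and satisfaction.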

The topics in this category explain how to create AI Configs and variations, update targeting rules, monitor related metrics, and incorporate AI Configs in your application.
