Quickstart for AI Configs

Overview

AI Configs let you manage model configuration and instructions for your AI agents outside of your application code. With AI Configs, you can update prompts and models without redeploying.

By the end of this quickstart, you will have:

  • Created your first agent-based AI Config
  • Deployed your config and called it from your application
  • Made a change to your prompt or model without redeploying
Using an AI coding assistant?

If you have the LaunchDarkly MCP server or agent skills configured, prompt your coding assistant to create the AI Config for you. For example:

“Create an agent-based AI Config called ‘my-agent’ with a GPT-5.5 variation with instructions: ‘You are a helpful assistant.’”

Prerequisites

To complete this quickstart, you need the following:

  • A LaunchDarkly account and a server-side SDK key for your environment. To find your SDK key, read SDK credentials.
  • An API key for your model provider, such as OpenAI or Anthropic, made available to your application as an environment variable.
  • Python 3.10 or higher. To get started with Node.js or other languages, read the AI SDK documentation.
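For the model provider prerequisite, one minimal way to expose the key to your application is to read it from an environment variable at startup. A sketch, assuming the `ANTHROPIC_API_KEY` variable name used by Anthropic's tooling (adjust for your provider):

```python
import os


def get_provider_api_key(var_name: str = "ANTHROPIC_API_KEY") -> str:
    """Read the model provider's API key from the environment, failing fast if unset."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set {var_name} before starting the application")
    return key
```

Failing fast here surfaces a missing key at startup rather than as an opaque provider error on the first agent call.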

This quickstart uses LangChain as the agent framework. You can adapt the same configuration code to the OpenAI Agents SDK, Strands, or the Claude Agent SDK. For an end-to-end code sample showing different frameworks and model providers, read Full example code.

Step 1: Install the SDK

First, install the LaunchDarkly server-side SDK and AI SDK, as well as the agent framework you want to use. This quickstart uses LangChain with Anthropic.

To get started with other frameworks or model providers, read the AI Configs guides.

Python
$ pip install launchdarkly-server-sdk
$ pip install launchdarkly-server-sdk-ai
$ pip install langchain langchain-anthropic

Step 2: Initialize the client

Python
import ldclient
from ldclient.config import Config
from ldai.client import LDAIClient

ldclient.set_config(Config("YOUR_SDK_KEY"))
aiclient = LDAIClient(ldclient.get())

context = ldclient.Context.create("user-123")

Step 3: Create an agent-based AI Config in LaunchDarkly

Create an agent-based AI Config to store your model settings and instructions. You can do this through the LaunchDarkly UI, or agentically using the LaunchDarkly MCP server from an AI coding assistant such as Claude Code or Cursor.

To create the agent-based AI Config in the UI:

  1. In the left navigation, select AI Configs, then click Create AI Config.
  2. In the “Create AI Config” dialog, select Agent-based.
  3. Enter a name for your agent-based AI Config and click Create.
Save the AI Config key

When you name your AI Config, a key generates automatically. Copy and save this key now. You’ll use it in Step 5.

  4. On the Variations tab, add instructions for your agent. For example:
Example instructions
You are a helpful weather assistant. Use the get_weather tool to look up
current conditions for any location the user asks about, then answer
concisely in plain language.
  5. Click Review and save.

The Variations tab for an agent-based AI Config, showing a default variation with a model and agent task instructions.

To learn more about configuring agent variations, read Agents in AI Configs.

Step 4: Set up targeting

Now that your agent-based AI Config has a variation, configure targeting to serve it to your users.

  1. Select the Targeting tab for your agent-based AI Config.
  2. In the Default rule section, click Edit.
  3. Set the default rule to serve your new variation.
  4. Click Review and save.

The Targeting tab for an agent-based AI Config, showing the Default rule with the variation dropdown open.

Your agent-based AI Config is now active. When your application calls the SDK, LaunchDarkly evaluates the targeting rules and returns the variation you configured.
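The evaluation model can be pictured with a small stand-in (hypothetical names, not the SDK's internals): targeting rules are checked in order against the context, and the default rule's variation serves when none match.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple


@dataclass
class Variation:
    model: str
    instructions: str


@dataclass
class TargetingStub:
    # Hypothetical stand-in for LaunchDarkly's server-side evaluation:
    # each rule pairs a predicate on the context with a variation, and the
    # default rule's variation serves when no rule matches.
    default_variation: Variation
    rules: List[Tuple[Callable[[Dict], bool], Variation]] = field(default_factory=list)

    def evaluate(self, context: Dict) -> Variation:
        for predicate, variation in self.rules:
            if predicate(context):
                return variation
        return self.default_variation
```

In this quickstart you only set the default rule, so every context receives the variation you created in Step 3.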

To learn more, read Target with AI Configs.

Step 5: Use the agent-based AI Config with your agent framework

First, retrieve the agent-based AI Config in your application. Replace your-agent-config-key with the AI Config key you saved in Step 3:

Python
from ldai import AIAgentConfigDefault

config = aiclient.agent_config(
    "your-agent-config-key",
    context,
    AIAgentConfigDefault(),
)

Next, pass the model name and instructions from config into your agent framework. The example below uses LangChain:

LangChain
from ldai import AIAgentConfig
from langchain.agents import create_agent
from langchain.messages import HumanMessage


def handle_agent_call_langchain(
    config: AIAgentConfig,
    user_input: str,
) -> str:
    model = config.model.get_parameter("name") if config.model else "anthropic:claude-sonnet-4-5"

    agent = create_agent(
        model=model,
        system_prompt=config.instructions,
    )

    response = agent.invoke({"messages": [HumanMessage(user_input)]})
    return response["messages"][-1].content

Finally, run the agent:

Python
response = handle_agent_call_langchain(
    config=config,
    user_input="Hello, what can you help me with?",
)
print(response)

For an equivalent setup with the OpenAI Agents SDK, Strands, or the Claude Agent SDK, read Full example code.

Step 6: Make a change without redeploying

One of the key benefits of AI Configs is that you can update your model or prompt at any time without redeploying your application.

To update without redeploying:

  1. Select the Variations tab for your agent-based AI Config.
  2. Open your variation and change the model, model provider, or instructions.
  3. Click Review and save.

LaunchDarkly immediately serves the updated configuration to your application. The next time your application calls the SDK, it receives the new model and instructions without needing to redeploy.
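One pattern that keeps your application in step with these updates is to evaluate the AI Config on every request rather than caching the result at startup. A sketch with hypothetical names, where `fetch_config` stands in for the `aiclient.agent_config(...)` call from Step 5:

```python
from typing import Callable


def handle_request(
    fetch_config: Callable[[], object],
    run_agent: Callable[[object, str], str],
    user_input: str,
) -> str:
    # Evaluating per request means a variation edited in the LaunchDarkly UI
    # is picked up on the very next call, with no restart or redeploy.
    config = fetch_config()
    return run_agent(config, user_input)
```

Evaluation is a local, in-memory operation in the server-side SDK, so calling it per request is cheap.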

Step 7: Monitor the agent-based AI Config

Select the Monitoring tab for your agent-based AI Config. When end users use your application, LaunchDarkly monitors AI Config performance. Metrics update approximately every minute.

To learn more, read Monitor AI Configs.

Summary

Congratulations! You now have an agent-based AI Config that can:

  • Load model and instructions from LaunchDarkly at runtime, so your app doesn’t hard-code either
  • Change in production without redeploying, by editing the variation in LaunchDarkly
  • Keep a full version history of every change to prompts and model settings
  • Report cost, token usage, and error rates on the Monitoring tab as real traffic flows through

From here, add a judge, run evals, log traces, and explore the managed AI SDKs.

Full example code

import ldclient
from ldclient.config import Config
from ldai.client import LDAIClient
from ldai import AIAgentConfig, AIAgentConfigDefault
from agents import Agent
from agents.run import Runner

ldclient.set_config(Config("YOUR_SDK_KEY"))
aiclient = LDAIClient(ldclient.get())
context = ldclient.Context.create("user-123")

config = aiclient.agent_config("your-agent-config-key", context, AIAgentConfigDefault())
tracker = config.tracker


async def handle_agent_call_openai(
    name: str,
    config: AIAgentConfig,
    user_input: str,
) -> str:
    model = config.model.get_parameter("name") if config.model else "gpt-5"
    root = Agent(
        name=name,
        instructions=config.instructions,
        handoffs=[],
        tools=[],
        model=model,
    )
    response = await Runner.run(root, user_input)
    return response.final_output

Next steps

Advanced topics