Getting started with Strands and AI Configs

Overview

This guide explains how to integrate Strands Agents with LaunchDarkly AI Configs. Using AI Configs with Strands lets you manage agent instructions, model configuration, and parameters outside of your application code.

This guide uses AI Configs’ agent mode. Agent mode uses a single instructions string, which maps directly to Strands’ system_prompt. To learn more, read Agents in AI Configs.

New to AI Configs?

If you’re a new user of AI Configs, start with the Quickstart and return to this guide when you are ready for a Strands-specific example.

To learn more about AI Configs-specific SDKs, read AI SDKs. For Python-specific details, read the Python AI SDK reference.

The Strands TypeScript SDK is in beta

The Strands TypeScript SDK is a pre-1.0 release candidate and only ships BedrockModel and OpenAIModel. It cannot run Anthropic-backed variations. If you want a single codebase that serves both OpenAI and Anthropic variations, use the Python SDK.

The Node.js example in this guide uses an OpenAI-backed default variation so it works without targeting rules.

Prerequisites

To complete this guide, you must have the following prerequisites:

  • A LaunchDarkly account, including:
    • A LaunchDarkly SDK key for your environment.
    • A member role that allows AI Config actions. The LaunchDarkly project admin, maintainer, and developer project roles, as well as the admin and owner base roles, include this ability. To learn more about LaunchDarkly roles, read Roles.
  • A Python 3.10+ or Node.js 20+ development environment.
  • Strands Agents installed in your application.
  • An API key for your chosen model provider.
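One common setup is to export credentials as environment variables before running the examples. The code later in this guide reads LAUNCHDARKLY_SDK_KEY; OPENAI_API_KEY and ANTHROPIC_API_KEY are the conventional variable names the OpenAI and Anthropic client libraries look for (placeholder values shown, substitute your own keys):

```shell
# Placeholder values; replace with your real keys or put them in a .env file.
export LAUNCHDARKLY_SDK_KEY="your-launchdarkly-sdk-key"
export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
```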

Concepts

Before you begin, review these key concepts.

Strands agents

Strands provides a minimal, provider-agnostic framework for building tool-using agents. The Agent class accepts a model, a system_prompt, a list of tools, and an optional conversation_manager. It exposes invoke_async to run a single turn. The SlidingWindowConversationManager keeps the last N messages in memory, so follow-up turns automatically reference earlier context without passing a thread or session ID.
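As an illustration of the sliding-window idea only (not Strands internals), a window of size N simply keeps the N most recent messages and drops the oldest:

```python
from collections import deque


def make_window(window_size: int) -> deque:
    """Illustrative sliding window: keeps only the last `window_size` messages."""
    return deque(maxlen=window_size)


window = make_window(3)
for turn in ["msg1", "msg2", "msg3", "msg4"]:
    window.append(turn)

# Only the three most recent messages remain in context.
print(list(window))  # → ['msg2', 'msg3', 'msg4']
```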

Agent mode AI Configs

Agent mode AI Configs use an instructions field instead of a messages array. This single instruction string serves as the system prompt for your agent. Agent mode is ideal for:

  • Multi-step agent workflows
  • Tool-using agents
  • Persistent agent sessions

The agent_config function

The agent_config function retrieves the AI Config variation for a given context. It returns an AIAgentConfig object that includes the customized instructions, model configuration, and a tracker property for recording metrics. Call this function each time you create an agent so LaunchDarkly can evaluate targeting and return the current configuration.
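For orientation, the returned object can be pictured with this hypothetical stand-in (field names taken from the rest of this guide; the real class comes from the ldai package and is not this dataclass):

```python
from dataclasses import dataclass


@dataclass
class FakeAgentConfig:
    """Hypothetical stand-in for the object agent_config returns, for illustration only."""
    enabled: bool
    instructions: str
    model: dict
    provider: dict
    tracker: object = None


config = FakeAgentConfig(
    enabled=True,
    instructions="You are a helpful order-status assistant.",
    model={"name": "gpt-5", "parameters": {"max_completion_tokens": 2000}},
    provider={"name": "openai"},
)

# Your application reads these fields instead of hardcoding model choices.
print(config.model["name"])  # → gpt-5
```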

Provider dispatch

Unlike LangChain, Strands does not currently have a first-party LaunchDarkly provider package. Each Strands model class is provider-specific and uses provider-specific names: AnthropicModel for Anthropic, OpenAIModel for OpenAI, and so on. To serve different providers from a single AI Config, dispatch on agent_config.provider.name and construct the matching Strands model class. This guide includes a create_strands_model helper that does this for you.
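The dispatch itself is a small mapping from provider name to model class. Sketched here with stub classes in place of the real Strands classes (the full helper appears in Step 4):

```python
# Stub classes standing in for Strands' provider-specific model classes.
class StubAnthropicModel: ...
class StubOpenAIModel: ...


MODEL_CLASSES = {
    "anthropic": StubAnthropicModel,
    "openai": StubOpenAIModel,
}


def dispatch(provider_name: str):
    """Pick the model class matching the AI Config's provider name, case-insensitively."""
    try:
        return MODEL_CLASSES[provider_name.lower()]
    except KeyError:
        raise ValueError(f"Unsupported provider: {provider_name!r}")


print(dispatch("OpenAI"))  # → <class '__main__.StubOpenAIModel'>
```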

Step 1: Install dependencies

Install the LaunchDarkly SDKs and Strands packages.

```shell
pip install launchdarkly-server-sdk-ai strands-agents strands-agents-tools anthropic openai python-dotenv
```

Step 2: Create an AI Config in LaunchDarkly

Create an AI Config in agent mode to store your agent configuration. This guide creates two variations, one backed by OpenAI and one backed by Anthropic, to show you how Strands dispatches to different providers from the same AI Config key.

To create an AI Config:

  1. In the left navigation, click Create and select AI Config.
  2. In the “Create AI Config” dialog, select Agent.
  3. Enter a name for your AI Config and set the key to strands-agent.
  4. Click Create. The new AI Config appears.

Then, create the first variation:

  1. On the AI Config’s Variations tab, replace “Untitled variation” with a variation name, such as “GPT-5 agent”.
  2. Click Select a model and choose the gpt-5 OpenAI model.
  3. Click Parameters and set max_completion_tokens to 2000.
  4. In the Instructions field, enter your agent’s system prompt:
     You are a helpful order-status assistant. Use the get_order_status tool to look up orders by their ID. Always explain your reasoning and summarize results clearly.
  5. Click Review and save.

Add a second variation:

  1. Click Add variation and name the new variation “Claude Sonnet agent”.
  2. Click Select a model and choose the claude-sonnet-4 Anthropic model.
  3. Click Parameters and set max_tokens to 2000.
  4. Use the same instructions as the first variation.
  5. Click Review and save.

A completed variation with model configuration and instructions.

Step 3: Set up targeting rules

Configure targeting rules to control which users receive which variation. Serve the “GPT-5 agent” variation as the default so the Node.js example runs without changes, and target specific users or segments to the “Claude Sonnet agent” variation.

To create the default rule:

  1. Select the Targeting tab for your AI Config.
  2. In the “Default rule” section, click Edit.
  3. Configure the default rule to serve the “GPT-5 agent” variation.
  4. Click Review and save.

The default targeting rule configured to serve a variation.

The AI Config is enabled by default. After you add the integration code to your application, LaunchDarkly serves the variation you configured to your users.

Step 4: Integrate Strands with AI Configs

With the AI Config and targeting in place, integrate Strands with the LaunchDarkly AI SDK so your application fetches the current model, instructions, and parameters on every request instead of reading hardcoded values. Because Strands does not currently have a first-party LaunchDarkly provider package, the integration involves mapping the AI Config payload to the matching Strands model class yourself.

Complete these steps in order, since each depends on the previous one.

The integration involves these key steps:

  1. Define the tools your agent can call using the Strands @tool decorator (Python) or tool() helper (Node.js).
  2. Build a provider dispatcher that maps agent_config.provider.name to the matching Strands model class.
  3. Initialize the LaunchDarkly base SDK client with your SDK key.
  4. Initialize the LaunchDarkly AI client from the base client.
  5. Get the agent config using agent_config() (Python) or aiClient.agentConfig() (Node.js).
  6. Build a Strands Agent with a SlidingWindowConversationManager for short-term memory.
  7. Invoke the agent and track metrics with the AI Config’s tracker.

The following example defines a get_order_status tool that looks up a customer order by its ID. The tool handler returns the order status text your agent will summarize in its reply. In Python, the @tool decorator reads the function’s type hints and docstring to generate the JSON schema Strands passes to the model. In Node.js, the tool() helper takes the name, description, and an explicit Zod input schema.

```python
from strands import tool


@tool
def get_order_status(order_id: str) -> str:
    """Look up the status of a customer order by order ID."""
    orders = {
        "ORD-123": "Shipped — arrives Thursday",
        "ORD-456": "Processing — estimated ship date: tomorrow",
        "ORD-789": "Delivered on Monday",
    }
    return orders.get(order_id, f"No order found with ID {order_id}")
```

Build a provider dispatcher. Strands model classes are provider-specific, so read agent_config.provider.name and construct the matching class. LaunchDarkly surfaces attached tools via a flat parameters.tools shape in the variation payload. Drop that key before passing parameters through, because Strands receives tools from the Agent constructor.

```python
from strands.models.anthropic import AnthropicModel
from strands.models.openai import OpenAIModel


def create_strands_model(agent_config):
    """Map an LDAIAgentConfig to the matching Strands model class by provider."""
    provider = (agent_config.provider.name if agent_config.provider else "").lower()
    model_id = agent_config.model.name
    params = dict(agent_config.model.to_dict().get("parameters") or {})
    # LaunchDarkly surfaces attached tools from `parameters.tools` in its own flat shape.
    # Drop the key here. Strands receives tools from the Agent constructor.
    params.pop("tools", None)

    if provider == "anthropic":
        # AnthropicModel requires max_tokens as a kwarg, not in params.
        max_tokens = int(params.pop("max_tokens", None) or params.pop("maxTokens", None) or 1024)
        return AnthropicModel(model_id=model_id, max_tokens=max_tokens, params=params or None)
    if provider == "openai":
        # Pass parameters through unchanged. GPT-5 wants `max_completion_tokens`,
        # GPT-4o wants `max_tokens`. Keep that choice in the AI Config variation.
        return OpenAIModel(model_id=model_id, params=params)
    raise ValueError(f"Unsupported provider for Strands: {provider!r}")
```
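The parameter normalization in the Anthropic branch can be checked in isolation. This sketch replays the same pops on a plain dict standing in for a variation's parameters payload:

```python
# Simulated variation parameters, shaped like the payload the helper receives.
params = {"max_tokens": 2000, "tools": [{"name": "get_order_status"}]}

# Tools are dropped; Strands receives them from the Agent constructor instead.
params.pop("tools", None)

# max_tokens is promoted to a kwarg, falling back to maxTokens, then 1024.
max_tokens = int(params.pop("max_tokens", None) or params.pop("maxTokens", None) or 1024)

print(max_tokens, params)  # → 2000 {}
```

Because the two pops remove both spellings, nothing token-related leaks through to the remaining params dict.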

Initialize the LaunchDarkly SDK and AI client, fetch the agent config, build the Strands model with create_strands_model (Python) or createStrandsModel (Node.js), and create the agent.

```python
import os

import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai.client import LDAIClient
from strands import Agent
from strands.agent.conversation_manager.sliding_window_conversation_manager import (
    SlidingWindowConversationManager,
)


ldclient.set_config(Config(os.environ.get("LAUNCHDARKLY_SDK_KEY")))

ai_client = LDAIClient(ldclient.get())

context = Context.builder("user-123").kind("user").name("Sandy").build()

# Pass a default for improved resiliency when the AI Config is unavailable
# or LaunchDarkly is unreachable. Omit it to disable the default.
# Example:
# from ldai.client import AIAgentConfigDefault
# default = AIAgentConfigDefault(
#     enabled=True,
#     model={"name": "gpt-5"},
#     provider={"name": "openai"},
#     instructions="You are a helpful assistant.",
# )
# agent_config = ai_client.agent_config("strands-agent", context, default)
agent_config = ai_client.agent_config("strands-agent", context)

model = create_strands_model(agent_config)

# SlidingWindowConversationManager gives the agent short-term memory across turns.
conversation_manager = SlidingWindowConversationManager(window_size=40)

agent = Agent(
    name="order-assistant",
    model=model,
    system_prompt=agent_config.instructions,
    tools=[get_order_status],
    conversation_manager=conversation_manager,
)

tracker = agent_config.tracker
```

Invoke the agent and track metrics. Strands returns an AgentResult whose metrics.accumulated_usage (Python) or metrics.accumulatedUsage (Node.js) aggregates token counts across every provider call in the turn, including any round trips to call tools. The Python example wraps agent.invoke_async with tracker.track_duration_of and records tokens and success manually. The Node.js example uses tracker.trackMetricsOf with a converter that returns the usage shape the tracker expects.

```python
from ldai.tracker import TokenUsage


def track_strands_metrics(tracker, result):
    """Record token usage from a Strands AgentResult on the LD tracker."""
    usage = getattr(result.metrics, "accumulated_usage", {}) or {}
    input_tokens = usage.get("inputTokens", 0)
    output_tokens = usage.get("outputTokens", 0)
    total = usage.get("totalTokens", 0) or (input_tokens + output_tokens)
    if total > 0:
        tracker.track_tokens(TokenUsage(input=input_tokens, output=output_tokens, total=total))


async def run_turn(agent, tracker, user_input):
    try:
        result = await tracker.track_duration_of(lambda: agent.invoke_async(user_input))
        tracker.track_success()
        track_strands_metrics(tracker, result)
        print(f"Agent: {result.message['content'][0]['text']}")
    except Exception as e:
        tracker.track_error()
        print(f"Error: {e}")


# Three turns on the same agent instance: the first fires the tool, the second
# reuses conversation memory for a follow-up that calls the tool again, and the
# third summarizes without calling any tool.
await run_turn(agent, tracker, "What's the status of order ORD-123?")
await run_turn(agent, tracker, "What about ORD-456?")
await run_turn(agent, tracker, "Summarize both orders for me.")
```
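You can exercise the metrics logic without a live provider by stubbing the result and tracker. This sketch reuses the same aggregation logic with plain stub classes (and a tuple in place of TokenUsage) to show the fallback when totalTokens is missing or zero:

```python
class StubMetrics:
    """Stub for Strands' metrics object; only accumulated_usage matters here."""
    accumulated_usage = {"inputTokens": 120, "outputTokens": 80, "totalTokens": 0}


class StubResult:
    metrics = StubMetrics()


class StubTracker:
    def __init__(self):
        self.tokens = None

    def track_tokens(self, usage):
        self.tokens = usage


def track_stub_metrics(tracker, result):
    """Same aggregation as track_strands_metrics, with a tuple instead of TokenUsage."""
    usage = getattr(result.metrics, "accumulated_usage", {}) or {}
    input_tokens = usage.get("inputTokens", 0)
    output_tokens = usage.get("outputTokens", 0)
    # Fall back to input + output when totalTokens is absent or zero.
    total = usage.get("totalTokens", 0) or (input_tokens + output_tokens)
    if total > 0:
        tracker.track_tokens((input_tokens, output_tokens, total))


tracker = StubTracker()
track_stub_metrics(tracker, StubResult())
print(tracker.tokens)  # → (120, 80, 200)
```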

The fallback argument to agent_config / agentConfig is optional. When omitted, LaunchDarkly returns a disabled config if the flag is off or the SDK is unreachable. Pass an explicit fallback to keep the agent running during outages.

Complete example

Here is a complete working example that combines all the steps.

```python
import asyncio
import os

import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai.client import LDAIClient
from ldai.tracker import TokenUsage
from strands import Agent, tool
from strands.models.anthropic import AnthropicModel
from strands.models.openai import OpenAIModel
from strands.agent.conversation_manager.sliding_window_conversation_manager import (
    SlidingWindowConversationManager,
)
from dotenv import load_dotenv

load_dotenv()

SDK_KEY = os.environ.get("LAUNCHDARKLY_SDK_KEY")
AGENT_CONFIG_KEY = "strands-agent"


@tool
def get_order_status(order_id: str) -> str:
    """Look up the status of a customer order by order ID."""
    orders = {
        "ORD-123": "Shipped — arrives Thursday",
        "ORD-456": "Processing — estimated ship date: tomorrow",
        "ORD-789": "Delivered on Monday",
    }
    return orders.get(order_id, f"No order found with ID {order_id}")


def create_strands_model(agent_config):
    """Map an LDAIAgentConfig to the matching Strands model class by provider."""
    provider = (agent_config.provider.name if agent_config.provider else "").lower()
    model_id = agent_config.model.name
    params = dict(agent_config.model.to_dict().get("parameters") or {})
    # LaunchDarkly surfaces attached tools from `parameters.tools` in its own flat shape.
    # Drop the key here. Strands receives tools from the Agent constructor.
    params.pop("tools", None)

    if provider == "anthropic":
        # AnthropicModel requires max_tokens as a kwarg, not in params.
        max_tokens = int(params.pop("max_tokens", None) or params.pop("maxTokens", None) or 1024)
        return AnthropicModel(model_id=model_id, max_tokens=max_tokens, params=params or None)
    if provider == "openai":
        # Pass parameters through unchanged. GPT-5 wants `max_completion_tokens`,
        # GPT-4o wants `max_tokens`. Keep that choice in the LaunchDarkly variation.
        return OpenAIModel(model_id=model_id, params=params)
    raise ValueError(f"Unsupported provider for Strands: {provider!r}")


def track_strands_metrics(tracker, result):
    """Record token usage from a Strands AgentResult on the LD tracker."""
    usage = getattr(result.metrics, "accumulated_usage", {}) or {}
    input_tokens = usage.get("inputTokens", 0)
    output_tokens = usage.get("outputTokens", 0)
    total = usage.get("totalTokens", 0) or (input_tokens + output_tokens)
    if total > 0:
        tracker.track_tokens(TokenUsage(input=input_tokens, output=output_tokens, total=total))


async def run_turn(agent, tracker, user_input):
    try:
        result = await tracker.track_duration_of(lambda: agent.invoke_async(user_input))
        tracker.track_success()
        track_strands_metrics(tracker, result)
        print(f"Agent: {result.message['content'][0]['text']}")
    except Exception as e:
        tracker.track_error()
        print(f"Error: {e}")


async def async_main():
    ldclient.set_config(Config(SDK_KEY))
    if not ldclient.get().is_initialized():
        print("LaunchDarkly SDK failed to initialize")
        return

    ai_client = LDAIClient(ldclient.get())

    context = Context.builder("user-123").kind("user").name("Sandy").build()

    # Pass a default for improved resiliency when the AI Config is unavailable
    # or LaunchDarkly is unreachable. Omit it to disable the default.
    # Example:
    # from ldai.client import AIAgentConfigDefault
    # default = AIAgentConfigDefault(
    #     enabled=True,
    #     model={"name": "gpt-5"},
    #     provider={"name": "openai"},
    #     instructions="You are a helpful assistant.",
    # )
    # agent_config = ai_client.agent_config(AGENT_CONFIG_KEY, context, default)
    agent_config = ai_client.agent_config(AGENT_CONFIG_KEY, context)

    if not agent_config.enabled:
        print("AI Config is disabled. Enable it in LaunchDarkly before running.")
        return

    model = create_strands_model(agent_config)

    # SlidingWindowConversationManager gives the agent short-term memory across turns.
    conversation_manager = SlidingWindowConversationManager(window_size=40)

    agent = Agent(
        name="order-assistant",
        model=model,
        system_prompt=agent_config.instructions,
        tools=[get_order_status],
        conversation_manager=conversation_manager,
    )

    tracker = agent_config.tracker

    await run_turn(agent, tracker, "What's the status of order ORD-123?")
    await run_turn(agent, tracker, "What about ORD-456?")
    await run_turn(agent, tracker, "Summarize both orders for me.")

    # Always flush events before closing. Otherwise, trailing events risk being
    # lost, in both short-lived scripts and long-running services.
    ldclient.get().flush()
    ldclient.get().close()


def main():
    asyncio.run(async_main())


if __name__ == "__main__":
    main()
```

Step 5: Monitor results

View metrics for your AI Config in the LaunchDarkly UI.

To monitor results, navigate to your AI Config and click the Monitoring tab.

LaunchDarkly displays metrics including:

  • Generation count
  • Token usage (input, output, total)
  • Time to generate
  • Error rate

Use these metrics to compare agent performance across the OpenAI and Anthropic variations, identify cost differences, and make data-driven decisions about which configuration to use for different user segments. To learn more, read Monitor AI Configs.

To view aggregated metrics across all your AI Configs, navigate to Insights in the left navigation under the AI section. The Insights overview page displays cost, latency, error rate, invocation counts, and model distribution across your organization. To learn more, read about AI insights.

The Insights overview page showing cost, latency, error rate, and invocation metrics for a Strands AI Config.

Comparing agent mode and completion mode

| Aspect | Agent mode | Completion mode |
| --- | --- | --- |
| Config field | instructions (string) | messages (array) |
| SDK method | agent_config() | completion_config() |
| Default class | AIAgentConfigDefault | AICompletionConfigDefault |
| Use case | Multi-step workflows, tool use | Single-turn completions |

Conclusion

In this guide, you learned how to integrate Strands Agents with LaunchDarkly AI Configs to manage agent configuration outside of your application code.

You can now:

  • Change agent models and instructions without redeploying your application
  • Swap between Anthropic and OpenAI-backed variations from a single AI Config key
  • Target different agent configurations to different users based on context attributes
  • Track and compare agent performance across variations
  • Maintain multi-turn conversation memory with SlidingWindowConversationManager
  • Govern tools centrally in LaunchDarkly and attach them to variations

To explore additional capabilities and more AI Configs examples, read the other AI Configs guides in this section.

Want to know more? Start a trial.
Your 14-day trial begins as soon as you sign up. Get started in minutes using the in-app Quickstart. You'll discover how easy it is to release, monitor, and optimize your software.
