Getting started with Amazon Bedrock and AI Configs

Overview

This guide shows how to connect an Amazon Bedrock-powered application to LaunchDarkly AI Configs. Amazon Bedrock provides a single API for multiple foundation models with enterprise-grade AWS integration. By the end, you will be able to manage your model configuration and prompts outside of your application code, and track metrics automatically.

AI Configs support two modes:

  • Completion mode returns messages and roles (system, user, assistant). Use it for chat-style interactions and message-oriented workflows. Completion mode supports online evaluations with judges attached in the LaunchDarkly UI.
  • Agent mode returns a single instructions string. Use it when your runtime or framework expects a goal/instructions input for a structured workflow. Agent mode changes the configuration shape from messages to instructions. Your application maps these instructions into your provider or framework’s native input.

Both modes support tool calling. This guide walks through completion mode as the main path, with an optional agent config section. To learn more about when to use each mode, read When to use prompt-based vs agent mode.
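As a rough illustration of the difference, the two modes hand your application differently shaped configuration. The dictionaries below are simplified illustrative shapes, not the SDK's actual return types:

```python
# Illustrative only: simplified shapes, not the SDK's actual return types.
completion_variation = {
    "model": {"name": "us.anthropic.claude-sonnet-4-20250514-v1:0"},
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
    ],
}

agent_variation = {
    "model": {"name": "us.anthropic.claude-sonnet-4-20250514-v1:0"},
    "instructions": "You are an order status assistant.",
}

# Completion mode hands you role-tagged messages; agent mode hands you a
# single instructions string that you map into your framework's native input.
print(sorted(completion_variation))  # → ['messages', 'model']
print(sorted(agent_variation))       # → ['instructions', 'model']
```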

This guide provides examples in both Python and Node.js (TypeScript).

Additional resources for AI Configs

If you are not familiar with AI Configs, start with the Quickstart for AI Configs and return to this guide when you are ready for a more detailed example.

You can find reference guides for each of the AI SDKs at AI SDKs.

Prerequisites

To complete this guide, you need the following:

  • A LaunchDarkly account.
  • An AWS account with Amazon Bedrock access enabled and model access granted for the models you want to use. To learn more, read Model access in the AWS documentation.
  • AWS credentials with permission to call the Amazon Bedrock Converse API (the bedrock:InvokeModel IAM action).
  • A development environment:
    • Python: Python 3.10 or higher
    • Node.js: Node.js 20 or higher
  • Familiarity with LaunchDarkly contexts. To learn more, read Contexts and segments.

Concepts

Before you begin, review these key concepts.

AI Configs

An AI Config is a LaunchDarkly resource that controls how your application uses large language models. Each AI Config contains one or more variations. Each variation specifies:

  • A model configuration, including the model name and parameters
  • Messages that define the prompt

You can update these settings in LaunchDarkly at any time without changing your application code.

Contexts

A context represents the end user interacting with your application. LaunchDarkly uses context attributes to:

  • Determine which variation to serve based on targeting rules
  • Populate {{ ldctx.* }} placeholders in your prompts with context attribute values

Other placeholders, such as {{ topic }}, are populated from the variables argument you pass at runtime.
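The SDK performs this interpolation for you, but as a mental model it behaves like a simple template substitution. The `render` helper below is an illustrative stand-in, not SDK code:

```python
import re


def render(template: str, variables: dict) -> str:
    """Illustrative stand-in for the SDK's {{ }} placeholder interpolation."""
    pattern = re.compile(r"\{\{\s*([\w.]+)\s*\}\}")
    # Unknown placeholders are left intact rather than replaced with nothing.
    return pattern.sub(lambda m: str(variables.get(m.group(1), m.group(0))), template)


message = "You are a helpful assistant. Answer questions about {{ topic }}."
print(render(message, {"topic": "Python"}))
# → You are a helpful assistant. Answer questions about Python.
```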

The tracker

When you retrieve an AI Config, the SDK returns a config object with a tracker property. The tracker records metrics from your Bedrock calls, including:

  • Generation count
  • Input and output tokens
  • Latency
  • Success and error rates

These metrics appear on the AI Insights dashboard in LaunchDarkly.

Step 1: Install the SDK

Install the LaunchDarkly AI SDK and the AWS Bedrock SDK in your application. AI Configs are supported by LaunchDarkly server-side SDKs only. The Node.js examples in this guide use the server-side Node.js AI SDK.

Here is how to install the required packages:

pip install "launchdarkly-server-sdk-ai>=0.17.0"
pip install "boto3>=1.34.0"
pip install "python-dotenv>=1.0.0"

Create a .env file in your project root to store your credentials:

# .env
LAUNCHDARKLY_SDK_KEY_BEDROCK=<your-launchdarkly-sdk-key>
AWS_REGION=<your-aws-region>
AWS_ACCESS_KEY_ID=<your-aws-access-key-id>
AWS_SECRET_ACCESS_KEY=<your-aws-secret-access-key>

Add .env to your .gitignore to keep credentials out of version control.

Step 2: Initialize the clients

Initialize both the LaunchDarkly client and the Amazon Bedrock runtime client. Store your credentials in environment variables.

Here is the initialization code:

import os
import boto3
import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai.client import LDAIClient, AICompletionConfigDefault, AIAgentConfigDefault
from dotenv import load_dotenv

load_dotenv()

SDK_KEY = os.environ.get("LAUNCHDARKLY_SDK_KEY_BEDROCK")
CONFIG_KEY = "bedrock-assistant"
AGENT_CONFIG_KEY = "bedrock-agent"

bedrock_client = boto3.client(
    'bedrock-runtime',
    region_name=os.environ.get('AWS_REGION', 'us-west-2'),
)

ldclient.set_config(Config(SDK_KEY))
if not ldclient.get().is_initialized():
    # The SDK could not connect; check your SDK key and network access
    exit(1)

ai_client = LDAIClient(ldclient.get())

Step 3: Create an AI Config in LaunchDarkly

Create an AI Config in the LaunchDarkly UI to store your Bedrock model settings and prompts.

Using the MCP server or agent skills

If you have the LaunchDarkly MCP server or agent skills configured, prompt your coding assistant to create the AI Config for you. For example:

“Create a completion mode AI Config called ‘Bedrock assistant’ with a ‘Claude Sonnet 4’ variation using the us.anthropic.claude-sonnet-4-20250514-v1:0 Bedrock model, temperature 0.7, max_tokens 1024, and the system message: ‘You are a helpful assistant. Answer questions about {{topic}}.’ Enable targeting.”

To create the AI Config:

  1. In the left navigation, click Create and select AI Config.
  2. In the Create AI Config dialog, select Completion.
  3. Enter a name, such as “Bedrock assistant”.
  4. Click Create.

To create a variation:

  1. On the Variations tab, replace “Untitled variation” with a name, such as “Claude Sonnet 4”.
  2. Click Select a model and choose an Amazon Bedrock model such as us.anthropic.claude-sonnet-4-20250514-v1:0.
  3. Click Parameters and set temperature to 0.7 and max_tokens to 1024.
  4. Add a system message to define your assistant’s behavior:
System message
You are a helpful assistant. Answer questions about {{topic}}.
  5. Click Review and save.

A completed variation with model configuration and system message.

To enable targeting:

  1. Select the Targeting tab.
  2. In the “Default rule” section, click Edit.
  3. Set the default rule to serve your variation.
  4. Click Review and save.

The default targeting rule configured to serve a variation.

Step 4: Get the AI Config in your application

Retrieve the AI Config from LaunchDarkly by calling the completion config function. Pass a context that represents the current user. You can also pass an optional fallback configuration that the SDK uses when LaunchDarkly is unreachable.

Here is how to get the AI Config:

# Define the context for the current user
context = Context.builder("user-123") \
    .kind("user") \
    .name("Sandy") \
    .build()

# Pass a default for improved resiliency when the AI Config is unavailable
# or LaunchDarkly is unreachable; omit for a disabled default.
# Example:
#   from ldai.client import AICompletionConfigDefault
#   default = AICompletionConfigDefault(
#       enabled=True,
#       model={"name": "us.anthropic.claude-sonnet-4-20250514-v1:0"},
#       provider={"name": "bedrock"},
#       messages=[{"role": "system", "content": "You are a helpful assistant."}],
#   )
#   config = ai_client.completion_config(CONFIG_KEY, context, default, variables={"topic": "Python"})

# Get the AI Config
config = ai_client.completion_config(CONFIG_KEY, context, variables={"topic": "Python"})
tracker = config.tracker

The SDK uses the fallback configuration when LaunchDarkly is unreachable. Check the enabled property and handle the disabled case in your application.
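One way to structure that check is to branch on `enabled` before ever touching Bedrock. The `generate` helper and the fallback string below are illustrative, not part of the SDK:

```python
from types import SimpleNamespace


def generate(config, call_bedrock):
    """Call the model only when the config is enabled; otherwise fall back."""
    if not config.enabled:
        # Illustrative fallback: serve a static reply, or route to another code path.
        return "Sorry, the assistant is unavailable right now."
    return call_bedrock(config)


# Stand-in config objects for demonstration:
enabled = SimpleNamespace(enabled=True)
disabled = SimpleNamespace(enabled=False)

print(generate(disabled, lambda c: "model answer"))
# → Sorry, the assistant is unavailable right now.
print(generate(enabled, lambda c: "model answer"))
# → model answer
```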

Best practices

For production use:

  • Retrieve the AI Config each time you generate content so LaunchDarkly can evaluate the latest targeting rules and prompt changes.
  • Provide a fallback configuration when possible so your application can fail gracefully if LaunchDarkly is unavailable.
  • Avoid sending personally identifiable information in contexts unless you have a specific need and an approved handling pattern. To learn more, read Privacy in AI Configs.

Step 5: Call Bedrock and track metrics

The Amazon Bedrock Converse API expects a specific format. System messages go in a top-level system list, user and assistant messages go in the messages array with nested content blocks, and model parameters go in inferenceConfig. The tracker’s track_bedrock_converse_metrics helper records latency, success, errors, and token usage directly from the Converse response.

Define a helper to filter AI Config parameters to Bedrock’s inferenceConfig shape. Bedrock’s Converse API accepts maxTokens, temperature, topP, and stopSequences. Rename LaunchDarkly’s max_tokens key to maxTokens; the rest already match.

def inference_config_from_params(params):
    """Map AI Config parameters to Bedrock's Converse inferenceConfig shape.

    Rename max_tokens to maxTokens; keep temperature, topP, stopSequences as-is.
    """
    mapping = {"max_tokens": "maxTokens"}
    allowed = {"maxTokens", "temperature", "topP", "stopSequences"}
    renamed = {mapping.get(k, k): v for k, v in (params or {}).items()}
    return {k: v for k, v in renamed.items() if k in allowed}
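For example, the helper renames max_tokens and drops keys the Converse API does not accept. The definition is repeated here so the snippet runs on its own:

```python
def inference_config_from_params(params):
    # Repeated from above so this example is self-contained.
    mapping = {"max_tokens": "maxTokens"}
    allowed = {"maxTokens", "temperature", "topP", "stopSequences"}
    renamed = {mapping.get(k, k): v for k, v in (params or {}).items()}
    return {k: v for k, v in renamed.items() if k in allowed}


print(inference_config_from_params({"max_tokens": 1024, "temperature": 0.7, "top_k": 40}))
# → {'maxTokens': 1024, 'temperature': 0.7} (top_k is not an inferenceConfig key)
```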

Then call Bedrock with the config and pass the response to the tracker:

if config.enabled:
    tracker = config.tracker
    messages = config.messages or []

    # Bedrock Converse: system messages are a top-level list, not a role
    system_messages = [{'text': m.content} for m in messages if m.role == 'system']
    conversation = [
        {'role': m.role, 'content': [{'text': m.content}]}
        for m in messages if m.role in ('user', 'assistant')
    ]
    conversation.append({
        'role': 'user',
        'content': [{'text': 'How do I read a file in Python?'}]
    })

    converse_params = {
        'modelId': config.model.name,
        'messages': conversation,
    }
    if system_messages:
        converse_params['system'] = system_messages
    ld_params = (config.model.to_dict().get("parameters") if config.model else None) or {}
    inference = inference_config_from_params(ld_params)
    if inference:
        converse_params['inferenceConfig'] = inference

    response = tracker.track_bedrock_converse_metrics(
        bedrock_client.converse(**converse_params)
    )
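The snippet above stores the Converse result in response but never reads the reply. The assistant's text lives in nested content blocks; a minimal extraction looks like this (the sample response is a hand-built stand-in for what Bedrock returns):

```python
# Hand-built stand-in for a Bedrock Converse response.
response = {
    "output": {
        "message": {
            "role": "assistant",
            "content": [{"text": "Use open() with a context manager."}],
        }
    },
    "stopReason": "end_turn",
}

# Join the text blocks in the assistant message; non-text blocks are skipped.
blocks = response["output"]["message"]["content"]
answer = "".join(block["text"] for block in blocks if "text" in block)
print(answer)
# → Use open() with a context manager.
```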

Step 6 (optional): Use agent mode with tool calling

Agent-mode AI Configs return a single instructions string instead of a message list, and they let you attach reusable tools from the LaunchDarkly tools library. With Bedrock’s Converse API, the instructions map to a system content block and tools pass through on the toolConfig parameter.

Create the tool in the tools library

First, define the tool in LaunchDarkly so the AI Config variation can reference it:

  1. In the left navigation, click Library, then select the Tools tab.
  2. Click Add tool.
  3. Enter get_order_status as the Key.
  4. Enter “Look up the status of a customer order by order ID” as the Description.
  5. Define the schema using the JSON editor:
{
  "type": "object",
  "properties": {
    "order_id": {
      "type": "string",
      "description": "The order ID to look up"
    }
  },
  "required": ["order_id"]
}
  6. Click Save.

The Create tool dialog.

Create the agent AI Config

  1. Click Create and select AI Config.
  2. Select Agent mode.

The Create AI Config dialog with Agent mode selected.

  3. Enter a name, such as “Bedrock agent”.
  4. Click Create.
  5. On the Variations tab, name the variation (for example, “Claude Sonnet 4 agent”).
  6. Click Select a model and choose an Amazon Bedrock model such as us.anthropic.claude-sonnet-4-20250514-v1:0.
  7. Click Parameters and set max_tokens to 1024.
  8. Add the agent instructions:
Agent instructions
You are an order status assistant. Use the get_order_status tool to look up customer orders by ID. Only call the tool when the user asks about an order.
  9. Click + Attach tools and select get_order_status.

A variation editor with an attached tool.

  10. Click Review and save.
  11. On the Targeting tab, set the default rule to serve your variation and save.

To learn more about managing tools, read Tools in AI Configs.

Retrieve the agent config and run the tool loop

Use agent_config() instead of completion_config(). The SDK returns attached tools under parameters.tools in OpenAI’s type=function shape, so convert them to Bedrock Converse’s {toolSpec: {name, description, inputSchema: {json}}} shape and pass them on the toolConfig parameter. Tool handler functions stay in your application code: LaunchDarkly stores the schema, your application owns the behavior.

agent = ai_client.agent_config(
    AGENT_CONFIG_KEY,
    context,
    AIAgentConfigDefault(enabled=False),
)

if agent.enabled:
    tracker = agent.tracker
    ld_params = (agent.model.to_dict().get("parameters") if agent.model else None) or {}
    inference = inference_config_from_params(ld_params)

    # LD returns attached tools under parameters.tools in OpenAI type=function shape.
    # Bedrock Converse wants {toolSpec: {name, description, inputSchema: {json}}}.
    ld_tools = ld_params.get("tools", []) or []
    tool_config = {
        "tools": [
            {
                "toolSpec": {
                    "name": t["name"],
                    "description": t.get("description", ""),
                    "inputSchema": {"json": t.get("parameters", {"type": "object", "properties": {}})},
                }
            }
            for t in ld_tools
        ]
    }

    # Handlers stay in application code — LD governs the schema, the app owns execution.
    def get_order_status(order_id: str) -> str:
        orders = {
            "ORD-123": "Shipped — arrives Thursday",
            "ORD-456": "Processing — estimated ship date: tomorrow",
            "ORD-789": "Delivered on Monday",
        }
        return orders.get(order_id, f"No order found with ID {order_id}")

    tool_handlers = {"get_order_status": get_order_status}

    conversation = [{"role": "user", "content": [{"text": "What's the status of order ORD-123?"}]}]

    # Agent loop: call Bedrock, handle toolUse blocks, repeat
    MAX_STEPS = 5
    for _ in range(MAX_STEPS):
        converse_params = {
            "modelId": agent.model.name,
            "messages": conversation,
            "system": [{"text": agent.instructions}],
            "toolConfig": tool_config,
        }
        if inference:
            converse_params["inferenceConfig"] = inference

        response = tracker.track_bedrock_converse_metrics(
            bedrock_client.converse(**converse_params)
        )

        output_message = response.get("output", {}).get("message", {})
        stop_reason = response.get("stopReason")

        if stop_reason != "tool_use":
            break

        # Append assistant turn (preserves toolUse blocks)
        conversation.append(output_message)

        # Build a single tool-result user turn covering every toolUse block
        tool_results = []
        for block in output_message.get("content", []):
            if "toolUse" not in block:
                continue
            tool_use = block["toolUse"]
            if tool_use["name"] not in tool_handlers:
                raise ValueError(f"Unknown tool: {tool_use['name']}")
            result = tool_handlers[tool_use["name"]](**tool_use["input"])
            tracker.track_tool_call(tool_use["name"])
            tool_results.append({
                "toolResult": {
                    "toolUseId": tool_use["toolUseId"],
                    "content": [{"text": result}],
                }
            })
        conversation.append({"role": "user", "content": tool_results})

In completion mode, you can attach judges to variations in the LaunchDarkly UI for automatic evaluation. In agent mode, invoke judges programmatically through the AI SDK.

To learn more, read When to use prompt-based vs agent mode and Agents in AI Configs.

Step 7: Monitor your AI Config

Use the LaunchDarkly UI to monitor how your applications are performing across all AI Configs and for individual configs.

To view aggregated metrics across all your AI Configs, navigate to Insights in the left navigation under the AI section. The Insights overview page displays cost, latency, error rate, invocation counts, and model distribution across your organization. To learn more, read AI Insights.

To view metrics for a specific AI Config:

  1. Navigate to your AI Config.
  2. Select the Monitoring tab.

The dashboard displays the following metrics:

  • Generation count: The total number of AI generation calls tracked for this config.
  • Input and output tokens: Token consumption broken down by prompt tokens sent and completion tokens received.
  • Latency: The time taken for each generation call, shown as percentiles (p50, p95).
  • Success and error rates: The proportion of successful versus failed generation calls.

Metrics update approximately every minute. Use these metrics to compare variations and optimize your prompts. To learn more, read Monitor AI Configs.

The Insights overview page showing cost, latency, error rate, and invocation metrics for a Bedrock AI Config.

Observability

The AI SDKs emit OpenTelemetry-compatible spans for each generation call. You can forward these spans to your existing observability stack for deeper analysis. To learn more, read Observability and LLM observability.

Step 8: Close the client

Close the LaunchDarkly client when your application shuts down to flush pending events.

Here is how to close the client:

ldclient.get().flush()
ldclient.get().close()

Calling flush() before close() is especially important for short-lived applications such as scripts, because it ensures pending analytics events are delivered before the process exits.

Complete example

Here is a complete working example that combines all the steps.

import os
import boto3
import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai.client import LDAIClient, AICompletionConfigDefault, AIAgentConfigDefault
from dotenv import load_dotenv

load_dotenv()

SDK_KEY = os.environ.get("LAUNCHDARKLY_SDK_KEY_BEDROCK")

CONFIG_KEY = "bedrock-assistant"
AGENT_CONFIG_KEY = "bedrock-agent"


def inference_config_from_params(params):
    """Map AI Config parameters to Bedrock's Converse inferenceConfig shape.

    Rename max_tokens to maxTokens; keep temperature, topP, stopSequences as-is.
    """
    mapping = {"max_tokens": "maxTokens"}
    allowed = {"maxTokens", "temperature", "topP", "stopSequences"}
    renamed = {mapping.get(k, k): v for k, v in (params or {}).items()}
    return {k: v for k, v in renamed.items() if k in allowed}


def main():
    ldclient.set_config(Config(SDK_KEY))
    if not ldclient.get().is_initialized():
        return

    ai_client = LDAIClient(ldclient.get())

    bedrock_client = boto3.client(
        'bedrock-runtime',
        region_name=os.environ.get('AWS_REGION', 'us-west-2'),
    )

    context = Context.builder("user-123").kind("user").name("Sandy").build()

    # ===================
    # COMPLETION MODE
    # ===================
    # Pass a default for improved resiliency when the AI Config is unavailable
    # or LaunchDarkly is unreachable; omit for a disabled default.
    # Example:
    #   default = AICompletionConfigDefault(
    #       enabled=True,
    #       model={"name": "us.anthropic.claude-sonnet-4-20250514-v1:0"},
    #       provider={"name": "bedrock"},
    #       messages=[{"role": "system", "content": "You are a helpful assistant."}],
    #   )
    #   config = ai_client.completion_config(CONFIG_KEY, context, default, variables={"topic": "Python"})
    config = ai_client.completion_config(CONFIG_KEY, context, variables={"topic": "Python"})

    if config.enabled:
        tracker = config.tracker
        messages = config.messages or []

        # Bedrock Converse: system messages are a top-level list, not a role
        system_messages = [{'text': m.content} for m in messages if m.role == 'system']
        conversation = [
            {'role': m.role, 'content': [{'text': m.content}]}
            for m in messages if m.role in ('user', 'assistant')
        ]
        conversation.append({
            'role': 'user',
            'content': [{'text': 'How do I read a file in Python?'}]
        })

        converse_params = {
            'modelId': config.model.name,
            'messages': conversation,
        }
        if system_messages:
            converse_params['system'] = system_messages
        ld_params = (config.model.to_dict().get("parameters") if config.model else None) or {}
        inference = inference_config_from_params(ld_params)
        if inference:
            converse_params['inferenceConfig'] = inference

        tracker.track_bedrock_converse_metrics(
            bedrock_client.converse(**converse_params)
        )

    # ===================
    # AGENT MODE
    # ===================
    agent = ai_client.agent_config(
        AGENT_CONFIG_KEY,
        context,
        AIAgentConfigDefault(enabled=False),
    )

    if agent.enabled:
        tracker = agent.tracker
        ld_params = (agent.model.to_dict().get("parameters") if agent.model else None) or {}
        inference = inference_config_from_params(ld_params)

        # LD returns attached tools under parameters.tools in OpenAI type=function shape.
        # Bedrock Converse wants {toolSpec: {name, description, inputSchema: {json}}}.
        ld_tools = ld_params.get("tools", []) or []
        tool_config = {
            "tools": [
                {
                    "toolSpec": {
                        "name": t["name"],
                        "description": t.get("description", ""),
                        "inputSchema": {"json": t.get("parameters", {"type": "object", "properties": {}})},
                    }
                }
                for t in ld_tools
            ]
        }

        # Handlers stay in application code — LD governs the schema, the app owns execution.
        def get_order_status(order_id: str) -> str:
            orders = {
                "ORD-123": "Shipped — arrives Thursday",
                "ORD-456": "Processing — estimated ship date: tomorrow",
                "ORD-789": "Delivered on Monday",
            }
            return orders.get(order_id, f"No order found with ID {order_id}")

        tool_handlers = {"get_order_status": get_order_status}

        conversation = [{"role": "user", "content": [{"text": "What's the status of order ORD-123?"}]}]

        # Agent loop: call Bedrock, handle toolUse blocks, repeat
        MAX_STEPS = 5
        for _ in range(MAX_STEPS):
            converse_params = {
                "modelId": agent.model.name,
                "messages": conversation,
                "system": [{"text": agent.instructions}],
                "toolConfig": tool_config,
            }
            if inference:
                converse_params["inferenceConfig"] = inference

            response = tracker.track_bedrock_converse_metrics(
                bedrock_client.converse(**converse_params)
            )

            output_message = response.get("output", {}).get("message", {})
            stop_reason = response.get("stopReason")

            if stop_reason != "tool_use":
                break

            # Append assistant turn (preserves toolUse blocks)
            conversation.append(output_message)

            # Build a single tool-result user turn covering every toolUse block
            tool_results = []
            for block in output_message.get("content", []):
                if "toolUse" not in block:
                    continue
                tool_use = block["toolUse"]
                if tool_use["name"] not in tool_handlers:
                    raise ValueError(f"Unknown tool: {tool_use['name']}")
                result = tool_handlers[tool_use["name"]](**tool_use["input"])
                tracker.track_tool_call(tool_use["name"])
                tool_results.append({
                    "toolResult": {
                        "toolUseId": tool_use["toolUseId"],
                        "content": [{"text": result}],
                    }
                })
            conversation.append({"role": "user", "content": tool_results})

    ldclient.get().flush()
    ldclient.get().close()


if __name__ == "__main__":
    main()

What to explore next

After you have the basic integration working, you can extend it with:

  • Tools for calling external functions from your workflows
  • Online evaluations to score response quality automatically
  • Experiments to compare AI Config variations statistically
  • Agents for multi-step workflows

For more AI Configs guides, read the other guides in the AI Configs guides section.

Troubleshooting

If you are experiencing problems with your configuration, this section lists common errors and solutions.

Metrics not appearing

If metrics do not appear on the AI Insights dashboard:

  • Verify that you are calling the tracker methods (track_bedrock_converse_metrics in Python, trackBedrockConverseMetrics in Node.js).
  • Ensure you call flush() before closing the client, especially for short-lived scripts.
  • Wait at least one minute for metrics to process.

SDK initialization failures

If the LaunchDarkly SDK fails to initialize:

  • Verify your SDK key is correct and matches the environment you are targeting.
  • Check that your network can reach LaunchDarkly servers.
  • Review the SDK logs for specific error messages.

Config returns fallback value

If you always receive the fallback configuration:

  • Verify targeting is enabled for your AI Config.
  • Check that the AI Config key in your code matches the key in LaunchDarkly.
  • Ensure your context matches the targeting rules.

Amazon Bedrock errors

If you receive Amazon Bedrock API errors:

  • Verify your AWS credentials are set correctly and grant the bedrock:InvokeModel permission required by the Converse API.
  • Confirm model access is granted for the model ID referenced in your AI Config.
  • Ensure the model ID in your AI Config matches an available Amazon Bedrock model in your region.

Conclusion

In this guide, you connected an Amazon Bedrock-powered application to LaunchDarkly AI Configs. You can now:

  • Manage prompts and model settings in LaunchDarkly without code changes
  • Track token usage, latency, and success rates automatically
  • Use template variables to customize prompts per user

Want to know more? Start a trial.

Your 14-day trial begins as soon as you sign up. Get started in minutes using the in-app Quickstart. You'll discover how easy it is to release, monitor, and optimize your software.