Getting started with Anthropic Claude and AI Configs

Overview

This guide shows how to connect an Anthropic Claude-powered application to LaunchDarkly AI Configs. Claude is well-suited for complex reasoning, code generation, and long-context tasks. By the end, you will be able to manage your model configuration and prompts outside of your application code, and track metrics automatically.

AI Configs support two modes:

  • Completion mode returns messages and roles (system, user, assistant). Use it for chat-style interactions and message-oriented workflows. Completion mode supports online evaluations with judges attached in the LaunchDarkly UI.
  • Agent mode returns a single instructions string. Use it when your runtime or framework expects a goal/instructions input for a structured workflow. Agent mode changes the configuration shape from messages to instructions. Your application maps these instructions into your provider or framework’s native input.

Both modes support tool calling. This guide walks through completion mode as the main path, with an optional agent config section. To learn more about when to use each mode, read When to use prompt-based vs agent mode.
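
As a rough sketch, the two modes differ in the shape of the configuration they return. The plain dicts below are for illustration only; the SDK returns typed config objects, not these literals:

```python
# Completion mode: the variation carries a list of role-tagged messages
completion_shape = {
    "model": {"name": "claude-sonnet-4-20250514", "parameters": {"max_tokens": 1024}},
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
    ],
}

# Agent mode: the variation carries a single instructions string instead
agent_shape = {
    "model": {"name": "claude-sonnet-4-20250514", "parameters": {"max_tokens": 1024}},
    "instructions": "You are an order status assistant.",
}
```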

This guide provides examples in both Python and Node.js (TypeScript).

Additional resources for AI Configs

If you are not familiar with AI Configs, start with the Quickstart for AI Configs and return to this guide when you are ready for a more detailed example.

You can find reference guides for each of the AI SDKs at AI SDKs.

Prerequisites

To complete this guide, you need the following:

  • A LaunchDarkly account with access to AI Configs, and the SDK key for your environment
  • An Anthropic API key
  • A Python or Node.js application to integrate with

Concepts

Before you begin, review these key concepts.

AI Configs

An AI Config is a LaunchDarkly resource that controls how your application uses large language models. Each AI Config contains one or more variations. Each variation specifies:

  • A model configuration, including the model name and parameters
  • Messages that define the prompt

You can update these settings in LaunchDarkly at any time without changing your application code.

Contexts

A context represents the end user interacting with your application. LaunchDarkly uses context attributes to:

  • Determine which variation to serve based on targeting rules
  • Populate template variables in your prompts
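
For example, if a prompt contains {{topic}}, the value comes from the variables you pass when retrieving the config. The snippet below is a minimal illustration of the Mustache-style substitution; the SDK performs this for you, so `render` here is only a stand-in:

```python
import re

def render(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with values, leaving unknown names intact."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

print(render("Answer questions about {{topic}}.", {"topic": "Python"}))
# → Answer questions about Python.
```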

The tracker

When you retrieve an AI Config, the SDK returns a config object with a tracker property. The tracker records metrics from your Anthropic calls, including:

  • Generation count
  • Input and output tokens
  • Latency
  • Success and error rates

These metrics appear on the AI Insights dashboard in LaunchDarkly.

Step 1: Install the SDK

Install the LaunchDarkly AI SDK and the Anthropic SDK in your application. AI Configs are supported by LaunchDarkly server-side SDKs only. The Node.js examples in this guide use the server-side Node.js AI SDK.

Here is how to install the required packages:

pip install "launchdarkly-server-sdk-ai>=0.17.0"
pip install "anthropic>=0.39.0"
pip install "python-dotenv>=1.0.0"

Create a .env file in your project root to store your API keys:

# .env
LAUNCHDARKLY_SDK_KEY=<your-launchdarkly-sdk-key>
ANTHROPIC_API_KEY=<your-anthropic-api-key>

Add .env to your .gitignore to keep credentials out of version control.

Step 2: Initialize the clients

Initialize both the LaunchDarkly client and the Anthropic client. Store your API keys in environment variables.

Here is the initialization code:

import os
import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai.client import LDAIClient
from ldai.providers.types import LDAIMetrics
from ldai.tracker import TokenUsage
from anthropic import Anthropic
from dotenv import load_dotenv

load_dotenv()

ANTHROPIC_API_KEY = os.environ.get("ANTHROPIC_API_KEY")
SDK_KEY = os.environ.get("LAUNCHDARKLY_SDK_KEY")
CONFIG_KEY = "anthropic-assistant"
AGENT_CONFIG_KEY = "anthropic-agent"

anthropic_client = Anthropic(api_key=ANTHROPIC_API_KEY)
ldclient.set_config(Config(SDK_KEY))
if not ldclient.get().is_initialized():
    exit(1)

ai_client = LDAIClient(ldclient.get())

Step 3: Create an AI Config in LaunchDarkly

Create an AI Config in the LaunchDarkly UI to store your Anthropic model settings and prompts.

Using the MCP server or agent skills

If you have the LaunchDarkly MCP server or agent skills configured, prompt your coding assistant to create the AI Config for you. For example:

“Create a completion mode AI Config called ‘Anthropic assistant’ with a ‘Claude Sonnet 4’ variation using the claude-sonnet-4-20250514 model, temperature 0.7, maxTokens 1024, and the system message: ‘You are a helpful assistant. Answer questions about {{topic}}.’ Enable targeting.”

To create the AI Config:

  1. In the left navigation, click Create and select AI Config.
  2. In the Create AI Config dialog, select Completion.
  3. Enter a name, such as “Anthropic assistant”.
  4. Click Create.

To create a variation:

  1. On the Variations tab, replace “Untitled variation” with a name, such as “Claude Sonnet 4”.
  2. Click Select a model and choose the claude-sonnet-4-20250514 Anthropic model.
  3. Click Parameters and set temperature to 0.7 and maxTokens to 1024.
  4. Add a system message to define your assistant’s behavior:
System message
You are a helpful assistant. Answer questions about {{topic}}.
  5. Click Review and save.

A completed variation with model configuration and system message.


To enable targeting:

  1. Select the Targeting tab.
  2. In the “Default rule” section, click Edit.
  3. Set the default rule to serve your variation.
  4. Click Review and save.

The default targeting rule configured to serve a variation.


Step 4: Get the AI Config in your application

Retrieve the AI Config from LaunchDarkly by calling the completion config function. Pass a context that represents the current user. You can also pass an optional fallback configuration so your application fails gracefully if LaunchDarkly is unreachable.

Here is how to get the AI Config:

# Define the context for the current user
context = Context.builder("user-123") \
    .kind("user") \
    .name("Sandy") \
    .build()

# Pass a default for improved resiliency when the AI Config is unavailable
# or LaunchDarkly is unreachable; omit it for a disabled default.
# Example:
# from ldai.client import AICompletionConfigDefault
# default = AICompletionConfigDefault(
#     enabled=True,
#     model={"name": "claude-opus-4-7"},
#     provider={"name": "anthropic"},
#     messages=[{"role": "system", "content": "You are a helpful assistant."}],
# )
# config = ai_client.completion_config(CONFIG_KEY, context, default, variables={"topic": "Python"})

# Get the AI Config
config = ai_client.completion_config(CONFIG_KEY, context, variables={"topic": "Python"})
tracker = config.tracker

Check the enabled property and handle the disabled case in your application. When you pass a fallback configuration, the SDK uses it if LaunchDarkly is unreachable.
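
A minimal guard might look like the sketch below. The `answer` helper and its `ai_reply` / `fallback_reply` callables are our own illustrative names, not SDK functions; only `config.enabled` comes from the SDK:

```python
def answer(config, question: str, ai_reply, fallback_reply) -> str:
    """Route to the AI path only when the config is enabled for this context.

    `ai_reply` and `fallback_reply` are your own callables (hypothetical here):
    the first calls Anthropic, the second returns a static response.
    """
    if not config.enabled:
        # Targeting is off for this context: degrade gracefully
        return fallback_reply(question)
    return ai_reply(config, question)
```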

Best practices

For production use:

  • Retrieve the AI Config each time you generate content so LaunchDarkly can evaluate the latest targeting rules and prompt changes.
  • Provide a fallback configuration when possible so your application can fail gracefully if LaunchDarkly is unavailable.
  • Avoid sending personally identifiable information in contexts unless you have a specific need and an approved handling pattern. To learn more, read Privacy in AI Configs.

Step 5: Call Anthropic and track metrics

Anthropic’s Messages API expects a specific format. The system message goes in a top-level system parameter, separate from the user/assistant messages.

To track metrics, define an anthropic_metrics (Python) / anthropicMetrics (TypeScript) converter function that maps an Anthropic response to an LDAIMetrics object. Then pass it to tracker.track_metrics_of / tracker.trackMetricsOf along with a callable that performs the Anthropic API call. The tracker handles duration, success, and error tracking automatically.

Define the metrics converter:

def anthropic_metrics(response) -> LDAIMetrics:
    """Convert an Anthropic Messages response into LDAIMetrics. Passed to
    tracker.track_metrics_of, which handles duration + success/error itself."""
    usage = response.usage
    return LDAIMetrics(
        success=True,
        usage=TokenUsage(
            total=usage.input_tokens + usage.output_tokens,
            input=usage.input_tokens,
            output=usage.output_tokens,
        ),
    )

Then call Anthropic with the config using tracker.track_metrics_of / tracker.trackMetricsOf. Variation parameters are stored in the same snake_case shape Anthropic’s SDK expects (max_tokens, top_p, …), so they pass through directly:

if config.enabled:
    tracker = config.tracker
    messages = config.messages or []

    # Anthropic uses a top-level `system` parameter, not a system role in messages
    system = next((m.content for m in messages if m.role == "system"), None)
    chat_messages = [{"role": m.role, "content": m.content} for m in messages if m.role != "system"]
    chat_messages.append({"role": "user", "content": "How do I read a file in Python?"})

    params = dict((config.model.to_dict().get("parameters") if config.model else None) or {})
    params.setdefault("max_tokens", 1024)

    response = tracker.track_metrics_of(
        lambda: anthropic_client.messages.create(
            model=config.model.name,
            system=system or "",
            messages=chat_messages,
            **params,
        ),
        anthropic_metrics,
    )
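
The response’s content is a list of blocks rather than a plain string. To display the reply, join the text blocks; `extract_text` below is our own small helper, not an SDK or Anthropic function:

```python
def extract_text(response) -> str:
    """Concatenate the text blocks of an Anthropic Messages response."""
    return "".join(block.text for block in response.content if block.type == "text")

# Usage: print(extract_text(response))
```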

Step 6 (optional): Use agent mode with tool calling

Agent-mode AI Configs return a single instructions string instead of a message list, and they let you attach reusable tools from the LaunchDarkly tools library. With Anthropic, the instructions map to the top-level system parameter and tools pass through on the top-level tools parameter.

Create the tool in the tools library

First, define the tool in LaunchDarkly so the AI Config variation can reference it:

  1. In the left navigation, click Library, then select the Tools tab.
  2. Click Add tool.
  3. Enter get_order_status as the Key.
  4. Enter “Look up the status of a customer order by order ID” as the Description.
  5. Define the schema using the JSON editor:
{
  "type": "object",
  "properties": {
    "order_id": {
      "type": "string",
      "description": "The order ID to look up"
    }
  },
  "required": ["order_id"]
}
  6. Click Save.

The Create tool dialog.


Create the agent AI Config

  1. Click Create and select AI Config.
  2. Select Agent mode.

The Create AI Config dialog with Agent mode selected.

  3. Enter a name:
AI Config name
Anthropic agent
  4. Click Create.
  5. On the Variations tab, name the variation (for example, “Claude Sonnet 4 agent”).
  6. Click Select a model and choose claude-sonnet-4-20250514.
  7. Click Parameters and set max_tokens to 1024.
  8. Add the agent instructions:
Agent instructions
You are an order status assistant. Use the get_order_status tool to look up customer orders by ID. Only call the tool when the user asks about an order.
  9. Click + Attach tools and select get_order_status.

A variation editor with an attached tool.

  10. Click Review and save.
  11. On the Targeting tab, set the default rule to serve your variation and save.

To learn more about managing tools, read Tools in AI Configs.

Retrieve the agent config and run the tool loop

Use agent_config() instead of completion_config(). The SDK returns the attached tools under parameters.tools in OpenAI’s type=function shape, so convert them to Anthropic’s {name, description, input_schema} shape and pass them on the top-level tools= argument. Tool handler functions stay in your application code — LaunchDarkly stores the schema, your application owns the behavior.

# Same fallback pattern as completion — omit the default for a disabled fallback.
agent = ai_client.agent_config(AGENT_CONFIG_KEY, context)

if agent.enabled:
    tracker = agent.tracker
    params = dict((agent.model.to_dict().get("parameters") if agent.model else None) or {})
    params.setdefault("max_tokens", 1024)

    # LD returns attached tools under `parameters.tools` in OpenAI's type=function
    # shape: [{type:"function", name, description, parameters:{...schema}}].
    # Anthropic expects {name, description, input_schema}. Convert and move them
    # out of params (they go on the top-level `tools=` argument).
    ld_tools = params.pop("tools", []) or []
    tools = [
        {
            "name": t["name"],
            "description": t.get("description", ""),
            "input_schema": t.get("parameters", {"type": "object", "properties": {}}),
        }
        for t in ld_tools
    ]

    # Handlers stay in application code — LD stores the schema, the app owns the behavior.
    def get_order_status(order_id: str) -> str:
        orders = {
            "ORD-123": "Shipped — arrives Thursday",
            "ORD-456": "Processing — estimated ship date: tomorrow",
            "ORD-789": "Delivered on Monday",
        }
        return orders.get(order_id, f"No order found with ID {order_id}")

    tool_handlers = {"get_order_status": get_order_status}

    messages = [{"role": "user", "content": "What's the status of order ORD-123?"}]

    # Agent loop: call Anthropic, handle tool_use blocks, repeat
    MAX_STEPS = 5
    for _ in range(MAX_STEPS):
        response = tracker.track_metrics_of(
            lambda: anthropic_client.messages.create(
                model=agent.model.name,
                system=agent.instructions,
                messages=messages,
                tools=tools,
                **params,
            ),
            anthropic_metrics,
        )

        if response.stop_reason != "tool_use":
            break

        # Append assistant turn (preserves tool_use blocks)
        messages.append({"role": "assistant", "content": response.content})

        # Build a single tool_result user turn covering every tool_use block
        tool_results = []
        for block in response.content:
            if block.type != "tool_use":
                continue
            if block.name not in tool_handlers:
                raise ValueError(f"Unknown tool: {block.name}")
            result = tool_handlers[block.name](**block.input)
            tracker.track_tool_call(block.name)
            tool_results.append({
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": result,
            })
        messages.append({"role": "user", "content": tool_results})

In completion mode, you can attach judges to variations in the LaunchDarkly UI for automatic evaluation. In agent mode, invoke judges programmatically through the AI SDK.

To learn more, read When to use prompt-based vs agent mode and Agents in AI Configs.

Step 7: Monitor your AI Config

Use the LaunchDarkly UI to monitor how your applications are performing across all AI Configs and for individual configs.

To view aggregated metrics across all your AI Configs, navigate to Insights in the left navigation under the AI section. The Insights overview page displays cost, latency, error rate, invocation counts, and model distribution across your organization. To learn more, read AI Insights.

To view metrics for a specific AI Config:

  1. Navigate to your AI Config.
  2. Select the Monitoring tab.

The dashboard displays the following metrics:

  • Generation count: The total number of AI generation calls tracked for this config.
  • Input and output tokens: Token consumption broken down by prompt tokens sent and completion tokens received.
  • Latency: The time taken for each generation call, shown as percentiles (p50, p95).
  • Success and error rates: The proportion of successful versus failed generation calls.

Metrics update approximately every minute. Use these metrics to compare variations and optimize your prompts. To learn more, read Monitor AI Configs.

The Insights overview page showing cost, latency, error rate, and invocation metrics for Anthropic AI Configs.

Observability

The AI SDKs emit OpenTelemetry-compatible spans for each generation call. You can forward these spans to your existing observability stack for deeper analysis. To learn more, read Observability and LLM observability.

Step 8: Close the client

Close the LaunchDarkly client when your application shuts down to flush pending events.

Here is how to close the client:

ldclient.get().close()

Always flush events before closing. Trailing events are at risk of being lost otherwise, in short-lived scripts and long-running services alike.

Here is how to flush events:

# Always flush events before closing — trailing events are at risk of being
# lost otherwise, in short-lived scripts and long-running services alike.
ldclient.get().flush()
ldclient.get().close()

Complete example

Here is a complete working example that combines all the steps.

import os
import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai.client import LDAIClient
from ldai.providers.types import LDAIMetrics
from ldai.tracker import TokenUsage
from anthropic import Anthropic
from dotenv import load_dotenv

load_dotenv()

ANTHROPIC_API_KEY = os.environ.get("ANTHROPIC_API_KEY")
SDK_KEY = os.environ.get("LAUNCHDARKLY_SDK_KEY")

CONFIG_KEY = "anthropic-assistant"
AGENT_CONFIG_KEY = "anthropic-agent"


def anthropic_metrics(response) -> LDAIMetrics:
    """Convert an Anthropic Messages response into LDAIMetrics. Passed to
    tracker.track_metrics_of, which handles duration + success/error itself."""
    usage = response.usage
    return LDAIMetrics(
        success=True,
        usage=TokenUsage(
            total=usage.input_tokens + usage.output_tokens,
            input=usage.input_tokens,
            output=usage.output_tokens,
        ),
    )


def main():
    anthropic_client = Anthropic(api_key=ANTHROPIC_API_KEY)
    ldclient.set_config(Config(SDK_KEY))
    if not ldclient.get().is_initialized():
        return

    ai_client = LDAIClient(ldclient.get())

    context = Context.builder("user-123").kind("user").name("Sandy").build()

    # ===================
    # COMPLETION MODE
    # ===================

    # Pass a default for improved resiliency when the AI Config is unavailable
    # or LaunchDarkly is unreachable; omit it for a disabled default.
    # Example:
    # from ldai.client import AICompletionConfigDefault
    # default = AICompletionConfigDefault(
    #     enabled=True,
    #     model={"name": "claude-opus-4-7"},
    #     provider={"name": "anthropic"},
    #     messages=[{"role": "system", "content": "You are a helpful assistant."}],
    # )
    # config = ai_client.completion_config(CONFIG_KEY, context, default, variables={"topic": "Python"})
    config = ai_client.completion_config(CONFIG_KEY, context, variables={"topic": "Python"})

    if config.enabled:
        tracker = config.tracker
        messages = config.messages or []

        # Anthropic uses a top-level `system` parameter, not a system role in messages
        system = next((m.content for m in messages if m.role == "system"), None)
        chat_messages = [{"role": m.role, "content": m.content} for m in messages if m.role != "system"]
        chat_messages.append({"role": "user", "content": "How do I read a file in Python?"})

        params = dict((config.model.to_dict().get("parameters") if config.model else None) or {})
        params.setdefault("max_tokens", 1024)

        tracker.track_metrics_of(
            lambda: anthropic_client.messages.create(
                model=config.model.name,
                system=system or "",
                messages=chat_messages,
                **params,
            ),
            anthropic_metrics,
        )

    # ===================
    # AGENT MODE
    # ===================

    # Same fallback pattern as completion — omit the default for a disabled fallback.
    agent = ai_client.agent_config(AGENT_CONFIG_KEY, context)

    if agent.enabled:
        tracker = agent.tracker
        params = dict((agent.model.to_dict().get("parameters") if agent.model else None) or {})
        params.setdefault("max_tokens", 1024)

        # LD returns attached tools under `parameters.tools` in OpenAI's type=function
        # shape: [{type:"function", name, description, parameters:{...schema}}].
        # Anthropic expects {name, description, input_schema}. Convert and move them
        # out of params (they go on the top-level `tools=` argument).
        ld_tools = params.pop("tools", []) or []
        tools = [
            {
                "name": t["name"],
                "description": t.get("description", ""),
                "input_schema": t.get("parameters", {"type": "object", "properties": {}}),
            }
            for t in ld_tools
        ]

        # Handlers stay in application code — LD stores the schema, the app owns the behavior.
        def get_order_status(order_id: str) -> str:
            orders = {
                "ORD-123": "Shipped — arrives Thursday",
                "ORD-456": "Processing — estimated ship date: tomorrow",
                "ORD-789": "Delivered on Monday",
            }
            return orders.get(order_id, f"No order found with ID {order_id}")

        tool_handlers = {"get_order_status": get_order_status}

        messages = [{"role": "user", "content": "What's the status of order ORD-123?"}]

        # Agent loop: call Anthropic, handle tool_use blocks, repeat
        MAX_STEPS = 5
        for _ in range(MAX_STEPS):
            response = tracker.track_metrics_of(
                lambda: anthropic_client.messages.create(
                    model=agent.model.name,
                    system=agent.instructions,
                    messages=messages,
                    tools=tools,
                    **params,
                ),
                anthropic_metrics,
            )

            if response.stop_reason != "tool_use":
                break

            # Append assistant turn (preserves tool_use blocks)
            messages.append({"role": "assistant", "content": response.content})

            # Build a single tool_result user turn covering every tool_use block
            tool_results = []
            for block in response.content:
                if block.type != "tool_use":
                    continue
                if block.name not in tool_handlers:
                    raise ValueError(f"Unknown tool: {block.name}")
                result = tool_handlers[block.name](**block.input)
                tracker.track_tool_call(block.name)
                tool_results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": result,
                })
            messages.append({"role": "user", "content": tool_results})

    # Always flush events before closing — trailing events are at risk of being
    # lost otherwise, in short-lived scripts and long-running services alike.
    ldclient.get().flush()
    ldclient.get().close()


if __name__ == "__main__":
    main()

What to explore next

After you have the basic integration working, you can extend it with:

  • Tools for calling external functions from your workflows
  • Online evaluations to score response quality automatically
  • Experiments to compare AI Config variations statistically
  • Agents for multi-step workflows

For more AI Configs guides, read the other guides in the AI Configs guides section.

Troubleshooting

If you are experiencing problems with your configuration, this section lists common errors and solutions.

Metrics not appearing

If metrics do not appear on the AI Insights dashboard:

  • Verify that your code records metrics through the tracker, for example via track_metrics_of / trackMetricsOf or the underlying trackDuration, trackSuccess, and trackTokens methods.
  • Ensure you call flush() before closing the client, especially for short-lived scripts.
  • Wait at least one minute for metrics to process.
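
If events are being dropped at shutdown, one option is to register the flush at process exit. This is a sketch: `flush_and_close` is our own helper, and the commented atexit registration assumes your process exits normally:

```python
import atexit

def flush_and_close(client) -> None:
    """Deliver any buffered analytics events, then shut the client down."""
    client.flush()
    client.close()

# Register once at startup so even short-lived scripts deliver trailing events:
# atexit.register(flush_and_close, ldclient.get())
```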

SDK initialization failures

If the LaunchDarkly SDK fails to initialize:

  • Verify your SDK key is correct and matches the environment you are targeting.
  • Check that your network can reach LaunchDarkly servers.
  • Review the SDK logs for specific error messages.

Config returns fallback value

If you always receive the fallback configuration:

  • Verify targeting is enabled for your AI Config.
  • Check that the AI Config key in your code matches the key in LaunchDarkly.
  • Ensure your context matches the targeting rules.

Anthropic API errors

If you receive Anthropic API errors:

  • Verify your ANTHROPIC_API_KEY is set correctly.
  • Check that your API key has sufficient permissions.
  • Ensure the model name in your AI Config matches an available Anthropic model.

Conclusion

In this guide, you connected an Anthropic Claude-powered application to LaunchDarkly AI Configs. You can now:

  • Manage prompts and model settings in LaunchDarkly without code changes
  • Track token usage, latency, and success rates automatically
  • Use template variables to customize prompts per user