Getting started with OpenAI and AI Configs

Overview

This guide shows how to connect an OpenAI-powered application to LaunchDarkly AI Configs. OpenAI models are widely adopted for chat completions, function calling, and structured outputs. By the end, you will be able to manage your model configuration and prompts outside of your application code, and track metrics automatically.

AI Configs support two modes:

  • Completion mode returns messages and roles (system, user, assistant). Use it for chat-style interactions and message-oriented workflows. Completion mode supports online evaluations with judges attached in the LaunchDarkly UI.
  • Agent mode returns a single instructions string. Use it when your runtime or framework expects a goal/instructions input for a structured workflow. Agent mode changes the configuration shape from messages to instructions. Your application maps these instructions into your provider or framework’s native input.

Both modes support tool calling. This guide walks through completion mode as the main path, with an optional agent config section. To learn more about when to use each mode, read When to use prompt-based vs agent mode.

This guide provides examples in both Python and Node.js (TypeScript).

New to AI Configs?

If you’re a new user of AI Configs, start with the Quickstart and return to this guide when you are ready for a more detailed example.

To learn more about AI Configs-specific SDKs, read AI SDKs. For Python-specific details, read the Python AI SDK reference.

Prerequisites

To complete this guide, you need the following:

  • A LaunchDarkly account and a server-side SDK key for the environment you want to use
  • An OpenAI API key
  • A Python or Node.js (TypeScript) development environment

Concepts

Before you begin, review these key concepts.

AI Configs

An AI Config is a LaunchDarkly resource that controls how your application uses large language models. Each AI Config contains one or more variations. Each variation specifies:

  • A model configuration, including the model name and parameters
  • Messages that define the prompt

You can update these settings in LaunchDarkly at any time without changing your application code.

Contexts

A context represents the end user interacting with your application. LaunchDarkly uses context attributes to:

  • Determine which variation to serve based on targeting rules
  • Populate {{ ldctx.* }} placeholders in your prompts with context attribute values

Other placeholders, such as {{ topic }}, are populated from the variables argument you pass at runtime.
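For illustration only, the two placeholder sources can be sketched as a small substitution function. The SDK performs this interpolation for you; the template, attribute names, and `render` helper here are examples, not part of the SDK:

```python
import re

def render(template: str, ldctx: dict, variables: dict) -> str:
    # Sketch of the two placeholder sources: `ldctx.*` names read context
    # attributes, bare names read the runtime `variables` dict.
    def sub(match):
        key = match.group(1).strip()
        if key.startswith("ldctx."):
            return str(ldctx.get(key[len("ldctx."):], ""))
        return str(variables.get(key, ""))
    return re.sub(r"\{\{(.*?)\}\}", sub, template)

print(render(
    "Hi {{ ldctx.name }}, answer questions about {{ topic }}.",
    ldctx={"name": "Sandy"},
    variables={"topic": "Python"},
))
# → Hi Sandy, answer questions about Python.
```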

The tracker

When you retrieve an AI Config, the SDK returns a config object with a tracker property. The tracker records metrics from your OpenAI calls, including:

  • Generation count
  • Input and output tokens
  • Latency
  • Success and error rates

These metrics appear on the AI Insights dashboard in LaunchDarkly.

Step 1: Install the SDK

Install the LaunchDarkly AI SDK and the OpenAI SDK in your application. AI Configs are supported by LaunchDarkly server-side SDKs only. The Node.js examples in this guide use the server-side Node.js AI SDK.

Here is how to install the required packages:

$ pip install "launchdarkly-server-sdk-ai>=0.18.0"
$ pip install "launchdarkly-server-sdk-ai-openai>=0.4.0"
$ pip install openai
$ pip install python-dotenv

Create a .env file in your project root to store your API keys:

# .env
LAUNCHDARKLY_SDK_KEY=<your-launchdarkly-sdk-key>
OPENAI_API_KEY=<your-openai-api-key>

Add .env to your .gitignore to keep credentials out of version control.
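For example, from your project root:

```shell
# Append .env to .gitignore so your API keys are never committed
echo ".env" >> .gitignore
```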

Step 2: Initialize the clients

Initialize both the LaunchDarkly client and the OpenAI client. Store your API keys in environment variables.

Here is the initialization code:

import os
import json
import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai.client import LDAIClient, AICompletionConfigDefault, AIAgentConfigDefault
from ldai.providers.types import LDAIMetrics
from ldai.tracker import TokenUsage
# The metrics helper ships in the launchdarkly-server-sdk-ai-openai package;
# verify the exact import path against your installed version.
from ldai.providers.openai import get_ai_metrics_from_response
from openai import OpenAI
from dotenv import load_dotenv

load_dotenv()

OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
SDK_KEY = os.environ.get("LAUNCHDARKLY_SDK_KEY")
CONFIG_KEY = "openai-assistant"
AGENT_CONFIG_KEY = "openai-agent"

openai_client = OpenAI(api_key=OPENAI_API_KEY)
ldclient.set_config(Config(SDK_KEY))
if not ldclient.get().is_initialized():
    exit(1)

ai_client = LDAIClient(ldclient.get())

Step 3: Create an AI Config in LaunchDarkly

Create an AI Config in the LaunchDarkly UI to store your OpenAI model settings and prompts.

Using the MCP server or agent skills

If you have the LaunchDarkly MCP server or agent skills configured, prompt your coding assistant to create the AI Config for you. For example:

“Create a completion mode AI Config called ‘OpenAI assistant’ with a ‘GPT-4o’ variation using the gpt-4o model, temperature 0.7, max_tokens 1024, and the system message: ‘You are a helpful assistant. Answer questions about {{topic}}.’ Enable targeting.”

To create the AI Config:

  1. Click Create and select AI Config.
  2. In the Create AI Config dialog, select Completion.
  3. Enter a name, such as “OpenAI assistant”.
  4. Click Create.

To create a variation:

  1. On the Variations tab, replace “Untitled variation” with a name, such as “GPT-4o”.
  2. Click Select a model and choose the gpt-4o OpenAI model.
  3. Click Parameters and set temperature to 0.7 and max_tokens to 1024.
  4. Add a system message to define your assistant’s behavior:
System message
You are a helpful assistant. Answer questions about {{topic}}.
  5. Click Review and save.

A completed variation with model configuration and system message.


To enable targeting:

  1. Select the Targeting tab.
  2. In the “Default rule” section, click Edit.
  3. Set the default rule to serve your variation.
  4. Click Review and save.

The default targeting rule configured to serve a variation.


Step 4: Get the AI Config in your application

Retrieve the AI Config from LaunchDarkly by calling the completion config function. Pass a context that represents the current user. A fallback configuration is optional — if LaunchDarkly is unreachable and no fallback is provided, the config returns with enabled: false.

Here is how to get the AI Config:

# Define the context for the current user
context = Context.builder("user-123") \
    .kind("user") \
    .name("Sandy") \
    .build()

# Pass a default for improved resiliency when the AI Config is unavailable
# or LaunchDarkly is unreachable; omit it for a disabled default.
# Example:
# from ldai.client import AICompletionConfigDefault
# default = AICompletionConfigDefault(
#     enabled=True,
#     model={"name": "gpt-5"},
#     provider={"name": "openai"},
#     messages=[{"role": "system", "content": "You are a helpful assistant."}],
# )
# config = ai_client.completion_config(CONFIG_KEY, context, default, variables={"topic": "Python"})

# Get the AI Config
config = ai_client.completion_config(CONFIG_KEY, context, variables={"topic": "Python"})
tracker = config.create_tracker()

Check the enabled property and handle the disabled case in your application. If you want your application to serve a specific model and prompt when LaunchDarkly is unreachable, pass an AICompletionConfigDefault (Python) or a plain config object (TypeScript) as the third argument — see the commented fallback example above.
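As a minimal sketch of that guard (the `generate_or_degrade` function and the canned reply are ours, not part of the SDK):

```python
def generate_or_degrade(config, generate):
    """Hypothetical guard: call `generate` (your OpenAI call) only when the
    AI Config is enabled; otherwise return a canned degraded response."""
    if not getattr(config, "enabled", False):
        return "The assistant is temporarily unavailable. Please try again later."
    return generate()
```

Usage might look like `answer = generate_or_degrade(config, lambda: call_openai(config))`, where `call_openai` is your generation path from the next step.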

Best practices

For production use:

  • Retrieve the AI Config each time you generate content so LaunchDarkly can evaluate the latest targeting rules and prompt changes.
  • Provide a fallback configuration when possible so your application can fail gracefully if LaunchDarkly is unavailable.
  • Avoid sending personally identifiable information in contexts unless you have a specific need and an approved handling pattern. To learn more, read Privacy in AI Configs.

Step 5: Call OpenAI and track metrics

OpenAI’s Chat Completions API expects all messages, including system-role messages, in a single array. Unlike Anthropic’s API, there is no separate top-level system parameter: the system prompt is simply a message with role: "system" in the messages array.

Use the generic track_metrics_of (Python) or trackMetricsOf (Node.js) helper with get_ai_metrics_from_response (Python) or OpenAIProvider.getAIMetricsFromResponse (Node.js) from the launchdarkly-server-sdk-ai-openai / @launchdarkly/server-sdk-ai-openai package. The helper records duration, success/error, and token usage automatically.

Chat Completions vs. Responses API

This guide uses OpenAI’s Chat Completions API for completion mode (this step) and the Responses API for agent mode (Step 6). The track_metrics_of / trackMetricsOf helper takes a response-to-metrics converter: get_ai_metrics_from_response / OpenAIProvider.getAIMetricsFromResponse for Chat Completions, and a custom responses_metrics / responsesMetrics converter for the Responses API shape (response.output[] / response.usage.*_tokens).

LaunchDarkly variations store OpenAI-native parameter names (max_completion_tokens, temperature, top_p, and so on), so they pass directly to the SDK. You do need to drop tools from params before spreading them — attached tools pass through on the top-level tools argument, and leaving them in params would duplicate the argument. Define a small helper for that:

def drop_tools(params):
    """Drop `tools` from params. Attached tools are passed via the top-level
    `tools=` argument, so leaving them in params would duplicate the argument."""
    return {k: v for k, v in (params or {}).items() if k != "tools"}

Then call OpenAI with the config:

if config.enabled:
    tracker = config.create_tracker()
    messages = config.messages or []

    chat_messages = [m.to_dict() for m in messages]
    chat_messages.append({"role": "user", "content": "How do I read a file in Python?"})

    ld_params = (config.model.to_dict().get("parameters") if config.model else None) or {}

    # The helper lives in the launchdarkly-server-sdk-ai-openai package.
    completion = tracker.track_metrics_of(
        lambda: openai_client.chat.completions.create(
            model=config.model.name,
            messages=chat_messages,
            **drop_tools(ld_params),
        ),
        get_ai_metrics_from_response,
    )
Streaming

The track_metrics_of (Python) and trackMetricsOf (Node.js) helpers expect a complete response object. If your application uses streaming responses, use the lower-level track_duration, track_tokens, track_success, and track_error methods to record metrics manually. To learn more, read Monitor AI Configs.
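A sketch of that manual path. The tracker method names come from this guide; verify their exact signatures against your SDK version. It assumes OpenAI's stream_options={"include_usage": True}, which makes the final streamed chunk carry token usage:

```python
import time

def track_streamed_completion(tracker, start_stream):
    # Manual metrics for a streamed call: time the stream, record
    # success/error, and capture the usage object from the final chunk.
    started = time.time()
    usage = None
    chunks = []
    try:
        for chunk in start_stream():
            chunks.append(chunk)
            if getattr(chunk, "usage", None):
                usage = chunk.usage
        tracker.track_success()
    except Exception:
        tracker.track_error()
        raise
    finally:
        tracker.track_duration(int((time.time() - started) * 1000))
    if usage is not None:
        # In real code, wrap the counts in ldai.tracker.TokenUsage here.
        tracker.track_tokens(usage)
    return chunks
```

You would pass `lambda: openai_client.chat.completions.create(..., stream=True, stream_options={"include_usage": True})` as `start_stream`.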

Step 6 (optional): Use agent mode with tool calling

Agent-mode AI Configs return a single instructions string instead of a message list, and they let you attach reusable tools from the LaunchDarkly tools library. This step uses OpenAI’s Responses API for agent mode. The Responses API is OpenAI’s agent-oriented surface and accepts LaunchDarkly’s flat {type, name, description, parameters} tool shape directly — no per-call conversion. It also manages conversation state server-side through previous_response_id, so each turn only sends the new input (the initial user message, or the tool outputs from the previous turn).

Create the tool in the tools library

First, define the tool in LaunchDarkly so the AI Config variation can reference it:

  1. In the left navigation, click Library, then select the Tools tab.
  2. Click Add tool.
  3. Enter get_order_status as the key.
  4. Enter “Look up the status of a customer order by order ID” as the description.
  5. Define the schema using the JSON editor:
{
  "type": "object",
  "properties": {
    "order_id": {
      "type": "string",
      "description": "The order ID to look up"
    }
  },
  "required": ["order_id"]
}
  6. Click Save.

The Create tool dialog.


Create the agent AI Config

  1. Click Create and select AI Config.
  2. Select Agent mode.

The Create AI Config dialog with Agent mode selected.

  3. Enter a name:
AI Config name
OpenAI agent
  4. Click Create.
  5. On the Variations tab, name the variation (for example, “GPT-4o agent”).
  6. Click Select a model and choose gpt-4o.
  7. Click Parameters and set max_tokens to 1024.
  8. Add the agent instructions:
Agent instructions
You are an order status assistant. Use the get_order_status tool to look up customer orders by ID. Only call the tool when the user asks about an order.
  9. Click + Attach tools and select get_order_status.

A variation editor with an attached tool.

  10. Click Review and save.
  11. On the Targeting tab, set the default rule to serve your variation and save.

To learn more about managing tools, read Tools in AI Configs.

Retrieve the agent config and run the tool loop

Use agent_config() instead of completion_config(). The SDK returns attached tools under parameters.tools in LaunchDarkly’s flat {type, name, description, parameters} shape, which the Responses API accepts directly, so pass them through on the tools= argument without conversion. (Chat Completions, by contrast, expects the nested {type: 'function', function: {name, description, parameters}} shape.) Tool handler functions stay in your application code. LaunchDarkly stores the schema, while your application owns the behavior.
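If you instead call Chat Completions with tools, a small converter from LaunchDarkly’s flat shape to the nested Chat Completions shape might look like this. This is a sketch; the field names follow the two shapes described above:

```python
def to_chat_completions_tool(ld_tool: dict) -> dict:
    # LaunchDarkly stores tools flat: {type, name, description, parameters}.
    # The Responses API accepts that shape directly; Chat Completions expects
    # the nested form built here.
    return {
        "type": "function",
        "function": {
            "name": ld_tool["name"],
            "description": ld_tool.get("description", ""),
            "parameters": ld_tool.get("parameters", {}),
        },
    }
```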

Because the Responses API has a different response shape than Chat Completions, use tracker.track_metrics_of (Python) / tracker.trackMetricsOf (TypeScript) with a provider-specific converter function. The converter receives the raw API response and returns an LDAIMetrics object; the tracker handles duration and success/error itself.

def responses_metrics(response) -> LDAIMetrics:
    """Convert an OpenAI Responses API result into LDAIMetrics. Passed to
    tracker.track_metrics_of, which handles duration and success/error itself.
    Parses the Responses output shape: response.output[] and response.usage.*_tokens."""
    usage = getattr(response, "usage", None)
    tokens = None
    if usage:
        tokens = TokenUsage(
            total=usage.total_tokens or 0,
            input=usage.input_tokens or 0,
            output=usage.output_tokens or 0,
        )
    return LDAIMetrics(success=True, usage=tokens)


# Same fallback pattern as completion: omit the default for a disabled fallback.
agent = ai_client.agent_config(AGENT_CONFIG_KEY, context)

if agent.enabled:
    tracker = agent.create_tracker()
    ld_params = (agent.model.to_dict().get("parameters") if agent.model else None) or {}
    # Tools attached to the variation in LD are already in the Responses API's
    # flat shape, so pass them straight through.
    tools = ld_params.get("tools", []) or []

    # Handler stays in app code; LD governs the schema.
    def get_order_status(order_id: str) -> str:
        orders = {
            "ORD-123": "Shipped — arrives Thursday",
            "ORD-456": "Processing — estimated ship date: tomorrow",
            "ORD-789": "Delivered on Monday",
        }
        return orders.get(order_id, f"No order found with ID {order_id}")

    tool_handlers = {"get_order_status": get_order_status}

    # Responses API agent loop: chain turns via previous_response_id so OpenAI
    # keeps track of conversation state for us. Each turn we only send the
    # new input (the initial user message, or the function_call_outputs for
    # the previous turn's tool calls).
    next_input = [{"role": "user", "content": "What's the status of order ORD-123?"}]
    previous_response_id = None

    MAX_STEPS = 5
    for _ in range(MAX_STEPS):
        response = tracker.track_metrics_of(
            lambda: openai_client.responses.create(
                model=agent.model.name,
                instructions=agent.instructions,
                input=next_input,
                tools=tools,
                previous_response_id=previous_response_id,
                **drop_tools(ld_params),
            ),
            responses_metrics,
        )
        previous_response_id = response.id

        function_calls = [item for item in response.output if item.type == "function_call"]
        if not function_calls:
            break

        next_input = []
        for call in function_calls:
            args = json.loads(call.arguments)
            if call.name not in tool_handlers:
                raise ValueError(f"Unknown tool: {call.name}")
            result = tool_handlers[call.name](**args)
            tracker.track_tool_call(call.name)
            next_input.append({
                "type": "function_call_output",
                "call_id": call.call_id,
                "output": result,
            })

In completion mode, you can attach judges to variations in the LaunchDarkly UI for automatic evaluation. In agent mode, invoke judges programmatically through the AI SDK.

To learn more, read When to use prompt-based vs agent mode and Agents in AI Configs.

Step 7: Monitor your AI Config

Use the LaunchDarkly UI to monitor how your applications are performing across all AI Configs and for individual configs.

To view aggregated metrics across all your AI Configs, navigate to Insights in the left navigation under the AI section. The Insights overview page displays cost, latency, error rate, invocation counts, and model distribution across your organization. To learn more, read AI Insights.

To view metrics for a specific AI Config:

  1. Navigate to your AI Config.
  2. Select the Monitoring tab.

The dashboard displays the following metrics:

  • Generation count: The total number of AI generation calls tracked for this config.
  • Input and output tokens: Token consumption broken down by prompt tokens sent and completion tokens received.
  • Latency: The time taken for each generation call, shown as percentiles (p50, p95).
  • Success and error rates: The proportion of successful versus failed generation calls.

Metrics update approximately every minute. Use these metrics to compare variations and optimize your prompts. To learn more, read Monitor AI Configs.

The Insights overview page showing cost, latency, error rate, and invocation metrics for OpenAI AI Configs.

Observability

The AI SDKs emit OpenTelemetry-compatible spans for each generation call. You can forward these spans to your existing observability stack for deeper analysis. To learn more, read Observability and LLM observability.

Step 8: Flush events and close the client

Always flush events before closing the LaunchDarkly client when your application shuts down. Otherwise, trailing events can be lost, in short-lived scripts and long-running services alike.

Here is how to flush events and close the client:

# Always flush events before closing. Otherwise trailing events can be
# lost, in short-lived scripts and long-running services alike.
ldclient.get().flush()
ldclient.get().close()

Complete example

Here is a complete working example that combines all the steps.

import os
import json
import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai.client import LDAIClient, AICompletionConfigDefault, AIAgentConfigDefault
from ldai.providers.types import LDAIMetrics
from ldai.tracker import TokenUsage
# The metrics helper ships in the launchdarkly-server-sdk-ai-openai package;
# verify the exact import path against your installed version.
from ldai.providers.openai import get_ai_metrics_from_response
from openai import OpenAI
from dotenv import load_dotenv

load_dotenv()

OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
SDK_KEY = os.environ.get("LAUNCHDARKLY_SDK_KEY")
CONFIG_KEY = "openai-assistant"
AGENT_CONFIG_KEY = "openai-agent"


def drop_tools(params):
    """Drop `tools` from params. Attached tools are passed via the top-level
    `tools=` argument, so leaving them in params would duplicate the argument."""
    return {k: v for k, v in (params or {}).items() if k != "tools"}


def responses_metrics(response) -> LDAIMetrics:
    """Convert an OpenAI Responses API result into LDAIMetrics. Passed to
    tracker.track_metrics_of, which handles duration and success/error itself.
    Parses the Responses output shape: response.output[] and response.usage.*_tokens."""
    usage = getattr(response, "usage", None)
    tokens = None
    if usage:
        tokens = TokenUsage(
            total=usage.total_tokens or 0,
            input=usage.input_tokens or 0,
            output=usage.output_tokens or 0,
        )
    return LDAIMetrics(success=True, usage=tokens)


def main():
    openai_client = OpenAI(api_key=OPENAI_API_KEY)

    ldclient.set_config(Config(SDK_KEY))
    if not ldclient.get().is_initialized():
        return

    ai_client = LDAIClient(ldclient.get())
    context = Context.builder("user-123").kind("user").name("Sandy").build()

    # ===================
    # COMPLETION MODE — Chat Completions API
    # ===================

    # Pass a default for improved resiliency when the AI Config is unavailable
    # or LaunchDarkly is unreachable; omit it for a disabled default.
    # Example:
    # default = AICompletionConfigDefault(
    #     enabled=True,
    #     model={"name": "gpt-5"},
    #     provider={"name": "openai"},
    #     messages=[{"role": "system", "content": "You are a helpful assistant."}],
    # )
    # config = ai_client.completion_config(CONFIG_KEY, context, default, variables={"topic": "Python"})
    config = ai_client.completion_config(CONFIG_KEY, context, variables={"topic": "Python"})

    if config.enabled:
        tracker = config.create_tracker()
        chat_messages = [m.to_dict() for m in (config.messages or [])]
        chat_messages.append({"role": "user", "content": "How do I read a file in Python?"})

        ld_params = (config.model.to_dict().get("parameters") if config.model else None) or {}

        # The helper lives in the launchdarkly-server-sdk-ai-openai package.
        tracker.track_metrics_of(
            lambda: openai_client.chat.completions.create(
                model=config.model.name,
                messages=chat_messages,
                **drop_tools(ld_params),
            ),
            get_ai_metrics_from_response,
        )

    # ===================
    # AGENT MODE — Responses API (OpenAI's preferred agent surface;
    # flat tool shape matches LD's native tool format, no conversion needed)
    # ===================

    # Same fallback pattern as completion: omit the default for a disabled fallback.
    agent = ai_client.agent_config(AGENT_CONFIG_KEY, context)

    if agent.enabled:
        tracker = agent.create_tracker()
        ld_params = (agent.model.to_dict().get("parameters") if agent.model else None) or {}
        # Tools attached to the variation in LD are already in the Responses API's
        # flat shape, so pass them straight through.
        tools = ld_params.get("tools", []) or []

        def get_order_status(order_id: str) -> str:
            orders = {
                "ORD-123": "Shipped — arrives Thursday",
                "ORD-456": "Processing — estimated ship date: tomorrow",
                "ORD-789": "Delivered on Monday",
            }
            return orders.get(order_id, f"No order found with ID {order_id}")

        tool_handlers = {"get_order_status": get_order_status}

        # Responses API agent loop: chain turns via previous_response_id so OpenAI
        # keeps track of conversation state for us.
        next_input = [{"role": "user", "content": "What's the status of order ORD-123?"}]
        previous_response_id = None

        MAX_STEPS = 5
        for _ in range(MAX_STEPS):
            response = tracker.track_metrics_of(
                lambda: openai_client.responses.create(
                    model=agent.model.name,
                    instructions=agent.instructions,
                    input=next_input,
                    tools=tools,
                    previous_response_id=previous_response_id,
                    **drop_tools(ld_params),
                ),
                responses_metrics,
            )
            previous_response_id = response.id

            function_calls = [item for item in response.output if item.type == "function_call"]
            if not function_calls:
                break

            next_input = []
            for call in function_calls:
                args = json.loads(call.arguments)
                if call.name not in tool_handlers:
                    raise ValueError(f"Unknown tool: {call.name}")
                result = tool_handlers[call.name](**args)
                tracker.track_tool_call(call.name)
                next_input.append({
                    "type": "function_call_output",
                    "call_id": call.call_id,
                    "output": result,
                })

    # Always flush events before closing. Otherwise trailing events can be
    # lost, in short-lived scripts and long-running services alike.
    ldclient.get().flush()
    ldclient.get().close()


if __name__ == "__main__":
    main()

What to explore next

After you have the basic integration working, you can extend it with:

  • Tools for calling external functions from your workflows
  • Online evaluations to score response quality automatically
  • Experiments to compare AI Config variations statistically
  • Agents for multi-step workflows

For more AI Configs guides, read the other guides in the AI Configs guides section.

Troubleshooting

Metrics not appearing

If metrics do not appear on the AI Insights dashboard:

  • Verify that you are calling tracker.track_metrics_of() (Python) or tracker.trackMetricsOf() (Node.js) with the correct metrics converter.
  • Ensure you call flush() before closing the client, especially for short-lived scripts.
  • Wait at least one minute for metrics to process.

SDK initialization failures

If the LaunchDarkly SDK fails to initialize:

  • Verify your SDK key is correct and matches the environment you are targeting.
  • Check that your network can reach LaunchDarkly servers.
  • Review the SDK logs for specific error messages.

Config returns fallback value

If you always receive the fallback configuration:

  • Verify targeting is enabled for your AI Config.
  • Check that the AI Config key in your code matches the key in LaunchDarkly.
  • Ensure your context matches the targeting rules.

OpenAI API errors

If you receive OpenAI API errors:

  • Verify your OPENAI_API_KEY is set correctly.
  • Check that your API key has sufficient permissions.
  • Ensure the model name in your AI Config matches an available OpenAI model.

Conclusion

In this guide, you connected an OpenAI-powered application to LaunchDarkly AI Configs. You can now:

  • Manage prompts and model settings in LaunchDarkly without code changes
  • Track token usage, latency, and success rates automatically
  • Use template variables to customize prompts per user