Getting started with OpenAI and AI Configs

Overview

This guide shows how to connect an OpenAI-powered application to LaunchDarkly AI Configs. OpenAI models are widely adopted for chat completions, function calling, and structured outputs. By the end, you will be able to manage your model configuration and prompts outside of your application code, track metrics automatically, and switch between models or update prompts without redeploying.

This guide uses the base SDK with track_openai_metrics (Python) / trackOpenAIMetrics (Node.js) tracking helpers. LaunchDarkly also publishes higher-level OpenAI provider packages for Python and Node.js that handle model creation, parameter forwarding, and structured output.

AI Configs support two modes:

  • Completion mode returns messages and roles (such as “system,” “user,” “assistant”). Use it for chat-style interactions and message-oriented workflows. Completion mode supports online evaluations with judges attached in the LaunchDarkly user interface (UI).
  • Agent mode returns a single instructions string. Use it when your runtime or framework expects a goal/instructions input for a structured workflow. Agent mode changes the configuration shape from messages to instructions. Your application maps these instructions into your provider or framework’s native input.

Both modes support tool calling. This guide walks through completion mode as the main path, with an optional agent config section and an advanced tool-calling example. To learn more about when to use each mode, read When to use prompt-based vs agent mode.

This guide provides examples in both Python and Node.js (TypeScript).

Additional resources for AI Configs

If you are not familiar with AI Configs, start with the Quickstart for AI Configs and return to this guide when you are ready for a more detailed example.

You can find reference guides for each of the AI SDKs at AI SDKs.

Prerequisites

To complete this guide, you need the following:

  • A LaunchDarkly account.
  • An OpenAI API key.
  • A development environment:
    • Python: Python 3.10 or higher
    • Node.js: Node.js 20 or higher
  • Familiarity with LaunchDarkly contexts. To learn more, read Contexts and segments.

Concepts

Before you begin, review these key concepts.

AI Configs

An AI Config is a LaunchDarkly resource that controls how your application uses large language models. Each AI Config contains one or more variations. Each variation specifies:

  • A model configuration, including the model name and parameters
  • Messages that define the prompt

You can update these settings in LaunchDarkly at any time without changing your application code.
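For illustration, the configuration a variation yields when your application retrieves it has roughly this shape (the field values here are examples, not defaults; the Python SDK exposes this dictionary through to_dict()):

```json
{
  "model": {
    "name": "gpt-4o",
    "parameters": { "temperature": 0.7 }
  },
  "messages": [
    { "role": "system", "content": "You are a helpful assistant. Provide clear, concise answers." }
  ]
}
```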

Contexts

A context represents the end user interacting with your application. LaunchDarkly uses context attributes to:

  • Determine which variation to serve based on targeting rules
  • Populate template variables in your prompts

The tracker

When you retrieve an AI Config, the SDK returns a config object with a tracker property. The tracker records metrics from your OpenAI calls, including:

  • Generation count
  • Input and output tokens
  • Latency
  • Success and error rates

These metrics appear on the Monitoring tab in LaunchDarkly.

Step 1: Install the SDK

Install the LaunchDarkly AI SDK and the OpenAI SDK in your application. AI Configs are supported by LaunchDarkly server-side SDKs only. The Node.js examples in this guide use the server-side Node.js AI SDK.

Here is how to install the required packages:

pip install launchdarkly-server-sdk
pip install launchdarkly-server-sdk-ai
pip install openai
pip install python-dotenv

Create a .env file in your project root with your API keys:

# .env
LAUNCHDARKLY_SDK_KEY=<your-launchdarkly-sdk-key>
OPENAI_API_KEY=<your-openai-api-key>

Step 2: Initialize the clients

Initialize both the LaunchDarkly client and the OpenAI client. Store your API keys in environment variables.

Here is the initialization code:

import os
import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai.client import LDAIClient, AICompletionConfigDefault
from openai import OpenAI

# Initialize OpenAI
openai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

# Initialize LaunchDarkly
ldclient.set_config(Config(os.environ.get("LAUNCHDARKLY_SDK_KEY")))

if not ldclient.get().is_initialized():
    print("LaunchDarkly SDK failed to initialize")
    exit(1)

# Initialize the AI client
aiclient = LDAIClient(ldclient.get())

Step 3: Create an AI Config in LaunchDarkly

Create an AI Config in the LaunchDarkly UI to store your OpenAI model settings and prompts.

Using the MCP server or agent skills

If you have the LaunchDarkly MCP server or agent skills configured, prompt your coding assistant to create the AI Config for you. For example:

“Create a completion mode AI Config called ‘OpenAI assistant’ with a GPT-4o variation that has a system message: ‘You are a helpful assistant. Provide clear, concise answers.’ Set temperature to 0.7 and enable targeting.”

To create the AI Config:

  1. Click Create and select AI Config.
  2. In the Create AI Config dialog, select Completion.
  3. Enter a name, such as “OpenAI assistant”.
  4. Click Create.

To create a variation:

  1. On the Variations tab, replace “Untitled variation” with a name, such as “GPT-4o variation”.
  2. Click Select a model and choose an OpenAI model, such as “gpt-4o”.
  3. Add a system message to define your assistant’s behavior. Here is an example:
System message
You are a helpful assistant. Provide clear, concise answers.
  4. Click Review and save.

A completed variation with model configuration and system message.

To enable targeting:

  1. Select the Targeting tab.
  2. In the “Default rule” section, click Edit.
  3. Set the default rule to serve your variation.
  4. Click Review and save.

The default targeting rule configured to serve a variation.

Step 4: Get the AI Config in your application

Retrieve the AI Config from LaunchDarkly by calling the completion config function. Pass a context that represents the current user and a fallback configuration.

Here is how to get the AI Config:

# Define the context for the current user
context = Context.builder("user-123") \
    .kind("user") \
    .name("Sandy") \
    .build()

# Define a fallback configuration
fallback_value = AICompletionConfigDefault(enabled=False)

# Get the AI Config
config = aiclient.completion_config(
    "openai-assistant",
    context,
    fallback_value
)
tracker = config.tracker

The SDK uses the fallback configuration when LaunchDarkly is unreachable. Check the enabled property and handle the disabled case in your application.

Best practices

For production use:

  • Retrieve the AI Config each time you generate content so LaunchDarkly can evaluate the latest targeting rules and prompt changes.
  • Provide a fallback configuration when possible so your application can fail gracefully if LaunchDarkly is unavailable.
  • Avoid sending personally identifiable information in contexts unless you have a specific need and an approved handling pattern. To learn more, read Privacy in AI Configs.

Step 5: Call OpenAI and track metrics

Use the tracker to call OpenAI and record metrics automatically. The SDK provides a track_openai_metrics function (Python) or trackOpenAIMetrics function (Node.js) that wraps your OpenAI call.

Here is how to make the API call:

if config.enabled:
    messages = [] if config.messages is None else config.messages

    config_dict = config.to_dict()
    params = config_dict.get("model", {}).get("parameters", {})

    completion = tracker.track_openai_metrics(
        lambda: openai_client.chat.completions.create(
            model=config.model.name,
            messages=[message.to_dict() for message in messages],
            **params,
        )
    )

    print(completion.choices[0].message.content)
else:
    print("AI Config is disabled")

The tracker automatically records token usage, latency, and success or error status.

Streaming

The track_openai_metrics (Python) and trackOpenAIMetrics (Node.js) helpers expect a complete response object. If your application uses streaming responses, use the lower-level track_duration, track_tokens, track_success, and track_error methods to record metrics manually. To learn more, read Monitor AI Configs.
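As a sketch of that manual pattern, a hand-rolled streaming wrapper might look like this. The function name and structure are our own, not part of the SDK; `tracker` is the config.tracker object, and token counts can be recorded the same way with track_tokens and the SDK's token usage type:

```python
import time

# Sketch only: stream a chat completion and record metrics on the
# tracker manually, since track_openai_metrics expects a complete response.
def stream_with_metrics(openai_client, tracker, model, messages):
    start = time.time()
    parts = []
    try:
        stream = openai_client.chat.completions.create(
            model=model,
            messages=messages,
            stream=True,
        )
        for chunk in stream:
            delta = chunk.choices[0].delta.content if chunk.choices else None
            if delta:
                parts.append(delta)
        tracker.track_success()
    except Exception:
        tracker.track_error()
        raise
    finally:
        # Record end-to-end latency in milliseconds
        tracker.track_duration(int((time.time() - start) * 1000))
    return "".join(parts)
```

Call a helper like this in place of track_openai_metrics whenever you pass stream=True.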

Not every parameter or capability applies uniformly to every OpenAI model. For example, function calling behavior and token limits differ between model families. Refer to OpenAI’s model documentation for model-specific details.

Step 6: Add user input to the conversation

Combine the messages from your AI Config with user input to create a complete conversation.

Here is how to add a user message:

if config.enabled:
    messages = [] if config.messages is None else config.messages

    # Convert to list and add user message
    message_list = [message.to_dict() for message in messages]
    message_list.append({
        "role": "user",
        "content": "What is the capital of France?"
    })

    config_dict = config.to_dict()
    params = config_dict.get("model", {}).get("parameters", {})

    completion = tracker.track_openai_metrics(
        lambda: openai_client.chat.completions.create(
            model=config.model.name,
            messages=message_list,
            **params,
        )
    )

    print(completion.choices[0].message.content)

Step 7: Use template variables

Use template variables in your AI Config messages to customize prompts at runtime. Variables use double curly brace syntax: {{ variable_name }}.

To use context attributes, prefix with ldctx:

Address the user as {{ ldctx.name }}.

To use custom variables, pass them when calling the completion config function.

Here is an example AI Config message with variables:

AI Config message in LaunchDarkly
You are a {{ assistant_type }} assistant. Help {{ ldctx.name }} with their questions about {{ topic }}.

Here is how to pass variables in your application code:

config = aiclient.completion_config(
    "openai-assistant",
    context,
    fallback_value,
    {
        "assistant_type": "technical",
        "topic": "Python programming"
    }
)
tracker = config.tracker

The SDK replaces variables with their values before returning the configuration.
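To illustrate what that substitution produces, here is a small stand-in (this helper is ours, purely for illustration; the real substitution happens inside completion_config(), and context attributes arrive under the ldctx prefix):

```python
import re

# Illustrative only: mimics the {{ variable }} substitution that the SDK
# performs for you before returning the configuration.
def render_template(message: str, variables: dict) -> str:
    return re.sub(
        r"\{\{\s*([\w.]+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        message,
    )

template = "You are a {{ assistant_type }} assistant. Help {{ ldctx.name }} with their questions about {{ topic }}."
variables = {
    "assistant_type": "technical",
    "ldctx.name": "Sandy",  # context attribute, referenced with the ldctx prefix
    "topic": "Python programming",
}
print(render_template(template, variables))
# → You are a technical assistant. Help Sandy with their questions about Python programming.
```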

Step 8: Monitor your AI Config

To view aggregated metrics across all your AI Configs, navigate to Insights in the left navigation under the AI section. The Insights overview page displays cost, latency, error rate, invocation counts, and model distribution across your organization. To learn more, read Trends Explorer.

To view metrics for a specific AI Config:

  1. Navigate to your AI Config.
  2. Select the Monitoring tab.

The Monitoring tab displays the following metrics:

  • Generations: Average number of successful generations per variation.
  • Token usage: Input and output tokens consumed by each variation.
  • Time to generate: The latency of each model call, measured end to end.
  • Error rate: The percentage of invocations that returned an error.
  • Costs: Estimated costs based on token usage and the model’s pricing.

Metrics update approximately every minute. Use these metrics to compare variations and optimize your prompts.

Observability

LaunchDarkly provides metrics for AI Config invocations, including latency, token usage, costs, and error rates. If you also want traces associated with an evaluated AI Config, run the model request inside an active OpenTelemetry parent span. To learn more, read Observability and LLM observability.

Optional: Retrieve an agent-based AI Config

Agent mode returns a single instructions string instead of a messages array. Use it when your runtime or framework expects a goal/instructions input for a structured workflow. Agent mode changes the configuration shape from messages to instructions. Your application maps these instructions into your provider or framework’s native input.

To use agent mode, create an AI Config with mode set to Agent in the LaunchDarkly UI, then retrieve it with the agent config function. The Chat Completions API has no top-level instructions parameter, so the example below passes the LaunchDarkly instructions field as a system message; with OpenAI's Responses API, you can pass it as the top-level instructions parameter instead.

from ldai.client import AIAgentConfigDefault

agent = aiclient.agent_config(
    'openai-agent',
    context,
    AIAgentConfigDefault(enabled=False),
)

if agent.enabled:
    tracker = agent.tracker
    agent_dict = agent.to_dict()
    params = agent_dict.get("model", {}).get("parameters", {})

    completion = tracker.track_openai_metrics(
        lambda: openai_client.chat.completions.create(
            model=agent.model.name,
            messages=[
                {"role": "system", "content": agent.instructions},
                {"role": "user", "content": "Triage this support request: my order hasn't arrived"},
            ],
            **params,
        )
    )
    print(completion.choices[0].message.content)

In completion mode, you can attach judges to variations in the LaunchDarkly UI for automatic evaluation. In agent mode, invoke judges programmatically through the AI SDK.

To learn more, read When to use prompt-based vs agent mode and Agents in AI Configs.

Step 9: Close the client

Close the LaunchDarkly client when your application shuts down to flush pending events.

Here is how to close the client:

ldclient.get().close()

For short-lived applications such as scripts, explicitly flush events before closing.

Here is how to flush events:

ldclient.get().flush()
ldclient.get().close()

Complete example

Here is a complete working example that combines all the steps.

import os
import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai.client import LDAIClient, AICompletionConfigDefault
from openai import OpenAI
from dotenv import load_dotenv

def main():
    # Load environment variables
    load_dotenv()

    # Initialize clients
    openai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

    ldclient.set_config(Config(os.environ.get("LAUNCHDARKLY_SDK_KEY")))

    if not ldclient.get().is_initialized():
        print("LaunchDarkly SDK failed to initialize")
        return

    aiclient = LDAIClient(ldclient.get())

    # Define context and fallback
    context = Context.builder("user-123") \
        .kind("user") \
        .name("Sandy") \
        .build()

    fallback_value = AICompletionConfigDefault(enabled=False)

    # Get AI Config
    config = aiclient.completion_config(
        "openai-assistant",
        context,
        fallback_value,
        {"topic": "Python programming"}
    )
    tracker = config.tracker

    # Call OpenAI
    if config.enabled:
        messages = [] if config.messages is None else config.messages
        message_list = [message.to_dict() for message in messages]
        message_list.append({
            "role": "user",
            "content": "How do I read a file in Python?"
        })

        config_dict = config.to_dict()
        params = config_dict.get("model", {}).get("parameters", {})

        try:
            completion = tracker.track_openai_metrics(
                lambda: openai_client.chat.completions.create(
                    model=config.model.name,
                    messages=message_list,
                    **params,
                )
            )
            print(completion.choices[0].message.content)
        except Exception as e:
            print(f"Error: {e}")
    else:
        print("AI Config is disabled")

    # Close client
    ldclient.get().flush()
    ldclient.get().close()

if __name__ == "__main__":
    main()

What to explore next

After you have the basic integration working, you can extend it with:

  • Tools for calling external functions from your workflows
  • Online evaluations to score response quality automatically
  • Experiments to compare AI Config variations statistically
  • Agents for multi-step workflows

For more AI Configs guides, read the other guides in the AI Configs guides section.

Advanced: Add tool calling with OpenAI

OpenAI’s function calling lets the model request tool executions during a conversation. You can combine this with either completion or agent mode AI Configs. The following example uses an agent config with a tool-calling loop.

from ldai.client import AIAgentConfigDefault
import json

# Define a tool
def get_order_status(order_id: str) -> str:
    orders = {
        "ORD-123": "Shipped — arrives Thursday",
        "ORD-456": "Processing — estimated ship date: tomorrow",
        "ORD-789": "Delivered on Monday",
    }
    return orders.get(order_id, f"No order found with ID {order_id}")

tool_handlers = {"get_order_status": get_order_status}

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the status of a customer order",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {"type": "string", "description": "The order ID"}
            },
            "required": ["order_id"],
        },
    },
}]

agent = aiclient.agent_config(
    'openai-agent', context, AIAgentConfigDefault(enabled=False),
)

if agent.enabled:
    tracker = agent.tracker
    agent_dict = agent.to_dict()
    params = agent_dict.get("model", {}).get("parameters", {})

    messages = [
        {"role": "system", "content": agent.instructions},
        {"role": "user", "content": "What's the status of order ORD-123?"},
    ]

    # Tool-calling loop
    MAX_STEPS = 5
    for step in range(MAX_STEPS):
        completion = tracker.track_openai_metrics(
            lambda: openai_client.chat.completions.create(
                model=agent.model.name,
                messages=messages,
                tools=tools,
                **params,
            )
        )

        choice = completion.choices[0]
        if choice.finish_reason != "tool_calls":
            print(choice.message.content)
            break

        messages.append(choice.message)
        for tool_call in choice.message.tool_calls:
            fn_name = tool_call.function.name
            fn_args = json.loads(tool_call.function.arguments)
            result = tool_handlers[fn_name](**fn_args)
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": result,
            })

Production considerations

Always set a maximum iteration limit to prevent runaway loops. You can manage tool definitions in the Tools Library in LaunchDarkly and attach them to your AI Config variations.

For multi-agent orchestration patterns with OpenAI Swarm, LangGraph, or Strands, read Compare AI orchestrators.

Troubleshooting

Some solutions for common problems are outlined below.

Metrics not appearing

If metrics do not appear on the Monitoring tab:

  • Verify that you are calling tracker.track_openai_metrics() (Python) or tracker.trackOpenAIMetrics() (Node.js).
  • Ensure you call flush() before closing the client, especially for short-lived scripts.
  • Wait at least one minute for metrics to process.

SDK initialization failures

If the LaunchDarkly SDK fails to initialize:

  • Verify your SDK key is correct and matches the environment you are targeting.
  • Check that your network can reach LaunchDarkly servers.
  • Review the SDK logs for specific error messages.

Config returns fallback value

If you always receive the fallback configuration:

  • Verify targeting is enabled for your AI Config.
  • Check that the AI Config key in your code matches the key in LaunchDarkly.
  • Ensure your context matches the targeting rules.

Conclusion

In this guide, you connected an OpenAI-powered application to LaunchDarkly AI Configs. You can now:

  • Manage prompts and model settings in LaunchDarkly without code changes
  • Track token usage, latency, and success rates automatically
  • Use template variables to customize prompts per user
Want to know more? Start a trial.

Your 14-day trial begins as soon as you sign up. Get started in minutes using the in-app Quickstart. You'll discover how easy it is to release, monitor, and optimize your software.