Getting started with LangChain and AI Configs

Overview

This guide shows how to use a LangChain chat model with LaunchDarkly AI Configs. LangChain is a framework for building LLM-powered applications that abstracts away provider differences — choose it when you want to swap between OpenAI, Anthropic, Gemini, and other providers without changing your application code. By the end, you will have a working integration that retrieves model configuration and prompts from LaunchDarkly, invokes a LangChain model, and reports metrics back automatically.

This guide uses the official LaunchDarkly LangChain provider packages, which handle model creation, provider mapping, and metrics extraction. It walks through the low-level model-creation flow so you can see how the pieces fit together. For higher-level helper methods, such as direct provider creation and structured-output support, read the Python or Node.js LangChain provider package documentation.

AI Configs support two modes:

  • Completion mode returns messages and roles (such as “system,” “user,” “assistant”). Use it for chat-style interactions and message-oriented workflows. Completion mode supports online evaluations with judges attached in the LaunchDarkly user interface (UI).
  • Agent mode returns a single instructions string. Use it when your runtime or framework expects a goal/instructions input for a structured workflow. Agent mode changes the configuration shape from messages to instructions. Your application maps these instructions into your provider or framework’s native input.

Both modes support tool calling. LangChain’s provider abstraction works with both modes, letting you swap models without changing your application code. This guide walks through completion mode as the main path, with an optional agent config section. To learn more about when to use each mode, read When to use prompt-based vs agent mode.
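To make the difference concrete, here is a plain-Python sketch of the two configuration shapes (illustrative values only; in practice the SDK returns config objects, not hand-written dicts):

```python
# Illustrative only: the two configuration shapes described above.
# Real values come from the LaunchDarkly AI SDK, not hand-written dicts.
completion_config = {
    "model": {"name": "gpt-4o", "parameters": {"temperature": 0.7}},
    # Completion mode: a list of role-tagged messages
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
    ],
}

agent_config = {
    "model": {"name": "gpt-4o", "parameters": {"temperature": 0.7}},
    # Agent mode: a single instructions string
    "instructions": "Triage incoming support requests and route them.",
}
```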

This guide provides examples in both Python and Node.js (TypeScript).

Additional resources for AI Configs

If you are not familiar with AI Configs, start with the Quickstart for AI Configs and return to this guide when you are ready for a LangChain-specific example.

You can find reference guides for each of the AI SDKs at AI SDKs.

Prerequisites

To complete this guide, you need the following:

  • A LaunchDarkly account.
  • An API key for your chosen model provider (OpenAI, Anthropic, or another supported provider).
  • A development environment:
    • Python: Python 3.10 or higher
    • Node.js: Node.js 20 or higher
  • The LangChain integration package for your model provider installed locally. For example, langchain-openai for OpenAI models, langchain-anthropic for Anthropic models, or langchain-google-genai for Gemini. LangChain raises an ImportError if the provider package is not installed when the model is created.
  • Familiarity with LaunchDarkly contexts. To learn more, read Contexts and segments.

Concepts

Before you begin, review these key concepts.

AI Configs

An AI Config is a LaunchDarkly resource that controls how your application uses large language models. Each AI Config contains one or more variations. Each variation specifies:

  • A model configuration, including the model name and parameters
  • Messages that define the prompt

You can update these settings in LaunchDarkly at any time without changing your application code.

The LaunchDarkly LangChain provider

LaunchDarkly publishes official LangChain provider packages for Python and Node.js. These packages handle model creation, message conversion, and metrics extraction so you do not need to write custom integration code.

The provider creates a LangChain chat model directly from an AI Config, passing through all model parameters and mapping provider names automatically. It also extracts token usage from LangChain responses for tracking in LaunchDarkly.

Contexts

A context represents the end user interacting with your application. LaunchDarkly uses context attributes to:

  • Determine which variation to serve based on targeting rules
  • Populate template variables in your prompts

The tracker

When you retrieve an AI Config, the SDK returns a config object with a tracker property. The tracker records metrics from your LangChain calls, including:

  • Generation count
  • Input and output tokens
  • Latency
  • Success and error rates

These metrics appear on the Monitoring tab in LaunchDarkly.

Step 1: Install the SDK

Install the LaunchDarkly AI SDK and the official LangChain provider package. AI Configs are supported by LaunchDarkly server-side SDKs only. The Node.js examples in this guide use the server-side Node.js AI SDK.

Here is how to install the required packages:

$ pip install launchdarkly-server-sdk
$ pip install launchdarkly-server-sdk-ai
$ pip install launchdarkly-server-sdk-ai-langchain
$ pip install langchain

Then install the LangChain integration package for your model provider:

$ # Install one or more provider packages:
$ pip install langchain-openai        # OpenAI models
$ pip install langchain-anthropic     # Anthropic models
$ pip install langchain-google-genai  # Google Gemini models

If your AI Config targets a provider whose LangChain package is not installed, the model creation step raises an error at runtime.

Create a .env file in your project root with your credentials:

# .env
LAUNCHDARKLY_SDK_KEY=<your-launchdarkly-sdk-key>
OPENAI_API_KEY=<your-openai-api-key>

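If you prefer not to add a dependency to load the .env file, a minimal standard-library loader looks like this (a sketch; libraries such as python-dotenv handle quoting and edge cases more robustly):

```python
import os

def load_env_file(path=".env"):
    """Minimal .env loader: sets variables that are not already defined."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and malformed entries
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```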
Step 2: Initialize the clients

Initialize the LaunchDarkly client and the AI client. Store your SDK key in an environment variable.

Here is the initialization code:

import os
import asyncio
import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai.client import LDAIClient
from ldai_langchain import (
    create_langchain_model,
    convert_messages_to_langchain,
    get_ai_metrics_from_response,
)
from langchain_core.messages import HumanMessage

# Initialize LaunchDarkly
sdk_key = os.getenv('LAUNCHDARKLY_SDK_KEY')
ldclient.set_config(Config(sdk_key))

if not ldclient.get().is_initialized():
    print("LaunchDarkly SDK failed to initialize")
    exit(1)

# Initialize the AI client
aiclient = LDAIClient(ldclient.get())

Set the LAUNCHDARKLY_SDK_KEY environment variable to your LaunchDarkly SDK key. You can find your SDK key in the Environments list for your LaunchDarkly project. To learn how, read SDK credentials.

Step 3: Create an AI Config in LaunchDarkly

Create an AI Config to store your model settings and prompts. You can do this through the LaunchDarkly UI, or programmatically using the LaunchDarkly MCP server from an AI coding assistant such as Claude Code or Cursor.

Using the MCP server or agent skills

If you have the LaunchDarkly MCP server or agent skills configured, prompt your coding assistant to create the AI Config for you. For example:

“Create a completion mode AI Config called ‘LangChain assistant’ with a GPT-4o variation that has a system message: ‘You are a helpful assistant. Provide clear, concise answers.’ Set temperature to 0.7 and enable targeting.”

To create the AI Config in the UI:

  1. Click Create and select AI Config.
  2. In the Create AI Config dialog, select Completion.
  3. Enter a name, such as “LangChain assistant”.
  4. Click Create.

To create a variation:

  1. On the Variations tab, replace “Untitled variation” with a name, such as “GPT-4o variation”.
  2. Click Select a model and choose a model, such as “gpt-4o” from OpenAI.
  3. Click Parameters to configure model parameters such as temperature and max tokens.
  4. Add a system message to define your assistant’s behavior. Here is an example:
System message
You are a helpful assistant. Provide clear, concise answers.
  5. Click Review and save.

A completed variation with model configuration and system message.

To enable targeting:

  1. Select the Targeting tab.
  2. In the “Default rule” section, click Edit.
  3. Set the default rule to serve your variation.
  4. Click Review and save.

The default targeting rule configured to serve a variation.

Step 4: Get the AI Config in your application

Retrieve the AI Config from LaunchDarkly by calling the completion config function. Pass a context that represents the current user.

Here is how to get the AI Config:

# Define the context for the current user
context = (
    Context
    .builder('example-user-key')
    .kind('user')
    .name('Sandy')
    .build()
)

# Get the AI Config
config_value = aiclient.completion_config(
    'langchain-assistant',
    context,
    variables={'myVariable': 'value'}
)
tracker = config_value.tracker

In Node.js, the SDK uses the provided fallback configuration when LaunchDarkly is unreachable. In Python, the default parameter is optional; if omitted, the SDK returns a disabled config as the fallback. Check the enabled property and handle the disabled case in your application.
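One way to handle the disabled case is to branch before calling the model. In the sketch below, StubConfig is a hypothetical stand-in for the config object the SDK returns, used only to make the example self-contained:

```python
from dataclasses import dataclass

@dataclass
class StubConfig:
    # Hypothetical stand-in for the SDK's config object
    enabled: bool

def respond_or_fallback(config, fallback="The assistant is temporarily unavailable."):
    """Return a canned response when the AI Config is disabled."""
    if not config.enabled:
        return fallback
    # In a real application, invoke the LangChain model here
    return "model response"
```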

Best practices

For production use:

  • Retrieve the AI Config each time you generate content so LaunchDarkly can evaluate the latest targeting rules and prompt changes.
  • Provide a fallback configuration when possible so your application can fail gracefully if LaunchDarkly is unavailable.
  • Avoid sending personally identifiable information in contexts unless you have a specific need and an approved handling pattern. To learn more, read Privacy in AI Configs.

Step 5: Create a LangChain model from the AI Config

Use the LaunchDarkly LangChain provider to create a chat model from the AI Config. The provider passes all model parameters from the variation (temperature, max tokens, and others) through to LangChain, and maps provider names automatically.

Here is how to create the model:

if config_value.enabled:
    # Creates a LangChain BaseChatModel with all parameters from the AI Config
    llm = create_langchain_model(config_value)

The provider maps LaunchDarkly provider names to LangChain equivalents. Both Python and Node.js map “Gemini” to google-genai. The Python provider additionally maps “Bedrock” variants (such as “Bedrock:Anthropic”) to bedrock_converse and passes the sub-provider as a parameter. Other provider names pass through as-is in both languages.
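The mapping described above can be sketched as follows. This is an illustration of the documented behavior, not the provider's actual implementation:

```python
def map_provider_name(ld_provider: str) -> str:
    """Illustrative sketch of the Python provider's name mapping."""
    name = ld_provider.lower()
    if name.startswith("bedrock"):
        # "Bedrock:Anthropic" and similar map to bedrock_converse;
        # the sub-provider is passed through as a separate parameter.
        return "bedrock_converse"
    if name == "gemini":
        return "google-genai"
    # Other provider names pass through as-is
    return ld_provider
```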

Step 6: Call the model and track metrics

Combine the messages from your AI Config with user input, then use the tracker to call the model and record metrics automatically.

Here is how to make the API call:

from langchain_core.messages import HumanMessage

async def run_completion():
    if not config_value.enabled:
        print("AI Config is disabled")
        return

    # Convert AI Config messages to LangChain format and add user input
    messages = convert_messages_to_langchain(config_value.messages or [])
    messages.append(HumanMessage(content='What can you help me with?'))

    # Track the completion with LaunchDarkly metrics
    completion = await tracker.track_metrics_of_async(
        lambda: llm.ainvoke(messages),
        get_ai_metrics_from_response,
    )

    print(completion.content)

asyncio.run(run_completion())

The tracker wraps the model call and automatically records duration, token usage, and success or error status.

Step 7: Use template variables

Use template variables in your AI Config messages to customize prompts at runtime. Variables use double curly brace syntax: {{ variable_name }}.

To use context attributes, prefix with ldctx:

Address the user as {{ ldctx.name }}.

To use custom variables, pass them when calling the completion config function.

Here is an example AI Config message with variables:

AI Config message in LaunchDarkly
You are an expert assistant for {{ topic }}. Provide detailed answers with examples.

Here is how to pass variables in your application code:

config_value = aiclient.completion_config(
    'langchain-assistant',
    context,
    variables={
        'topic': 'machine learning'
    }
)

The SDK replaces variables with their values before returning the configuration.
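The substitution behaves roughly like the following sketch. This is illustrative only; the SDK's actual template engine also resolves ldctx.* attributes from the evaluation context:

```python
import re

def render_template(template: str, variables: dict) -> str:
    """Replace {{ name }} placeholders with values from `variables`."""
    def substitute(match):
        name = match.group(1)
        # Leave unknown placeholders untouched
        return str(variables.get(name, match.group(0)))
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", substitute, template)
```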

Step 8: Monitor your AI Config

View metrics for your AI Configs in the LaunchDarkly UI.

To view aggregated metrics across all your AI Configs, navigate to Insights in the left navigation under the AI section. The Insights overview page displays cost, latency, error rate, invocation counts, and model distribution across your organization. To learn more, read Trends Explorer.

To view metrics for a specific AI Config:

  1. Navigate to your AI Config.
  2. Select the Monitoring tab.

The Monitoring tab displays:

  • Generation count: number of successful model invocations
  • Token usage: input and output tokens per variation
  • Time to generate: average latency per generation
  • Error rate: percentage of failed invocations
  • Costs: estimated spend based on token usage and model pricing

Metrics update approximately every minute. Use these metrics to compare variations and optimize your prompts. To learn more, read Monitor AI Configs.

To run statistical comparisons between variations, read Run experiments with AI Configs. To score response quality automatically using judges, read Online evaluations in AI Configs.

Observability

LaunchDarkly provides metrics for AI Config invocations, including latency, token usage, costs, and error rates. If you also want traces associated with an evaluated AI Config, run the model request inside an active OpenTelemetry parent span. To learn more, read Observability and LLM observability.

Optional: Retrieve an agent-based AI Config

As described above, agent mode returns a single instructions string instead of a messages array. Your application maps these instructions into your provider or framework's native input.

To use agent mode, create an AI Config with mode set to Agent in the LaunchDarkly UI, then retrieve it with the agent config function. Map the LaunchDarkly instructions field into LangChain’s message abstraction.

from langchain_core.messages import SystemMessage

agent = aiclient.agent_config(
    'langchain-agent',
    context,
    variables={'topic': 'Python'}
)

if agent.enabled:
    llm = create_langchain_model(agent)

    # Map instructions into LangChain's SystemMessage
    messages = [
        SystemMessage(content=agent.instructions),
        HumanMessage(content='Triage this support request: my order has not arrived'),
    ]

    # Run the awaited call inside an async function, as in Step 6
    tracker = agent.tracker
    completion = await tracker.track_metrics_of_async(
        lambda: llm.ainvoke(messages),
        get_ai_metrics_from_response
    )
    print(completion.content)

In completion mode, you can attach judges to variations in the LaunchDarkly UI for automatic evaluation. In agent mode, invoke judges programmatically through the AI SDK.

To learn more, read When to use prompt-based vs agent mode and Agents in AI Configs. For multi-agent workflows with LangGraph, read Compare AI orchestrators.

Step 9: Close the client

Close the LaunchDarkly client when your application shuts down to flush pending events.

Here is how to close the client:

ldclient.get().flush()
ldclient.get().close()

For short-lived applications such as scripts, explicitly flush events before closing.

Complete example

Here is a complete working example that combines all the steps.

import os
import asyncio
import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai.client import LDAIClient
from ldai_langchain import (
    create_langchain_model,
    convert_messages_to_langchain,
    get_ai_metrics_from_response,
)
from langchain_core.messages import HumanMessage

# Set sdk_key to your LaunchDarkly SDK key.
sdk_key = os.getenv('LAUNCHDARKLY_SDK_KEY')

# Set ai_config_key to the AI Config key you want to evaluate.
ai_config_key = os.getenv('LAUNCHDARKLY_AI_CONFIG_KEY', 'sample-ai-config')

async def async_main():
    if not sdk_key:
        print("*** Please set the LAUNCHDARKLY_SDK_KEY env first")
        exit()

    ldclient.set_config(Config(sdk_key))
    if not ldclient.get().is_initialized():
        print("*** SDK failed to initialize.")
        exit()

    aiclient = LDAIClient(ldclient.get())
    print("*** SDK successfully initialized")

    # Set up the evaluation context.
    context = (
        Context
        .builder('example-user-key')
        .kind('user')
        .name('Sandy')
        .build()
    )

    config_value = aiclient.completion_config(
        ai_config_key,
        context,
        variables={'myUserVariable': "Testing Variable"}
    )
    tracker = config_value.tracker

    if not config_value.enabled:
        print("AI Config is disabled")
        # Flush and close before exiting early
        ldclient.get().flush()
        ldclient.get().close()
        return

    try:
        # Create LangChain model from AI Config
        print("Model:", config_value.model.name,
              "Provider:", config_value.provider.name)
        llm = create_langchain_model(config_value)

        # Convert messages and add user input
        messages = convert_messages_to_langchain(
            config_value.messages or []
        )
        user_input = "What can you help me with?"
        print("User Input:\n", user_input)
        messages.append(HumanMessage(content=user_input))

        # Track the LangChain completion with LaunchDarkly metrics
        completion = await tracker.track_metrics_of_async(
            lambda: llm.ainvoke(messages),
            get_ai_metrics_from_response,
        )
        ai_response = completion.content
        print("AI Response:\n", ai_response)
        print("Success.")

    except Exception as e:
        print(f"Error during completion: {e}")

    # Flush pending events and close the connection.
    ldclient.get().flush()
    ldclient.get().close()


def main():
    asyncio.run(async_main())


if __name__ == "__main__":
    main()

What to explore next

After you have the basic integration working, you can extend it with:

  • Tools for calling external functions from your workflows
  • Online evaluations to score response quality automatically
  • Experiments to compare AI Config variations statistically
  • Agents for multi-step workflows with LangChain or LangGraph

For structured outputs and higher-level helper methods, read the Python or Node.js LangChain provider package documentation.

Troubleshooting

Some solutions for common problems are outlined below.

Provider package not installed

If you receive ImportError (Python) or Error (Node.js) when creating the model, verify that the LangChain integration package for your provider is installed. For example, if your AI Config uses an OpenAI model, install langchain-openai (Python) or @langchain/openai (Node.js). The LaunchDarkly LangChain provider creates models using LangChain’s init_chat_model, which requires the correct provider package at runtime.
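A quick way to confirm the provider package is importable before creating the model, using only the standard library:

```python
import importlib.util

def provider_package_installed(module_name: str) -> bool:
    """Return True if the given module can be imported."""
    return importlib.util.find_spec(module_name) is not None

# Example: check for the OpenAI integration before model creation
# if not provider_package_installed("langchain_openai"):
#     raise RuntimeError("Install langchain-openai to use OpenAI models")
```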

Metrics not appearing

If metrics do not appear on the Monitoring tab:

  • Verify that you are calling tracker.track_metrics_of_async() (Python) or tracker.trackMetricsOf() (Node.js) with the LangChain metrics extractor.
  • Ensure you call flush() before closing the client, especially for short-lived scripts.
  • Wait at least one minute for metrics to process.

SDK initialization failures

If the LaunchDarkly SDK fails to initialize:

  • Verify your SDK key is correct and matches the environment you are targeting.
  • Check that your network can reach LaunchDarkly servers.
  • Review the SDK logs for specific error messages.

Config returns fallback value

If you always receive the fallback configuration:

  • Verify targeting is enabled for your AI Config.
  • Check that the AI Config key in your code matches the key in LaunchDarkly.
  • Ensure your context matches the targeting rules.

Package import errors

If you receive ModuleNotFoundError (Python) or Cannot find module (Node.js) for the LaunchDarkly packages:

  • Verify the LangChain provider package is installed. The Python package is launchdarkly-server-sdk-ai-langchain (imported as ldai_langchain). The Node.js package is @launchdarkly/server-sdk-ai-langchain.
  • Ensure you are using compatible current versions of launchdarkly-server-sdk-ai, launchdarkly-server-sdk-ai-langchain, and your LangChain provider package. Check the package documentation or release notes if you encounter import or runtime errors.

Conclusion

In this guide, you connected a LangChain chat model to LaunchDarkly AI Configs using the official LangChain provider packages. You can now:

  • Manage prompts and model settings in LaunchDarkly without code changes
  • Track token usage, latency, and success rates automatically
  • Use template variables to customize prompts per user
Want to know more? Start a trial.
Your 14-day trial begins as soon as you sign up. Get started in minutes using the in-app Quickstart. You'll discover how easy it is to release, monitor, and optimize your software.
