Getting started with LangGraph and AI Configs
Overview
This guide shows how to integrate LangGraph agent workflows with LaunchDarkly AI Configs. Using AI Configs with LangGraph lets you manage agent instructions, model configuration, and parameters outside of your application code.
Benefits of LangGraph
LangGraph is a widely used framework for building stateful, multi-agent AI applications. LangGraph excels at:
- Directed graph workflows: define complex agent flows as graphs with nodes and edges
- State management: maintain conversation context and agent state across turns
- Multi-agent orchestration: coordinate multiple specialized agents in a single workflow
- Conditional routing: branch execution based on agent outputs or external conditions
- Human-in-the-loop: pause execution for human approval or input
- Checkpointing: save and resume agent state for long-running workflows
LangGraph’s create_react_agent function works naturally with AI Config agent mode, where the instructions field provides the system prompt for your agents.
With this integration, you can:
- Switch between models for your agents without code changes
- Update agent instructions and system prompts at runtime
- Target different agent configurations to different users
- Track token usage, latency, and success rates per agent
- Compare agent performance across variations
- Coordinate multi-agent workflows with centralized configuration
This guide uses agent mode for LangGraph workflows. Agent mode uses a single instructions string rather than a messages array, which maps directly to LangGraph’s agent prompts. To learn more, read Agents in AI Configs.
Additional resources for AI Configs
If you are not familiar with AI Configs, start with the Quickstart for AI Configs and return to this guide when you are ready for a LangGraph-specific example.
You can find reference guides for each of the AI SDKs at AI SDKs. For Python-specific details, read the Python AI SDK reference.
Prerequisites
To complete this guide, you must have the following prerequisites:
- A LaunchDarkly account, including:
- A LaunchDarkly SDK key for your environment.
- A role that allows AI Config actions. The LaunchDarkly Project Admin, Maintainer, and Developer project roles, as well as the Admin and Owner base roles, include this ability.
- A Python development environment. The LaunchDarkly Python AI SDK is compatible with Python 3.8.0 and higher.
- LangGraph installed in your project.
- An API key for your chosen model provider (OpenAI, Anthropic, or another supported provider).
Concepts
Before you begin, review these key concepts.
LangGraph agents
LangGraph provides a framework for building agent workflows as directed graphs. The create_react_agent function creates a ReAct-style agent that can use tools and maintain state across conversation turns. Agents receive a system prompt that defines their behavior and capabilities.
Agent mode AI Configs
Agent mode AI Configs use an instructions field instead of a messages array. This single instruction string serves as the system prompt for your agent. Agent mode is ideal for:
- Multi-step agent workflows
- Tool-using agents
- Persistent agent sessions
The agent_config function
The agent_config function retrieves the AI Config variation for a given context. It returns an AIAgentConfig object that includes the customized instructions, model configuration, and a tracker property for recording metrics. Call this function each time you create an agent so LaunchDarkly can evaluate targeting and return the current configuration.
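As a minimal sketch, fetching the agent config per request might look like the following. The AI Config key (`langgraph-agent`), the context attributes, and the exact `agent_config` signature are assumptions to verify against the Python AI SDK reference:

```python
def get_langgraph_agent_config(user_key: str):
    """Fetch the current AI Config variation for a user context.

    Import paths and the exact agent_config signature may differ by
    SDK version; treat this as a sketch, not the definitive API.
    """
    import ldclient
    from ldclient import Context
    from ldai.client import LDAIClient

    ai_client = LDAIClient(ldclient.get())
    context = Context.builder(user_key).kind("user").build()

    # Evaluates targeting for this context and returns an AIAgentConfig
    # carrying the instructions, model configuration, and tracker.
    return ai_client.agent_config("langgraph-agent", context)
```

Call this function each time you create an agent, rather than caching the result, so targeting changes take effect immediately.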
Step 1: Install dependencies
Install the LaunchDarkly SDKs and LangGraph packages.
Install the LangChain provider package for your model. Common provider packages include:
- `langchain-openai` for OpenAI models
- `langchain-anthropic` for Anthropic models
- `langchain-google-genai` for Google Gemini models
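The install commands might look like the following; the LaunchDarkly package names are the ones published on PyPI, and `langchain-openai` stands in for whichever provider package you chose above:

```shell
pip install launchdarkly-server-sdk launchdarkly-server-sdk-ai langgraph
pip install langchain-openai  # or langchain-anthropic, langchain-google-genai
```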
Step 2: Create an AI Config in LaunchDarkly
Create an AI Config in agent mode to store your agent configuration.
To create an AI Config:
- In the left navigation, click Create and select AI Config.
- In the “Create AI Config” dialog, select Agent.
- Enter a name for your AI Config, for example, “LangGraph Agent.”
- Click Create.
Then, create a variation:
- On the Variations tab, replace “Untitled variation” with a variation name, such as “GPT-4o Agent”.
- Click Select a model and choose the `gpt-4o` OpenAI model.
- Click Parameters and set `temperature` to `0.7` and `max_tokens` to `2000`.
- In the Instructions field, enter your agent’s system prompt.
- Click Review and save.
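For example, the agent’s instructions might read as follows (this sample prompt is an illustration, not content from an existing AI Config):

```
You are a helpful research assistant. Use the available tools to look up
information before answering. Be concise, and say which tool you used.
```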

Step 3: Set up targeting rules
Configure targeting rules to control which users receive the AI Config variation.
To set up the default rule:
- Select the Targeting tab for your AI Config.
- In the “Default rule” section, click Edit.
- Configure the default rule to serve your variation, such as “GPT-4o Agent”.
- Click Review and save.

The AI Config is enabled by default. After you add the integration code to your application, LaunchDarkly serves the configured variation to your users.
Step 4: Integrate LangGraph with AI Configs
The integration involves these key steps:
- Define the tools your agent can call.
- Initialize the LaunchDarkly SDK and AI client.
- Get the agent config using `agent_config()` (Python) or `aiClient.agentConfig()` (Node.js).
- Build a LangChain model from the AI Config using the LaunchDarkly LangChain provider.
- Create a LangGraph ReAct agent with a `MemorySaver` checkpointer.
- Invoke the agent and track metrics with the config’s tracker.
Define the agent’s tools.
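As a sketch, a tool can be a plain Python function with a type-hinted signature and a docstring, which LangGraph's prebuilt agent can accept in its tools list. The weather lookup here is a hypothetical stand-in for a real implementation:

```python
def get_weather(city: str) -> str:
    """Return a short weather summary for a city.

    A real agent would call a weather API here; the canned responses
    keep this sketch self-contained.
    """
    canned = {"paris": "18°C and partly cloudy", "tokyo": "22°C and clear"}
    return canned.get(city.lower(), f"No forecast available for {city}")

# The agent's tool list, passed to create_react_agent later.
tools = [get_weather]
```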
Initialize the LaunchDarkly SDK and AI client, fetch the agent config, build the LangChain model with create_langchain_model (Python) or LangChainProvider.createLangChainModel (Node.js), and create the ReAct agent.
The provider reads the model name, provider, and all parameters (temperature, max tokens, and others) from the variation, maps LaunchDarkly provider names to LangChain equivalents — for example, "gemini" to "google_genai" — and returns a configured chat model. The same AI Config key can serve OpenAI, Anthropic, or any other provider-backed variation from the same code path.
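A sketch of that wiring follows. The import path for `create_langchain_model`, the `agent_config` signature, and the AI Config key are assumptions to check against the SDK reference; the SDK imports are kept inside the function so the setup reads as one unit:

```python
def build_agent(user_key: str, tools: list):
    """Build a LangGraph ReAct agent from the current AI Config variation."""
    import ldclient
    from ldclient import Context
    from ldclient.config import Config
    from ldai.client import LDAIClient
    from langgraph.checkpoint.memory import MemorySaver
    from langgraph.prebuilt import create_react_agent

    ldclient.set_config(Config("YOUR_SDK_KEY"))
    ai_client = LDAIClient(ldclient.get())
    context = Context.builder(user_key).kind("user").build()

    # Omitting the fallback argument means LaunchDarkly returns a disabled
    # config if the flag is off or the SDK is unreachable; pass an explicit
    # fallback to keep the agent running during outages.
    agent_cfg = ai_client.agent_config("langgraph-agent", context)

    # create_langchain_model reads the model name, provider, and parameters
    # from the variation; the import path may differ by package version.
    from ldai.langchain import create_langchain_model
    model = create_langchain_model(agent_cfg)

    agent = create_react_agent(
        model,
        tools=tools,
        # The kwarg name varies across langgraph versions
        # (earlier releases used state_modifier instead of prompt).
        prompt=agent_cfg.instructions,
        checkpointer=MemorySaver(),  # keeps state across turns per thread_id
    )
    return agent, agent_cfg.tracker
```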
Invoke the agent and track metrics. The Python example uses a track_langgraph_metrics helper that calls tracker.track_duration_of, then iterates over every message in the result and calls get_ai_metrics_from_response to aggregate token counts before calling tracker.track_tokens. The Node.js example uses tracker.trackMetricsOf with a langgraphMetrics converter that sums token counts across all messages the agent produced, with multiple fallback field names to handle different provider response shapes.
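The aggregation those helpers perform can be sketched as a pure function: walk every message the agent produced and sum token counts, trying several field names because providers shape their responses differently. The field names below are common LangChain locations (`usage_metadata`, `response_metadata["token_usage"]`) and are assumptions to check against your provider:

```python
def sum_token_usage(messages) -> dict:
    """Aggregate input/output/total token counts across agent messages.

    Each message may expose usage under usage_metadata (a LangChain
    AIMessage convention) or under response_metadata["token_usage"];
    fall back across both shapes and both key-name styles.
    """
    totals = {"input": 0, "output": 0, "total": 0}
    for msg in messages:
        usage = getattr(msg, "usage_metadata", None)
        if not usage:
            meta = getattr(msg, "response_metadata", None) or {}
            usage = meta.get("token_usage") or {}
        totals["input"] += usage.get("input_tokens", usage.get("prompt_tokens", 0))
        totals["output"] += usage.get("output_tokens", usage.get("completion_tokens", 0))
    totals["total"] = totals["input"] + totals["output"]
    return totals
```

In the helpers described above, a sum like this feeds `tracker.track_tokens`, while `tracker.track_duration_of` wraps the agent invocation itself.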
The fallback argument to agent_config / agentConfig is optional. When omitted, LaunchDarkly returns a disabled config if the flag is off or the SDK is unreachable. Pass an explicit fallback to keep the agent running during outages.
Complete example
Here is a complete working example that combines all the steps.
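A condensed end-to-end sketch is shown below. The SDK key environment variable, the AI Config key `langgraph-agent`, the `create_langchain_model` import path, and the weather tool are all placeholders or assumptions to verify against the Python AI SDK reference:

```python
import os


def get_weather(city: str) -> str:
    """Return a short weather summary for a city (placeholder implementation)."""
    return f"Sunny in {city}"


def main() -> None:
    import ldclient
    from ldclient import Context
    from ldclient.config import Config
    from ldai.client import LDAIClient
    from ldai.langchain import create_langchain_model  # import path may differ
    from langgraph.checkpoint.memory import MemorySaver
    from langgraph.prebuilt import create_react_agent

    # Initialize the LaunchDarkly SDK and AI client.
    ldclient.set_config(Config(os.environ["LD_SDK_KEY"]))
    ai_client = LDAIClient(ldclient.get())
    context = Context.builder("user-123").kind("user").build()

    # Fetch the agent config and build the LangChain model from it.
    agent_cfg = ai_client.agent_config("langgraph-agent", context)
    model = create_langchain_model(agent_cfg)

    # Create the ReAct agent with the variation's instructions as its prompt.
    agent = create_react_agent(
        model,
        tools=[get_weather],
        prompt=agent_cfg.instructions,
        checkpointer=MemorySaver(),
    )

    # track_duration_of records latency; token counts are aggregated from
    # the messages in the result, as described in the previous step.
    result = agent_cfg.tracker.track_duration_of(
        lambda: agent.invoke(
            {"messages": [("user", "What's the weather in Paris?")]},
            config={"configurable": {"thread_id": "demo-thread"}},
        )
    )
    print(result["messages"][-1].content)
    ldclient.get().close()


# Only run when an SDK key is configured.
if os.environ.get("LD_SDK_KEY"):
    main()
```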
Step 5: Monitor results
View metrics for your AI Config in the LaunchDarkly UI.
To monitor results:
- In LaunchDarkly, navigate to your AI Config.
- Select the Monitoring tab.
LaunchDarkly displays metrics including:
- Generation count
- Token usage (input, output, total)
- Time to generate
- Error rate
Use these metrics to compare agent performance, identify cost differences, and make data-driven decisions about which configuration to use for different user segments. To learn more, read Monitor AI Configs.
To view aggregated metrics across all your AI Configs, navigate to Insights in the left navigation under the AI section. The Insights overview page displays cost, latency, error rate, invocation counts, and model distribution across your organization. To learn more, read about AI insights.

Agent mode vs completion mode
Agent mode uses a single instructions string that maps directly to an agent's system prompt, while completion mode uses a messages array for conversational prompt templates. For LangGraph agent workflows, use agent mode. To learn more, read Agents in AI Configs.
Conclusion
In this guide, you learned how to integrate LangGraph agent workflows with LaunchDarkly AI Configs to manage agent configuration outside of your application code.
You can now:
- Change agent models and instructions without redeploying your application
- Target different agent configurations to different users based on context attributes
- Track and compare agent performance across variations
- Maintain conversation state with LangGraph checkpointing
- Coordinate multi-agent workflows with centralized configuration
To explore additional capabilities, read:
- Run experiments with AI Configs to compare agent variations using statistical analysis
- Target with AI Configs to serve different agents to different user segments
- Agent graphs in AI Configs to orchestrate multi-agent workflows
For more AI Configs examples, read the other AI Configs guides in this section.
Want to know more? Start a trial.
Your 14-day trial begins as soon as you sign up. Get started in minutes using the in-app Quickstart. You'll discover how easy it is to release, monitor, and optimize your software.