LLM Product Development with LaunchDarkly Agent Skills in Claude Code, Cursor, or Windsurf

Published February 13, 2026


by Scarlett Attensil

You’ve got an idea for a side hustle. Maybe it’s a micro-SaaS, a Chrome extension, or a tool you wish existed. The problem isn’t building it. It’s everything before that: validating the idea, writing copy that converts, and picking a stack you won’t regret in six months.

What if you could just tell your AI coding assistant what you need?

“Create three agents: one that validates startup ideas, one that writes landing pages, and one that recommends tech stacks. Give each one the right tools and variables.”

That’s it. No clicking through UIs. No YAML files. No manual setup. Your assistant reads the LaunchDarkly Agent Skills, understands how to use the API, and builds everything for you. In under ten minutes, you have a working multi-agent system with targeting, metrics, and instant model swapping built in.

That’s what we’re building in this tutorial.

Why LaunchDarkly

Here’s what happens without centralized AI config management: your prompts live in code, scattered across files. Want to try GPT instead of Claude? Redeploy. Want to A/B test two different system prompts? Write custom infrastructure. Want to kill a misbehaving agent at 2am? Wake up a developer.

LaunchDarkly AI Configs fix this:

  • Change models instantly: swap Sonnet 4.5 for GPT-5.2 from the dashboard, no deploy needed
  • A/B test everything: split traffic between prompt variations and measure which converts
  • Kill switch: disable any agent in one click when it starts hallucinating
  • Cost visibility: track tokens and latency per agent, per variation, per user segment

Agent Skills are the shortcut. Instead of learning the LaunchDarkly API or navigating the UI, you just describe what you want in a sentence. Your AI coding assistant reads the skills and handles the rest: creating projects, configuring agents, attaching tools, setting up targeting. You talk, it builds.

Overview

LaunchDarkly Agent Skills speed up LLM product development by letting you describe what you want in plain English. Install them once, and your AI coding assistant knows how to create AI Configs, set up targeting, and wire up metrics, all from natural language prompts.

What you’ll build

A “Side Project Launcher,” a multi-agent system that helps you go from idea to shipped product:

  • Idea Validator: researches competitors, analyzes market gaps, scores viability
  • Landing Page Writer: generates headlines, copy, and CTAs based on your value prop
  • Tech Stack Advisor: recommends frameworks, databases, and hosting based on your requirements
  • Ready-to-use SDK integration code

Prerequisites

  • LaunchDarkly account (free trial works)
  • Claude Code, Cursor, VS Code, or Windsurf installed
  • LaunchDarkly API access token

Expected outcome

After 5–10 minutes, you’ll have a working AI Config project in LaunchDarkly and the SDK integration to connect it to LangGraph or any other framework.

Start your free trial

Want to follow along? Start your 14-day free trial of LaunchDarkly. No credit card required.

30-second quickstart

If you just want to get started, here’s the fastest path:

1. Install skills:

$ npx skills add launchdarkly/agent-skills

Or ask your editor: “Download and install skills from https://github.com/launchdarkly/agent-skills”

Restart your editor after installing.

2. Set your token:

$ export LAUNCHDARKLY_ACCESS_TOKEN="api-xxxxx"

3. Build something:

Create AI Configs for a "Side Project Launcher" with three configs:
idea-validator, landing-page-writer, and tech-stack-advisor. Add tools for
competitor research, copy generation, and stack recommendations.
Put them in a new project called side-project-launcher.

Expected output: The assistant creates everything and gives you links like:

Created project: side-project-launcher
Created AI Configs:
- idea-validator: https://app.launchdarkly.com/side-project-launcher/production/ai-configs/idea-validator
- landing-page-writer: https://app.launchdarkly.com/side-project-launcher/production/ai-configs/landing-page-writer
- tech-stack-advisor: https://app.launchdarkly.com/side-project-launcher/production/ai-configs/tech-stack-advisor

That’s it. Three agents, three tools, full targeting and metrics. Done. The rest of this tutorial walks through each step if you want the details.

Who should use Agent Skills

Agent Skills are for anyone who wants to set up LaunchDarkly AI Configs without learning the API or clicking through the UI. You describe what you need in natural language, and your AI coding assistant handles the implementation.

Install Agent Skills in Claude Code, Cursor, VS Code, or Windsurf

Agent Skills work with any editor that supports the Agent Skills Open Standard.

Step 1: Install the skills

You have two options:

Option A: Use skills.sh (recommended)

skills.sh is an open directory for agent skills. Install LaunchDarkly skills with one command:

$ npx skills add launchdarkly/agent-skills

Option B: Ask your AI assistant

Open your editor and ask:

Download and install skills from https://github.com/launchdarkly/agent-skills

Both methods install the same skills.

Step 2: Restart your editor

Close and reopen your editor. The skills load on startup.

How to verify: Type /aiconfig in Claude Code. You should see autocomplete suggestions. In Cursor, ask “what LaunchDarkly skills do you have?” and the assistant should list them.

Step 3: Set your API token

$ export LAUNCHDARKLY_ACCESS_TOKEN="api-xxxxx"

Get your token from LaunchDarkly Authorization settings.

Required permissions: the writer role, or a custom role with the createAIConfig and createProject permissions.
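
To sanity-check the token before building anything, you can hit the REST API directly. This is a minimal sketch, assuming the requests package and the standard /api/v2/projects endpoint; it's just a quick validity check, not part of the Agent Skills workflow.

import os

import requests

# Quick token check: a 200 response means the token is valid, 401 means it isn't.
# LaunchDarkly's REST API takes the access token directly in the Authorization header.
token = os.environ["LAUNCHDARKLY_ACCESS_TOKEN"]
resp = requests.get(
    "https://app.launchdarkly.com/api/v2/projects",
    headers={"Authorization": token},
)
print(resp.status_code)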

Build a multi-agent project

Now let’s build something real: a Side Project Launcher that helps you validate ideas, write landing pages, and pick the right tech stack. Tell the assistant:

Create AI Configs for a "Side Project Launcher" with three configs:
1. idea-validator: Analyzes startup ideas by researching competitors, estimating
market size, and scoring viability. Use variables for {{idea}}, {{target_audience}},
and {{problem_statement}}. Give it tools for web search and competitor analysis.
2. landing-page-writer: Generates compelling headlines, value props, and CTAs
based on {{idea}}, {{target_audience}}, and {{unique_value_prop}}.
Give it tools for copy generation and A/B test suggestions.
3. tech-stack-advisor: Recommends frameworks, databases, and hosting based on
{{expected_users}}, {{budget}}, and {{team_expertise}}. Give it a tool for
stack recommendations.
Put them in a new project called side-project-launcher.

What the assistant creates

The assistant uses several skills automatically:

  1. aiconfig-projects: creates the LaunchDarkly project
  2. aiconfig-create: builds each agent configuration with variables
  3. aiconfig-tools: defines tools for function calling

Expected output:

Creating project: side-project-launcher
Creating AI Config: idea-validator
- Model: anthropic.claude-sonnet-4-20250514
- Variables: idea, target_audience, problem_statement
- Instructions: "Validate the idea: {{idea}}. Research competitors targeting
{{target_audience}} who have {{problem_statement}}..."
- Tools: web_search, competitor_analysis
Creating AI Config: landing-page-writer
- Model: anthropic.claude-sonnet-4-20250514
- Variables: idea, target_audience, unique_value_prop
- Instructions: "Write landing page copy for {{idea}}. The target audience is
{{target_audience}}. Lead with: {{unique_value_prop}}..."
- Tools: generate_copy, suggest_ab_tests
Creating AI Config: tech-stack-advisor
- Model: anthropic.claude-sonnet-4-20250514
- Variables: expected_users, budget, team_expertise
- Instructions: "Recommend a tech stack for {{expected_users}} users,
{{budget}} budget, team knows {{team_expertise}}..."
- Tools: recommend_stack
Done! View your project:
https://app.launchdarkly.com/side-project-launcher/production/ai-configs

Claude Code showing created AI Configs with models, tools, variables, and SDK keys

Claude Code creates the configs and provides SDK keys

The variables ({{idea}}, {{target_audience}}, etc.) get filled in at runtime when you call the SDK. That’s how each user gets personalized output.
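
Here's a minimal sketch of what that looks like at call time. The SDK setup itself is covered under “Connect to your framework” below, and the values are just examples:

# The variables passed here replace {{idea}}, {{target_audience}}, and
# {{problem_statement}} in the instructions stored in LaunchDarkly.
# ai_client, Context, and AIAgentConfigDefault come from the SDK setup shown later.
config = ai_client.agent_config(
    "idea-validator",
    Context.builder("user-123").build(),
    AIAgentConfigDefault(enabled=False),  # fallback if the config is disabled
    {
        "idea": "AI-powered meal planner",
        "target_audience": "busy parents",
        "problem_statement": "no time to plan healthy dinners",
    },
)
print(config.instructions)  # instructions with the variables already filled in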

What it looks like in LaunchDarkly

AI Configs list in LaunchDarkly showing the three agents: idea-validator, landing-page-writer, and tech-stack-advisor

AI Configs list showing the three agents created by Agent Skills

After creation, your LaunchDarkly project contains:

  • 3 AI Configs with instructions, model settings, and variables
  • 3 tools with parameter definitions ready for function calling (an example schema follows this list)
  • Default targeting serving the configuration to all users
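
For reference, a tool definition is essentially a name, a description, and a JSON Schema for its parameters. The sketch below is a hypothetical shape for the competitor_analysis tool; the exact definition the skill creates may differ, so check the tool in your LaunchDarkly project.

# Hypothetical sketch of a tool definition: name, description, and JSON Schema parameters.
# The actual definition generated by the aiconfig-tools skill may differ.
competitor_analysis_tool = {
    "name": "competitor_analysis",
    "description": "Find and summarize competitors for a given product idea.",
    "parameters": {
        "type": "object",
        "properties": {
            "idea": {"type": "string", "description": "The product idea to research"},
            "max_results": {"type": "integer", "description": "How many competitors to return"},
        },
        "required": ["idea"],
    },
}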

Default targeting settings showing the configuration served to all users

Default targeting serves the configuration to all users

Each agent has its own configuration with instructions, variables, and tools. Here’s the idea-validator:

Idea validator AI Config showing instructions, model settings, and variables

Idea validator config with instructions, variables, and tools

The landing-page-writer and tech-stack-advisor follow the same pattern with their own instructions and tools.

Run the Side Project Launcher

The full working code is available on GitHub: launchdarkly-labs/side-project-researcher

Clone it and run:

$ git clone https://github.com/launchdarkly-labs/side-project-researcher.git
$ cd side-project-researcher
$ pip install -r requirements.txt
$ export LAUNCHDARKLY_SDK_KEY="sdk-xxxxx"
$ python side_project_launcher_langgraph.py

The app prompts you for your idea details:

Terminal prompts asking for idea, target audience, problem statement, and tech requirements

The app prompts you for your side project details

Then each agent runs in sequence, fetching its config from LaunchDarkly and generating output:

Idea validator agent output with market analysis and viability score

Idea validator output with market analysis

Tech stack advisor output recommending frameworks and infrastructure

Tech stack advisor recommending frameworks and infrastructure

Connect to your framework

The AI Config stores your model, instructions, and tools. The SDK fetches the config and handles variable substitution automatically.

Initialize the SDK

import os

import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai.client import LDAIClient, AIAgentConfigDefault

# Initialize once at startup
SDK_KEY = os.environ.get('LAUNCHDARKLY_SDK_KEY')
ldclient.set_config(Config(SDK_KEY))
ld_client = ldclient.get()
ai_client = LDAIClient(ld_client)

Fetch agent configs

def build_context(user_id: str, **attributes):
    """Build LaunchDarkly context for targeting."""
    builder = Context.builder(user_id)
    for key, value in attributes.items():
        builder.set(key, value)
    return builder.build()

def get_agent_config(config_key: str, context: Context, variables: dict = None):
    """Get agent-mode AI Config from LaunchDarkly."""
    fallback = AIAgentConfigDefault(enabled=False)
    return ai_client.agent_config(config_key, context, fallback, variables or {})

Wire it to LangGraph

LangGraph orchestrates multi-agent workflows as a graph of nodes, but you can use any orchestrator—CrewAI, LlamaIndex, Bedrock AgentCore, or custom code. To compare options, read Compare AI orchestrators.

By wiring AI Configs to each node, your agents fetch their model, instructions, and tools dynamically from LaunchDarkly. This lets you swap models, update prompts, or disable agents without touching code or redeploying.

Each agent becomes a node in your graph:

from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, SystemMessage
from langgraph.graph import StateGraph, END

def idea_validator_node(state: SideProjectState) -> SideProjectState:
    context = build_context(state["user_id"])
    config = get_agent_config("idea-validator", context, {
        "idea": state["idea"],
        "target_audience": state["target_audience"],
        "problem_statement": state["problem_statement"]
    })

    if config.enabled:
        llm = ChatAnthropic(model=config.model.name)
        messages = [
            SystemMessage(content=config.instructions),
            HumanMessage(content="Please validate this idea and provide your analysis.")
        ]
        response = llm.invoke(messages)
        state["idea_validation"] = response.content
        config.tracker.track_success()  # Track metrics

    return state

# Build the graph
workflow = StateGraph(SideProjectState)
workflow.add_node("validate_idea", idea_validator_node)
workflow.add_node("write_landing_page", landing_page_writer_node)
workflow.add_node("recommend_stack", tech_stack_advisor_node)

workflow.set_entry_point("validate_idea")
workflow.add_edge("validate_idea", "write_landing_page")
workflow.add_edge("write_landing_page", "recommend_stack")
workflow.add_edge("recommend_stack", END)

app = workflow.compile()

# Don't forget to flush before exiting
ld_client.flush()
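
To run the workflow end to end, invoke the compiled graph with an initial state. This is a minimal sketch: the actual SideProjectState definition lives in the GitHub repo, so the keys below are assumed from the variables used in this post.

# Minimal sketch: kick off the graph with an initial state.
# The state keys are assumed to match the variables used above; see the repo
# for the real SideProjectState TypedDict.
initial_state = {
    "user_id": "user-123",
    "idea": "AI-powered meal planner",
    "target_audience": "busy parents",
    "problem_statement": "no time to plan healthy dinners",
    "unique_value_prop": "dinner plans in under five minutes",
    "expected_users": "1,000 in the first year",
    "budget": "under $50/month",
    "team_expertise": "Python and a little React",
}
result = app.invoke(initial_state)
print(result["idea_validation"])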

To see a full example running across LangGraph, Strands, and OpenAI Swarm, read Compare AI orchestrators.

What you can do next

Once your agents are in LaunchDarkly:

  • A/B test models: split traffic between Claude and GPT to see which performs better
  • Target by segment: premium users get one model, free users get another
  • Kill switch: disable a misbehaving agent instantly from the UI (a short sketch of this follows the list)
  • Track costs: monitor tokens and latency per variation
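
Here's the kill switch from the application's side: once you disable the config in the dashboard, agent_config returns the fallback with enabled set to False, so the node skips the model call and degrades gracefully. The sketch below reuses the helpers from earlier; the landing_page_copy state key is an assumption for illustration, and the repo may name it differently.

# Minimal sketch: graceful handling when a config is killed in the dashboard.
# Reuses build_context/get_agent_config from earlier; landing_page_copy is an
# assumed state key for illustration.
def landing_page_writer_node(state: SideProjectState) -> SideProjectState:
    context = build_context(state["user_id"])
    config = get_agent_config("landing-page-writer", context, {
        "idea": state["idea"],
        "target_audience": state["target_audience"],
        "unique_value_prop": state["unique_value_prop"],
    })

    if not config.enabled:
        # Kill switch flipped in the LaunchDarkly UI: skip the model entirely.
        state["landing_page_copy"] = "Landing page copy is temporarily unavailable."
        return state

    llm = ChatAnthropic(model=config.model.name)
    response = llm.invoke([
        SystemMessage(content=config.instructions),
        HumanMessage(content="Write the landing page copy."),
    ])
    state["landing_page_copy"] = response.content
    config.tracker.track_success()
    return state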

To learn more about targeting and experimentation, read AI Configs Best Practices.

FAQ

Do I need Claude Code, or does this work in Cursor/Windsurf?

Agent Skills work in any editor that supports the Agent Skills Open Standard. This includes Claude Code, Cursor, VS Code, Windsurf, and others. The installation process is the same.

What’s the difference between Agent Skills and the MCP server?

Both give your AI assistant access to LaunchDarkly. Agent Skills are text-based playbooks that teach the assistant workflows. The MCP server exposes LaunchDarkly’s API as tools. You can use either or both.

What permissions does my API token need?

At minimum: createProject, createAIConfig, createSegment. The easiest option is a token with the writer built-in role.

Where do I see the created AI Configs?

In the LaunchDarkly UI: go to your project, then AI Configs in the left sidebar. Each config shows its instructions, model, tools, and targeting rules.

How do I delete or reset generated configs?

In the LaunchDarkly UI, open the AI Config and click Archive (or Delete if available). Or ask the assistant: “Delete the AI Config called idea-validator in the side-project-launcher project.”

Can I use this with frameworks other than LangGraph?

Yes. The SDK returns model name, instructions, and tools as data. You wire that into whatever framework you use: CrewAI, LlamaIndex, Bedrock AgentCore, or custom code.
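
For example, skipping frameworks entirely, you can feed the config straight into a raw model client. This is a minimal sketch assuming the anthropic Python SDK with ANTHROPIC_API_KEY set; it reuses the build_context and get_agent_config helpers from earlier in this post.

import anthropic

# Minimal sketch: wire an AI Config into the raw anthropic SDK, no orchestrator.
config = get_agent_config("idea-validator", build_context("user-123"), {
    "idea": "AI-powered meal planner",
    "target_audience": "busy parents",
    "problem_statement": "no time to plan healthy dinners",
})

if config.enabled:
    client = anthropic.Anthropic()   # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model=config.model.name,     # model name comes from LaunchDarkly
        system=config.instructions,  # instructions with variables substituted
        max_tokens=1024,
        messages=[{"role": "user", "content": "Validate this idea."}],
    )
    print(response.content[0].text)
    config.tracker.track_success()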

Does this work for completion mode (chat) or just agent mode?

Both. Use ai_client.completion_config() for completion mode (chat with message arrays) or ai_client.agent_config() for agent mode (instructions for multi-step workflows). To learn more, read Agent mode vs completion mode.

Next steps