Smart AI Agent Targeting with MCP Tools

Published September 22nd, 2025


by Scarlett Attensil

Overview

Here’s what nobody tells you about multi-agent systems: the hard part isn’t building them but making them profitable. One misconfigured model serving enterprise features to free users can burn $20K in a weekend. Meanwhile, you’re manually juggling dozens of requirements for different user tiers, regions, and privacy regimes, and each one is a potential failure point.

Part 2 of 3 of the series: Chaos to Clarity: Defensible AI Systems That Deliver on Your Goals

The solution? LangGraph multi-agent workflows controlled by LaunchDarkly AI Config targeting rules that intelligently route users: paid customers get premium tools and models, free users get cost-efficient alternatives, and EU users get Mistral for enhanced privacy. Use the LaunchDarkly REST API to set up a custom variant-targeting matrix in 2 minutes instead of spending hours setting it up manually.

What You’ll Build Today

In the next 18 minutes, you’ll transform your basic multi-agent system with:

  • Business Tiers & MCP Integration: Free users get internal keyword search, Paid users get premium models with RAG, external research tools and expanded tool call limits, all controlled by LaunchDarkly AI Configs
  • Geographic Targeting: EU users automatically get Mistral and Claude models (enhanced privacy), other users get cost-optimized alternatives
  • Smart Configuration: Set up complex targeting matrices with LaunchDarkly segments and targeting rules

Prerequisites

βœ… Part 1 completed with exact naming:

  • Project: multi-agent-chatbot
  • AI Configs: supervisor-agent, security-agent, support-agent
  • Tools: search_v2, reranking
  • Variations: supervisor-basic, pii-detector, rag-search-enhanced

πŸ”‘ Add to your .env file:

LD_API_KEY=your-api-key # Get from LaunchDarkly settings
MISTRAL_API_KEY=your-key # Get from console.mistral.ai (free, requires phone + email validation)

Getting Your LaunchDarkly API Key

The automation scripts in this tutorial use the LaunchDarkly REST API to programmatically create configurations. Here’s how to get your API key:

To get your LaunchDarkly API key, open Organization Settings by clicking the gear icon (βš™οΈ) in the left sidebar of your LaunchDarkly dashboard, then click β€œAuthorization” in the settings menu. In the β€œAccess tokens” section, click β€œCreate token”.


API Token Creation

Click 'Create token' in the Access tokens section

When configuring your token, give it a descriptive name like β€œmulti-agent-chatbot”, select β€œWriter” as the role (required for creating configurations), use the default API version (latest), and leave β€œThis is a service token” unchecked for now.


Name API Token

Configure your token with a descriptive name and Writer role

After configuring the settings, click β€œSave token” and immediately copy the token value. This is IMPORTANT because it’s only shown once!


Copy API Token

Copy the token value immediately - it's only shown once

Finally, add the token to your environment:

# Add this line to your .env file
LD_API_KEY=your-copied-api-key-here

Security Note: Keep your API key private and never commit it to version control. The token allows full access to your LaunchDarkly account.
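
If your scripts load credentials in Python rather than from the shell, a minimal sketch like the one below keeps the key out of source control. It assumes the python-dotenv package; the bootstrap script may load its environment differently:

import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env in the current directory

LD_API_KEY = os.environ.get("LD_API_KEY")
if not LD_API_KEY:
    raise RuntimeError("LD_API_KEY not set; add it to your .env file")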

Step 1: Add External Research Tools (4 minutes)

Your agents need more than just your internal documents. Model Context Protocol (MCP) connects AI assistants to live external data, turning your agents into orchestrators of your digital infrastructure: they can tap into databases, communication tools, development platforms, and any system that matters to your business. MCP tools run as separate servers that your agents call when needed.

The MCP Registry serves as a community-driven directory for discovering available MCP servers - like an app store for MCP tools. For this tutorial, we’ll use manual installation since our specific academic research servers (ArXiv and Semantic Scholar) aren’t yet available in the registry.

Install external research capabilities:

# Install ArXiv MCP server for academic paper search
uv tool install arxiv-mcp-server

# Install Semantic Scholar MCP server for citation data
git clone https://github.com/JackKuo666/semanticscholar-MCP-Server.git

MCP Tools Added:

  • arxiv_search: Live academic paper search (Paid users)
  • semantic_scholar: Citation and research database (Paid users)

These tools integrate with your agents via LangGraph while LaunchDarkly controls which users get access to which tools.
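
To make that integration concrete, here is a hedged sketch of wiring stdio MCP servers into a LangGraph agent using the langchain-mcp-adapters package. The server commands, file path, and model id are illustrative assumptions, not the tutorial’s actual wiring code:

import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

async def main():
    # Each MCP server runs as a separate process; the client speaks the
    # protocol over stdio and exposes the servers' tools to LangChain.
    client = MultiServerMCPClient({
        "arxiv": {
            "command": "arxiv-mcp-server",  # installed via `uv tool install`
            "args": [],
            "transport": "stdio",
        },
        "semantic_scholar": {
            "command": "python",
            # Hypothetical entry point inside the cloned repo
            "args": ["semanticscholar-MCP-Server/semantic_scholar_server.py"],
            "transport": "stdio",
        },
    })
    tools = await client.get_tools()  # MCP tools surfaced as LangChain tools

    # Hand the tools to a LangGraph agent; in the tutorial, LaunchDarkly
    # decides per-user which subset actually gets passed in.
    agent = create_react_agent("anthropic:claude-3-5-haiku-latest", tools)
    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "Find recent RAG papers"}]}
    )
    print(result["messages"][-1].content)

asyncio.run(main())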

Step 2: Configure with API Automation (2 minutes)

Now we’ll use programmatic API automation to configure the complete setup. The LaunchDarkly REST API lets you manage tools, segments, and AI Configs programmatically. Instead of manually creating dozens of variations in the UI, this configuration automation makes REST API calls to provision user segments, AI Config variations, targeting rules, and tools. These are the same resources you could create manually through the LaunchDarkly dashboard. Your actual chat application continues running unchanged.
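
Before running the bootstrap, you can sanity-check that your API key works with a direct REST call. A minimal sketch using LaunchDarkly’s standard projects endpoint (note that the Authorization header takes the raw token, with no Bearer prefix):

import os

import requests

resp = requests.get(
    "https://app.launchdarkly.com/api/v2/projects",
    headers={"Authorization": os.environ["LD_API_KEY"]},
)
resp.raise_for_status()  # a 401 here means a bad or under-privileged token
print([p["key"] for p in resp.json()["items"]])  # should include multi-agent-chatbot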

Configure your complete targeting matrix with one command:

cd bootstrap
uv run python create_configs.py

What the script creates:

  • 3 new tools: search_v1 (basic search), arxiv_search and semantic_scholar (MCP research tools)
  • 4 combined user segments with geographic and tier targeting rules
  • Updated AI Configs: security-agent with 2 new geographic variations
  • Complete targeting rules that route users to appropriate variations
  • Intelligently reuses existing resources: supervisor-agent, search_v2, and reranking tools from Part 1

Understanding the Bootstrap Script

The automation works by reading a YAML manifest and translating it into LaunchDarkly API calls. Here’s how the key parts work:

Segment Creation with Geographic Rules:

def create_segment(self, project_key, segment_data):
    # Step 1: Create empty segment
    payload = {
        "key": segment_data["key"],
        "name": segment_data["key"].replace("-", " ").title()
    }

    # Step 2: Add targeting rules via semantic patch
    clauses = []
    for clause in segment_data["rules"]:
        clauses.append({
            "attribute": clause["attribute"],  # "country" or "plan"
            "op": clause["op"],                # "in"
            "values": clause["values"],        # ["DE", "FR", ...] or ["free"]
            "contextKind": "user",
            "negate": clause["negate"]         # false for EU, true for non-EU
        })
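
The method above stops at building the payloads; the HTTP calls that follow look roughly like this. This is a sketch rather than the script verbatim: the create endpoint is LaunchDarkly’s documented segments API, but the rule update is shown as a JSON Patch because semantic-patch instruction names vary (the real script uses a semantic patch, per the comment above):

import requests

BASE = "https://app.launchdarkly.com/api/v2"

def create_segment_via_api(api_key, project_key, env_key, payload, clauses):
    headers = {"Authorization": api_key, "Content-Type": "application/json"}

    # Step 1: create the empty segment in the given environment
    resp = requests.post(
        f"{BASE}/segments/{project_key}/{env_key}", headers=headers, json=payload
    )
    resp.raise_for_status()

    # Step 2: append one rule containing all clauses
    resp = requests.patch(
        f"{BASE}/segments/{project_key}/{env_key}/{payload['key']}",
        headers=headers,
        json=[{"op": "add", "path": "/rules/-", "value": {"clauses": clauses}}],
    )
    resp.raise_for_status()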

Model Configuration Mapping:

# The script maps your YAML model IDs to LaunchDarkly's internal keys
model_config_key_map = {
    "claude-3-5-sonnet-20241022": "Anthropic.claude-3-7-sonnet-latest",
    "claude-3-5-haiku-20241022": "Anthropic.claude-3-5-haiku-20241022",
    "gpt-4o": "OpenAI.gpt-4o",
    "gpt-4o-mini": "OpenAI.gpt-4o-mini-2024-07-18",
    "mistral-small-latest": "Mistral.mistral-small-latest"
}

Customizing for Your Use Case:

To adapt this for your own multi-agent system:

  1. Add your geographic regions in the YAML segments:

     - key: apac-paid
       rules:
         - attribute: "country"
           values: ["JP", "AU", "SG", "KR"]  # Your APAC countries

  2. Define your business tiers:

     - attribute: "plan"
       values: ["enterprise", "professional", "starter"]  # Your pricing tiers

  3. Map your models in the script:

     "your-model-id": "Provider.your-launchdarkly-key"

The script handles the complexity of LaunchDarkly’s API while letting you define your targeting logic in simple YAML.

Validating the Bootstrap Script

Expected terminal output:

πŸš€ LaunchDarkly AI Config Bootstrap
==================================================
⚠️ IMPORTANT: This script is for INITIAL SETUP ONLY
πŸ“ After bootstrap completes:
 β€’ Make ALL configuration changes in LaunchDarkly UI
 β€’ Do NOT modify ai_config_manifest.yaml
 β€’ LaunchDarkly is your single source of truth
==================================================

πŸš€ Starting multi-agent system bootstrap (add-only)...
πŸ“¦ Project: multi-agent-chatbot

πŸ”§ Creating tools...
 βœ… Tool 'search_v1' created
 βœ… Tool 'arxiv_search' created
 βœ… Tool 'semantic_scholar' created

πŸ€– Ensuring AI configs exist...
βœ… AI Config 'supervisor-agent' exists
βœ… AI Config 'security-agent' exists
βœ… AI Config 'support-agent' exists

🧩 Creating variations...
 βœ… Variation 'strict-security' created
 βœ… Variation 'eu-free' created
 βœ… Variation 'eu-paid' created
 βœ… Variation 'other-free' created
 βœ… Variation 'other-paid' created

πŸ“¦ Creating segments (for targeting rules)...
βœ… Empty segment 'eu-free' created
 βœ… Rules added to segment 'eu-free' (final count: 1)
βœ… Empty segment 'eu-paid' created
 βœ… Rules added to segment 'eu-paid' (final count: 1)
βœ… Empty segment 'other-free' created
 βœ… Rules added to segment 'other-free' (final count: 1)
βœ… Empty segment 'other-paid' created
 βœ… Rules added to segment 'other-paid' (final count: 1)

🎯 Updating targeting rules...
βœ… Targeting rules updated for 'security-agent'
βœ… Targeting rules updated for 'support-agent'

✨ Bootstrap complete!

In your LaunchDarkly dashboard, navigate to your multi-agent-chatbot project. You should see:

  1. AI Configs tab: Three configs (supervisor-agent, security-agent, support-agent) with new variations
  2. Segments tab: Four new segments (eu-free, eu-paid, other-free, other-paid)
  3. Tools tab: Five tools total (including search_v1, arxiv_search, semantic_scholar)

Troubleshooting Common Issues:

❌ Error: β€œLD_API_KEY environment variable not set”

  • Check your .env file contains: LD_API_KEY=your-api-key
  • Verify the API key has β€œWriter” permissions in LaunchDarkly settings

❌ Error: β€œAI Config β€˜security-agent’ not found”

  • Ensure you completed Part 1 with exact naming requirements
  • Verify your project is named multi-agent-chatbot
  • Check that supervisor-agent, security-agent, and support-agent exist in your LaunchDarkly project

❌ Error: β€œFailed to create segment”

  • Your LaunchDarkly account needs segment creation permissions
  • Try running the script again; it’s designed to handle partial failures

❌ Script runs but no changes appear

  • Wait 30-60 seconds for LaunchDarkly UI to refresh
  • Check you’re looking at the correct project and environment (Production)
  • Verify your API key matches your LaunchDarkly organization

Step 3: See How Smart Segmentation Works (2 minutes)

The targeting matrix slices users along two dimensions; the sketch after these lists shows how a single user context resolves against it.

By Region:

  • EU users: Mistral for security processing + Claude for support (privacy + compliance)
  • Non-EU users: Claude for security + GPT for support (cost optimization)
  • All users: Claude for supervision and workflow orchestration

By Business Tier:

  • Free users: Basic search tools (search_v1)
  • Paid users: Full research capabilities (search_v1, search_v2, reranking, arxiv_search, semantic_scholar)

Step 4: Test Segmentation with Script (2 minutes)

The included test script simulates real user scenarios across all segments, verifying that your targeting rules work correctly. It sends actual API requests to your system and confirms each user type gets the right model, tools, and behavior.
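
If you want to probe a single segment by hand instead of running the whole suite, the request the test script sends looks roughly like the sketch below. The /chat endpoint, payload shape, and response fields are hypothetical here; check api/segmentation_test.py for the real ones:

import requests

# Hypothetical request shape; mirror whatever api/segmentation_test.py sends
resp = requests.post(
    "http://localhost:8000/chat",
    json={
        "user_id": "user_eu_paid_001",
        "user_context": {"country": "DE", "plan": "paid"},
        "message": "Search for machine learning papers",
    },
)
resp.raise_for_status()
body = resp.json()
print(body.get("model"), body.get("tools_called"))  # hypothetical response fields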

First, start your system:

# Terminal 1: Start the backend
uv run uvicorn api.main:app --reload --port 8000

# Terminal 2: Run the test script
uv run python api/segmentation_test.py

Expected test output:

πŸš€ COMPREHENSIVE TUTORIAL 2 SEGMENTATION TESTS
Testing Geographic + Business Tier Targeting Matrix
======================================================================

πŸ”„ Running: EU Paid β†’ Claude Sonnet + Full MCP Tools

============================================================
πŸ§ͺ TESTING: DE paid user (ID: user_eu_paid_001)
============================================================
πŸ“Š SUPPORT AGENT:
 Model: claude-3-7-sonnet-latest (expected: claude-3-7-sonnet-latest) βœ…
 Variation: eu-paid (expected: eu-paid) βœ…
 Tools: ['search_v1', 'search_v2', 'reranking', 'arxiv_search', 'semantic_scholar'] βœ…
 Expected: ['search_v1', 'search_v2', 'reranking', 'arxiv_search', 'semantic_scholar']
 MCP Tools: Yes (should be: Yes) βœ…

πŸ“ RESPONSE:
 Length: 847 chars
 Tools Called: ['search_v2', 'arxiv_search']
 Preview: Based on your request, I'll search both internal documentation and recent academic research...

🎯 RESULT: βœ… PASSED

πŸ”„ Running: EU Free β†’ Claude Haiku + Basic Tools
[Similar detailed output for EU Free user...]

πŸ”„ Running: US Paid β†’ GPT-4 + Full MCP Tools
[Similar detailed output for US Paid user...]

πŸ”„ Running: US Free β†’ GPT-4o Mini + Basic Tools
[Similar detailed output for US Free user...]

======================================================================
πŸ“Š FINAL RESULTS
======================================================================
βœ… PASSED: 4/4
❌ FAILED: 0/4

πŸŽ‰ ALL TESTS PASSED! LaunchDarkly targeting is working correctly.
 β€’ Geographic segmentation: Working
 β€’ Business tier routing: Working
 β€’ Model assignment: Working
 β€’ Tool configuration: Working
 β€’ MCP integration: Working

πŸ”— Next: Test manually in UI at http://localhost:8501

This confirms your targeting matrix is working correctly across all user segments!

Step 5: Experience Segmentation in the Chat UI (3 minutes)

Now let’s see your segmentation in action through the user interface. With your backend already running from Step 4, start the UI:

# Terminal 3: Start the chat interface
uv run streamlit run ui/chat_interface.py --server.port 8501

Open http://localhost:8501 and test different user types:

  1. User Dropdown: Open the left nav menu with the >> icon to find the user dropdown. Select different regions (eu, other) and plans (Free, Paid).
  2. Ask Questions: Try β€œSearch for machine learning papers.”
  3. Watch Workflow: In the server logs, watch which model and tools get used for each user type.
  4. Verify Routing: EU users get Mistral for security. Other users get GPT. Paid users get MCP tools.

Chat Interface User Selection

Select different user types to test segmentation in the chat interface

What’s Next: Part 3 Preview

In Part 3, we’ll prove what actually works using controlled A/B experiments:

Set up Easy Experiments

  • Tool Implementation Test: Compare search_v1 vs search_v2 on identical models to measure search quality impact
  • Model Efficiency Analysis: Test models with the same full tool stack to measure tool-calling precision and cost

Real Metrics You’ll Track

  • User satisfaction: thumbs up/down feedback
  • Tool call efficiency: average number of tools used per successful query
  • Token cost analysis: cost per query across different model configurations
  • Response latency: performance impact of security and tool variations

Instead of guessing which configurations work better, you’ll have data proving which tool implementations provide value, which models use tools more efficiently, and what security enhancements actually cost in performance.

The Path Forward

You’ve built something powerful: a multi-agent system that adapts to users by design. More importantly, you’ve proven that sophisticated AI applications don’t require repeated deployments; they require smart configuration.

This approach scales beyond tutorials. Whether you’re serving 100 users or 100,000, the same targeting principles apply: segment intelligently, configure dynamically, and let data guide decisions instead of assumptions.


Questions? Issues? Reach out at aiproduct@launchdarkly.com or open an issue in the GitHub repo.