Using targeting to manage AI model usage by tier with the Python AI SDK

Overview

This guide shows how to manage AI model usage by customer tier in an OpenAI-powered application. It uses the LaunchDarkly Python AI SDK and AI configs to dynamically adjust the model used based on customer details.

Using AI configs and targeting to customize your applications means you can:

  • serve different models or messages to different customers, based on attributes of those customers. You can configure this targeting in LaunchDarkly, and update it without redeploying your application.
  • compare variations and determine which one performs better, based on satisfaction, cost, or other metrics.

This guide steps you through the process of working in your application and in LaunchDarkly to customize your application and its targeting.

If you’re not at all familiar with AI configs and would like additional explanation, you can start with the Quickstart for AI configs and come back to this guide when you’re ready for a more realistic example.

Prerequisites

To complete this guide, you must have the following prerequisites:

  • a LaunchDarkly account, including
    • a LaunchDarkly SDK key for your environment.
    • an Admin role, Owner role, or custom role that allows AI config actions.
  • a Python development environment. The LaunchDarkly Python AI SDK is compatible with Python 3.8.0 and higher.
  • an OpenAI API key. The LaunchDarkly AI SDKs provide specific functions for completions for several common AI model families, and an option to record this information yourself. This guide uses OpenAI.

Example scenario

In this example, you have an application that provides chat support. You want to use one AI model to generate content for your paying customers, and a different AI model to generate content for customers on your free tier. You also want to understand whether your paying customers are getting a better experience.

Step 1: Prepare your development environment

First, install the Python AI SDK:

Shell
pip install launchdarkly-server-sdk-ai

Then, set up credentials in your environment. The example below uses $Environment_SDK_KEY and $Environment_OPENAI_KEY to refer to your LaunchDarkly SDK key and your OpenAI key, respectively.

You can find your SDK key from the Environments list for your LaunchDarkly project. To learn how, read Copy SDK credentials for an environment.
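Rather than hard-coding credentials, you can read them from environment variables at startup. The helper and variable names below are illustrative, not part of the SDK:

```python
import os

# Hypothetical helper: read a required credential from the environment
# and fail fast with a clear error if it is missing.
def get_credential(var_name: str) -> str:
    value = os.environ.get(var_name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {var_name}")
    return value

# Example usage (variable names are placeholders for your own):
# sdk_key = get_credential("LAUNCHDARKLY_SDK_KEY")
# openai_key = get_credential("OPENAI_API_KEY")
```

Failing fast at startup makes a missing key obvious, instead of surfacing later as an opaque authentication error.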

Step 2: Initialize LaunchDarkly SDK clients

Next, import the LaunchDarkly LDAIClient into your application code and initialize it:

Python
import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai.client import LDAIClient, AIConfig, ModelConfig, LDMessage, ProviderConfig

ldclient.set_config(Config('$Environment_SDK_KEY'))

if not ldclient.get().is_initialized():
    print('SDK failed to initialize')
    exit()

print('SDK successfully initialized')

aiclient = LDAIClient(ldclient.get())

Step 3: Set up AI configs in LaunchDarkly

Next, create an AI config in the LaunchDarkly UI. AI configs are the LaunchDarkly resources that manage model configurations and messages for your generative AI applications.

To create an AI config:

  1. In LaunchDarkly, click Create and choose AI config.
  2. In the “Create AI config” dialog, give your AI config a human-readable Name, for example, “Chat bot summarizer.”
  3. Click Create.

Then, create two variations. Every AI config has one or more variations, each of which includes your AI messages and model configuration.

Here’s how:

  1. On the Variations tab of your newly created AI config, replace “Untitled variation” with a variation Name in the create panel. You’ll use this name to refer to the variation when you set up targeting rules, below. For example, you can use “Premium chat support” for one variation and “Free chat support” for the other variation.
  2. Click Select a model and select a supported OpenAI model. For example, you can use “gpt-4o” for your premium variation and “gpt-4-turbo” for your free variation.
  3. Optionally, adjust the model parameters: click Parameters to view and update model parameters. In the dialog, adjust the model parameters as needed. The Base value of each parameter is from the model settings. You can choose different values for this variation if you prefer.
  4. Add system, user, or assistant messages to define your prompt. For this example, enter a system message for each variation:
You are an expert AI assistant with comprehensive knowledge across multiple domains. Your responses should be detailed, thorough, and professional. You can:
1. Provide in-depth technical explanations
2. Offer multiple solution approaches when applicable
3. Include relevant code examples and best practices
4. Share industry insights and advanced tips
5. Suggest optimizations and improvements
6. Reference technical documentation and trusted sources
Always maintain a professional yet friendly tone, and prioritize accuracy and completeness in your responses.
If a question is unclear, ask for clarification to ensure you provide the most valuable assistance possible.
  5. Click Save changes after you create each variation.

Here’s how the two variations should look after you’ve set them up:

The "Premium chat support" variation.

The "Free chat support" variation.

Step 4: Set up targeting rules and enable AI config

Next, set up targeting rules for your AI config. These rules determine which of your customers receives which variation of your AI config.

To specify the AI config variation to use by default when the AI config is toggled on:

  1. Select the Targeting tab for your AI config.
  2. In the “Default rule” section, click Edit.
  3. Configure the default rule to serve the “Free chat support” variation.
  4. Click Review and save.

To specify a different AI config variation to use for premium customers:

  1. Select the Targeting tab for your AI config.
  2. Click the + Add rule button and select Build a custom rule.
  3. Optionally, enter a name for the rule.
  4. Leave the Context kind menu set to “user.”
  5. In the Attribute menu, type in customer_type. You’ll set this attribute in your code later.
  6. Leave the Operator menu set to “is one of.”
  7. In the Values menu, type in premium. You’ll set this value in your code later.
  8. From the Select… menu, choose the “Premium chat support” variation.
  9. Click Review and save.

Finally, toggle the AI config to On. Then click Review and save.

Here’s what the Targeting tab of your AI config should look like:

The "Targeting" tab of your AI config.
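Conceptually, the rule you just configured behaves like the following check. This is a plain-Python sketch for illustration only; the actual evaluation happens inside LaunchDarkly when your application calls the SDK:

```python
# Illustrative sketch of the targeting logic configured above.
# Contexts with customer_type == "premium" match the custom rule;
# everything else falls through to the default rule.
def variation_for(context_attributes: dict) -> str:
    if context_attributes.get("customer_type") == "premium":
        return "Premium chat support"
    return "Free chat support"
```

Because this logic lives in LaunchDarkly rather than in your code, you can change which customers receive which variation without redeploying your application.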

Step 5: Customize the AI config

In your code, use the config function in the LaunchDarkly AI SDK to customize the AI config. You need to call config each time you generate content from your AI model.

The config function returns the customized messages and model configuration along with a tracker instance for recording metrics. Customization is what happens when your application’s code sends the LaunchDarkly AI SDK information about a particular AI config and the end user that has encountered it in your app, and the SDK sends back the value of the variation that the end user should receive.

To call the config function, you need to provide information about the end user who is working in your application. For example, you may have this information in a user profile within your app.

You can pass the customized messages directly to your LLM in a chat completion call, and record metrics for that call using the tracker's track_openai_metrics function.

Here’s how:

Example application
#... Existing code from Step 2, above
import openai

# OpenAI API key
openai.api_key = "$Environment_OPENAI_KEY"

# The context describes the end user currently working in your application.
# The targeting rules for your AI config can use any context attributes.
# This example checks 'customer_type' in one of the targeting rules.
context = Context.builder('context-key-123abc') \
    .kind('user') \
    .set('name', 'Sandy') \
    .set('customer_type', 'premium') \
    .build()

aiclient = LDAIClient(ldclient.get())

# Fallback values, in case you cannot reach LaunchDarkly
fallback_value = AIConfig(
    model=ModelConfig(name='my-default-model', parameters={'name': 'My default model'}),
    messages=[LDMessage(role='system', content='')],
    provider=ProviderConfig(name='my-default-provider'),
    enabled=True,
)

# Get the AI config from LaunchDarkly
def get_prompt_and_model():
    try:
        config, tracker = aiclient.config("chat-bot-summarizer", context, fallback_value)
        return config, tracker
    except Exception as e:
        print(f"Error retrieving AI config: {e}")
        return fallback_value, None

# Perform a chat completion call
def perform_chat():
    ai_config, tracker = get_prompt_and_model()

    # Transform the messages into OpenAI's format
    messages = [{"role": msg.role, "content": msg.content} for msg in ai_config.messages]

    try:
        # Track metrics using the AI SDK tracker
        completion = tracker.track_openai_metrics(
            lambda: openai.chat.completions.create(
                model=ai_config.model.name,
                messages=messages,
            )
        )
        print(completion.choices[0].message.content)
    except Exception as e:
        print(f"Error during chat completion: {e}")

if __name__ == "__main__":
    perform_chat()
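The try/except pattern in get_prompt_and_model can be generalized into a small reusable helper. This is a sketch, not part of the LaunchDarkly SDK:

```python
from typing import Callable, TypeVar

T = TypeVar("T")

# Hypothetical helper: call a primary function and, if it raises,
# log the error and return a caller-supplied fallback value instead.
def with_fallback(primary: Callable[[], T], fallback: T) -> T:
    try:
        return primary()
    except Exception as e:
        print(f"Falling back after error: {e}")
        return fallback
```

For example, you might wrap the SDK call as `with_fallback(lambda: aiclient.config("chat-bot-summarizer", context, fallback_value), (fallback_value, None))`, so your chat feature degrades gracefully when LaunchDarkly is unreachable.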

Step 6: Monitor results

As customers encounter chat support in your application, LaunchDarkly monitors the performance of your AI configs: the tracker returned by the config function automatically records various metrics. To view them, select the Monitoring tab for your AI config in LaunchDarkly.

In this example, you can review the results to determine:

  • which support option provides higher satisfaction for customers
  • which support option uses more tokens

You could use this information to make a business decision about whether the performance differences are worth the cost differences of running each model for your different customer tiers.
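As a rough illustration of that decision, you could combine the token and satisfaction metrics into a cost-per-satisfied-conversation figure. The numbers and per-token prices below are entirely made up, standing in for the real metrics from the Monitoring tab:

```python
# Illustrative only: compare cost per satisfied conversation across tiers,
# using invented token counts, prices, and satisfaction counts.
def cost_per_satisfied_chat(total_tokens: int, price_per_1k_tokens: float,
                            satisfied_chats: int) -> float:
    total_cost = (total_tokens / 1000) * price_per_1k_tokens
    return total_cost / satisfied_chats

# Hypothetical monthly figures for each variation:
premium = cost_per_satisfied_chat(total_tokens=500_000,
                                  price_per_1k_tokens=0.01,
                                  satisfied_chats=450)
free = cost_per_satisfied_chat(total_tokens=400_000,
                               price_per_1k_tokens=0.002,
                               satisfied_chats=300)
```

If the premium variation's cost per satisfied conversation is only modestly higher while satisfaction is clearly better, the more expensive model may be worth it for that tier.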

The "Monitoring" tab for an AI config.

Conclusion

In this guide, you learned how to manage AI model usage by customer tier in an OpenAI-powered application, and how to review the performance of those models based on customer feedback and token usage.

For additional examples, read the other AI configs guides in this section. To learn more, read AI configs and AI SDKs.

Want to know more? Start a trial.
Your 14-day trial begins as soon as you sign up. Get started in minutes using the in-app Quickstart. You'll discover how easy it is to release, monitor, and optimize your software.
