Tracking AI metrics

The AI configs product is available for early access

The AI configs product is only available in early access for customers on select plans. To request early access, navigate to AI configs and join the waitlist.


The AI SDKs are designed for use with the AI configs product. The AI SDKs are currently in an alpha version.

Overview

This topic explains how to record metrics from your AI model generation, including duration, generation success and errors, output satisfaction, and several token-related metrics. This feature is available for AI SDKs only.

About AI metrics

To help you track how your AI model generation is performing, the AI SDKs provide options to record metrics from your model generation. LaunchDarkly displays these metrics on the AI config Monitoring tab in the user interface.

All SDKs include individual track* methods to record the following metrics:

  • duration
  • token usage
  • generation success
  • generation error
  • time to first token
  • output satisfaction

Additionally, some AI SDKs include provider-specific track_[model]_metrics methods. These methods take the result of the provider-specific call as a parameter, and record all of the following metrics:

  • duration
  • token usage
  • generation success
  • generation error

The provider-specific methods are a useful shorthand if you’re working with those providers. You can always call the track* methods manually to record additional metrics.

Both the individual track* methods and the provider-specific track_[model]_metrics methods are called from the tracker. The tracker is associated with a specific customization call.
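
For example, here is a minimal sketch of that relationship using the Python AI SDK. The config key, context, and fallback value are placeholders, and the feedback import and call are assumptions; the SDK-specific sections below show the exact pattern for each SDK.

# Sketch only: key, context, and fallback value are placeholders.
# The customization call returns the customized config and its tracker.
from ldai.tracker import FeedbackKind  # module path is an assumption

config, tracker = ai_client.config("my-ai-config", context, fallback_value)

# Provider-specific method: wraps the provider call and records duration,
# token usage, and generation success or error automatically.
completion = tracker.track_openai_metrics(
    lambda: openai_client.chat.completions.create(
        model=config.model.name,
        messages=[m.to_dict() for m in (config.messages or [])],
    )
)

# Individual method: record an additional metric, such as output satisfaction.
tracker.track_feedback({"kind": FeedbackKind.Positive})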

AI SDKs

This feature is available for all of the AI SDKs:

.NET AI

Use the TrackRequest function to record metrics from your AI model generation.

The tracker is returned from your call to customize the AI config, and is specific to that AI config. Make sure to call Config again each time you use the tracker and generate content from your AI model, so that your metrics are correctly associated with the customized AI config variation.
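
For reference, here is a hedged sketch of obtaining the tracker from the customization call. The client constructor, the Config signature, and the disabled fallback shown here are assumptions; check the SDK reference for exact names.

// Sketch only: names and signatures are assumptions, not the exact API.
var aiClient = new LdAiClient(ldClient);
var tracker = aiClient.Config("my-ai-config", context, LdAiConfig.Disabled);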

Here’s how:

.NET AI SDK, any model
var response = tracker.TrackRequest(Task.Run(() =>
{
    // Make request to a provider, which automatically tracks metrics in LaunchDarkly.
    // When sending the request to a provider, use details from tracker.Config.
    // For instance, you can pass tracker.Config.Model and tracker.Config.Messages.
    // Optionally, return response metadata, for example to do your own additional logging.
    //
    // CAUTION: If the call inside of Task.Run() throws an exception,
    // the SDK will re-throw that exception.

    return new Response
    {
        Usage = new Usage { Total = 1, Input = 1, Output = 1 }, /* Token usage data */
        Metrics = new Metrics { LatencyMs = 100 } /* Metrics data */
    };
}));

If you would like to do any additional tracking beyond what LaunchDarkly provides, it is your responsibility to populate the Response object with the data you want to track.

You can also use the SDK’s other Track* functions to record these metrics manually. Because the TrackRequest function expects a complete response, you may need to record metrics manually if your application requires streaming.

Each of the Track* functions sends data back to LaunchDarkly. The Monitoring tab of the AI config in the LaunchDarkly UI aggregates data from the Track* functions from across all variations of the AI config.

Here’s how to record metrics manually:

// Track your own start and stop time.

// Set duration to the time (in ms) that your AI model generation takes.
// The duration may include network latency, depending on how you calculate it.

tracker.TrackDuration(response.Metrics.LatencyMs);
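
The other Track* functions follow the same pattern. As a hedged sketch, the method names below mirror the metric list in this topic but are assumptions; consult the LDAIConfigTracker reference for exact signatures.

// Sketch only: method names are assumptions drawn from the metric list above.
tracker.TrackTokens(new Usage { Total = 780, Input = 120, Output = 660 }); // Token usage
tracker.TrackSuccess(); // Generation success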

Make sure to call Config again each time you use the tracker and generate content from your AI model.

To learn more, read LDAIConfigTracker.

Go AI

Use the TrackRequest function to record metrics from your AI model generation.

The tracker is returned from your call to customize the AI config, and is specific to that AI config. Make sure to call Config again each time you use the tracker and generate content from your AI model, so that your metrics are correctly associated with the customized AI config variation.
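
For reference, here is a hedged sketch of obtaining the tracker from the customization call. The constructor, the Config signature, and the return shape are assumptions; check the SDK reference for exact names.

// Sketch only: names, signatures, and return shape are assumptions.
aiClient := ldai.NewClient(ldClient)
config, tracker := aiClient.Config("my-ai-config", context, ldai.Disabled())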

Here’s how:

Go AI SDK, any model
response, err := tracker.TrackRequest(func(config *Config) (ProviderResponse, error) {

    // Make request to a provider, which automatically tracks metrics in LaunchDarkly.
    // When sending the request to a provider, use details from config.
    // For instance, you can pass a model parameter (config.ModelParam) or messages (config.Messages).
    // Optionally, return response metadata, for example to do your own additional logging.

    return ProviderResponse{
        Usage: TokenUsage{
            Total: 1, // Token usage data
        },
        Metrics: Metrics{
            Latency: 10 * time.Millisecond, // Metrics data
        },
    }, nil
})

Alternatively, you can use the SDK’s other Track* functions to record these metrics manually. Because the TrackRequest function expects a complete response, you may need to record metrics manually if your application requires streaming.

Each of the Track* functions sends data back to LaunchDarkly. The Monitoring tab of the AI config in the LaunchDarkly UI aggregates data from the Track* functions from across all variations of the AI config.

Here’s how to record metrics manually:

// Track your own start and stop time.

// Set duration to the time that your AI model generation takes.
// The duration may include network latency, depending on how you calculate it.

tracker.TrackDuration(10 * time.Millisecond)
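
The other Track* functions follow the same pattern. As a hedged sketch, with method names assumed from the metric list in this topic (verify against the Tracker reference):

// Sketch only: method names are assumptions drawn from the metric list above.
tracker.TrackTokens(TokenUsage{Total: 780}) // Token usage
tracker.TrackSuccess()                      // Generation success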

To learn more, read Tracker.

Node.js (server-side) AI

Use one of the track[Model]Metrics functions to record metrics from your AI model generation. The SDK provides separate track[Model]Metrics functions for several of the models that you can select when you set up your AI config variations in the LaunchDarkly user interface.

The tracker is returned from your call to customize the AI config, and is specific to that AI config. Make sure to call config again each time you use the tracker and generate content from your AI model, so that your metrics are correctly associated with the customized AI config variation.
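
For reference, here is a hedged sketch of the customization call that produces aiConfig. The initAi entry point, the config signature, and the fallback value are assumptions; check the SDK reference for exact names.

// Sketch only: entry point, signature, and fallback are assumptions.
import { initAi } from '@launchdarkly/server-sdk-ai';

const aiClient = initAi(ldClient);
const aiConfig = await aiClient.config('my-ai-config', context, { enabled: false });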

Here’s how:

const { tracker } = aiConfig;

// Pass in the result of the OpenAI operation.
// When you call the OpenAI operation, use details from aiConfig.
// For instance, you can pass aiConfig.messages
// and aiConfig.model to your specific OpenAI operation.
//
// CAUTION: If the call inside of trackOpenAIMetrics throws an exception,
// the SDK will re-throw that exception.

const completion = await tracker.trackOpenAIMetrics(async () =>
  client.chat.completions.create({
    messages: aiConfig.messages || [],
    model: aiConfig.model?.name || 'gpt-4',
    temperature: (aiConfig.model?.parameters?.temperature as number) ?? 0.5,
    max_tokens: (aiConfig.model?.parameters?.maxTokens as number) ?? 4096,
  }),
);

You can also use the SDK’s other track* functions to record these metrics manually. You may need to do this if you are using a model for which the SDK does not provide a convenience track[Model]Metrics function. Because the track[Model]Metrics functions expect a complete response, you may also need to record metrics manually if your application requires streaming.

Each of the track* functions sends data back to LaunchDarkly. The Monitoring tab of the AI config in the LaunchDarkly UI aggregates data from the track* functions from across all variations of the AI config.

Here’s how to record metrics manually:

// Track your own start and stop time.

// Set duration to the time (in ms) that your AI model generation takes.
// The duration may include network latency, depending on how you calculate it.

aiConfig.tracker.trackDuration(duration);
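
The other track* functions follow the same pattern. As a hedged sketch, with method names and argument shapes assumed from the metric list in this topic (verify against LDAIConfigTracker):

// Sketch only: method names and argument shapes are assumptions.
aiConfig.tracker.trackTokens({ total: 780, input: 120, output: 660 }); // Token usage
aiConfig.tracker.trackSuccess(); // Generation success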

Make sure to call config again each time you use the tracker and generate content from your AI model.

To learn more, read LDAIConfigTracker.

Python AI

Use one of the track_[model]_metrics functions to record metrics from your AI model generation. The SDK provides separate track_[model]_metrics functions for several of the models that you can select when you set up your AI config variations in the LaunchDarkly user interface.

The tracker is returned from your call to customize the AI config, and is specific to that AI config. Make sure to call config again each time you use the tracker and generate content from your AI model, so that your metrics are correctly associated with the customized AI config variation.

Here’s how:

# Pass in the result of the OpenAI operation.
# When calling the OpenAI operation, use details from config.
# For instance, you can pass config.model.name
# and config.messages[0].content to your specific OpenAI operation.
#
# CAUTION: If the call inside of track_openai_metrics throws an exception,
# the SDK will re-throw that exception.

messages = [] if config.messages is None else config.messages
completion = tracker.track_openai_metrics(
    lambda: openai_client.chat.completions.create(
        model=config.model.name,
        messages=[message.to_dict() for message in messages],
    )
)

You can also use the SDK’s other track* functions to record these metrics manually. You may need to do this if you are using a model for which the SDK does not provide a convenience track_[model]_metrics function. Because the track_[model]_metrics functions expect a complete response, you may also need to record metrics manually if your application requires streaming.

Each of the track* functions sends data back to LaunchDarkly. The Monitoring tab of the AI config in the LaunchDarkly UI aggregates data from the track* functions from across all variations of the AI config.

Here’s how to record metrics manually:

# Track your own start and stop time.

# Set duration to the time (in ms) that your AI model generation takes.
# The duration may include network latency, depending on how you calculate it.

tracker.track_duration(duration)
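
The other track* functions follow the same pattern. As a hedged sketch, with method names and the TokenUsage shape assumed from the metric list in this topic (verify against LDAIConfigTracker):

# Sketch only: method names and argument shapes are assumptions.
tracker.track_tokens(TokenUsage(total=780, input=120, output=660))  # Token usage
tracker.track_success()  # Generation success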

Make sure to call config again each time you use the tracker and generate content from your AI model.

To learn more, read LDAIConfigTracker.
