Node.js (server-side) AI SDK reference

Overview

This topic documents how to get started with the Node.js (server-side) AI SDK, and links to reference information on all of the supported features. You can use either JavaScript or TypeScript when working with the Node.js (server-side) AI SDK.

The Node.js (server-side) AI SDK is designed for use with LaunchDarkly’s AI Configs. It is currently in a pre-1.0 release and under active development. You can follow development or contribute on GitHub.

SDK quick links

LaunchDarkly’s SDKs are open source. In addition to this reference guide, we provide source, API reference documentation, sample applications, and provider-specific packages:

Resource                      Location
SDK API documentation         SDK API docs
GitHub repository             node-server-sdk-ai
Sample application            Using Bedrock, Using OpenAI
Published module              npm
Provider-specific packages    OpenAI, LangChain, Vercel AI
For use in server-side applications only

This SDK is intended for use in multi-user Node.js server applications. To learn more about LaunchDarkly’s different SDK types, read Choosing an SDK type.

Get started

LaunchDarkly AI SDKs interact with AI Configs. AI Configs are the LaunchDarkly resources that manage model configurations and messages for your generative AI applications.

Try the Quickstart

This reference guide describes working specifically with the Node.js (server-side) AI SDK. For a complete introduction to LaunchDarkly AI SDKs and how they interact with AI Configs, read Quickstart for AI Configs.

You can use the Node.js (server-side) AI SDK to customize your AI Config based on the context that you provide. This means both the messages and the model evaluation in your generative AI application are specific to each end user, at runtime. You can also use the AI SDKs to record metrics from your AI model generation, including duration and tokens.

Follow these instructions to start using the Node.js (server-side) AI SDK in your application.

Install the SDK

First, install the AI SDK as a dependency in your application using your application’s dependency manager. If you want to depend on a specific version, refer to the SDK releases page to identify the latest version.

The Node.js (server-side) AI SDK is built on the Node.js (server-side) SDK, so you’ll need to install that as well. Alternatively, you can install a server-side edge SDK instead.

You can also use provider-specific AI SDK packages for better integration and improved version management with your preferred AI framework.

The following provider-specific packages are available:

@launchdarkly/server-sdk-ai-openai (OpenAI)
@launchdarkly/server-sdk-ai-langchain (LangChain)
@launchdarkly/server-sdk-ai-vercel (Vercel AI)

These packages require Node.js version 16 or higher.

Provider-specific packages are in early development

These provider-specific packages are currently in early development and are not recommended for production use. They may change without notice, including in backward-incompatible ways.

Here’s how:

Shell
npm install @launchdarkly/node-server-sdk
npm install @launchdarkly/server-sdk-ai

# If you want to install a provider-specific package
npm install @launchdarkly/server-sdk-ai-openai
# or
npm install @launchdarkly/server-sdk-ai-langchain
# or
npm install @launchdarkly/server-sdk-ai-vercel

Next, import init, LDClient, LDContext, and initAi in your application code. If you are using TypeScript, you can optionally import the LDAIClient and LDAIConfig types. These can be inferred, so importing them is not strictly required.

Here’s how:

import { init, LDClient, LDContext } from '@launchdarkly/node-server-sdk';
import { initAi, LDAIClient, LDAIConfig } from '@launchdarkly/server-sdk-ai';

Here’s how to import a provider-specific package:

import { OpenAIProvider } from '@launchdarkly/server-sdk-ai-openai';
import { OpenAI } from 'openai';

Initialize the client

After you install and import the SDK, create a single, shared instance of LDClient. When the LDClient is initialized, use it to initialize the LDAIClient. The LDAIClient is how you interact with AI Configs. Specify the SDK key to authorize your application to connect to a particular environment within LaunchDarkly.

The Node.js SDKs use an SDK key

Both the Node.js (server-side) AI SDK and the Node.js (server-side) SDK use an SDK key. Keys are specific to each project and environment. They are available from Project settings, on the Environments list. To learn more about key types, read Keys.

Here’s how:

const ldClient: LDClient = init('sdk-key-123abc');

try {
  await ldClient.waitForInitialization({ timeout: 10 });
  // initialization complete
} catch (error) {
  // timeout or SDK failed to initialize
}

const aiClient: LDAIClient = initAi(ldClient);

Configure the context

Next, configure the context that will use the AI Config, that is, the context that will encounter generated AI content in your application. The context attributes determine which variation of the AI Config LaunchDarkly serves to the end user, based on the targeting rules in your AI Config. If you are using template variables in the messages in your AI Config’s variations, the context attributes also fill in values for the template variables.

Here’s how:

const context: LDContext = {
  kind: 'user',
  key: 'user-key-123abc',
  firstName: 'Sandy',
  lastName: 'Smith',
  email: 'sandy@example.com',
  groups: ['Google', 'Microsoft'],
};
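To illustrate how context attributes fill in template variables, here is a minimal sketch of Mustache-style substitution. This is illustrative only: the SDK performs this substitution for you during customization, and `renderTemplate` is a hypothetical helper, not part of the SDK.

```typescript
// Illustrative only: the SDK performs this substitution for you.
// Sketch of how a message template resolves against context attributes,
// assuming Mustache-style {{...}} placeholders.
function renderTemplate(template: string, attrs: Record<string, string>): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_match, name) => attrs[name] ?? '');
}

const greeting = renderTemplate('Hello {{firstName}} {{lastName}}!', {
  firstName: 'Sandy',
  lastName: 'Smith',
});
// greeting is 'Hello Sandy Smith!'
```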

Now use this context to customize your AI Config.

Customize an AI Config

The next step is to customize your AI Config. Customization means that any variables you include in the messages or instructions when you define the AI Config variation have their values filled in from the context attributes and variables you pass in.

The details of customizing an AI Config depend on whether you are using AI Configs in completion mode or agent mode. You set the mode for a particular AI Config when you create it in the LaunchDarkly UI.

Customize AI Configs in completion mode

Completion mode means that each variation in your AI Config includes a single set of roles and messages that you use to prompt your generative AI model.

In completion mode, use config() to customize the AI Config. The config() function takes an AI Config key, a context, a fallback value, and optional variables to use in the customization. It performs the evaluation, then returns the customized messages and model along with a tracker instance for recording metrics. If it cannot perform the evaluation or LaunchDarkly is unreachable, it returns the fallback value. For example, you might use an empty, disabled LDAIConfig as a fallback value, or a fully configured default. Either way, you should make sure to check for this case and handle it appropriately in your application.

Here’s how:

const fallbackConfig = { enabled: false };

const aiConfig: LDAIConfig = await aiClient.config(
  'ai-config-key-123abc',
  context,
  fallbackConfig,
  { 'exampleCustomVariable': 'exampleCustomValue' },
);
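The fallback-handling advice above can be sketched as follows. This is a hypothetical helper, not part of the SDK; it assumes the returned config exposes an enabled flag and optional messages, matching the shape of the fallback value above.

```typescript
// Hypothetical helper, not part of the SDK: decide whether to call the model
// based on the `enabled` flag, which is false when the fallback value was served.
type Message = { role: string; content: string };

interface AIConfigLike {
  enabled: boolean;
  messages?: Message[];
}

function promptOrNull(config: AIConfigLike): Message[] | null {
  if (!config.enabled) {
    // Fallback was served or the variation is disabled: skip the model call.
    return null;
  }
  return config.messages ?? [];
}

const served = promptOrNull({
  enabled: true,
  messages: [{ role: 'system', content: 'Be helpful.' }],
});
const fallback = promptOrNull({ enabled: false });
// served has one message; fallback is null
```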

Customize AI Configs in agent mode

Agent mode means that each variation in your AI Config includes a set of instructions, which enable multi-step workflows.

In agent mode, use agent() or agents() to customize the AI Config. The agent() function customizes a single AI Config agent, while the agents() function customizes an array of agent configurations.

For each agent, customization requires an AI Config key, a fallback value, and optional variables to use in the customization. Customization also requires a context. Both functions perform the evaluation and then return the customized instructions for each AI Config, along with a tracker instance for recording metrics. If the function cannot perform the evaluation or LaunchDarkly is unreachable, it returns the fallback value. For example, you might use an empty, disabled LDAIAgentConfig as a fallback value, or a fully configured default. Either way, make sure to check for this case and handle it appropriately in your application.

Here’s how:

const fallbackConfig = { enabled: false };

const agent: LDAIAgent = await aiClient.agent(
  'ai-config-key-123abc',
  context,
  fallbackConfig,
  { 'exampleCustomVariable': 'exampleCustomValue' },
);

To learn more, read Customizing AI Configs.

Create a client or model instance

If you’re using a provider-specific package, you can create a client (OpenAI) or a model instance (LangChain and Vercel AI). Here’s how:

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

Combine AI Config messages with user messages

With your model ready, you can now combine the LaunchDarkly-provided messages with user input. If you’re using a provider-specific package, you can combine an AI Config message with a user message. Here’s how:

const configMessages = aiConfig.messages || [];
const userMessage = { role: 'user', content: 'What is the capital of France?' };
const allMessages = [...configMessages, userMessage];
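For multi-turn conversations, the same pattern extends to prior history. Here is a small self-contained sketch using plain array handling, no SDK calls; `buildPrompt` is a hypothetical helper, not part of the SDK.

```typescript
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Hypothetical helper: config messages come first so system prompts from the
// AI Config variation stay ahead of the running conversation.
function buildPrompt(
  configMessages: ChatMessage[],
  history: ChatMessage[],
  userInput: string,
): ChatMessage[] {
  return [...configMessages, ...history, { role: 'user', content: userInput }];
}

const prompt = buildPrompt(
  [{ role: 'system', content: 'You are a helpful assistant.' }],
  [
    { role: 'user', content: 'Hi!' },
    { role: 'assistant', content: 'Hello!' },
  ],
  'What is the capital of France?',
);
// prompt has four messages, ending with the new user question
```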

Call the provider and record AI metrics

Finally, using a provider-specific package, make a request to your generative AI provider and record metrics from your AI model generation.

Here’s how:

const response = await aiConfig.tracker.trackMetricsOf(
  (result) => OpenAIProvider.createAIMetrics(result),
  () => client.chat.completions.create({
    model: aiConfig.model?.name || 'gpt-4',
    messages: aiConfig.messages || [],
    temperature: (aiConfig.model?.parameters?.temperature as number) ?? 0.5,
  })
);

console.log('AI Response:', response.choices[0].message.content);

To learn more, read Tracking AI metrics.

Supported features

This SDK supports the following features: