
MAY 28 2025

AI Configs is now GA: Runtime control for AI prompts and models

Ship smarter, safer, more responsible AI — without duct tape or redeploys.



With GenAI, your code isn’t the product — your prompts and models are.

But most teams are still managing AI prompts and models with code pushes, spreadsheets, and crossed fingers. What happens when a model underperforms? Or a prompt costs 4x what it should? Or you need to ship a fix, but your CI/CD pipeline is locked down?

That’s why we built AI Configs. And we’re happy to announce it’s now generally available! 

Whether you’re deploying LLM-powered assistants, fine-tuning custom models, or delivering personalized AI workflows, now you can release faster, test safely, and scale your AI workflows with greater control.

🎥 Watch the demo from Galaxy

Why we built AI Configs

An AI app is different from traditional software. It’s unpredictable, hard to test locally, and full of performance and cost tradeoffs that don’t surface until real users hit the product. It’s no wonder Gartner found that 85% of AI and machine learning projects fail to deliver, and that only 53% of projects make it from prototype to production.

Teams building AI features face tough challenges:

  • How do we try a new model without degrading the user experience?
  • How do we roll back quickly if a new config underperforms?
  • Can we safely A/B test models like GPT-4 and Claude in production?
  • Who has access to modify which models, prompts, and parameters?
  • How do we prevent unauthorized changes?

AI Configs gives you the tools to answer these questions — in production, without redeploying code.

What you can do with AI Configs 

Let’s make this real.

Say you're building an AI-powered support assistant. You’ve fine-tuned a prompt using GPT-4, but you’re curious whether Claude or a smaller open-source model might give you similar performance with lower latency or cost.

Your team wants to experiment, but you’re blocked. Any prompt tweak or model change requires engineering to open a PR. You can’t easily run side-by-side comparisons, you have no observability into real-time behavior, and your team’s working off a shared spreadsheet of prompts.

Even worse? If the model underperforms in production, you have no rollback strategy. You're left watching cost or performance spiral until someone can redeploy.

With AI Configs, you now have control:

  • Configure model and prompt variations in LaunchDarkly — no redeploys required.
  • Run experiments across cohorts or regions, with guardrails in place.
  • Track key metrics like token usage, latency, and user feedback in real time.
  • Roll back instantly if something fails.
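The pattern behind that list: prompts, model names, and parameters live in remotely managed config rather than in code, and the app reads that config at request time with safe compiled-in defaults as a fallback. Here’s a minimal sketch of that pattern in plain Python — the `ConfigStore` below is an in-memory stand-in for illustration, not the LaunchDarkly SDK:

```python
import threading

# Safe defaults compiled into the app: used when no override is set.
DEFAULTS = {
    "model": "gpt-4",
    "temperature": 0.2,
    "system_prompt": "You are a helpful support assistant.",
}

class ConfigStore:
    """In-memory stand-in for a runtime config service.

    Updating a value takes effect on the next request --
    no redeploy, no CI/CD run.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._overrides = {}

    def update(self, **overrides):
        with self._lock:
            self._overrides.update(overrides)

    def rollback(self):
        # Instant rollback: drop all overrides, revert to defaults.
        with self._lock:
            self._overrides.clear()

    def resolve(self):
        with self._lock:
            return {**DEFAULTS, **self._overrides}

store = ConfigStore()

def build_llm_request(user_message):
    cfg = store.resolve()  # read config at request time, not import time
    return {
        "model": cfg["model"],
        "temperature": cfg["temperature"],
        "messages": [
            {"role": "system", "content": cfg["system_prompt"]},
            {"role": "user", "content": user_message},
        ],
    }

# An operator switches models at runtime...
store.update(model="claude-3-5-sonnet", temperature=0.1)
# ...and rolls back instantly if metrics regress.
store.rollback()
```

With the real product, the store is LaunchDarkly itself, so changes and rollbacks propagate to running services without touching the codebase.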

And that’s just one use case. AI Configs unlocks runtime control for your AI workflows, so you can improve the quality and consistency of your AI-generated output while streamlining your development process.

  1. Iterate without redeploys.
    Pull prompts, parameters, and model selections out of your codebase and manage them in LaunchDarkly. Make changes in real-time — no need to trigger your CI/CD pipeline for every tweak.
  2. Target by context.
    Roll out new AI configurations to internal users, beta testers, or custom segments. Test in production without exposing risk to your entire user base. Promote or roll back changes instantly based on real usage.
  3. Monitor with confidence.
    Track model performance live. Understand latency, error rates, usage trends, and cost implications across different variations. Compare versions side-by-side to validate changes and avoid regression by “vibe.”
  4. Experiment like a pro.
    Run A/B tests on prompts, models, and tuning strategies. Use the LaunchDarkly experimentation engine to measure real-world impact with trusted metrics — across LLMs, embeddings, or other AI workflows.
  5. Govern AI Configs with guardrails.
    Control who can access and change model configurations. Set up approval workflows, restrict models by team, and maintain a full audit trail of changes. AI Configs helps ensure that your governance policies keep pace with your innovation.
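To make the targeting idea in point 2 concrete, here is a conceptual sketch of serving a candidate config to internal users, beta testers, and a sticky percentage of everyone else. The rule names, context attributes, and hashing scheme are illustrative assumptions, not LaunchDarkly’s actual targeting engine — the product handles this evaluation for you:

```python
import hashlib

def bucket(user_key, salt="model-rollout"):
    """Deterministically map a user to [0.0, 1.0] so rollouts are sticky:
    the same user always lands in the same bucket across requests."""
    digest = hashlib.sha256(f"{salt}:{user_key}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF

def choose_variation(context, rollout_pct=0.05):
    """Pick a config variation from a user context.

    Rule order: explicit segments first, then a percentage rollout,
    then the safe default.
    """
    if context.get("group") == "internal":
        return "candidate"                    # dogfood new configs internally
    if context.get("beta_tester"):
        return "candidate"                    # opt-in beta segment
    if bucket(context["key"]) < rollout_pct:  # gradual rollout to everyone else
        return "candidate"
    return "control"

# Hypothetical variations: a stable config and the one under evaluation.
VARIATIONS = {
    "control":   {"model": "gpt-4", "prompt_version": "v12"},
    "candidate": {"model": "claude-3-5-sonnet", "prompt_version": "v13"},
}

cfg = VARIATIONS[choose_variation({"key": "user-42", "group": "internal"})]
```

Raising `rollout_pct` promotes the candidate to more users; setting it to zero (and clearing the segments) is the rollback path.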

Built for AI builders

AI Configs brings the same level of control, speed, and confidence to AI development that LaunchDarkly brought to feature management. It’s time to stop treating AI as magic — and start managing it like product infrastructure.

Whether you’re using OpenAI, Anthropic, AWS Bedrock, open-source models, or your own fine-tuned foundation models, AI Configs gives you a centralized system of control for managing and evolving AI in production.

And it's not just for chatbots. AI Configs works across recommendation systems, summarization engines, ranking algorithms, and content generation tools. Anywhere you use models, prompts, or parameters to deliver intelligent, personalized user experiences is a good fit.

How to get started with AI Configs

GenAI is moving fast. But building safely and scaling with confidence shouldn’t be reserved for big-budget AI labs.

With AI Configs, you can ship smarter, safer, more responsible AI — without duct tape or redeploys.

It’s time to stop treating AI like a magic box. Start managing it like real infrastructure.

Let’s build better AI, together.

Questions? Reach out at aiproduct@launchdarkly.com

Ready to release faster and test safely with AI Configs?
