Vega
Overview
This feature is in Early Access
Vega is in Early Access. It is available only to self-serve customers who signed up for the Developer or Foundation plan on or after July 7, 2024. If you are not an eligible customer, contact your account owner to request access to Vega features.
Vega is an AI-powered assistant that helps you understand, debug, and fix observability issues directly within LaunchDarkly. Vega works inside observability views like logs, traces, errors, and sessions, gathering relevant context, such as recent flag changes and alerts, to suggest improvements and next steps. Vega acts as an intelligent, context-aware debugging companion built into your everyday workflow.
Unlike off-the-shelf AI agents or general-purpose code assistants, Vega is integrated into the LaunchDarkly observability and feature management platform to deliver deep, context-aware responses to your questions. When you invoke Vega on a log line, trace, error, or session, it automatically gathers the surrounding context, including recent flag changes, alert details, user sessions, and relevant telemetry, so it can explain what happened, why it happened, and how to fix it.
Vega is paired with observability alerts and can analyze observability data automatically when thresholds are breached. It’s like having an on-call engineer available at all times, ready to triage an error spike and remediate the issue.
Vega is powered by Anthropic and OpenAI models, including Claude 4 (Opus, Sonnet, and Haiku) and GPT models. It uses an authenticated Model Context Protocol (MCP) to access LaunchDarkly observability data. For model inference, Vega uses a combination of OpenAI APIs and AWS Bedrock. When you ask Vega a question or invoke it from a specific view, the MCP securely provides the agent with data relevant to your request. Vega uses that context to reason about what may be happening, identify likely causes, and suggest code-level fixes.
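To make that data flow concrete, the sketch below shows roughly what an MCP tool call for observability context could look like. MCP messages are JSON-RPC 2.0, but the tool name, arguments, and response described here are illustrative assumptions, not a documented LaunchDarkly interface.

```typescript
// Hypothetical sketch of an MCP (Model Context Protocol) tool call that an
// agent like Vega might issue to fetch observability context. The tool name
// and argument shape are assumptions, not a documented LaunchDarkly API.
// MCP requests are JSON-RPC 2.0 messages.
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_error_context",          // assumed tool name
    arguments: {
      projectKey: "my-project",         // assumed argument
      errorGroupId: "err-12345",        // assumed argument
      includeRecentFlagChanges: true,   // assumed argument
      lookbackMinutes: 60,
    },
  },
};

// The MCP server would respond with structured context (recent flag changes,
// related traces, alert details) that the model uses to reason about the issue.
console.log(JSON.stringify(toolCallRequest, null, 2));
```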
Vega features
Vega includes two primary features that work together inside LaunchDarkly:
Vega agent
Vega agent is an AI debugging assistant embedded in observability views. It investigates logs, traces, errors, and alerts, summarizing what happened and identifying causes. If you connect Vega agent to GitHub, it can even suggest or open fixes.
Vega agent has three modes:
Investigate mode
In this mode, Vega focuses on understanding and diagnosing issues. It summarizes observability data, highlights anomalies, and identifies likely root causes, often correlating them with recent flag or code changes. This is the default mode when you launch Vega from an observability resource.
If you’ve authenticated Vega with GitHub, you can specify which repositories this mode can access for additional context, such as recent commits or deployments. However, Vega will never propose or modify code when it’s in investigate mode. It only reads relevant metadata to enhance its analysis.
Fix mode
When fix mode is enabled, Vega moves beyond diagnosing to suggest potential solutions. In this mode, Vega analyzes the relevant code paths, generates candidate changes, and can open a pull request with proposed edits and explanations.
If you connect your GitHub account to Vega, you can define exactly which repositories it is allowed to read from and write to. Vega’s code suggestions are always visible and reviewable before any changes are merged, so you maintain complete control over your code.
Copilot mode
Copilot mode is for teams that prefer to keep code generation and remediation fully within GitHub Copilot. In this mode, Vega doesn’t propose or author code changes directly. Instead, it creates a GitHub issue in the repository you specify and assigns it to GitHub Copilot for follow-up.
This workflow is ideal if you want to preserve a clear boundary between LaunchDarkly’s observability analysis and your code authoring process. Vega still performs the investigative work, but leaves implementation to GitHub’s AI tooling. It also allows you to use the GitHub Copilot configuration your team may already have created, including custom MCP tools and integrations.
Where to use Vega agent
There are two primary areas where Vega agent is useful:
- In observability views
- In alerts
You can launch Vega directly from logs, traces, errors, and session replays. When it opens, it automatically gathers the surrounding context, including related spans, recent flag changes, and correlated events, to explain what happened and why.
You can also enable Vega in your Alert Configuration settings by toggling on Vega Investigations. Once you enable it, alerts will include a “Run Vega Investigation” option. When you choose this option, Vega analyzes the triggering query, correlated telemetry, and recent flag or code changes to summarize what changed.
Vega currently operates on alerts in investigate mode, focusing on diagnosing and explaining the issue rather than performing remediation.
Authenticating Vega agent with GitHub
To connect GitHub to Vega, follow these steps:
- Install the LaunchDarkly Vega GitHub App at the organization level.
- Authenticate with your personal GitHub account. This lets Vega act on your behalf by opening pull requests, reading from repositories you have access to, and creating issues when fix or copilot mode is enabled.
- Choose which repositories Vega can access. You can restrict Vega’s permissions to specific repositories. These access scopes apply across all modes and can be modified at any time from your GitHub organization’s settings.
GitHub authentication is optional for investigate mode, but required for fix and copilot modes, where Vega interacts directly with your repositories. All GitHub interactions are logged and scoped to your authenticated session. Vega will only operate within the repositories and permissions you’ve explicitly approved.
Vega search assistant
Vega search assistant is a natural-language search tool that lets you ask questions about your observability data in plain language, such as “Which traces increased error rates?”
Vega automatically converts your question into a structured observability query, runs it across the appropriate datasets, and presents the results within LaunchDarkly observability. Vega is prompted with the specific query language used by the product, so you can focus on asking meaningful questions.
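As a rough, hypothetical illustration of that translation (the actual query language, dataset names, and attributes may differ), a question and the kind of interpreted query it could produce might look like this:

```typescript
// Illustrative only: how a natural-language question might map to a structured
// observability query. The attribute names and syntax here are assumptions,
// not the product's actual query language.
const question = "Which traces increased error rates in the last hour?";

// A plausible interpreted query the assistant could produce:
const interpretedQuery = {
  dataset: "traces",           // assumed dataset name
  filter: "status=error",      // assumed filter syntax
  groupBy: "service_name",     // assumed attribute
  metric: "error_rate",        // assumed metric
  timeRange: { last: "1h" },
};
```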
Where to use Vega search assistant
You can trigger the search assistant from any search bar in logs, traces, errors, or sessions. Type a natural-language question and Vega translates it into the equivalent structured query. After you submit the question, Vega shows both the interpreted query, so you can learn the syntax, and the search results with relevant metrics, traces, or logs.
Security and privacy
Access to Vega is governed by LaunchDarkly’s Role-Based Access Control (RBAC) system. Administrators can explicitly grant or restrict Vega usage on a per-member or per-role basis.
You can:
- Allow or deny Vega access for specific teams, projects, or environments.
- Limit which users can connect or authorize GitHub integrations.
- Configure whether users can invoke Vega in investigate, fix, or copilot modes.
This ensures Vega can only be used by authorized developers with clearly defined permissions for both observability data and code access.
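As a hypothetical sketch, an administrator might capture these restrictions in a custom role policy similar to the one below. The action and resource names here are assumptions for illustration and may not match the actual RBAC schema.

```typescript
// Hypothetical custom role policy restricting Vega usage. The action and
// resource identifiers below are assumptions for illustration; consult the
// RBAC documentation for the actual names.
const vegaPolicy = [
  {
    effect: "allow",
    actions: ["runVegaInvestigation"],            // assumed action name
    resources: ["proj/web-app:env/production"],   // scope to one project and environment
  },
  {
    effect: "deny",
    actions: ["connectVegaGitHub", "runVegaFix"], // assumed action names
    resources: ["proj/*"],                        // deny GitHub authorization and fix mode everywhere
  },
];
```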
Pricing
Vega is currently in Early Access. During this phase, all participating accounts are limited to 150 Vega runs per month at no additional cost. If you reach your monthly limit or need additional usage, contact your LaunchDarkly representative to request additional access.
A Vega run is a single execution of a chat conversation. For example, asking Vega to investigate a log, analyze an error, summarize a session, or propose a code fix constitutes one run. Each distinct request counts as one run, regardless of how many observability resources are analyzed within that request.