LaunchDarkly Tips and Tricks: Vol. 1

Relay Proxy at LaunchDarkly

Have you ever wondered how to use the Relay Proxy — or even what it is in the first place?

You’re not alone. We’ll go over what the Relay Proxy is and what it does below. However, it's difficult to be prescriptive when it comes to leveraging Relay Proxy, because each team's needs and usage/deployment patterns are unique. So, we'll tell you exactly how we at LaunchDarkly use the Relay Proxy. Hopefully, that can provide guidance and some inspiration for how you could also get the most from it.

Note: If you're looking to dive straight into the Relay Proxy, we have plenty of documentation that will walk you through different use cases, configuration concerns, cost structures, best practices, and more, so feel free to check those out.

What is the Relay Proxy?

The Relay Proxy is an open source project supported by LaunchDarkly that enables multiple servers to connect to a local stream rather than making several outbound connections to our streaming service.

Each of your servers connects to the Relay Proxy only, which maintains the connection to LaunchDarkly. You can configure the Relay Proxy to carry multiple environment streams from multiple projects. Some common use cases for the Relay Proxy include:

  • Reducing your app's outbound connections
  • Keeping user data private
  • Facilitating faster connections
  • Meeting continuation of service requirements
  • Reducing firewall configuration complexity for your customers
  • Increasing startup speed for serverless functions
  • Reducing operational work when creating new projects and environments
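
To illustrate what "each of your servers connects to the Relay Proxy only" looks like from the application side, here is a minimal sketch using the Go server-side SDK. The v7 import path, the internal relay hostname, and the SDK key placeholder are assumptions for illustration; 8030 is simply the relay's default listen port, and your setup may differ.

```go
package main

import (
	"log"
	"time"

	ld "github.com/launchdarkly/go-server-sdk/v7"
	"github.com/launchdarkly/go-server-sdk/v7/ldcomponents"
)

func main() {
	var config ld.Config
	// Point all SDK traffic (streaming, polling, events) at the Relay Proxy
	// instead of LaunchDarkly's SaaS endpoints. The hostname below is a
	// placeholder for wherever you run the relay.
	config.ServiceEndpoints = ldcomponents.RelayProxyEndpoints("http://ld-relay.internal.example.com:8030")

	client, err := ld.MakeCustomClient("YOUR_SDK_KEY", config, 5*time.Second)
	if err != nil {
		log.Fatalf("failed to initialize LaunchDarkly client: %v", err)
	}
	defer client.Close()

	// Flag evaluations now use data streamed through the relay.
	log.Println("SDK initialized:", client.Initialized())
}
```

Because the relay serves the same streaming, polling, and event endpoints as the LaunchDarkly service itself, redirecting the SDK's service endpoints like this is typically the only application-side change needed.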

How we use the Relay Proxy at LaunchDarkly

When most folks use the commercial version of LaunchDarkly with the Relay Proxy, they use it to talk to the main commercial instance. Internally, we use it to sit in front of what we’ve dubbed our "catfood" instance. (Catfooding is like dogfooding, but since our parent company is Catamorphic, we call it catfooding.)

This relay sits in front of our private instance, so the setup is specific to how we use it. We have many different services that need to connect to our internal catfood instance to pull flag information.

Put simply, the Relay Proxy sits in front of the catfood instance so that the other services talk to the relay instead of talking directly to the catfood instance itself. The relay acts as a caching layer and a backup: it allows the catfood service to break without breaking all the new and existing clients trying to read flag information from it.

Deploying the Relay Proxy with Spinnaker

We deploy the Relay Proxy the same way we deploy all of our other applications: with Spinnaker. Spinnaker is an open source continuous delivery platform built at Netflix, which also uses it for its own deployments; we run a distribution provided by armory.io. You essentially give Spinnaker a recipe for how to deploy something, and it does it automatically. We love it.

Deployment happens in AWS. Spinnaker deploys to EC2 quite well, and since we generally use EC2, we use EC2 instances to host the LaunchDarkly Relay Proxy nodes. We then tell Spinnaker which AWS regions to deploy to and how many nodes to run, and the service handles the rest.

Automating upgrades with the Relay Proxy

Upgrades are easy. We've set up our pipeline so we can simply tell it which version of the Relay Proxy to deploy, hit a button, and it rolls the upgrade out everywhere automatically. We run the relay in five regions, so it's a highly available service that can withstand outages: everything continues working properly even if all but one region is down at any given time.
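
If you want to check that each regional relay is healthy, for instance from a deployment pipeline or a load balancer health check, the Relay Proxy exposes a /status resource that reports the state of its environment connections. The regional hostnames below are hypothetical placeholders; this is a quick sketch of polling that endpoint, not our actual tooling.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Hypothetical regional relay hostnames; substitute your own.
	relays := []string{
		"http://ld-relay.us-east-1.internal.example.com:8030",
		"http://ld-relay.eu-west-1.internal.example.com:8030",
	}

	client := &http.Client{Timeout: 5 * time.Second}
	for _, base := range relays {
		// The relay's /status resource reports whether each configured
		// environment's connection is healthy.
		resp, err := client.Get(base + "/status")
		if err != nil {
			log.Printf("%s: unreachable: %v", base, err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> HTTP %d\n%s\n", base, resp.StatusCode, body)
	}
}
```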

We hope this helps as you consider or use the Relay Proxy. Again, if you want to learn the full ins and outs of the Relay Proxy, please check out our docs.