How We Deploy Relay Proxy at LaunchDarkly

"What is the Relay Proxy, and how do you use it?"

Those are questions we get asked fairly often, and this article will cover both. The thing is, it's difficult to be overly prescriptive about how to use the Relay Proxy, because every team's needs and usage and deployment patterns are different. So instead, we'll tell you exactly how we at LaunchDarkly use the Relay Proxy ourselves, and hopefully that can provide some guidance on how you can get the most from it too.

Note: If you're looking to dive right into the Relay Proxy, we have plenty of documentation that will walk you through different use cases, configuration concerns, cost structures, best practices, and more.

What is the Relay Proxy?

Good place to start! The Relay Proxy is an open-source project supported by LaunchDarkly that enables multiple servers to connect to a local stream rather than making several outbound connections to our streaming service. 

Each of your servers connects to the Relay Proxy only, which maintains the connection to LaunchDarkly. You can configure the Relay Proxy to carry multiple environment streams from multiple projects.
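To make that concrete, here's a minimal sketch of a Relay Proxy configuration file that carries two environments from two different projects. The project names, environment names, and SDK keys below are placeholders, and the same settings can also be supplied as environment variables; see the Relay Proxy docs for the full set of options.

    # Each [Environment] block opens one stream to LaunchDarkly; your local
    # servers then connect to the relay instead. Names and keys are placeholders.
    [Environment "Project A Production"]
    sdkKey = "sdk-key-for-project-a-production"

    [Environment "Project B Production"]
    sdkKey = "sdk-key-for-project-b-production"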

How we use the Relay Proxy 

Most folks who use the commercial version of LaunchDarkly with the Relay Proxy use it to talk to the main commercial instance. Internally, we use it to sit in front of what we've dubbed our "catfood" instance. (Catfooding is like dogfooding, but since our parent company is Catamorphic, we call it catfooding. Again, this is just our internal naming and we're not trying to spark a cats vs. dogs debate... although if you really wanted to go down that route, please send us all your cat or dog GIFs on Twitter and each reply will count as one vote.)

This relay sits in front of our private instance, and the setup is very specific to how we use it. We have many different services that need to connect to our internal catfood instance to pull flag information. Put simply, the Relay Proxy sits in front of the catfood instance so that those services talk to the relay instead of talking directly to the catfood instance itself. The relay acts as a sort of caching mechanism and backup: it allows the catfood service to go down without breaking all the new and existing clients trying to read flag data from it.
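To give a rough sense of what this looks like from a service's point of view, here's a sketch using LaunchDarkly's Python server-side SDK. The relay URL and SDK key are placeholders, and the exact configuration option names vary by SDK and version, so treat this as illustrative rather than copy-paste ready.

    import ldclient
    from ldclient.config import Config

    # Placeholder URL for an internal Relay Proxy. In proxy mode, the relay
    # exposes the same endpoints the SDK would normally reach on LaunchDarkly.
    RELAY_URL = "http://relay.internal.example.com:8030"

    ldclient.set_config(Config(
        sdk_key="sdk-key-placeholder",
        base_uri=RELAY_URL,    # polling requests go to the relay
        stream_uri=RELAY_URL,  # the streaming connection goes to the relay
        events_uri=RELAY_URL,  # analytics events can be routed through the relay too
    ))

    # From here on, flag evaluation works exactly as it would if the SDK were
    # talking to LaunchDarkly directly; the service doesn't need to know the
    # relay is in the middle.
    client = ldclient.get()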

Deploying with Spinnaker

We deploy the Relay Proxy the same way we deploy all of our other applications: with Spinnaker. Spinnaker is an open-source continuous delivery platform built at Netflix, which Netflix also uses to deploy its own services; we use a distribution provided by armory.io. You essentially give Spinnaker a recipe for how to deploy something, and it carries out the deployment automatically. We love it.

Deployment happens in AWS. Spinnaker deploys to EC2 quite well, and since we generally use EC2, we use EC2 instances to host the LaunchDarkly Relay Proxy nodes. We tell Spinnaker which AWS regions to deploy to and how many nodes to run, and the service handles the rest.

What about upgrades?

Upgrades are easy. We've set up our pipeline so that we can simply tell it which version of the Relay Proxy to deploy, hit a button, and it rolls out the upgrade everywhere automatically. We run the relay in five regions, so it's a highly available service that can withstand outages: everything keeps working properly even if all but one region is down at any given time.

We hope this helps as you consider or use the Relay Proxy. Again, if you want to learn the full ins and outs of the Relay Proxy, please check out our docs.
