AWS is well regarded as the “developer cloud.” So it’s only natural that when developer teams initially learn about LaunchDarkly, one of the first questions they ask is, “How can I use LaunchDarkly and AWS together?”
As we start 2021’s re:Invent, we wanted to offer a primer around how AWS and LaunchDarkly are “better together” (oh marketing phrases, how I loathe you at times…) and where you can most commonly expect to see our platform appear in an AWS environment. (Spoiler alert: The answer is in the developer workflow, something we care quite a lot about over here at LaunchDarkly.)
Now onto another spoiler: Understanding where LaunchDarkly intersects with AWS is fairly straightforward. Recently, we published a blog post about our Flag Delivery at Edge functionality, in which we touched on our overall architecture and some recent enhancements to our Flag Delivery Network. Within that post, we also discussed the concept of the LaunchDarkly SDKs that power flag delivery within your application. This key point doesn’t change when we move into a cloud environment, specifically AWS in this case. The magic of LaunchDarkly lives both within your application code and in our LaunchDarkly infrastructure.
AWS tools in the toolbelt
As a builder who's surfed on many clouds, I always appreciate the breadth of services that are available under the AWS umbrella. I love looking at the AWS catalog as a set of tools that are available for given tasks. Not everyone needs an EC2 Auto Scaling group (ASG), but when you want the ability to quickly scale out a set of “virtual machine (VM)” style instances, you’ll surely appreciate what an ASG gives you!
This flexibility is useful, but it can also become confusing - or even overwhelming! The truth is, you can probably find a way to use LaunchDarkly across many different AWS services. In this article, we’re going to focus on dispelling some of the confusion that might exist between these services.
Let’s get into it!
Amazon Elastic Kubernetes Service (EKS)
The primer: No article about modern application deployments would be complete without talking about Kubernetes. And no article about AWS would be complete without talking about EKS, Amazon’s distribution of Kubernetes. For the purposes of this post, Kubernetes is a platform for ensuring containers (and many related systems) are running in a consistent and resilient way. Infrastructure teams and developers leverage manifests (or Helm charts) to deploy container-based applications onto Kubernetes. The magic of Kubernetes is its ability to handle many of the dependencies of distributing that application across its infrastructure, as well as its declarative-state APIs, which continuously ensure (or at least try their best) to keep running what you defined in the manifest.
How LaunchDarkly helps: From a LaunchDarkly standpoint, you can find us running within the applications that are deployed onto the Kubernetes cluster. When the container images are built, whether through something like AWS CodeBuild, another continuous integration (CI) platform, or even locally, the LaunchDarkly SDK and configurations are included in the build. From a consumer standpoint, there are no Kubernetes-specific configurations that need to be applied. We attach directly onto the application deployment and delivery workflow.
In a Kubernetes scenario, the beauty of LaunchDarkly is that we can now decouple the deployment of workloads from the release of features. Historically, developers and infrastructure teams would deploy an updated version of the deployment manifest and allow Kubernetes to cycle the pods (the smallest unit of compute in a Kubernetes cluster) to the new version. This effectively tied our release of new code to the deployment of new infrastructure.
Using LaunchDarkly, we can break those tasks apart, allowing you to deploy your new application pods and then gradually enable features within your application. As a developer, this gives you greater control over the release of features, and the ability to roll back if there’s a problem.
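For example, the new code path can ship dark inside the same container image and stay behind a flag until you turn it on. Here’s a minimal sketch using the server-side Python SDK; the SDK key, flag key, and the two checkout handlers are placeholders, and newer SDK versions evaluate against a context rather than a user dictionary.

```python
import ldclient
from ldclient.config import Config

# Initialize the SDK once at application startup; the key is a placeholder.
ldclient.set_config(Config(sdk_key="YOUR_SDK_KEY"))
client = ldclient.get()

def render_legacy_checkout():
    return "legacy checkout"

def render_new_checkout():
    return "new checkout"

def handle_checkout(user_key):
    # "new-checkout-flow" is a hypothetical flag key; False is the value used
    # if LaunchDarkly can't be reached. Newer SDK versions take a Context here.
    user = {"key": user_key}
    if client.variation("new-checkout-flow", user, False):
        return render_new_checkout()   # new code path, already deployed in the pod
    return render_legacy_checkout()    # existing behavior remains the default
```

Flipping the flag in the LaunchDarkly dashboard changes which path runs, with no new deployment to the cluster.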
Amazon Elastic Container Service (ECS)
The primer: EKS is great, but there are times when you (whether architecturally or emotionally) just don’t need the complexities that come along with managing a Kubernetes cluster. Sometimes, you just want to run a container and have “the cloud” keep it running at all times; this is what ECS provides. ECS has many configurations, but most commonly you’ll see traditional ECS and Fargate.
Traditional ECS allows you to configure an Elastic Compute Cloud (EC2) instance that containers will be scheduled onto. This gives you all the configuration options available to a standard EC2 instance (such as sizing of the system), but you also have to manage that instance as a standalone infrastructure resource.
ECS with Fargate allows you to simply consume containers within the AWS infrastructure. AWS manages the availability and scalability, but you run into a few more restrictions around networking models.
How LaunchDarkly helps: As in the EKS/Kubernetes scenario above, with ECS, LaunchDarkly continues to live within your application code. This means the same container you’re building for use in Kubernetes can be used within ECS as well! Once again, we want to ensure that we’re attaching to the workflow for deploying and releasing your applications.
We also continue to reap the benefits of breaking apart deployment and release, just like in the Kubernetes example. Historically, new functionality releases would be tied to the deployment of a new container into ECS. With LaunchDarkly in play, we gain far greater control over how the deployed code is released. In the traditional model, once the container is live, all of its code is being consumed by your end users, problems and bugs included, and if something goes sideways, you’re looking at pushing out a new deployment.
Amazon Elastic Compute Cloud (EC2)
The primer: EC2 is the rough equivalent of traditional virtual machine infrastructure, but hosted in AWS. While it’s a rough equivalent, AWS provides significantly improved capabilities around running “virtual machines” at scale, including things like EC2 ASGs and multi-Availability Zone configurations. In these traditional infrastructure configurations, we frequently see users hosting their application workloads within some form of web server (e.g., Apache or NGINX).
How LaunchDarkly helps: In these types of deployments, LaunchDarkly especially shines because, historically, automating delivery of workloads onto these systems poses more of a challenge than cycling a container in a Kubernetes or Docker environment. Usually this workflow consists of either a pipeline that compiles the workload and delivers it to the existing systems, or one that treats those systems more ephemerally, deploying the workload onto new infrastructure and either cycling load balancers over to the new systems or using some DNS tricks to get traffic to the new environment.
In these cases, our feature and functionality releases are ultimately tied directly to the infrastructure they are delivered on. LaunchDarkly allows us to break this concept apart and deliver releases outside of the infrastructure deployment cycle. Equally important, we can back problematic changes out instead of waiting for that pipeline to run and cycle the workload or infrastructure back.
AWS Lambda
The primer: It wouldn’t be a post about modern architectures and technologies without mentioning serverless in some way! AWS Lambda is a serverless “function as a service” platform which, as you can imagine, exposes individual functions as building blocks of application functionality. These Lambda functions can be used to integrate between services, or can even replace entire API tiers of modern application stacks. Lambda has been around for a while but continues to be an emerging platform and a high-focus area for enterprises as they look to adopt a more cloud-native operating model. As a serverless platform, it requires no user-managed infrastructure; it spins up the function, executes its task, and returns its results.
How LaunchDarkly helps: A growing use case we are seeing is developers leveraging feature flags within these functions to increase their flexibility. In this model, the focus is less on breaking up “deploy and release” and more on giving these functions broader capabilities. An example that we’ve shown in the past is leveraging a Lambda function to drive which static site is resolved from an S3 bucket, dynamically changing the URL from a production website to a beta website based on a flag value. Another example we’ve seen is moving a function from interacting with a traditional EC2-hosted database to interacting with something more cloud native, like DynamoDB or Amazon Relational Database Service (Amazon RDS).
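To make that first example a bit more concrete, here’s a minimal sketch of a Lambda handler (Python, sitting behind something like API Gateway) that redirects to either a production or a beta static site based on a flag. The SDK key, flag key, site URLs, and the userId field on the incoming event are all placeholders.

```python
import ldclient
from ldclient.config import Config

# Initialize outside the handler so the client is reused across warm invocations.
# The SDK key, flag key, and site URLs below are placeholders.
ldclient.set_config(Config(sdk_key="YOUR_SDK_KEY"))
client = ldclient.get()

PROD_SITE = "http://prod-site.s3-website-us-east-1.amazonaws.com"
BETA_SITE = "http://beta-site.s3-website-us-east-1.amazonaws.com"

def handler(event, context):
    # Key the evaluation on something stable about the caller; "anonymous" is
    # just a fallback for this sketch.
    user = {"key": event.get("userId", "anonymous")}
    serve_beta = client.variation("serve-beta-site", user, False)
    return {
        "statusCode": 302,
        "headers": {"Location": BETA_SITE if serve_beta else PROD_SITE},
    }
```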
A Growing Landscape
While EKS, ECS, and EC2 are all examples of true application hosting platforms, we do continue to see customers leverage AWS services in other ways with LaunchDarkly.
Customers are leveraging our Data Export functionality to send event information into Amazon Kinesis for future analysis. In this example, Kinesis acts as a platform for processing the large amount of data that comes out of the LaunchDarkly platform, allowing users to analyze and act on that data at a later time.
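As a rough illustration of what consuming that stream can look like, the snippet below polls a single shard with boto3 and prints a couple of fields from each exported event. The stream name and event fields are placeholders; a production consumer would more likely use Kinesis Data Firehose, a Lambda trigger, or the Kinesis Client Library.

```python
import json
import boto3

kinesis = boto3.client("kinesis")
stream = "launchdarkly-events"  # placeholder: the stream your Data Export destination writes to

# Grab an iterator for the first shard and read one batch of records.
shard_id = kinesis.describe_stream(StreamName=stream)["StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=stream, ShardId=shard_id, ShardIteratorType="TRIM_HORIZON"
)["ShardIterator"]

for record in kinesis.get_records(ShardIterator=iterator, Limit=25)["Records"]:
    event = json.loads(record["Data"])
    # "kind" and "key" are examples of fields you might inspect on an exported event.
    print(event.get("kind"), event.get("key"))
```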
Pipeline and delivery tools like AWS CodeBuild and CodeDeploy allow users to build and deploy their applications, but they can also be used to interact with the LaunchDarkly API to trigger flag configuration changes.
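For instance, a post-deploy pipeline step might call the LaunchDarkly REST API to turn a flag on once a build has shipped. Here’s a minimal sketch; the project key, flag key, and environment are placeholders, and the API access token would normally come from the pipeline’s secret store.

```python
import os
import requests

API_TOKEN = os.environ["LD_API_TOKEN"]   # a LaunchDarkly API access token
PROJECT_KEY = "my-project"               # placeholder project key
FLAG_KEY = "new-checkout-flow"           # placeholder flag key

resp = requests.patch(
    f"https://app.launchdarkly.com/api/v2/flags/{PROJECT_KEY}/{FLAG_KEY}",
    headers={
        "Authorization": API_TOKEN,
        # The semantic patch content type lets us send an instruction
        # instead of a raw JSON Patch document.
        "Content-Type": "application/json; domain-model=launchdarkly.semanticpatch",
    },
    json={"environmentKey": "production", "instructions": [{"kind": "turnFlagOn"}]},
)
resp.raise_for_status()
```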
Ultimately, platforms like these are what ingest our application code from something like source control and compile it into applications. LaunchDarkly focuses on taking the functionality that lives within that codebase and gradually releasing it for end users to consume.
In Closing…
LaunchDarkly is at re:Invent this week showing a number of great things. We have a fully-staffed booth with some awesome demos to show off the power of our platform and answer any questions you have about using LaunchDarkly with AWS. Stop by and say hi!