What Are Software Deployments? Methodology + Best Practices

Key Takeaways

  • Developing a comprehensive software deployment strategy is difficult for most organizations.
  • Changes in the software deployment landscape and technologies have only increased this difficulty.
  • Feature management solutions reduce the complexity and friction of software deployments.

Software deployment is the process and strategy of taking the new code that developers write and pushing it to an environment where the new functionality is consumable by end users. Sounds simple enough, right? The whole point of writing software is to solve a problem for our customers, and yet deployment is often slowed by developers spending time on bug fixes or other unexpected issues. In this article, we’ll explore:

  • Why software deployments are so challenging for many organizations
  • What a software deployment process looks like in practice
  • The difference between software deployments and releases
  • How to optimize the release process with feature management

Let’s start by defining exactly what we mean by “software deployments”.

Understanding software deployments

When we talk about software deployments, traditionally we’re referring to the process of pushing changes to a specific environment and making that new application code available to some sort of end-user group. Feature flags have changed the way we think about software deployments by allowing us to separate the deployment (pushing code to a production environment) from the release (making those features available to users). We’ll dive into that a bit more later, but let’s first pick apart the deployment aspect of things.  

Think about the websites you visit, the apps you use on your phones, the virtual interfaces that you interact with on a daily basis. All of these solutions are the result of software deployments. 

Deployments are complex, and that complexity is rooted in application architecture. Software applications consist of two umbrella categories: front-end and back-end components. The front end is what users see and interact with, and the back end is the underlying plumbing that makes everything work. That plumbing is critical, and it’s where much of the complexity starts to arise. But why?

A history of how infrastructure has shaped deployments

The history of software is a story of infrastructure and application code. Application code has changed in the sense of better frameworks and more developer-friendly tooling, but largely the process of writing code has remained the same. Infrastructure, on the other hand, has undergone some fairly large changes over time.

Going back a few years, we saw the introduction of virtual machines. Virtual machines (VMs) enabled organizations to simulate computer environments without requiring end users to have access to physical hardware. And while VMs made software more readily available, they were still strongly tied to an organization's physical footprint, e.g., the number of data centers and servers it maintained to actually power those VMs.

Then came containers, which reshaped the way software was packaged and delivered. Developers could now create shareable images with all the necessary dependencies pre-loaded, reducing the time it takes for code to go live. As a result, organizations realized they could run software anywhere, and the major cloud providers started offering compute as a service. This, in turn, led to the creation of orchestration platforms like Kubernetes, which helped organizations run their containers at scale. But Kubernetes requires a great deal of operational overhead and management, which can still slow down the deployment process.

Cut to today, where we’re seeing a rise in application hosting platforms like Vercel, Netlify, Railway, and more. These platforms enable developers to deploy code without having to also maintain the back-end infrastructure that the code relies on.

Understanding that developers just want to push code

So what was the point of this exploration? The common thread here is that developers just want to be able to push code quickly to a live environment without having to think about the underlying infrastructure. This is the core challenge in software deployments: finding a way to accelerate code delivery without having to wait on the infrastructure to catch up. And there’s evidence that increased deployment frequency leads to better organizational results. As illustrated in Google’s State of DevOps Report, higher-performing teams tend to deploy more frequently. In fact, the highest-performing organizations deploy new code multiple times a day!

This push to accelerate code delivery is what ultimately led to the rise of DevOps: combining the efforts of development and operations teams to build dynamic infrastructure environments that can support frequent software deployments. The key element in achieving this is automation.

How infrastructure automation accelerates software delivery

DevOps tooling prevents infrastructure from becoming a blocker to accelerating software deployments. Infrastructure automation tools like HashiCorp Terraform, Red Hat Ansible, Chef, and Puppet all help accelerate the creation of infrastructure environments and ease their ongoing configuration management. Rather than having operators individually provision and configure environments via ticketing requests, organizations can standardize configurations and package them in shareable templates for repeatable use.
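
To make the idea of a shareable template concrete, here’s a minimal sketch in Python. The `EnvironmentSpec` structure and `provision_environment()` function are hypothetical stand-ins, not the API of Terraform, Ansible, Chef, or Puppet; the point is that an environment’s configuration becomes a reusable, versionable artifact rather than a ticket.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentSpec:
    name: str            # e.g. "staging" or "production"
    region: str          # where the infrastructure should live
    instance_count: int  # how many application servers to run
    instance_size: str   # machine class, e.g. "small" or "large"

# Standardized templates that any team can reuse instead of filing a ticket.
STAGING = EnvironmentSpec(name="staging", region="us-east-1",
                          instance_count=2, instance_size="small")
PRODUCTION = EnvironmentSpec(name="production", region="us-east-1",
                             instance_count=6, instance_size="large")

def provision_environment(spec: EnvironmentSpec) -> None:
    """Stand-in for the IaC tool that turns a spec into real infrastructure."""
    print(f"Provisioning {spec.instance_count} x {spec.instance_size} "
          f"instances in {spec.region} for '{spec.name}'")

provision_environment(STAGING)
provision_environment(PRODUCTION)
```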

Having the ability to create infrastructure on-demand reduces friction in the software delivery process. In fact, organizations have started leveraging these infrastructure and configuration management tools as part of their broader deployment pipeline through a process called CI/CD. 

Continuous Integration/Continuous Delivery (CI/CD)

CI/CD, or Continuous Integration/Continuous Delivery, is a philosophy based on consistently iterating on code and immediately deploying those changes through automation. The idea is that by continuously pushing changes, development teams become unblocked in their ability to innovate. As we just covered, making infrastructure available on-demand opens the door for this methodology, supported by tooling like GitHub Actions, CircleCI, GitLab, and many more.

The goal of these software deployment tools is to automate the delivery of new code to different environments, e.g., development, staging, or production. Using infrastructure automation and CI/CD tooling, organizations can push new code into their version control systems, provision a new infrastructure environment, and deploy the changes to that environment with little to no developer or operator intervention. This automation also reduces the chance of human error, such as changes being pushed to the wrong environment. That is how you accelerate software deployments.
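
As a rough illustration of what that automation looks like, here’s a hedged sketch of a pipeline’s stages in Python. Real pipelines are defined in the configuration format of tools like GitHub Actions or CircleCI; the `build`, `test`, and `deploy` functions below are placeholders for your project’s actual commands.

```python
def build() -> bool:
    print("Packaging the application into a deployable artifact")
    return True

def test() -> bool:
    print("Running the automated test suite against the new build")
    return True

def deploy(environment: str) -> bool:
    print(f"Pushing the artifact to the {environment} environment")
    return True

def run_pipeline() -> None:
    # Each stage must pass before the next runs; a failing stage never
    # reaches an environment.
    for stage in (build, test, lambda: deploy("staging")):
        if not stage():
            print("Pipeline stopped at a failing stage")
            return
    print("Deployment complete, with no manual operator intervention")

run_pipeline()
```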

How organizations deploy software

Now that we’ve established what a software deployment is and how we’ve gotten to where we are today, let’s explore what a modern software deployment strategy looks like. Remember that the goal is to create a frictionless process for organizations to push code as it is written, not all at once through a “big-bang” deployment.

Planning and preparing software deployments

The first stage in creating a software deployment process is actually planning what the organizational process will look like. In this stage, we’re trying to define which tools will be used, who the relevant stakeholders are, what the pipeline looks like, how approvals will work, and so on. In many cases, this phase should be focused on optimization and gathering specific requirements for the software development lifecycle. From there, development teams will create documentation around what the proposed workflow should look like.

From a tooling perspective, this is where many proofs of concept are likely to occur. Organizations have to decide what their application architecture will look like (microservices vs. monolithic), where their infrastructure will reside (on-premises vs. cloud), and a whole host of other details. Having a strong deployment strategy is the foundation of achieving that continuous delivery state because you need a way to create consistency across all teams within the organization.

Developing and staging deployments

The next step on the deployment checklist is for developers to start writing their code. Organizations need to have determined what their testing environments will look like and how many environments they intend to maintain, e.g., dev, testing, QA, staging, etc. This is the first entry point for some of our CI/CD solutions. As code is written and gets ready for testing, it needs to be pushed to each of these environments.

Many developers will use tools like GitHub Actions or CircleCI in conjunction with a series of tags to make sure that code gets delivered to the correct location. When new code is published to the version control system, the commit triggers a new build that packages the release and sends it to the requested destination. It’s here that code may also undergo a series of unit tests to ensure application stability before moving to the production environment.
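
Here’s a small, hypothetical sketch of that tag-based routing: the CI system looks at the git ref that triggered the build and picks a destination environment. The ref prefixes and environment names are illustrative, not specific to any one tool.

```python
# Map the ref that triggered a build to a destination environment.
ENVIRONMENT_RULES = {
    "refs/heads/develop": "dev",
    "refs/heads/release": "staging",
    "refs/tags/v": "production",  # any version tag, e.g. refs/tags/v1.4.2
}

def target_environment(ref: str) -> str:
    for prefix, environment in ENVIRONMENT_RULES.items():
        if ref.startswith(prefix):
            return environment
    return "dev"  # unrecognized refs default to the lowest environment

assert target_environment("refs/heads/develop") == "dev"
assert target_environment("refs/heads/release-1.4") == "staging"
assert target_environment("refs/tags/v1.4.2") == "production"
```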

Release and rollout processes

Following QA and testing, the new software can move to the production environment. This step is sometimes referred to as a release, but we’ll explain in a moment how a release is actually slightly different. This is one of the most precarious stages of the deployment process because it’s where external users start to interact with the new code. No matter how much testing you do, there’s always the chance of bugs creeping through. To minimize the potential impact of those bugs, organizations have a few different deployment methods.

Blue/green deployments 

Blue/green deployments are one of the most common deployment methods organizations rely on today. The idea is that you distribute application resources evenly between two environments: blue for the previous version of your application and green for the new one. You then use networking patterns to control access to each environment and test/measure how the new functionality in the green environment performs. If things are going well, you divert more traffic to the green environment; if not, you route everyone back to the blue environment and send the new version back to development, all while avoiding downtime.
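
The sketch below illustrates the traffic-shifting idea in Python, assuming a router you control. The URLs and the 10% starting weight are illustrative; in practice this logic would live in your load balancer or service mesh rather than in application code.

```python
import random

BLUE_URL = "https://blue.internal.example.com"    # previous version
GREEN_URL = "https://green.internal.example.com"  # new version
weight_green = 0.10  # start by sending 10% of requests to green

def route_request() -> str:
    """Pick an environment for a single incoming request."""
    return GREEN_URL if random.random() < weight_green else BLUE_URL

def adjust(healthy: bool) -> None:
    """Shift more traffic to green when it looks good, or snap back to blue."""
    global weight_green
    weight_green = min(1.0, weight_green + 0.25) if healthy else 0.0

print(route_request())   # mostly blue while weight_green is low
adjust(healthy=True)     # green looked good: shift another 25% of traffic
adjust(healthy=False)    # rollback: every request goes back to blue instantly
```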

Canary deployments

Unlike blue/green deployments, canary deployments involve deploying a specific subset of code in a new environment and slowly introducing live traffic to those new capabilities, e.g., introducing a new API for your front-end services. Just as the name suggests, this new feature is the “canary in the coal mine” and serves as an early warning system for a potentially larger problem that could lead to extended downtime. The idea is that by isolating new capabilities and starting with a smaller subset of users, you can control the impact radius when something goes wrong.
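
A minimal sketch of the canary idea, assuming you identify users by a stable ID: a small, deterministic percentage of users gets the new code path, and ramping up (or rolling back) is just changing one number. The hashing scheme and the 5% starting point are illustrative.

```python
import hashlib

CANARY_PERCENT = 5  # start with 5% of users on the new code path

def bucket(user_id: str) -> int:
    """Map a user to a stable bucket between 0 and 99."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def use_new_api(user_id: str) -> bool:
    # The same user always lands in the same bucket, so the canary cohort
    # stays consistent between requests.
    return bucket(user_id) < CANARY_PERCENT

print(use_new_api("user-42"))  # True only if this user falls in the canary cohort

# Ramping up is just raising CANARY_PERCENT once error rates stay flat;
# rolling back is setting it to 0, which only ever affected the small cohort.
```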

Shadow deployments 

If there are major concerns about making new capabilities visible to production end users, then shadow deployments may be right for a particular organization. With a shadow deployment, the new capabilities are released in a parallel production environment, but not made available to users the way they are in blue/green or canary deployments. This allows organizations to test the new version in what will become its live production environment, relying on internal testers to validate performance.
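
Here’s a hedged sketch of the traffic-mirroring pattern often used alongside shadow deployments: production serves every request, and a copy of the request is sent to the shadow stack purely for observation. The URLs are placeholders, and the third-party `requests` library is assumed to be installed.

```python
import requests  # third-party HTTP client, assumed to be installed

PRODUCTION_URL = "https://api.example.com/checkout"   # current live service
SHADOW_URL = "https://shadow.example.com/checkout"    # parallel new version

def handle(payload: dict) -> dict:
    # The real response always comes from the production service.
    response = requests.post(PRODUCTION_URL, json=payload, timeout=5)

    # Mirror the same request to the shadow environment for observation only;
    # its answer is never returned to the user, so a failure here cannot
    # affect real traffic.
    try:
        requests.post(SHADOW_URL, json=payload, timeout=5)
    except requests.RequestException:
        pass  # shadow problems are logged and investigated, not surfaced

    return response.json()
```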

Rolling deployments

If maintaining additional environments, as in canary, blue/green, or shadow deployments, is too burdensome, organizations may opt for a rolling deployment. Rolling deployments are the process of incrementally replacing elements of an application, infrastructure and all. For example, Amazon Web Services (AWS) describes rolling deployments in the context of ECS: when a new version of an application is ready to launch, each old container is retired and one containing the new version of the application is provisioned in its place. Rolling deployments are very similar to the Ship of Theseus thought experiment, in which the ship is slowly replaced over time, eventually becoming something new altogether. Microservice or container architectures make rolling deployments more viable because the application consists of individual parts that are easily swapped out.
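
The following conceptual sketch shows the one-at-a-time replacement loop at the heart of a rolling deployment. The instance list, `start_replacement()`, and `is_healthy()` are hypothetical stand-ins for what an orchestrator like ECS or Kubernetes does for you.

```python
import time

instances = ["app-1", "app-2", "app-3", "app-4"]  # currently on the old version

def start_replacement(name: str, version: str) -> str:
    """Hypothetical stand-in for the orchestrator launching a new instance."""
    print(f"Replacing {name} with {version}")
    return f"{name}-{version}"

def is_healthy(name: str) -> bool:
    """Hypothetical readiness probe for the freshly launched instance."""
    time.sleep(0.1)  # give the new instance a moment to come up
    return True

def rolling_deploy(version: str) -> None:
    for i, old in enumerate(instances):
        new = start_replacement(old, version)
        if not is_healthy(new):
            print(f"Halting rollout: {new} failed its health check")
            return  # remaining instances keep running the old version
        instances[i] = new  # the ship is rebuilt one plank at a time

rolling_deploy("v2.0.0")
```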

Downsides of common deployment strategies

While each of these deployment strategies offers a large number of benefits, they all carry a similar challenge: infrastructure management. Even in the case of rolling deployments, you're cycling through a container lifecycle, which can introduce risk if the new container doesn’t provision correctly. That seems like a lot of risk when you’re just testing a new placement for a banner on your home page. Feature flags and feature management solutions help organizations streamline their rollout strategies without the additional overhead of managing infrastructure. We’ll touch on that again later in this article.

Monitoring and feedback strategies

The final piece of the deployment process is monitoring the impact of changes in real time and gathering end-user feedback, either directly or indirectly. Application performance monitoring (APM) tools like Datadog, Dynatrace, New Relic, or Honeycomb help organizations monitor the status of their software updates and detect any performance issues that could lead to a larger outage. Having a robust monitoring system in place is critical for accelerating the software deployment cadence because it provides a safety net for catching hiccups before they become bigger problems.
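
As a rough sketch of that safety net, the snippet below gates a deployment on a single metric. `fetch_error_rate()` is a placeholder for a query against your APM, and the 2% threshold is an arbitrary example rather than a recommendation.

```python
ERROR_RATE_THRESHOLD = 0.02  # flag anything above a 2% request error rate

def fetch_error_rate(window_minutes: int = 5) -> float:
    """Placeholder for an APM query over the last few minutes of traffic."""
    return 0.007

def deployment_is_healthy() -> bool:
    rate = fetch_error_rate()
    if rate > ERROR_RATE_THRESHOLD:
        print(f"Error rate {rate:.1%} exceeds threshold; investigate or roll back")
        return False
    print(f"Error rate {rate:.1%} is within normal bounds")
    return True

deployment_is_healthy()
```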

Difference between deploy and release

At this point, you may be feeling like a lot of the topics discussed are very familiar. If you’ve read or seen any of our content on decoupling deploy from release, that may be why. And so the natural next question is: what is the difference between software deployment and software release? For a full breakdown, check out this article, where we cover the differences between deploy and release in detail. For brevity’s sake, we’ll define it like this: “software deployment” is the process of pushing new code to an environment, while “software release” is making that code available and accessible to end users.

Feature management for continuous deployment

As mentioned earlier, the challenge with the deployment strategies above is that they all rely on infrastructure management and resources. While this is viable for some larger releases, not every single one needs its own dedicated environment for testing in production. Feature flags allow us to control the release process without interrupting continuous deployment. Code is still pushed as it is completed, but features are made available on an entirely different lifecycle and approval process, with no additional deployments required.
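
Here’s a minimal sketch of what that decoupling looks like in code. `FeatureFlags` is a stand-in for a feature management SDK client, not a specific vendor API; the key point is that the new code path ships to production “dark” and is released by flipping the flag, with no further deployment.

```python
class FeatureFlags:
    """Stand-in for a feature management SDK client (e.g. LaunchDarkly's);
    real clients evaluate flags per user with targeting rules."""

    def __init__(self) -> None:
        self._enabled: set[str] = set()

    def enable(self, flag_key: str) -> None:
        self._enabled.add(flag_key)

    def is_enabled(self, flag_key: str) -> bool:
        return flag_key in self._enabled

flags = FeatureFlags()  # the code is deployed, but nothing is released yet

def checkout(cart: list[str]) -> str:
    # The new flow is deployed "dark" and only reached when the flag is on.
    if flags.is_enabled("new-checkout-flow"):
        return f"new checkout for {len(cart)} items"
    return f"classic checkout for {len(cart)} items"

print(checkout(["socks"]))          # classic checkout: flag is off
flags.enable("new-checkout-flow")   # the "release" happens at the flag level
print(checkout(["socks"]))          # new checkout, same deployment
```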

Experimenting on deployments 

Beyond just the control element, feature management platforms allow us to run experiments to measure the effectiveness of our deployments. Organizations can gather both performance and business insights by running A/B tests within a single application rather than across multiple deployments. Feature management platforms like LaunchDarkly make it possible to integrate experimentation into the release strategy instead of configuring it as a separate process.
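
A simplified sketch of how an experiment can run inside a single deployed application: users are split deterministically into a control and a treatment variant, and a business metric is tallied per variant. The experiment name, hashing scheme, and metric are illustrative.

```python
import hashlib
from collections import defaultdict

conversions: defaultdict[str, int] = defaultdict(int)

def variant(user_id: str, experiment: str) -> str:
    """Deterministically assign a user to 'control' or 'treatment'."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 else "control"

def record_purchase(user_id: str) -> None:
    # Attribute the conversion to whichever variant the user experienced.
    conversions[variant(user_id, "new-checkout-flow")] += 1

for user in ("u1", "u2", "u3", "u4"):
    record_purchase(user)

print(dict(conversions))  # conversion counts per variant, split by user ID
```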

Remediation and incident management

APMs and monitoring tools provide a great safety net, but they don’t alleviate the pain of a rollback process. If an issue with a deployment is detected and the code has to be removed, that essentially requires another software deployment, just with a previous version of the code. Feature management solutions can leverage the monitoring data from APMs, but when problems are detected, a feature can simply be toggled off, no new deployment required.
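
And a last hedged sketch tying the two together: when a monitoring signal crosses a threshold, the remediation is simply disabling the flag rather than redeploying. `fetch_error_rate()` and the in-memory flag set are placeholders, not a specific vendor’s API.

```python
ERROR_RATE_THRESHOLD = 0.05            # tolerate up to 5% failed requests
enabled_flags = {"new-checkout-flow"}  # stand-in for the flag platform's state

def fetch_error_rate() -> float:
    """Placeholder for an APM query scoped to the new feature's traffic."""
    return 0.09  # pretend the new feature is misbehaving

def remediate() -> None:
    if fetch_error_rate() > ERROR_RATE_THRESHOLD:
        # The "rollback" is a flag change, not another deployment.
        enabled_flags.discard("new-checkout-flow")
        print("Feature disabled; previous behavior restored immediately")

remediate()
```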

Conclusion

Software deployments are a challenging but necessary process in any modern organization. Software touches so many aspects of our daily lives, and in order to provide the best experience possible, developers need to be able to deploy quickly and frequently. We discussed the various deployment strategies, but having a strong feature management solution is also critical. If you’re curious to see how others have incorporated feature management into their software deployment strategies, take a look at some examples from IBM, Hireology, CircleCI, and Coder.

Using LaunchDarkly to deploy software faster with less risk

LaunchDarkly is a feature management and experimentation platform that enables 5K+ organizations to dramatically improve their software deployment processes. LaunchDarkly gives product delivery teams the safeguards and control to continuously deliver software. Moreover, it enables you to progressively deliver new features to specific audience segments and/or random percentages of users—all at your own pace. It also enables you to immediately disable broken features at runtime without needing to push a fix through your deployment pipeline.
