Three Ways to Build Release Assurance into a Software Development Life Cycle

The way to feel safe about releasing software is to know exactly what you’ve released, who has received it, and how it’s working. 

LaunchDarkly recently released the results of our survey on psychological safety, and one of the strongest findings was that developers want to change and improve their processes, but don’t want to cause problems when they do.

Release assurance is the state of knowing what version or variant of software a user has received, and what the effect has been. You need to know both parts to have a full feedback loop and understand what changes you may need to make.

You can think of it as the famous Coke vs. Pepsi taste test. For the test to be meaningful, the tester has to know whether the taster is drinking Coke or Pepsi, and the taster has to tell the tester which cola they like better.

In a lot of software scenarios, we are handing someone a single can of cola and asking them if it’s okay. We don’t know whether it’s better or worse for them than any alternative, because there’s no comparison, and we don’t know which one they’re drinking. Even if they tell us it’s okay, we don’t have all the information we need.

So how do we build release assurance into our software development life cycle? Here are three concrete actions to start with:

Know what you’re shipping

When we were pressing software onto compact discs, there wasn’t any question about what the software was and wasn’t. Today, we have a much richer, more interesting, diverse, and complicated software ecosystem, which means we need to ask new questions:

  • Is there an ad-blocker? 
  • Is this a test/beta/white label version of the software? 
  • Is there something unique about the user’s context that might change how the software behaves?

We don’t need to return to the days of unitary software, but we do need to think about how we can identify individual users and cohorts and know which version of our software experience they’re getting. 
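
To make that concrete, here’s a minimal sketch of deterministic variant assignment. It isn’t tied to any particular SDK, and the flag and user names are illustrative; the point is that hashing a stable user key means you can always reconstruct which experience a given user received.

```python
import hashlib

def assign_variant(user_key: str, flag_key: str, variants: list[str]) -> str:
    """Deterministically bucket a user into a variant.

    Hashing the user and flag keys together means the same user always
    lands in the same bucket, so "what did this user get?" is always
    answerable after the fact.
    """
    digest = hashlib.sha256(f"{flag_key}:{user_key}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Record the exposure so you know exactly what you shipped, and to whom.
exposure = {
    "user": "user-1234",
    "flag": "new-checkout-flow",
    "variant": assign_variant("user-1234", "new-checkout-flow",
                              ["control", "treatment"]),
}
```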

Collect objective and subjective data about experience-as-experienced

Pretty much no one who is content with their software experience wants to fill out a form saying that they're content. It adds friction to their day. People who are upset are more likely to comment when we ask them directly. That subjective data is important, but it’s also skewed.

We don’t have to rely exclusively on people taking the time to tell us how they feel. We can also see how they’re using the software. We can observe their usage patterns and, with permission, even see the full process of how they use our software to perform a task.

We can view the load on our servers, the parts of the app that get called, and our response latency. Through these methods, we can build a pretty complete picture of what is happening as individuals and groups use our software. There is no point to observability if we are not using it to observe how people are interacting with what we make.
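
As a rough sketch of what that can look like (the event shape and names are illustrative, not a real analytics API), every objective measurement carries the variant the user actually received, so the data can be sliced by experience:

```python
import time
from dataclasses import dataclass, asdict

@dataclass
class UsageEvent:
    user: str
    flag: str
    variant: str       # which experience this user actually received
    action: str        # what they did, e.g. "checkout.submit"
    latency_ms: float  # an objective measurement of the experience

def record(event: UsageEvent, sink: list) -> None:
    # A list stands in for your analytics or observability pipeline.
    sink.append(asdict(event))

events: list[dict] = []
start = time.perf_counter()
# ... handle the user's request here ...
record(UsageEvent("user-1234", "new-checkout-flow", "treatment",
                  "checkout.submit", (time.perf_counter() - start) * 1000),
       events)
```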

Take action on the information you get

This is the hardest part. It’s difficult to take action on data, not because no one wants to do it, but because the response-collecting parts of a company are often a long way from the parts that can change how the software behaves.

You’ll need to create routes for information about the real-life impact of your software to make it back to the teams that create, tune, or release that software. Without an explicit plan for that, your feedback cycle is less a loop and more a recipe for frustration.
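
One sketch of what “making it back” can look like: roll the tagged events up per variant into a summary the owning team can actually act on. The aggregation below assumes the illustrative event shape from the previous sketch.

```python
from collections import defaultdict
from statistics import mean

def summarize_by_variant(events: list[dict]) -> dict[str, dict]:
    """Roll usage events up per variant so the team that owns the flag
    can compare how each released experience is actually performing."""
    latencies_by_variant: dict[str, list[float]] = defaultdict(list)
    for event in events:
        latencies_by_variant[event["variant"]].append(event["latency_ms"])
    return {
        variant: {
            "events": len(latencies),
            "avg_latency_ms": round(mean(latencies), 1),
        }
        for variant, latencies in latencies_by_variant.items()
    }

# Route the summary to the people who can act on it: the owning team's
# dashboard, channel, or release-review meeting.
```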

Knowing what you release, and who is affected by it, is one of the most powerful tools you can have for product analysis and fit. Being able to tune what you ship gives you access to a broader range of people. In the end, our goal is not to create the ideal software product, but the most useful user experience.

Download our report, "Release assurance: Why innovative software delivery starts with trust and psychological safety," which features analysis from 500 software developers and helps spotlight some of the ways companies are mismanaging releases and missing opportunities with modern development teams.
