Michael is the Delivery Lead for the IBM Kubernetes Service (IKS), which is part of the IBM Public Cloud. He has been with IBM for 23 years. Michael's current passion is managing Kubernetes at scale, and he is a huge proponent of personal and team empowerment. He is married with 4 kids, 1 dog, and 2 cats; needless to say, work is his place for peace and relaxation.
Rich: Welcome to the Trajectory Nano Series. This is the second of four weekly Nano Series sessions leading into Trajectory Live August 26th and 27th. The first talk was last week. We'll post a link to the recording in the chat in case you missed it. But first, some housekeeping. All participants are required to follow the code of conduct. It has been posted into the chat for you to review. Also, please use #trajectorynano when sharing content on social media. If you have any questions during the talk, please post them into the chat. We'll be taking questions after Michael's talk. Rich: Also, I'd like to thank Honeycomb for sponsoring today's talk. Honeycomb is a tool for introspecting and interrogating your production systems. They gather data from any source: your clients, mobile, IoT, browsers, vendored software, or your own code. If you'd like to talk with someone from Honeycomb, they will be available to chat in the Expo tab on the left anytime during or after this talk. You may have already seen in the chat that Honeycomb is also raffling off a NASA Apollo 11 Lunar Lander LEGO set if you visit their booth. Rich: So good morning, good afternoon, good evening wherever you may be, and thank you for joining us today. My name is Rich Manalang. I'm a developer advocate here at LaunchDarkly, and I'm very pleased to be joined today by Mike McKay from IBM. At LaunchDarkly we categorize feature management into four pillars: build, operate, learn, and empower. Feature management is designed to span multiple teams and use cases, all of which are contained in these four pillars. Teams often start with build and gradually work their way to the other pillars, while other teams jump right in and start by using more than one pillar. Today we'll focus on the operate pillar and hear from Mike about how they are using feature management at IBM. But before we begin, let's step back to understand what makes up the operate pillar. 
Rich: During last week's Nano talk, my colleague Dawn Parzych covered the build pillar. We've dropped a link in the chat if you missed that talk. Just to review: the build pillar is all about using feature flags during the build cycle of software development. It's about separating deploys from release, testing in production, and generally how you roll out new versions of your product using feature flags. In the build pillar, most of your flags are going to be temporary, used only during the lifecycle from the development to the release of that feature. Rich: When we talk about the operate pillar, we're really talking about how you might use LaunchDarkly's feature flags in a more permanent way, in a way that allows you to operate your products and services over time. Here are just a few examples of how you might use operational flags. Let's start with the kill switch. You can create flags that operate like light switches for your product. For example, if you're experiencing an incident due to a performance issue, maybe you can trigger the kill switch on a flag that turns off non-essential or compute-intensive parts of your app. This would allow your app to recover without a lot of fanfare. Rich: You can also think of this as a circuit breaker. A circuit breaker is that thing on the wall in your house or apartment that manages the distribution of power throughout your house. If something goes haywire, a circuit trips and prevents the possibility of losing all your electricity, or worse, maybe even causing a fire. Well, that same analogy works in this case. Applying this to feature management, you can wire up a feature flag to a metric you're tracking elsewhere, like in Honeycomb for instance. If your metric hits a threshold, you can instruct LaunchDarkly to trigger the kill switch. This is an example of reactive monitoring using your service's metrics. Rich: Another example is dynamic configuration. 
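[Editor's note] The metric-driven kill switch described above can be sketched in a few lines. This is a hypothetical illustration, not the LaunchDarkly SDK: the `FlagStore` class, the `recommendations-enabled` flag key, and the latency threshold are invented stand-ins for a real flag service and a real monitored metric.

```python
class FlagStore:
    """Minimal in-memory stand-in for a feature-flag service."""
    def __init__(self):
        self._flags = {"recommendations-enabled": True}

    def variation(self, key, default=False):
        # Look up the current value of a flag, with a safe default.
        return self._flags.get(key, default)

    def set_flag(self, key, value):
        self._flags[key] = value


def check_circuit_breaker(flags, p99_latency_ms, threshold_ms=500):
    """Reactive monitoring: if the tracked metric crosses its threshold,
    trip the kill switch so the non-essential feature turns off."""
    if p99_latency_ms > threshold_ms:
        flags.set_flag("recommendations-enabled", False)


flags = FlagStore()
check_circuit_breaker(flags, p99_latency_ms=820)   # metric over threshold
print(flags.variation("recommendations-enabled"))  # False: feature killed
```

In a real setup the metric check would live in the monitoring tool (e.g. a Honeycomb trigger calling a webhook), and the flag change would go through the flag service's API rather than an in-process store.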
In a few minutes you'll hear from Mike about how they use LaunchDarkly to store and manage configurations for their Kubernetes clusters at IBM. And lastly, migrations. Well, sometime in every engineer's career they will need to perform some sort of migration, for example, maybe migrating data, databases, infrastructure, or services. Traditionally this often involves lots of coordination and possible downtime. Well, did you know that you can use LaunchDarkly feature flags to reduce the risk of these types of migrations and possibly perform them much faster and more efficiently? Rich: Well, these are just a few examples of the operate pillar. And every day we're learning more from our own customers, you, about how you use LaunchDarkly to help you operate your business. So with that, I'd like to introduce Mike McKay to talk to us about how IBM uses LaunchDarkly. Take it away, Mike. Michael: Hi, my name is Mike McKay. I'm from the IBM Kubernetes Service, and today I'm going to talk about our journey towards progressive delivery. Michael: Just a quick background about who I am. I've been with IBM for 24 years. I started off doing operations with SAP deployments. I did that for about five years, and then my career took me to doing product development in IBM Tivoli. And then finally, for the past seven years I've been doing IBM cloud development; specifically lately it's been the IBM Kubernetes Service. Previously I had worked in the DevOps organization. We have an offering which provides DevOps services for the IBM cloud as well. The reason why I bring this up is that my journey overall these past 24 years kind of spans how we build and operate applications for customers, how we actually develop code, how we test code, the full product lifecycle around a piece of software. And what's interesting is seeing those pieces come together and be used to drive and basically build and operate our own cloud services in the IBM cloud. 
Michael: And then finally, I do like to mention that I've got four kids. Actually this is a misnomer. I actually have three cats now and one dog. And the reason why I mention this is that every time I have an issue with code, or things get very complex, or it seems like there's just no end to the complexity or issues, I just think about the trips that we take to Florida with four kids in the car, and then everything else just becomes ... seems easy after that. Michael: A little bit of background about the IBM Kubernetes Service itself. We are currently operating in six regions across the globe in over 35 data centers. Just as important, our development staff is spread across the globe as well. What this means is that not only did we have to figure out how to scale from a technology perspective, but also how to scale from a culture and HR perspective. So we have various teams building various different microservices that all need to be deployed globally across the world, really around the clock. So our development process never really stops, which presents its own set of challenges as well. Michael: So having given you the background about myself and about our service, I just want to go through a little history of how we actually got here and how adopting progressive delivery helped us get here. It all started about four years ago. I don't want to say it started in a garage, because it didn't, but it did start with a small group of folks here in Raleigh. We had a small team, and we had one data center in Dallas that we did all of our deployments to. All of our development and test and production environments were in that one data center in Dallas. At the time we had about 13 clusters just to manage our entire service, and we probably just had a handful, maybe a few hundred, different clusters that we managed. Michael: At the time we didn't really know what we were going to be in store for four years later. 
Even initially when we first started this project, it wasn't even based off of Kubernetes. It was just a generic container service where customers could come to our service and say, "Hey, I'd like you to go run 10 instances of this particular image," and we'd happily go run it for you. But then around that same time was when Kubernetes really started gaining popularity, and that seemed to be the direction the whole industry was headed. So we also jumped on that bandwagon. And here we are today, supporting tens of thousands of clusters, again, spread across the globe, with over 100 or so clusters just to manage our service. Michael: Four years ago we also had a different mentality of how we built and operated our service. We still had this mindset of we're building a product. With that mindset, we tended to think of the service as one big monolithic piece: we'd build that piece, we would test it together, we would promote that same big monolithic service from our development to our test environments to our production environments. Not only was it time-consuming, but we also knew that was not going to scale. Michael: And then finally, what we were doing is we were delivering features alongside delivering code. So that meant that typically we'd do deployments onto our production environments every Tuesday and Thursday. During that deployment process we would actually be releasing new features to our customers. So if a customer came in at 2 p.m. on a Tuesday and then at 2:30 they refreshed their screen, they might see brand new buttons, brand new capabilities. All that was basically rolled out to our entire customer base every Tuesday and Thursday with really no controls in place, and if something went wrong, how we rolled it back, or how we could just test this on a smaller population, we had none of that at the time. 
Michael: The first thing we did was to realize that hey, in order to expand beyond Dallas, we needed a better way to build and control our code deployments themselves. Interestingly enough, we looked at LaunchDarkly and we realized, "Hey, there's actually some potential here." LaunchDarkly provided us the ability, across many different platforms (or in our case, many different clusters), to provide a set of keys and values. And these values were basically the versions of the microservices that we needed to run on all these clusters. Michael: What's interesting is that our first use of LaunchDarkly was not the traditional sense of feature flags. At the time we just used them as a way to build some rules and deliver the values of those rules out to our clusters, again, to determine which versions of our microservices got run on those clusters. The big architectural difference between our old build process and our new build and deployment process was moving away from a push-based model, which utilized the Jenkins application to not only build our images and build our microservices but also then push them out to all those environments as well. So every time we wanted to do a deployment to any Kubernetes cluster, we'd have to run a set of Jenkins jobs to make that happen. Michael: As part of our redesign in phase one, we moved towards a pull-based approach where each and every cluster in our environment is now responsible for talking back to LaunchDarkly, understanding which versions of microservices should be running, and doing the deployment on those clusters themselves. So this greatly increased our scale and our capability to deliver code quickly, not only to our hundred or so control plane clusters, but to the tens of thousands of clusters that we manage for our customers as well. Michael: The next step we took in our journey: we had this really cool technology called LaunchDarkly managing our builds and where our code got deployed. 
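[Editor's note] The pull-based model Michael describes, where each cluster polls for the microservice versions it should run, treating the cluster itself as the "user", might look like the following sketch. This is a simplified stand-in, not the actual razee or LaunchDarkly implementation; the rule format, the `cluster-lock` rule (the lock flag Michael mentions later in the Q&A), and the function names are all hypothetical.

```python
def desired_versions(rules, cluster):
    """Evaluate per-cluster targeting rules to a {service: version} map.
    A rule maps a specific cluster name or a region to a version,
    falling back to a default."""
    versions = {}
    for service, rule in rules.items():
        versions[service] = rule.get(cluster["name"],
                            rule.get(cluster["region"],
                            rule["default"]))
    return versions


def reconcile(rules, cluster, running):
    """One agent loop iteration: compare desired vs running versions and
    return the deployments to apply. A 'cluster-lock' rule suppresses
    all automatic updates on targeted clusters."""
    lock_rule = rules.get("cluster-lock", {"default": False})
    if lock_rule.get(cluster["name"], lock_rule["default"]):
        return {}  # cluster is locked: no automated deployments
    desired = desired_versions(
        {k: v for k, v in rules.items() if k != "cluster-lock"}, cluster)
    return {svc: ver for svc, ver in desired.items()
            if running.get(svc) != ver}


rules = {
    "api-server":    {"default": "2.4.1", "us-south": "2.5.0"},
    "ingress-agent": {"default": "1.9.0"},
    "cluster-lock":  {"default": False, "prod-critical-01": True},
}
cluster = {"name": "cluster-42", "region": "us-south"}
running = {"api-server": "2.4.1", "ingress-agent": "1.9.0"}
print(reconcile(rules, cluster, running))  # {'api-server': '2.5.0'}
```

The key design point is the inversion: instead of a central Jenkins job pushing to every cluster, each cluster runs this loop against rule values it pulls from the flag service, so scale comes for free as clusters are added.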
Then we actually started integrating this into our operational control plane. So we have various things like whether our clusters themselves are locked for changes, or how many instances of a particular microservice we're running in a particular environment. We started utilizing feature flags to control those various operational features of our environment. At the time it was just a handful, so it was really a small team in Raleigh that was using it. This was the only team in the entire service that was using feature flags at the time, beyond using them for our deployments. Michael: So basically we'd built this big complex machine, which is the IBM Kubernetes Service, and we just wanted a simpler way to go through and tweak and control various bits and pieces without having to update configuration on all these clusters and/or redeploy applications just to get these settings changed. So this is really, I'll say, our first foray into actually using feature flags as they were intended to be used. Michael: Then came phase three, which is kind of what I call our growing-up period. This is where we went from really just kind of the Wild Wild West, where we had developers rolling out code without really any oversight. We weren't really sure which changes were going to the environment. The one thing we needed to do was to put some kind of, not really guard rails, but just some auditability into our process, really understanding what kind of changes we're making to the environment and getting that under control. Michael: So what we did in phase one is we made it really, really easy to deploy, and that was great. 
The problem though is that because it was so easy, we, again, lost a little bit of control over understanding when changes were taking place, whether they were taking place at the appropriate times, and if a change did have an impact, could we go back through and see what changes occurred, why they occurred, and was anyone aware of any other circumstances going on in the environment when that happened. Michael: As part of phase three we built our own tool. We actually call it razee flags, and this incorporates our ServiceNow change management system into our code delivery process. So anytime you wanted to change a LaunchDarkly flag, to say bump a new version of a microservice out into the production environments, you could go to razee flags, find the feature flag that you wanted to change, request that change, and supply some information such as any tests that were run, the type of change that it was, which environments it's going to impact, and any backup plans. All that then is input into this change request, and the razee flags application itself would open up the ticket through ServiceNow and wait for that ticket to get approved. And once that ticket got approved, it would allow the user to actually make the change to the flag. Michael: Now the really cool thing about all this is that LaunchDarkly has actually seen how we do this, and they've incorporated it now directly into LaunchDarkly. One of our goals for this year is to actually sunset this application and just utilize the capabilities now built into LaunchDarkly to integrate with our change management system. Michael: Finally, phase four. This is, I guess, what I would call our first true feature flags. Up until this point, our feature flags were never really what you read about when you hear "feature flags." 
Either we were using them for our deployments or we were using them for operational flagging, and in both cases, when you look at the LaunchDarkly interface and you click on any of our users, they weren't really users. They were Kubernetes clusters. So this is the first time we started actually using feature flags to control features for our actual customers and users. Michael: So we had all of our teams and all the squads across the globe starting to use feature flags. And the problem is that everyone started using feature flags, which is a good thing and a bad thing. It was a great thing because we now had the capability to enable and disable features per user or per segment of users. We could do things like have kill switches, and we then became much more confident about delivering code, because now we could actually truly enable or disable that new set of code that we just delivered via the feature flags. Michael: The problem we had, though, is that we had lots of feature flags. A lot of times we would create a new feature which spanned multiple teams. And what would happen is multiple teams would end up creating their own feature flags. So now, in order to deliver one big feature, we would have to go through and turn on five or six different feature flags, and associate those feature flags with the appropriate people and segments. While this was very helpful and really got a lot of teams experienced in using feature flags, we still had a bit of a maturity step to take before we really started using feature flags optimally. Michael: Finally, what I call phase five is feature flags done right. This is taking everything we've been doing for the past three years, since we started out on our journey here, and putting a little bit of organization and thought behind our feature flags. 
So now, instead of just having all these teams arbitrarily create their own flags and control their own code and features, we started thinking about feature flags before we actually started writing code. So when we start thinking about new features, from the inception point where offering management is thinking, "Hey, it would be really cool if we had this really cool blue button that, when you clicked it, launched a set of fireworks in the back end," this is the case where this feature is going to impact several teams. So from the get-go now we create one feature flag that helps control that feature across all the teams. Michael: The cool thing about this is that now the number of feature flags that we have to manage is much smaller and more manageable. One flag actually controls features across a variety of different microservices and components in our service. So now we can have one feature flag that will turn on capabilities in our UI, will enable new APIs to be accessed by the user, and will also enable the command line interface to have more capabilities as well. Michael: Now, when we roll out a feature, instead of tweaking five or six different flags, we can just control one, associate a segment with that one flag, and then on top of that we can also see how the flag's being used. So part of our maturity was standardizing the flags across the organization, but also using additional capabilities in LaunchDarkly to help understand how using the feature flags actually impacts the user's experience through the product. So using the analytical capabilities of LaunchDarkly, we can now see, when certain flags are set, how the user's behavior changes, and we can then control feature development or the direction of that feature based off that information. Michael: Finally, some closing thoughts. 
I guess when we look back at the four years we've had, and where we started from and where we're at today, what we realize is that there's a lot we still don't know. There are a lot of improvements we plan on making in order to move towards progressive delivery. I think we've actually made a really good start in terms of where we were, where we're at today, and where we're heading. There are a lot of new capabilities in terms of what new features we can enable through feature flags, and how we can get that to span across more pieces of the IBM cloud itself. Michael:
So today, with the IBM Kubernetes Service, we've used these feature flags quite extensively. Almost all new features that we deliver are feature flagged before they go out the door. In the future, we want to expand that to include more services. As we come up with services that span across IBM Cloud, we may have new features that span, for example, IBM Cloud Object Storage, the IBM Kubernetes Service, and our new IBM Satellite offering; how do we organize and get the rest of the IBM cloud platform to share the same feature flag experience that we have? Michael: If you'd like to reach out to me, I've provided my Twitter handle here, as well as my email address. But I've also put a link to our razee-io project. In phase one I talked about how we do our delivery process. We actually open sourced that last year. So if anyone's interested in more information about how we do our delivery in the IBM Kubernetes Service, you're welcome to check out that link at razee.io, or even try it out if you'd like to. Thank you for attending. Rich: Thanks for that, Mike. That was a really good talk. Wow, it's been four years since you started this journey with LaunchDarkly. That's quite a feat, especially since you're only now starting to use LaunchDarkly feature flags the way most people use them today. Rich: We're now in the Q&A portion of this talk. If you have any questions for me or Mike, please throw them in the chat on the right. Make sure to ask them in the stage chat. I guess I'll start. I have a burning question for you, Mike. Michael: Yep. Rich: What I want to know is how you're getting by with working from home with four kids, two cats, and a dog? Michael: I basically lock myself in the basement. Although I tease people and say, "I'm stuck in the basement," as you can see behind me, it's not really that bad of a thing. When I first started it was upstairs, and it sounded like I was working from a shopping mall. 
So I've just kind of learned to consolidate myself down here by myself for eight hours a day. Rich: Excellent. Yeah, I had that problem too. If you hear a dog start barking, it's because my mailman's here. Yeah, working from home during COVID times, right? Our first question coming in today is from Dan O'Brien, and he asks: What led you to LaunchDarkly, since you didn't use it for regular feature flags to start? Michael: I'd say that's a pretty interesting question. Before I started working with the IBM Kubernetes Service, I was in the IBM DevOps organization. Part of my job was looking at various different developer tools, things like feature flags and build tools and analytical tools and things like that. And one of the tools that we had looked at was LaunchDarkly. At that point we were just kind of investigating how customers use it and how we could actually incorporate it. Michael: Then, when I switched jobs to the IBM Kubernetes Service, I was tasked to basically help rebuild the whole CI/CD pipeline. It's kind of weird. You kind of have these two completely opposite ideas in your head. You have feature flags, and you have this problem of how do we build and deploy software across hundreds if not thousands of environments. And somehow it just clicked, where I thought, "Hey, this would be really cool if we could use feature flags and treat each of our clusters as users," and kind of the rest is history. Rich: Yeah, so you kind of backed into it and started in that operate pillar right away before even really thinking of feature flags as for controlling features. That's pretty cool. Next question comes from Elain Menchaca, and the question is: Do you use permanent feature flags? If yes, how and where? Michael: So it's an interesting question, because we always kind of scratch our heads and ask, what is a permanent feature flag? At one point the folks at LaunchDarkly did answer that question for us. 
So we're getting a bit better about it. Most of our deployment flags we consider permanent flags, because they never go away; or at least as long as the microservice we're deploying exists, that flag never goes away. Actual feature flags themselves, as we're starting to get into now, I guess we do deem those as non-permanent, because there's a definitive lifecycle to those feature flags. There's the inception, when we think about the new feature we want to deliver, to the point where we've already rolled it out everywhere and we know that we want to get rid of that flag. Michael: So for those we know there's basically a definitive start and end time, so those are not permanent. But most of our operational and deployment flags we do set as permanent. And for us it's more or less a way to sort through a list of feature flags. We can just use that additional metadata to help filter through the multitude of flags that we have. Rich: Got it. Yeah, makes sense. Next question comes from Clark Gates George, and he asks: What are the most interesting potential or existing operational flag use cases you have in place today, and do you plan to add more use cases in the future? Michael: One interesting thing, and I've done talks about this in the past: we use feature flags to deliver code out to all of our environments. And one feature flag we have kind of works against all the other feature flags that we have for deployment, which is to disable deployments on clusters. So we call it the cluster lock flag. And what this does, since all of our deployments are automated and driven via feature flags, is give us the ability to basically say, "Hey, don't make automatic updates on this particular cluster." So we can target a cluster, or we can target a set of clusters based off rules, and say that these clusters are locked, therefore no deployments happen to them. 
Like I said, it's interesting because we have one flag saying, "Hey, I want to do this automated rollout," and we have this other flag that says, "No, you don't want to do it. We want to basically stop all updates on this particular cluster." That's one example. Michael: One other example is kind of a cool thing we learned as we were scaling out our solution. Most of our clusters, I mean, we're talking tens of thousands of these things, would start reporting information back to our API service, and they would do so on a regular interval, like every five minutes. So every five minutes we'd just get tens of thousands of API calls. So we created an operational flag that we could deliver to various clusters to say, "Hey, just pick a random time between two to three minutes and then send the data back to the API server." And then we were able to roll that out across regions and kind of control how all of our agents basically flood our server, and have a bit more control over it. Michael: So that's actually, I think, one of our cooler things, one of the things that was fun to implement, because we could actually see that change happen in real time with some metering graphs that we had in place on our API. Rich: That's kind of like using it as a jitter, control the jitter on- Michael: Pretty much, yes, yeah. Rich: Right. That's pretty cool. Michael: Yeah, we got stuck in this weird thing where, like I said, every five minutes, no matter what we did, they would all just kind of synchronize on this five-minute interval. So yeah, we used that feature flag to help control that. Rich: Yeah, that's a big problem for a lot of real-time applications. I used to work on a really popular chat application once, and whenever we'd go down and come back up, we could never make it back up, because all of those clients would start hitting us at exactly the same time, until we figured out, "Oh, you know what? Maybe we should add some jitter to this." So yeah. 
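[Editor's note] The jitter flag Michael describes, telling each agent to pick a random delay in a window instead of reporting on the same five-minute beat, might be sketched like this. The flag shape and field names are hypothetical, with the flag read as a plain dictionary rather than through a real SDK.

```python
import random

def next_report_delay(flag, rng=random):
    """Return the seconds an agent should wait before its next status report."""
    if flag.get("jitter-enabled", False):
        lo, hi = flag.get("window-seconds", (120, 180))
        return rng.uniform(lo, hi)   # spread agents randomly across the window
    return 300.0                     # legacy fixed five-minute interval


flag = {"jitter-enabled": True, "window-seconds": (120, 180)}
delay = next_report_delay(flag)
print(120 <= delay <= 180)  # True: this agent desynchronizes from the herd
```

Because the flag is rolled out per region, the window can be widened or the feature switched off remotely while watching the request-rate graphs, which matches the real-time feedback Michael mentions.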
Michael: Now you're going to leave me guessing what chat application that is, but that's another talk. Rich: Yeah, that's another talk. Another question coming from Jacob: Do you still have feature flags that depend on each other? Michael: I would say no. We've gotten to the point right now where, instead of having dependent feature flags, we just have one flag that will control an entire feature. When we first started using feature flags, I think we did try to do some flags where we'd create a high-level flag and then create sub-flags below that. But I guess, either the way our features have been rolled out or the way we're using them, it's not as useful as we thought. Primarily, we just have one big flag at the top of every feature, and that's how we control the feature. Rich: Got it. And is that primarily because of visibility, maybe? Like not being able to easily know what's dependent on another flag? Michael: I think it's more or less just simplicity for us. Rich: Okay. Michael: Just having one flag to control an entire feature. I mean, we have some fairly big features going on now, but we never really say, "Hey, I want to have this small component of a feature be enabled independently of the main feature." If those use cases come up, we'll certainly use those, but as of right now we basically have monolithic features that we're rolling out. Rich: Got it. Yeah, keep it simple. Next question from Steven Lee: Can you expand on the shift from phase four to five, where you bundle multiple updates into a single feature flag? When you're rolling out a new update that could disrupt existing features under the flag, how do you manage your risk, since the update can't be turned off with a flag? There's two questions there, but like, yeah. Expand on four and five. Michael: I'm actually trying to absorb this question here for a second. 
So when we talk about rolling out features, that's different than rolling out code. Whenever we do roll out code, we still roll it out in small incremental changes. So even though the feature is not fully baked, we do know that the code we roll out is production ready and fully baked. Usually by the time we do turn on big features, they have been tested, and they have been tested quite a bit before we got there, by folks in several different environments as well. So just because we're kind of bundling these things together into bigger flags and bigger updates doesn't mean that we are just rolling out bigger untested code changes. Rich: Okay. Michael: Not sure I answered the question or not, but ... Rich: Yeah, yeah. I mean, I think that's one way to totally decouple it. Along similar lines, I have a question myself. One of the things you talked about in your presentation was that in phase four you had this problem of lots of feature flags. I kind of want to know a little bit more detail about how you guys are really tackling that problem. Like, you have a limited set of users in LaunchDarkly, right? Michael: We do. But we still have quite a few users. I mean, initially in phase four, that's really when everyone got really excited about feature flags, and I feel it was kind of the term du jour. So every team wanted to do feature flags, and even though they were working on the same feature, everyone basically had their own separate feature flag for that feature. So our API, our UI. And what happened is that we still kind of ended up in the same problem we had before, where we had to coordinate releases and coordinate feature rollouts between these teams. But this time, instead of delivering code, we were just synchronizing and coordinating enablement of feature flags. 
Michael: Cutting the number of flags that we have in half and basically rolling out larger flags which encompass more than one component, again, going back to the earlier point, just greatly simplified things. Now, even though we have fewer flags, I would honestly say this is where we should have started from the beginning. And I think our phase four was more kind of just our inexperience with using feature flags, and just a lack of coordination amongst the teams to really set some kind of ground rules for how we do feature flags. Rich: Yeah, and that's obviously a problem I think a lot of customers have. I mean, we have it ourselves here at LaunchDarkly. We started using our own system to build our product without a whole lot of process and guidance in terms of how we should be using it. We didn't have naming conventions or things like that. You look through our list of flags, we've got a lot of them, and we're slowly evolving to kind of take on that challenge of improving that process. Michael: It's definitely one of those cases where, yeah, we got the shiny new toy and everyone wanted to play with it, and everyone did play with it. And then we basically had to make people share that toy. Rich: Yeah. Yeah, makes sense. Another question here from Rich Crook, and his question is: What tips and/or challenges can you give to companies just starting along this process? Michael: I would actually say think about what feature flags mean. I think a lot of people have in their heads a very simplistic idea of what a feature flag is, but they don't really put much effort into how you actually operationalize using flags. That's exactly the problem we had, and it definitely shows up in phase four and phase five, because again, we had this really cool tool, but introducing it into the entire organization meant kind of setting standards. 
For example, something as simple as a naming standard for flags is very important.
Michael: Adding labels to flags makes them easier to organize and filter through, and assigning owners to flags is also very important, because you'd be surprised how many times you wake up, see 300 flags, and don't know what half of them do. Having owners can definitely help provide some transparency about what these flags were designed for. One thing we do now that we didn't do in the past is use flag metadata to associate each of our flags with a GitHub repository, so we know exactly what that flag is used for and what team is responsible for it.
Rich: That makes sense. That's pretty cool. Do you guys use code refs?
Michael: We don't yet, partly because we use GitHub Enterprise, and up until about a month ago we didn't realize that you could use the command-line tool without pushing that code to LaunchDarkly. That was a big no-no for us. Now that we understand the command-line tool allows us to scan the repositories without actually pushing our code to LaunchDarkly, it's something that's on our plate to look at.
Rich: Yeah, that'd be cool, because you'd know instantly which repositories a flag is in.
Michael: Yeah.
Rich: Because it could be in multiple repositories, right? Another question just came in; it's a second question from Elain: Have you seen performance issues within a transactional flow with feature flags?
Michael: We have not. We really like the architecture of LaunchDarkly: all of the evaluations are done in the SDK itself, so not every rule or flag evaluation requires a round trip back to LaunchDarkly. Even when LaunchDarkly is down, we can still evaluate rules. Now, we definitely probably abuse that more than other customers.
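Michael's point about SDK-side evaluation can be sketched with a toy stand-in: a local store that keeps flag rules in memory, so each check is an in-process function call rather than a network round trip. This is an illustrative sketch only, not LaunchDarkly's actual SDK or API; all names here (`LocalFlagStore`, the flag keys, the rule lambdas) are hypothetical.

```python
import time

class LocalFlagStore:
    """Toy stand-in for an SDK that holds a local copy of flag rules."""

    def __init__(self):
        # In a real SDK, rules would be synced once (e.g. over a streaming
        # connection) and then evaluated entirely in memory.
        self.flags = {
            "new-dashboard": lambda user: user.get("beta", False),
            "bulk-migration": lambda user: True,
        }

    def variation(self, key, user, default=False):
        rule = self.flags.get(key)
        if rule is None:
            # Unknown flag: fall back to the default, which is also what
            # keeps evaluation working even if the flag service is down.
            return default
        return rule(user)

store = LocalFlagStore()
user = {"key": "cluster-42", "beta": True}

# A million flag checks inside a tight loop: cheap, because nothing
# leaves the process.
start = time.perf_counter()
enabled = sum(1 for _ in range(1_000_000) if store.variation("new-dashboard", user))
elapsed = time.perf_counter() - start
print(enabled, f"{elapsed:.2f}s")
```

The design choice this mirrors is that the decision logic lives next to the caller, so flag checks scale with CPU rather than network latency, and a service outage degrades to defaults instead of failures.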
We've found a few cases where we do a lot of our feature flag checks within, say, a for loop, and we'll have literally millions of flag checks for some of these flags, and we've noticed no performance issues there.
Rich: Yeah, that's awesome. I think I have one more question from me. The natural question here is: is there a phase six?
Michael: Honestly, if you'd asked me six months ago, I wouldn't have even known there was a phase five. It's one of those things where you just live and learn. One of the things we're slowly getting into, which I didn't cover much in the talk, is combining analytics with feature flags: taking environmental factors and using some new capabilities LaunchDarkly has for changing flag rules based on external triggers.
Michael: One of the things we definitely want to move toward is integrating our ServiceNow instance with LaunchDarkly. Today, as I mentioned in the talk, we have a separate application we've written for that. So phase six is probably getting rid of some of the stuff we built and incorporating the native LaunchDarkly features.
Rich: Got it. I was halfway kidding there, but I really would love to hear, from someone who kind of backed into LaunchDarkly, what you would like to see us work on next. The workflow stuff we've talked about in the past regarding ServiceNow approvals, which you already built, we're building into LaunchDarkly now. I think that's going to help you, and it'll help a lot of customers that have change management issues to deal with in their process. But are there other things you've thought about using LaunchDarkly for that you haven't been able to do, because it just isn't there or you'd have to build it out yourself?
Michael: Not so much a use case.
My biggest ask from LaunchDarkly right now is a mobile interface, because you'd be surprised how many times I'll get a call on a Saturday afternoon when I'm out and need to make a change to a feature flag. Right now, honestly, the interactions with LaunchDarkly on a mobile device aren't that great. Being able to change variations, adjust associated rules, or just make quick changes to flags would be awesome.
Michael: That being said, we're always being introduced to new problems and challenges. The cool thing about LaunchDarkly is that it's kind of like a Swiss army knife: you can use it for many different purposes, as we've proven. But I'm sure in 2021 we'll be talking about phase six.
Rich: Yeah, hopefully so. Well, I think we're just about out of time, and I want to give our audience a chance to visit the folks sponsoring this, Honeycomb. Go over to their booth for the raffle. We're going to end this part early. Again, thank you, Mike. It was a pleasure chatting with you today. And thanks again to Honeycomb for sponsoring this talk. And thank you for tuning in with us today.
Rich: Next week my colleague Yaz Graham will be presenting the learn pillar, joined by our featured guest Claire Knight from GitHub. If you have any other questions, feel free to ask them on Twitter using #trajectorynano or tagging us at trajectoryconf. If you'd like to chat with us live, please visit our docking station; the URL will be pasted into the chat. That concludes today's talk. Until then, have a wonderful rest of the week. Thanks.
Michael: Thank you very much.