
Shipping and Learning Fast via Feature Flags

Dr. Claire Knight, GitHub

Feature flags are a great way to enhance the speed with which you can collaborate with colleagues, develop in production, and learn from real-world usage. Using feature flags to enable things selectively for users is an established pattern. Now, take it a step further and use them to enable fast iterative shipping for your development team. I'll talk about how we've unlocked the ability to use production to safeguard production at GitHub, as counterintuitive as that might sound!


Dr. Claire Knight

Dr. Claire Knight is a remote (yep, even pre-COVID) Senior Software Engineering Manager who has worked in many areas of technology over the years. She has served plenty of time in the coal (code) mine before making the move into also wrangling folks rather than just bits. She is currently working at GitHub, where she helps devs all over the world do their best work. Claire lives in Berkshire, UK, with her husband Steve and three cats, who from time to time also like to be involved in video calls. When not working, she likes to lift heavy things, only to put them down again.

Yoz Grahame:

Hello, and welcome to the Trajectory Nano Series. This is the first of four weekly Nano Series sessions leading into Trajectory Live on August 26th and 27th. Before we get into this week's topic, some important details for you. All participants are required to follow the code of conduct. It has been posted into chat for you to review. Please use the hashtag trajectorynano when sharing your content on social media. We'll be taking questions after Dr. Claire's talk. If you have any questions during the talk, please post them into the chat. 

Thank you to Code Climate for sponsoring today's talk, which is all about using feature flags to get fast, actionable feedback about deployed code. What about feedback before deployment? I've long been a fan of Code Climate's automated quality checks. They'll alert your team to all kinds of problems in your code before it's deployed. But now Code Climate's Velocity system can identify problems in your process too. It integrates with your existing tools to gather all kinds of data about ongoing work. When it comes to making decisions, intuition isn't enough; you need data. And that's why we're so glad that Code Climate has sponsored today's talk, which is all about better ways to get data for decisions.

At LaunchDarkly, we've identified four key pillars of feature management: build, operate, learn, and empower. Feature management is designed to span multiple use cases, all of which are contained in these four pillars. Two weeks ago we talked about the build pillar, which is about separating deploy from release, targeted rollout, canary launches, that sort of thing. Last week we looked at the operate pillar, talking about using flags in long-term operational roles: kill switches, circuit breakers, and dynamic reconfiguration of services. Today, we'll focus on the learn pillar, which is about gathering data to make your releases more successful. Dr. Claire Knight will tell us how GitHub uses feature management to learn how well code changes perform before they're rolled out to all of their users.

Many teams use agile methodologies to deliver at a rapid pace. But agility is not just about shipping faster; it's also about getting feedback so that you can tell if you're shipping the right thing. You want to move faster, but that could just make things worse if you're headed in the wrong direction. Maybe your new feature isn't what users want. Maybe they want it, but it has usability problems. Or maybe they love it as it is, they use it lots, and then your infrastructure buckles because the backend code needs optimization. You want to find these problems before you release to everyone. You can use feature flags to achieve both faster shipping and clearer navigation, so that you learn about the important problems before they become showstoppers.

Learn covers feature flag use cases that enable teams to continually learn from their software and users. In the learn pillar, developers, DevOps engineers, product managers, and others use feature flags to conduct beta tests more seamlessly and gain feedback from real users early in the development process; to run multivariate experiments on any new changes, whether they're front-end features, new algorithms, or even large infrastructure changes; and to set baseline metrics that compare the performance of one feature variant to another, while also measuring the impact of certain features on system performance. That's the theory, but how do big engineering organizations put it into practice?

So today we have Dr. Claire Knight joining us from the UK. So if you have any trouble with my accent, I'm afraid it won't get much better from here. She's a senior engineering manager at GitHub, and she's here to explain how they use feature flag techniques, such as testing in production, or as they call it, staff shipping, and traffic replication, to get early insight into how well new code performs. Her presentation is 20 minutes long, followed by a live Q&A session. Feel free to post your questions in the chat while she's talking. We'll address as many as we can. Take it away, Dr. Knight.

Claire Knight:

Hey folks. I'm coming to you today from the UK to talk to you about shipping and learning fast via feature flags. We already know that feature flags are a really great way of selectively targeting users. It's a well-established pattern. LaunchDarkly is one way to do that if you don't have other ways, and they have some great documentation on their website. I want to talk to you today about some of the things we do at GitHub that go beyond delivering to production and then releasing to your users. I want to talk to you about how you can collaborate with colleagues, develop in production, and learn from real-world usage too.

These are all things that can potentially accelerate the speed with which you can put something into production in a safe way without needing to rely on kill switches. I'm going to talk about two ways you can do that through this talk. I'll give you a little bit of background to set the scene for those of you who are not super familiar with feature flags, set that against how we do things at GitHub with what I would call usual feature flag usage, and then talk in more detail about these other things. It may sound counterintuitive, but really it's not, I promise.

So who am I? I'm Claire Knight. I'm based in the UK and I work for GitHub. I'm an engineer turned engineering manager. I have spent many years coding things: front end, backend, mobile apps, all sorts of things. So I've got quite a lot of experience of things like feature flags and conditional paths in code beyond the if statement. In-app purchases are a form of feature flag, for example: it's just that somebody's paid for something and another user hasn't. When I joined GitHub, I joined as an engineer, so I've worked in the code base and I've experienced feature flags from an engineering perspective. I've definitely experienced them at scale; GitHub certainly has a lot of scale. I'm most experienced with our API, as that's the engineering team that I was on.

Since I've become an engineering manager, I've learned a lot about the breadth that we have around feature flags and the various impacts that has at scale. Feature flags are fantastic for rolling things out, but you also have to be careful: if flags live in the code too long, you can have page load times doubling and tripling over a period of time because the page has to check certain feature flags for things. Especially in a case where they're all enabled, it would be useful to clean those up. Those are the sorts of considerations that I now think about, and they led me to some of the things I wanted to cover in this talk. So I wanted to do a quick overview of feature flags and how we've used them at GitHub, for those of you who might not be aware or not particularly up to speed with feature flags in general.

The first of these is release later, with the very popular Ship It squirrel on the slide there. If something goes into production and you're working with continuous deployment, as we do at GitHub, you may not always want it to be available to everybody at the point that the code is merged and released. One of the things I'm sure you're all aware of is the large initiatives we tend to ship at our conferences at GitHub. For all of those, we have a bunch of features made available and developed over the months prior. And we will often want to staff ship those, which means that we as employees of GitHub get to test these things out, trial them, and deal with some rough edges. We very much eat our own dog food at GitHub.

That's also running user experiments on ourselves, and we're not necessarily the most typical users, so I'll come onto user experiments in a moment. But what this feature flag functionality allows us to do is to get things ready. And then, as the keynotes are happening, as the announcements are being made, we can turn these on for our customers, or beta customers, or whatever we set up. So if you're not using feature flags right now, this would be a really useful way for you to adapt your service-based software to take advantage of these things. The second one is user experiments. I've already said that we staff ship things a lot at GitHub. User experiments take that a step further: finding use cases and users for things that we may not do so much internally.

There are, for example, things around the open source community. We have relationships with maintainers, and we will often talk to them and run experiments with them about their use cases, which are very different from ours running GitHub. We also like to use that to get feedback from larger groups of customers, for example, and to scale it. So this brings me onto the last one, which is scaling alpha and beta testing. We've got a very well-known case of that, which started in late 2018 and then rolled out over the rest of 2019: GitHub Actions. We had a first version and we put it out there. We had a lot of hype, a lot of excitement, a lot of very early users, up until we hit our limits.

And we learnt so much from them, so much so that we were able to make the next version of Actions so much stronger and deliver more for our customers. We couldn't have done that without feature flags. We also learned about operating feature flags at scale, in terms of scaling up to the many thousands of users that were ultimately able to access this code. So it depends where you are in your journey and what type of software you have, but there are some really great starting points and well-trodden paths with feature flag usage. And just a shout out to the folks organizing this, LaunchDarkly: they've got some great documentation on their website about this as well if you need to read up.
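To make the release-later and staff-ship pattern concrete, here's a minimal sketch in Python. The flag store, group names, and helper functions are all illustrative inventions, not GitHub's actual flag system:

```python
# A minimal sketch of "deploy now, release later" with staff shipping.
# Flag storage, group names, and functions are illustrative.

FLAGS = {
    # flag name -> groups the flag is enabled for
    "new_dashboard": {"staff"},  # staff-shipped: employees only for now
}

def enabled(flag: str, user_groups: set) -> bool:
    """True if the flag is on for everyone, or for any of the user's groups."""
    groups = FLAGS.get(flag, set())
    return "everyone" in groups or bool(groups & user_groups)

def dashboard(user_groups: set) -> str:
    if enabled("new_dashboard", user_groups):
        return "new dashboard"   # merged and deployed, but hidden from customers
    return "old dashboard"

print(dashboard({"staff"}))      # new dashboard
print(dashboard({"customer"}))   # old dashboard

# At keynote time, release to everybody without a deploy:
FLAGS["new_dashboard"].add("everyone")
print(dashboard({"customer"}))   # new dashboard
```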

However, what if you could do a magic thing? What if you want to be sure of a release before it's released? And I don't necessarily mean by staff shipping. You might want a slice to see how things behave for a certain vertical of your users. You might care about how some infrastructure responds, things like that. You may have operational concerns where you need to work the rough edges off. We have the feature flag concept of a kill switch, so you can release something and then just yank it and remove its impact, throw it away. But that's very rough. It can disengage users if it takes away something they've started to rely on, for example.

So, how can we make releases safer without needing to rely on those things? Let me talk to you about that. I want to talk about two sets of things you could do here, and I'm going to give some examples, though of course I'll have to keep some specifics hidden, as I'm sure you understand. I want to talk about these two separate things, and then I'm afraid I am going to give you some caveats. I think they probably apply to most feature flags, but I don't want you to get so excited that you run off and do some great stuff and then forget about the impact beyond that.

The first thing I want to talk about is what I'm calling a super minimal ship. It's a techie term, for which I apologize. When we say we ship something, it means we've taken a piece of functionality, we've developed the code around it, and we have put that out there. When I say put that out there, I mean it's released into the production code or production sites. I don't necessarily mean that all your users have access to it; that's where you're using feature flags for wider rollout. So I want to cover why this has been really useful. And I think as we adapt to the current world environment, where there's a lot more remote working, it may be beneficial to some of you to think about how to do this.

GitHub is and has been a remote company for a long while, almost since its inception, I believe. One of the things that super minimal ships let you do is collaborate within teams and across teams, across time zone boundaries, in a way that is very low impact on those that are not exposed to this stuff, but really accelerates what those who are exposed to it and working with it can deliver for you. So if you have a feature, you might have one, two, three, four engineers working on it for one, two, three, four, five weeks. Everybody's development cycle is different. What I would advocate, and what we have done to good effect, is to have the first PR of any feature be setting up the feature flag, and then some skeleton. Often that's going to be a URL for something that's running on the web. It might not be, but a new page or a new piece of functionality on a page we might put behind a separate URL until we tidy it up, for example.

You'll need to use a little bit of judgment, and your engineers will probably have some views on that. It's going to be very specific to you, but that's the principle. Once that has shipped, you've basically unblocked all of the folks that need to work on it, whatever time zone they're in, to be able to contribute locally and in production, which is the key point here. And there is relative safety. I say relative because anything running on your production system is never a hundred percent safe. You can have power outages at data centers, and it doesn't matter how awesome the code is, that can cause issues for you. So let's stick with relatively safely. When I say locally and in production, I mean: obviously you have your local development environments, but there's going to be the need to make some changes around this feature that you then need to share with your colleagues. And you may also want to see how that looks with real data.
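Here's a rough sketch of what such a first PR might contain: just the flag check and a skeleton page behind its own URL. Flask and every name here are illustrative assumptions, not GitHub's actual stack:

```python
# Sketch of a "super minimal ship": the first PR adds only the flag
# check and a skeleton page, so colleagues in any time zone can build
# on it in production. All names are illustrative.

from flask import Flask, abort, request

app = Flask(__name__)

# The handful of engineers collaborating on the feature.
FEATURE_USERS = {"alice", "bob"}

def feature_enabled(flag: str, user: str) -> bool:
    # Stand-in for a real flag service (Flipper, LaunchDarkly, ...).
    return flag == "new_reports" and user in FEATURE_USERS

@app.route("/new-reports")
def new_reports():
    user = request.args.get("user", "anonymous")
    if not feature_enabled("new_reports", user):
        abort(404)  # invisible to anyone not on the flag
    return "<h1>Reports (skeleton)</h1>"  # later PRs flesh this out in production

if __name__ == "__main__":
    app.run()
```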

This is my next bullet point; people will say, "Well, use feature branches." You can also share feature branches, okay. It's not going to be in production and you won't get the real data, but you've got staging and all of these things. And that's a very fair point. It depends, like everything in software, on how you're set up, the speed you're moving, how many engineers you've got, and how your development cycles work. But GitHub is very, very fast-paced development. We have hundreds of engineers contributing to our main code base all of the time. We do continuous deployment, which can mean hundreds of deploys every week. So for us, long-lived feature branches, or quite honestly even short-lived branches, can get problematic. And if you're expecting your team to be working on something for a few weeks, the merge hell that would accompany the end of that would potentially invalidate a lot of the testing that you had done.

So what this allows you to do in these dynamic environments, where you're constantly shipping, is to keep up with everything that everybody else is making available and ensure your new feature works well with all of it. I think it's also worth shouting out here that by trialling and tweaking things in production, you can account for some of the edge cases. You're never going to cover them all, and while a feature is being developed and only a few people have access to it, you're not necessarily going to get the volumes of data. But you are going to learn some of the edge cases: some things don't show up unless you've got this piece of data or this thing enabled. And those, with the best will in the world, with all the unit tests and various functional tests and those sorts of things, are the things that can accidentally get forgotten when we don't think about those very edge cases.

Obviously this is not a free ride in that sense. Your engineers still need to code defensively. We need to cope with null pointers and empty objects (again, language dependent). You're still going to need the observability that you were already running in production, I hope. You may want to enhance that for the new feature straight away, just so you can see that this database query on this page is taking a heck of a long time, and there are only six of us enabled with it; we need to look at that before we take this to production for real. But it really does enable you to see all of these things very early on. And then when you do come to make this available to customers, be that staff within GitHub, then a beta, and then making it live, you're going to be able to plan for some of the things you might want to specifically test for, and some things that might appear later you'll have preempted.
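As a rough illustration of that defensive coding plus early observability, here's a sketch; the flag name, query functions, and fallback are stand-ins rather than real GitHub code:

```python
# Sketch: code defensively behind a flag, and wire in observability
# from day one so slow queries show up while only a few users are enabled.

import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("new_feature")

def flag_enabled(flag: str, user_id: str) -> bool:
    return user_id in {"u1", "u2"}          # the "six of us" stage

def legacy_summary(user_id: str) -> dict:
    return {"rows": [], "source": "legacy"}

def new_summary_query(user_id: str):
    return [("repo", 42)]                   # the new, flagged code path

def fetch_summary(user_id: str) -> dict:
    if not flag_enabled("new_summary", user_id):
        return legacy_summary(user_id)
    try:
        start = time.monotonic()
        rows = new_summary_query(user_id)
        elapsed_ms = (time.monotonic() - start) * 1000
        log.info("new_summary_query took %.1fms", elapsed_ms)  # watch before widening rollout
        if not rows:                        # cope with empty results defensively
            return legacy_summary(user_id)
        return {"rows": rows, "source": "new"}
    except Exception:
        log.exception("new_summary_query failed; falling back")
        return legacy_summary(user_id)      # the flagged path must never break users

print(fetch_summary("u1"))
```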

Something I want to call out here, now that we're more remote as well, is that this means you're not stuck on people being synchronous. You can have somebody in the UK do some work and commit it in the morning here. Then their colleague wakes up on the East Coast while the person in the UK is still online, and there's something there for their colleague to pick up and run with. Then you can hand it off to the West Coast, and you can hand it off into Asia. You can go around the globe; it depends how you set up your engineering teams too, where they are spread, and the spread of expertise. You can make this work without needing a lot of synchronous calls and things like that, because you're shipping such small things each time that it's very easy for the people picking things up to understand them.

The next thing I wanted to talk about is planning for migration and scale. The concept of using feature flags to change from one database to another, or for some infrastructure change, is also a relatively known pattern. But what I want to cover is: how do you know that switching to that new thing is even likely to work when you have a lot of unknowns, or there's some new piece of hardware or some new way of doing something within a database that you're trying out? Absolutely you will have done some small experiments. You may even have done load testing or chaos testing. What I think feature flags can give you here is, like with the code changes, a means of exploring that scaling.

So, will scaling work as expected? Sadly, it's often not linear. There are things that are hidden. You hit bottlenecks in external technologies that you didn't realize were there until it's too late. This is not going to be foolproof, of course, but it hopefully works towards giving you more confidence in the things that you're doing, and lets you rule technical spikes in or out relatively early on, perhaps, so you can then focus all of your effort on things that will give you the payoff that you're after. One of the things that we have used at GitHub is doing some additional database writes. Obviously you need to take care there that you're not going to double the response time to the user, and there are technical ways of achieving that. But we have definitely used this: we've written actual things into a database to see what the impact of a change would be, so then we can compare old and new. And this is before the feature ships. We're not making the assumption that this is going to work and we just need to double the database or some such.
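Here's a minimal sketch of that flag-controlled dual-write idea, with the second write made asynchronous so the user's response time isn't doubled; the store classes and flag check are illustrative:

```python
# Sketch of flag-controlled dual writes: serve from the primary store
# as before, and additionally write to the candidate store so its
# behaviour under real load can be analysed before the feature ships.

import threading

class Store:
    def __init__(self, name: str):
        self.name = name
        self.data = {}

    def write(self, key, value):
        self.data[key] = value              # imagine a real database here

primary = Store("primary")
candidate = Store("candidate")

def dual_write_enabled() -> bool:
    return True                             # stand-in for a real flag check

def save(key, value):
    primary.write(key, value)               # the user-visible write, unchanged
    if dual_write_enabled():
        # Fire-and-forget, so the user's response time isn't doubled.
        threading.Thread(target=candidate.write,
                         args=(key, value), daemon=True).start()

save("repo:1", {"stars": 10})
```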

In doing this, we can then perform analysis on those writes. We can look at the observability that we'll have set up on that second place and actually determine that yes, what we have is going to be okay, or maybe not, and we'd better pull in some actual resource there. That one has been really useful to us on a number of things, and I'd really recommend you look into it if that's a pain point you think you have or are likely to have. Another thing that you can do... Again, you can do this when you want to load balance; that's the point of load balancing. But feature flags allow you to route a certain percentage of traffic to two places, depending on what level your feature flags are at and where your routing is, obviously. That's a way of taking a request and fulfilling it to the user as normal, as they would see it, but also sending it somewhere else so that we can do some different processing on it, potentially return something in a different way, or run it through some different hardware to see the impact of that.

So you're duplicating the request you get in at that point. You're sending one copy off into the ether as far as the users are concerned; they're not aware of it, and it goes away, while the other one does the normal thing and goes back to them. They're unaware, but you're collecting important data around the scaling side of things. Another option is to add additional observability: to write out more alerts, to graph some extra things, and then make a decision. So we turn this flag on for certain users or certain accounts, and that allows us to capture extra data. Combined with flag controls and auditing, you can then do some data analysis that will let you decide if the scaling is going to behave as you expect, or at least up until a certain point. These are two related but separate things: one is very much at a code level and one is more at an infrastructure level. But I think they'll both allow you to think about and reason about things in different ways than you might have before.
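A small sketch of that traffic-duplication idea: a deterministic percentage of requests is copied to a shadow backend whose output is discarded, while the user only ever sees the normal response. The bucketing scheme and backend names are assumptions for illustration:

```python
# Sketch of flag-based traffic duplication ("shadowing").

import hashlib
import threading

SHADOW_PERCENT = 5          # flag value: share of traffic to duplicate

def in_shadow_bucket(request_id: str) -> bool:
    # Deterministic bucketing: the same request id always lands in the
    # same bucket, which makes the collected data easier to analyse.
    bucket = hashlib.sha1(request_id.encode()).digest()[0] % 100
    return bucket < SHADOW_PERCENT

def normal_backend(request_id: str) -> str:
    return f"response for {request_id}"

def shadow_backend(request_id: str) -> None:
    pass                    # new hardware / new processing under test

def handle(request_id: str) -> str:
    if in_shadow_bucket(request_id):
        # Send the copy off into the ether; record metrics, discard output.
        threading.Thread(target=shadow_backend,
                         args=(request_id,), daemon=True).start()
    return normal_backend(request_id)   # the user sees only this

print(handle("req-42"))
```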

I want to just quickly cover some caveats though. Remember that flags are complexity, any flag, be that for the things I've talked about or for the more well-known reasons for using them. The more code paths there are to reason about during development, and to have test coverage of, the harder it gets for engineers to do things and the harder it is to test things. Just be mindful of that. They are really useful, but don't overuse them. Relatedly, long-lived flags become technical debt. If something's completely rolled out or completely turned off, tidy the flags up after a period of time. Don't cause headaches for yourself, is what I'm saying.

And then the final one, which I just think is worth saying, though I hope you're all aware of it, is that you really need to move with care. Anything you do in production is a risk to reliability for your customers. With the best will in the world, nobody is going to have a hundred percent uptime. But if you can take steps to safeguard it with the techniques that I've talked about here, plus everything else, I think your customers are going to be happier. Those were the important caveats, and I think they apply to all feature flags. And yeah, that's the end of my talk. Thank you very much for listening.

 Yoz Grahame:

Excellent talk, Claire. Thank you so much for sharing that. So now it's time for our live Q&A session. We have Claire with us for the next 20 minutes or so. Please post your questions in the chat and we will try to answer as many of them as we can. So, we've already got a few questions, I think. The first one is from Medan Kumarjayalma (please tell me if I've got that wrong), who asks: does GitHub use LaunchDarkly?

 Claire Knight:

 All right. No, sorry, sorry folks. 

 Yoz Grahame:

 Very fine. 

 Claire Knight:

We don't actually use it. We have been using the same system for eight years now. Possibly getting on for nine, depending on... I've lost track of time at the moment, given this year. There's a lot of engineering effort, for the experience and all sorts of things, going into it. Yeah, you might want to seriously consider other options than building your own if that's not your core thing.

 Yoz Grahame:

Right. That's actually something I wrote an article about a little while ago. I think Jesse can post the link in the chat.

But yeah, if you're tempted to write your own, there's a bunch of very decent open source options out there. And I think GitHub started with an open source product, didn't it?

 Claire Knight:

Yeah. The person that created the Flipper gem, which is the one we use, was at GitHub at the time, and used his experience from prior jobs plus his need to get help at the time. Over the years we have tried to open source various components of things that we've done, and that was one of them. I think it was pretty much open source from the outset.

 Yoz Grahame:

That's great. GitHub has been a really good citizen in that way, in terms of there being so many open source projects that GitHub has put out there that it built as part of just doing its thing.

 Claire Knight:

Yeah. Yeah. And we don't want to... if we can help and give back, then why not? Obviously, for certain things we've put out there we may have a fork that we keep, but yeah. I'm not suggesting that's the right solution for you. It may be better for you to just pay somebody to handle this for you for the most part, apart from the actual turning on and off of things.

 Yoz Grahame:

Right. Yeah. And this is what we see as well: usually it's the really big companies, the GitHubs and Googles and Facebooks, that have been doing this for so long, often since before LaunchDarkly even started, that they built their own in-house system; it's become totally woven into all their operations and they're totally used to it. But for people starting with feature management now, it's far better to ride on the maturity of an existing product and the existing experience there than to try and build it yourself, is what we've found.

 Claire Knight:

Yeah. I would second that. There are so many edge cases and things that even we occasionally come close to missing, and we have that experience. So for anybody that doesn't: even the simple things, like if a request is coming in and the web browser then refreshes, or there's an Ajax request in the background, you have to make sure that the flag is applied consistently through the duration of all of that.
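To illustrate the kind of consistency she's describing, here's a minimal sketch that buckets the rollout on the session rather than the individual request, so a page load and its background Ajax calls always agree; the hashing scheme and names are illustrative choices, not GitHub's implementation:

```python
# Sketch: evaluate a percentage rollout per *session*, not per request,
# so every request in the same browsing session gets the same answer.

import hashlib

def flag_for_session(flag: str, session_id: str, rollout_percent: int) -> bool:
    # Hashing flag+session gives a stable, deterministic decision for
    # every request made within the same session.
    bucket = hashlib.sha1(f"{flag}:{session_id}".encode()).digest()[0] % 100
    return bucket < rollout_percent

session = "sess-1234"
page_load = flag_for_session("new_ui", session, 25)
ajax_call = flag_for_session("new_ui", session, 25)
assert page_load == ajax_call   # the page and its Ajax requests never disagree
```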

 Yoz Grahame:

 Yeah. 

 Claire Knight:

It sounds logical from a user point of view, but there's a technical challenge to it.

 Yoz Grahame:

 Yeah, it's really easy to miss. And this is the thing that we actually... I've been writing a guide that hopefully will go live soon about that. And you can avoid some of these problems with LaunchDarkly, but certainly not all of them. You've got to be pretty thoughtful when you're juggling stuff on both the front and back end to make sure state is consistent. 

 Claire Knight:

 Yeah, absolutely. 

 Yoz Grahame:

It's very easy to miss. We've got another question from Clark Kate's George. Can you share some of the differences between developing with flags versus developing with feature branches, or a combination of the two, and how they play together?

 Claire Knight:

Yeah. There is absolutely nothing wrong with feature branches; I want to put that out there straight away. It's a core part of most development, I won't say all. Yeah, I may have pushed to master in my day, but not on dot com, to add that to our main product; it was just a documentation thing in a small team [inaudible 00:29:48]. But yeah, even then I try not to. The biggest problem we have with a feature that's going to take more than a few days to develop, which for many features is going to be true, is that because we are constantly shipping our product with CI, we deploy hundreds of times a day. It's the getting out of date that causes us a problem, and the merge conflicts that then causes when you've got a large code base with hundreds of engineers working on it. If you are a handful of folks, then maybe using feature flags is going to be overkill for you and a feature branch is totally the right move. Like anything in software engineering, it depends.

 Yoz Grahame:

Mm-hmm (affirmative). My previous job was at a very small company, literally: the engineering team was only a handful of people. But even then we got ourselves into trouble with feature branches just going on for a couple of weeks, maybe a month. And then that merge is just really difficult.

 Claire Knight:

Yeah. Yeah, it is. And obviously, again, it's a bit like the conversation about our long-ingrained Flipper management: we've got a massive suite of tests now that run in CI and things all the time. But even they can't save you sometimes if you've got so much divergence from the main branch, because then the tests break, and then people make mistakes in resolving them. With the best will in the world, we're all human. So this allows the engineering team working on a feature to test it as they go, and it also avoids this pain. So it's a double win rather than one negative, I suppose.

 Yoz Grahame:

Yeah. That's fantastic. I think this is a point you made really well during the talk, which is that, in theory, systems like Git, code review, and automated tests will catch a lot of problems. But as scale increases, you get more and more edge cases, and you start to really push the boundaries. And I think those of us who have experienced merge problems know that you can't trust Git to get it right every time.

 Claire Knight:

You absolutely can't, no. It's a fantastic tool, and obviously a core part of GitHub. But yeah, you still need engineer intuition and ability and things, and long-lived feature branches in fast-paced companies and large companies where you're all working on the same thing are problematic even without feature flags. So yeah.

 Yoz Grahame:

Yeah. It's great to be able to mix the two in that way. So actually there's a follow-up question from that, which is: why not just leave everything behind flags forever, for flexibility?

 Claire Knight:

This is close to my heart, because I care about resiliency, so therefore I want flags, but I also care about tech debt. And if you don't clean up your flags, you end up with the obvious tech debt in the code, but you also end up with a performance debt. If we had not cleaned up the flags over the lifetime of GitHub, we would probably have tens of thousands of them live now. Can you imagine waiting for tens of thousands of flag checks just to view your profile page? That's going to be painful.

 Yoz Grahame:

Right, exactly. Sorry, this is where I feel required to put in a bit of promo for LaunchDarkly, in a way.

 Claire Knight:

 Of course, go ahead. 

 Yoz Grahame:

Thank you so much. The way that LaunchDarkly SDKs evaluate flags is that they do it entirely in memory and in process. They maintain a connection to the flag server, and changes are sent down asynchronously, so that when it's time to evaluate a flag they can do it instantly. But it's still not free. It's much cheaper, but it's not free. And certainly the complexity cost is absolutely there.

 Claire Knight:

Yeah. Yeah. We obviously have some caching; we're not literally querying the database for every flag, every time, for everything. That would just have caused our data centers to explode by now, let's be honest. But yeah. The other thing that's probably worth mentioning, given we're talking about using flags for resiliency and learning as well, is that by having all of those flags enabled, you're putting a massive burden on your engineers and your folks: if something has gone wrong, they have to determine what path something has hit. Each feature flag is a potential alternative path through the code, and that cognitive burden in a handful of cases is fine. Even hundreds of cases, potentially, in a large system is fine. But if you start getting thousands and tens of thousands, then yeah.
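Here's a minimal sketch of the kind of per-request caching she's alluding to, where each flag costs at most one backend read per request no matter how often the page checks it; the backend function is a stand-in:

```python
# Sketch of per-request flag caching: each flag is read from the
# backing store at most once per request.

class RequestFlags:
    """Memoizes flag lookups for the lifetime of one request."""

    def __init__(self, backend_lookup):
        self._lookup = backend_lookup
        self._cache = {}

    def enabled(self, flag: str) -> bool:
        if flag not in self._cache:
            self._cache[flag] = self._lookup(flag)  # one backend hit per flag
        return self._cache[flag]

calls = []

def backend(flag: str) -> bool:
    calls.append(flag)          # pretend this is a database read
    return flag == "new_ui"

flags = RequestFlags(backend)
for _ in range(100):
    flags.enabled("new_ui")

assert calls == ["new_ui"]      # 100 checks, a single backend read
```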

 Yoz Grahame:

 No, it's a nightmare. And then suddenly, the vast majority of your code is just feature flag alternatives that never get hit.

 Claire Knight:

 Yeah, yeah, yeah. Definitely.

 Yoz Grahame:

Much harder to read. So this leads nicely into a great question from my colleague Dawn Parzichan, if I can mention her. Any suggestions or strategies to manage that tech debt when it comes to flags?

 Claire Knight:

I hesitate to use the word culture here, because that can be misrepresentative. It depends on your organizational setup, shall we say, and how you operate things. At GitHub, and again this is pretty well publicized, we use a lot of chat ops, and we have Hubot, where we automate things. So one of the things that we're actually currently exploring is having the bot remind folks and say, "This flag is in production, but not enabled," or "This flag is in production and completely rolled out." Those are the two clean cases: you've never turned this on, or it's rolled out for everybody, so why is it still a flag? There are gray areas in between.

And to have automation remind folks and ping teams that they need to tidy things up. One of the reasons we wanted to look at the automation side of it is not only to save somebody the job of doing something that we can automate, but also that folks don't get as angry when Hubot tells them to do it versus [crosstalk 00:36:08].
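A small sketch of what such an automated stale-flag audit might look like; the flag records, threshold, and notify function are invented for illustration (GitHub's actual tooling is Hubot-based chat ops):

```python
# Sketch: find flags that are fully rolled out or never enabled,
# and have a chat bot nudge the owning team to tidy them up.

from datetime import datetime, timedelta

FLAGS = [
    {"name": "new_ui",   "rollout": 100, "owner": "web",   "updated": datetime(2020, 1, 1)},
    {"name": "dead_exp", "rollout": 0,   "owner": "api",   "updated": datetime(2020, 2, 1)},
    {"name": "canary",   "rollout": 25,  "owner": "infra", "updated": datetime(2020, 7, 1)},
]

STALE_AFTER = timedelta(days=90)

def notify(team: str, message: str) -> None:
    print(f"@{team}: {message}")    # stand-in for a chat-ops bot

def audit(today: datetime) -> None:
    for flag in FLAGS:
        if today - flag["updated"] < STALE_AFTER:
            continue                # recently touched; leave it alone
        if flag["rollout"] == 100:
            notify(flag["owner"], f"{flag['name']} is fully rolled out; remove the flag?")
        elif flag["rollout"] == 0:
            notify(flag["owner"], f"{flag['name']} was never enabled; delete it?")

audit(datetime(2020, 8, 1))
```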

 Yoz Grahame:

That's a really good point. 

Claire Knight:

Well, stop doing that. Remove that thing. That's definitely one way that you can think about it for sure.

 Yoz Grahame:

That's great. That's a really good point about the value of bots: they don't mind people being rude at them, or giving them a snack. I love the bot. I don't know if you have the bot snack command, but it's one of my favorites: you just do bot snack, and then the bot goes, "Yum, yum, yum."

 Claire Knight:

Yeah, we do. We can trigger certain meme responses. And then if you happen to say thank you, possibly sarcastically, it responds with something.

 Yoz Grahame:

Fabulous. That idea of automatically notifying about flags is great. And it's something that we have in our tooling at LaunchDarkly. We have some facilities for that which we make available to all customers as well, such as a system called code references, which you can add to your CI process and which identifies all the areas, all the lines of code you have, that are tied to a particular flag. But yeah, having some automated reminders for cleanup is a great idea. Has it been particularly helpful for this?

 Claire Knight:

We're just literally exploring this at the moment. But we envisage, because it fits in with our other workflows, that it could be beneficial. So we've been doing a bit of an audit of things and asking, "Does this fall under your operate pillar from last week? Is this a circuit breaker, or is this actually a feature flag?" And we've been digging into that just to housekeep ourselves.

 Yoz Grahame:

That's great. That's great. And that's also a really good point: some flags you do want to keep around, the operational flags.

Claire Knight:

Yeah. Yeah.

Yoz Grahame:

A question from Tyler Knight. Can you comment on the pros and cons of using feature flags to personalize or customize the product for groups of users?

Claire Knight:

At first glance on that question, or first thought, I suppose I should say: if you want that customization to be long-lived, then I'm not convinced that a flag is the right place, because as we've said, we can't keep flags around for the longer term. So that's definitely a disadvantage. It would slow things down eventually, another disadvantage. We definitely do use that when we're doing rollouts around, say, a bug fix, where we have a specific user or organization or type of user identified in our system that is known to have a problem. And we want to do a very limited rollout to say, "Okay, we acknowledge this problem, or have noticed a problem. We think this fixes it. We've tested it, but you have a real-world scenario that was clearly failing. Can you now see if this fixes it?"

And most of our customers are pretty on board with that, actually, because we're helping them by fixing it. So they're willing to help us get the right fix for them, not just a fix. So I think personalizing it that way to get a bug fixed works, but then again, we would roll it out afterwards; we would enable it for everybody and tidy things up. For general user customization, I'm just not convinced it's the right technology.

Yoz Grahame:

My position on that has slightly changed since joining LaunchDarkly last year; it was a very odd one for me to wrap my head around. Because I was thinking, for things like differentiating between standard and pro features in an account, for example. My standard-

Claire Knight:

[inaudible 00:40:04]. 

Yoz Grahame:

Right. Normally my inclination is to just put a field in the user record. You put a Boolean or something in the user record. But the fact is that database migrations are painful. They can take weeks of preparation depending on the organization and database setup that you have, whereas creating a feature flag you can do in a minute or two. And given that that junction has to happen in the code anyway, it just comes down to where the data is stored.

Claire Knight:

Given you just said that, you've actually triggered a memory of my prior life.

Yoz Grahame:

Really? 

Claire Knight:

Yeah. So in the mobile world, you may have heard both Google and Apple developers complain about this, but in-app purchases can be a big thing.

Yoz Grahame:

Yes. 

Claire Knight:

And they can also be finicky and require tender loving care at times. And actually an in-app purchase is like a feature flag, but it's a device enablement thing, because it's, "Has this person subscribed or not subscribed, or have they paid to unlock?" I'm not talking about where you just buy a bunch of gems to unlock the next level of your game. I'm talking about actual-

Yoz Grahame: 

 Differentiation between- 

 Claire Knight:

Yeah. Pro or not pro. That stuff we actually need to query. There are ways of doing server webhooks as well now, but that didn't used to be the case; you had to carry it on the mobile device. That lives in that code. And so in that situation, I can definitely see it. I guess what I would say is, for all the use cases I can think of around GitHub, I don't think it would work very well, but yeah.

Yoz Grahame:

For all of these, there are loads of potential approaches, loads of different ways to engineer something. And it comes down to a whole load of factors, especially those that are particular to your situation.

Claire Knight:

Yeah, sure. 

Yoz Grahame:

How are we doing for time? We're coming up towards the end. I think we have time for one more question here. Another one from Medan Kumar Jayalamar [inaudible 00:42:04]: are you saying that with LaunchDarkly we can have thousands of flags without any performance issues? Well-

Claire Knight:

On the spot now. 

Yoz Grahame:

Yes, exactly. Actually, there are some potential performance issues I should warn about. Mostly you'll be fine. However, the fact is that it's all data, and the more complex a flag is, the bigger it gets; we've seen some flags that have thousands of rules in them specifying individual user IDs. Individual flags can be megabytes big. And when that happens, then yes, you'll see performance issues. Not in evaluation so much, but in startup time: when it comes to downloading all those flags and updating all of those flags, then you will see performance issues. The rest of the time, no. If your flags are pretty simple, then you can literally have thousands of them and they should be fine. And the thing is, if you have thousands of flags, you've got other problems to deal with really, when it comes to technical debt and things like that. And so we have to wrap up the session. One more question for Claire. Is there any advice or anything you'd like to leave with the group before we end?

Claire Knight:

It's come up in several answers, and it's what you were talking about there: everything depends. So "it depends" is, I guess, my advice. Feature flags are one tool to achieve a bunch of things. They are incredibly powerful because they can do multiple things, and they do multiple things well. But hammers and screwdrivers have certain purposes. A multi-tool might be adaptable, but you still wouldn't try to change a tire on a car with one. There are still certain things they're not suited for. So just explore these ideas, see how they might work in your situation with your software and your organization, but don't necessarily expect to pull a solution from some other company and expect it to magically work. Other than employing something like LaunchDarkly: the actual hosted solution will work great for you, but how those flags are defined is going to be very dependent on you, yeah.

Yoz Grahame:

Yeah. I could not agree more. And this has actually been our position at LaunchDarkly as well. Sure, we would love our customers' money, but really we would rather they do the right thing for them than use our thing regardless. So we want people to have a positive experience with our product, and that includes not using it for things that it's not meant to be used for. I'm actually thinking that it'd be good to write some posts about that. There are certain things that flags aren't great for, like major dependency changes; you can't really manage those with flags, or it's very difficult to manage them with flags anyway. And there's a bunch of other things, but yeah, absolutely right. You've got to find the right tool for the job in all situations.

Claire Knight:

That's what you wrote in the post. 

Yoz Grahame:

Yes, I hope to do it, amongst all the other things I've got to do, but yes. So thank you. And unfortunately, we've run out of time, but thank you for joining us today, Claire. It's been great. And I'd also like to thank our audience for attending today's Nano Series talk. And thank you again to Code Climate for sponsoring today's talk. As I said, you want insight, to learn all kinds of things from your product and what your developers are doing, and Code Climate is a great tool for doing that. So remember that you can see previous talks in this series on the Trajectory conference website; that's trajectoryconf.com. The recording of this talk will be there in a couple of days. Join us this time next week to explore the final Nano Series session before Trajectory itself. It will be the empower pillar, with my colleague Heidi Waterhouse and our featured guest, Sherry Lim. Until then, have a wonderful rest of the week. Thank you.

Claire Knight:

Thanks folks.