A previous version of this article ran on The New Stack.
There is a clear business need for having engineers on call. One of the best ways to gain and retain customers is to deliver world-class service and a product they can rely on. Developing skills that drive business value will not only help engineers in their current role but can also prepare them for any role or level they may want in the future. Volunteering for an on-call rotation can help both the individual and the company.
While taking on a new opportunity is exciting, it also comes with nerves. Maybe you have commitments outside of work like grad school classes or family obligations. Maybe you enjoy your evenings and weekends. What do you do if an alert comes in about a service that you’re not familiar with?
Change can be scary, which is often why processes within organizations don’t change, but as companies grow they may need to rethink their on-call rotation. Last year, we at LaunchDarkly modified our on-call process for engineers because we had outgrown our previous model.
A few years ago, engineers worked on one of two large teams: the Application team and the Backend Services team. Small groups would come together for projects, then disperse when the project was over. The Backend Services team handled all of the on-call responsibilities.
As we grew, we adopted the squad model. Each squad has an engineering manager, a product manager, a designer and five to seven engineers who work on a subset of our features.
After some time, the squad model evolved to include service ownership: each squad became responsible for a subset of our backend services. However, we didn’t substantially change who was on call. The on-call engineers were almost exclusively people who had been, or would have been, on the now-defunct Backend Services team. We decided a new process was needed.
Change happens
There were multiple discussions internally about what the new on-call process should look like. We needed to make sure the rotation was equitable and that we had appropriate coverage. In the end we decided on the following:
- Squad members are on call during regular business hours for the services their squad owns.
- For off-hours coverage, responsibility is split between the UK team, who cover their normal business hours, and the Virtual On-Call squad: a volunteer group of engineers split across two rotations, each covering a different subset of our services, who take primary responsibility for evening and weekend shifts. (A sketch of this routing appears after this list.)
- On-call engineers with off-hours responsibilities are paid for their contribution.
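To make that split concrete, here is a minimal sketch in Python of the routing logic described above. The service names, squad names and the single assumed headquarters time zone are made up for illustration; in reality the schedules live in our paging tool, not in application code.

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

# Hypothetical mappings; the real schedules live in our paging tool, not in code.
SERVICE_OWNERS = {"flag-delivery": "squad-a", "billing": "squad-b"}
VIRTUAL_ROTATIONS = {"flag-delivery": "virtual-rotation-1", "billing": "virtual-rotation-2"}

def is_business_hours(now: datetime, tz: str) -> bool:
    """True if `now` falls on a weekday between 9:00 and 17:00 in the given time zone."""
    local = now.astimezone(ZoneInfo(tz))
    return local.weekday() < 5 and time(9) <= local.time() < time(17)

def who_gets_paged(service: str, now: datetime) -> str:
    """Rough model of the coverage rules in the list above."""
    if is_business_hours(now, "America/Los_Angeles"):   # assumed headquarters time zone
        return SERVICE_OWNERS[service]                   # owning squad, during business hours
    if is_business_hours(now, "Europe/London"):
        return "uk-team"                                 # UK team covers their own business hours
    return VIRTUAL_ROTATIONS[service]                    # evenings and weekends: Virtual On-Call rotation

# Example: who would be paged for a billing alert right now?
print(who_gets_paged("billing", datetime.now(ZoneInfo("UTC"))))
```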
If you’re considering changing your on-call rotation, have open conversations to get various perspectives on what works and what challenges might be encountered.
How to onboard new engineers to the on-call rotation
One of the most important aspects is an engineering culture that fosters learning and psychological safety. On-call engineers need an onboarding process that sets them up for success: they should know that they will have help and that they won’t be blamed if something goes wrong. Feeling safe to learn and explore means knowing it’s OK to make mistakes.
Be clear about the expectations with anyone who is considering joining an on-call rotation.
I received the following message from my manager when I was thinking about joining the rotation.
“In general the expectation is to try your best with what you know, and if you don’t know how to address the issue, escalate. Over time the people that are being escalated to will think ‘hmm, next time if I don’t want to get a page, I should arm the virtual squad with whatever it needs to handle this.’”
In the weeks leading up to an inaugural shift, consider the following:
- Provide online or in-person training to give engineers confidence in the process and in their ability to succeed at being on call.
- Host meetings and conduct question-and-answer sessions for the on-call rotation.
- Have managers or leads check in on how the new engineers are feeling and send a test page (one way to send a test page is sketched after this list). Normalize that it’s OK to feel a rush of adrenaline when you get paged:
Manager: Was this your first time being paged?
Me: Yup
Manager: Did your heart skip a beat? At least 10 years into on-call, I still sort of jump when I get an alarm :-)
- Establish a co-pilot system in which experienced on-call engineers pair up with onboarding engineers as their on-call backup for their first few shifts.
- Assign each new engineer an experienced co-pilot for the rotation and agree on a plan for how to communicate if needed.
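As an example of the “send a test page” item above, here is a minimal sketch that triggers a test alert through PagerDuty’s Events API v2. The routing key is a placeholder, and this is only one way to do it; your paging setup may differ.

```python
import requests

# Placeholder integration key for the service whose on-call rotation you want to test.
ROUTING_KEY = "YOUR_32_CHAR_INTEGRATION_KEY"

def send_test_page(summary: str) -> str:
    """Trigger a test alert via PagerDuty's Events API v2 and return its dedup key."""
    response = requests.post(
        "https://events.pagerduty.com/v2/enqueue",
        json={
            "routing_key": ROUTING_KEY,
            "event_action": "trigger",
            "payload": {
                "summary": summary,
                "source": "onboarding-test",  # free-form identifier for where the event came from
                "severity": "info",           # keep test pages low severity
            },
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["dedup_key"]

if __name__ == "__main__":
    print(send_test_page("Test page: welcome to the on-call rotation!"))
```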
Diary of a first-time on-call engineer
While the above advice may sound good in theory, you may be wondering how things go in practice for new on-call engineers. I signed up to join the inaugural Virtual On-Call rotation, and below is the log of my first week on call.
Day 1, Monday
At around 8:30 p.m., I was at home in my jammies playing “The Sims 4” when I got my first legitimate page. It was thrilling! I hopped on my computer.
Within minutes, I got another notification that a colleague on the other rotation had been paged for something related.
We both hopped online. I suggested that we start a public thread in the virtual squad Slack channel instead of direct messaging, so people could learn from our mistakes and help us improve the onboarding process.
We spent about 45 minutes looking at the alert catalog and the runbook for the services, and trying to fix the underlying problem. After getting more information, we realized it was not affecting customers and could wait until the team that owned the service came online.
I spent another 30 minutes updating the Captain’s Log, the log we use to communicate about events that might affect backend services, and notifying the squad that owned the service.
Day 2, Tuesday
Silence.
Day 3, Wednesday
At around 5:15 p.m. PST, I got paged while still working.
Ironically, the cause of this alert was the remediation for Monday’s alert.
We realized it was an alert for a piece of system architecture due for retirement and no longer serving customer traffic. We throttled the offending service and silenced the alert until the morning.
Badda bing, badda boom.
Day 4, Thursday
At 9 a.m., the alert from the night before un-snoozed itself and let me know that I needed to figure out what to do about it. Note that I typically would not have been on call during business hours, but I had set the alert to un-snooze then since I knew I’d be at my computer.
I created a thread in my squad’s on-call channel, since the alert was for a service my squad owned, and within minutes it was clear that we could delete the alert and permanently wind down the service. 🍰
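As an aside, snoozing an alert until a specific time, as with Wednesday night’s alert, can be done through PagerDuty’s REST API. This is only a sketch: the API token, email and incident ID are placeholders, and it assumes it is currently after 9 a.m. so that “9 a.m. tomorrow” is the right target.

```python
from datetime import datetime, timedelta
import requests

API_TOKEN = "YOUR_PAGERDUTY_API_TOKEN"  # placeholder REST API token
FROM_EMAIL = "you@example.com"          # PagerDuty requires the acting user's email in a From header
INCIDENT_ID = "PXXXXXX"                 # placeholder incident ID

def snooze_until_9am(incident_id: str) -> None:
    """Snooze a PagerDuty incident until 9 a.m. tomorrow."""
    now = datetime.now()
    nine_am_tomorrow = (now + timedelta(days=1)).replace(hour=9, minute=0, second=0, microsecond=0)
    duration_seconds = int((nine_am_tomorrow - now).total_seconds())

    response = requests.post(
        f"https://api.pagerduty.com/incidents/{incident_id}/snooze",
        headers={
            "Authorization": f"Token token={API_TOKEN}",
            "Content-Type": "application/json",
            "From": FROM_EMAIL,
        },
        json={"duration": duration_seconds},
        timeout=10,
    )
    response.raise_for_status()

snooze_until_9am(INCIDENT_ID)
```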
Day 5, Friday
Silence.
Day 6, Saturday
This was by far the most eventful day.
Morning
At 5 a.m., I got paged. I jumped out of bed and ran over to my computer.
The page self-resolved at 5:01. 🤣
Now wide awake, I spent some time Googling and reading documentation about the service that had paged me before eventually going back to sleep.
Afternoon
I dared to venture to a park nearby.
I brought my laptop and my cell phone with tethering capabilities and was prepared to run home if needed.
Of course, per Murphy’s law, I got paged.
I pulled out my laptop, tethered my phone and popped online. Sitting at a playground picnic table, I re-ran the test that had failed and alerted me. It passed.
I was on standby, but able to enjoy the rest of my day.
Day 7, Sunday
I woke up and thanked PagerDuty for no 5 a.m. alert.
The rest of Sunday was quiet as well.
What I learned
At the beginning of the week, I was prepared to declare multiple incidents and to be paged constantly.
Instead, most days were quiet. I was surprised by how few alerts there were, especially low-priority alerts, which I thought were going to be incessant. I attribute this to LaunchDarkly’s commitment as an organization to scalability and investment in making our services production-ready.
Perhaps I got exceptionally lucky this week, but overall it was enthralling, and I’m glad I volunteered. It gave me an incentive to Google things I wouldn’t typically research and to read the alert catalog and service runbooks, and I learned a bunch.
Was there an increased cognitive load from having LaunchDarkly in the back of my mind 24/7? Yes.
Did I check my phone too many times, anxious about missing an alert? Also yes.
Will those things improve over time? Probably.
Was it thrilling? Did I learn anything? Yes and yes!