spearhead-issue-response/docs/oncall/being_oncall.md


A summary of expectations and helpful information for being on-call.


What is On-Call?

At Spearhead, being on-call means that you are responsible for monitoring our communications channels and responding to requests at any time. There are two on-call scenarios that you will deal with:

  • during your normal work shift
  • outside working hours

For example, if you are on-call outside normal working hours and any alarm is triggered by our monitoring solution or a customer emails our support channel, you will receive a "notification" (an alert on your mobile device, an email, a phone call, an SMS, etc.) giving you details on what has broken. You will be expected to gather as much information as possible, create the required cards in our ticketing systems, delegate or assign the card to the right person/watchers, and otherwise take whatever actions are necessary to resolve the issue.

On-call responsibilities extend beyond normal office hours, and if you are on-call you are expected to be able to respond to issues, even at 2am. This sounds horrible (and it can be), but this is what our customers go through, and it is the problem that Spearhead Systems' technical support service is trying to fix!

When you are on-call during normal working hours you are the central contact for our entire support team. We expect you to delegate and assign cards to your colleagues and only attempt to resolve issues yourself if your current workload permits. When you are on-call outside working hours you are expected to handle as much of the process as possible and delegate only if an issue is outside your area of expertise or you encounter problems that require another colleague's input.

!!! note "When in the office"
    Generally speaking, you are on-call during your normal working hours even if you are not the on-call engineer. This means you keep an eye on the cards assigned to you directly or that you are a watcher for. If you ever find yourself with no assigned cards and it is not clear what to work on, ask a TL or a senior sysadmin to help point you in the right direction.

Responsibilities

  1. Prepare

    • Have your laptop and Internet access with you (office, home, a phone with a tethering plan, etc.).
      • Have a way to charge your phone.
    • Team alert escalation happens within 30 minutes, so set and stagger your notification timeouts (push, SMS, phone call...) accordingly: for example, push immediately, SMS after 5 minutes and a phone call after 10 minutes, so that every channel fires before the escalation kicks in.
      • Make sure that texts and calls from Spearhead Systems (and from colleagues directly) can bypass your "Do Not Disturb" settings.
    • Be prepared (your environment is set up, your remote access tools are ready and functional, your credentials are current, you have Java installed, your SSH keys are in place, and so on...)
    • Read our Issue Response documentation (that's this!) to understand how we handle incidents and service requests, what the different roles and methods of communication are, etc.
    • Be aware of your upcoming on-call time (primary, backup) and arrange swaps around travel, vacations, appointments etc.
  2. Triage

    • Acknowledge and act on alerts whenever you can (see the first "Not responsibilities" point below)
    • Determine the urgency of the problem:
      • Is it something that should be worked on right now or escalated into a major incident ("production server on fire" situations, security alerts)? If so, do so.
      • Is it some tactical work that doesn't have to happen during the night (for example, a disk utilization high watermark where there is plenty of space left and the trend is not indicating impending doom)? Snooze the issue until a more suitable time (working hours, the next morning...) and get back to fixing it then; a rough back-of-the-envelope check is sketched after this list.
    • Check our internal Chat for current activity. Often (but not always) actions that could potentially cause alerts will be announced there.
    • Do the alert and your initial investigation indicate a general problem, or an issue with a specific service that the relevant team should look into? If it does not look like a problem you are the expert for, escalate to another team member or group.
  3. Fix

    • You are empowered to dive into any problem and act to fix it.
    • Involve other team members as necessary: do not hesitate to escalate if you cannot figure out the cause within a reasonable timeframe or if the service / alert is something you have not tackled before.
    • If the issue is not very time sensitive and you have other priority work, make a note of it in DoIT to keep track of it (with an appropriate severity, comment and due date).
  4. Improve

    • If a particular issue keeps happening, or an alert fires often but turns out to be a preventable non-issue, improving it should perhaps become a longer-term task.
      • Disks that fill up, logs that should be rotated, noisy alerts... (we use Ansible and Rundeck, so go ahead and start automating! See the cleanup sketch after this list.)
      • When we perform a DoD (definition of done), this is a good time to bring up recurring issues for discussion.
    • If information is difficult / impossible to find, write it down. Constantly refactor and improve our knowledge base and documentation. Add redundant links and pointers if your mental model of the wiki / codebase does not match the way it is currently organized.
  5. Support

    • When your on-call "shift" ends, let the next on-call and the team know about issues that have not been resolved yet and other experiences of note.
      • Make an effort to hand over the necessary information cleanly. We use internal Chat, email and DoIT to communicate.
      • This is a best practice to apply whenever sharing details would benefit the efficiency of the team.
    • If you are making a change that impacts the schedule (adding / removing yourself, for example), let others know since many of us make arrangements around the on-call schedule well in advance.
    • Support each other: when doing activities that might generate plenty of alerts, it is courteous to "place the service/host in maintenance" and take it away from the on-call by notifying them and scheduling an override for the duration.
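
The "impending doom" judgement in the Triage step above can be made concrete with a quick calculation. The sketch below is a minimal, illustrative Python helper, not part of our tooling: the path, growth rate and 12-hour safety margin are assumptions, and the growth rate would normally come from recent monitoring samples. It estimates how long a filesystem has before it fills up and whether the alert can safely be snoozed until working hours.

```python
#!/usr/bin/env python3
"""Rough triage helper: can a disk-usage alert wait until morning?

Illustrative sketch only; the growth rate would normally come from the
monitoring system's recent samples, and the thresholds are assumptions.
"""
import shutil


def hours_until_full(path: str, growth_gb_per_hour: float) -> float:
    """Estimate how many hours until the filesystem holding `path` is full."""
    free_gb = shutil.disk_usage(path).free / 1024 ** 3
    if growth_gb_per_hour <= 0:
        return float("inf")  # not growing: no impending doom
    return free_gb / growth_gb_per_hour


def can_snooze(path: str, growth_gb_per_hour: float, safe_hours: float = 12.0) -> bool:
    """True if the disk should comfortably survive until working hours."""
    return hours_until_full(path, growth_gb_per_hour) > safe_hours


if __name__ == "__main__":
    # Hypothetical example: /var is growing ~2 GB/hour according to recent samples.
    if can_snooze("/var", growth_gb_per_hour=2.0):
        print("Plenty of headroom left; snooze until working hours.")
    else:
        print("The trend says it will fill soon; work on it now.")
```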

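For the recurring, preventable issues mentioned under Improve (disks that fill up, logs that should be rotated), the long-term fix usually ends up as an Ansible playbook or a Rundeck job. As a starting point, here is a minimal stand-alone sketch of the kind of cleanup such a job might perform; the directory, file patterns and retention period are purely illustrative assumptions.

```python
#!/usr/bin/env python3
"""Minimal cleanup sketch: remove rotated logs older than a retention period.

Illustrative only; in practice this logic would be wrapped in an Ansible
playbook or a Rundeck job, and the paths/patterns below are assumptions.
"""
import time
from pathlib import Path

LOG_DIR = Path("/var/log")               # hypothetical target directory
PATTERNS = ("*.gz", "*.[0-9]", "*.old")  # typical rotated-log name patterns
MAX_AGE_DAYS = 30                        # hypothetical retention period

cutoff = time.time() - MAX_AGE_DAYS * 86400
for pattern in PATTERNS:
    for f in LOG_DIR.rglob(pattern):
        if f.is_file() and f.stat().st_mtime < cutoff:
            print(f"removing {f}")
            f.unlink()
```
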
Not Responsibilities

  1. No expectation to be the first to acknowledge all of the alerts during the on-call period.

    • Commutes (and other necessary distractions) are facts of life, and sometimes it is not possible to receive or act on an alert before it escalates. That's why we have a backup on-call and an escalation schedule.
  2. No expectation to fix all issues by yourself.

    • No one knows everything. Your whole team is here to help. There is no shame in, and much to be learned from, escalating issues you are not certain about. "Never hesitate to escalate".
    • Service owners will always know more about how their stuff works. Especially if our and their documentation is lacking, double-checking with the relevant team avoids mistakes. Measure twice, cut once and it's often best to let the subject matter expert do the cutting.

Recommendations

  • Always have a backup schedule. Yes, this means two people being on-call at the same time, but it takes a lot of the stress off the primary if they know they have a specific backup they can contact, rather than having to choose a random member of the team.

  • The third level of your escalation (after the backup schedule) should probably be your entire team. This should hopefully never happen, but when it does, it's useful to be able to just get the next available person.

Escalation

  • Team leaders (TLs) are part of our normal rotation; it gives them better insight into what has been going on.

  • New members of the team should shadow your on-call rotation during their first few weeks. They should get all alerts and follow along with what you are doing. (All new employees shadow the Support team for one week of on-call, but it's useful to have new team members shadow your team rotations as well.)

  • When going off-call, you should provide a quick summary to the next on-call about any issues that may come up during their shift: a service has been flapping, an issue is likely to recur, etc. If you want to be formal, this can be a written report via email, but generally a verbal summary during our morning stand-up is sufficient.

Notification Method Recommendations

You are free to set up your notification rules as you see fit, to match how you would like to best respond to incidents. If you're not sure how to configure them, the Support team has some recommendations.

Mobile Alerts

Etiquette

  • If the current on-call comes into the office at 12pm looking tired, it's not because they're lazy. They probably got paged in the night. Cut them some slack and be nice.

  • Don't close or otherwise modify a card out from under someone else. If a specific card wasn't assigned to you as owner or watcher, you shouldn't be modifying it. Instead, add a comment with your notes in the monitoring system and in DoIT.

Acknowledging

  • If you are testing something, or performing an action that you know will cause an alert from our monitoring or may be identified as an issue by our customers, it's customary to "place the host/service in downtime" and announce it to all the involved parties for the time during which you will be testing. Notify the person on-call so they are aware of your testing.

  • "Never hesitate to escalate" - Never feel ashamed to rope in someone else if you're not sure how to resolve an issue. Likewise, never look down on someone else if they ask you for help.

  • Always consider covering an hour or so of someone else's on-call time if they request it and you are able. We all have lives which might get in the way of on-call time, and one day it might be you who needs to swap your on-call time in order to have a night out with a friend from out of town.

  • If an issue comes up during your on-call shift for which you got called, you are responsible for resolving it. Even if it takes 3 hours and there's only 1 hour left of your shift. You can hand over to the next on-call if they agree, but you should never assume that's possible.