{
"docs": [
{
"location": "/",
"text": "This documentation covers parts of the Spearhead Systems Issue Response process. It is adapted from \nPagerDuty's\n documentation and is a cut-down version of our own internal documentation, used at Spearhead Systems for any issue (incident or service request), and to prepare new employees for on-call responsibilities. It provides information not only on preparing for an incident, but also on what to do during and after one. It is intended to be used by on-call practitioners and those involved in an operational incident response process (or those wishing to enact a formal incident response process). See the \nabout page\n for more information on what this documentation is and why it exists. This documentation is complementary to what is available in our \nexisting wiki\n and may not yet be open sourced.\n\n\n\n\nIssue, Incident and Service Request\n\n\nAt Spearhead we use the term \nissue\n to define any request from our customers. Issues fall into two categories: \"Service Requests (SR)\" and \"Incidents (IN)\". Note that we sometimes use the term \"incident\" to cover both service requests and incidents. For brevity we will use SR and IN throughout this documentation.\n\n\n\n\nA \"service request\" is usually initiated by a human and is generally not critical for the normal functioning of the business, while an \"incident\" is an issue that is causing, or can cause, interruption to normal business functions.\n\n\n\n\nBeing On-Call\n#\n\n\nIf you've never been on-call before, you might be wondering what it's all about. 
These pages describe what the expectations of being on-call are, along with some resources to help you.\n\n\n\n\nBeing On-Call\n - \nA guide to being on-call, both what your responsibilities are, and what they are not.\n\n\nAlerting Principles\n - \nThe principles we use to determine what things page an engineer, and what time of day they page.\n\n\n\n\nBefore an Incident\n#\n\n\nReading material for things you probably want to know before an incident occurs. You likely don't want to be reading these during an actual incident.\n\n\n\n\nSeverity Levels\n - \nInformation on our severity level classification. What constitutes a Low issue? What's a \"Major Incident\"?, etc.\n\n\nDifferent Roles for Incidents\n - \nInformation on the roles during an incident; Incident Commander, Scribe, etc.\n\n\nIncident Call Etiquette\n - \nOur etiquette guidelines for incident calls, before you find yourself in one.\n\n\n\n\nDuring an Incident\n#\n\n\nInformation and processes during an incident.\n\n\n\n\nDuring an Incident\n - \nInformation on what to do during an incident, and how to constructively contribute.\n\n\nSecurity Incident Response\n - \nSecurity incidents are handled differently to normal operational incidents.\n\n\n\n\nAfter an Incident\n#\n\n\nOur followup processes, how we make sure we don't repeat mistakes and are always improving.\n\n\n\n\nPost-Mortem Process\n - \nInformation on our post-mortem process; what's involved and how to write or run a post-mortem.\n\n\nPost-Mortem Template\n - \nThe template we use for writing our post-mortems for major incidents.\n\n\n\n\nTraining\n#\n\n\nSo, you want to learn about incident response? 
You've come to the right place.\n\n\n\n\nTraining Overview\n - \nAn overview of our training guides and additional training material from third-parties.\n\n\nIncident Commander Training\n - \nA guide to becoming our next Incident Commander.\n\n\nDeputy Training\n - \nHow to be a deputy and back up the Incident Commander.\n\n\nScribe Training\n - \nA guide to scribing.\n\n\nSubject Matter Expert Training\n - \nA guide on responsibilities and behavior for all participants in a major incident.\n\n\nGlossary of Incident Response Terms\n - \nA collection of terms that you may hear being used, along with their definition.\n\n\n\n\nAdditional Reading\n#\n\n\nUseful material and resources from external parties that are relevant to incident response.\n\n\n\n\nIncident Management for Operations\n (O'Reilly)\n\n\nIncident Response\n (O'Reilly)\n\n\nDebriefing Facilitation Guide\n (Etsy)\n\n\nUS National Incident Management System (NIMS)\n (FEMA)",
"title": "Home"
},
{
"location": "/#being-on-call",
"text": "If you've never been on-call before, you might be wondering what it's all about. These pages describe what the expectations of being on-call are, along with some resources to help you. Being On-Call - A guide to being on-call, both what your responsibilities are, and what they are not. Alerting Principles - The principles we use to determine what things page an engineer, and what time of day they page.",
"title": "Being On-Call"
},
{
"location": "/#before-an-incident",
"text": "Reading material for things you probably want to know before an incident occurs. You likely don't want to be reading these during an actual incident. Severity Levels - Information on our severity level classification. What constitutes a Low issue? What's a \"Major Incident\"?, etc. Different Roles for Incidents - Information on the roles during an incident; Incident Commander, Scribe, etc. Incident Call Etiquette - Our etiquette guidelines for incident calls, before you find yourself in one.",
"title": "Before an Incident"
},
{
"location": "/#during-an-incident",
"text": "Information and processes during an incident. During an Incident - Information on what to do during an incident, and how to constructively contribute. Security Incident Response - Security incidents are handled differently to normal operational incidents.",
"title": "During an Incident"
},
{
"location": "/#after-an-incident",
"text": "Our followup processes, how we make sure we don't repeat mistakes and are always improving. Post-Mortem Process - Information on our post-mortem process; what's involved and how to write or run a post-mortem. Post-Mortem Template - The template we use for writing our post-mortems for major incidents.",
"title": "After an Incident"
},
{
"location": "/#training",
"text": "So, you want to learn about incident response? You've come to the right place. Training Overview - An overview of our training guides and additional training material from third-parties. Incident Commander Training - A guide to becoming our next Incident Commander. Deputy Training - How to be a deputy and back up the Incident Commander. Scribe Training - A guide to scribing. Subject Matter Expert Training - A guide on responsibilities and behavior for all participants in a major incident. Glossary of Incident Response Terms - A collection of terms that you may hear being used, along with their definition.",
"title": "Training"
},
{
"location": "/#additional-reading",
"text": "Useful material and resources from external parties that are relevant to incident response. Incident Management for Operations (O'Reilly) Incident Response (O'Reilly) Debriefing Facilitation Guide (Etsy) US National Incident Management System (NIMS) (FEMA)",
"title": "Additional Reading"
},
{
"location": "/oncall/being_oncall/",
"text": "A summary of expectations and helpful information for being on-call.\n\n\n\n\nWhat is On-Call?\n#\n\n\nBeing on-call means that you are able to be contacted at any time in order to investigate and fix issues that may arise. For example, if you are on-call, should any alarms be triggered by our monitoring solution, you will receive a \"page\" (an alert on your mobile device, email, phone call, or SMS, etc.) giving you details on what has broken. You will be expected to take whatever actions are necessary in order to resolve the issue and return your service to a normal state.\n\n\nAt Spearhead Systems we consider you to be on-call during normal working hours, when you are proactively working with \nDoIT\n and looking over your assigned cards/boards, as well as when you are formally \"on-call\" and issues are being redirected to you.\n\n\nOn-call responsibilities extend beyond normal office hours, and if you are on-call you are expected to be able to respond to issues, even at 2am. This sounds horrible (and it can be), but this is what our customers go through, and is the problem that the Spearhead Systems professional services team is trying to fix!\n\n\nResponsibilities\n#\n\n\n\n\n\n\nPrepare\n\n\n\n\nHave your laptop and Internet with you (office, home, a MiFi dongle, a phone with a tethering plan, etc.).\n\n\nHave a way to charge your MiFi.\n\n\n\n\n\n\nTeam alert escalation happens within 5 minutes, so set/stagger your notification timeouts (push, SMS, phone...) accordingly.\n\n\nMake sure texts and calls from Spearhead Systems (and directly from colleagues) can bypass your \"Do Not Disturb\" settings.\n\n\n\n\n\n\nBe prepared (environment is set up, a current working copy of the necessary repos is local and functioning, you have configured and tested environments on workstations, your credentials for third-party services are current, you have Java installed, ssh-keys and so on...)\n\n\nRead our Incident Response documentation (that's this!) 
to understand how we handle incidents and service requests, what the different roles and methods of communication are, etc.\n\n\nBe aware of your upcoming on-call time (primary, backup) and arrange swaps around travel, vacations, appointments etc.\n\n\n\n\n\n\n\n\nTriage\n\n\n\n\nAcknowledge and act on alerts whenever you can (see the first \"Not responsibilities\" point below)\n\n\nDetermine the urgency of the problem:\n\n\nIs it something that should be worked on right now or escalated into a major incident? (\"production server on fire\" situations. Security alerts) - do so.\n\n\nIs it some tactical work that doesn't have to happen during the night? (for example, disk utilization high watermark, but there's plenty of space left and the trend is not indicating impending doom) - snooze the alert until a more suitable time (working hours, the next morning...) and get back to fixing it then.\n\n\n\n\n\n\nCheck Slack for current activity. Often (but not always) actions that could potentially cause alerts will be announced there.\n\n\nDoes the alert and your initial investigation indicate a general problem or an issue with a specific service that the relevant team should look into? 
If it does not look like a problem you are the expert for, then escalate to another team member or group.\n\n\n\n\n\n\n\n\nFix\n\n\n\n\nYou are empowered to dive into any problem and act to fix it.\n\n\nInvolve other team members as necessary: do not hesitate to escalate if you cannot figure out the cause within a reasonable timeframe or if the service / alert is something you have not tackled before.\n\n\nIf the issue is not very time sensitive and you have other priority work, make a note of this in DoIT to keep track of it (with an appropriate severity).\n\n\n\n\n\n\n\n\nImprove\n\n\n\n\nIf a particular issue keeps happening, or if an issue alerts often but turns out to be a preventable non-issue \u2013 perhaps improving this should be a longer-term task.\n\n\nDisks that fill up, logs that should be rotated, noisy alerts... (we use Ansible, go ahead and start automating!)\n\n\n\n\n\n\nIf information is difficult / impossible to find, write it down. Constantly refactor and improve our knowledge base and documentation. 
Add redundant links and pointers if your mental model of the wiki / codebase does not match the way it is currently organized.\n\n\n\n\n\n\n\n\nSupport\n\n\n\n\nWhen your on-call \"shift\" ends, let the next on-call know about issues that have not been resolved yet and other experiences of note.\n\n\nIf you are making a change that impacts the schedule (adding / removing yourself, for example), let others know since many of us make arrangements around the on-call schedule well in advance.\n\n\nSupport each other: when doing activities that might generate plenty of pages, it is courteous to \"take the page\" away from the on-call by notifying them and scheduling an override for the duration.\n\n\n\n\n\n\n\n\nNot Responsibilities\n#\n\n\n\n\n\n\nNo expectation to be the first to acknowledge \nall\n of the alerts during the on-call period.\n\n\n\n\nCommutes (and other necessary distractions) are facts of life, and sometimes it is not possible to receive or act on an alert before it escalates. That's why we have a backup on-call schedule.\n\n\n\n\n\n\n\n\nNo expectation to fix all issues by yourself.\n\n\n\n\nNo one knows everything. Your whole team is here to help. There is no shame, and much to be learned, in escalating issues you are not certain about. \"Never hesitate to escalate\".\n\n\nService owners will always know more about how their stuff works. Especially if our and their documentation is lacking, double-checking with the relevant team avoids mistakes. Measure twice, cut once \u2013 and it's often best to let the subject matter expert do the cutting.\n\n\n\n\n\n\n\n\nRecommendations\n#\n\n\nIf your team is starting its own on-call rotation, here are some scheduling recommendations from the Operations team.\n\n\n\n\n\n\nAlways have a backup schedule. 
Yes, this means two people being on-call at the same time; however, it takes a lot of the stress off of the primary if they know they have a specific backup they can contact, rather than trying to choose a random member of the team.\n\n\n\n\nA backup shift should generally come directly after a primary shift. It gives a chance for the previous primary to pass on additional context which may have come up during their shift. It also helps to prevent people from sitting on issues with the intent of letting the next shift fix it.\n\n\n\n\n\n\n\n\nThe third level of your escalation (after the backup schedule) should probably be your entire team. This should hopefully never happen (it's happened once in the history of the Support team), but when it does, it's useful to be able to just get the next available person.\n\n\n\n\n\n\n\n\n\n\n\n\nTeam managers can (and should) be part of your normal rotation. It gives a better insight into what has been going on.\n\n\n\n\n\n\nNew members of the team should shadow your on-call rotation during the first few weeks. They should get all alerts, and should follow along with what you are doing. (All new employees shadow the Support team for one week of on-call, but it's useful to have new team members shadow your team rotations also. Just not at the same time).\n\n\n\n\n\n\nWe recommend you set your escalation timeout to 5 minutes. This should be plenty of time for someone to acknowledge the incident if they're able to. If they're not able to within 5 minutes, then they're probably not in a good position to respond to the incident anyway.\n\n\n\n\n\n\nWhen going off-call, you should provide a quick summary to the next on-call about any issues that may come up during their shift. A service has been flapping, an issue is likely to re-occur, etc. 
If you want to be formal, this can be a written report via email, but generally a verbal summary is sufficient.\n\n\n\n\n\n\nNotification Method Recommendations\n#\n\n\nYou are free to set up your notification rules as you see fit, to match how you would like to best respond to incidents. If you're not sure how to configure them, the Support team has some recommendations:\n\n\n\n\n\n\nUse Push Notification and Email as your first method of notification. Most of us have phones with us at all times, so this is a prudent first method and is usually sufficient. (DoIT is in the process of integration with SNS for push notifications)\n\n\nUse Phone and/or SMS notification each minute after, until the escalation time. If Push didn't work, then it's likely you need something stronger, like a phone call. Keep calling every minute until the escalation timeout is reached. If you don't pick up by the 3rd time, then it's unlikely you are able to respond, and the incident will get escalated away from you.\n\n\n\n\nEtiquette\n#\n\n\n\n\n\n\nIf the current on-call comes into the office at 12pm looking tired, it's not because they're lazy. They probably got paged in the night. Cut them some slack and be nice.\n\n\n\n\n\n\nDon't acknowledge an incident out from under someone else. If you didn't get paged for the incident, then you shouldn't be acknowledging it. Add a comment with your notes instead.\n\n\n\n\n\n\n\n\n\n\n\n\nIf you are testing something, or performing an action that you know will cause a page (notification, alert), it's customary to \"take the pager\" for the time during which you will be testing. Notify the person on-call that you are taking the pager for the next hour while you test.\n\n\n\n\n\n\n\"Never hesitate to escalate\" - Never feel ashamed to rope in someone else if you're not sure how to resolve an issue. 
Likewise, never look down on someone else if they ask you for help.\n\n\n\n\n\n\nAlways consider covering an hour or so of someone else's on-call time if they request it and you are able. We all have lives which might get in the way of on-call time, and one day it might be you who needs to swap their on-call time in order to have a night out with your friend from out of town.\n\n\n\n\n\n\nIf an issue comes up during your on-call shift for which you got paged, you are responsible for resolving it. Even if it takes 3 hours and there's only 1 hour left of your shift. You can hand over to the next on-call if they agree, but you should never assume that's possible.",
"title": "Being On-Call"
},
{
"location": "/oncall/being_oncall/#what-is-on-call",
"text": "Being on-call means that you are able to be contacted at any time in order to investigate and fix issues that may arise. For example, if you are on-call, should any alarms be triggered by our monitoring solution, you will receive a \"page\" (an alert on your mobile device, email, phone call, or SMS, etc.) giving you details on what has broken. You will be expected to take whatever actions are necessary in order to resolve the issue and return your service to a normal state. At Spearhead Systems we consider you to be on-call during normal working hours, when you are proactively working with DoIT and looking over your assigned cards/boards, as well as when you are formally \"on-call\" and issues are being redirected to you. On-call responsibilities extend beyond normal office hours, and if you are on-call you are expected to be able to respond to issues, even at 2am. This sounds horrible (and it can be), but this is what our customers go through, and is the problem that the Spearhead Systems professional services team is trying to fix!",
"title": "What is On-Call?"
},
{
"location": "/oncall/being_oncall/#responsibilities",
"text": "Prepare Have your laptop and Internet with you (office, home, a MiFi dongle, a phone with a tethering plan, etc.). Have a way to charge your MiFi. Team alert escalation happens within 5 minutes, so set/stagger your notification timeouts (push, SMS, phone...) accordingly. Make sure texts and calls from Spearhead Systems (and directly from colleagues) can bypass your \"Do Not Disturb\" settings. Be prepared (environment is set up, a current working copy of the necessary repos is local and functioning, you have configured and tested environments on workstations, your credentials for third-party services are current, you have Java installed, ssh-keys and so on...) Read our Incident Response documentation (that's this!) to understand how we handle incidents and service requests, what the different roles and methods of communication are, etc. Be aware of your upcoming on-call time (primary, backup) and arrange swaps around travel, vacations, appointments etc. Triage Acknowledge and act on alerts whenever you can (see the first \"Not responsibilities\" point below) Determine the urgency of the problem: Is it something that should be worked on right now or escalated into a major incident? (\"production server on fire\" situations. Security alerts) - do so. Is it some tactical work that doesn't have to happen during the night? (for example, disk utilization high watermark, but there's plenty of space left and the trend is not indicating impending doom) - snooze the alert until a more suitable time (working hours, the next morning...) and get back to fixing it then. Check Slack for current activity. Often (but not always) actions that could potentially cause alerts will be announced there. Does the alert and your initial investigation indicate a general problem or an issue with a specific service that the relevant team should look into? If it does not look like a problem you are the expert for, then escalate to another team member or group. 
Fix You are empowered to dive into any problem and act to fix it. Involve other team members as necessary: do not hesitate to escalate if you cannot figure out the cause within a reasonable timeframe or if the service / alert is something you have not tackled before. If the issue is not very time sensitive and you have other priority work, make a note of this in DoIT to keep track of it (with an appropriate severity). Improve If a particular issue keeps happening, or if an issue alerts often but turns out to be a preventable non-issue \u2013 perhaps improving this should be a longer-term task. Disks that fill up, logs that should be rotated, noisy alerts... (we use Ansible, go ahead and start automating!) If information is difficult / impossible to find, write it down. Constantly refactor and improve our knowledge base and documentation. Add redundant links and pointers if your mental model of the wiki / codebase does not match the way it is currently organized. Support When your on-call \"shift\" ends, let the next on-call know about issues that have not been resolved yet and other experiences of note. If you are making a change that impacts the schedule (adding / removing yourself, for example), let others know since many of us make arrangements around the on-call schedule well in advance. Support each other: when doing activities that might generate plenty of pages, it is courteous to \"take the page\" away from the on-call by notifying them and scheduling an override for the duration.",
"title": "Responsibilities"
},
{
"location": "/oncall/being_oncall/#not-responsibilities",
"text": "No expectation to be the first to acknowledge all of the alerts during the on-call period. Commutes (and other necessary distractions) are facts of life, and sometimes it is not possible to receive or act on an alert before it escalates. That's why we have a backup on-call schedule. No expectation to fix all issues by yourself. No one knows everything. Your whole team is here to help. There is no shame, and much to be learned, in escalating issues you are not certain about. \"Never hesitate to escalate\". Service owners will always know more about how their stuff works. Especially if our and their documentation is lacking, double-checking with the relevant team avoids mistakes. Measure twice, cut once \u2013 and it's often best to let the subject matter expert do the cutting.",
"title": "Not Responsibilities"
},
{
"location": "/oncall/being_oncall/#recommendations",
"text": "If your team is starting its own on-call rotation, here are some scheduling recommendations from the Operations team. Always have a backup schedule. Yes, this means two people being on-call at the same time; however, it takes a lot of the stress off of the primary if they know they have a specific backup they can contact, rather than trying to choose a random member of the team. A backup shift should generally come directly after a primary shift. It gives a chance for the previous primary to pass on additional context which may have come up during their shift. It also helps to prevent people from sitting on issues with the intent of letting the next shift fix it. The third level of your escalation (after the backup schedule) should probably be your entire team. This should hopefully never happen (it's happened once in the history of the Support team), but when it does, it's useful to be able to just get the next available person. Team managers can (and should) be part of your normal rotation. It gives a better insight into what has been going on. New members of the team should shadow your on-call rotation during the first few weeks. They should get all alerts, and should follow along with what you are doing. (All new employees shadow the Support team for one week of on-call, but it's useful to have new team members shadow your team rotations also. Just not at the same time). We recommend you set your escalation timeout to 5 minutes. This should be plenty of time for someone to acknowledge the incident if they're able to. If they're not able to within 5 minutes, then they're probably not in a good position to respond to the incident anyway. When going off-call, you should provide a quick summary to the next on-call about any issues that may come up during their shift. A service has been flapping, an issue is likely to re-occur, etc. If you want to be formal, this can be a written report via email, but generally a verbal summary is sufficient.",
"title": "Recommendations"
},
{
"location": "/oncall/being_oncall/#notification-method-recommendations",
"text": "You are free to set up your notification rules as you see fit, to match how you would like to best respond to incidents. If you're not sure how to configure them, the Support team has some recommendations: Use Push Notification and Email as your first method of notification. Most of us have phones with us at all times, so this is a prudent first method and is usually sufficient. (DoIT is in the process of integration with SNS for push notifications) Use Phone and/or SMS notification each minute after, until the escalation time. If Push didn't work, then it's likely you need something stronger, like a phone call. Keep calling every minute until the escalation timeout is reached. If you don't pick up by the 3rd time, then it's unlikely you are able to respond, and the incident will get escalated away from you.",
"title": "Notification Method Recommendations"
},
{
"location": "/oncall/being_oncall/#etiquette",
"text": "If the current on-call comes into the office at 12pm looking tired, it's not because they're lazy. They probably got paged in the night. Cut them some slack and be nice. Don't acknowledge an incident out from under someone else. If you didn't get paged for the incident, then you shouldn't be acknowledging it. Add a comment with your notes instead. If you are testing something, or performing an action that you know will cause a page (notification, alert), it's customary to \"take the pager\" for the time during which you will be testing. Notify the person on-call that you are taking the pager for the next hour while you test. \"Never hesitate to escalate\" - Never feel ashamed to rope in someone else if you're not sure how to resolve an issue. Likewise, never look down on someone else if they ask you for help. Always consider covering an hour or so of someone else's on-call time if they request it and you are able. We all have lives which might get in the way of on-call time, and one day it might be you who needs to swap their on-call time in order to have a night out with your friend from out of town. If an issue comes up during your on-call shift for which you got paged, you are responsible for resolving it. Even if it takes 3 hours and there's only 1 hour left of your shift. You can hand over to the next on-call if they agree, but you should never assume that's possible.",
"title": "Etiquette"
},
{
"location": "/oncall/alerting_principles/",
"text": "We manage how we get alerted based on many factors such as the customer's contractual SLA, the urgency of their request or incident, etc. \nAn alert or notification is something which requires a human to perform an action\n. Based on the severity of the issue (service request or incident) we prioritize accordingly in \nDoIT\n.\n\n\n\n\nMajor Priority Alerts\n\n\nAnything that wakes up a human in the middle of the night should be \nimmediately human actionable\n. If it is not, then we need to adjust the alert to not page at those times.\n\n\n\n\n\n\n\n\n\n\nPriority\n\n\nAlerts\n\n\nResponse\n\n\n\n\n\n\n\n\n\n\nMajor\n\n\nMajor-Priority Spearhead Alert 24/7/365.\n\n\nRequires \nimmediate human action\n.\n\n\n\n\n\n\nNormal\n\n\nNormal-Priority Spearhead Alert during \nbusiness hours only\n.\n\n\nRequires human action that same working day.\n\n\n\n\n\n\nMinor\n\n\nMinor-Priority Spearhead Alert 24/7/365.\n\n\nRequires human action at some point.\n\n\n\n\n\n\n\n\nBoth IN and SR (incidents, service requests) share the same priorities. The actual response / resolution times vary and are based upon contractual agreements with the customer. These details (SLA) are available in DoIT on the organization page of the respective customer.\n\n\nIf you're setting up a new alert/notification, consider the chart above for how you want to alert people. Be mindful of not creating new high-priority alerts if they don't require an immediate response, for example.\n\n\n\n\nAlert Channels\n\n\nPresently we use email as the only notification method. This means keeping an eye on your email is essential!\nSMS and Push notifications are in the pipeline for DoIT. 
\n\n\n\n\nExamples\n#\n\n\n\"Production service is failing for 75% of requests, automation is unable to resolve.\"\n#\n\n\nThis would be a \nMajor\n priority IN, requiring immediate human action to resolve.\n\n\n\n\n\"A customer sends an email stating that \"Production server disk space is filling, expected to be full in 48 hours. Log rotation is insufficient to resolve.\"\n#\n\n\nThis would be a \nNormal\n priority SR, requiring human action soon, but not immediately.\n\n\n\n\n\"An SSL certificate is due to expire in one week.\"\n#\n\n\nThis would be a \nMinor\n priority SR, requiring human action some time soon.",
"title": "Alerting Principles"
},
{
"location": "/oncall/alerting_principles/#examples",
"text": "",
"title": "Examples"
},
{
"location": "/oncall/alerting_principles/#production-service-is-failing-for-75-of-requests-automation-is-unable-to-resolve_",
"text": "This would be a Major priority IN, requiring immediate human action to resolve.",
"title": "\"Production service is failing for 75% of requests, automation is unable to resolve.\""
},
{
"location": "/oncall/alerting_principles/#a-customer-sends-an-email-stating-that-production-server-disk-space-is-filling-expected-to-be-full-in-48-hours-log-rotation-is-insufficient-to-resolve",
"text": "This would be a Normal priority SR, requiring human action soon, but not immediately.",
"title": "\"A customer sends an email stating that \"Production server disk space is filling, expected to be full in 48 hours. Log rotation is insufficient to resolve.\""
},
{
"location": "/oncall/alerting_principles/#an-ssl-certificate-is-due-to-expire-in-one-week",
"text": "This would be a Minor priority SR, requiring human action some time soon.",
"title": "\"An SSL certificate is due to expire in one week.\""
},
{
"location": "/before/severity_levels/",
"text": "The first step in any incident response process is to determine what actually constitutes an incident. We have two high-level categories for classifying incidents: this is done using \"SR\" or \"IN\" definitions with an attached priority of \"Minor\", \"Normal\" or \"Major\". \"SR\" are \"Service requests\" initiated by a customer and usually do not constitute a critical issue (there are exceptions) and \"IN\" are \"incidents\" which are generally \"urgent\".\n\n\nAll of our operational issues are to be classified as either a Service Request or an Incident. Incidents take priority over Service Requests, unless a Service Request has a higher priority. In general you will want to resolve a higher severity SR or IN before a lower one (a \"Major\" priority gets a more intensive response than a \"Normal\" incident, for example).\n\n\n\n\nAlways Assume The Worst\n\n\nIf you are unsure which level an incident is (e.g. not sure if an IN is Major or Normal), \ntreat it as the higher one\n. The middle of an incident is not the time to discuss or litigate severities; just assume the highest and review during a post-mortem.\n\n\n\n\n\n \n\n \n\n \nSeverity\n\n \nDescription\n\n \nWhat To Do\n\n \n\n \n\n \n\n \nMajor\n\n \n\n \n\n \nThe system is in a critical state and is actively impacting a large number of customers.\n\n \nFunctionality has been severely impaired for a long time, breaking SLA.\n\n \nCustomer-data-exposing security vulnerability has come to our attention.\n\n \n\n \n\n \nSee \nDuring an Incident\n.\n\n \n\n \n\n \nNormal\n\n \n\n \n\n \nFunctionality of virtualization platform is severely impaired.\n\n \nE-mail system is offline.\n\n \n\n \n\n \nSee \nDuring an Incident\n.\n\n \n\n \n\n \nAnything above this line is considered a \"Major Incident\". These are generally Incidents (IN). Below are service requests (SR) which are usually initiated by a human who can help with prioritizing. 
A call is triggered for all major incidents (regardless of whether it is an SR or IN).\n\n \n\n \n\n \nNormal\n\n \n\n \n\n \nPartial loss of functionality, only affecting a minority of customers.\n\n \nSomething that has the likelihood of becoming Major if nothing is done.\n\n \nNo redundancy in a service (failure of 1 more node will cause outage).\n\n \n\n \n\n \n\n \n\n \nWork on issue as your top priority.\n\n \nLiaise with engineers of affected systems to identify cause.\n\n \nIf related to a recent deployment, roll back.\n\n \nMonitor status and notice if/when it escalates.\n\n \nMention on Slack if you think it has the potential to escalate.\n\n \n\n \n\n \n\n \n\n \nNormal\n\n \n\n \n\n \nPerformance issues (delays, etc). Tasks that require non-immediate attention.\n\n \nJob failure (not impacting alerting).\n\n \n\n \n\n \n\n \n\n \nWork on the issue as your first priority (above \"Low\" tasks).\n\n \nMonitor status and notice if/when it escalates.\n\n \n\n \n\n \n\n \n\n \nLow\n\n \n\n \n\n \nNormal bugs which aren't impacting system use, cosmetic issues, etc.\n\n \n\n \n\n \n\n \n\n \nCreate a DoIT ticket and assign to owner of affected system.\n\n \n\n \n\n \n\n \n\n\n\n\n\n\n\nBe Specific\n\n\nWhen creating Cards in DoIT, be as specific as possible and include all necessary details. Include relevant details regarding when the issue started, what may have triggered it, etc. Document your efforts through worklogs and be specific there as well.",
"title": "Severity Levels"
},
{
"location": "/before/different_roles/",
"text": "There are several roles for our incident response teams at Spearhead Systems. Certain roles only have one person per incident (e.g. support engineer), whereas other roles can have multiple people (e.g. System/Solution Architects, juniors, etc.). It's all about coming together as a team, working the problem, and getting a solution quickly.\n\n\nHere is a rough outline of our role hierarchy, with each role discussed in more detail on the rest of this page.\n\n\n\n\n\n\nTeam Leader (IC)\n#\n\n\nWhat is it?\n#\n\n\nA Team Leader acts as the single source of truth of what is currently happening and what is going to happen during a major incident. They come in all shapes, sizes, and colors. TL's are also the key elements in a project (boards in DoIT).\n\n\nWhy have one?\n#\n\n\nAs any system grows in size and complexity, things break and cause incidents. The TL is needed to help drive major incidents to resolution by organizing their team towards a common goal.\n\n\nWhat are the responsibilities?\n#\n\n\n\n\nHelp prepare for projects and incidents,\n\n\nSet up communications channels.\n\n\nCreate the DoIT board(s) and other project planning related materials.\n\n\nFunnel people to these communications channels.\n\n\nTrain team members on how to communicate and train other TL's.\n\n\n\n\n\n\nDrive incidents and projects to resolution,\n\n\nGet everyone on the same communication channel.\n\n\nCollect information from team members for their services/area of ownership status.\n\n\nCollect proposed repair actions, then recommend repair actions to be taken.\n\n\nDelegate all repair actions, the TL is NOT a resolver.\n\n\nBe the single authority on system status\n\n\nCommunicate directly with the customers and end-users\n\n\nnot the engineers themselves!\n\n\n\n\n\n\n\n\nPost Mortem,\n\n\nCreating the initial template right after the incident so people can put in their thoughts while fresh.\n\n\nAssigning the post-mortem after the event is over, this can be done after 
the call.\n\n\nWork with Managers/Support on scheduling preventive actions.\n\n\n\n\n\n\n\n\nWho are they?\n#\n\n\nAnyone on the TL on-call schedule. Trainees are typically on the TL Shadow schedule.\n\n\nHow can I become one?\n#\n\n\nTake a look at our \nTeam Leader training guide\n.\n\n\n\n\nDeputy\n#\n\n\nWhat is it?\n#\n\n\nA Deputy is a direct support role for the Incident Commander. This is not a shadow where the person just observes, the Deputy is expected to perform important tasks during an incident.\n\n\nWhy have one?\n#\n\n\nIt's important for the IC to focus on the problem at hand, rather than worrying about documenting the steps or monitoring timers. The deputy helps to support the IC and keep them focussed on the incident.\n\n\nWhat are the responsibilities?\n#\n\n\nThe Deputy is expected to:\n\n\n\n\nBring up issues to the Incident Commander that may otherwise not be addressed (keeping an eye on timers that have been started, circling back around to missed items from a roll call, etc).\n\n\nBe a \"hot standby\" Incident Commander, should the primary need to either transition to a SME, or otherwise have to step away from the IC role.\n\n\nPage SME's or other on-call engineers as instructed by the Incident Commander.\n\n\nManage the incident call, and be prepared to remove people from the call if instructed by the Incident Commander.\n\n\nLiaise with stakeholders and provide status updates on Slack as necessary.\n\n\n\n\nWho are they?\n#\n\n\nAny Incident Commander can act as a deputy. Deputies need to be trained as an Incident Commander as they may be required to take over command.\n\n\nHow can I become one?\n#\n\n\nTake a look at our \nDeputy training guide\n. 
Deputies also need to be \ntrained as an Incident Commander\n.\n\n\n\n\nScribe\n#\n\n\nWhat is it?\n#\n\n\nA Scribe documents the timeline of an incident as it progresses, and makes sure all important decisions and data are captured for later review.\n\n\nWhy have one?\n#\n\n\nThe incident commander will need to focus on the problem at hand, and the subject matter experts will need to focus on resolving the incident. It is important to capture a timeline of events as they happen so that they can be reviewed during the post-mortem to determine how well we performed, and so we can accurately determine any additional impact that we might not have noticed at the time.\n\n\nWhat are the responsibilities?\n#\n\n\nThe Scribe is expected to:\n\n\n\n\nEnsure the incident call is being recorded.\n\n\nNote in Slack important data, events, and actions, as they happen. Specifically:\n\n\nKey actions as they are taken (Example: \"prod-server-387723 is being restarted to attempt to remove the stuck lock\")\n\n\nStatus reports when one is provided by the IC (Example: \"We are in SEV-1, service A is currently not processing events due to a stuck lock, X is restarting the app stack, next checkin in 3 minutes\")\n\n\nAny key callouts either during the call or at the ending review (Example: \"Note: (Bob B) We should have a better way to determine stuck locks.\")\n\n\n\n\n\n\nWho are they?\n#\n\n\nAnyone can act as a scribe during an incident, and is chosen by the Incident Commander at the start of the call. 
Typically the Deputy will act as the Scribe, but that doesn't necessarily need to happen, and for larger incidents may not be possible.\n\n\nHow can I become one?\n#\n\n\nFollow our \nScribe training guide\n, and then notify the Incident Commanders that you would like to be considered for scribing for the next incident.\n\n\n\n\nSubject Matter Expert\n#\n\n\nWhat is it?\n#\n\n\nA Subject Matter Expert (SME), sometimes called a \"Resolver\", is a domain expert or designated owner of a component or service that is part of the Spearhead Systems software stack.\n\n\nWhy have one?\n#\n\n\nThe IC and deputy are not all-knowing super beings. When there is a problem with a service, an expert in that service is needed to be able to quickly help identify and fix issues.\n\n\nWhat are the responsibilities?\n#\n\n\n\n\nBeing able to diagnose common problems with the service.\n\n\nBeing able to rapidly fix issues found during an incident.\n\n\nConcise communication skills, specifically for CAN reports:\n\n\nCondition: What is the current state of the service? Is it healthy or not?\n\n\nActions: What actions need to be taken if the service is not in a healthy state?\n\n\nNeeds: What support does the resolver need to perform an action?\n\n\n\n\n\n\n\n\nWho are they?\n#\n\n\nAnyone who is considered a \"domain expert\" can act as a resolver for an incident. Typically the service's primary on-call will act as the SME for that service.\n\n\nHow can I become one?\n#\n\n\nTake a look at our \nSubject Matter Expert training guide\n. You should also discuss with your team and service owner to determine what the requirements are for your particular service.\n\n\n\n\nCustomer Liaison\n#\n\n\nWhat is it?\n#\n\n\nA person responsible for interacting with customers, either directly, or via our public communication channels. 
Typically a member of the Customer Support team.\n\n\nWhy have one?\n#\n\n\nAll of the other roles will be actively working on identifying the cause and resolving the issue, we need a role which is focused purely on the customer interaction side of things so that it can be done properly, with the due care and attention it needs.\n\n\nWhat are the responsibilities?\n#\n\n\n\n\nPost any publicly facing messages regarding the incident (Twitter, StatusPage, etc).\n\n\nNotify the IC of any customers reporting that they are affected by the incident.\n\n\n\n\nWho are they?\n#\n\n\nAny member of the Support Team can act as a customer liaison.\n\n\nHow can I become one?\n#\n\n\nDiscuss with the Support Team about becoming our next customer liaison.",
"title": "Different Roles"
},
{
"location": "/before/different_roles/#team-leader-ic",
"text": "",
"title": "Team Leader (IC)"
},
{
"location": "/before/different_roles/#what-is-it",
"text": "A Team Leader acts as the single source of truth of what is currently happening and what is going to happen during a major incident. They come in all shapes, sizes, and colors. TL's are also the key elements in a project (boards in DoIT).",
"title": "What is it?"
},
{
"location": "/before/different_roles/#why-have-one",
"text": "As any system grows in size and complexity, things break and cause incidents. The TL is needed to help drive major incidents to resolution by organizing their team towards a common goal.",
"title": "Why have one?"
},
{
"location": "/before/different_roles/#what-are-the-responsibilities",
"text": "Help prepare for projects and incidents, Set up communications channels. Create the DoIT board(s) and other project planning related materials. Funnel people to these communications channels. Train team members on how to communicate and train other TL's. Drive incidents and projects to resolution, Get everyone on the same communication channel. Collect information from team members for their services/area of ownership status. Collect proposed repair actions, then recommend repair actions to be taken. Delegate all repair actions, the TL is NOT a resolver. Be the single authority on system status Communicate directly with the customers and end-users not the engineers themselves! Post Mortem, Creating the initial template right after the incident so people can put in their thoughts while fresh. Assigning the post-mortem after the event is over, this can be done after the call. Work with Managers/Support on scheduling preventive actions.",
"title": "What are the responsibilities?"
},
{
"location": "/before/different_roles/#who-are-they",
"text": "Anyone on the TL on-call schedule. Trainees are typically on the TL Shadow schedule.",
"title": "Who are they?"
},
{
"location": "/before/different_roles/#how-can-i-become-one",
"text": "Take a look at our Team Leader training guide .",
"title": "How can I become one?"
},
{
"location": "/before/different_roles/#deputy",
"text": "",
"title": "Deputy"
},
{
"location": "/before/different_roles/#what-is-it_1",
"text": "A Deputy is a direct support role for the Incident Commander. This is not a shadow where the person just observes, the Deputy is expected to perform important tasks during an incident.",
"title": "What is it?"
},
{
"location": "/before/different_roles/#why-have-one_1",
"text": "It's important for the IC to focus on the problem at hand, rather than worrying about documenting the steps or monitoring timers. The deputy helps to support the IC and keep them focussed on the incident.",
"title": "Why have one?"
},
{
"location": "/before/different_roles/#what-are-the-responsibilities_1",
"text": "The Deputy is expected to: Bring up issues to the Incident Commander that may otherwise not be addressed (keeping an eye on timers that have been started, circling back around to missed items from a roll call, etc). Be a \"hot standby\" Incident Commander, should the primary need to either transition to a SME, or otherwise have to step away from the IC role. Page SME's or other on-call engineers as instructed by the Incident Commander. Manage the incident call, and be prepared to remove people from the call if instructed by the Incident Commander. Liaise with stakeholders and provide status updates on Slack as necessary.",
"title": "What are the responsibilities?"
},
{
"location": "/before/different_roles/#who-are-they_1",
"text": "Any Incident Commander can act as a deputy. Deputies need to be trained as an Incident Commander as they may be required to take over command.",
"title": "Who are they?"
},
{
"location": "/before/different_roles/#how-can-i-become-one_1",
"text": "Take a look at our Deputy training guide . Deputies also need to be trained as an Incident Commander .",
"title": "How can I become one?"
},
{
"location": "/before/different_roles/#scribe",
"text": "",
"title": "Scribe"
},
{
"location": "/before/different_roles/#what-is-it_2",
"text": "A Scribe documents the timeline of an incident as it progresses, and makes sure all important decisions and data are captured for later review.",
"title": "What is it?"
},
{
"location": "/before/different_roles/#why-have-one_2",
"text": "The incident commander will need to focus on the problem at hand, and the subject matter experts will need to focus on resolving the incident. It is important to capture a timeline of events as they happen so that they can be reviewed during the post-mortem to determine how well we performed, and so we can accurately determine any additional impact that we might not have noticed at the time.",
"title": "Why have one?"
},
{
"location": "/before/different_roles/#what-are-the-responsibilities_2",
"text": "The Scribe is expected to: Ensure the incident call is being recorded. Note in Slack important data, events, and actions, as they happen. Specifically: Key actions as they are taken (Example: \"prod-server-387723 is being restarted to attempt to remove the stuck lock\") Status reports when one is provided by the IC (Example: \"We are in SEV-1, service A is currently not processing events due to a stuck lock, X is restarting the app stack, next checkin in 3 minutes\") Any key callouts either during the call or at the ending review (Example: \"Note: (Bob B) We should have a better way to determine stuck locks.\")",
"title": "What are the responsibilities?"
},
{
"location": "/before/different_roles/#who-are-they_2",
"text": "Anyone can act as a scribe during an incident, and is chosen by the Incident Commander at the start of the call. Typically the Deputy will act as the Scribe, but that doesn't necessarily need to happen, and for larger incidents may not be possible.",
"title": "Who are they?"
},
{
"location": "/before/different_roles/#how-can-i-become-one_2",
"text": "Follow our Scribe training guide , and then notify the Incident Commanders that you would like to be considered for scribing for the next incident.",
"title": "How can I become one?"
},
{
"location": "/before/different_roles/#subject-matter-expert",
"text": "",
"title": "Subject Matter Expert"
},
{
"location": "/before/different_roles/#what-is-it_3",
"text": "A Subject Matter Expert (SME), sometimes called a \"Resolver\", is a domain expert or designated owner of a component or service that is part of the Spearhead Systems software stack.",
"title": "What is it?"
},
{
"location": "/before/different_roles/#why-have-one_3",
"text": "The IC and deputy are not all-knowing super beings. When there is a problem with a service, an expert in that service is needed to be able to quickly help identify and fix issues.",
"title": "Why have one?"
},
{
"location": "/before/different_roles/#what-are-the-responsibilities_3",
"text": "Being able to diagnose common problems with the service. Being able to rapidly fix issues found during an incident. Concise communication skills, specifically for CAN reports: Condition: What is the current state of the service? Is it healthy or not? Actions: What actions need to be taken if the service is not in a healthy state? Needs: What support does the resolver need to perform an action?",
"title": "What are the responsibilities?"
},
{
"location": "/before/different_roles/#who-are-they_3",
"text": "Anyone who is considered a \"domain expert\" can act as a resolver for an incident. Typically the service's primary on-call will act as the SME for that service.",
"title": "Who are they?"
},
{
"location": "/before/different_roles/#how-can-i-become-one_3",
"text": "Take a look at our Subject Matter Expert training guide . You should also discuss with your team and service owner to determine what the requirements are for your particular service.",
"title": "How can I become one?"
},
{
"location": "/before/different_roles/#customer-liaison",
"text": "",
"title": "Customer Liaison"
},
{
"location": "/before/different_roles/#what-is-it_4",
"text": "A person responsible for interacting with customers, either directly, or via our public communication channels. Typically a member of the Customer Support team.",
"title": "What is it?"
},
{
"location": "/before/different_roles/#why-have-one_4",
"text": "All of the other roles will be actively working on identifying the cause and resolving the issue, we need a role which is focused purely on the customer interaction side of things so that it can be done properly, with the due care and attention it needs.",
"title": "Why have one?"
},
{
"location": "/before/different_roles/#what-are-the-responsibilities_4",
"text": "Post any publicly facing messages regarding the incident (Twitter, StatusPage, etc). Notify the IC of any customers reporting that they are affected by the incident.",
"title": "What are the responsibilities?"
},
{
"location": "/before/different_roles/#who-are-they_4",
"text": "Any member of the Support Team can act as a customer liaison.",
"title": "Who are they?"
},
{
"location": "/before/different_roles/#how-can-i-become-one_4",
"text": "Discuss with the Support Team about becoming our next customer liaison.",
"title": "How can I become one?"
},
{
"location": "/before/call_etiquette/",
"text": "You've just joined an incident call, and you've never been on one before. You have no idea what's going on, or what you're supposed to be doing. This page will help you through your first time on an incident call, and will provide a reference for future calls you may be a part of.\n\n\n\n\nCredit: \nOfficial White House Photo\n by Pete Souza\n\n\nFirst Steps\n#\n\n\n\n\nIf you intend on participating on the incident call you should join both the call, and Slack.\n\n\nMake sure you are in a quiet environment in order to participate on the call. Background noise should be kept to a minimum.\n\n\nKeep your microphone muted until you have something to say.\n\n\nIdentify yourself when you join the call; State your name and the system you are the expert for.\n\n\nSpeak up and speak clearly.\n\n\nBe direct and factual.\n\n\nKeep conversations/discussions short and to the point.\n\n\nBring any concerns to the Incident Commander (IC) on the call.\n\n\nRespect time constraints given by the Incident Commander.\n\n\n\n\nLingo\n#\n\n\nUse clear terminology, and avoid using acronyms or abbreviations during a call. Clear and accurate communication is more important than quick communication.\n\n\n\n\nStandard radio \nvoice procedure\n does not need to be followed on calls. However, you should familiarize yourself with the terms, as you may hear them on a call (or need to use them yourself). The ones in more active use on major incident calls are,\n\n\n\n\nAck/Rog\n - \"I have received and understood\"\n\n\nSay Again\n - \"Repeat your last message\"\n\n\nStandby\n - \"Please wait a moment for the next response\"\n\n\nWilco\n - \"Will comply\"\n\n\n\n\nDo not invent new abbreviations, and always favor being explicit over implicit. 
It is better to make things clearer than to try and save time by abbreviating, only to have a misunderstanding because others didn't know the abbreviation.\n\n\nThe Commander\n#\n\n\nThe Incident Commander (IC) is the leader of the incident response process, and is responsible for bringing the incident to resolution. They will announce themselves at the start of the call, and will generally be doing most of the talking.\n\n\n\n\nFollow all instructions from the incident commander, without exception.\n\n\nDo not perform any actions unless the incident commander has told you to do so.\n\n\nThe commander will typically poll for any strong objections before performing a large action. This is your time to raise any objections if you have them.\n\n\nOnce the commander has made a decision, that decision is final and should be followed, even if you disagreed during the poll.\n\n\nAnswer any questions the commander asks you in a clear and concise way.\n\n\nAnswering that you \"don't know\" something is perfectly acceptable. Do not try to guess.\n\n\n\n\n\n\nThe commander may ask you to investigate something and get back to them in X minutes. Make sure you are ready with an answer within that time.\n\n\nAnswering that you need more time is perfectly acceptable, but you need to give the commander an estimate of how much time.\n\n\n\n\n\n\n\n\nProblems?\n#\n\n\nThere's no incident commander on the call! I don't know what to do!\n#\n\n\nAsk on the call if an IC is present. If you have no response, type \n!ic page\n in Slack. This will page the primary and backup IC to the call.\n\n\nI can join the call or Slack, but not both, what should I do?\n#\n\n\nYou're welcome to join only one of the channels, however you should not actively participate in the incident response if so, as it causes disjointed communication. Liaise with someone who is both in Slack and on the call to provide any input you may have so that they can raise it.",
"title": "Call Etiquette"
},
{
"location": "/before/call_etiquette/#first-steps",
"text": "If you intend on participating on the incident call you should join both the call, and Slack. Make sure you are in a quiet environment in order to participate on the call. Background noise should be kept to a minimum. Keep your microphone muted until you have something to say. Identify yourself when you join the call; State your name and the system you are the expert for. Speak up and speak clearly. Be direct and factual. Keep conversations/discussions short and to the point. Bring any concerns to the Incident Commander (IC) on the call. Respect time constraints given by the Incident Commander.",
"title": "First Steps"
},
{
"location": "/before/call_etiquette/#lingo",
"text": "Use clear terminology, and avoid using acronyms or abbreviations during a call. Clear and accurate communication is more important than quick communication. Standard radio voice procedure does not need to be followed on calls. However, you should familiarize yourself with the terms, as you may hear them on a call (or need to use them yourself). The ones in more active use on major incident calls are, Ack/Rog - \"I have received and understood\" Say Again - \"Repeat your last message\" Standby - \"Please wait a moment for the next response\" Wilco - \"Will comply\" Do not invent new abbreviations, and always favor being explicit over implicit. It is better to make things clearer than to try and save time by abbreviating, only to have a misunderstanding because others didn't know the abbreviation.",
"title": "Lingo"
},
{
"location": "/before/call_etiquette/#the-commander",
"text": "The Incident Commander (IC) is the leader of the incident response process, and is responsible for bringing the incident to resolution. They will announce themselves at the start of the call, and will generally be doing most of the talking. Follow all instructions from the incident commander, without exception. Do not perform any actions unless the incident commander has told you to do so. The commander will typically poll for any strong objections before performing a large action. This is your time to raise any objections if you have them. Once the commander has made a decision, that decision is final and should be followed, even if you disagreed during the poll. Answer any questions the commander asks you in a clear and concise way. Answering that you \"don't know\" something is perfectly acceptable. Do not try to guess. The commander may ask you to investigate something and get back to them in X minutes. Make sure you are ready with an answer within that time. Answering that you need more time is perfectly acceptable, but you need to give the commander an estimate of how much time.",
"title": "The Commander"
},
{
"location": "/before/call_etiquette/#problems",
"text": "",
"title": "Problems?"
},
{
"location": "/before/call_etiquette/#theres-no-incident-commander-on-the-call-i-dont-know-what-to-do",
"text": "Ask on the call if an IC is present. If you have no response, type !ic page in Slack. This will page the primary and backup IC to the call.",
"title": "There's no incident commander on the call! I don't know what to do!"
},
{
"location": "/before/call_etiquette/#i-can-join-the-call-or-slack-but-not-both-what-should-i-do",
"text": "You're welcome to join only one of the channels, however you should not actively participate in the incident response if so, as it causes disjointed communication. Liaise with someone who is both in Slack and on the call to provide any input you may have so that they can raise it.",
"title": "I can join the call or Slack, but not both, what should I do?"
},
{
"location": "/during/during_an_incident/",
"text": "Information on what to do during a major incident. See our \nseverity level descriptions\n for what constitutes a major incident.\n\n\n\n\nDocumentation\n\n\nFor your own internal documentation, you should make sure that this page has all of the necessary information prominently displayed. Such as: phone bridge numbers, Slack rooms, important chat commands, etc. Here is an example,\n\n\n\n \n\n \n\n \n\n \n\n \n#incident-chat\n\n \nhttps://a-voip-provider.com/incident-call\n\n \n+1 555 BIG FIRE\n (+1 555 244 3473) / PIN: 123456\n\n \n\n \n\n \nNeed an IC? Do \n!ic page\n in Slack\n\n \n\n \n\n \nFor executive summary updates only, join \n#executive-summary-updates\n.\n\n \n\n \n\n\n\n\n\n\n\n\nSecurity Incident?\n\n\nIf this is a security incident, you should follow the \nSecurity Incident Response\n process.\n\n\nDon't Panic!\n#\n\n\n\n\n\n\nJoin the incident call and chat (see links above).\n\n\n\n\nAnyone is free to join the call or chat to observe and follow along with the incident.\n\n\nIf you wish to participate however, you should join both. If you can't join the call for some reason, you should have a dedicated proxy for the call. Disjointed discussions in the chat room are ultimately distracting.\n\n\n\n\n\n\n\n\nFollow along with the call/chat, add any comments you feel are appropriate, but keep the discussion relevant to the problem at hand.\n\n\n\n\nIf you are not an SME, try to filter any discussion through the primary SME for your service. Too many people discussing at once can become overwhelming, so we should try to maintain a hierarchical structure to the call if possible.\n\n\n\n\n\n\n\n\nFollow instructions from the Incident Commander.\n\n\n\n\nIs there no IC on the call?\n\n\nManually page them via Slack, with \n!ic page\n in Slack. This will page the primary and backup IC's at the same time.\n\n\nNever hesitate to page the IC. 
It's much better to have them and not need them than the other way around.\n\n\n\n\n\n\n\n\nSteps for Incident Commander\n#\n\n\nResolve the incident as quickly and as safely as possible, use the Deputy to assist you. Delegate any tasks to relevant experts at your discretion.\n\n\n\n\n\n\nAnnounce on the call and in Slack that you are the incident commander, who you have designated as deputy (usually the backup IC), and scribe.\n\n\n\n\n\n\nIdentify if there is an obvious cause to the incident (recent deployment, spike in traffic, etc.), delegate investigation to relevant experts,\n\n\n\n\nUse the service experts on the call to assist in the analysis. They should be able to quickly provide confirmation of the cause, but not always. It's the call of the IC on how to proceed in cases where the cause is not positively known. Confer with service owners and use their knowledge to help you.\n\n\n\n\n\n\n\n\nIdentify investigation and repair actions (roll back, rate-limit services, etc) and delegate actions to relevant service experts. Typically something like this (obviously not an exhaustive list),\n\n\n\n\nBad Deployment:\n Roll it back.\n\n\nWeb Application Stuck/Crashed:\n Do a rolling restart.\n\n\nEvent Flood:\n Validate automatic throttling is sufficient, adjust manually if not.\n\n\nData Center Outage:\n Validate automation has removed bad data center. Force it to do so if not.\n\n\nDegraded Service Behavior without load:\n Gather forensic data (heap dumps, etc), and consider doing a rolling restart.\n\n\n\n\n\n\n\n\nListen for prompts from your Deputy regarding severity escalations, decide whether we need to announce publicly, and instruct customer liaison accordingly.\n\n\n\n\nAnnouncing publicly is at your discretion as IC. If you are unsure, then announce publicly (\"If in doubt, tweet it out\").\n\n\n\n\n\n\n\n\nOnce the incident has recovered or is actively recovering, you can announce that the incident is over and that the call is ending. 
This usually indicates there's no more productive work to be done for the incident right now.\n\n\n\n\nMove the remaining, non-time-critical discussion to Slack.\n\n\nFollow up to ensure the customer liaison wraps up the incident publicly.\n\n\nIdentify any post-incident clean-up work.\n\n\nYou may need to perform debriefing/analysis of the underlying root cause.\n\n\n\n\n\n\n\n\n(After call ends) Create the post-mortem page from the template, and assign an owner to the post-mortem for the incident.\n\n\n\n\n\n\n(After call ends) Send out an internal email explaining that we had a major incident, provide a link to the post-mortem.\n\n\n\n\n\n\nSteps for Deputy\n#\n\n\nYou are there to support the IC in whatever they need.\n\n\n\n\n\n\nMonitor the status, and notify the IC if/when the incident escalates in severity level,\n\n\n\n\nOfficerURL can help you to monitor the status on Slack,\n\n\n!status\n - Will tell you the current status.\n\n\n!status stalk\n - Will continually monitor the status and report it to the room every 30s.\n\n\n\n\n\n\n\n\n\n\n\n\nBe prepared to page other people as directed by the Incident Commander.\n\n\n\n\n\n\nProvide regular status updates in Slack (roughly every 30mins) to the executive team, giving an executive summary of the current status. Keep it short and to the point, and use @here.\n\n\n\n\n\n\nFollow instructions from the Incident Commander.\n\n\n\n\n\n\nSteps for Scribe\n#\n\n\nYou are there to document the key information from the incident in Slack.\n\n\n\n\n\n\nUpdate the Slack room with who the IC is, who the Deputy is, and that you're the scribe (if not already done).\n\n\n\n\ne.g. \"IC: Bob Boberson, Deputy: Deputy Deputyson, Scribe: Writer McWriterson\"\n\n\n\n\n\n\n\n\nYou should add notes to Slack when significant actions are taken, or findings are determined. 
You don't need to wait for the IC to direct this - use your own judgment.\n\n\n\n\nYou should also add \nTODO\n notes to the Slack room that indicate follow-ups slated for later.\n\n\n\n\n\n\n\n\nFollow instructions from the Incident Commander.\n\n\n\n\n\n\nSteps for Subject Matter Experts\n#\n\n\nYou are there to support the incident commander in identifying the cause of the incident, suggesting and evaluating repair actions, and following through on the repair actions.\n\n\n\n\n\n\nInvestigate the incident by analyzing any graphs or logs at your disposal. Announce all findings to the incident commander.\n\n\n\n\nIf you are unsure of the cause, that's fine, state that you are investigating and provide regular updates to the IC.\n\n\n\n\n\n\n\n\nAnnounce all suggestions for resolution to the incident commander, it is their decision on how to proceed, do not follow any actions unless told to do so!\n\n\n\n\n\n\nFollow instructions from the incident commander.\n\n\n\n\n\n\n(Optional) Once the call is over and post-mortem is created, add any notes you think are relevant to the post-mortem page.\n\n\n\n\n\n\nSteps for Customer Liaison\n#\n\n\nBe on stand-by to post public facing messages regarding the incident.\n\n\n\n\n\n\nYou will typically be required to update the status page and to send Tweets from our various accounts at certain times during the call.\n\n\n\n\nFollow instructions from the Incident Commander.",
"title": "During An Incident"
},
{
"location": "/during/during_an_incident/#dont-panic",
"text": "Join the incident call and chat (see links above). Anyone is free to join the call or chat to observe and follow along with the incident. If you wish to participate however, you should join both. If you can't join the call for some reason, you should have a dedicated proxy for the call. Disjointed discussions in the chat room are ultimately distracting. Follow along with the call/chat, add any comments you feel are appropriate, but keep the discussion relevant to the problem at hand. If you are not an SME, try to filter any discussion through the primary SME for your service. Too many people discussing at once can become overwhelming, so we should try to maintain a hierarchical structure to the call if possible. Follow instructions from the Incident Commander. Is there no IC on the call? Manually page them with !ic page in Slack. This will page the primary and backup ICs at the same time. Never hesitate to page the IC. It's much better to have them and not need them than the other way around.",
"title": "Don't Panic!"
},
{
"location": "/during/during_an_incident/#steps-for-incident-commander",
"text": "Resolve the incident as quickly and as safely as possible, use the Deputy to assist you. Delegate any tasks to relevant experts at your discretion. Announce on the call and in Slack that you are the incident commander, who you have designated as deputy (usually the backup IC), and scribe. Identify if there is an obvious cause to the incident (recent deployment, spike in traffic, etc.), delegate investigation to relevant experts, Use the service experts on the call to assist in the analysis. They should be able to quickly provide confirmation of the cause, but not always. It's the call of the IC on how to proceed in cases where the cause is not positively known. Confer with service owners and use their knowledge to help you. Identify investigation & repair actions (roll back, rate-limit services, etc) and delegate actions to relevant service experts. Typically something like this (obviously not an exhaustive list), Bad Deployment: Roll it back. Web Application Stuck/Crashed: Do a rolling restart. Event Flood: Validate automatic throttling is sufficient, adjust manually if not. Data Center Outage: Validate automation has removed bad data center. Force it to do so if not. Degraded Service Behavior without load: Gather forensic data (heap dumps, etc), and consider doing a rolling restart. Listen for prompts from your Deputy regarding severity escalations, decide whether we need to announce publicly, and instruct customer liaison accordingly. Announcing publicly is at your discretion as IC. If you are unsure, then announce publicly (\"If in doubt, tweet it out\"). Once the incident has recovered or is actively recovering, you can announce that the incident is over and that the call is ending. This usually indicates there's no more productive work to be done for the incident right now. Move the remaining, non-time-critical discussion to Slack. Follow up to ensure the customer liaison wraps up the incident publicly. Identify any post-incident clean-up work. 
You may need to perform debriefing/analysis of the underlying root cause. (After call ends) Create the post-mortem page from the template, and assign an owner to the post-mortem for the incident. (After call ends) Send out an internal email explaining that we had a major incident, provide a link to the post-mortem.",
"title": "Steps for Incident Commander"
},
{
"location": "/during/during_an_incident/#steps-for-deputy",
"text": "You are there to support the IC in whatever they need. Monitor the status, and notify the IC if/when the incident escalates in severity level, OfficerURL can help you to monitor the status on Slack, !status - Will tell you the current status. !status stalk - Will continually monitor the status and report it to the room every 30s. Be prepared to page other people as directed by the Incident Commander. Provide regular status updates in Slack (roughly every 30mins) to the executive team, giving an executive summary of the current status. Keep it short and to the point, and use @here. Follow instructions from the Incident Commander.",
"title": "Steps for Deputy"
},
{
"location": "/during/during_an_incident/#steps-for-scribe",
"text": "You are there to document the key information from the incident in Slack. Update the Slack room with who the IC is, who the Deputy is, and that you're the scribe (if not already done). e.g. \"IC: Bob Boberson, Deputy: Deputy Deputyson, Scribe: Writer McWriterson\" You should add notes to Slack when significant actions are taken, or findings are determined. You don't need to wait for the IC to direct this - use your own judgment. You should also add TODO notes to the Slack room that indicate follow-ups slated for later. Follow instructions from the Incident Commander.",
"title": "Steps for Scribe"
},
{
"location": "/during/during_an_incident/#steps-for-subject-matter-experts",
"text": "You are there to support the incident commander in identifying the cause of the incident, suggesting and evaluating repair actions, and following through on the repair actions. Investigate the incident by analyzing any graphs or logs at your disposal. Announce all findings to the incident commander. If you are unsure of the cause, that's fine, state that you are investigating and provide regular updates to the IC. Announce all suggestions for resolution to the incident commander, it is their decision on how to proceed, do not follow any actions unless told to do so! Follow instructions from the incident commander. (Optional) Once the call is over and post-mortem is created, add any notes you think are relevant to the post-mortem page.",
"title": "Steps for Subject Matter Experts"
},
{
"location": "/during/during_an_incident/#steps-for-customer-liaison",
"text": "Be on stand-by to post public facing messages regarding the incident. You will typically be required to update the status page and to send Tweets from our various accounts at certain times during the call. Follow instructions from the Incident Commander.",
"title": "Steps for Customer Liaison"
},
{
"location": "/during/security_incident_response/",
"text": "Incident Commander Required\n\n\nAs with all major incidents at PagerDuty, security ones will also involve an Incident Commander, who will delegate the tasks to relevant resolvers. Tasks may be performed in parallel as assigned by the IC. Page one at the earliest possible opportunity.\n\n\n\n\nChecklist\n#\n\n\nDetails for each of these items are available in the next section.\n\n\n\n\nStop the attack in progress.\n\n\nCut off the attack vector.\n\n\nAssemble the response team.\n\n\nIsolate affected instances.\n\n\nIdentify timeline of attack.\n\n\nIdentify compromised data.\n\n\nAssess risk to other systems.\n\n\nAssess risk of re-attack.\n\n\nApply additional mitigations, additions to monitoring, etc.\n\n\nForensic analysis of compromised systems.\n\n\nInternal communication.\n\n\nInvolve law enforcement.\n\n\nReach out to external parties that may have been used as vector for attack.\n\n\nExternal communication.\n\n\n\n\n\n\nAttack Mitigation\n#\n\n\nStop the attack as quickly as you can, via any means necessary. Shut down servers, network isolate them, turn off a data center if you have to. 
Some common things to try,\n\n\n\n\nShutdown the instance from the provider console (do not delete or terminate if you can help it, as we'll need to do forensics).\n\n\nIf you happen to be logged into the box you can try to,\n\n\nRe-instate our default iptables rules to restrict traffic.\n\n\nkill -9\n any active session you think is an attacker.\n\n\nChange root password, and update /etc/shadow to lock out all other users.\n\n\nsudo shutdown now\n\n\n\n\n\n\n\n\nCut Off Attack Vector\n#\n\n\nIdentify the likely attack vectors and patch/fix them so they cannot be re-exploited immediately after stopping the attack.\n\n\n\n\nIf you suspect a third-party provider is compromised, delete all accounts except your own (and those of others who are physically present) and immediately rotate your password and MFA tokens.\n\n\nIf you suspect a service application was an attack vector, disable any relevant code paths, or shut down the service entirely.\n\n\n\n\nAssemble Response Team\n#\n\n\nIdentify the key responders for the security incident, and keep them all in the loop. Set up a secure method of communicating all information associated with the incident. Details on the incident (or even the fact that an incident has occurred) should be kept private to the responders until you are confident the attack is not being triggered internally.\n\n\n\n\nThe security and site-reliability teams should usually be involved.\n\n\nA representative for any affected services should be involved.\n\n\nAn Incident Commander (IC) should be appointed, who will also appoint the usual incident command roles. The incident command team will be responsible for keeping documentation of actions taken, and for notifying internal stakeholders as appropriate.\n\n\nDo not communicate with anyone not on the response team about the incident until forensics has been performed. 
The attack could be happening internally.\n\n\nGive the project an innocuous codename that can be used for chats/documents so if anyone overhears they don't realize it's a security incident. (e.g. sapphire-unicorn).\n\n\nPrefix all emails, and chat topics with \"Attorney Work Project\".\n\n\n\n\nIsolate Affected Instances\n#\n\n\nAny instances which were affected by the attack should be immediately isolated from any other instances. As soon as possible, an image of the system should be taken and put into a read-only cold storage for later forensic analysis.\n\n\n\n\nBlacklist the IP addresses for any affected instances from all other hosts.\n\n\nTurn off and shutdown the instances immediately if you didn't do that to stop the attack.\n\n\nTake a disk image for any disks attached to the instances, and ship them to an off-site cold storage location. You should make sure these images are read-only and cannot be tampered with.\n\n\n\n\nIdentify Timeline of Attack\n#\n\n\nWork with all tools at your disposal to identify the timeline of the attack, along with exactly what the attacker did.\n\n\n\n\nAny reconnaissance the attacker performed on the system before the attack started.\n\n\nWhen the attacker gained access to the system.\n\n\nWhat actions the attacker performed on the system, and when.\n\n\nIdentify how long the attacker had access to the system before they were detected, and before they were kicked out.\n\n\nIdentify any queries the attacker ran on databases.\n\n\nTry to identify if the attacker still has access to the system via another back door. 
Monitor logs for unusual activity, etc.\n\n\n\n\nCompromised Data\n#\n\n\nUsing forensic analysis of log files, time-series graphs, and any other information/tools at your disposal, attempt to identify what information was compromised (if any),\n\n\n\n\nIdentify any data that was compromised during the attack.\n\n\nWas any data exfiltrated from a database?\n\n\nWhat keys were on the system that are now considered compromised?\n\n\nWas the attacker able to identify other components of the system (map out the network, etc).\n\n\n\n\n\n\nFind exactly what customer data has been compromised, if any.\n\n\n\n\nAssess Risk\n#\n\n\nBased on the data that was compromised, assess the risk to other systems.\n\n\n\n\nDoes the attacker have enough information to find another way in?\n\n\nWere any passwords or keys stored on the host? If so, they should be considered compromised, regardless of how they were stored.\n\n\nAny user accounts that were used in the initial attack should rotate all of their keys and passwords on every other system where they have an account.\n\n\n\n\nApply Additional Mitigations\n#\n\n\nStart applying mitigations to other parts of your system.\n\n\n\n\nRotate any compromised data.\n\n\nIdentify any new alerting which is needed to notify of a similar breach.\n\n\nBlock any IP addresses associated with the attack.\n\n\nIdentify any keys/credentials that are compromised and revoke their access immediately.\n\n\n\n\nForensic Analysis\n#\n\n\nOnce you are confident the systems are secured, and enough monitoring is in place to detect another attack, you can move onto the forensic analysis stage.\n\n\n\n\nTake any read-only images you created, any access logs you have, and comb through them for more information about the attack.\n\n\nIdentify exactly what happened, how it happened, and how to prevent it in future.\n\n\nKeep track of all IP addresses involved in the attack.\n\n\nMonitor logs for any attempt to regain access to the system by the 
attacker.\n\n\n\n\nInternal Communication\n#\n\n\nDelegate to:\n VP or Director of Engineering\n\n\nCommunicate internally only once you are confident (via forensic analysis) that the attack was not sourced internally.\n\n\nDon't go into too much detail.\n\n\nOverview the timeline.\n\n\nDiscuss mitigation steps taken.\n\n\nFollow up with more information once it is known.\n\n\n\n\nLiaise With Law Enforcement / External Actors\n#\n\n\nDelegate to:\n VP or Director of Engineering\n\n\nWork with law enforcement to identify the source of the attack, letting any system owners know that systems under their control may be compromised, etc.\n\n\n\n\nContact local law enforcement.\n\n\nContact FBI.\n\n\nContact operators for any systems used in the attack, their systems may also have been compromised.\n\n\nContact security companies to help in assessing risk and any PR next steps.\n\n\n\n\nExternal Communication\n#\n\n\nDelegate to:\n Marketing Team\n\n\nOnce you have validated that all of the information you have is accurate, have a timeline of events, and know exactly what information was compromised, how it was compromised, and are sure that it won't happen again. Only then should you prepare and release a public statement to customers informing them of the compromised information and any steps they need to take.\n\n\n\n\nInclude the date in the title of any announcement, so that it's never confused for a potential new breach.\n\n\nDon't say \"We take security very seriously\". It makes everyone cringe when they read it.\n\n\nBe honest, accept responsibility, and present the facts, along with exactly how we plan to prevent such things in future.\n\n\nBe as detailed as possible with the timeline.\n\n\nBe as detailed as possible in what information was compromised, and how it affects customers. If we were storing something we shouldn't have been, be honest about it. 
It'll come out later and it'll be much worse.\n\n\nDon't name and shame any external parties that might have caused the compromise. It's bad form. (Unless they've already publicly disclosed, in which case we can link to their disclosure).\n\n\nRelease the external communication as soon as possible, preferably within a few days of the compromise. The longer we wait, the worse it will be.\n\n\nFigure out if there is a way to get in touch with customers' internal security teams before the general public notice is sent.\n\n\n\n\n\n\nAdditional Reading\n#\n\n\n\n\nComputer Security Incident Handling Guide\n (NIST)\n\n\nIncident Handler's Handbook\n (SANS)\n\n\nResponding to IT Security Incidents\n (Microsoft)\n\n\nDefining Incident Management Processes for CSIRTs: A Work in Progress\n (CMU)\n\n\nCreating and Managing Computer Security Incident Handling Teams (CSIRTS)\n (CERT)",
"title": "Security Incident"
},
{
"location": "/during/security_incident_response/#checklist",
"text": "Details for each of these items are available in the next section. Stop the attack in progress. Cut off the attack vector. Assemble the response team. Isolate affected instances. Identify timeline of attack. Identify compromised data. Assess risk to other systems. Assess risk of re-attack. Apply additional mitigations, additions to monitoring, etc. Forensic analysis of compromised systems. Internal communication. Involve law enforcement. Reach out to external parties that may have been used as vector for attack. External communication.",
"title": "Checklist"
},
{
"location": "/during/security_incident_response/#attack-mitigation",
"text": "Stop the attack as quickly as you can, via any means necessary. Shut down servers, network isolate them, turn off a data center if you have to. Some common things to try, Shutdown the instance from the provider console (do not delete or terminate if you can help it, as we'll need to do forensics). If you happen to be logged into the box you can try to, Re-instate our default iptables rules to restrict traffic. kill -9 any active session you think is an attacker. Change root password, and update /etc/shadow to lock out all other users. sudo shutdown now",
"title": "Attack Mitigation"
},
{
"location": "/during/security_incident_response/#cut-off-attack-vector",
"text": "Identify the likely attack vectors and patch/fix them so they cannot be re-exploited immediately after stopping the attack. If you suspect a third-party provider is compromised, delete all accounts except your own (and those of others who are physically present) and immediately rotate your password and MFA tokens. If you suspect a service application was an attack vector, disable any relevant code paths, or shut down the service entirely.",
"title": "Cut Off Attack Vector"
},
{
"location": "/during/security_incident_response/#assemble-response-team",
"text": "Identify the key responders for the security incident, and keep them all in the loop. Set up a secure method of communicating all information associated with the incident. Details on the incident (or even the fact that an incident has occurred) should be kept private to the responders until you are confident the attack is not being triggered internally. The security and site-reliability teams should usually be involved. A representative for any affected services should be involved. An Incident Commander (IC) should be appointed, who will also appoint the usual incident command roles. The incident command team will be responsible for keeping documentation of actions taken, and for notifying internal stakeholders as appropriate. Do not communicate with anyone not on the response team about the incident until forensics has been performed. The attack could be happening internally. Give the project an innocuous codename that can be used for chats/documents so if anyone overhears they don't realize it's a security incident. (e.g. sapphire-unicorn). Prefix all emails, and chat topics with \"Attorney Work Project\".",
"title": "Assemble Response Team"
},
{
"location": "/during/security_incident_response/#isolate-affected-instances",
"text": "Any instances which were affected by the attack should be immediately isolated from any other instances. As soon as possible, an image of the system should be taken and put into a read-only cold storage for later forensic analysis. Blacklist the IP addresses for any affected instances from all other hosts. Turn off and shutdown the instances immediately if you didn't do that to stop the attack. Take a disk image for any disks attached to the instances, and ship them to an off-site cold storage location. You should make sure these images are read-only and cannot be tampered with.",
"title": "Isolate Affected Instances"
},
{
"location": "/during/security_incident_response/#identify-timeline-of-attack",
"text": "Work with all tools at your disposal to identify the timeline of the attack, along with exactly what the attacker did. Any reconnaissance the attacker performed on the system before the attack started. When the attacker gained access to the system. What actions the attacker performed on the system, and when. Identify how long the attacker had access to the system before they were detected, and before they were kicked out. Identify any queries the attacker ran on databases. Try to identify if the attacker still has access to the system via another back door. Monitor logs for unusual activity, etc.",
"title": "Identify Timeline of Attack"
},
{
"location": "/during/security_incident_response/#compromised-data",
"text": "Using forensic analysis of log files, time-series graphs, and any other information/tools at your disposal, attempt to identify what information was compromised (if any), Identify any data that was compromised during the attack. Was any data exfiltrated from a database? What keys were on the system that are now considered compromised? Was the attacker able to identify other components of the system (map out the network, etc). Find exactly what customer data has been compromised, if any.",
"title": "Compromised Data"
},
{
"location": "/during/security_incident_response/#assess-risk",
"text": "Based on the data that was compromised, assess the risk to other systems. Does the attacker have enough information to find another way in? Were any passwords or keys stored on the host? If so, they should be considered compromised, regardless of how they were stored. Any user accounts that were used in the initial attack should rotate all of their keys and passwords on every other system where they have an account.",
"title": "Assess Risk"
},
{
"location": "/during/security_incident_response/#apply-additional-mitigations",
"text": "Start applying mitigations to other parts of your system. Rotate any compromised data. Identify any new alerting which is needed to notify of a similar breach. Block any IP addresses associated with the attack. Identify any keys/credentials that are compromised and revoke their access immediately.",
"title": "Apply Additional Mitigations"
},
{
"location": "/during/security_incident_response/#forensic-analysis",
"text": "Once you are confident the systems are secured, and enough monitoring is in place to detect another attack, you can move onto the forensic analysis stage. Take any read-only images you created, any access logs you have, and comb through them for more information about the attack. Identify exactly what happened, how it happened, and how to prevent it in future. Keep track of all IP addresses involved in the attack. Monitor logs for any attempt to regain access to the system by the attacker.",
"title": "Forensic Analysis"
},
{
"location": "/during/security_incident_response/#internal-communication",
"text": "Delegate to: VP or Director of Engineering Communicate internally only once you are confident (via forensic analysis) that the attack was not sourced internally. Don't go into too much detail. Overview the timeline. Discuss mitigation steps taken. Follow up with more information once it is known.",
"title": "Internal Communication"
},
{
"location": "/during/security_incident_response/#liaise-with-law-enforcement-external-actors",
"text": "Delegate to: VP or Director of Engineering Work with law enforcement to identify the source of the attack, letting any system owners know that systems under their control may be compromised, etc. Contact local law enforcement. Contact FBI. Contact operators for any systems used in the attack, their systems may also have been compromised. Contact security companies to help in assessing risk and any PR next steps.",
"title": "Liaise With Law Enforcement / External Actors"
},
{
"location": "/during/security_incident_response/#external-communication",
"text": "Delegate to: Marketing Team Once you have validated that all of the information you have is accurate, have a timeline of events, and know exactly what information was compromised, how it was compromised, and are sure that it won't happen again. Only then should you prepare and release a public statement to customers informing them of the compromised information and any steps they need to take. Include the date in the title of any announcement, so that it's never confused for a potential new breach. Don't say \"We take security very seriously\". It makes everyone cringe when they read it. Be honest, accept responsibility, and present the facts, along with exactly how we plan to prevent such things in future. Be as detailed as possible with the timeline. Be as detailed as possible in what information was compromised, and how it affects customers. If we were storing something we shouldn't have been, be honest about it. It'll come out later and it'll be much worse. Don't name and shame any external parties that might have caused the compromise. It's bad form. (Unless they've already publicly disclosed, in which case we can link to their disclosure). Release the external communication as soon as possible, preferably within a few days of the compromise. The longer we wait, the worse it will be. Figure out if there is a way to get in touch with customers' internal security teams before the general public notice is sent.",
"title": "External Communication"
},
{
"location": "/during/security_incident_response/#additional-reading",
"text": "Computer Security Incident Handling Guide (NIST) Incident Handler's Handbook (SANS) Responding to IT Security Incidents (Microsoft) Defining Incident Management Processes for CSIRTs: A Work in Progress (CMU) Creating and Managing Computer Security Incident Handling Teams (CSIRTS) (CERT)",
"title": "Additional Reading"
},
{
"location": "/after/post_mortem_process/",
"text": "For every major incident (SEV-2/1), we need to follow up with a post-mortem. A blame-free, detailed description of exactly what went wrong in order to cause the incident, along with a list of steps to take in order to prevent a similar incident from occurring again in the future. The incident response process itself should also be included.\n\n\n\n\nOwner Designation\n#\n\n\nThe first step is that a post-mortem owner will be designated. This is done by the IC either at the end of a major incident call, or very shortly after. You will be notified directly by the IC if you are the owner for the post-mortem. The owner is responsible for populating the post-mortem page, looking up logs, managing the followup investigation, and keeping all interested parties in the loop. Please use Slack for coordinating followup. A detailed list of the steps is available below,\n\n\nOwner Responsibilities\n#\n\n\nAs owner of a post-mortem, you are responsible for the following,\n\n\n\n\nScheduling the post-mortem meeting (on the shared calendar) and inviting the relevant people (this should be scheduled within 5 business days of the incident).\n\n\nUpdating the page with all of the necessary content.\n\n\nInvestigating the incident, pulling in whomever you need from other teams to assist in the investigation.\n\n\nCreating follow-up JIRA tickets (\nYou are only responsible for creating the tickets, not following them up to resolution\n).\n\n\nRunning the post-mortem meeting (\nthese generally run themselves, but you should get people back on topic if the conversation starts to wander\n).\n\n\nIn cases where we need a public blog post, creating & reviewing it with appropriate parties.\n\n\n\n\nPost-Mortem Wiki Page\n#\n\n\nOnce you've been designated as the owner of a post-mortem, you should start updating the page with all the relevant information.\n\n\n\n\n\n\n(If not already done by the IC) Create a new post-mortem page for the incident.\n\n\n\n\n\n\nSchedule a post-mortem 
meeting for within 5 business days of the incident. You should schedule this before filling in the page, just so it's on the calendar.\n\n\n\n\nCreate the meeting on the \"Incident Post-Mortem Meetings\" shared calendar.\n\n\n\n\n\n\n\n\nBegin populating the page with all of the information you have.\n\n\n\n\nThe timeline should be the main focus to begin with.\n\n\nThe timeline should include important changes in status/impact, and also key actions taken by responders.\n\n\nYou should mark the start of the incident in red, and the resolution in green (for when we went into/out of SEV).\n\n\n\n\n\n\nGo through the history in Slack to identify the responders, and add them to the page.\n\n\nIdentify the Incident Commander and Scribe in this list.\n\n\n\n\n\n\n\n\n\n\n\n\nPopulate the page with more detailed information.\n\n\n\n\nFor each item in the timeline, identify a metric, or some third-party page where the data came from. This could be a link to a Datadog graph, a SumoLogic search, a Tweet, etc. Anything which shows the data point you're trying to illustrate in the timeline.\n\n\n\n\n\n\n\n\nPerform an analysis of the incident.\n\n\n\n\nCapture all available data regarding the incident. 
What caused it, how many customers were affected, etc.\n\n\nAny commands or queries you use to look up data should be posted in the page so others can see how the data was gathered.\n\n\nCapture the impact to customers (generally in terms of event submission, delayed processing, and slow notification delivery)\n\n\nIdentify the underlying cause of the incident (What happened, and why did it happen).\n\n\n\n\n\n\n\n\nCreate any followup action JIRA tickets (or note down topics for discussion if we need to decide on a direction to go before creating tickets),\n\n\n\n\nGo through the history in Slack to identify any TODO items.\n\n\nLabel all tickets with their severity level and date tags.\n\n\nAny actions which can reduce re-occurrence of the incident.\n\n\n(There may be some trade-off here, and that's fine. Sometimes the ROI isn't worth the effort that would go into it).\n\n\n\n\n\n\nIdentify any actions which can make our incident response process better.\n\n\nBe careful with creating too many tickets. Generally we only want to create things that are P0/P1's. Things that absolutely should be dealt with.\n\n\n\n\n\n\n\n\nWrite the external message that will be sent to customers. This will be reviewed during the post-mortem meeting before it is sent out.\n\n\n\n\nAvoid using the word \"outage\" unless it really was a full outage, use the word \"incident\" instead. Customers generally see \"outage\" and assume everything was down, when in reality it was likely just some alerts delivered outside of SLA.\n\n\nLook at other examples of previous post-mortems to see the kind of thing you should send.\n\n\n\n\n\n\n\n\nPost-Mortem Meeting\n#\n\n\nThese meetings should generally last 15-30 minutes, and are intended to be a wrap up of the post-mortem process. We should discuss what happened, what we could've done better, and any followup actions we need to take. 
The goal is to suss out any disagreement on the facts, analysis, or recommended actions, and to get some wider awareness of the problems that are causing reliability issues for us.\n\n\nYou should invite the following people to the post-mortem meeting,\n\n\n\n\nAlways\n\n\nThe incident commander.\n\n\nService owners involved in the incident.\n\n\nKey engineer(s)/responders involved in the incident.\n\n\n\n\n\n\nOptional\n\n\nCustomer liaison. (Only SEV-1 incidents)\n\n\n\n\n\n\n\n\nA general agenda for the meeting would be something like,\n\n\n\n\nRecap the timeline, to make sure everyone agrees and is on the same page.\n\n\nRecap important points, and any unusual items.\n\n\nDiscuss how the problem could've been caught.\n\n\nDid it show up in canary?\n\n\nCould it have been caught in tests, or loadtest environment?\n\n\n\n\n\n\nDiscuss customer impact. Any comments from customers, etc.\n\n\nReview action items that have been created, discuss if appropriate, or if more are needed, etc.\n\n\n\n\nExamples\n#\n\n\nHere are some examples of post-mortems from other companies as a reference,\n\n\n\n\nStripe\n\n\nLastPass\n\n\nAWS\n\n\nTwilio\n\n\nHeroku\n\n\nNetflix\n\n\nGOV.UK Rail Accident Investigation\n\n\nA List of Post-mortems!\n\n\n\n\nUseful Resources\n#\n\n\n\n\nAdvanced PostMortem Fu and Human Error 101 (Velocity 2011)\n\n\nBlame. Language. Sharing.",
"title": "Post-Mortem Process"
},
{
"location": "/after/post_mortem_process/#owner-designation",
"text": "The first step is that a post-mortem owner will be designated. This is done by the IC either at the end of a major incident call, or very shortly after. You will be notified directly by the IC if you are the owner for the post-mortem. The owner is responsible for populating the post-mortem page, looking up logs, managing the followup investigation, and keeping all interested parties in the loop. Please use Slack for coordinating followup. A detailed list of the steps is available below,",
"title": "Owner Designation"
},
{
"location": "/after/post_mortem_process/#owner-responsibilities",
"text": "As owner of a post-mortem, you are responsible for the following, Scheduling the post-mortem meeting (on the shared calendar) and inviting the relevant people (this should be scheduled within 5 business days of the incident). Updating the page with all of the necessary content. Investigating the incident, pulling in whomever you need from other teams to assist in the investigation. Creating follow-up JIRA tickets ( You are only responsible for creating the tickets, not following them up to resolution ). Running the post-mortem meeting ( these generally run themselves, but you should get people back on topic if the conversation starts to wander ). In cases where we need a public blog post, creating and reviewing it with appropriate parties.",
"title": "Owner Responsibilities"
},
{
"location": "/after/post_mortem_process/#post-mortem-wiki-page",
"text": "Once you've been designated as the owner of a post-mortem, you should start updating the page with all the relevant information. (If not already done by the IC) Create a new post-mortem page for the incident. Schedule a post-mortem meeting for within 5 business days of the incident. You should schedule this before filling in the page, just so it's on the calendar. Create the meeting on the \"Incident Post-Mortem Meetings\" shared calendar. Begin populating the page with all of the information you have. The timeline should be the main focus to begin with. The timeline should include important changes in status/impact, and also key actions taken by responders. You should mark the start of the incident in red, and the resolution in green (for when we went into/out of SEV). Go through the history in Slack to identify the responders, and add them to the page. Identify the Incident Commander and Scribe in this list. Populate the page with more detailed information. For each item in the timeline, identify a metric, or some third-party page where the data came from. This could be a link to a Datadog graph, a SumoLogic search, a Tweet, etc. Anything which shows the data point you're trying to illustrate in the timeline. Perform an analysis of the incident. Capture all available data regarding the incident. What caused it, how many customers were affected, etc. Any commands or queries you use to look up data should be posted in the page so others can see how the data was gathered. Capture the impact to customers (generally in terms of event submission, delayed processing, and slow notification delivery) Identify the underlying cause of the incident (What happened, and why did it happen). Create any followup action JIRA tickets (or note down topics for discussion if we need to decide on a direction to go before creating tickets), Go through the history in Slack to identify any TODO items. Label all tickets with their severity level and date tags. 
Any actions which can reduce re-occurrence of the incident. (There may be some trade-off here, and that's fine. Sometimes the ROI isn't worth the effort that would go into it). Identify any actions which can make our incident response process better. Be careful with creating too many tickets. Generally we only want to create things that are P0/P1's. Things that absolutely should be dealt with. Write the external message that will be sent to customers. This will be reviewed during the post-mortem meeting before it is sent out. Avoid using the word \"outage\" unless it really was a full outage, use the word \"incident\" instead. Customers generally see \"outage\" and assume everything was down, when in reality it was likely just some alerts delivered outside of SLA. Look at other examples of previous post-mortems to see the kind of thing you should send.",
"title": "Post-Mortem Wiki Page"
},
{
"location": "/after/post_mortem_process/#post-mortem-meeting",
"text": "These meetings should generally last 15-30 minutes, and are intended to be a wrap up of the post-mortem process. We should discuss what happened, what we could've done better, and any followup actions we need to take. The goal is to suss out any disagreement on the facts, analysis, or recommended actions, and to get some wider awareness of the problems that are causing reliability issues for us. You should invite the following people to the post-mortem meeting, Always The incident commander. Service owners involved in the incident. Key engineer(s)/responders involved in the incident. Optional Customer liaison. (Only SEV-1 incidents) A general agenda for the meeting would be something like, Recap the timeline, to make sure everyone agrees and is on the same page. Recap important points, and any unusual items. Discuss how the problem could've been caught. Did it show up in canary? Could it have been caught in tests, or loadtest environment? Discuss customer impact. Any comments from customers, etc. Review action items that have been created, discuss if appropriate, or if more are needed, etc.",
"title": "Post-Mortem Meeting"
},
{
"location": "/after/post_mortem_process/#examples",
"text": "Here are some examples of post-mortems from other companies as a reference, Stripe LastPass AWS Twilio Heroku Netflix GOV.UK Rail Accident Investigation A List of Post-mortems!",
"title": "Examples"
},
{
"location": "/after/post_mortem_process/#useful-resources",
"text": "Advanced PostMortem Fu and Human Error 101 (Velocity 2011) Blame. Language. Sharing.",
"title": "Useful Resources"
},
{
"location": "/after/post_mortem_template/",
"text": "This is a standard template we use for post-mortems at PagerDuty. Each section describes the type of information you will want to put in that section.\n\n\n\n\n\n\nGuidelines\n\n\nThis page is intended to be reviewed during a post-mortem meeting that should be scheduled within 5 business days of any event.\nYour first step should be to schedule the post-mortem meeting in the shared calendar for within 5 business days after the incident.\nDon't wait until you've filled in the info to schedule the meeting, however make sure the page is completed by the meeting.\n\n\n\n\n Post-Mortem Owner:\n \nYour name goes here.\n\n\n Meeting Scheduled For:\n \nSchedule the meeting on the \"Incident Post-Mortem Meetings\" shared calendar, for within 5 business days after the incident. Put the date/time here.\n\n\n Call Recording:\n \nLink to the incident call recording.\n\n\nOverview\n#\n\n\nInclude a \nshort\n sentence or two summarizing the root cause, timeline summary, and the impact. E.g. \"On the morning of August 99th, we suffered a 1 minute SEV-1 due to a runaway process on our primary database machine. This slowness caused roughly 0.024% of alerts that had begun during this time to be delivered out of SLA.\"\n\n\nWhat Happened\n#\n\n\nInclude a short description of what happened.\n\n\nRoot Cause\n#\n\n\nInclude a description of the root cause. If there were any actions taken that exacerbated the issue, also include them here with the intention of learning from any mistakes made during the resolution process.\n\n\nResolution\n#\n\n\nInclude a description what solved the problem. If there was a temporary fix in place, describe that along with the long-term solution.\n\n\nImpact\n#\n\n\nBe very specific here, include exact numbers.\n\n\n\n\n\n\n\n\nTime in SEV-1\n\n\n?mins\n\n\n\n\n\n\n\n\n\n\nNotifications Delivered out of SLA\n\n\n??% (?? of ??)\n\n\n\n\n\n\nEvents Dropped / Not Accepted\n\n\n??% (?? of ??) 
\nShould usually be 0, but always check\n\n\n\n\n\n\nAccounts Affected\n\n\n??\n\n\n\n\n\n\nUsers Affected\n\n\n??\n\n\n\n\n\n\nSupport Requests Raised\n\n\n?? \nInclude any relevant links to tickets\n\n\n\n\n\n\n\n\nResponders\n#\n\n\n\n\nWho was the IC?\n\n\nWho was the scribe?\n\n\nWho else was involved?\n\n\nWho else was involved?\n\n\n\n\nTimeline\n#\n\n\nSome important times to include: (1) time the root cause began, (2) time of the page, (3) time that the status page was updated (i.e. when the incident became public), (4) time of any significant actions, (5) time the SEV-2/1 ended, (6) links to tools/logs that show how the timestamp was arrived at.\n\n\n\n\n\n\n\n\nTime (UTC)\n\n\nEvent\n\n\nData Link\n\n\n\n\n\n\n\n\n\n\nHow'd We Do?\n#\n\n\nWhat Went Well?\n#\n\n\n\n\nList anything you did well and want to call out. It's OK to not list anything.\n\n\n\n\nWhat Didn't Go So Well?\n#\n\n\n\n\nList anything you think we didn't do very well. The intent is that we should follow up on all points here to improve our processes.\n\n\n\n\nAction Items\n#\n\n\nEach action item should be in the form of a JIRA ticket, and each ticket should have the same set of two tags: \u201csev1_YYYYMMDD\u201d (such as sev1_20150911) and simply \u201csev1\u201d. Include action items such as: (1) any fixes required to prevent the root cause in the future, (2) any preparedness tasks that could help mitigate the problem if it came up again, (3) remaining post-mortem steps, such as the internal email, as well as the status-page public post, (4) any improvements to our incident response process.\n\n\nMessaging\n#\n\n\nInternal Email\n#\n\n\nThis is a follow-up for employees. It should be sent out right after the post-mortem meeting is over. 
It only needs a short paragraph summarizing the incident and a link to this wiki page.\n\n\n\n\nBriefly summarize what happened and where the post-mortem page (this page) can be found.\n\n\n\n\nExternal Message\n#\n\n\nThis is what will be included on the status.pagerduty.com website regarding this incident. What are we telling customers, including an apology? (The apology should be genuine, not rote.)\n\n\n\n\nSummary\n\n\nWhat Happened?\n\n\nWhat Are We Doing About This?",
"title": "Post-Mortem Template"
},
{
"location": "/after/post_mortem_template/#overview",
"text": "Include a short sentence or two summarizing the root cause, timeline summary, and the impact. E.g. \"On the morning of August 99th, we suffered a 1 minute SEV-1 due to a runaway process on our primary database machine. This slowness caused roughly 0.024% of alerts that had begun during this time to be delivered out of SLA.\"",
"title": "Overview"
},
{
"location": "/after/post_mortem_template/#what-happened",
"text": "Include a short description of what happened.",
"title": "What Happened"
},
{
"location": "/after/post_mortem_template/#root-cause",
"text": "Include a description of the root cause. If there were any actions taken that exacerbated the issue, also include them here with the intention of learning from any mistakes made during the resolution process.",
"title": "Root Cause"
},
{
"location": "/after/post_mortem_template/#resolution",
"text": "Include a description of what solved the problem. If there was a temporary fix in place, describe that along with the long-term solution.",
"title": "Resolution"
},
{
"location": "/after/post_mortem_template/#impact",
"text": "Be very specific here, include exact numbers. Time in SEV-1 ?mins Notifications Delivered out of SLA ??% (?? of ??) Events Dropped / Not Accepted ??% (?? of ??) Should usually be 0, but always check Accounts Affected ?? Users Affected ?? Support Requests Raised ?? Include any relevant links to tickets",
"title": "Impact"
},
{
"location": "/after/post_mortem_template/#responders",
"text": "Who was the IC? Who was the scribe? Who else was involved? Who else was involved?",
"title": "Responders"
},
{
"location": "/after/post_mortem_template/#timeline",
"text": "Some important times to include: (1) time the root cause began, (2) time of the page, (3) time that the status page was updated (i.e. when the incident became public), (4) time of any significant actions, (5) time the SEV-2/1 ended, (6) links to tools/logs that show how the timestamp was arrived at. Time (UTC) Event Data Link",
"title": "Timeline"
},
{
"location": "/after/post_mortem_template/#howd-we-do",
"text": "",
"title": "How'd We Do?"
},
{
"location": "/after/post_mortem_template/#what-went-well",
"text": "List anything you did well and want to call out. It's OK to not list anything.",
"title": "What Went Well?"
},
{
"location": "/after/post_mortem_template/#what-didnt-go-so-well",
"text": "List anything you think we didn't do very well. The intent is that we should follow up on all points here to improve our processes.",
"title": "What Didn't Go So Well?"
},
{
"location": "/after/post_mortem_template/#action-items",
"text": "Each action item should be in the form of a JIRA ticket, and each ticket should have the same set of two tags: \u201csev1_YYYYMMDD\u201d (such as sev1_20150911) and simply \u201csev1\u201d. Include action items such as: (1) any fixes required to prevent the root cause in the future, (2) any preparedness tasks that could help mitigate the problem if it came up again, (3) remaining post-mortem steps, such as the internal email, as well as the status-page public post, (4) any improvements to our incident response process.",
"title": "Action Items"
},
{
"location": "/after/post_mortem_template/#messaging",
"text": "",
"title": "Messaging"
},
{
"location": "/after/post_mortem_template/#internal-email",
"text": "This is a follow-up for employees. It should be sent out right after the post-mortem meeting is over. It only needs a short paragraph summarizing the incident and a link to this wiki page. Briefly summarize what happened and where the post-mortem page (this page) can be found.",
"title": "Internal Email"
},
{
"location": "/after/post_mortem_template/#external-message",
"text": "This is what will be included on the status.pagerduty.com website regarding this incident. What are we telling customers, including an apology? (The apology should be genuine, not rote.) Summary What Happened? What Are We Doing About This?",
"title": "External Message"
},
{
"location": "/training/overview/",
"text": "Learning about the Spearhead Systems incident response process is an important part of being an effective on-call engineer at Spearhead Systems. This section goes over our training material for the various roles that are involved in our incident response, along with some additional information and training material from government agencies.\n\n\nTraining Guides\n#\n\n\nOur training guides are split up by role; however, you are encouraged to read through the training guides even for roles you don't belong to, as it can give you some good insight into how those people will be behaving during major incidents.\n\n\n\n\nIncident Commander Training\n - The \"IC\" is the person who drives a major incident to resolution. They're the person who will be directing everyone else.\n\n\nDeputy Training\n - The Deputy is someone who supports the Incident Commander and can take over for them if necessary.\n\n\nScribe Training\n - This is intended for individuals who will be acting as a scribe during an incident.\n\n\nSME / Resolver Training\n - This is relevant to everyone at Spearhead Systems who is on-call for any team.\n\n\n\n\nNational Incident Management System (NIMS)\n#\n\n\nOur incident response process is loosely based on the \nUS National Incident Management System (NIMS)\n, which is described as,\n\n\nA systematic, proactive approach to guide departments and agencies at all levels of government, nongovernmental organizations, and the private sector to work together seamlessly and manage incidents involving all threats and hazards\u2014regardless of cause, size, location, or complexity\u2014in order to reduce loss of life, property and harm to the environment.\n\n\nWhile it might not initially seem that this would be applicable to an IT operations environment, we've found that many of the lessons learned from major incidents in these situations can be directly applied to our industry too. The principles are the same and span many different environments.\n\n\n \n\n\nIf you want to learn more about NIMS, we recommend the \nICS-100\n and \nICS-700\n online training courses, which go over NIMS and the Incident Command System (You can also take an online examination after training in order to get a certificate from FEMA). There is also a wealth of \nadditional training material and courses from FEMA\n on NIMS, which I would encourage you to look at.\n\n\nAlso take a look at the \nAdditional Reading\n section on the home page.",
"title": "Overview"
},
{
"location": "/training/overview/#training-guides",
"text": "Our training guides are split up by role; however, you are encouraged to read through the training guides even for roles you don't belong to, as it can give you some good insight into how those people will be behaving during major incidents. Incident Commander Training - The \"IC\" is the person who drives a major incident to resolution. They're the person who will be directing everyone else. Deputy Training - The Deputy is someone who supports the Incident Commander and can take over for them if necessary. Scribe Training - This is intended for individuals who will be acting as a scribe during an incident. SME / Resolver Training - This is relevant to everyone at Spearhead Systems who is on-call for any team.",
"title": "Training Guides"
},
{
"location": "/training/overview/#national-incident-management-system-nims",
"text": "Our incident response process is loosely based on the US National Incident Management System (NIMS) , which is described as, A systematic, proactive approach to guide departments and agencies at all levels of government, nongovernmental organizations, and the private sector to work together seamlessly and manage incidents involving all threats and hazards\u2014regardless of cause, size, location, or complexity\u2014in order to reduce loss of life, property and harm to the environment. While it might not initially seem that this would be applicable to an IT operations environment, we've found that many of the lessons learned from major incidents in these situations can be directly applied to our industry too. The principles are the same and span many different environments. If you want to learn more about NIMS, we recommend the ICS-100 and ICS-700 online training courses, which go over NIMS and the Incident Command System (You can also take an online examination after training in order to get a certificate from FEMA). There is also a wealth of additional training material and courses from FEMA on NIMS, which I would encourage you to look at. Also take a look at the Additional Reading section on the home page.",
"title": "National Incident Management System (NIMS)"
},
{
"location": "/training/incident_commander/",
"text": "So you want to be an incident commander? You've come to the right place! You don't need to be a senior team member to become an IC, anyone can do it providing you have the requisite knowledge (yes, even an intern)!\n\n\n\n\nCredit: \nNASA\n\n\nPurpose\n#\n\n\nIf you could boil down the definition of an Incident Commander to one sentence, it would be,\n\n\n\n\nTake whatever actions are necessary to protect PagerDuty systems and customers.\n\n\n\n\nThe purpose of the Incident Commander is to be the decision maker during a major incident; delegating tasks and listening to input from subject matter experts in order to bring the incident to resolution.\n\n\nThe Incident Commander becomes the highest ranking individual on any major incident call, regardless of their day-to-day rank. Their decisions made as commander are final.\n\n\nYour job as an IC is to listen to the call and to watch the incident Slack room in order to provide clear coordination, recruiting others to gather context/details. \nYou should not be performing any actions or remediations, checking graphs, or investigating logs.\n Those tasks should be delegated.\n\n\nPrerequisites\n#\n\n\nBefore you can be an Incident Commander, it is expected that you meet the following criteria. 
Don't worry if you don't meet them all yet, you can still continue with training!\n\n\n\n\nHas \nexcellent knowledge of PagerDuty systems\n and is able to quickly evaluate good vs bad options, and quickly identify what's actually going on.\n\n\nBeen at PagerDuty for at least 6 months and has a \nsolid understanding of the incident notification pipeline and web stack\n.\n\n\nExcellent verbal and written \ncommunication skills\n.\n\n\nHas \nknowledge of obscure PagerDuty terms\n.\n\n\nHas gravitas and is \nwilling to kick people off a call\n to remove distractions, even if it's the CEO.\n\n\n\n\nResponsibilities\n#\n\n\nRead up on our \nDifferent Roles for Incidents\n to see what is expected from an Incident Commander, as well as what we expect from the other roles you'll be interacting with.\n\n\nQualities\n#\n\n\nSome qualities we expect from an effective leader include being able to:\n\n\n\n\nTake command.\n\n\nMotivate responders.\n\n\nCommunicate clear directions.\n\n\nSize up the situation and make rapid decisions.\n\n\nAssess the effectiveness of tactics/strategies.\n\n\nBe flexible and modify your plans as necessary.\n\n\n\n\nAs a leader, you should try to:\n\n\n\n\nBe proficient in your job.\n\n\nMake sound and timely decisions.\n\n\nEnsure tasks are understood.\n\n\nBe prepared to step out of a tactical role to assume a leadership role.\n\n\n\n\nTraining Process\n#\n\n\nThe process is fairly loose for now. 
Here's a list of things you can do to train though,\n\n\n\n\n\n\nRead the rest of this page, particularly the sections below.\n\n\n\n\n\n\nParticipate in \nFailure Friday\n (FF).\n\n\n\n\nShadow a FF to see how it's run.\n\n\nBe the scribe for multiple FF's.\n\n\nBe the incident commander for multiple FF's.\n\n\n\n\n\n\n\n\nPlay a game of \"\nKeep Talking and Nobody Explodes\n\" with other people in the office.\n\n\n\n\nFor a more realistic experience, play it with someone in a different office over Hangouts.\n\n\n\n\n\n\n\n\nShadow a current incident commander for at least a full week shift.\n\n\n\n\nGet alerted when they do, join in on the same calls.\n\n\nSit in on an active incident call, follow along with the chat, and follow along with what the Incident Commander is doing.\n\n\nDo not actively participate in the call, keep your questions until the end.\n\n\n\n\n\n\n\n\nReverse shadow a current incident commander for at least a full week shift.\n\n\n\n\nYou should be the one to respond to incidents, and you will take point on calls, however the current IC will be there to take over should you not know how to proceed.\n\n\n\n\n\n\n\n\nGraduation\n#\n\n\nWhat's the difference between an IC in training, and an IC? (This isn't the set up to a joke). 
Simple, an IC puts themselves on the schedule.\n\n\nHandling Incidents\n#\n\n\nEvery incident is different (we're hopefully not repeating the same issue multiple times!), but there's a common process you can apply to each one.\n\n\n\n\n\n\nIdentify the symptoms.\n\n\n\n\nIdentify what the symptoms are, how big the issue is, and whether it's escalating/flapping/static.\n\n\n\n\n\n\n\n\nSize-up the situation.\n\n\n\n\nGather as much information as you can, as quickly as you can (remember the incident is still happening while you're doing this).\n\n\nGet the facts, the possibilities of what can happen, and the probability of those things happening.\n\n\n\n\n\n\n\n\nStabilize the incident.\n\n\n\n\nIdentify actions you can use to proceed.\n\n\nGather support for the plan (See \"Polling During a Decision\" below).\n\n\nDelegate remediation actions to your SME's.\n\n\n\n\n\n\n\n\nProvide regular updates.\n\n\n\n\nMaintain a cadence, and provide regular updates to everyone on the call.\n\n\nWhat's happening, what are we doing about it, etc.\n\n\n\n\n\n\n\n\nDeputy\n#\n\n\nThe deputy for an incident is generally the backup Incident Commander. However, as an Incident Commander, you may appoint one or more Deputies. Note that Deputy Incident Commanders must be as qualified as the Incident Commander, and that if a Deputy is assigned, he or she must be fully qualified to assume the Incident Commander\u2019s position if required.\n\n\nCommunication Responsibilities\n#\n\n\nSharing information during an incident is a critical process. As an Incident Commander (or Deputy), you should be prepared to brief others as necessary. 
You will also be required to communicate your intentions and decisions clearly so that there is no ambiguity in your commands.\n\n\nWhen given information from a responder, you should clearly acknowledge that you have received and understood their message, so that the responder can be confident in moving on to other tasks.\n\n\nAfter an incident, you should communicate with other training Incident Commanders on any debrief actions you feel are necessary.\n\n\nIncident Call Procedures and Lingo\n#\n\n\nThe \nSteps for Incident Commander\n provide a detailed description of what you should be doing during an incident.\n\n\nAdditionally, aside from following the \nusual incident call etiquette\n, there are a few extra etiquette guidelines you should follow as IC:\n\n\n\n\nAlways announce when you join the call if you are the on-call IC.\n\n\nDon't let discussions get out of hand. Keep conversations short.\n\n\nNote objections from others, but your call is final.\n\n\nIf anyone is being actively disruptive to your call, kick them off.\n\n\nAnnounce the end of the call.\n\n\n\n\nHere are some examples of phrases and patterns you should use during incident calls.\n\n\nStart of Call Announcement\n#\n\n\nAt the start of any major incident call, the incident commander should announce the following,\n\n\n\n\nThis is [NAME], I am the Incident Commander for this call.\n\n\n\n\nThis establishes to everyone on the call what your name is, and that you are now the commander. You should state \"Incident Commander\" and not \"IC\", as newcomers may not be familiar with the terminology yet. 
The word \"commander\" makes it very clear that you're in charge.\n\n\nStart of Incident, IC Not Present\n#\n\n\nIf you are trained to be an IC and have joined a call, even if you aren't the IC on-call, you should do the following,\n\n\n\n\nIs there an IC on the call?\n\n\n(pause)\n\n\nHearing no response, this is [NAME], and I am now the Incident Commander for this call.\n\n\n\n\nIf the on-call IC joins later, you may hand over to them at your discretion (see below for the hand-off procedure)\n\n\nChecking if SME's are Present\n#\n\n\nDuring a call, you will want to know who is available from the various teams in order to resolve the incident. Etiquette dictates that people should announce themselves, but sometimes you may be joining late to the call. If you need a representative from a team, just ask on the call. Your deputy can page one if no one answers.\n\n\n\n\nDo we have a representative from [X] on the call?\n\n\n(pause)\n\n\nDeputy, can you go ahead and page the [X] on-call please.\n\n\n\n\nAssigning Tasks\n#\n\n\nWhen you need to give out an assignment or task, give it to a person directly, never say \"can someone do...\" as this leads to the \nbystander effect\n. Instead, all actions should be assigned to a specific person, and time-boxed with a specific number of minutes.\n\n\n\n\nIC: Bob, please investigate the high latency on web app boxes. I'll come back to you for an answer in 3 minutes.\n\n\nBob: Understood\n\n\n\n\nKeep track of how many minutes you assigned, and check in with that person after that time. You can get help from your deputy to help track the timings.\n\n\nPolling During a Decision\n#\n\n\nIf a decision needs to be made, it comes down to the IC. Once the IC makes a decision, it is final. But it's important that no one can come later and object to the plan, saying things like \"I knew that would happen\". 
An IC will use very specific language to be sure that doesn't happen.\n\n\n\n\nThe proposal is to [EXPLAIN PROPOSAL]\n\n\nAre there any strong objections to this plan?\n\n\n(pause)\n\n\nHearing no objections, we are proceeding with this proposal.\n\n\n\n\nIf you were to ask \"Does everyone agree?\", you'd get people speaking over each other, you'd have quiet people not speaking up, etc. Asking for any STRONG objections gives people the chance to object, but only if they feel strongly on the matter.\n\n\nStatus Updates\n#\n\n\nIt's important to maintain a cadence during a major incident call. Whenever there is a lull in the proceedings, usually because you're waiting for someone to get back to you, you can fill the gap by explaining the current situation and the actions that are outstanding. This makes sure everyone is on the same page.\n\n\n\n\nWhile we wait for [X], here's an update of our current situation.\n\n\nWe are currently in a SEV-1 situation, we believe to be caused by [X]. There's an open question to [Y] who will be getting back to us in 2 minutes. In the meantime, we have Tweeted out that we are experiencing issues. Our next Tweet will be in 10 minutes if the incident is still ongoing at that time.\n\n\nAre there any additional actions or proposals from anyone else at this time?\n\n\n\n\nTransfer of Command\n#\n\n\nTransfer of command involves (as the name suggests) transferring command to another Incident Commander. There are multiple reasons why a transfer of command might take place,\n\n\n\n\nCommander has become fatigued and is unable to continue.\n\n\nIncident complexity changes.\n\n\nChange of command is necessary for effectiveness or efficiency.\n\n\nPersonal emergencies arise (e.g., Incident Commander has a family emergency).\n\n\n\n\nNever feel like you are not doing your job properly by handing over. Handovers are encouraged. 
In order to hand over, out of band from the main call (via Slack for example), notify the other IC that you wish to transfer command. Update them with anything you feel appropriate. Then announce on the call,\n\n\n\n\nEveryone on the call, be advised, at this time I am handing over command to [X].\n\n\n\n\nThe new IC should then announce on the call as if they were joining a new call (see above), so that everyone is aware of the new commander.\n\n\nNote that the arrival of a more qualified person does NOT necessarily mean a change in incident command.\n\n\nMaintaining Order\n#\n\n\nOftentimes on a call, people will be talking over one another, or an argument on the correct way to proceed may break out. As Incident Commander it's important that order is maintained on a call. The Incident Commander has the power to remove someone from the call if necessary (even if it's the CEO). But often you just need to remind people to speak one at a time. Sometimes the discussion can be healthy even if it starts as an argument, but you shouldn't let it go on for too long.\n\n\n\n\n(noise)\n\n\nOk everyone, can we all speak one at a time please. So far I'm hearing two options to proceed: 1) [X], 2) [Y].\n\n\nAre there any other proposals someone would like to make at this time?\n\n\n...etc\n\n\n\n\nGetting Straight Answers\n#\n\n\nYou may ask a question as IC and receive an answer that doesn't actually answer your question. This is generally when you ask for a yes/no answer but get a more detailed explanation. This can often be because the person doesn't understand the call etiquette. But if it continues, you need to take action in order to proceed.\n\n\n\n\nIC: Is this going to disable the service for everyone?\n\n\nSME: Well... for some people it....\n\n\nIC: Stop. I need a yes/no answer. Is this going to disable the service for everyone?\n\n\nSME: Well... it might not do...\n\n\nIC: Stop. 
I'm going to ask again, and the only two words I want to hear from you are \"yes\" or \"no\". Is this going to disable the service for everyone?\n\n\nSME: Well.. like I was saying..\n\n\nIC: Stop. Leave the call. Backup IC, can you please page the backup on-call for [service] so that we can get an answer.\n\n\n\n\nExecutive Swoop\n#\n\n\nYou may get someone who would be senior to you during peacetime come on the call and start overriding your decisions as IC. This is unacceptable behaviour during wartime, as the IC is in command. While this is rare, you can get things back on track with the following,\n\n\n\n\nExecutive: No, I don't want us doing that. Everyone stop. We need to roll back instead.\n\n\nIC: Hold please. [EXECUTIVE], do you wish to take over command?\n\n\nExecutive: Yes/No\n\n\n(If yes) IC: Understood. Everyone on the call, be advised, at this time I am handing over command to [EXECUTIVE]. They are now the incident commander for this call.\n\n\n(If no) IC: In that case, please cause no further interruptions or I will remove you from the call.\n\n\n\n\nThis makes it clear to the executive that they have the option of being in charge and making decisions, but in order to do so they must continue as an Incident Commander. If they refuse, then remind them that you are in charge and disruptive interruptions will not be tolerated. If they continue, remove them from the call.\n\n\nEnd of Call Sign-Off\n#\n\n\nAt the end of an incident, you should announce to everyone on the call that you are ending the call at this time, and provide information on where followup discussion can take place. It's also customary to thank everyone.\n\n\n\n\nOk everyone, we're ending the call at this time. Please continue any followup discussion on Slack. Thanks everyone.\n\n\n\n\nExamples From Pop Culture\n#\n\n\nPagerDuty employees have access to all previous incident calls, and can listen to them at their discretion. 
We can't release these calls, so for everyone else, here are some short examples from popular culture to show the techniques at work.\n\n\n\n\n\n\n\nHere's a clip from the movie Apollo 13, where Gene Kranz (Flight Director / Incident Commander) shows some great examples of Incident Command. Here are some things to note:\n\n\n\n\nWalks into the room, and it's immediately obvious that he's the IC. Calms the noise, and makes sure everyone is paying attention.\n\n\nProvides a status update so people are aware of the situation.\n\n\nProjector breaks, doesn't get sidetracked on fixing it, just moves on to something else.\n\n\nProvides a proposal for how to proceed and elicits feedback.\n\n\nListens to the feedback calmly.\n\n\nWhen counter-proposal is raised, states that he agrees and why.\n\n\n\n\n\n\nAllows a discussion to happen, listens to all points. When discussion gets out of hand, re-asserts command of the situation.\n\n\nExplains his decision, and why.\n\n\n\n\n\n\nExplains his full plan and decision, so everyone is on the same page.\n\n\n\n\n\n\n\n\n\nAnother clip from Apollo 13. Things to note:\n\n\n\n\nSummarizes the situation, and states the facts.\n\n\nListens to the feedback from various people.\n\n\nWhen a trusted SME provides information counter to what everyone else is saying, asks for additional clarification (\"What do you mean, everything?\")\n\n\nWisecracking remarks are not acknowledged by the IC (\"You can't run a vacuum cleaner on 12 amps!\")\n\n\n\"That's the deal?\".. \"That's the deal\".\n\n\nOnce decision is made, moves on to the next discussion.\n\n\nDelegates tasks.",
"title": "Incident Commander"
},
{
"location": "/training/incident_commander/#purpose",
"text": "If you could boil down the definition of an Incident Commander to one sentence, it would be, Take whatever actions are necessary to protect PagerDuty systems and customers. The purpose of the Incident Commander is to be the decision maker during a major incident; delegating tasks and listening to input from subject matter experts in order to bring the incident to resolution. The Incident Commander becomes the highest ranking individual on any major incident call, regardless of their day-to-day rank. The decisions they make as commander are final. Your job as an IC is to listen to the call and to watch the incident Slack room in order to provide clear coordination, recruiting others to gather context/details. You should not be performing any actions or remediations, checking graphs, or investigating logs. Those tasks should be delegated.",
"title": "Purpose"
},
{
"location": "/training/incident_commander/#prerequisites",
"text": "Before you can be an Incident Commander, it is expected that you meet the following criteria. Don't worry if you don't meet them all yet, you can still continue with training! Has excellent knowledge of PagerDuty systems and is able to quickly evaluate good vs bad options, and quickly identify what's actually going on. Been at PagerDuty for at least 6 months and has a solid understanding of the incident notification pipeline and web stack. Excellent verbal and written communication skills. Has knowledge of obscure PagerDuty terms. Has gravitas and is willing to kick people off a call to remove distractions, even if it's the CEO.",
"title": "Prerequisites"
},
{
"location": "/training/incident_commander/#responsibilities",
"text": "Read up on our Different Roles for Incidents to see what is expected from an Incident Commander, as well as what we expect from the other roles you'll be interacting with.",
"title": "Responsibilities"
},
{
"location": "/training/incident_commander/#qualities",
"text": "Some qualities we expect from an effective leader include being able to: Take command. Motivate responders. Communicate clear directions. Size up the situation and make rapid decisions. Assess the effectiveness of tactics/strategies. Be flexible and modify your plans as necessary. As a leader, you should try to: Be proficient in your job. Make sound and timely decisions. Ensure tasks are understood. Be prepared to step out of a tactical role to assume a leadership role.",
"title": "Qualities"
},
{
"location": "/training/incident_commander/#training-process",
"text": "The process is fairly loose for now. Here's a list of things you can do to train though, Read the rest of this page, particularly the sections below. Participate in Failure Friday (FF). Shadow a FF to see how it's run. Be the scribe for multiple FF's. Be the incident commander for multiple FF's. Play a game of \"Keep Talking and Nobody Explodes\" with other people in the office. For a more realistic experience, play it with someone in a different office over Hangouts. Shadow a current incident commander for at least a full week shift. Get alerted when they do, join in on the same calls. Sit in on an active incident call, follow along with the chat, and follow along with what the Incident Commander is doing. Do not actively participate in the call, keep your questions until the end. Reverse shadow a current incident commander for at least a full week shift. You should be the one to respond to incidents, and you will take point on calls, however the current IC will be there to take over should you not know how to proceed.",
"title": "Training Process"
},
{
"location": "/training/incident_commander/#graduation",
"text": "What's the difference between an IC in training, and an IC? (This isn't the setup to a joke.) Simple: an IC puts themselves on the schedule.",
"title": "Graduation"
},
{
"location": "/training/incident_commander/#handling-incidents",
"text": "Every incident is different (we're hopefully not repeating the same issue multiple times!), but there's a common process you can apply to each one. Identify the symptoms. Identify what the symptoms are, how big the issue is, and whether it's escalating/flapping/static. Size-up the situation. Gather as much information as you can, as quickly as you can (remember the incident is still happening while you're doing this). Get the facts, the possibilities of what can happen, and the probability of those things happening. Stabilize the incident. Identify actions you can use to proceed. Gather support for the plan (See \"Polling During a Decision\" below). Delegate remediation actions to your SME's. Provide regular updates. Maintain a cadence, and provide regular updates to everyone on the call. What's happening, what are we doing about it, etc.",
"title": "Handling Incidents"
},
{
"location": "/training/incident_commander/#deputy",
"text": "The deputy for an incident is generally the backup Incident Commander. However, as an Incident Commander, you may appoint one or more Deputies. Note that Deputy Incident Commanders must be as qualified as the Incident Commander, and that if a Deputy is assigned, he or she must be fully qualified to assume the Incident Commander\u2019s position if required.",
"title": "Deputy"
},
{
"location": "/training/incident_commander/#communication-responsibilities",
"text": "Sharing information during an incident is a critical process. As an Incident Commander (or Deputy), you should be prepared to brief others as necessary. You will also be required to communicate your intentions and decisions clearly so that there is no ambiguity in your commands. When given information from a responder, you should clearly acknowledge that you have received and understood their message, so that the responder can be confident in moving on to other tasks. After an incident, you should communicate with other training Incident Commanders on any debrief actions you feel are necessary.",
"title": "Communication Responsibilities"
},
{
"location": "/training/incident_commander/#incident-call-procedures-and-lingo",
"text": "The Steps for Incident Commander provide a detailed description of what you should be doing during an incident. Additionally, aside from following the usual incident call etiquette, there are a few extra etiquette guidelines you should follow as IC: Always announce when you join the call if you are the on-call IC. Don't let discussions get out of hand. Keep conversations short. Note objections from others, but your call is final. If anyone is being actively disruptive to your call, kick them off. Announce the end of the call. Here are some examples of phrases and patterns you should use during incident calls.",
"title": "Incident Call Procedures and Lingo"
},
{
"location": "/training/incident_commander/#start-of-call-announcement",
"text": "At the start of any major incident call, the incident commander should announce the following, This is [NAME], I am the Incident Commander for this call. This establishes to everyone on the call what your name is, and that you are now the commander. You should state \"Incident Commander\" and not \"IC\", as newcomers may not be familiar with the terminology yet. The word \"commander\" makes it very clear that you're in charge.",
"title": "Start of Call Announcement"
},
{
"location": "/training/incident_commander/#start-of-incident-ic-not-present",
"text": "If you are trained to be an IC and have joined a call, even if you aren't the IC on-call, you should do the following, Is there an IC on the call? (pause) Hearing no response, this is [NAME], and I am now the Incident Commander for this call. If the on-call IC joins later, you may hand over to them at your discretion (see below for the hand-off procedure).",
"title": "Start of Incident, IC Not Present"
},
{
"location": "/training/incident_commander/#checking-if-smes-are-present",
"text": "During a call, you will want to know who is available from the various teams in order to resolve the incident. Etiquette dictates that people should announce themselves, but sometimes you may be joining the call late. If you need a representative from a team, just ask on the call. Your deputy can page one if no one answers. Do we have a representative from [X] on the call? (pause) Deputy, can you go ahead and page the [X] on-call please.",
"title": "Checking if SME's are Present"
},
{
"location": "/training/incident_commander/#assigning-tasks",
"text": "When you need to give out an assignment or task, give it to a person directly, never say \"can someone do...\" as this leads to the bystander effect. Instead, all actions should be assigned to a specific person, and time-boxed with a specific number of minutes. IC: Bob, please investigate the high latency on web app boxes. I'll come back to you for an answer in 3 minutes. Bob: Understood Keep track of how many minutes you assigned, and check in with that person after that time. You can ask your deputy to help track the timings.",
"title": "Assigning Tasks"
},
{
"location": "/training/incident_commander/#polling-during-a-decision",
"text": "If a decision needs to be made, it comes down to the IC. Once the IC makes a decision, it is final. But it's important that no one can come later and object to the plan, saying things like \"I knew that would happen\". An IC will use very specific language to be sure that doesn't happen. The proposal is to [EXPLAIN PROPOSAL] Are there any strong objections to this plan? (pause) Hearing no objections, we are proceeding with this proposal. If you were to ask \"Does everyone agree?\", you'd get people speaking over each other, you'd have quiet people not speaking up, etc. Asking for any STRONG objections gives people the chance to object, but only if they feel strongly on the matter.",
"title": "Polling During a Decision"
},
{
"location": "/training/incident_commander/#status-updates",
"text": "It's important to maintain a cadence during a major incident call. Whenever there is a lull in the proceedings, usually because you're waiting for someone to get back to you, you can fill the gap by explaining the current situation and the actions that are outstanding. This makes sure everyone is on the same page. While we wait for [X], here's an update of our current situation. We are currently in a SEV-1 situation, which we believe to be caused by [X]. There's an open question to [Y] who will be getting back to us in 2 minutes. In the meantime, we have Tweeted out that we are experiencing issues. Our next Tweet will be in 10 minutes if the incident is still ongoing at that time. Are there any additional actions or proposals from anyone else at this time?",
"title": "Status Updates"
},
{
"location": "/training/incident_commander/#transfer-of-command",
"text": "Transfer of command involves (as the name suggests) transferring command to another Incident Commander. There are multiple reasons why a transfer of command might take place, Commander has become fatigued and is unable to continue. Incident complexity changes. Change of command is necessary for effectiveness or efficiency. Personal emergencies arise (e.g., Incident Commander has a family emergency). Never feel like you are not doing your job properly by handing over. Handovers are encouraged. In order to hand over, out of band from the main call (via Slack for example), notify the other IC that you wish to transfer command. Update them with anything you feel appropriate. Then announce on the call, Everyone on the call, be advised, at this time I am handing over command to [X]. The new IC should then announce on the call as if they were joining a new call (see above), so that everyone is aware of the new commander. Note that the arrival of a more qualified person does NOT necessarily mean a change in incident command.",
"title": "Transfer of Command"
},
{
"location": "/training/incident_commander/#maintaining-order",
"text": "Oftentimes on a call, people will be talking over one another, or an argument on the correct way to proceed may break out. As Incident Commander it's important that order is maintained on a call. The Incident Commander has the power to remove someone from the call if necessary (even if it's the CEO). But often you just need to remind people to speak one at a time. Sometimes the discussion can be healthy even if it starts as an argument, but you shouldn't let it go on for too long. (noise) Ok everyone, can we all speak one at a time please. So far I'm hearing two options to proceed: 1) [X], 2) [Y]. Are there any other proposals someone would like to make at this time? ...etc",
"title": "Maintaining Order"
},
{
"location": "/training/incident_commander/#getting-straight-answers",
"text": "You may ask a question as IC and receive an answer that doesn't actually answer your question. This is generally when you ask for a yes/no answer but get a more detailed explanation. This can often be because the person doesn't understand the call etiquette. But if it continues, you need to take action in order to proceed. IC: Is this going to disable the service for everyone? SME: Well... for some people it.... IC: Stop. I need a yes/no answer. Is this going to disable the service for everyone? SME: Well... it might not do... IC: Stop. I'm going to ask again, and the only two words I want to hear from you are \"yes\" or \"no\". Is this going to disable the service for everyone? SME: Well.. like I was saying.. IC: Stop. Leave the call. Backup IC, can you please page the backup on-call for [service] so that we can get an answer.",
"title": "Getting Straight Answers"
},
{
"location": "/training/incident_commander/#executive-swoop",
"text": "You may get someone who would be senior to you during peacetime come on the call and start overriding your decisions as IC. This is unacceptable behaviour during wartime, as the IC is in command. While this is rare, you can get things back on track with the following, Executive: No, I don't want us doing that. Everyone stop. We need to roll back instead. IC: Hold please. [EXECUTIVE], do you wish to take over command? Executive: Yes/No (If yes) IC: Understood. Everyone on the call, be advised, at this time I am handing over command to [EXECUTIVE]. They are now the incident commander for this call. (If no) IC: In that case, please cause no further interruptions or I will remove you from the call. This makes it clear to the executive that they have the option of being in charge and making decisions, but in order to do so they must continue as an Incident Commander. If they refuse, then remind them that you are in charge and disruptive interruptions will not be tolerated. If they continue, remove them from the call.",
"title": "Executive Swoop"
},
{
"location": "/training/incident_commander/#end-of-call-sign-off",
"text": "At the end of an incident, you should announce to everyone on the call that you are ending the call at this time, and provide information on where followup discussion can take place. It's also customary to thank everyone. Ok everyone, we're ending the call at this time. Please continue any followup discussion on Slack. Thanks everyone.",
"title": "End of Call Sign-Off"
},
{
"location": "/training/incident_commander/#examples-from-pop-culture",
"text": "PagerDuty employees have access to all previous incident calls, and can listen to them at their discretion. We can't release these calls, so for everyone else, here are some short examples from popular culture to show the techniques at work. Here's a clip from the movie Apollo 13, where Gene Kranz (Flight Director / Incident Commander) shows some great examples of Incident Command. Here are some things to note: Walks into the room, and it's immediately obvious that he's the IC. Calms the noise, and makes sure everyone is paying attention. Provides a status update so people are aware of the situation. Projector breaks, doesn't get sidetracked on fixing it, just moves on to something else. Provides a proposal for how to proceed and elicits feedback. Listens to the feedback calmly. When counter-proposal is raised, states that he agrees and why. Allows a discussion to happen, listens to all points. When discussion gets out of hand, re-asserts command of the situation. Explains his decision, and why. Explains his full plan and decision, so everyone is on the same page. Another clip from Apollo 13. Things to note: Summarizes the situation, and states the facts. Listens to the feedback from various people. When a trusted SME provides information counter to what everyone else is saying, asks for additional clarification (\"What do you mean, everything?\") Wisecracking remarks are not acknowledged by the IC (\"You can't run a vacuum cleaner on 12 amps!\") \"That's the deal?\".. \"That's the deal\". Once decision is made, moves on to the next discussion. Delegates tasks.",
"title": "Examples From Pop Culture"
},
{
"location": "/training/deputy/",
"text": "So you want to be a deputy? You've come to the right place!\n\n\n\n\nCredit: \noregondot @ Flickr\n\n\nPurpose\n#\n\n\nThe purpose of the Deputy is to support the IC by keeping track of timers, notifying the IC of important information, and paging other people as directed by the IC.\n\n\nIt's important for the IC to focus on the problem at hand, rather than worrying about monitoring timers. The deputy is there to help support the IC and keep them focussed on the incident.\n\n\nAs a Deputy, you will be expected to take over command from the IC if they request it.\n\n\nYou should not be performing any remediations, checking graphs, or investigating logs\n. Those tasks will be delegated to the resolvers by the IC.\n\n\nPrerequisites\n#\n\n\nBefore you can be a Deputy, it is expected that you meet the following criteria. Don't worry if you don't meet them all yet, you can still continue with training!\n\n\n\n\nBe trained as an \nIncident Commander\n.\n\n\n\n\nResponsibilities\n#\n\n\nRead up on our \nDifferent Roles for Incidents\n to see what is expected from a Deputy, as well as what we expect from the other roles you'll be interacting with.\n\n\nTraining Process\n#\n\n\nThe training process for a Deputy is quite simple.\n\n\n\n\nFollow our \nIncident Commander Training\n.\n\n\nRead this page.\n\n\n\n\nIncident Call Procedures and Lingo\n#\n\n\nThe \nSteps for Deputy\n provide a detailed description of what you should be doing during an incident.\n\n\nHere are some examples of phrases and patterns you should use during incident calls.\n\n\nKeep Track of Responders\n#\n\n\nAs you listen to the call, you should keep track of the responders to the call as you hear them speak. Make a note on a piece of paper, or use the \n!ic responders\n to see who they are. 
The IC may ask you who is on-call for a particular system, and you should know the answer, and be able to page them.\n\n\n\n\nDo we have a representative from [X] on the call?\n\n\n(pause)\n\n\nDeputy, can you go ahead and page the [X] on-call please.\n\n\n\n\nYou can page them however you see fit, phone call, etc.\n\n\nProvide Executive Status Updates\n#\n\n\nProvide regular status updates on Slack (roughly every 30mins), giving an executive summary of the current status during SEV-1 incidents. Keep it short and to the point, and use @here. Mention the current state, the actions in progress, customer impact, and expected time remaining. It's OK to miss out some of those if the information isn't known.\n\n\n\n\n@here: We are in SEV-1 due to X. Current actions in progress are to do Y. Expecting 3 mins to complete that action. Once action is complete, system should recover on its own within 5 minutes.\n\n\n\n\nAlert IC to Timers\n#\n\n\nYou are expected to keep track of how long the incident has been running for, and provide callouts to the IC every 10 minutes so they can take actions such as increasing the severity, or asking Support to Tweet out. This is as simple as telling the IC on the call,\n\n\n\n\nIC, be advised the incident is now at the 10 minute mark.\n\n\n\n\nSimilarly, when the IC asks for someone to get back to them in X minutes, you are expected to keep track of that. You should remind the IC when that time has been reached.\n\n\n\n\nIC, be advised the timer for [TEAM]'s investigation is up.",
"title": "Deputy"
},
{
"location": "/training/deputy/#purpose",
"text": "The purpose of the Deputy is to support the IC by keeping track of timers, notifying the IC of important information, and paging other people as directed by the IC. It's important for the IC to focus on the problem at hand, rather than worrying about monitoring timers. The deputy is there to help support the IC and keep them focussed on the incident. As a Deputy, you will be expected to take over command from the IC if they request it. You should not be performing any remediations, checking graphs, or investigating logs . Those tasks will be delegated to the resolvers by the IC.",
"title": "Purpose"
},
{
"location": "/training/deputy/#prerequisites",
"text": "Before you can be a Deputy, it is expected that you meet the following criteria. Don't worry if you don't meet them all yet, you can still continue with training! Be trained as an Incident Commander .",
"title": "Prerequisites"
},
{
"location": "/training/deputy/#responsibilities",
"text": "Read up on our Different Roles for Incidents to see what is expected from a Deputy, as well as what we expect from the other roles you'll be interacting with.",
"title": "Responsibilities"
},
{
"location": "/training/deputy/#training-process",
"text": "The training process for a Deputy is quite simple. Follow our Incident Commander Training . Read this page.",
"title": "Training Process"
},
{
"location": "/training/deputy/#incident-call-procedures-and-lingo",
"text": "The Steps for Deputy provide a detailed description of what you should be doing during an incident. Here are some examples of phrases and patterns you should use during incident calls.",
"title": "Incident Call Procedures and Lingo"
},
{
"location": "/training/deputy/#keep-track-of-responders",
"text": "As you listen to the call, you should keep track of the responders to the call as you hear them speak. Make a note on a piece of paper, or use the !ic responders to see who they are. The IC may ask you who is on-call for a particular system, and you should know the answer and be able to page them. Do we have a representative from [X] on the call? (pause) Deputy, can you go ahead and page the [X] on-call please. You can page them however you see fit, phone call, etc.",
"title": "Keep Track of Responders"
},
{
"location": "/training/deputy/#provide-executive-status-updates",
"text": "Provide regular status updates on Slack (roughly every 30mins), giving an executive summary of the current status during SEV-1 incidents. Keep it short and to the point, and use @here. Mention the current state, the actions in progress, customer impact, and expected time remaining. It's OK to miss out some of those if the information isn't known. @here: We are in SEV-1 due to X. Current actions in progress are to do Y. Expecting 3 mins to complete that action. Once action is complete, system should recover on its own within 5 minutes.",
"title": "Provide Executive Status Updates"
},
{
"location": "/training/deputy/#alert-ic-to-timers",
"text": "You are expected to keep track of how long the incident has been running for, and provide callouts to the IC every 10 minutes so they can take actions such as increasing the severity, or asking Support to Tweet out. This is as simple as telling the IC on the call, IC, be advised the incident is now at the 10 minute mark. Similarly, when the IC asks for someone to get back to them in X minutes, you are expected to keep track of that. You should remind the IC when that time has been reached. IC, be advised the timer for [TEAM]'s investigation is up.",
"title": "Alert IC to Timers"
},
{
"location": "/training/scribe/",
"text": "So you want to be a scribe? You've come to the right place! You don't need to be a senior team member to become a deputy or scribe, anyone can do it provided you have the requisite knowledge!\n\n\n\n\nCredit: \nHolly Chaffin\n\n\nPurpose\n#\n\n\nThe purpose of the Scribe is to maintain a timeline of key events during an incident. Documenting actions, and keeping track of any followup items that will need to be addressed.\n\n\nIt's important for the rest of the command staff to be able to focus on the problem at hand, rather than worrying about documenting the steps.\n\n\nYour job as Scribe is to listen to the call and to watch the incident Slack room, keeping track of context and actions that need to be performed, documenting these in Slack as you go. \nYou should not be performing any remediations, checking graphs, or investigating logs.\n Those tasks will be delegated to the subject matter experts (SME's) by the Incident Commander.\n\n\nPrerequisites\n#\n\n\nBefore you can be a Scribe, it is expected that you meet the following criteria. Don't worry if you don't meet them all yet, you can still continue with training!\n\n\n\n\nExcellent verbal and written \ncommunication skills\n.\n\n\nHas \nknowledge of obscure PagerDuty terms\n.\n\n\n\n\nResponsibilities\n#\n\n\nRead up on our \nDifferent Roles for Incidents\n to see what is expected from a Scribe, as well as what we expect from the other roles you'll be interacting with.\n\n\nTraining Process\n#\n\n\nThere is no formal training process for this role; reading this page should be sufficient for most tasks. Here's a list of things you can do to train though,\n\n\n\n\n\n\nRead the rest of this page, particularly the sections below.\n\n\n\n\n\n\nParticipate in \nFailure Friday\n (FF).\n\n\n\n\nShadow a FF to see how it's run.\n\n\nBe the scribe for multiple FF's.\n\n\n\n\n\n\n\n\nScribing\n#\n\n\nScribing is more art than science. 
The objective is to keep an accurate record of important events that occurred on the call, so that we can look back at the timeline to see what happened. But what exactly is important? There's no overwhelming answer, and it really comes down the judgement and experience. But here are some general things you most definitely want to capture as scribe.\n\n\n\n\nThe result of any polling decisions.\n\n\n This is not \"9 people voted yay, 3 voted nay\".\n\n\n It is \"Polled for if we should do rolling restart. \n is proceeding with restart.\"\n\n\n\n\n\n\nAny followup items that are called out as \"We should do this..\", \"Why didn't this?..\", etc.\n\n\n This is not \"Why isn't the Support representative on the call?\"\n\n\n This is \"TODO: Why didn't we get paged for this earlier?\"\n\n\n\n\n\n\n\n\nIncident Call Procedures and Lingo\n#\n\n\nThe \nSteps for Scribe\n provide a detailed description of what you should be doing during an incident.\n\n\nHere are some examples of phrases and patterns you should use during incident calls.\n\n\nStatus Stalking\n#\n\n\nAt the start of any major incident call, you should start our status stalking bot, so that it will post to the room an update automatically.\n\n\n\n\n!status stalk\n\n\n\n\nThis will provide the update and allow the IC to see the status without having to keep asking.\n\n\nNote Important Actions\n#\n\n\nDuring a call, you will hear lots of discussion happening, you should not be documenting all of this in the chat room. You only want to document things which will be important for the final timeline. It's not always obvious what this might be, and it's usually a matter of judgement. You generally want to note any actions the IC has asked someone to perform, along with the result of any polling decisions.\n\n\n\n\nPolled for decision on whether to perform rolling restart. We are proceeding with restart. [USER_A] to execute.\n\n\n\n\nSome actions might seem important at the time, but end up not being. That's OK. 
It's better to have more info than not enough, but don't go overboard.\n\n\nNote Followup Actions\n#\n\n\nSometimes during the call, someone will either mention something we \"should fix\", or the IC will specifically ask you to note a followup item. You can do this in Slack by simply prefixing with \"TODO\", this will make it easier to search for later.\n\n\n\n\nTODO: Why did we not get paged for the fall in traffic on [X] cluster?\n\n\n\n\nThe post-mortem owner will find these after and raise tasks for them.\n\n\nEnd of Call Notification\n#\n\n\nWhen the IC ends the call, you should post a message into Slack to let everyone know the call is over, and that they should continue discussion elsewhere.\n\n\n\n\nCall is over, thanks everyone. Follow up in Slack.\n\n\n\n\nDon't forget to also stop the status stalking.\n\n\n\n\n!status unstalk",
"title": "Scribe"
},
{
"location": "/training/scribe/#purpose",
"text": "The purpose of the Scribe is to maintain a timeline of key events during an incident. Documenting actions, and keeping track of any followup items that will need to be addressed. It's important for the rest of the command staff to be able to focus on the problem at hand, rather than worrying about documenting the steps. Your job as Scribe is to listen to the call and to watch the incident Slack room, keeping track of context and actions that need to be performed, documenting these in Slack as you go. You should not be performing any remediations, checking graphs, or investigating logs. Those tasks will be delegated to the subject matter experts (SME's) by the Incident Commander.",
"title": "Purpose"
},
{
"location": "/training/scribe/#prerequisites",
"text": "Before you can be a Scribe, it is expected that you meet the following criteria. Don't worry if you don't meet them all yet, you can still continue with training! Excellent verbal and written communication skills . Has knowledge of obscure PagerDuty terms .",
"title": "Prerequisites"
},
{
"location": "/training/scribe/#responsibilities",
"text": "Read up on our Different Roles for Incidents to see what is expected from a Scribe, as well as what we expect from the other roles you'll be interacting with.",
"title": "Responsibilities"
},
{
"location": "/training/scribe/#training-process",
"text": "There is no formal training process for this role, reading this page should be sufficient for most tasks. Here's a list of things you can do to train though, Read the rest of this page, particularly the sections below. Participate in Failure Friday (FF). Shadow a FF to see how it's run. Be the scribe for multiple FF's.",
"title": "Training Process"
},
{
"location": "/training/scribe/#scribing",
"text": "Scribing is more art than science. The objective is to keep an accurate record of important events that occurred on the call, so that we can look back at the timeline to see what happened. But what exactly is important? There's no overwhelming answer, and it really comes down the judgement and experience. But here are some general things you most definitely want to capture as scribe. The result of any polling decisions. This is not \"9 people voted yay, 3 voted nay\". It is \"Polled for if we should do rolling restart. is proceeding with restart.\" Any followup items that are called out as \"We should do this..\", \"Why didn't this?..\", etc. This is not \"Why isn't the Support representative on the call?\" This is \"TODO: Why didn't we get paged for this earlier?\"",
"title": "Scribing"
},
{
"location": "/training/scribe/#incident-call-procedures-and-lingo",
"text": "The Steps for Scribe provide a detailed description of what you should be doing during an incident. Here are some examples of phrases and patterns you should use during incident calls.",
"title": "Incident Call Procedures and Lingo"
},
{
"location": "/training/scribe/#status-stalking",
"text": "At the start of any major incident call, you should start our status stalking bot, so that it will post to the room an update automatically. !status stalk This will provide the update and allow the IC to see the status without having to keep asking.",
"title": "Status Stalking"
},
{
"location": "/training/scribe/#note-important-actions",
"text": "During a call, you will hear lots of discussion happening, you should not be documenting all of this in the chat room. You only want to document things which will be important for the final timeline. It's not always obvious what this might be, and it's usually a matter of judgement. You generally want to note any actions the IC has asked someone to perform, along with the result of any polling decisions. Polled for decision on whether to perform rolling restart. We are proceeding with restart. [USER_A] to execute. Some actions might seem important at the time, but end up not being. That's OK. It's better to have more info than not enough, but don't go overboard.",
"title": "Note Important Actions"
},
{
"location": "/training/scribe/#note-followup-actions",
"text": "Sometimes during the call, someone will either mention something we \"should fix\", or the IC will specifically ask you to note a followup item. You can do this in Slack by simply prefixing with \"TODO\", this will make it easier to search for later. TODO: Why did we not get paged for the fall in traffic on [X] cluster? The post-mortem owner will find these after and raise tasks for them.",
"title": "Note Followup Actions"
},
{
"location": "/training/scribe/#end-of-call-notification",
"text": "When the IC ends the call, you should post a message into Slack to let everyone know the call is over, and that they should continue discussion elsewhere. Call is over, thanks everyone. Follow up in Slack. Don't forget to also stop the status stalking. !status unstalk",
"title": "End of Call Notification"
},
{
"location": "/training/subject_matter_expert/",
"text": "If you are on-call for any team at PagerDuty, you may be paged for a major incident and will be expected to respond as a subject matter expert (SME) for your service. This page details everything you need to know in order to be prepared for that responsibility. If you are interested in becoming an Incident Commander, take a look at the \nIncident Commander Training page\n.\n\n\n\n\nCredit: \noregondot @ Flickr\n\n\nOn-Call Expectations\n#\n\n\nIf you are on-call for your team, there are certain expectations of you as that on-call. This applies to both the primary and secondary on-calls. Getting paged about a SEV-3 or SEV-4 in your system comes with different expectations than getting paged with a major SEV-2.\n\n\nBefore Going On-Call\n#\n\n\n\n\nBe prepared, by having already familiarized yourself with our incident response policies and procedures. In particular,\n\n\nDifferent Roles for Incidents\n - You will be acting as a \"Resolver\" or \"SME\". But you should familiarize yourself with the other roles and what they will be doing.\n\n\nIncident Call Etiquette\n - How to behave during an incident call.\n\n\nDuring an Incident\n - What to do during an incident. You are specifically interested in the \"Resolver\" steps, but you should familiarize yourself with the entire document.\n\n\nGlossary\n - Familiarize yourself with the terminology that may be used during the call.\n\n\n\n\n\n\nMake sure you have set up your alerting methods, and that PagerDuty can bypass your \"Do Not Disturb\" settings.\n\n\nCheck you can join the incident call. You may need to install a browser plugin. 
You don't want to be doing that the first time you get paged.\n\n\nBe aware of your upcoming on-call time and arrange swaps around travel, vacations, appointments, etc.\n\n\nIf you are an Incident Commander, make sure you are not on-call for your team at the same time as being on-call as Incident Commander.\n\n\n\n\nDuring On-Call Period\n#\n\n\n\n\nHave your laptop and Internet with you at all times during your on-call period (office, home, a MiFi, a phone with a tethering plan, etc).\n\n\nIf you have important appointments, you need to get someone else on your team to cover that time slot in advance.\n\n\nWhen you receive an alert for a major incident, you are expected to join the incident call and Slack as quickly as possible (within minutes).\n\n\nYou will be asked questions or given actions by the Incident Commander. Answer questions concisely, and follow all actions given (even if you disagree with them).\n\n\n\n\n\n\n\n\nResponse Mobilization\n#\n\n\nWhen an incident occurs, you must be mobilized or assigned to become part of the incident response. In other words, until you are mobilized to the incident via a page or being directly asked by someone else on the incident, you remain in your everyday role. After being mobilized, your first task is to check in and receive an assignment. While it's tempting to see an incident happening and want to jump in and help, when resources show up that have not been requested, the management of the incident can be compromised.\n\n\n\"Never Hesitate to Escalate\"\n#\n\n\nIf you're not sure about something, it is perfectly acceptable to bring in other SMEs from your team that you believe know a given system better than you. Don't let your ego keep you from bringing in additional help. Our motto is \"Never hesitate to escalate\", you will never be looked down upon for escalating something because you didn't know how to handle it.\n\n\nBlameless\n#\n\n\nThere will be incidents. 
Some will be caused by you, some will be caused by others... some will just happen. Our entire incident response process is completely blameless. Blaming people is counterproductive and just distracts from the problem at hand. No matter how an incident started, it needs to be resolved as quickly as possible.\n\n\nWartime vs Peacetime\n#\n\n\nBehavior during a major incident is very different to any other alert you may have received in the past. We call a major incident \"wartime\", and make a distinction between that and normal everyday operations (\"peacetime\").\n\n\nPeacetime\n#\n\n\nThe organizational structure is generally based on seniority. The more senior members of a team will lead discussions, and managers or team leads will have the final say. Decisions are made after careful consideration of all options, and to minimize potential risk to customers.\n\n\nWartime\n#\n\n\nWartime is different, and you will notice on our major incident calls that there's a different organizational structure.\n\n\n\n\nThe Incident Commander is in charge. No matter their rank during peacetime, they are now the highest ranked individual on the call, higher than the CEO.\n\n\nPrimary responders (folks acting as primary on-call for a team/service) are the highest ranked individuals for that service.\n\n\nDecisions will be made by the IC after consideration of the information presented. Once that decision is made, it is final.\n\n\nRiskier decisions can be made by the IC than would normally be considered during peacetime.\n\n\nFor example, the IC may decide to drop events for a particular customer in order to maintain the integrity of the system for everyone else.\n\n\n\n\n\n\nThe IC may go against a consensus decision. If a poll is done and 9/10 people agree but 1 disagrees, the IC may still choose the minority option despite the majority vote.\n\n\nEven if you disagree, the IC's decision is final. During the call is not the time to argue with them.\n\n\n\n\n\n\nThe IC may use language or behave in a way you find rude. This is wartime, and they need to do whatever it takes to resolve the situation, so sometimes rudeness occurs. This is never anything personal, and something you should be prepared to experience if you've never been in a wartime situation before.\n\n\nYou may be asked to leave the call by the IC, or you may even be forcibly kicked off a call. It is at the IC's discretion to do this if they feel you are not providing useful input. Again, this is nothing personal and you should remember that wartime is different than peacetime.",
"title": "Subject Matter Expert"
},
{
"location": "/training/subject_matter_expert/#on-call-expectations",
"text": "If you are on-call for your team, there are certain expectations of you as that on-call. This applies to both the primary and secondary on-calls. Getting paged about a SEV-3 or SEV-4 in your system comes with different expectations than getting paged with a major SEV-2.",
"title": "On-Call Expectations"
},
{
"location": "/training/subject_matter_expert/#before-going-on-call",
"text": "Be prepared, by having already familiarized yourself with our incident response policies and procedures. In particular, Different Roles for Incidents - You will be acting as a \"Resolver\" or \"SME\". But you should familiarize yourself with the other roles and what they will be doing. Incident Call Etiquette - How to behave during an incident call. During an Incident - What to do during an incident. You are specifically interested in the \"Resolver\" steps, but you should familiarize yourself with the entire document. Glossary - Familiarize yourself with the terminology that may be used during the call. Make sure you have set up your alerting methods, and that PagerDuty can bypass your \"Do Not Disturb\" settings. Check you can join the incident call. You may need to install a browser plugin. You don't want to be doing that the first time you get paged. Be aware of your upcoming on-call time and arrange swaps around travel, vacations, appointments, etc. If you are an Incident Commander, make sure you are not on-call for your team at the same time as being on-call as Incident Commander.",
"title": "Before Going On-Call"
},
{
"location": "/training/subject_matter_expert/#during-on-call-period",
"text": "Have your laptop and Internet with you at all times during your on-call period (office, home, a MiFi, a phone with a tethering plan, etc). If you have important appointments, you need to get someone else on your team to cover that time slot in advance. When you receive an alert for a major incident, you are expected to join the incident call and Slack as quickly as possible (within minutes). You will be asked questions or given actions by the Incident Commander. Answer questions concisely, and follow all actions given (even if you disagree with them).",
"title": "During On-Call Period"
},
{
"location": "/training/subject_matter_expert/#response-mobilization",
"text": "When an incident occurs, you must be mobilized or assigned to become part of the incident response. In other words, until you are mobilized to the incident via a page or being directly asked by someone else on the incident, you remain in your everyday role. After being mobilized, your first task is to check in and receive an assignment. While it's tempting to see an incident happening and want to jump in and help, when resources show up that have not been requested, the management of the incident can be compromised.",
"title": "Response Mobilization"
},
{
"location": "/training/subject_matter_expert/#never-hesitate-to-escalate",
"text": "If you're not sure about something, it is perfectly acceptable to bring in other SMEs from your team that you believe know a given system better than you. Don't let your ego keep you from bringing in additional help. Our motto is \"Never hesitate to escalate\", you will never be looked down upon for escalating something because you didn't know how to handle it.",
"title": "\"Never Hesitate to Escalate\""
},
{
"location": "/training/subject_matter_expert/#blameless",
"text": "There will be incidents. Some will be caused by you, some will be caused by others... some will just happen. Our entire incident response process is completely blameless. Blaming people is counter productive and just distracts from the problem at hand. No matter how an incident started, they all need to get solved as quickly as possible.",
"title": "Blameless"
},
{
"location": "/training/subject_matter_expert/#wartime-vs-peacetime",
"text": "Behavior during a major incident is very different to any other alert you may have received in the past. We call a major incident \"wartime\", and make a distinction between that and normal everyday operations (\"peacetime\").",
"title": "Wartime vs Peacetime"
},
{
"location": "/training/subject_matter_expert/#peacetime",
"text": "The organizational structure is generally based on seniority. The more senior members of a team will lead discussions, and managers or team leads will have the final say. Decisions are made after careful consideration of all options, and to minimize potential risk to customers.",
"title": "Peacetime"
},
{
"location": "/training/subject_matter_expert/#wartime",
"text": "Wartime is different, and you will notice on our major incident calls that there's a different organizational structure. The Incident Commander is in charge. No matter their rank during peacetime, they are now the highest ranked individual on the call, higher than the CEO. Primary responders (folks acting as primary on-call for a team/service) are the highest ranked individuals for that service. Decisions will be made by the IC after consideration of the information presented. Once that decision is made, it is final. Riskier decisions can be made by the IC than would normally be considered during peacetime. For example, the IC may decide to drop events for a particular customer in order to maintain the integrity of the system for everyone else. The IC may go against a consensus decision. If a poll is done, and 9/10 people agree but 1 disagrees. The IC may choose the disagreement option despite a majority vote. Even if you disagree, the IC's decision is final. During the call is not the time to argue with them. The IC may use language or behave in a way you find rude. This is wartime, and they need to do whatever it takes to resolve the situation, so sometimes rudeness occurs. This is never anything personal, and something you should be prepared to experience if you've never been in a wartime situation before. You may be asked to leave the call by the IC, or you may even be forceable kicked off a call. It is at the IC's discretion to do this if they feel you are not providing useful input. Again, this is nothing personal and you should remember that wartime is different than peacetime.",
"title": "Wartime"
},
{
"location": "/training/glossary/",
"text": "Ever wonder what all of those strange words you sometimes see in our documentation mean? This page is here to help.\n\n\n\n\n\n\n\n\nTerm\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nIC / Incident Commander\n\n\nThe incident commander is the person responsible for bringing any major incident to resolution. They are the highest ranking individual on any major incident call, regardless of their day-to-day rank. Their decisions made as commander are final. \nMore info\n.\n\n\n\n\n\n\nDeputy\n\n\nTypically the backup IC. The deputy's job is to support the IC during the call, providing them with any help they need. \nMore info\n.\n\n\n\n\n\n\nScribe\n\n\nThe scribe's job is to keep a log of all activities performed during the call in a written chat log on Slack. \nMore info\n.\n\n\n\n\n\n\nResolver\n\n\nA person on the incident call who is able to help resolve issues within a particular system. Also referred to as an SME (see below). \nMore info\n.\n\n\n\n\n\n\nSME\n\n\n\"Subject Matter Expert\", someone who is an expert in a particular service or subject who can provide information to the IC, and perform resolution actions for a particular system. \nMore info\n.\n\n\n\n\n\n\nCAN Report\n\n\nCAN stands for \"Conditions\" \"Actions\" \"Needs\", if an IC asks you for a CAN report, you should provide the current state of your service (condition), what actions need to be taken to return it to a healthy state (actions), and what support you need in order to perform the actions (needs).\n\n\n\n\n\n\nSev / Severity\n\n\nHow severe the incident is. The \"sev\" of an incident determines the type of response we give. The higher the severity, the higher the likelihood of making risky actions to resolve the situation. \nMore info\n.\n\n\n\n\n\n\nSpan of Control\n\n\nRefers to the number of direct reports you have. For example, if the IC has 10 people as direct reports on a call, they have a large span of control. 
We aim to make the span of control as minimal as we can while still being productive.\n\n\n\n\n\n\nGrenade Thrower\n\n\nSomeone who joins the call late in the game, and provides information that completely derails the current thinking. They then leave almost immediately.\n\n\n\n\n\n\nExecutive Swoop\n\n\nWhen an executive comes on the call and drops some sort of bombshell. A version of grenade throwing.",
"title": "Glossary"
},
{
"location": "/about/",
"text": "This site documents parts of the Spearhead Systems Issue Response process. It is a cut-down version of our internal documentation, used at Spearhead Systems for any incident or service request, and to prepare new employees for on-call responsibilities. It provides information not only on preparation but also what to do during and after.\n\n\nFew companies seem to talk about their internal processes for dealing with major incidents. We would like to change that by opening up our documentation to the community, in the hopes that it proves useful to others who may want to formalize their own processes. Additionally, it provides an opportunity for others to suggest improvements, which ends up helping everyone.\n\n\nThis documentation is complementary to what is available in our \nexisting wiki\n.\n\n\nWhat is this?\n#\n\n\nA collection of pages detailing how to efficiently deal with any incident or service request that might arise, along with information on how to go on-call effectively. It provides lessons learned the hard way, along with training material for getting you up to speed quickly.\n\n\nWho is this for?\n#\n\n\nIt is intended for on-call practitioners and those involved in an operational incident or service request response process, or those wishing to enact a formal incident response process. Specifically this is for all of our Technical Support staff.\n\n\nWhy do I need it?\n#\n\n\nAs a service provider Spearhead Systems deals with service requests on a daily basis. The reason we exist is to deliver a service which in most cases boils down to incidents and service requests. We want to deliver a smooth and seamless experience for resolving our customers issues therefore this documentation is a guideline for how we handle these requests. 
This documentation will allow you give you a head start on how to deal with issues in a way which leads to the fastest possible recovery time.\n\n\nWhat is covered?\n#\n\n\nAnything from preparing to \ngo on-call\n, definitions of \nseverities\n, incident \ncall etiquette\n, all the way to how to run a \npost-mortem\n, providing a \npost-mortem template\n and even a \nsecurity incident response process\n.\n\n\nWhat is missing?\n#\n\n\nLots, dig in an help us complete the picture. We can migrate most processes from Sharepoint here.\n\n\nLicense\n#\n\n\nThis documentation is provided under the Apache License 2.0. In plain English that means you can use and modify this documentation and use it both commercially and for private use. However, you must include any original copyright notices, and the original LICENSE file.\n\n\nWhether you are a Spearhead Systems customer or not, we want you to have the ability to use this documentation internally at your own company. You can view the source code for all of this documentation on our GitHub account, feel free to fork the repository and use it as a base for your own internal documentation.",
"title": "About"
},
{
"location": "/about/#what-is-this",
"text": "A collection of pages detailing how to efficiently deal with any incident or service request that might arise, along with information on how to go on-call effectively. It provides lessons learned the hard way, along with training material for getting you up to speed quickly.",
"title": "What is this?"
},
{
"location": "/about/#who-is-this-for",
"text": "It is intended for on-call practitioners and those involved in an operational incident or service request response process, or those wishing to enact a formal incident response process. Specifically this is for all of our Technical Support staff.",
"title": "Who is this for?"
},
{
"location": "/about/#why-do-i-need-it",
"text": "As a service provider Spearhead Systems deals with service requests on a daily basis. The reason we exist is to deliver a service which in most cases boils down to incidents and service requests. We want to deliver a smooth and seamless experience for resolving our customers issues therefore this documentation is a guideline for how we handle these requests. This documentation will allow you give you a head start on how to deal with issues in a way which leads to the fastest possible recovery time.",
"title": "Why do I need it?"
},
{
"location": "/about/#what-is-covered",
"text": "Anything from preparing to go on-call , definitions of severities , incident call etiquette , all the way to how to run a post-mortem , providing a post-mortem template and even a security incident response process .",
"title": "What is covered?"
},
{
"location": "/about/#what-is-missing",
"text": "Lots, dig in an help us complete the picture. We can migrate most processes from Sharepoint here.",
"title": "What is missing?"
},
{
"location": "/about/#license",
"text": "This documentation is provided under the Apache License 2.0. In plain English that means you can use and modify this documentation and use it both commercially and for private use. However, you must include any original copyright notices, and the original LICENSE file. Whether you are a Spearhead Systems customer or not, we want you to have the ability to use this documentation internally at your own company. You can view the source code for all of this documentation on our GitHub account, feel free to fork the repository and use it as a base for your own internal documentation.",
"title": "License"
}
]
}