# During an Incident

Information on what to do during a major incident. See our [severity level descriptions](/before/severity_levels.md) for what constitutes a major incident.
!!! note "Documentation"
    Always document your activities. Keep a detailed worklog of your actions in DoIT and communicate verbosely on Slack or other channels (email, etc.).
<table class="custom-table" id="contact-summary">
<thead>
<tr>
<th>Slack Channel</th>
<th>Response Page</th>
<th>Phone</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="#">#support</a></td>
<td><a href="#">http://response.spearhead.systems</a></td>
<td><a href="#">+40728 005 263</a></td>
</tr>
<tr>
<td colspan="3" class="centered">Need a TL? Do <code>!tl page</code> in Slack</td>
</tr>
<tr>
<td colspan="3"><em>For executive summary updates only, join <a href="#">#executive-summary-updates</a>.</em></td>
</tr>
</tbody>
</table>
!!! info "Security Incident?"
    If this is a security incident, you should follow the [Security Incident Response](/during/security_incident_response.md) process.
## Don't Panic!
1. Join the incident call and chat (see links above).
* Anyone is free to join the call or chat to observe and follow along with the incident.
* If you wish to participate, however, you should join both. If you can't join the call for some reason, you should have a dedicated proxy for the call. Disjointed discussions in the chat room are ultimately distracting.
1. Follow along with the call/chat, add any comments you feel are appropriate, but keep the discussion relevant to the problem at hand.
* If you are not an SME, try to filter any discussion through the primary SME for your service. Too many people discussing at once can become overwhelming, so we should try to maintain a hierarchical structure on the call if possible.
1. Follow instructions from the Team Leader.
* **Is there no TL on the call?**
* Manually page them with `!tl page` in Slack. This will page the primary and backup TLs at the same time.
* Never hesitate to page the TL. It's much better to have them and not need them than the other way around.
!!! info "No formal call?"
    Not all issues begin with a formal call. Some issues are self-explanatory and automatically generated via our monitoring platforms, a customer logging on to our portal, etc. In these scenarios [DoIT](http://doit.sphs.ro) is the definitive source. If that is not sufficient, ask your TL.
## Steps for the Team Leader
Resolve the incident as quickly and as safely as possible, using the Sysadmin to assist you. Delegate any tasks to relevant experts at your discretion.
1. Announce on the call, in DoIT, and in Slack that you are the Team Leader, who you have designated as Sysadmin (usually the backup TL), and who the scribe/juniors are, if any.
1. Identify whether there is an obvious cause for the incident (recent deployment, spike in traffic, etc.) and delegate investigation to the relevant experts.
* Use the service experts on the call to assist in the analysis. They should usually be able to confirm the cause quickly, but not always. How to proceed when the cause is not positively known is the TL's call. Confer with service owners and use their knowledge to help you.
1. Identify investigation and repair actions (roll back, rate-limit services, etc.) and delegate them to the relevant service experts. Typical examples (not an exhaustive list; a rough script for one such action follows this list):
* **Bad Deployment:** Roll it back.
* **Web Application Stuck/Crashed:** Do a rolling restart.
* **Event Flood:** Validate automatic throttling is sufficient, adjust manually if not.
* **Data Center Outage:** Validate automation has removed bad data center. Force it to do so if not.
* **Degraded Service Behavior without load:** Gather forensic data (heap dumps, etc), and consider doing a rolling restart.
1. Listen for prompts from your Sysadmin regarding severity escalations, decide whether we need to announce publicly, and instruct the customer liaison accordingly.
* Announcing publicly is at your discretion as TL. If you are unsure, then announce publicly ("If in doubt, tweet it out").
1. Once the incident has recovered or is actively recovering, you can announce that the incident is over and that the call is ending. This usually indicates there is no more productive work to be done on the incident right now.
* Move the remaining, non-time-critical discussion to Slack.
* Follow up to ensure the customer liaison wraps up the incident publicly.
* Identify any post-incident clean-up work.
* You may need to perform debriefing/analysis of the underlying root cause.
1. (After call ends) Create the post-mortem page from the template, and assign an owner to the post-mortem for the incident.
1. (After call ends) Send out an internal email explaining that we had a major incident, and provide a link to the post-mortem.
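
As a concrete illustration of one repair action above ("Web Application Stuck/Crashed: Do a rolling restart"), here is a minimal sketch that restarts one app server at a time and waits for its health check before moving on. The hostnames, service name, and health-check endpoint are hypothetical placeholders, not Spearhead's actual infrastructure; treat it as a starting point, not a ready-made runbook command.

```python
#!/usr/bin/env python3
"""Hedged sketch of a rolling restart (hosts, unit name and health check are placeholders)."""
import subprocess
import time
import urllib.request

APP_SERVERS = ["app01.example.internal", "app02.example.internal", "app03.example.internal"]
SERVICE = "webapp"        # hypothetical systemd unit name
HEALTH_PATH = "/healthz"  # hypothetical health-check endpoint
HEALTH_TIMEOUT = 120      # seconds to wait per host before escalating to the TL


def healthy(host: str) -> bool:
    """Return True if the host answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(f"http://{host}{HEALTH_PATH}", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False


def rolling_restart() -> None:
    for host in APP_SERVERS:
        print(f"Restarting {SERVICE} on {host} ...")
        # Restart over SSH; assumes key-based access and sudo rights on the target host.
        subprocess.run(["ssh", host, f"sudo systemctl restart {SERVICE}"], check=True)
        deadline = time.time() + HEALTH_TIMEOUT
        while not healthy(host):
            if time.time() > deadline:
                raise RuntimeError(f"{host} did not become healthy; stop and report to the TL")
            time.sleep(5)
        print(f"{host} is healthy, moving on.")


if __name__ == "__main__":
    rolling_restart()
```

Restarting hosts one at a time keeps the rest of the pool serving traffic while the stuck processes are recycled.
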
## Steps for Sysadmin
You are there to support the TL in whatever they need.
1. Monitor the status, and notify the TL if/when the incident escalates in severity level.
1. Be prepared to page other people as directed by the Team Leader.
1. Provide regular status updates in Slack (roughly every 30 minutes) to the executive team, giving an executive summary of the current status. Keep it short and to the point, and use @here. A rough example script for posting these updates follows this list.
1. Follow instructions from the Team Leader.
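
If you prefer to script these periodic updates rather than typing them by hand, the sketch below posts a short executive summary through a Slack incoming webhook. The webhook URL is a hypothetical placeholder and assumes an incoming webhook has been configured for the executive summary channel.

```python
#!/usr/bin/env python3
"""Hedged sketch: post an executive status update to Slack via an incoming webhook (URL is a placeholder)."""
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # hypothetical placeholder


def post_status_update(summary: str) -> None:
    """Send a short, @here-prefixed executive summary to the configured channel."""
    payload = {"text": f"<!here> Incident status update: {summary}"}
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        resp.read()  # Slack replies with "ok" on success


if __name__ == "__main__":
    post_status_update("Failover complete, error rates recovering. Next update in 30 minutes.")
```
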
## Steps for Scribe
You are there to document the key information from the incident in Slack, DoIT, our wiki, etc.
1. Update the appropriate channel with who the TL is, who the Sysadmin is, and that you're the scribe (if not already done).
* e.g. "TL: Bob Boberson, Sysadmin: Gigi Con, Scribe: Writer Writerson"
1. You should add notes to the proper channels when significant actions are taken, or findings are determined. You don't need to wait for the TL to direct this - use your own judgment.
* You should also add `TODO` notes to the proper channel that indicate follow-ups slated for later.
1. Follow instructions from the Team Leader.
## Steps for Subject Matter Experts
You are there to support the Team Leader in identifying the cause of the incident, suggesting and evaluating repair actions, and following through on those repair actions.
1. Investigate the incident by analyzing any graphs or logs at your disposal. Announce all findings to the Team Leader.
* If you are unsure of the cause, that's fine, state that you are investigating and provide regular updates to the TL.
1. Announce all suggestions for resolution to the Team Leader; it is their decision on how to proceed. Do not take any actions unless told to do so!
1. Follow instructions from the team leader.
1. (Optional) Once the call is over and post-mortem is created, add any notes you think are relevant to the post-mortem page.
## Steps for Customer Liaison
Be on stand-by to post public facing messages regarding the incident.
1. You will typically be required to update the status page and to send Tweets or other communications from our various accounts at certain times during the call.
1. Follow instructions from the Team Leader.