The learning loop

Every incident is an opportunity to update the model. Whether that opportunity is used depends on whether the organisation has created the conditions for honest review, and whether review findings are connected to process and structure changes rather than filed and forgotten.

Post-incident review

Conduct reviews promptly, ideally within days of the incident being closed, while the detail is still fresh. A review conducted three months later relies on fading memory and whatever was documented at the time, and the emotional distance that time provides, while sometimes useful, also obscures the texture of decisions made under pressure.

The review is not a debrief of what went wrong, though that is part of it. It is an examination of the model: what did the SIRT assume about how the incident would unfold, and how did that assumption compare to what actually happened? The gaps between the model and reality are the most valuable output of the review.

The congruence condition

Satir’s survival stances appear regularly in post-incident environments. The placating response is to say everything went well when it did not, to protect relationships or to avoid creating the impression of incompetence. The blaming response is to locate the cause of the incident or the response failure in another team or another person. The computing response is to produce a technically accurate account that omits the significant decision made under stress and without complete information. The distracting response is to change the subject to operational detail and avoid the uncomfortable question entirely.

None of these stances produces useful learning. Congruent review means describing what actually happened, what was known and not known at each decision point, and what the reasoning was. This requires two conditions that have to be explicitly created: a genuine assurance that the review is for learning and not for blame, and a facilitator or review structure that keeps the conversation congruent when it drifts toward one of the stances.

The assurance has to be backed by visible behaviour from leadership. An organisation that says the review is for learning and then makes personnel decisions based on review findings has communicated clearly that the assurance is not credible. People adapt accordingly.

What to do with findings

A finding that is not connected to a change in process, structure, or tooling is a finding that documents the same problem appearing again next time. The post-incident review should produce a short list of specific changes, each owned by a named person with a realistic timeline.
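One minimal sketch of what "owned by a named person with a realistic timeline" can look like in practice. Everything here is hypothetical illustration: the field names, the owner, and the dates are invented, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of a review action item: each finding maps to one
# specific change with a named owner and a due date, so it can be chased.
@dataclass
class ActionItem:
    finding: str
    change: str
    owner: str
    due: date

    def overdue(self, today: date) -> bool:
        # An item is overdue once today is past its due date.
        return today > self.due

items = [
    ActionItem("runbook outdated", "refresh containment checklist",
               "j.smith", date(2024, 3, 1)),
    ActionItem("unclear escalation authority", "define on-call authority",
               "a.lee", date(2024, 6, 1)),
]

# Which findings have slipped past their timeline?
print([i.finding for i in items if i.overdue(date(2024, 4, 1))])
# → ['runbook outdated']
```

The point of the structure is not the code but the constraint it encodes: an action item without an owner or a date cannot be represented, so it cannot quietly exist.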

The changes should be proportionate to the finding. Significant gaps in structure or authority require structural response. A gap in documentation practice may require only an update to a checklist. Not every finding requires a project.

Track recurring issues separately. If the same finding appears in multiple post-incident reviews, it is a systemic issue, not a per-incident one. Systemic issues require systemic response, which typically involves the political layer: someone with authority to require the change, protected time and budget to make it, and the willingness to treat a recurring gap as a management priority rather than a team performance issue.
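Tracking recurrence can be as simple as tagging findings and counting tags across reviews. A minimal sketch, with invented tags and an assumed threshold of two reviews for "recurring":

```python
from collections import Counter

# Hypothetical data: each post-incident review yields a list of finding
# tags. A tag is counted once per review (set() dedupes within a review).
reviews = [
    ["stale-runbook", "no-oncall-authority"],
    ["stale-runbook", "alert-noise"],
    ["no-oncall-authority", "stale-runbook"],
]

counts = Counter(tag for findings in reviews for tag in set(findings))

# Any tag appearing in two or more reviews is flagged as systemic.
systemic = sorted(tag for tag, n in counts.items() if n >= 2)
print(systemic)  # → ['no-oncall-authority', 'stale-runbook']
```

The flagged list is the input to the political layer described above: it names the gaps that have outlived individual incidents and therefore need someone with authority behind them.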

Metrics as feedback

Metrics are useful to the extent that they reveal trends and prompt questions rather than generating reports. Time to detection, time to containment, and time to resolution are the most immediately useful. False positive rates matter because they shape the quality of the detection signal that feeds into SIRT work.

Watch the trends rather than the individual numbers. A single slow incident may reflect unusual complexity. Consistently slow detection-to-decision times suggest a bottleneck in the process or the tooling that is worth investigating.
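One way to encode "watch the trend, not the number" is to compare a recent window against the long-run baseline using medians, which a single outlier cannot drag. A minimal sketch with invented times and an assumed 1.5x threshold:

```python
from statistics import median

# Hypothetical data: detection-to-containment times in minutes, one per
# incident, oldest first. The 45 alone could be unusual complexity; the
# sustained shift in the last few incidents is what matters.
times = [22, 18, 25, 20, 19, 45, 38, 41, 44]

baseline = median(times[:-4])   # long-run behaviour
recent = median(times[-4:])     # last four incidents

# Flag only a sustained shift, not a single slow incident.
if recent > 1.5 * baseline:
    print("detection-to-containment trending worse: investigate")
```

Medians are a deliberate choice here: a mean would flag the single 45-minute incident, which is exactly the noise the text says to ignore.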

Metrics only change behaviour if someone looks at them and asks what the trend means. A dashboard no one consults is not measurement; it is performance of measurement.