Practical PSL behaviours for security leaders
PSL leadership does not look like command and control. It looks like a set of habits that most technical environments do not reward and rarely teach.
Reframing problems
The presenting problem is almost always a symptom. The skill is in resisting the pull toward the obvious solution long enough to ask what is actually happening.
In security, the presenting problem might be “we have too many unresolved critical vulnerabilities.” The reframes worth exploring: are the vulnerabilities genuinely unresolvable, or is the resolution process broken? Is the severity classification trustworthy, or are teams gaming it to manage workload? Is there an ownership structure that assigns the work to people who have no authority to complete it?
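One way to make these reframes concrete is to interrogate the ticket data before debating the backlog. Here is a minimal sketch in Python, assuming a hypothetical export `vuln_status_history.csv` with one row per status transition (columns `status`, `entered_at`, `left_at` as ISO timestamps); the file name and columns are placeholders for whatever your tracker exports. If most of a ticket's lifetime sits in queue states rather than in active remediation, the process is the problem, not the vulnerabilities.

```python
import csv
from collections import defaultdict
from datetime import datetime
from statistics import median

def days_in_status(rows):
    """Collect the days each ticket spent in each workflow status."""
    totals = defaultdict(list)
    for row in rows:
        entered = datetime.fromisoformat(row["entered_at"])
        left = datetime.fromisoformat(row["left_at"])
        totals[row["status"]].append((left - entered).days)
    return totals

with open("vuln_status_history.csv", newline="") as f:  # hypothetical export
    history = list(csv.DictReader(f))

# If the medians for queue states ("awaiting triage", "awaiting change
# window") dwarf the median for "fix in progress", the resolution
# process is broken, and no amount of pressure on engineers fixes that.
for status, days in sorted(days_in_status(history).items()):
    print(f"{status:>24}: median {median(days):5.1f} days across {len(days)} tickets")
```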
Reframing is not delay. It is the difference between treating a symptom repeatedly and addressing the cause once.
Surfacing hidden constraints
Every organisation has constraints that are known informally but never stated explicitly. Security work that ignores them will keep failing at the same invisible wall.
Asking directly sometimes works. Observing what happens when a recommendation is made often works better. Who hesitates? Who defers? Who changes the subject? These are signals about where the real constraints live.
Managing energy, not just tasks
Burnout is a security vulnerability. A team that is running on adrenaline and attrition will miss things, cut corners on process, and fail to notice the slow-moving threats that do not trigger alerts.
A PSL-minded security leader pays attention to the energy state of the team as a system-level concern, not a welfare concern. Exhausted teams make the threat model worse.
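One proxy that makes the energy state visible as a system property is off-hours paging load per responder. A small sketch, assuming a hypothetical `pager_events.csv` export with `responder` and `paged_at` columns; the off-hours definition here is an assumption, so adjust it to your rotation.

```python
import csv
from collections import Counter
from datetime import datetime

OFF_HOURS = set(range(0, 8)) | set(range(20, 24))  # assumed local night hours

def is_off_hours(ts: datetime) -> bool:
    """Nights and weekends, by this sketch's definition."""
    return ts.hour in OFF_HOURS or ts.weekday() >= 5

pages = Counter()
with open("pager_events.csv", newline="") as f:  # hypothetical export
    for row in csv.DictReader(f):
        if is_off_hours(datetime.fromisoformat(row["paged_at"])):
            pages[row["responder"]] += 1

# One responder absorbing most of the night pages is a system-level
# energy problem, not a question of individual resilience.
for responder, count in pages.most_common():
    print(f"{responder}: {count} off-hours pages")
```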
Designing experiments instead of chasing perfect answers
Security problems are rarely clean enough to admit perfect solutions. The alternative is not to wait for more information to arrive. It is to design small, reversible experiments that generate information.
A controlled test of a detection capability against a known technique is more useful than a theoretical discussion of whether the capability works. A pilot rollout of a new control with one team produces data that a policy document never will.
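A sketch of what such a controlled test can look like, assuming a monitored Windows test host and a SIEM with an alert-search API; the URL, the rule name, and the `command_line` field are placeholders for whatever your tooling actually exposes. The payload is benign (it only prints a marker string) but has the observable shape of encoded PowerShell execution, ATT&CK T1059.001.

```python
import base64
import json
import subprocess
import time
import urllib.request

# Hypothetical alert-search endpoint; replace with your SIEM's real API.
ALERT_SEARCH_URL = "https://siem.internal.example/api/alerts?rule=encoded-powershell"

MARKER = "Write-Host 'detection-test'"  # benign payload: prints a string

def trigger_test_event() -> None:
    """Run a harmless encoded command on the monitored test host.
    -EncodedCommand expects base64 of the UTF-16LE script text."""
    encoded = base64.b64encode(MARKER.encode("utf-16-le")).decode()
    subprocess.run(["powershell.exe", "-EncodedCommand", encoded], check=False)

def alert_fired(timeout_s: int = 300, poll_s: int = 30) -> bool:
    """Poll the SIEM until an alert referencing the marker appears."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        with urllib.request.urlopen(ALERT_SEARCH_URL) as resp:
            alerts = json.load(resp)
        if any(MARKER in alert.get("command_line", "") for alert in alerts):
            return True
        time.sleep(poll_s)
    return False

if __name__ == "__main__":
    trigger_test_event()
    print("alert fired" if alert_fired() else "detection gap: no alert within timeout")
```

The harness itself is disposable. The point is that a failed check is data you can act on, where a theoretical discussion of coverage produces only opinions.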
Working on yourself first
Weinberg’s version of this is direct and slightly uncomfortable: your reactions are part of the system. If you communicate in a way that makes people defensive, you will see defensive behaviour, and you may conclude the team cannot be trusted with hard information. The conclusion will be self-fulfilling.
The version that matters for security is this: how a security team is experienced by the rest of the organisation shapes how much useful information reaches it. A team that is experienced as judgemental, obstructive, or disconnected from operational reality will be managed, not collaborated with. That is a capability gap.