Scenario planning (Looking forward) sketch¶
Imagine peering around the corner of tomorrow without tripping over your own assumptions. Scenario planning lets you do exactly that: explore plausible futures, stress-test your strategies, and spot the potholes before you drive into them. By running thought experiments on emerging threats, you sharpen decision-making, improve adaptability, and strengthen your organisation’s confidence in handling uncertainty.
This sketch uses the future of AI as a running example—because if we are going to practise foresight, let us start with something galloping faster than humanity can keep up with.
Define the focal issue¶
Identify the core question or challenge your organisation wants to explore.
AI example: “How will AI integration shape our workforce, culture, and operational resilience over the next decade?”
This step anchors the exercise: everything else revolves around this focal question.
Identify driving forces¶
Assess external factors that could influence the organisation’s future.
AI example:
Technological: AI adoption, automation, generative models
Economic: Labour market shifts, productivity changes, platform monopolisation
Political: Regulation, governance models (democratic vs. extractive)
Social: Public acceptance, ethical considerations, cultural impact
Outcome: Forces like governance style, energy innovation, and labour adaptation will shape the plausible trajectories of AI’s impact.
Determine critical uncertainties¶
Pinpoint uncertainties that have the greatest potential to affect outcomes.
AI example:
Will AI foster human-AI symbiosis, create turbulent coexistence, or result in displacement?
Will regulation lag or lead technological adoption?
How resilient are labour markets and education systems to AI-driven change?
Outcome: These uncertainties define the axes along which scenarios are developed.
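As a thinking aid, two of these uncertainties can be treated as scenario axes and crossed to generate candidate futures. The sketch below is a minimal illustration, assuming regulation (leads vs. lags adoption) and labour-market resilience (high vs. low) as the two axes; both choices, and the pole labels, are assumptions for this sketch rather than the only valid pairing.

```python
from itertools import product

# Two illustrative critical uncertainties, each reduced to two poles.
# The axis choices and pole labels are assumptions made for this sketch,
# not the only way to frame the uncertainties listed above.
axes = {
    "regulation": ["leads adoption", "lags adoption"],
    "labour resilience": ["high", "low"],
}

# Crossing the poles gives four candidate scenario "quadrants" to flesh out.
for regulation, resilience in product(*axes.values()):
    print(f"Candidate scenario: regulation {regulation} / labour resilience {resilience}")
```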
Develop scenarios¶
Create a small set of distinct, plausible futures.
AI example (from Futures of AI: Symbiosis, turbulence, or displacement, or something completely different?):
Human-AI symbiosis (best-case in-the-box): AI augments human work; meaningful reskilling; ethical frameworks in place.
Turbulent coexistence (likely in-the-box): AI integration is uneven; some sectors thrive, others falter; regulation lags.
Dystopian displacement (worst-case in-the-box): AI displaces humans; monopolies dominate; inequality rises.
The Great Pullback (out-of-the-box): Society consciously scales back AI use; decentralised, open-source solutions emerge; a “deliberate unwind” scenario.
Analyse implications¶
Evaluate how each scenario affects your organisation.
AI example:
Symbiosis → invest in collaborative AI tools, reskilling programs, policy alignment
Turbulence → prepare flexible operational strategies, hedging risks across sectors
Displacement → anticipate workforce shifts, safeguard organisational knowledge, build safety nets
Pullback → explore decentralised solutions, reduce dependency on large AI platforms
Outcome: Each scenario guides prioritisation of resources, risk management, and policy considerations.
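One way to keep these implications usable in planning is a simple scenario-to-response map. The sketch below only mirrors the list above; the structure and the wording of the actions are illustrative, not a prescribed format.

```python
# Scenario -> headline responses, mirroring the implications listed above.
responses = {
    "Human-AI symbiosis": ["collaborative AI tools", "reskilling programmes", "policy alignment"],
    "Turbulent coexistence": ["flexible operating model", "hedge risks across sectors"],
    "Dystopian displacement": ["plan workforce shifts", "safeguard organisational knowledge", "build safety nets"],
    "The Great Pullback": ["explore decentralised solutions", "reduce dependency on large AI platforms"],
}

for scenario, items in responses.items():
    print(f"{scenario}: " + "; ".join(items))
```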
Develop strategic responses¶
Define actions to remain robust across multiple futures.
AI example:
Target interventions realistically: Form cross-functional teams, but recognise that expertise is limited and some critical skills may be scarce. Don’t assume perfect collaboration — silos, politics, and resource constraints will intrude.
Prioritise flexibility over comprehensiveness: Flexible workflows help adapt to AI integration, but constant change introduces operational friction and staff fatigue. Choose where agility matters most.
Ethics vs expediency: Developing ethical AI policies is crucial, yet commercial pressures and regulatory gaps may force compromises. Be explicit about where values might collide with performance or profit goals.
Experiment carefully, but critically: Low-risk trials are useful, but they can mislead. Even small pilots can scale unintended harms if not monitored rigorously. Treat early results with scepticism and document failures as carefully as successes.
Scenario hedging: Prepare for multiple plausible outcomes, but accept that extreme disruptions (e.g., Dystopian Displacement) may be outside your control. Focus on resilience — what can your organisation survive, rather than what it can prevent? A minimal scoring sketch of this hedging idea follows this list.
Continuous reflection: Strategies should not be “set and forget.” Regularly question assumptions, especially about human behaviour, societal adoption, and regulatory shifts. Many organisations overestimate their influence on these factors.
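The scenario-hedging point can be made concrete with a small scoring exercise: rate how well each candidate action holds up in each scenario, then favour actions whose worst case is still tolerable. This is a minimal sketch, assuming invented scores and a worst-case (maximin-style) ranking rule; neither the actions nor the numbers are recommendations.

```python
# Minimal scenario-hedging sketch: score each candidate action against each
# scenario (the scores are invented for illustration), then favour actions
# whose worst case is still acceptable (a maximin-style robustness rule).
scenarios = ["Symbiosis", "Turbulence", "Displacement", "Pullback"]

# action -> score per scenario, 0 (useless) to 5 (highly valuable)
actions = {
    "Reskilling programme":              [5, 4, 3, 3],
    "Single-vendor AI platform lock-in": [4, 2, 1, 0],
    "Open, portable data practices":     [3, 4, 4, 5],
}

# Rank actions by their worst-case score across all four scenarios.
for action, scores in sorted(actions.items(), key=lambda kv: min(kv[1]), reverse=True):
    print(f"{action}: worst-case score {min(scores)} across {len(scenarios)} scenarios")
```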
Monitor and adapt¶
Continuously track trends and update strategies.
AI example:
Use indicators such as AI adoption rates, policy changes, labour market shifts
Update scenario planning quarterly or annually
Ensure lessons learned from real-world AI integration feed back into organisational planning
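In practice, monitoring can start as something as simple as a threshold check on a handful of indicators that brings the next scenario review forward when breached. A minimal sketch; the indicator names, current values, and trigger levels are assumptions for illustration, not recommended figures.

```python
# Illustrative early-warning check: indicator names, current values, and
# thresholds below are assumptions for this sketch, not recommended figures.
indicators = {
    "ai_adoption_rate": 0.42,         # share of core workflows using AI
    "policy_changes_this_year": 3,    # relevant regulatory changes observed
    "roles_materially_changed": 0.15, # share of roles redefined by AI
}

review_triggers = {
    "ai_adoption_rate": 0.40,
    "policy_changes_this_year": 2,
    "roles_materially_changed": 0.10,
}

breached = [name for name, value in indicators.items()
            if value >= review_triggers[name]]

if breached:
    print("Bring the next scenario review forward; triggers breached:", breached)
else:
    print("Stay on the regular quarterly or annual review cycle.")
```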
Cross-cutting insights from AI example¶
Labour markets: jobs evolve, fragment, or vanish depending on scenario
Creativity vs automation: AI can enhance or displace human expression
Governance and democracy: the trajectory hinges on regulatory and societal choices
Key variables: governance, energy innovation, labour adaptation, public agency