Scenario planning is a structured method for preparing organizations for multiple plausible futures rather than betting on a single predicted outcome. Governments and firms have used it to navigate the Cold War, the 2008 financial crisis, and the COVID‑19 pandemic. The same methodology is now being applied to artificial intelligence, because the risks AI introduces are structurally similar to the risks COVID revealed: systemic, fast‑moving, and resistant to traditional forecasting methods.
The core insight from COVID is straightforward: organizations that had built scenario planning capability were able to respond faster and more coherently than those operating on single‑point forecasts. A 2022 paper published in Academic Radiology described how healthcare systems using scenario planning identified high‑yield strategies that remained valuable across multiple pandemic scenarios — without knowing which scenario would materialize. The common elements across scenarios pointed to the actions worth taking regardless of which future arrived.
AI risk presents the same problem in a different domain. The technology evolves faster than any individual organization can track. Regulations are still forming. Systemic risks — supply‑chain dependencies, model failures at scale, adversarial attacks — are not hypothetical. Scenario planning gives organizations a way to build strategic resilience for AI disruption without waiting for perfect information.
What COVID Taught Risk Practitioners About Uncertainty
The pandemic exposed three structural failures in conventional strategic planning that are directly relevant to AI risk management.
Failure 1: Forecasting based on the past. Traditional business forecasting assumes the future will resemble the recent past. This assumption breaks down in the presence of low‑probability, high‑impact events — what the insurance industry calls “tail risk.” COVID was not a tail risk in retrospect; it was a foreseeable scenario that most organizations had not formally planned for. AI risk carries similar characteristics: low probability of catastrophic failure in any given year, but high enough cumulative probability over a decade that strategic preparation is warranted.
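To make the cumulative‑probability point concrete, here is a minimal sketch. The 5% annual figure is an illustrative assumption, not an estimate from any study cited here; the arithmetic simply shows how small annual probabilities compound over a planning horizon.

```python
# Illustrative only: the 5% annual probability is a hypothetical assumption.
annual_p = 0.05   # assumed chance of a catastrophic AI failure in any one year
years = 10

# Probability the event occurs at least once in a decade, assuming
# independent years: 1 minus the probability it never occurs.
cumulative_p = 1 - (1 - annual_p) ** years
print(f"Cumulative {years}-year probability: {cumulative_p:.0%}")  # ~40%
```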
Failure 2: Silos that prevent early warning. COVID’s early signals appeared in multiple organizations and government agencies simultaneously — but those signals never connected into a coherent picture because the data lived in separate silos with no shared taxonomy. AI risk signals face the same fragmentation: cybersecurity teams see adversarial attack patterns, data scientists see model drift, legal teams see regulatory developments, and business leaders see competitive dynamics. No single function has the full picture.
Failure 3: Plans that assume normal operations. Most organizational continuity plans assumed that the main disruption would be localized — a facility outage, a supply‑chain hiccup, a regulatory action in one jurisdiction. The plans did not account for simultaneous, global, multi‑domain disruption. AI risks share this property: a model failure in a core business system can cascade into regulatory, reputational, operational, and financial domains simultaneously.
The organizations that responded best to COVID shared one characteristic: they had practiced thinking through unfamiliar scenarios before those scenarios materialized. They had what Daryl Connor, a veteran of post‑crisis recovery work, described as “institutional muscle memory for the unfamiliar.”
The 3C‑AI Framework: Scenario Planning Applied to AI Risk
A framework published in 2025 in California Management Review, UC Berkeley's management journal, applies scenario‑planning methodology specifically to AI disruption risk. Called the 3C‑AI framework, it is a cyclical, adaptive model built on three phases: Characterization, Confrontation, and Continuous Review.
Phase 1: Characterization — Mapping the AI Risk Landscape
Characterization is a comprehensive mapping of potential AI risks, going beyond surface‑level concerns to identify root‑cause risk categories. The framework identifies four domains where AI introduces distinct risk patterns:
- Operational risks – Systems fail in unexpected ways when deployed at scale. Model performance degrades as input distributions shift. Dependencies on third‑party AI services create single points of failure.
- Ethical and bias risks – Adaptive learning systems produce outcomes that were not predictable during development. Bias embedded in training data produces discriminatory decisions at scale.
- Regulatory risks – AI‑specific regulation is actively forming in the EU (AI Act), US (sector‑specific guidance from NIST, FDA, CFPB), and other jurisdictions. Organizations may need to retrofit compliance onto systems already in production as rules solidify.
- Societal and reputational risks – Public trust in AI systems is volatile. A single high‑profile failure can trigger reputational damage disproportionate to the technical severity of the incident.
The characterization phase requires cross‑functional input: technology, legal, compliance, ethics, and business‑line leaders must collectively map the AI risk landscape using a shared taxonomy. Without this shared language, scenario planning defaults to each function planning for its own concerns in isolation.
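One lightweight way to enforce that shared language is to encode the taxonomy as a data structure that every function tags its risks against. The sketch below is a minimal illustration in Python; the domain names follow the four categories above, while the schema fields and the example entry are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskDomain(Enum):
    # The four root-cause domains from the Characterization phase.
    OPERATIONAL = "operational"
    ETHICAL_AND_BIAS = "ethical_and_bias"
    REGULATORY = "regulatory"
    SOCIETAL_AND_REPUTATIONAL = "societal_and_reputational"

@dataclass
class AIRisk:
    """A single entry in the shared AI risk register (illustrative schema)."""
    name: str
    domain: RiskDomain
    owner: str                                    # accountable function, e.g. "legal"
    affected_systems: list[str] = field(default_factory=list)
    notes: str = ""

# Hypothetical example entry contributed by the data science team:
drift_risk = AIRisk(
    name="Model drift in credit-scoring pipeline",
    domain=RiskDomain.OPERATIONAL,
    owner="data_science",
    affected_systems=["credit_scoring_v3"],
    notes="Input distributions shift as customer demographics change.",
)
```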
Real‑World Example: A Global Bank’s 3C‑AI Pilot
In early 2025, a multinational bank piloted the 3C‑AI framework for its credit‑scoring models. During Characterization, the team uncovered an unexpected operational risk: a third‑party cloud provider's API latency spikes could silently degrade model predictions, leading to loan‑approval errors. By documenting this risk early, the bank added a latency‑monitoring layer to its pipeline, a safeguard that proved crucial later that year when the provider suffered a regional outage, preventing a cascade of mis‑priced loans.
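The bank's monitoring layer is not described in detail, but a minimal version of the idea, timing each third‑party call and flagging slow responses before their predictions are trusted, might look like the sketch below. The function names, threshold, and fallback behavior are all hypothetical.

```python
import time

LATENCY_BUDGET_SECONDS = 0.5  # hypothetical threshold; tune to the provider's SLA

def call_with_latency_guard(predict_fn, payload):
    """Call a third-party model API and flag responses that exceed the
    latency budget, since latency spikes preceded silent prediction
    degradation in the bank's pilot."""
    start = time.monotonic()
    result = predict_fn(payload)
    elapsed = time.monotonic() - start
    if elapsed > LATENCY_BUDGET_SECONDS:
        # In production this would emit a metric or alert; here we just flag
        # the result so downstream logic can route it to a fallback path.
        print(f"WARNING: upstream latency {elapsed:.2f}s exceeds budget")
        return {"prediction": result, "trusted": False}
    return {"prediction": result, "trusted": True}

# Hypothetical usage with a stubbed upstream model:
print(call_with_latency_guard(lambda payload: 0.42, {"customer_id": 123}))
```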
Phase 2: Confrontation — Stress‑Testing Plans Against Scenarios
Confrontation is where scenario planning diverges most sharply from traditional risk assessment. Rather than calculating the probability of each identified risk, the framework develops four to six plausible future scenarios and tests whether existing organizational plans are robust across all of them.
| Scenario | Description | Key Organizational Stress |
|---|---|---|
| Scenario A: Regulatory shock | A major AI‑related incident triggers emergency legislation requiring explainability for all automated decisions in regulated industries within 18 months. | Compliance infrastructure, model documentation, audit trails |
| Scenario B: Model supply‑chain failure | A widely deployed foundation‑model provider experiences a catastrophic failure or is acquired by a competitor, forcing organizations to migrate models rapidly. | Vendor dependency, portability architecture, fallback procedures |
| Scenario C: Adversarial escalation | State‑sponsored actors systematically exploit AI system vulnerabilities, causing a wave of model‑poisoning and prompt‑injection attacks at enterprise scale. | Security controls, monitoring, incident response |
| Scenario D: Public trust collapse | A series of AI failures in high‑profile applications (healthcare, criminal justice, finance) generates sustained negative media coverage and consumer backlash. | Brand resilience, communication plans, AI usage governance |
The goal of confrontation is not to predict which scenario will occur. It is to identify the actions that are valuable across all plausible futures — the common elements that represent robust strategic investment regardless of which future materializes.
Phase 3: Continuous Review — Keeping the Framework Current
The final phase acknowledges that AI risk is not static. The 3C‑AI framework is explicitly designed as a living process, not a one‑time planning exercise. New AI capabilities, new regulatory requirements, and new adversarial techniques emerge on a timescale measured in months, not years.
This means scenario sets must be reviewed and updated at least annually, with trigger‑based reviews when significant events occur — a major AI‑related incident in your industry, a new regulatory development, or a meaningful shift in your organization’s AI deployment profile.
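In practice, a trigger‑based review can be as simple as a standing checklist evaluated at every governance meeting. The sketch below shows one way to encode it; the trigger keys are hypothetical, while the descriptions mirror the events named above.

```python
# Illustrative trigger checklist for convening an out-of-cycle scenario review.
REVIEW_TRIGGERS = {
    "major_incident_in_industry": "a significant AI-related incident in your sector",
    "new_regulation": "a new regulatory development in a jurisdiction you operate in",
    "deployment_shift": "a meaningful shift in your organization's AI deployment profile",
}

def review_needed(observed_events: set[str]) -> bool:
    """Return True if any standing trigger has fired since the last review."""
    fired = observed_events & REVIEW_TRIGGERS.keys()
    for key in sorted(fired):
        print(f"Trigger fired: {REVIEW_TRIGGERS[key]}")
    return bool(fired)

# Example: legal flags a new regulation this quarter.
assert review_needed({"new_regulation"}) is True
```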
Applying COVID‑Era Tactics to AI Risk Scenarios
Healthcare organizations’ experience with scenario planning during COVID offers several direct lessons for AI risk management that go beyond generic scenario‑planning theory.
Lesson 1: Identify Axes of Uncertainty, Not Just Risks
In the COVID scenario‑planning exercise published in Academic Radiology, the planning team identified three “axes of uncertainty” — variables where expert opinions diverged most sharply and whose future direction was genuinely unknowable. They used combinations of values along these axes to generate a set of distinct scenarios for planning.
For AI risk, useful axes of uncertainty include:
- Pace of AI‑specific regulation – slow vs. rapid.
- Reliability of AI systems at scale – stable vs. frequently failing.
- Degree of AI adoption in your industry – incremental vs. rapid displacement.
Different combinations produce scenarios with genuinely different strategic implications.
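Enumerating the combinations is mechanical. The sketch below, assuming the three axes listed above with two poles each, generates the full candidate grid with itertools.product; in practice a team would then prune implausible combinations down to a working set.

```python
from itertools import product

# The three axes of uncertainty listed above, each with two poles.
AXES = {
    "regulation_pace": ("slow", "rapid"),
    "system_reliability": ("stable", "frequently_failing"),
    "industry_adoption": ("incremental", "rapid_displacement"),
}

# Three binary axes yield 2**3 = 8 candidate scenarios; teams typically
# prune implausible combinations down to a working set of four to six.
scenarios = [dict(zip(AXES, combo)) for combo in product(*AXES.values())]
for i, scenario in enumerate(scenarios, start=1):
    print(f"Scenario {i}: {scenario}")
```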
Lesson 2: Find Common‑Element Tactics
The Academic Radiology study found that the most valuable tactical responses were those that remained useful across multiple scenarios. In healthcare COVID planning, common‑element tactics included maintaining flexible workforce arrangements and ensuring supply‑chain redundancy: actions worth taking regardless of whether caseloads ran high or low.
For AI risk, common‑element tactics that remain valuable across most plausible scenarios include:
- Investing in model documentation standards early.
- Building explainability into AI systems from the design phase.
- Maintaining human‑in‑the‑loop oversight for high‑stakes decisions (see the sketch after this list).
- Establishing a clear AI governance policy before a crisis forces it.
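As one concrete illustration of the human‑in‑the‑loop tactic, oversight can be implemented as a confidence‑threshold router. This is a minimal sketch rather than a prescription from the framework; the decision types, threshold, and labels are hypothetical.

```python
CONFIDENCE_THRESHOLD = 0.90   # hypothetical; in practice set by governance policy
HIGH_STAKES_DECISIONS = {"loan_denial", "claim_rejection", "account_closure"}

def route_decision(decision_type: str, model_confidence: float) -> str:
    """Route a model output to automation or human review.

    High-stakes decision types always go to a human reviewer, as does
    anything the model is not confident about."""
    if decision_type in HIGH_STAKES_DECISIONS or model_confidence < CONFIDENCE_THRESHOLD:
        return "human_review"
    return "automated"

# A loan denial is escalated regardless of confidence:
assert route_decision("loan_denial", 0.99) == "human_review"
assert route_decision("marketing_segmentation", 0.95) == "automated"
```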
Lesson 3: Stress‑Test Your Existing Risk Management Framework
Existing frameworks — NIST RMF, ISO 31000, COSO ERM — were designed for conventional technological risks. They are well‑suited to structured, well‑understood threats but have documented limitations when applied to adaptive AI systems whose behavior can change over time.
The 3C‑AI framework notes two specific gaps:
- Emergent properties of adaptive learning systems are not adequately addressed.
- Operational guidance for the rapidly evolving AI threat landscape is missing.
When applying scenario planning to AI risk, audit your current framework first to surface these blind spots, then layer the 3C‑AI process on top.
A Practical Scenario‑Planning Process for AI Risk
Organizations that want to implement scenario planning for AI risk without disrupting existing operations can follow this compressed eight‑week process:
Week 1–2: Cross‑functional kickoff
- Assemble representatives from IT, data science, legal/compliance, and two business units.
- Review current and planned AI deployments.
- Identify 3–5 AI‑related decisions that carry the most strategic uncertainty; these become the anchor questions for the scenario set.
Week 3–4: Scenario development
- Use the axis‑of‑uncertainty method to generate four scenarios that span the range of plausible futures for your anchor questions.
- Document each scenario with a narrative, key assumptions, and the decisions it challenges.
Week 5–6: Strategy testing
- For each scenario, assess whether current plans, budgets, and policies are adequate.
- Highlight common‑element tactics — actions that are valuable in three or more scenarios (see the scoring sketch after this list).
- Flag scenarios where gaps are most severe.
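A simple way to run this step is a tactic‑by‑scenario matrix: score each candidate tactic in each scenario, then keep the tactics that clear the three‑scenario bar. In the sketch below, the tactic names and scores are hypothetical placeholders for a team's own assessments.

```python
# Rows: candidate tactics. Columns: scenarios A-D from the Confrontation phase.
# 1 = the tactic is valuable in that scenario, 0 = it is not.
TACTIC_MATRIX = {
    "model documentation standards":   [1, 1, 1, 1],
    "vendor portability architecture": [0, 1, 1, 0],
    "adversarial red-teaming":         [0, 0, 1, 1],
    "AI governance policy":            [1, 1, 1, 1],
}

COMMON_ELEMENT_BAR = 3  # valuable in three or more scenarios

common_element_tactics = [
    tactic for tactic, scores in TACTIC_MATRIX.items()
    if sum(scores) >= COMMON_ELEMENT_BAR
]
print("Prioritize:", common_element_tactics)
# -> Prioritize: ['model documentation standards', 'AI governance policy']
```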
Week 7: Action planning
- Prioritize the common‑element tactics and develop concrete implementation road‑maps (e.g., “adopt model‑card templates by Q4,” “establish a cross‑functional AI risk council”).
- Assign owners, timelines, and success metrics.
Week 8: Review and embed
- Present findings to senior leadership.
- Integrate the scenario‑planning outputs into existing governance structures (risk committees, board reporting).
- Schedule the first Continuous Review checkpoint for six months later.
Conclusion
The COVID‑19 pandemic forced organizations to confront uncertainty in real time, and the organizations that adapted best were those that already had a habit of imagining multiple futures. AI risk presents a comparable and arguably faster‑moving challenge. By borrowing the disciplined, cross‑functional approach that proved effective during the pandemic, and by applying the 3C‑AI framework, companies can turn vague AI anxieties into actionable strategies.
Scenario planning does not eliminate risk; it makes the unknown manageable. It exposes hidden dependencies, forces teams out of their silos, and surfaces a set of “common‑element” actions that deliver value no matter which future unfolds. In a world where AI capabilities evolve faster than regulations, and where a single model failure can ripple across legal, operational, and reputational domains, that kind of resilience is no longer optional; it is a competitive necessity.
Key Takeaways & Next Steps
- Start with a cross‑functional charter. Bring together tech, legal, risk, and business leaders early to agree on a shared AI risk taxonomy.
- Map the landscape (Characterization). Identify operational, ethical, regulatory, and societal risks specific to your AI portfolio.
- Build a handful of plausible scenarios. Use axes of uncertainty such as regulatory speed, model reliability, and adoption rate.
- Stress‑test existing plans. Look for gaps and, more importantly, for tactics that work across most scenarios.
- Prioritize common‑element actions. Early documentation, explainability, human‑in‑the‑loop controls, and a formal AI governance policy pay off in every future.
- Institutionalize Continuous Review. Schedule annual (or trigger‑based) updates to keep the scenario set and mitigation tactics current.
- Embed outcomes into governance. Tie scenario‑planning insights to board risk reports, budget cycles, and performance metrics.
By following these steps, organizations can move from reactive firefighting to proactive resilience—turning the lessons of COVID into a strategic advantage in the age of AI.