
Digital resilience will be judged after the breach and at the board level

Written by Iain Cox | Feb 4, 2026

At the board level, digital resilience has become less about preventing incidents and more about how decisions are judged once a breach has occurred. Because scrutiny is retrospective, controls, audits and frameworks offer limited protection unless they clearly inform deliberate board-level judgments about risk, trade-offs and acceptable impact. When expert views diverge, reasonable care is assessed through how uncertainty was handled and revisited, not whether the “right” answer was chosen. Regulation increases the visibility of these judgments without removing ambiguity, reinforcing that digital resilience ultimately depends on board judgment under uncertainty.

For boards today, cyber risk has stopped being a box-ticking exercise and become a judgment problem under uncertainty. Across the EU, the most recent ENISA Threat Landscape report records nearly 4,900 verified cybersecurity incidents across member states in the year to June 2025, highlighting how frequently organisations are now tested, often in regulated and interconnected environments. In the UK, the picture is just as stark: 43% of businesses reported experiencing a cyber breach or attack in the past year, according to the UK Government’s Cyber Security Breaches Survey.

What these numbers change is not just the likelihood of disruption, but where scrutiny lands when it happens. Accountability increasingly extends beyond the operational layer into the boardroom, where decisions are made under time pressure, with incomplete information, and amid competing expert views.

This has left many directors navigating a growing sense of unease. Cyber risk is clearly escalating as a board responsibility, yet it is often unclear what decision is actually being expected, how much certainty is realistic, or what will be examined once outcomes are known. Assurances that feel adequate in advance can quickly lose their force after an incident.

Today, we’re looking at how cyber risk is judged at the board level once something has gone wrong, focusing on how decisions are evaluated after the fact, why common inputs fail under scrutiny, and how reasonable care is interpreted when certainty was never available.

The new reality: decisions are evaluated after failure, not before

Most board-level discussions about cyber risk are forward-looking. Programmes are approved, budgets signed off, and assurance is taken from management and external experts based on what is known at the time. The underlying expectation is that sensible preparation will stand as evidence of good governance if an incident occurs.

That expectation rarely holds.

In my experience, once a breach has occurred, the basis on which decisions are assessed fundamentally changes. Regulators, auditors, litigators and, in some cases, shareholders no longer look at what was reasonable then. They look at what happened after. Controls that felt proportionate are reinterpreted through the lens of impact. Risks that were consciously accepted become risks that appear avoidable.

What repeatedly catches directors out is the shift from decision-making under uncertainty to judgment with hindsight. Ambiguity is flattened, warning signs appear obvious in retrospect, and previously debated trade-offs are reframed as omissions. The question quietly moves from “was this a reasonable decision at the time?” to “why was this allowed to happen?”

This is where many boards discover that activity before an incident does not translate into protection after it. Policies, audits and certifications rarely answer the question being asked post-breach. That question is not whether work was done, but whether the board understood the risk it was accepting and why it believed that exposure was justified.

Recognising this dynamic is essential because it reframes cyber resilience as a governance challenge rather than a compliance exercise. Once decisions are judged after failure, the focus inevitably shifts from collecting inputs to demonstrating judgment. That leads to the next, more uncomfortable issue: what decision the board is actually expected to make in the first place.

 

Digital resilience is judged after the breach, not before it. What matters then is not the volume of controls in place, but whether board-level decisions about risk were deliberate, informed and revisited as circumstances changed.

 

What decision is the board actually expected to make?

For many directors, discomfort around cyber risk starts here. The responsibility is clearly escalating, yet the decision itself often feels ill-defined. Papers arrive full of technical language, maturity scores and coloured dashboards, but it is not obvious what, concretely, the board is being asked to decide.

This ambiguity creates a quiet failure mode. Boards either retreat into oversight theatre, noting reports and seeking reassurance, or they overreach into technical detail where they have neither context nor leverage. Neither position holds up well after an incident.

In reality, the board’s role is not to approve specific controls, vendors or architectures. It is to make explicit judgments about risk, grounded in the business model and its constraints. That includes deciding which risks are acceptable, which are not, and what level of disruption the organisation is genuinely prepared to absorb. These are not technical decisions. They are strategic ones, even when the input is technical.

Where this often breaks down is that these judgments remain implicit. Risk appetite is described in general terms but not translated into concrete scenarios. Trade-offs are made, but not clearly articulated. Assumptions about threat likelihood, recovery time or third-party dependence are absorbed rather than challenged.

After a breach, that lack of clarity becomes a problem. Scrutiny does not focus on whether the board chose the right firewall or framework. It focuses on whether directors understood the nature of the exposure they were accepting and whether that acceptance was deliberate, informed and revisited as the organisation evolved.

In many organisations, this gap only becomes visible when an external perspective forces the conversation out of technical status and into decision-making. A short, senior-level discussion focused on board judgments, rather than controls or maturity scores, is often enough to surface where assumptions are doing more work than they should.

Where directors get caught out: inputs that do not survive scrutiny

When boards review cyber risk, the discussion is usually anchored in a familiar set of inputs: compliance status, audit outcomes, penetration test summaries, risk heatmaps and third-party certifications. These artefacts are designed to provide comfort and structure, and in many organisations they do exactly that.

After an incident, these same materials are often revisited and found wanting. A clean audit does not explain why a particular failure mode was acceptable. A certification does not demonstrate that the board understood systemic or supply-chain risk. A green dashboard does not show how conflicting expert opinions were handled or why certain warnings were deprioritised.

The problem is not that these inputs are wrong, but that they are often misunderstood as protection, with boards assuming their presence will speak for itself. In hindsight, they rarely do. Regulators and investigators do not ask whether a framework was followed. They ask whether the board engaged with the substance of the risk and whether the information presented was sufficient to support the decisions that were made.

This is particularly acute in regulated and interconnected environments. In financial services, healthcare, SaaS platforms and IT services with downstream dependencies, the impact of a cyber incident extends beyond the organisation itself. Inputs that focus narrowly on internal controls fail to account for concentration risk, third-party failure or cascading effects across customers and partners.

The uncomfortable reality is that many board packs are optimised for reassurance rather than judgment. They summarise activity and status without surfacing uncertainty or forcing trade-offs into the open. As a result, they provide limited protection once scrutiny turns from what was done to how decisions were made.

This leads directly to one of the hardest questions directors face: how to demonstrate reasonable care when experts themselves disagree. That tension, and how it is judged after the fact, is where many accountability cases are ultimately decided.

“Reasonable care” when experts disagree

One of the most paralysing aspects of cyber risk at the board level is that expert advice is rarely aligned. Internal teams, external advisers, auditors and insurers often present different views of the same risk, each grounded in their own incentives, methodologies and risk tolerance. For directors, this creates a genuine dilemma: whose advice defines “reasonable care”?

The mistake many boards make is assuming that reasonable care means finding the correct answer. In reality, it means demonstrating how disagreement was handled. After an incident, scrutiny does not focus on whether the board sided with the expert who later proved to be right. It focuses on whether competing perspectives were surfaced, tested and weighed in a structured way.

Where directors get exposed is when disagreement is smoothed over rather than explored. When reassurance replaces challenge. When one authoritative voice is accepted without examining its assumptions, limitations or blind spots. In hindsight, a consensus achieved too easily can look like a lack of diligence rather than good governance.

Reasonable care, as it is increasingly interpreted, is procedural as much as substantive. It is evidenced through questions asked, scenarios explored and trade-offs made explicit. Did the board understand where experts disagreed and why? Did it recognise what was uncertain rather than treating estimates as facts? Did it revisit earlier judgments as the business, threat landscape or regulatory expectations changed?

This is particularly relevant in fast-scaling and regulated environments, where yesterday’s reasonable position can quietly become today’s exposure. Boards that rely on static assurances struggle to show that care was ongoing rather than assumed.

Where expert views diverge, boards often find value in stepping outside formal assurance processes. A focused conversation with an experienced, independent practitioner can help clarify where disagreement is material and which uncertainties actually require board-level decisions.

 

Reasonable care is not demonstrated by choosing the right expert. It is demonstrated by how disagreement and uncertainty were handled at the board level, long before outcomes were known.

 

What regulators examine after the fact

In my experience, once an incident has occurred, regulatory attention rarely centres on whether an organisation followed a particular framework or ticked the expected boxes. Those elements may set the baseline, but they are not where accountability is ultimately assessed.

Regulators examine the quality of decision-making. Investigations tend to reconstruct what the board knew, when it knew it, and how that information was interpreted. They look for evidence that cyber risk was understood in business terms, not just reported as a technical issue. They examine whether material risks were escalated appropriately, whether warnings were contextualised, and whether the board challenged assumptions rather than accepting reassurance at face value.

A recurring pattern in post-incident reviews is the gap between information provided and understanding achieved. Boards may have received regular updates, yet still struggle to demonstrate that they grasped the implications of issues such as recovery time, data concentration, third-party dependence or operational resilience. The presence of reporting is not taken as proof of comprehension.

Another area of focus is how decisions evolved over time. Regulators often look for signs that earlier judgments were revisited as the organisation scaled, entered new markets, or increased its reliance on digital infrastructure and suppliers. Static risk positions in a dynamic environment raise questions about ongoing oversight.

Importantly, regulators also examine what was not discussed. Missing scenarios, untested assumptions and deferred conversations can become as significant as recorded decisions. Silence is easily interpreted as oversight rather than prioritisation once the impact is visible.

This post-event lens reinforces a difficult truth for boards: documentation and controls are supporting evidence, not the case itself. The central question remains whether directors exercised informed judgment in the face of uncertainty. That is why digital resilience cannot be reduced to a control problem alone, and why many organisations continue to strengthen defences without reducing board-level exposure.

Why digital resilience is a judgment problem, not a control problem

In response to rising scrutiny, many organisations default to strengthening controls. More tooling, more reporting, more assurance. While these investments are often necessary, they rarely address the core issue boards face.

Controls do not make decisions. Boards do. A purely control-led approach therefore falls short: controls describe what exists, while judgment explains why it was considered sufficient.

Digital resilience breaks down not because a specific safeguard is missing, but because assumptions about failure, recovery and impact were never fully examined. Questions such as how long critical services can realistically be unavailable, which data losses are existential rather than inconvenient, or where dependencies create single points of failure are judgment calls. They cannot be automated or outsourced.

Resilience, in practice, is about how an organisation absorbs and responds to disruption. That depends on prior choices: where investment was prioritised, which risks were consciously accepted, and how trade-offs between speed, cost and robustness were made. These are board-level decisions, even when the execution sits elsewhere.

In many organisations, this distinction is blurred, with cyber risk discussed as a technical status rather than a strategic posture, creating a false sense of safety before incidents and limited defensibility after.

Recognising digital resilience as a judgment problem reframes the board’s task. It shifts the conversation from “are our controls adequate?” to “are we comfortable with the outcomes if they fail?” That reframing also explains why some boards emerge from incidents with their credibility intact while others do not. The patterns behind that difference are remarkably consistent.

Patterns we repeatedly see with boards under pressure

When we examine cyber incidents across sectors and jurisdictions, certain board-level patterns recur. They are not about industry, size or maturity. They are about how risk is framed and discussed when uncertainty is unavoidable.

Boards that struggle after an incident often share similar traits. Cyber risk is treated as a standing agenda item, but the conversation remains high-level and procedural. Updates focus on progress and compliance rather than exposure and consequence. Reassurance is prioritised over discomfort, and challenge is episodic rather than systematic.

By contrast, boards that remain defensible tend to surface uncertainty deliberately. They ask how assumptions could fail, not just whether controls are in place. They revisit earlier judgments as the business changes, rather than relying on positions that were reasonable at a different scale or operating model. Importantly, they make trade-offs explicit, even when those trade-offs are uncomfortable.

Another recurring pattern is how time is used. Boards under pressure often compress cyber discussions into short updates, reinforcing the idea that this is a technical status to be noted. Boards that hold up better allocate time to scenarios and implications, particularly where resilience intersects with customers, regulators or critical suppliers.

There is also a difference in how accountability is distributed. In weaker patterns, responsibility is implicitly pushed downwards: to security teams, to external advisers, or to frameworks. In stronger ones, the board retains ownership of the judgment, even while relying on expert input.

These patterns matter because they shape the evidence that exists after an incident. They influence what was asked, what was challenged and what was documented. In environments where regulation is intensifying, those patterns become even more consequential, not because they remove uncertainty, but because they determine how that uncertainty is judged.

Regulated contexts raise the stakes, not the clarity

In regulated and highly interconnected environments, the pressure on directors intensifies, but the underlying uncertainty persists. Frameworks and legislation are often interpreted as sources of clarity. In practice, they do something different. They raise expectations around governance while leaving the hardest judgments firmly unresolved.

Frameworks such as DORA, NIS2 and the UK’s evolving resilience and cyber accountability landscape formalise what regulators already examine after an incident. They make board-level responsibility explicit. They increase requirements around oversight, escalation and documentation. What they do not provide is a definitive answer to what constitutes “enough” in any given organisation.

Regulatory context: what directors are expected to consider

Digital Operational Resilience Act (DORA)
EU regulation applicable to financial entities from January 2025.

Key considerations for directors:

  • Whether the board has explicit oversight of digital operational resilience, not just IT or security controls
  • How tolerance for disruption is defined and tested, including recovery objectives for critical services
  • How material ICT third-party dependencies are governed and challenged at board level
  • Whether incident reporting and escalation reflect business impact rather than technical severity

NIS2 Directive
EU directive strengthening cybersecurity and governance obligations across critical and important sectors.

Key considerations for directors:

  • Personal accountability for approving and overseeing cybersecurity risk management measures
  • Whether risk discussions move beyond compliance to cover realistic failure and impact scenarios
  • How disagreements or uncertainty in expert advice are surfaced and addressed
  • Whether cyber risk oversight evolves as the organisation scales or changes operating model

UK Cyber Security and Resilience Bill (CSRB) 2026
Proposed UK legislation expanding and strengthening the existing NIS framework.

Key considerations for directors:

  • Clarity on board-level responsibility for resilience and incident preparedness
  • Whether reporting to the board enables informed judgment rather than reassurance
  • How resilience expectations extend across suppliers and service dependencies
  • Whether decisions and trade-offs are documented in a way that stands up to post-incident scrutiny

The presence of regulation can create the impression that compliance equates to safety, or that following prescribed structures will shield the board from scrutiny. After an incident, that assumption rarely holds. Regulatory reviews tend to look past formal alignment and focus on whether the board exercised informed judgment within the framework.

In sectors such as financial services, healthcare, SaaS platforms and IT services with downstream dependencies, this effect is magnified. Operational resilience is inseparable from third-party risk, data concentration and systemic impact. Regulation acknowledges these realities but still leaves boards to decide how much exposure is acceptable and how trade-offs should be managed.

The net effect is not reduced accountability, but heightened visibility. Regulation increases the volume of evidence available after the fact, without narrowing the space for judgment. For directors, this reinforces a difficult truth: frameworks can support decision-making, but they cannot replace it.

Orienting the board: from fear of being wrong to being defensible

For many directors, the instinctive response to rising cyber accountability is caution. Decisions are delayed, discussions remain high-level, and responsibility is pushed towards experts in the hope that technical authority will provide protection. It rarely does.

A more effective shift is not towards certainty, but towards defensibility.

Being defensible does not mean preventing every incident. It means being able to show that decisions were made deliberately, with a clear understanding of what was uncertain and what was at stake. Boards that orient themselves this way focus less on eliminating risk and more on clarifying which risks they are consciously accepting and why.

In practice, this changes the nature of board discussions. The focus moves from whether controls exist to what happens when they fail. Scenarios are used to test assumptions rather than reassure. Expert input becomes most valuable not when it delivers another assessment, but when it helps the board sharpen its judgment and surface blind spots early.

This is why many boards benefit from engaging senior cybersecurity expertise before committing to a full audit or programme. A focused, exploratory discussion can help directors pressure-test assumptions, understand where expert views diverge, and clarify what decisions actually sit at board level, without immediately defaulting to heavyweight remediation.

That orientation also leaves a clearer record. When trade-offs are explicit and revisited over time, the board’s role is visible. And when digital resilience is judged after the breach, that visibility often matters more than the volume of controls in place.

Cyber risk will continue to demand judgment under uncertainty. Regulation and controls may evolve, but they do not remove the board’s responsibility to decide, or to explain those decisions when scrutiny inevitably follows.