
Safety Differently


By Professor Sidney Dekker
National Safety Advisor

An insight hit home when I was working with a health authority in Canada a couple of years ago. The insight was this: when we talk about safety, we actually don’t talk about safety. We talk about the lack of it—the absence of it. We talk about incidents, we investigate accidents, we scratch our heads at the mismanagement of risk by our fellow pilots. We even measure our safety by the number of instances in which it was absent – for example, an accident rate.

As one of my doctoral students said, it’s as if we are trying to understand how to have a successful marriage by studying divorce.

It was time to start doing, and seeing, safety differently. Think of safety outcomes as a hypothetical Gaussian, or normal, curve – also known as a bell curve. The curve shows that the number of things that go wrong – the left-hand side of the curve – is tiny. On the right-hand side are the heroic, unexpected surprises – a Hudson River landing by Sully, for instance – that fall far outside what people would normally experience or have to deal with.

Middle of the Curve

In between, the huge bulbous middle of the figure, sits the daily creation of success. This is where good outcomes are made, despite the organisational, operational and financial obstacles, despite the rules, bureaucracy and common frustrations. This is where work can be hard, but is still successful.

This suggests that the way to improve safety is not by trying to make the already-small part of things that go wrong even smaller, but by understanding what happens in the big middle, where things go right, and then enhancing the capacities that make it so. That way, we shrink the part that goes wrong by growing the part that goes right.

Let’s go back to that health authority that cemented my insight. It was a large organisation employing some 25,000 people. Its patient safety statistics were dire, if typical: one in every 13 patients who walked or were carried through the doors to receive care was harmed in the process of receiving that care – roughly 8%.

When we asked the health authority what they typically found in the cases that went wrong, here is what they came up with. Among the patterns that their incident data yielded, they consistently found:

• Workarounds
• Shortcuts
• Violations
• Guidelines not followed
• Errors and miscalculations
• Unfindable people or tools
• Unreliable measurements
• User-unfriendly technologies
• Organisational frustrations
• Supervisory shortcomings

Weakest Link

It seems to be a pretty intuitive and straightforward list. It also firmly belongs to a particular understanding of safety: the person is the weakest link. The ‘human factor’ is a set of mental and moral deficiencies that only great systems and stringent supervision can protect against. Following that sort of logic, we have great systems and solid procedures—it’s just those bloody-minded people who are unreliable or non-compliant. You probably recognise the logic:

• People are the problem to control
• We need to find out what people did wrong
• We write or enforce more rules
• We tell everyone to try harder
• We get rid of bad apples

Many safety strategies, to the extent that you can call them that, are organised around these very premises. Poster campaigns remind people of particular risks they need to be aware of. Strict surveillance and compliance monitoring is imposed to achieve ‘zero-tolerance’ or ‘zero-harm’ goals.

The health authority had been doing that sort of stuff as well. None of it helped. They were still stuck at one-in-thirteen.

Ask the Right Question

Then I asked the question that my colleague Erik Hollnagel, a professor of cognitive systems and author of the book ‘Safety-I and Safety-II’, would have asked: “What about the other twelve? Do you even know why they go right? Have you ever asked yourself that question?”

The answer from the health authority was “no.” All the resources that they had for safety were directed towards investigating and understanding the cases that went wrong. There was organisational, regulatory, reputational and political pressure to do so, for sure, and the resources to investigate the instances of harm were too meagre to begin with. This was all they could do.

So we then offered to do it for them. In an acutely unscientific and highly opportunistic way, we spent time in the hospitals of the authority to find out what happened when things went well, when there was no evidence of adverse events or patient harm.

At first we couldn’t believe our data, but it turned out that in the 12 cases that went right – the cases that didn’t result in an adverse event or patient harm – there were:

• Workarounds
• Shortcuts
• Violations
• Guidelines not followed
• Errors and miscalculations
• Unfindable people or tools
• Unreliable measurements
• User-unfriendly technologies
• Organisational frustrations
• Supervisory shortcomings

It didn’t seem to make a difference! These things show up all the time, whether the outcome was good or bad. Sound familiar?

Positive Ingredients

But if these things don’t make a difference between what goes right and what goes wrong, then what does? We looked at our notes again. Because there was more. In the 12 cases that went well, we found more of the following than in the one that didn’t go so well:

Diversity of opinion and the possibility of voicing dissent
Diversity comes in a variety of ways, but professional diversity – as opposed to gender and racial diversity – is the most important one in this context. Yet whether the team is professionally diverse or not, voicing dissent can be difficult. It is much easier to shut up than to speak up. I was reminded of Ray Dalio, CEO of a large investment fund, who has actually fired people for not disagreeing with him. He said to his employees, “You are not entitled to hold a dissenting opinion … which you don’t voice.”

Keeping a discussion about risk alive and not taking past success as a guarantee for safety
In complex systems, past results are no assurance of the same outcome today, because conditions and factors may have subtly shifted and changed. Even in repetitive work, such as landing a glider on a day with lots of instruction flying, repetition doesn’t mean replicability or reliability: the need to be poised to adapt is ever-present. Making this explicit in briefings or other pre-flight conversations that address the subtleties and choreography of the present tasks and the people doing them will help things go right.

Deference to expertise
This means asking the person who knows, not the person who happens to be in charge. In gliding, it also means being able to differentiate those who have an opinion on something – we’ve got plenty of them – from those who actually know their stuff. Deference to expertise is generally deemed critical for maintaining safety. Research into so-called high-reliability organisations shows that safe ones push decision-making down and around, creating a recognisable pattern of decisions ‘migrating’ to expertise.

Ability to say stop
As Berkeley researchers Barton and Sutcliffe found in an analysis of bush firefighting, “a key difference between incidents that ended badly and those that did not was the extent to which individuals voiced their concerns about the early warning signs”. Amy Edmondson at Harvard points to ‘psychological safety’ as a crucial capacity that allows team members to speak up and voice concerns safely. In her work on medical teams, too, the presence of such capacities was much more predictive of good outcomes than the absence of non-compliance or other negative indicators.

Broken down barriers between hierarchies and departments
As is frequently obvious after an accident has happened, the total intelligence required to foresee bad things was often present in an organisation, but scattered across various units or silos. Get people to talk to each other – operations, planning, marketing, maintenance, training, finance – and break down the barriers between them.

Don’t wait for audits or inspections to improve
If the team or organisation waited for an audit or an inspection to discover failed parts or processes, they were way behind the curve. After all, you cannot inspect safety or quality into a process. The people who carry out the process create safety – every day (Deming, 1982). Subtle, uncelebrated fixes and improvements are everywhere in a safe organisation, if you know where to look. They are found in the ways people ‘finish the design’ of their systems so that error traps are eliminated and things go well rather than badly.

Pride of workmanship
This trait is linked to a willingness and ability to improve without being prodded by audits or inspections. Teams that took evident pride in the products of their work, and the workmanship behind them, tended to end up with better results. What can an organisation do to support this? It can start by enabling its workers to do what they want and need to do, removing unnecessary constraints and decluttering the bureaucracy surrounding their daily lives.

Making it Go Right

The difference between things going right and going wrong was not in the absence of negatives, like violations. No, the difference was in the presence of positive capacities! Even organisations like NASA are getting around to this insight. “Focusing on the rare cases of failures attributed to ‘human error’ provides little information about why human performance almost always goes right. Similarly, focusing on the lack of safety provides limited information about how to improve safety,” they concluded in a symposium just last year.

If we apply this to our own sport, we might well recognise some or all of these capacities as responsible for why things go right! Because indeed, it is easy to see that much more in gliding goes right than goes wrong. Learning from the small number of incidents and accidents is important, but seeing safety only in relation to them severely limits how we can learn and improve.

Capacities for Safety

Understanding how success is created is just as important, if not more so. This is why Gliding Australia might see its safety not just as the absence of negative events or the existence of a safety bureaucracy, but much more as the presence of capacities that make things go well—in its organisation, in clubs, in training panels, in individual pilots.

This includes the capacity to anticipate the changing face of risk and new harmful influences – for example, airspace changes, demographic shifts, technological developments and airworthiness insights.

Pilots need the capacity to respond to and manage risk in more ways than simply writing another rule to plug the hole that was found. Examples might include investing in pilot competencies, sharing accounts in Gliding Australia Magazine and other outlets, and carefully testing new systems such as those for collision avoidance.

A capacity to learn proactively and keep our conversation about risk open and alive is also valuable – not only within clubs, but also through the immediate, non-punitive, expert-based and independent analysis of incidents in the SOAR system.

The capacity to show curiosity instead of judgment when confronted with non-conformances helps us understand incidents. Why did it seem impossible for this pilot, or this club, to complete the operation and follow the rules at the same time? The capacity to remain curious and open-minded, to withhold judgment and withstand pressure to act immediately, is critical here.

The capacity to respond justly to incidents means asking about the actual and potential impacts of the event. This reveals the needs those impacts have created, and how our community shares the obligation to start meeting them.

Are these starting points for you and your club to identify some of the capacities that make things go right? If so, how would you enhance those capacities? What can you do to make them even better, more widespread and more resilient? The list is, of course, incomplete. Perhaps you have found other capacities in your teams, in your people, and in your systems and processes that seem to account for good outcomes. What are they? What can you add?