Never Events and Nailing Niggles

Some events should NEVER happen – such as removing the wrong kidney; or injecting an anti-cancer drug designed for a vein into the spine; or sailing a cruise ship over a charted underwater reef; or driving a bus full of sleeping school children into a concrete wall.

But these catastrophic, irreversible and tragic Never Events do keep happening – rarely perhaps – but persistently. At the Never-Event investigation the Finger-of-Blame goes looking for the incompetent culprit while the innocent victims call for compensation.

And after the smoke has cleared and the pain of loss has dimmed, another Never-Again-Event happens – and then another, and then another. Rarely perhaps – but not never.

Never Events are so awful and emotionally charged that we remember them, and we come to believe that they are not rare; from that misperception we develop a constant, nagging fear for the future. It is our fear that erodes our trust, and that leads to the paralysis which prevents us from acting. In the globally tragic event of 9/11, several thousand innocent victims died while the world watched in horror. More innocent victims than that die needlessly every day in high-tech hospitals from avoidable errors – but that statistic is never shared.

The metaphor that is often used is the Swiss Cheese – the sort seen in cartoons, with lots of holes in it. The cheese represents a quality check – a barrier that catches and corrects mistakes before they cause irreversible damage. But the cheesy check-list is not perfect; it has holes in it. Mistakes slip through.

So multiple layers of cheesy checks are added in the hope that the holes in the earlier slices will be covered by the cheese in the later ones – and our experience shows that this multi-check design does reduce the number of mistakes that get through. But not completely. And when, by rare chance, the holes in each slice line up, the error penetrates all the way through and a Never Event becomes an Actual Catastrophe. So, the typical recommendation from the after-the-never-event investigation is to add another layer of cheese to the stack – another check on the list, on top of all the others.
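
As a rough illustration (a hypothetical sketch, not from the article itself), a few lines of Python can simulate this stack of imperfect checks – the 90% catch-rate per slice and the trial count are arbitrary assumptions:

```python
import random

# A minimal sketch of the Swiss Cheese idea: each check (slice) catches a
# mistake with some probability, and a mistake that slips through every
# slice becomes a catastrophe. The 0.9 catch-rate is an illustrative
# assumption, not a figure from the article.
def pass_through_rate(n_mistakes=1_000_000, n_slices=3, catch_rate=0.9):
    catastrophes = 0
    for _ in range(n_mistakes):
        # A mistake penetrates a slice if it finds a hole (1 - catch_rate).
        if all(random.random() > catch_rate for _ in range(n_slices)):
            catastrophes += 1
    return catastrophes / n_mistakes

for slices in (1, 2, 3, 4):
    print(f"{slices} slice(s): ~{pass_through_rate(n_slices=slices):.4%} of mistakes get through")
```

Each added slice cuts the pass-through rate by roughly a factor of ten – yet it never reaches zero, which is exactly the “rarely perhaps – but not never” behaviour described above.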

But the cheese is not durable: it deteriorates over time under the incessant barrage of work and the pressure of increasing demand. The holes get bigger, the cheese gets thinner, and new holes appear. The inevitable outcome is the opening up of unpredictable new paths through the cheese to a Never Event; more Never Events; more after-the-never-event investigations; and more slices of increasingly expensive and complex cheese added to the tottering, rotting heap.

A drawback of the Swiss Cheese metaphor is that it gives the impression that the slices are static and that each cheesy check has a consistent position and a persistent set of flaws in it. In reality this is not the case – the system behaves as if the slices and the holes are moving about: variation is jiggling, jostling and wobbling the whole cheesy edifice.

This wobble does not increase the risk of a Never Event, but it prevents the subsequent after-the-event investigation from discovering the specific conjunction of holes that caused it. The Finger of Blame cannot find a culprit, so the cause is labelled a “system failure” – or an unlucky individual is implicated and named-shamed-blamed and sacrificed to the Gods of Chance on the Altar of Hope! More often, new slices of KneeJerk Cheese are added in the desperate hope of improvement – creating an even greater burden of back-covering bureaucracy than before – and paradoxically increasing the number of holes!

Improvement Science offers a more rational, logical, effective and efficient approach to dissolving this messy, inefficient and ineffective safety design.

First, it recognises that to prevent a Never Event no errors should reach the last layer of cheesy checking – the last opportunity to block the error trajectory. An error that penetrates that far is a Near Miss, and Near Misses happen more often than Never Events – so they are the key to understanding and dissolving the problem.

Every Near Miss that is detected should be reported and investigated immediately – because that is the best time to identify the hole in the previous slice, before it wobbles out of sight. The goal of the investigation is understanding, not accountability. Failure to report a near miss; failure to investigate it; failure to learn from it; failure to act on it; and failure to monitor the effect of the action are all errors of omission (EOOs) – and they are the worst of management crimes.

The question to ask is “What error happened immediately before the Near Miss?” This event is called a Not Again. Focussing attention on this Not Again and understanding what, where, when, who and how it happened is the path to preventing the Near Miss and the Never Event. “Why?” is not the question to ask – especially when trust is low and cynicism and fear are high – the question to ask is “How?”

The first action after Naming the Not Again is to design a counter-measure for it – to plug the hole – NOT to add another slice of Check-and-Correct cheese! The second necessary action is to treat that Not Again as a Near Miss and to monitor it, so that when it happens again the cause can be identified. These common, everyday, repeating causes of Not Agains are called Niggles: the hundreds of minor irritations that we just accept as inevitable. This is where the real work happens – identifying the most common Niggle and focussing all attention on nailing it. Forever. Niggle naming and nailing is everyone’s responsibility – it is part of business-as-usual – and if leaders do not demonstrate the behaviour and set the expectation then followers will not do it.

So what effect would we expect?

To answer that question we need a better metaphor than our static stack of Swiss cheese slices: we need something more dynamic – something like a motorway!

Suppose you were to set out walking across a busy motorway with your eyes shut and your fingers in your ears – hoping to get to the other side without being run over. What is the chance that you will make it across safely? It depends on how busy the traffic is and how fast you walk – but say you have a 50:50 chance of getting across one lane safely (the same chance as tossing a fair coin and getting a head) – what is the chance that you will get across all six lanes safely? The answer is the same as the chance of tossing six heads in a row: a 1-in-2 chance of surviving the first lane (50%), a 1-in-4 chance of getting across two lanes (25%), a 1-in-8 chance of making it across three (12.5%) … down to a 1-in-64 chance of getting across all six (1.6%). Said another way, that is a 63-out-of-64 chance of being run over somewhere – a 98.4% chance of failure – near-certain death! Hardly a Never Event.

What happens to our risk of being run over if the traffic in just one lane is stopped and that lane is now 100% safe to cross? You might think that it depends on which lane it is, but it doesn’t – the risk of failure is now 31/32 or 96.9%, irrespective of which lane it is – so not much improvement, apparently! We have doubled the chance of success though!
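
To check the lane-by-lane arithmetic, here is a minimal Python sketch (the numbers, not the code, come from the text above):

```python
# The lane-by-lane arithmetic of the motorway metaphor: with a 50:50
# chance per lane, surviving all six lanes is (1/2) ** 6.
p_lane = 0.5
p_survive = p_lane ** 6
print(f"Survive all six lanes: {p_survive:.1%} (1 in {round(1 / p_survive)})")
print(f"Run over somewhere:    {1 - p_survive:.1%}")

# Make any one lane 100% safe: only five risky lanes remain, and it does
# not matter which one - multiplication is order-independent.
p_survive = p_lane ** 5
print(f"One lane safe: survival {p_survive:.1%}, failure {1 - p_survive:.1%}")
```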

Is there a better improvement strategy?

What if we work collectively to reduce the flow of Niggles in all the lanes at the same time – and suppose we are each able to reduce the risk of a Niggle in our lane-of-influence from 1-in-2 to 1-in-6. How we do it is up to us. To illustrate the benefit we replace our coin with a six-sided die (no pun intended) and we only “die” if we throw a 1. What happens to our pedestrian’s probability of survival? The chance of surviving the first lane is now 5/6 (83.3%); of surviving both the first and second, 5/6 x 5/6 = 25/36 (69.4%); and so on to all six lanes, which is 5/6 x 5/6 x 5/6 x 5/6 x 5/6 x 5/6 = 15625/46656 = 33.5% – a lot better than our previous 1.6%! And what if we keep plugging the holes in our bits of the cheese and increase our individual lane success rate to 95%? Our pedestrian’s probability of survival is now 73.5%. The chance of a catastrophic event becomes less and less.
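
The same sketch extends to the die version – per-lane survival of 5/6, and then 95%:

```python
# The die version: each lane's risk of a Niggle drops from 1-in-2 to
# 1-in-6, so the per-lane survival rate is 5/6; then we push it to 95%.
for p_lane in (5 / 6, 0.95):
    p_survive = p_lane ** 6
    print(f"Per-lane survival {p_lane:.1%} -> six-lane survival {p_survive:.1%}")
```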

The arithmetic may be a bit scary but the message is clear: to prevent the Never Events we must reduce the Near Misses, and to do that we investigate every Near Miss, expose the Not Agains, and then use them to Name and Nail all the Niggles. And we have complete control over the causes of our commonest Niggles – because we create them.

This strategy will improve the safety of our system. It has another positive benefit too – it will free up our Near Miss investigation team to do something else: to assist in re-designing the system so that Not Agains cannot happen at all – they become Never Events too. And the earlier in the path that safety-design happens, the better – because it renders the downstream layers of check-and-correct cheesocracy irrelevant.

Just imagine what would happen in a real system if we did that …

And now try to justify not doing it …

And now consider what an individual, team and organisation would need to learn to do this …

It is called Improvement Science.

And learning the Foundations of Improvement Science in Healthcare (FISH) is one place to start.
