The Strangeness of LoS

It had been some time since Bob and Leslie had chatted, so an email out of the blue was a welcome distraction from a complex data analysis task.

<Bob> Hi Leslie, great to hear from you. I was beginning to think you had lost interest in health care improvement-by-design.

<Leslie> Hi Bob, not at all.  Rather the opposite.  I’ve been very busy using everything that I’ve learned so far.  Its applications are endless, but I have hit a problem that I have been unable to solve, and it is driving me nuts!

<Bob> OK. That sounds encouraging and interesting.  Would you be able to outline this thorny problem and I will help if I can.

<Leslie> Thanks Bob.  It relates to a big issue that my organisation is stuck with – managing urgent admissions.  The problem is that very often there is no bed available, but there is no predictability to that.  It feels like a lottery; a quality and safety lottery.  The clinicians are clamouring for “more beds” but the commissioners are saying “there is no more money“.  So the focus has turned to reducing length of stay.

<Bob> OK.  A focus on length of stay sounds reasonable.  Reducing that can free up enough beds to provide the necessary space-capacity resilience to dramatically improve the service quality.  So long as you don’t then close all the “empty” beds to save money, or fall into the trap of believing that 85% average bed occupancy is the “optimum”.

<Leslie> Yes, I know.  We have explored all of these topics before.  That is not the problem.

<Bob> OK. What is the problem?

<Leslie> The problem is demonstrating objectively that the length-of-stay reduction experiments are having a beneficial impact.  The data seems to say that they are, and the senior managers are trumpeting the success, but the people on the ground say they are not. We have hit a stalemate.


<Bob> Ah ha!  That old chestnut.  So, can I first ask what happens to the patients who cannot get a bed urgently?

<Leslie> Good question.  We have mapped and measured that.  What happens is the most urgent admission failures spill over to commercial service providers, who charge a fee-per-case and we have no choice but to pay it.  The Director of Finance is going mental!  The less urgent admission failures just wait on queue-in-the-community until a bed becomes available.  They are the ones who are complaining the most, so the Director of Governance is also going mental.  The Director of Operations is caught in the cross-fire and the Chief Executive and Chair are doing their best to calm frayed tempers and to referee the increasingly toxic arguments.

<Bob> OK.  I can see why a “Reduce Length of Stay Initiative” would tick everyone’s Nice If box.  So, the data analysts are saying “the length of stay has come down since the Initiative was launched” but the teams on the ground are saying “it feels the same to us … the beds are still full and we still cannot admit patients“.

<Leslie> Yes, that is exactly it.  And everyone has come to the conclusion that demand must have increased so it is pointless to attempt to reduce length of stay because when we do that it just sucks in more work.  They are feeling increasingly helpless and hopeless.

<Bob> OK.  Well, the “chronic backlog of unmet need” issue is certainly possible, but your data will show if admissions have gone up.

<Leslie> I know, and as far as I can see they have not.

<Bob> OK.  So I’m guessing that the next explanation is that “the data is wonky“.

<Leslie> Yup.  Spot on.  So, to counter that the Information Department has embarked on a massive push on data collection and quality control and they are adamant that the data is complete and clean.

<Bob> OK.  So what is your diagnosis?

<Leslie> I don’t have one, that’s why I emailed you.  I’m stuck.


<Bob> OK.  We need a diagnosis, and that means we need to take a “history” and “examine” the process.  Can you outline the RLoS Initiative for me?

<Leslie> We knew that we would need a baseline to measure from so we got the historical admission and discharge data and plotted a Diagnostic Vitals Chart®.  I have learned something from my HCSE training!  Then we planned the implementation of a visual feedback tool that would show ward staff which patients were delayed so that they could focus on “unblocking” the bottlenecks.  We then planned to measure the impact of the intervention for three months, and then we planned to compare the average length of stay before and after the RLoS Intervention with a big enough data set to give us an accurate estimate of the averages.  The data showed a very obvious improvement, a highly statistically significant one.

<Bob> OK.  It sounds like you have avoided the usual trap of just relying on subjective feedback, and now have a different problem because your objective and subjective feedback are in disagreement.

<Leslie> Yes.  And I have to say, getting stuck like this has rather dented my confidence.

<Bob> Fear not, Leslie.  I said this is an “old chestnut” and I can say with 100% confidence that you already have what you need in your T4 kit bag.

<Leslie> Tee-Four?

<Bob> Sorry, a new abbreviation. It stands for “theory, techniques, tools and training“.

<Leslie> Phew!  That is very reassuring to hear, but it does not tell me what to do next.

<Bob> You are an engineer now Leslie, so you need to don the hard-hat of Improvement-by-Design.  Start with your Needs Analysis.


<Leslie> OK.  I need a trustworthy tool that will tell me if the planned intervention has had a significant impact on length of stay – for better, for worse, or not at all.  And I need it to tell me that quickly so I can decide what to do next.

<Bob> Good.  Now list all the things that you currently have that you feel you can trust.

<Leslie> I do actually trust that the Information team collect, store, verify and clean the raw data – they are really passionate about it.  And I do trust that the front line teams are giving accurate subjective feedback – I work with them and they are just as passionate.  And I do trust the systems engineering “T4” kit bag – it has proven itself again-and-again.

<Bob> Good, and I say that because you have everything you need to solve this, and it sounds like the data analysis part of the process is a good place to focus.

<Leslie> That was my conclusion too.  And I have looked at the process, and I can’t see a flaw. It is driving me nuts!

<Bob> OK.  Let us take a different tack.  Have you thought about designing the tool you need from scratch?

<Leslie> No. I’ve been using the ones I already have, and assume that I must be using them incorrectly, but I can’t see where I’m going wrong.

<Bob> Ah!  Then, I think it would be a good idea to run each of your tools through a verification test and check that they are fit-4-purpose in this specific context.

<Leslie> OK. That sounds like something I haven’t covered before.

<Bob> I know.  Designing verification test-rigs is part of the Level 2 training.  I think you have demonstrated that you are ready to take the next step up the HCSE learning curve.

<Leslie> Do you mean I can learn how to design and build my own tools?  Special tools for specific tasks?

<Bob> Yup.  All the techniques and tools that you are using now had to be specified, designed, built, verified, and validated. That is why you can trust them to be fit-4-purpose.

<Leslie> Wooohooo! I knew it was a good idea to give you a call.  Let’s get started.


[Postscript] And Leslie, together with the other stakeholders, went on to design the tool that they needed and to use the available data to dissolve the stalemate.  And once everyone was on the same page again they were able to work collaboratively to resolve the flow problems, and to improve the safety, flow, quality and affordability of their service.  Oh, and to know for sure that they had improved it.
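[Technical footnote] The actual tool that Leslie and the stakeholders designed is not described here, but one simple verification cross-check – a sketch only, with invented numbers and a hypothetical function name – follows from Little’s law: average occupied beds = admission rate × average length of stay.  If the beds stay full and admissions are flat, then a falling discharge-sample average LoS is in conflict with the census data, and that conflict points at the measurement method rather than at a real improvement:

```python
def census_implied_los(mean_occupied_beds, admissions_per_day):
    """Little's law: occupancy = arrival rate x time-in-system,
    so average LoS (days) = average occupied beds / admissions per day."""
    return mean_occupied_beds / admissions_per_day

# Illustrative numbers (made up): 100 beds occupied on average,
# 10 admissions per day, so flow must average 10 days per patient.
implied = census_implied_los(mean_occupied_beds=100, admissions_per_day=10)
print(implied)  # -> 10.0

# If a "before vs after" analysis reports a mean LoS of, say, 8 days
# while occupancy and admissions are unchanged, the tool and the data
# are in conflict -- a cue to verify the tool, not celebrate the result.
```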

Righteous Indignation

On 5th July 2018, the NHS will be 70 years old, and like many of those it was created to serve, it has become elderly and frail.

We live much longer, on average, than we used to and the growing population of frail elderly are presenting an unprecedented health and social care challenge that the NHS was never designed to manage.

The creases and cracks are showing, and each year feels more pressured than the last.


This week a story that illustrates this challenge was shared with me along with permission to broadcast …

“My mother-in-law is 91, in general she is amazingly self-sufficient, able to arrange most of her life with reasonable care at home via a council tendered care provider.

She has had Parkinson’s for years, needing regular medication to enable her to walk and eat (it affects her jaw and swallowing capability). So the care provision is time critical, to get up, have lunch, have tea and get to bed.

She’s also going deaf, profoundly in one ear, pretty bad in the other. She wears a single ‘in-ear’ aid, which has a micro-switch on/off toggle, far too small for her to see or operate. Most of the carers can’t put it in, and fail to switch it off.

Her care package is well drafted, but rarely adhered to. It should be 45 minutes in the morning, 30, 15, 30 through the day. Each time administering the medications from the dossette box. Despite the register in/out process from the carers, many visits are far less time than designed (and paid for by the council), with some lasting 8 minutes instead of 30!

Most carers don’t ensure she takes her meds, which sometimes leads to dropped pills on the floor, with no hope of picking them up!

While the care is supposedly ‘time critical’ the provider doesn’t manage it via allocated time slots; they simply provide lists that imply the order of work but don’t make it clear. My mother-in-law (Mum) cannot be certain when the visit will occur, which makes going out very difficult.

The carers won’t cook food, but will micro-wave it, thus if a cooked meal is to happen, my Mum will start it, with the view of the carers serving it. If they arrive early, the food is under-cooked (“Just put vinegar on it, it will taste better”) and if they arrive late, either she’ll try to get it out herself, or it will be dried out / cremated.

Her medication pattern should be every 4 to 5 hours during the day, but with an 11:40 lunch visit, a 17:45 tea visit and a 19:30 bed-prep visit she finishes up with too long between some doses and others far too close together. Her GP has stated that this is making her health and Parkinson’s worse.

Mum also rarely drinks enough through the day, in the hot weather she tends to dehydrate, which we try to persuade her must be avoided. Part of the problem is Parkinson’s related, part the hassle of getting to the toilet more often. Parkinson’s affects swallowing, so she tends to sip, rather than gulp. By sipping often, she deludes herself that she is drinking enough.

She also is stubbornly not adjusting methods to align to issues. She drinks tea and water from her lovely bone china cups. Because her grip is not good and her hand shakes, we can’t fill those cups very high, so her ‘cup of tea’ is only a fraction of what it could be.

As she can walk around most days, there’s no way of telling whether she drinks enough, and she frequently has several different carers in a day.

When Mum gets dehydrated, it affects her memory and her reasoning, similar to the onset of dementia. It also seems to increase her probability of falling, perhaps due to forgetting to be defensive.

When she falls, she cannot get up, thus usually presses her alarm dongle, resulting in me going round to get her up, check for concussion, and check for other injuries, prior to settling her down again. These can be ten weeks apart, through to a few in a week.

When she starts to hallucinate, we do our very best to increase drinking, seeking to re-hydrate.

On Sunday, something exceptional happened, Mum fell out of bed and didn’t press her alarm. The carer found her and immediately called the paramedics and her GP, who later called us in. For the first time ever she was not sufficiently mentally alert to press her alarm switch.

After initial assessment, she was taken to A&E, luckily being early on Sunday morning it was initially quite quiet.

Hospital

The Hospital is on the boundary between two counties, within a large town, a mixture of new build elements, between aging structures. There has been considerable investment within A&E, X-ray etc. due partly to that growth industry and partly due to the closures of cottage hospitals and reducing GP services out of hours.

It took some persuasion to have Mum put on a drip, as she hadn’t had breakfast or any fluids, and dehydration was a probable primary cause of her visit. They took bloods, an X-ray of her chest (to check for fall related damage) and a CT scan of her head, to see if there were issues.

I called the carers to tell them to suspend visits, but the phone simply rang without being answered (not for the first time).

After about six hours, during which time she was awake, but not very lucid, she was transferred to the day ward, where after assessment she was given some meds, a sandwich and another drip.

Later that evening we were informed she was to be kept on a drip for 24 hours.

The next day (Bank Holiday Monday) she was transferred to another ward. When we arrived she was not on a drip, so their decisions had been reversed.

I spoke at length with her assigned staff nurse, and was told the following: Mum could come out soon if she had a 24/7 care package, and that as well as the known issues mum now has COPD. When I asked her what COPD was, she clearly didn’t know, but flustered out ‘it is a form of heart failure that affects breathing’. (I looked it up on my phone a few minutes later.)

So, to get mum out, I had to arrange a 24/7 care package, and nowhere was open until the next day.

Trying to escalate care isn’t going to be easy, even in the short term. My emails to ‘usually very good’ social care people achieved nothing to start with on Tuesday, and their phone was on the ‘out of hours’ setting for evenings and weekends, despite being during the day of a normal working week.

Eventually I was told that there would be nothing to achieve until the hospital processed the correct exit papers to Social Care.

When we went in to the hospital (on Tuesday) a more senior nurse was on duty. She explained that mum was now medically fit to leave hospital if care can be re-established. I told her that I was trying to set up 24/7 care as advised. She looked through the notes and said 24/7 care was not needed, the normal 4 x a day was enough. (She was clearly angry).

I then explained that the newly diagnosed COPD may be part of the problem. She said that she’s worked with COPD patients for 16 years, and mum definitely doesn’t have COPD. While she was amending the notes, I noticed that mum’s allergy to aspirin wasn’t there, despite us advising that on entry. The nurse also explained that as the hospital is in one county, but almost half their patients are from another, they are always stymied on ‘joined up working’.

While we were talking with mum, her meds came round and she was only given paracetamol for her pain, but NOT her meds for Parkinson’s. I asked that nurse why that was the case, and she said that was not on her meds sheet. So I went back to the more senior nurse, she checked the meds as ordered and Parkinson’s was required 4 x a day, but it was NOT transferred onto the administration sheet. The doctor next to us said she would do it straight away, and I was told, “Thank God you are here to get this right!”

Mum was given her food. It consisted of some soup, which she couldn’t spoon due to lack of meds, and a dry, tough lump of gammon and some mashed sweet potato, which she couldn’t chew.

When I asked why meds were given at five, after the delivery of food, they said ‘That’s our system!’, when I suggested that administering Parkinson’s meds an hour before food would increase the ability to eat the food they said “that’s a really good idea, we should do that!”

On Wednesday I spoke with Social Care to try to re-start care to enable mum to get out. At that time the social worker could neither get through to the hospital nor the carers. We spoke again after I had arrived in hospital, but before I could do anything.

On arrival at the hospital I was amazed to see the white-board declaring that mum would be discharged for noon on Monday (in five days’ time!). I spoke with the assigned staff nurse who said, “That’s the earliest that her carers can re-start, and anyway its nearly the weekend”.

I said that “mum was medically OK for discharge on Tuesday, after only two days in the hospital, and you are complacent to block the bed for another six days, have you spoken with the discharge team?”

She replied, “No, they’ll have gone home by now, and I’ve not seen them all day.” I told her that they work shifts and that they will be here, and made it quite clear that if she didn’t contact SHEDs I’d go walkabout to find them. A few minutes later she told me a SHED member would be with me in 20 minutes.

While the hospital had resolved her medical issues, she was stuck in a ward with no help to walk, the only TV on a complex pay-for system she had no hope of understanding, and no day room, so no entertainment and no exercise, just boredom; encouraged to lie in bed and wear a pad because she won’t be taken to the loo in time.

When the SHED worker arrived I explained the staff nurse attitude, she said she would try to improve those thinking processes. She took lots of details, then said that so long as mum can walk with assistance, she could be released after noon, to have NHS carer support, 4 times a day, from the afternoon. She walked around the ward for the first time since being admitted, and while shaky was fine.

Hopefully all will be better now?”


This story is not exceptional … I have heard it many times from many people in many different parts of the UK.  It is the norm rather than the exception.

It is the story of a fragmented and fractured system of health and social care.

It is the story of frustration for everyone – patients, family, carers, NHS staff, commissioners, and tax-payers.  A fractured care system is unsafe, chaotic, frustrating and expensive.

There are no winners here.  It is not a trade-off, a compromise, or the best possible.

It is just poor system design.


What we want has a name … it is called a Frail Safe design … and this is not a new idea.  It is achievable. It has been achieved.

http://www.frailsafe.org.uk

So why is this still happening?

The reason is simple – the NHS does not know any other way.  It does not know how to design itself to be safe, calm, efficient, high quality and affordable.

It does not know how to do this because it has never learned that this is possible.

But it is possible to do, and it is possible to learn, and that learning does not take very long or cost very much.

And the return vastly outweighs the investment.


The title of this blog is Righteous Indignation

… if your frail elderly parents, relatives or friends were forced to endure a system that is far from frail safe; and you learned that this situation was avoidable and that a safer design would be less expensive; and all you hear is “can’t do” and “too busy” and “not enough money” and “not my job” …  wouldn’t you feel a sense of righteous indignation?

I do.



The Pressure Cooker

About a year ago we looked back at the previous 10 years of NHS unscheduled care performance …


… and warned that a catastrophe was on the way because we had unintentionally created an urgent care “pressure cooker”.


Did waving the red warning flag make any difference? It seems not.

The catastrophe unfolded as predicted … A&E performance slumped to an all-time low, and has not recovered.


A pressure cooker is an elegantly simple self-regulating system.  A strong metal box with a sealed lid and a pressure-sensitive valve.  Food cooks more quickly at a higher temperature, and we can increase the boiling point of water by increasing the ambient pressure.  So all we need to do is put some water in the cooker, close the lid, set the pressure limit we need (i.e. the temperature we want) and apply some heat.  Simple.  As the water boils the steam increases the pressure inside, until the regulator valve opens and lets a bit of steam out.  The more heat we apply – the faster the steam comes out – but the internal pressure and temperature remain constant.  An elegantly simple self-regulating system.


Our unscheduled care acute hospital “pressure cooker” design is very similar – but it has an additional feature – we can squeeze raw patients in through a one-way valve labelled “admissions”.  The internal pressure will eventually squeeze them out through another one-way pressure-sensitive valve called “discharges”.

But there is not much head-space inside our hospital (i.e. empty beds) so pushing patients in will increase the pressure inside, and it will trigger an internal reaction called “fire-fighting” that generates heat (but no insight).  When the internal pressure reaches the critical level, patients are squeezed out; ready-or-not.

What emerges from the chaotic internal cauldron is a mixture of under-cooked, just-right, and over-cooked patients.  And we then conduct quality control audits and we label what we find as “quality variation”, but it looks random so it gives us no clues as to the causes or what to do next.

Equilibrium is eventually achieved – what goes in comes out – the pressure and temperature auto-regulate – the chaos becomes chronic – and the quality of the output is predictably unacceptable and unpredictable, with some of it randomly spoiled (i.e. harmed).

And our acute care pressure cooker is very resistant to external influences. It is one of its key design features, it is an auto-regulating system.


Option 1: Admissions Avoidance
Squeezing a bit less in does not make any difference to the internal pressure and temperature.  It auto-regulates.  The reduced inflow means a reduced outflow and a longer cooking time and we just get less under-cooked and more over-cooked output.  Oh, and we go bust because our revenue has reduced but our costs have not.

Option 2: Build a Bigger Hospital
Building a bigger pressure cooker (i.e. adding more beds) does not make any sustained difference either.  Again the system auto-regulates.  The extra space-capacity allows a longer cooking time – and again we get less under-cooked and more over-cooked output.  Oh, and we still go bust (same revenue but increased cost).

Option 3: Reduce the Expectation
Turning down the heat (i.e. reducing the 4 hr A&E lead time target yield from 98% to 95%) does not make any difference. Our elegant auto-regulating design adjusts itself to sustain the internal pressure and temperature.  Output is still variable, but at least we do not go bust.
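A toy discrete-time simulation (purely illustrative; all numbers invented) shows this auto-regulation.  Discharges here are triggered only by pressure – a patient is squeezed out only when a bed is needed – so halving the admissions simply doubles the cooking time while the cooker stays completely full:

```python
def simulate(admissions_per_day, beds=100, days=365):
    """Toy 'pressure cooker' ward: a patient is discharged only when
    an admission needs the bed (pressure-triggered, ready-or-not)."""
    ward = []          # days each current inpatient has been in a bed
    discharged = []    # length of stay of each discharged patient
    for _ in range(days):
        ward = [stay + 1 for stay in ward]     # everyone stays another day
        for _ in range(admissions_per_day):
            if len(ward) >= beds:              # no head-space left ...
                ward.sort()
                discharged.append(ward.pop())  # ... squeeze out longest-stay
            ward.append(0)                     # new admission
    return sum(discharged) / len(discharged), len(ward)

los_high, occ_high = simulate(admissions_per_day=10)
los_low, occ_low = simulate(admissions_per_day=5)
print(los_high, occ_high)  # -> 10.0 100 : beds full, 10-day "cook"
print(los_low, occ_low)    # -> 20.0 100 : beds still full, 20-day "cook"
```

The result is forced by Little’s law: with 100 beds permanently full, length of stay must settle at 100 divided by the daily admissions, whatever initiative is applied inside the cooker.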


This metaphor may go some way to explain why the intuitively obvious “initiatives” to improve unscheduled care performance appear to have had no significant or sustained impact.

And what is more worrying is that they may even have made the situation worse.

Also, working inside an urgent care pressure cooker is dangerous.  People get emotionally damaged and permanently scarred.


The good news is that a different approach is available … a health and social care systems engineering (HSCSE) approach … one that we could use to change the fundamental design from fire-fighter to flow-facilitator.

Using HSCSE theory, techniques and tools we could specify, design, build, verify, implement and validate a low-pressure, low-resistance, low-wait, low-latency, high-efficiency unscheduled care flow design that is safe, timely, effective and affordable.

But we are not training NHS staff to do that.

Why is that?  Is it because we are not aware that this is possible, or that we do not believe that it can work, or that we lack the capability to do it? Or all three?

The first step is raising awareness … so here is an example that proves it is possible.

Notably Absent

This week the King’s Fund published their Quality Monitoring Report for the NHS, and it makes depressing reading.

These highlights are a snapshot.

The website has some excellent interactive time-series charts that transform the deluge of data the NHS pumps out into pictures that tell a shameful story.

On almost all reported dimensions, things are getting worse and getting worse faster.

Which I do not believe is the intention.

But it is clearly the impact of the last 20 years of health and social care policy.


What is more worrying is the data that is notably absent from the King’s Fund QMR.

The first omission is outcome: How well did the NHS deliver on its intended purpose?  It is stated at the top of the NHS England web site …

NHSE_Purpose

And let us be very clear here: dying, waiting, complaining, and over-spending are not measures of what we want: health and quality success metrics.  They are measures of what we do not want; they are failure metrics.

The fanatical focus on failure is part of the hyper-competitive, risk-averse medical mindset:

primum non nocere (first do no harm),

and as a patient I am reassured to hear that, but is ‘no harm’ all I can expect?

What about:

tunc mederi (then do some healing)


And where is the data on dying in the King’s Fund QMR?

It seems to be notably absent.

And I would say that is a quality issue because it is something that patients are anxious about.  And that may be because they are given so much ‘open information’ about what might go wrong, not what should go right.


And you might think that sharp, objective data on dying would be easy to collect and to share.  After all, it is not conveniently fuzzy and subjective like satisfaction.

It is indeed mandatory to collect hospital mortality data, but sharing it seems to be a bit more of a problem.

The fear-of-failure fanaticism extends there too.  In the wake of humiliating, historical, catastrophic failures like Mid Staffs, all hospitals are monitored, measured and compared. And the negative deviants are named, shamed and blamed … in the hope that improvement might follow.

And to do the bench-marking we need to compare apples with apples; not peaches with lemons.  So we need to process the raw data to make it fair to compare; to ensure that factors known to be associated with higher risk of death are taken into account. Factors like age, urgency, co-morbidity and primary diagnosis.  Factors that are outside the circle-of-control of the hospitals themselves.

And there is an army of academics, statisticians, data processors, and analysts out there to help. The fruit of their hard work and dedication is called SHMI … the Summary Hospital Mortality Index.

SHMI_Specification

Now, the most interesting paragraph is the third one which outlines what raw data is fed in to building the risk-adjusted model.  The first four are objective, the last two are more subjective, especially the diagnosis grouping one.

The importance of this distinction comes down to human nature: if a hospital is failing on its SHMI then it has two options:
(a) to improve its policies and processes to improve outcomes, or
(b) to manipulate the diagnosis group data to reduce the SHMI score.

And the latter is much easier to do, it is called up-coding, and basically it involves camping at the pessimistic end of the diagnostic spectrum. And we are very comfortable with doing that in health care. We favour the Black Hat.

And when our patients do better than our pessimistically-biased prediction, then our SHMI score improves and we look better on the NHS funnel plot.
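SHMI is, in essence, a ratio of observed deaths to the deaths “expected” by the risk-adjustment model, with 1.0 meaning “as expected”.  A toy calculation (invented numbers; hypothetical function name) shows why up-coding is the path of least resistance – inflating the expected count improves the score without a single outcome changing:

```python
def mortality_index(observed_deaths, expected_deaths):
    """SHMI-style ratio: 1.0 = as expected, below 1.0 = 'better than expected'."""
    return observed_deaths / expected_deaths

# Same hospital, same 100 actual deaths:
honest_coding = mortality_index(100, expected_deaths=100)   # -> 1.0
# Up-coding: camp at the pessimistic end of the diagnostic spectrum,
# so the risk model now "expects" 125 deaths from this case-mix.
after_upcoding = mortality_index(100, expected_deaths=125)  # -> 0.8
print(honest_coding, after_upcoding)
```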

We do not have to do anything at all about actually improving the outcomes of the service we provide, which is handy because we cannot do that. We do not measure it!


And what might be notably absent from the data fed in to the SHMI risk-model?  Data that is objective and easy to measure.  Data such as length of stay (LOS) for example?

Is there a statistical reason that LOS is omitted? Not really. Any relevant metric is a contender for pumping into a risk-adjustment model.  And we all know that the sicker we are, the longer we stay in hospital, and the less likely we are to come out unharmed (or at all).  And avoidable errors create delays and complications that imply more risk, more work and longer length of stay. Irrespective of the illness we arrived with.

So why has LOS been omitted from SHMI?

The reason may be more political than statistical.

We know that the risk of death increases with infirmity and age.

We know that if we put frail elderly patients into a hospital bed for a few days then they will decondition and become more frail, require more time in hospital, are more likely to need a transfer of care to somewhere other than home, are more susceptible to harm, and more likely to die.

So why is LOS not in the risk-of-death SHMI model?

And it is not in the King’s Fund QR report either.

Nor is the amount of cash being pumped in to keep the HMS NHS afloat each month.

All notably absent!

Undiscussables

Last week I shared a link to Dr Don Berwick’s thought-provoking presentation at the Healthcare Safety Congress in Sweden.

Near the end of the talk Don recommended six books, and I was reassured that I already had read three of them. Naturally, I was curious to read the other three.

One of the unfamiliar books was “Overcoming Organizational Defenses” by the late Chris Argyris, a professor at Harvard.  I confess that I have tried to read some of his books before, but found them rather difficult to understand.  So I was intrigued that Don was recommending it as an ‘easy read’.  Maybe I am more of a dimwit than I previously believed!  So fear of failure took over my inner-chimp and I prevaricated. I flipped into denial. Who would willingly want to discover the true depth of their dimwittedness!


Later in the week, I was forwarded a copy of a recently published paper that was on a topic closely related to a key thread in Dr Don’s presentation:

understanding variation.

The paper was by researchers who had looked at the Board reports of 30 randomly selected NHS Trusts to examine how information on safety and quality was being shared and used.  They were looking for evidence that the Trust Boards understood the importance of variation and the need to separate ‘signal’ from ‘noise’ before making decisions on actions to improve safety and quality performance.  This was a point Don had stressed too, so there was a link.

The randomly selected Trust Board reports contained 1488 charts, of which only 88 demonstrated the contribution of chance effects (i.e. noise). Of these, 72 showed the Shewhart-style control charts that Don demonstrated. And of these, only 8 stated how the control limits were constructed (which is an essential requirement for the chart to be meaningful and useful).
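For context, the commonest Shewhart-style chart for this kind of Board data is the XmR (individuals) chart, and its limits are constructed from the average moving range – not from the standard deviation of the raw data.  A minimal sketch of that construction (2.66 is the standard XmR scaling constant; the data are hypothetical):

```python
def xmr_limits(values):
    """Construct the XmR chart centre line and natural process limits:
    limits = mean +/- 2.66 * average moving range."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    centre = sum(values) / len(values)
    return centre - 2.66 * avg_mr, centre, centre + 2.66 * avg_mr

# Hypothetical monthly safety metric:
lower, centre, upper = xmr_limits([31, 28, 34, 30, 27, 33, 29, 32])
print(round(lower, 1), round(centre, 1), round(upper, 1))
# Points outside [lower, upper] are signals; points inside are noise.
```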

That is a validity yield of 8 out of 1488, or 0.54%, which is for all practical purposes zero. Oh dear!


This chance combination of apparently independent events got me thinking.

Q1: What is the reason that NHS Trust Boards do not use these signal-and-noise separation techniques when it has been demonstrated, for at least 12 years to my knowledge, that they are very effective for facilitating improvement in healthcare? (e.g. Improving Healthcare with Control Charts by Raymond G. Carey was published in 2003).

Q2: Is there some form of “organizational defense” system in place that prevents NHS Trust Boards from learning useful ‘new’ knowledge?


So I surfed the Web to learn more about Chris Argyris and to explore in greater depth his concept of Single Loop and Double Loop learning.  I was feeling like a dimwit again because to me it is not a very descriptive title!  I suspect it is not to many others either.

I sensed that I needed to translate the concept into the language of healthcare and this is what emerged.

Single Loop learning is like treating the symptoms and ignoring the disease.

Double Loop learning is diagnosing the underlying disease and treating that.


So what are the symptoms?
The pain of NHS Trust  failure on all dimensions – safety, delivery, quality and productivity (i.e. affordability for a not-for-profit enterprise).

And what are the signs?
The tell-tale sign is more subtle. It’s what is not present that is important. A serious omission. The missing bits are valid time-series charts in the Trust Board reports that show clearly what is signal and what is noise. This diagnosis is critical because the strategies for addressing them are quite different – as Julian Simcox eloquently describes in his latest essay.  If we get this wrong and we act on our unwise decision, then we stand a very high chance of making the problem worse, and demoralizing ourselves and our whole workforce in the process! Does that sound familiar?

And what is the disease?
Undiscussables.  Emotive subjects that are too taboo to table in the Board Room.  And the issue of what is discussable is one of the undiscussables so we have a self-sustaining system.  Anyone who attempts to discuss an undiscussable is breaking an unspoken social code.  Another undiscussable is behaviour, and our social code is that we must not upset anyone so we cannot discuss ‘difficult’ issues.  But by avoiding the issue (the undiscussable disease) we fail to address the root cause and end up upsetting everyone.  We achieve exactly what we are striving to avoid, which is the technical definition of incompetence.  And Chris Argyris labelled this as ‘skilled incompetence’.


Does an apparent lack of awareness of what is already possible fully explain why NHS Trust Boards do not use the tried-and-tested tool called a system behaviour chart to help them diagnose, design and deliver effective improvements in safety, flow, quality and productivity?

Or are there other forces at play as well?

Some deeper undiscussables perhaps?

FrailSafe Design

Safe means avoiding harm, and safety is an emergent property of a well-designed system.

Frail means infirm, poorly, wobbly and at higher risk of harm.

So we want our health care system to be a FrailSafe Design.

But is it? How would we know? And what could we do to improve it?


About ten years ago I was involved in a project to improve the safety design of a specific clinical stream flowing through the hospital that I work in.

The ‘at risk’ group were frail elderly patients admitted as an emergency after a fall, having suffered a fracture of the thigh bone: a fractured neck of femur.

Historically, the outcome for these patients was poor.  Many did not survive, and many of the survivors never returned to independent living. They became even more frail.


The project was undertaken during an organisational transition: the hospital was being ‘taken over’ by a bigger one.  This created a window of opportunity for some disruptive innovation, and the project was labelled as a ‘Lean’ one because we had been inspired by similar work done at Bolton some years before, and Lean was the flavour of the month.

The actual change was small: it was a flow design tweak that cost nothing to implement.

First we asked two flow questions:
Q1: How many of these high-risk frail patients do we admit a year?
A1: About one per day on average.
Q2: What is the safety critical time for these patients?
A2: The first four days.  The sooner they have hip surgery and are able to mobilise actively, the better their outcome.

Second we applied Little’s Law which showed the average number of patients in this critical phase is four. This was the ‘work in progress’ or WIP.
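The Little’s Law arithmetic in that diagnosis step is a one-liner.  A sketch using the numbers from the dialogue above:

```python
# Little's Law: average work-in-progress = average arrival rate * average time in system
arrival_rate = 1.0   # about one #NOF admission per day on average (A1)
critical_time = 4.0  # the safety-critical first four days (A2)
wip = arrival_rate * critical_time
print(wip)  # prints 4.0 - the average number of patients in the critical phase
```

Note that the design used a six-bedded bay rather than four beds: the average WIP is four, so the extra two beds provide headroom for the ever-present variation around that average.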

And we knew that variation is always present, and we knew that having all these patients in one place would make it much easier for the multi-disciplinary teams to provide timely care and to avoid potentially harmful delays.

So we suggested that one six-bedded bay on one of the trauma wards be designated the Fractured Neck Of Femur bay.

That was the flow diagnosis and design done.

The safety design was created by the multi-disciplinary teams who looked after these patients: the geriatricians, the anaesthetists, the perioperative emergency care team (PECT), the trauma and orthopaedic team, the physiotherapists, and so on.

They designed checklists to ensure that all #NOF patients got what they needed when they needed it and so that nothing important was left to chance.

And that was basically it.

And the impact was remarkable. The stream flowed. And one measured outcome was a dramatic and highly statistically significant reduction in mortality.

[Figure: results chart from the Injury 2011 paper]
The full paper was published in Injury 2011; 42: 1234-1237.

We had created a FrailSafe Design … which implied that what was happening before was clearly not safe for these frail patients!


And there was an improved outcome for the patients who survived: A far larger proportion rehabilitated and returned to independent living, and a far smaller proportion required long-term institutional care.

By learning how to create and implement a FrailSafe Design we had added both years-to-life and life-to-years.

It cost nothing to achieve and the message was clear, as this quote from the 2011 paper illustrates …

[Figure: key message quoted from the Injury 2011 paper]

What was a bit disappointing was the gap of four years between delivering this dramatic and highly significant patient safety and quality improvement and the sharing of the story.


What is more exciting is that the concept of FrailSafe is growing, evolving and spreading.

Righteous Indignation

This headline in the newspaper today caught my eye.

Reading the rest of the story triggered a strong emotional response: anger.

My inner chimp was not happy. Not happy at all.

So I took my chimp for a walk and we had a long chat and this is the story that emerged.

The first trigger was the eye-watering fact that the NHS is facing something like a £26 billion litigation cost.  That is about a quarter of the total NHS annual budget!

The second was the fact that the litigation bill has increased by over £3 billion in the last year alone.

The third was that the extra money will just fall into a bottomless pit – the pockets of legal experts – not to where it is intended, to support overworked and demoralised front-line NHS staff. GPs, nurses, AHPs, consultants … the ones that deliver care.

That is why my chimp was so upset.  And it sounded like righteous indignation rather than irrational fear.


So what is the root cause of this massive bill? A more litigious society? Ambulance chasing lawyers trying to make a living? Dishonest people trying to make a quick buck out of a tax-funded system that cannot defend itself?

And what is the plan to reduce this cost?

Well in the article there are three parts to this:
“apologise and learn when you’re wrong,  explain and vigorously defend when we’re right, view court as a last resort.”

This sounds very plausible but to achieve it requires knowing when we are wrong or right.

How do we know?


Generally we all think we are right until we are proved wrong.

It is the way our brains are wired. We are more sure about our ‘rightness’ than the evidence suggests is justified. We are naturally optimistic about our view of ourselves.

So to be proved wrong is emotionally painful and to do it we need:
1) To make a mistake.
2) For that mistake to lead to psychological or physical harm.
3) For the harm to be identified.
4) For the cause of the harm to be traced back to the mistake we made.
5) For the evidence to be used to hold us to account, (to apologise and learn).

And that is all hunky-dory when we are individually inept and we make avoidable mistakes.

But what happens when the harm is the outcome of a combination of actions that individually are harmless but which together are not?  What if the contributory actions are sensible and are enforced as policies that we dutifully follow to the letter?

Who is held to account?  Who needs to apologise? Who needs to learn?  Someone? Anyone? Everyone? No one?

The person who wrote the policy?  The person who commissioned the policy to be written? The person who administers the policy? The person who follows the policy?

How can that happen if the policies are individually harmless but collectively lethal?


The error here is one of a different sort.

It is called an ‘error of omission’.  The harm is caused by what we did not do.  And notice the ‘we’.

What we did not do is to check the impact on others of the policies that we write for ourselves.

Example:

The governance department of a large hospital designs safety policies that if not followed lead to disciplinary action and possible dismissal.  That sounds like a reasonable way to weed out the ‘bad apples’ and the policies are adhered to.

At the same time the operations department designs flow policies (such as maximum waiting time targets and minimum resource utilisation) that if not followed lead to disciplinary action and possible dismissal.  That also sounds like a reasonable way to weed out the layabouts whose idleness causes queues and delays, and the policies are adhered to.

And at the same time the finance department designs fiscal policies (such as fixed budgets and cost improvement targets) that if not followed lead to disciplinary action and possible dismissal. Again, that sounds like a reasonable way to weed out money wasters and the policies are adhered to.

What is the combined effect? The multiple safety checks take more time to complete, which puts extra workload on resources and forces up utilisation. As the budget ceiling is lowered the financial and operational pressures build, the system heats up, stress increases, corners are cut, errors slip through the safety checks. More safety checks are added and the already over-worked staff are forced into an impossible position.  Chaos ensues … more mistakes are made … patients are harmed and justifiably seek compensation by litigation.  Everyone loses (except perhaps the lawyers).


So why was my inner chimp really so unhappy?

Because none of this is necessary. This scenario is avoidable.

Reducing the pain of complaints and the cost of litigation requires setting realistic expectations to avoid disappointment and it requires not creating harm in the first place.

That implies creating healthcare systems that are inherently safe, not made not-unsafe by inspection-and-correction.

And it implies measuring and sharing intended and actual outcomes, not just compliance with policies and rates of failure to meet arbitrary and conflicting targets.

So if that is all possible and all that is required then why are we not doing it?

Simple. We never learned how. We never knew it was possible.

Fit-4-Purpose

We all want a healthcare system that is fit for purpose.

One which can deliver diagnosis, treatment and prognosis where it is needed, when it is needed, with empathy and at an affordable cost.

One that achieves intended outcomes without unintended harm – either physical or psychological.

We want safety, delivery, quality and affordability … all at the same time.

And we know that there are always constraints we need to work within.

There are constraints set by the Laws of the Universe – physical constraints.

These are absolute,  eternal and are not negotiable.

Doctor Who’s fantastical TARDIS is fictional. We cannot distort space, or travel in time, or go faster than light – well, not with our current knowledge.

There are also constraints set by the Laws of the Land – legal constraints.

Legal constraints are rigid but they are also adjustable.  Laws evolve over time, and they are arbitrary. We design them. We choose them. And we change them when they are no longer fit for purpose.

The third limit is often seen as the financial constraint. We are required to live within our means. There is no eternal fount of limitless funds to draw from.  We all share a planet that has finite natural resources – and ‘grow’ in one part implies ‘shrink’ in another.  The Laws of the Universe are not negotiable. Mass, momentum and energy are conserved.

The fourth constraint is perceived to be the most difficult yet, paradoxically, is the one that we have most influence over.

It is the cultural constraint.

The collective, continuously evolving, unwritten rules of socially acceptable behaviour.


Improvement requires challenging our unconscious assumptions, our beliefs and our habits – and selectively updating those that are no longer fit-4-purpose.

To learn we first need to expose the gaps in our knowledge and then to fill them.

We need to test our hot rhetoric against cold reality – and when the fog of disillusionment forms we must rip up and rewrite what we have exposed to be old rubbish.

We need to examine our habits with forensic detachment and we need to ‘unlearn’ the ones that are limiting our effectiveness, and replace them with new habits that better leverage our capabilities.

And all of that is tough to do. Life is tough. Living is tough. Learning is tough. Leading is tough. But it energising too.

Having a model-of-effective-leadership to aspire to and a peer-group for mutual respect and support is a critical piece of the jigsaw.

It is not possible to improve a system alone. No matter how smart we are, how committed we are, or how hard we work.  A system can only be improved by the system itself. It is a collective and a collaborative challenge.


So with all that in mind let us sketch a blueprint for a leader of systemic cultural improvement.

What values, beliefs, attitudes, knowledge, skills and behaviours would be on our ‘must have’ list?

What hard evidence of effectiveness would we ask for? What facts, figures and feedback?

And with our check-list in hand would we feel confident to spot an ‘effective leader of systemic cultural improvement’ if we came across one?


This is a tough design assignment because it requires the benefit of  hindsight to identify the critical-to-success factors: our ‘must have and must do’ and ‘must not have and must not do’ lists.

H’mmmm ….

So let us take a more pragmatic and empirical approach. Let us ask …

“Are there any real examples of significant and sustained healthcare system improvement that are relevant to our specific context?”

And if we can find even just one Black Swan then we can ask …

Q1. What specifically was the significant and sustained improvement?
Q2. How specifically was the improvement achieved?
Q3. When exactly did the process start?
Q4. Who specifically led the system improvement?

And if we do this exercise for the NHS we discover some interesting things.

First let us look for exemplars … and let us start using some official material – the Monitor website (http://www.monitor.gov.uk) for example … and let us pick out ‘Foundation Trusts’ because they are the ones who are entrusted to run their systems with a greater degree of capability and autonomy.

And what we discover is a league table where those FTs that are OK are called ‘green’ and those that are Not OK are coloured ‘red’.  And there are some that are ‘under review’ so we will call them ‘amber’.

The criteria for deciding this RAG rating are embedded in a large balanced scorecard of objective performance metrics linked to a robust legal contract that provides the framework for enforcement.  Safety metrics like standardised mortality ratios, flow metrics like 18-week and 4-hour target yields, quality metrics like the friends-and-family test, and productivity metrics like financial viability.

A quick tally revealed 106 FTs in the green, 10 in the amber and 27 in the red.

But this is not much help with our quest for exemplars because it is not designed to point us to who has improved the most, it only points to who is failing the most!  The league table is a name-and-shame motivation-destroying cultural-missile fuelled by DRATs (delusional ratios and arbitrary targets) and armed with legal teeth.  A projection of the current top-down, Theory-X, burn-the-toast-then-scrape-it management-of-mediocrity paradigm. Oh dear!

However,  despite these drawbacks we could make better use of this data.  We could look at the ‘reds’ and specifically at their styles of cultural leadership and compare with a random sample of all the ‘greens’ and their models for success. We could draw out the differences and correlate with outcomes: red, amber or green.

That could offer us some insight and could give us the head start with our blueprint and check-list.


It would be a time-consuming and expensive piece of work and we do not want to wait that long. So what other avenues are there we can explore now and at no cost?

Well there are unofficial sources of information … the ‘grapevine’ … the stuff that people actually talk about.

What examples of effective improvement leadership in the NHS are people talking about?

Well a little blue bird tweeted one in my ear this week …

And specifically they are talking about a leader who has learned to walk-the-improvement-walk and is now talking-the-improvement-talk: Sir David Dalton, the CEO of Salford Royal.

Here is a copy of the slides from Sir David’s recent lecture at the Kings Fund … and it is interesting to compare and contrast it with the style of NHS Leadership that led up to the Mid Staffordshire Failure, and to the Francis Report, and to the Keogh Report and to the Berwick Report.

Chalk and cheese!


So if you are an NHS employee would you rather work as part of an NHS Trust where the leaders walk-DD’s-walk and talk-DD’s-talk?

And if you are an NHS customer would you prefer that the leaders of your local NHS Trust walked Sir David’s walk too?


We are the system … we get the leaders that we deserve … we make the  choice … so we need to choose wisely … and we need to make our collective voice heard.

Actions speak louder than words.  Walk works better than talk.  We must be the change we want to see.

The Speed of Trust

Systems are built from intersecting streams of work called processes.

This iconic image of the London Underground shows a system map – a set of intersecting transport streams.

Each stream links a sequence of independent steps – in this case the individual stations.  Each step is a system in itself – it has a set of inner streams.

For a system to exhibit stable and acceptable behaviour the steps must be in synergy – literally ‘together work’. The steps also need to be in synchrony – literally ‘same time’. And to do that they need to be aligned to a common purpose.  In the case of a transport system the design purpose is to get from A to B safely, quickly, in comfort and at an affordable cost.

In large socioeconomic systems called ‘organisations’ the steps represent groups of people with special knowledge and skills that collectively create the desired product or service.  This creates an inevitable need for ‘handoffs’ as partially completed work flows through the system along streams from one step to another. Each step contributes to the output. It is like a series of baton passes in a relay race.

This creates the requirement for a critical design ingredient: trust.

Each step needs to be able to trust the others to do their part: right-first-time and on-time.  All the steps are directly or indirectly interdependent.  If any one of them is ‘untrustworthy’ then the whole system will suffer to some degree. If too many generate distrust then the system may fail and can literally fall apart. Trust is like social glue.

So a critical part of people-system design is the development and the maintenance of trust-bonds.

And it does not happen by accident. It takes active effort. It requires design.

We are social animals. Our default behaviour is to trust. We learn distrust by experiencing repeated disappointments. We are not born cynical – we learn that behaviour.

The default behaviour for inanimate systems is disorder – and it has a fancy name – it is called ‘entropy’. There is a Law of Physics that says that ‘the average entropy of a system will increase over time‘. The critical word is ‘average’.

So, if we are not aware of this and we omit to pay attention to the hand-offs between the steps we will observe increasing disorder which leads to repeated disappointments and erosion of trust. Our natural reaction then is ‘self-protect’ which implies ‘check-and-reject’ and ‘check and correct’. This adds complexity and bureaucracy and may prevent further decline – which is good – but it comes at a cost – quite literally.

Eventually an equilibrium will be achieved where our system performance is limited by the amount of check-and-correct bureaucracy we can afford.  This is called a ‘mediocrity trap’ and it is very resilient – which means resistant to change in any direction.


To escape from the mediocrity trap we need to break into the self-reinforcing check-and-reject loop and we do that by developing a design that challenges ‘trust eroding behaviour’.  The strategy is to develop a skill called  ‘smart trust’.

To appreciate what smart trust is we need to view trust as a spectrum: not as a yes/no option.

At one end is ‘nonspecific distrust’ – otherwise known as ‘cynical behaviour’. At the other end is ‘blind trust’ – otherwise known as ‘gullible behaviour’.  Neither of these is what we need.

In the middle is the zone of smart trust that spans healthy scepticism  through to healthy optimism.  What we need is to maintain a balance between the two – not to eliminate them. This is because some people are ‘glass-half-empty’ types and some are ‘glass-half-full’. And both views have a value.

The action required to develop smart trust is to respectfully challenge every part of the organisation to demonstrate ‘trustworthiness’ using evidence.  Rhetoric is not enough. Politicians always score very low on ‘most trusted people’ surveys.

The first phase of this smart trust development is for steps to demonstrate trustworthiness to themselves using their own evidence, and then to share this with the steps immediately upstream and downstream of them.

So what evidence is needed?

Safety comes first. If a step cannot be trusted to be safe then that is the first priority. Safe systems need to be designed to be safe.

Flow comes second. If the streams do not flow smoothly then we experience turbulence and chaos which increases stress,  the risk of harm and creates disappointment for everyone. Smooth flow is the result of careful  flow design.

Third is Quality which means ‘setting and meeting realistic expectations‘.  This cannot happen in an unsafe, chaotic system.  Quality builds on Flow which builds on Safety. Quality is a design goal – an output – a purpose.

Fourth is Productivity (or profitability) and that does not automatically follow from the other three as some QI Zealots might have us believe. It is possible to have a safe, smooth, high quality design that is unaffordable.  Productivity needs to be designed too.  An unsafe, chaotic, low quality design is always more expensive.  Always. Safe, smooth and reliable can be highly productive and profitable – if designed to be.

So whatever the driver for improvement the sequence of questions is the same for every step in the system: “How can I demonstrate evidence of trustworthiness for Safety, then Flow, then Quality and then Productivity?”

And when that happens improvement will take off like a rocket. That is the Speed of Trust.  That is Improvement Science in Action.

What is the Temperamenture?

Tweet Tweet!
The sound heralded the arrival of a tweet so Bob looked up from his book and scanned the message. It was from Leslie, one of the Improvement Science apprentices.

It said “If your organisation is feeling poorly then do not forget to measure the Temperamenture. You may have Cultural Change Fever.”

Bob was intrigued. This was a novel word and he suspected it was not a spelling error. He knew he was being teased. He tapped a reply on his iPad “Interesting word ‘Temperamenture’ – can you expand?”

Ring Ring
<Bob> Hello, Bob here.

There was laughing on the other end of the line – it was Leslie.

<Leslie> Ho Ho. Hi Bob – I thought that might prick your curiosity if you were on line. I know you like novel words.

<Bob> Ah! You know my weakness – I am at your mercy now!  So, I am consumed with curiosity – as you knew I would be.

<Leslie> OK. No more games. You know that you are always saying that there are three parts to Improvement Science – Processes, People and Systems – and that the three are synergistic so they need to be kept in balance …

<Bob> Yes.

<Leslie> Well, I have discovered a source of antagonism that creates a lot of cultural imbalance and emotional heat in my organisation.

<Bob> OK. So I take from that you mean an imbalance in the People part that then upsets the Process and System parts.

<Leslie> Yes, exactly. In your Improvement Science course you mentioned the theory behind this but did not share any real examples.

<Bob> That is very possible.  Hard evidence and explainable examples are easier for the Process component – the People stuff is more difficult to do that way.  Can you be more specific?  I think I know where you may be going with this.

<Leslie> OK. Where do you feel I am going with it?

<Bob> Ha! The student becomes the teacher. Excellent response! I was thinking something to do with the Four Temperaments.

<Leslie> Yes.  And specifically the conflict that can happen between them.  I am thinking of the tension between the Idealists and the Guardians.

<Bob> Ah!  Yes. The Bile Wars – Yellow and Black. The Cholerics versus the Melancholics. So do you have hard evidence of this happening in reality rather than just my theoretical rhetoric?

<Leslie> Yes!  But the facts do not seem to fit the theory. You know that I work in a hospital. Well one of the most important “engines” of a hospital is the surgical operating suite. Conveniently called the SOS.

<Bob> Yes. It seems to be a frequent source of both Nuggets and Niggles.

<Leslie> Well, I am working with the SOS team at my hospital and I have to say that they are a pretty sceptical bunch.  Everyone seems to have strong opinions.  Strong but different opinions of what should happen and who should do it.  The words ‘someone’ and ‘should’ get mentioned a lot.  I have not managed to find this elusive “someone” yet.  The some-one, no-one, every-one, any-one problem.

<Bob> OK. I have heard this before. I hear that surgeons in particular have strong opinions – and they disagree with each other!  I remember watching episodes of “Doctor in the House” many years ago.  What was the name of the irascible chief surgeon played by James Robertson Justice? Sir Lancelot Spratt, the archetype consultant surgeon. Are they actually like that?

<Leslie> I have not met any as extreme as Sir Lancelot though some do seem to emulate that role model.  In reality the surgeons, anaesthetists, nurses, ODPs, and managers all seem to believe there is one way that a theatre should be run, their way, and their separate “one ways” do not line up.  Hence the conflict and high emotional temperature.

<Bob> OK, so how does the Temperament dimension relate to this?  Is there a temperament mismatch between the different tribes in the operating suite as the MBTI theory would suggest?

<Leslie> That was my hypothesis and I decided that the only way I could test it was by mapping the temperaments using the Temperament Sorter from the FISH toolbox.

<Bob> Excellent, but you would need quite a big sample to draw any statistically valid conclusions.  How did you achieve that with a group of disparate sceptics?

<Leslie> I know.  So I posed this challenge as a research question – and they were curious enough to give it a try.  Well, the Surgeons and Anaesthetists were anyway.  The Nurses, ODPs and Managers chose to sit on the fence and watch the game.

<Bob> Wow! Now I am really interested. What did you find?

<Leslie> Woah there!  I need to explain how we did it first.  They have a monthly audit meeting where they all get together as separate groups, and after I posed the question they decided to use the Temperament Sorter at one of those meetings.  It was done in a light-hearted way and it was really good fun too.  I brought some cartoons and descriptions of the sixteen MBTI types and they tried to guess who was which type.

<Bob> Excellent.  So what did you find?

<Leslie> We disproved the hypothesis that there was a Temperament mismatch.

<Bob> Really!  What did the data show?

<Leslie> It showed that the Temperament profile for both surgeons and anaesthetists was different from the population average …

<Bob> OK, and …?

<Leslie> … and that there was no statistical difference between surgeons and anaesthetists.

<Bob> Really! So what are they both?

<Leslie> Guardians. The majority of both tribes are SJs.

There was a long pause.  Bob was digesting this juicy new fact.  Leslie knew that if there was one thing that Bob really liked it was having a theory disproved by reality.  Eventually he replied.

<Bob> Clarity of hindsight is a wonderful thing.  It makes complete sense that they are Guardians.  Speaking as a patient, what I want most is Safety and Predictability which is the ideal context for Guardians to deliver their best.  I am sure that neither surgeons nor anaesthetists like “surprises” and I suspect that they both prefer doing things “by the book”.  They are sceptical of new ideas by temperament.

<Leslie> And there is more.

<Bob> Excellent! What?

<Leslie> They are tough-minded Guardians. They are STJs.

<Bob> Of course!  Having the responsibility of “your life in my hands” requires a degree of tough-mindedness and an ability to not get too emotionally hooked.  Sir Lancelot is a classic extrovert tough-minded Guardian!  The Rolls-Royce and the ritual humiliation of ignorant underlings all fits.  Wow!  Well done Leslie.  So what have you done with this new knowledge and deeper understanding?

<Leslie> Ouch! You got me! That is why I sent the Tweet. Now what do I do?

<Bob> Ah! I am not sure.  We are both sailing in uncharted water now so I suggest we explore and learn together.  Let me ponder and do some exploring of the implications of your findings and I will get back to you.  Can you do the same?

<Leslie> Good plan. Shall we share notes in a couple of days?

<Bob> Excellent. I look forward to it.


This is not a completely fictional narrative.

In a recent experiment the Temperament of a group of 66 surgeons and 65 anaesthetists was mapped using a standard Myers-Briggs Type Indicator® tool.  The data showed that the proportion reporting a Guardian (xSxJ) preference was 62% for the surgeons and 59% for the anaesthetists.  The difference was not statistically significant [for the statistically knowledgeable, the Chi-squared test gave a p-value of 0.84].  The reported proportion of the normal population who have a Guardian temperament is 34%, so this is very different from the combined group of operating theatre doctors [Chi-squared test, p<0.0001].  Digging deeper into the data, the proportion showing the tough-minded Guardian preference, the xSTJ, was 55% for the Surgeons and 46% for the Anaesthetists, which was also not significantly different [p=0.34]; but compared with a normal population proportion of 24% there are significantly more tough-minded Guardians in the operating theatre [p<0.0001].
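For the curious, the surgeon-versus-anaesthetist comparison can be reproduced with nothing more than the Python standard library.  This sketch reconstructs the counts from the rounded percentages (about 41 of 66 surgeons and 38 of 65 anaesthetists reporting xSxJ) and omits the Yates continuity correction, so the p-value it produces lands in the same clearly non-significant region as the reported 0.84 without matching it exactly:

```python
import math

def chi2_2x2(a, b, c, d):
    """Chi-squared test for a 2x2 table [[a, b], [c, d]], 1 degree of freedom."""
    n = a + b + c + d
    observed = [a, b, c, d]
    # expected count in each cell = (row total * column total) / grand total
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p = math.erfc(math.sqrt(chi2 / 2))  # p-value for a chi-squared with 1 df
    return chi2, p

# Counts reconstructed from the rounded percentages in the text:
# 41/66 surgeons and 38/65 anaesthetists reported a Guardian (xSxJ) preference.
chi2, p = chi2_2x2(41, 66 - 41, 38, 65 - 38)
```

The chi-squared statistic comes out well below the 3.84 threshold for significance at the 5% level, which is the “no statistical difference between surgeons and anaesthetists” conclusion in the text.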

So what then is the difference between Surgeons and Anaesthetists in their preferred modes of thinking?

The data shows that Surgeons are more likely to prefer Extraversion – the ESTJ profile – compared with Anaesthetists, who lean more towards Introversion – the ISTJ profile (p=0.12). This p-value means that, with the data available, there is a one in eight chance that this difference is due to chance. We would need a bigger set of data to get greater certainty.

The temperament gradient is enough to create a certain degree of tension because although the Guardian temperament is the same, and the tough-mindedness is the same, the dominant function differs between the ESTJ and the ISTJ types.  As the Surgeons tend to the ESTJ mode, their dominant function is Thinking Judgement. The Anaesthetists tend to prefer ISTJ, so their dominant function is Sensed Perceiving. This makes a big difference.

And it fits with their chosen roles in the operating theatre. The archetype ESTJ Surgeon is the Supervisor and decides what to do and who does it. The archetype ISTJ Anaesthetist is the Inspector and monitors and maintains safety and stability. This is a sweeping generalisation of course – but a useful one.

The roles are complementary, the minor conflict is inevitable, and the tension is not a “bad” thing – it is healthy – for the patient.  But when external forces threaten the safety, predictability and stability the conflict is amplified.

Rather like the weather.

Hot wet air looks clear. Cold dry air looks clear too.  When hot-humid air from the tropics meets cold-crisp air from the poles then a band of fog will be created.  We call it a weather front and it generates variation.  And if the temperature and humidity difference is excessive then storm clouds will form. The lightning will flash and the thunder will growl as the energy is released.

Clouds obscure clarity of forward vision but clouds also create shade from the sun above; clouds trap warmth beneath; and clouds create rain which is necessary to sustain growth. Clouds are not all bad.  Some cloudiness is necessary.

An Improvement Scientist knows that 100% harmony is not the healthiest ratio. Unchallenged group-think is potentially dangerous.  Zero harmony is also unhealthy.  Open warfare is destructive.  Everyone loses.  A mixture of temperaments, a diversity of perspectives, a bit of fog, and a bit of respectful challenge is healthier than All-or-None.

It is at the complex and dynamic interface between different temperaments that learning and innovation happen, so a slight temperamenture gradient is ideal.  The emotometer should not read too cold or too hot.

Understanding this dynamic is a big step towards being able to manage the creative tension.

To explore the Temperamenture Map of your team, department and organisation try the Temperament Sorter tool – one of the Improvement Science cultural diagnostic tests.

The Writing on the Wall – Part II

The retrospectoscope is the favourite instrument of the forensic cynic – the expert in the after-the-event-and-I-told-you-so rhetoric. The rabble-rouser for the lynch-mob.

It feels better to retrospectively nail-to-a-cross the person who committed the Cardinal Error of Omission, and leave them there in emotional and financial pain as a visible lesson to everyone else.

This form of public feedback has been used for centuries.

It is called barbarism, and it has no place in a modern civilised society.


A more constructive question to ask is:

“Could the evolving Mid-Staffordshire crisis have been detected earlier … and avoided?”

And this question exposes a tricky problem: it is much more difficult to predict the future than to explain the past.  And if it could have been detected and avoided earlier, then how is that done?  And if the how-is-known then is everyone else in the NHS using this know-how to detect and avoid their own evolving Mid-Staffs crisis?

To illustrate how it is currently done let us use the actual Mid-Staffs data. It is conveniently available in Figure 1 embedded in Figure 5 on Page 360 in Appendix G of Volume 1 of the first Francis Report.  If you do not have it at your fingertips I have put a copy of it below.

MS_RawData

The message does not exactly leap off the page and smack us between the eyes does it? Even with the benefit of hindsight.  So what is the problem here?

The problem is one of ergonomics. Tables of numbers like this are very difficult for most people to interpret, so they create a risk that we ignore the data or that we just jump to the bottom line and miss the real message. And it is very easy to miss the message when we compare the results for the current period with the previous one – a very bad habit that is spread by accountants.

This was a slowly emerging crisis so we need a way of seeing it evolving and the better way to present this data is as a time-series chart.

As we are most interested in safety and outcomes, we would reasonably look at the outcome we do not want – i.e. mortality.  I think we will all agree that it is an easy enough one to measure.

This is the raw mortality data from the table above, plotted as a time-series chart.  The green line is the average and the red-lines are a measure of variation-over-time. We can all see that the raw mortality is increasing and the red flags say that this is a statistically significant increase. Oh dear!
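For readers who want to construct a chart like this themselves: on a conventional XmR (individuals) chart the “green line” is the mean and the “red lines” are the natural process limits, mean ± 2.66 × the average moving range. Here is a minimal sketch using made-up annual death counts, not the actual Mid-Staffs figures:

```python
def xmr_limits(values):
    """Mean and natural process limits for an XmR (individuals) chart:
    mean +/- 2.66 * the average moving range between successive points."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

# Illustrative (invented) annual death counts showing an upward drift.
deaths = [820, 840, 835, 860, 855, 880, 900, 910, 940, 960]
mean, lcl, ucl = xmr_limits(deaths)
flags = [y for y in deaths if y > ucl or y < lcl]   # points outside the limits
print(f"mean={mean:.0f}, limits=({lcl:.0f}, {ucl:.0f}), flags={flags}")
```

Points outside the limits are the “red flags” – the signals that the variation is no longer just noise.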

But hang on just a minute – using raw mortality data like this is invalid because we all know that the people are getting older, demand on our hospitals is rising, A&Es are busier, older people have more illnesses, and more of them will not survive their visit to our hospital. This rise in mortality may actually just be because we are doing more work.

Good point! Let us plot the activity data and see if there has been an increase.

MS_Activity

Yes – indeed the activity has increased significantly too.

Told you so! And it looks like the activity has gone up more than the mortality. Does that mean we are actually doing a better job at keeping people alive? That sounds like a more positive message for the Board and the Annual Report. But how do we present that message? What about as a ratio of mortality to activity? That will make it easier to compare ourselves with other hospitals.

Good idea! Here is the Raw Mortality Ratio chart.

Ah ha. See! The % mortality is falling significantly over time. Told you so.

Careful. There is an unstated assumption here. The assumption that the case mix is staying the same over time. This pattern could also be the impact of us doing a greater proportion of lower complexity and lower risk work.  So we need to correct this raw mortality data for case mix complexity – and we can do that by using data from all NHS hospitals to give us a frame of reference. Dr Foster can help us with that because it is quite a complicated statistical modelling process. What comes out of Dr Foster’s black magic box is the Global Hospital Raw Mortality (GHRM), which is the expected number of deaths for our case mix if we were an ‘average’ NHS hospital.

MS_ExpectedMortality_Ratio

What this says is that the NHS-wide raw mortality risk appears to be falling over time (which may be for a wide variety of reasons but that is outside the scope of this conversation). So what we now need to do is compare this global raw mortality risk with our local raw mortality risk  … to give the Hospital Standardised Mortality Ratio.

This gives us the Mid Staffordshire Hospital HSMR chart.  The blue line at 100 is the reference average – and what this chart says is that Mid Staffordshire hospital had a consistently higher risk than the average case-mix adjusted mortality risk for the whole NHS. And it says that it got even worse after 2001 and that it stayed consistently 20% higher after 2003.
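The arithmetic of the HSMR itself is simple, even though the case-mix model behind the expected deaths is anything but. A sketch, using illustrative round numbers rather than the real Dr Foster output:

```python
def hsmr(observed_deaths, expected_deaths):
    """Hospital Standardised Mortality Ratio: observed deaths as a
    percentage of the case-mix-adjusted expected deaths (100 = average)."""
    return 100.0 * observed_deaths / expected_deaths

# Illustrative figures only: a hospital with 1,200 actual deaths where the
# case-mix model predicts 1,000 scores 20% above the reference - the kind
# of "consistently 20% higher" signal described in the text.
ratio = hsmr(observed_deaths=1200, expected_deaths=1000)
excess = 1200 - 1000   # the "excess-death-equivalent" for the same period
print(ratio, excess)
```

The hard part is estimating `expected_deaths` fairly; the ratio itself is one division.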

Ah! Oh dear! That is not such a positive message for the Board and the Annual Report. But how did we miss this evolving safety catastrophe?  We had the Dr Foster data from 2001!

This is not a new problem – a similar thing happened in Vienna between 1820 and 1850 with maternal deaths caused by Childbed Fever. The problem was detected by Dr Ignaz Semmelweis who also discovered a simple, pragmatic solution to the problem: hand washing.  He blew the whistle but unfortunately those in power did not like the implication that they had been the cause of thousands of avoidable mother and baby deaths.  Semmelweis was vilified and ignored, and he did not publish his data until 1861. And even then the story was buried in tables of numbers.  Semmelweis went mad trying to convince the World that there was a problem.  Here is the full story.

Also, time-series charts were not invented until 1924 – and it was not in healthcare – it was in manufacturing. These tried-and-tested safety and quality improvement tools are only slowly diffusing into healthcare because the barriers to innovation appear somewhat impervious.

And the pores have been clogged even more by the social poison called “cynicide” – the emotional and political toxin exuded by cynics.

So how could we detect a developing crisis earlier – in time to avoid a catastrophe?

The first step is to estimate the excess-death-equivalent, and Dr Foster does this for you. Here is the data from the table plotted as a time-series chart showing the estimated excess-death-equivalent per year. It has an average of 100 (that is about two per week), whereas the average should be close to zero. More worryingly, the number was increasing steadily over time, up to 200 per year in 2006 – that is about four excess deaths per week, on average.  It is important to remember that HSMR is a risk ratio and mortality is a multi-factorial outcome, so the excess-death-equivalent estimate does not imply that a clear causal chain will be evident in specific deaths. That is a complete misunderstanding of the method.

I am sorry – you are losing me with the statistical jargon here. Can you explain in plain English what you mean?

OK. Let us use an example.

Suppose we set up a tombola at the village fete and we sell 50 tickets with the expectation that the winner bags all the money. Each ticket holder has the same 1 in 50 risk of winning the wad-of-wonga and a 49 in 50 risk of losing their small stake. At the appointed time we spin the barrel to mix up the ticket stubs then we blindly draw one ticket out. At that instant the 50 people with an equal risk changes to one winner and 49 losers. It is as if the grey fog of risk instantly condenses into a precise, black-and-white, yes-or-no, winner-or-loser, reality.
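The tombola can be simulated in a few lines. This is only an illustration of the “condensing risk” idea; the seed and ticket count are arbitrary:

```python
import random

def run_tombola(n_tickets, seed=42):
    """Before the draw every ticket carries the same 1-in-n risk; the draw
    'condenses' that shared risk into one concrete winner and n-1 losers."""
    rng = random.Random(seed)          # seeded so the sketch is repeatable
    tickets = list(range(1, n_tickets + 1))
    winner = rng.choice(tickets)
    losers = [t for t in tickets if t != winner]
    return winner, losers

winner, losers = run_tombola(50)
# Any search for a *reason* this particular ticket won is doomed:
# every ticket had an identical prior risk of 1 in 50.
print(f"winner: ticket {winner}, losers: {len(losers)}")
```

Which ticket wins changes with the seed; the 1-in-50 prior risk does not.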

Translating this concept back into HSMR and Mid Staffs – the estimated 1200 deaths are just the “condensed risk of harm equivalent”.  So, to then conduct a retrospective case note analysis of specific deaths looking for the specific cause would be equivalent to trying to retrospectively work out the reason the particular winning ticket in the tombola was picked out. It is a search that is doomed to fail. To then conclude from this fruitless search that HSMR is invalid is only to compound the delusion further.  The actual problem here is ignorance and misunderstanding of the basic Laws of Physics and Probability, because our brains are not good at solving these sorts of problems.

But Mid Staffs is a particularly severe example and it only shows up after years of data has accumulated. How would a hospital that was not as bad as this know they had a risk problem, and know sooner? Waiting for years to accumulate enough data to prove there was an avoidable problem in the past is not much help.

That is an excellent question. This type of time-series chart is not very sensitive to small changes when the data is noisy and sparse – such as when you plot the data on a month-by-month timescale and avoidable deaths are actually an uncommon outcome. Plotting the annual sum smooths out this variation and makes the trend easier to see, but it delays the diagnosis further. One way to increase the sensitivity is to plot the data as a cusum (cumulative sum) chart – which is conspicuous by its absence from the data table. It is the running total of the estimated excess deaths. Rather like the running total of swings in a game of golf.

This is the cusum chart of excess deaths, and you will notice that it is not plotted with control limits. That is because it is invalid to use standard control limits for cumulative data.  The important feature of the cusum chart is the slope and the deviation from zero. What is usually done is to plot an alert threshold on the cusum chart, and if the measured cusum crosses this alert-line then the alarm bell should go off – and the search then focuses on the precursor events: the Near Misses, the Not Agains and the Niggles.
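A cusum is just a running total, and an alert rule is just a threshold on it. Here is a minimal sketch using invented monthly excess-death estimates; the choice of threshold here is arbitrary, whereas in a real surveillance system it would be derived formally:

```python
def cusum(values):
    """Running total of a series - e.g. monthly excess-death estimates."""
    totals, running = [], 0.0
    for v in values:
        running += v
        totals.append(running)
    return totals

def first_alert(values, threshold):
    """Index of the first point where the cusum crosses the alert line,
    or None if it never does."""
    for i, total in enumerate(cusum(values)):
        if total > threshold:
            return i
    return None

# Invented monthly excess-death estimates: noisy, mostly small, with a
# persistent upward shift from month 7 onwards.
monthly_excess = [1, -2, 0, 2, -1, 1, 4, 3, 5, 4, 6, 5]
print(cusum(monthly_excess))                      # the slope steepens after the shift
print(first_alert(monthly_excess, threshold=15))  # month index of the alarm
```

Notice that the individual monthly values look unremarkable; it is the accumulating slope that gives the game away, which is exactly why the cusum is more sensitive than the raw monthly chart.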

I see. You make it look easy when the data is presented as pictures. But aren’t we still missing the point? Isn’t this still after-the-avoidable-event analysis?

Yes! An avoidable death should be a Never-Event in a designed-to-be-safe healthcare system. It should never happen. There should be no coffins to count. To get to that stage we need to apply exactly the same approach to the Near-Misses, and then the Not-Agains, and eventually the Niggles.

You mean we have to use the SUI data and the IR1 data and the complaint data to do this – and also ask our staff and patients about their Niggles?

Yes. And it is not the number of complaints that is the most useful metric – it is the appearance of the cumulative sum of the complaint severity score. And we need a method for diagnosing and treating the cause of the Niggles too. We need to convert the feedback information into effective action.

Ah ha! Now I understand what the role of the Governance Department is: to apply the tools and techniques of Improvement Science proactively.  But our Governance Department have not been trained to do this!

Then that is one place to start – and their role needs to evolve from Inspectors and Supervisors to Demonstrators and Educators – ultimately everyone in the organisation needs to be a competent Healthcare Improvementologist.

OK – I now know what to do next. But wait a minute. This is going to cost a fortune!

This is just one small first step.  The next step is to redesign the processes so the errors do not happen in the first place. The cumulative cost saving from eliminating the repeated checking, correcting, box-ticking, documenting, investigating, compensating and insuring is much much more than the one-off investment in learning safe system design.

So the Finance Director should be a champion for safety and quality too.

Yup!

Brill. Thanks. And can I ask one more question? I do not want to appear too skeptical, but how do we know we can trust that this risk-estimation system has been designed and implemented correctly? How do we know we are not being bamboozled by statisticians? It has happened before!

That is the best question yet.  It is important to remember that HSMR is counting deaths in hospital, which means that it is not actually the risk of harm to the patient that is measured – it is the risk to the reputation of the hospital! So the answer to your question is that you demonstrate your deep understanding of the rationale and method of risk-of-harm estimation by listing all the ways that such a system could be deliberately “gamed” to make the figures look better for the hospital. And then go out and look for hard evidence of all the “games” that you can invent. It is a sort of creative poacher-becomes-gamekeeper detective exercise.

OK – I sort of get what you mean. Can you give me some examples?

Yes. The HSMR method is based on deaths-in-hospital, so discharging a patient from hospital before they die will make the figures look better. Suppose one hospital has more access to end-of-life care in the community than another: their HSMR figures would look better even though exactly the same number of people died. Another is that the HSMR method is weighted towards admissions classified as “emergencies” – so if a hospital admits more patients as “emergencies” who are not actually very sick, and discharges them quickly, then this will inflate their expected deaths and make their actual mortality ratio look better – even though the risk-of-harm to patients has not changed.
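These two “games” can be shown with toy numbers. The figures below are invented purely to show the direction of each effect:

```python
def in_hospital_mortality(deaths_in_hospital, admissions):
    """The raw in-hospital mortality rate - the numerator and denominator
    that the gaming strategies below manipulate."""
    return deaths_in_hospital / admissions

# Baseline (invented): 1,000 emergency admissions, 50 of whom die in hospital.
baseline = in_hospital_mortality(50, 1000)

# Game 1: discharge 10 of the dying patients to community end-of-life care.
# The same 50 people die, but only 40 deaths are counted "in hospital".
game_discharge = in_hospital_mortality(40, 1000)

# Game 2: admit 500 extra not-very-sick patients as "emergencies" and
# discharge them within a day. Nobody's risk changes; the denominator grows.
game_admit = in_hospital_mortality(50, 1500)

print(baseline, game_discharge, game_admit)  # both games beat the baseline
```

In both games the measured figure improves while the actual risk of harm to patients is completely unchanged.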

OMG – so if we have pressure to meet 4 hour A&E targets, and we get paid more for an emergency admission than an A&E attendance, then admitting to an Assessment Area and discharging within one day will actually reward the hospital financially, operationally and by apparently reducing their HSMR – even though there has been no difference at all to the care that patients actually receive?

Yes. It is an inevitable outcome of the current system design.

But that means that if I am gaming the system and my HSMR is not getting better then the risk-of-harm to patients is actually increasing and my HSMR system is giving me false reassurance that everything is OK.   Wow! I can see why some people might not want that realisation to be public knowledge. So what do we do?

Design the system so that the rewards are aligned with lower risk of harm to patients and improved outcomes.

Is that possible?

Yes. It is called a Win-Win-Win design.

How do we learn how to do that?

Improvement Science.

Footnote I:

The graphs tell a story but they may not create a useful sense of perspective. It has been said that there is a 1 in 300 chance that, if you go to hospital, you will die there from an avoidable cause. What! It cannot be as high as 1 in 300 surely?

OK – let us use the published Mid-Staffs data to test this hypothesis. Over 12 years there were about 150,000 admissions and an estimated 1,200 excess deaths (if all the risk were concentrated into the excess deaths, which is not what actually happens). That means odds of about 1 in 130 of an avoidable death for every admission! That is twice as bad as the estimated average.
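As a quick check of the arithmetic: the rounded figures quoted above give odds close to the 1 in 130 stated (presumably derived from unrounded counts):

```python
admissions = 150_000     # approximate Mid-Staffs admissions over 12 years
excess_deaths = 1_200    # estimated excess-death-equivalent over the same period
odds = admissions / excess_deaths
print(f"about 1 avoidable death per {odds:.0f} admissions")
print(f"versus the quoted average of 1 in 300: {300 / odds:.1f}x worse")
```

Either way the order of magnitude is the sobering part: roughly one avoidable death for every hundred-and-something admissions.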

The Mid Staffordshire statistics are bad enough; but the NHS-as-a-whole statistics are cumulatively worse because there are 100’s of other hospitals that are each generating not-as-obvious avoidable mortality. The data is very ‘noisy’ so it is difficult even for a statistical expert to separate the message from the morass.

And remember that the “expected” mortality is estimated from the average for the whole NHS – which means that if this average is higher than it could be then there is a statistical bias and we are being falsely reassured by being ‘not statistically significantly different’ from the pack.

And remember too – for every patient and family that suffers an avoidable death there are many more that have to live with the consequences of avoidable but non-fatal harm.  That is called avoidable morbidity.  This is what the risk really means – everyone has a higher risk of some degree of avoidable harm. Psychological and physical harm.

This challenge is not just about preventing another Mid Staffs – it is about preventing 1000’s of avoidable deaths and 100,000s of patients avoidably harmed every year in ‘average’ NHS trusts.

It is not a mass conspiracy of bad nurses, bad doctors, bad managers or bad politicians that is the root cause.

It is poorly designed processes – and they are poorly designed because the nurses, doctors and managers have not learned how to design better ones.  And we do not know how because we were not trained to.  And that education gap was an accident – an unintended error of omission.  

Our urgently-improve-NHS-safety-challenge requires a system-wide safety-by-design educational and cultural transformation.

And that is possible because the knowledge of how to design, test and implement inherently safe processes exists. But it exists outside healthcare.

And that safety-by-design training is a worthwhile investment because safer-by-design processes cost less to run: they require less checking, less documenting and less correcting – and all the valuable nurse, doctor and manager time freed up by that can be reinvested in more care, better care, and designing even better processes and systems.

Everyone Wins – except the cynics who have a choice: to eat humble pie or leave.

Footnote II:

In the debate that has followed the publication of the Francis Report a lot of scrutiny has been applied to the method by which an estimated excess mortality number is created and it is necessary to explore this in a bit more detail.

The HSMR is an estimate of relative risk – it does not say that a set of specific patients were the ones who came to harm and the rest were OK. So looking at individual deaths for identifiable cause-and-effect paths is to completely misunderstand the method and misuse the message.  When very few, if any, are found, to conclude that HSMR is flawed is an error of logic that exposes the analyst’s misunderstanding further.

HSMR is not perfect though – it has weaknesses.  It is a benchmarking process: the “standard” of 100 is always moving because the collective goal posts are moving – the reference is always changing.  HSMR is estimated using data submitted by hospitals themselves – the clinical coding data.  So the main weakness is that it is dependent on the quality of the clinical coding – the errors of commission (wrong codes) and the errors of omission (missing codes). Garbage In, Garbage Out.

Hospitals use clinically coded data for other reasons – payment. The way hospitals are now paid is based on the volume and complexity of that activity – Payment By Results (PbR) – using what are called Health Resource Groups (HRGs). This is a better and fairer design because hospitals with more complex (i.e. costly to manage) case loads get paid more per patient on average.  The HRG for each patient is determined by their clinical codes – including what are called the comorbidities – the other things that the patient has wrong with them. More comorbidities means more complex and more risky, so more money and more risk of death – roughly speaking.  So when PbR came in it became very important to code fully in order to get paid “properly”.  The problem was that before PbR the coding errors went largely unnoticed – especially the comorbidity coding. And the errors were biased – it is more likely to omit a code than to record an incorrect one, and errors of omission are harder to detect. This meant that by more complete coding (to attract more money) the estimated case-mix complexity would have gone up compared with the historical reference. So as actual (not estimated) NHS mortality has gone down slightly, the HSMR yardstick becomes even more distorted.  Hospitals that did not keep up with the Coding Game would look worse even though their actual risk and mortality may be unchanged.  This is the fundamental design flaw in all types of benchmarking based on self-reported data.
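The coding-depth effect can be illustrated with a toy case-mix model. Everything here is invented – the per-comorbidity risk weight is not the real Dr Foster model – but the direction of the bias is the point:

```python
def expected_deaths(patients, base_risk=0.02, risk_per_comorbidity=0.01):
    """Toy case-mix model (illustrative only): each *recorded* comorbidity
    adds to a patient's modelled risk of death."""
    return sum(base_risk + risk_per_comorbidity * codes for codes in patients)

def hsmr(observed, expected):
    return 100.0 * observed / expected

# Same 1,000 patients, same 60 actual deaths - only the coding depth differs.
shallow_coding = [1] * 1000   # one comorbidity recorded per patient
deep_coding = [3] * 1000      # fuller coding after PbR made it pay
print(hsmr(60, expected_deaths(shallow_coding)))  # shallow coders look worse
print(hsmr(60, expected_deaths(deep_coding)))     # deep coders look better
```

Nothing about the patients or the deaths changes between the two runs; only the completeness of the codes submitted, yet the apparent mortality ratio moves substantially.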

The actual problem here is even more serious. PbR is actually a payment for activity – not a payment for outcomes. It is calculated from what it cost to run the average NHS hospital using a technique called Reference Costing, which is the same method that manufacturing companies used to decide what price to charge for their products. It has another name – Absorption Costing.  The highest performers in the manufacturing world no longer use this out-of-date method. The implications of using Reference Costing and PbR in the NHS are profound and dangerous:

If NHS hospitals in general have poorly designed processes that create internal queues and require more bed days than actually necessary, then the cost of that “waste” becomes built into the future PbR tariff. This means average length of stay (LOS) is built into the price: above average LOS is financially penalised and below average LOS makes a profit.  There is no financial pressure to improve beyond average. This is called the Regression to the Mean effect.  Also, LOS is not a measure of quality – so there is a pressure to shorten length of stay for purely financial reasons – to generate a surplus to use to fund growth and capital investment.  That pressure is non-specific and indiscriminate.  PbR is necessary but it is not sufficient – it requires a quality-of-outcome metric to complete it.
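The LOS incentive can also be shown with toy numbers. The bed-day cost and LOS figures are invented; the point is who profits relative to the average that the tariff absorbs:

```python
def tariff_from_reference_cost(los_days_all_hospitals, cost_per_bed_day=250):
    """Reference-costing sketch: the tariff absorbs the *average* length
    of stay across all hospitals - waste included."""
    avg_los = sum(los_days_all_hospitals) / len(los_days_all_hospitals)
    return avg_los * cost_per_bed_day

# Illustrative LOS figures (days) for the same procedure at five trusts.
national_los = [4, 5, 6, 5, 5]
tariff = tariff_from_reference_cost(national_los)   # every trust is paid this

def margin(own_los_days, cost_per_bed_day=250):
    """What this trust makes (or loses) per case at the common tariff."""
    return tariff - own_los_days * cost_per_bed_day

print(margin(4))   # below-average LOS -> profit
print(margin(5))   # average LOS -> break even
print(margin(6))   # above-average LOS -> loss
```

The arithmetic shows the regression-to-the-mean trap: once a trust reaches the average it breaks even, and the financial incentive to improve further evaporates, whatever has happened to quality.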

So the PbR system is based on an out-of-date cost-allocation model and therefore leads to the very problems that are contributing to the MidStaffs crisis – financial pressure causing quality failures and increased risk of mortality.  MidStaffs may be a chance victim of a combination of factors coming together like a perfect storm – but those same factors are present throughout the NHS because they are built into the current design.

One solution is to move towards a more up-to-date financial model called stream costing. This uses similar data to reference costing but it estimates the “ideal” cost of the “necessary” work to achieve the intended outcome. This stream cost becomes the focus for improvement – the streams where there is the biggest gap between the stream cost and the reference cost are the focus of the redesign activity. Very often the root cause is just poor operational policy design; sometimes it is quality and safety design problems. Both are solvable without investment in extra capacity. The result is a higher quality, quicker, lower-cost stream. Win-win-win. And in the short term that is rewarded by a tariff income that exceeds cost and a lower HSMR.

Radically redesigning the financial model for healthcare is not a quick fix – and it requires a lot of other changes to happen first. So the sooner we start the sooner we will arrive. 

Robert Francis QC

Today is an important day.

The Robert Francis QC Report and recommendations from the Mid-Staffordshire Hospital Crisis has been published – and it is a sobering read.  The emotions that just the executive summary evoked in me were sadness, shame and anger.  Sadness for the patients, relatives, and staff who have been irreversibly damaged; shame that the clinical professionals turned a blind-eye; and anger that the root cause has still not been exposed to public scrutiny.

Click here to get a copy of the RFQC Report Executive Summary.

Click here to see the video of RFQC describing his findings. 

The root cause is ignorance at all levels of the NHS.  Not stupidity. Not malevolence. Just ignorance.

Ignorance of what is possible and ignorance of how to achieve it.

RFQC rightly focusses his recommendations on putting patients at the centre of healthcare and on making those paid to deliver care accountable for the outcomes.  Disappointingly, the report is notably thin on the financial dimension other than saying that financial targets took priority over safety and quality.  He is correct. They did. But the report does not say that this is unnecessary – it just says “in future put safety before finance” and in so doing he does not challenge the belief that we are playing a zero-sum-game: the assumption that higher-quality-always-costs-more.

This assumption is wrong and can easily be disproved.

A system that has been designed to deliver safety-and-quality-on-time-first-time-and-every-time costs less. And it costs less because the cost of errors, checking, rework, queues, investigation, compensation, inspectors, correctors, fixers, chasers, and all the other expensive-high-level-hot-air-generation-machinery that overburdens the NHS and that RFQC has pointed squarely at is unnecessary.  He says “simplify” which is a step in the right direction. The goal is to render it irrelevant.

The ignorance is ignorance of how to design a healthcare system that works right-first-time. The fact that the Francis Report even exists and is pointing its uncomfortable fingers-of-evidence at every level of the NHS from ward to government is tangible proof of this collective ignorance of system design.

And the good news is that this collective ignorance is also unnecessary … because the knowledge of how to design safe-and-affordable systems already exists. We just have to learn how. I call it 6M Design® – but the label is irrelevant – the knowledge exists and the evidence that it works exists.

So here are some of the RFQC recommendations viewed though a 6M Design® lens:       

1.131 Compliance with the fundamental standards should be policed by reference to developing the CQC’s outcomes into a specification of indicators and metrics by which it intends to monitor compliance. These indicators should, where possible, be produced by the National Institute for Health and Clinical Excellence (NICE) in the form of evidence-based procedures and practice which provide a practical means of compliance and of measuring compliance with fundamental standards.

This is the safety-and-quality outcome specification for a healthcare system design – the required outcome presented as a relevant metric in time-series format and qualified by context.  Only a stable outcome can be compared with a reference standard to assess the system capability. An unstable outcome metric requires inquiry to understand the root cause and an appropriate action to restore stability. A stable but incapable outcome performance requires redesign to achieve both stability and capability. And if the terms used above are unfamiliar then that is further evidence of system-design-ignorance.
 
1.132 The procedures and metrics produced by NICE should include evidence-based tools for establishing the staffing needs of each service. These measures need to be readily understood and accepted by the public and healthcare professionals.

This is the capacity-and-cost specification of any healthcare system design – the financial envelope within which the system must operate. The system capacity design works backwards from this constraint in the manner of “We have this much resource – what design of our system is capable of delivering the required safety and quality outcome with this capacity?”  The essence of this challenge is to identify the components of poor (i.e. wasteful) design in the existing systems and remove or replace them with less wasteful designs that achieve the same or better quality outcomes. This is not impossible but it does require system diagnostic and design capability. If the NHS had enough of those skills then the Francis Report would not exist.

1.133 Adoption of these practices, or at least their equivalent, is likely to help ensure patients’ safety. Where NICE is unable to produce relevant procedures, metrics or guidance, assistance could be sought and commissioned from the Royal Colleges or other third-party organisations, as felt appropriate by the CQC, in establishing these procedures and practices to assist compliance with the fundamental standards.

How to implement evidence-based research in the messy real world is the Elephant in the Room. It is possible but it requires techniques and tools that fall outside the traditional research and audit framework – or rather that sit between research and audit. This is where Improvement Science sits. The fact that the Report only mentions evidence-based practice and audit implies that the NHS is still ignorant of this gap and what fills it – and so it appears is RFQC.   

1.136 Information needs to be used effectively by regulators and other stakeholders in the system wherever possible by use of shared databases. Regulators should ensure that they use the valuable information contained in complaints and many other sources. The CQC’s quality risk profile is a valuable tool, but it is not a substitute for active regulatory oversight by inspectors, and is not intended to be.

Databases store data. Sharing databases will share data. Data is not information. Information requires data and the context for that data.  Furthermore having been informed does not imply either knowledge or understanding. So in addition to sharing information, the capability to convert information-into-decision is also required. And the decisions we want are called “wise decisions” which are those that result in actions and inactions that lead inevitably to the intended outcome.  The knowledge of how to do this exists but the NHS seems ignorant of it. So the challenge is one of education not of yet more investigation.

1.137 Inspection should remain the central method for monitoring compliance with fundamental standards. A specialist cadre of hospital inspectors should be established, and consideration needs to be given to collaborative inspections with other agencies and a greater exploitation of peer review techniques.

This is audit. This is the sixth stage of a 6M Design® – the Maintain step.  Inspectors need to know what they are looking for, the errors of commission and the errors of omission; and to know what those errors imply and what to do to identify and correct the root cause of these errors when discovered. The first cadre of inspectors will need to be fully trained in healthcare systems design and healthcare systems improvement – in short – they need to be Healthcare Improvementologists. And they too will need to be subject to the same framework of accreditation and accountability as those who work in the system they are inspecting.  This will be one of the greatest of the challenges. The fact that the Francis report exists implies that we do not have such a cadre. Who will train, accredit and inspect the inspectors? Who has proven themselves competent in reality (not rhetorically)?

1.163 Responsibility for driving improvement in the quality of service should therefore rest with the commissioners through their commissioning arrangements. Commissioners should promote improvement by requiring compliance with enhanced standards that demand more of the provider than the fundamental standards.

This means that commissioners will need to understand what improvement requires and to include that expectation in their commissioning contracts. This challenge is even greater than the creation of a “cadre of inspectors”. What is required is a “generation of competent commissioners” who are also experienced and who have demonstrated competence in healthcare system design. The Commissioners-of-the-Future will need to be experienced healthcare improvementologists.

The NHS is sick – very sick. The medicine it needs to restore its health and vitality does exist – and it will not taste very nice – but to withhold an effective treatment for a serious illness on that basis is clinical negligence.

It is time for the NHS to look in the mirror and take the strong medicine. The effect is quick – it will start to feel better almost immediately. 

To deliver safety and quality and quickly and affordably is possible – and if you do not believe that then you will need to muster the humility to ask to have the how demonstrated.


The Six Dice Game

<Ring Ring><Ring Ring>

Hello, you are through to the Improvement Science Helpline. How can we help?

This is Leslie, one of your apprentices.  Could I speak to Bob – my Improvement Science coach?

Yes, Bob is free. I will connect you now.

<Ring Ring><Ring Ring>

B: Hello Leslie, Bob here. What is on your mind?

L: Hi Bob, I have a problem that I do not feel my Foundation training has equipped me to solve. Can I talk it through with you?

B: Of course. Can you outline the context for me?

L: OK. The context is a department that is delivering an acceptable quality-of-service and is delivering on-time but is failing financially. As you know we are all being forced to adopt austerity measures and I am concerned that if their budget is cut then they will fail on delivery and may start cutting corners and then fail on quality too.  We need a win-win-win outcome and I do not know where to start with this one.

B: OK – are you using the 6M Design method?

L: Yes – of course!

B: OK – have you done The 4N Chart for the customer of their service?

L: Yes – it was their customers who asked me if I could help and that is what I used to get the context.

B: OK – have you done The 4N Chart for the department?

L: Yes. And that is where my major concerns come from. They feel under extreme pressure; they feel they are working flat out just to maintain the current level of quality and on-time delivery; they feel undervalued and frustrated that their requests for more resources are refused; they feel demoralized, demotivated and scared that their service may be ‘outsourced’. On the positive side they feel that they work well as a team and are willing to learn. I do not know what to do next.

B: OK. Despair not. This sounds like a very common and treatable system illness.  It is a stream design problem which may be the reason your Foundations training feels insufficient. Would you like to see how a Practitioner would approach this?

L: Yes please!

B: OK. Have you mapped their internal process?

L: Yes. It is a six-step process for each job. Each step has different requirements and is done by different people with different skills. In the past they had a problem with poor service quality so extra safety and quality checks were imposed by the Governance department.  Now the quality of each step is measured on a 1-6 scale and the quality of the whole process is the sum of the individual steps, so it is measured on a scale of 6 to 36. They have now been given a minimum quality target of 21 to achieve for every job. How they achieve that is not specified – it was left up to them.

B: OK – do they record their quality measurement data?

L: Yes – I have their report.

B: OK – how is the information presented?

L: As an average for the previous month which is reported up to the Quality Performance Committee.

B: OK – what was the average for last month?

L: Their average was 24 – so they do not have an issue delivering the required quality. The problem is the costs they are incurring, and they are being labelled by others as ‘inefficient’ – especially by the departments that are in budget, who are annoyed that this failing department keeps getting ‘bailed out’.

B: OK. One issue here is the quality reporting process is not alerting you to the real issue. It sounds from what you say that you have fallen into the Flaw of Averages trap.

L: I don’t understand. What is the Flaw of Averages trap?

B: The answer to your question will become clear. The finance issue is a symptom – an effect – it is unlikely to be the cause. When did this finance issue appear?

L: Just after the Safety and Quality Review. They needed to employ more agency staff to do the extra work created by having to meet the new Minimum Quality target.

B: OK. I need to ask you a personal question. Do you believe that improving quality always costs more?

L: I have to say that I am coming to that conclusion. Our Governance and Finance departments are always arguing about it. Governance state ‘a minimum standard of safety and quality is not optional’ and Finance say ‘but we are going out of business’. They are at loggerheads. The service departments get caught in the cross-fire.

B: OK. We will need to use reality to demonstrate that this belief is incorrect. Rhetoric alone does not work. If it did then we would not be having this conversation. Do you have the raw data from which the averages are calculated?

L: Yes. We have the data. The quality inspectors are very thorough!

B: OK – can you plot the quality scores for the last fifty jobs as a BaseLine chart?

L: Yes – give me a second. The average is 24 as I said.

B: OK – is the process stable?

L: Yes – there is only one flag for the fifty. I know from my Foundations training that is not a cause for alarm.

B: OK – what is the process capability?

L: I am sorry – I don’t know what you mean by that?

B: My apologies. I forgot that you have not completed the Practitioner training yet. The capability is the range between the red lines on the chart.

L: Um – the lower line is at 17 and the upper line is at 31.

B: OK – how many points lie below the target of 21?

L: None of course. They are meeting their Minimum Quality target. The issue is not quality – it is money.

There was a pause.  Leslie knew from experience that when Bob paused there was a surprise coming.

B: Can you email me your chart?

A cold-shiver went down Leslie’s back. What was the problem here? Bob had never asked to see the data before.

Sure. I will send it now.  The recent fifty is on the right, the data on the left is from after the quality inspectors went in and before the Minimum Quality target was imposed. This is the chart that Governance has been using as evidence to justify their existence because they are claiming the credit for improving the quality.

B: OK – thanks. I have got it – let me see.  Oh dear.

Leslie was shocked. She had never heard Bob use language like ‘Oh dear’.

There was another pause.

B: Leslie, what is the context for this data? What does the X-axis represent?

Leslie looked at the chart again – more closely this time. Then she saw what Bob was getting at. There were fifty points in the first group, and about the same number in the second group. That was not the interesting part. In the first group the X-axis went up to 50 in regular steps of five; in the second group it went from 50 to just over 149 and was no longer regularly spaced. Eventually she replied.

Bob, that is a really good question. My guess is that it is the quality of the completed work.

B: It is unwise to guess. It is better to go and see reality.

You are right. I knew that. It is drummed into us during the Foundations training! I will go and ask. Can I call you back?

B: Of course. I will email you my direct number.


<Ring Ring><Ring Ring>

B: Hello, Bob here.

L: Bob – it is Leslie. I am  so excited! I have discovered something amazing.

B: Hello Leslie. That is good to hear. Can you tell me what you have discovered?

L: I have discovered that better quality does not always cost more.

B: That is a good discovery. Can you prove it with data?

L: Yes I can!  I am emailing you the chart now.

B: OK – I am looking at your chart. Can you explain to me what you have discovered?

L: Yes. When I went to see for myself I saw that when a job failed the Minimum Quality check at the end then the whole job had to be re-done because there was no time to investigate and correct the causes of the failure.  The people doing the work said that they were helpless victims of errors that were made upstream of them – and they could not predict from one job to the next what the error would be. They said it felt like quality was a lottery and that they were just firefighting all the time. They knew that just repeating the work was not solving the problem but they had no other choice because they were under enormous pressure to deliver on-time as well. The only solution they could see was to get more resources but their requests were being refused by Finance on the grounds that there is no more money. They felt completely trapped.

B: OK. Can you describe what you did?

L: Yes. I saw immediately that there were so many sources of errors that it would be impossible for me to tackle them all. So I used the tool that I had learned in the Foundations training: the Niggle-o-Gram. That focussed us and led to a surprisingly simple, quick, zero-cost process design change. We deliberately did not remove the Inspection-and-Correction policy because we needed to know what the impact of the change would be. Oh, and we did one other thing that challenged the current methods. We plotted every attempt, both the successes and the failures, on the BaseLine chart so we could see both the quality and the work done on one chart.  And we updated the chart every day and posted the chart on the notice board so everyone in the department could see the effect of the change that they had designed. It worked like magic! They have already slashed their agency staff costs, the whole department feels calmer and they are still delivering on-time. And best of all they now feel that they have the energy and time to start looking at the next niggle. Thank you so much! Now I see how the tools and techniques I learned in Foundations are so powerful and now I understand better the reason we learned them first.

B: Well done Leslie. You have taken an important step to becoming a fully fledged Practitioner. You have learned some critical lessons in this challenge.


This scenario is fictional but realistic.

And it has been designed so that it can be replicated easily using a simple game that requires only pencil, paper and some dice.

If you do not have some dice handy then you can use this little program that simulates rolling six dice.

The Six Digital Dice program (for PC only).

Instructions
1. Prepare a piece of A4 squared paper with the Y-axis marked from zero to 40 and the X-axis from 1 to 80.
2. Roll six dice and record the score on each (or roll one die six times) – then calculate the total.
3. Plot the total on your graph. Left-to-right in time order. Link the dots with lines.
4. After 25 dots look at the chart. It should resemble the leftmost data in the charts above.
5. Now draw a horizontal line at 21. This is the Minimum Quality Target.
6. Keep rolling the dice – six per cycle, adding the totals to the right of your previous data.

But this time if the total is less than 21 then repeat the cycle of six dice rolls until the score is 21 or more. Record on your chart the output of all the cycles – not just the acceptable ones.

7. Keep going until you have 25 acceptable outcomes. As long as it takes.

Now count how many cycles you needed to complete in order to get 25 acceptable outcomes.  You should find that it is about twice as many as before you “imposed” the Inspect-and-Correct QI policy.
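If you would rather check this without paper and dice, here is a minimal Python sketch of the same game. The function names are illustrative, not part of the original game; the dice are simulated as fair six-sided dice, per the instructions above.

```python
import random

def roll_total(rng):
    """One cycle: roll six dice and return the total (range 6 to 36)."""
    return sum(rng.randint(1, 6) for _ in range(6))

def cycles_needed(n_acceptable, target, rng):
    """Count the cycles needed to collect n_acceptable totals at or above
    target, re-doing any cycle that fails the check, which is the
    Inspect-and-Correct policy described above."""
    cycles = accepted = 0
    while accepted < n_acceptable:
        cycles += 1
        if roll_total(rng) >= target:
            accepted += 1
    return cycles

rng = random.Random(1)
# Before the target is imposed every cycle is accepted: 25 outcomes need 25 cycles.
before = 25
# After the Minimum Quality target of 21 is imposed, failed cycles are redone.
after = cycles_needed(25, 21, rng)
print(before, after)
```

Because the six-dice total averages 21, just under half of all cycles fail the check, so the total work roughly doubles – the same result the paper-and-dice version produces.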

This illustrates the problem of an Inspection-and-Correction design for quality improvement.  It does improve the quality of the final output – but at a higher cost.

We are treating the symptoms (effects) and ignoring the disease (causes).

The internal design of the process is unchanged so it is still generating mistakes.

How much quality improvement you get and how much it costs you is determined by the design of the underlying process – which has not changed. There is a Law of Diminishing Returns here – and a big risk.

The risk is that if quality improves as the result of applying a quality target then it encourages the Governance thumbscrews to be tightened further and forces those delivering the service further into the cross-fire between Governance and Finance.

The other negative consequence of the Inspect-and-Correct approach is that it increases both the average and the variation in lead time, which also fuels the calls for more targets, more sticks and more resources – and pushes costs up even further.

The lesson from this simple exercise seems clear.

The better strategy for improving quality is to design the root causes of errors out of the processes  because then we will get improved quality and improved delivery and improved productivity and we will discover that we have improved safety as well.  Win-win-win-win.

The Six Dice Game is a simpler version of the famous Red Bead Game that W Edwards Deming used to explain why, in the modern world, the arbitrary-target-driven-command-and-control-stick-and-carrot style of performance management creates more problems than it solves.

The illusion is of short-term gain but the reality is of long-term pain.

And if you would like to see and hear Deming talking about the science of improvement there is a video of him speaking in 1984. He is at the bottom of the page.  Click here.

The F Word

There is an F-word that organisations do not like to use – except maybe in conspiratorial corridor conversations.

What word might that be? What are good candidates for it?

Finance perhaps?

Certainly a word that many people do not want to utter – especially when the financial picture is not looking very rosy. And when the word finance is mentioned in meetings there is usually a groan of anguish. So yes, finance is a good candidate – but it is not the F-word.

Failure maybe?

Yes – definitely a word that is rarely uttered openly. The concept of failure is just not acceptable. Organisations must succeed, sustain and grow. Talk of failure is for losers not for winners. To talk about failure is tempting fate. So yes, another excellent candidate – but it is not the F-word.

OK – what about Fear?

That is definitely something no one likes to admit to.  Especially leaders. They are expected to be fearless. Fear is a sign of weakness! Once you start letting the fear take over then panic starts to set in – then rash decisions follow then you are really on the slippery slope. Your organisation fragments into warring factions and your fate is sealed. That must be the F-word!

Nope.  It is another very worthy candidate but it is not the F-word.


[reveal heading=”Click here to reveal the F-word“]


The dreaded F-word is Feedback.

We do not like feedback.  We do not like asking for it. We do not like giving it. We do not like talking about it. Our systems seem to be specifically designed to exclude it. Potentially useful feedback information is kept secret, confidential, for-our-eyes only.  And if it is shared it is emasculated and anonymized.

And the brave souls who are prepared to grasp the nettle – the 360 Feedback Zealots – are forced to cloak feedback with secrecy and confidentiality. We are expected to ask  for feedback, to take it on the chin, but not to know who or where it came from. So to ease the pain of anonymous feedback we are allowed to choose our accusers. So we choose those who we think will not point out our blindspot. Which renders the whole exercise worthless.

And when we actually want feedback we extract it mercilessly – like extracting blood from a reluctant stone. And if you do not believe me then consider this question: Have you ever been to a training course where your ‘certificate of attendance’ was withheld until you had completed the feedback form? The trainers do this for good reason. We just hate giving feedback. Any feedback. Positive or negative. So if they do not extract it from us before we leave they do not get any.

Unfortunately, extracting feedback from us under coercion is like acquiring a confession under torture – it distorts the message and renders it worthless.

What is the problem here?  What are we scared of?


We all know the answer to the question.  We just do not want to point at the elephant in the room.

We are all terrified of discovering that we have the organisational equivalent of body-odour. Something deeply unpleasant about our behaviour that we are blissfully unaware of but that everyone else can see as plain as day. Our behaviour blindspot. The thing we would cringe with embarrassment about if we knew. We are social animals – not solitary ones. We depend on feedback yet we fear it too.

We lack the courage and humility to face our fear so we resort to denial. We avoid feedback like the plague. Feedback becomes the F-word.

But where did we learn this feedback phobia?

Maybe we remember the playground taunts from the Bullies and their Sycophants? From the poisonous Queen-Bees and their Wannabees?  Maybe we tried to protect ourselves with incantations that our well-meaning parents taught us. Spells like “Sticks and stones may break my bones but names will never hurt me“.  But being called names does hurt. Deeply. And it hurts because we are terrified that there might be some truth in the taunt.

Maybe we learned to turn a blind-eye and a deaf-ear; to cross the street at the first sign of trouble; to turn the other cheek? Maybe we just learned to adopt the Victim role? Maybe we were taught to fight back? To win at any cost? Maybe we were not taught how to defuse the school yard psycho-games right at the start?  Maybe our parents and teachers did not know how to teach us? Maybe they did not know themselves?  Maybe the ‘innocent’ schoolyard games are actually much more sinister?  Maybe we carry them with us as habitual behaviours into adult life and into our organisations? And maybe the Bullies and Queen-Bees learned something too? Maybe they learned that they could get away with it? Maybe they got to like the Persecutor role and its seductive musk of power? If so then maybe the very last thing the Bullies and Queen-Bees will want to do is to encourage open, honest feedback – especially about their behaviour. Maybe that is the root cause of the conspiracy of silence? Maybe?

But what is the big deal here?

The ‘big deal’ is that this cultural conspiracy of silence is toxic.  It is toxic to trust. It is toxic to teams. It is toxic to morale.  It is toxic to motivation. It is toxic to innovation. It is toxic to improvement. It is so toxic that it kills organisations – from the inside. Slowly.

Ouch! That feels uncomfortably realistic. So what is the problem again – exactly?

The problem is a deliberate error of omission – the active avoidance of feedback.

So ….. if it were that – how would we prove that is the root cause? Eh?

By correcting the error of omission and then observing what happens.


And this is where it gets dangerous for leaders. They are skating on politically thin ice and they know it.

Subjective feedback is very emotive.  If we ask ten people for their feedback on us we will get ten different replies – because no two people perceive the world (and therefore us) the same way.  So which is ‘right’? Which opinions do we take heed of and which ones do we discount? It is a psycho-socio-political minefield. So no wonder we avoid stepping onto the cultural barbed-wire!

There is an alternative.  Stick to reality and avoid rhetoric. Stick to facts and avoid feelings. Feed back the facts of how the organisational system is behaving to everyone in the organisation.

And the easiest way to do that is with three time-series charts that are updated and shared at regular and frequent intervals.

First – the count of safety and quality failure near-misses for each interval – for at least 50 intervals.

Second – the delivery time of our product or service for each customer over the same time period.

Third – the revenue generated and the cost incurred for each interval for the same 50 intervals.

No ratios, no targets, no balanced scorecard.

Just the three charts that paint the big picture of reality. And it might not be a very pretty picture.

But why at least 50 intervals?

So we can see the long term and short term variation over time. We need both … because …

Our Safety Chart shows that near misses keep happening despite all the burden of inspection and correction.

Our Delivery Chart shows that our performance is distorted by targets and the Horned Gaussian stalks us.

Our Viability Chart shows that our costs are increasing as we pay dearly for past mistakes and our revenue is decreasing as our customers protect their purses and their persons by staying away.

That is the not-so-good news.

The good news is that as soon as we have a multi-dimensional-frequent-feedback loop installed we will start to see improvement. It happens like magic. And the feedback accelerates the improvement.

And the news gets better.

To make best use of this frequent feedback we just need to include in our Constant Purpose – to improve safety, delivery and viability. And then the final step is to link the role of every person in the organisation to that single win-win-win goal. So that everyone can see how they contribute and how their job is worthwhile.

Shared Goals, Clear Roles and Frequent Feedback.

And if you resonate with this message then you will resonate with “The Three Signs of a Miserable Job” by Patrick Lencioni.

And if you want to improve your feedback-ability then a really simple and effective feedback tool is The 4N Chart

And please share your feedback.

[/reveal]

The Three R’s

Processes are like people – they get poorly – sometimes very poorly.

Poorly processes present with symptoms. Symptoms such as criticism, complaints, and even catastrophes.

Poorly processes show signs. Signs such as fear, queues and deficits.

So when a process gets very poorly what do we do?

We follow the Three R’s

1-Resuscitate
2-Review
3-Repair

Resuscitate means to stabilize the process so that it is not getting sicker.

Review means to quickly and accurately diagnose the root cause of the process sickness.

Repair means to make changes that will return the process to a healthy and stable state.

So the concept of ‘stability’ is fundamental and we need to understand what that means in practice.

Stability means ‘predictable within limits’. It is not the same as ‘constant’. Constant is stable but stable is not necessarily constant.

Predictable implies time – so any measure of process health must be presented as time-series data.

We are now getting close to a working definition of stability: “a useful metric of system performance that is predictable within limits over time”.
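One way to make ‘predictable within limits’ operational is the XmR (individuals chart) calculation – the ‘red lines’ that mark the process capability earlier in this chapter are natural process limits of this kind. What follows is a minimal sketch of a generic XmR computation, not the internals of any specific charting tool:

```python
def xmr_limits(values):
    """Compute the centre line and natural process limits of an XmR
    (individuals) chart from a time-ordered series of measurements."""
    centre = sum(values) / len(values)
    # Average moving range between successive points in time order.
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # Natural process limits: centre +/- 2.66 * average moving range.
    return centre - 2.66 * avg_mr, centre, centre + 2.66 * avg_mr

def is_stable(values):
    """A simple stability test: no point falls outside the limits."""
    lower, _, upper = xmr_limits(values)
    return all(lower <= v <= upper for v in values)
```

For example, a quality series hovering around 24 tests as stable, while the same series with a sudden spike to 40 does not – which is exactly the ‘predictable within limits’ distinction.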

So what is a ‘useful metric’?

There will be at least three useful metrics for every system: a quality metric, a time metric and a money metric.

Quality is subjective. Money is objective. Time is both.

Time is the one to start with – because it is the easiest to measure.

And if we treat our system as a ‘black box’ then from the outside there are three inter-dependent time-related metrics. These are external process metrics (EPMs) – sometimes called Key Performance Indicators (KPIs).

Flow in – also called demand
Flow out – also called activity
Delivery time – which is the time a task spends inside our system – also called the lead time.
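All three of these external process metrics can be computed directly from the start and finish events of completed tasks. Here is a minimal sketch, assuming each task is recorded as a (start, finish) pair in consistent time units; the function and variable names are illustrative:

```python
def external_process_metrics(tasks, window_start, window_end):
    """Compute the three time-related EPMs for one reporting window.
    Each task is a (start, finish) pair of event times."""
    # Demand: tasks that arrived (started) inside the window.
    demand = sum(1 for start, _ in tasks if window_start <= start < window_end)
    # Activity: tasks that were completed (finished) inside the window.
    activity = sum(1 for _, finish in tasks if window_start <= finish < window_end)
    # Lead time: time each completed task spent inside the system.
    lead_times = [finish - start for start, finish in tasks
                  if window_start <= finish < window_end]
    return demand, activity, lead_times

# Four tasks with start/finish times in arbitrary but consistent units.
tasks = [(0, 5), (2, 9), (4, 6), (11, 15)]
demand, activity, lead_times = external_process_metrics(tasks, 0, 10)
```

In this example three tasks start and three finish inside the window, with lead times of 5, 7 and 2 units – three numbers, three metrics, all from the same two event columns.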

But this is all starting to sound like rather dry, conceptual, academic mumbo-jumbo … so let us add a bit of realism and drama – let us tell this as a story …

[reveal heading=”Click here to reveal the story …“] 


Picture yourself as the manager of a service that is poorly. Very poorly. You are getting a constant barrage of criticism and complaints and the occasional catastrophe. Your service is struggling to meet the required delivery time performance. Your service is struggling to stay in budget – let alone meet future cost improvement targets. Your life is a constant fire-fight and you are getting very tired and depressed. Nothing you try seems to make any difference. You are starting to think that anything is better than this – even unemployment! But you have a family to support and jobs are hard to come by in austere times so jumping is not an option. There is no way out. You feel you are going under. You feel you are drowning. You feel terrified and helpless!

In desperation you type “Management fire-fighting” into your web search box and among the list of hits you see “Process Improvement Emergency Service”.  That looks hopeful. The link takes you to a website and a phone number. What have you got to lose? You dial the number.

It rings twice and a calm voice answers.

“You are through to the Process Improvement Emergency Service – what is the nature of the process emergency?”

“Um – my service feels like it is on fire and I am drowning!”

The calm voice continues in a reassuring tone.

“OK. Have you got a minute to answer three questions?”

“Yes – just about”.

“OK. First question: Is your service safe?”

“Yes – for now. We have had some catastrophes but have put in lots of extra safety policies and checks which seems to be working. But they are creating a lot of extra work and pushing up our costs and even then we still have lots of criticism and complaints.”

“OK. Second question: Is your service financially viable?”

“Yes, but not for long. Last year we just broke even, this year we are projecting a big deficit. The cost of maintaining safety is ‘killing’ us.”

“OK. Third question: Is your service delivering on time?”

“Mostly but not all of the time, and that is what is causing us the most pain. We keep getting beaten up for missing our targets.  We constantly ask, argue and plead for more capacity and all we get back is ‘that is your problem and your job to fix – there is no more money’. The system feels chaotic. There seems to be no rhyme nor reason to when we have a good day or a bad day. All we can hope to do is to spot the jobs that are about to slip through the net in time; to expedite them; and to just avoid failing the target. We are fire-fighting all of the time and it is not getting better. In fact it feels like it is getting worse. And no one seems to be able to do anything other than blame each other.”

There is a short pause then the calm voice continues.

“OK. Do not panic. We can help – and you need to do exactly what we say to put the fire out. Are you willing to do that?”

“I do not have any other options! That is why I am calling.”

The calm voice replied without hesitation. 

“We all always have the option of walking away from the fire. We all need to be prepared to exercise that option at any time. For us to be able to help, you will need to understand that and you will need to commit to tackling the fire. Are you willing to commit to that?”

You are surprised and strangely reassured by the clarity and confidence of this response and you take a moment to compose yourself.

“I see. Yes, I agree that I do not need to get toasted personally and I understand that you cannot parachute in to rescue me. I do not want to run away from my responsibility – I will tackle the fire.”

“OK. First we need to know how stable your process is on the delivery time dimension. Do you have historical data on demand, activity and delivery time?”

“Hey! Data is one thing I do have – I am drowning in the stuff! RAG charts that blink at me like evil demons! None of it seems to help though – the more data I get sent the more confused I become!”

“OK. Do not panic.  The data you need is very specific. We need the start and finish events for the most recent one hundred completed jobs. Do you have that?”

“Yes – I have it right here on a spreadsheet – do I send the data to you to analyse?”

“There is no need to do that. I will talk you through how to do it.”

“You mean I can do it now?”

“Yes – it will only take a few minutes.”

“OK, I am ready – I have the spreadsheet open – what do I do?”

“Step 1. Arrange the start and finish events into two columns with a start and finish event for each task on each row.”

You copy and paste the data you need into a new worksheet. 

“OK – done that”.

“Step 2. Sort the two columns into ascending order using the start event.”

“OK – that is easy”.

“Step 3. Create a third column and for each row calculate the difference between the start and the finish event for that task. Please label it ‘Lead Time’.”

“OK – do you want me to calculate the average Lead Time next?”

There was a pause. Then the calm voice continued but with a slight tinge of irritation.

“That will not help. First we need to see if your system is unstable. We need to avoid the Flaw of Averages trap. Please follow the instructions exactly. Are you OK with that?”

This response was a surprise and you are starting to feel a bit confused.    

“Yes – sorry. What is the next step?”

“Step 4: Plot a graph. Put the Lead Time on the vertical axis and the start time on the horizontal axis.”

“OK – done that.”

“Step 5: Please describe what you see.”

“Um – it looks to me like a cave full of stalactites. The top is almost flat, there are some spikes, but the bottom is all jagged.”

“OK. Step 6: Does the pattern on the left-side and on the right-side look similar?”

“Yes – it does not seem to be rising or falling over time. Do you want me to plot the smoothed average over time or a trend line? They are options on the spreadsheet software. I use them all the time!”

The calm voice paused then continued with the irritated overtone again.

“No. There is no value in doing that. Please stay with me here. A linear regression line is meaningless on a time series chart. You may be feeling a bit confused. It is common to feel confused at this point but the fog will clear soon. Are you OK to continue?”

An odd feeling starts to grow in you: a mixture of anger, sadness and excitement. You find yourself muttering “But I spent my own hard-earned cash on that expensive MBA where I learned how to do linear regression and data smoothing because I was told it would be good for my career progression!”

?“I am sorry I did not catch that? Could you repeat it for me?”

“Um – sorry. I was talking to myself. Can we proceed to the next step?”

?”OK. From what you say it sounds as if your process is stable – for now. That is good.  It means that you do not need to Resuscitate your process and we can move to the Review phase and start to look for the cause of the pain. Are you OK to continue?”

An uncomfortable feeling is starting to form – one that you cannot quite put your finger on.

“Yes – please”. 

?“Step 7: What is the value of the Lead Time at the ‘cave roof’?”

“Um – about 42”

?“OK – Step 8: What is your delivery time target?”

“42”

?“OK – Step 9: How is your delivery time performance measured?”

“By the percentage of tasks that are delivered on time each month. Our target is better than 95%. If we fail in any month then we are named-and-shamed at the monthly performance review meeting and we have to explain why and what we are going to do about it. If we succeed then we are spared the ritual humiliation and we are rewarded by watching someone else being mauled instead. There is always someone in the firing line and attendance at the meeting is not optional!”

You also wanted to say that the data you submit is not always completely accurate and that you often expedite tasks just to avoid missing the target – in full knowledge that the work had not been completed to the required standard. But you hold that back. Someone might be listening.

There was a pause. Then the calm voice continued with no hint of surprise. 

?“OK. Step 10. The most likely diagnosis here is a DRAT. You have probably developed a Gaussian Horn that is creating the emotional pain and that is fuelling the fire-fighting. Do not panic. This is a common and curable process illness.”

You look at the clock. The conversation has taken only a few minutes. Your feeling of panic is starting to fade and a sense of relief and curiosity is growing. Who are these people?

“Can you tell me more about a DRAT? I am not familiar with that term.”

?“Yes.  Do you have two minutes to continue the conversation?”

“Yes indeed! You have my complete attention for as long as you need. The emails can wait.”

The calm voice continues.

?“OK. I may need to put you on hold or call you back if another emergency call comes in. Are you OK with that?”

“You mean I am not the only person feeling like this?”

?“You are not the only person feeling like this. The process improvement emergency service, or PIES as we call it, receives dozens of calls like this every day – from organisations of every size and type.”

“Wow! And what is the outcome?”

There was a pause. Then the calm voice continued with an unmistakable hint of pride.

?“We have a 100% success rate to date – for those who commit. You can look at our performance charts and the client feedback on the website.”

“I certainly will! So can you explain what a DRAT is?” 

And as you ask this you are thinking to yourself ‘I wonder what happened to those who did not commit?’ 

The calm voice interrupts your train of thought with a well-practiced explanation.

?“DRAT stands for Delusional Ratio and Arbitrary Target. It is a very common management reaction to unintended negative outcomes such as customer complaints. The concept of metric-ratios-and-performance-specifications is not wrong; it is just applied indiscriminately. Using DRATs can drive short-term improvements but over a longer time-scale they always make the problem worse.”

One thought is now reverberating in your mind. “I knew that! I just could not explain why I felt so uneasy about how my service was being measured.” And now you have a new feeling growing – anger.  You control the urge to swear and instead you ask:

“And what is a Horned Gaussian?”

The calm voice was expecting this question.

?“It is easier to demonstrate than to explain. Do you still have your spreadsheet open and do you know how to draw a histogram?”

“Yes – what do I need to plot?”

?“Use the Lead Time data and set up ten bins in the range 0 to 50 with equal intervals. Please describe what you see”.

It takes you only a few seconds to do this.  You draw lots of histograms – most of them very colourful but meaningless. No one seems to mind though.

“OK. The histogram shows a sort of heap with a big spike on the right hand side – at 42.”

The calm voice continued – this time with a sense of satisfaction.

?“OK. You are looking at the Horned Gaussian. The hump is the Gaussian and the spike is the Horn. It is a sign that your complex adaptive system behaviour is being distorted by the DRAT. It is the Horn that causes the pain and the perpetual fire-fighting. It is the DRAT that causes the Horn.”

“Is it possible to remove the Horn and put out the fire?”

?“Yes.”

This is what you wanted to hear and you cannot help cutting to the closure question.

“Good. How long does that take and what does it involve?”

The calm voice was clearly expecting this question too.

?“The Gaussian Horn is a non-specific reaction – it is an effect – it is not the cause. To remove it and to ensure it does not come back requires treating the root cause. The DRAT is not the root cause – it is also a knee-jerk reaction to the symptoms – the complaints. Treating the root cause requires learning how to diagnose the specific cause of the lead time performance failure. There are many possible contributors to lead time and you need to know which are present, because if you get the diagnosis wrong you will make an unwise decision, take the wrong action and exacerbate the problem.”

Something goes ‘click’ in your head and suddenly your fog of confusion evaporates. It is like someone just switched a light on.

“Ah Ha! You have just explained why nothing we try seems to work for long – if at all.  How long does it take to learn how to diagnose and treat the specific root causes?”

The calm voice was expecting this question and seemed to switch to the next part of the script.

?“It depends on how committed the learner is and how much unlearning they have to do in the process. Our experience is that it takes a few hours of focussed effort over a few weeks. It is rather like learning any new skill. Guidance, practice and feedback are needed. Just about anyone can learn how to do it – but paradoxically it takes longer for the more experienced and, can I say, cynical managers. We believe they have more unlearning to do.”

You are now feeling a growing sense of urgency and excitement.

“So it is not something we can do now on the phone?”

?“No. This conversation is just the first step.”

You are eager now – sitting forward on the edge of your chair and completely focussed.

“OK. What is the next step?”

There is a pause. You sense that the calm voice is reviewing the conversation and coming to a decision.

?“Before I can answer your question I need to ask you something. I need to ask you how you are feeling.”

That was not the question you expected! You are not used to talking about your feelings – especially to a complete stranger on the phone – yet strangely you do not sense that you are being judged. You have a growing feeling of trust in the calm voice.

You pause, collect your thoughts and attempt to put your feelings into words. 

“Er – well – a mixture of feelings actually – and they changed over time. First I had a feeling of surprise that this seems so familiar and straightforward to you; then a sense of resistance to the idea that my problem is fixable; then a sense of confusion because what you have shown me challenges everything I have been taught; then a feeling of distrust that there must be a catch; and then a fear of embarrassment if I do not spot the trick. Then, when I put my natural scepticism to one side and considered the possibility as real, there was a feeling of anger that I was not taught any of this before; and then a feeling of sadness for the years of wasted time and frustration from battling something I could not explain.  Eventually I started to feel that my cherished impossibility belief was being shaken to its roots. And then I felt a growing sense of curiosity, optimism and even excitement that is also tinged with a fear of disappointment and of having my hopes dashed – again.”

There was a pause – as if the calm voice was digesting this hearty meal of feelings. Then the calm voice stated:

?“You are experiencing the Nerve Curve. It is normal and expected. It is a healthy sign. It means that the healing process has already started. You are part of your system. You feel what it feels – it feels what you do. The sequence of negative feelings: the shock, denial, anger, sadness, depression and fear will subside with time and the positive feelings of confidence, curiosity and excitement will replace them. Do not worry. This is normal and it takes time. I can now suggest the next step.”

You now feel like you have just stepped off an emotional rollercoaster – scary yet exhilarating at the same time. A sense of relief sweeps over you. You have shared your private emotional pain with a stranger on the phone and the world did not end! There is hope.

“What is the next step?”

This time there was no pause.

?“To commit to learning how to diagnose and treat your process illnesses yourself.”

“You mean you do not sell me an expensive training course or send me a sharp-suited expert who will come and tell me what to do and charge me a small fortune?”

There is an almost sarcastic tone to your reply that you regret as soon as you have spoken.

Another pause.  An uncomfortably long one this time. You sense the calm voice knows that you know the answer to your own question and is waiting for you to answer it yourself.

You answer your own question.  

“OK. I guess not. Sorry for that. Yes – I am definitely up for learning how! What do I need to do?”

?“Just email us. The address is on the website. We will outline the learning process. It is neither difficult nor expensive.”

The way this reply was delivered – calmly and matter-of-factly – was reassuring but it also promoted a new niggle – a flash of fear.

“How long have I got to learn this?”

This time the calm voice had an unmistakable sense of urgency that sent cold prickles down your spine.

?“Delay will add no value. You are being stalked by the Horned Gaussian. This means your system is on the edge of a catastrophe cliff. It could tip over at any time. You cannot afford to relax. You must maintain all your current defences. It is a learning-by-doing process. The sooner you start to learn-by-doing the sooner the fire starts to fade and the sooner you move away from the edge of the cliff.”

“OK – I understand – and I do not know why I did not seek help a long time ago.”

The calm voice replied simply.

?”Many people find seeking help difficult. Especially senior people”.

Sensing that the conversation is coming to an end you feel compelled to ask:

“I am curious. Where do the DRATs come from?”

?“Curiosity is a healthy attitude to nurture. We believe that DRATs originated in finance departments – where they were originally called Fiscal Averages, Ratios and Targets.  At some time in the past they were sucked into operations and governance departments by a knowledge vacuum created by an unintended error of omission.”

You are not quite sure what this unfamiliar language means and you sense that you have strayed outside the scope of the “emergency script”, but the phrase ‘error of omission’ sounds interesting and pricks your curiosity. You ask:

“What was the error of omission?”

?“We believe it was not investing in learning how to design complex adaptive value systems to deliver capable win-win-win performance. Not investing in learning the Science of Improvement.”

“I am not sure I understand everything you have said.”

?“That is OK. Do not worry. You will. We look forward to your email.  My name is Bob by the way.”

“Thank you so much Bob. I feel better just having talked to someone who understands what I am going through and I am grateful to learn that there is a way out of this dark pit of despair. I will look at the website and send the email immediately.”

?”I am happy to have been of assistance.”
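The analysis that the calm voice walks through – lead time as finish minus start, a run chart to check stability, then a ten-bin histogram over the range 0 to 50 – can be sketched in a few lines of Python. This is a minimal simulation, not data from the story: the task mix, the numbers and the expediting rule are illustrative assumptions; only the target of 42 and the histogram range come from the conversation.

```python
import random

random.seed(1)
TARGET = 42  # the delivery time target - the flat 'cave roof' on the run chart

# Hypothetical data: 300 tasks, a routine majority plus a slow-moving minority.
# Any task heading past the target is expedited so that it completes exactly
# on target - the behaviour that creates the Horn.
lead_times = []
for start in range(300):
    if random.random() < 0.8:
        natural = random.gauss(20, 6)     # routine tasks: the Gaussian hump
    else:
        natural = random.uniform(30, 60)  # slow tasks: the long tail
    lead_times.append(max(1.0, min(natural, TARGET)))

# The 'cave roof' sits at the target: no recorded lead time exceeds it
assert max(lead_times) <= TARGET

# The histogram: ten equal bins spanning 0 to 50
bin_counts = [0] * 10
for lt in lead_times:
    bin_counts[min(int(lt // 5), 9)] += 1

print(bin_counts)
```

With no expediting, the hump would tail off smoothly past 42; clipping at the target empties the right-hand bins and piles their contents onto the bin containing 42 – the Horn on the Gaussian.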


Safety by Despair, Desire or Design?

Imagine the health and safety implications of landing a helicopter carrying a critically ill patient on the roof of a hospital.

Consider the possible number of ways that this scenario could go horribly wrong. But in reality it does not because this is a very visible hazard and the associated risks are actively mitigated.

It is much more dangerous for a slightly ill patient to enter the doors of the hospital on their own two legs.  Surely not!  How can that be?

First the reality – the evidence.

Repeated studies have shown that about 1 in 300 emergency admissions to hospital ends in an avoidable death. And it is not just weekends that are risky. That means about 1 person per week for each large acute hospital in England – about a jumbo-jet full of people every week across England. If you want to see the evidence click here to get a copy of a recent study.

How long would an airline stay in business if it crashed one plane full of passengers every week?

And how do we know that these are the risks? Well by looking at hospitals who have recognised the hazards and the risks and have actively done something about it. The ones that have used Improvement Science – and improved.


In one hospital the death rate from a common, high-risk emergency was significantly reduced overnight simply by designing and implementing a protocol that ensured these high-risk patients were admitted to the same ward. It cost nothing to do. No extra staff or extra beds. The effect was a consistently better level of care through proactive medical management. Preventing risk rather than correcting harm. The outcome was not just fewer deaths – the survivors did better too. More of them returned to independent living – which had a huge financial implication for the cost of long-term care. It was cheaper for the healthcare system. But that benefit was felt in a different budget, so there was no direct financial reward to the hospital for improving the outcome.  So the improvement was neither celebrated nor sustained. Finance trumped Governance. Desire to improve safety is not enough.


Eventually and inevitably the safety issue will resurface and bite back.  The Mid Staffordshire Hospital debacle is a timely reminder. Eventually despair will drive change – but it will come at a high price.  The emotional knee-jerk reaction driven by public outrage will be to add yet more layers of bureaucracy and cost: more inspectors, inspections and delays.  The knee-jerk is not designed to understand and correct the root cause – it is driven by that toxic combination of ignorance and confidence that goes by the name of arrogance.


The reason that the helicopter-on-the-hospital-roof scenario is safer is because it is designed to be – and one of the tools used in safe process design is called Failure Modes and Effects Analysis, or FMEA.

So if anyone reading this is in a senior clinical or senior managerial role in a hospital that has any safety issues – and has not heard of FMEA – then they have a golden opportunity to learn a skill that will lead to a safer-by-design hospital.

Safer-by-design hospitals are less frightening to walk into, less demotivating to work in and cheaper to run.  Everyone wins.

If you want to learn more now then click here for a short summary of FMEA from the Institute of Healthcare Improvement.

It was written in 2004. That is eight years ago.

The Frightening Cost Of Fear

The recurring theme this week has been safety and risk.

Specifically in a healthcare context. Most people are not aware just how risky our current healthcare systems are. Those who work in healthcare are much more aware of the dangers but they seem powerless to do much to make their systems safer for patients.


The shroud-waving zealots who rant on about safety often use a very unhelpful quotation. They say “Every system is perfectly designed to deliver the performance it does“. The implication is that when the evidence shows that our healthcare systems are dangerous … then … we designed them to be dangerous.  The reaction from the audience is emotional and predictable: “We did not intend this so do not try to pin the blame on us!”  The well-intentioned shroud-waving safety zealot loses whatever credibility they had and the collective swamp of cynicism and despair gets a bit deeper.


The warning-word here is design – because it has many meanings.  The design of a system can mean “what the system is” in the sense of a blueprint. The design of a system can also mean “how the blueprint was created”.  This process sense is the trap – because it implies intention.  Design needs a purpose – the intended outcome – so to say an unsafe system has been designed is to imply that it was intended to be unsafe. This is incorrect.

The message in the emotional backlash that our well-intended zealot provoked is “You said we intended bad things to happen which is not correct so if you are wrong on that fundamental belief then how can I trust anything else you say?“. This is the reason zealots lose credibility and actually make improvement less likely to happen.


The reality is not that the system was designed to be unsafe – it is that it was not designed not to be. The double negatives are intentional. The two statements are not the same.


The default way of the Universe is evolutionary (which is unintentional and reactive) and chaotic (which is unstable and unsafe). To design a system to be not-unsafe we need to understand Two Sciences – Design Science and Safety Science. Only then can we proactively and intentionally design safe, stable and trustable systems.  If we do nothing and do not invest in mastering the Two Sciences then we will get the default outcome: unintended unsafety.  This is what the uncomfortable evidence says we have.


So where does the Frightening Cost of Fear come in?

If our system is unintentionally and unpredictably unsafe then of course we will try to protect ourselves from the blame which inevitably will follow from disappointed customers.  We fear the blame partly because we know it is justified and partly because we feel powerless to avoid it. So we cover our backs. We invent and implement complex check-and-correct systems and we document everything we do so that we have the evidence in the inevitable event of a bad outcome and the backlash it unleashes. The evidence that proves we did our best; it shows we did what the safety zealots told us to do; it shows that we cannot be held responsible for the bad outcome.

Unfortunately this strategy does little to prevent bad outcomes. In fact it can have exactly the opposite effect to the one intended. The added complexity and cost of our cover-my-back bureaucracy actually increases the stress and chaos and makes bad outcomes more likely to happen. It makes the system even less safe. It does not deflect the blame. It just demonstrates that we do not understand how to design a not-unsafe system.


And the financial cost of our fear is frighteningly high.

Studies have shown that over 60% of nursing time is spent on documentation – and about 70% of healthcare cost is on hospital nurse salaries. The maths is easy – at least 42% of total healthcare cost is spent on back-covering-blame-deflection-bureaucracy.
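The arithmetic can be checked in one line, taking the quoted figures at face value (they are estimates from the studies cited above, not audited numbers):

```python
# Back-of-envelope check of the claim above, using the quoted estimates.
documentation_share_of_nursing_time = 0.60  # 'over 60% of nursing time'
nursing_cost_share_of_total = 0.70          # 'about 70% of healthcare cost'

documentation_share_of_total = (documentation_share_of_nursing_time
                                * nursing_cost_share_of_total)
print(f"{documentation_share_of_total:.0%}")  # prints 42%
```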

It gets worse though.

Those legal documents called clinical records need to be moved around and stored for a minimum of seven years. That is expensive. Converting them into an electronic format misses the point entirely. Finding the few shreds of valuable clinical information amidst the morass of back-covering-bureaucracy uses up valuable specialist time and has a high risk of failure. Inevitably the risk of decision errors increases – but this risk is unmeasured and is possibly unmeasurable. The frustration and fear it creates is very obvious though: to anyone willing to look.

The cost of correcting the Niggles that have been detected before they escalate to Not Agains, Near Misses and Never Events can itself account for half the workload. And the cost of clearing up the mess after the uncommon but inevitable disaster becomes built into the system too – as insurance premiums to pay for future litigation and compensation. It is no great surprise that we have unintentionally created a compensation culture! Patient expectation is rising.

Add all those costs up and it becomes plausible to suggest that the Cost of Fear could be a terrifying 80% of the total cost!


Of course we cannot just flick a switch and say “Right – let us train everyone in safe system design science“.  What would all the people who make a living from feeding on the present dung-heap do? What would the checkers and auditors and litigators and insurers do to earn a crust? Join the already swollen ranks of the unemployed?


If we step back and ask “Does the Cost of Fear principle apply to everything?” then we are faced with the uncomfortable conclusion that it most likely does.  So the cost of everything we buy will have a Cost of Fear component in it. We will not see it written down like that but it will be in there – it must be.

This leads us to a profound idea.  If we collectively invested in learning how to design not-unsafe systems then the cost of everything could fall. This means we would not need to work as many hours to earn enough to pay for what we need to live. We could all have less fear and stress. We could all have more time to do what we enjoy. We could all have both of these and be no worse off in terms of financial security.

This Win-Win-Win outcome feels counter-intuitive enough to deserve serious consideration.


So here are some other blog topics on the theme of Safety and Design:

Never Events, Near Misses, Not Agains and Nailing Niggles

The Safety Line in the Quality Sand

Safety By Design

Standard Ambiguity

One word that causes much confusion and debate in the world of Improvement is standard – because it has so many different yet inter-related meanings.

It is an ambiguous word and a multi-faceted concept.

For example, standard method can be the normal way of doing something (as in a standard operating procedure or SOP); standard can be the expected outcome of doing something; standard can mean the minimum acceptable quality of the output (as in a safety standard); standard can mean an aspirational performance target; standard can mean an absolute reference or yardstick (as in the standard kilogram); standard can mean average; and so on.

So, it is no surprise that we get confused. And when we feel confused we get scared and we try to relieve our fear by asking questions; which doesn’t help because we don’t get clear answers.  We start to discuss, and debate and argue and all this takes effort, time and inevitably money.  And the fog of confusion does not lift.  If anything it gets denser.  And the reason? Standard Ambiguity.


One contributory factor is the perennial confusion between purpose and process.  Purpose is the Why.  Process is the How.  The concept of Standard applied to the Purpose will include the outcomes: the minimum acceptable (safety standard), the expected (the specification standard) and the actual (the de facto standard).  The concept of Standard applied to the Process would include the standard operating procedures and the reference standards for accurate process measurement (e.g. a gold standard).


To illustrate the problems that result from confusing purpose standards with process standards we need look no further than education.

Q: What is the purpose of a school? Why does a school exist?

A: To deliver people who have achieved their highest educational potential, perhaps.

Q: What is the purpose of an exam board? Why does an exam board exist?

A: To deliver a common educational reference standard and to have a reliable method for comparing individual pupils against that reference standard perhaps.

So, where does the idea of “Being the school that achieved the highest percentage of top grades?” fit with these two purpose standards?  Where does the school league table concept fit?  It is not immediately obvious.  But, you might say, we do want to improve the educational capability of our population because that is a national and global asset in an increasingly complex, rapidly changing, high technology world.  Surely a league table will drive up the quality of education? But it doesn’t seem to be turning out that way. What is getting in the way?


What might be getting in the way is how we often conflate collaboration with competition.

It seems that many believe we can only have either collaboration or competition.  Either-Or thinking is a trap for the unwary and whenever these words are uttered a small alarm bell should ring.  Are collaboration and competition mutually exclusive? Or are we just making this assumption to simplify the problem? PS. We do that a lot.


Suppose the exam boards were both competing and collaborating with each other. Suppose they collaborated to set and to maintain a stable and trusted reference standard; and suppose that they competed to provide the highest quality service to the schools – in terms of setting and marking exams. What would happen?

Firstly, an exam board that stepped out of line in terms of these standards would lose its authority to set and mark exams – it would cut its own commercial throat.  Secondly, the quality of the examination process would go up because those who invest in doing that will attract more of the market share.

What about the schools – what if they both collaborated and competed too?  What if they collaborated to set and maintain a stable and trusted reference standard of conduct and competency of their teachers – and what if they competed to improve the quality of their educational process. The best schools  would attract the most pupils.

What can happen if we combine competition and collaboration is that the sum becomes greater than the parts.


A similar situation exists in healthcare.  Some hospitals are talking about competing to be the safest hospitals and collaborating to improve quality.  It sounds plausible but is it rational?

Safety is an absolute standard – it is the common minimum acceptable quality.  No hospital should fail on safety so this is not a suitable subject for competition.  All hospitals could collaborate to set and to maintain safety – helping each other by sharing data, information, knowledge, understanding and wisdom.  And with that Foundation of Trust they can then compete on quality – using their natural competitive spirit to pull them ever higher. Better quality of service, better quality of delivery and better quality of performance – including financial. Win-win-win.  And when the quality of everything improves through collaborative and competitive upwards pull, then the achievable level of minimum acceptable quality increases.  This means that the Safety Standard can improve too.  Everyone wins.


Little and Often

There seem to be two extremes to building the momentum for improvement – One Big Whack or Many Small Nudges.


The One Big Whack can come at the start and is a shock tactic designed to generate an emotional flip – a Road to Damascus moment – one that people remember very clearly. This is the stuff that newspapers fall over themselves to find – the Big Front Page Story – because it is emotive so it sells newspapers.  The One Big Whack can also come later – as an act of desperation by those in power who originally broadcast The Big Idea and who are disappointed and frustrated by lack of measurable improvement as the time ticks by and the money is consumed.


Many Small Nudges do not generate a big emotional impact; they are unthreatening; they go almost unnoticed; they do not sell newspapers, and they accumulate over time.  The surprise comes when those in power are delighted to discover that significant improvement has been achieved at almost no cost and with no cajoling.

So how is the Many Small Nudge method implemented?

The essential element is The Purpose – and this must not be confused with A Process.  The Purpose is what is intended; A Process is how it is achieved.  And answering the “What is my/our purpose?” question is surprisingly difficult to do.

For example I often ask doctors “What is our purpose?”  The first reaction is usually “What a dumb question – it is obvious”.  “OK – so if it is obvious can you describe it?”  The reply is usually “Well, err, um, I suppose, um – ah yes – our purpose is to heal the sick!”  “OK – so if that is our purpose how well are we doing?”  Embarrassed silence. We do not know because we do not all measure our outcomes as a matter of course. We measure activity and utilisation – which are measures of our process not of our purpose – and we justify not measuring outcome by being too busy – measuring activity and utilisation.

Sometimes I ask the purpose question a different way. There is a Latin phrase that is often used in medicine: primum non nocere which means “First do no harm”.  So I ask – “Is that our purpose?”.  The reply is usually something like “No but safety is more important than efficiency!”  “OK – safety and efficiency are both important but are they our purpose?”.  It is not an easy question to answer.

A Process can be designed – because it has to obey the Laws of Physics. The Purpose relates to People not to Physics – so we cannot design The Purpose, we can only design a process to achieve The Purpose. We can define The Purpose though – and in so doing we achieve clarity of purpose.  For a healthcare organisation a possible Clear Statement of Purpose might be “We want a system that protects, improves and restores health“.

Purpose statements state what we want to have. They do not state what we want to do, to not do or to not have.  This may seem like splitting hairs but it is important because the Statement of Purpose is key to the Many Small Nudges approach.

Whenever we have a decision to make we can ask “How will this decision contribute to The Purpose?”.  If an option would move us in the direction of The Purpose then it gets a higher ranking than a choice that would steer us away from The Purpose.  There is only one On Purpose direction and many Off Purpose ones – and this insight explains why avoiding what we do not want (i.e. harm) is not the same as achieving what we do want.  We can avoid doing harm and yet not achieve health and be very busy all at the same time.


Leaders often assume that it is their job to define The Purpose for their Organisation – to create the Vision Statement, or the Mission Statement. Experience suggests that clarifying the existing but unspoken purpose is all that is needed – just by asking one little question – “What is our purpose?” – and asking it often and of everyone – and not being satisfied with a “process” answer.

Never Events and Nailing Niggles

Some events should NEVER happen – such as removing the wrong kidney; or injecting an anti-cancer drug designed for a vein into the spine; or sailing a cruise ship over a charted underwater reef; or driving a bus full of sleeping school children into a concrete wall.

But these catastrophic, irreversible and tragic Never Events do keep happening – rarely perhaps – but persistently. At the Never-Event investigation the Finger-of-Blame goes looking for the incompetent culprit while the innocent victims call for compensation.

And after the smoke has cleared and the pain of loss has dimmed another Never-Again-Event happens – and then another, and then another. Rarely perhaps – but not never.

Never Events are so awful and emotionally charged that we remember them, and we come to believe that they are not rare, and from that misperception we develop a constant nagging feeling of fear for the future. It is our fear that erodes our trust which leads to the paralysis that prevents us from acting.  In the globally tragic event of 9/11 several thousand innocent victims died while the world watched in horror.  More innocent victims than that die needlessly every day in high-tech hospitals from avoidable errors – but that statistic is never shared.

The metaphor that is often used is the Swiss Cheese – the sort seen in cartoons, with lots of holes in it. The cheese represents a quality check – a barrier that catches and corrects mistakes before they cause irreversible damage. But the cheesy check-list is not perfect; it has holes in it.  Mistakes slip through.

So multiple layers of cheesy checks are added in the hope that the holes in the earlier slices will be covered by the cheese in the later ones – and our experience shows that this multi-check design does reduce the number of mistakes that get through. But not completely. And when, by rare chance, holes in each slice line up then the error penetrates all the way through and a Never Event becomes an Actual Catastrophe.  So, the typical recommendation from the after-the-never-event investigation is to add another layer of cheese to the stack – another check on the list on top of all the others.

But the cheese is not durable: it deteriorates over time with the incessant barrage of work and the pressure of increasing demand. The holes get bigger, the cheese gets thinner, and new holes appear. The inevitable outcome is the opening up of unpredictable, new paths through the cheese to a Never Event; more Never Events; more after-the-never-event investigation; and more slices of increasingly expensive and complex cheese added to the tottering, rotting heap.

A drawback of the Swiss Cheese metaphor is that it gives the impression that the slices are static and each cheesy check has a consistent position and persistent set of flaws in it. In reality this is not the case – the system behaves as if the slices and the holes are moving about: variation is jiggling, jostling and wobbling the whole cheesy edifice.

This wobble does not increase the risk of a Never Event but it prevents the subsequent after-the-event investigation from discovering the specific conjunction of holes that caused it. The Finger of Blame cannot find a culprit, so the cause is labelled a “system failure”, or an unlucky individual is implicated and named-shamed-blamed and sacrificed to the Gods of Chance on the Altar of Hope! More often new slices of KneeJerk Cheese are added in the desperate hope of improvement – creating an even greater burden of back-covering bureaucracy than before – and paradoxically increasing the number of holes!

Improvement Science offers a more rational, logical, effective and efficient approach to dissolving this messy, inefficient and ineffective safety design.

First it recognises that to prevent a Never Event then no errors should reach the last layer of cheese checking – the last opportunity to block the error trajectory. An error that penetrates that far is a Near Miss and these will happen more often than Never Events so they are the key to understanding and dissolving the problem.

Every Near Miss that is detected should be reported and investigated immediately – because that is the best time to identify the hole in the previous slice – before it wobbles out of sight. The goal of the investigation is understanding not accountability. Failure to report a near miss; failure to investigate it; failure to learn from it; failure to act on it; and failure to monitor the effect of the action are all errors of omission (EOOs) and they are the worst of management crimes.

The question to ask is “What error happened immediately before the Near Miss?”  This event is called a Not Again. Focussing attention on this Not Again and understanding what, where, when, who and how it happened is the path to preventing the Near Miss and the Never Event.  “Why?” is not the question to ask – especially when trust is low and cynicism and fear are high – the question to ask is “How?”.

The first action after Naming the Not Again is to design a counter-measure for it – to plug the hole – NOT to add another slice of Check-and-Correct cheese! The second necessary action is to treat that Not Again as a Near-Miss and to monitor it so that when it happens again the cause can be identified. These common, everyday, repeating causes of Not Agains are called Niggles; the hundreds of minor irritations that we just accept as inevitable. This is where the real work happens – identifying the most common Niggle and focussing all attention on nailing it! Forever.  Niggle naming and nailing is everyone’s responsibility – it is part of business-as-usual – and if leaders do not demonstrate the behaviour and set the expectation then followers will not do it.

So what effect would we expect?

To answer that question we need a better metaphor than our static stack of Swiss cheese slices: we need something more dynamic – something like a motorway!

Suppose you were to set out walking across a busy motorway with your eyes shut and your fingers in your ears – hoping to get to the other side without being run over. What is the chance that you will make it across safely?  It depends on how busy the traffic is and how fast you walk – but say you have a 50:50 chance of getting across one lane safely (which is the same chance as tossing a fair coin and getting a head) – what is the chance that you will get across all six lanes safely? The answer is the same chance as tossing six heads in a row: a 1-in-2 chance of surviving the first lane (50%), a 1-in-4 chance of getting across two lanes (25%), a 1-in-8 chance of making it across three (12.5%) … to a 1-in-64 chance of getting across all six (1.6%). Said another way, that is a 63 out of 64 chance of being run over somewhere, which is a 98.4% chance of failure – near certain death! Hardly a Never Event.

What happens to our risk of being run over if the traffic in just one lane is stopped and that lane is now 100% safe to cross? Well, you might think that it depends on which lane it is but it doesn’t – the risk of failure is now 31/32 or 96.9% irrespective of which lane it is – so not much improvement apparently!  We have doubled the chance of success though!

Is there a better improvement strategy?

What if we work collectively to reduce the flow of Niggles in all the lanes at the same time – and suppose we are all able to reduce the risk of a Niggle in our lane-of-influence from 1-in-2 to 1-in-6. How we do it is up to us. To illustrate the benefit we replace our coin with a six-sided die (no pun intended) and we only “die” if we throw a 1.  What happens to our pedestrian’s probability of survival? The chance of surviving the first lane is now 5/6 (83.3%), the chance of surviving both the first and the second is 5/6 × 5/6 = 25/36 (69.4%), and so on to all six lanes, which is 5/6 × 5/6 × 5/6 × 5/6 × 5/6 × 5/6 = 15625/46656 ≈ 33.5% – a lot better than our previous 1.6%!  And what if we keep plugging the holes in our bits of the cheese and we increase our individual lane success rate to 95%? Our pedestrian’s probability of survival is now 73.5%. The chance of a catastrophic event becomes less and less.
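The lane arithmetic above is easy to check with a few lines of Python (a sketch; the function name is mine, not from the text):

```python
# Probability of crossing every lane unharmed, assuming each lane is an
# independent chance of being hit (the coin-toss / dice model above).

def survival_chance(p_safe_per_lane: float, lanes: int = 6) -> float:
    """Chance of getting across all lanes safely."""
    return p_safe_per_lane ** lanes

# Coin-flip lanes: 50% safe each -> 1 in 64 survive.
print(round(survival_chance(0.5), 4))       # 0.0156, i.e. ~1.6%

# Stop one lane completely: only five risky lanes remain -> odds merely double.
print(round(survival_chance(0.5, lanes=5), 4))  # 0.0312, i.e. ~3.1%

# Reduce every lane's niggle risk from 1-in-2 to 1-in-6.
print(round(survival_chance(5 / 6), 3))     # 0.335, i.e. ~33.5%

# Keep plugging holes: 95% safe per lane.
print(round(survival_chance(0.95), 3))      # 0.735, i.e. ~73.5%
```

The design point is that a small improvement in *every* layer multiplies together, whereas perfecting a single layer leaves the product almost unchanged.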

The arithmetic may be a bit scary but the message is clear: to prevent the Never Events we must reduce the Near Misses, and to do that we investigate every Near Miss, expose the Not Agains, and then use them to Name and Nail all the Niggles.  And we have complete control over the causes of our commonest Niggles because we create them.

This strategy will improve the safety of our system. It has another positive benefit – it will free up our Near Miss investigation team to do something else: it frees them to assist in the re-design of the system so that Not Agains cannot happen at all – they become Never Events too – and the earlier in the path that safety-design happens the better – because it renders the other layers of check-and-correct cheesocracy irrelevant.

Just imagine what would happen in a real system if we did that …

And now try to justify not doing it …

And now consider what an individual, team and organisation would need to learn to do this …

It is called Improvement Science.

And learning the Foundations of Improvement Science in Healthcare (FISH) is one place to start.


The Safety Line in the Quality Sand

Improvement Science is about getting better – and it is also about not getting worse.

These are not the same thing. Getting better requires dismantling barriers that block improvement. Not getting worse requires building barriers to block deterioration.

When things get tough and people start to panic it is common to see corners being cut and short-term quick fixes taking priority over long-term common sense.  The best defence against this self-defeating behaviour is the courage and discipline to say “This is our safety line in the quality sand and we do not cross it”.  This is not dogma, it is discipline. Dogma is blind acceptance; discipline is applied wisdom.

Leaders show their mettle when times are difficult not when times are easy.  A leader who abandons their espoused principles when under pressure is a liability to themselves and to their teams and organisations.

The barrier that prevents descent into chaos is not the leader – it is the principle that there is a minimum level of acceptable quality – the line that will not be crossed. So when a decision needs to be made between safety and money the choice is not open to debate. Safety comes first.  

Only those who believe that higher quality always costs more will argue for compromise. So when the going gets tough those who question the Safety Line in the Quality Sand are the ones to challenge by respectfully reminding them of their own principles.

This challenge will require courage because they may be the ones in the seats of power.  But when leaders compromise their own principles they have sacrificed their credibility and have abdicated their power.

The Devil and the Detail

There are two directions from which we can approach an improvement challenge. From the bottom up – starting with the real details and distilling the principle later; and from the top down – starting with the conceptual principle and doing the detail later.  Neither is better than the other – both are needed.

As individuals we have an innate preference for real detail or conceptual principle – and our preference is manifest by the way we think, talk and behave – it is part of our personality.  It is useful to have insight into our own personality and to recognise that when other people approach a problem in a different way then we may experience a difference of opinion, a conflict of styles, and possibly arguments.  

One very well established model of personality type was proposed by Carl Gustav Jung, a psychologist who approached the subject from the perspective of understanding psychological “illness”.  Jung’s “Psychological Types” was used as the foundation of the life-work of Isabel Briggs Myers, who was not a psychologist and who was looking from the direction of understanding psychological “normality”. In her book Gifts Differing – Understanding Personality Type (ISBN 978-0891-060741) she demonstrates, using empirical data, that there is not one normal or ideal type that we all deviate from – rather there is a set of stable types that each represents a “different gift”. By this she means that different personality types are suited to different tasks: when the type resonates with the task it results in high performance and is seen as an asset or “strength”, and when it does not it results in low performance and is seen as a liability or “weakness”.

One of the multiple dimensions of the Jungian and Myers-Briggs personality type model is the Sensor–iNtuitor dimension, the S-N dimension. This dimension represents where we hold the reference model that provides us with data – data that we convert to information – and information that we use to derive decisions and actions.

A person who is naturally inclined to the Sensor end of the S-N dimension prefers to use Reality and Actuality as their reference – and they access it via their senses – sight, sound, touch, smell and taste. They are often detail and data focussed; they trust their senses and their conscious awareness; and they are more comfortable with routine and structure.  

A person who is naturally inclined to the iNtuitor end of the S-N dimension prefers to use Rhetoric and Possibility as their reference – an internal conceptual model that they access via their intuition. They are often principle and concept focussed and discount what their senses tell them in favour of their intuition. iNtuitors feel uncomfortable with routine and structure, which they see as barriers to improvement.

So when a Sensor and an iNtuitor are working together to solve a problem they are approaching it from two different directions, and even when they have a common purpose, common values and a common objective it is very likely that conflict will occur if they are unaware of their different gifts.

Gaining this awareness is a key to success because the synergy of the two approaches is greater than either working alone – the sum is greater than the parts – but only if there is awareness and mutual respect for the different gifts.  If there is no awareness and low mutual respect then the sum will be less than the parts and the problem will not be dissolvable.

In her research, Isabel Briggs Myers found that about 60% of high school students have a preference for S and 40% have a preference for N – but when the “academic high flyers” were surveyed the ratio was S=17% and N=83% – and there was no difference between males and females.  When she looked at the S-N distribution in different training courses she discovered that there was a higher proportion of S-types in Administrators (59%), Police (80%), and Finance (72%), and a higher proportion of N-types in Liberal Arts (59%), Engineering (65%), Science (83%), Fine Arts (91%), Occupational Therapy (66%), Art Education (87%), Counselor Education (85%), and Law (59%).  Her observation suggested that individuals select subjects based on their “different gifts”, and this throws an interesting light on why traditional professions may come into conflict and perhaps why large organisations tend to form departments of “like-minded individuals” – departments with names like Finance, Operations and Governance, or FOG.

This insight also offers an explanation for the conflict between “strategists” who tend to be N-types and who naturally gravitate to the “manager” part of an organisation and the “tacticians” who tend to be S-types and who naturally gravitate to the “worker” part of the same organisation.

It has also been shown that conventional “intelligence tests” favour the N-types over the S-types, which suggests why highly intelligent academics may perform very poorly when asked to apply their concepts and principles in the real world. Effective action requires pragmatists – but academics tend to congregate in academic institutions – often disrespectfully labelled by pragmatists as “Ivory Towers”.

Unfortunately this innate tendency to seek like-types is counter-productive because it reinforces the differences, exacerbates the communication barriers, and leads to “tribal”, “disrespectful” and “trust-eroding” behaviour, and to the “organisational silos” that are often evident.

Complex real-world problems cannot be solved this way because they require the synergy of the gifts – each part playing to its strength when the time is right.

The first step to know-how is self-awareness.

If you would like to know your Jungian/MBTI® type you can do so by getting the app: HERE

Argument-Free-Problem-Solving

I used to be puzzled when I reflected on the observation that we seem to be able to solve problems as individuals much more quickly and with greater certainty than we could as groups.

I used to believe that having many different perspectives of a problem would be an asset – but in reality it seems to be more of a liability.

Now when I receive an invitation to a meeting to discuss an issue of urgent importance my little heart sinks as I recall the endless hours of my limited life-time wasted in worthless, unproductive discussion.

But, not to be one to wallow in despair I have been busy applying the principles of Improvement Science to this ubiquitous and persistent niggle.  And I have discovered something called Argument Free Problem Solving (AFPS) – or rather that is my name for it because it does what it says on the tin – it solves problems without arguments.

The trick was to treat problem-solving as a process; to understand how we solve problems as individuals; what the worthwhile bits are; how we scupper the process when we add in more than one person; and then how to design-to-align the problem-solving workflow so that it … flows. So that it is effective and efficient.

The result is AFPS and I’ve been testing it out. Wow! Does it work or what!

I have also discovered that we do not need to create an artificial set of Rules or a Special Jargon – we can apply the recipe to any situation in a very natural and unobtrusive way.  Just this week I have seen it work like magic several times: once in defusing what was looking like a big bust-up looming; once to resolve a small niggle that had been magnified into a huge monster and a big battle – the smoke of which was obscuring the real win-win-win opportunity; and once in a collaborative process improvement exercise that demonstrated a 2000% improvement in system productivity – yes – two thousand percent!

So AFPS  has been added to the  Improvement Science treasure chest and (because I like to tease and have fun) I have hidden the key in cyberspace at coordinates  http://www.saasoft.com/moodle

Mwah ha ha ha – me hearties! 

Three Blind Men and an Elephant

The Blind Men and the Elephant Story   – adapted from the poem by John Godfrey Saxe.

 “Three blind men were discussing exactly what they believed an elephant to be, since each had heard how strange the creature was, yet none had ever seen one before. So the blind men agreed to find an elephant and discover what the animal was really like. It did not take the blind men long to find an elephant at a nearby market. The first blind man approached the animal and felt the elephant’s firm flat side. “It seems to me that an elephant is just like a wall,” he said to his friends. The second blind man reached out and touched one of the elephant’s tusks. “No, this is round and smooth and sharp – an elephant is like a spear.” Intrigued, the third blind man stepped up to the elephant and touched its trunk. “Well, I can’t agree with either of you; I feel a squirming writhing thing – surely an elephant is just like a snake.” All three blind men continued to argue, based on their own individual experiences, as to what they thought an elephant was like. It was an argument that they were never able to resolve. Each of them was concerned only with their own experience. None of them could see the full picture, and none could appreciate any of the other points of view. Each man saw the elephant as something quite different, and while each blind man was correct they could not agree.”

The Elephant in this parable is the NHS and the three blind men are Governance, Operations and Finance. Each is blind because he does not see reality clearly – his perception is limited to assumptions and crippled by distorted data. The three blind men cannot agree because they do not share a common understanding of the system; its parts and its relationships. Each is looking at a multi-dimensional entity from one dimension only and for each there is no obvious way forward. So while they appear to be in conflict about the “how” they are paradoxically in agreement about the “why”. The outcome is a fruitless and wasteful series of acrimonious arguments, meaningless meetings and directionless discussions.  It is not until they declare their common purpose that their differences of opinion are seen in a realistic perspective and as an opportunity to share, to learn, and to create a collective understanding that is greater than the sum of the parts.

Focus-on-the-Flow

One of the foundations of Improvement Science is visualisation – presenting data in a visual format that we find easy to assimilate quickly – as pictures.

We derive deeper understanding from observing how things are changing over time – that is the reality of our everyday experience.

And we gain even deeper understanding of how the world behaves by acting on it and observing the effect of our actions. This is how we all learned-by-doing from day-one. Most of what we know about people, processes and systems we learned long before we went to school.


When I was at school the educational diet was dominated by rote learning of historical facts and tried-and-tested recipes for solving tame problems. It was all OK – but it did not teach me anything about how to improve – that was left to me.

More significantly it taught me more about how not to improve – it taught me that the delivered dogma was not to be questioned. Questions that challenged my older-and-better teachers’ understanding of the world were definitely not welcome.

Young children ask “why?” a lot – but as we get older we stop asking that question – not because we have had our questions answered but because we get the unhelpful answer “just because.”

When we stop asking ourselves “why?” then we stop learning, we close the door to improvement of our understanding, and we close the door to new wisdom.


So to open the door again let us leverage our inborn ability to gain understanding from interacting with the world and observing the effect using moving pictures.

Unfortunately our biology limits us to our immediate space-and-time, so to broaden our scope we need to have a way of projecting a bigger space-scale and longer time-scale into the constraints imposed by the caveman wetware between our ears.

Something like a video game that is realistic enough to teach us something about the real world.

If we want to understand better how a health care system behaves so that we can make wiser decisions of what to do (and what not to do) to improve it then a real-time, interactive, healthcare system video game might be a useful tool.

So, with this design specification I have created one.

The goal of the game is to defeat the enemy – and the enemy is intangible – it is the dark cloak of ignorance – literally “not knowing”.

Not knowing how to improve; not knowing how to ask the “why?” question in a respectful way.  A way that consolidates what we understand and challenges what we do not.

And there is an example of the Health Care System Flow Game being played here.

Safety-By-Design

The picture is of Elisha Graves Otis demonstrating, in the mid 19th century, his safe elevator that automatically applies a brake if the lift cable breaks. It is a “simple” fail-safe mechanical design that effectively created the elevator industry and the opportunity of high-rise buildings.

“To err is human” and human factors research into how we err has revealed two parts – the Error of Intention (poor decision) and the Error of Execution (poor delivery) – often referred to as “mistakes” and “slips”.

Most of the time we act unconsciously using well practiced skills that work because most of our tasks are predictable; walking, driving a car etc.

The caveman wetware between our ears has evolved to delegate this uninteresting and predictable work to different parts of the sub-conscious brain and this design frees us to concentrate our conscious attention on other things.

So, if something happens that is unexpected we may not be aware of it and we may make a slip without noticing. This is one way that process variation can lead to low quality – and these are often the most insidious slips because they go unnoticed.

It is these unintended errors that we need to eliminate using safe process design.

There are two ways – by designing processes to reduce the opportunity for mistakes (i.e. improve our decision making); and then to avoid slips by designing the subsequent process to be predictable and therefore suitable for delegation.

Finally, we need to add a mechanism to automatically alert us of any slips and to protect us from their consequences by failing-safe.  The sign of good process design is that it becomes invisible – we are not aware of it because it works at the sub-conscious level.

As soon as we become aware of the design we have either made a slip – or the design is poor.


Suppose we walk up to a door and we are faced with a flat metal plate – this “says” to us that we need to “push” the door to open it – it is unambiguous design and we do not need to invoke consciousness to make a push-or-pull decision.  The technical term for this is an “affordance”.

In contrast a door handle is an ambiguous design – it may require a push or a pull – and we either need to look for other clues or conduct a suck-it-and-see experiment. Either way we need to switch our conscious attention to the task – which means we have to switch it away from something else. It is those conscious interruptions that cause us irritation and can spawn other, possibly much bigger, slips and mistakes.

Safe systems require safe processes – and safe processes mean fewer mistakes and fewer slips. We can reduce slips through good design and relentless improvement.

A simple and effective tool for this is The 4N Chart® – specifically the “niggle” quadrant.

Whenever we are interrupted by a poorly designed process we experience a niggle – and by recording what, where and when those niggles occur we can quickly focus our consciousness on the opportunity for improvement. One requirement to do this is the expectation and the discipline to record niggles – not necessarily to fix them immediately – but just to record them and to review them later.
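As an illustration of the idea (the structure, field names and example niggles here are invented, not from the text), a minimal what-where-when niggle log might look like this:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Niggle:
    what: str    # what interrupted us
    where: str   # where in the process it happened
    when: str    # when it happened

# Record niggles as they occur -- no requirement to fix them immediately.
log = [
    Niggle("push/pull door ambiguity", "ward entrance", "Mon 09:10"),
    Niggle("mislabelled sample form", "phlebotomy", "Mon 10:42"),
    Niggle("push/pull door ambiguity", "ward entrance", "Tue 08:55"),
]

# Review later: the commonest 'what' is the next Niggle to nail.
commonest, count = Counter(n.what for n in log).most_common(1)[0]
print(commonest, count)  # push/pull door ambiguity 2
```

The point is the discipline, not the technology: record first, review later, and let the frequencies direct the niggle-busting effort.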

In his book “Chasing the Rabbit” Steven Spear describes two examples of world class safety: the US Nuclear Submarine Programme and Alcoa, an aluminium producer.  Both are potentially dangerous activities and, in both examples, their world class safety record came from setting the expectation that all niggles are recorded and acted upon – using a simple, effective and efficient niggle-busting process.

In stark and worrying contrast, high-volume high-risk activities such as health care remain unsafe not because there is no incident reporting process – but because the design of the report-and-review process is both ineffective and inefficient and so is not used.

The risk of avoidable death in a modern hospital is quoted at around 1:300 – if our risk of dying in an elevator were that high we would take the stairs!  This worrying statistic is to be expected though – because if we lack the organisational capability to design a safe health care delivery process then we will lack the organisational capability to design a safe improvement process too.

Our skill gap is clear – we need to learn how to improve process safety-by-design.


Download Design for Patient Safety report written by the Design Council.

Other good examples are the WHO Safer Surgery Checklist, and the story behind this is told in Dr Atul Gawande’s Checklist Manifesto.

The Crime of Metric Abuse

We live in a world that is increasingly intolerant of errors – we want everything to be right all the time – and if it is not then someone must have erred with deliberate intent so they need to be named, blamed and shamed! We set safety standards and tough targets; we measure and check; and we expose and correct anyone who is non-conformant. We accept that is the price we must pay for a Perfect World … Yes? Unfortunately the answer is No. We are deluded. We are all habitual criminals. We are all guilty of committing a crime against humanity – the Crime of Metric Abuse. And we are blissfully ignorant of it so it comes as a big shock when we learn the reality of our unconscious complicity.

You might want to sit down for the next bit.

First we need to set the scene:
1. Sustained improvement requires actions that result in irreversible and beneficial changes to the structure and function of the system.
2. These actions require making wise decisions – effective decisions.
3. These actions require using resources well – efficient processes.
4. Making wise decisions requires that we use our system metrics correctly.
5. Understanding what correct use is means recognising incorrect use – abuse awareness.

When we commit the Crime of Metric Abuse, even unconsciously, we make poor decisions. If we act on those decisions we get an outcome that we do not intend and do not want – we make an error.  Unfortunately, more efficiency does not compensate for less effectiveness – in fact it makes it worse. Efficiency amplifies Effectiveness – “Doing the wrong thing right makes it wronger not righter” as Russell Ackoff succinctly puts it.  Paradoxically our inefficient and bureaucratic systems may be our only defence against our ineffective and potentially dangerous decision making – so before we strip out the bureaucracy and strive for efficiency we had better be sure we are making effective decisions, and that means exposing and treating our nasty habit of Metric Abuse.

Metric Abuse manifests in many forms – and there are two that when combined create a particularly virulent addiction – Abuse of Ratios and Abuse of Targets. First let us talk about the Abuse of Ratios.

A ratio is one number divided by another – which sounds innocent enough – and ratios are very useful, so what is the danger? The danger is that by combining two numbers to create one we throw away information. This is not a good idea when making the best possible decision means squeezing every last drop of understanding out of our information. To unconsciously throw away useful information amounts to incompetence; to consciously throw away useful information is negligence because we could and should know better.

Here is a time-series chart of a process metric presented as a ratio. This is productivity – the ratio of an output to an input – and it shows that our productivity is stable over time.  We started OK and we finished OK and we congratulate ourselves for our good management – yes? Well, maybe and maybe not.  Suppose we are measuring the Quality of the output and the Cost of the input; then calculating our Value-For-Money productivity from the ratio; and then only sharing this derived metric. What if quality and cost are changing over time in the same direction and at the same rate? The productivity ratio will not change.

 

Suppose the raw data we used to calculate our ratio was as shown in the two charts of measured Output Quality and measured Input Cost – we can see immediately that, although our ratio is telling us everything is stable, our system is actually changing over time – it is unstable and therefore it is unpredictable. Systems that are unstable have a nasty habit of finding barriers to further change and when they do they have a habit of crashing, suddenly, unpredictably and spectacularly. If you take your eyes off the white line when driving and drift off course you may suddenly discover a barrier – the crash barrier for example, or worse still an on-coming vehicle! The apparent stability indicated by a ratio is an illusion or rather a delusion. We delude ourselves that we are OK – in reality we may be on a collision course with catastrophe.
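The delusion is easy to reproduce with made-up numbers: let quality and cost both rise by 5% per period and watch the ratio sit perfectly still while both of its ingredients drift by over 40%.

```python
# Illustration (invented data) of a ratio hiding change: output quality and
# input cost both drift upward at the same 5% per period, so the
# value-for-money ratio stays flat while the system is anything but stable.

quality = [100 * 1.05 ** t for t in range(8)]   # output quality, +5% per period
cost    = [ 50 * 1.05 ** t for t in range(8)]   # input cost, +5% per period
ratio   = [q / c for q, c in zip(quality, cost)]

print(all(abs(r - 2.0) < 1e-9 for r in ratio))  # True - ratio looks "stable"
print(quality[-1] / quality[0])                 # ~1.41 - quality rose ~41%
print(cost[-1] / cost[0])                       # ~1.41 - cost rose ~41%
```

This is why sharing only the derived metric is dangerous: the same flat ratio is consistent with a stable system, a drifting one, or one heading for a barrier.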

But increasing quality is what we want surely? Yes – it is what we want – but at what cost? If we use the strategy of quality-by-inspection and add extra checking to detect errors and extra capacity to fix the errors we find then we will incur higher costs. This is the story that these Quality and Cost charts are showing.  To stay in business the extra cost must be passed on to our customers in the price we charge: and we have all been brainwashed from birth to expect to pay more for better quality. But what happens when the rising price hits our customers’ financial constraint?  We are no longer able to afford the better quality so we settle for the lower quality but affordable alternative.  What happens then to the company that has invested in quality by inspection? It loses customers, which means it loses revenue, which is bad for its financial health – and to survive it starts cutting prices, cutting corners, cutting costs, cutting staff and eventually – cutting its own throat! The delusional productivity ratio has hidden the real problem until a sudden and unpredictable drop in revenue and profit provides a reality check – by which time it is too late. Of course if all our competitors are committing the same crime of metric abuse and suffering from the same delusion we may survive a bit longer in the toxic mediocrity swamp – but if a new competitor arrives who is not deluded by ratios and who learns how to provide consistently higher quality at a consistently lower price – then we are in big trouble: our customers leave and our end is swift and without mercy. Competition cannot bring controlled improvement while the Abuse of Ratios remains rife and unchallenged.

Now let us talk about the second Metric Abuse, the Abuse of Targets.

The blue line on the Productivity chart is the Target Productivity. As leaders and managers we have been brainwashed with the mantra that “you get what you measure”, and with this belief we commit the crime of Target Abuse when we set an arbitrary target and use it to decide when to reward and when to punish. We compound our second crime when we connect our arbitrary target to our accounting clock and post periodic praise when we are above target and periodic pain when we are below. We magnify the crime if we have a quality-by-inspection strategy, because we create an internal quality-cost trade-off that generates conflict between our governance goal and our finance goal: the result is a festering and acrimonious stalemate. Our quality-by-inspection strategy paradoxically prevents improvement in productivity, and we learn to accept the inevitable oscillation between good and bad – eventually we may even convince ourselves that this is the best and the only way. With this life-limiting belief deeply embedded in our collective unconsciousness, the more enthusiastically this quality-by-inspection design is enforced the more fear, frustration and failure it generates – until trust is eroded to the point that, when the system hits a problem, morale collapses, errors increase, checks are overwhelmed, rework capacity is swamped, quality slumps and costs escalate. Productivity nose-dives and both customers and staff jump into the lifeboats to avoid going down with the ship!

The use of delusional ratios and arbitrary targets (DRATs) is a dangerous and addictive behaviour, and should be made a criminal offence punishable by law, because it is both destructive and unnecessary.

With painful awareness of the problem a path to a solution starts to form:

1. Share the numerator, the denominator and the ratio data as time series charts.
2. Only put requirement specifications on the numerator and denominator charts.
3. Outlaw quality-by-inspection and replace it with quality-by-design-and-improvement.
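The first two steps above can be sketched in a few lines of code. This is a hypothetical illustration with invented monthly figures and invented specification limits: the numerator and denominator each carry a requirement specification, while the ratio is published for context only and carries none:

```python
# Sketch of steps 1 and 2, using invented monthly data: share the
# numerator (quality), the denominator (cost) and the ratio as three
# time series, and attach requirement specifications ONLY to the
# numerator and denominator - never to the ratio itself.

quality = [96, 97, 95, 98, 97, 96]   # numerator: % right-first-time
cost    = [50, 52, 55, 58, 61, 64]   # denominator: cost per month (k)
ratio   = [q / c for q, c in zip(quality, cost)]

QUALITY_SPEC = 95   # requirement: quality must not fall below this
COST_SPEC    = 60   # requirement: cost must not rise above this

for month, (q, c, r) in enumerate(zip(quality, cost, ratio), start=1):
    q_flag = "" if q >= QUALITY_SPEC else " <- below quality spec"
    c_flag = "" if c <= COST_SPEC else " <- above cost spec"
    print(f"month {month}: quality={q}{q_flag}  cost={c}{c_flag}  ratio={r:.2f}")
```

Run on these invented numbers, the quality series stays within its specification throughout, while the cost series breaches its specification in the last two months – a drift the ratio column alone would have obscured.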

Metric Abuse is a Crime. DRATs are a dangerous addiction. DRATs kill Motivation. DRATs kill Organisations.

Charts created using BaseLine