Desperate Times

The NHS appears to be getting increasingly desperate in its cost control tactics:


What does this letter say …

  1. The NHS is required to improve productivity by 20%.
  2. The NHS needs to work collaboratively with its suppliers.
  3. The NHS would like to learn the “secrets” from its suppliers.
  4. And then a thinly-veiled threat.

A 20% productivity improvement has never been achieved before using a Cost Improvement Program (CIP) approach … so how will it be achieved now?

A 20% productivity improvement requires something a lot more radical than a “zero-inflation policy”.

A 20% productivity improvement requires wholesale system redesign.

And there is good news … that is possible … and the not-so-good news is that the NHS will need to learn how to do it, for itself.


One barrier to doing this is disbelief that it is possible.

Another is ignorance of how to do it.


If the NHS wants to survive, in anything like its current form, then it will need to grasp that nettle/opportunity … and to engage in wholesale raising of awareness of what is possible and how to achieve it.

Denial is not an option.

And there is one way to experience what is possible and how to achieve it … and it can be accessed here.


The seats on the HCSE bus are limited, so only those who are prepared to invest in their own learning and their own future career paths should even consider buying a ticket to ride …

… and follow the footsteps of the courageous innovators.

Here are some of their stories: Journal of Improvement Science

The OMG Effect … Revisited

Beliefs drive behaviour. Behaviour drives change. Improvement requires change.

So, improvement requires challenging beliefs; confirming some and disproving others.

And beliefs can only be confirmed or disproved rationally – with evidence and explanation. Rhetoric is too slippery. We can convince ourselves of anything with that!

So it comes as an emotional shock when one of our beliefs is disproved by experiencing reality from a new perspective.

Our natural reaction is surprise, perhaps delight, and then defense. We say “Yes, but ...”.

And that is healthy skepticism and it is a valuable and necessary part of the change and improvement process.

If there are not enough healthy skeptics on a design team it is unbalanced.

If there are too many healthy skeptics on a design team it is unbalanced.


This week I experienced this phenomenon first hand.

The context was a one day practical skills workshop and the topic was:

“How to improve the safety, timeliness, quality and affordability of unscheduled care”.

The workshop is designed to approach this challenge from a different perspective.

Instead of asking “What is the problem and how do we solve it?” we took the system engineering approach of asking “What is the purpose and how can we achieve it?”

We used a range of practical exercises to illustrate some core concepts and principles – reality was our teacher. Then we applied those newly acquired insights to the design challenge using a proven methodology that ensured we did not skip steps.


And the outcome was: the participants discovered that …

it is indeed possible to improve the safety, timeliness, quality and affordability of unscheduled health care …

using health care systems engineering concepts, principles, techniques and tools that, until the workshop, they had been unaware even existed.


Their reaction was “OMG” and was shortly followed by “Yes, but …” which is to be expected and is healthy.

The rest of the “Yes, but … ” sentence was “… how will I convince my colleagues?”

One way is for them to seek out the same experience …

… because reality is a much better teacher than rhetoric.

HCSE Practical Skills One Day Workshops

 

The Q-Community

At some point in the life-cycle of an innovation, there is the possibility of crossing an invisible line called the tipping point.

This happens when enough people have experienced the benefits of the innovation and believe that the innovation is the future.  These lone innovators start to connect and build a new community.

It is an emergent behaviour of a complex adaptive system.


This week I experienced what could be a tipping point.

I attended the Q-Community launch event for the West Midlands that was held at the ICC in Birmingham … and it was excellent.

The invited speakers were both engaging and inspiring – boosting the emotional charge in the old engagement batteries, which have become rather depleted of late by the incessant wailing from the all-too-numerous peddlers of doom-and-gloom.

There was an opportunity to re-connect with fellow radicals who, over nearly two decades, have had the persistent temerity to suggest that improvement is necessary, is possible, have invested in learning how to do it, and have disproved the impossibility hypothesis.

There were new connections with like-minded people who want to both share what they know about the science of improvement and to learn what they do not.

And there were hand-outs, side-shows and break-outs.  Something for everyone.


The voice of the Q-Community will grow louder – and for it to be listened to it will need to be patiently and persistently broadcasting the news stories of what has been achieved, and how it was achieved, and who has demonstrated they can walk-the-talk.  News stories like this one:

Improving safety, flow, quality and affordability of unscheduled care of the elderly.


I sincerely hope that in the future, with the benefit of hindsight, we in the West Midlands will say – the 19th July 2017 was our Q-Community tipping point.

And I pledge to do whatever I can to help make that happen.

Simulation Stimulation

One of the most effective ways to inspire others is to demonstrate what is possible, and then to explain how it is possible.

And one way to do that is to use a simulation game.

There are many different forms of simulation game, from the imagination playground games we remember as children, to sophisticated and highly realistic computer simulations.

The purpose is the same: to have the experience without the risk and cost of doing it for real; to learn from the experience; and to increase our chance of success in the real world.


Simulations are very effective educational tools because we can simplify, focus, practice, pause, rewind, and reflect.

They are also very effective exploration tools for developing our understanding of how things work.  We need to know that before we can make things work better.


And anyone who has tried it will confirm: creating an effective and enjoyable simulation game is not easy. It takes passion, persistence and practice and many iterations to get it right.

And that in itself is a powerful learning experience.


This week the topic of simulations has cropped up several times.

Firstly, the hands-on simulations at the Flow Design Practical Skills Workshop and how they generated insight and inspiration.  The experience certainly fired imaginations and will hopefully lead to innovations. For more click here …

Secondly, the computer simulation called the “Save The NHS Game” which is designed to illustrate the complex and counter-intuitive behaviour of real systems.  The rookie crew “crashed” the simulated healthcare system, but that was OK, it was just a simulation.  In the process they learned a lot about how not to improve NHS productivity. For more click here …

And later the same day being a crash-test dummy for an innovative table-top simulation game using different sizes and shapes of pasta and an ice tray to illustrate the confusing concept of carve-out!  For more click here …

And finally, a fantastic conversation with Dr Bryn Baxendale from the Trent Simulation Centre about how simulation training has become a growing part of how we train individuals and teams, especially in clinical skills, safety and human factors.


In health care systems engineering we use simulation tools in the diagnosis, design and delivery phases of complex improvement-by-design projects. So learning how to design, build and verify the simulation tools we need is a core part of advanced HCSE training.  For more click here …

Lots of simulation sTimulation. What a great week!

What Is In It For Me?

One of the questions we all ask ourselves, perhaps unconsciously, when we are considering change is: “What is in it for me?”

And if we do not get a convincing enough answer, quickly enough, we move on.

Effective sales people know this, and anyone needing to engage and influence others needs to as well.


One approach is to ask the same questions as the person we seek to influence is asking themselves, perhaps unconsciously.

So if you have an interest in healthcare improvement … see if these questions resonate with you.

Eating the Elephant in the Room

The Elephant in the Room is an English-language metaphorical idiom for an obvious problem or risk no one wants to discuss.

An undiscussable topic.

And the undiscussability is also undiscussable.

So the problem or risk persists.

And people come to harm as a result.

Which is not the intended outcome.

So why do we behave this way?

Perhaps it is because the problem looks too big and too complicated to solve in one intuitive leap, and we give up and label it a “wicked problem”.


The well-known quote “When eating an elephant take one bite at a time” is attributed to Creighton Abrams, a former US Army Chief of Staff.


It says that even seemingly “impossible” problems can be solved so long as we proceed slowly and carefully, in small steps, learning as we go.

And the continued decline of the NHS UK Unscheduled Care performance seems to be an Elephant-in-the-Room problem, as shown by the monthly A&E 4-hour performance over the last 10 years and the fact that this chart is not published by the NHS.

Red = England, Brown = Wales, Grey = N. Ireland, Purple = Scotland.


This week I experienced a bite of this Elephant being taken and chewed on.

The context was a Flow Design – Practical Skills – One Day Workshop and the design challenge posed to the eager delegates was to improve the quality and efficiency of a one stop clinic.

A seemingly impossible task because the delegates reported that the queues, delays and chaos that they experienced in the simulated clinic felt very realistic.

Which means that this experience was accepted as inevitable: impossible to improve without more resources; and since financial cuts prevent that, the waits simply had to be accepted.


At the end of the day their belief had been shattered.

The queues, delays and chaos had evaporated and the cost to run the new one stop clinic design was actually less than the old one.

And when we combined the quality metrics with the cost metrics and calculated the measured improvement in productivity; the answer was over 70%!

The delegates experienced it all first-hand. They did the diagnosis, design, and delivery using no more than squared-paper and squeaky-pen.

And at the end they were looking at a glaring mismatch between their rhetoric and the reality.

The “impossible to improve without more money” hypothesis lay in tatters – it had been rationally, empirically and scientifically disproved.

I’d call that quite a big bite out of the Elephant-in-the-Room.


So if you have a healthy appetite for Elephant-in-the-Room challenges, and are not afraid to try something different, then there is a whole menu of nutritious food-for-thought at a FISH&CHIPs® practical skills workshop.

Unknown-Knowns

Known knowns, known unknowns and unknown unknowns – this is the now-infamous classification that Donald Rumsfeld offered at a Pentagon press conference, and it triggered some good-natured jesting from the assembled journalists.

But there is a problem with it.

There is a fourth combination that he does not mention: the Unknown-Knowns.

Which is a shame because they are actually the most important: they cause the most problems.  Avoidable problems.


Suppose there is a piece of knowledge that someone knows but that someone else does not; then we have an unknown-known.

None of us know everything and we do not need to, because knowledge that is of no value to us is irrelevant for us.

But what happens when the unknown-known is of value to us and, more than that, when it would be reasonable for someone else to expect us to know it, because it is our job to know?


A surgeon would not be expected to know a lot about astronomy, but they would be expected to know a lot about anatomy.


So, what happens if we become aware that we are missing an important piece of knowledge that is actually already known?  What is our normal human reaction to that discovery?

Typically, our first reaction is fear-driven and we express defensive behaviour.  This is because we fear the potential loss-of-face from being exposed as inept.

From this sudden shock we then enter a characteristic emotional pattern which is called the Nerve Curve.

After the shock of discovery we quickly flip into denial and, if that does not work then to anger (i.e. blame).  We ignore the message and if that does not work we shoot the messenger.


And when in this emotionally charged state, our rationality tends to take a back seat.  So, if we want to benefit from the discovery of an unknown-known, then we have to learn to bite-our-lip, wait, let the red mist dissipate, and then re-examine the available evidence with a cool, curious, open mind.  A state of mind that is receptive and open to learning.


Recently, I was reminded of this.


The context is health care improvement, and I was using a systems engineering framework to conduct some diagnostic data analysis.

My first task was to run a data-completeness-verification-test … and the data I had been sent did not pass the test.  Some of it was missing.  It was an error of omission (EOO) and they are the hardest ones to spot.  Hence the need for the verification test.

The cause of the EOO was an unknown-known in the department that holds the keys to the data warehouse.  And I have come across this EOO before, so I was not surprised.

Hence the need for the verification test.

I was not annoyed either.  I just fed back the results of the test, explained what the issue was, explained the cause, and they listened and learned.


The implication of this specific EOO is quite profound though because it appears to be ubiquitous across the NHS.

To be specific it relates to the precise details of how raw data on demand, activity, length of stay and bed occupancy is extracted from the NHS data warehouses.

So it is rather relevant to just about everything the NHS does!

And the error-of-omission leads to confusion at best; and at worst … to the following sequence … incomplete data =>  invalid analysis => incorrect conclusion => poor decision => counter-productive action => unintended outcome.

Does that sound at all familiar?


So, if you would like to learn about this valuable unknown-known, then I recommend the narrative by Dr Kate Silvester, an internationally recognised expert in healthcare improvement.  In it, Kate re-tells the story of her emotional roller-coaster ride when she discovered she was making the same error.


Here is the link to the full abstract, where you can download and read the full text of Kate’s excellent essay, and help to make it a known-known.

That is what system-wide improvement requires – sharing the knowledge.

The Checklist

Only a few parts of the NHS were adversely affected by the RansomWare cyber-attack on Friday 12th May 2017.

This well-known malware was designed to exploit a security loop-hole in out-of-date and poorly maintained computers still using the Windows XP operating system.

And just like virulent organisms and malignant cells … the loop-holes in our IT immune systems were exploited to cause infectious diseases and cancer!


The diagnosis and treatment of these acquired IT diseases is painful, expensive and it comes with no guarantee of a happy outcome.

Lesson: Proactive prevention is better than reactive cure!

And all it requires to achieve it is … a Checklist.


Prevention requires pre-emptive design, and to do this the system needs to be studied, and understood well enough for an early warning system (EWS) to be designed, tested and implemented.

Having an effective EWS also requires that the measured response to an EWS alert has been designed, tested and implemented as well.

The sensor and the effector are linked by something called a processor.

And the processor can be implemented using an easy-to-use, low-cost, effective tool called a Checklist.
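
As a minimal sketch of that idea (the check wording and actions below are illustrative only, not an actual NHS cyber-security checklist), the processor can be as simple as a table that maps each early-warning check onto its required response:

    # A checklist acting as the 'processor' that links a sensor (the check)
    # to an effector (the required action). Item wording is illustrative only.
    CHECKLIST = [
        ("Unsupported operating system still in use?", "Isolate the machine and schedule an upgrade"),
        ("Security patches more than 30 days old?",    "Apply the outstanding patches now"),
        ("No tested backup in the last week?",         "Run and verify a restore test"),
    ]

    def process(answers):
        """Given a yes/no answer for each check, return the actions that are required."""
        return [action for (check, action), answer in zip(CHECKLIST, answers) if answer]

    print(process([True, False, True]))   # the first and third checks flag a problem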


The NHS was not cyber-attacked.  Parts of the NHS were more vulnerable than others to a well-known, endemic cyber-threat, and they were more vulnerable because they did not use an effective cyber-security checklist.  An error of omission.


Checklists are not recipes of how or why to do something.  They are primarily there to remind us to do what is required, and to not do what is not required.

But we need to refer to them … we need to befriend them … we need to create them and maintain them. They are our friends and they will protect us from harm.

And if we do that then we will reap the benefits of time and energy that are released in the future – to do with as we choose.

Catch-22

There is a Catch-22 in health care improvement and it goes a bit like this:

Most people are too busy fire-fighting the chronic chaos to have time to learn how to prevent the chaos, so they are stuck.

There is a deeper Catch-22 as well though:

The first step in preventing chaos is to diagnose the root cause and doing that requires experience, and we don’t have that experience available, and we are too busy fire-fighting to develop it.


Health care is improvement science in action – improving the physical and psychological health of those who seek our help. Patients.

And we have a tried-and-tested process for doing it.

First we study the problem to arrive at a diagnosis; then we design alternative plans to achieve our intended outcome and we decide which plan to go with; and then we deliver the plan.

Study ==> Plan ==> Do.

Diagnose  ==> Design & Decide ==> Deliver.

But here is the catch. The most difficult step is the first one, diagnosis, because there are many different illnesses and they often present with very similar patterns of symptoms and signs. It is not easy.

And if we make a poor diagnosis then all the action plans that follow will be flawed and may lead to disappointment and even harm.

Complaints and litigation follow in the wake of poor diagnostic ability.

So what do we do?

We defer reassuring our patients, we play safe, we request more tests and we refer for second opinions from specialists. Just to be on the safe side.

These understandable tactics take time, cost money and are not 100% reliable.  Diagnostic tests are usually precisely focused to answer specific questions but can have false positive and false negative results.

To request a broad batch of tests in the hope that the answer will appear like a rabbit out of a magician’s hat is … mediocre medicine.


This diagnostic dilemma arises everywhere: in primary care and in secondary care, and in non-urgent and urgent pathways.

And it generates extra demand, more work, bigger queues, longer delays, growing chaos, and mounting frustration, disappointment, anxiety and cost.

The solution is obvious but seemingly impossible: to ensure the most experienced diagnostician is available to be consulted at the start of the process.

But that must be impossible because if the consultants were seeing the patients first, what would everyone else do?  How would they learn to become more expert diagnosticians? And would we have enough consultants?


When I was a junior surgeon I had the great privilege of learning from wise and experienced senior surgeons, who had seen it, done it and could teach it.

Mike Thompson is one of these.  He is a general surgeon with a special interest in the diagnosis and treatment of bowel cancer.  And he has a particular passion for improving the speed and accuracy of the diagnosis step; because it can be a life-saver.

Mike is also a disruptive innovator and an early pioneer of the use of endoscopy in the outpatient clinic.  It is called point-of-care testing nowadays, but in the 1980’s it was a radically innovative thing to do.

He also pioneered collecting the symptoms and signs from every patient he saw, in a standard way using a multi-part printed proforma. And he invested many hours entering the raw data into a computer database.

He also did something that even now most clinicians do not do; when he knew the outcome for each patient he entered that into his database too – so that he could link first presentation with final diagnosis.


Mike knew that I had an interest in computer-aided diagnosis, which was a hot topic in the early 1980’s, and also that I did not warm to the Bayesian statistical models that underpinned it.  To me they made too many simplifying assumptions.

The human body is a complex adaptive system. It defies simplification.

Mike and I took a different approach.  We  just counted how many of each diagnostic group were associated with each pattern of presenting symptoms and signs.

The problem was that even his database of 8000+ patients was not big enough! This is why others had resorted to using statistical simplifications.

So we used the approach that an experienced diagnostician uses.  We used the information we had already gleaned from a patient to decide which question to ask next, and then the next one and so on.


And we always had three pieces of information at the start – the patient’s age, gender and presenting symptom.

What surprised and delighted us was how easy it was to use the database to help us do this for the new patients presenting to his clinic; the ones who were worried that they might have bowel cancer.

And what surprised us even more was how few questions we needed to ask to arrive at a statistically robust decision to reassure-or-refer for further tests.

So one weekend, I wrote a little computer program that used the data from Mike’s database and our simple bean-counting algorithm to automate this process.  And the results were amazing.  Suddenly we had a simple and reliable way of using past experience to support our present decisions – without any statistical smoke-and-mirror simplifications getting in the way.
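
For the curious, the core of that bean-counting idea can be sketched in a few lines of Python. The field names and records below are illustrative placeholders – this is not the original program or Mike’s actual database:

    from collections import Counter, defaultdict

    # Each past record links a presentation pattern to the final, confirmed diagnosis.
    past_records = [
        {"age_band": "70-79", "gender": "M", "symptom": "rectal bleeding", "diagnosis": "colorectal cancer"},
        {"age_band": "70-79", "gender": "M", "symptom": "rectal bleeding", "diagnosis": "haemorrhoids"},
        # ... thousands more linked presentation-and-outcome records ...
    ]

    # Count the final diagnoses for every combination of age band, gender and symptom.
    counts = defaultdict(Counter)
    for r in past_records:
        counts[(r["age_band"], r["gender"], r["symptom"])][r["diagnosis"]] += 1

    def outcome_profile(age_band, gender, symptom):
        """Return the historical outcome frequencies for a new patient's presentation."""
        c = counts[(age_band, gender, symptom)]
        total = sum(c.values())
        return {dx: n / total for dx, n in c.items()} if total else {}

    print(outcome_profile("70-79", "M", "rectal bleeding"))

Repeating the same count on the sub-group that matches the answers gathered so far is what suggests which question is most useful to ask next.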

The computer program did not make the diagnosis, we were still responsible for that; all it did was provide us with reliable access to a clear and comprehensive digital memory of past experience.


What it then enabled us to do was to learn more quickly by exploring the complex patterns of symptoms, signs and outcomes and to develop our own diagnostic “rules of thumb”.

We learned in hours what it would take decades of experience to uncover. This was hot stuff, and when I presented our findings at the Royal Society of Medicine the audience was also surprised and delighted (and it was awarded the John of Arderne Medal).

So, we called it the Hot Learning System, and years later I updated it with Mike’s much bigger database (29,000+ records) and created a basic web-based version of the first step – age, gender and presenting symptom.  You can have a play if you like … just click HERE.


So what are the lessons here?

  1. We need to have the most experienced diagnosticians at the start of the process.
  2. The first diagnostic assessment can be very quick so long as we have developed evidence-based heuristics.
  3. We can accelerate the training in diagnostic skills using simple information technology and basic analysis techniques.

And exactly the same is true of health care system improvement.

We need to have an experienced health care improvement practitioner involved at the start, because if we skip this critical study step and move to plan without a correct diagnosis, then we will make errors and poor decisions, and take counter-productive actions.  And then we will generate more work, more queues, more delays, more chaos, more distress and increased costs.

Exactly the opposite of what we want.

Q1: So, how do we develop experienced improvement practitioners more quickly?

Q2: Is there a hot learning system for improvement science?

A: Yes, there is. It can be found here.

The Marmite Effect

Have you heard the phrase “you either love it or you hate it”?  It is called the Marmite Effect.

Improvement science has a Marmite-like effect on some people, or more specifically, the theory part does.

Both evidence and experience show that most people prefer to learn-by-doing first; and then consolidate their learning with the minimum, necessary amount of supporting theory.

But that is not how we usually share what we know with others.  We usually attempt to teach the theory first, perhaps in the belief that it will speed up the process of learning.

Sadly, it usually has the opposite effect. Too much theory too soon often creates a barrier to engagement. It actually slows learning down! Which was not the impact we were intending.


The implication of this is that teachers of the science of improvement need to provide a range of different ways to engage with the subject.  Complementary ways.  And leave the choice of which suits whom … to the learner.

And the way to tell if it is working is … the sound of laughter.

Why is that?


Laughing is a complex behaviour that leaves us feeling happier. Which is good.

Comedians make a living from being able to trigger this behaviour in their audiences, and we will gladly part with hard cash when we know something will make us feel better.

And laughing is one of the healthiest ways to feel better!

So why do we laugh when we are learning?

It is believed that one trigger for the laughter reaction is the sudden shift from one perspective to another.  More specifically, a mental shift that relieves a growing emotional tension.  The punch line of a really good joke for example.

And later-in-life learning is often more a process of unlearning.

When we challenge a learned assumption with evidence and if we disprove it … we are unlearning.  And doing that generates emotional tension. We are often very attached to our unconscious assumptions and will usually resist them being challenged.

The way to unlearn effectively is to use the evidence of our own eyes to raise doubts about our unconscious assumptions.  We need to actively generate a bit of confusion.

Then, we resolve the apparent paradox by creatively shifting perspective, often with a real example, a practical explanation or a hands-on demonstration.

And when we experience the “Ah ha! Now I see!” reaction, and we emerge from the fog of confusion, we will relieve the emotional tension and our involuntary reaction is to laugh.

But if our teacher unintentionally triggers a Marmite effect; a “Yeuk, I am NOT enjoying this!” feeling, then we need to respect that, and step back, and adopt a different tack.


Over the last few months I have been experimenting with different approaches to introducing the principles of improvement-by-design.

And the results are clear.

A minority prefer to start with the abstract theory, and then apply it in practice.

The majority have various degrees of Marmite reaction to the theory, and some are so put off that they actively disengage.  But when they have an opportunity to see the same principles demonstrated in a concrete, practical way; they learn and laugh.

Unlearning-by-doing seems to work better for the majority.

So, if you want to have fun and learn how to deliver significant and sustained improvements … then the evidence points to this as the starting point …

… the Flow Design Practical Skills One Day Workshop.

And if you also want to dip into a bit of the tried-and-tested theory that underpins improvement-by-design then you can do that as well, either before or later (when it becomes necessary), or both.


So, to have lots of fun and learn some valuable improvement-by-design practical skills at the same time …  click here.

The Storyboard

This week about thirty managers and clinicians in South Wales conducted two experiments to test the design of the Flow Design Practical Skills One Day Workshop.

Their collective challenge was to diagnose and treat a “chronically sick” clinic and the majority had no prior exposure to health care systems engineering (HCSE) theory, techniques, tools or training.

Two of the group, Chris and Jat, had been delegates at a previous ODWS, and had then completed their Level-1 HCSE training and real-world projects.

They had seen it and done it, so this experiment was to test if they could now teach it.

Could they replicate the “OMG effect” that they had experienced and that fired up their passion for learning and using the science of improvement?


The Chicken Coop

Chickens make interesting pets. They have personalities – no two are the same – and they produce something useful and valuable. Eggs. Yum yum!

But chickens are yummy too … especially to foxes. So we have a problem. We need to keep our ‘chucks’ safe and that means a fox-proof coop.

Here’s a picture of a chicken coop … looks great doesn’t it? You can just hear the happy clucks and taste the fresh eggs.

Have you any idea how complicated, difficult and expensive this would be to build from scratch?

Better not even try … just reach for the laptop and credit card and order a prefabricated one.  Just assembling the courier-delivered-flat-packed-made-in-China-from-renewable-forest-softwood coop will be enough of a DIY challenge!


We have had chickens for years and we have learned that they are very funny-feathered-characters-who-lay-eggs.

And we started with an old Wendy house, some softwood battening, some rolls of weld-mesh, a bag of screws and staples and a big dollop of suck-it-and-see.

The first attempt was Heath-Robinson but it worked OK.  The old Wendy house was transformed into a cosy coop and a safe-from-foxes chuck run.

And the eggs were delicious and nutritious.


But the arrow of time is relentless, and as with all organic things, the “rot had set in”.

The time had come for an update. Doing nothing was not an option.

Q: Start from scratch with a blank piece of paper and design and build a new coop and run (i.e. scrap the old one)? Or re-purpose what we have (i.e. cut out the rot, keep the good stuff and re-fashion something that is fit-for-purpose for years to come)?

Oh, and we also need to keep-the-ship-afloat in the process – i.e. keep the chucks safe-from-foxes and happily laying eggs.  That meant doing the project in one day.


What was interesting about this mini-transformation project was that I could apply exactly the same improvement framework as I would to a health care systems engineering one.

I had a clear problem (unsafe, semi-rotten chicken coop) and a clear purpose (fit-for-purpose and affordable coop and run).

Next I needed a diagnosis.  What was rotten and what was not?  And that required a bit of poking with a probe … and what I found was that most of the rot was hidden!

First I needed to study the problem (symptoms) and the purpose (required outcome) and the problem again (disease).

This was going to require some radical surgery!

With a clear destination and diagnosis it was now time to plan. For this I needed a robust design framework for exploring “radical” options – particularly those that open new opportunities that the old design prevented!  This is called “future-proofing”.

And the capital cost is always a factor – building a shiny, high-tech version of an old design that is no longer fit-for-purpose is a waste of capital investment and locks us into the past.


And remember, the innovative, fit-for-purpose, elegant, affordable design is just a dream when it is still only a plan.  Someone has to do the building work.  And it has to be feasible with the time, tools and skills available.  And all that needs to be considered at the design stage too!

With the benefit of hindsight, I have come to appreciate that the most valuable long-term investment is the new theory, new techniques, new tools and the new skills to use them. This is called “innovation”.


So with a diagnosis, a design, a sunny day, a sharpened-pencil-behind-the-ear, a just-in-time delivery of the bulkier building materials, a freshly charged power drill, and a hot cuppa … the work started.

It was going to be like performing a major operation.

The chucks were more than happy to be let out to scratch around in the garden; and groundwork always generates the opportunity for a creepy-crawly feast!  But safety comes first – foxes mainly hunt at night so in one daylight period I had to surgically excise the rot and then transform what was left into a safe space for the chucks to sleep.

When the study and plan work has been done diligently – the do phase is enjoyable.

If we skip the study phase and leap straight to plan with all the old assumptions (some rotten some not) still in place … the do phase is usually miserable! (No wonder many people have developed a high level of aversion to change!).


And the outcome?

Happy chucks, safely tucked up in their transformed, rot-free, safe-from-harm, coop and run.

The work is not quite finished – a new roof is awaiting installation but that is a quality issue not a safety one.

Safety always comes first.

And just look at how much rot had to be chopped out.

Any surgeon will tell you … “for the fastest recovery you have to cut out all the rot first“.

And that requires careful planning, courage, skill, a sharp blade, focus and … team work!

The Pathology of Variation I

In medical training we have to learn about lots of things. That is one reason why it takes a long time to train a competent and confident clinician.

First, we learn the anatomy (structure) and the physiology (function) of the normal, healthy human.

Then we learn about how this amazingly complicated system can go wrong.  We learn about pathology.  And we do that so that we understand the relationship between the cause (disease) and the effect (symptoms and signs).

Then we learn about diagnostics – which is how to work backwards from the effects to the most likely cause(s).

And only then can we learn about therapeutics – the design and delivery of a treatment plan that we are confident will relieve the symptoms by curing the disease.

And we learn about prevention – how to avoid some illnesses (and delay others) by addressing the root causes earlier.  Much of the increase in life expectancy over the last 200 years has come from prevention, not from cure.


The NHS is an amazingly complicated system, and it too can go wrong.  It can exhibit a wide spectrum of symptoms and signs; medical errors, long delays, unhappy patients, burned-out staff, and overspent budgets.

But, there is no equivalent training in how to diagnose and treat a sick health care system.  And this is not acceptable, especially given that the knowledge of how to do this is already available.

It is called complex adaptive systems engineering (CASE).


Before the Renaissance, the understanding of how the body works was primitive and it was believed that illness was “God’s Will” so we had to just grin-and-bear-it (and pray).

The Scientific Revolution brought us new insights, profound theories, innovative techniques and capability-extending tools.  And the impact has been dramatic.  Those who do have access to this knowledge live better and longer than ever.  Those who do not … do not.

Our current understanding of how health care systems work is, to be blunt, medieval.  The current approaches amount to little more than rune reading, incantations and the prescription of purgatives and leeches.  And the impact is about as effective.

So we need to study the anatomy, physiology, pathology, diagnostics and therapeutics of complex adaptive systems like healthcare.  And most of all we need to understand how to prevent catastrophes happening in the first place.  We need the NHS to be immortal.


And this week a prototype complex adaptive pathology training system was tested … and it employed cutting-edge 21st Century technology: Pasta Twizzles.

The specific topic under scrutiny was variation.  A brain-bending concept that is usually relegated to the mystical smoke-and-mirrors world called “Sadistics”.

But no longer!

The Mists-of-Jargon and Fog-of-Formulae were blown away as we switched on the Fan-of-Facilitation and the Light-of-Simulation and went exploring.

Empirically. Pragmatically.


And what we discovered was jaw-dropping.

A disease called the “Flaw of Averages” and its malignant manifestation “Carveoutosis”.


And with our new knowledge we opened the door to a previously hidden world of opportunity and improvement.

Then we activated the Laser-of-Insight and evaporated the queues and chaos that, before our new understanding, we had accepted as inevitable and beyond our understanding or control.

They were neither. And never had been. We were deluding ourselves.

Welcome to the Resilient Design – Practical Skills – One Day Workshop.

Validation Test: Passed.

Diagnose-Design-Deliver

A story was shared this week.

A story of hope for the hard-pressed NHS, its patients, its staff and its managers and its leaders.

A story that says “We can learn how to fix the NHS ourselves“.

And the story comes with evidence; hard, objective, scientific, statistically significant evidence.


The story starts almost exactly three years ago when a Clinical Commissioning Group (CCG) in England made a bold strategic decision to invest in improvement, or as they termed it “Achieving Clinical Excellence” (ACE).

They invited proposals from their local practices with the “carrot” of enough funding to allow GPs to carve-out protected time to do the work.  And a handful of proposals were selected and financially supported.

This is the story of one of those proposals which came from three practices in Sutton who chose to work together on a common problem – the unplanned hospital admissions in their over 70’s.

Their objective was clear and measurable: “To reduce the cost of unplanned admissions in the 70+ age group by working with hospital to reduce length of stay.”

Did they achieve their objective?

Yes, they did.  But there is more to this story than that.  Much more.


One innovative step they took was to invest in learning how to diagnose why the current ‘system’ was costing what it was; then learning how to design an improvement; and then learning how to deliver that improvement.

They invested in developing their own improvement science skills first.

They did not assume they already knew how to do this and they engaged an experienced health care systems engineer (HCSE) to show them how to do it (i.e. not to do it for them).

Another innovative step was to create a blog to make it easier to share what they were learning with their colleagues; and to invite feedback and suggestions; and to provide a journal that captured the story as it unfolded.

And they measured stuff before they made any changes and afterwards so they could measure the impact, and so that they could assess the evidence scientifically.

And that was actually quite easy because the CCG was already measuring what they needed to know: admissions, length of stay, cost, and outcomes.

All they needed to learn was how to present and interpret that data in a meaningful way.  And as part of their IS training,  they learned how to use system behaviour charts, or SBCs.


By Jan 2015 they had learned enough of the HCSE techniques and tools to establish the diagnosis and start making changes to the parts of the system that they could influence.


Two years later they subjected their before-and-after data to robust statistical analysis and they had a surprise. A big one!

Reducing hospital mortality was not a stated objective of their ACE project, and they only checked the mortality data to be sure that it had not changed.

But it had, and the “p=0.014” in their analysis means that the probability that this 20.0% reduction in hospital mortality was due to random chance … is just 1.4%.  [This is well below the 5% threshold that we usually accept as “statistically significant” in a clinical trial.]
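
For readers who want to see the shape of the arithmetic, the comparison behind a result like this is a two-proportion significance test on before-and-after mortality. Here is a minimal sketch using the normal approximation; the counts are placeholders, not the Sutton ACE data, and the published analysis may well have used a different method:

    from math import sqrt, erf

    def two_proportion_p_value(deaths_before, n_before, deaths_after, n_after):
        """Two-sided z-test for a difference between two mortality proportions."""
        p1, p2 = deaths_before / n_before, deaths_after / n_after
        pooled = (deaths_before + deaths_after) / (n_before + n_after)
        se = sqrt(pooled * (1 - pooled) * (1 / n_before + 1 / n_after))
        z = (p1 - p2) / se
        return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # standard normal tail x 2

    # Placeholder counts, for illustration only.
    print(two_proportion_p_value(deaths_before=120, n_before=1000,
                                 deaths_after=96, n_after=1000))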

But …

This was not a randomised controlled trial.  This was an intervention in a complicated, ever-changing system; so they needed to check that the hospital mortality for comparable patients who were not their patients had not changed as well.

And the statistical analysis of the hospital mortality for the ‘other’ practices for the same patient group, and the same period of time confirmed that there had been no statistically significant change in their hospital mortality.

So, it appears that what the Sutton ACE Team did to reduce length of stay (and cost) had also, unintentionally, reduced hospital mortality. A lot!


And this unexpected outcome raises a whole raft of questions …


If you would like to read their full story then you can do so … here.

It is a story of hunger for improvement, of humility to learn, of hard work and of hope for the future.

Levels of Resistance

Improvement implies change, but change does not imply improvement.

We have all experienced the pain of disappointment when a change that promised much delivered no improvement, or even worse, a negative impact.

We have learned to become wary and skeptical about change.

We have learned a whole raft of tactics for deflection and diffusion of the enthusiasm of others.  And by doing so we don the black hat of the healthy skeptic and adopt the tell-tale mantra of “Yes, but …”.

So here is an onion diagram to use as a reference.  It comes from a recently published essay that compares and contrasts two schools of flow improvement.  Eli Goldratt’s “Theory of Constraints” and a translation of Systems Engineering called 6M Design®.


The first five layers can be described as “denial”, the second four as “grudging acceptance” … and the last one is the sound of the final barrier coming down and revealing the raw emotion underpinning our reluctance to change. Fear.


The good news is that this diagram helps us to shape and steer change in a way that improves its chances of success, because if we can learn to peel back these layers by sharing information that soothes the fear of the unknown, then we can align and engage.  And that is essential for emotional momentum to build.

So when we meet resistance do we push or not?

Ask yourself: how would you prefer to be engaged? Pushed or not?

Hugh, Louise and Bob

Bob Jekyll was already sitting at a table, sipping a pint of Black Sheep and nibbling on a bowl of peanuts when Hugh and Louise arrived.

<Hugh> Hello, are you Bob?

<Bob> Yes, indeed! You must be Hugh and Louise. Can I get you a thirst quencher?

<Louise> Lime and soda for me please.

<Hugh> I’ll have the same as you, a Black Sheep.

<Bob> On the way.

<Hugh> Hello Louise, I’m Hugh Lewis.  I am the ops manager for acute medicine at St. Elsewhere’s Hospital. It is good to meet you at last. I have seen your name on emails and performance reports.

<Louise> Good to meet you too Hugh. I am senior data analyst for St. Elsewhere’s and I think we may have met before, but I’m not sure when.  Do you know what this is about? Your invitation was a bit mysterious.

<Hugh> Yes. Sorry about that. I was chatting to a friend of mine at the golf club last week, Dr Bill Hyde who is one of our local GPs.  As you might expect, we got to talking about the chronic pressure we are all under in both primary and secondary care.  He said he has recently crossed paths with an old chum of his from university days who he’d had a very interesting conversation with in this very pub, and he recommended I email him. So I did. And that led to a phone conversation with Bob Jekyll. I have to say he asked some very interesting questions that left me feeling a mixture of curiosity and discomfort. After we talked Bob suggested that we meet for a longer chat and that I invite my senior data analyst along. So here we are.

<Louise> I have to say my curiosity was pricked by your invitation, specifically the phrase ‘system behaviour charts’. That is a new one on me and I have been working in the NHS for some time now. It is too many years to mention since I started as junior data analyst, fresh from university!

<Hugh> That is the term Bob used, and I confess it was new to me too.

<Bob> Here we are, Black Sheep, lime soda and more peanuts.  Thank you both for coming, so shall we talk about the niggle that Hugh raised when we spoke on the phone?

<Hugh> Ah! Louise, please accept my apologies in advance. I think Bob might be referring to when I said that “90% of the performance reports don’t make any sense to me“.

<Louise> There is no need to apologise Hugh. I am actually reassured that you said that. They don’t make any sense to me either! We only produce them that way because that is what we are asked for.  My original degree was geography and I discovered that I loved data analysis! My grandfather was a doctor so I guess that’s how I ended up doing health care data analysis. But I must confess, some days I do not feel like I am adding much value.

<Hugh> Really? I believe we are in heated agreement! Some days I feel the same way.  Is that why you invited us both Bob?

<Bob> Yes.  It was some of the things that Hugh said when we talked on the phone.  They rang some warning bells for me because, in my line of work, I have seen many people fall into a whole minefield of data analysis traps that leave them feeling confused and frustrated.

<Louise> What exactly is your line of work, Bob?

<Bob> I am a systems engineer.  I design, build, verify, integrate, implement and validate systems. Fit-for-purpose systems.

<Louise> In health care?

<Bob> Not until last week when I bumped into Bill Hyde, my old chum from university.  But so far the health care system looks just like all the other ones I have worked in, so I suspect some of the lessons from other systems are transferable.

<Hugh> That sounds interesting. Can you give us an example?

<Bob> OK.  Hugh, in our first conversation, you often used the words “demand”  and “capacity”. What do you mean by those terms?

<Hugh> Well, demand is what comes through the door, the flow of requests, the workload we are expected to manage.  And capacity is the resources that we have to deliver the work and to meet our performance targets.  Capacity is the staff, the skills, the equipment, the chairs, and the beds. The stuff that costs money to provide.  As a manager, I am required to stay in-budget and that consumes a big part of my day!

<Bob> OK. Speaking as an engineer I would like to know the units of measurement of “demand” and “capacity”?

<Hugh> Oh! Um. Let me think. Er. I have never been asked that question before. Help me out here Louise.  I told you Bob asks tricky questions!

<Louise> I think I see what Bob is getting at.  We use these terms frequently but rather loosely. On reflection they are not precisely defined, especially “capacity”. There are different sorts of capacity, all of which will be measured in different ways so have different units. No wonder we spend so much time discussing and debating the question of whether we have enough capacity to meet the demand.  We are probably all assuming different things.  Beds cannot be equated to staff, but too often we just seem to lump everything together when we talk about “capacity”.  So by doing that what we are really asking is “do we have enough cash in the budget to pay for the stuff we think we need?”. And if we are failing one target or another we just assume that the answer is “No” and we shout for “more cash”.

<Bob> Exactly my point. And this was one of the warning bells.  Lack of clarity on these fundamental definitions opens up a minefield of other traps like the “Flaw of Averages” and “Time equals Money“.  And if we are making those errors then they will, unwittingly, become incorporated into our data analysis.

<Louise> But we use averages all the time! What is wrong with an average?

<Bob> I can sense you are feeling a bit defensive Louise.  There is no need to.  An average is perfectly OK and is a very useful tool.  The “flaw” is when it is used inappropriately.  Have you heard of Little’s Law?

<Louise> No. What’s that?

<Bob> It is the mathematically proven relationship between flow, work-in-progress and lead time.  It is a fundamental law of flow physics and it uses averages. So averages are OK.
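
[Aside: Little’s Law can be written, and checked, in one line. The numbers below are purely illustrative and are not part of the conversation.]

    # Little's Law: average work-in-progress = average flow-rate x average lead-time
    flow_rate = 4.0       # patients completed per hour (illustrative)
    lead_time = 3.0       # average hours from arrival to departure (illustrative)
    work_in_progress = flow_rate * lead_time
    print(work_in_progress)   # 12.0 patients, on average, somewhere in the process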

<Hugh> So what is the “Flaw of Averages”?

<Bob> It is easier to demonstrate it than to describe it.  Let us play a game.  I have some dice and we have a big bowl of peanuts.  Let us simulate a simple two step process.  Hugh you are Step One and Louise you are Step Two.  I will be the source of demand.

I will throw a dice and count that many peanuts out of the bowl and pass them to Hugh.  Hugh, you then throw the dice and move that many peanuts from your heap to Louise, then Louise throws the dice and moves that many from her pile to the final heap which we will call activity.

<Hugh> Sounds easy enough.  If we all use the same dice then the average flow through each step will be the same so after say ten rounds we should have, um …

<Louise> … thirty five peanuts in the activity heap.  On average.

<Bob> OK.  That’s the theory, let’s see what happens in reality.  And no eating the nuts-in-progress please.


They play the game and after a few minutes they have completed the ten rounds.


<Hugh> That’s odd.  There are only 30 nuts in the activity heap and we expected 35.  Nobody nibbled any nuts so it’s just chance I suppose.  Let’s play again. It should average out.

…..  …..

<Louise> Thirty four this time which is better, but is still below the predicted average.  That could still be a chance effect though.  Let us run the ‘nutty’ game a few more times.

….. …..

<Hugh> We have run the same game six times with the same nuts and the same dice and we delivered activities of 30, 34, 30, 24, 23 and 31 and there are usually nuts stuck in the process at the end of each game, so it is not due to a lack of demand.  We are consistently under-performing compared with our theoretical prediction.  That is weird.  My head says we were just unlucky but I have a niggling doubt that there is more to it.

<Louise> Is this the Flaw of Averages?

<Bob> Yes, it is one of them. If we set our average future flow-capacity to the average historical demand and there is any variation anywhere in the process then we will see this effect.
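
[Aside: for anyone who wants to replay the nutty game without the nuts, here is a minimal Monte Carlo sketch of it – two process steps fed by a demand dice. It is illustrative code only, not one of the workshop tools.]

    import random

    def play_game(rounds=10, steps=2, seed=None):
        """Return the total 'activity' delivered after the given number of rounds."""
        rng = random.Random(seed)
        buffers = [0] * steps        # peanuts waiting at Hugh's and Louise's steps
        activity = 0                 # peanuts that made it all the way through
        for _ in range(rounds):
            buffers[0] += rng.randint(1, 6)                  # Bob feeds demand into step one
            for i in range(steps):
                moved = min(rng.randint(1, 6), buffers[i])   # cannot move nuts we do not have
                buffers[i] -= moved
                if i + 1 < steps:
                    buffers[i + 1] += moved
                else:
                    activity += moved
        return activity

    results = [play_game() for _ in range(6)]
    print(results, sum(results) / len(results))   # typically below the 'average' prediction of 35

Running it a few times shows the same consistent shortfall that the peanuts revealed.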

<Hugh> H’mmm.  But we do this all the time because we assume that the variation will average out over time. Intuitively it must average out over time.  What would happen if we kept going for more cycles?

<Bob> That is a very good question.  And your intuition is correct.  It does average out eventually but there is a catch.

<Hugh> What is the catch?

<Bob>  The number of peanuts in the process and the time it takes for one peanut to get through are both very variable.

<Louise> Is there any pattern to the variation? Is it predictable?

<Bob> Another excellent question.  Yes, there is a pattern.  It is called “chaos”.  Predictable chaos if you like.

<Hugh> So is that the reason you said on the phone that we should present our metrics as time-series charts?

<Bob> Yes, one of them.  The appearance of chaotic system behaviour is very characteristic on a time-series chart.

<Louise> And if we see the chaos pattern on our charts then we could conclude that we have made the Flaw of Averages error?

<Bob> That would be a reasonable hypothesis.

<Hugh> I think I understand the reason you invited us to a face-to-face demonstration.  It would not have worked if you had just described it.  You have to experience it because it feels so counter-intuitive.  And this is starting to feel horribly familiar; perpetual chaos about sums up my working week!

<Louise> You also mentioned something you referred to as the “time equals money” trap.  Is that somehow linked to this?

<Bob> Yes.  We often equate time and money but they do not behave the same way.  If I have five pounds today and I only spend four pounds then I can save the remaining one pound for tomorrow and spend it then – so the Law of Averages works.  But if I have five minutes today and I only use four minutes then the other minute cannot be saved and used tomorrow, it is lost forever.  That is why the Law of Averages does not work for time.

<Hugh> But that means if we set our budgets based on the average demand and the cost of people’s time then not only will we have queues, delays and chaos, we will also consistently overspend the budget too.  This is sounding more and more familiar by the minute!  This is nuts, if you will excuse the pun.

<Louise> So what is the solution?  I hope you would not have invited us here if there was no solution.

<Bob> Part of the solution is to develop our knowledge of system behaviour and how we need to present it in a visual format. With that we develop a deeper understanding of what the system behaviour charts are saying to us.  With that we can develop our ability to make wiser decisions that will lead to effective actions which will eliminate the queues, delays, chaos and cost-pressures.

<Hugh> This is possible?

<Bob> Yes. It is called systems engineering. That’s what I do.

<Louise> When do we start?

<Bob> We have started.

Dr Hyde and Mr Jekyll

Dr Bill Hyde was already at the bar when Bob Jekyll arrived.

Bill and  Bob had first met at university and had become firm friends, but their careers had diverged and it was only by pure chance that their paths had crossed again recently.

They had arranged to meet up for a beer and to catch up on what had happened in the 25 years since they had enjoyed the “good old times” in the university bar.

<Dr Bill> Hi Bob, what can I get you? If I remember correctly it was anything resembling real ale. Will this “Black Sheep” do?

<Bob> Hi Bill, Perfect! I’ll get the nibbles. Plain nuts OK for you?

<Dr Bill> My favourite! So what are you up to now? What doors did your engineering degree open?

<Bob> Lots!  I’ve done all sorts – mechanical, electrical, software, hardware, process, all except civil engineering. And I love it. What I do now is a sort of synthesis of all of them.  And you? Where did your medical degree lead?

<Dr Bill> To my heart’s desire, the wonderful Mrs Hyde, and of course to primary care. I am a GP. I have wanted to be a GP ever since I was knee-high to a grasshopper.

<Bob> Yes, you always had that “I’m going to save the world one patient at a time!” passion. That must be so rewarding! Helping people who are scared witless by the health horror stories that the media pump out.  I had a fright last year when I found a lump.  My GP was great, she confidently diagnosed a “hernia” and I was all sorted in a matter of weeks with a bit of nifty day case surgery. I was convinced my time had come. It just shows how damaging the fear of the unknown can be!

<Dr Bill> Being a GP is amazingly rewarding. I love my job. But …

<Bob> But what? Are you alright Bill? You suddenly look really depressed.

<Dr Bill> Sorry Bob. I don’t want to be a damp squib. It is good to see you again, and chat about the old days when we were teased about our names.  And it is great to hear that you are enjoying your work so much. I admit I am feeling low, and frankly I welcome the opportunity to talk to someone I know and trust who is not part of the health care system. If you know what I mean?

<Bob> I know exactly what you mean.  Well, I can certainly offer an ear, “a problem shared is a problem halved” as they say. I can’t promise to do any more than that, but feel free to tell me the story, from the beginning. No blood-and-guts gory details though please!

<Dr Bill> Ha! “Tell me the story from the beginning” is what I say to my patients. OK, here goes. I feel increasingly overwhelmed and I feel like I am drowning under a deluge of patients who are banging on the practice door for appointments to see me. My intuition tells me that the problem is not the people, it is the process, but I can’t seem to see through the fog of frustration and chaos to a clear way forward.

<Bob> OK. I confess I know nothing about how your system works, so can you give me a bit more context.

<Dr Bill> Sorry. Yes, of course. I am what is called a single-handed GP and I have a list of about 1500 registered patients and I am contracted to provide primary care for them. I don’t have to do that 24 x 7, the urgent stuff that happens in the evenings and weekends is diverted to services that are designed for that. I work Monday to Friday from 9 AM to 5 PM, and I am contracted to provide what is needed for my patients, and that means face-to-face appointments.

<Bob> OK. When you say “contracted” what does that mean exactly?

<Dr Bill> Basically, the St. Elsewhere’s® Practice is like a small business. Its annual income is a fixed amount per year for each patient on the registration list, and I have to provide the primary care service for them from that pot of cash. And that includes all the costs, including my income, our practice nurse, and the amazing Mrs H. She is the practice receptionist, manager, administrator and all-round fixer-of-anything.

<Bob> Wow! What a great design. No need to spend money on marketing, research, new product development, or advertising! Just 100% pure service delivery of tried-and-tested medical know-how to a captive audience for a guaranteed income. I have commercial customers who would cut off their right arms for an offer like that!

<Dr Bill> Really? It doesn’t feel like that to me. It feels like the more I offer, the more the patients expect. The demand is a bottomless well of wants, but the income is capped and my time is finite!

<Bob> H’mm. Tell me more about the details of how the process works.

<Dr Bill> Basically, I am a problem-solving engine. Patients phone for an appointment, Mrs H books one, the patient comes at the appointed time, I see them, and I diagnose and treat the problem, or I refer on to a specialist if it’s more complicated. That’s basically it.

<Bob> OK. Sounds a lot simpler than 99% of the processes that I’m usually involved with. So what’s the problem?

<Dr Bill> I don’t have enough capacity! After all the appointments for the day are booked Mrs H has to say “Sorry, please try again tomorrow” to every patient who phones in after that.  The patients who can’t get an appointment are not very happy and some can get quite angry. They are anxious and frustrated and I fully understand how they feel. I feel the same.

<Bob> We will come back to what you mean by “capacity”. Can you outline for me exactly how a patient is expected to get an appointment?

<Dr Bill> We tell them to phone at 8 AM for an appointment, there is a fixed number of bookable appointments, and it is first-come-first-served.  That is the only way I can protect myself from being swamped and it is the fairest solution for patients.  It wasn’t my idea; it is called Advanced Access. Each morning at 8 AM we switch on the phones and brace ourselves for the daily deluge.

<Bob> You must be pulling my leg! This design is a batch-and-queue phone-in appointment booking lottery!  I guess that is one definition of “fair”.  How many patients get an appointment on the first attempt?

<Dr Bill> Not many.  The appointments are usually all gone by 9 AM and a lot are to people who have been trying to get one for several days. When they do eventually get to see me they are usually grumpy and then spring the trump card “And while I’m here doctor I have a few other things that I’ve been saving up to ask you about“. I help if I can but more often than not I have to say, “I’m sorry, you’ll have to book another appointment!“.

<Bob> I’m not surprised your patients are grumpy. I would be too. And my recollection of seeing my GP with my scary lump wasn’t like that at all. I phoned at lunch time and got an appointment the same day. Maybe I was just lucky, or maybe my GP was as worried as me. But it all felt very calm. When I arrived there was only one other patient waiting, and I was in and out in less than ten minutes – and mightily reassured I can tell you! It felt like a high quality service that I could trust if-and-when I needed it, which fortunately is very infrequently.

<Dr Bill> I dream of being able to offer a service like that! I am prepared to bet you are registered with a group practice and you see whoever is available rather than your own GP. Single-handed GPs like me who offer the old fashioned personal service are a rarity, and I can see why. We must be suckers!

<Bob> OK, so I’m starting to get a sense of this now. Has it been like this for a long time?

<Dr Bill> Yes, it has. When I was younger I was more resilient and I did not mind going the extra mile.  But the pressure is relentless and maybe I’m just getting older and grumpier.  My real fear is I end up sounding like the burned-out cynics that I’ve heard at the local GP meetings; the ones who crow about how they are counting down the days to when they can retire and gloat.

<Bob> You’re the same age as me Bill so I don’t think either of us can use retirement as an exit route, and anyway, that’s not your style. You were never a quitter at university. Your motto was always “when the going gets tough the tough get going“.

<Dr Bill> Yeah I know. That’s why it feels so frustrating. I think I lost my mojo a long time back. Maybe I should just cave in and join up with the big group practice down the road, and accept the inevitable loss of the personal service. They said they would welcome me, and my list of 1500 patients, with open arms.

<Bob> OK. That would appear to be an option, or maybe a compromise, but I’m not sure we’ve exhausted all the other options yet.  Tell me, how do you decide how long a patient needs for you to solve their problem?

<Dr Bill> That’s easy. It is ten minutes. That is the time recommended in the Royal College Guidelines.

<Bob> Eh? All patients require exactly ten minutes?

<Dr Bill> No, of course not!  That is the average time that patients need.  The Royal College did a big survey and that was what most GPs said they needed.

<Bob> Please tell me if I have got this right.  You work 9-to-5, and you carve up your day into 10-minute time-slots called “appointments” and, assuming you are allowed time to have lunch and a pee, that would be six per hour for seven hours which is 42 appointments per day that can be booked?

<Dr Bill> No. That wouldn’t work because I have other stuff to do as well as see patients. There are only 25 bookable 10-minute appointments per day.

<Bob> OK, that makes more sense. So where does 25 come from?

<Dr Bill> Ah! That comes from a big national audit. For an average GP with an average list of 1,500 patients, the average number of patients seeking an appointment per day was found to be 25, and our practice population is typical of the national average in terms of age and deprivation.  So I set the upper limit at 25. The workload is manageable but it seems to generate a lot of unhappy patients and I dare not increase the slots because I’d be overwhelmed with the extra workload and I’m barely coping now.  I feel stuck between a rock and a hard place!

<Bob> So you have set the maximum slot-capacity to the average demand?

<Dr Bill> Yes. That’s OK isn’t it? It will average out over time. That is what average means! But it doesn’t feel like that. The chaos and pressure never seems to go away.


There was a long pause while Bob mulled over what he had heard, sipped his pint of Black Sheep and nibbled on the dwindling bowl of peanuts.  Eventually he spoke.


<Bob> Bill, I have some good news and some not-so-good news and then some more good news.

<Dr Bill> Oh dear, you sound just like me when I have to share the results of tests with one of my patients at their follow up appointment. You had better give me the “bad news sandwich”!

<Bob> OK. The first bit of good news is that this is a very common, and easily treatable flow problem.  The not-so-good news is that you will need to change some things.  The second bit of good news is that the changes will not cost anything and will work very quickly.

<Dr Bill> What! You cannot be serious!! Until ten minutes ago you said that you knew nothing about how my practice works and now you are telling me that there is a quick, easy, zero cost solution.  Forgive me for doubting your engineering know-how but I’ll need a bit more convincing than that!

<Bob> And I would too if I were in your position.  The clues to the diagnosis are in the story. You said the process problem was long-standing; you said that you set the maximum slot-capacity to the average demand; and you said that you have a fixed appointment time that was decided by a subjective consensus.  From an engineering perspective, this is a perfect recipe for generating chronic chaos, which is exactly the symptoms you are describing.

<Dr Bill> Is it? OMG. You said this is well understood and resolvable? So what do I do?

<Bob> Give me a minute.  You said the average demand is 25 per day. What sort of service would you like your patients to experience? Would “90% can expect a same day appointment on the first call” be good enough as a starter?

<Dr Bill> That would be game changing!  Mrs H would be over the moon to be able to say “Yes” that often. I would feel much less anxious too, because I know the current system is a potentially dangerous lottery. And my patients would be delighted and relieved to be able to see me that easily and quickly.

<Bob> OK. Let me work this out. Based on what you’ve said, some assumptions, and a bit of flow engineering know-how; you would need to offer up to 31 appointments per day.

<Dr Bill> What! That’s impossible!!! I told you it would be impossible! That would be another hour a day of face-to-face appointments. When would I do the other stuff? And how did you work that out anyway?

<Bob> I did not say they would all have to be 10-minute appointments, and I did not say you would expect to fill them all every day. I did however say you would have to change some things.  And I did say this is a well understood flow engineering problem.  It is called “resilience design“. That’s how I was able to work it out on the back of this Black Sheep beer mat.

<Dr Bill> H’mm. That is starting to sound a bit more reasonable. What things would I have to change? Specifically?

<Bob> I’m not sure what specifically yet.  I think in your language we would say “I have taken a history, and I have a differential diagnosis, so next I’ll need to examine the patient, and then maybe do some tests to establish the actual diagnosis and to design and decide the treatment plan“.

<Dr Bill> You are learning the medical lingo fast! What do I need to do first? Brace myself for the forensic rubber-gloved digital examination?

<Bob> Alas, not yet and certainly not here. Shall we start with the vital signs? Height, weight, pulse, blood pressure, and temperature? That’s what my GP did when I went with my scary lump.  The patient here is not you, it is your St. Elsewhere’s® Practice, and we will need to translate the medical-speak into engineering-speak.  So one thing you’ll need to learn is a bit of the lingua-franca of systems engineering.  By the way, that’s what I do now. I am a systems engineer, or maybe now a health care systems engineer?

<Dr Bill> Point me in the direction of the HCSE dictionary! The next round is on me. And the nuts!

<Bob> Excellent. I’ll have another Black Sheep and some of those chilli-coated ones. We have work to do.  Let me start by explaining what “capacity” actually means to an engineer. Buckle up. This ride might get a bit bumpy.


This story is fictional, but the subject matter is factual.

Bob’s diagnosis and recommendations are realistic and reasonable.
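
For readers who like to check the arithmetic, here is a minimal sketch of how Bob’s back-of-the-beer-mat estimate could be reproduced. It assumes that daily demand is roughly Poisson-distributed with a mean of 25 requests per day; that distributional assumption is mine, not something stated in the story.

```python
# A back-of-the-beer-mat check, assuming daily demand ~ Poisson(mean 25).
# The Poisson assumption is illustrative; real demand should be measured
# and charted before trusting any such calculation.
from scipy.stats import poisson

mean_demand = 25      # average appointment requests per day (from the story)
target = 0.90         # "90% can expect a same-day appointment"

# With capacity set equal to the average (25 slots), how often does a
# whole day's demand actually fit?
print(f"P(demand <= 25) = {poisson.cdf(25, mean_demand):.2f}")   # just over half

# Smallest number of slots that copes with the day's demand on at
# least 90% of days.
slots = int(poisson.ppf(target, mean_demand))
print(f"Slots needed for {target:.0%} same-day cover = {slots}")  # 31
```

The point is not the exact number; it is that setting capacity equal to average demand means demand exceeds capacity on roughly half of all days, and the carried-over work is what generates the chronic queues and chaos that Bill describes.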

Chapter 1 of the HCSE dictionary can be found here.

And if you are a GP who recognises these “symptoms” then this may be of interest.

MOOCHI

“When education fails to keep pace with technology the result is inequality. Without the skills to stay useful as innovations arrive, workers suffer.” The Economist, January 14th 2017, p 11.

The stark reality is that we all have to develop the habit of lifelong learning, especially if we want to avoid mid-career obsolescence.

A terrifying prospect for the family bread-winner.

This risk is especially acute in health care because medical and managerial technology is always changing as the health care system evolves and adapts to the shifting sands and tides.

But we cannot keep going back to traditional classroom methods to update our knowledge and skills: it is too disruptive and expensive.  And when organisations are in a financial squeeze, the training budget is usually the first casualty!

So, how can we protect ourselves?  One answer is a MOOC.

The mantra is “learn while you earn” which means that we do not take time out to do this intermittently, we do it in parallel, and continuously.

The MOOC model leverages the power of the Internet and mobile technology, allowing us to have bites of learning where and when it most suits us, at whatever pace we choose to set.

We can have all the benefits of traditional education too: certificates, communities, and coaching.

And when keeping a job, climbing the career ladder, or changing companies all require a bang-up-to-date set of skills – a bit of time, effort and money may be a very wise investment and deliver a healthy return!


And the good news is that there is a MOOC for Healthcare Improvement.

It is called the …

Foundations of Improvement Science in Healthcare

which is an open door to a growing …

Community of Healthcare Improvement Practitioners.

Click HERE for a free taste …. yum yum!


 

Streeeeeetch!

Today was an especially interesting one.

All days are interesting and every day I learn something of great value and today was no different.

But today was in a different league!


My job today was to deliver health care. I am a surgeon. I perform operations that are intended to improve the health of the people who place their trust in me.

Patients.

But I was only able to deliver three operations today. Usually I would do eight. Normally I would use every precious minute of operating theatre time.

But today, half of that (very expensive) time went unused. It was paid for but it was wasted. The whole theatre team were idle. And patients needing operations were waiting too. Lose, lose.

And the reason?

The day surgery unit in my hospital was being used for something that it was not designed for. It was being used by non-surgical patients.

And that was the best of a bad job because the alternative was those non-surgical patients would otherwise have been lying on trolleys in corridors.


But how could frail elderly medical emergency admissions spill over into the day surgery unit?

Because the current design of the health and social care system guarantees that will happen.  That was not the intention, but it is the impact of the policies that dictate how the system behaves.


So, to fill in the idle time while unable to operate (and after deleting all the spam email and processing the non-spam email) I looked at jobs on the NHS jobs website.

This is a behaviour I have observed many times and, to date, have not indulged in, but today I was idle, and I was irritated, and I was curious to see what I might find.

And I quite quickly came across a job for a “STP Programme Director” with an eye-watering, five-figure salary!  H’mmm …

STP is short for “Sustainability and Transformation Plans” and, forgive me for appearing skeptical but, that sounds rather familiar.

But, ever wary of the dangers of pre-judgement, I dug deeper into the online information to learn more.


And I downloaded the STP for our local health care economy, all 80 pages of it, and I even had time to read it.

The offered purpose made complete sense to me.

A vision of an integrated health and social care system that converts public cash into public contentment. Fantastic! Sign me up to that!!

What I was less able to make sense of was the process for delivering the dream.

The job of the STP Programme Director seemed to be “to bring all the separate parts of the current system together and to weld them into a synergistic whole“.

That would be the perfect job for someone who sees the whole as greater than the sum of the parts, and someone with the skills and experience to do that. Someone like a systems engineer. A health and social care systems engineer.

My interest was growing!


And it was at that point that I felt the emotional pain of disappointment.

There was nothing new in the JD or the STP that even hinted at “how” this wonderful vision would be achieved. All I found was the well-worn “CIP and QIPP” language.

That, forgive me for saying, does not seem to have delivered so far. Apologies for the reality check.

Oh well! Never mind. My skepticism had prepared me for disappointment.


Ah! Here is the next patient. Time to wield the scalpel and to actually deliver some health care. A much better use of my time than web-surfing, eh?


But the idle time was not completely wasted. I did learn much from the opportunity to experience the streeeeetch between the NHS reality and the NHS rhetoric.

Every day is an opportunity to learn something. You never know what will turn up tomorrow.

Miracle on Tavanagh Avenue

Sometimes change is dramatic. A big improvement appears very quickly. And when that happens we are caught by surprise (and delight).

Our emotional reaction is much faster than our logical response. “Wow! That’s a miracle!


Our logical Tortoise eventually catches up with our emotional Hare and says “Hare, we both know that there is no such thing as miracles and magic. There must be a rational explanation. What is it?

And Hare replies “I have no idea, Tortoise.  If I did then it would not have been such a delightful surprise. You are such a kill-joy! Can’t you just relish the relief without analyzing the life out of it?

Tortoise feels hurt. “But I just want to understand so that I can explain to others. So that they can do it and get the same improvement.  Not everyone has a ‘nothing-ventured-nothing-gained’ attitude like you! Most of us are too fearful of failing to risk trusting the wild claims of improvement evangelists. We have had our fingers burned too often.


The apparent miracle is real and recent … here is a snippet of the feedback:

Notice carefully the last sentence. It took a year of discussion to get an “OK” and a month of planning to prepare the “GO”.

That is not a miracle and some magic … that took a lot of hard work!

The evangelist is the customer. The supplier is an engineer.


The context is the chronic niggle of patients trying to get an appointment with their GP, and the chronic niggle of GPs feeling overwhelmed with work.

Here is the back story …

In the opening weeks of the 21st Century, the National Primary Care Development Team (NPDT) was formed.  Primary care was a high priority and the government had allocated £168m of investment in the NHS Plan, £48m of which was earmarked to improve GP access.

The approach the NPDT chose was:

harvest best practice +
use a panel of experts +
disseminate best practice.

Dr (later Sir) John Oldham was the innovator and figure-head.  The best practice was copied from Dr Mark Murray of Kaiser Permanente in the USA – the Advanced Access model.  The dissemination method was copied from Dr Don Berwick’s Institute of Healthcare Improvement (IHI) in Boston – the Collaborative Model.

The principle of Advanced Access is “today’s-work-today” which means that all the requests for a GP appointment are handled the same day.  And the proponents of the model outlined the key elements to achieving this:

1. Measure daily demand.
2. Set capacity so that it is sufficient to meet the daily demand.
3. Simple booking rule: “phone today for a decision today”.

But that is not what was rolled out. The design was modified somewhere between aspiration and implementation, in two important ways.

First, by adding a policy of “Phone at 08:00 for an appointment”, and second by adding a policy of “carving out” appointment slots into labelled pots such as ‘Dr X’ or ‘see in 2 weeks’ or ‘annual reviews’.

Subsequent studies suggest that the tweaking happened at the GP practice level and was driven by the fear that, by reducing the waiting time, they would attract more work.

In other words: an assumption that demand for health care is supply-led, and without some form of access barrier, the system would be overwhelmed and never be able to cope.


The result of this well-intended tampering with the Advanced Access design was to invalidate it. Oops!

To a systems engineer, this meddling was counter-productive.

The “today’s work today” specification is called a demand-led design and, if implemented competently, will lead to shorter waits for everyone, no need for urgent/routine prioritization and slot carve-out, and a simpler, safer, calmer, more efficient, higher quality, more productive system.

In this context it does not mean “see every patient today” it means “assess and decide a plan for every patient today”.

In reality, the actual demand for GP appointments is not known at the start, which is why the first step is to implement continuous measurement of the daily number and category of requests for appointments.

The second step is to feed back this daily demand information in a visual format called a time-series chart.

The third step is to use this visual tool for planning future flow-capacity, and for monitoring for ‘signals’, such as spikes, shifts, cycles and slopes.

That was not part of the modified design, so the reasonable fear expressed by GPs was (and still is) that by attempting to do today’s-work-today they would unleash a deluge of unmet need … and be swamped/drowned.

So a flood defense barrier was bolted on: the policy of “phone at 08:00 for an appointment today“, and then the policy of channeling the overspill into pots of “embargoed slots“.

The combined effect of this error of omission (omitting the measured demand visual feedback loop) and these errors of commission (the 08:00 policy and appointment slot carve-out policy) effectively prevented the benefits of the Advanced Access design being achieved.  It was a predictable failure.

But no one seemed to realize that at the time.  Perhaps because of the political haste that was driving the process, and perhaps because there were no systems engineers on the panel-of-experts to point out the risks of diluting the design.

It is also interesting to note that the strategic aim of the NPDT was to develop a self-sustaining culture of quality improvement (QI) in primary care. That does not seem to have happened either.


The roll-out of Advanced Access was not the success that had been hoped for. That is the conclusion of the 300+ page research report published in 2007.


The “Miracle on Tavanagh Avenue” that was experienced this week by both patients and staff was the expected effect of this tampering finally being corrected; and the true potential of the original demand-led design being released – for all to experience.

Remember the essential ingredients?

1. Measure daily demand and feed it back as a visual time-series chart.
2. Set capacity so that it is sufficient to meet the daily demand.
3. Use a simple booking rule: “phone anytime for a decision today”.

But there is also an extra design ingredient that has been added in this case, one that was not part of the original Advanced Access specification, one that frees up GP time to provide the required “resilience” to sustain a same-day service.

And that “secret” ingredient is why the new design worked so quickly and feels like a miracle – safe, calm, enjoyable and productive.

This is health care systems engineering (HCSE) in action.


So congratulations to Harry Longman, the whole team at GP Access, and to Dr Philip Lusty and the team at Riverside Practice, Tavanagh Avenue, Portadown, NI.

You have demonstrated what was always possible.

The fear of failure prevented it before, just as it prevented you doing this until you were so desperate you had no other choice.

To read the fuller story click here.

PS. Keep a close eye on the demand time-series chart and if it starts to rise then investigate the root cause … immediately.
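
For anyone wanting to act on that PS, here is a minimal sketch of the measure-and-feedback step: count the appointment requests each day and plot them as a time-series chart with its average. The data below are synthetic (random counts around a mean of 25 per day) purely for illustration; in practice the counts would come from the practice’s own request log.

```python
# Sketch: daily demand fed back as a time-series chart.
# Synthetic data for illustration only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)
days = np.arange(1, 61)                          # 60 working days
requests = rng.poisson(lam=25, size=days.size)   # daily appointment requests

plt.figure(figsize=(10, 4))
plt.plot(days, requests, marker="o", linewidth=1)
plt.axhline(requests.mean(), linestyle="--",
            label=f"mean = {requests.mean():.1f} requests/day")
plt.xlabel("Working day")
plt.ylabel("Appointment requests")
plt.title("Daily demand time-series chart")
plt.legend()
plt.show()
```

A rising mean, or a sustained run of points above it, is the early-warning signal to investigate.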


The Power of Pictures

I am a big fan of pictures that tell a story … and this week I discovered someone who is creating great pictures … Hayley Lewis.

This is one of Hayley’s excellent sketch notes … the one that captures the essence of the Bruce Tuckman model of team development.

I share this particular sketch-note because my experience of developing improvement-by-design teams is that it works just like this!

The tricky phase is the STORMING one because not all teams survive it!

About half sink in the storm – and that seems like an awful waste – and I believe it is avoidable.

This means that before starting the team development cycle, the leader needs to be aware of how to navigate themselves and the team through the storm phase … and that requires training, support and practice.

Which is the reason why coaching from an independent, experienced, capable practitioner is a critical element of the improvement process.

How Do We Know We Have Improved?

Phil and Pete are having a coffee and a chat.  They both work in the NHS and have been friends for years.

They have different jobs. Phil is a commissioner and an accountant by training, Pete is a consultant and a doctor by training.

They are discussing a challenge that affects them both on a daily basis: unscheduled care.

Both Phil and Pete want to see significant and sustained improvements and how to achieve them is often the focus of their coffee chats.


<Phil> We are agreed that we both want improvement, both from my perspective as a commissioner and from your perspective as a clinician. And we agree that we want to see improvements in patient safety, waiting, outcomes, experience for both patients and staff, and use of our limited NHS resources.

<Pete> Yes. Our common purpose, the “what” and “why”, has never been an issue.  Where we seem to get stuck is the “how”.  We have both tried many things but, despite our good intentions, it feels like things are getting worse!

<Phil> I agree. It may be that what we have implemented has had a positive impact and we would have been even worse off if we had done nothing. But I do not know. We clearly have much to learn and, while I believe we are making progress, we do not appear to be learning fast enough.  And I think this knowledge gap exposes another “how” issue: After we have intervened, how do we know that we have (a) improved, (b) not changed or (c) worsened?

<Pete> That is a very good question.  And all that I have to offer as an answer is to share what we do in medicine when we ask a similar question: “How do I know that treatment A is better than treatment B?”  It is the essence of medical research; the quest to find better treatments that deliver better outcomes and at lower cost.  The similarities are strong.

<Phil> OK. How do you do that? How do you know that “Treatment A is better than Treatment B” in a way that anyone will trust the answer?

<Pete> We use a science that is actually very recent on the scientific timeline; it was only firmly established in the first half of the 20th century. One reason for that is that it is a rather counter-intuitive science, and so it requires tools that have been designed and demonstrated to work but whose inner workings most of us do not really understand. They are a bit like magic black boxes.

<Phil> H’mm. Please forgive me for sounding skeptical but that sounds like a big opportunity for making mistakes! If there are lots of these “magic black box” tools then how do you decide which one to use and how do you know you have used it correctly?

<Pete> Those are good questions! Very often we don’t know and in our collective confusion we generate a lot of unproductive discussion.  This is why we are often forced to accept the advice of experts but, I confess, very often we don’t understand what they are saying either! They seem like the medieval Magi.

<Phil> H’mm. So these experts are like ‘magicians’ – they claim to understand the inner workings of the black magic boxes but are unable, or unwilling, to explain in a language that a ‘muggle’ would understand?

<Pete> Very well put. That is just how it feels.

<Phil> So can you explain what you do understand about this magical process? That would be a start.


<Pete> OK, I will do my best.  The first thing we learn in medical research is that we need to be clear about what it is we are looking to improve, and we need to be able to measure it objectively and accurately.

<Phil> That makes sense. Let us say we want to improve the patient’s subjective quality of the A&E experience and objectively we want to reduce the time they spend in A&E. We measure how long they wait.

<Pete> The next thing is that we need to decide how much improvement we need. What would be worthwhile? So in the example you have offered we know that reducing the average time patients spend in A&E by just 30 minutes would have a significant effect on the quality of the patient and staff experience, and as a by-product it would also dramatically improve the 4-hour target performance.

<Phil> OK.  From the commissioning perspective there are lots of things we can do, such as commissioning alternative paths for specific groups of patients; in effect diverting some of the unscheduled demand away from A&E to a more appropriate service provider.  But these are the sorts of thing we have been experimenting with for years, and it brings us back to the question: How do we know that any change we implement has had the impact we intended? The system seems, well, complicated.

<Pete> In medical research we are very aware that the system we are changing is very complicated and that we do not have the power of omniscience.  We cannot know everything.  Realistically, all we can do is to focus on objective outcomes and collect small samples of the data ocean and use those in an attempt to draw conclusions we can trust. We have to design our experiment with care!

<Phil> That makes sense. Surely we just need to measure the stuff that will tell us if our impact matches our intent. That sounds easy enough. What’s the problem?

<Pete> The problem we encounter is that when we measure “stuff” we observe patient-to-patient variation, and that is before we have made any changes.  Any impact that we may have is obscured by this “noise”.

<Phil> Ah, I see.  So if our intervention generates a small impact then it will be more difficult to see amidst this background noise. Like trying to see fine detail in a fuzzy picture.

<Pete> Yes, exactly like that.  And it raises the issue of “errors”.  In medical research we talk about two different types of error; we make the first type of error when our actual impact is zero but we conclude from our data that we have made a difference; and we make the second type of error when we have made an impact but we conclude from our data that we have not.

<Phil> OK. So does that imply that the more “noise” we observe in our measure for-improvement before we make the change, the more likely we are to make one or other error?

<Pete> Precisely! So before we do the experiment we need to design it so that we reduce the probability of making both of these errors to an acceptably low level.  So that we can be assured that any conclusion we draw can be trusted.

<Phil> OK. So how exactly do you do that?

<Pete> We know that whenever there is “noise” and whenever we use samples then there will always be some risk of making one or other of the two types of error.  So we need to set a threshold for both. We have to state clearly how much confidence we need in our conclusion. For example, we often use the convention that we are willing to accept a 1 in 20 chance of making the Type I error.

<Phil> Let me check if I have heard you correctly. Suppose that, in reality, our change has no impact and we have set the risk threshold for a Type 1 error at 1 in 20, and suppose we repeat the same experiment 100 times – are you saying that we should expect about five of our experiments to show data that says our change has had the intended impact when in reality it has not?

<Pete> Yes. That is exactly it.

<Phil> OK.  But in practice we cannot repeat the experiment 100 times, so we just have to accept the 1 in 20 chance that we will make a Type 1 error, and we won’t know we have made it if we do. That feels a bit chancy. So why don’t we just set the threshold to 1 in 100 or 1 in 1000?

<Pete> We could, but doing that has a consequence.  If we reduce the risk of making a Type I error by setting our threshold lower, then we will increase the risk of making a Type II error.

<Phil> Ah! I see. The old swings-and-roundabouts problem. By the way, do these two errors have different names that would make them easier to remember and to explain?

<Pete> Yes. The Type I error is called a False Positive. It is like concluding that a patient has a specific diagnosis when in reality they do not.

<Phil> And the Type II error is called a False Negative?

<Pete> Yes.  And we want to avoid both of them, and to do that we have to specify a separate risk threshold for each error.  The convention is to call the threshold for the false positive the alpha level, and the threshold for the false negative the beta level.

<Phil> OK. So now we have three things we need to be clear on before we can do our experiment: the size of the change that we need, the risk of the false positive that we are willing to accept, and the risk of a false negative that we are willing to accept.  Is that all we need?

<Pete> In medical research we learn that we need six pieces of the experimental design jigsaw before we can proceed. We only have three pieces so far.

<Phil> What are the other three pieces then?

<Pete> We need to know the average value of the metric we are intending to improve, because that is our baseline from which improvement is measured.  Improvements are often framed as a percentage improvement over the baseline.  And we need to know the spread of the data around that average, the “noise” that we referred to earlier.

<Phil> Ah, yes!  I forgot about the noise.  But that is only five pieces of the jigsaw. What is the last piece?

<Pete> The size of the sample.

<Phil> Eh?  Can’t we just go with whatever data we can realistically get?

<Pete> Sadly, no.  The size of the sample is how we control the risk of a false negative error.  The more data we have the lower the risk. This is referred to as the power of the experimental design.

<Phil> OK. That feels familiar. I know that the more experience I have of something the better my judgement gets. Is this the same thing?

<Pete> Yes. Exactly the same thing.

<Phil> OK. So let me see if I have got this. To know if the impact of the intervention matches our intention we need to design our experiment carefully. We need all six pieces of the experimental design jigsaw and they must all fall inside our circle of control. We can measure the baseline average and spread; we can specify the impact we will accept as useful; we can specify the risks we are prepared to accept of making the false positive and false negative errors; and we can collect the required amount of data after we have made the intervention so that we can trust our conclusion.

<Pete> Perfect! That is how we are taught to design research studies so that we can trust our results, and so that others can trust them too.

<Phil> So how do we decide how big the post-implementation data sample needs to be? I can see we need to collect enough data to avoid a false negative but we have to be pragmatic too. There would appear to be little value in collecting more data than we need. It would cost more and could delay knowing the answer to our question.

<Pete> That is precisely the trap that many inexperienced medical researchers fall into. They set their sample size according to what is achievable and affordable, and then they hope for the best!

<Phil> Well, we do the same. We analyse the data we have and we hope for the best.  In the magical metaphor we are asking our data analysts to pull a white rabbit out of the hat.  It sounds rather irrational and unpredictable when described like that! Have medical researchers learned a way to avoid this trap?

<Pete> Yes, it is a tool called a power calculator.

<Phil> Ooooo … a power tool … I like the sound of that … that would be a cool tool to have in our commissioning bag of tricks. It would be like a magic wand. Do you have such a thing?

<Pete> Yes.

<Phil> And do you understand how the power tool magic works well enough to explain to a “muggle”?

<Pete> Not really. To do that means learning some rather unfamiliar language and some rather counter-intuitive concepts.

<Phil> Is that the magical stuff I hear lurks between the covers of a medical statistics textbook?

<Pete> Yes. Scary looking mathematical symbols and unfathomable spells!

<Phil> Oh dear!  Is there another way to gain a working understanding of this magic? Something a bit more pragmatic? A path that a ‘statistical muggle’ might be able to follow?

<Pete> Yes. It is called a simulator.

<Phil> You mean like a flight simulator that pilots use to learn how to control a jumbo jet before ever taking a real one out for a trip?

<Pete> Exactly like that.

<Phil> Do you have one?

<Pete> Yes. It was how I learned about this “stuff” … pragmatically.

<Phil> Can you show me?

<Pete> Of course.  But to do that we will need a bit more time, another coffee, and maybe a couple of those tasty looking Danish pastries.

<Phil> A wise investment I’d say.  I’ll get the coffee and pastries, if you fire up the engines of the simulator.
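
While Pete fires up his simulator, here is a minimal sketch of the same idea: repeat a simulated before-and-after experiment many times and count how often the data would lead us to the wrong conclusion. Every number in it (a baseline average wait of 240 minutes, patient-to-patient noise of 60 minutes, a worthwhile improvement of 30 minutes, an alpha of 0.05, and 80 patients sampled before and after) is an illustrative assumption, not a figure from the conversation.

```python
# A toy Monte Carlo "power calculator": estimate the false-positive and
# false-negative rates for one particular experimental design.
# All parameter values are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

baseline, noise_sd = 240.0, 60.0   # average wait and spread, in minutes
improvement = 30.0                 # smallest change judged worthwhile
alpha = 0.05                       # accepted false-positive risk (1 in 20)
n = 80                             # patients sampled before and after
trials = 10_000

def one_experiment(true_improvement):
    """Simulate one before-and-after study and return its p-value."""
    before = rng.normal(baseline, noise_sd, n)
    after = rng.normal(baseline - true_improvement, noise_sd, n)
    return stats.ttest_ind(before, after).pvalue

# Type I error: no real change, but the data say there is one.
false_pos = np.mean([one_experiment(0.0) < alpha for _ in range(trials)])

# Type II error: a real 30-minute improvement, but the data miss it.
false_neg = np.mean([one_experiment(improvement) >= alpha for _ in range(trials)])

print(f"False-positive rate ~ {false_pos:.2f} (should sit close to alpha)")
print(f"False-negative rate ~ {false_neg:.2f}, so power ~ {1 - false_neg:.2f}")
```

Changing the sample size and re-running shows exactly the trade-off Pete describes: more data lowers the false-negative risk, less data raises it.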

The Lost Tribe


“Jingle Bells, Jingle Bells” announced Bob’s computer as he logged into the Webex meeting with Lesley.

<Bob> Hi Lesley, in case I forget later I’d like to wish you a Happy Christmas and hope that 2017 brings you new opportunity for learning and fun.

<Lesley> Thanks Bob, and I wish you the same. And I believe the blog last week pointed to some.

<Bob> Thank you and I agree;  every niggle is an opportunity for improvement and the “Houston we have a problem!” one is a biggie.

<Lesley> So how do we start on this one? It is massive!

<Bob> The same way we do on all niggles; we diagnose the root causes first. What do you feel they might be?

<Lesley> Well, following it backwards from your niggle, the board reports are created by the data analysts, and they will produce whatever they are asked to. It must be really irritating for them to have their work rubbished!

<Bob> Are you suggesting that they understand the flaws in what they are asked to do but keep quiet?

<Lesley> I am not sure they do, but there is clearly a gap between their intent and their impact. Where would they gain the insight? Do they have access to the sort of training I am getting?

<Bob> That is a very good question, and until this week I would not have been able to answer, but an interesting report by the Health Foundation was recently published on that very topic. It is entitled “Understanding Analytical Capability In Health Care” and what it says is that there is a lost tribe of data analysts in the NHS.

<Lesley> How interesting! That certainly resonates with my experience.  All the data analysts I know seem to be hidden away behind their computers, caught in the cross-fire between the boards and the wards, and very sensibly keeping their heads down and doing what they are asked to.

<Bob> That would certainly help to explain what we are seeing! And the good news is that Martin Bardsley, the author of the paper, has interviewed many people across the system, gathered their feedback, and offered some helpful recommendations.  Here is a snippet.

[Snippet: recommendations from “Understanding Analytical Capability In Health Care”, The Health Foundation]

<Lesley> I like these recommendations, especially the “in-work training programmes” and inclusion “in general management and leadership training“. But isn’t that one of the purposes of the CHIPs training?

<Bob> It is indeed, which is why it is good to see that Martin has specifically recommended it.

[Snippet: the report’s specific recommendation of the CHIPs training]

<Lesley> Excellent! That means that my own investment in the CHIPs training has just gained in street value and that’s good for my CV. An unexpected early Xmas present. Thank you!

“Houston, we have a problem!”

The immortal words from Apollo 13 that alerted us to an evolving catastrophe …

… and that is what we are seeing in the UK health and social care system … using the thermometer of A&E 4-hour performance. England is the red line.

[Chart: A&E 4-hour performance over time; England is the red line]

The chart shows that this is not a sudden change, it has been developing over quite a long period of time … so why does it feel like an unpleasant surprise?


One reason may be that NHS England is using performance management techniques that were out of date in the 1980s and are obsolete in the 2010s!

Let me show you what I mean. This is a snapshot from the NHS England Board Minutes for November 2016.

[Snapshot: RAG-rated risk summary from the NHS England Board Minutes, November 2016]
RAG stands for Red-Amber-Green and what we want to see on a Risk Assessment is Green for the most important stuff like safety, flow, quality and affordability.

We are not seeing that.  We are seeing Red/Amber for all of them. It is an evolving catastrophe.

A risk RAG chart is an obsolete performance management tool.

Here is another snippet …

[Snippet: A&E performance summary from the NHS England board papers, November 2016]

This demonstrates the usual mix of single point aggregates for the most recent month (October 2016); an arbitrary target (4 hours) used as a threshold to decide failure/not failure; two-point comparisons (October 2016 versus October 2015); and a sprinkling of ratios. Not a single time-series chart in sight. No pictures that tell a story.

Click here for the full document (which does also include some very sensible plans to maintain hospital flow through the bank holiday period).

The risk of this way of presenting system performance data is that it is a minefield of intuitive traps for the unwary.  Invisible pitfalls that can lead to invalid conclusions, unwise decisions, potentially ineffective and/or counter-productive actions, and failure to improve. These methods are risky and that is why they should be obsolete.

And if NHSE is using obsolete tools then what hope do CCGs and Trusts have?


Much better tools have been designed.  Tools that are used by organisations that are innovative, resilient, commercially successful and that deliver safety, on-time delivery, quality and value for money. At the same time.

And those older tools are obsolete outside the NHS because, in the competitive context of the dog-eat-dog real world, organisations do not survive if they do not innovate, improve and learn as fast as their competitors.  They do not have the luxury of being shielded from reality by a central tax-funded monopoly!

And please do not misinterpret my message here; I am a 100% raving fan of the NHS ethos of “available to all and free at the point of delivery” and an NHS that is funded centrally and fairly. That is not my issue.

My issue is the continued use of obsolete performance management tools in the NHS.


Q: So what are the alternatives? What do the successful commercial organisations use instead?

A: System behaviour charts.

SBCs are pictures of how the system is behaving over time – pictures that tell a story – pictures that have meaning – pictures that we can use to diagnose, design and deliver a better outcome than the one we are heading towards.

Pictures like the A&E performance-over-time chart above.

Click here for more on how and why.
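
As a taster, here is a minimal sketch of one common form of system behaviour chart, an XmR (individuals) chart: the metric plotted period by period with its mean and natural process limits, so that genuine signals stand out from routine noise. The monthly series below is synthetic, for illustration only; in practice it would be the published A&E 4-hour performance figures.

```python
# Sketch: a system behaviour chart (XmR individuals chart) for a
# monthly performance metric.  Synthetic data for illustration only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=7)
months = np.arange(1, 37)
performance = 95 - 0.15 * months + rng.normal(0, 0.8, months.size)  # % in 4 hrs

mean = performance.mean()
moving_range = np.abs(np.diff(performance))
sigma_hat = moving_range.mean() / 1.128        # XmR estimate of common-cause spread
upper, lower = mean + 3 * sigma_hat, mean - 3 * sigma_hat

plt.figure(figsize=(10, 4))
plt.plot(months, performance, marker="o")
plt.axhline(mean, linestyle="--", label="mean")
plt.axhline(upper, linestyle=":", label="natural process limits")
plt.axhline(lower, linestyle=":")
plt.xlabel("Month")
plt.ylabel("% of A&E attendances within 4 hours")
plt.title("System behaviour chart (XmR)")
plt.legend()
plt.show()
```

Points outside the limits, or sustained runs and slopes within them, are the signals that warrant diagnosis; single-month comparisons against a target are not.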


Therefore, if the DoH, NHSE, NHSI, STPs, CCGs and Trust Boards want to achieve their stated visions and missions then the writing-on-the-wall says that they will need to muster some humility and learn how successful organisations do this.

This is not a comfortable message to hear and it is easier to be defensive than receptive.

The NHS has to change if it wants to survive and continue to serve the people who pay the salaries. And time is running out. Continuing as we are is not an option. Complaining and blaming are not options. Doing nothing is not an option.

Learning is the only option.

Anyone can learn to use system behaviour charts.  No one needs to rely on averages, two-point comparisons, ratios, targets, and the combination of failure-metrics and us-versus-them-benchmarking that leads to the chronic mediocrity trap.

And there is hope for those with enough hunger and humility, who are prepared to do the hard work of developing their personal, team, department and organisational capability to use better management methods.


Apollo 13 is a true story.  The catastrophe was averted.  The astronauts were brought home safely.  The film retells the story of how that miracle was achieved. Perhaps watching the whole film would be somewhere to start, because it holds many valuable lessons for us all – lessons on how effective teams behave.

Hungry, Hardworking, Humble

This week I read a new book by one of my favourite authors – Patrick Lencioni.

The book is The Ideal Team Player.

Patrick’s books are written as stories which makes them very accessible and easily memorable.  And each one captures a priceless pearl of wisdom.

Improving a complex adaptive system such as health care can only be done by the people in the system working together and sharing expectations, experiences, knowledge, understanding and wisdom.

So each person needs to understand what it is to be able to contribute effectively to a team – because teams are how complex systems are designed and how they are improved.


Patrick identifies three “virtues” – and he uses that term appropriately.

Hungry … which means having a burning ambition.  Something needed and wanted. An unsatisfied longing. A vision. A mission. A goal. A pull. A purpose.

Hardworking … which means a willingness to do what is needed to satisfy the hunger. Going that extra mile. Reading that extra book. Solving that extra problem. Giving that extra bit of feedback. Doing that extra job that no one else wants to do. Investing in the future.

Humble … which means that Ego is not running the show.  Confidence is linked to competence. Impact and intent are aligned. The mind is open to learning. The eyes are open to seeing. The ears are open to listening. And the mouth is only open for asking questions and telling stories.


The three virtues are necessary and sufficient; they are effective and efficient.

So if any one of them is missing, the outcome is not achievable.

Time to pick up the mirror and look deeply into it … and ask:

“Am I hungry enough?”
“Am I prepared to commit my lifetime?”
“Am I open to learning from reality and from others?”

Our tangible record of past behaviour provides us with our answers.

It is time to dig deep and ask the question: am I hungry, hardworking and humble?

Pride and Joy

Have you heard the phrase “Pride comes before a fall”?

What does this mean? That the feeling of pride is the reason for the subsequent fall?

So by following that causal logic, if we do not allow ourselves to feel proud then we can avoid the fall?

And none of us like the feeling of falling and failing. We are fearful of that negative feeling, so with this simple trick we can avoid feeling bad. Yes?

But we all know the positive feeling of achievement – we feel pride when we have done good work, when our impact matches our intent.  Pride in our work.

Is that bad too?

Should we accept under-achievement and unexceptional mediocrity as the inevitable cost of avoiding the pain of possible failure?  Is that what we are being told to do here?


The phrase comes from the Bible, from the Book of Proverbs 16:18 to be precise.

[Image: Proverbs 16:18]

And the problem here is that the phrase “pride comes before a fall” is not the whole proverb.

It has been simplified. Some bits have been omitted. And those omissions lead to ambiguity and the opportunity for obfuscation and re-interpretation.

[Image: Proverbs 16:18 in the New International Version – “Pride goes before destruction, a haughty spirit before a fall.”]
In the fuller New International Version we see a missing bit … the “haughty spirit” bit.  That is another way of saying “over-confident” or “arrogant”.


But even this “authorised” version is still ambiguous and more questions spring to mind:

Q1. What sort of pride are we referring to? Just the confidence version? What about the pride that follows achievement?

Q2. How would we know if our feeling of confidence is actually justified?

Q3. Does a feeling of confidence always precede a fall? Is that how we diagnose over-confidence? Retrospectively? Are there instances when we feel confident but we do not fail? Are there instances when we do not feel confident and then fail?

Q4. Does confidence cause the fall or it is just a temporal association? Is there something more fundamental that causes both high-confidence and low-competence?


There is a well known model called the Conscious-Competence model of learning which generates a sequence of four stages to achieving a new skill. Such as one we need to achieve our intended outcomes.

We all start in the “blissful ignorance” zone of unconscious incompetence.  Our unknowns are unknown to us.  They are blind spots.  So we feel unjustifiably confident.

[Diagram: the four-stage hierarchy of competence]

In this model the first barrier to progress is “wrong intuition” which means that we actually have unconscious assumptions that are distorting our perception of reality.

What we perceive makes sense to us. It is clear and obvious. We feel confident. We believe our own rhetoric.

But our unconscious assumptions can trick us into interpreting information incorrectly.  And if we derive decisions from unverified assumptions and invalid analysis then we may do the wrong thing and not achieve our intended outcome.  We may unintentionally cause ourselves to fail and not be aware of it.  But we are proud and confident.

Then the gap between our intent and our impact becomes visible to all and painful to us. So we are tempted to avoid the social pain of public failure by retreating behind the “Yes, But” smokescreen of defensive reasoning. The “doom loop” as it is sometimes called. The Victim Vortex. “Don’t name, shame and blame me, I was doing my best. I did not intend that to happen. To err is human”.


The good news is that this learning model also signposts a possible way out; a door in the black curtain of ignorance.  It suggests that we can learn how to correct our analysis by using feedback from reality to verify our rhetorical assumptions.  Those assumptions which pass the “reality check” we keep, those which fail the “reality check” we redesign and retest until they pass.  Bit by bit our inner rhetoric comes to more closely match reality and the wisdom of our decisions will improve.

And what we then see is improvement.  Our impact moves closer towards our intent. And we can justifiably feel proud of that achievement. We do not need to be best-compared-with-the-rest; just being better-than-we-were-before is OK. That is learning.

[Chart: the Learning Curve]

And this is how it feels … this is the Learning Curve … or the Nerve Curve as we call it.

What it says is that to be able to assess confidence we must also measure competence. Outcomes. Impact.

And to achieve excellence we have to be prepared to actively look for any gap between intent and impact.  And we have to be prepared to see it as an opportunity rather than as a threat. And we will need to be able to seek feedback and other people’s perspectives. And we need to be open to asking for examples and explanations from those who have demonstrated competence.

It says that confidence is not a trustworthy surrogate for competence.

It says that we want the confidence that flows from competence because that is the foundation of trust.

Improvement flows at the speed of trust and seeing competence, confidence and trust growing is a joyous thing.

Pride and Joy are OK.

“Arrogance and incompetence come before a fall” would be a better proverb.

Focus

The theme of the week has been “focus” and by that I mean the amazing ability of the human mind to concentrate on one thing to the exclusion of almost all else.

To illustrate what I mean, just reflect on what happens when we watch a television program.  We do not see the TV screen, controls, or the “stuff” around it.  Or to be more precise … we do see it but we do not perceive it.

Even our Mark I Eyeballs have evolved to “focus” and I do not mean just the clear bits that create a sharp image on the light-sensitive layer at the back (the retina).

Our retinas are not like a video camera … not at all … they have a very high resolution bit at the center which is quite small, and a rather low resolution bit that surrounds it and that is much bigger.

But we do not perceive that … because we have some very advanced data processing wetware … and the process actually starts in the retina.


And our eyes are always moving … just observe someone else’s eyes when they are looking at a picture or reading a book.  If the cameras in a TV studio did that we would complain!

So what is happening here?

The answer is that our advanced data processing wetware is scanning, but not in the way that a radar scans … in a mindless cycle.  Our eye scanning has purpose … it is driven by the mental model inside our heads that is looking for information, and the search is based on what we already believe and perceive.


Psychologists have studied this using cool technology that tracks the eye position and works out what the person is looking at.  And what they found was surprising.

If we are presented with a picture of a face we will scan it in a very consistent way.  We look at the nose first and then we look at the eyes and mouth, and we pattern-match to answer the question “Do I recognize this person?“

If we do then we can draw on past memories of them to help inform our interpretation of what we see.  If we do not then we need to keep watching and learning.  We need an answer to the question “Is this person an opportunity or a threat?


And it is a very fast process, and it happens out of awareness, and it is hard-wired and it is automatic.

After initial recognition we will focus on the eyes and mouth because, as the Greeks said, “the eyes are the window to the soul“.  We need to infer what the other person is thinking … unconsciously.


And the good news is that this amazing ability to focus is not completely automatic … it can be directed … rather like a radio can be tuned to specific frequency.

And when we learn how to do that as individuals the effect is surprising.

And when we learn how to do that as a group, in synergy, the effect is amazing!

Defensive Reasoning


About 25 years ago a paper was published in the Harvard Business Review with the interesting title of “Teaching Smart People How To Learn”.

The uncomfortable message was that many people who are top of the intellectual rankings are actually very poor learners.

This sounds like a paradox.  How can people be high-achievers and yet be unable to learn?


Health care systems are stuffed full of super-smart, high-achieving professionals. The cream of the educational crop. The top 2%. They are called “doctors”.

And we have a problem with improvement in health care … a big problem … the safety, delivery, quality and affordability of the NHS is getting worse. Not better.

Improvement implies change and change implies learning, so if smart people struggle to learn then could that explain why health care systems find self-improvement so difficult?

This paragraph from the 1991 HBR paper feels uncomfortably familiar:

[Excerpt from the 1991 HBR paper]

The author, Chris Argyris, refers to something called “single-loop learning” and if we translate this management-speak into the language of medicine it would come out as “treating the symptom and ignoring the disease“.  That is poor medicine.

Chris also suggests an antidote to this problem and gave it the label “double-loop learning” which if translated into medical speak becomes “diagnosis“.  And that is something that doctors can relate to because without a diagnosis, a justifiable treatment is difficult to formulate.


We need to diagnose the root cause(s) of the NHS disease.


The 1991 HBR paper refers back to an earlier 1977 HBR paper called Double Loop Learning in Organisations where we find the theory that underpins it.

The proposed hypothesis is that we all have cognitive models that we use to decide our actions (and in-actions), what I have referred to before as ChimpWare.  In it is a reference to a table published in a 1974 book, and the message is that single-loop learning is a manifestation of a Model I theory-in-use.

[Table: Model I and Model II theories-in-use, from the 1974 book]


And if we consider the task that doctors are expected to do then we can empathize with their dominant Model 1 approach.  Health care is a dangerous business.  Doctors can cause a lot of unintentional harm – both physical and psychological.  Doctors are dealing with a very, very complex system – a human body – that they only partially understand.  No two patients are exactly the same and illness is a dynamic process.  Everyone’s expectations are high. We have come a long way since the days of blood-letting and leeches!  Failure is not tolerated.

Doctors are intelligent and competitive … they had to be to win the education race.

Doctors must make tough decisions and have to have tough conversations … many, many times … and yet not be consumed in the process.  They often have to suppress emotions to be effective.

Doctors feel the need to protect patients from harm – both physical and emotional.

And collectively they do a very good job.  Doctors are respected and trusted professionals.


But …  to quote Chris Argyris …

“Model I blinds people to their weaknesses. For instance, the six corporate presidents were unable to realize how incapable they were of questioning their assumptions and breaking through to fresh understanding. They were under the illusion that they could learn, when in reality they just kept running around the same track.”

This blindness is self-reinforcing because …

“All parties withheld information that was potentially threatening to themselves or to others, and the act of cover-up itself was closed to discussion.”


How many times have we seen this in the NHS?

The Mid-Staffordshire Hospital debacle that led to the Francis Report is all the evidence we need.


So what is the way out of this double-bind?

Chris gives us some hints with his Model II theory-in-use.

  1. Valid information – Study.
  2. Free and informed choice – Plan.
  3. Constant monitoring of the implementation – Do.

The skill required is to question assumptions and break through to fresh understanding, and we can do that with a design-led approach because that is what designers do.

They bring their unconscious assumptions up to awareness and ask “Is that valid?” and “What if” questions.

It is called Improvement-by-Design.

And the good news is that this Model II approach works in health care, and we know that because the evidence is accumulating.

 

Value, Verify and Validate

Many of the challenges that we face in delivering effective and affordable health care do not have well understood and generally accepted solutions.

If they did there would be no discussion or debate about what to do and the results would speak for themselves.

This lack of understanding is leading us to try to solve a complicated system design challenge in our heads.  Intuitively.

And trying to do it this way is fraught with frustration and risk because our intuition tricks us. It was this sort of challenge that led Professor Rubik to invent his famous 3D Magic Cube puzzle.

It is difficult enough to learn how to solve the Magic Cube puzzle by trial and error; it is even more difficult to attempt to do it inside our heads! Intuitively.


And we know the Rubik Cube puzzle is solvable, so all we need are some techniques, tools and training to improve our Rubik Cube solving capability.  We can all learn how to do it.


Let us return to the challenge of safe and affordable health care, and to the specific problem of unscheduled care: A&E targets, delayed transfers of care (DTOC), finance, fragmentation and chronic frustration.

This is a systems engineering challenge so we need some systems engineering techniques, tools and training before attempting it.  Not after failing repeatedly.

[Figure: the systems engineering Vee Diagram]

One technique that a systems engineer will use is called a Vee Diagram such as the one shown above.  It shows the sequence of steps in the generic problem solving process and it has the same sequence that we use in medicine for solving problems that patients present to us …

Diagnose, Design and Deliver

which is also known as …

Study, Plan, Do.


Notice that there are three words in the diagram that start with the letter V … value, verify and validate.  These are probably the three most important words in the vocabulary of a systems engineer.


One tool that a systems engineer always uses is a model of the system under consideration.

Models come in many forms from conceptual to physical and are used in two main ways:

  1. To assist the understanding of the past (diagnosis)
  2. To predict the behaviour in the future (prognosis)

And the process of creating a system model, the sequence of steps, is shown in the Vee Diagram.  The systems engineer’s objective is a validated model that can be trusted to make good-enough predictions; ones that support making wiser decisions of which design options to implement, and which not to.


So if a systems engineer presents us with a conceptual model that is intended to assist our understanding, then we will require some evidence that all stages of the Vee Diagram process have been completed.  Evidence that provides assurance that the model predictions can be trusted.  And the scope over which they can be trusted.


Last month a report was published by the Nuffield Trust that is entitled “Understanding patient flow in hospitals”  and it asserts that traffic flow on a motorway is a valid conceptual model of patient flow through a hospital.  Here is a direct quote from the second paragraph in the Executive Summary:

[Excerpt from the Executive Summary of the Nuffield Trust report “Understanding patient flow in hospitals”]
Unfortunately, no evidence is provided in the report to support the validity of the statement and that omission should ring an alarm bell.

The observation that “the hospitals with the least free space struggle the most” is not a validation of the conceptual model.  Validation requires a concrete experiment.


To illustrate why observation is not validation let us consider a scenario where I have a headache and I take a paracetamol and my headache goes away.  I now have some evidence that shows a temporal association between what I did (take paracetamol) and what I got (a reduction in head pain).

But this is not a valid experiment because I have not considered the other seven possible combinations of headache before (Y/N), paracetamol (Y/N) and headache after (Y/N).
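
To make that counting explicit, here is a minimal sketch (Python, purely illustrative) that enumerates all eight before/treatment/after combinations; a single observed combination, like the one above, cannot support a causal claim.

```python
from itertools import product

# Enumerate every combination of: headache before (Y/N),
# took paracetamol (Y/N), headache after (Y/N).
for before, took, after in product("YN", repeat=3):
    print(f"headache before={before}  paracetamol={took}  headache after={after}")

# 2 x 2 x 2 = 8 combinations in total; the anecdote above observed just one (Y, Y, N).
```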

An association cannot be used to prove causation; not even a temporal association.

When I do not understand the cause, and I am without evidence from a well-designed experiment, then I might be tempted to intuitively jump to the (invalid) conclusion that “headaches are caused by lack of paracetamol!” and if untested this invalid judgement may persist and even become a belief.


Understanding causality requires an approach called counterfactual analysis; otherwise known as “What if?”  We can start that process with a thought experiment using our rhetorical model, but we must remember to always validate the outcome with a real experiment.  That is how good science works.

A famous thought experiment was conducted by Albert Einstein when he asked the question “If I were sitting on a light beam and moving at the speed of light what would I see?” This question led him to the Theory of Relativity which completely changed the way we now think about space and time.  Einstein’s model has been repeatedly validated by careful experiment, and has allowed engineers to design and deliver valuable tools such as the Global Positioning System which uses relativity theory to achieve high positional precision and accuracy.


So let us conduct a thought experiment to explore the ‘faster movement requires more space‘ statement in the case of patient flow in a hospital.

First, we need to define what we mean by the words we are using.

The phrase ‘faster movement’ is ambiguous.  Does it mean higher flow (more patients per day being admitted and discharged) or does it mean shorter length of stay (the interval between the admission and discharge events for individual patients)?

The phrase ‘more space’ is also ambiguous. In a hospital that implies physical space i.e. floor-space that may be occupied by corridors, chairs, cubicles, trolleys, and beds.  So are we actually referring to flow-space or storage-space?

What we have in this over-simplified statement is the conflation of two concepts: flow-capacity and space-capacity. They are different things. They have different units. And the result of conflating them is meaningless and confusing.


However, our stated goal is to improve understanding, so let us consider one combination, and let us be careful to be more precise with our terminology: “higher flow always requires more beds”.  Does it?  Can we disprove this assertion with an example where higher flow required fewer beds (i.e. less space-capacity)?

The relationship between flow and space-capacity is well understood.

The starting point is Little’s Law which was proven mathematically in 1961 by J.D.C. Little and it states:

Average work in progress = Average lead time  X  Average flow.

In the hospital context, work in progress is the number of occupied beds, lead time is the length of stay and flow is admissions or discharges per time interval (which must be the same on average over a long period of time).

(NB. Engineers are rather pedantic about units so let us check that this makes sense: the unit of WIP is ‘patients’, the unit of lead time is ‘days’, and the unit of flow is ‘patients per day’ so ‘patients’ = ‘days’ * ‘patients / day’. Correct. Verified. Tick.)

So, is there a situation where flow can increase and WIP can decrease? Yes: when the lead time decreases by proportionally more than the flow increases. Little’s Law says that is possible. We have disproved the assertion.


Let us take the other interpretation of ‘faster movement’, i.e. shorter length of stay: does shorter length of stay always require more beds?  No. If flow remains the same then Little’s Law states that we will require fewer beds. This assertion is disproved as well.
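
Here is a minimal numerical sketch of both disproofs (Python, with invented illustrative numbers, not data from any real hospital), using Little’s Law directly:

```python
def beds_needed(avg_length_of_stay_days: float, avg_flow_per_day: float) -> float:
    """Little's Law: average work in progress = average lead time x average flow."""
    return avg_length_of_stay_days * avg_flow_per_day

# Baseline: 50 admissions per day, 8 day average stay -> 400 occupied beds on average.
print(beds_needed(8.0, 50.0))  # 400.0

# Higher flow AND fewer beds: flow rises to 60/day while stay falls to 6 days.
print(beds_needed(6.0, 60.0))  # 360.0 -> disproves "higher flow always requires more beds"

# Shorter stay, same flow: 50/day with a 6 day stay.
print(beds_needed(6.0, 50.0))  # 300.0 -> disproves "shorter stay always requires more beds"
```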

And we need to remember that Little’s Law is proven to be valid for averages.  Does that shed any light on the source of our confusion?  Could the assertion about flow and beds actually be about the variation in flow over time and not about the average flow?


And this is also well understood. The original work on it was done almost exactly 100 years ago by Agner Krarup Erlang and the problem he looked at was the quality of customer service of the early telephone exchanges. Specifically, how likely was the caller to get the “all lines are busy, please try later” response.

What Erlang showed was that there is a mathematical relationship between the number of calls being made (the demand), the probability of a call being connected first time (the service quality) and the number of telephone circuits and switchboard operators available (the service cost).


So it appears that we already have a validated mathematical model that links flow, quality and cost that we might use if we substitute ‘patients’ for ‘calls’, ‘beds’ for ‘telephone circuits’, and ‘being connected’ for ‘being admitted’.
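
To illustrate what that substitution might look like, here is a minimal sketch (Python) of the classic Erlang B recursion, which estimates the probability that an arrival finds every ‘circuit’ (here, bed) occupied.  The numbers are purely illustrative and are not a calibrated model of any real hospital.

```python
def erlang_b(offered_load_erlangs: float, servers: int) -> float:
    """Erlang B blocking probability via the standard stable recursion.
    offered_load_erlangs = arrival rate x average service time
    (e.g. admissions per day x average length of stay in days).
    servers = number of 'telephone circuits' ... here, beds."""
    blocking = 1.0
    for m in range(1, servers + 1):
        blocking = (offered_load_erlangs * blocking) / (m + offered_load_erlangs * blocking)
    return blocking

# Illustrative only: 50 admissions/day x 8 day stay = 400 Erlangs of offered load.
for beds in (400, 420, 440):
    print(beds, "beds ->", round(erlang_b(400.0, beds), 3), "probability of 'no bed free'")
```

The point of the sketch is the shape of the relationship: in this toy model the probability of “no bed free” falls steeply as spare capacity is added beyond the offered load, which is exactly the flow-quality-cost trade-off that Erlang described.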

And this topic of patient flow, A&E performance and Erlang queues has been explored already … here.

So a telephone exchange is a more valid model of a hospital than a motorway.

We are now making progress in deepening our understanding.


The use of an invalid, untested, conceptual model is sloppy systems engineering.

So if the engineering is sloppy we would be unwise to fully trust the conclusions.

And I share this feedback in the spirit of black box thinking because I believe that there are some valuable lessons to be learned here – by us all.



Courage and Constancy of Purpose

This week I witnessed an act of courage by someone prepared to take the health care bull by the horns.

On 25th October 2016 a landmark review was published about the integrated health and social care system in Northern Ireland.

It is not a comfortable read.

And the act of courage was the simultaneous publication of the document “Health and Well-being 2026” by Michelle O’Neill, the new Minister of Health.

The full document can be downloaded here.


It is courageous because it says, bluntly, that there is a burning platform, the level of service is not acceptable, doing nothing is not an option, and nothing short of a system-wide redesign will be required.

It is courageous because it sets a clear vision, a burning ambition, and is very clear that this will not be a quick fix. It is a ten year plan.

That implies a constancy of purpose will need to be maintained for at least a decade.


And it is courageous because it says that:

we will have to learn how to do this

Here is one paragraph that says that:

“Developing the science of improvement can be done at the same time as making improvements”

and

“We need an infrastructure that makes this possible.”


The good news is that this science of improvement in health care is already well advanced, and it will need to advance further, because a whole health and social care system transformation-by-design is a challenge of some magnitude.

A health and social care system engineering (HSCSE) challenge.


One component of the ten year plan is to develop this capability through a process called co-production.

Notice that the focus is on pro-actively preventing illness, not just re-actively managing it.

Notice that the design is centered on both the customer and the supplier, not just on the supplier.

And notice that the population served are also expected to be equal partners in the transformation-by-design process.


Courage, constancy of purpose and capability development  … a very welcome breath of fresh air!



ChimpWare

One of the recurring themes in this narrative is the realisation that we are all subject to the emergent effects of millions of years of adaptive evolution.

We all know about genes, the chemical code called DNA, that holds the instructions for building a person.

We are less aware of memes, the cultural code that holds the instructions for building a society.

And we are even less aware of the complex interaction between genes and memes.


One of the emergent properties of this gene/meme interaction is our ability to use symbolic language and causal reasoning.

But this amazing ability only developed recently, in the last few million years, and that means evolution has not had time to finish the job.  So we are left with prototype hardware and software.


The prototype hardware is called ChimpWare and it is the 1.3 kg of wetware between our ears.  On the surface it looks a bit like the wetware of other animals, but appearances are deceptive.

Our ChimpWare is a multi-level-parallel-multi-processor! Amazing engineering.

And rather than evolving a completely new design (which is rather difficult for a reactive evolutionary process), we have evolved newer prototypes that sit on top of the older wetware.

This build-on-the-old-foundations approach has a downside … the newer parts and the older parts need to talk to each other and they use different languages.

Different software.


The newer part uses sequential, causal logic and communicates using symbolic language.  The older part uses parallel, associative logic and communicates using emotions.  Thinking and feeling.  Rational and irrational.

The software is ChimpOS 1.0 and we are not going to get an update … because it too is a work-in-progress.


When we are forced by circumstance to grapple with the challenge of improving a complex adaptive system such as health care, we have no choice but to use ChimpWare and ChimpOS both individually and collectively.  And it is not well designed for this job.

So we make mistakes, and we are often not aware of the errors we are making.  All we become aware of is the gap between our intent and our impact.

Our intuition deceives us, which also implies that some concepts that are valid and useful feel counter-intuitive.  So we discount them.

The Invisible Gorilla” is well worth a read because it describes many of the illusions that our ChimpWare and ChimpOS generate.

Illusions such as the illusion of attention,  the illusion of memory,  the illusion of confidence,  the illusion of knowledge, and the illusion of cause.


But with a conscious insight into the limitations of the legacy of evolution, we can actually learn to avoid many of the pitfalls, and to develop our individual and collective capability for improving the complex adaptive systems that we live in.

For the benefit of everyone and everything.

In fact, our long term survival depends on it – both collectively and individually.

So doing nothing is not an option.



Patient Traffic Engineering

[Beep] Bob’s computer alerted him to Leslie signing on to the Webex session.

<Bob> Good afternoon Leslie, how are you? It seems a long time since we last chatted.

<Leslie> Hi Bob. I am well and it has been a long time. If you remember, I had to loop out of the Health Care Systems Engineering training because I changed job, and it has taken me a while to bring a lot of fresh skeptics around to the idea of improvement-by-design.

<Bob> Good to hear, and I assume you did that by demonstrating what was possible by doing it, delivering results, and describing the approach.

<Leslie> Yup. And as you know, even with objective evidence of improvement it can take a while because that exposes another gap, the one between intent and impact.  Many people get rather defensive at that point, so I have had to take it slowly. Some people get really fired up though.

 <Bob> Yes. Respect, challenge, patience and persistence are all needed. So, where shall we pick up?

<Leslie> The old chestnut of winter pressures and A&E targets.  Except that it is an all-year problem now and according to what I read in the news, everyone is predicting a ‘melt-down’.

<Bob> Did you see last week’s IS blog on that very topic?

<Leslie> Yes, I did!  And that is what prompted me to contact you and to re-start my CHIPs coaching.  It was a real eye opener.  I liked the black swan code-named “RC9” story, it makes it sound like a James Bond film!

<Bob> I wonder how many people dug deeper into how “RC9” achieved that rock-steady A&E performance despite a rising tide of arrivals and admissions?

<Leslie> I did, and I saw several examples of anti-carve-out design.  I have read through my notes and we have talked about carve out many times.

<Bob> Excellent. Being able to see the signs of competent design is just as important as the symptoms of inept design. So, what shall we talk about?

<Leslie> Well, by co-incidence I was sent a copy of a report entitled “Understanding patient flow in hospitals” published by one of the leading Think Tanks and I confess it made no sense to me.  Can we talk about that?

<Bob> OK. Can you describe the essence of the report for me?

<Leslie> Well, in a nutshell it said that flow needs space so if we want hospitals to flow better we need more space, in other words more beds.

<Bob> And what evidence was presented to support that hypothesis?

<Leslie> The authors equated the flow of patients through a hospital to the flow of traffic on a motorway. They presented a table of numbers that made no sense to me, I think partly because there are no units stated for some of the numbers … I’ll email you a picture.

[Table: traffic flow dynamics, reproduced from the report]

<Bob> I agree this is not a very informative table.  I am not sure what the definition of “capacity” is here, and it may be that the authors are equating “hospital bed” with “area of tarmac”.  Anyway, the assertion that hospital flow is equivalent to motorway flow is inaccurate.  There are some similarities, and traffic engineering is an interesting subject, but they are not equivalent.  A hospital is more like a busy city with junctions, cross-roads, traffic lights, roundabouts, zebra crossings, pelican crossings and all manner of unpredictable factors such as cyclists and pedestrians. Motorways are intentionally designed without these “impediments”, for obvious reasons! A complex adaptive flow system like a hospital cannot be equated to a motorway. It is a dangerous over-simplification.

<Leslie> So, if the hospital-motorway analogy is invalid then the conclusions are also invalid?

<Bob> Sometimes, by accident, we get a valid conclusion from an invalid method. What were the conclusions?

<Leslie> That the solution to improving A&E performance is more space (i.e. hospital beds) but there is no more money to build them or people to staff them.  So the recommendations are to reduce volume, redesign rehabilitation and discharge processes, and improve IT systems.

<Bob> So just re-iterating the habitual exhortations and nothing about using well-understood systems engineering methods to accurately diagnose the actual root cause of the ‘symptoms’, which is likely to be the endemic carveoutosis multiforme, and then treat accordingly?

<Leslie> No. I could not find the term “carve out” anywhere in the document.

<Bob> Oh dear.  Based on that observation, I do not believe this latest Think Tank report is going to be any more effective than the previous ones.  Perhaps asking “RC9” to write an account of what they did and how they learned to do it would be more informative?  They did not reduce volume, and I doubt they opened more beds, and their annual report suggests they identified some space and flow carveoutosis and treated it. That is what a competent systems engineer would do.

<Leslie> Thanks Bob. Very helpful as always. What is my next step?

<Bob> Some ISP-2 brain-teasers, a juicy ISP-2 project, and some one day training workshops for your all-fired-up CHIPs.

<Leslie> Bring it on!



Outliers

An effective way to improve is to learn from others who have demonstrated the capability to achieve what we seek.  To learn from success.

Another effective way to improve is to learn from those who are not succeeding … to learn from failures … and that means … to learn from our own failings.

But from an early age we are socially programmed with a fear of failure.

The training starts at school where failure is not tolerated, nor is challenging the given dogma.  Paradoxically, the effect of our fear of failure is that our ability to inquire, experiment, learn, adapt, and to be resilient to change is severely impaired!

So further failure in the future becomes more likely, not less likely. Oops!


Fortunately, we can develop a healthier attitude to failure and we can learn how to harness the gap between intent and impact as a source of energy, creativity, innovation, experimentation, learning, improvement and growing success.

And health care provides us with ample opportunities to explore this unfamiliar terrain. The creative domain of the designer and engineer.


The scatter plot below is a snapshot of the A&E 4 hr target yield for all NHS Trusts in England for the month of July 2016.  The “constitutional” performance requirement is better than 95%.  The delivered whole-system average is 85%.  The majority of Trusts are failing, and the Trust-to-Trust variation is rather wide. Oops!

This stark picture of the gap between intent (95%) and impact (85%) prompts some uncomfortable questions:

Q1: How can one Trust achieve 98% and yet another can do no better than 64%?

Q2: What can all Trusts learn from these high and low flying outliers?

[NB. I have not asked the question “Who should we blame for the failures?” because the name-shame-blame-game is also a predictable consequence of our fear-of-failure mindset.]
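
For those who want to reproduce this kind of snapshot, here is a minimal sketch (Python) of how a monthly 4 hr yield might be computed and compared with the standard.  The trust names and counts are hypothetical, chosen only to echo the kind of spread described above.

```python
# Hypothetical monthly counts per trust (not the real July 2016 data).
trusts = {
    "Trust A": {"attendances": 12000, "within_4hrs": 11760},
    "Trust B": {"attendances": 15000, "within_4hrs": 12750},
    "Trust C": {"attendances":  9000, "within_4hrs":  5760},
}

TARGET = 0.95  # the "constitutional" standard

for name, counts in trusts.items():
    yield_4hr = counts["within_4hrs"] / counts["attendances"]
    flag = "meets the target" if yield_4hr >= TARGET else "fails the target"
    print(f"{name}: {yield_4hr:.0%} ({flag})")
```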


Let us dig a bit deeper into the information mine, and as we do that we need to be aware of a trap:

A snapshot-in-time tells us very little about how the system and the set of interconnected parts is behaving-over-time.

We need to examine the time-series charts of the outliers, just as we would ask for the temperature, blood pressure and heart rate charts of our patients.

Here are the monthly A&E 4 hr charts for the last six years for a sample of the high-fliers. They are all slightly different, and we get the impression that the lower two are struggling to stay aloft more than the upper two … especially in winter.


And here are the monthly A&E 4 hr charts for the last six years for a sample of the low-fliers.  The Mark I Eyeball Test results are clear … these swans are falling out of the sky!


So we need to generate some testable hypotheses to explain these visible differences, and then we need to examine the available evidence to test them.

One hypothesis is “rising demand”.  It says that “the reason our A&E is failing is because demand on A&E is rising“.

Another hypothesis is “slow flow”.  It says that “the reason our A&E is failing is because of the slow flow through the hospital because of delayed transfers of care (DTOCs)“.

So, if these hypotheses account for the behaviour we are observing then we would predict that the “high fliers” are (a) diverting A&E arrivals elsewhere, and (b) reducing admissions to free up beds to hold the DTOCs.

Let us look at the freely available data for the highest flyer … the green dot on the scatter gram … code-named “RC9”.

The top chart is the A&E arrivals per month.

The middle chart is the A&E 4 hr target yield per month.

The bottom chart is the emergency admissions per month.

Both arrivals and admissions are increasing, while the A&E 4 hr target yield is rock steady!

And arranging the charts this way allows us to see the temporal patterns more easily (and the images are deliberately arranged to show the overall pattern-over-time).

Patterns like the change-for-the-better that appears in the middle of the winter of 2013 (i.e. when many other trusts were complaining that their sagging A&E performance was caused by “winter pressures”).

The objective evidence seems to disprove the “rising demand”, “slow flow” and “winter pressure” hypotheses!

So what can we learn from our failure to adequately explain the reality we are seeing?


The trust code-named “RC9” is Luton and Dunstable, and it is an average district general hospital, on the surface.  So to reveal some clues about what actually happened there, we need to read their Annual Report for 2013-14.  It is a public document and it can be downloaded here.

This is just a snippet …

… and there are lots more knowledge nuggets like this in there …

… it is a treasure trove of well-known examples of good system flow design.

The results speak for themselves!


Q: How many black swans does it take to disprove the hypothesis that “all swans are white”?

A: Just one.

“RC9” is a black swan. An outlier. A positive deviant. “RC9” has disproved the “impossibility” hypothesis.

And there is another flock of black swans living in the North East … in the Newcastle area … so the “Big cities are different” hypothesis does not hold water either.


The challenge here is a human one.  A human factor.  Our learned fear of failure.

Learning-how-to-fail is the way to avoid failing-how-to-learn.

And to read more about that radical idea I strongly recommend reading the recently published book called Black Box Thinking by Matthew Syed.

It starts with a powerful story about the impact of human factors in health care … and here is a short video of Martin Bromiley describing what happened.

The “black box” that both Martin and Matthew refer to is the one that is used in air accident investigations to learn from what happened, and to use that learning to design safer aviation systems.

Martin Bromiley has founded a charity to support the promotion of human factors in clinical training, the Clinical Human Factors Group.

So if we can muster the courage and humility to learn how to do this in health care for patient safety, then we can also learn how to do it for flow, quality and productivity.

Our black swan called “RC9” has demonstrated that this goal is attainable.

And the body of knowledge needed to do this already exists … it is called Health and Social Care Systems Engineering (HSCSE).




Postscript: And I am pleased to share that Luton & Dunstable features in the House of Commons Health Committee report entitled Winter Pressures in A&E Departments that was published on 3rd Nov 2016.

Here is part of what L&D shared to explain their deviant performance:

[Excerpt: the points shared by Luton & Dunstable, from the Health Committee report]

These points describe rather well the essential elements of a pull design, which is the antidote to the rather more prevalent pressure cooker design.

The Cream of the Crap Trap

It has been a busy week.

And a common theme has cropped up which I have attempted to capture in the diagram below.

It relates to how the NHS measures itself and how it “drives” improvement.

The measures are called “failure metrics” – mortality, infections, pressure sores, waiting time breaches, falls, complaints, budget overspends.  The list is long.

The data for a specific trust are compared with an arbitrary minimum acceptable standard to decide where the organisation is on the Red-Amber-Green scale.

If we are in the red zone on the RAG chart … we get a kick.  If not we don’t.

The fear of being bullied and beaten raises the emotional temperature and the internal pressure … which drives movement to get away from the pain.  A nematode worm will behave this way. They are not stupid either.

As we approach the target line our RAG indicator turns “amber” … this is the “not statistically significant zone” … and now the stick is being waggled, ready in case the light goes red again.

So we muster our reserves of emotional energy and we PUSH until our RAG chart light goes green … but then we have to hold it there … which is exhausting.  One pain is replaced by another.

The next step is for the population of NHS nematodes to be compared with each other … they must be “bench-marked”, and some are doing better than others … as we might expect. We have done our “sadistics” training courses.

The bottom 5% or 10% line is used to set the “arbitrary minimum standard target” … and the top 10% are feted at national award ceremonies … and feast on the envy of the other 90 or 95% of “losers”.

The Cream of the Crop now have a big tick in their mission statement objectives box “To be in the Top 10% of Trusts in the UK“.  Hip hip huzzah.

And what has this system design actually achieved? The Cream of the Crap.

Oops!


It is said that every system is perfectly designed to deliver what it delivers.

And a system that has been designed to only use failure and fear to push improvement can only ever achieve chronic mediocrity – either chaotic mediocrity or complacent mediocrity.

So, if we want to tap into the vast zone of unfulfilled potential, and if we want to escape the perpetual pain of the Cream of the Crap Trap … we need a better system design.

And maybe we need a splash of humility and some system engineers to help us do that.

This week I met some at the Royal Academy of Engineering in London, and it felt like finding a candle of hope amidst the darkness of despair.

I said it had been a busy week!

DIKUW

This 100 second video of the late Russell Ackoff is solid gold!

In it he describes the DIKUW hierarchy – data, information, knowledge, understanding and wisdom – and how it is critical to put effectiveness before efficiency.

A wise objective is a purpose … the intended outcome … and a well designed system will be both effective and efficient.  That is the engineer’s definition of productivity: doing the right thing first, and doing it right second.

So how do we transform data into wisdom?  What needs to be added or taken away?  What is the process?

Data is what we get from our senses.

To convert data into information we add context.

To convert information into knowledge we use memory.

To convert knowledge into understanding we need to learn-by-doing.

And the test of understanding is to be able to teach someone else what we know and to be able to support them developing an understanding through practice.

To convert understanding into wisdom requires years of experience of seeing, doing and teaching.

There are no short cuts.

So the sooner we start learning-by-doing the quicker we will develop the wisdom of purpose, and the understanding of process.



Socrates the Improvement Coach

One of the challenges involved in learning the science of improvement, is to be able to examine our own beliefs.

We need to do that to identify the invalid assumptions that lead us to make poor decisions, and to act in ways that push us off the path to our intended outcome.

Over two thousand years ago, a Greek philosopher developed a way of exposing invalid assumptions.  He was called Socrates.

The Socratic method involves a series of questions that are posed to help a person or group to determine their underlying beliefs and the extent of their knowledge.  It is a way to develop better hypotheses by steadily identifying and eliminating those that lead to contradictions.

Socrates designed his method to force one to examine one’s own beliefs and the validity of such beliefs.


That skill is as valuable today as it was then, and is especially valuable when we explore complex subjects,  such as improving the performance of our health and social care system.

Our current approach is called reactive improvement – and we are reacting to failure.

Reactive improvement zealots seem obsessed with getting away from failure, disappointment, frustration, fear, waste, variation, errors, cost etc. in the belief that what remains after the dross has been removed is the good stuff. The golden nuggets.

And there is nothing wrong with that.

It has a couple of downsides though:

  1. Removing dross leaves holes, that all too easily fill up with different dross!
  2. Reactive improvement needs a big enough problem to drive it.  A crisis!

The implication is that reactive improvement grinds to a halt as the pressure is relieved and as it becomes mired in a different form of bureaucratic dross … the Quality Control Inspectorate!

No wonder we feel as if we are trapped in a perpetual state of chronic and chaotic mediocrity.


Creative improvement is, as the name suggests, focused on creating something that we want in the future.  Something like a health and social care system that is safe, calm, fit-4-purpose, and affordable.

Creative improvement does not need a problem to get started. A compelling vision and a choice to make-it-so is enough.

Creative improvement does not fizzle out as soon as we improve… because our future vision is always there to pull us forward.  And the more we practice creative improvement, the better we get, the more progress we make, and the stronger the pull becomes.


The main things that block us from using creative improvement are our invalid, unconscious beliefs and assumptions about what is preventing us achieving our vision now.

So we need a way to examine our beliefs and assumptions in a disciplined and robust way, and that is the legacy that Socrates left us.



Surgeon Designers

This is a snapshot of an experiment in progress.  The question being asked is “Can consultant surgeons be trained to be system flow designers in one day?”

On the left are Kate Silvester and Phil Debenham … their doctor/trainers.

 

On the right are some brave volunteer consultant surgeons.

It is a tense moment. The focused concentration is palpable. It is a tough design assignment … a chronically chaotic one-stop outpatient clinic. They know it well.


They have the raw, unprocessed, data and they are deep into diagnosis mode.  On the other side of the room is another team of consultant surgeon volunteers who are struggling with the same challenge. Competition is in the air. Reputations are on the line. The game is on.

They are racing to generate this … a process template chart … that illustrates the conversion of raw event data into something visible and meaningful. A Gantt chart.

Their tools are basic – coloured pens and squared paper – just as Henry L. Gantt used in 1916 – a hundred years ago.

Hidden in this Gantt chart is the diagnosis, the open door to the path to improving this clinic design.  It is as plain as the nose on your face … if you know what to look for. They don’t. Well, … not yet.
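
For a flavour of what the teams were doing with pens and squared paper, here is a minimal sketch (Python) that turns a made-up start/finish event log into a crude text Gantt chart.  The event times are invented for illustration; they are not the workshop data.

```python
# Invented one-stop clinic event log: (patient, step, start_minute, end_minute).
events = [
    ("P1", "check-in",  0,  5),
    ("P1", "test",      5, 25),
    ("P1", "consult",  40, 55),
    ("P2", "check-in", 10, 15),
    ("P2", "test",     25, 45),
    ("P2", "consult",  55, 70),
]

SCALE = 5  # one character of the bar represents 5 minutes

for patient, step, start, end in events:
    bar = " " * (start // SCALE) + "#" * max(1, (end - start) // SCALE)
    print(f"{patient} {step:<9}|{bar}")
```

The gaps between one patient’s bars are the waits, and it is the pattern of those waits, rather than the individual tasks, that points towards the diagnosis.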


Skip forwards to later in the experiment. Both teams have solved the ‘impossible’ problem. They have diagnosed the system design flaw that was causing the queues, chaos and waiting … and they have designed and verified a solution. With no more than squared paper and coloured pens.  Henry Gantt would be delighted.

And they are justifiably proud of their achievement because, when they tested their design in the real world, it showed that the queues and chaos had “evaporated”.  And it cost … nothing.


At the start of the experiment they were unaware of what was possible. At the end of the experiment they knew how to do it. In one day.

The question: “Can consultant surgeons be trained to be system flow designers in one day?”

The answer: “Yes”


 

Righteous Indignation

On 5th July 2018, the NHS will be 70 years old, and like many of those it was created to serve, it has become elderly and frail.

We live much longer, on average, than we used to and the growing population of frail elderly are presenting an unprecedented health and social care challenge that the NHS was never designed to manage.

The creases and cracks are showing, and each year feels more pressured than the last.


This week a story that illustrates this challenge was shared with me along with permission to broadcast …

“My mother-in-law is 91, in general she is amazingly self-sufficient, able to arrange most of her life with reasonable care at home via a council tendered care provider.

She has had Parkinson’s for years, needing regular medication to enable her to walk and eat (it affects her jaw and swallowing capability). So the care provision is time critical, to get up, have lunch, have tea and get to bed.

She’s also going deaf, profoundly in one ear, pretty bad in the other. She wears a single ‘in-ear’ aid, which has a micro-switch on/off toggle, far too small for her to see or operate. Most of the carers can’t put it in, and fail to switch it off.

Her care package is well drafted, but rarely adhered to. It should be 45 minutes in the morning, 30, 15, 30 through the day. Each time administering the medications from the dossette box. Despite the register in/out process from the carers, many visits are far less time than designed (and paid for by the council), with some lasting 8 minutes instead of 30!

Most carers don’t ensure she takes her meds, which sometimes leads to dropped pills on the floor, with no hope of picking them up!

While the care is supposedly ‘time critical’ the provider don’t manage it via allocated time slots, they simply provide lists, that imply the order of work, but don’t make it clear. My mother-in-law (Mum) cannot be certain when the visit will occur, which makes going out very difficult.

The carers won’t cook food, but will micro-wave it, thus if a cooked meal is to happen, my Mum will start it, with the view of the carers serving it. If they arrive early, the food is under-cooked (“Just put vinegar on it, it will taste better”) and if they arrive late, either she’ll try to get it out herself, or it will be dried out / cremated.

Her medication pattern should be every 4 to 5 hours in the day, with a 11:40 lunch visit, and a 17:45 tea visit, followed by a 19:30 bed prep visit, she finishes up with too long between meds, followed by far too close together. Her GP has stated that this is making her health and Parkinson’s worse.

Mum also rarely drinks enough through the day, in the hot weather she tends to dehydrate, which we try to persuade her must be avoided. Part of the problem is Parkinson’s related, part the hassle of getting to the toilet more often. Parkinson’s affects swallowing, so she tends to sip, rather than gulp. By sipping often, she deludes herself that she is drinking enough.

She also is stubbornly not adjusting methods to align to issues. She drinks tea and water from her lovely bone china cups. Because her grip is not good and her hand shakes, we can’t fill those cups very high, so her ‘cup of tea’ is only a fraction of what it could be.

As she can walk around most days, there’s no way of telling whether she drinks enough, and she frequently has several different carers in a day.

When Mum gets dehydrated, it affects her memory and her reasoning, similar to the onset of dementia. It also seems to increase her probability of falling, perhaps due to forgetting to be defensive.

When she falls, she cannot get up, thus usually presses her alarm dongle, resulting in me going round to get her up, check for concussion, and check for other injuries, prior to settling her down again. These can be ten weeks apart, through to a few in a week.

When she starts to hallucinate, we do our very best to increase drinking, seeking to re-hydrate.

On Sunday, something exceptional happened, Mum fell out of bed and didn’t press her alarm. The carer found her and immediately called the paramedics and her GP, who later called us in. For the first time ever she was not sufficiently mentally alert to press her alarm switch.

After initial assessment, she was taken to A&E, luckily being early on Sunday morning it was initially quite quiet.

Hospital

The Hospital is on the boundary between two counties, within a large town, a mixture of new build elements, between aging structures. There has been considerable investment within A&E, X-ray etc. due partly to that growth industry and partly due to the closures of cottage hospitals and reducing GP services out of hours.

It took some persuasion to have Mum put on a drip, as she hadn’t had breakfast or any fluids, and dehydration was a probable primary cause of her visit. They took bloods, an X-ray of her chest (to check for fall related damage) and a CT scan of her head, to see if there were issues.

I called the carers to tell them to suspend visits, but the phone simply rang without being answered (not for the first time).

After about six hours, during which time she was awake, but not very lucid, she was transferred to the day ward, where after assessment she was given some meds, a sandwich and another drip.

Later that evening we were informed she was to be kept on a drip for 24 hours.

The next day (Bank Holiday Monday) she was transferred to another ward. When we arrived she was not on a drip, so their decisions had been reversed.

I spoke at length with her assigned staff nurse, and was told the following: Mum could come out soon if she had a 24/7 care package, and that as well as the known issues mum now has COPD. When I asked her what COPD was, she clearly didn’t know, but flustered a ‘it is a form of heart failure that affects breathing’. (I looked it up on my phone a few minutes later.)

So, to get mum out, I had to arrange a 24/7 care package, and nowhere was open until the next day.

Trying to escalate care isn’t going to be easy, even in the short term. My emails to ‘usually very good’ social care people achieved nothing to start with on Tuesday, and their phone was on the ‘out of hours’ setting for evenings and weekends, despite being during the day of a normal working week.

Eventually I was told that there would be nothing to achieve until the hospital processed the correct exit papers to Social Care.

When we went in to the hospital (on Tuesday) a more senior nurse was on duty. She explained that mum was now medically fit to leave hospital if care can be re-established. I told her that I was trying to set up 24/7 care as advised. She looked through the notes and said 24/7 care was not needed, the normal 4 x a day was enough. (She was clearly angry).

I then explained that the newly diagnosed COPD may be part of the problem, she said that she’s worked with COPD patients for 16 years, and mum definitely doesn’t have COPD. While she was amending the notes, I noticed that mum’s allergy to aspirin wasn’t there, despite us advising that on entry. The nurse also explained that as the hospital is in one county, but almost half their patients are from another, they are always stymied on ‘joined up working’

While we were talking with mum, her meds came round and she was only given paracetamol for her pain, but NOT her meds for Parkinson’s. I asked that nurse why that was the case, and she said that was not on her meds sheet. So I went back to the more senior nurse, she checked the meds as ordered and Parkinson’s was required 4 x a day, but it was NOT transferred onto the administration sheet. The doctor next to us said she would do it straight away, and I was told, “Thank God you are here to get this right!”

Mum was given her food, it consisted of some soup, which she couldn’t spoon due to lack of meds and a dry tough lump of gammon and some mashed sweet potato, which she couldn’t chew.

When I asked why meds were given at five, after the delivery of food, they said ‘That’s our system!’, when I suggested that administering Parkinson’s meds an hour before food would increase the ability to eat the food they said “that’s a really good idea, we should do that!”

On Wednesday I spoke with Social Care to try to re-start care to enable mum to get out. At that time the social worker could neither get through to the hospital nor the carers. We spoke again after I had arrived in hospital, but before I could do anything.

On arrival at the hospital I was amazed to see the white-board declaring that mum would be discharged for noon on Monday (in five days’ time!). I spoke with the assigned staff nurse who said, “That’s the earliest that her carers can re-start, and anyway it’s nearly the weekend”.

I said that “mum was medically OK for discharge on Tuesday, after only two days in the hospital, and you are complacent to block the bed for another six days, have you spoken with the discharge team?”

She replied, “No they’ll have gone home by now, and I’ve not seen them all day” I told her that they work shifts, and that they will be here, and made it quite clear if she didn’t contact SHEDs that I’d go walkabout to find them. A few minutes later she told me a SHED member would be with me in 20 minutes.

While the hospital had resolved her medical issues, she was stuck in a ward, with no help to walk, the only TV via a complex pay-for system she had no hope of understanding, with no day room, so no entertainment and no exercise; just boredom, encouraged to lie in bed and wear a pad because she won’t be taken to the loo in time.

When the SHED worker arrived I explained the staff nurse attitude, she said she would try to improve those thinking processes. She took lots of details, then said that so long as mum can walk with assistance, she could be released after noon, to have NHS carer support, 4 times a day, from the afternoon. She walked around the ward for the first time since being admitted, and while shaky was fine.

Hopefully all will be better now?”


This story is not exceptional … I have heard it many times from many people in many different parts of the UK.  It is the norm rather than the exception.

It is the story of a fragmented and fractured system of health and social care.

It is the story of frustration for everyone – patients, family, carers, NHS staff, commissioners, and tax-payers.  A fractured care system is unsafe, chaotic, frustrating and expensive.

There are no winners here.  It is not a trade-off, a compromise, or the best that is possible.

It is just poor system design.


What we want has a name … it is called a Frail Safe design … and this is not a new idea.  It is achievable. It has been achieved.

http://www.frailsafe.org.uk

So why is this still happening?

The reason is simple – the NHS does not know any other way.  It does not know how to design itself to be safe, calm, efficient, high quality and affordable.

It does not know how to do this because it has never learned that this is possible.

But it is possible to do, and it is possible to learn, and that learning does not take very long or cost very much.

And the return vastly outweighs the investment.


The title of this blog is Righteous Indignation

… if your frail elderly parents, relatives or friends were forced to endure a system that is far from frail safe; and you learned that this situation was avoidable and that a safer design would be less expensive; and all you hear is “can’t do” and “too busy” and “not enough money” and “not my job” …  wouldn’t you feel a sense of righteous indignation?

I do.



The Pressure Cooker

About a year ago we looked back at the previous 10 years of NHS unscheduled care performance …

click here to read

… and warned that a catastrophe was on the way because we had unintentionally created a urgent care “pressure cooker”.

 

Did waving the red warning flag make any difference? It seems not.

The catastrophe unfolded as predicted … A&E performance slumped to an all-time low, and has not recovered.


A pressure cooker is an elegantly simple self-regulating system.  A strong metal box with a sealed lid and a pressure-sensitive valve.  Food cooks more quickly at a higher temperature, and we can increase the boiling point of water by increasing the ambient pressure.  So all we need to do is put some water in the cooker, close the lid, set the pressure limit we need (i.e. the temperature we want) and apply some heat.  Simple.  As the water boils the steam increases the pressure inside, until the regulator valve opens and lets a bit of steam out.  The more heat we apply – the faster the steam comes out – but the internal pressure and temperature remain constant.  An elegantly simple self-regulating system.


Our unscheduled care acute hospital “pressure cooker” design is very similar – but it has an additional feature – we can squeeze raw patients in through a one-way valve labelled “admissions”.  The internal pressure will eventually squeeze them out through another one-way pressure-sensitive valve called “discharges”.

But there is not much head-space inside our hospital (i.e. empty beds) so pushing patients in will increase the pressure inside, and it will trigger an internal reaction called “fire-fighting” that generates heat (but no insight).  When the internal pressure reaches the critical level, patients are squeezed out; ready-or-not.

What emerges from the chaotic internal cauldron is a mixture of under-cooked, just-right, and over-cooked patients.  And we then conduct quality control audits and we label what we find as “quality variation”, but it looks random so it gives us no clues as to the causes or what to do next.

Equilibrium is eventually achieved – what goes in comes out – the pressure and temperature auto-regulate – the chaos becomes chronic – and the quality of the output is predictably unacceptable and unpredictable, with some of it randomly spoiled (i.e. harmed).

And our acute care pressure cooker is very resistant to external influences. It is one of its key design features, it is an auto-regulating system.
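
To see that auto-regulation in action, here is a toy simulation (Python) of the metaphor: a fixed number of beds, patients pushed in, and the longest-stay patient squeezed out whenever the box is full, ready or not.  All the numbers are invented; this is a cartoon of the metaphor, not a model of any real hospital.

```python
import random

random.seed(1)

BEDS = 20          # fixed space-capacity
IDEAL_STAY = 5     # days a patient "should" stay to be properly cooked
occupants = []     # days each current in-patient has been inside

undercooked = just_right = overcooked = 0

for day in range(365):
    occupants = [d + 1 for d in occupants]        # everyone already inside stays one more day
    for _ in range(random.randint(3, 6)):         # today's admissions pushed in through the valve
        while len(occupants) >= BEDS:             # full? squeeze out the longest-stay patient
            stay = max(occupants)
            occupants.remove(stay)
            if stay < IDEAL_STAY:
                undercooked += 1
            elif stay > IDEAL_STAY:
                overcooked += 1
            else:
                just_right += 1
        occupants.append(0)                       # admit the new arrival

print("under-cooked:", undercooked, "just-right:", just_right, "over-cooked:", overcooked)
```

In this toy, whatever inflow we choose, the occupancy settles at the number of beds; the only thing that changes is the mix of under- and over-cooked output, which is the auto-regulation described above.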


Option 1: Admissions Avoidance
Squeezing a bit less in does not make any difference to the internal pressure and temperature.  It auto-regulates.  The reduced inflow means a reduced outflow and a longer cooking time and we just get less under-cooked and more over-cooked output.  Oh, and we go bust because our revenue has reduced but our costs have not.

Option 2: Build a Bigger Hospital
Building a bigger pressure cooker (i.e. adding more beds) does not make any sustained difference either.  Again the system auto-regulates.  The extra space-capacity allows a longer cooking time – and again we get less under-cooked and more over-cooked output.  Oh, and we still go bust (same revenue but increased cost).

Option 3: Reduce the Expectation
Turning down the heat (i.e. reducing the 4 hr A&E lead time target yield from 98% to 95%) does not make any difference. Our elegant auto-regulating design adjusts itself to sustain the internal pressure and temperature.  Output is still variable, but at least we do not go bust.


This metaphor may go some way to explain why the intuitively obvious “initiatives” to improve unscheduled care performance appear to have had no significant or sustained impact.

And what is more worrying is that they may even have made the situation worse.

Also, working inside an urgent care pressure cooker is dangerous.  People get emotionally damaged and permanently scarred.


The good news is that a different approach is available … a health and social care systems engineering (HSCSE) approach … one that we could use to change the fundamental design from fire-fighter to flow-facilitator.

Using HSCSE theory, techniques and tools we could specify, design, build, verify, implement and validate a low-pressure, low-resistance, low-wait, low-latency, high-efficiency unscheduled care flow design that is safe, timely, effective and affordable.

But we are not training NHS staff to do that.

Why is that?  Is it because we are not aware that this is possible, or that we do not believe that it can work, or that we lack the capability to do it? Or all three?

The first step is raising awareness … so here is an example that proves it is possible.

Bloodsucking Bugs

This is a magnified picture of a blood-sucking bug called a Red Poultry Mite.

They go red after having gorged themselves on chicken blood.

Their life-cycle is only 7 days so, when conditions are just right, they can quickly cause an infestation – and one that is remarkably difficult to eradicate!  But if it is not dealt with then chicken coop productivity will plummet.


We use the term “bug” for something else … a design error … in a computer program for example.  If the conditions are just right, then software bugs can spread too and can infest a computer system.  They feed on the hardware resources – slurping up processor time and memory space until the whole system slows to a crawl.


And one especially pernicious type of system design error is called an Error of Omission.  These are the things we do not do that would prevent the bloodsucking bugs from breeding and spreading.

Prevention is better than cure.


In the world of health care improvement there are some blood suckers out there, ones who home in on a susceptible host looking for a safe place to establish a colony.  They are masters of the art of mimicry.  They look like and sound like something they are not … they claim to be symbiotic whereas in reality they are parasitic.

The clue to their true nature is that their impact does not match their intent … but by the time that gap is apparent they are entrenched and their spores have already spread.

Unlike the Red Poultry Mites, we do not want to eradicate them … we need to educate them. They only behave like parasites because they are missing a few essential bits of software.  And once those upgrades are installed they can achieve their potential and become symbiotic.

So, let me introduce them, they are called Len, Siggy and Tock and here is their story:

Six Ways Not To Improve Flow

Crash Test Dummy

There are two complementary approaches to safety and quality improvement: desire and design.

In the improvement-by-desire world we use a suck-it-and-see approach to fix a problem.  It is called PDSA.

Sometimes this works and we pat ourselves on the back, and remember the learning for future use.

Sometimes it works for us but has a side effect: it creates a problem for someone else.  And we may not be aware of the unintended consequence unless someone shouts “Oi!” It may be too late by then of course.


The more parts in a system, and the more interconnected they are, the more likely it is that a well-intended suck-it-and-see change will create an unintended negative impact.

And in that situation our temptation is to … do nothing … and put up with the problems. It seems the safest option.


In the improvement-by-design world we choose to study first, and to find the causal roots of the system behaviour we are seeing.  Our first objective is a diagnosis.

With that we can propose rational design changes that we anticipate will deliver the improvement we seek without creating adverse effects.

But we have learned the hard way that our intuition can trick us … so we need a way to test our designs … a safe and controlled way.  We need a crash test dummy!


What they do is to deliberately experience our design in a controlled experiment, and what they generate for us is constructive feedback. What did work, and what did not.

A crash test dummy is tough and sensitive at the same time.  They do not break easily and yet they feel the pain and gain too.  They are resilient.


And with their feedback we can re-visit our design and improve it further, or we can use it to offer evidence-based assurance that our design is fit-for-purpose.

Safety and Quality Assurance is improvement-by-design. Diagnosis-and-treatment.

Safety and Quality Control is improvement-by-desire. Suck-and-see.

If you were a passenger or a patient … which option would you prefer?

Fragmentation Cost

The late Russell Ackoff used to tell a great story. It goes like this:

“A team set themselves the stretch goal of building the World’s Best Car.  So they put their heads together and came up with a plan.

First they talked to drivers and drew up a list of all the things that the World’s Best Car would need to have. Safety, speed, low fuel consumption, comfort, good looks, low emissions and so on.

Then they drew up a list of all the components that go into building a car. The engine, the wheels, the bodywork, the seats, and so on.

Then they set out on a quest … to search the world for the best components … and to bring the best one of each back.

Then they could build the World’s Best Car.

Or could they?

No.  All they built was a pile of incompatible parts. The WBC did not work. It was a futile exercise.


Then the penny dropped. The features in their wish-list were not associated with any of the separate parts. Their desired performance emerged from the way the parts worked together. The working relationships between the parts were as necessary as the parts themselves.

And a pile of average parts that work together will deliver a better performance than a pile of best parts that do not.

So the relationships were more important than the parts!


From this they learned that the quickest, easiest and cheapest way to degrade performance is to make working-well-together a bit more difficult.  Irrespective of the quality of the parts.


Q: So how do we reverse this degradation of performance?

A: Add more failure-avoidance targets of course!

But we just discovered that performance is the effect of how well the parts work together.  Will another failure-metric-fuelled performance target help?  How will each part know what it needs to do differently – if anything?  How will each part know if the changes it has made are having the intended impact?

Fragmentation has a cost.  Fear, frustration, futility and ultimately financial failure.

So if performance is fading … the quality of the working relationships is a good place to look for opportunities for improvement.

The Fog

The path from chaos to calm is not clearly marked.  If it were we would not have chaotic health care processes, anxious patients, frustrated staff and escalating costs.

Many believe that there is no way out of the chaos. They have given up trying.

Some still nurture the hope that there is a way and are looking for a path through the fog of confusion.

A few know that there is a way out because they have been shown a path from chaos to calm and can show others how to find it.

Someone, a long time ago, explored the fog and discovered clarity of understanding on the far side, and returned with a Map of the Mind-field.


Q: What is causing The Fog?

When hot rhetoric meets cold reality the fog of disillusionment forms.

Q: Where does the hot rhetoric come from?

Passionate, well-intended and ill-informed people in positions of influence, authority and power. The orators, debaters and commentators.

They do not appear to have an ability to diagnose and to design, so cannot generate effective decisions and coordinate efficient delivery of solutions.

They have not learned how and seem to be unaware of it.

If they had, then they would be able to show that there is a path from chaos to calm.

A safe, quick, surprisingly enjoyable and productive path.

If they had the know-how then they could pull from the front in the ‘right’ direction, rather than push from the back in the ‘wrong’ one.


And the people who are spreading this good news are those who have just emerged from the path, their own fog of confusion evaporating as they discovered the clarity of hindsight for themselves.

Ah ha!  Now I see! Wow!  The view from the far side of The Fog is amazing and exciting. The opportunity and potential is … unlimited.  I must share the news. I must tell everyone! I must show them how-to.

Here is a story from Chris Jones who has recently emerged from The Fog.

And here is a description of part of the Mind-field Map, narrated in 2008 by Kate Silvester, a doctor and manufacturing systems engineer.

Early Warning System

The most useful tool that a busy operational manager can have is a reliable and responsive early warning system (EWS).

One that alerts when something is changing and that, if missed or ignored, will cause a big headache in the future.

Rather like the radar system on an aircraft that beeps if something else is approaching … like another aircraft or the ground!


Operational managers are responsible for delivering stuff on time.  So they need a radar that tells them if they are going to deliver-on-time … or not.

And their on-time-delivery EWS needs to alert them soon enough that they have time to diagnose the ‘threat’, design effective plans to avoid it, decide which plan to use, and deliver it.

So what might an effective EWS for a busy operational manager look like?

  1. It needs to be reliable. No missed threats or false alarms.
  2. It needs to be visible. No tomes of text and tables of numbers.
  3. It needs to be simple. Easy to learn and quick to use.

And what is on offer at the moment?

The RAG Chart
This is a table that is coloured red, amber and green. Red means ‘failing’, green means ‘not failing’ and amber means ‘not sure’.  So this meets the specification of visible and simple, but is it reliable?

It appears not.  RAG charts do not appear to have helped to solve the problem.

A RAG chart is generated using historic data … so it tells us where we are now, not how we got here, where we are going or what else is heading our way.  It is a snapshot. One frame from the movie.  Better than complete blindness perhaps, but not much.

The SPC Chart
This is a statistical process control chart and is a more complicated beast.  It is a chart of how some measure of performance has changed over time in the past.  So, like the RAG chart, it is generated using historic data.  The advantage is that it is not just a snapshot of where we are now, it is a picture of the story of how we got to where we are, so it offers the promise of pointing to where we may be heading.  It meets the specification of visible, and while more complicated than a RAG chart, it is relatively easy to learn and quick to use.

Here is an example. It is the SPC chart of the monthly A&E 4-hour target yield performance of an acute NHS Trust.  The blue lines are the ‘required’ range (95% to 100%), the green line is the average and the red lines are a measure of variation over time.  What this chart says is: “This hospital’s A&E 4-hour target yield performance is currently acceptable, has been so since April 2012, and is improving over time.”

So that is much more helpful than a RAG chart (which in this case would have been green every month because the average was above the minimum acceptable level).
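For readers who are curious where those red lines come from, here is a minimal sketch, assuming an individuals (XmR) style chart in which the natural process limits are set at the mean plus or minus 2.66 times the average moving range.  The chart above may well have been built with a slightly different recipe, and the monthly figures below are invented purely for illustration.

```python
def xmr_limits(values):
    """Mean and natural process limits for an individuals (XmR) chart.
    Limits are the mean +/- 2.66 * the average absolute point-to-point change."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

# Invented monthly A&E 4-hour yield percentages, for illustration only.
yields = [96.2, 97.1, 95.8, 96.5, 97.3, 96.9, 95.9, 96.7]
print(xmr_limits(yields))   # (mean, lower limit, upper limit)
```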


So why haven’t SPC charts replaced RAG charts in every NHS Trust Board Report?

Could there be a fly-in-the-ointment?

The answer is “Yes” … there is.

SPC charts are a quality audit tool.  They were designed nearly 100 years ago for monitoring the output quality of a process that is already delivering to specification (like the one above).  They are designed to alert the operator to early signals of deterioration, called ‘assignable cause signals’, and they prompt the operator to pay closer attention and to investigate plausible causes.

SPC charts are not designed for predicting if there is a flow problem looming over the horizon.  They are not designed for flow metrics that exhibit expected cyclical patterns.  They are not designed for monitoring metrics that have very skewed distributions (such as length of stay).  They are not designed for metrics where small shifts generate big cumulative effects.  They are not designed for metrics that change more slowly than the frequency of measurement.

And these are exactly the sorts of metrics that a busy operational manager needs to monitor, in reality, and in real-time.

Demand and activity both show strong cyclical patterns.

Lead-times (e.g. length of stay) are often very skewed by variation in case-mix and task-priority.

Waiting lists are like bank accounts … they show the cumulative sum of the difference between inflow and outflow.  That simple fact invalidates the use of the SPC chart.

Small shifts in demand, activity, income and expenditure can lead to big cumulative effects.
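That ‘bank account’ behaviour is easy to demonstrate for yourself.  Here is a toy sketch (the weekly referral and treatment numbers are invented) showing that a waiting list can wander a long way even when both of its flows are perfectly stable:

```python
import random

random.seed(42)
waiting_list = 0
sizes = []
for week in range(104):                   # two years of weekly snapshots
    referrals = random.randint(90, 110)   # stable, random inflow
    treated = random.randint(90, 110)     # stable, random outflow, same average
    waiting_list = max(0, waiting_list + referrals - treated)
    sizes.append(waiting_list)

# The list size wanders widely even though both flows are 'in control'
# around 100 per week - which is why an SPC chart of the list size misleads.
print(min(sizes), max(sizes))
```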

So if we abandon our RAG charts and we replace them with SPC charts … then we climb out of the RAG frying pan and fall into the SPC fire.

Oops!  No wonder the operational managers and financial controllers have not embraced SPC.


So is there an alternative that works better?  A more reliable EWS that busy operational managers and financial controllers can use?

Yes, there is, and here is a clue …

… but tread carefully …

… building one of these Flow-Productivity Early Warning Systems is not as obvious as it might first appear.  There are counter-intuitive traps for the unwary and the untrained.

You may need the assistance of a health care systems engineer (HCSE).

Precious Life Time

Imagine this scenario:

You develop some non-specific symptoms.

You see your GP who refers you urgently to a 2 week clinic.

You are seen, assessed, investigated and informed that … you have cancer!


The shock, denial, anger, blame, bargaining, depression, acceptance sequence kicks off … it is sometimes called the Kübler-Ross grief reaction … and it is a normal part of the human psyche.

But there is better news. You also learn that your condition is probably treatable, but that it will require chemotherapy, and that there are no guarantees of success.

You know that time is of the essence … the cancer is growing.

And time has a new relevance for you … it is called life time … and you know that you may not have as much left as you had hoped.  Every hour is precious.


So now imagine your reaction when you attend your local chemotherapy day unit (CDU) for your first dose of chemotherapy and have to wait four hours for the toxic but potentially life-saving drugs.

They are very expensive and they have a short shelf-life so the NHS cannot afford to waste any.   The Aseptic Unit team wait until all the safety checks are OK before they proceed to prepare your chemotherapy.  That all takes time, about four hours.

Once the team get to know you it will go quicker. Hopefully.

It doesn’t.

The delays are not the result of unfamiliarity … they are the result of the design of the process.

All your fellow patients seem to suffer repeated waiting too, and you learn that they have been doing so for a long time.  That seems to be the way it is.  The waiting room is well used.

Everyone seems resigned to the belief that this is the best it can be.

They are not happy about it but they feel powerless to do anything.


Then one day someone demonstrates that it is not the best it can be.

It can be better.  A lot better!

And they demonstrate that this better way can be designed.

And they demonstrate that they can learn how to design this better way.

And they demonstrate what happens when they apply their new learning …

… by doing it and by sharing their story of “what-we-did-and-how-we-did-it“.

CDU_Waiting_Room

If life time is so precious, why waste it?

And perhaps the most surprising outcome was that their safer, quicker, calmer design was also 20% more productive.

The Capstan

A capstan is a simple machine for combining the effort of many people and enabling them to achieve more than any of them could do alone.

The word appears to have come into English from the Portuguese and Spanish sailors at around the time of the Crusades.

Each sailor works independently of the others. There is no requirement for them to be equally strong because the capstan will combine their efforts.  And the capstan also serves as a feedback loop because everyone can sense when someone else pushes harder or slackens off.  It is an example of simple, efficient, effective, elegant design.


In the world of improvement we also need simple, efficient, effective and elegant ways to combine the efforts of many in achieving a common purpose.  Such as raising the standards of excellence and weighing the anchors of resistance.

In health care improvement we have many simultaneous constraints and we have many stakeholders with specific perspectives and special expertise.

And if we are not careful they will tend to pull only in their preferred direction … like a multi-way tug-o-war.  The result?  No progress and exhausted protagonists.

There are those focused on improving productivity – Team Finance.

There are those focused on improving delivery – Team Operations.

There are those focused on improving safety – Team Governance.

And we are all tasked with improving quality – Team Everyone.

So we need a synergy machine that works like a capstan-of-old, and here is one design.

The design (the ‘Engine of Excellence’) has four poles and it always turns in a clockwise direction, so the direction of push is clear.

And when all the protagonists push in the same direction, they will get their own ‘win’ and also assist the others to make progress.

This is how the sails of success are hoisted to catch the wind of change; and how the anchors of anxiety are heaved free of the rocks of fear; and how the bureaucratic bilge is pumped overboard to lighten our load and improve our speed and agility.

And the more hands on the capstan the quicker we will achieve our common goal.

Collective excellence.

Resuscitate-Review-Repair

We form emotional attachments to places where we have lived and worked.  And it catches our attention when we see them in the news.

So this June 2016 headline in the Portsmouth News caught my eye, because I was a surgical SHO in Portsmouth in the closing years of the Second Millennium.  The good old days when we still did 1:2 on-call rotas (i.e. up to 104 hours per week) and we were paid 70% LESS for the on-call hours than the Mon-Fri 9-5 work.  We also had stable ‘firms’, superhuman senior registrars, a canteen that served hot food and strong coffee around the clock, and doctors’ mess parties that were … well … messy!  A lot has changed.  And not all for the better.

Here is the link to the fuller story about the emergency failures.

And from it we get the impression that this is a recent problem, and that with a bit of a smack and some name-shame-blame-game feedback from the CQC, all will be restored to robust health.  Hmm.  I am not so sure that is the full story.


Here is the monthly aggregate A&E 4-hour target performance chart for Portsmouth from 2010 to date.

It says “this is not a new problem“.

It also says that the ‘patient’ has been deteriorating spasmodically over six years and is now critically-ill.

And giving a critically-ill hospital a “good telling off” is about as effective as telling a critically-ill patient to “pull themselves together“.  Inept management.

In A&E a critically-ill patient requires competent resuscitation using a tried-and-tested process of ABC.  Airway, Breathing, Circulation.


Also, the A&E 4-hour performance is only a symptom of the sickness in the whole urgent care system.  It is the reading on an emotometer inserted into the A&E orifice of the acute hospital!  Just one piece in a much bigger flow jigsaw.

It only tells us the degree of distress … not the diagnosis … nor the required treatment.


So what level of A&E health can we realistically expect to be able to achieve? What is possible in the current climate of austerity? Just how chilled-out can the A&E cucumber run?


This is the corresponding A&E emotometer chart for a different district general hospital somewhere else in NHS England.

Luton & Dunstable Hospital to be specific.

This A&E happiness chart looks a lot healthier and it seems to be getting even healthier over time too.  So this is possible.


Yes, but … if our hospital deteriorates enough to be put on the ‘critical list’ then we need to call in an Emergency Care Intensive Support Team (ECIST) to resuscitate us.

A very good idea.

And how do their critically-ill patients fare?

Here is the chart of one of them. The significant improvement following the ‘resuscitation’ is impressive to be sure!

But, disappointingly, it was not sustained and the patient ‘crashed’ again. Perhaps they were just too poorly? Perhaps the first resuscitation call was sent out too late? But at least they tried their best.

An experienced clinician might comment: “Those are indeed plausible explanations, but before we conclude that is the actual cause, can I check that we did not just treat the symptoms and miss the disease?”


Q: So is it actually possible to resuscitate and repair a sick hospital?  Is it possible to restore it to sustained health, by diagnosing and treating the cause, and not just the symptoms?


Here is the corresponding A&E emotometer chart of yet another hospital.

It shows the same pattern of deteriorating health. And it shows a dramatic improvement.  It appears to have responded to some form of intervention.

And this time the significant improvement has sustained. The patient did not crash-and-burn again.

So what has happened here that explains this different picture?

This hospital had enough insight and humility to seek the assistance of someone who knew what to do and who had a proven track record of doing it.  Dr Kate Silvester to be specific.  A dual-trained doctor and manufacturing systems engineer.

Dr Kate is now a health care systems engineer (HCSE), and an experienced ‘hospital doctor’.

Dr Kate helped them to learn how to diagnose the root causes of their A&E 4-hr fever, and then she showed them how to design an effective treatment plan.

They did the re-design; they tested it; and they delivered their new design. Because they owned it, they understood it, and they trusted their own diagnosis-and-design competence.

And the evidence of their impact matching their intent speaks for itself.

A Recipe for Chaos

There is an easy and quick-to-cook recipe for chaos.

All we have to do is to ensure that the maximum number of jobs that we can do in a given time is set equal to the average number of jobs that we are required to do in the same period of time.

Eh?

That does not make sense.  Our intuition says that looks like the perfect recipe for a hyper-efficient, zero-waste, zero idle-time design, which is what we want.


I know it does, but it isn’t.  Our intuition is tricking us.

It is the recipe for chaos – and to prove it we will have to do a real-world experiment – because to prove it using maths is really difficult.  So difficult, in fact, that the formula was not revealed until 1962 – by a mathematician called John Kingman, while a postgraduate student at Pembroke College, Cambridge.
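For the curious, Kingman’s result (sometimes informally called the VUT equation: Variability × Utilisation × Time) says that, for a single resource, the expected waiting time behaves roughly like W ≈ [ρ ÷ (1 − ρ)] × [(Ca² + Cs²) ÷ 2] × τ, where ρ is the utilisation, τ is the average job time, and Ca and Cs measure the relative variation in arrivals and in job times.  The ρ ÷ (1 − ρ) term is the one that matters here: as utilisation creeps towards 100%, the expected wait grows without limit.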

The empirical experiment is very easy to do – all we need is a single step process – and a stream of jobs to do.

And we could do it for real, or we can simulate it using an Excel spreadsheet – which is much quicker.


So we set up our spreadsheet to simulate a new job arriving every X minutes and each job taking X minutes to complete.

Our operator can only do one job at a time so if a job arrives and the operator is busy the job joins the back of a queue of jobs and waits.

When the operator finishes a job it takes the next one from the front of the queue, the one that has been waiting longest.

And if there is no queue the operator will wait until the next job arrives.

Simple.

And when we run the simulation we see that there is indeed no queue, no jobs waiting, and the operator is always busy (i.e. 100% utilised). Perfection!

BUT ….

This is not a realistic scenario.  In reality there is always some random variation.  Not all jobs require the same length of time, and jobs do not arrive at precisely the right intervals.

No matter, our confident intuition tells us. It will average out.  Swings-and-roundabouts. Give-and-take.

It doesn’t.

And if you do not believe me just build the simple Excel model outlined above, verify that it works, then add some random variation to the time it takes to do each job … and observe what happens to the average waiting time.

What you will discover is that as soon as we add even a small amount of random variation we get a queue, and waiting and idle resources as well!

But not a steady, stable, predictable queue … Oh No! … We get an unsteady, unstable and unpredictable queue … we get chaos.

Try it.
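And if a spreadsheet is not to hand, here is a minimal sketch of the same experiment in Python.  The exponential job times, the 10-minute average and the 100,000-job run length are my own illustrative choices, not a prescription.

```python
import random

def simulate(n_jobs, interarrival, service, seed=1):
    """Single-step, single-operator FIFO queue: returns the average wait per job.
    interarrival() and service() return the next inter-arrival time and the
    next job duration, in minutes."""
    random.seed(seed)
    arrival = 0.0        # when the next job arrives
    operator_free = 0.0  # when the operator finishes the current job
    total_wait = 0.0
    for _ in range(n_jobs):
        arrival += interarrival()
        start = max(arrival, operator_free)  # wait if the operator is busy
        total_wait += start - arrival
        operator_free = start + service()
    return total_wait / n_jobs

X = 10.0  # minutes

# Perfect regularity: a job arrives every X minutes and takes exactly X minutes.
print(simulate(100_000, lambda: X, lambda: X))  # average wait ~ 0

# Same averages, but with random variation in how long each job takes.
print(simulate(100_000, lambda: X, lambda: random.expovariate(1 / X)))
# the average wait is now huge, and it keeps growing the longer we run
```

Shrink the amount of variation and the average wait shrinks too; remove it entirely and the queue disappears – which is exactly the trade-off between utilisation and variation that the formula above describes.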


So what? How does this abstract ‘queue theory’ apply to the real world?


Well, suppose we have a single black box system called ‘a hospital’ – patients arrive and we work hard to diagnose and treat them.  And so long as we have enough resource-time to do all the jobs we are OK. No unstable queues. No unpredictable waiting.

But time-costs-money and we have an annual cost improvement programme (CIP) target that we are required to meet, so we need to ‘trim’ resource-time capacity to push up resource utilisation.  And we will call that an ‘efficiency improvement’ which is good … yes?

It isn’t actually.  I can just as easily push up my ‘utilisation’ by working slower, or doing stuff I do not need to, or by making mistakes that I have to check for and then correct.  I can easily make myself busier and delude myself I am working harder.

And we are also a victim of our own success … the better we do our job … the longer people live and the more workload they put on the health and social care system.

So we have the perfect storm … the perfect recipe for chaos … slowly rising demand … slowly shrinking budgets … and an inefficient ‘business’ design.

And that in a nutshell is the reason the NHS is descending into chaos.


So what is the solution?

Reduce demand? Stop people getting sick? Or make them sicker so they die quicker?

Increase budgets? Where will the money come from? Beg? Borrow? Steal? Economic growth?

Improve the design?  Now there’s a thought. But how? By using the same beliefs and behaviours that have created the current chaos?

Maybe we need to challenge some invalid beliefs and behaviours … and replace those that fail the Reality Test with some more effective ones.

High Performing Design Teams

It is possible but unusual for significant improvement-by-design to be delivered by an individual.

It is much more likely to require a group of people – a design team.


And that is where efforts to improve often come to a grinding halt because, despite our good intentions, we are not always very good at collaborative improvement.


This is not a new problem so the solution must be elusive, yes?

Well, actually that is not the case.  We all already know what to do, we all know the pieces of the productive team jigsaw … we just do not use all of them all of the time.

Fortunately, there is an easy way to get around this problem. A checklist.

Just like the ones that astronauts, pilots, and surgeons use.

And this week I discovered an excellent source of checklists for developing and sustaining high performance teams:

A Systematic Guide to High Performing Teams by Ken Thompson (ISBN 978-1522871910), and here is a TEDx talk by Ken describing the ‘secrets’.

The ones that we all know.

Notably Absent

This week the King’s Fund published their Quality Monitoring Report (May 2016) for the NHS, and it makes depressing reading.

These highlights are a snapshot.

The website has some excellent interactive time-series charts that transform the deluge of data the NHS pumps out into pictures that tell a shameful story.

On almost all reported dimensions, things are getting worse and getting worse faster.

Which I do not believe is the intention.

But it is clearly the impact of the last 20 years of health and social care policy.


What is more worrying is the data that is notably absent from the King’s Fund QMR.

The first omission is outcome: How well did the NHS deliver on its intended purpose?  It is stated at the top of the NHS England web site …

[Image: the stated purpose of NHS England, from the top of the NHS England website]

And let us be very clear here: dying, waiting, complaining and over-spending are not measures of what we want – health and quality success metrics.  They are measures of what we do not want; they are failure metrics.

The fanatical focus on failure is part of the hyper-competitive, risk-averse medical mindset:

primum non nocere (first do no harm),

and as a patient I am reassured to hear that, but is no harm all I can expect?

What about:

tunc mederi (then do some healing)


And where is the data on dying in the King’s Fund QMR?

It seems to be notably absent.

And I would say that is a quality issue because it is something that patients are anxious about.  And that may be because they are given so much ‘open information’ about what might go wrong, not what should go right.


And you might think that sharp, objective data on dying would be easy to collect and to share.  After all, it is not conveniently fuzzy and subjective like satisfaction.

It is indeed mandatory to collect hospital mortality data, but sharing it seems to be a bit more of a problem.

The fear-of-failure fanaticism extends there too.  In the wake of humiliating, historical, catastrophic failures like Mid Staffs, all hospitals are monitored, measured and compared. And the negative deviants are named, shamed and blamed … in the hope that improvement might follow.

And to do the bench-marking we need to compare apples with apples; not peaches with lemons.  So we need to process the raw data to make it fair to compare; to ensure that factors known to be associated with higher risk of death are taken into account. Factors like age, urgency, co-morbidity and primary diagnosis.  Factors that are outside the circle-of-control of the hospitals themselves.

And there is an army of academics, statisticians, data processors, and analysts out there to help. The fruit of their hard work and dedication is called SHMI … the Summary Hospital Mortality Index.

[Image: extract from the SHMI specification]

Now, the most interesting paragraph is the third one, which outlines what raw data is fed in to build the risk-adjusted model.  The first four data items are objective; the last two are more subjective, especially the diagnosis grouping one.

The importance of this distinction comes down to human nature: if a hospital is failing on its SHMI then it has two options:
(a) to improve its policies and processes to improve outcomes, or
(b) to manipulate the diagnosis group data to reduce the SHMI score.

And the latter is much easier to do.  It is called up-coding, and basically it involves camping at the pessimistic end of the diagnostic spectrum.  And we are very comfortable with doing that in health care.  We favour the Black Hat.

And when our patients do better than our pessimistically-biased prediction, then our SHMI score improves and we look better on the NHS funnel plot.
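To see the arithmetic behind that, here is a toy illustration – emphatically not the official SHMI methodology, and the risk numbers are invented – of how inflating the predicted risk deflates the index, because the index is essentially observed deaths divided by expected deaths:

```python
# Toy illustration of how up-coding can flatter a mortality index.
# The risk values are invented; the real SHMI model is far more complex.

def mortality_index(observed_deaths, expected_risks):
    """Ratio of observed deaths to the model's expected deaths.
    A value of 1.0 means 'as expected'; below 1.0 looks 'better'."""
    return observed_deaths / sum(expected_risks)

observed = 30                      # actual deaths in a cohort of 1000 spells
honest_coding = [0.03] * 1000      # model expects 3% risk per spell -> 30 expected
up_coded      = [0.04] * 1000      # same patients, gloomier diagnosis groups -> 40 expected

print(mortality_index(observed, honest_coding))  # 1.00 -> 'as expected'
print(mortality_index(observed, up_coded))       # 0.75 -> looks 'better than expected'
```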

We do not have to do anything at all about actually improving the outcomes of the service we provide, which is handy because we cannot do that. We do not measure it!


And what might be notably absent from the data fed in to the SHMI risk-model?  Data that is objective and easy to measure.  Data such as length of stay (LOS) for example?

Is there a statistical reason that LOS is omitted? Not really. Any relevant metric is a contender for pumping into a risk-adjustment model.  And we all know that the sicker we are, the longer we stay in hospital, and the less likely we are to come out unharmed (or at all).  And avoidable errors create delays and complications that imply more risk, more work and longer length of stay. Irrespective of the illness we arrived with.

So why has LOS been omitted from SHMI?

The reason may be more political than statistical.

We know that the risk of death increases with infirmity and age.

We know that if we put frail elderly patients into a hospital bed for a few days then they will decondition and become more frail, require more time in hospital, are more likely to need a transfer of care to somewhere other than home, are more susceptible to harm, and more likely to die.

So why is LOS not in the risk-of-death SHMI model?

And it is not in the King’s Fund QMR report either.

Nor is the amount of cash being pumped in to keep the HMS NHS afloat each month.

All notably absent!