Managing Complex Change

There is no doubt about it: Change is Difficult. None of us like it. We all prefer the comfort and security of sameness. We are wired that way. Yet, change is inevitable. So, what can we do?

One option is to ignore it. Another is to resist it. And another is to make the best of it. It is even possible to embrace and celebrate it!

And what if we are responsible for managing change that will have an effect on others? That is a whole order of magnitude more difficult. There is an oft-quoted statistic that 70% of change initiatives fail – which suggests that it is more difficult than the originators anticipated. Yet, if we look around we can see examples of successful change everywhere – so it must be possible to manage it. How is it done? What are the traps to avoid? What do we need? What don’t we need? Where do we start? Who can we ask for guidance?

If we search the Web or use an AI assistant like ChatGPT, we will discover a multitude of change models such as Kurt Lewin’s Unfreeze-Change-Refreeze model or John Kotter’s Eight Steps. And if we compare and contrast these recipes we will find common themes such as the importance of leadership, vision, and a clear plan. That all makes sense.

What is more difficult to find are root cause analyses of failed changes that we can learn from. No one likes to talk about their failures but we need to compare the successes and failures to find the nuggets of wisdom that we can learn from and use to reduce the risk of failure for our own change initiatives. Learn to fail or fail to learn.

And how do we know if we are on track? What are the early warning signs of an impending failure that we could use to get us back on track or give us the confidence to abandon the attempt before too much time, money, blood, sweat and tears are wasted?


These are questions that have been buzzing around for years and recently I chanced across something that caught my eye. It was a diagram that I had not come across before.

Two things immediately struck me. The first was the explicit inclusion of “Skills” in the recipe for success. That made sense to me. The second was the symptoms of what happens if an ingredient of the complex change recipe is missing. Those made sense to me too because I have experienced them all.

The diagram I found was not attributed so I did a bit of searching – using the five ingredients as a starter. What I discovered was fascinating – a sort of Chinese Whispers story with different names attached to emergent variants of the diagram. I persevered and eventually found the original source – Dr Mary Lippitt who created and copyrighted the diagram in 1987.


The next thing I did was float the Lippitt diagram past other people who are actively applying the science of improvement in the health care sector – and who are faced with the challenge of having to manage complex change. The Lippitt diagram resonated strongly with them too – which I saw as a good sign.

I then found Dr Mary Lippitt’s email address and emailed her, out of the blue. And she replied almost immediately, thanked me, and we arranged to have a Zoom chat. It was fascinating. What I learned was that her passion for complex change blossomed when she inherited her father Gordon’s consulting business. He, like his older brother Ronald, worked in the organisational change domain and he wrote a book entitled “Organization Renewal” whose second edition was published in 1982. And I discovered that Ronald Lippitt was a colleague of Kurt Lewin – the Father of Social Psychology. So, the pedigree of the diagram I came across by chance is impeccable!


Changing even a small part of a health care system is a tough sociotechnical challenge and I have learned the hard way that a combination of social and technical skills is required. Many of these skills appear to be missing in health care organisations and that skills gap leads to the commonest source of resistance to change that I see: Anxiety.

It also goes some way to explain why we made significant progress in delivering health care service improvements when we focussed on giving the front line staff

a) the necessary technical skills to diagnose the causes of their service issues, and

b) the skills to redesign their processes to release the improvements they wanted to see.

We now have good evidence that we also, unwittingly, developed the complementary social skills to help spread the word of what is possible and how to achieve it organically across teams, departments and organisations.

So, with her generous permission, we will be using Dr Mary Lippitt’s diagram to tell the story of how to manage complex change, and we will share what we learn as we go.

Resilience

The rise in the use of the term “resilience” seems to mirror the sense of an accelerating pace of change. So, what does it mean? And is the meaning evolving over time?

One sense of the meaning implies a physical ability to handle stresses and shocks without breaking or failing. Flexible, robust and strong are synonyms; and opposites are rigid, fragile, and weak.

So, digging a bit deeper, we know that strong implies an ability to withstand extreme stress while resilient implies the ability to withstand variable stress. And the opposite of resilient is brittle, because something can be both strong and brittle.

This is called passive resilience because it is an inherent property and cannot easily be changed. A ball is designed to be resilient – it will bounce back – and this is inherent in the material and the structure. The implication is that to improve passive resilience we would need to remove the component and replace it with something better suited to the range of expected variation.

The concept of passive resilience applies to processes as well, and a common manifestation of a brittle process is one that has been designed using averages.

Processes imply flows. The flow into a process is called demand, while the flow out of the process is called activity. What goes in must come out, so if the demand exceeds the activity then a backlog will be growing inside the process. This growing queue creates a number of undesirable effects – first it takes up space, and second it increases the time for demand to be converted into activity. This conversion time is called the lead-time.
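The arithmetic can be sketched in a few lines of code (the demand and activity numbers are purely illustrative, and the lead-time estimate uses Little’s Law – lead-time ≈ backlog ÷ activity – which follows from the definitions above):

```python
# Illustrative sketch: demand of 12 tasks/day flowing into a process that
# converts only 10 tasks/day into activity. The backlog grows by the daily
# shortfall, and the lead-time grows with it.
demand = 12.0    # tasks arriving per day
activity = 10.0  # tasks completed per day

backlog = 0.0
for day in range(1, 6):
    backlog += demand - activity     # queue grows by the daily shortfall
    lead_time = backlog / activity   # days for new demand to become activity
    print(f"day {day}: backlog = {backlog:.0f}, lead-time = {lead_time:.1f} days")
```

After just five days the backlog has reached a full day’s worth of work, and it will keep growing for as long as demand exceeds activity.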

So, to avoid a growing queue and a growing wait, there must be sufficient flow-capacity at each and every step along the process. The obvious solution is to set the average flow-capacity equal to the average demand; and we do this because we know that more flow-capacity implies more cost – and to stay in business we must keep a lid on costs!

This sounds obvious and easy but does it actually work in practice?

The surprising answer is “No”. It doesn’t.

What happens in practice is that the measured average activity is always less than the funded flow-capacity, and so less than the demand. The backlogs will continue to grow; the lead-time will continue to grow; the waits will continue to grow; the internal congestion will continue to grow – until we run out of space. At that point everything can grind to a catastrophic halt. That is what we mean by a brittle process.

This fundamental and unexpected result can easily and quickly be demonstrated in a concrete way on a table top using ordinary dice and tokens. A credible game along these lines was described almost 40 years ago in The Goal by Eli Goldratt, originator of the school of improvement called Theory of Constraints. The emotional impact of gaining this insight can be profound and positive because it opens the door to a way forward which avoids the Flaw of Averages trap. There are countless success stories of using this understanding.
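For those who prefer code to dice, here is a simplified sketch of that table-top game (my own rendering of the idea, not Goldratt’s exact rules): demand and flow-capacity are both dice rolls that average 3.5 per round, and yet a backlog still accumulates.

```python
import random
random.seed(42)  # any seed shows the same qualitative effect

queue = total_demand = total_activity = 0
rounds = 10_000
for _ in range(rounds):
    demand = random.randint(1, 6)             # what arrives this round
    capacity = random.randint(1, 6)           # what we *could* process
    activity = min(queue + demand, capacity)  # what we actually process
    queue += demand - activity                # backlog carried forward
    total_demand += demand
    total_activity += activity

print(f"average demand   = {total_demand / rounds:.2f}")
print(f"average activity = {total_activity / rounds:.2f}")
print(f"final backlog    = {queue}")
```

Even though average capacity equals average demand, the measured activity lags the demand whenever a backlog persists – the Flaw of Averages in miniature.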


So, when we need to cope with variation and we choose a passive resilience approach then we have to plan to the extremes of the range of variation. Sometimes that is not possible and we are forced to accept the likelihood of failure. Or we can consider a different approach.

Reactive resilience is an approach that living systems have evolved to use extensively, and it is illustrated by the simple reflex loop shown in the diagram.

A reactive system has three components linked together – a sensor (i.e. temperature-sensitive nerve endings in the skin), a processor (i.e. the grey matter of the spinal cord) and an effector (i.e. the muscles, ligaments and bones). So, when a pre-defined limit of variation is reached (e.g. the flame) then the protective reaction withdraws the finger before it becomes damaged. The advantage of this type of reactive resilience is that it is relatively simple and relatively fast. The disadvantage is that it does not address the cause of the problem.

This approach is called reactive, automatic and agnostic.

The automatic self-regulating systems that we see in biology, and that we have emulated in our machines, are evidence of the effectiveness of a combination of passive and reactive resilience. It is good enough for most scenarios – so long as the context remains stable. The problem comes when the context is evolving, and in that case the automatic/reflex/blind/agnostic approach will fail – at some point.


Survival in an evolving context requires more – it requires proactive resilience.

What that means is that the processor component of the feedback loop gains an extra feature – a memory. The advantage this brings is that past experience can be recalled, reflected upon and used to guide future expectation and future behaviour. We can listen and learn and become proactive. We can look ahead and we can keep up with our evolving context. One might call this reactive adaptation, or co-evolution, and it is a widely observed phenomenon in nature.

The usual manifestation of this is called competition.

Those who can reactively adapt faster and more effectively than others have a better chance of not failing – i.e. a better chance of survival. The traditional term for this is survival of the fittest but the trendier term for proactive resilience is agile.

And that is what successful organisations are learning to do. They are adding a layer of proactive resilience on top of their reactive resilience and their passive resilience.

All three layers of resilience are required to survive in an evolving context.

One manifestation of this is the concept of design which is where we create things with the required resilience before they are needed. This is illustrated by the design squiggle which has time running left to right and shows the design evolving adaptively until there is sufficient clarity to implement and possibly automate.

And one interesting thing about design is that it can be done without an understanding of how something works – just knowing what works is enough. The elegant and durable medieval cathedrals were designed and built by Master builders who had no formal education. They learned the heuristics as apprentices and through experience.


And if we project the word game forwards we might anticipate a form of resilience called proactive adaptation. However, we sense that this is a novel thing because there is no word “proadaptive” in the dictionary.

PS. We might also use the term Anti-Fragile, which is the name of a thought-provoking book that explores this very topic.

Business as Usual

At last the light appears to be visible at the end of the tunnel for Covid-19 in the UK. And we have our fingers crossed that we can contemplate getting back to business as usual. Whatever that is.

For the NHS, attention will no doubt return to patient access targets. The data has continued to be collected, processed and published for the last 16 months, so we are able to see the impact that the Covid-19 epidemic has had on the behaviour of the hospital-based emergency system: the Emergency Departments.

The run chart below shows the monthly reported ED metrics for England from Nov 2010.

The solid grey line is the infamous 4 Hr target – the proportion of ED attendances that are seen and admitted or discharged within 4 hours. It reveals a progressive decline over the last decade which, counterintuitively, improved during the first and second waves. And if we look for plausible causes we can see that the ED attendances dropped precipitously (blue dotted line) in both the first and second waves. We dutifully “Stayed at Home to Protect the NHS and Save Lives”.

The drop in ED attendances was accompanied by a drop in ED admissions (dotted red line) but a higher proportion of those who did attend were admitted (solid orange line) – which suggests they were sicker patients. So, all that makes sense.

And as restrictions are relaxed we can see that attendances, admissions, 4 Hr yield and proportion admitted are returning to the projected levels. Business as Usual.


Up to March 2021 the chart says that 70-75% of patients who attended ED did not need to be admitted to hospital. So this raises a raft of questions:

Q1. What is it that makes nearly 35,000 people per day go to ED and then go home?

Q2. How can the ED footfall drop by 50% almost overnight?

Q3. Where did those patients go for the services they were previously seeking in ED?

Q4. What were their outcomes?

Q5. What are the reasons they were choosing to go to ED rather than their GP before March 2020?

Q6. How much of the ED demand is spillover from Primary Care?

Q7. How much of the ED workload is diagnostic testing to exclude serious illness?

Q8. What lessons can be learned to mitigate the growing pressure on EDs?

Q9. Can urgent care services for this 70% be provided in a more distributed way?


And if we can do drive-thru urgent testing during Covid-19 and we can do drive-thru urgent treatment during Covid-19 then perhaps we can do more drive-thru urgent care after Covid-19?

The Crystal Ball

A crystal ball or orbuculum is a crystal or glass ball and is associated with the performance of clairvoyance and the ability to predict future events.

Before the modern era, those who claimed to be able to see the future were treated with suspicion and branded as alchemists, magicians and heretics.

Nowadays we take it for granted that the weather can be predicted with surprising accuracy for a few days at least – certainly long enough to influence our decisions.

And weather forecasting is a notoriously tricky challenge because small causes can have big effects – and big causes can have no effect at all. The reason for this is that weather forecasting is a nonlinear problem, and to solve it we have had to resort to sophisticated computer simulations run on powerful computers.

In contrast, predicting the course of the COVID-19 epidemic is a walk in the park. It too is a nonlinear problem but a much less complicated one that can be solved using a simple computer simulation on a basic laptop.

The way it is done is to use the equations that describe how epidemics work (which have been known for nearly 100 years) and then use the emerging data to calibrate the model, so over time it gets more accurate.
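For the curious, those hundred-year-old equations (the SEIR family) are simple enough to step through in a few lines. This is a minimal, uncalibrated sketch – the parameter values are illustrative assumptions, not the fitted values used in the post:

```python
def seir(days, N=66_000_000, beta=0.25, incubation=5.0, infectious=7.0, I0=100):
    """Step through the SEIR equations with one-day Euler steps."""
    S, E, I, R = N - I0, 0.0, I0, 0.0
    history = []
    for _ in range(days):
        new_exposed = beta * S * I / N   # susceptibles meeting infectious people
        new_infectious = E / incubation  # incubation period ends
        new_recovered = I / infectious   # infectious period ends
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered
        history.append((S, E, I, R))
    return history

history = seir(730)
peak_day = max(range(len(history)), key=lambda d: history[d][2])
print(f"infectious numbers peak on day {peak_day + 1}")
```

Calibration then amounts to adjusting beta (and the starting conditions) until the model’s output tracks the reported data – which is why the predictions get more accurate as more data emerges.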

Here’s what it looks like for COVID-19 associated mortality in the UK. The red dotted line is the reported data and the oscillation is caused by the reporting process with weekend delays. The solid red line is the same data with the 7-day oscillation filtered out to reveal the true pattern. The blue line is the prediction made by the model.
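The filter that removes the weekly oscillation is just a 7-day moving average. Here is a sketch with made-up numbers that mimic the weekend reporting dip:

```python
def smooth7(series):
    """7-day moving average: one value per full week-long window."""
    return [sum(series[i - 6:i + 1]) / 7 for i in range(6, len(series))]

# Four weeks of a flat 100/day trend, under-reported at weekends and
# caught up mid-week (illustrative numbers):
reported = [100, 130, 110, 115, 105, 70, 70] * 4
print(smooth7(reported))  # every value is 100.0 - the oscillation is gone
```

Because every 7-day window spans exactly one full reporting cycle, the weekend artefact cancels out and the underlying trend is revealed.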

And we can see how accurate the prediction is, especially since the peak of the third wave.

What this chart does not show is the restrictions being gradually lifted and completely removed by April 2021.

The COVID Crystal Ball says it will be OK so long as nothing unexpected happens – like a new variant that evades our immune systems, or even a completely new bug.

It has been a tough year.  We have learned a lot through hardship and heroism and that a random act of nature can swat us like an annoying fly.

So, perhaps our sense of hope should be tempered with some humility because the chart above did not need to look like that. We have the knowledge, tools and skills to do better. We have lots of Crystal Balls.

End In Sight

We are a month into Lock-down III.

Is there any light at the end of the tunnel?

Here is the reported UK data.  As feared the Third Wave was worse than the First and the Second, and the cumulative mortality has exceeded 100,000 souls.  But the precipitous fall in reported positive tests is encouraging and it looks like the mortality curve is also turning the corner.

The worst is over.

So, was this turnaround caused by Lock-down III?

It is not possible to say for sure from this data.  We would need a No Lock-down randomised control group to keep the statistical purists happy and we could not do that.

Is there another way?

Yes, there is. It is called a digital twin. The basic idea is that we design, build, verify and calibrate a digital simulation model of the system that we are interested in, and use that to explore cause-and-effect hypotheses. Here is an example: The solid orange line in the chart above (daily reported positive tests) is closely related to the dotted grey line in the chart below (predicted daily prevalence of infectious people). Note the almost identical temporal pattern and be aware that in the first wave we only reported positive tests of patients admitted to hospital.

What does our digital twin say was the cause?

It says that the primary cause of the fall in daily prevalence of infectious people is because the number of susceptible people (the solid blue line) has fallen to a low enough level for the epidemic to fizzle out on its own.  Without any more help from us.

And it says that Lock-down III has contributed a bit by flattening and lowering the peak of infections, admissions and deaths.

And it says that the vaccination programme has not contributed to the measured fall in prevalence.

What are the implications if our digital twin is speaking the truth?

Firstly, that the epidemic is already self-terminating.
Secondly, that the restrictions will not be needed after the end of February.
Thirdly, that a mass vaccination programme is a belt-and-braces insurance policy.

I would say that is all good news. The light at the end of the tunnel would appear to be in sight.

No Queue Vaccination

Vaccinating millions of vulnerable people in the middle of winter requires a safe, efficient and effective process.

It is not safe to have queues of people waiting outside in the freezing cold.  It is not safe to have queues of people packed into an indoor waiting area.

It is not safe to have queues full stop.

And let us face it, the NHS is not brilliant at avoiding queues.

My experience is that the commonest cause of queues in health care processes is something called the Flaw of Averages.

This is where patients are booked to arrive at an interval equal to the average rate they can be done.

For example, suppose I can complete 15 vaccinations in an hour … that is one every 4 minutes on average … so common sense tells me that the optimum way to book patients for their jab is one every four minutes. Yes?

Actually, No. That is the perfect design for generating a queue – and the reason is because, in reality, patients don’t arrive exactly on time, they don’t arrive at exactly one every four minutes, there will be variation in exactly how long it takes me to do each jab, and unexpected things will happen. In short, there are lots of sources of variation. Some random and some not. And just that variation is enough to generate a predictably unpredictable queue. A chaotic queue.

The Laws of Physics decree it.
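To see those Laws of Physics at work, here is a sketch (with hypothetical numbers): patients booked exactly one every 4 minutes for a jab that takes 4 minutes on average, first with no variation and then with a modest ±2 minutes of variation in jab time.

```python
import random
random.seed(1)

def clinic(jab_spread, patients=200, interval=4.0):
    """Average patient wait when jab time varies uniformly around 4 minutes."""
    next_free = total_wait = 0.0
    for i in range(patients):
        arrival = i * interval           # booked arrival time
        start = max(arrival, next_free)  # queue if the jabber is still busy
        total_wait += start - arrival
        jab = random.uniform(4.0 - jab_spread, 4.0 + jab_spread)
        next_free = start + jab
    return total_wait / patients

print(f"no variation    : {clinic(0.0):.1f} min average wait")
print(f"±2 min variation: {clinic(2.0):.1f} min average wait")
```

With zero variation the design is perfect and no one waits; add realistic variation and a queue appears even though the averages are unchanged – and the longer the clinic runs, the worse it gets.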


So, to illustrate the principles of creating a No Queue design here are some videos of a simulated mass vaccination process.

The process is quite simple – there are three steps that every patient must complete in sequence:

1) Pre-Jab Safety Check – Covid Symptoms + Identity + Clinical Check.
2) The Jab.
3) Post-Jab Safety Check (15 minutes of observation … just-in-case).

And the simplest layout of a sequential process is a linear one with the three steps in sequence.

So, let’s see what happens.

Notice where the queue develops … this tells us that we have a flow design problem. A queue is a signpost that points to the cause.

The first step is to create a “balanced load, resilient flow” design.

Hurrah! The upstream queue has disappeared and we finish earlier.  The time from starting to finishing is called the makespan and the shorter this is, the more efficient the design.

OK. Let’s scale up and have multiple, parallel, balanced-load lanes running with an upstream FIFO (first-in-first-out) buffer and a round-robin stream allocation policy (the sorting hat in the video).  Oh, and can we see some process performance metrics too please.
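For the curious, the round-robin “sorting hat” policy itself is simple enough to sketch (the lane names and patient labels are illustrative):

```python
from collections import deque
from itertools import cycle

lanes = {"Lane A": [], "Lane B": [], "Lane C": []}
buffer = deque(f"patient-{i}" for i in range(1, 10))  # upstream FIFO buffer
hat = cycle(lanes)  # round-robin over the lane names

while buffer:
    lanes[next(hat)].append(buffer.popleft())  # first in, first allocated

for name, allocated in lanes.items():
    print(name, allocated)
```

Each lane receives an equal share of the arrivals, in strict order of arrival – which is what keeps the parallel lanes balanced.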

Good, still no queues.  We are making progress.  Only problem is our average utilisation is less than 90% and The Accountants won’t be happy with that.  Also, the Staff are grumbling that they don’t get rest breaks.

Right, let’s add a Flow Coordinator to help move things along quicker and hit that optimum 100% utilisation target that The Accountants desire.

Oh dear!  Adding a Flow Coordinator seems to have made queues worse rather than better; and we’ve increased costs so The Accountants will be even less happy.  And the Staff are still grumbling because they still don’t get any regular rest breaks.  The Flow Coordinator is also grumbling because they are running around like a blue a***d fly.  Everyone is complaining now.  That was not the intended effect.  I wonder what went wrong?

But, to restore peace let’s take out the Flow Coordinator and give the Staff regular rest breaks.

H’mm.  We still seem to have queues.  Maybe we just have to live with the fact that patients have to queue.  So long as The Accountants are happy and the Staff  get their breaks then that’s as good as we can expect. Yes?

But … what if … we flex the Flow Coordinator to fill staggered Staff rest breaks and keep the flow moving calmly and smoothly all day without queues?

At last! Everyone is happy. Patients don’t wait. Staff are comfortably busy and also get regular rest breaks. And we actually have the most productive (value for money) design.

This is health care systems engineering (HCSE) in action.

PS. The Flaw of Averages error is a consequence of two widely held and invalid assumptions:

  1. That time is money. It isn’t. Time costs money but they are not interchangeable.
  2. That utilisation and efficiency are interchangeable.  They aren’t.  It is actually often possible to increase efficiency and reduce utilisation at the same time!
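The second point can be illustrated with the textbook single-server queue formula (an M/M/1 sketch with a hypothetical 4-minute average service time, not a model of any particular clinic): as utilisation creeps towards 100%, the waiting time grows without limit.

```python
def average_wait(utilisation, service_min=4.0):
    # Steady-state M/M/1 queueing wait: rho / (1 - rho) * service time
    return utilisation / (1.0 - utilisation) * service_min

for u in (0.80, 0.90, 0.99):
    print(f"utilisation {u:.0%}: average wait = {average_wait(u):.0f} min")
```

So a design that runs at 90% utilisation with negligible queues can deliver better value-for-money than one that chases the “optimum” 100% target.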

The Final Push

It is New Year 2021 and the spectre of COVID-4-Christmas came true.  We are now in the depths of winter and in the jaws of the Third Wave.  What happened?  Let us look back at the UK data for positive tests and deaths to see how this tragic story unfolded.

There was a Second Wave that started to build when Lock-down I was relaxed in July 2020.  And it looks like Lock-down II in November 2020 did indeed have a beneficial effect – but not as much as was needed.  So, when it too was relaxed at the start of December 2020 then … infections took off again … even faster than before!

That is the nature of epidemics and of exponential growth.  It seems we have not learned those painful lessons well enough.

And we all so desperately wanted a more normal Xmas that we conspired to let the COVID cat out of the bag again.  The steep rise in positive tests is real and we know that because a rise in deaths is following about three weeks behind.  And that means hospitals have filled up again.

Are we back to square one?

The emerging news of an even more contagious variant has only compounded our misery, but it is hard to separate the effect of that from all the other factors that are fuelling the Third Wave.

Is there no end to this recurring nightmare?

The short answer is – “It will end“.  It cannot continue forever.  All epidemics eventually burn themselves out when there are too few susceptible people left to infect and we enter the “endemic” phase.  When that happens the R number will gravitate to 1.0 again which some might find confusing.  The confusion is caused by mixing up Ro and Rt.
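The Ro / Rt distinction boils down to one line of arithmetic: the effective reproduction number (Rt) is the basic reproduction number (Ro) scaled by the fraction of the population still susceptible. A sketch, with an illustrative Ro:

```python
Ro = 3.0  # basic reproduction number in a fully susceptible population (illustrative)

for susceptible_fraction in (1.00, 0.75, 0.50, 1 / Ro):
    Rt = Ro * susceptible_fraction
    print(f"susceptible {susceptible_fraction:.0%}: Rt = {Rt:.2f}")
```

When the susceptible fraction falls to 1/Ro, Rt reaches 1.0 and the epidemic stops growing – the endemic phase described above.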

How close are we to that end game?

Well, we are certainly a lot closer than we were in July 2020 because millions more people have been exposed, infected and recovered – and many of those were completely asymptomatic.  It is estimated that about a third of those who catch it do not have any symptoms – so they will not step forward to be tested and will not appear in the statistics.  But they can unwittingly and silently spread the virus while they are infectious.  And many who are symptomatic do not come forward to get tested, so they won’t appear in the statistics either.

And there are now two new players in the COVID-19 Game … the Pfizer vaccine and the Oxford vaccine.  They are the White Knights and they are on our side.

Hurrah!

Now we must manufacture, distribute and administer these sickness-and-death-preventing vaccines to 65 million people as soon as possible.  That alone is a massive logistical challenge when we are already fighting battles on many fronts.  It seems impossible.

Or is it?

It feels obvious but is it the most effective strategy?  Should we divert our limited, hard-pressed, exhausted health care staff to jabbing the worried-well?  Should we eke out our limited supplies of precious vaccine to give more people a first dose by delaying the second dose for others?

Will the White Knights save us?

The short answer is – “Not on their own“.

The maths is simple enough.

Over the last three weeks we have, through Herculean effort, managed to administer 1 million first doses of the Pfizer vaccine.  That sounds like a big number but when put into the context of a UK population of 65 million it represents less than 2% and offers only delayed and partial protection.  The trial evidence confirmed that two doses of the Pfizer vaccine given at a three week interval would confer about 90% protection.  That is the basis of the licence and the patient consent.

So, even if we delay second doses and double the rate of first dose delivery we can only hope to partially protect about 2-3% of the population by the end of January 2021.  That is orders of magnitude too slow.

And the vaccines are not a treatment.  The vaccine cannot mitigate the fact that a large number of people are already infected and will have to run the course of their illness.  Most will recover, but many will not.

So, how do we get our heads around all these interacting influences?  How do we predict how the Coronavirus Game is likely to play out over the next few weeks? How do we decide what to do for the best?

I believe it is already clear that trying to answer these questions using the 1.3 kg of wetware between our ears is fraught with problems.

We need to seek the assistance of some hardware, software and some knowledge of how to configure them to illuminate the terrain ahead.


Here is what the updated SEIR-V model suggests will happen if we continue with the current restrictions and the current vaccination rate.  I’ve updated it with the latest data and added a Vaccination component.

The lines to focus on are the dotted ones: grey = number of infected cases, yellow = number ill enough to justify hospital treatment, red = critically ill and black = not survived.

The vertical black line is Now and the lines to the right of it are the most plausible prediction.

It says that a Third Wave is upon us and that it could be worse than the First Wave.  That is the bad news. The good news is that the reason that the infection rate drops is because the epidemic will finally burn itself out – irrespective of the vaccinations.

So, it would appear that the White Knights cannot rescue us on their own … but we can all help to accelerate the final phase and limit the damage – if we all step up and pull together, at the same time and in the same direction.

We need a three-pronged retaliation:

  1. Lock-down:  “Stay at home. Protect the NHS. Save Lives”.  It worked in the First Wave and it will work in the Third Wave.
  2. Care in the Community:  For those who will become unwell and who will need the support of family, friends, neighbours and the NHS.
  3. Volunteer to Vaccinate:  To protect everyone as soon as is practically feasible.

Here is what it could look like.  All over by Easter.

There is light at the end of the tunnel.  The end is in sight.  We just have to pull together in the final phase of the Game.


PS. For those interested in how an Excel-based SEIR-V model is designed, built and used here’s a short (7 minute) video of the highlights:

This is health care systems engineering (HCSE) in action.

And I believe that the UK will need a new generation of HCSEs to assist in the re-designing and re-building of our shattered care services.  So, if you are interested then click here to explore further.

Second Wave

The summer holidays are over and schools are open again – sort of.

Restaurants, pubs and nightclubs are open again – sort of.

Gyms and leisure facilities are open again – sort of.

And after two months of gradual easing of social restrictions and massive expansion of test-and-trace we now have the spectre of a Second Wave looming.  It has happened in Australia, Italy, Spain and France so it can happen here.

As usual, the UK media are hyping up the general hysteria and we now also have rioting disbelievers claiming it is all a conspiracy and that re-applying local restrictions is an infringement of their liberty.

So, what is all the fuss about?

We need to side-step the gossip and get some hard data from a reliable source (i.e. not a newspaper). Here is what worldometer is sharing …

OMG!  It looks like The Second Wave is here already!  There are already as many cases now as in March and we still have the mantra “Stay At Home – Protect the NHS – Save Lives” ringing in our ears.  But something is not quite right.  No one is shouting that hospitals are bursting at the seams.  No one is reporting that the mortuaries are filling up.  Something is different.  What is going on?  We need more data.

That is odd!  We can clearly see that cases and deaths went hand-in-hand in the First Wave, with about 1 in 5 cases not making it.  But this time the deaths are not rising with the cases.

Ah ha!  Maybe that is because the virus has mutated into something much more benign and because we have got much better at diagnosing and treating this illness – the ventilators and steroids saved the day.  Hurrah!  It’s all a big fuss about nothing … we should still be able to have friends round for parties and go on pub crawls again!

But … what if there was a different explanation for the patterns on the charts above?

It is said that “data without context is meaningless” … and I’d go further than that … data without context is dangerous, because if it leads to invalid conclusions and inappropriate decisions then we can get well-intended actions that cause unintended harm.  Death.

So, we need to check the context of the data.

In the First Wave the availability of the antigen (swab) test was limited so it was only available to hospitals and the “daily new cases” were in patients admitted to hospital – the ones with severe enough symptoms to get through the NHS 111 telephone triage.  Most people with symptoms, even really bad ones, stayed at home to protect the NHS.  They didn’t appear in the statistics.

But did the collective sacrifice of our social lives save actual lives?

The original estimates of the plausible death toll in the UK ranged up to 500,000 from coronavirus alone (and no one knows how many more from the collateral effects of an overwhelmed NHS).  The COVID-19 body count to date is just under 50,000, so putting a positive spin on that tragic statistic, 90% of the potential deaths were prevented.  The lock-down worked.  The NHS did not collapse.  The Nightingales stood ready and idle – an expensive insurance policy.  Lives were actually saved.

Why isn’t that being talked about?

And the context changed in another important way.  The antigen testing capacity was scaled up despite being mired in confusing jargon.  Who thought up the idea of calling them “pillars”?

But, if we dig about on the GOV.UK website long enough there is a definition:

So, Pillar 1 = NHS testing capacity and Pillar 2 = commercial testing capacity; and we don’t actually know how much was in-hospital testing and how much was in-community testing because the definitions seem to reflect budgets rather than patients.  Ever has it been thus in the NHS!

However, we can see from the chart below that testing activity (blue bars) has increased many-fold but the two testing streams (in hospital and outside hospital) are combined in one chart.  Well, it is one big pot of tax-payers’ cash after all and it is the same test.

To unravel this a bit we have to dig into the website, download the raw data, and plot it ourselves.  Looking at Pillar 2 (commercial) we can see they had a late start, caught the tail of the First Wave, and then ramped up activity as the population testing caught up with the available capacity (because hospital activity has been falling since late April).

Now we can see that the increased number of positive tests could be explained by the fact that we are now testing anyone with possible COVID-19 symptoms who steps up – mainly in the community.  And we were unable to do this before because the testing capacity did not exist.

The important message is that in the First Wave we were not measuring what was happening in the community – it was happening though, it must have been.  We measured the knock-on effects: hospital admissions with positive tests and deaths after positive tests.

So, to present the daily positive tests as one time-series chart that conflates both ‘pillars’ is both meaningless and dangerous and it is no surprise that people are confused.


This raises a question: Can we estimate how many people there would have been in the community in the First Wave so that we can get a sense of what the rising positive test rate means now?

The way that epidemiologists do this is to build a generic simulation of the system dynamics of an epidemic (a SEIR multi-compartment model) and then use the measured data to calibrate this model so that it can then be used for specific prediction and planning.

Here is an example of the output of a calibrated multi-compartment system dynamics model of the UK COVID-19 epidemic for a nominal 1.3 million population.  The compartments that are included are Susceptible, Exposed, Infectious, and Recovered (i.e. not infectious) and this model also simulates the severity of the illness i.e. Severe (in hospital), Critical (in ITU) and Died.

The difference in size of the various compartments is so great that the graph below requires two scales – the solid line (Infectious) is plotted on the left-hand scale and the others are plotted on the right-hand scale which is 10 times smaller.  The green line is today and the reported data up to that point has been used to calibrate the model and to estimate the historical metrics that we did not measure – such as how many people in the community were infectious (and would have tested positive).

At the peak of the First Wave, for this population of 1.3 million, the model estimates there were about 800 patients in hospital (which there were) and 24,000 patients in the community who would have tested positive if we had been able to test them.  24,000/800 = 30 which means the peak of the grey line is 30 x higher than the peak of the orange line – hence the need for the two Y-axes with a 10-fold difference in scale.
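The mechanics of such a model are easier to see in code.  Below is a minimal discrete-time SEIR sketch in Python – not the calibrated Excel model described above, and without the Severe/Critical/Died severity compartments.  The parameters (R0 = 3, 5-day incubation, 7-day infectious period, 100 seed infections) are round illustrative assumptions, not calibrated values.

```python
# Minimal discrete-time SEIR sketch (daily Euler steps).
# Illustrative assumptions only - NOT the calibrated Excel model
# described in the text, and with no severity compartments.

def seir(population=1_300_000, days=200, r0=3.0,
         incubation_days=5.0, infectious_days=7.0, seed_cases=100):
    beta = r0 / infectious_days      # transmissions per infectious person per day
    sigma = 1.0 / incubation_days    # rate of moving E -> I
    gamma = 1.0 / infectious_days    # rate of moving I -> R
    s = float(population - seed_cases)
    e, i, r = 0.0, float(seed_cases), 0.0
    history = []
    for _ in range(days):
        new_e = beta * s * i / population   # newly exposed today
        new_i = sigma * e                   # newly infectious today
        new_r = gamma * i                   # newly recovered today
        s -= new_e
        e += new_e - new_i
        i += new_i - new_r
        r += new_r
        history.append((s, e, i, r))
    return history

history = seir()
peak_infectious = max(day[2] for day in history)
```

Even this toy version shows the key qualitative behaviour: the infectious compartment for a 1.3 million population peaks at hundreds of thousands – orders of magnitude more than the hospital numbers – which is why the charts need two Y-axes.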

Note the very rapid rise in the number of infectious people from the beginning of March when the first UK death was announced, before the global pandemic was declared and before the UK lock-down was enacted in law and implemented.  Coronavirus was already spreading very rapidly.

Note how this rapid rise in the number of infectious people came to an abrupt halt when the UK lock-down was put into place in the third week of March 2020.  Social distancing breaks the chain of transmission from one infectious person to many other susceptible ones.

Note how the peaks of hospital admissions, critical care admissions and deaths lag behind the rise in infectious people (because it takes time for the coronavirus to do its damage) and how each peak is smaller (because only about 1:30 get sick enough to need admission, and only about 1:5 of hospital admissions do not survive).

Note how the fall in the infectious group was more gradual than the rise – because the lock-down was partial, because not everyone could stay at home (essential services like the NHS had to continue), and because there was already a big pool of infectious people in the community.


So, by early July 2020 it was possible to start a gradual relaxation of the lock-down and from then we can see a gradual rise in infectious people again.  But now we were measuring them because of the growing capacity to perform antigen tests in the community.  The relatively low level and the relatively slow rise are much less dramatic than what was happening in March (because of the higher awareness and the continued social distancing and use of face coverings).  But it is all too easy to become impatient and complacent.

But by early September 2020 it was clear that the number of infectious people was growing faster in the community – and then we saw hospital admissions reach a minimum and start to rise again.  And then the number of deaths reached a minimum and started to rise again.  And this evidence proves that the current level of social distancing is not enough to keep a lid on this disease.  We are in the foothills of a Second Wave.


So what do we do next?

First, we must estimate the effect that the current social distancing policies are having and one way to do that would be to stop doing them and see what happens.  Clearly that is not an ethical experiment to perform given what we already know.  But, we can simulate that experiment using our calibrated SEIR model.  Here is what is predicted to happen if we went back to the pre-lockdown behaviours: There would be a very rapid spread of the virus followed by a Second Wave that would be many times bigger than the first!!  Then it would burn itself out and those who had survived could go back to some semblance of normality.  The human sacrifice would be considerable though.

So, although the current social distancing measures are causing problems, those problems pale into insignificance compared with what could happen if the measures were dropped.

The previous model shows what is predicted to happen if we continue as we are, with no further easing of restrictions and assuming people stick to them.  In short, we will have COVID-for-Christmas and it could be a very nasty business indeed as it would come at the same time as other winter-associated infectious diseases such as influenza and norovirus.

The next chart shows what could happen if we squeeze the social distancing brake a bit harder by focusing only on the behaviours that the track-and-trace-and-test system is highlighting as the key drivers of the growth of infections, admissions and deaths.

What we see is an arrest of the rise in the number of infectious people (as we saw before), a small and unsustained increase in hospital admissions, then a slow decline back to the levels that were achieved in early July – at which point it would be reasonable to have a more normal Christmas.

And another potential benefit of a bit more social distancing might be a much less problematic annual flu epidemic because that virus would also find it harder to spread – plus we have a flu vaccination which we can use to reduce that risk further.


It is not going to be easy.  We will have to sacrifice a bit of face-to-face social life for a bit longer.  We will have to measure, monitor, model and tweak the plan as we go.

And one thing we can do immediately is to share the available information in a more informative and less histrionic way than we are seeing at the moment.


Update: Sunday 1st November 2020

Yesterday the Government had to concede that the policy of regional restrictions had failed and bluffing it out and ignoring the scientific advice was, with the clarity of hindsight, an unwise strategy.

In the face of the hard evidence of rapidly rising COVID+ve hospital admissions and deaths, the decision to re-impose a national 4-week lock-down was announced.  This is the only realistic option to prevent overwhelming the NHS at a time of year that it struggles with seasonal influenza causing a peak of admissions and deaths.

Paradoxically, this year the effect of influenza may be less because social distancing will reduce the spread of that as well and also because there is a vaccination for influenza.  Many will have had their flu jab early … I certainly did.

So, what is the predicted effect of a 4 week lock down?  Well, the calibrated model (also used to generate the charts above) estimates that it could indeed suppress the Second Wave and mitigate a nasty COVID-4-Christmas scenario.  But even with it the hospital admissions and associated mortality will continue to increase until the effect kicks in.

Brace yourselves.

Coronavirus


The start of a new year, decade, century or millennium is always associated with a sense of renewal and hope.  Little did we know that in January 2020 a global threat had hatched and was growing in the city of Wuhan, Hubei Province, China.  A virus of the family coronaviridae had mutated and jumped from animal to man where it found a new host and a vehicle to spread itself.   Several weeks later the World became aware of the new threat and in the West … we ignored it.  Maybe we still remembered the SARS epidemic, which was heralded as a potential global catastrophe but was contained in the Far East and fizzled out.  So, maybe we assumed this SARS-like virus would do the same.

It didn’t.  This mutant was different.  It caused a milder illness and unwitting victims were infectious before they were symptomatic.  And most got better on their own, so they spread the mutant to many other people.  Combine that mutant behaviour with the winter (when infectious diseases spread more easily because we spend more time together indoors), Chinese New Year and global air travel … and we have the perfect recipe for cooking up a global pandemic of a new infectious disease.  But we didn’t know that at the time and we carried on as normal, blissfully unaware of the catastrophe that was unfolding.

By February 2020 it became apparent that the mutant had escaped containment in China and was wreaking havoc in other countries – with Italy high on the casualty list.  We watched in horror at the scenes on television of Italian hospitals overwhelmed with severely ill people fighting for breath as the virus attacked their lungs.  The death toll rose sharply but we still went on our ski holidays and assumed that the English Channel and our Quarantine Policy would protect us.

They didn’t.  This mutant was different.  We now know that it had already silently gained access into the UK and was growing and spreading.  The first COVID-19 death reported in the UK was in early March 2020 and only then did we sit up and start to take notice.  This was getting too close to home.

But it was too late.  The mathematics of how epidemics spread was worked out 100 years ago, not long after the 1918 pandemic of Spanish Flu that killed tens of millions of people before it burned itself out.  An epidemic is like cancer.  By the time it is obvious it is already far advanced because the growth is not linear – it is exponential.
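The deceptiveness of exponential growth is easy to demonstrate with hypothetical numbers.  The sketch below assumes a 3-day doubling time (an illustrative figure, not a measured one): for weeks the case counts look trivial, and then they explode.

```python
# Pure exponential growth with a hypothetical 3-day doubling time.
# The numbers are illustrative, not measured data.

def projected_cases(initial=1, doubling_days=3, horizon_days=30):
    """Case count at each doubling step, starting from `initial`."""
    return [initial * 2 ** (day // doubling_days)
            for day in range(0, horizon_days + 1, doubling_days)]

cases = projected_cases()
# -> [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
```

One case becomes a thousand in a month, and a thousand becomes a million in the next month – which is why, like cancer, an epidemic is already far advanced by the time it is obvious.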

As a systems engineer I am used to building simulation models to reveal the complex and counter-intuitive behaviour of nonlinear systems using the methods first developed by Jay W. Forrester in the 1950’s.  And when I looked up the equations that describe epidemics (on Wikipedia) I saw that I could build a system dynamics model of a COVID-19 epidemic using no more than an Excel spreadsheet.

So I did.  And I got a nasty surprise.  Using the data emerging from China on the nature of the spread of the mutant virus, the incidence of severe illness and the mortality rate … my simple Excel model predicted that, if COVID-19 was left to run its natural course in the UK, then it would burn itself out over several months but the human cost would be 500,000 deaths and the NHS would be completely overwhelmed with a “tsunami of sick”.  And I could be one of them!  The fact that there is no treatment and no vaccine for this novel threat excluded those options.  My basic Excel model confirmed that the only effective option to mitigate this imminent catastrophe was to limit the spread of the virus through social engineering i.e. an immediate and drastic lock-down.  Everyone who was not essential to maintaining core services should “Stay at home, Protect the NHS and Save lives”.  That would become the mantra.  And others were already saying this – epidemiologists whose careers are spent planning for this sort of eventuality.  But despite all this there still seemed to be little sense of urgency, perhaps because their super-sophisticated models predicted that the peak of the UK epidemic would be in mid-June so there was time to prepare.  My basic model predicted that the peak would be in mid-April, in about 4 weeks, and that it was already too late to prevent about 50,000 deaths.

It turns out I was right.  That is exactly what happened.  By mid-March 2020 London was already seeing an exponential rise in hospital admissions, intensive care admissions and deaths and suddenly the UK woke up and panicked.  By that time I had enlisted the help of a trusted colleague who is a public health doctor and who had studied epidemiology, and together we wrote up and published the emerging story as we saw it:

An Acute Hospital Demand Surge Planning Model for the COVID-19 Epidemic using Stock-and-Flow Simulation in Excel: Part 1. Journal of Improvement Science 2020: 68; 1-20.  The link to download the full paper is here.

I also shared the draft paper with another trusted friend and colleague who works for my local clinical commissioning group (CCG) and I asked “Has the CCG a sense of the speed and magnitude of what is about to happen and has it prepared for the tsunami of sick that primary care will need to see?”

What then ensued was an almost miraculous emergence of a coordinated and committed team of health care professionals and NHS managers with a single, crystal clear goal:  To design, build and deliver a high-flow, drive-through community-based facility to safely see-and-assess hundreds of patients per day with suspected COVID-19 who were too sick/worried to be managed on the phone, but not sick enough to go to A&E.  This was not a Nightingale Ward – that was a parallel, more public and much more expensive endeavour designed as a spillover for overwhelmed acute hospitals.  Our purpose was to help to prevent that and the time scale was short.  We had three weeks to do it because Easter weekend was the predicted peak of the COVID-19 surge if the national lock-down policy worked as hoped.  No one really had an accurate estimate of how effective the lock-down would be and how big the peak of the tsunami of sick would rise as it crashed into the NHS.  So, we planned for the worst and hoped for the best.  The Covid Referral Centre (CRC) was an insurance policy and we deliberately over-engineered it to use every scrap of space we had been offered in a small car park on the south side of the NEC site.

The CRC needed to open by Sunday 12th April 2020 and we were ready, but the actual opening was delayed by NHS bureaucracy and politics.  It did eventually open on 22nd April 2020, just four weeks after we started, and it worked exactly as designed.  The demand was, fortunately, less than our worst case scenario; partly because we had missed the peak by 10 days and we opened the gates to a falling tide; and partly because the social distancing policy had been more effective than hoped; and partly because it takes time for risk-averse doctors to develop trust and to change their ingrained patterns of working.  A drive-thru COVID-19 see-and-treat facility? That was innovative and untested!!

The CRC expected to see a falling demand as the first wave of COVID-19 washed over, and that is exactly what happened.  So, as soon as that prediction was confirmed, the CRC was progressively repurposed to provide other much needed services such as drive-thru blood tests, drive-thru urgent care, and even outpatient clinics in the indoor part of the facility.

The CRC closed its gates to suspected COVID-19 patients on 31st July 2020, as planned and as guided by the simple Excel computer model.

This is health care systems engineering in action.

And the simple Excel model has been continuously re-calibrated as fresh evidence has emerged.  The latest version predicts that a second peak of COVID-19 (that is potentially worse than the first) will happen in late summer or autumn if social distancing is relaxed too far (see below).

But we don’t know what “too far” looks like in practical terms.  Oh, and a second wave could kick off just when we expect the annual wave of seasonal influenza to arrive.  Or will it?  Maybe the effect of social distancing for COVID-19 in other countries will suppress the spread of seasonal flu as well?  We don’t know that either but the data on the incidence of flu from Australia certainly supports that hypothesis.

We may need a bit more health care systems engineering in the coming months. We shall see.

Oh, and if we are complacent enough to think a second wave could never happen in the UK … here is what is happening in Australia.

Co-Diagnosis, Co-Design and Co-Delivery

The thing that gives me the biggest buzz when it comes to improvement is to see a team share their story of what they have learned-by-doing; and what they have delivered that improves their quality of life and the quality of their patients’ experience.

And while the principles that underpin these transformations are generic, each story is unique because no two improvement challenges are exactly the same and no two teams are exactly the same.

The improvement process is not a standardised production line.  It is a much more organic and adaptive experience and that requires calm, competent, consistent, compassionate and courageous facilitation.

So when I see a team share their story of what they have done and learned then I know that behind the scenes there will have been someone providing that essential ingredient.

This week a perfect example of a story like this was shared.

It is about the whole team who run the Diabetic Complex Cases Clinic at Guy’s and St. Thomas’ NHS Trust in London.  Everyone involved in the patients’ care took part.  It tells the story of how they saw what might be possible and how they stepped up to the challenge of learning to apply the same principles in their world.  And it tells their story of what they diagnosed, what they designed and what they delivered.

The facilitation and support was provided by Ellen Pirie who works for the Health Innovation Network (HIN) in South London and who is a Level 2 Health Care Systems Engineer.

And the link to the GSTT Diabetic Complex Clinic Team story is here.

Restoring Pride-in-Work

In 1986, Dr Don Berwick from Boston attended a 4-day seminar run by Dr W. Edwards Deming in Washington.  Dr Berwick was a 40-year-old paediatrician who was also interested in health care management and improving quality and productivity.  Dr Deming was an 86-year-old engineer and statistician who, when he was in his 40’s, helped the US to improve the quality and productivity of the industrial processes supporting the US and Allies in WWII.

Don Berwick describes attending the seminar as an emotionally challenging, life-changing experience when he realised that his well-intended attempts to improve quality by inspection-and-correction were a counterproductive, abusive approach that led to fear, demotivation and erosion of pride-in-work.  His blinding new clarity of insight led directly to the founding of the Institute for Healthcare Improvement in the USA in the early 1990’s.

One of the tenets of Dr Deming’s theories is that the ingrained beliefs and behaviours that erode pride-in-work also lead to the very outcomes that management do not want – namely conflict between managers and workers and economic failure.

So, an explicit focus on improving pride-in-work as an early objective in any improvement exercise makes very good economic sense, and is a sign of wise leadership and competent management.


Last week a case study was published that illustrates exactly that principle in action.  The important message in the title is “restore the calm”.

One of the most demotivating aspects of health care that many complain about is the stress caused by a chaotic environment, chronic crisis and perpetual firefighting.  So, anything that can restore calm will, in principle, improve motivation – and that is good for staff, patients and organisations.

The case study describes, in detail, how calm was restored in a chronically chaotic chemotherapy day unit … on Weds, June 19th 2019 … in one day and at no cost!

To say that the chemotherapy nurses were surprised and delighted is an understatement.  They were amazed to see that they could treat the same number of patients, with the same number of staff, in the same space and without the stress and chaos.  And they had time to keep up with the paperwork; and they had time for lunch; and they finished work 2 hours earlier than previously!

Such a thing was not possible surely? But here they were experiencing it.  And their patients noticed the flip from chaos-to-strangely-calm too.

The impact of the one-day-test was so profound that the nurses voted to adopt the design change the following week.  And they did.  And the restored calm has been sustained.


What happened next?

The chemotherapy nurses were able to catch up with their time-owing that had accumulated from the historical late finishes.  And the problem of high staff turnover and difficulty in recruitment evaporated.  Highly-trained chemotherapy nurses who had left because of the stressful chaos now want to come back.  Pride-in-work has been re-established.  There are no losers.  It is a win-win-win result for staff, patients and organisations.


So, how was this “miracle” achieved?

Well, first of all it was not a miracle.  The flip from chaos-to-calm was predicted to happen.  In fact, that was the primary objective of the design change.

So, how was this design change achieved?

By establishing the diagnosis first – the primary cause of the chaos – and it was not what the team believed it was.  And that is the reason they did not believe the design change would work; and that is the reason they were so surprised when it did.

So, how was the diagnosis achieved?

By using an advanced systems engineering technique called Complex Physical System (CPS) modelling.  That was the game changer!  All the basic quality improvement techniques had been tried and had not worked – process mapping, direct observation, control charts, respectful conversations, brainstorming, and so on.  The system structure was too complicated.  The system behaviour was too complex (i.e. chaotic).

What CPS revealed was that the primary cause of the chaotic behaviour was the work scheduling policy.  And with that clarity of focus, the team were able to re-design the policy themselves using a simple paper-and-pen technique.  That is why it cost nothing to change.

So, why hadn’t they been able to do this before?

Because systems engineering is not a taught component of the traditional quality improvement offerings.  Healthcare is rather different to manufacturing! As the complexity of the health care system increases we need to learn the more advanced tools that are designed for this purpose.

What is the same is the principle of restoring pride-in-work and that is what Dr Berwick learned from Dr Deming in 1986, and what we saw happen on June 19th, 2019.

To read the story of how it was done click here.

Crossing the Chasm

Innovation means anything new, and new ideas spread through groups of people in a characteristic way that was described by Everett Rogers in the 1970’s.

The evidence showed that innovation starts with a small minority of innovators (about 2%) and diffuses through the population – first to the bigger minority called the early adopters.

Later, it became apparent that the diffusion path was not smooth and that there was a chasm into which many promising innovations fell and from which they did not emerge.

If this change chasm can be bridged then a tipping point is achieved when wider adoption by the majority becomes much more likely.

And for innovations that fundamentally change the way we live and work, this whole process can take decades! Generations even.

Take mobile phones and the Internet as good examples. How many can remember life before those innovations?  And we are living the transition to renewable energy, artificial intelligence and electric cars.


So, it is very rewarding to see growing evidence that the innovators who started the health care improvement movement back in the 1990’s, such as Dr Don Berwick in the USA and Dr Kate Silvester in the UK, have grown a generation of early adopters who now appear to have crossed the chasm.

The evidence for that can be found on the NHS Improvement website – for example the QSIR site (Quality, Service Improvement and Redesign).

Browsing through the QSIR catalogue of improvement tools I recognised them all from previous incarnations developed and tested by the NHS Modernisation Agency and NHS Institute for Innovation and Improvement.  And although those organisations no longer exist, they served as incubators for the growing community of healthcare improvement practitioners (CHIPs) and their legacy lives on.

This is all good news because we now also have a new NHS Long Term Plan which sets out an ambitious vision for the next 10 years and it is going to need a lot of work from the majority of people who work in the NHS to deliver. That will need capability-at-pace-and-scale.

And this raises some questions:

Q1: Will the legacy of the MA and NHSi scale to meet the more challenging task of designing and delivering the vision of a system of Integrated Care Systems (ICS) that include primary care, secondary care, community care, mental health and social care?

Q2: Will some more innovation be required?

If history is anything to go by, then I suspect the answers will be “Q1: No” and “Q2: Yes”.

Bring it on!

Carveoutosis Multiforme Fulminans

This is the name given to an endemic, chronic, systemic, design disease that afflicts the whole NHS that very few have heard of, and even fewer understand.

This week marked two milestones in the public exposure of this elusive but eminently treatable health care system design illness that causes queues, delays, overwork, chaos, stress and risk for staff and patients alike.

The first was breaking news from the team in Swansea led by Chris Jones.

They had been grappling with the wicked problem of chronic queues, delays, chaos, stress, high staff turnover, and escalating costs in their Chemotherapy Day Unit (CDU) at the Singleton Hospital.

The breakthrough came earlier in the year when we used the innovative eleGANTT® system to measure and visualise the CDU chaos in real-time.

This rich set of data enabled us, for the first time, to apply a powerful systems engineering technique called counterfactual analysis which revealed the primary cause of the chaos – the elusive and counter-intuitive design disease carveoutosis multiforme fulminans.

And this diagnosis implied that the chaos could be calmed quickly and at no cost.

But that news fell on slightly deaf ears because, not surprisingly, the CDU team were highly sceptical that such a thing was possible.

So, to convince them we needed to demonstrate the adverse effect of carveoutosis in a way that was easy to see.  And to do that we used some advanced technology: dice and tiddly winks.
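The principle the dice make visible can also be sketched in code.  The simulation below is my illustration of the carve-out effect, not the actual dice-and-tiddly-winks exercise, and the arrival and service times are made-up assumptions: the same patients and the same total capacity, but pre-allocating each patient to a dedicated stream ("carve-out") instead of pooling the capacity.

```python
import random

# Illustrative queue simulation of the carve-out effect.
# All timings are invented assumptions for demonstration.

def average_wait(carved_out, n_patients=400, n_servers=4,
                 mean_interarrival=5.0, mean_service=18.0, seed=1):
    """Mean wait (minutes) with pooled vs carved-out capacity."""
    rng = random.Random(seed)
    free_at = [0.0] * n_servers          # when each chair is next free
    now, total_wait = 0.0, 0.0
    for k in range(n_patients):
        now += rng.expovariate(1.0 / mean_interarrival)  # next arrival
        service = rng.expovariate(1.0 / mean_service)
        if carved_out:
            chair = k % n_servers        # pre-allocated to one stream
        else:                            # pooled: take the first free chair
            chair = min(range(n_servers), key=lambda c: free_at[c])
        start = max(now, free_at[chair])
        total_wait += start - now
        free_at[chair] = start + service
    return total_wait / n_patients

pooled = average_wait(carved_out=False)
carved = average_wait(carved_out=True)
# Same patients, same total capacity - yet carving it up creates queues.
```

Because both runs use the same random seed, the two policies face an identical stream of patients; only the scheduling policy differs, and the carved-out design produces much longer average waits.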

The reaction of the CDU nurses was amazing.  As soon as they ‘saw’ it, it clicked, and they immediately grasped how to apply it in their world.  They designed the change they needed to make in a matter of minutes.


But the proof-of-the-pudding-is-in-the-eating, and we arranged a one-day-test-of-change of their anti-carveout design.

The appointed day arrived, Wednesday 19th June.  The CDU nurses implemented their new design (which cost nothing to do).  Within an hour of the day starting they reported that the CDU was strangely calm.   And at the end of the day they reported that it had remained strangely calm all day; and that they had time for lunch; and that they had time to do all their admin as they went; and that they finished on time; and that the patients did not wait for their chemotherapy; and that the patients noticed the chaos-to-calm transformation too.

They treated just the same number of patients as usual with the same staff, in the same space and with the same equipment.  It cost nothing to make the change.

To say that they were surprised is an understatement!  They were so surprised and so delighted that they did not want to go back to the old design – but they had to because it was only a one-day-test-of-change.

So, on Thursday and Friday they reverted back to the carveoutosis design.  And the chaos returned.  That nailed it!  There was a riot!!  The CDU nurses refused to wait until later in the year to implement their new design and they voted unanimously to implement it from the following Monday.  And they did.  And calm was restored.


The second milestone happened on Thursday 11th July when we ran a Health Care Systems Engineering (HCSE) Masterclass on the very same topic … chronic systemic carveoutosis multiforme fulminans.

This time we used the dice and tiddly winks to demonstrate the symptoms, signs and the impact of treatment.  Then we explored the known pathophysiology of this elusive and endemic design disease in much more depth.

This is health care systems engineering in action.

It seems to work.

Leverage Points

One of the most surprising aspects of systems is how some big changes have no observable effect and how some small changes are game-changers. Why is that?

The technical name for this phenomenon is leverage points.

When a nudge is made at a leverage point in a real system the impact is amplified – so a small cause can have a big effect.

And when a big kick is made where there is no leverage point the effort is dissipated. Like flogging a dead horse.

Other names for leverage points are triggers, buttons, catalysts, fuses etc.


The fact that there is a big effect does not imply it is a good effect.

Poking a leverage point can trigger a catastrophe just as it can trigger a celebration. It depends on how it is poked.

Perhaps that is one reason people stay away from them.

But when our health care system performance is in decline, if we do nothing, or if we act but stay away from leverage points (i.e. flog the dead horse), then we will deny ourselves the opportunity of improvement.

So, we need a way to (a) identify the leverage points and (b) know how to poke them positively and know how to not poke them into delivering a catastrophe.


Here are a couple of real examples.


The time-series chart above shows the A&E performance of a real acute trust.  Notice the pattern as we read left-to-right: baseline performance is OKish and dips in the winters, and the winter dips get deeper but the baseline performance recovers.  In April 2015 (yellow flag) the system behaviour changes, and it goes into a steady decline with added winter dips.  This is the characteristic pattern of poking a leverage point in the wrong way … and the fact it happened at the start of the financial year suggests that Finance was involved.  Possibly triggered by a cost-improvement programme (CIP) action somewhere else in the system.  Save a bit of money here and create a bigger problem over there.  That is how systems work.  Not my budget so not my problem.

Here is a different example, again from a real hospital and around the same time.  It starts with a similar pattern of deteriorating performance and there is a clear change in system behaviour in Jan 2015.  But in this case the performance improves and stays improved.  Again, the visible sign of a leverage point being poked but this time in a good way.

In this case I do know what happened.  A contributory cause of the deteriorating performance was correctly diagnosed, the leverage point was identified, a change was designed and piloted, and then implemented and validated.  And it worked as predicted.  It was not a fluke.  It was engineered.


So why is the first example much more commonly seen than the second?

That is a very good question … and to answer it we need to explore the decision making process that leads up to these actions because I refuse to believe that anyone intentionally makes decisions that lead to actions that lead to deterioration in health care performance.

And perhaps we can all learn how to poke leverage points in a positive way?

Commissioned Improvement

This recent tweet represents a significant milestone.  It formally recognises and celebrates in public the impact that developing health care systems engineering (HCSE) capability has had on the culture of the organisation.

What is also important is that the HCSE training was not sought and funded by the Trust, it was discovered by chance and funded by their commissioners, the local clinical commissioning group (CCG).


The story starts back in the autumn of 2017 and, by chance, I was chatting with Rob, a friend-of-a-friend, about work. As you do. It turned out that Rob was the CCG Lead for Unscheduled Care and I was describing how HCSE can be applied in any part of any health care system; primary care, secondary care, scheduled, unscheduled, clinical, operational or whatever.  They are all parts of the same system and the techniques and tools of improvement-by-design are generic.  And I described lots of real examples of doing just that and the sustained improvements that had followed.

So he asked “If you were to apply this approach to unscheduled care in a large acute trust, how would you do it?”.  My immediate reply was “I would start by training the front-line teams in the HCSE Level 1 stuff, and the first step is to raise awareness of what is possible.  We do that by demonstrating it in practice because you have to see it and experience it to believe it.”

And so that is what we did.

The CCG commissioned a one-year HCSE Level 1 programme for four teams at University Hospitals of North Midlands (UHNM) and we started in January 2018 with some One Day Flow Workshops.

The intended emotional effect of a Flow Workshop is surprise and delight.  The challenge for the day is to start with a simulated, but very realistic, one-stop outpatient clinic which is chaotic and stressful for everyone.  And with no prior training the delegates transform it into a calm and enjoyable experience using the HCSE approach.  It is called emergent learning.  We have run dozens of these workshops and it has never failed.

After directly experiencing HCSE working in practice, the teams that stepped up to the challenge were from ED, Transformation, Ambulatory Emergency Care and Outpatients.


The key to growing HCSE capability is to assemble small teams, called micro-system design teams (MSDTs) and to focus on causes that fall inside their circle of control.

The MSDT sessions need to be regular, short, and facilitated by an experienced HCSE who has seen it, done it and can teach it.

In UHNM, the Transformation team divided themselves between the front-line teams and they learned HCSE together.  Here’s a picture of the ED team … left to right we have Alex, Mark and Julie (ED consultants) then Steve and Janina (Transformation).  The essential tools are a big table, paper, pens, notebooks, coffee and a laptop/projector.

The purpose of each session is empirical learning-by-doing i.e. using a real improvement challenge to learn and practice the method so that before the end of the programme the team can confidently “fly” solo.

That is the key to continued growth and sustained improvement.  The HCSE capability needs to become embedded.

It is good fun and immensely rewarding to see the “ah ha” moments and improvements happen as the needle on the emotometer moves from “Can’t Do” to “Can Do”.

Metamorphosis is re-arranging what you already have in a way that works better.


The tweet is objective evidence that demonstrates the HCSE programme delivers as designed.  It is fit-for-purpose.  It is called validation.

The other objective evidence of effectiveness comes from the learning-by-doing projects themselves.  And for an individual to gain a coveted HCSE Level 1 Certificate of Competency requires writing the story up to a publishable quality and sharing it. Warts-and-all.

To read the full story, just click here.

And what started this was the CCG who had the strategic vision, looked outside themselves for innovative approaches, and demonstrated the courage to take a risk.

Commissioned Improvement.

System Dynamics

On Thursday we had a very enjoyable and educational day.  I say “we” because there were eleven of us learning together.

There were Declan, Chris, Lesley, Imran, Phil, Pete, Mike, Kate, Samar and Ellen and me (behind the camera).  Some are holding their long-overdue HCSE Level-1 Certificates and Badges that were awarded just before the photo was taken.

The theme for the day was System Dynamics which is a tried-and-tested approach for developing a deep understanding of how a complex adaptive system (CAS) actually works.  A health care system is a complex adaptive system.

The originator of system dynamics is Jay Wright Forrester, an engineer who spent WW2 at MIT working on servomechanisms and who developed system dynamics there in the 1950s.  Peter Senge, author of The Fifth Discipline, was part of the same group, as was Donella Meadows, lead author of The Limits to Growth.  Their dream was much bigger – global health – i.e. the whole planet not just the human passengers!  It is still a hot topic [pun intended].


The purpose of the day was to introduce the team of apprentice health care system engineers (HCSEs) to the principles of system dynamics and to some of its amazing visualisation and prediction techniques and tools.

The tangible output we wanted was an Excel-based simulation model that we could use to solve a notoriously persistent health care service management problem …

How to plan the number of new and review appointment slots needed to deliver a safe, efficient, effective and affordable chronic disease service?

So, with our purpose in mind, the problem clearly stated, and a blank design canvas we got stuck in; and we used the HCSE improvement-by-design framework that everyone was already familiar with.

We made lots of progress, learned lots of cool stuff, and had lots of fun.

We didn’t quite get to the final product but that was OK because it was a very tough design assignment.  We got 80% of the way there though which is pretty good in one day from a standing start.  The last 20% can now be done by the HCSEs themselves.
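For readers curious about what such a model boils down to, here is a minimal stock-and-flow sketch in Python rather than Excel.  The parameters (referral rate, review interval, reviews per patient) are illustrative assumptions, not the team’s actual figures:

```python
# Minimal stock-and-flow sketch of a chronic disease service.
# Stock: the pool of patients under follow-up.
# Flows: new referrals in, reviews each week, discharges out.
# All parameters are illustrative assumptions.

def simulate(weeks, new_per_week=10, review_interval=26, avg_reviews=8):
    pool = 0.0  # patients currently under follow-up
    history = []
    for _ in range(weeks):
        reviews = pool / review_interval    # review appointments needed this week
        discharges = reviews / avg_reviews  # patients leaving after their final review
        pool += new_per_week - discharges
        history.append((new_per_week, reviews))
    return history

# At steady state the service needs avg_reviews review slots for every new
# slot, and the follow-up pool takes years to settle down to that balance.
```

The point the sketch makes is the one the design day made: review demand is a stock that accumulates slowly, so appointment templates set from a snapshot of current demand are almost always wrong.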

We were all exhausted at the end.  We had worked hard.  It was a good day.


And I am already looking forward to the next HCSE Masterclass in about six weeks’ time.  This one will address another chronic, endemic, systemic health care system “disease” called carveoutosis multiforme fulminans.

Warts-and-All

This week saw the publication of a landmark paper – one that will bring hope to many.  A paper that describes the first step of a path forward out of the mess that healthcare seems to be in.  A rational, sensible, practical, learnable and enjoyable path.


This week I also came across an idea that triggered an “ah ha” for me.  The idea is that the most rapid learning happens when we are making mistakes about half of the time.

And when I say ‘making a mistake’ I mean not achieving what we predicted we would achieve because that implies that our understanding of the world is incomplete.  In other words, when the world does not behave as we expect, we have an opportunity to learn and to improve our ability to make more reliable predictions.

And that ability is called wisdom.


When we get what we expect about half the time, and do not get what we expect about the other half of the time, then we have the maximum amount of information that we can use to compare and find the differences.

Was it what we did? Was it what we did not do? What are the acts and errors of commission and omission? What can we learn from those? What might we do differently next time? What would we expect to happen if we do?


And to explore this terrain we need to see the world as it is … warts and all … and that is the subject of the landmark paper that was published this week.


The context of the paper is improvement of cancer service delivery, and specifically of reducing waiting time from referral to first appointment.  This waiting is a time of extreme anxiety for patients who have suspected cancer.

It is important to remember that most people with suspected cancer do not have it, so most of the work of an urgent suspected cancer (USC) clinic is to reassure and to relieve the fear that the spectre of cancer creates.

So, the sooner that reassurance can happen the better, and for the unlucky minority who are diagnosed with cancer, the sooner they can move on to treatment the better.

The more important paragraph in the abstract is the second one … which states that seeing the system behaviour as it is, warts-and-all,  in near-real-time, allows us to learn to make better decisions of what to do to achieve our intended outcomes. Wiser decisions.

And the reason this is the more important paragraph is because if we can do that for an urgent suspected cancer pathway then we can do that for any pathway.


The paper re-tells the first chapter of an emerging story of hope.  A story of how an innovative and forward-thinking organisation is investing in building embedded capability in health care systems engineering (HCSE), and is now delivering a growing dividend.  Much bigger than the investment on every dimension … better safety, faster delivery, higher quality and more affordability. Win-win-win-win.

The only losers are the “warts” – the naysayers and the cynics who claim it is impossible, or too “wicked”, or too difficult, or too expensive.

Innovative reality trumps cynical rhetoric … and the full abstract and paper can be accessed here.

So, well done to Chris Jones and the whole team in ABMU.

And thank you for keeping the candle of hope alight in these dark, stormy and uncertain times for the NHS.

Congratulations Kate!

This week, it was my great pleasure to award the first Health Care Systems Engineering (HCSE) Level 2 Medal to Dr Kate Silvester, MBA, FRCOphth.

Kate is internationally recognised as an expert in health care improvement and over more than two decades has championed the adoption of improvement methods such as Lean and Quality Improvement in her national roles in the Modernisation Agency and then the NHS Institute for Innovation and Improvement.

Kate originally trained as a doctor and then left the NHS to learn manufacturing systems engineering with Lucas and Airbus.  Kate then brought these very valuable skills back with her into the NHS when she joined the Cancer Services Collaborative.

Kate is co-founder of the Journal of Improvement Science and over the last five years has been highly influential in the development of the Health Care Systems Engineering Programme – the first of its kind in the world that is designed by clinicians for clinicians.

The HCSE Programme is built on the pragmatic See One-Do Some-Teach Many principle of developing competence and confidence through being trained and coached by a more experienced practitioner while doing projects of increasing complexity and training and coaching others who are less experienced.

Competence is based on evidence-of-effectiveness, and Kate has achieved HCSE Level 2 by demonstrating that she can do HCSE and that she can teach and coach others how to do HCSE as well.

To illustrate, here is a recent FHJ paper authored by Kate which shows the HCSE principles applied in practice in a real hospital.  This work was done as part of the Health Foundation’s Flow, Cost and Quality project that Kate led, and recent evidence proves that the improvements have sustained and spread.  South Warwickshire NHS Foundation Trust is now one of the top-performing Trusts in the NHS.

More recently, Kate has trained and coached new practitioners in Exeter and North Devon who have delivered improvements and earned their HCSE 1 wings.

Congratulations Kate!

Filter-Pull versus Push-Carveout

It is November 2018, the clocks have changed back to GMT, the trick-or-treating is done, the fireworks light the night skies and spook the hounds, and the seasonal aisles in the dwindling number of high street stores are already stocked for Christmas.

I have been a bit quiet on the blog front this year but that is because there has been a lot happening behind the scenes and I have had to focus.

One output is the recent publication of an article in Future Healthcare Journal on the topic of health care systems engineering (HCSE).  Click here to read the article and the rest of this excellent edition of FHJ that is dedicated to “systems”.

So, as we are back to the winter phase of the annual NHS performance cycle it is a good time to glance at the A&E Performance Radar and see who is doing well, and not-so-well.

Based on past experience, I was expecting Luton to be Top-of-the-Pops and so I was surprised (and delighted) to see that Barnsley have taken the lead.  And the chart shows that Barnsley has turned around a reasonable but sagging performance this year.

So I would be asking “What has happened at Barnsley that we can all learn from? What did you change, and how did you know what to do and how to do it?”

To be sure, Luton is still in the top three and it is interesting to explore who else is up there and what their A&E performance charts look like.

The data is all available for anyone with a web-browser to view – here.

For completeness, this is the chart for Luton, and we can see that, although the last point is lower than Barnsley, the performance-over-time is more consistent and less variable. So who is better?

NB. This is a meaningless question and illustrates the unhelpful tactic of two-point comparisons with others, and with oneself. The better question is “Is my design fit-for-purpose?”

The question I have for Luton is different. “How do you achieve this low variation and how do you maintain it? What can we all learn from you?”

And I have some ideas how they do that because in a recent HSJ interview they said “It is all about the filters”.


What do they mean by filters?

A filter is an essential component of any flow design if we want to deliver high safety, high efficiency, high effectiveness, and high productivity.  In other words, a high quality, fit-4-purpose design.

And the most important flow filters are the “upstream” ones.

The design of our upstream flow filters is critical to how the rest of the system works.  Get it wrong and we can get a spiralling decline in system performance because we can unintentionally trigger a positive feedback loop.

Queues cause delays and chaos that consume our limited resources.  So, when we are chasing cost improvement programme (CIP) targets using the “salami slicer” approach, and combine that with poor filter design … we can unintentionally trigger the perfect storm and push ourselves over the catastrophe cliff into perpetual, dangerous and expensive chaos.

If we look at the other end of the NHS A&E league table we can see typical examples that illustrate this pattern.  I have used this one only because it happens to be bottom this month.  It is not unique.

All other NHS trusts fall somewhere between these two extremes … stable, calm and acceptable and unstable, chaotic and unacceptable.

Most display the stable and chaotic combination – the “Zone of Perpetual Performance Pain”.

So what is the fundamental difference between the outliers that we can all learn from? The positive deviants like Barnsley and Luton, and the negative deviants like Blackpool.  I ask this because comparing the extremes is more useful than laboriously exploring the messy, mass-mediocrity in the middle.

An effective upstream flow filter design is a necessary component, but it is not sufficient. Triage (= French for sorting) is OK but it is not enough.  The other necessary component is called “downstream pull” and omitting that element of the design appears to be the primary cause of the chronic chaos that drags trusts and their staff down.

It is not just an error of omission though; the current design is actually an error of commission. It is anti-pull; otherwise known as “push”.


This year I have been busy on two complicated HCSE projects … one in secondary care and the other in primary care.  In both cases the root cause of the chronic chaos is the same.  They are different systems but have the same diagnosis.  What we have revealed together is a “push-carveout” design which is the exact opposite of the “upstream-filter-plus-downstream-pull” design we need.

And if an engineer wanted to design a system to be chronically chaotic then it is very easy to do. Here is the recipe:

a) Set a high average-utilisation target for all resources as a proxy for efficiency, to ensure everything is heavily loaded. Something between 80% and 100% usually does the trick.

b) Set a one-size-fits-all delivery performance target that is not currently being achieved and enforce it punitively.  Something like “>95% of patients seen and discharged or admitted in less than 4 hours, or else …”.

c) Divvy up the available resources (skills, time, space, cash, etc) into ring-fenced pots.

Chronic chaos is guaranteed.  The Laws of Physics decree it.
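For the mathematically curious, ingredient (a) on its own is enough to do serious damage.  A textbook single-server queueing formula (M/M/1, a simplification for illustration, not the author’s model) shows why waiting time explodes as utilisation approaches 100%:

```python
# Mean time spent queueing in an M/M/1 system: Wq = rho / (mu - lam),
# where lam = arrival rate, mu = service rate, rho = lam/mu (utilisation).
# Illustrative rates only; real departments need multi-server models.

def mean_queue_wait(arrivals_per_hour, served_per_hour):
    rho = arrivals_per_hour / served_per_hour
    if rho >= 1:
        return float("inf")  # the queue grows without limit
    return rho / (served_per_hour - arrivals_per_hour)

# At 80% utilisation: 0.8 / (10 - 8)   = 0.4 h, i.e. 24 min average queueing.
# At 95% utilisation: 0.95 / (10 - 9.5) = 1.9 h, i.e. 114 min, nearly 5x longer.
```

Pushing utilisation from 80% to 95% looks like a 15-point efficiency gain on a spreadsheet, but it nearly quintuples the average queueing delay; and ring-fencing resources (ingredient c) raises the effective utilisation of each pot even further.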


Unfortunately, the explanation of why this is the case is counter-intuitive, so it is actually better to experience it first, and then seek the explanation.  Reality first, reasoning second.

And, it is a bittersweet experience, so it needs to be done with care and compassion.

And that’s what I’ve been busy doing this year. Creating the experiences and then providing the explanations.  And if done gradually what then happens is remarkable and rewarding.

The FHJ article outlines one validated path to developing individual and organisational capability in health care systems engineering.

Seeing The Voice of the System

It is always a huge compliment to see an idea improved and implemented by inspired innovators.

Health care systems engineering (HCSE) brings together concepts from the separate domains of systems engineering and health care.  And one idea that emerged from this union is to regard the health care system as a living, evolving, adapting entity.

In medicine we have the concept of ‘vital signs’ … a small number of objective metrics that we can measure easily and quickly.  With these we can quickly assess the physical health of a patient and decide if we need to act, and when.

With a series of such measurements over time we can see the state of a patient changing … for better or worse … and we can use this to monitor the effect of our actions and to maintain the improvements we achieve.

For a patient, the five vital signs are conscious level, respiratory rate, pulse, blood pressure and temperature. To sustain life we must maintain many flows within healthy ranges and the most critically important is the flow of oxygen to every cell in the body.  Oxygen is carried by blood, so blood flow is critical.

So, what are the vital signs for a health care system where the flows are not oxygen and blood?  They are patients, staff, consumables, equipment, estate, data and cash.

The photograph shows a demonstration of a Vitals Dashboard for a part of the cancer care system in the ABMU health board in South Wales.  The inspirational innovators who created it are Imran Rao (left), Andy Jones (right) and Chris Jones (top left), and they are being supported by ABMU to do this as part of their HCSE training programme.

So well done guys … we cannot wait to hear how being better able to see the voice of your cancer system translates into improved care for patients, improved working life for the dedicated NHS staff, and improved use of finite public resources.  Win-win-win.

Making NHS Data Count

The debate about how to sensibly report NHS metrics has been raging for decades.

So I am delighted to share the news that NHS Improvement have finally come out and openly challenged the dogma that two-point comparisons and red-amber-green (RAG) charts are valid methods for presenting NHS performance data.

Their rather good 147-page guide can be downloaded: HERE


The subject is something called a statistical process control (SPC) chart which sounds a bit scary!  The principle is actually quite simple:

Plot data that emerges over time as a picture that tells a story – #plotthedots

The main thrust of the guide is learning the ropes of how to interpret these pictures in a meaningful way and how to avoid two traps (i.e. errors).

Trap #1 = Over-reacting to random variation.
Trap #2 = Under-reacting to non-random variation.

Both of these errors cause problems, but in different ways.


Over-reacting to random variation

Random variation is a fact of life.  No two days in any part of the NHS are the same.  Some days are busier/quieter than others.

Plotting the daily-arrivals-in-A&E dots for a trust somewhere in England gives us this picture.  (The blue line is the average and the purple histogram shows the distribution of the points around this average.)

Suppose we were to pick any two days at random and compare the number of arrivals on those two days. We could get an answer anywhere between an increase of 80% (250 to 450) and a decrease of 44% (450 to 250).

But if we look at the whole picture above we get the impression that, over time:

  1. There is an expected range of random-looking variation between about 270 and 380 that accounts for the vast majority of days.
  2. There are some occasional, exceptional days.
  3. There is the impression that average activity fell by about 10% in around August 2017.

So, our two-point comparison method seriously misleads us – and if we react to the distorted message that a two-point comparison generates then we run the risk of increasing the variation and making the problem worse.

Lesson: #plotthedots
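The “expected range” we eyeballed above can also be computed.  Here is a sketch of the usual XmR (individuals) chart recipe, using made-up daily counts rather than the trust’s data:

```python
# XmR (individuals) chart limits: mean +/- 2.66 * average moving range.
# The constant 2.66 converts the average moving range into ~3-sigma limits.
# The daily counts below are made up for illustration.

def xmr_limits(counts):
    mean = sum(counts) / len(counts)
    moving_ranges = [abs(b - a) for a, b in zip(counts, counts[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean, mean + 2.66 * avg_mr

lo, mid, hi = xmr_limits([250, 300, 280, 320, 310, 290])
# lo = ~217, mid = ~292, hi = ~366
```

Points outside the (lo, hi) process limits are the “occasional, exceptional days”; everything inside them is routine variation that deserves no special explanation.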


One of the downsides of SPC is the arcane and unfamiliar language that is associated with it … terms like ‘common cause variation‘ and ‘special cause variation‘.  Sadly, the authors at NHS Improvement have fallen into this ‘special language’ trap and therefore run the risk of creating a new clique.

The lesson here is that SPC is a specific, simplified application of a more generic method called a system behaviour chart (SBC).

The first SPC chart was designed by Walter Shewhart in 1924 for one purpose and one purpose only – for monitoring the output quality of a manufacturing process in terms of how well the product conformed to the required specification.

In other words: SPC is an output quality audit tool for a manufacturing process.

This has a number of important implications for the design of the SPC tool:

  1. The average is not expected to change over time.
  2. The distribution of the random variation is expected to be bell-shaped.
  3. We need to be alerted to sudden shifts.

Shewhart’s chart was designed to detect early signs of deviation in a well-performing manufacturing process: to flag possible causes worth investigating, and to minimise the adverse effects of over-reacting or under-reacting.


However,  for many reasons, the tool we need for measuring the behaviour of healthcare processes needs to be more sophisticated than the venerable SPC chart.  Here are three of them:

  1. The average is expected to change over time.
  2. The distribution of the random variation is not expected to be bell-shaped.
  3. We need to be alerted to slow drifts.

Under-Reacting to Non-Random Variation

Small shifts and slow drifts can have big cumulative effects.

Suppose I am an NHS service manager and I have a quarterly performance target to meet, so I have asked my data analyst to prepare a RAG chart to review my weekly data.

The quarterly target I need to stay below is 120 and my weekly RAG chart is set to show green when less than 108 (10% below target) and red when more than 132 (10% above target) because I know there is quite a lot of random week-to-week variation.

On the left is my weekly RAG chart for the first two quarters and I am in-the-green for both quarters (i.e. under target).

Q: Do I need to do anything?

A: The first quarter just showed “greens” and “ambers” so I relaxed and did nothing. There are a few “reds” in the second quarter, but about the same number as the “greens” and lots of “ambers” so it looks like I am about on target. I decide to do nothing again.

At the end of Q3 I’m in big trouble!

The quarterly RAG chart has flipped from Green to Red and I am way over target for the whole quarter. I missed the bus and I’m looking for a new job!

So, would a SPC chart have helped me here?

Here it is for Q1 and Q2.  The blue line is the target and the green line is the average … so below target for both quarters, as the RAG chart said.

There was a dip in Q1 for a few weeks but it was not sustained, and the rest of the chart looks stable (all the points inside the process limits).  So, “do nothing” seemed like a perfectly reasonable strategy. Now I feel even more of a victim of fortune!

So, let us look at the full set of weekly data for the financial year and apply our retrospectoscope.

This is just a plain weekly performance run chart with the target limit plotted as the blue line.

It is clear from this that there is a slow upward drift and we can see why our retrospective quarterly RAG chart flipped from green to red, and why neither our weekly RAG chart nor our weekly SPC chart alerted us in time to avoid it!

This problem is often called ‘leading by looking in the rear view mirror‘.

The variation we needed to see was not random, it was a slowly rising average, but it was hidden in the random variation and we missed it.  So we under-reacted and we paid the price.


This example illustrates another limitation of both RAG charts and SPC charts … they are both insensitive to small shifts and slow drifts when there is lots of random variation around, which there usually is.

So, is there a way to avoid this trap?

Yes. We need to learn to use the more powerful system behaviour charts and the systems engineering techniques and tools that accompany them.
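One such technique, borrowed from industrial statistics rather than taken from the NHSI guide, is the CUSUM chart: it accumulates every small excess above target so that a slow drift, invisible week-to-week, eventually becomes undeniable.  A sketch on synthetic drifting data:

```python
# CUSUM: accumulate deviations above (target + allowance); alert when the
# running sum crosses a decision threshold. Parameters are illustrative.

def cusum_alert(data, target, allowance=2.0, threshold=15.0):
    s = 0.0
    for week, x in enumerate(data):
        s = max(0.0, s + (x - target - allowance))
        if s >= threshold:
            return week  # first week the drift becomes undeniable
    return None

# Synthetic weekly counts: baseline 100, slow drift of +0.5/week, plus noise.
noise = [4, -3, 1, -2]
data = [100 + 0.5 * i + noise[i % 4] for i in range(40)]
week = cusum_alert(data, target=100)  # alerts at week 12
```

On this synthetic series the weekly mean has only drifted about six units when the alarm fires at week 12, long before a quarterly RAG chart would flip to red.  The allowance and threshold are tuning choices, not magic numbers.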


But that aside, the rather good 147-page guide from NHS Improvement is a good first step for those still using two-point comparisons and RAG charts and it can be downloaded: HERE

The 85% Optimum Bed Occupancy Myth

A few years ago I had a rant about the dangers of the widely promoted mantra that 85% is the optimum average measured bed-occupancy target to aim for.

But ranting is annoying, ineffective and often counter-productive.

So, let us revisit this with some calm objectivity and disprove this Myth a step at a time.

The diagram shows the system of interest (SoI) where the blue box represents the beds, the coloured arrows are the patient flows, the white diamond is a decision and the dotted arrow is information about how full the hospital is (i.e. full/not full).

A new emergency arrives (red arrow) and needs to be admitted. If the hospital is not full the patient is moved to an empty bed (orange arrow), the medical magic happens, and some time later the patient is discharged (green arrow).  If there is no bed for the emergency request then we get “spillover” which is the grey arrow, i.e. the patient is diverted elsewhere (n.b. these are critically ill patients … they cannot sit and wait).


This same diagram could represent patients trying to phone their GP practice for an appointment.  The blue box is the telephone exchange and if all the lines are busy then the call is dropped (grey arrow).  If there is a line free then the call is connected (orange arrow) and joins a queue (blue box) to be answered some time later (green arrow).

In 1917, a Danish mathematician/engineer called Agner Krarup Erlang was working for the Copenhagen Telephone Company and was grappling with this very problem: “How many telephone lines do we need to ensure that dropped calls are infrequent AND the switchboard operators are well utilised?

This is the perennial quality-versus-cost conundrum. The Value-4-Money challenge. Too few lines and the quality of the service falls; too many lines and the cost of the service rises.

Q: Is there a V4M ‘sweet spot” and if so, how do we find it? Trial and error?

The good news is that Erlang solved the problem … mathematically … and the not-so-good news is that his equations are very scary to a non-mathematician/engineer!  So this solution is not much help to anyone else.
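Actually, the scary closed form collapses into a short recursion that anyone can run, in Excel or any other tool.  Here is a Python sketch of the standard Erlang B calculation, which matches the loss model described here (blocked requests are diverted, not queued):

```python
# Erlang B: probability that a new arrival finds all servers (beds) busy
# in a loss system (blocked requests are diverted, not queued).
# offered_load is in erlangs, e.g. bed-days demanded per day.

def erlang_b(servers, offered_load):
    b = 1.0  # with zero servers every request is blocked
    for n in range(1, servers + 1):
        b = (offered_load * b) / (n + offered_load * b)
    return b

# 20 beds offered 20 bed-days/day: erlang_b(20, 20) = ~0.159, i.e. ~16% rejected
```

The recursion is numerically stable even for hundreds of beds, which is exactly what a heat-map over a grid of capacities and loads needs.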


Fortunately, we have a tool for turning scary-equations into easy-2-see-pictures; our trusty Excel spreadsheet. So, here is a picture called a heat-map, and it was generated from one of Erlang’s equations using Excel.

The Erlang equation is lurking in the background, safely out of sight.  It takes two inputs and gives one output.

The first input is the Capacity, which is shown across the top, and it represents the number of beds available each day (known as the space-capacity).

The second input is the Load (or offered load to use the precise term) which is down the left side, and is the number of bed-days required per day (e.g. if we have an average of 10 referrals per day each of whom would require an average 2-day stay then we have an average of 10 x 2 = 20 bed-days of offered load per day).

The output of the Erlang model is the probability that a new arrival finds all the beds are full and the request for a bed fails (i.e. like a dropped telephone call).  This average probability is displayed in the cell.  The colour varies between red (100% failure) and green (0% failure), with an infinite number of shades of red-yellow-green in between.

We can now use our visual heat-map in a number of ways.

a) We can use it to predict the average likelihood of rejection given any combination of bed-capacity and average offered load.

Suppose the average offered load is 20 bed-days per day and we have 20 beds then the heat-map says that we will reject 16% of requests … on average (bottom left cell).  But how can that be? Why do we reject any? We have enough beds on average! It is because of variation. Requests do not arrive in a constant stream equal to the average; there is random variation around that average.  Critically ill patients do not arrive at hospital in a constant stream; so our system needs some resilience and if it does not have it then failures are inevitable and mathematically predictable.

b) We can use it to predict how many beds we need to keep the average rejection rate below an arbitrary but acceptable threshold (i.e. the quality specification).

Suppose the average offered load is 20 bed-days per day, and we want to have a bed available more than 95% of the time (less than 5% failures) then we will need at least 25 beds (bottom right cell).

c) We can use it to estimate the maximum average offered load for a given bed-capacity and required minimum service quality.

Suppose we have 22 beds and we want a quality of >=95% (failure <5%) then we would need to keep the average offered load below 17 bed-days per day (i.e. by modifying the demand and the length of stay because average load = average demand * average length of stay).


There is a further complication we need to be mindful of though … the measured utilisation of the beds is related to the successful admissions (orange arrow in the first diagram) not to the demand (red arrow).  We can illustrate this with a complementary heat map generated in Excel.

For scenario (a) above we have an offered load of 20 bed-days per day and 20 beds, but we will reject 16% of requests, so the accepted load is only 16.8 bed-days per day (i.e. (100%-16%) x 20), which is the reason that the average utilisation is only 16.8/20 = 84% (bottom left cell).

For scenario (b) we have an offered load of 20 bed-days per day and 25 beds, and we will only reject 5% of requests, but the average measured utilisation is not 95%; it is only 76% because we have more beds (the accepted load is 95% x 20 = 19 bed-days per day, and 19/25 = 76%).

For scenario (c) the average measured utilisation would be about 74%.
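The utilisation arithmetic for all three scenarios can be reproduced the same way (again a hedged sketch, not the blog's Excel model; `erlang_b` is the standard Erlang B recurrence):

```python
def erlang_b(offered_load, beds):
    # Standard Erlang B recurrence for the blocking probability.
    b = 1.0
    for n in range(1, beds + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

def measured_utilisation(offered_load, beds):
    """Utilisation is driven by the ACCEPTED load (the orange arrow),
    not by the offered load (the red arrow)."""
    accepted = (1 - erlang_b(offered_load, beds)) * offered_load
    return accepted / beds

# Scenarios (a), (b) and (c): expect roughly 84%, 76% and 74%.
for load, beds in [(20, 20), (20, 25), (17, 22)]:
    print(f"{beds} beds, load {load}: {measured_utilisation(load, beds):.0%}")
```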


So, now we see the problem more clearly … if we blindly aim for an average, measured bed-utilisation of 85% with the untested belief that it is always the optimum … this heat-map says that, for loads of this size, it is impossible to achieve that and at the same time offer an acceptable quality (>95%).

We are trading safety for money and that is not an acceptable solution in a health care system.


So where did this “magic” value of 85% come from?

From the same heat-map perhaps?

If we search for the combination of >95% success (<5% fail) and 85% average bed-utilisation then we find it at the point where the offered load reaches 50 bed-days per day and we have a bed-capacity of 56 beds.

And if we search for the combination of >99% success (<1% fail) and 85% average utilisation then we find it with an average offered load of just over 100 bed-days per day and a bed-capacity around 130 beds.
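That kind of search can be automated by scanning the load/bed-capacity grid for cells that meet both constraints at once (a sketch under the same assumptions as before; the ranges and the 1% tolerance are illustrative choices of ours):

```python
def erlang_b(offered_load, beds):
    # Standard Erlang B recurrence for the blocking probability.
    b = 1.0
    for n in range(1, beds + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

def find_85pc_cells(max_fail, loads, bed_range):
    """Yield (load, beds, fail, util) cells where the failure rate is
    below max_fail AND the measured utilisation is within 1% of 85%."""
    for load in loads:
        for beds in bed_range:
            fail = erlang_b(load, beds)
            util = (1 - fail) * load / beds
            if fail < max_fail and abs(util - 0.85) < 0.01:
                yield load, beds, fail, util

# Where does <5% failure coincide with ~85% utilisation?
for load, beds, fail, util in find_85pc_cells(0.05, range(10, 61), range(10, 71)):
    print(load, beds, f"fail={fail:.1%}", f"util={util:.1%}")
```

Running the scan confirms the point of the text: the combination only appears once the offered load is large (around 50 bed-days per day with a capacity in the mid-fifties).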

H’mm.  “Houston, we have a problem.”


So, even in this simplified scenario the hypothesis that an 85% average bed-occupancy is a global optimum is disproved.

The reality is that the average bed-occupancy associated with delivering the required quality for a given offered load with a specific number of beds is almost never 85%.  It can range anywhere between 50% and 100%.  Erlang knew that in 1917.


So, if a one-size-fits-all optimum measured average bed-occupancy assumption is not valid then how might we work out how many beds we need and predict what the expected average occupancy will be?

We would design the fit-4-purpose solution for each specific context …
… and to do that we need to learn the skills of complex adaptive system design …
… and that is part of the health care systems engineering (HCSE) skill-set.

 

The Strangeness of LoS

It had been some time since Bob and Leslie had chatted so an email out of the blue was a welcome distraction from a complex data analysis task.

<Bob> Hi Leslie, great to hear from you. I was beginning to think you had lost interest in health care improvement-by-design.

<Leslie> Hi Bob, not at all.  Rather the opposite.  I’ve been very busy using everything that I’ve learned so far.  Its applications are endless, but I have hit a problem that I have been unable to solve, and it is driving me nuts!

<Bob> OK. That sounds encouraging and interesting.  Would you be able to outline this thorny problem and I will help if I can.

<Leslie> Thanks Bob.  It relates to a big issue that my organisation is stuck with – managing urgent admissions.  The problem is that very often there is no bed available, but there is no predictability to that.  It feels like a lottery; a quality and safety lottery.  The clinicians are clamouring for “more beds” but the commissioners are saying “there is no more money”.  So the focus has turned to reducing length of stay.

<Bob> OK.  A focus on length of stay sounds reasonable.  Reducing that can free up enough beds to provide the necessary space-capacity resilience to dramatically improve the service quality.  So long as you don’t then close all the “empty” beds to save money, or fall into the trap of believing that 85% average bed occupancy is the “optimum”.

<Leslie> Yes, I know.  We have explored all of these topics before.  That is not the problem.

<Bob> OK. What is the problem?

<Leslie> The problem is demonstrating objectively that the length-of-stay reduction experiments are having a beneficial impact.  The data seems to say they are, and the senior managers are trumpeting the success, but the people on the ground say they are not.  We have hit a stalemate.


<Bob> Ah ha!  That old chestnut.  So, can I first ask what happens to the patients who cannot get a bed urgently?

<Leslie> Good question.  We have mapped and measured that.  What happens is the most urgent admission failures spill over to commercial service providers, who charge a fee-per-case and we have no choice but to pay it.  The Director of Finance is going mental!  The less urgent admission failures just wait on queue-in-the-community until a bed becomes available.  They are the ones who are complaining the most, so the Director of Governance is also going mental.  The Director of Operations is caught in the cross-fire and the Chief Executive and Chair are doing their best to calm frayed tempers and to referee the increasingly toxic arguments.

<Bob> OK.  I can see why a “Reduce Length of Stay Initiative” would tick everyone’s Nice If box.  So, the data analysts are saying “the length of stay has come down since the Initiative was launched” but the teams on the ground are saying “it feels the same to us … the beds are still full and we still cannot admit patients”.

<Leslie> Yes, that is exactly it.  And everyone has come to the conclusion that demand must have increased so it is pointless to attempt to reduce length of stay because when we do that it just sucks in more work.  They are feeling increasingly helpless and hopeless.

<Bob> OK.  Well, the “chronic backlog of unmet need” issue is certainly possible, but your data will show if admissions have gone up.

<Leslie> I know, and as far as I can see they have not.

<Bob> OK.  So I’m guessing that the next explanation is that “the data is wonky“.

<Leslie> Yup.  Spot on.  So, to counter that the Information Department has embarked on a massive push on data collection and quality control and they are adamant that the data is complete and clean.

<Bob> OK.  So what is your diagnosis?

<Leslie> I don’t have one, that’s why I emailed you.  I’m stuck.


<Bob> OK.  We need a diagnosis, and that means we need to take a “history” and “examine” the process.  Can you tell me the outline of the RLoS Initiative?

<Leslie> We knew that we would need a baseline to measure from so we got the historical admission and discharge data and plotted a Diagnostic Vitals Chart®.  I have learned something from my HCSE training!  Then we planned the implementation of a visual feedback tool that would show ward staff which patients were delayed so that they could focus on “unblocking” the bottlenecks.  We then planned to measure the impact of the intervention for three months, and then we planned to compare the average length of stay before and after the RLoS Intervention with a big enough data set to give us an accurate estimate of the averages.  The data showed a very obvious improvement, a highly statistically significant one.

<Bob> OK.  It sounds like you have avoided the usual trap of just relying on subjective feedback, and now have a different problem because your objective and subjective feedback are in disagreement.

<Leslie> Yes.  And I have to say, getting stuck like this has rather dented my confidence.

<Bob> Fear not Leslie.  I said this is an “old chestnut” and I can say with 100% confidence that you already have what you need in your T4 kit bag.

<Leslie> Tee-Four?

<Bob> Sorry, a new abbreviation. It stands for “theory, techniques, tools and training”.

<Leslie> Phew!  That is very reassuring to hear, but it does not tell me what to do next.

<Bob> You are an engineer now Leslie, so you need to don the hard-hat of Improvement-by-Design.  Start with your Needs Analysis.


<Leslie> OK.  I need a trustworthy tool that will tell me if the planned intervention has had a significant impact on length of stay, for better or worse or not at all.  And I need it to tell me that quickly so I can decide what to do next.

<Bob> Good.  Now list all the things that you currently have that you feel you can trust.

<Leslie> I do actually trust that the Information team collect, store, verify and clean the raw data – they are really passionate about it.  And I do trust that the front line teams are giving accurate subjective feedback – I work with them and they are just as passionate.  And I do trust the systems engineering “T4” kit bag – it has proven itself again-and-again.

<Bob> Good, and I say that because you have everything you need to solve this, and it sounds like the data analysis part of the process is a good place to focus.

<Leslie> That was my conclusion too.  And I have looked at the process, and I can’t see a flaw. It is driving me nuts!

<Bob> OK.  Let us take a different tack.  Have you thought about designing the tool you need from scratch?

<Leslie> No. I’ve been using the ones I already have, and assume that I must be using them incorrectly, but I can’t see where I’m going wrong.

<Bob> Ah!  Then, I think it would be a good idea to run each of your tools through a verification test and check that they are fit-4-purpose in this specific context.

<Leslie> OK. That sounds like something I haven’t covered before.

<Bob> I know.  Designing verification test-rigs is part of the Level 2 training.  I think you have demonstrated that you are ready to take the next step up the HCSE learning curve.

<Leslie> Do you mean I can learn how to design and build my own tools?  Special tools for specific tasks?

<Bob> Yup.  All the techniques and tools that you are using now had to be specified, designed, built, verified, and validated. That is why you can trust them to be fit-4-purpose.

<Leslie> Wooohooo! I knew it was a good idea to give you a call.  Let’s get started.


[Postscript] And Leslie, together with the other stakeholders, went on to design the tool that they needed and to use the available data to dissolve the stalemate.  And once everyone was on the same page again they were able to work collaboratively to resolve the flow problems, and to improve the safety, flow, quality and affordability of their service.  Oh, and to know for sure that they had improved it.

The Turkeys Voting For Xmas Trap

One of the quickest and easiest ways to kill an improvement initiative stone dead is to label it as a “cost improvement program” or C.I.P.

Everyone knows that the biggest single contributor to cost is salaries.

So cost reduction means head count reduction, which means people lose their jobs and their livelihood.

Who is going to sign up to that?

It would be like turkeys voting for Xmas.

There must be a better approach?

Yes. There is.


Over the last few weeks, groups of curious skeptics have experienced the immediate impact of systems engineering theory, techniques and tools in a health care context.

They experienced queues, delays and chaos evaporate in front of their eyes … and it cost nothing to achieve. No extra resources. No extra capacity. No extra cash.

Their reaction was “surprise and delight”.

But … it also exposed a problem.  An undiscussable problem.


Queues and chaos require expensive resources to manage.

We call them triagers, progress-chasers, and fire-fighters.  And when the queues and chaos evaporate then their jobs do too.

The problem is that the very people who are needed to make the change happen are the ones who become surplus-to-requirement as a result of the change.

So change does not happen.

It would be like turkeys voting for Xmas.


The way around this impasse is to anticipate the effect and to proactively plan to re-invest the resource that is released.  And to re-invest it doing more interesting and more worthwhile jobs than queue-and-chaos management.

One opportunity for re-investment is called time-buffering which is an effective way to improve resilience to variation, especially in an unscheduled care context.

Another opportunity for re-investment is tail-gunning the chronic backlogs until they are down to a safe and sensible size.

And many complain that they do not have time to learn about improvement because they are too busy managing the current chaos.

So, another opportunity for re-investment is training – oneself first and then others.


R.I.P.    C.I.P.

The Disbelief to Belief Transition

The NHS appears to be descending into a frenzy of fear as the winter looms and everyone says it will be worse than last year and the one before that.

And with that we-are-going-to-fail mindset, it almost certainly will.

Athletes do not start a race believing that they are doomed to fail … they hold a belief that they can win the race and that they will learn and improve even if they do not. It is a win-win mindset.

But to succeed in sport requires more than just a positive attitude.

It also requires skills, training, practice and experience.

The same is true in healthcare improvement.


That is not the barrier though … the barrier is disbelief.

And that comes from not having experienced what it is like to take a system that is failing and transform it into one that is succeeding.

Logically, rationally, enjoyably and surprisingly quickly.

And, the widespread disbelief that it is possible is paradoxical because there are plenty of examples where others have done exactly that.

The disbelief seems to be “I do not believe that will work in my world and in my hands!”

And the only way to dismantle that barrier-of-disbelief is … by doing it.


How do we do that?

The emotionally safest way is in a context that is carefully designed to enable us to surface the unconscious assumptions that are the bricks in our individual Barriers of Disbelief.

And to discard the ones that do not pass a Reality Check, and keep the ones that are OK.

This Disbelief-Busting design has been proven to be effective, as evidenced by the growing number of individuals who are learning how to do it themselves, and how to inspire, teach and coach others to as well.


So, if you would like to flip disbelief-and-hopeless into belief-and-hope … then the door is here.

The Awareness Ability Gap

It is always rewarding when separate but related ideas come together and go “click”.

And this week I had one of those “ah ha” moments while attempting to explain how the process of engagement works.

Many years ago I was introduced to the conscious-competence model of learning which I found really insightful.  Sometime later I renamed it as the awareness-ability model because the term “incompetent” felt too judgemental.

The idea is that when we learn, we all start from a position of being unaware of our inability.  We don’t know what we don’t know.

This state is called blissful ignorance.

And it is only when we try to do something that we become aware of what we cannot do; which can lead to temper tantrums!

As we ask, listen, reflect, learn, and practice our ability improves and we enter the zone of Know How.  We become able to demonstrate what we can do, and explain how we are doing it.

The Zone of Known Known.

The final phase comes when our ability becomes so habitual that we forget how we achieve our skill – it has become intuitive and second nature.


Some years later I was introduced to the Nerve Curve which is the emotional roller-coaster ride that accompanies change.  Any form of change.

The multi-step model was described in the context of bereavement by psychiatrist Elisabeth Kübler-Ross in her 1969 book “On Death & Dying: What the Dying Have to Teach Doctors, Nurses, Clergy and their Families”.

More recently this grief reaction has been extended and applied by authors such as William Bridges and John Fisher in the less emotionally traumatic contexts called transitions.

The characteristic sequence of emotions triggered by external events is:

  • shock
  • denial
  • frustration
  • blame
  • guilt
  • depression
  • acceptance
  • engagement
  • excitement.

The important messages in both of these models are that (a) this is a normal and expected process and (b) we can get stuck along the path of transition.  We can disengage at several points, signalling to others that we have come off the track.  When we do that we exhibit behaviours such as denial, disillusionment and hostility.


More recently I was introduced to the work of the late Chris Argyris and specifically the concept of “defensive reasoning“.

The essence of the concept:  As we start to become aware of a gap between our intentions and our impact, then we feel threatened and our natural emotional reaction is defensive.  This is the essence of the behaviour called “resistance to change”, and it is interesting to note that “smart” people are particularly adept at it.


These three concepts are clearly related in some way.   But how?


As a systems engineer I am used to cyclical processes and the concepts of wavelength, amplitude, phase and offset, and I found myself looking at the Awareness-Ability cycle and asking:

“How could that cycle generate the characteristic shape of the transition curve?”

Then the Argyris idea of the gap between intent and impact popped up and triggered another question:

“What if we look at the gap between our ability and our awareness?”

So, I conducted a thought experiment and imagined myself going around the cycle – and charting my ability, awareness and emotional state along the way … and this sketch emerged. Ah ha!

When my awareness exceeded my ability I felt disheartened. That is the defensive reasoning that Chris Argyris talks about, the emotional barrier to self-improvement.

But that sense is, paradoxically, associated with the steepest part of the learning curve.  It is almost as if there is a piece of emotional elastic linking the blue and green lines, and how we feel is related to how much it is being stretched and in what direction.


This insight suggested to me that the process of building self-engagement requires opening the ability-versus-awareness gap a little-bit-at-a-time, then sensing the emotional discomfort, and then actively releasing the tension by learning a new concept, principle, technique or tool (and usually all four).  That makes sense.

Evidence-Based Co-Design

The first step in a design conversation is to understand the needs of the customer.

It does not matter if you are designing a new kitchen, bathroom, garden, house, widget, process, or system.  It is called a “needs analysis”.

Notice that it is not called a “wants analysis”.  They are not the same thing because there is often a gap between what we want (and do not want) and what we need (and do not need).

The same is true when we are looking to use a design-based approach to improve something that we already have.


This is especially true when we are improving services because the needs and wants of a service tend to drift and shift continuously, and we are in a continual state of improvement.

For design to work the “customers” and the “suppliers” need to work collaboratively to ensure that they both get what they need.

Frustration and fragmentation are the symptoms of a combative approach where a “win” for one is a “lose” for the other (NB. In absolute terms both will end up worse off than they started so both lose in the long term.)


And there is a tried and tested process to collaborative improvement-by-design.

One version is called “experience based co-design” (EBCD) and it was cooked up in a health care context about 20 years ago and shown to work in a few small pilot studies.

The “experience” that triggered the projects was almost always a negative one and was associated with feelings of frustration, anxiety and disappointment.  So, the EBCD case studies were more focused on helping the protagonists to share their perspectives, in the belief that this would be enough to solve the problem.  And it is indeed a big step forwards.

It has a limitation though.  It assumes that the staff and patients know how to design processes so that they are fit-4-purpose, and the evidence to support that assumption is scanty.

In one pilot in mental health, the initial improvement (a fall in patient and carer complaints) was not sustained.  The reason given was that the staff who were involved in the pilot inevitably moved on, and as they did the old attitudes, beliefs and behaviours returned.


So, an improved version of EBCD is needed.  One that is based on hard evidence of what works and what does not.  One that is also focused on moving towards a future-purpose rather than just moving away from past-problems.

Let us call this improved version “Evidence-Based Co-Design“.

And we already know that by a different name:

Health Care Systems Engineering (HCSE).

O.O.D.A.

OODA is something we all do thousands of times a day without noticing.

Observe – Orient – Decide – Act.

The term is attributed to Colonel John Boyd, a real world “Top Gun” who studied economics and engineering, then flew and designed fighter planes, then became a well-respected military strategist.

OODA is a continuous process of updating our mental model based on sensed evidence.

And it is a fast process because it happens largely out of awareness.

This was Boyd’s point: In military terms, the protagonist who can make wiser and faster decisions is more likely to survive in combat.


And notice that it is not a simple linear sequence … it is a system … there are parallel paths and both feed-forward and feed-backward loops … there are multiple information flow paths.

And notice that the Implicit Guidance & Control links do not go through Decision – this means they operate out of awareness and are much faster.

And notice that the Feed Forward links connect the OODA steps – this is the conscious, sequential, future-looking process that we know by another name:

Study-Adjust-Plan-Do.


We use the same process in medicine: first we study the patient and the problem they are presenting (history, examination, investigation), then we adjust our generic mental model of how the body works to the specific patient (diagnosis), then we plan and decide a course of action to achieve the intended outcome, and then we act, we do it (treatment).

But at any point we can jump back to an earlier step and we can jump forwards to a later one.  The observe, orient, decide, act modes are running in parallel.

And the more experience we have of similar problems the faster we can complete the OODA (or SAPD) work because we learn what is the most useful information to attend to, and we learn how to interpret it.

We learn the patterns and what to look for – and that speeds up the process – a lot!


This emergent learning is then re-inforced if the impact of our action matches our intent and prediction and our conscious learning is then internalised as unconscious “rules of thumb” called heuristics.


We start by thinking our way consciously and slowly … and … we finish by feeling our way unconsciously and quickly.


Until … we  encounter a novel problem that does not fit any of our learned pattern matching neural templates. When that happens, our unconscious, parallel processing, pattern-matching system alerts us with a feeling of confusion and bewilderment – and we freeze (often with fright!)

Now we have a choice: We can retreat to using familiar, learned, reactive, knee-jerk patterns of behaviour (presumably in the hope that they will work) or we can switch into a conscious learning loop and start experimenting with novel ideas.

If we start at Hypothesis then we have the Plan-Do-Study-Act cycle; where we generate novel hypotheses to explain the unexpected, then plan experiments to test our hypotheses, then study the outcome of the experiments, and then act on our conclusions.

This mindful mode of thinking is well described in the book “Managing the Unexpected” by Weick and Sutcliffe and is the behaviour that underpins the success of HROs – High Reliability Organisations.

The image is of the latest (3rd edition) but the previous (2nd edition) is also worth reading.

So we have two interdependent problem solving modes – the parallel OODA system and the sequential SAPD process.

And we can switch between them depending on the context.


Which is an effective long-term survival strategy because the more we embrace the unexpected, the more opportunities we will have to switch into exploration mode and learn new patterns; and the more patterns we recognise the more efficient and effective our unconscious decision-making process will become.

This complex adaptive system behaviour has another name … Resilience.

One Step Back; Two Steps Forward.

This week a ground-breaking case study was published.

It describes how a team in South Wales discovered how to make the flows visible in a critical part of their cancer pathway.

Radiology.

And they did that by unintentionally falling into a trap!  A trap that many who set out to improve health care services fall into.  But they did not give up.  They sought guidance and learned some profound lessons.

Part 1 of their story is shared here.


One lesson they learned is that, as they take on more complex improvement challenges, they need to be equipped with the right tools, and they need to be trained to use them, and they need to have practiced using them.

Another lesson they learned is that making the flows in a system visible is necessary before the current behaviour of the system can be understood.

And they learned that they needed a clear diagnosis of how the current system was not performing before they could attempt to design an intervention to deliver the intended improvement.

They learned how the Study-Plan-Do cycle works, and they learned the reason it starts with “Study”, and not with “Plan”.


They tried, failed, took one step back, asked, listened and learned.


Then with their new knowledge, more advanced tools, and deeper understanding they took two steps forward; diagnosed the problem, designed an intervention, and delivered a significant improvement.

And visualised just how significant.

Then they shared Part 2 of their story … here.

 

 

The OMG Effect … Revisited

Beliefs drive behaviour. Behaviour drives change. Improvement requires change.

So, improvement requires challenging beliefs; confirming some and disproving others.

And beliefs can only be confirmed or disproved rationally – with evidence and explanation. Rhetoric is too slippery. We can convince ourselves of anything with that!

So it comes as an emotional shock when one of our beliefs is disproved by experiencing reality from a new perspective.

Our natural reaction is surprise, perhaps delight, and then defense. We say “Yes, but ...”.

And that is healthy skepticism and it is a valuable and necessary part of the change and improvement process.

If there are not enough healthy skeptics on a design team it is unbalanced.

If there are too many healthy skeptics on a design team it is unbalanced.


This week I experienced this phenomenon first hand.

The context was a one day practical skills workshop and the topic was:

“How to improve the safety, timeliness, quality and affordability of unscheduled care”.

The workshop is designed to approach this challenge from a different perspective.

Instead of asking “What is the problem and how do we solve it?” we took the system engineering approach of asking “What is the purpose and how can we achieve it?”

We used a range of practical exercises to illustrate some core concepts and principles – reality was our teacher.  Then we applied those newly acquired insights to the design challenge using a proven methodology that ensured we did not skip steps.


And the outcome was: the participants discovered that …

it is indeed possible to improve the safety, timeliness, quality and affordability of unscheduled health care …

using health care systems engineering concepts, principles, techniques and tools that, until the workshop, they had been unaware even existed.


Their reaction was “OMG” and was shortly followed by “Yes, but …” which is to be expected and is healthy.

The rest of the “Yes, but …” sentence was “… how will I convince my colleagues?”

One way is for them to seek out the same experience …

… because reality is a much better teacher than rhetoric.

HCSE Practical Skills One Day Workshops

 

Unknown-Knowns

This is the now-infamous statement that Donald Rumsfeld made at a Pentagon Press Conference which triggered some good-natured jesting from the assembled journalists.

But there is a problem with it.

There is a fourth combination that he does not mention: the Unknown-Knowns.

Which is a shame because they are actually the most important: they cause the most problems.  Avoidable problems.


Suppose there is a piece of knowledge that someone knows but that someone else does not; then we have an unknown-known.

None of us know everything and we do not need to, because knowledge that is of no value to us is irrelevant for us.

But what happens when the unknown-known is of value to us, and more than that, when it would be reasonable for someone else to expect us to know it because it is our job to know?


A surgeon would not be expected to know a lot about astronomy, but they would be expected to know a lot about anatomy.


So, what happens if we become aware that we are missing an important piece of knowledge that is actually already known?  What is our normal human reaction to that discovery?

Typically, our first reaction is fear-driven and we express defensive behaviour.  This is because we fear the potential loss-of-face from being exposed as inept.

From this sudden shock we then enter a characteristic emotional pattern which is called the Nerve Curve.

After the shock of discovery we quickly flip into denial and, if that does not work then to anger (i.e. blame).  We ignore the message and if that does not work we shoot the messenger.


And when in this emotionally charged state, our rationality tends to take a back seat.  So, if we want to benefit from the discovery of an unknown-known, then we have to learn to bite-our-lip, wait, let the red mist dissipate, and then re-examine the available evidence with a cool, curious, open mind.  A state of mind that is receptive and open to learning.


Recently, I was reminded of this.


The context is health care improvement, and I was using a systems engineering framework to conduct some diagnostic data analysis.

My first task was to run a data-completeness-verification-test … and the data I had been sent did not pass the test.  Some data was missing.  It was an error of omission (EOO), and those are the hardest errors to spot.  Hence the need for the verification test.
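The post does not describe the test itself, but a data-completeness-verification-test can be as simple as checking an extract against what it claims to cover. Here is a hypothetical sketch (the record layout, field names, and rules are our own illustration, not the actual NHS extract):

```python
from datetime import date, timedelta

def verify_completeness(records, start, end):
    """Return a list of problems found in an admission/discharge
    extract: open-ended stays and date gaps are candidate errors
    of omission (EOOs)."""
    problems = []
    # Every admission should have a discharge date, or be explicitly
    # flagged as a patient still in a bed; silently dropped rows or
    # fields are exactly the kind of EOO a human eye misses.
    for r in records:
        if r.get("discharged") is None and not r.get("still_in"):
            problems.append(f"open-ended stay: {r['id']}")
    # Every day in the reporting period should appear at least once;
    # a missing day suggests an incomplete extract, not zero activity.
    days_seen = {r["admitted"] for r in records}
    d = start
    while d <= end:
        if d not in days_seen:
            problems.append(f"no admissions recorded on {d}")
        d += timedelta(days=1)
    return problems

records = [
    {"id": "A1", "admitted": date(2018, 1, 1), "discharged": date(2018, 1, 3)},
    {"id": "A2", "admitted": date(2018, 1, 3), "discharged": None},
]
print(verify_completeness(records, date(2018, 1, 1), date(2018, 1, 3)))
```

The point is not the specific rules; it is that the test is run *before* any analysis, so that an incomplete extract is caught rather than analysed.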

The cause of the EOO was an unknown-known in the department that holds the keys to the data warehouse.  And I have come across this EOO before, so I was not surprised.

Hence the need for the verification test.

I was not annoyed either.  I just fed back the results of the test, explained what the issue was, explained the cause, and they listened and learned.


The implication of this specific EOO is quite profound though because it appears to be ubiquitous across the NHS.

To be specific it relates to the precise details of how raw data on demand, activity, length of stay and bed occupancy is extracted from the NHS data warehouses.

So it is rather relevant to just about everything the NHS does!

And the error-of-omission leads to confusion at best; and at worst … to the following sequence … incomplete data =>  invalid analysis => incorrect conclusion => poor decision => counter-productive action => unintended outcome.

Does that sound at all familiar?


So, if you would like to learn about this valuable unknown-known then I recommend the narrative by Dr Kate Silvester, an internationally recognised expert in healthcare improvement.  In it, Kate re-tells the story of her emotional roller-coaster ride when she discovered she was making the same error.


Here is the link to the abstract, where you can download and read the full text of Kate’s excellent essay and help to make it a known-known.

That is what system-wide improvement requires – sharing the knowledge.

Catch-22

There is a Catch-22 in health care improvement and it goes a bit like this:

Most people are too busy fire-fighting the chronic chaos to have time to learn how to prevent the chaos, so they are stuck.

There is a deeper Catch-22 as well though:

The first step in preventing chaos is to diagnose the root cause and doing that requires experience, and we don’t have that experience available, and we are too busy fire-fighting to develop it.


Health care is improvement science in action – improving the physical and psychological health of those who seek our help. Patients.

And we have a tried-and-tested process for doing it.

First we study the problem to arrive at a diagnosis; then we design alternative plans to achieve our intended outcome and we decide which plan to go with; and then we deliver the plan.

Study ==> Plan ==> Do.

Diagnose  ==> Design & Decide ==> Deliver.

But here is the catch. The most difficult step is the first one, diagnosis, because there are many different illnesses and they often present with very similar patterns of symptoms and signs. It is not easy.

And if we make a poor diagnosis then all the action plans that follow will be flawed and may lead to disappointment and even harm.

Complaints and litigation follow in the wake of poor diagnostic ability.

So what do we do?

We defer reassuring our patients, we play safe, we request more tests and we refer for second opinions from specialists. Just to be on the safe side.

These understandable tactics take time, cost money and are not 100% reliable.  Diagnostic tests are usually precisely focused to answer specific questions but can have false positive and false negative results.

To request a broad batch of tests in the hope that the answer will appear like a rabbit out of a magician’s hat is … mediocre medicine.


This diagnostic dilemma arises everywhere: in primary care and in secondary care, and in non-urgent and urgent pathways.

And it generates extra demand, more work, bigger queues, longer delays, growing chaos, and mounting frustration, disappointment, anxiety and cost.

The solution is obvious but seemingly impossible: to ensure the most experienced diagnostician is available to be consulted at the start of the process.

But that must be impossible because if the consultants were seeing the patients first, what would everyone else do?  How would they learn to become more expert diagnosticians? And would we have enough consultants?


When I was a junior surgeon I had the great privilege of learning from wise and experienced senior surgeons, who had seen it, done it, and could teach it.

Mike Thompson is one of these.  He is a general surgeon with a special interest in the diagnosis and treatment of bowel cancer.  And he has a particular passion for improving the speed and accuracy of the diagnosis step; because it can be a life-saver.

Mike is also a disruptive innovator and an early pioneer of the use of endoscopy in the outpatient clinic.  It is called point-of-care testing nowadays, but in the 1980’s it was a radically innovative thing to do.

He also pioneered collecting the symptoms and signs from every patient he saw, in a standard way using a multi-part printed proforma. And he invested many hours entering the raw data into a computer database.

He also did something that even now most clinicians do not do; when he knew the outcome for each patient he entered that into his database too – so that he could link first presentation with final diagnosis.


Mike knew that I had an interest in computer-aided diagnosis, which was a hot topic in the early 1980’s, and also that I did not warm to the Bayesian statistical models that underpinned it.  To me they made too many simplifying assumptions.

The human body is a complex adaptive system. It defies simplification.

Mike and I took a different approach.  We just counted how many of each diagnostic group were associated with each pattern of presenting symptoms and signs.

The problem was that even his database of 8000+ patients was not big enough! This is why others had resorted to using statistical simplifications.

So we used the approach that an experienced diagnostician uses.  We used the information we had already gleaned from a patient to decide which question to ask next, and then the next one and so on.


And we always have three pieces of information at the start – the patient’s age, gender and presenting symptom.

What surprised and delighted us was how easy it was to use the database to help us do this for the new patients presenting to his clinic; the ones who were worried that they might have bowel cancer.

And what surprised us even more was how few questions we needed to ask to arrive at a statistically robust decision to reassure-or-refer for further tests.

So one weekend, I wrote a little computer program that used the data from Mike’s database and our simple bean-counting algorithm to automate this process.  And the results were amazing.  Suddenly we had a simple and reliable way of using past experience to support our present decisions – without any statistical smoke-and-mirror simplifications getting in the way.

The computer program did not make the diagnosis, we were still responsible for that; all it did was provide us with reliable access to a clear and comprehensive digital memory of past experience.
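A minimal sketch of that bean-counting idea is shown below.  The five-record toy database, field names and values are all invented for illustration – they are not Mike’s actual schema or data:

```python
from collections import Counter

# Hypothetical miniature database: each record links the pattern at
# first presentation to the final, confirmed diagnosis.
cases = [
    {"age": "70+",   "sex": "M", "symptom": "bleeding", "outcome": "cancer"},
    {"age": "70+",   "sex": "M", "symptom": "bleeding", "outcome": "haemorrhoids"},
    {"age": "70+",   "sex": "M", "symptom": "bleeding", "outcome": "cancer"},
    {"age": "30-49", "sex": "F", "symptom": "bleeding", "outcome": "haemorrhoids"},
    {"age": "30-49", "sex": "F", "symptom": "pain",     "outcome": "IBS"},
]

def bean_count(cases, **answers):
    """Count final diagnoses among past patients matching what we know so far."""
    matching = [c for c in cases
                if all(c.get(k) == v for k, v in answers.items())]
    return Counter(c["outcome"] for c in matching)

# Start with the three facts we always have: age, gender, presenting symptom.
print(bean_count(cases, age="70+", sex="M", symptom="bleeding"))
```

Each additional answer filters the matching set of past cases further, narrowing the counts in the same way that each additional question narrows an experienced diagnostician’s differential.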


What it then enabled us to do was to learn more quickly by exploring the complex patterns of symptoms, signs and outcomes and to develop our own diagnostic “rules of thumb”.

We learned in hours what would have taken decades of experience to uncover. This was hot stuff, and when I presented our findings at the Royal Society of Medicine the audience was also surprised and delighted (and the work was awarded the John of Arderne Medal).

So, we called it the Hot Learning System, and years later I updated it with Mike’s much bigger database (29,000+ records) and created a basic web-based version of the first step – age, gender and presenting symptom.  You can have a play if you like … just click HERE.


So what are the lessons here?

  1. We need to have the most experienced diagnosticians at the start of the improvement process.
  2. The first diagnostic assessment can be very quick so long as we have developed evidence-based heuristics.
  3. We can accelerate the training in diagnostic skills using simple information technology and basic analysis techniques.

And exactly the same is true in health care system improvement.

We need to have an experienced health care improvement practitioner involved at the start, because if we skip this critical study step and move straight to planning without a correct diagnosis, then we will make errors, take poor decisions and counter-productive actions, and generate more work, more queues, more delays, more chaos, more distress and increased costs.

Exactly the opposite of what we want.

Q1: So, how do we develop experienced improvement practitioners more quickly?

Q2: Is there a hot learning system for improvement science?

A: Yes, there is. It can be found here.

The Storyboard

This week about thirty managers and clinicians in South Wales conducted two experiments to test the design of the Flow Design Practical Skills One Day Workshop.

Their collective challenge was to diagnose and treat a “chronically sick” clinic and the majority had no prior exposure to health care systems engineering (HCSE) theory, techniques, tools or training.

Two of the group, Chris and Jat, had been delegates at a previous ODWS, and had then completed their Level-1 HCSE training and real-world projects.

They had seen it and done it, so this experiment was to test if they could now teach it.

Could they replicate the “OMG effect” that they had experienced and that fired up their passion for learning and using the science of improvement?


The Pathology of Variation I

In medical training we have to learn about lots of things. That is one reason why it takes a long time to train a competent and confident clinician.

First, we learn the anatomy (structure) and the physiology (function) of the normal, healthy human.

Then we learn about how this amazingly complicated system can go wrong.  We learn about pathology.  And we do that so that we understand the relationship between the cause (disease) and the effect (symptoms and signs).

Then we learn about diagnostics – which is how to work backwards from the effects to the most likely cause(s).

And only then can we learn about therapeutics – the design and delivery of a treatment plan that we are confident will relieve the symptoms by curing the disease.

And we learn about prevention – how to avoid some illnesses (and delay others) by addressing the root causes earlier.  Much of the increase in life expectancy over the last 200 years has come from prevention, not from cure.


The NHS is an amazingly complicated system, and it too can go wrong.  It can exhibit a wide spectrum of symptoms and signs; medical errors, long delays, unhappy patients, burned-out staff, and overspent budgets.

But, there is no equivalent training in how to diagnose and treat a sick health care system.  And this is not acceptable, especially given that the knowledge of how to do this is already available.

It is called complex adaptive systems engineering (CASE).


Before the Renaissance, the understanding of how the body works was primitive, and it was believed that illness was “God’s Will”, so we had to just grin-and-bear-it (and pray).

The Scientific Revolution brought us new insights, profound theories, innovative techniques and capability-extending tools.  And the impact has been dramatic.  Those who do have access to this knowledge live better and longer than ever.  Those who do not … do not.

Our current understanding of how health care systems work is, to be blunt, medieval.  The current approaches amount to little more than rune reading, incantations and the prescription of purgatives and leeches.  And the impact is about as effective.

So we need to study the anatomy, physiology, pathology, diagnostics and therapeutics of complex adaptive systems like healthcare.  And most of all we need to understand how to prevent catastrophes happening in the first place.  We need the NHS to be immortal.


And this week a prototype complex adaptive pathology training system was tested … and it employed cutting-edge 21st Century technology: Pasta Twizzles.

The specific topic under scrutiny was variation.  A brain-bending concept that is usually relegated to the mystical smoke-and-mirrors world called “Sadistics”.

But no longer!

The Mists-of-Jargon and Fog-of-Formulae were blown away as we switched on the Fan-of-Facilitation and the Light-of-Simulation and went exploring.

Empirically. Pragmatically.


And what we discovered was jaw-dropping.

A disease called the “Flaw of Averages” and its malignant manifestation, “Carveoutosis”.


And with our new knowledge we opened the door to a previously hidden world of opportunity and improvement.

Then we activated the Laser-of-Insight and evaporated the queues and chaos that, before our new understanding, we had accepted as inevitable and beyond our understanding or control.

They were neither. And never had been. We were deluding ourselves.
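The “Carveoutosis” effect can be demonstrated with a few lines of queue simulation.  This is a sketch, not the workshop material: it assumes two equally busy patient streams with exponentially distributed arrival and treatment times, and compares one shared (pooled) queue with two carved-out dedicated queues of the same total capacity:

```python
import random
import statistics

def mean_wait(arrival_rate, service_rate, n_servers, n_customers, seed):
    """Average queueing delay in a first-come-first-served multi-server queue."""
    rng = random.Random(seed)
    t = 0.0
    free_at = [0.0] * n_servers                 # when each server next becomes free
    waits = []
    for _ in range(n_customers):
        t += rng.expovariate(arrival_rate)      # next patient arrives
        i = min(range(n_servers), key=lambda s: free_at[s])
        start = max(t, free_at[i])              # queue if everyone is busy
        waits.append(start - t)
        free_at[i] = start + rng.expovariate(service_rate)
    return statistics.mean(waits)

# Two streams, each generating 0.8 arrivals per unit time; each server treats
# 1 patient per unit time, so utilisation is 80% in both designs.
pooled = mean_wait(1.6, 1.0, 2, 50_000, seed=1)        # one shared queue
carved = (mean_wait(0.8, 1.0, 1, 50_000, seed=2) +     # two dedicated queues,
          mean_wait(0.8, 1.0, 1, 50_000, seed=3)) / 2  # averaged

print(f"pooled wait: {pooled:.2f}  carved-out wait: {carved:.2f}")
```

With identical total capacity and identical total demand, the carved-out design waits roughly twice as long (queueing theory predicts mean waits of about 1.8 versus 4.0 time units for these parameters).  Carving up a shared resource into dedicated streams destroys flexibility, and the queues pay for it.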

Welcome to the Resilient Design – Practical Skills – One Day Workshop.

Validation Test: Passed.

Diagnose-Design-Deliver

A story was shared this week.

A story of hope for the hard-pressed NHS, its patients, its staff and its managers and its leaders.

A story that says “We can learn how to fix the NHS ourselves“.

And the story comes with evidence; hard, objective, scientific, statistically significant evidence.


The story starts almost exactly three years ago when a Clinical Commissioning Group (CCG) in England made a bold strategic decision to invest in improvement, or as they termed it “Achieving Clinical Excellence” (ACE).

They invited proposals from their local practices with the “carrot” of enough funding to allow GPs to carve-out protected time to do the work.  And a handful of proposals were selected and financially supported.

This is the story of one of those proposals, which came from three practices in Sutton who chose to work together on a common problem – unplanned hospital admissions in their over-70s.

Their objective was clear and measurable: “To reduce the cost of unplanned admissions in the 70+ age group by working with the hospital to reduce length of stay.”

Did they achieve their objective?

Yes, they did.  But there is more to this story than that.  Much more.


One innovative step they took was to invest in learning how to diagnose why the current ‘system’ was costing what it was; then learning how to design an improvement; and then learning how to deliver that improvement.

They invested in developing their own improvement science skills first.

They did not assume they already knew how to do this and they engaged an experienced health care systems engineer (HCSE) to show them how to do it (i.e. not to do it for them).

Another innovative step was to create a blog to make it easier to share what they were learning with their colleagues; and to invite feedback and suggestions; and to provide a journal that captured the story as it unfolded.

And they measured stuff before they made any changes, and afterwards, so that they could assess the impact of the changes scientifically.

And that was actually quite easy because the CCG was already measuring what they needed to know: admissions, length of stay, cost, and outcomes.

All they needed to learn was how to present and interpret that data in a meaningful way.  And as part of their IS training,  they learned how to use system behaviour charts, or SBCs.


By Jan 2015 they had learned enough of the HCSE techniques and tools to establish the diagnosis and start making changes to the parts of the system that they could influence.


Two years later they subjected their before-and-after data to robust statistical analysis and they had a surprise. A big one!

Reducing hospital mortality was not a stated objective of their ACE project, and they only checked the mortality data to be sure that it had not changed.

But it had, and the “p=0.014” part of the statement above means that the probability of seeing this 20.0% reduction in hospital mortality by random chance alone … is just 1.4%.  [This is well below the 5% threshold that we usually accept as “statistically significant” in a clinical trial.]

But …

This was not a randomised controlled trial.  This was an intervention in a complicated, ever-changing system; so they needed to check that the hospital mortality for comparable patients who were not their patients had not changed as well.

And the statistical analysis of the hospital mortality for the ‘other’ practices for the same patient group, and the same period of time confirmed that there had been no statistically significant change in their hospital mortality.

So, it appears that what the Sutton ACE Team did to reduce length of stay (and cost) had also, unintentionally, reduced hospital mortality. A lot!
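For readers unfamiliar with the statistics, here is a sketch of the sort of arithmetic that sits behind a claim like “20% mortality reduction, p=0.014”.  The counts below are entirely hypothetical – the real figures are in the full report – and are chosen only to show how a two-proportion z-test turns before-and-after counts into a p-value:

```python
import math

# Hypothetical illustration only: 250 deaths in 2500 admissions before,
# 200 deaths in 2500 admissions after -- a 20% relative reduction.
deaths_before, n_before = 250, 2500
deaths_after,  n_after  = 200, 2500

p1 = deaths_before / n_before
p2 = deaths_after / n_after
pooled = (deaths_before + deaths_after) / (n_before + n_after)

# Standard error of the difference in proportions under the null hypothesis
se = math.sqrt(pooled * (1 - pooled) * (1 / n_before + 1 / n_after))
z = (p1 - p2) / se

# Two-sided p-value from the standard normal distribution
p_value = math.erfc(abs(z) / math.sqrt(2))
print(f"z = {z:.2f}, p = {p_value:.3f}")   # well below the 0.05 threshold
```

The smaller the p-value, the less plausible it is that the observed difference arose by chance alone – which is why the control comparison against the ‘other’ practices matters so much: it rules out a system-wide trend masquerading as a local improvement.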


And this unexpected outcome raises a whole raft of questions …


If you would like to read their full story then you can do so … here.

It is a story of hunger for improvement, of humility to learn, of hard work and of hope for the future.

Dr Hyde and Mr Jekyll

Dr Bill Hyde was already at the bar when Bob Jekyll arrived.

Bill and Bob had first met at university and had become firm friends, but their careers had diverged and it was only by pure chance that their paths had crossed again recently.

They had arranged to meet up for a beer and to catch up on what had happened in the 25 years since they had enjoyed the “good old times” in the university bar.

<Dr Bill> Hi Bob, what can I get you? If I remember correctly it was anything resembling real ale. Will this “Black Sheep” do?

<Bob> Hi Bill, Perfect! I’ll get the nibbles. Plain nuts OK for you?

<Dr Bill> My favourite! So what are you up to now? What doors did your engineering degree open?

<Bob> Lots!  I’ve done all sorts – mechanical, electrical, software, hardware, process, all except civil engineering. And I love it. What I do now is a sort of synthesis of all of them.  And you? Where did your medical degree lead?

<Dr Bill> To my heart’s desire, the wonderful Mrs Hyde, and of course to primary care. I am a GP. I have wanted to be a GP since I was knee-high to a grasshopper.

<Bob> Yes, you always had that “I’m going to save the world one patient at a time!” passion. That must be so rewarding! Helping people who are scared witless by the health horror stories that the media pump out.  I had a fright last year when I found a lump.  My GP was great, she confidently diagnosed a “hernia” and I was all sorted in a matter of weeks with a bit of nifty day case surgery. I was convinced my time had come. It just shows how damaging the fear of the unknown can be!

<Dr Bill> Being a GP is amazingly rewarding. I love my job. But …

<Bob> But what? Are you alright Bill? You suddenly look really depressed.

<Dr Bill> Sorry Bob. I don’t want to be a damp squib. It is good to see you again, and chat about the old days when we were teased about our names.  And it is great to hear that you are enjoying your work so much. I admit I am feeling low, and frankly I welcome the opportunity to talk to someone I know and trust who is not part of the health care system. If you know what I mean?

<Bob> I know exactly what you mean.  Well, I can certainly offer an ear, “a problem shared is a problem halved” as they say. I can’t promise to do any more than that, but feel free to tell me the story, from the beginning. No blood-and-guts gory details though please!

<Dr Bill> Ha! “Tell me the story from the beginning” is what I say to my patients. OK, here goes. I feel increasingly overwhelmed and I feel like I am drowning under a deluge of patients who are banging on the practice door for appointments to see me. My intuition tells me that the problem is not the people, it is the process, but I can’t seem to see through the fog of frustration and chaos to a clear way forward.

<Bob> OK. I confess I know nothing about how your system works, so can you give me a bit more context.

<Dr Bill> Sorry. Yes, of course. I am what is called a single-handed GP and I have a list of about 1500 registered patients and I am contracted to provide primary care for them. I don’t have to do that 24 x 7, the urgent stuff that happens in the evenings and weekends is diverted to services that are designed for that. I work Monday to Friday from 9 AM to 5 PM, and I am contracted to provide what is needed for my patients, and that means face-to-face appointments.

<Bob> OK. When you say “contracted” what does that mean exactly?

<Dr Bill> Basically, the St. Elsewhere’s® Practice is like a small business. Its annual income is a fixed amount per year for each patient on the registration list, and I have to provide the primary care service for them from that pot of cash. And that includes all the costs, including my income, our practice nurse, and the amazing Mrs H. She is the practice receptionist, manager, administrator and all-round fixer-of-anything.

<Bob> Wow! What a great design. No need to spend money on marketing, research, new product development, or advertising! Just 100% pure service delivery of tried-and-tested medical know-how to a captive audience for a guaranteed income. I have commercial customers who would cut off their right arms for an offer like that!

<Dr Bill> Really? It doesn’t feel like that to me. It feels like the more I offer, the more the patients expect. The demand is a bottomless well of wants, but the income is capped and my time is finite!

<Bob> H’mm. Tell me more about the details of how the process works.

<Dr Bill> Basically, I am a problem-solving engine. Patients phone for an appointment, Mrs H books one, the patient comes at the appointed time, I see them, and I diagnose and treat the problem, or I refer on to a specialist if it’s more complicated. That’s basically it.

<Bob> OK. Sounds a lot simpler than 99% of the processes that I’m usually involved with. So what’s the problem?

<Dr Bill> I don’t have enough capacity! After all the appointments for the day are booked Mrs H has to say “Sorry, please try again tomorrow” to every patient who phones in after that.  The patients who can’t get an appointment are not very happy and some can get quite angry. They are anxious and frustrated and I fully understand how they feel. I feel the same.

<Bob> We will come back to what you mean by “capacity”. Can you outline for me exactly how a patient is expected to get an appointment?

<Dr Bill> We tell them to phone at 8 AM for an appointment, there is a fixed number of bookable appointments, and it is first-come-first-served.  That is the only way I can protect myself from being swamped and is the fairest solution for patients.  It wasn’t my idea; it is called Advanced Access. Each morning at 8 AM we switch on the phones and brace ourselves for the daily deluge.

<Bob> You must be pulling my leg! This design is a batch-and-queue phone-in appointment booking lottery!  I guess that is one definition of “fair”.  How many patients get an appointment on the first attempt?

<Dr Bill> Not many.  The appointments are usually all gone by 9 AM, and a lot go to people who have been trying to get one for several days. When they do eventually get to see me they are usually grumpy, and then spring the trump card “And while I’m here doctor, I have a few other things that I’ve been saving up to ask you about”. I help if I can, but more often than not I have to say, “I’m sorry, you’ll have to book another appointment!”.

<Bob> I’m not surprised your patients are grumpy. I would be too. And my recollection of seeing my GP with my scary lump wasn’t like that at all. I phoned at lunch time and got an appointment the same day. Maybe I was just lucky, or maybe my GP was as worried as me. But it all felt very calm. When I arrived there was only one other patient waiting, and I was in and out in less than ten minutes – and mightily reassured I can tell you! It felt like a high quality service that I could trust if-and-when I needed it, which fortunately is very infrequently.

<Dr Bill> I dream of being able to offer a service like that! I am prepared to bet you are registered with a group practice and you see whoever is available rather than your own GP. Single-handed GPs like me who offer the old fashioned personal service are a rarity, and I can see why. We must be suckers!

<Bob> OK, so I’m starting to get a sense of this now. Has it been like this for a long time?

<Dr Bill> Yes, it has. When I was younger I was more resilient and I did not mind going the extra mile.  But the pressure is relentless and maybe I’m just getting older and grumpier.  My real fear is I end up sounding like the burned-out cynics that I’ve heard at the local GP meetings; the ones who crow about how they are counting down the days to when they can retire and gloat.

<Bob> You’re the same age as me Bill so I don’t think either of us can use retirement as an exit route, and anyway, that’s not your style. You were never a quitter at university. Your motto was always “when the going gets tough the tough get going“.

<Dr Bill> Yeah I know. That’s why it feels so frustrating. I think I lost my mojo a long time back. Maybe I should just cave in and join up with the big group practice down the road, and accept the inevitable loss of the personal service. They said they would welcome me, and my list of 1500 patients, with open arms.

<Bob> OK. That would appear to be an option, or maybe a compromise, but I’m not sure we’ve exhausted all the other options yet.  Tell me, how do you decide how long a patient needs for you to solve their problem?

<Dr Bill> That’s easy. It is ten minutes. That is the time recommended in the Royal College Guidelines.

<Bob> Eh? All patients require exactly ten minutes?

<Dr Bill> No, of course not!  That is the average time that patients need.  The Royal College did a big survey and that was what most GPs said they needed.

<Bob> Please tell me if I have got this right.  You work 9-to-5, and you carve up your day into 10-minute time-slots called “appointments” and, assuming you are allowed time to have lunch and a pee, that would be six per hour for seven hours which is 42 appointments per day that can be booked?

<Dr Bill> No. That wouldn’t work because I have other stuff to do as well as see patients. There are only 25 bookable 10-minute appointments per day.

<Bob> OK, that makes more sense. So where does 25 come from?

<Dr Bill> Ah! That comes from a big national audit. For an average GP with an average list of 1,500 patients, the average number of patients seeking an appointment per day was found to be 25, and our practice population is typical of the national average in terms of age and deprivation.  So I set the upper limit at 25. The workload is manageable but it seems to generate a lot of unhappy patients and I dare not increase the slots because I’d be overwhelmed with the extra workload and I’m barely coping now.  I feel stuck between a rock and a hard place!

<Bob> So you have set the maximum slot-capacity to the average demand?

<Dr Bill> Yes. That’s OK isn’t it? It will average out over time. That is what average means! But it doesn’t feel like that. The chaos and pressure never seems to go away.


There was a long pause while Bob mulled over what he had heard, sipped his pint of Black Sheep and nibbled on the dwindling bowl of peanuts.  Eventually he spoke.


<Bob> Bill, I have some good news and some not-so-good news and then some more good news.

<Dr Bill> Oh dear, you sound just like me when I have to share the results of tests with one of my patients at their follow up appointment. You had better give me the “bad news sandwich”!

<Bob> OK. The first bit of good news is that this is a very common, and easily treatable flow problem.  The not-so-good news is that you will need to change some things.  The second bit of good news is that the changes will not cost anything and will work very quickly.

<Dr Bill> What! You cannot be serious!! Until ten minutes ago you said that you knew nothing about how my practice works and now you are telling me that there is a quick, easy, zero cost solution.  Forgive me for doubting your engineering know-how but I’ll need a bit more convincing than that!

<Bob> And I would too if I were in your position.  The clues to the diagnosis are in the story. You said the process problem was long-standing; you said that you set the maximum slot-capacity to the average demand; and you said that you have a fixed appointment time that was decided by a subjective consensus.  From an engineering perspective, this is a perfect recipe for generating chronic chaos, and that is exactly the pattern of symptoms you are describing.

<Dr Bill> Is it? OMG. You said this is well understood and resolvable? So what do I do?

<Bob> Give me a minute.  You said the average demand is 25 per day. What sort of service would you like your patients to experience? Would “90% can expect a same day appointment on the first call” be good enough as a starter?

<Dr Bill> That would be game changing!  Mrs H would be over the moon to be able to say “Yes” that often. I would feel much less anxious too, because I know the current system is a potentially dangerous lottery. And my patients would be delighted and relieved to be able to see me that easily and quickly.

<Bob> OK. Let me work this out. Based on what you’ve said, some assumptions, and a bit of flow engineering know-how; you would need to offer up to 31 appointments per day.

<Dr Bill> What! That’s impossible!!! I told you it would be impossible! That would be another hour a day of face-to-face appointments. When would I do the other stuff? And how did you work that out anyway?

<Bob> I did not say they would have to all be 10-minute appointments, and I did not say you would expect to fill them all every day. I did however say you would have to change some things.  And I did say this is a well understood flow engineering problem.  It is called “resilience design“. That’s how I was able to work it out on the back of this Black Sheep beer mat.

<Dr Bill> H’mm. That is starting to sound a bit more reasonable. What things would I have to change? Specifically?

<Bob> I’m not sure what specifically yet.  I think in your language we would say “I have taken a history, and I have a differential diagnosis, so next I’ll need to examine the patient, and then maybe do some tests to establish the actual diagnosis and to design and decide the treatment plan“.

<Dr Bill> You are learning the medical lingo fast! What do I need to do first? Brace myself for the forensic rubber-gloved digital examination?

<Bob> Alas, not yet and certainly not here. Shall we start with the vital signs? Height, weight, pulse, blood pressure, and temperature? That’s what my GP did when I went with my scary lump.  The patient here is not you, it is your St. Elsewhere’s® Practice, and we will need to translate the medical-speak into engineering-speak.  So one thing you’ll need to learn is a bit of the lingua-franca of systems engineering.  By the way, that’s what I do now. I am a systems engineer, or maybe now a health care systems engineer?

<Dr Bill> Point me in the direction of the HCSE dictionary! The next round is on me. And the nuts!

<Bob> Excellent. I’ll have another Black Sheep and some of those chilli-coated ones. We have work to do.  Let me start by explaining what “capacity” actually means to an engineer. Buckle up. This ride might get a bit bumpy.


This story is fictional, but the subject matter is factual.

Bob’s diagnosis and recommendations are realistic and reasonable.
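Bob’s beer-mat arithmetic can be reproduced in a few lines.  This sketch assumes that daily demand is Poisson-distributed around the average of 25 requests – an assumption, since the story does not specify the demand distribution:

```python
import math

def poisson_cdf(k, mean):
    """P(daily demand <= k) for Poisson-distributed demand."""
    return sum(math.exp(-mean) * mean**i / math.factorial(i)
               for i in range(k + 1))

mean_demand = 25   # average daily appointment requests (from the story)
target = 0.90      # "90% can expect a same-day appointment"

# Smallest slot-capacity that absorbs the whole day's demand 90% of the time
slots = mean_demand
while poisson_cdf(slots, mean_demand) < target:
    slots += 1

print(f"Slots needed for {target:.0%} same-day service: {slots}")
print(f"With slots = average demand (25), only "
      f"{poisson_cdf(25, mean_demand):.0%} of days can absorb the demand")
```

The answer lands in the low thirties, consistent with Bob’s estimate of “up to 31”.  The second line shows why setting slot-capacity equal to average demand guarantees chronic chaos: demand exceeds capacity on nearly half of all days, and the unmet demand rolls over as a growing backlog of grumpy patients.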

Chapter 1 of the HCSE dictionary can be found here.

And if you are a GP who recognises these “symptoms” then this may be of interest.

MOOCHI

“When education fails to keep pace with technology the result is inequality. Without the skills to stay useful as innovations arrive, workers suffer.” The Economist, January 14th 2017, p 11.

The stark reality is that we all have to develop the habit of lifelong learning, especially if we want to avoid mid-career obsolescence.

A terrifying prospect for the family bread-winner.

This is especially true in health care, because medical and managerial technology is always changing as the health care system evolves and adapts to the shifting sands and tides.

But we cannot keep going back to traditional classroom methods to update our knowledge and skills: it is too disruptive and expensive.  And when organisations are in a financial squeeze, the training budget is usually the first casualty!

So, how can we protect ourselves?  One answer is a MOOC – a Massive Open Online Course.

The mantra is “learn while you earn” which means that we do not take time out to do this intermittently, we do it in parallel, and continuously.

The MOOC model leverages the power of the Internet and mobile technology, allowing us to have bites of learning where and when it most suits us, at whatever pace we choose to set.

We can have all the benefits of traditional education too: certificates, communities, and coaching.

And when keeping a job, climbing the career ladder, or changing companies all require a bang-up-to-date set of skills – a bit of time, effort and money may be a very wise investment and deliver a healthy return!


And the good news is that there is a MOOC for Healthcare Improvement.

It is called the …

Foundations of Improvement Science in Healthcare

which is an open door to a growing …

Community of Healthcare Improvement Practitioners.

Click HERE for a free taste …. yum yum!


 

Miracle on Tavanagh Avenue

Sometimes change is dramatic. A big improvement appears very quickly. And when that happens we are caught by surprise (and delight).

Our emotional reaction is much faster than our logical response. “Wow! That’s a miracle!”


Our logical Tortoise eventually catches up with our emotional Hare and says “Hare, we both know that there is no such thing as miracles and magic. There must be a rational explanation. What is it?”

And Hare replies “I have no idea, Tortoise.  If I did then it would not have been such a delightful surprise. You are such a kill-joy! Can’t you just relish the relief without analyzing the life out of it?”

Tortoise feels hurt. “But I just want to understand so that I can explain to others. So that they can do it and get the same improvement.  Not everyone has a ‘nothing-ventured-nothing-gained’ attitude like you! Most of us are too fearful of failing to risk trusting the wild claims of improvement evangelists. We have had our fingers burned too often.”


The apparent miracle is real and recent … here is a snippet of the feedback:

Notice carefully the last sentence. It took a year of discussion to get an “OK” and a month of planning to prepare the “GO”.

That is not a miracle or magic … that took a lot of hard work!

The evangelist is the customer. The supplier is an engineer.


The context is the chronic niggle of patients trying to get an appointment with their GP, and the chronic niggle of GPs feeling overwhelmed with work.

Here is the back story …

In the opening weeks of the 21st Century, the National Primary Care Development Team (NPDT) was formed.  Primary care was a high priority and the government had allocated £168m of investment in the NHS Plan, £48m of which was earmarked to improve GP access.

The approach the NPDT chose was:

harvest best practice +
use a panel of experts +
disseminate best practice.

Dr (later Sir) John Oldham was the innovator and figure-head.  The best practice was copied from Dr Mark Murray from Kaiser Permanente in the USA – the Advanced Access model.  The dissemination method was copied from Dr Don Berwick’s Institute of Healthcare Improvement (IHI) in Boston – the Collaborative Model.

The principle of Advanced Access is “today’s-work-today” which means that all the requests for a GP appointment are handled the same day.  And the proponents of the model outlined the key elements to achieving this:

1. Measure daily demand.
2. Set capacity so that it is sufficient to meet the daily demand.
3. Simple booking rule: “phone today for a decision today”.

But that is not what was rolled out. The design was modified somewhere between aspiration and implementation and in two important ways.

First, by adding a policy of “Phone at 08:00 for an appointment”, and second by adding a policy of “carving out” appointment slots into labelled pots such as ‘Dr X’ or ‘see in 2 weeks’ or ‘annual reviews’.

Subsequent studies suggest that the tweaking happened at the GP practice level and was driven by the fear that, by reducing the waiting time, they would attract more work.

In other words: an assumption that demand for health care is supply-led, and without some form of access barrier, the system would be overwhelmed and never be able to cope.


The result of this well-intended tampering with the Advanced Access design was to invalidate it. Oops!

To a systems engineer, this meddling was counter-productive.

The “today’s work today” specification is called a demand-led design and, if implemented competently, will lead to shorter waits for everyone, no need for urgent/routine prioritization and slot carve-out, and a simpler, safer, calmer, more efficient, higher quality, more productive system.

In this context it does not mean “see every patient today” it means “assess and decide a plan for every patient today”.

In reality, the actual demand for GP appointments is not known at the start; which is why the first step is to implement continuous measurement of the daily number and category of requests for appointments.

The second step is to feed back this daily demand information in a visual format called a time-series chart.

The third step is to use this visual tool for planning future flow-capacity, and for monitoring for ‘signals’, such as spikes, shifts, cycles and slopes.
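A minimal sketch of these three steps, using the XmR-chart convention for separating “signals” from noise. The daily request counts below are illustrative numbers, not real practice data:

```python
# Sketch of a daily-demand feedback loop using an XmR-style time-series chart.
# The demand figures below are illustrative, not measured practice data.

demand = [52, 48, 55, 50, 47, 53, 49, 51, 46, 54, 50, 48, 72, 51, 49]

mean = sum(demand) / len(demand)

# Average moving range between consecutive days measures day-to-day "noise".
moving_ranges = [abs(b - a) for a, b in zip(demand, demand[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Natural process limits (the usual XmR constant 2.66).
upper = mean + 2.66 * avg_mr
lower = mean - 2.66 * avg_mr

# Flag candidate "signals": days whose demand falls outside the limits.
signals = [(day, x) for day, x in enumerate(demand, start=1)
           if x > upper or x < lower]

print(f"mean demand = {mean:.1f}, limits = ({lower:.1f}, {upper:.1f})")
print("signals:", signals)
```

The flagged day (a spike to 72 requests) is the sort of signal that prompts investigation, while the rest of the variation is treated as routine noise when planning flow-capacity.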

That was not part of the modified design, so the reasonable fear expressed by GPs was (and still is) that by attempting to do today’s-work-today they would unleash a deluge of unmet need … and be swamped/drowned.

So a flood defence barrier was bolted on: the policy of “phone at 08:00 for an appointment today”, and then the policy of channelling the overspill into pots of “embargoed slots”.

The combined effect of this error of omission (omitting the measured demand visual feedback loop) and these errors of commission (the 08:00 policy and appointment slot carve-out policy) effectively prevented the benefits of the Advanced Access design being achieved.  It was a predictable failure.

But no one seemed to realize that at the time.  Perhaps because of the political haste that was driving the process, and perhaps because there were no systems engineers on the panel-of-experts to point out the risks of diluting the design.

It is also interesting to note that the strategic aim of the NPDT was to develop a self-sustaining culture of quality improvement (QI) in primary care. That does not seem to have happened either.


The roll out of Advanced Access was not the success that had been hoped for. This is the conclusion of the 300+ page research report published in 2007.


The “Miracle on Tavanagh Avenue” that was experienced this week by both patients and staff was the expected effect of this tampering finally being corrected; and the true potential of the original demand-led design being released – for all to experience.

Remember the essential ingredients?

1. Measure daily demand and feed it back as a visual time-series chart.
2. Set capacity so that it is sufficient to meet the daily demand.
3. Use a simple booking rule: “phone anytime for a decision today”.

But there is also an extra design ingredient that has been added in this case, one that was not part of the original Advanced Access specification, one that frees up GP time to provide the required “resilience” to sustain a same-day service.

And that “secret” ingredient is why the new design worked so quickly and feels like a miracle – safe, calm, enjoyable and productive.

This is health care systems engineering (HCSE) in action.


So congratulations to Harry Longman, the whole team at GP Access, and to Dr Philip Lusty and the team at Riverside Practice, Tavanagh Avenue, Portadown, NI.

You have demonstrated what was always possible.

The fear of failure prevented it before, just as it prevented you doing this until you were so desperate you had no other choices.

To read the fuller story click here.

PS. Keep a close eye on the demand time-series chart and if it starts to rise then investigate the root cause … immediately.


The Power of Pictures

I am a big fan of pictures that tell a story … and this week I discovered someone who is creating great pictures … Hayley Lewis.

This is one of Hayley’s excellent sketch notes … the one that captures the essence of the Bruce Tuckman model of team development.

The reason that I share this particular sketch-note is because my experience of developing improvement-by-design teams is that it works just like this!

The tricky phase is the STORMING one because not all teams survive it!

About half sink in the storm – and that seems like an awful waste – and I believe it is avoidable.

This means that before starting the team development cycle, the leader needs to be aware of how to navigate themselves and the team through the storm phase … and that requires training, support and practice.

Which is the reason why coaching from an independent, experienced, capable practitioner is a critical element of the improvement process.

How Do We Know We Have Improved?

Phil and Pete are having a coffee and a chat.  They both work in the NHS and have been friends for years.

They have different jobs. Phil is a commissioner and an accountant by training, Pete is a consultant and a doctor by training.

They are discussing a challenge that affects them both on a daily basis: unscheduled care.

Both Phil and Pete want to see significant and sustained improvements and how to achieve them is often the focus of their coffee chats.


<Phil> We are agreed that we both want improvement, both from my perspective as a commissioner and from your perspective as a clinician. And we agree that we want to see improvements in patient safety, waiting, outcomes, experience for both patients and staff, and use of our limited NHS resources.

<Pete> Yes. Our common purpose, the “what” and “why”, has never been an issue.  Where we seem to get stuck is the “how”.  We have both tried many things but, despite our good intentions, it feels like things are getting worse!

<Phil> I agree. It may be that what we have implemented has had a positive impact and we would have been even worse off if we had done nothing. But I do not know. We clearly have much to learn and, while I believe we are making progress, we do not appear to be learning fast enough.  And I think this knowledge gap exposes another “how” issue: After we have intervened, how do we know that we have (a) improved, (b) not changed or (c) worsened?

<Pete> That is a very good question.  And all that I have to offer as an answer is to share what we do in medicine when we ask a similar question: “How do I know that treatment A is better than treatment B?”  It is the essence of medical research; the quest to find better treatments that deliver better outcomes and at lower cost.  The similarities are strong.

<Phil> OK. How do you do that? How do you know that “Treatment A is better than Treatment B” in a way that anyone will trust the answer?

<Pete> We use a science that is actually very recent on the scientific timeline; it was only firmly established in the first half of the 20th century. One reason is that it is a rather counter-intuitive science, so it requires tools that have been designed and demonstrated to work, but whose inner workings most of us do not really understand. They are a bit like magic black boxes.

<Phil> H’mm. Please forgive me for sounding skeptical but that sounds like a big opportunity for making mistakes! If there are lots of these “magic black box” tools then how do you decide which one to use and how do you know you have used it correctly?

<Pete> Those are good questions! Very often we don’t know and in our collective confusion we generate a lot of unproductive discussion.  This is why we are often forced to accept the advice of experts but, I confess, very often we don’t understand what they are saying either! They seem like the medieval Magi.

<Phil> H’mm. So these experts are like ‘magicians’ – they claim to understand the inner workings of the black magic boxes but are unable, or unwilling, to explain in a language that a ‘muggle’ would understand?

<Pete> Very well put. That is just how it feels.

<Phil> So can you explain what you do understand about this magical process? That would be a start.


<Pete> OK, I will do my best.  The first thing we learn in medical research is that we need to be clear about what it is we are looking to improve, and we need to be able to measure it objectively and accurately.

<Phil> That  makes sense. Let us say we want to improve the patient’s subjective quality of the A&E experience and objectively we want to reduce the time they spend in A&E. We measure how long they wait. 

<Pete> The next thing is that we need to decide how much improvement we need. What would be worthwhile? So in the example you have offered we know that reducing the average time patients spend in A&E by just 30 minutes would have a significant effect on the quality of the patient and staff experience, and as a by-product it would also dramatically improve the 4-hour target performance.

<Phil> OK.  From the commissioning perspective there are lots of things we can do, such as commissioning alternative paths for specific groups of patients; in effect diverting some of the unscheduled demand away from A&E to a more appropriate service provider.  But these are the sorts of thing we have been experimenting with for years, and it brings us back to the question: How do we know that any change we implement has had the impact we intended? The system seems, well, complicated.

<Pete> In medical research we are very aware that the system we are changing is very complicated and that we do not have the power of omniscience.  We cannot know everything.  Realistically, all we can do is to focus on objective outcomes, collect small samples of the data ocean, and use those in an attempt to draw conclusions we can trust. We have to design our experiment with care!

<Phil> That makes sense. Surely we just need to measure the stuff that will tell us if our impact matches our intent. That sounds easy enough. What’s the problem?

<Pete> The problem we encounter is that when we measure “stuff” we observe patient-to-patient variation, and that is before we have made any changes.  Any impact that we may have is obscured by this “noise”.

<Phil> Ah, I see.  So if our intervention generates a small impact then it will be more difficult to see amidst this background noise. Like trying to see fine detail in a fuzzy picture.

<Pete> Yes, exactly like that.  And it raises the issue of “errors”.  In medical research we talk about two different types of error; we make the first type of error when our actual impact is zero but we conclude from our data that we have made a difference; and we make the second type of error when we have made an impact but we conclude from our data that we have not.

<Phil> OK. So does that imply that the more “noise” we observe in our measure-for-improvement before we make the change, the more likely we are to make one or other error?

<Pete> Precisely! So before we do the experiment we need to design it so that we reduce the probability of making both of these errors to an acceptably low level.  So that we can be assured that any conclusion we draw can be trusted.

<Phil> OK. So how exactly do you do that?

<Pete> We know that whenever there is “noise” and whenever we use samples then there will always be some risk of making one or other of the two types of error.  So we need to set a threshold for both. We have to state clearly how much confidence we need in our conclusion. For example, we often use the convention that we are willing to accept a 1 in 20 chance of making the Type I error.

<Phil> Let me check if I have heard you correctly. Suppose that, in reality, our change has no impact and we have set the risk threshold for a Type I error at 1 in 20, and suppose we repeat the same experiment 100 times – are you saying that we should expect about five of our experiments to show data that says our change has had the intended impact when in reality it has not?

<Pete> Yes. That is exactly it.

<Phil> OK.  But in practice we cannot repeat the experiment 100 times, so we just have to accept the 1 in 20 chance that we will make a Type I error, and we won’t know we have made it if we do. That feels a bit chancy. So why don’t we just set the threshold to 1 in 100 or 1 in 1000?

<Pete> We could, but doing that has a consequence.  If we reduce the risk of making a Type I error by setting our threshold lower, then we will increase the risk of making a Type II error.
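This trade-off can be demonstrated with a small simulation. The sketch below assumes a one-sided z-test on samples of 25 values from a normal population with known spread; all of the numbers are illustrative, chosen only to make the effect visible:

```python
# Monte-Carlo sketch of the two error types and their trade-off.
# Assumed set-up (illustrative only): a one-sided z-test on samples of
# n = 25 from a normal population with known spread sigma = 1.

import random
from statistics import NormalDist, mean

random.seed(42)

N_TRIALS, n, sigma, delta = 2000, 25, 1.0, 0.5

def reject(sample_mean, alpha):
    """One-sided z-test: is the sample mean significantly above zero?"""
    z = sample_mean / (sigma / n ** 0.5)
    return z > NormalDist().inv_cdf(1 - alpha)

def error_rates(alpha):
    false_pos = sum(  # true impact is zero, but we conclude there is one
        reject(mean(random.gauss(0, sigma) for _ in range(n)), alpha)
        for _ in range(N_TRIALS)) / N_TRIALS
    false_neg = sum(  # true impact is delta, but we fail to detect it
        not reject(mean(random.gauss(delta, sigma) for _ in range(n)), alpha)
        for _ in range(N_TRIALS)) / N_TRIALS
    return false_pos, false_neg

for alpha in (0.05, 0.005):
    fp, fn = error_rates(alpha)
    print(f"alpha={alpha}: false positives={fp:.3f}, false negatives={fn:.3f}")
```

Tightening the Type I threshold from 1-in-20 to 1-in-200 visibly shrinks the false-positive rate, and just as visibly inflates the false-negative rate: the swings-and-roundabouts problem in action.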

<Phil> Ah! I see. The old swings-and-roundabouts problem. By the way, do these two errors have different names that would make it  easier to remember and to explain?

<Pete> Yes. The Type I error is called a False Positive. It is like concluding that a patient has a specific diagnosis when in reality they do not.

<Phil> And the Type II error is called a False Negative?

<Pete> Yes.  And we want to avoid both of them, and to do that we have to specify a separate risk threshold for each error.  The convention is to call the threshold for the false positive the alpha level, and the threshold for the false negative the beta level.

<Phil> OK. So now we have three things we need to be clear on before we can do our experiment: the size of the change that we need, the risk of the false positive that we are willing to accept, and the risk of a false negative that we are willing to accept.  Is that all we need?

<Pete> In medical research we learn that we need six pieces of the experimental design jigsaw before we can proceed. We only have three pieces so far.

<Phil> What are the other three pieces then?

<Pete> We need to know the average value of the metric we are intending to improve, because that is our baseline from which improvement is measured.  Improvements are often framed as a percentage improvement over the baseline.  And we need to know the spread of the data around that average, the “noise” that we referred to earlier.

<Phil> Ah, yes!  I forgot about the noise.  But that is only five pieces of the jigsaw. What is the last piece?

<Pete> The size of the sample.

<Phil> Eh?  Can’t we just go with whatever data we can realistically get?

<Pete> Sadly, no.  The size of the sample is how we control the risk of a false negative error.  The more data we have the lower the risk. This is referred to as the power of the experimental design.
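A sketch of what such a power calculation looks like, using the standard normal-approximation sample-size formula for comparing two group means. The A&E figures plugged in here (a spread of 60 minutes, a target improvement of 30 minutes) are illustrative assumptions, not measured values:

```python
# Sketch of a "power calculator": the standard normal-approximation
# sample-size formula for detecting a difference between two means.
# Illustrative inputs: sigma = 60 min spread, delta = 30 min target
# improvement, alpha = 0.05 (false positive), beta = 0.20 (false negative).

from math import ceil
from statistics import NormalDist

def sample_size_per_group(delta, sigma, alpha=0.05, beta=0.20):
    """Patients needed in each group to detect a true difference of
    `delta`, given spread `sigma`, false-positive risk `alpha`
    (two-sided) and false-negative risk `beta`."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(1 - beta)         # e.g. 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

print(sample_size_per_group(delta=30, sigma=60))  # → 63 per group
```

Note how the sample size grows with the square of the noise-to-improvement ratio: halving the detectable difference quadruples the data needed, which is exactly why guessing the sample size and hoping for the best is such a trap.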

<Phil> OK. That feels familiar. I know that the more experience I have of something the better my judgement gets. Is this the same thing?

<Pete> Yes. Exactly the same thing.

<Phil> OK. So let me see if I have got this. To know if the impact of the intervention matches our intention we need to design our experiment carefully. We need all six pieces of the experimental design jigsaw and they must all fall inside our circle of control. We can measure the baseline average and spread; we can specify the impact we will accept as useful; we can specify the risks we are prepared to accept of making the false positive and false negative errors; and we can collect the required amount of data after we have made the intervention so that we can trust our conclusion.

<Pete> Perfect! That is how we are taught to design research studies so that we can trust our results, and so that others can trust them too.

<Phil> So how do we decide how big the post-implementation data sample needs to be? I can see we need to collect enough data to avoid a false negative but we have to be pragmatic too. There would appear to be little value in collecting more data than we need. It would cost more and could delay knowing the answer to our question.

<Pete> That is precisely the trap that many inexperienced medical researchers fall into. They set their sample size according to what is achievable and affordable, and then they hope for the best!

<Phil> Well, we do the same. We analyse the data we have and we hope for the best.  In the magical metaphor we are asking our data analysts to pull a white rabbit out of the hat.  It sounds rather irrational and unpredictable when described like that! Have medical researchers learned a way to avoid this trap?

<Pete> Yes, it is a tool called a power calculator.

<Phil> Ooooo … a power tool … I like the sound of that … that would be a cool tool to have in our commissioning bag of tricks. It would be like a magic wand. Do you have such a thing?

<Pete> Yes.

<Phil> And do you understand how the power tool magic works well enough to explain to a “muggle”?

<Pete> Not really. To do that means learning some rather unfamiliar language and some rather counter-intuitive concepts.

<Phil> Is that the magical stuff I hear lurks between the covers of a medical statistics textbook?

<Pete> Yes. Scary looking mathematical symbols and unfathomable spells!

<Phil> Oh dear!  Is there another way for us to gain a working understanding of this magic? Something a bit more pragmatic? A path that a ‘statistical muggle’ might be able to follow?

<Pete> Yes. It is called a simulator.

<Phil> You mean like a flight simulator that pilots use to learn how to control a jumbo jet before ever taking a real one out for a trip?

<Pete> Exactly like that.

<Phil> Do you have one?

<Pete> Yes. It was how I learned about this “stuff” … pragmatically.
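A simulator of this kind can be sketched in a few lines: run many simulated “experiments” with a known, built-in improvement and count how often it is detected. All of the numbers below are illustrative assumptions (a 240-minute baseline, a 60-minute spread, a 30-minute true improvement, 63 patients per group):

```python
# Sketch of a "flight simulator" for experiment design: estimate the
# power of a two-group comparison empirically, by simulation.
# Illustrative assumptions: 240 min baseline average, sigma = 60 min,
# a true improvement of 30 min, and 63 patients per group.

import random
from statistics import NormalDist, mean

random.seed(1)

def simulate_power(n, delta, sigma, alpha=0.05, trials=2000):
    """Fraction of simulated experiments in which a true improvement of
    `delta` is detected by a two-sided z-test at level `alpha`."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    detected = 0
    for _ in range(trials):
        before = [random.gauss(240, sigma) for _ in range(n)]          # baseline
        after = [random.gauss(240 - delta, sigma) for _ in range(n)]   # improved
        se = sigma * (2 / n) ** 0.5
        z = (mean(before) - mean(after)) / se
        detected += abs(z) > z_crit
    return detected / trials

print(f"estimated power: {simulate_power(n=63, delta=30, sigma=60):.2f}")
```

The estimate comes out at around 80%, matching what a formula-based power calculation predicts for these inputs, and the simulator makes it easy to play “what if” with each of the six jigsaw pieces and watch the consequences.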

<Phil> Can you show me?

<Pete> Of course.  But to do that we will need a bit more time, another coffee, and maybe a couple of those tasty looking Danish pastries.

<Phil> A wise investment I’d say.  I’ll get the coffee and pastries, if you fire up the engines of the simulator.

The Lost Tribe


“Jingle Bells, Jingle Bells” announced Bob’s computer as he logged into the Webex meeting with Lesley.

<Bob> Hi Lesley, in case I forget later I’d like to wish you a Happy Christmas and hope that 2017 brings you new opportunity for learning and fun.

<Lesley> Thanks Bob, and I wish you the same. And I believe the blog last week pointed to some.

<Bob> Thank you and I agree;  every niggle is an opportunity for improvement and the “Houston we have a problem!” one is a biggie.

<Lesley> So how do we start on this one? It is massive!

<Bob> The same way we do on all niggles; we diagnose the root cause first. What do you feel they might be?

<Lesley> Well, following it backwards from your niggle, the board reports are created by the data analysts, and they will produce whatever they are asked to. It must be really irritating for them to have their work rubbished!

<Bob> Are you suggesting that they understand the flaws in what they are asked to do but keep quiet?

<Lesley> I am not sure they do, but there is clearly a gap between their intent and their impact. Where would they gain the insight? Do they have access to the sort of training I am getting?

<Bob> That is a very good question, and until this week I would not have been able to answer, but an interesting report by the Health Foundation was recently published on that very topic. It is entitled “Understanding Analytical Capability In Health Care” and what it says is that there is a lost tribe of data analysts in the NHS.

<Lesley> How interesting! That certainly resonates with my experience.  All the data analysts I know seem to be hidden away behind their computers, caught in the cross-fire between the boards and the wards, and very sensibly keeping their heads down and doing what they are asked to.

<Bob> That would certainly help to explain what we are seeing! And the good news is that Martin Bardsley, the author of the paper, has interviewed many people across the system, gathered their feedback, and offered some helpful recommendations.  Here is a snippet.

[Image: snippet of the recommendations from “Understanding Analytical Capability In Health Care”]

<Lesley> I like these recommendations, especially the “in-work training programmes” and inclusion “in general management and leadership training”. But isn’t that one of the purposes of the CHIPs training?

<Bob> It is indeed, which is why it is good to see that Martin has specifically recommended it.

[Image: snippet of the report showing the recommendation of the CHIPs training]

<Lesley> Excellent! That means that my own investment in the CHIPs training has just gained in street value and that’s good for my CV. An unexpected early Xmas present. Thank you!

“Houston, we have a problem!”

The immortal words from Apollo 13 that alerted us to an evolving catastrophe …

… and that is what we are seeing in the UK health and social care system … using the thermometer of A&E 4-hour performance. England is the red line.

[Image: run chart of UK A&E 4-hour performance over time, England shown as the red line]

The chart shows that this is not a sudden change, it has been developing over quite a long period of time … so why does it feel like an unpleasant surprise?


One reason may be that NHS England is using performance management techniques that were out of date in the 1980s and are obsolete in the 2010s!

Let me show you what I mean. This is a snapshot from the NHS England Board Minutes for November 2016.

[Image: RAG risk-assessment table from the NHS England Board Minutes, November 2016]
RAG stands for Red-Amber-Green and what we want to see on a Risk Assessment is Green for the most important stuff like safety, flow, quality and affordability.

We are not seeing that.  We are seeing Red/Amber for all of them. It is an evolving catastrophe.

A risk RAG chart is an obsolete performance management tool.

Here is another snippet …

[Image: A&E performance table from the NHS England Board Minutes, November 2016]

This demonstrates the usual mix of single point aggregates for the most recent month (October 2016); an arbitrary target (4 hours) used as a threshold to decide failure/not failure; two-point comparisons (October 2016 versus October 2015); and a sprinkling of ratios. Not a single time-series chart in sight. No pictures that tell a story.

Click here for the full document (which does also include some very sensible plans to maintain hospital flow through the bank holiday period).

The risk of this way of presenting system performance data is that it is a minefield of intuitive traps for the unwary.  Invisible pitfalls that can lead to invalid conclusions, unwise decisions, potentially ineffective and/or counter-productive actions, and failure to improve. These methods are risky and that is why they should be obsolete.

And if NHSE is using obsolete tools then what hope do CCGs and Trusts have?


Much better tools have been designed.  Tools that are used by organisations that are innovative, resilient, commercially successful and that deliver safety, on-time delivery, quality and value for money. At the same time.

And these tools are obsolete outside the NHS because, in the competitive context of the dog-eat-dog real world, organisations do not survive if they do not innovate, improve and learn as fast as their competitors.  They do not have the luxury of being shielded from reality by a central tax-funded monopoly!

And please do not misinterpret my message here; I am a 100% raving fan of the NHS ethos of “available to all and free at the point of delivery” and an NHS that is funded centrally and fairly. That is not my issue.

My issue is the continued use of obsolete performance management tools in the NHS.


Q: So what are the alternatives? What do the successful commercial organisations use instead?

A: System behaviour charts.

SBCs are pictures of how the system is behaving over time – pictures that tell a story – pictures that have meaning – pictures that we can use to diagnose, design and deliver a better outcome than the one we are heading towards.

Pictures like the A&E performance-over-time chart above.

Click here for more on how and why.


Therefore, if the DoH, NHSE, NHSI, STPs, CCGs and Trust Boards want to achieve their stated visions and missions then the writing-on-the-wall says that they will need to muster some humility and learn how successful organisations do this.

This is not a comfortable message to hear and it is easier to be defensive than receptive.

The NHS has to change if it wants to survive and continue to serve the people who pay the salaries. And time is running out. Continuing as we are is not an option. Complaining and blaming are not options. Doing nothing is not an option.

Learning is the only option.

Anyone can learn to use system behaviour charts.  No one needs to rely on averages, two-point comparisons, ratios, targets, and the combination of failure-metrics and us-versus-them-benchmarking that leads to the chronic mediocrity trap.

And there is hope for those with enough hunger, humility and who are prepared to do the hard-work of developing their personal, team, department and organisational capability to use better management methods.


Apollo 13 is a true story.  The catastrophe was averted.  The astronauts were brought home safely.  The film retells the story of how that miracle was achieved. Perhaps watching the whole film would be somewhere to start, because it holds many valuable lessons for us all – lessons on how effective teams behave.

Pride and Joy

Have you heard the phrase “Pride comes before a fall”?

What does this mean? That the feeling of pride is the reason for the subsequent fall?

So by following that causal logic, if we do not allow ourselves to feel proud then we can avoid the fall?

And none of us like the feeling of falling and failing. We are fearful of that negative feeling, so with this simple trick we can avoid feeling bad. Yes?

But we all know the positive feeling of achievement – we feel pride when we have done good work, when our impact matches our intent.  Pride in our work.

Is that bad too?

Should we accept under-achievement and unexceptional mediocrity as the inevitable cost of avoiding the pain of possible failure?  Is that what we are being told to do here?


The phrase comes from the Bible, from the Book of Proverbs 16:18 to be precise.

[Image: Proverbs 16:18]

And the problem here is that the phrase “pride comes before a fall” is not the whole proverb.

It has been simplified. Some bits have been omitted. And those omissions lead to ambiguity and the opportunity for obfuscation and re-interpretation.

[Image: Proverbs 16:18, New International Version]
In the fuller New International Version we see a missing bit … the “haughty spirit” bit.  That is another way of saying “over-confident” or “arrogant”.


But even this “authorised” version is still ambiguous and more questions spring to mind:

Q1. What sort of pride are we referring to? Just the confidence version? What about the pride that follows achievement?

Q2. How would we know if our feeling of confidence is actually justified?

Q3. Does a feeling of confidence always precede a fall? Is that how we diagnose over-confidence? Retrospectively? Are there instances when we feel confident but we do not fail? Are there instances when we do not feel confident and then fail?

Q4. Does confidence cause the fall or is it just a temporal association? Is there something more fundamental that causes both high-confidence and low-competence?


There is a well known model called the Conscious-Competence model of learning which generates a sequence of four stages to achieving a new skill. Such as one we need to achieve our intended outcomes.

We all start in the “blissful ignorance” zone of unconscious incompetence.  Our unknowns are unknown to us.  They are blind spots.  So we feel unjustifiably confident.

[Image: the hierarchy of competence]

In this model the first barrier to progress is “wrong intuition” which means that we actually have unconscious assumptions that are distorting our perception of reality.

What we perceive makes sense to us. It is clear and obvious. We feel confident. We believe our own rhetoric.

But our unconscious assumptions can trick us into interpreting information incorrectly.  And if we derive decisions from unverified assumptions and invalid analysis then we may do the wrong thing and not achieve our intended outcome.  We may unintentionally cause ourselves to fail and not be aware of it.  But we are proud and confident.

Then the gap between our intent and our impact becomes visible to all and painful to us. So we are tempted to avoid the social pain of public failure by retreating behind the “Yes, But” smokescreen of defensive reasoning. The “doom loop” as it is sometimes called. The Victim Vortex. “Don’t name, shame and blame me, I was doing my best. I did not intend that to happen. To err is human”.


The good news is that this learning model also signposts a possible way out; a door in the black curtain of ignorance.  It suggests that we can learn how to correct our analysis by using feedback from reality to verify our rhetorical assumptions.  Those assumptions which pass the “reality check” we keep, those which fail the “reality check” we redesign and retest until they pass.  Bit by bit our inner rhetoric comes to more closely match reality and the wisdom of our decisions will improve.

And what we then see is improvement.  Our impact moves closer towards our intent. And we can justifiably feel proud of that achievement. We do not need to be best-compared-with-the-rest; just being better-than-we-were-before is OK. That is learning.

the_learning_curve

And this is how it feels … this is the Learning Curve … or the Nerve Curve as we call it.

What it says is that to be able to assess confidence we must also measure competence. Outcomes. Impact.

And to achieve excellence we have to be prepared to actively look for any gap between intent and impact.  And we have to be prepared to see it as an opportunity rather than as a threat. And we will need to be able to seek feedback and other people’s perspectives. And we need to be open to asking for examples and explanations from those who have demonstrated competence.

It says that confidence is not a trustworthy surrogate for competence.

It says that we want the confidence that flows from competence because that is the foundation of trust.

Improvement flows at the speed of trust and seeing competence, confidence and trust growing is a joyous thing.

Pride and Joy are OK.

“Arrogance and incompetence come before a fall” would be a better proverb.

Value, Verify and Validate

thinker_figure_unsolve_puzzle_150_wht_18309

Many of the challenges that we face in delivering effective and affordable health care do not have well understood and generally accepted solutions.

If they did there would be no discussion or debate about what to do and the results would speak for themselves.

This lack of understanding is leading us to try to solve a complicated system design challenge in our heads.  Intuitively.

And trying to do it this way is fraught with frustration and risk because our intuition tricks us. It was this sort of challenge that led Professor Rubik to invent his famous 3D Magic Cube puzzle.

It is difficult enough to learn how to solve the Magic Cube puzzle by trial and error; it is even more difficult to attempt to do it inside our heads! Intuitively.


And we know the Rubik Cube puzzle is solvable, so all we need are some techniques, tools and training to improve our Rubik Cube solving capability.  We can all learn how to do it.


Let us return to the challenge of safe and affordable health care, and to the specific problems of unscheduled care, A&E targets, delayed transfers of care (DTOC), finance, fragmentation and chronic frustration.

This is a systems engineering challenge so we need some systems engineering techniques, tools and training before attempting it.  Not after failing repeatedly.

se_vee_diagram

One technique that a systems engineer will use is called a Vee Diagram, such as the one shown above.  It shows the sequence of steps in the generic problem solving process, and it follows the same sequence that we use in medicine for solving the problems that patients present to us …

Diagnose, Design and Deliver

which is also known as …

Study, Plan, Do.


Notice that there are three words in the diagram that start with the letter V … value, verify and validate.  These are probably the three most important words in the vocabulary of a systems engineer.


One tool that a systems engineer always uses is a model of the system under consideration.

Models come in many forms from conceptual to physical and are used in two main ways:

  1. To assist the understanding of the past (diagnosis)
  2. To predict the behaviour in the future (prognosis)

And the process of creating a system model, the sequence of steps, is shown in the Vee Diagram.  The systems engineer’s objective is a validated model that can be trusted to make good-enough predictions; ones that support making wiser decisions of which design options to implement, and which not to.


So if a systems engineer presented us with a conceptual model that is intended to assist our understanding, then we will require some evidence that all stages of the Vee Diagram process have been completed.  Evidence that provides assurance that the model predictions can be trusted.  And the scope over which they can be trusted.


Last month a report was published by the Nuffield Trust that is entitled “Understanding patient flow in hospitals”  and it asserts that traffic flow on a motorway is a valid conceptual model of patient flow through a hospital.  Here is a direct quote from the second paragraph in the Executive Summary:

nuffield_report_01
Unfortunately, no evidence is provided in the report to support the validity of the statement and that omission should ring an alarm bell.

The observation that “the hospitals with the least free space struggle the most” is not a validation of the conceptual model.  Validation requires a concrete experiment.


To illustrate why observation is not validation let us consider a scenario where I have a headache and I take a paracetamol and my headache goes away.  I now have some evidence that shows a temporal association between what I did (take paracetamol) and what I got (a reduction in head pain).

But this is not a valid experiment because I have not considered the other seven possible combinations of headache before (Y/N), paracetamol (Y/N) and headache after (Y/N).
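The eight combinations are easy to make explicit. This minimal Python sketch enumerates the three binary observations and shows that the anecdote samples only one of them:

```python
from itertools import product

# Three binary observations: headache before (Y/N), paracetamol
# taken (Y/N), headache after (Y/N).  Enumerate every combination.
combos = list(product(["Y", "N"], repeat=3))
for before, took, after in combos:
    print(f"headache before={before}  paracetamol={took}  headache after={after}")

# The anecdote observes only one of the eight: (Y, Y, N).
assert len(combos) == 8
```

Without counts for the other seven cells (how often headaches resolve without paracetamol, for instance) no causal claim can be made.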

An association cannot be used to prove causation; not even a temporal association.

When I do not understand the cause, and I am without evidence from a well-designed experiment, then I might be tempted to intuitively jump to the (invalid) conclusion that “headaches are caused by lack of paracetamol!” and if untested this invalid judgement may persist and even become a belief.


Understanding causality requires an approach called counterfactual analysis; otherwise known as “What if?” And we can start that process with a thought experiment using our rhetorical model.  But we must remember that we must always validate the outcome with a real experiment. That is how good science works.

A famous thought experiment was conducted by Albert Einstein when he asked the question “If I were sitting on a light beam and moving at the speed of light what would I see?” This question led him to the Theory of Relativity which completely changed the way we now think about space and time.  Einstein’s model has been repeatedly validated by careful experiment, and has allowed engineers to design and deliver valuable tools such as the Global Positioning System which uses relativity theory to achieve high positional precision and accuracy.


So let us conduct a thought experiment to explore the ‘faster movement requires more space‘ statement in the case of patient flow in a hospital.

First, we need to define what we mean by the words we are using.

The phrase ‘faster movement’ is ambiguous.  Does it mean higher flow (more patients per day being admitted and discharged) or does it mean shorter length of stay (the interval between the admission and discharge events for individual patients)?

The phrase ‘more space’ is also ambiguous. In a hospital that implies physical space i.e. floor-space that may be occupied by corridors, chairs, cubicles, trolleys, and beds.  So are we actually referring to flow-space or storage-space?

What we have in this over-simplified statement is the conflation of two concepts: flow-capacity and space-capacity. They are different things. They have different units. And the result of conflating them is meaningless and confusing.


However, our stated goal is to improve understanding, so let us consider one combination, and let us be careful to be more precise with our terminology: “higher flow always requires more beds“. Does it? Can we disprove this assertion with an example where higher flow required fewer beds (i.e. less space-capacity)?

The relationship between flow and space-capacity is well understood.

The starting point is Little’s Law which was proven mathematically in 1961 by J.D.C. Little and it states:

Average work in progress = Average lead time  X  Average flow.

In the hospital context, work in progress is the number of occupied beds, lead time is the length of stay and flow is admissions or discharges per time interval (which must be the same on average over a long period of time).

(NB. Engineers are rather pedantic about units so let us check that this makes sense: the unit of WIP is ‘patients’, the unit of lead time is ‘days’, and the unit of flow is ‘patients per day’ so ‘patients’ = ‘days’ * ‘patients / day’. Correct. Verified. Tick.)

So, is there a situation where flow can increase and WIP can decrease? Yes. When lead time decreases. Little’s Law says that is possible. We have disproved the assertion.
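Little’s Law makes this counter-example easy to check numerically. The figures below are illustrative assumptions, not data from any real hospital:

```python
# Little's Law: average WIP = average lead time * average flow.
def occupied_beds(length_of_stay_days: float, flow_per_day: float) -> float:
    """Average work in progress, i.e. number of occupied beds."""
    return length_of_stay_days * flow_per_day

# Baseline: 50 admissions/day with a 6-day average stay.
baseline = occupied_beds(6.0, 50.0)   # 300 beds

# Higher flow (55/day) but a shorter stay (5 days).
improved = occupied_beds(5.0, 55.0)   # 275 beds

# Higher flow, yet fewer beds: the assertion is disproved.
assert improved < baseline
```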


Let us take the other interpretation of higher flow as shorter length of stay: i.e. shorter length of stay always requires more beds.  Is this correct? No. If flow remains the same then Little’s Law states that we will require fewer beds. This assertion is disproved as well.

And we need to remember that Little’s Law is proven to be valid for averages. Does that shed any light on the source of our confusion? Could the assertion about flow and beds actually be about the variation in flow over time and not about the average flow?


And this is also well understood. The original work on it was done almost exactly 100 years ago by Agner Krarup Erlang and the problem he looked at was the quality of customer service of the early telephone exchanges. Specifically, how likely was the caller to get the “all lines are busy, please try later” response.

What Erlang showed was that there is a mathematical relationship between the number of calls being made (the demand), the probability of a call being connected first time (the service quality) and the number of telephone circuits and switchboard operators available (the service cost).


So it appears that we already have a validated mathematical model that links flow, quality and cost that we might use if we substitute ‘patients’ for ‘calls’, ‘beds’ for ‘telephone circuits’, and ‘being connected’ for ‘being admitted’.
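Erlang’s result can be sketched in a few lines of code. The function below implements the classical Erlang-B blocking formula via its standard numerically stable recurrence; the numbers in the usage example are illustrative assumptions, not figures from the report:

```python
def erlang_b(offered_traffic: float, servers: int) -> float:
    """Erlang-B blocking probability: the chance that an arriving
    'call' (patient) finds all 'circuits' (beds) busy.
    Standard recurrence: B(E,0)=1; B(E,m) = E*B(E,m-1) / (m + E*B(E,m-1))."""
    b = 1.0
    for m in range(1, servers + 1):
        b = offered_traffic * b / (m + offered_traffic * b)
    return b

# Offered traffic = demand rate * average service time,
# e.g. 4 admissions/day * 5-day average stay = 20 Erlangs.
blocking_25_beds = erlang_b(20.0, 25)
blocking_30_beds = erlang_b(20.0, 30)

# More beds -> lower probability of 'all beds are busy'.
assert blocking_30_beds < blocking_25_beds
```

Note the non-linearity: near full utilisation a small change in bed numbers produces a large change in blocking probability, which is exactly the quality-cost trade-off Erlang quantified.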

And this topic of patient flow, A&E performance and Erlang queues has been explored already … here.

So a telephone exchange is a more valid model of a hospital than a motorway.

We are now making progress in deepening our understanding.


The use of an invalid, untested, conceptual model is sloppy systems engineering.

So if the engineering is sloppy we would be unwise to fully trust the conclusions.

And I share this feedback in the spirit of black box thinking because I believe that there are some valuable lessons to be learned here – by us all.


To vote for this topic please click here.
To subscribe to the blog newsletter please click here.
To email the author please click here.

Patient Traffic Engineering

motorway

[Beep] Bob’s computer alerted him to Leslie signing on to the Webex session.

<Bob> Good afternoon Leslie, how are you? It seems a long time since we last chatted.

<Leslie> Hi Bob. I am well and it has been a long time. If you remember, I had to loop out of the Health Care Systems Engineering training because I changed job, and it has taken me a while to bring a lot of fresh skeptics around to the idea of improvement-by-design.

<Bob> Good to hear, and I assume you did that by demonstrating what was possible by doing it, delivering results, and describing the approach.

<Leslie> Yup. And as you know, even with objective evidence of improvement it can take a while because that exposes another gap, the one between intent and impact.  Many people get rather defensive at that point, so I have had to take it slowly. Some people get really fired up though.

 <Bob> Yes. Respect, challenge, patience and persistence are all needed. So, where shall we pick up?

<Leslie> The old chestnut of winter pressures and A&E targets.  Except that it is an all-year problem now and according to what I read in the news, everyone is predicting a ‘melt-down’.

<Bob> Did you see last week’s IS blog on that very topic?

<Leslie> Yes, I did!  And that is what prompted me to contact you and to re-start my CHIPs coaching.  It was a real eye opener.  I liked the black swan code-named “RC9” story, it makes it sound like a James Bond film!

<Bob> I wonder how many people dug deeper into how “RC9” achieved that rock-steady A&E performance despite a rising tide of arrivals and admissions?

<Leslie> I did, and I saw several examples of anti-carve-out design.  I have read through my notes and we have talked about carve out many times.

<Bob> Excellent. Being able to see the signs of competent design is just as important as the symptoms of inept design. So, what shall we talk about?

<Leslie> Well, by coincidence I was sent a copy of a report entitled “Understanding patient flow in hospitals” published by one of the leading Think Tanks and I confess it made no sense to me.  Can we talk about that?

<Bob> OK. Can you describe the essence of the report for me?

<Leslie> Well, in a nutshell it said that flow needs space so if we want hospitals to flow better we need more space, in other words more beds.

<Bob> And what evidence was presented to support that hypothesis?

<Leslie> The authors equated the flow of patients through a hospital to the flow of traffic on a motorway. They presented a table of numbers that made no sense to me, I think partly because there are no units stated for some of the numbers … I’ll email you a picture.

traffic_flow_dynamics

<Bob> I agree this is not a very informative table.  I am not sure what the definition of “capacity” is here, and it may be that the authors are equating “hospital bed” with “area of tarmac”.  Anyway, the assertion that hospital flow is equivalent to motorway flow is inaccurate.  There are some similarities and traffic engineering is an interesting subject, but they are not equivalent.  A hospital is more like a busy city with junctions, cross-roads, traffic lights, roundabouts, zebra crossings, pelican crossings and all manner of unpredictable factors such as cyclists and pedestrians. Motorways are intentionally designed without these “impediments”, for obvious reasons! A complex adaptive flow system like a hospital cannot be equated to a motorway. It is a dangerous over-simplification.

<Leslie> So, if the hospital-motorway analogy is invalid then the conclusions are also invalid?

<Bob> Sometimes, by accident, we get a valid conclusion from an invalid method. What were the conclusions?

<Leslie> That the solution to improving A&E performance is more space (i.e. hospital beds) but there is no more money to build them or people to staff them.  So the recommendations are to reduce volume, redesign rehabilitation and discharge processes, and improve IT systems.

<Bob> So just re-iterating the habitual exhortations and nothing about using well-understood systems engineering methods to accurately diagnose the actual root cause of the ‘symptoms’, which is likely to be the endemic carveoutosis multiforme, and then treat accordingly?

<Leslie> No. I could not find the term “carve out” anywhere in the document.

<Bob> Oh dear.  Based on that observation, I do not believe this latest Think Tank report is going to be any more effective than the previous ones.  Perhaps asking “RC9” to write an account of what they did and how they learned to do it would be more informative?  They did not reduce volume, and I doubt they opened more beds, and their annual report suggests they identified some space and flow carveoutosis and treated it. That is what a competent systems engineer would do.

<Leslie> Thanks Bob. Very helpful as always. What is my next step?

<Bob> Some ISP-2 brain-teasers, a juicy ISP-2 project, and some one day training workshops for your all-fired-up CHIPs.

<Leslie> Bring it on!



Socrates the Improvement Coach

One of the challenges involved in learning the science of improvement, is to be able to examine our own beliefs.

We need to do that to identify the invalid assumptions that lead us to make poor decisions, and to act in ways that push us off the path to our intended outcome.

Over two thousand years ago, a Greek philosopher developed a way of exposing invalid assumptions.  He was called Socrates.

The Socratic method involves a series of questions that are posed to help a person or group to determine their underlying beliefs and the extent of their knowledge.  It is a way to develop better hypotheses by steadily identifying and eliminating those that lead to contradictions.

Socrates designed his method to force one to examine one’s own beliefs and the validity of such beliefs.


That skill is as valuable today as it was then, and is especially valuable when we explore complex subjects,  such as improving the performance of our health and social care system.

Our current approach is called reactive improvement – and we are reacting to failure.

Reactive improvement zealots seem obsessed with getting away from failure, disappointment, frustration, fear, waste, variation, errors, cost etc. in the belief that what remains after the dross has been removed is the good stuff. The golden nuggets.

And there is nothing wrong with that.

It has a couple of downsides though:

  1. Removing dross leaves holes that all too easily fill up with different dross!
  2. Reactive improvement needs a big enough problem to drive it.  A crisis!

The implication is that reactive improvement grinds to a halt as the pressure is relieved and as it becomes mired in a different form of bureaucratic dross … the Quality Control Inspectorate!

No wonder we feel as if we are trapped in a perpetual state of chronic and chaotic mediocrity.


Creative improvement is, as the name suggests, focused on creating something that we want in the future.  Something like a health and social care system that is safe, calm, fit-4-purpose, and affordable.

Creative improvement does not need a problem to get started. A compelling vision and a choice to make-it-so is enough.

Creative improvement does not fizzle out as soon as we improve… because our future vision is always there to pull us forward.  And the more we practice creative improvement, the better we get, the more progress we make, and the stronger the pull becomes.


The main thing that blocks us from using creative improvement are our invalid, unconscious beliefs and assumptions about what is preventing us achieving our vision now.

So we need a way to examine our beliefs and assumptions in a disciplined and robust way, and that is the legacy that Socrates left us.



Crash Test Dummy

CrashTestDummy

There are two complementary approaches to safety and quality improvement: desire and design.

In the improvement-by-desire world we use a suck-it-and-see approach to fix a problem.

 It is called PDSA. Plan-Do-Study-Act.

Sometimes this works and we pat ourselves on the back, and remember the learning for future use.

Sometimes it works for us but has a side effect: it creates a problem for someone else.  And we may not be aware of the unintended consequence unless someone shouts “Oi!” It may be too late by then of course.

Sometimes it doesn’t work.  And we have to just suck it up, remind ourselves  to “learn to fail or fail to learn”, and get back on the horse.


The more parts in a system, and the more interconnected they are, the more likely it is that a well-intended suck-it-and-see change will fail completely or create an unintended negative impact.

And after we have experienced that disappointment a few times our learned behaviour is to … do nothing … and to put up with the problems.  It seems the safest option.


In the improvement-by-design world we choose to study first, and to find the causal roots of the system behaviour we are seeing.  Our first objective is a causal diagnosis.

With that we can propose rational design changes that we anticipate will deliver the improvement we seek without creating adverse side effects.

And we have learned the hard way that our intuition can trick us … so we need a way to test our proposed designs … in a safe, and controlled, and measured way.

We need a crash test dummy!


What they do is to deliberately experience our design in a controlled experiment, and what they generate for us is constructive, objective and subjective feedback. What did work, and what did not.

A crash test dummy is both tough and sensitive at the same time.  They do not break easily and yet they feel the pain and gain too.  They are robust and resilient.


And with their feedback we can re-visit our design and improve it further, or we can use it to offer evidence-based assurance that our design is fit-for-purpose.

Safety and Quality Assurance is improvement-by-design.

Safety and Quality Control is improvement-by-desire.

If you were a passenger or a patient … which option would you prefer?

PS. It is possible to have both.

Fragmentation Cost

figure_falling_with_arrow_17621

The late Russell Ackoff used to tell a great story. It goes like this:

“A team set themselves the stretch goal of building the World’s Best Car.  So they put their heads together and came up with a plan.

First they talked to drivers and drew up a list of all the things that the World’s Best Car would need to have. Safety, speed, low fuel consumption, comfort, good looks, low emissions and so on.

Then they drew up a list of all the components that go into building a car. The engine, the wheels, the bodywork, the seats, and so on.

Then they set out on a quest … to search the world for the best components … and to bring the best one of each back.

Then they could build the World’s Best Car.

Or could they?

No.  All they built was a pile of incompatible parts. The WBC did not work. It was a futile exercise.


Then the penny dropped. The features in their wish-list were not associated with any of the separate parts. Their desired performance emerged from the way the parts worked together. The working relationships between the parts were as necessary as the parts themselves.

And a pile of average parts that work together will deliver a better performance than a pile of best parts that do not.

So the relationships were more important than the parts!


From this they learned that the quickest, easiest and cheapest way to degrade performance is to make working-well-together a bit more difficult.  Irrespective of the quality of the parts.


Q: So how do we reverse this degradation of performance?

A: Add more failure-avoidance targets of course!

But we just discovered that performance is the effect of how well the parts work together.  Will another failure-metric-fueled performance target help? How will each part know what it needs to do differently – if anything?  How will each part know if the changes they have made are having the intended impact?

Fragmentation has a cost.  Fear, frustration, futility and ultimately financial failure.

So if performance is fading … the quality of the working relationships is a good place to look for opportunities for improvement.

Early Warning System

radar_screen_anim_300_clr_11649

The most useful tool that a busy operational manager can have is a reliable and responsive early warning system (EWS).

One that alerts when something is changing and that, if missed or ignored, will cause a big headache in the future.

Rather like the radar system on an aircraft that beeps if something else is approaching … like another aircraft or the ground!


Operational managers are responsible for delivering stuff on time.  So they need a radar that tells them if they are going to deliver-on-time … or not.

And their on-time-delivery EWS needs to alert them soon enough that they have time to diagnose the ‘threat’, design effective plans to avoid it, decide which plan to use, and deliver it.

So what might an effective EWS for a busy operational manager look like?

  1. It needs to be reliable. No missed threats or false alarms.
  2. It needs to be visible. No tomes of text and tables of numbers.
  3. It needs to be simple. Easy to learn and quick to use.

And what is on offer at the moment?

The RAG Chart
This is a table that is coloured red, amber and green. Red means ‘failing’, green means ‘not failing’ and amber means ‘not sure’.  So this meets the specification of visible and simple, but is it reliable?

It appears not.  RAG charts do not appear to have helped to solve the problem.

A RAG chart is generated using historic data … so it tells us where we are now, not how we got here, where we are going or what else is heading our way.  It is a snapshot. One frame from the movie.  Better than complete blindness perhaps, but not much.

The SPC Chart
This is a statistical process control chart and is a more complicated beast.  It is a chart of how some measure of performance has changed over time in the past.  So, like the RAG chart, it is generated using historic data.  The advantage is that it is not just a snapshot of where we are now; it is a picture of the story of how we got here, so it offers the promise of pointing to where we may be heading.  It meets the specification of visible, and while more complicated than a RAG chart, it is relatively easy to learn and quick to use.

Luton_A&E_4Hr_Yield

Here is an example. It is the SPC chart of the monthly A&E 4-hour target yield performance of an acute NHS Trust.  The blue lines are the ‘required’ range (95% to 100%), the green line is the average and the red lines are a measure of variation over time.  What this chart says is: “This hospital’s A&E 4-hour target yield performance is currently acceptable, has been so since April 2012, and is improving over time.”

So that is much more helpful than a RAG chart (which in this case would have been green every month because the average was above the minimum acceptable level).
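For the curious, the red lines on a chart like this are commonly derived using the XmR (individuals) method: natural process limits at the mean plus or minus 2.66 times the average moving range. Here is a minimal sketch; the yield figures are made up for illustration, not taken from the chart above:

```python
def xmr_limits(values):
    """Natural process limits for an XmR (individuals) chart:
    centre line = mean; limits = mean +/- 2.66 * average moving range."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean, mean + 2.66 * avg_mr

# Illustrative monthly 4-hour yield figures (%), not real data.
lcl, centre, ucl = xmr_limits([96.1, 97.0, 95.8, 96.5, 97.2, 96.0])
assert lcl < centre < ucl
```

Points falling outside the limits, or runs on one side of the centre line, are the ‘assignable cause signals’ that prompt investigation.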


So why haven’t SPC charts replaced RAG charts in every NHS Trust Board Report?

Could there be a fly-in-the-ointment?

The answer is “Yes” … there is.

SPC charts are a quality audit tool.  They were designed nearly 100 years ago for monitoring the output quality of a process that is already delivering to specification (like the one above).  They are designed to alert the operator to early signals of deterioration, called ‘assignable cause signals’, and they prompt the operator to pay closer attention and to investigate plausible causes.

SPC charts are not designed for predicting if there is a flow problem looming over the horizon.  They are not designed for flow metrics that exhibit expected cyclical patterns.  They are not designed for monitoring metrics that have very skewed distributions (such as length of stay).  They are not designed for metrics where small shifts generate big cumulative effects.  They are not designed for metrics that change more slowly than the frequency of measurement.

And these are exactly the sorts of metrics that a busy operational manager needs to monitor, in reality, and in real-time.

Demand and activity both show strong cyclical patterns.

Lead-times (e.g. length of stay) are often very skewed by variation in case-mix and task-priority.

Waiting lists are like bank accounts … they show the cumulative sum of the difference between inflow and outflow.  That simple fact invalidates the use of the SPC chart.

Small shifts in demand, activity, income and expenditure can lead to big cumulative effects.
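The bank-account behaviour is easy to demonstrate. In this illustrative sketch (the weekly figures are assumptions, chosen only to show the principle), a persistent 5% mismatch between inflow and outflow compounds into a large backlog within a year:

```python
# A waiting list behaves like a bank account: it is the cumulative
# sum of (inflow - outflow), floored at zero.
weekly_inflow = [105] * 52   # referrals per week (illustrative)
weekly_outflow = [100] * 52  # patients treated per week (illustrative)

backlog = 0
history = []
for added, removed in zip(weekly_inflow, weekly_outflow):
    backlog = max(0, backlog + added - removed)
    history.append(backlog)

# Just 5 extra patients per week compounds to 260 after a year.
assert history[-1] == 260
```

An SPC chart of the weekly inflow or outflow alone would show nothing alarming; only the cumulative difference reveals the looming problem.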

So if we abandon our RAG charts and we replace them with SPC charts … then we climb out of the RAG frying pan and fall into the SPC fire.

Oops!  No wonder the operational managers and financial controllers have not embraced SPC.


So is there an alternative that works better?  A more reliable EWS that busy operational managers and financial controllers can use?

Yes, there is, and here is a clue …

… but tread carefully …

… building one of these Flow-Productivity Early Warning Systems is not as obvious as it might first appear.  There are counter-intuitive traps for the unwary and the untrained.

You may need the assistance of a health care systems engineer (HCSE).

Precious Life Time

stick_figure_help_button_150_wht_9911

Imagine this scenario:

You develop some non-specific symptoms.

You see your GP who refers you urgently to a 2 week clinic.

You are seen, assessed, investigated and informed that … you have cancer!


The shock, denial, anger, blame, bargaining, depression, acceptance sequence kicks off … it is sometimes called the Kübler-Ross grief reaction … and it is a normal part of the human psyche.

But there is better news. You also learn that your condition is probably treatable, but that it will require chemotherapy, and that there are no guarantees of success.

You know that time is of the essence … the cancer is growing.

And time has a new relevance for you … it is called life time … and you know that you may not have as much left as you had hoped.  Every hour is precious.


So now imagine your reaction when you attend your local chemotherapy day unit (CDU) for your first dose of chemotherapy and have to wait four hours for the toxic but potentially life-saving drugs.

They are very expensive and they have a short shelf-life so the NHS cannot afford to waste any.   The Aseptic Unit team wait until all the safety checks are OK before they proceed to prepare your chemotherapy.  That all takes time, about four hours.

Once the team get to know you it will go quicker. Hopefully.

It doesn’t.

The delays are not the result of unfamiliarity … they are the result of the design of the process.

All your fellow patients seem to suffer repeated waiting too, and you learn that they have been doing so for a long time.  That seems to be the way it is.  The waiting room is well used.

Everyone seems resigned to the belief that this is the best it can be.

They are not happy about it but they feel powerless to do anything.


Then one day someone demonstrates that it is not the best it can be.

It can be better.  A lot better!

And they demonstrate that this better way can be designed.

And they demonstrate that they can learn how to design this better way.

And they demonstrate what happens when they apply their new learning …

… by doing it and by sharing their story of “what-we-did-and-how-we-did-it“.

CDU_Waiting_Room

If life time is so precious, why waste it?

And perhaps the most surprising outcome was that their safer, quicker, calmer design was also 20% more productive.