Sting in the Tail

Monday 19th July 2021 was the official end of COVID-19 restrictions in England – yet the number of positive tests, hospital admissions and deaths is rising. “How can that make any sense!” wail the doom mongers. Is it irresponsible? Are we destined for a deadly third wave? Is a nasty sting in the tail on the way?

To address these questions we need to step back and look at the bigger picture.

As we have seen, the evolution of the COVID-19 pandemic has been tricky to predict because the virus and the host have been co-evolving. The host has implemented social distancing and developed vaccines to attenuate the viral spread and illness severity. The virus has mutated and more contagious variants have emerged as the dominant players.

And trying to work out how all these factors combine is beyond the computational ability of the 1.4 kg of chimpware between our ears. Our intuition is confounded by the counter-intuitive complexity. We need help.

Here is the published data … the orange line is the daily reported positive COVID tests and the red dotted line is the daily reported COVID deaths. There is a clear temporal association but the sizes of the peaks don’t seem to make sense – even when we note that the test and death lines are plotted on very different scales.

One problem here is that the number of positive tests reported is very dependent on the testing process. In the first wave only hospital admissions were tested; in the second wave there was much more community-based testing of symptomatic people; and now many people are self-testing regularly to provide evidence of wellness.

The only way to unravel this Gordian Knot of interacting influences is to use the data to build and calibrate a causal structure model (CSM). Conventional statistical analysis is not up to the job because it conflates association and causation. We need something which is able to provide a diagnosis and a prognosis. Something that can use the past to help predict the future.

The blue line in the chart below is the output of a CSM that has been designed using proven principles of epidemic dynamics, and calibrated using historical data. And it predicts that there is indeed a third wave underway and that it is minor in comparison with the first two in terms of the predicted mortality.

The emergence of a third wave is the combined effect of four things:
a) The relaxing of social distancing rules.
b) The emergence and spread of more contagious variants of the virus.
c) The known fact that the vaccine is not 100% effective.
d) The known fact that immunity after illness or vaccination will wane with time.
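The way these four factors enter a compartment model can be sketched in a few lines. This is a minimal, illustrative Python sketch – it is not the calibrated CSM, and every number in it (transmission rates, dose rate, vaccine effectiveness, waning rate, delay times) is a hypothetical placeholder chosen only to show where each factor acts:

```python
# Minimal SEIR-style daily-step sketch of how the four factors combine.
# All parameter values are hypothetical illustrations, NOT the calibrated CSM.

def simulate(days=600, pop=66_000_000):
    S, E, I, R = pop - 1000.0, 0.0, 1000.0, 0.0
    infectious = []
    for t in range(days):
        # (a) + (b): relaxed distancing and a more contagious variant
        # both act by raising the transmission rate
        beta = 0.25 if t < 400 else 0.45
        # (c): a 75%-effective vaccine removes only 75% of each day's
        # vaccinees from the susceptible pool (programme starts day 300)
        if t >= 300:
            doses = min(S, 300_000.0)
            S -= doses * 0.75
            R += doses * 0.75
        # (d): waning immunity drifts a small fraction of R back to S
        waned = R * 0.001
        R -= waned
        S += waned
        new_E = beta * S * I / pop   # new exposures today
        new_I = E / 3.0              # ~3-day latent period
        new_R = I / 7.0              # ~7-day infectious period
        S -= new_E
        E += new_E - new_I
        I += new_I - new_R
        R += new_R
        infectious.append(I)
    return infectious

curve = simulate()
```

The point of the sketch is structural: (a) and (b) push the transmission rate up, while (c) and (d) mean the susceptible pool is never fully emptied, so relaxing restrictions can re-ignite spread even after mass vaccination.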

One use of a CSM is to conduct counterfactual analysis which helps us to deepen our understanding of how complex systems behave. These are called “What would have happened if?” experiments.

One such experiment is “What would have happened if the vaccine was completely effective?”

Here is the CSM prediction for a 100% effective vaccine: The first and second waves were the same because the vaccination programme did not start until the peak of the second wave – and there is no third wave even with complete relaxation of social distancing.

But the actual data disproves this causal hypothesis because there is a third wave developing.


So, here is the CSM prediction for a 0% effective vaccine: The first and second waves are largely unchanged and now we have a third wave as bad as the second. A nasty sting in the tail.

But then the epidemic fizzles out because all the host “fuel” of susceptible people has been used up.


Setting the aggregate effectiveness of the vaccine to 75% gives us the best fit to the historical data; and that value is consistent with the pilot studies of vaccine effectiveness.

And what is the most useful evidence that suggests this latest prediction is reliable? It is that the infection rate is predicted to be falling already, despite distancing rules being relaxed, and that is what the data is showing.

And with this re-calibrated CSM we can estimate the impact of the vaccination programme in terms of lives saved … and it comes out at about 40,000 people! That is a lot.

So what next?

Well, we know that immunity will wane with time, and we know that new viral variants will emerge, and we know that coronavirus will be with us for the foreseeable future at a background level.

And we have seen how this pandemic has exposed the vulnerabilities of our current socioeconomic systems – health and social care, education, transport, communication, commerce and so on. Every part of the system has been affected because everything is interconnected.

We cannot just go back to business as usual. The world has been changed. And our immediate challenge is to redesign and rebuild a health care system that is safer, more efficient and more agile and that will serve us better in the future.

Another lesson learned is just how useful systems engineering theory, tools and techniques have been – the CSM demonstrated above is a standard systems engineering technique.

So, we will need some more health care systems engineers. A lot more. And they will need to be embedded at all levels in the NHS as an integral part of the system.

A self-healing health care system.

Emergence

The last year has been dominated by one theme – the SARS-CoV-2 global pandemic. It has been a roller coaster ride of ups and downs and twists and turns, often in darkness and accompanied by the baleful drone of doom-mongers and naysayers. But there have also been bright flashes of insight that have illuminated the way and surges of innovation that have carved new designs out of old paradigms.

What we are experiencing is the evolution of a complex adaptive system and what we are seeing is the emergence of a new normal.

Almost nothing will be the same again.

The diagram above tells many interwoven story threads that cannot be untangled. Two Chapters are complete – CRC and UTC. We are just starting Chapter 3.

The first Thread of Tragedy is shown by the red dotted line. It is the number of daily COVID-19 associated deaths reported in the UK. The total stands at just over 127,000 which is more than enough to fill the whole of Wembley Stadium. And a lot more.

The solid red line on the diagram is the result of removing the 7-day oscillation caused by the reporting process which opts to take weekends off.

COVID-19 is busy 24 x 7.

The first reported COVID-19 death in the UK was in the first week of March 2020. The WHO declared a global pandemic the following week, and the UK implemented the first part of a national lock down the week after. It closed some pubs in London. The need for speed was because hospital admissions and deaths were growing exponentially. As the chart shows – deaths were doubling every few days.

The Chancellor’s Magic Blank Chequebook appeared and several Nightingales were rapidly assembled to absorb the predicted storm surge. However, critically ill patients require specialised equipment and highly trained staff – and those necessities were already in short supply. As was the personal protective equipment (PPE) the front-line staff needed to keep them safe.

The Nightingales were never going to be able to sing. It was a doomed design from Day #1.

The bigger problem was the millions of potentially infectious people who would get poorly but not unwell enough to go to hospital. What was the national plan for them? It seemed that there wasn’t one. So, we created our own. The COVID Referral Centre. CRC.

This was Chapter One and the story of that has already been shared here.

The CRC was an innovative drive-thru design and a temporary solution that was conceived, commissioned, constructed and opened in 3 weeks (the red box at the top of the first diagram). It worked as designed and it was disassembled, as planned, at the predicted end of the First Wave (the orange box at the top of the first diagram).

What happened next is even more interesting. We had demonstrated, by doing it, that a drive-thru design was feasible and now we had a new challenge. Most of the elective and urgent services had been mothballed to free up space and staff to fight the First Wave. And we had no clear picture what would happen if lock down restrictions were released. The Nightingales were held in readiness. An expensive and ineffective insurance policy.

Could the drive-thru design be used for a handful of small, temporary urgent treatment centres (UTCs)?

A key lesson from the CRC was the critical importance of managing the inflow to avoid a traffic jam of anxious and potentially very poorly people. We solved this using an electronic triage and referral app that was rapidly designed, developed and delivered for the opening of the CRC. Doing that took a whole week using the JEDI method (Just ‘Effing Do It) also known as Agile.

By August 2020 things were getting back to sort-of-normal. People were having summer holidays. Schools and universities were concocting elaborate plans to re-open in the autumn. And we were thinking ahead to Winter 2020 and the prospect of seasonal flu on top of a possible resurgence of COVID. The much-feared Second Wave.

So, just before the CRC was decommissioned we took the opportunity to measure how many people could be vaccinated in an innovative drive-thru compared with a conventional walk-in. An important constraint was we did not want queues of vulnerable elderly people inside or outside. This time we had the luxury of being able to map and measure the process properly and it revealed that the drive-thru option was feasible.

We now had the information we needed to design a high efficiency flow scheduler which would set the rate at which patients could arrive without causing queues and chaos, and at the same time make good use of the available and valuable resources.

The next design question we had to answer was “How will the booking be done?” and the immediate answers offered were “on-line” and “by the patient”.

But, this was not how the CRC worked. In that service the patient had to speak to a GP who assessed their symptoms and, if deemed necessary, referred them electronically to the CRC for a face-2-face assessment. The e-referral app was designed to limit the number of referrals to prevent a traffic jam and it also automatically assigned the next available free slot to make best use of the resources. There was no patient choice.

The other question that spun out of this exercise was “If patients could book their own appointments for a routine flu jab then could they refer themselves to a drive-thru urgent treatment centre?”

Now we were shaking the trees a bit too hard. The general consensus was “No”. But why not? Surely the patient is best placed to decide how urgent they feel their problem is? And anyway, an online self-referral can be quickly screened and any inappropriate ones addressed proactively. It is probably a better design than a walk-in service.

So, we decided to design a prototype online self-referral system and we looked on the Web for ideas that solved a similar “niggle” of being able to provide convenient 24 x 7 online access to a traditional face-to-face 9-5 Mon-Fri service. Rather like the niggle of trying to get an urgent appointment at your GP practice. Or the niggle of finding an increasingly rare Post Office to go to and to get the right postage stamps for an urgent big letter / small parcel.

We discovered that the postage stamp niggle had been solved with an online app for a pay-and-print-postage-label. So, that gave us a validated design to start from.

All this digital innovation was going on during the Blue Period on the first diagram, along with the planning of a cluster of small, temporary, drive-thru UTCs placed in more convenient locations for patients. And by the time the whole caboodle was ready-to-roll it was apparent that the feared Second Wave was building momentum.

The drive-thru UTC service opened its gates in early October 2020 and only four weeks later the nation was commanded to lock down for a second time. The return of pupils to schools and students to university had created the perfect COVID incubator and the emergence of a hyper-contagious Mutant. The first diagram shows when the ‘fire-break’ lock down was eased, and when the Mutant exploded out of its cage, wiped out Christmas and doubled the UK death toll.

But, the drive-thru UTCs weathered the winter storms – figuratively and literally. They valiantly delivered a much needed service while the hospitals were swamped with a third tsunami of the critically ill. The NHS was better prepared this time, which is just as well because the Third Wave was much bigger than the First.

And the data the UTCs collected themselves showed that the prototype self-referral app worked as designed. We have seen gradual adoption over the seven months since it was first piloted (see below). The day-to-day variation is not random. The weekly spikes on the chart coincide with weekends when GP practices are shut and A&Es are busy dealing with accidents and emergencies (not anyone and everything).

So what does the future hold?

When COVID is just a bad/sad memory and the NHS is grappling with the elephantine challenge of post-COVID recovery amidst yet another re-disorganisation, would a more permanent drive-thru urgent care service be a viable service delivery option?

Based on the hard evidence shown I would say “Yes”.

Necessity is the Mother of Invention.

Engineers Design Things to be Fit-for-Purpose.

One Year On

This is a picture that tells a story. In fact, it is a picture of millions of stories. Some tragic. Some heroic. Most neither. This is a story of a system adapting to an unexpected and deadly challenge. Over 125,000 souls have been lost. Much has been learned. We cannot return to what was before. The world has changed.

There are three lines on this chart.

The dotted red line is the daily reported deaths, and the obvious pattern is the weekly oscillation. This is caused by the fact that for two days of the week many people do not sit at their computers processing data. These are called weekends. So, they have to catch up with the data backlog when they return to work on Monday.

The solid red line illustrates what actually happened … the actual number of souls lost per day … peaking at over 1000 in January 2021. The ups and downs show the effect of three drastic interventions to limit the spread of a merciless virus that was mutating, evolving and competing with itself to spread faster.

This is a picture of a system learning how the Universe works – the hard, painful way.

The blue line is a prediction of how many souls would be lost, and it is surprisingly accurate. The blue line was generated by a computer. Not a multi-million pound supercomputer like the ones used to predict the weather – but a laptop like those millions of people use every day. And the reason the prediction is so accurate is because epidemics follow simple mathematical rules – and these rules were worked out about 100 years ago.

The tricky bit is turning these simple mathematical formulae into an accurate prediction … in our heads … intuitively. And the reason it is so tricky is because our brains have not evolved to do that. It is not a matter of lack of intelligence … it is just that a human brain is the wrong tool for that job.

But, what our brains are superbly evolved to do is conceptualise, innovate and collaborate to create tools like computers and Excel spreadsheets.
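For the curious, here is roughly what such a tool looks like. The century-old rules referred to above are the Kermack–McKendrick SIR equations, and a laptop (or a spreadsheet) needs only a few lines of repeated arithmetic to apply them. The parameters below are illustrative made-up numbers, not the ones behind the blue line:

```python
# A spreadsheet-sized SIR epidemic model: the same three-line update,
# repeated day by day. Parameters are illustrative, not UK-calibrated.

def sir(beta=0.3, gamma=1/7, days=365, pop=1_000_000, i0=10):
    S, I, R = pop - i0, float(i0), 0.0
    peak = 0.0
    for _ in range(days):
        new_inf = beta * S * I / pop   # new infections today
        new_rec = gamma * I            # recoveries today
        S -= new_inf
        I += new_inf - new_rec
        R += new_rec
        peak = max(peak, I)
    return S, I, R, peak

S, I, R, peak = sir()
# the wave grows, peaks when susceptibles are depleted, then burns out
```

The surprising accuracy comes from the structure of the rules, not from computational brute force – which is why a laptop is enough.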

And many have said that in one year we have achieved ten years’ worth of innovation. We had to. Our lives depended on it.

So, now we have seen what is possible with a burning platform pushing us. How about we keep going with burning ambition pulling us to innovate and improve further?

Our lives and livelihoods will depend on it.

End In Sight

We are a month into Lock-down III.

Is there any light at the end of the tunnel?

Here is the reported UK data.  As feared the Third Wave was worse than the First and the Second, and the cumulative mortality has exceeded 100,000 souls.  But the precipitous fall in reported positive tests is encouraging and it looks like the mortality curve is also turning the corner.

The worst is over.

So, was this turnaround caused by Lock-down III?

It is not possible to say for sure from this data.  We would need a No Lock-down randomised control group to keep the statistical purists happy and we could not do that.

Is there another way?

Yes, there is.  It is called a digital twin.  The basic idea is we design, build, verify and calibrate a digital simulation model of the system that we are interested in, and use that to explore cause-and-effect hypotheses.  Here is an example: The solid orange line in the chart above (daily reported positive tests) is closely related to the dotted grey line in the chart below (predicted daily prevalence of infectious people).  Note the almost identical temporal pattern and be aware that in the first wave we only reported positive tests of patients admitted to hospital.

What does our digital twin say was the cause?

It says that the primary cause of the fall in daily prevalence of infectious people is because the number of susceptible people (the solid blue line) has fallen to a low enough level for the epidemic to fizzle out on its own.  Without any more help from us.

And it says that Lock-down III has contributed a bit by flattening and lowering the peak of infections, admissions and deaths.

And it says that the vaccination programme has not contributed to the measured fall in prevalence.

What are the implications if our digital twin is speaking the truth?

Firstly, that the epidemic is already self-terminating.
Secondly, that the restrictions will not be needed after the end of February.
Thirdly, that a mass vaccination programme is a belt-and-braces insurance policy.

I would say that is all good news.  The light at the end of the tunnel would appear to be in sight.

No Queue Vaccination

Vaccinating millions of vulnerable people in the middle of winter requires a safe, efficient and effective process.

It is not safe to have queues of people waiting outside in the freezing cold.  It is not safe to have queues of people packed into an indoor waiting area.

It is not safe to have queues full stop.

And let us face it, the NHS is not brilliant at avoiding queues.

My experience is that the commonest cause of queues in health care processes is something called the Flaw of Averages.

This is where patients are booked to arrive at an interval equal to the average time it takes to process each one.

For example, suppose I can complete 15 vaccinations in an hour … that is one every 4 minutes on average … so common sense tells me that the optimum way to book patients for their jab is one every four minutes.  Yes?

Actually, No.  That is the perfect design for generating a queue – and the reason is that, in reality, patients don’t arrive exactly on time, and they don’t arrive at exactly one every four minutes, and there will be variation in exactly how long it takes me to do each jab, and unexpected things will happen.  In short, there are lots of sources of variation.  Some random and some not.  And just that variation is enough to generate a predictably unpredictable queue.  A chaotic queue.

The Laws of Physics decree it.
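A few lines of simulation are enough to see the Flaw of Averages at work. This sketch (all figures hypothetical) books punctual patients at a fixed interval into a single vaccination lane with variable jab times, and measures the average wait:

```python
# Single-lane booking simulation: jab times vary around a 4-minute
# average; patients are booked 'interval' minutes apart. Hypothetical
# figures, for illustrating the Flaw of Averages only.

import random

def average_wait(interval, n=200, seed=1):
    """Average wait (minutes) when patients are booked 'interval' apart."""
    random.seed(seed)
    server_free = 0.0
    total_wait = 0.0
    for k in range(n):
        arrival = k * interval               # punctual booked arrivals
        service = random.uniform(2.0, 6.0)   # jab time: 4 min on average
        start = max(arrival, server_free)    # wait if the lane is busy
        total_wait += start - arrival
        server_free = start + service
    return total_wait / n

print(average_wait(4.0))   # booked at the average rate: waits build up
print(average_wait(4.6))   # ~15% headroom: waits almost disappear
```

Even with perfectly punctual arrivals, booking at the 4-minute average lets the variation in jab times accumulate into a queue; a little headroom absorbs it. Real clinics have more sources of variation than this sketch, not fewer.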


So, to illustrate the principles of creating a No Queue design here are some videos of a simulated mass vaccination process.

The process is quite simple – there are three steps that every patient must complete in sequence:

1) Pre-Jab Safety Check – Covid Symptoms + Identity + Clinical Check.
2) The Jab.
3) Post-Jab Safety Check (15 minutes of observation … just-in-case).

And the simplest layout of a sequential process is a linear one with the three steps in sequence.

So, let’s see what happens.

Notice where the queue develops … this tells us that we have a flow design problem.  A queue is a signpost that points to the cause.

The first step is to create a “balanced load, resilient flow” design.

Hurrah! The upstream queue has disappeared and we finish earlier.  The time from starting to finishing is called the makespan and the shorter this is, the more efficient the design.
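The makespan effect of balancing the load can be seen with a deterministic back-of-envelope model. For a sequential line with one resource per step and fixed step times, the last of n patients finishes at roughly the sum of the step times plus (n - 1) times the slowest step, so the bottleneck step paces the whole line. The step times below are hypothetical, not the ones in the videos:

```python
# Makespan of a sequential flow line with one resource per step and
# deterministic step times (hypothetical numbers).

def makespan(step_times, n):
    """Time for n patients to clear the line: the slowest step paces it."""
    return sum(step_times) + (n - 1) * max(step_times)

unbalanced = makespan([2, 6, 4], n=100)   # 6-minute bottleneck -> 606 min
balanced = makespan([4, 4, 4], n=100)     # same total work, balanced -> 408 min
```

Same total work per patient, yet the balanced design finishes a third sooner – which is why balancing the load shortens the makespan without adding any resources.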

OK. Let’s scale up and have multiple, parallel, balanced-load lanes running with an upstream FIFO (first-in-first-out) buffer and a round-robin stream allocation policy (the sorting hat in the video).  Oh, and can we see some process performance metrics too please.

Good, still no queues.  We are making progress.  Only problem is our average utilisation is less than 90% and The Accountants won’t be happy with that.  Also, the Staff are grumbling that they don’t get rest breaks.

Right, let’s add a Flow Coordinator to help move things along quicker and hit that optimum 100% utilisation target that The Accountants desire.

Oh dear!  Adding a Flow Coordinator seems to have made queues worse rather than better; and we’ve increased costs so The Accountants will be even less happy.  And the Staff are still grumbling because they still don’t get any regular rest breaks.  The Flow Coordinator is also grumbling because they are running around like a blue a***d fly.  Everyone is complaining now.  That was not the intended effect.  I wonder what went wrong?

But, to restore peace let’s take out the Flow Coordinator and give the Staff regular rest breaks.

H’mm.  We still seem to have queues.  Maybe we just have to live with the fact that patients have to queue.  So long as The Accountants are happy and the Staff  get their breaks then that’s as good as we can expect. Yes?

But … what if … we flex the Flow Coordinator to fill staggered Staff rest breaks and keep the flow moving calmly and smoothly all day without queues?

At last! Everyone is happy. Patients don’t wait. Staff are comfortably busy and also get regular rest breaks. And we actually have the most productive (value for money) design.

This is health care systems engineering (HCSE) in action.

PS. The Flaw of Averages error is a consequence of two widely held and invalid assumptions:

  1. That time is money. It isn’t. Time costs money but they are not interchangeable.
  2. That utilisation and efficiency are interchangeable.  They aren’t.  It is actually often possible to increase efficiency and reduce utilisation at the same time!

The Final Push

It is New Year 2021 and the spectre of COVID-4-Christmas came true.  We are now in the depths of winter and in the jaws of the Third Wave.  What happened?  Let us look back at the UK data for positive tests and deaths to see how this tragic story unfolded.

There was a Second Wave that started to build when Lock-down I was relaxed in July 2020.  And it looks like Lock-down II in November 2020 did indeed have a beneficial effect – but not as much as was needed.  So, when it too was relaxed at the start of December 2020 then … infections took off again … even faster than before!

That is the nature of epidemics and of exponential growth.  It seems we have not learned those painful lessons well enough.

And we all so desperately wanted a more normal Xmas that we conspired to let the COVID cat out of the bag again.  The steep rise in positive tests is real and we know that because a rise in deaths is following about three weeks behind.  And that means hospitals have filled up again.

Are we back to square one?

The emerging news of an even more contagious variant has only compounded our misery, but it is hard to separate the effect of that from all the other factors that are fuelling the Third Wave.

Is there no end to this recurring nightmare?

The short answer is – “It will end”.  It cannot continue forever.  All epidemics eventually burn themselves out when there are too few susceptible people left to infect and we enter the “endemic” phase.  When that happens the R number will gravitate to 1.0 again, which some might find confusing.  The confusion is caused by mixing up Ro (the basic reproduction number) and Rt (the effective reproduction number at time t).
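The Ro / Rt distinction boils down to one line of arithmetic: Rt is Ro scaled by the fraction of the population still susceptible, so as the susceptible “fuel” is used up Rt falls towards, and then hovers around, 1.0. A hypothetical illustration:

```python
# Rt = Ro x (fraction still susceptible). Numbers are illustrative.

def rt(r0, susceptible_fraction):
    return r0 * susceptible_fraction

assert rt(3.0, 1.00) == 3.0   # start of the epidemic: Rt equals Ro
assert rt(3.0, 0.50) > 1.0    # still growing
assert rt(3.0, 0.33) < 1.0    # past the herd-immunity threshold (1/Ro)
```

Ro is a fixed property of the virus and the setting; Rt is what we actually observe, and it is Rt that gravitates to 1.0 in the endemic phase.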

How close are we to that end game?

Well, we are certainly a lot closer than we were in July 2020 because millions more people have been exposed, infected and recovered and many of those were completely asymptomatic.  It is estimated that about a third of those who catch it do not have any symptoms – so they will not step forward to be tested and will not appear in the statistics.  But they can unwittingly and silently spread the virus while they are infectious.  And many who are symptomatic do not come forward to get tested so they won’t appear in the statistics either.

And there are now two new players in the COVID-19 Game … the Pfizer vaccine and the Oxford vaccine.  They are the White Knights and they are on our side.

Hurrah!

Now we must manufacture, distribute and administer these sickness-and-death-preventing vaccines to 65 million people as soon as possible.  That alone is a massive logistical challenge when we are already fighting battles on many fronts.  It seems impossible.

Or is it?

It feels obvious but is it the most effective strategy?  Should we divert our limited, hard-pressed, exhausted health care staff to jabbing the worried-well?  Should we eke out our limited supplies of precious vaccine to give more people a first dose by delaying the second dose for others?

Will the White Knights save us?

The short answer is – “Not on their own”.

The maths is simple enough.

Over the last three weeks we have, through Herculean effort, managed to administer 1 million first doses of the Pfizer vaccine.  That sounds like a big number but when put into the context of a UK population of 65 million it represents less than 2% and offers only delayed and partial protection.  The trial evidence confirmed that two doses of the Pfizer vaccine given at a three week interval would confer about 90% protection.  That is the basis of the licence and the patient consent.

So, even if we delay second doses and double the rate of first dose delivery we can only hope to partially protect about 2-3% of the population by the end of January 2021.  That is orders of magnitude too slow.
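The arithmetic behind those percentages is simple enough to check in a few lines, using the figures quoted above:

```python
# Back-of-envelope check of the coverage figures quoted in the text.
first_doses = 1_000_000      # administered over the last three weeks
population = 65_000_000      # UK population figure used in the text

coverage = first_doses / population
assert coverage < 0.02       # "less than 2%" of the population

# doubling the first-dose rate roughly doubles the coverage,
# still only a few percent of the population in a month
assert 2 * coverage < 0.04
```

A few percent coverage per month against an epidemic doubling in days is the "orders of magnitude too slow" in plain numbers.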

And the vaccines are not a treatment.  The vaccine cannot mitigate the fact that a large number of people are already infected and will have to run the course of their illness.  Most will recover, but many will not.

So, how do we get our heads around all these interacting influences?  How do we predict how the Coronavirus Game is likely to play out over the next few weeks? How do we decide what to do for the best?

I believe it is already clear that trying to answer these questions using the 1.3 kg of wetware between our ears is fraught with problems.

We need to seek the assistance of some hardware, software and some knowledge of how to configure them to illuminate the terrain ahead.


Here is what the updated SEIR-V model suggests will happen if we continue with the current restrictions and the current vaccination rate.  I’ve updated it with the latest data and added a Vaccination component.

The lines to focus on are the dotted ones: grey = number of infected cases, yellow = number ill enough to justify hospital treatment, red = critically ill and black = not survived.

The vertical black line is Now and the lines to the right of that are the most plausible prediction.

It says that a Third Wave is upon us and that it could be worse than the First Wave.  That is the bad news. The good news is that the infection rate drops because the epidemic will finally burn itself out – irrespective of the vaccinations.

So, it would appear that the White Knights cannot rescue us on their own … but we can all help to accelerate the final phase and limit the damage – if we all step up and pull together, at the same time and in the same direction.

We need a three-pronged retaliation:

  1. Lock-down:  “Stay at home. Protect the NHS. Save Lives”.  It worked in the First Wave and it will work in the Third Wave.
  2. Care in the Community:  For those who will become unwell and who will need the support of family, friends, neighbours and the NHS.
  3. Volunteer to Vaccinate:  To protect everyone as soon as is practically feasible.

Here is what it could look like.  All over by Easter.

There is light at the end of the tunnel.  The end is in sight.  We just have to pull together in the final phase of the Game.


PS. For those interested in how an Excel-based SEIR-V model is designed, built and used here’s a short (7 minute) video of the highlights:

This is health care systems engineering (HCSE) in action.

And I believe that the UK will need a new generation of HCSEs to assist in the re-designing and re-building of our shattered care services.  So, if you are interested then click here to explore further.

Second Wave

The summer holidays are over and schools are open again – sort of.

Restaurants, pubs and nightclubs are open again – sort of.

Gyms and leisure facilities are open again – sort of.

And after two months of gradual easing of social restrictions and massive expansion of test-and-trace we now have the spectre of a Second Wave looming.  It has happened in Australia, Italy, Spain and France so it can happen here.

As usual, the UK media are hyping up the general hysteria and we now also have rioting disbelievers claiming it is all a conspiracy and that re-applying local restrictions is an infringement of their liberty.

So, what is all the fuss about?

We need to side-step the gossip and get some hard data from a reliable source (i.e. not a newspaper). Here is what worldometer is sharing …

OMG!  It looks like The Second Wave is here already!  There are already as many cases now as in March and we still have the mantra “Stay At Home – Protect the NHS – Save Lives” ringing in our ears.  But something is not quite right.  No one is shouting that hospitals are bursting at the seams.  No one is reporting that the mortuaries are filling up.  Something is different.  What is going on?  We need more data.

That is odd!  We can clearly see that cases and deaths went hand-in-hand in the First Wave with about 1 in 5 cases not making it.  But this time the deaths are not rising with the cases.

Ah ha!  Maybe that is because the virus has mutated into something much more benign and because we have got much better at diagnosing and treating this illness – the ventilators and steroids saved the day.  Hurrah!  It’s all a big fuss about nothing … we should still be able to have friends round for parties and go on pub crawls again!

But … what if there was a different explanation for the patterns on the charts above?

It is said that “data without context is meaningless” … and I’d go further than that … data without context is dangerous, because if it leads to invalid conclusions and inappropriate decisions then well-intended actions can cause unintended harm.  Death.

So, we need to check the context of the data.

In the First Wave the availability of the antigen (swab) test was limited so it was only available to hospitals and the “daily new cases” were in patients admitted to hospital – the ones with severe enough symptoms to get through the NHS 111 telephone triage.  Most people with symptoms, even really bad ones, stayed at home to protect the NHS.  They didn’t appear in the statistics.

But did the collective sacrifice of our social lives save actual lives?

The original estimates of the plausible death toll in the UK ranged up to 500,000 from coronavirus alone (and no one knows how many more from the collateral effects of an overwhelmed NHS).  The COVID-19 body count to date is just under 50,000, so putting a positive spin on that tragic statistic, 90% of the potential deaths were prevented.  The lock-down worked.  The NHS did not collapse.  The Nightingales stood ready and idle – an expensive insurance policy.  Lives were actually saved.

Why isn’t that being talked about?

And the context changed in another important way.  The antigen testing capacity was scaled up despite being mired in confusing jargon.  Who thought up the idea of calling them “pillars”?

But, if we dig about on the GOV.UK website long enough there is a definition:

So, Pillar 1 = NHS testing capacity and Pillar 2 = commercial testing capacity – and we don’t actually know how much was in-hospital testing and how much was in-community testing because the definitions seem to reflect budgets rather than patients.  Ever has it been thus in the NHS!

However, we can see from the chart below that testing activity (blue bars) has increased many-fold, but the two testing streams (in hospital and outside hospital) are combined in one chart.  Well, it is one big pot of tax-payers’ cash after all, and it is the same test.

To unravel this a bit we have to dig into the website, download the raw data, and plot it ourselves.  Looking at Pillar 2 (commercial) we can see they had a late start, caught the tail of the First Wave, and then ramped up activity as the population testing caught up with the available capacity (because hospital activity has been falling since late April).

Now we can see that the increased number of positive tests could be explained by the fact that we are now testing anyone with possible COVID-19 symptoms who steps up – mainly in the community.  And we were unable to do this before because the testing capacity did not exist.

The important message is that in the First Wave we were not measuring what was happening in the community – it was happening though – it must have been.  We measured the knock-on effects: hospital admissions with positive tests and deaths after positive tests.

So, to present the daily positive tests as one time-series chart that conflates both ‘pillars’ is both meaningless and dangerous and it is no surprise that people are confused.


This raises a question: Can we estimate how many people there would have been in the community in the First Wave so that we can get a sense of what the rising positive test rate means now?

The way that epidemiologists do this is to build a generic simulation of the system dynamics of an epidemic (a SEIR multi-compartment model) and then use the measured data to calibrate this model so that it can then be used for specific prediction and planning.

Here is an example of the output of a calibrated multi-compartment system dynamics model of the UK COVID-19 epidemic for a nominal 1.3 million population.  The compartments that are included are Susceptible, Exposed, Infectious, and Recovered (i.e. not infectious) and this model also simulates the severity of the illness i.e. Severe (in hospital), Critical (in ITU) and Died.
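The moving parts of such a model are simpler than they sound.  Here is a minimal SEIR sketch in Python – an illustration only, not the author’s calibrated Excel model – using assumed round-number parameters (a ~5-day incubation, a ~7-day infectious period, and the ~1:30 admission and ~1:5 hospital mortality ratios quoted later in the text):

```python
# Minimal SEIR sketch with a severity tail.  All parameters are assumptions
# for illustration: incubation ~5 days, infectious ~7 days, and a contact
# rate chosen so that R0 = beta/gamma is roughly 4 (pre-lockdown mixing).
N = 1_300_000.0                      # nominal population from the text
S, E, I, R = N - 100.0, 0.0, 100.0, 0.0
beta, sigma, gamma = 0.6, 1 / 5.0, 1 / 7.0
peak_infectious, admissions = 0.0, 0.0

for day in range(365):               # one Euler step per day
    new_exposed = beta * S * I / N   # susceptibles meeting infectious people
    new_infectious = sigma * E       # end of incubation
    new_resolved = gamma * I         # end of the infectious period
    S -= new_exposed
    E += new_exposed - new_infectious
    I += new_infectious - new_resolved
    R += new_resolved
    peak_infectious = max(peak_infectious, I)
    admissions += new_resolved / 30  # ~1:30 of cases need hospital admission

deaths = admissions / 5              # ~1:5 of admissions do not survive
```

Calibration then means adjusting these parameters until the simulated admissions and deaths track the measured ones – at which point the unmeasured compartments (such as the infectious people in the community) come for free.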

The difference in size of the various compartments is so great that the graph below requires two scales – the solid line (Infectious) is plotted on the left hand scale and the others are plotted on the right hand scale which is 10 times smaller.  The green line is today and the reported data up to that point has been used to calibrate the model and to estimate the historical metrics that we did not measure – such as how many people in the community were infectious (and would have tested positive).

At the peak of the First Wave, for this population of 1.3 million, the model estimates there were about 800 patients in hospital (which there were) and 24,000 patients in the community who would have tested positive if we had been able to test them.  24,000/800 = 30 which means the peak of the grey line is 30 x higher than the peak of the orange line – hence the need for the two Y-axes with a 10-fold difference in scale.

Note the very rapid rise in the number of infectious people from the beginning of March when the first UK death was announced, before the global pandemic was declared and before the UK lock-down was enacted in law and implemented.  Coronavirus was already spreading very rapidly.

Note how this rapid rise in the number of infectious people came to an abrupt halt when the UK lock-down was put into place in the third week of March 2020.  Social distancing breaks the chain of transmission from one infectious person to many other susceptible ones.

Note how the peaks of hospital admissions, critical care admissions and deaths lag behind the rise in infectious people (because it takes time for the coronavirus to do its damage) and how each peak is smaller (because only about 1:30 get sick enough to need admission, and only 1:5 of hospital admissions do not survive).

Note how the fall in the infectious group was more gradual than the rise (because the lock-down was partial, because not everyone could stay at home – essential services like the NHS had to continue – and because there was already a big pool of infectious people in the community).


So, by early July 2020 it was possible to start a gradual relaxation of the lock-down, and from then we can see a gradual rise in infectious people again.  But now we were measuring them because of the growing capacity to perform antigen tests in the community.  The relatively low level and the relatively slow rise are much less dramatic than what was happening in March (because of the higher awareness and the continued social distancing and use of face coverings).  But it is all too easy to become impatient and complacent.

But by early September 2020 it was clear that the number of infectious people was growing faster in the community – and then we saw hospital admissions reach a minimum and start to rise again.  And then the number of deaths reached a minimum and started to rise again.  And this evidence proves that the current level of social distancing is not enough to keep a lid on this disease.  We are in the foothills of a Second Wave.


So what do we do next?

First, we must estimate the effect that the current social distancing policies are having, and one way to do that would be to stop doing them and see what happens.  Clearly that is not an ethical experiment to perform given what we already know.  But we can simulate that experiment using our calibrated SEIR model.  Here is what is predicted to happen if we went back to the pre-lockdown behaviours: there would be a very rapid spread of the virus followed by a Second Wave many times bigger than the first!  Then it would burn itself out and those who had survived could go back to some semblance of normality.  The human sacrifice would be considerable though.
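That thought experiment is easy to sketch in code, because the only parameter social distancing acts on is the contact rate (beta).  The toy SEIR below uses illustrative numbers, not the calibrated values, but it shows the mechanism: reverting to pre-lockdown mixing sends the infectious peak soaring, while keeping R0 below 1 lets the epidemic decay.

```python
def peak_infectious(beta, n_days=365):
    """Peak of the infectious compartment of a toy SEIR model for a given
    contact rate.  Incubation ~5 days and infectious ~7 days are assumed."""
    N = 1_300_000.0
    S, E, I, R = N - 100.0, 0.0, 100.0, 0.0
    sigma, gamma = 1 / 5.0, 1 / 7.0
    peak = I
    for _ in range(n_days):
        new_E = beta * S * I / N     # new exposures today
        new_I = sigma * E            # newly infectious
        new_R = gamma * I            # newly resolved
        S -= new_E
        E += new_E - new_I
        I += new_I - new_R
        R += new_R
        peak = max(peak, I)
    return peak

# Pre-lockdown mixing: R0 = 0.6 * 7 = 4.2  ->  explosive Second Wave.
unrestricted = peak_infectious(beta=0.6)
# Lockdown-level mixing: R0 = 0.12 * 7 = 0.84  ->  the epidemic shrinks.
restrained = peak_infectious(beta=0.12)
```

The “very rapid spread” in the prediction is nothing more mysterious than this one parameter being released.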

So, the problems that the current social distancing measures are causing pale into insignificance compared with what could happen if they were dropped.

The previous model shows what is predicted would happen if we continue as we are with no further easing of restrictions and assuming people stick to them.  In short, we will have COVID-for-Christmas and it could be a very nasty business indeed as it would come at the same time as other winter-associated infectious diseases such as influenza and norovirus.

The next chart shows what could happen if we squeeze the social distancing brake a bit harder by focusing only on the behaviours that the track-and-trace-and-test system is highlighting as the key drivers of the growth in infections, admissions and deaths.

What we see is an arrest of the rise in the number of infectious people (as we saw before), a small and short-lived increase in hospital admissions, then a slow decline back to the levels that were achieved in early July – at which point it would be reasonable to have a more normal Christmas.

And another potential benefit of a bit more social distancing might be a much less problematic annual flu epidemic because that virus would also find it harder to spread – plus we have a flu vaccination which we can use to reduce that risk further.


It is not going to be easy.  We will have to sacrifice a bit of face-to-face social life for a bit longer.  We will have to measure, monitor, model and tweak the plan as we go.

And one thing we can do immediately is to share the available information in a more informative and less histrionic way than we are seeing at the moment.


Update: Sunday 1st November 2020

Yesterday the Government had to concede that the policy of regional restrictions had failed, and that bluffing it out and ignoring the scientific advice was, with the clarity of hindsight, an unwise strategy.

In the face of the hard evidence of rapidly rising COVID+ve hospital admissions and deaths, the decision to re-impose a national 4-week lock-down was announced.  This is the only realistic option to prevent overwhelming the NHS at a time of year when it struggles with seasonal influenza causing a peak of admissions and deaths.

Paradoxically, this year the effect of influenza may be less because social distancing will reduce the spread of that as well and also because there is a vaccination for influenza.  Many will have had their flu jab early … I certainly did.

So, what is the predicted effect of a 4-week lock-down?  Well, the calibrated model (also used to generate the charts above) estimates that it could indeed suppress the Second Wave and mitigate a nasty COVID-4-Christmas scenario.  But even with it, hospital admissions and the associated mortality will continue to increase until the effect kicks in.

Brace yourselves.

Coronavirus


The start of a new year, decade, century or millennium is always associated with a sense of renewal and hope.  Little did we know that in January 2020 a global threat had hatched and was growing in the city of Wuhan, Hubei Province, China.  A virus of the family coronaviridae had mutated and jumped from animal to man, where it found a new host and a vehicle to spread itself.  Several weeks later the World became aware of the new threat and in the West … we ignored it.  Perhaps we remembered the SARS epidemic, which was heralded as a potential global catastrophe but was contained in the Far East and fizzled out.  So, maybe we assumed this SARS-like virus would do the same.

It didn’t.  This mutant was different.  It caused a milder illness and unwitting victims were infectious before they were symptomatic.  And most got better on their own, so they spread the mutant to many other people.  Combine that mutant behaviour with the winter (when infectious diseases spread more easily because we spend more time together indoors), Chinese New Year and global air travel … and we have the perfect recipe for cooking up a global pandemic of a new infectious disease.  But we didn’t know that at the time and we carried on as normal, blissfully unaware of the catastrophe that was unfolding.

By February 2020 it became apparent that the mutant had escaped containment in China and was wreaking havoc in other countries – with Italy high on the casualty list.  We watched in horror at the scenes on television of Italian hospitals overwhelmed with severely ill people fighting for breath as the virus attacked their lungs.  The death toll rose sharply but we still went on our ski holidays and assumed that the English Channel and our Quarantine Policy would protect us.

They didn’t.  This mutant was different.  We now know that it had already silently gained access into the UK and was growing and spreading.  The first COVID-19 death reported in the UK was in early March 2020 and only then did we sit up and start to take notice.  This was getting too close to home.

But it was too late.  The mathematics of how epidemics spread was worked out 100 years ago, not long after the 1918 pandemic of Spanish Flu that killed tens of millions of people before it burned itself out.  An epidemic is like cancer.  By the time it is obvious it is already far advanced because the growth is not linear – it is exponential.
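The arithmetic of exponential growth shows why.  With an illustrative doubling time of three days (roughly what the early data suggested, though the exact figure is an assumption here), a barely-noticeable outbreak becomes a catastrophe within a month:

```python
# Illustrative only: a 3-day doubling time applied to a seed of 100 cases.
cases = 100
for day in range(3, 31, 3):
    cases *= 2
    print(f"day {day:2d}: {cases:7,d} cases")
# After 30 days: 100 * 2**10 = 102,400 -- a thousand-fold increase,
# while the first week or two looked like nothing was happening.
```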

As a systems engineer I am used to building simulation models to reveal the complex and counter-intuitive behaviour of nonlinear systems using the methods first developed by Jay W. Forrester in the 1950s.  And when I looked up the equations that describe epidemics (on Wikipedia) I saw that I could build a system dynamics model of a COVID-19 epidemic using no more than an Excel spreadsheet.

So I did.  And I got a nasty surprise.  Using the data emerging from China on the nature of the spread of the mutant virus, the incidence of severe illness and the mortality rate … my simple Excel model predicted that, if COVID-19 was left to run its natural course in the UK, it would burn itself out over several months but the human cost would be 500,000 deaths and the NHS would be completely overwhelmed with a “tsunami of sick”.  And I could be one of them!  The fact that there was no treatment and no vaccine for this novel threat excluded those options.

My basic Excel model confirmed that the only effective option to mitigate this imminent catastrophe was to limit the spread of the virus through social engineering i.e. an immediate and drastic lock-down.  Everyone who was not essential to maintaining core services should “Stay at home, Protect the NHS and Save lives“.  That would become the mantra.  And others were already saying this – epidemiologists whose careers are spent planning for this sort of eventuality.

But despite all this there still seemed to be little sense of urgency, perhaps because their super-sophisticated models predicted that the peak of the UK epidemic would be in mid-June, so there was time to prepare.  My basic model predicted that the peak would be in mid-April, in about 4 weeks, and that it was already too late to prevent about 50,000 deaths.

It turns out I was right.  That is exactly what happened.  By mid-March 2020 London was already seeing an exponential rise in hospital admissions, intensive care admissions and deaths and suddenly the UK woke up and panicked.  By that time I had enlisted the help of a trusted colleague who is a public health doctor and who had studied epidemiology, and together we wrote up and published the emerging story as we saw it:

An Acute Hospital Demand Surge Planning Model for the COVID-19 Epidemic using Stock-and-Flow Simulation in Excel: Part 1. Journal of Improvement Science 2020: 68; 1-20.  The link to download the full paper is here.

I also shared the draft paper with another trusted friend and colleague who works for my local clinical commissioning group (CCG) and I asked “Has the CCG a sense of the speed and magnitude of what is about to happen, and has it prepared for the tsunami of sick that primary care will need to see?”

What then ensued was an almost miraculous emergence of a coordinated and committed team of health care professionals and NHS managers with a single, crystal clear goal: to design, build and deliver a high-flow, drive-through, community-based facility to safely see-and-assess hundreds of patients per day with suspected COVID-19 who were too sick or too worried to be managed on the phone, but not sick enough to go to A&E.  This was not a Nightingale Ward – that was a parallel, more public and much more expensive endeavour designed as a spillover for overwhelmed acute hospitals.  Our purpose was to help to prevent that, and the time scale was short.

We had three weeks to do it because Easter weekend was the predicted peak of the COVID-19 surge if the national lock-down policy worked as hoped.  No one really had an accurate estimate of how effective the lock-down would be or how high the peak of the tsunami of sick would rise as it crashed into the NHS.  So, we planned for the worst and hoped for the best.  The Covid Referral Centre (CRC) was an insurance policy and we deliberately over-engineered it to use every scrap of space we had been offered in a small car park on the south side of the NEC site.

The CRC needed to open by Sunday 12th April 2020 and we were ready, but the actual opening was delayed by NHS bureaucracy and politics.  It did eventually open on 22nd April 2020, just four weeks after we started, and it worked exactly as designed.  The demand was, fortunately, less than our worst case scenario; partly because we had missed the peak by 10 days and we opened the gates to a falling tide; and partly because the social distancing policy had been more effective than hoped; and partly because it takes time for risk-averse doctors to develop trust and to change their ingrained patterns of working.  A drive-thru COVID-19 see-and-treat facility? That was innovative and untested!!

The CRC expected to see a falling demand as the first wave of COVID-19 washed over, and that is exactly what happened.  So, as soon as that prediction was confirmed, the CRC was progressively repurposed to provide other much-needed services such as drive-thru blood tests, drive-thru urgent care, and even outpatient clinics in the indoor part of the facility.

The CRC closed its gates to suspected COVID-19 patients on 31st July 2020, as planned and as guided by the simple Excel computer model.

This is health care systems engineering in action.

And the simple Excel model has been continuously re-calibrated as fresh evidence has emerged.  The latest version predicts that a second peak of COVID-19 (that is potentially worse than the first) will happen in late summer or autumn if social distancing is relaxed too far (see below).

But we don’t know what “too far” looks like in practical terms.  Oh, and a second wave could kick off just when we expect the annual wave of seasonal influenza to arrive.  Or will it?  Maybe the effect of social distancing for COVID-19 in other countries will suppress the spread of seasonal flu as well?  We don’t know that either, but the data on the incidence of flu from Australia certainly supports that hypothesis.

We may need a bit more health care systems engineering in the coming months. We shall see.

Oh, and if we are complacent enough to think a second wave could never happen in the UK … here is what is happening in Australia.

Restoring Pride-in-Work

In 1986, Dr Don Berwick from Boston attended a 4-day seminar run by Dr W. Edwards Deming in Washington.  Dr Berwick was a 40-year-old paediatrician who was also interested in health care management and improving quality and productivity.  Dr Deming was an 86-year-old engineer and statistician who, when he was in his 40s, helped the US to improve the quality and productivity of the industrial processes supporting the US and Allies in WWII.

Don Berwick describes attending the seminar as an emotionally challenging, life-changing experience when he realised that his well-intended attempts to improve quality by inspection-and-correction were a counterproductive, abusive approach that led to fear, demotivation and erosion of pride-in-work.  His blinding new clarity of insight led directly to the founding of the Institute for Healthcare Improvement in the USA in the early 1990s.

One of the tenets of Dr Deming’s theories is that the ingrained beliefs and behaviours that erode pride-in-work also lead to the very outcomes that management do not want – namely conflict between managers and workers and economic failure.

So, an explicit focus on improving pride-in-work as an early objective in any improvement exercise makes very good economic sense, and is a sign of wise leadership and competent management.


Last week a case study was published that illustrates exactly that principle in action.  The important message in the title is “restore the calm”.

One of the most demotivating aspects of health care that many complain about is the stress caused by a chaotic environment, chronic crisis and perpetual firefighting.  So, anything that can restore calm will, in principle, improve motivation – and that is good for staff, patients and organisations.

The case study describes, in detail, how calm was restored in a chronically chaotic chemotherapy day unit … on Weds, June 19th 2019 … in one day and at no cost!

To say that the chemotherapy nurses were surprised and delighted is an understatement.  They were amazed to see that they could treat the same number of patients, with the same number of staff, in the same space and without the stress and chaos.  And they had time to keep up with the paperwork; and they had time for lunch; and they finished work 2 hours earlier than previously!

Surely such a thing was not possible?  But here they were, experiencing it.  And their patients noticed the flip from chaos-to-strangely-calm too.

The impact of the one-day-test was so profound that the nurses voted to adopt the design change the following week.  And they did.  And the restored calm has been sustained.


What happened next?

The chemotherapy nurses were able to catch up with the time-owing that had accumulated from the historical late finishes.  And the problem of high staff turnover and difficulty in recruitment evaporated.  Highly-trained chemotherapy nurses who had left because of the stressful chaos now want to come back.  Pride-in-work has been re-established.  There are no losers.  It is a win-win-win result for staff, patients and organisations.


So, how was this “miracle” achieved?

Well, first of all it was not a miracle.  The flip from chaos-to-calm was predicted to happen.  In fact, that was the primary objective of the design change.

So, how was this design change achieved?

By establishing the diagnosis first – the primary cause of the chaos – and it was not what the team believed it was.  And that is the reason they did not believe the design change would work; and that is the reason they were so surprised when it did.

So, how was the diagnosis achieved?

By using an advanced systems engineering technique called Complex Physical System (CPS) modelling.  That was the game changer!  All the basic quality improvement techniques had been tried and had not worked – process mapping, direct observation, control charts, respectful conversations, brainstorming, and so on.  The system structure was too complicated. The system behaviour was too complex (i.e. chaotic).

What CPS revealed was that the primary cause of the chaotic behaviour was the work scheduling policy.  And with that clarity of focus, the team were able to re-design the policy themselves using a simple paper-and-pen technique.  That is why it cost nothing to change.

So, why hadn’t they been able to do this before?

Because systems engineering is not a taught component of the traditional quality improvement offerings.  Healthcare is rather different to manufacturing! As the complexity of the health care system increases we need to learn the more advanced tools that are designed for this purpose.

What is the same is the principle of restoring pride-in-work and that is what Dr Berwick learned from Dr Deming in 1986, and what we saw happen on June 19th, 2019.

To read the story of how it was done click here.

Carveoutosis Multiforme Fulminans

This is the name given to an endemic, chronic, systemic design disease that afflicts the whole NHS – one that very few have heard of, and even fewer understand.

This week marked two milestones in the public exposure of this elusive but eminently treatable health care system design illness that causes queues, delays, overwork, chaos, stress and risk for staff and patients alike.

The first was breaking news from the team in Swansea led by Chris Jones.

They had been grappling with the wicked problem of chronic queues, delays, chaos, stress, high staff turnover, and escalating costs in their Chemotherapy Day Unit (CDU) at the Singleton Hospital.

The breakthrough came earlier in the year when we used the innovative eleGANTT® system to measure and visualise the CDU chaos in real-time.

This rich set of data enabled us, for the first time, to apply a powerful systems engineering technique called counterfactual analysis, which revealed the primary cause of the chaos – the elusive and counter-intuitive design disease carveoutosis multiforme fulminans.

And this diagnosis implied that the chaos could be calmed quickly and at no cost.

But that news fell on slightly deaf ears because, not surprisingly, the CDU team were highly sceptical that such a thing was possible.

So, to convince them we needed to demonstrate the adverse effect of carveoutosis in a way that was easy to see.  And to do that we used some advanced technology: dice and tiddly winks.
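The dice-and-tiddly-winks game can also be played in code.  The sketch below is a toy model of my own – not the Swansea data, the eleGANTT® output, or the team’s actual schedule – but it shows the mechanism of carve-out: a day unit with eight slots and, on average, eight arrivals split randomly between two referral streams.  Reserving four slots per stream wastes capacity whenever the day’s mix doesn’t match the reservation, so a backlog grows even though total capacity equals total demand.

```python
import random

def average_backlog(carved, n_days=1000, seed=1):
    """Toy day unit: 8 treatment slots per day and 8 arrivals per day on
    average, split randomly between two referral streams A and B."""
    rng = random.Random(seed)
    backlog = {"A": 0, "B": 0}
    total_backlog = 0
    for _ in range(n_days):
        a = rng.randint(0, 8)             # today's stream-A arrivals
        backlog["A"] += a
        backlog["B"] += 8 - a             # total demand == total capacity
        if carved:
            # Carve-out: 4 slots reserved per stream; spare slots are wasted.
            backlog["A"] -= min(backlog["A"], 4)
            backlog["B"] -= min(backlog["B"], 4)
        else:
            # Pooled: all 8 slots serve whoever is waiting, longest queue first.
            slots = 8
            for stream in sorted(backlog, key=backlog.get, reverse=True):
                served = min(backlog[stream], slots)
                backlog[stream] -= served
                slots -= served
        total_backlog += backlog["A"] + backlog["B"]
    return total_backlog / n_days
```

With the pooled policy the unit ends every day with an empty waiting list; with the carve-out policy a backlog accumulates even though no capacity was removed – which is why undoing the carve-out can calm the chaos at no cost.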

The reaction of the CDU nurses was amazing.  As soon as they ‘saw’ it, it clicked and they immediately grasped how to apply it in their world.  They designed the change they needed to make in a matter of minutes.


But the proof-of-the-pudding-is-in-the-eating and we arranged a one-day-test-of-change of their anti-carveout design.

The appointed day arrived, Wednesday 19th June.  The CDU nurses implemented their new design (which cost nothing to do).  Within an hour of the day starting they reported that the CDU was strangely calm.   And at the end of the day they reported that it had remained strangely calm all day; and that they had time for lunch; and that they had time to do all their admin as they went; and that they finished on time; and that the patients did not wait for their chemotherapy; and that the patients noticed the chaos-to-calm transformation too.

They treated just the same number of patients as usual with the same staff, in the same space and with the same equipment.  It cost nothing to make the change.

To say that they were surprised is an understatement!  They were so surprised and so delighted that they did not want to go back to the old design – but they had to, because it was only a one-day-test-of-change.

So, on Thursday and Friday they reverted back to the carveoutosis design.  And the chaos returned.  That nailed it!  There was a riot!!  The CDU nurses refused to wait until later in the year to implement their new design and they voted unanimously to implement it from the following Monday.  And they did.  And calm was restored.


The second milestone happened on Thursday 11th July when we ran a Health Care Systems Engineering (HCSE) Masterclass on the very same topic … chronic systemic carveoutosis multiforme fulminans.

This time we used the dice and tiddly winks to demonstrate the symptoms, signs and the impact of treatment.  Then we explored the known pathophysiology of this elusive and endemic design disease in much more depth.

This is health care systems engineering in action.

It seems to work.

Leverage Points

One of the most surprising aspects of systems is how some big changes have no observable effect and how some small changes are game-changers. Why is that?

The technical name for this phenomenon is leverage points.

When a nudge is made at a leverage point in a real system the impact is amplified – so a small cause can have a big effect.

And when a big kick is made where there is no leverage point the effort is dissipated. Like flogging a dead horse.

Other names for leverage points are triggers, buttons, catalysts, fuses etc.


The fact that there is a big effect does not imply it is a good effect.

Poking a leverage point can trigger a catastrophe just as it can trigger a celebration. It depends on how it is poked.

Perhaps that is one reason people stay away from them.

But when our health care system performance is in decline, if we do nothing, or if we act but stay away from leverage points (i.e. flog the dead horse), then we deny ourselves the opportunity of improvement.

So, we need a way to (a) identify the leverage points and (b) know how to poke them positively rather than poking them into delivering a catastrophe.


Here are a couple of real examples.


The time-series chart above shows the A&E performance of a real acute trust.  Notice the pattern as we read left-to-right; baseline performance is OKish and dips in the winters, and the winter dips get deeper but the baseline performance recovers.  In April 2015 (yellow flag) the system behaviour changes, and it goes into a steady decline with added winter dips.  This is the characteristic pattern of poking a leverage point in the wrong way … and the fact it happened at the start of the financial year suggests that Finance was involved.  Possibly triggered by a cost-improvement programme (CIP) action somewhere else in the system.  Save a bit of money here and create a bigger problem over there. That is how systems work. Not my budget so not my problem.

Here is a different example, again from a real hospital and around the same time.  It starts with a similar pattern of deteriorating performance and there is a clear change in system behaviour in Jan 2015.  But in this case the performance improves and stays improved.  Again, the visible sign of a leverage point being poked but this time in a good way.

In this case I do know what happened.  A contributory cause of the deteriorating performance was correctly diagnosed, the leverage point was identified, a change was designed and piloted, and then implemented and validated.  And it worked as predicted.  It was not a fluke.  It was engineered.


So what is the reason that the first example is much more commonly seen than the second?

That is a very good question … and to answer it we need to explore the decision making process that leads up to these actions because I refuse to believe that anyone intentionally makes decisions that lead to actions that lead to deterioration in health care performance.

And perhaps we can all learn how to poke leverage points in a positive way?

Measuring Chaos

One of the big hurdles in health care improvement is that most of the low-hanging fruit have been harvested.

These are the small improvement projects that can be done quickly because as soon as the issue is made visible to the stakeholders the cause is obvious and the solution is too.

This is where kaizen works well.

The problem is that many health care issues are rather more difficult because the process that needs improving is complicated (i.e. it has lots of interacting parts) and usually exhibits rather complex behaviour (e.g. chaotic).

One good example of this is a one stop multidisciplinary clinic.

These are widely used in healthcare and for good reason.  It is better for a patient with a complex illness, such as diabetes, to be able to access whatever specialist assessment and advice they need when they need it … i.e. in an outpatient clinic.

The multi-disciplinary team (MDT) is more effective and efficient when it can problem-solve collaboratively.

The problem is that the scheduling design of a one stop clinic is rather trickier than a traditional simple-but-slow-and-sequential new-review-refer design.

A one stop clinic that has not been well-designed feels chaotic and stressful for both staff and patients and usually exhibits the paradoxical behaviour of waiting patients and waiting staff.


So what do we need to do?

We need to map and measure the process and diagnose the root cause of the chaos, and then treat it.  A quick kaizen exercise should do the trick. Yes?

But how do we map and measure the chaotic behaviour of lots of specialists buzzing around like blue-***** flies trying to fix the emergent clinical and operational problems on the hoof?  This is not the linear, deterministic, predictable, standardised machine-dominated production line environment where kaizen evolved.

One approach might be to get the staff to audit what they are doing as they do it. But that adds extra work, usually makes the chaos worse, fuels frustration and results in a very patchy set of data.

Another approach is to employ a small army of observers who record what happens, as it happens.  This is possible and it works, but to be able to do this well requires a lot of experience of the process being observed.  And even if that is achieved the next barrier is the onerous task of transcribing and analysing the ocean of harvested data.  And then the challenge of feeding back the results much later … i.e. when the sands have shifted.


So we need a different approach … one that is able to capture the fine detail of a complex process in real-time, with minimal impact on the process itself, and that can process and present the wealth of data in a visual easy-to-assess format, and in real-time too.

This is a really tough design challenge …
… and it has just been solved.

Here are two recent case studies that describe how it was done using a robust systems engineering method.


Warts-and-All

This week saw the publication of a landmark paper – one that will bring hope to many.  A paper that describes the first step of a path forward out of the mess that healthcare seems to be in.  A rational, sensible, practical, learnable and enjoyable path.


This week I also came across an idea that triggered an “ah ha” for me.  The idea is that the most rapid learning happens when we are making mistakes about half of the time.

And when I say ‘making a mistake’ I mean not achieving what we predicted we would achieve because that implies that our understanding of the world is incomplete.  In other words, when the world does not behave as we expect, we have an opportunity to learn and to improve our ability to make more reliable predictions.

And that ability is called wisdom.


When we get what we expect about half the time, and do not get what we expect about the other half of the time, then we have the maximum amount of information that we can use to compare and find the differences.
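This "half right, half wrong" intuition can be put on a quantitative footing: in information-theoretic terms, a prediction that succeeds with probability p carries the most information when p is 0.5. A minimal sketch (my illustration, not taken from the paper):

```python
import math

def binary_entropy(p: float) -> float:
    """Average information (in bits) gained per prediction that
    succeeds with probability p (Shannon's binary entropy)."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome teaches us nothing
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Information gained per prediction at various success rates
for p in (0.5, 0.7, 0.9, 0.99):
    print(f"success rate {p:.2f} -> {binary_entropy(p):.3f} bits")
```

The entropy peaks at exactly 1 bit when we are right half of the time, and falls towards zero as our predictions become either reliably right or reliably wrong, which is the formal version of "the most rapid learning happens when we are making mistakes about half of the time".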

Was it what we did? Was it what we did not do? What are the acts and errors of commission and omission? What can we learn from those? What might we do differently next time? What would we expect to happen if we do?


And to explore this terrain we need to see the world as it is … warts and all … and that is the subject of the landmark paper that was published this week.


The context of the paper is improvement of cancer service delivery, and specifically of reducing waiting time from referral to first appointment.  This waiting is a time of extreme anxiety for patients who have suspected cancer.

It is important to remember that most people with suspected cancer do not have it, so most of the work of an urgent suspected cancer (USC) clinic is to reassure and to relieve the fear that the spectre of cancer creates.

So, the sooner that reassurance can happen the better, and for the unlucky minority who are diagnosed with cancer, the sooner they can move on to treatment the better.

The more important paragraph in the abstract is the second one … which states that seeing the system behaviour as it is, warts-and-all, in near-real-time, allows us to learn to make better decisions about what to do to achieve our intended outcomes. Wiser decisions.

And the reason this is the more important paragraph is because if we can do that for an urgent suspected cancer pathway then we can do that for any pathway.


The paper re-tells the first chapter of an emerging story of hope.  A story of how an innovative and forward-thinking organisation is investing in building embedded capability in health care systems engineering (HCSE), and is now delivering a growing dividend.  Much bigger than the investment on every dimension … better safety, faster delivery, higher quality and more affordability. Win-win-win-win.

The only losers are the “warts” – the naysayers and the cynics who claim it is impossible, or too “wicked”, or too difficult, or too expensive.

Innovative reality trumps cynical rhetoric … and the full abstract and paper can be accessed here.

So, well done to Chris Jones and the whole team in ABMU.

And thank you for keeping the candle of hope alight in these dark, stormy and uncertain times for the NHS.

Reflect and Celebrate

As we approach the end of 2018 it is a good time to look back and reflect on what has happened this year.

It has been my delight to have had the opportunity to work with front-line teams at University Hospital of North Midlands (UHNM) and to introduce them to the opportunity that health care systems engineering (HCSE) offers.

This was all part of a coordinated, cooperative strategy commissioned by the Staffordshire Clinical Commissioning Groups, and one area we were asked to look at was unscheduled care.

It was not my brief to fix problems.  I was commissioned to demonstrate how a systems engineer might approach them.  The first step was to raise awareness, then develop some belief and then grow some embedded capability – in the system itself.

The rest was up to the teams who stepped up to the challenge.  So what happened?

Winter is always a tough time for the NHS and especially for unscheduled care, so let us have a look and compare UHNM with NHS England as a whole – using the 4-hour A&E target yield – and over a longer time period of 7 years (so that we can see some annual cycles and longer term trends).

The A&E performance for the NHS in England as whole has been deteriorating at an accelerating pace over the 7 years.  This is a system-wide effect and there are a multitude of plausible causes.

The current UHNM system came into being at the end of 2014 with the merger of the Stafford and Stoke Hospital Trusts – and although their combined A&E performance dropped below average for England – the chart above shows that it did not continue to slide.

The NHS across the UK had a very bad time in the winter of 2017/18 – with a double whammy of sequential waves of Flu B and Flu A not helping!

But look at what happened at UHNM since Feb 2018.  Something has changed for the better and this is a macro system effect.  There has been a positive deviation from the expectation with about a 15% improvement in A&E 4-hr yield.  That is outstanding!

Now, I would say that news is worth celebrating and shouting “Well done everyone!” and then asking “How was that achieved?” and “What can we all learn that we can take forward into 2019 and build on?”

Merry Christmas.

Seeing The Voice of the System

It is always a huge compliment to see an idea improved and implemented by inspired innovators.

Health care systems engineering (HCSE) brings together concepts from the separate domains of systems engineering and health care.  And one idea that emerged from this union is to regard the health care system as a living, evolving, adapting entity.

In medicine we have the concept of ‘vital signs’ … a small number of objective metrics that we can measure easily and quickly.  With these we can quickly assess the physical health of a patient and decide if we need to act, and when.

With a series of such measurements over time we can see the state of a patient changing … for better or worse … and we can use this to monitor the effect of our actions and to maintain the improvements we achieve.

For a patient, the five vital signs are conscious level, respiratory rate, pulse, blood pressure and temperature. To sustain life we must maintain many flows within healthy ranges and the most critically important is the flow of oxygen to every cell in the body.  Oxygen is carried by blood, so blood flow is critical.

So, what are the vital signs for a health care system where the flows are not oxygen and blood?  They are patients, staff, consumables, equipment, estate, data and cash.

The photograph shows a demonstration of a Vitals Dashboard for a part of the cancer care system in the ABMU health board in South Wales.  The inspirational innovators who created it are Imran Rao (left), Andy Jones (right) and Chris Jones (top left), and they are being supported by ABMU to do this as part of their HCSE training programme.

So well done guys … we cannot wait to hear how being better able to see the voice of your cancer system translates into improved care for patients, improved working life for the dedicated NHS staff, and improved use of finite public resources.  Win-win-win.

Making NHS Data Count

The debate about how to sensibly report NHS metrics has been raging for decades.

So I am delighted to share the news that NHS Improvement have finally come out and openly challenged the dogma that two-point comparisons and red-amber-green (RAG) charts are valid methods for presenting NHS performance data.

Their rather good 147-page guide can be downloaded: HERE


The subject is something called a statistical process control (SPC) chart which sounds a bit scary!  The principle is actually quite simple:

Plot data that emerges over time as a picture that tells a story – #plotthedots

The main thrust of the guide is learning how to interpret these pictures in a meaningful way and to avoid two traps (i.e. errors).

Trap #1 = Over-reacting to random variation.
Trap #2 = Under-reacting to non-random variation.

Both of these errors cause problems, but in different ways.


Over-reacting to random variation

Random variation is a fact of life.  No two days in any part of the NHS are the same.  Some days are busier/quieter than others.

Plotting the daily-arrivals-in-A&E dots for a trust somewhere in England gives us this picture.  (The blue line is the average and the purple histogram shows the distribution of the points around this average.)

Suppose we were to pick any two days at random and compare the number of arrivals on those two days. We could get an answer anywhere between an increase of 80% (250 to 450) or a decrease of 44% (450 to 250).

But if we look at the whole picture above we get the impression that, over time:

  1. There is an expected range of random-looking variation between about 270 and 380 that accounts for the vast majority of days.
  2. There are some occasional, exceptional days.
  3. Average activity appears to have fallen by about 10% around August 2017.

So, our two-point comparison method seriously misleads us – and if we react to the distorted message that a two-point comparison generates then we run the risk of increasing the variation and making the problem worse.
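The arithmetic of that two-point trap is easy to verify, using the hypothetical extremes of 250 and 450 arrivals per day from the chart described above:

```python
# Hypothetical extremes of daily A&E arrivals taken from the chart
low, high = 250, 450

# Two-point comparison: the answer depends entirely on which two
# days we happen to pick, not on any underlying trend.
biggest_apparent_rise = (high - low) / low    # comparing a quiet day to a busy one
biggest_apparent_fall = (low - high) / high   # comparing a busy day to a quiet one

print(f"apparent rise: {biggest_apparent_rise:+.0%}")
print(f"apparent fall: {biggest_apparent_fall:+.0%}")
```

The same stable process can be made to look like an 80% surge or a 44% collapse, purely by cherry-picking the two dots.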

Lesson: #plotthedots


One of the downsides of SPC is the arcane and unfamiliar language that is associated with it … terms like ‘common cause variation‘ and ‘special cause variation‘.  Sadly, the authors at NHS Improvement have fallen into this ‘special language’ trap and therefore run the risk of creating a new clique.

The lesson here is that SPC is a specific, simplified application of a more generic method called a system behaviour chart (SBC).

The first SPC chart was designed by Walter Shewhart in 1924 for one purpose and one purpose only – for monitoring the output quality of a manufacturing process in terms of how well the product conformed to the required specification.

In other words: SPC is an output quality audit tool for a manufacturing process.

This has a number of important implications for the design of the SPC tool:

  1. The average is not expected to change over time.
  2. The distribution of the random variation is expected to be bell-shaped.
  3. We need to be alerted to sudden shifts.

Shewhart’s chart was designed to detect early signs of deviation in a well-performing manufacturing process: to flag possible causes worth investigating, and to minimise the adverse effects of over-reacting or under-reacting.
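For the curious, the mechanics of Shewhart's chart are simple enough to sketch. A common construction for individual values estimates the process variation from the average moving range and sets natural process limits three sigma either side of the average (a generic illustration with made-up numbers, not NHS Improvement's code):

```python
def shewhart_limits(baseline):
    """Natural process limits for an individuals (XmR) chart.

    Sigma is estimated from the mean moving range divided by the
    d2 bias-correction constant (1.128 for subgroups of size 2).
    """
    mean = sum(baseline) / len(baseline)
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return mean - 3 * sigma, mean + 3 * sigma

# A stable baseline of hypothetical weekly counts ...
baseline = [10, 12, 11, 13, 12, 11, 10, 12]
lo, hi = shewhart_limits(baseline)

# ... every baseline point sits inside the limits, so the process is
# behaving as expected; a sudden value of 20 would signal otherwise.
flagged = [x for x in baseline + [20] if not (lo <= x <= hi)]
```

Points inside the limits are treated as routine (common cause) variation; points outside them are the sudden shifts the chart was designed to catch.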


However,  for many reasons, the tool we need for measuring the behaviour of healthcare processes needs to be more sophisticated than the venerable SPC chart.  Here are three of them:

  1. The average is expected to change over time.
  2. The distribution of the random variation is not expected to be bell-shaped.
  3. We need to be alerted to slow drifts.

Under-Reacting to Non-Random Variation

Small shifts and slow drifts can have big cumulative effects.

Suppose I am a NHS service manager and I have a quarterly performance target to meet, so I have asked my data analyst to prepare a RAG chart to review my weekly data.

The quarterly target I need to stay below is 120 and my weekly RAG chart is set to show green when less than 108 (10% below target) and red when more than 132 (10% above target) because I know there is quite a lot of random week-to-week variation.

On the left is my weekly RAG chart for the first two quarters and I am in-the-green for both quarters (i.e. under target).

Q: Do I need to do anything?

A: The first quarter just showed “greens” and “ambers” so I relaxed and did nothing. There are a few “reds” in the second quarter, but about the same number as the “greens” and lots of “ambers” so it looks like I am about on target. I decide to do nothing again.

At the end of Q3 I’m in big trouble!

The quarterly RAG chart has flipped from Green to Red and I am way over target for the whole quarter. I missed the bus and I’m looking for a new job!
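For clarity, the RAG rule described above can be sketched as a simple classifier, using the thresholds from the example (target 120, with a ±10% amber band):

```python
def rag_status(value, target=120, tolerance=0.10):
    """Classify a weekly figure against a quarterly target with a
    symmetric tolerance band (green below, red above, amber between)."""
    if value < target * (1 - tolerance):   # below 108
        return "green"
    if value > target * (1 + tolerance):   # above 132
        return "red"
    return "amber"

# A slow upward drift: every single week reads 'amber' ...
drifting_weeks = [110, 113, 116, 119, 122, 125, 128, 131]
statuses = [rag_status(w) for w in drifting_weeks]

# ... yet the quarterly average has already crossed the target.
quarterly_average = sum(drifting_weeks) / len(drifting_weeks)
```

This is exactly the failure mode in the story: a steady drift can sit entirely inside the amber band, week after week, while the quarterly average quietly slips over the target.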

So, would a SPC chart have helped me here?

Here it is for Q1 and Q2.  The blue line is the target and the green line is the average … so below target for both quarters, as the RAG chart said.

There was a dip in Q1 for a few weeks but it was not sustained and the rest of the chart looks stable (all the points inside the process limits).  So, “do nothing” seemed like a perfectly reasonable strategy. Now I feel even more of a victim of fortune!

So, let us look at the full set of weekly data for the financial year and apply our retrospectoscope.

This is just a plain weekly performance run chart with the target limit plotted as the blue line.

It is clear from this that there is a slow upward drift and we can see why our retrospective quarterly RAG chart flipped from green to red, and why neither our weekly RAG chart nor our weekly SPC chart alerted us in time to avoid it!

This problem is often called ‘leading by looking in the rear view mirror‘.

The variation we needed to see was not random, it was a slowly rising average, but it was hidden in the random variation and we missed it.  So we under-reacted and we paid the price.


This example illustrates another limitation of both RAG charts and SPC charts … they are both insensitive to small shifts and slow drifts when there is lots of random variation around, which there usually is.

So, is there a way to avoid this trap?

Yes. We need to learn to use the more powerful system behaviour charts and the systems engineering techniques and tools that accompany them.
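As a concrete illustration of why a drift-sensitive tool helps (a generic CUSUM sketch with made-up numbers, not necessarily the SBC method referred to above): a CUSUM accumulates small excesses over the expected level, so a persistent drift that never breaches the ±3-sigma limits still rings the alarm, and sooner.

```python
def first_spc_breach(data, mean, sigma):
    """Index of the first point above the upper 3-sigma limit (or None)."""
    upper = mean + 3 * sigma
    return next((i for i, x in enumerate(data) if x > upper), None)

def first_cusum_signal(data, target, k, h):
    """One-sided upper CUSUM: accumulate excesses over (target + k)
    and signal when the running sum exceeds the decision limit h."""
    s = 0.0
    for i, x in enumerate(data):
        s = max(0.0, s + (x - target - k))
        if s > h:
            return i
    return None

# Hypothetical weekly figures: baseline 100, sigma 5, plus a slow
# upward drift of 0.5 per week and a repeating "noise" pattern.
noise = [6, -6, 3, -3, 0]
data = [100 + 0.5 * i + noise[i % 5] for i in range(52)]

spc_week = first_spc_breach(data, mean=100, sigma=5)            # week 20
cusum_week = first_cusum_signal(data, target=100, k=2.5, h=20)  # week 14
```

With these made-up but plausible numbers, the CUSUM signals at week 14, six weeks before a single point crosses the upper process limit at week 20.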


But that aside, the rather good 147-page guide from NHS Improvement is a good first step for those still using two-point comparisons and RAG charts and it can be downloaded: HERE

The 85% Optimum Bed Occupancy Myth

A few years ago I had a rant about the dangers of the widely promoted mantra that 85% is the optimum average measured bed-occupancy target to aim for.

But ranting is annoying, ineffective and often counter-productive.

So, let us revisit this with some calm objectivity and disprove this Myth a step at a time.

The diagram shows the system of interest (SoI) where the blue box represents the beds, the coloured arrows are the patient flows, the white diamond is a decision and the dotted arrow is information about how full the hospital is (i.e. full/not full).

A new emergency arrives (red arrow) and needs to be admitted. If the hospital is not full the patient is moved to an empty bed (orange arrow), the medical magic happens, and some time later the patient is discharged (green arrow).  If there is no bed for the emergency request then we get “spillover” which is the grey arrow, i.e. the patient is diverted elsewhere (n.b. these are critically ill patients … they cannot sit and wait).


This same diagram could represent patients trying to phone their GP practice for an appointment.  The blue box is the telephone exchange and if all the lines are busy then the call is dropped (grey arrow).  If there is a line free then the call is connected (orange arrow) and joins a queue (blue box) to be answered some time later (green arrow).

In 1917, a Danish mathematician/engineer called Agner Krarup Erlang was working for the Copenhagen Telephone Company and was grappling with this very problem: “How many telephone lines do we need to ensure that dropped calls are infrequent AND the switchboard operators are well utilised?”

This is the perennial quality-versus-cost conundrum. The Value-4-Money challenge. Too few lines and the quality of the service falls; too many lines and the cost of the service rises.

Q: Is there a V4M ‘sweet spot’ and if so, how do we find it? Trial and error?

The good news is that Erlang solved the problem … mathematically … and the not-so-good news is that his equations are very scary to a non-mathematician/engineer!  So this solution is not much help to anyone else.


Fortunately, we have a tool for turning scary-equations into easy-2-see-pictures; our trusty Excel spreadsheet. So, here is a picture called a heat-map, and it was generated from one of Erlang’s equations using Excel.

The Erlang equation is lurking in the background, safely out of sight.  It takes two inputs and gives one output.

The first input is the Capacity, which is shown across the top, and it represents the number of beds available each day (known as the space-capacity).

The second input is the Load (or offered load to use the precise term) which is down the left side, and is the number of bed-days required per day (e.g. if we have an average of 10 referrals per day each of whom would require an average 2-day stay then we have an average of 10 x 2 = 20 bed-days of offered load per day).

The output of the Erlang model is the probability that a new arrival finds all the beds are full and the request for a bed fails (i.e. like a dropped telephone call).  This average probability is displayed in the cell.  The colour varies between red (100% failure) and green (0% failure), with an infinite number of shades of red-yellow-green in between.

We can now use our visual heat-map in a number of ways.

a) We can use it to predict the average likelihood of rejection given any combination of bed-capacity and average offered load.

Suppose the average offered load is 20 bed-days per day and we have 20 beds then the heat-map says that we will reject 16% of requests … on average (bottom left cell).  But how can that be? Why do we reject any? We have enough beds on average! It is because of variation. Requests do not arrive in a constant stream equal to the average; there is random variation around that average.  Critically ill patients do not arrive at hospital in a constant stream; so our system needs some resilience and if it does not have it then failures are inevitable and mathematically predictable.

b) We can use it to predict how many beds we need to keep the average rejection rate below an arbitrary but acceptable threshold (i.e. the quality specification).

Suppose the average offered load is 20 bed-days per day, and we want to have a bed available more than 95% of the time (less than 5% failures) then we will need at least 25 beds (bottom right cell).

c) We can use it to estimate the maximum average offered load for a given bed-capacity and required minimum service quality.

Suppose we have 22 beds and we want a quality of >=95% (failure <5%) then we would need to keep the average offered load below 17 bed-days per day (i.e. by modifying the demand and the length of stay because average load = average demand * average length of stay).


There is a further complication we need to be mindful of though … the measured utilisation of the beds is related to the successful admissions (orange arrow in the first diagram) not to the demand (red arrow).  We can illustrate this with a complementary heat map generated in Excel.

For scenario (a) above we have an offered load of 20 bed-days per day, and we have 20 beds but we will reject 16% of requests so the accepted bed load is only 16.8 bed-days per day (i.e. (100%-16%) * 20) which is the reason that the average utilisation is only 16.8/20 = 84% (bottom left cell).

For scenario (b) we have an offered load of 20 bed-days per day, and 25 beds and will only reject 5% of requests but the average measured utilisation is not 95%, it is only 76% because we have more beds (the accepted bed load is 95% * 20 = 19 bed-days per day and 19/25 = 76%).

For scenario (c) the average measured utilisation would be about 74%.
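The numbers in these three scenarios can be reproduced without Excel. For a loss system like this (rejected arrivals are diverted, not queued) the heat-map cell is the Erlang B blocking probability, which has a well-known numerically stable recursion. A sketch, not the author's spreadsheet:

```python
def erlang_b(load, capacity):
    """Probability that an arrival finds all beds busy, given an
    offered load in bed-days per day and a bed capacity.
    Standard recursion: B(E,m) = E*B(E,m-1) / (m + E*B(E,m-1))."""
    b = 1.0
    for m in range(1, capacity + 1):
        b = load * b / (m + load * b)
    return b

def utilisation(load, capacity):
    """Measured bed occupancy: only the accepted load occupies beds."""
    return load * (1 - erlang_b(load, capacity)) / capacity

# Scenario (a): load 20, 20 beds -> ~16% rejected, ~84% occupancy
# Scenario (b): load 20, 25 beds -> ~5% rejected, ~76% occupancy
# Scenario (c): load 17, 22 beds -> ~5% rejected, ~74% occupancy
for load, beds in [(20, 20), (20, 25), (17, 22)]:
    print(f"load={load}, beds={beds}: reject={erlang_b(load, beds):.1%}, "
          f"occupancy={utilisation(load, beds):.1%}")
```

Note that Erlang B assumes rejected requests are lost (the grey spillover arrow), which is exactly the bed-request scenario described in the first diagram.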


So, now we see the problem more clearly … if we blindly aim for an average, measured, bed-utilisation of 85% with the untested belief that it is always the optimum … this heat-map says it is impossible to achieve and at the same time offer an acceptable quality (>95%).

We are trading safety for money and that is not an acceptable solution in a health care system.


So where did this “magic” value of 85% come from?

From the same heat-map perhaps?

If we search for the combination of >95% success (<5% fail) and 85% average bed-utilisation then we find it at the point where the offered load reaches 50 bed-days per day and we have a bed-capacity of 56 beds.

And if we search for the combination of >99% success (<1% fail) and 85% average utilisation then we find it with an average offered load of just over 100 bed-days per day and a bed-capacity around 130 beds.
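The first of those "sweet spot" combinations can be checked with the same standard Erlang B recursion (a sketch; repeated here so it stands alone):

```python
def erlang_b(load, capacity):
    """Erlang B loss probability (standard recursion)."""
    b = 1.0
    for m in range(1, capacity + 1):
        b = load * b / (m + load * b)
    return b

# The claimed sweet spot: 50 bed-days/day of offered load and 56 beds
load, beds = 50, 56
reject = erlang_b(load, beds)
occupancy = load * (1 - reject) / beds

# At this scale, <5% rejection and ~85% measured occupancy coincide;
# at the 20-bed scale of the earlier scenarios they cannot.
```

So the "85% optimum" only emerges at one specific scale: it is a property of one point on the heat-map, not a universal law.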

H’mm.  “Houston, we have a problem“.


So, even in this simplified scenario the hypothesis that an 85% average bed-occupancy is a global optimum is disproved.

The reality is that the average bed-occupancy associated with delivering the required quality for a given offered load with a specific number of beds is almost never 85%.  It can range anywhere between 50% and 100%.  Erlang knew that in 1917.


So, if a one-size-fits-all optimum measured average bed-occupancy assumption is not valid then how might we work out how many beds we need and predict what the expected average occupancy will be?

We would design the fit-4-purpose solution for each specific context …
… and to do that we need to learn the skills of complex adaptive system design …
… and that is part of the health care systems engineering (HCSE) skill-set.

 

Cognitive Traps for Heffalumps

One of the really, really cool things about the 1.3 kg of “ChimpWare” between our ears is the way it learns.

We have evolved the ability to predict the likely near-future based on just a small number of past experiences.

And we do that by creating stored mental models.

Not even the most powerful computers can do it as well as we do – and we do it without thinking. Literally. It is an unconscious process.

This ability to pro-gnose (=before-know) gave our ancestors a major survival advantage when we were wandering about on the savanna over 10 million years ago.  And we have used this amazing ability to build societies, mega-cities and spaceships.


But this capability is not perfect.  It has a flaw.  Our “ChimpOS” does not store a picture of reality like a digital camera; it stores a patchy and distorted perception of reality, and then fills in the gaps with guesses (i.e. gaffes).  And we do not notice – consciously.

The cognitive trap is set and sits waiting to be sprung.  And to trip us up.


Here is an example:

“Improvement implies change”

Yes. That is a valid statement because we can show that whenever improvement has been the effect, then some time before that a change happened.  And we can show that when there are no changes, the system continues to behave as it always has.  Status quo.

The cognitive trap is that our ChimpOS is very good at remembering temporal associations – for example an association between “improvement” and “change” because we remember in the present.  So, if two concepts are presented at the same time, and we spice-the-pie with a bit of strong emotion, then we are more likely to associate them. Which is OK.

The problem comes when we play back the memory … it can come back as …

“change implies improvement” which is not valid.  And we do not notice.

To prove it is not valid we just need to find one example where a change led to a deterioration; an unintended negative consequence, a surprising, confusing and disappointing failure to achieve our intended improvement.

An embarrassing gap between our intent and our impact.

And finding that evidence is not hard.  Failures and disappointments in the world of improvement are all too common.


And then we can fall into the same cognitive trap because we generalise from a single, bad experience and the lesson our ChimpOS stores for future reference is “change is bad”.

And forever afterwards we feel anxious whenever the idea of change is suggested.

It is a very effective survival tactic – for a hominid living on the African savanna 10 million years ago, and at risk of falling prey to sharp-fanged, hungry predators.  It is a less useful tactic in the modern world where the risk of being eaten-for-lunch is minimal, and where the pace of change is accelerating.  We must learn to innovate and improve to survive in the social jungle … and we are not well equipped!


Here is another common cognitive trap:

Excellence implies no failures.

Yes. If we are delivering a consistently excellent service then the absence of failures will be a noticeable feature.

No failures implies excellence.

This is not a valid inference.  If quality-of-service is measured on a continuum from Excrement-to-Excellent, then we can be delivering a consistently mediocre service, one that is barely adequate, and also have no failures.


The design flaw here is that our ChimpWare/ChimpOS memory system is lossy.

We do not remember all the information required to reconstruct an accurate memory of reality – because there is too much information.  So we distort, we delete and we generalise.  And we do that because when we evolved it was a good enough solution, and it enabled us to survive as a species, so the ChimpWare/ChimpOS genes were passed on.

We cannot reverse millions of years of evolution.  We cannot get a wetware or a software upgrade.  We need to learn to manage with the limitations of what we have between our ears.

And to avoid the cognitive traps we need to practice the discipline of bringing our unconscious assumptions up to conscious awareness … and we do that by asking carefully framed questions.

Here is another example:

A high-efficiency design implies high-utilisation of resources.

Yes, that is valid. Idle resources mean wasted resources, which means lower efficiency.

Q1: Is the converse also valid?
Q2: Is there any evidence that disproves the converse is valid?

If high-utilisation does not imply high-efficiency, what are the implications of falling into this cognitive trap?  What is the value of measuring utilisation? Does it have a value?

These are useful questions.

The Strangeness of LoS

It had been some time since Bob and Leslie had chatted so an email out of the blue was a welcome distraction from a complex data analysis task.

<Bob> Hi Leslie, great to hear from you. I was beginning to think you had lost interest in health care improvement-by-design.

<Leslie> Hi Bob, not at all.  Rather the opposite.  I’ve been very busy using everything that I’ve learned so far.  Its applications are endless, but I have hit a problem that I have been unable to solve, and it is driving me nuts!

<Bob> OK. That sounds encouraging and interesting.  Would you be able to outline this thorny problem and I will help if I can.

<Leslie> Thanks Bob.  It relates to a big issue that my organisation is stuck with – managing urgent admissions.  The problem is that very often there is no bed available, but there is no predictability to that.  It feels like a lottery; a quality and safety lottery.  The clinicians are clamoring for “more beds” but the commissioners are saying “there is no more money“.  So the focus has turned to reducing length of stay.

<Bob> OK.  A focus on length of stay sounds reasonable.  Reducing that can free up enough beds to provide the necessary space-capacity resilience to dramatically improve the service quality.  So long as you don’t then close all the “empty” beds to save money, or fall into the trap of believing that 85% average bed occupancy is the “optimum”.

<Leslie> Yes, I know.  We have explored all of these topics before.  That is not the problem.

<Bob> OK. What is the problem?

<Leslie> The problem is demonstrating objectively that the length-of-stay reduction experiments are having a beneficial impact.  The data seems to say that they are, and the senior managers are trumpeting the success, but the people on the ground say they are not. We have hit a stalemate.


<Bob> Ah ha!  That old chestnut.  So, can I first ask what happens to the patients who cannot get a bed urgently?

<Leslie> Good question.  We have mapped and measured that.  What happens is the most urgent admission failures spill over to commercial service providers, who charge a fee-per-case and we have no choice but to pay it.  The Director of Finance is going mental!  The less urgent admission failures just wait on queue-in-the-community until a bed becomes available.  They are the ones who are complaining the most, so the Director of Governance is also going mental.  The Director of Operations is caught in the cross-fire and the Chief Executive and Chair are doing their best to calm frayed tempers and to referee the increasingly toxic arguments.

<Bob> OK.  I can see why a “Reduce Length of Stay Initiative” would tick everyone’s Nice If box.  So, the data analysts are saying “the length of stay has come down since the Initiative was launched” but the teams on the ground are saying “it feels the same to us … the beds are still full and we still cannot admit patients“.

<Leslie> Yes, that is exactly it.  And everyone has come to the conclusion that demand must have increased so it is pointless to attempt to reduce length of stay because when we do that it just sucks in more work.  They are feeling increasingly helpless and hopeless.

<Bob> OK.  Well, the “chronic backlog of unmet need” issue is certainly possible, but your data will show if admissions have gone up.

<Leslie> I know, and as far as I can see they have not.

<Bob> OK.  So I’m guessing that the next explanation is that “the data is wonky“.

<Leslie> Yup.  Spot on.  So, to counter that the Information Department has embarked on a massive push on data collection and quality control and they are adamant that the data is complete and clean.

<Bob> OK.  So what is your diagnosis?

<Leslie> I don’t have one, that’s why I emailed you.  I’m stuck.


<Bob> OK.  We need a diagnosis, and that means we need to take a “history” and “examine” the process.  Can you tell me the outline of the RLoS Initiative?

<Leslie> We knew that we would need a baseline to measure from so we got the historical admission and discharge data and plotted a Diagnostic Vitals Chart®.  I have learned something from my HCSE training!  Then we planned the implementation of a visual feedback tool that would show ward staff which patients were delayed so that they could focus on “unblocking” the bottlenecks.  We then planned to measure the impact of the intervention for three months, and then we planned to compare the average length of stay before and after the RLoS Intervention with a big enough data set to give us an accurate estimate of the averages.  The data showed a very obvious improvement, a highly statistically significant one.

<Bob> OK.  It sounds like you have avoided the usual trap of just relying on subjective feedback, and now have a different problem because your objective and subjective feedback are in disagreement.

<Leslie> Yes.  And I have to say, getting stuck like this has rather dented my confidence.

<Bob> Fear not Leslie.  I said this is an “old chestnut” and I can say with 100% confidence that you already have what you need in your T4 kit bag.

<Leslie> Tee-Four?

<Bob> Sorry, a new abbreviation. It stands for “theory, techniques, tools and training“.

<Leslie> Phew!  That is very reassuring to hear, but it does not tell me what to do next.

<Bob> You are an engineer now Leslie, so you need to don the hard-hat of Improvement-by-Design.  Start with your Needs Analysis.


<Leslie> OK.  I need a trustworthy tool that will tell me if the planned intervention has had a significant impact on length of stay, for better or worse or not at all.  And I need it to tell me that quickly so I can decide what to do next.

<Bob> Good.  Now list all the things that you currently have that you feel you can trust.

<Leslie> I do actually trust that the Information team collect, store, verify and clean the raw data – they are really passionate about it.  And I do trust that the front line teams are giving accurate subjective feedback – I work with them and they are just as passionate.  And I do trust the systems engineering “T4” kit bag – it has proven itself again-and-again.

<Bob> Good.  That means you have everything you need to solve this, and it sounds like the data analysis part of the process is a good place to focus.

<Leslie> That was my conclusion too.  And I have looked at the process, and I can’t see a flaw. It is driving me nuts!

<Bob> OK.  Let us take a different tack.  Have you thought about designing the tool you need from scratch?

<Leslie> No. I’ve been using the ones I already have and assuming that I must be using them incorrectly, but I can’t see where I’m going wrong.

<Bob> Ah!  Then, I think it would be a good idea to run each of your tools through a verification test and check that they are fit-4-purpose in this specific context.

<Leslie> OK. That sounds like something I haven’t covered before.

<Bob> I know.  Designing verification test-rigs is part of the Level 2 training.  I think you have demonstrated that you are ready to take the next step up the HCSE learning curve.

<Leslie> Do you mean I can learn how to design and build my own tools?  Special tools for specific tasks?

<Bob> Yup.  All the techniques and tools that you are using now had to be specified, designed, built, verified, and validated. That is why you can trust them to be fit-4-purpose.

<Leslie> Wooohooo! I knew it was a good idea to give you a call.  Let’s get started.


[Postscript] And Leslie, together with the other stakeholders, went on to design the tool that they needed and to use the available data to dissolve the stalemate.  And once everyone was on the same page again they were able to work collaboratively to resolve the flow problems, and to improve the safety, flow, quality and affordability of their service.  Oh, and to know for sure that they had improved it.

The Q-Community

At some point in the life-cycle of an innovation, there is the possibility of crossing an invisible line called the tipping point.

This happens when enough people have experienced the benefits of the innovation and believe that the innovation is the future.  These lone innovators start to connect and build a new community.

It is an emergent behaviour of a complex adaptive system.


This week I experienced what could be a tipping point.

I attended the Q-Community launch event for the West Midlands that was held at the ICC in Birmingham … and it was excellent.

The invited speakers were both engaging and inspiring – boosting the emotional charge in the old engagement batteries; which have become rather depleted of late by the incessant wailing from the all-too-numerous peddlers of doom-and-gloom.

There was an opportunity to re-connect with fellow radicals who, over nearly two decades, have had the persistent temerity to suggest that improvement is both necessary and possible, who have invested in learning how to do it, and who have disproved the impossibility hypothesis.

There were new connections with like-minded people who want to both share what they know about the science of improvement and to learn what they do not.

And there were hand-outs, side-shows and break-outs.  Something for everyone.


The voice of the Q-Community will grow louder – and for it to be listened to it will need to be patiently and persistently broadcasting the news stories of what has been achieved, and how it was achieved, and who has demonstrated they can walk-the-talk.  News stories like this one:

Improving safety, flow, quality and affordability of unscheduled care of the elderly.


I sincerely hope that in the future, with the benefit of hindsight, we in the West Midlands will say – the 19th July 2017 was our Q-Community tipping point.

And I pledge to do whatever I can to help make that happen.

Catch-22

There is a Catch-22 in health care improvement and it goes a bit like this:

Most people are too busy fire-fighting the chronic chaos to have time to learn how to prevent the chaos, so they are stuck.

There is a deeper Catch-22 as well though:

The first step in preventing chaos is to diagnose the root cause and doing that requires experience, and we don’t have that experience available, and we are too busy fire-fighting to develop it.


Health care is improvement science in action – improving the physical and psychological health of those who seek our help. Patients.

And we have a tried-and-tested process for doing it.

First we study the problem to arrive at a diagnosis; then we design alternative plans to achieve our intended outcome and we decide which plan to go with; and then we deliver the plan.

Study ==> Plan ==> Do.

Diagnose  ==> Design & Decide ==> Deliver.

But here is the catch. The most difficult step is the first one, diagnosis, because there are many different illnesses and they often present with very similar patterns of symptoms and signs. It is not easy.

And if we make a poor diagnosis then all the action plans that follow will be flawed and may lead to disappointment and even harm.

Complaints and litigation follow in the wake of poor diagnostic ability.

So what do we do?

We defer reassuring our patients, we play safe, we request more tests and we refer for second opinions from specialists. Just to be on the safe side.

These understandable tactics take time, cost money and are not 100% reliable.  Diagnostic tests are usually precisely focused to answer specific questions but can have false positive and false negative results.

To request a broad batch of tests in the hope that the answer will appear like a rabbit out of a magician’s hat is … mediocre medicine.


This diagnostic dilemma arises everywhere: in primary care and in secondary care, and in non-urgent and urgent pathways.

And it generates extra demand, more work, bigger queues, longer delays, growing chaos, and mounting frustration, disappointment, anxiety and cost.

The solution is obvious but seemingly impossible: to ensure the most experienced diagnostician is available to be consulted at the start of the process.

But that must be impossible because if the consultants were seeing the patients first, what would everyone else do?  How would they learn to become more expert diagnosticians? And would we have enough consultants?


When I was a junior surgeon I had the great privilege to have the opportunity to learn from wise and experienced senior surgeons, who had seen it, and done it and could teach it.

Mike Thompson is one of these.  He is a general surgeon with a special interest in the diagnosis and treatment of bowel cancer.  And he has a particular passion for improving the speed and accuracy of the diagnosis step; because it can be a life-saver.

Mike is also a disruptive innovator and an early pioneer of the use of endoscopy in the outpatient clinic.  It is called point-of-care testing nowadays, but in the 1980’s it was a radically innovative thing to do.

He also pioneered collecting the symptoms and signs from every patient he saw, in a standard way using a multi-part printed proforma. And he invested many hours entering the raw data into a computer database.

He also did something that even now most clinicians do not do; when he knew the outcome for each patient he entered that into his database too – so that he could link first presentation with final diagnosis.


Mike knew that I had an interest in computer-aided diagnosis, which was a hot topic in the early 1980’s, and also that I did not warm to the Bayesian statistical models that underpinned it.  To me they made too many simplifying assumptions.

The human body is a complex adaptive system. It defies simplification.

Mike and I took a different approach.  We  just counted how many of each diagnostic group were associated with each pattern of presenting symptoms and signs.

The problem was that even his database of 8000+ patients was not big enough! This is why others had resorted to using statistical simplifications.

So we used the approach that an experienced diagnostician uses.  We used the information we had already gleaned from a patient to decide which question to ask next, and then the next one and so on.


And we always have three pieces of information at the start – the patient’s age, gender and presenting symptom.

What surprised and delighted us was how easy it was to use the database to help us do this for the new patients presenting to his clinic; the ones who were worried that they might have bowel cancer.

And what surprised us even more was how few questions we needed to ask to arrive at a statistically robust decision to reassure-or-refer for further tests.

So one weekend, I wrote a little computer program that used the data from Mike’s database and our simple bean-counting algorithm to automate this process.  And the results were amazing.  Suddenly we had a simple and reliable way of using past experience to support our present decisions – without any statistical smoke-and-mirror simplifications getting in the way.
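The bean-counting idea is simple enough to sketch in a few lines of code.  Here is a minimal illustration in Python – the records are invented for demonstration, and this is my sketch of the principle, not the actual Hot Learning System:

```python
from collections import Counter, defaultdict

# Invented example records: (age_band, gender, presenting_symptom, final_diagnosis).
# These are NOT real patient data.
records = [
    ("70-79", "M", "rectal bleeding", "cancer"),
    ("70-79", "M", "rectal bleeding", "haemorrhoids"),
    ("70-79", "M", "rectal bleeding", "haemorrhoids"),
    ("30-39", "F", "rectal bleeding", "haemorrhoids"),
    ("30-39", "F", "rectal bleeding", "haemorrhoids"),
]

# Build the "digital memory": presentation pattern -> tally of final outcomes.
memory = defaultdict(Counter)
for age, gender, symptom, diagnosis in records:
    memory[(age, gender, symptom)][diagnosis] += 1

def outcome_proportions(age, gender, symptom):
    """Return the observed proportion of each final diagnosis
    for past patients who presented with this pattern."""
    counts = memory[(age, gender, symptom)]
    total = sum(counts.values())
    if total == 0:
        return {}  # no past experience of this pattern
    return {dx: n / total for dx, n in counts.items()}

print(outcome_proportions("70-79", "M", "rectal bleeding"))
```

The program does not make the diagnosis; it just replays past experience for the pattern in front of you, which is exactly the point made below.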

The computer program did not make the diagnosis, we were still responsible for that; all it did was provide us with reliable access to a clear and comprehensive digital memory of past experience.


What it then enabled us to do was to learn more quickly by exploring the complex patterns of symptoms, signs and outcomes and to develop our own diagnostic “rules of thumb”.

We learned in hours what it would take decades of experience to uncover. This was hot stuff, and when I presented our findings at the Royal Society of Medicine the audience was also surprised and delighted (and it was awarded the John of Arderne Medal).

So, we called it the Hot Learning System, and years later I updated it with Mike’s much bigger database (29,000+ records) and created a basic web-based version of the first step – age, gender and presenting symptom.  You can have a play if you like … just click HERE.


So what are the lessons here?

  1. We need to have the most experienced diagnosticians at the start of the improvement process.
  2. The first diagnostic assessment can be very quick so long as we have developed evidence-based heuristics.
  3. We can accelerate the training in diagnostic skills using simple information technology and basic analysis techniques.

And exactly the same is true of health care system improvement.

We need to have an experienced health care improvement practitioner involved at the start, because if we skip this critical study step and move to planning without a correct diagnosis, then we will make errors, take poor decisions, and pursue counter-productive actions.  And then generate more work, more queues, more delays, more chaos, more distress and increased costs.

Exactly the opposite of what we want.

Q1: So, how do we develop experienced improvement practitioners more quickly?

Q2: Is there a hot learning system for improvement science?

A: Yes, there is. It can be found here.

The Storyboard

This week about thirty managers and clinicians in South Wales conducted two experiments to test the design of the Flow Design Practical Skills One Day Workshop.

Their collective challenge was to diagnose and treat a “chronically sick” clinic and the majority had no prior exposure to health care systems engineering (HCSE) theory, techniques, tools or training.

Two of the group, Chris and Jat, had been delegates at a previous ODWS, and had then completed their Level-1 HCSE training and real-world projects.

They had seen it and done it, so this experiment was to test if they could now teach it.

Could they replicate the “OMG effect” that they had experienced and that fired up their passion for learning and using the science of improvement?

Continue reading “The Storyboard”

Diagnose-Design-Deliver

A story was shared this week.

A story of hope for the hard-pressed NHS, its patients, its staff and its managers and its leaders.

A story that says “We can learn how to fix the NHS ourselves“.

And the story comes with evidence; hard, objective, scientific, statistically significant evidence.


The story starts almost exactly three years ago when a Clinical Commissioning Group (CCG) in England made a bold strategic decision to invest in improvement, or as they termed it “Achieving Clinical Excellence” (ACE).

They invited proposals from their local practices with the “carrot” of enough funding to allow GPs to carve-out protected time to do the work.  And a handful of proposals were selected and financially supported.

This is the story of one of those proposals which came from three practices in Sutton who chose to work together on a common problem – the unplanned hospital admissions in their over 70’s.

Their objective was clear and measurable: “To reduce the cost of unplanned admissions in the 70+ age group by working with the hospital to reduce length of stay.”

Did they achieve their objective?

Yes, they did.  But there is more to this story than that.  Much more.


One innovative step they took was to invest in learning how to diagnose why the current ‘system’ was costing what it was; then learning how to design an improvement; and then learning how to deliver that improvement.

They invested in developing their own improvement science skills first.

They did not assume they already knew how to do this and they engaged an experienced health care systems engineer (HCSE) to show them how to do it (i.e. not to do it for them).

Another innovative step was to create a blog to make it easier to share what they were learning with their colleagues; and to invite feedback and suggestions; and to provide a journal that captured the story as it unfolded.

And they measured stuff before they made any changes and afterwards so they could measure the impact, and so that they could assess the evidence scientifically.

And that was actually quite easy because the CCG was already measuring what they needed to know: admissions, length of stay, cost, and outcomes.

All they needed to learn was how to present and interpret that data in a meaningful way.  And as part of their IS training,  they learned how to use system behaviour charts, or SBCs.


By Jan 2015 they had learned enough of the HCSE techniques and tools to establish the diagnosis and start making changes to the parts of the system that they could influence.


Two years later they subjected their before-and-after data to robust statistical analysis and they had a surprise. A big one!

Reducing hospital mortality was not a stated objective of their ACE project, and they only checked the mortality data to be sure that it had not changed.

But it had, and the “p=0.014” part of the statement above means that the probability that this 20.0% reduction in hospital mortality was due to random chance is just 1.4%.  [This is well below the 5% threshold that we usually accept as “statistically significant” in a clinical trial.]

But …

This was not a randomised controlled trial.  This was an intervention in a complicated, ever-changing system; so they needed to check that the hospital mortality for comparable patients who were not their patients had not changed as well.

And the statistical analysis of the hospital mortality for the ‘other’ practices for the same patient group, and the same period of time confirmed that there had been no statistically significant change in their hospital mortality.
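For readers unfamiliar with significance testing, the logic behind a p-value like this can be illustrated with a short simulation.  The counts below are invented, purely for illustration – they are not the Sutton figures, and this is a generic sketch, not the analysis that was actually performed:

```python
import random

# Invented before-and-after mortality counts (NOT the real data).
before_deaths, before_n = 100, 1000   # 10% mortality before
after_deaths, after_n = 80, 1000      # 8% after: a 20% relative reduction

observed_drop = before_deaths / before_n - after_deaths / after_n

# Null hypothesis: both periods share one underlying mortality rate.
pooled = (before_deaths + after_deaths) / (before_n + after_n)

random.seed(42)
trials = 2000
as_extreme = 0
for _ in range(trials):
    # Simulate both periods under the null, then ask: how often does
    # chance alone produce a drop at least as big as the one observed?
    sim_before = sum(random.random() < pooled for _ in range(before_n))
    sim_after = sum(random.random() < pooled for _ in range(after_n))
    if sim_before / before_n - sim_after / after_n >= observed_drop:
        as_extreme += 1

p_value = as_extreme / trials   # the chance of seeing this drop by luck alone
print(round(p_value, 3))
```

A small p-value means chance alone rarely produces the observed effect – which is why the check against the ‘other’ practices (the comparison group) matters so much.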

So, it appears that what the Sutton ACE Team did to reduce length of stay (and cost) had also, unintentionally, reduced hospital mortality. A lot!


And this unexpected outcome raises a whole raft of questions …


If you would like to read their full story then you can do so … here.

It is a story of hunger for improvement, of humility to learn, of hard work and of hope for the future.

Hugh, Louise and Bob

Bob Jekyll was already sitting at a table, sipping a pint of Black Sheep and nibbling on a bowl of peanuts when Hugh and Louise arrived.

<Hugh> Hello, are you Bob?

<Bob> Yes, indeed! You must be Hugh and Louise. Can I get you a thirst quencher?

<Louise> Lime and soda for me please.

<Hugh> I’ll have the same as you, a Black Sheep.

<Bob> On the way.

<Hugh> Hello Louise, I’m Hugh Lewis.  I am the ops manager for acute medicine at St. Elsewhere’s Hospital. It is good to meet you at last. I have seen your name on emails and performance reports.

<Louise> Good to meet you too Hugh. I am senior data analyst for St. Elsewhere’s and I think we may have met before, but I’m not sure when.  Do you know what this is about? Your invitation was a bit mysterious.

<Hugh> Yes. Sorry about that. I was chatting to a friend of mine at the golf club last week, Dr Bill Hyde, who is one of our local GPs.  As you might expect, we got to talking about the chronic pressure we are all under in both primary and secondary care.  He said he had recently crossed paths with an old chum of his from university days, with whom he’d had a very interesting conversation in this very pub, and he recommended I email him. So I did. And that led to a phone conversation with Bob Jekyll. I have to say he asked some very interesting questions that left me feeling a mixture of curiosity and discomfort. After we talked, Bob suggested that we meet for a longer chat and that I invite my senior data analyst along. So here we are.

<Louise> I have to say my curiosity was pricked by your invitation, specifically the phrase ‘system behaviour charts’. That is a new one on me and I have been working in the NHS for some time now. It is too many years to mention since I started as junior data analyst, fresh from university!

<Hugh> That is the term Bob used, and I confess it was new to me too.

<Bob> Here we are, Black Sheep, lime soda and more peanuts.  Thank you both for coming, so shall we talk about the niggle that Hugh raised when we spoke on the phone?

<Hugh> Ah! Louise, please accept my apologies in advance. I think Bob might be referring to when I said that “90% of the performance reports don’t make any sense to me“.

<Louise> There is no need to apologise Hugh. I am actually reassured that you said that. They don’t make any sense to me either! We only produce them that way because that is what we are asked for.  My original degree was in geography, and I discovered that I loved data analysis! My grandfather was a doctor, so I guess that’s how I ended up doing health care data analysis. But I must confess, some days I do not feel like I am adding much value.

<Hugh> Really? I believe we are in heated agreement! Some days I feel the same way.  Is that why you invited us both Bob?

<Bob> Yes.  It was some of the things that Hugh said when we talked on the phone.  They rang some warning bells for me because, in my line of work, I have seen many people fall into a whole minefield of data analysis traps that leave them feeling confused and frustrated.

<Louise> What exactly is your line of work, Bob?

<Bob> I am a systems engineer.  I design, build, verify, integrate, implement and validate systems. Fit-for-purpose systems.

<Louise> In health care?

<Bob> Not until last week when I bumped into Bill Hyde, my old chum from university.  But so far the health care system looks just like all the other ones I have worked in, so I suspect some of the lessons from other systems are transferable.

<Hugh> That sounds interesting. Can you give us an example?

<Bob> OK.  Hugh, in our first conversation, you often used the words “demand”  and “capacity”. What do you mean by those terms?

<Hugh> Well, demand is what comes through the door, the flow of requests, the workload we are expected to manage.  And capacity is the resources that we have to deliver the work and to meet our performance targets.  Capacity is the staff, the skills, the equipment, the chairs, and the beds. The stuff that costs money to provide.  As a manager, I am required to stay in-budget and that consumes a big part of my day!

<Bob> OK. Speaking as an engineer I would like to know the units of measurement of “demand” and “capacity”?

<Hugh> Oh! Um. Let me think. Er. I have never been asked that question before. Help me out here Louise.  I told you Bob asks tricky questions!

<Louise> I think I see what Bob is getting at.  We use these terms frequently but rather loosely. On reflection they are not precisely defined, especially “capacity”. There are different sorts of capacity, all of which are measured in different ways and so have different units. No wonder we spend so much time discussing and debating whether we have enough capacity to meet the demand.  We are probably all assuming different things.  Beds cannot be equated to staff, but too often we just lump everything together when we talk about “capacity”.  So what we are really asking is “do we have enough cash in the budget to pay for the stuff we think we need?”.  And if we are failing one target or another we just assume that the answer is “No” and we shout for “more cash”.

<Bob> Exactly my point. And this was one of the warning bells.  Lack of clarity on these fundamental definitions opens up a minefield of other traps like the “Flaw of Averages” and “Time equals Money“.  And if we are making those errors then they will, unwittingly, become incorporated into our data analysis.

<Louise> But we use averages all the time! What is wrong with an average?

<Bob> I can sense you are feeling a bit defensive, Louise.  There is no need to.  An average is perfectly OK and is a very useful tool.  The “flaw” is when it is used inappropriately.  Have you heard of Little’s Law?

<Louise> No. What’s that?

<Bob> It is the mathematically proven relationship between flow, work-in-progress and lead time.  It is a fundamental law of flow physics and it uses averages. So averages are OK.
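[An aside for the quantitatively curious: Little’s Law is compact enough to show in a couple of lines.  The ward numbers below are invented, purely to illustrate the relationship.]

```python
# Little's Law relates three long-run averages for any stable flow system:
#   average work-in-progress = average flow rate x average lead time
# A hypothetical ward, with invented numbers:
admissions_per_day = 10        # average flow in (and, in steady state, out)
average_length_of_stay = 3.2   # average lead time, in days

average_occupied_beds = admissions_per_day * average_length_of_stay
print(average_occupied_beds)   # → 32.0
```

Note that the law holds for the averages of a stable system without any assumptions about the shape of the variation – which is why Bob says averages themselves are OK.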

<Hugh> So what is the “Flaw of Averages”?

<Bob> It is easier to demonstrate it than to describe it.  Let us play a game.  I have some dice and we have a big bowl of peanuts.  Let us simulate a simple two step process.  Hugh you are Step One and Louise you are Step Two.  I will be the source of demand.

I will throw a dice and count that many peanuts out of the bowl and pass them to Hugh.  Hugh, you then throw the dice and move that many peanuts from your heap to Louise, then Louise throws the dice and moves that many from her pile to the final heap which we will call activity.

<Hugh> Sounds easy enough.  If we all use the same dice then the average flow through each step will be the same so after say ten rounds we should have, um …

<Louise> … thirty five peanuts in the activity heap.  On average.

<Bob> OK.  That’s the theory, let’s see what happens in reality.  And no eating the nuts-in-progress please.


They play the game and after a few minutes they have completed the ten rounds.


<Hugh> That’s odd.  There are only 30 nuts in the activity heap and we expected 35.  Nobody nibbled any nuts, so it’s just chance I suppose.  Let’s play again. It should average out.

…..  …..

<Louise> Thirty four this time, which is better, but still below the predicted average.  That could still be a chance effect though.  Let us run the ‘nutty’ game a few more times.

….. …..

<Hugh> We have run the same game six times with the same nuts and the same dice and we delivered activities of 30, 34, 30, 24, 23 and 31 and there are usually nuts stuck in the process at the end of each game, so it is not due to a lack of demand.  We are consistently under-performing compared with our theoretical prediction.  That is weird.  My head says we were just unlucky but I have a niggling doubt that there is more to it.

<Louise> Is this the Flaw of Averages?

<Bob> Yes, it is one of them. If we set our average future flow-capacity to the average historical demand and there is any variation anywhere in the process then we will see this effect.
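[The peanut game is easy to reproduce in code.  The sketch below follows the rules described in the dialogue – demand feeds Hugh, Hugh feeds Louise, Louise feeds the activity heap, and each step can only pass what it actually holds.  The simulation itself is mine, not part of the original game.]

```python
import random

def play_game(rounds=10, rng=random):
    """Simulate the three-step peanut game: demand -> Hugh -> Louise -> activity."""
    hugh = louise = activity = 0
    for _ in range(rounds):
        hugh += rng.randint(1, 6)              # Bob draws from the (unlimited) bowl
        passed = min(rng.randint(1, 6), hugh)  # Hugh can only pass what he holds
        hugh -= passed
        louise += passed
        passed = min(rng.randint(1, 6), louise)  # Louise likewise
        louise -= passed
        activity += passed
    return activity

random.seed(1)
results = [play_game() for _ in range(10000)]
print(sum(results) / len(results))  # consistently below the naive prediction of 35
```

Run enough games and the shortfall stops looking like bad luck: whenever a step rolls high but holds few nuts, the surplus roll is wasted, and those wasted rolls never come back.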

<Hugh> H’mmm.  But we do this all the time because we assume that the variation will average out over time. Intuitively it feels like it must.  What would happen if we kept going for more cycles?

<Bob> That is a very good question.  And your intuition is correct.  It does average out eventually but there is a catch.

<Hugh> What is the catch?

<Bob> The number of peanuts in the process, and the time it takes for one peanut to get through, are both very variable.

<Louise> Is there any pattern to the variation? Is it predictable?

<Bob> Another excellent question.  Yes, there is a pattern.  It is called “chaos”.  Predictable chaos if you like.

<Hugh> So is that the reason you said on the phone that we should present our metrics as time-series charts?

<Bob> Yes, one of them.  The appearance of chaotic system behaviour is very characteristic on a time-series chart.

<Louise> And if we see the chaos pattern on our charts then we could conclude that we have made the Flaw of Averages error?

<Bob> That would be a reasonable hypothesis.

<Hugh> I think I understand the reason you invited us to a face-to-face demonstration.  It would not have worked if you had just described it.  You have to experience it because it feels so counter-intuitive.  And this is starting to feel horribly familiar; perpetual chaos about sums up my working week!

<Louise> You also mentioned something you referred to as the “time equals money” trap.  Is that somehow linked to this?

<Bob> Yes.  We often equate time and money but they do not behave the same way.  If I have five pounds today and I only spend four pounds, then I can save the remaining pound for tomorrow and spend it then – so the Law of Averages works.  But if I have five minutes today and I only use four minutes, the other minute cannot be saved and used tomorrow; it is lost forever.  That is why the Law of Averages does not work for time.
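[Bob’s point about perishable time can be demonstrated with a short simulation.  The numbers below are invented; the point is that even when total capacity exactly matches total average demand, a queue still forms and wanders, because a quiet day’s idle capacity cannot be banked for a busy day.]

```python
import random

random.seed(7)
DAYS = 5000
CAPACITY = 5                       # appointments we can deliver each day
demand = [random.randint(3, 7) for _ in range(DAYS)]   # averages 5 per day

total_demand = sum(demand)
total_capacity = CAPACITY * DAYS   # the "Law of Averages" says these match

# But unused time perishes: unmet demand carries forward as a queue,
# while a quiet day's spare capacity is simply lost.
queue = max_queue = 0
for d in demand:
    queue = max(0, queue + d - CAPACITY)   # today's waiting list
    max_queue = max(max_queue, queue)

print(total_demand - total_capacity, queue, max_queue)
```

The totals are almost identical, yet the waiting list repeatedly swells to dozens of patients – queues, delays and chaos from a system that is, on average, perfectly balanced.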

<Hugh> But that means if we set our budgets based on the average demand and the cost of people’s time then not only will we have queues, delays and chaos, we will also consistently overspend the budget too.  This is sounding more and more familiar by the minute!  This is nuts, if you will excuse the pun.

<Louise> So what is the solution?  I hope you would not have invited us here if there was no solution.

<Bob> Part of the solution is to develop our knowledge of system behaviour and learn how to present it in a visual format.  From that we gain a deeper understanding of what the system behaviour charts are telling us, and with that understanding we can make wiser decisions that lead to effective actions – actions which eliminate the queues, delays, chaos and cost-pressures.

<Hugh> This is possible?

<Bob> Yes. It is called systems engineering. That’s what I do.

<Louise> When do we start?

<Bob> We have started.

Dr Hyde and Mr Jekyll

Dr Bill Hyde was already at the bar when Bob Jekyll arrived.

Bill and Bob had first met at university and had become firm friends, but their careers had diverged and it was only by pure chance that their paths had crossed again recently.

They had arranged to meet up for a beer and to catch up on what had happened in the 25 years since they had enjoyed the “good old times” in the university bar.

<Dr Bill> Hi Bob, what can I get you? If I remember correctly it was anything resembling real ale. Will this “Black Sheep” do?

<Bob> Hi Bill, Perfect! I’ll get the nibbles. Plain nuts OK for you?

<Dr Bill> My favourite! So what are you up to now? What doors did your engineering degree open?

<Bob> Lots!  I’ve done all sorts – mechanical, electrical, software, hardware, process, all except civil engineering. And I love it. What I do now is a sort of synthesis of all of them.  And you? Where did your medical degree lead?

<Dr Bill> To my heart’s desire, the wonderful Mrs Hyde, and of course to primary care. I am a GP. I have wanted to be a GP ever since I was knee-high to a grasshopper.

<Bob> Yes, you always had that “I’m going to save the world one patient at a time!” passion. That must be so rewarding! Helping people who are scared witless by the health horror stories that the media pump out.  I had a fright last year when I found a lump.  My GP was great, she confidently diagnosed a “hernia” and I was all sorted in a matter of weeks with a bit of nifty day case surgery. I was convinced my time had come. It just shows how damaging the fear of the unknown can be!

<Dr Bill> Being a GP is amazingly rewarding. I love my job. But …

<Bob> But what? Are you alright Bill? You suddenly look really depressed.

<Dr Bill> Sorry Bob. I don’t want to be a damp squib. It is good to see you again, and chat about the old days when we were teased about our names.  And it is great to hear that you are enjoying your work so much. I admit I am feeling low, and frankly I welcome the opportunity to talk to someone I know and trust who is not part of the health care system. If you know what I mean?

<Bob> I know exactly what you mean.  Well, I can certainly offer an ear, “a problem shared is a problem halved” as they say. I can’t promise to do any more than that, but feel free to tell me the story, from the beginning. No blood-and-guts gory details though please!

<Dr Bill> Ha! “Tell me the story from the beginning” is what I say to my patients. OK, here goes. I feel increasingly overwhelmed and I feel like I am drowning under a deluge of patients who are banging on the practice door for appointments to see me. My intuition tells me that the problem is not the people, it is the process, but I can’t seem to see through the fog of frustration and chaos to a clear way forward.

<Bob> OK. I confess I know nothing about how your system works, so can you give me a bit more context.

<Dr Bill> Sorry. Yes, of course. I am what is called a single-handed GP and I have a list of about 1500 registered patients and I am contracted to provide primary care for them. I don’t have to do that 24 x 7, the urgent stuff that happens in the evenings and weekends is diverted to services that are designed for that. I work Monday to Friday from 9 AM to 5 PM, and I am contracted to provide what is needed for my patients, and that means face-to-face appointments.

<Bob> OK. When you say “contracted” what does that mean exactly?

<Dr Bill> Basically, the St. Elsewhere’s® Practice is like a small business. Its annual income is a fixed amount per year for each patient on the registration list, and I have to provide the primary care service for them from that pot of cash. And that includes all the costs, including my income, our practice nurse, and the amazing Mrs H. She is the practice receptionist, manager, administrator and all-round fixer-of-anything.

<Bob> Wow! What a great design. No need to spend money on marketing, research, new product development, or advertising! Just 100% pure service delivery of tried-and-tested medical know-how to a captive audience for a guaranteed income. I have commercial customers who would cut off their right arms for an offer like that!

<Dr Bill> Really? It doesn’t feel like that to me. It feels like the more I offer, the more the patients expect. The demand is a bottomless well of wants, but the income is capped and my time is finite!

<Bob> H’mm. Tell me more about the details of how the process works.

<Dr Bill> Basically, I am a problem-solving engine. Patients phone for an appointment, Mrs H books one, the patient comes at the appointed time, I see them, and I diagnose and treat the problem, or I refer on to a specialist if it’s more complicated. That’s basically it.

<Bob> OK. Sounds a lot simpler than 99% of the processes that I’m usually involved with. So what’s the problem?

<Dr Bill> I don’t have enough capacity! After all the appointments for the day are booked Mrs H has to say “Sorry, please try again tomorrow” to every patient who phones in after that.  The patients who can’t get an appointment are not very happy and some can get quite angry. They are anxious and frustrated and I fully understand how they feel. I feel the same.

<Bob> We will come back to what you mean by “capacity”. Can you outline for me exactly how a patient is expected to get an appointment?

<Dr Bill> We tell them to phone at 8 AM for an appointment, there is a fixed number of bookable appointments, and it is first-come-first-served.  That is the only way I can protect myself from being swamped and is the fairest solution for patients.  It wasn’t my idea; it is called Advanced Access. Each morning at 8 AM we switch on the phones and brace ourselves for the daily deluge.

<Bob> You must be pulling my leg! This design is a batch-and-queue phone-in appointment booking lottery!  I guess that is one definition of “fair”.  How many patients get an appointment on the first attempt?

<Dr Bill> Not many.  The appointments are usually all gone by 9 AM and a lot go to people who have been trying to get one for several days. When they do eventually get to see me they are usually grumpy and then spring the trump card “And while I’m here doctor I have a few other things that I’ve been saving up to ask you about”. I help if I can but more often than not I have to say, “I’m sorry, you’ll have to book another appointment!”

<Bob> I’m not surprised your patients are grumpy. I would be too. And my recollection of seeing my GP with my scary lump wasn’t like that at all. I phoned at lunch time and got an appointment the same day. Maybe I was just lucky, or maybe my GP was as worried as me. But it all felt very calm. When I arrived there was only one other patient waiting, and I was in and out in less than ten minutes – and mightily reassured I can tell you! It felt like a high quality service that I could trust if-and-when I needed it, which fortunately is very infrequently.

<Dr Bill> I dream of being able to offer a service like that! I am prepared to bet you are registered with a group practice and you see whoever is available rather than your own GP. Single-handed GPs like me who offer the old fashioned personal service are a rarity, and I can see why. We must be suckers!

<Bob> OK, so I’m starting to get a sense of this now. Has it been like this for a long time?

<Dr Bill> Yes, it has. When I was younger I was more resilient and I did not mind going the extra mile.  But the pressure is relentless and maybe I’m just getting older and grumpier.  My real fear is I end up sounding like the burned-out cynics that I’ve heard at the local GP meetings; the ones who crow about how they are counting down the days to when they can retire and gloat.

<Bob> You’re the same age as me Bill so I don’t think either of us can use retirement as an exit route, and anyway, that’s not your style. You were never a quitter at university. Your motto was always “when the going gets tough the tough get going“.

<Dr Bill> Yeah I know. That’s why it feels so frustrating. I think I lost my mojo a long time back. Maybe I should just cave in and join up with the big group practice down the road, and accept the inevitable loss of the personal service. They said they would welcome me, and my list of 1500 patients, with open arms.

<Bob> OK. That would appear to be an option, or maybe a compromise, but I’m not sure we’ve exhausted all the other options yet.  Tell me, how do you decide how long a patient needs for you to solve their problem?

<Dr Bill> That’s easy. It is ten minutes. That is the time recommended in the Royal College Guidelines.

<Bob> Eh? All patients require exactly ten minutes?

<Dr Bill> No, of course not!  That is the average time that patients need.  The Royal College did a big survey and that was what most GPs said they needed.

<Bob> Please tell me if I have got this right.  You work 9-to-5, and you carve up your day into 10-minute time-slots called “appointments” and, assuming you are allowed time to have lunch and a pee, that would be six per hour for seven hours which is 42 appointments per day that can be booked?

<Dr Bill> No. That wouldn’t work because I have other stuff to do as well as see patients. There are only 25 bookable 10-minute appointments per day.

<Bob> OK, that makes more sense. So where does 25 come from?

<Dr Bill> Ah! That comes from a big national audit. For an average GP with an average list of 1,500 patients, the average number of patients seeking an appointment per day was found to be 25, and our practice population is typical of the national average in terms of age and deprivation.  So I set the upper limit at 25. The workload is manageable but it seems to generate a lot of unhappy patients, and I dare not increase the slots because I’d be overwhelmed with the extra workload and I’m barely coping now.  I feel stuck between a rock and a hard place!

<Bob> So you have set the maximum slot-capacity to the average demand?

<Dr Bill> Yes. That’s OK isn’t it? It will average out over time. That is what average means! But it doesn’t feel like that. The chaos and pressure never seems to go away.
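Bill’s hunch that demand will “average out” can be stress-tested with a few lines of code. The sketch below is illustrative only: it assumes daily demand arrives Poisson-like around a mean of 25 requests, and that any unmet requests simply roll over to the next day.

```python
import math
import random

random.seed(42)

def poisson(mean):
    # Knuth's method: sample one day's Poisson-distributed demand
    threshold, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate(days, demand_mean, slots_per_day):
    """Carry unmet requests forward as a backlog; return the daily backlog."""
    backlog, history = 0, []
    for _ in range(days):
        backlog += poisson(demand_mean)          # today's requests join the queue
        backlog -= min(backlog, slots_per_day)   # today's slots serve the queue
        history.append(backlog)
    return history

# Capacity set exactly equal to average demand: the backlog wanders like a
# random walk and never settles down.
print("worst backlog with 25 slots/day:", max(simulate(250, 25, 25)))
print("worst backlog with 31 slots/day:", max(simulate(250, 25, 31)))
```

With capacity equal to average demand the queue drifts and spikes rather than settling, which is the engineering reason why “max slot-capacity = average demand” is a recipe for chronic chaos; even a modest headroom of a few extra slots tames it.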


There was a long pause while Bob mulled over what he had heard, sipped his pint of Black Sheep and nibbled on the dwindling bowl of peanuts.  Eventually he spoke.


<Bob> Bill, I have some good news and some not-so-good news and then some more good news.

<Dr Bill> Oh dear, you sound just like me when I have to share the results of tests with one of my patients at their follow up appointment. You had better give me the “bad news sandwich”!

<Bob> OK. The first bit of good news is that this is a very common, and easily treatable flow problem.  The not-so-good news is that you will need to change some things.  The second bit of good news is that the changes will not cost anything and will work very quickly.

<Dr Bill> What! You cannot be serious!! Until ten minutes ago you said that you knew nothing about how my practice works and now you are telling me that there is a quick, easy, zero cost solution.  Forgive me for doubting your engineering know-how but I’ll need a bit more convincing than that!

<Bob> And I would too if I were in your position.  The clues to the diagnosis are in the story. You said the process problem was long-standing; you said that you set the maximum slot-capacity to the average demand; and you said that you have a fixed appointment time that was decided by a subjective consensus.  From an engineering perspective, this is a perfect recipe for generating chronic chaos, which is exactly what you are describing.

<Dr Bill> Is it? OMG. You said this is well understood and resolvable? So what do I do?

<Bob> Give me a minute.  You said the average demand is 25 per day. What sort of service would you like your patients to experience? Would “90% can expect a same day appointment on the first call” be good enough as a starter?

<Dr Bill> That would be game changing!  Mrs H would be over the moon to be able to say “Yes” that often. I would feel much less anxious too, because I know the current system is a potentially dangerous lottery. And my patients would be delighted and relieved to be able to see me that easily and quickly.

<Bob> OK. Let me work this out. Based on what you’ve said, some assumptions, and a bit of flow engineering know-how; you would need to offer up to 31 appointments per day.

<Dr Bill> What! That’s impossible!!! I told you it would be impossible! That would be another hour a day of face-to-face appointments. When would I do the other stuff? And how did you work that out anyway?

<Bob> I did not say they would have to all be 10-minute appointments, and I did not say you would expect to fill them all every day. I did however say you would have to change some things.  And I did say this is a well understood flow engineering problem.  It is called “resilience design“. That’s how I was able to work it out on the back of this Black Sheep beer mat.
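For the curious, Bob’s beer-mat arithmetic can be reconstructed under one simple, assumed model: if daily demand is Poisson-like with a mean of 25 then its day-to-day standard deviation is about √25 = 5, and covering roughly 90% of days needs about 1.28 standard deviations of headroom above the average.

```python
import math

def slots_needed(mean_demand, z=1.28):
    """Slots needed so that demand fits within capacity on ~90% of days,
    assuming Poisson-like variation (standard deviation = sqrt of the mean).
    z = 1.28 is the normal-curve multiplier for the 90th percentile."""
    return mean_demand + z * math.sqrt(mean_demand)

# Average demand of 25/day needs about 6 slots of headroom for 90% cover.
print(round(slots_needed(25)))  # -> 31
```

This is “resilience design” in miniature: the extra slots are not expected to be filled every day; they are there to absorb the natural variation in demand.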

<Dr Bill> H’mm. That is starting to sound a bit more reasonable. What things would I have to change? Specifically?

<Bob> I’m not sure what specifically yet.  I think in your language we would say “I have taken a history, and I have a differential diagnosis, so next I’ll need to examine the patient, and then maybe do some tests to establish the actual diagnosis and to design and decide the treatment plan“.

<Dr Bill> You are learning the medical lingo fast! What do I need to do first? Brace myself for the forensic rubber-gloved digital examination?

<Bob> Alas, not yet and certainly not here. Shall we start with the vital signs? Height, weight, pulse, blood pressure, and temperature? That’s what my GP did when I went with my scary lump.  The patient here is not you, it is your St. Elsewhere’s® Practice, and we will need to translate the medical-speak into engineering-speak.  So one thing you’ll need to learn is a bit of the lingua-franca of systems engineering.  By the way, that’s what I do now. I am a systems engineer, or maybe now a health care systems engineer?

<Dr Bill> Point me in the direction of the HCSE dictionary! The next round is on me. And the nuts!

<Bob> Excellent. I’ll have another Black Sheep and some of those chilli-coated ones. We have work to do.  Let me start by explaining what “capacity” actually means to an engineer. Buckle up. This ride might get a bit bumpy.


This story is fictional, but the subject matter is factual.

Bob’s diagnosis and recommendations are realistic and reasonable.

Chapter 1 of the HCSE dictionary can be found here.

And if you are a GP who recognises these “symptoms” then this may be of interest.

Miracle on Tavanagh Avenue

Sometimes change is dramatic. A big improvement appears very quickly. And when that happens we are caught by surprise (and delight).

Our emotional reaction is much faster than our logical response. “Wow! That’s a miracle!”


Our logical Tortoise eventually catches up with our emotional Hare and says “Hare, we both know that there is no such thing as miracles and magic. There must be a rational explanation. What is it?”

And Hare replies “I have no idea, Tortoise.  If I did then it would not have been such a delightful surprise. You are such a kill-joy! Can’t you just relish the relief without analysing the life out of it?”

Tortoise feels hurt. “But I just want to understand so that I can explain to others. So that they can do it and get the same improvement.  Not everyone has a ‘nothing-ventured-nothing-gained’ attitude like you! Most of us are too fearful of failing to risk trusting the wild claims of improvement evangelists. We have had our fingers burned too often.”


The apparent miracle is real and recent … here is a snippet of the feedback:

Notice carefully the last sentence. It took a year of discussion to get an “OK” and a month of planning to prepare the “GO”.

That is not a miracle and some magic … that took a lot of hard work!

The evangelist is the customer. The supplier is an engineer.


The context is the chronic niggle of patients trying to get an appointment with their GP, and the chronic niggle of GPs feeling overwhelmed with work.

Here is the back story …

In the opening weeks of the 21st Century, the National Primary Care Development Team (NPDT) was formed.  Primary care was a high priority and the government had allocated £168m of investment in the NHS Plan, £48m of which was earmarked to improve GP access.

The approach the NPDT chose was:

harvest best practice +
use a panel of experts +
disseminate best practice.

Dr (later Sir) John Oldham was the innovator and figure-head.  The best practice was copied from Dr Mark Murray from Kaiser Permanente in the USA – the Advanced Access model.  The dissemination method was copied from Dr Don Berwick’s Institute for Healthcare Improvement (IHI) in Boston – the Collaborative Model.

The principle of Advanced Access is “today’s-work-today” which means that all the requests for a GP appointment are handled the same day.  And the proponents of the model outlined the key elements to achieving this:

1. Measure daily demand.
2. Set capacity so that it is sufficient to meet the daily demand.
3. Simple booking rule: “phone today for a decision today”.

But that is not what was rolled out. The design was modified somewhere between aspiration and implementation, in two important ways.

First, by adding a policy of “Phone at 08:00 for an appointment”, and second by adding a policy of “carving out” appointment slots into labelled pots such as ‘Dr X’ or ‘see in 2 weeks’ or ‘annual reviews’.

Subsequent studies suggest that the tweaking happened at the GP practice level and was driven by the fear that, by reducing the waiting time, they would attract more work.

In other words: an assumption that demand for health care is supply-led, and without some form of access barrier, the system would be overwhelmed and never be able to cope.


The result of this well-intended tampering with the Advanced Access design was to invalidate it. Oops!

To a systems engineer, this meddling was predictably counter-productive.

The “today’s work today” specification is called a demand-led design and, if implemented competently, will lead to shorter waits for everyone, no need for urgent/routine prioritization and slot carve-out, and a simpler, safer, calmer, more efficient, higher quality, more productive system.

In this context it does not mean “see every patient today” it means “assess and decide a plan for every patient today”.

In reality, the actual demand for GP appointments is not known at the start; which is why the first step is to implement continuous measurement of the daily number and category of requests for appointments.

The second step is to feed back this daily demand information in a visual format called a time-series chart.

The third step is to use this visual tool for planning future flow-capacity, and for monitoring for ‘signals’, such as spikes, shifts, cycles and slopes.

That was not part of the modified design, so the reasonable fear expressed by GPs was (and still is) that by attempting to do today’s-work-today they would unleash a deluge of unmet need … and be swamped/drowned.

So a flood defence barrier was bolted on: the policy of “phone at 08:00 for an appointment today”, and then the policy of channelling the overspill into pots of “embargoed slots”.

The combined effect of this error of omission (omitting the measured demand visual feedback loop) and these errors of commission (the 08:00 policy and appointment slot carve-out policy) effectively prevented the benefits of the Advanced Access design being achieved.  It was a predictable failure.

But no one seemed to realize that at the time.  Perhaps because of the political haste that was driving the process, and perhaps because there were no systems engineers on the panel-of-experts to point out the risks of diluting the design.

It is also interesting to note that the strategic aim of the NPDT was to develop a self-sustaining culture of quality improvement (QI) in primary care. That doesn’t seem to have happened either.


The roll-out of Advanced Access was not the success that had been hoped for. That is the conclusion of the 300+ page research report published in 2007.


The “Miracle on Tavanagh Avenue” that was experienced this week by both patients and staff was the expected effect of this tampering finally being corrected; and the true potential of the original demand-led design being released – for all to experience.

Remember the essential ingredients?

1. Measure daily demand and feed it back as a visual time-series chart.
2. Set capacity so that it is sufficient to meet the daily demand.
3. Use a simple booking rule: “phone anytime for a decision today”.

But there is also an extra design ingredient that has been added in this case, one that was not part of the original Advanced Access specification, one that frees up GP time to provide the required “resilience” to sustain a same-day service.

And that “secret” ingredient is how the new design worked so quickly and feels like a miracle – safe, calm, enjoyable and productive.

This is health care systems engineering (HCSE) in action.


So congratulations to Harry Longman, the whole team at GP Access, and to Dr Philip Lusty and the team at Riverside Practice, Tavanagh Avenue, Portadown, NI.

You have demonstrated what was always possible.

The fear of failure prevented it before, just as it prevented you doing this until you were so desperate you had no other choices.

To read the fuller story click here.

PS. Keep a close eye on the demand time-series chart and if it starts to rise then investigate the root cause … immediately.


How Do We Know We Have Improved?

Phil and Pete are having a coffee and a chat.  They both work in the NHS and have been friends for years.

They have different jobs. Phil is a commissioner and an accountant by training, Pete is a consultant and a doctor by training.

They are discussing a challenge that affects them both on a daily basis: unscheduled care.

Both Phil and Pete want to see significant and sustained improvements and how to achieve them is often the focus of their coffee chats.


<Phil> We are agreed that we both want improvement, both from my perspective as a commissioner and from yours as a clinician. And we agree on what we want to see improved: patient safety, waiting, outcomes, the experience of both patients and staff, and the use of our limited NHS resources.

<Pete> Yes. Our common purpose, the “what” and “why”, has never been an issue.  Where we seem to get stuck is the “how”.  We have both tried many things but, despite our good intentions, it feels like things are getting worse!

<Phil> I agree. It may be that what we have implemented has had a positive impact and we would have been even worse off if we had done nothing. But I do not know. We clearly have much to learn and, while I believe we are making progress, we do not appear to be learning fast enough.  And I think this knowledge gap exposes another “how” issue: After we have intervened, how do we know that we have (a) improved, (b) not changed or (c) worsened?

<Pete> That is a very good question.  And all that I have to offer as an answer is to share what we do in medicine when we ask a similar question: “How do I know that treatment A is better than treatment B?”  It is the essence of medical research; the quest to find better treatments that deliver better outcomes and at lower cost.  The similarities are strong.

<Phil> OK. How do you do that? How do you know that “Treatment A is better than Treatment B” in a way that anyone will trust the answer?

<Pete> We use a science that is actually very recent on the scientific timeline; it was only firmly established in the first half of the 20th century. One reason for that is that it is a rather counter-intuitive science, and so it relies on tools that have been designed and demonstrated to work, but whose inner workings most of us do not really understand. They are a bit like magic black boxes.

<Phil> H’mm. Please forgive me for sounding skeptical but that sounds like a big opportunity for making mistakes! If there are lots of these “magic black box” tools then how do you decide which one to use and how do you know you have used it correctly?

<Pete> Those are good questions! Very often we don’t know and in our collective confusion we generate a lot of unproductive discussion.  This is why we are often forced to accept the advice of experts but, I confess, very often we don’t understand what they are saying either! They seem like the medieval Magi.

<Phil> H’mm. So these experts are like ‘magicians’ – they claim to understand the inner workings of the black magic boxes but are unable, or unwilling, to explain in a language that a ‘muggle’ would understand?

<Pete> Very well put. That is just how it feels.

<Phil> So can you explain what you do understand about this magical process? That would be a start.


<Pete> OK, I will do my best.  The first thing we learn in medical research is that we need to be clear about what it is we are looking to improve, and we need to be able to measure it objectively and accurately.

<Phil> That  makes sense. Let us say we want to improve the patient’s subjective quality of the A&E experience and objectively we want to reduce the time they spend in A&E. We measure how long they wait. 

<Pete> The next thing is that we need to decide how much improvement we need. What would be worthwhile? So in the example you have offered we know that reducing the average time patients spend in A&E by just 30 minutes would have a significant effect on the quality of the patient and staff experience, and as a by-product it would also dramatically improve the 4-hour target performance.

<Phil> OK.  From the commissioning perspective there are lots of things we can do, such as commissioning alternative paths for specific groups of patients; in effect diverting some of the unscheduled demand away from A&E to a more appropriate service provider.  But these are the sorts of thing we have been experimenting with for years, and it brings us back to the question: How do we know that any change we implement has had the impact we intended? The system seems, well, complicated.

<Pete> In medical research we are very aware that the system we are changing is very complicated and that we do not have the power of omniscience.  We cannot know everything.  Realistically, all we can do is to focus on objective outcomes, collect small samples of the data ocean, and use those to draw conclusions we can trust. We have to design our experiment with care!

<Phil> That makes sense. Surely we just need to measure the stuff that will tell us if our impact matches our intent. That sounds easy enough. What’s the problem?

<Pete> The problem we encounter is that when we measure “stuff” we observe patient-to-patient variation, and that is before we have made any changes.  Any impact that we may have is obscured by this “noise”.

<Phil> Ah, I see.  So if our intervention generates a small impact then it will be more difficult to see amidst this background noise. Like trying to see fine detail in a fuzzy picture.

<Pete> Yes, exactly like that.  And it raises the issue of “errors”.  In medical research we talk about two different types of error; we make the first type of error when our actual impact is zero but we conclude from our data that we have made a difference; and we make the second type of error when we have made an impact but we conclude from our data that we have not.

<Phil> OK. So does that imply that the more “noise” we observe in our measure-for-improvement before we make the change, the more likely we are to make one error or the other?

<Pete> Precisely! So before we do the experiment we need to design it so that we reduce the probability of making both of these errors to an acceptably low level.  So that we can be assured that any conclusion we draw can be trusted.

<Phil> OK. So how exactly do you do that?

<Pete> We know that whenever there is “noise” and whenever we use samples then there will always be some risk of making one or other of the two types of error.  So we need to set a threshold for both. We have to state clearly how much confidence we need in our conclusion. For example, we often use the convention that we are willing to accept a 1 in 20 chance of making the Type I error.

<Phil> Let me check if I have heard you correctly. Suppose that, in reality, our change has no impact, we have set the risk threshold for a Type I error at 1 in 20, and we repeat the same experiment 100 times – are you saying that we should expect about five of our experiments to show data that says our change has had the intended impact when in reality it has not?

<Pete> Yes. That is exactly it.
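Phil’s thought experiment is easy to run for real. The sketch below is illustrative only: it invents an A&E-like measure (minutes waited, with an assumed mean of 240 and standard deviation of 60), makes no real change at all, and counts how many of 100 “experiments” cross the 1-in-20 line anyway.

```python
import random
from statistics import NormalDist, mean

random.seed(1)

def null_experiment(n=30, mu=240, sigma=60):
    """One experiment in which the change truly has NO impact: both samples
    come from the same distribution (e.g. minutes spent in A&E)."""
    before = [random.gauss(mu, sigma) for _ in range(n)]
    after = [random.gauss(mu, sigma) for _ in range(n)]
    se = (2 * sigma**2 / n) ** 0.5              # z-test, sigma assumed known
    z = (mean(before) - mean(after)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value

false_positives = sum(null_experiment() < 0.05 for _ in range(100))
print(false_positives, "of 100 no-impact experiments crossed the 1-in-20 line")
```

Run it a few times with different seeds and the count hovers around five, just as Phil predicted: that is what an alpha of 1 in 20 means in practice.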

<Phil> OK.  But in practice we cannot repeat the experiment 100 times, so we just have to accept the 1 in 20 chance that we will make a Type 1 error, and we won’t know we have made it if we do. That feels a bit chancy. So why don’t we just set the threshold to 1 in 100 or 1 in 1000?

<Pete> We could, but doing that has a consequence.  If we reduce the risk of making a Type I error by setting our threshold lower, then we will increase the risk of making a Type II error.

<Phil> Ah! I see. The old swings-and-roundabouts problem. By the way, do these two errors have different names that would make it  easier to remember and to explain?

<Pete> Yes. The Type I error is called a False Positive. It is like concluding that a patient has a specific diagnosis when in reality they do not.

<Phil> And the Type II error is called a False Negative?

<Pete> Yes.  And we want to avoid both of them, and to do that we have to specify a separate risk threshold for each error.  The convention is to call the threshold for the false positive the alpha level, and the threshold for the false negative the beta level.

<Phil> OK. So now we have three things we need to be clear on before we can do our experiment: the size of the change that we need, the risk of the false positive that we are willing to accept, and the risk of a false negative that we are willing to accept.  Is that all we need?

<Pete> In medical research we learn that we need six pieces of the experimental design jigsaw before we can proceed. We only have three pieces so far.

<Phil> What are the other three pieces then?

<Pete> We need to know the average value of the metric we are intending to improve, because that is our baseline from which improvement is measured.  Improvements are often framed as a percentage improvement over the baseline.  And we need to know the spread of the data around that average, the “noise” that we referred to earlier.

<Phil> Ah, yes!  I forgot about the noise.  But that is only five pieces of the jigsaw. What is the last piece?

<Pete> The size of the sample.

<Phil> Eh?  Can’t we just go with whatever data we can realistically get?

<Pete> Sadly, no.  The size of the sample is how we control the risk of a false negative error.  The more data we have the lower the risk. This is referred to as the power of the experimental design.

<Phil> OK. That feels familiar. I know that the more experience I have of something the better my judgement gets. Is this the same thing?

<Pete> Yes. Exactly the same thing.

<Phil> OK. So let me see if I have got this. To know if the impact of the intervention matches our intention we need to design our experiment carefully. We need all six pieces of the experimental design jigsaw and they must all fall inside our circle of control. We can measure the baseline average and spread; we can specify the impact we will accept as useful; we can specify the risks we are prepared to accept of making the false positive and false negative errors; and we can collect the required amount of data after we have made the intervention so that we can trust our conclusion.

<Pete> Perfect! That is how we are taught to design research studies so that we can trust our results, and so that others can trust them too.

<Phil> So how do we decide how big the post-implementation data sample needs to be? I can see we need to collect enough data to avoid a false negative but we have to be pragmatic too. There would appear to be little value in collecting more data than we need. It would cost more and could delay knowing the answer to our question.

<Pete> That is precisely the trap that many inexperienced medical researchers fall into. They set their sample size according to what is achievable and affordable, and then they hope for the best!

<Phil> Well, we do the same. We analyse the data we have and we hope for the best.  In the magical metaphor we are asking our data analysts to pull a white rabbit out of the hat.  It sounds rather irrational and unpredictable when described like that! Have medical researchers learned a way to avoid this trap?

<Pete> Yes, it is a tool called a power calculator.
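A power calculator for the simplest case, comparing two averages, fits in a few lines. The numbers below are purely illustrative assumptions: 60 minutes of “noise”, Phil’s worthwhile 30-minute change, the conventional 1-in-20 alpha, and 80% power.

```python
import math
from statistics import NormalDist

def sample_size(sigma, delta, alpha=0.05, power=0.80):
    """Patients needed PER GROUP to detect a true change of `delta` in the
    average, given noise `sigma`, a two-sided false-positive risk `alpha`,
    and a false-negative risk of (1 - power)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # guards against the false positive
    z_beta = z.inv_cdf(power)            # guards against the false negative
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Noise of 60 minutes, worthwhile change of 30 minutes:
print(sample_size(sigma=60, delta=30))  # -> 63 per group
```

Notice how the pieces of the jigsaw drive the answer: halve the worthwhile change and the required sample roughly quadruples, which is why “how much improvement is worthwhile?” has to be decided before the data are collected.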

<Phil> Ooooo … a power tool … I like the sound of that … that would be a cool tool to have in our commissioning bag of tricks. It would be like a magic wand. Do you have such a thing?

<Pete> Yes.

<Phil> And do you understand how the power tool magic works well enough to explain to a “muggle”?

<Pete> Not really. To do that means learning some rather unfamiliar language and some rather counter-intuitive concepts.

<Phil> Is that the magical stuff I hear lurks between the covers of a medical statistics textbook?

<Pete> Yes. Scary looking mathematical symbols and unfathomable spells!

<Phil> Oh dear!  Is there another way for me to gain a working understanding of this magic? Something a bit more pragmatic? A path that a ‘statistical muggle’ might be able to follow?

<Pete> Yes. It is called a simulator.

<Phil> You mean like a flight simulator that pilots use to learn how to control a jumbo jet before ever taking a real one out for a trip?

<Pete> Exactly like that.

<Phil> Do you have one?

<Pete> Yes. It was how I learned about this “stuff” … pragmatically.

<Phil> Can you show me?

<Pete> Of course.  But to do that we will need a bit more time, another coffee, and maybe a couple of those tasty looking Danish pastries.

<Phil> A wise investment I’d say.  I’ll get the coffee and pastries, if you fire up the engines of the simulator.
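The kind of simulator Pete fires up can be sketched briefly too. Assuming, for illustration only, a true 30-minute improvement amid 60 minutes of noise and 63 patients per group (a conventional 80%-power design), it repeats the whole experiment many times and counts how often the impact is actually detected.

```python
import random
from statistics import NormalDist, mean

random.seed(7)

def experiment(effect, n=63, mu=240, sigma=60):
    """One simulated trial in which the change truly shortens the average
    stay by `effect` minutes; returns the two-sided p-value."""
    before = [random.gauss(mu, sigma) for _ in range(n)]
    after = [random.gauss(mu - effect, sigma) for _ in range(n)]
    se = (2 * sigma**2 / n) ** 0.5
    z = (mean(before) - mean(after)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# A design sized for 80% power should detect the genuine impact in
# roughly 4 out of 5 simulated trials.
hits = sum(experiment(effect=30) < 0.05 for _ in range(1000))
print(f"impact detected in {hits} of 1000 simulated trials")
```

This is the flight-simulator lesson: the detection rate converges on the designed power, and a ‘statistical muggle’ can see it happen without opening a single textbook of spells.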

“Houston, we have a problem!”

The immortal words from Apollo 13 that alerted us to an evolving catastrophe …

… and that is what we are seeing in the UK health and social care system … using the thermometer of A&E 4-hour performance. England is the red line.

[Run chart: UK A&E four-hour performance over time; England is the red line]

The chart shows that this is not a sudden change, it has been developing over quite a long period of time … so why does it feel like an unpleasant surprise?


One reason may be that NHS England is using performance management techniques that were out of date in the 1980s and are obsolete in the 2010s!

Let me show you what I mean. This is a snapshot from the NHS England Board Minutes for November 2016.

[Snapshot: NHS England Board risk RAG chart, November 2016]
RAG stands for Red-Amber-Green and what we want to see on a Risk Assessment is Green for the most important stuff like safety, flow, quality and affordability.

We are not seeing that.  We are seeing Red/Amber for all of them. It is an evolving catastrophe.

A risk RAG chart is an obsolete performance management tool.

Here is another snippet …

[Snapshot: NHS England A&E performance summary, November 2016]

This demonstrates the usual mix of single point aggregates for the most recent month (October 2016); an arbitrary target (4 hours) used as a threshold to decide failure/not failure; two-point comparisons (October 2016 versus October 2015); and a sprinkling of ratios. Not a single time-series chart in sight. No pictures that tell a story.

Click here for the full document (which does also include some very sensible plans to maintain hospital flow through the bank holiday period).

The risk of this way of presenting system performance data is that it is a minefield of intuitive traps for the unwary.  Invisible pitfalls that can lead to invalid conclusions, unwise decisions, potentially ineffective and/or counter-productive actions, and failure to improve. These methods are risky and that is why they should be obsolete.

And if NHSE is using obsolete tools then what hope do CCGs and Trusts have?


Much better tools have been designed.  Tools that are used by organisations that are innovative, resilient, commercially successful and that deliver safety, on-time delivery, quality and value for money. At the same time.

And the old tools are obsolete outside the NHS because, in the competitive context of the dog-eat-dog real world, organisations do not survive if they do not innovate, improve and learn as fast as their competitors.  They do not have the luxury of being shielded from reality by a central tax-funded monopoly!

And please do not misinterpret my message here; I am a 100% raving fan of the NHS ethos of “available to all and free at the point of delivery” and an NHS that is funded centrally and fairly. That is not my issue.

My issue is the continued use of obsolete performance management tools in the NHS.


Q: So what are the alternatives? What do the successful commercial organisations use instead?

A: System behaviour charts.

SBCs are pictures of how the system is behaving over time – pictures that tell a story – pictures that have meaning – pictures that we can use to diagnose, design and deliver a better outcome than the one we are heading towards.
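One widely used flavour of SBC is the XmR (individuals) chart, which summarises the voice of the system as a centre line plus “natural process limits” computed from point-to-point variation. Here is a minimal sketch with invented performance figures; 2.66 is the standard XmR scaling constant, and points falling outside the limits are the signals worth investigating.

```python
def xmr_limits(series):
    """Centre line and natural process limits for an XmR chart:
    centre +/- 2.66 x the average moving range between successive points."""
    centre = sum(series) / len(series)
    moving_ranges = [abs(b - a) for a, b in zip(series, series[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return centre - 2.66 * avg_mr, centre, centre + 2.66 * avg_mr

# Invented monthly four-hour performance figures (%)
performance = [95.2, 94.8, 95.1, 94.6, 94.9, 94.4, 94.7, 94.1, 93.8, 93.5]
low, centre, high = xmr_limits(performance)
print(f"centre {centre:.2f}%, natural limits {low:.2f}% to {high:.2f}%")
```

Unlike a RAG rating or a two-point comparison, the limits are derived from the system’s own behaviour over time, which is what makes the chart a diagnostic picture rather than a verdict.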

Pictures like the A&E performance-over-time chart above.

Click here for more on how and why.


Therefore, if the DoH, NHSE, NHSI, STPs, CCGs and Trust Boards want to achieve their stated visions and missions then the writing-on-the-wall says that they will need to muster some humility and learn how successful organisations do this.

This is not a comfortable message to hear and it is easier to be defensive than receptive.

The NHS has to change if it wants to survive and continue to serve the people who pay the salaries. And time is running out. Continuing as we are is not an option. Complaining and blaming are not options. Doing nothing is not an option.

Learning is the only option.

Anyone can learn to use system behaviour charts.  No one needs to rely on averages, two-point comparisons, ratios, targets, and the combination of failure-metrics and us-versus-them-benchmarking that leads to the chronic mediocrity trap.
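To make this less abstract, here is a minimal sketch of the arithmetic behind one common form of system behaviour chart, the XmR (individuals and moving range) chart. The chart type and the data are my illustrative assumptions, not taken from the post:

```python
def xmr_limits(values):
    """Compute the centreline and natural process limits of an
    XmR (individuals) system behaviour chart.

    The limits are mean +/- 2.66 * average moving range, using the
    standard XmR chart constant (2.66)."""
    mean = sum(values) / len(values)
    # Moving ranges: absolute differences between consecutive points.
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean, mean + 2.66 * avg_mr

# Illustrative (made-up) monthly performance percentages.
monthly = [85, 87, 84, 86, 88, 85, 83, 86]
low, centre, high = xmr_limits(monthly)
print(f"lower={low:.1f}  centre={centre:.1f}  upper={high:.1f}")
```

Points falling outside these natural process limits signal a real change in system behaviour; points inside them are most likely noise. That is exactly the distinction that averages and two-point comparisons cannot make.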

And there is hope for those with enough hunger, humility and who are prepared to do the hard-work of developing their personal, team, department and organisational capability to use better management methods.


Apollo 13 is a true story.  The catastrophe was averted.  The astronauts were brought home safely.  The film retells the story of how that miracle was achieved. Perhaps watching the whole film would be somewhere to start, because it holds many valuable lessons for us all – lessons on how effective teams behave.

Defensive Reasoning

monkey_on_back_anim_150_wht_11200

About 25 years ago a paper was published in the Harvard Business Review with the interesting title of “Teaching Smart People How To Learn”.

The uncomfortable message was that many people who are top of the intellectual rankings are actually very poor learners.

This sounds like a paradox.  How can people be high-achievers and yet be unable to learn?


Health care systems are stuffed full of super-smart, high-achieving professionals. The cream of the educational crop. The top 2%. They are called “doctors”.

And we have a problem with improvement in health care … a big problem … the safety, delivery, quality and affordability of the NHS are getting worse. Not better.

Improvement implies change and change implies learning, so if smart people struggle to learn then could that explain why health care systems find self-improvement so difficult?

This paragraph from the 1991 HBR paper feels uncomfortably familiar:

defensive_reasoning_2

The author, Chris Argyris, refers to something called “single-loop learning” and if we translate this management-speak into the language of medicine it would come out as “treating the symptom and ignoring the disease“.  That is poor medicine.

Chris also suggests an antidote to this problem and gave it the label “double-loop learning” which if translated into medical speak becomes “diagnosis“.  And that is something that doctors can relate to because without a diagnosis, a justifiable treatment is difficult to formulate.


We need to diagnose the root cause(s) of the NHS disease.


The 1991 HBR paper refers back to an earlier 1977 HBR paper called Double Loop Learning in Organisations where we find the theory that underpins it.

The proposed hypothesis is that we all have cognitive models that we use to decide our actions (and in-actions), what I have referred to before as ChimpWare.  The 1977 paper references a table published in a 1974 book, and the message is that single-loop learning is a manifestation of a Model I theory-in-use.

defensive_reasoning_models


And if we consider the task that doctors are expected to do then we can empathize with their dominant Model 1 approach.  Health care is a dangerous business.  Doctors can cause a lot of unintentional harm – both physical and psychological.  Doctors are dealing with a very, very complex system – a human body – that they only partially understand.  No two patients are exactly the same and illness is a dynamic process.  Everyone’s expectations are high. We have come a long way since the days of blood-letting and leeches!  Failure is not tolerated.

Doctors are intelligent and competitive … they had to be to win the education race.

Doctors must make tough decisions and have to have tough conversations … many, many times … and yet not be consumed in the process.  They often have to suppress emotions to be effective.

Doctors feel the need to protect patients from harm – both physical and emotional.

And collectively they do a very good job.  Doctors are respected and trusted professionals.


But …  to quote Chris Argyris …

“Model I blinds people to their weaknesses. For instance, the six corporate presidents were unable to realize how incapable they were of questioning their assumptions and breaking through to fresh understanding. They were under the illusion that they could learn, when in reality they just kept running around the same track.”

This blindness is self-reinforcing because …

“All parties withheld information that was potentially threatening to themselves or to others, and the act of cover-up itself was closed to discussion.”


How many times have we seen this in the NHS?

The Mid-Staffordshire Hospital debacle that led to the Francis Report is all the evidence we need.


So what is the way out of this double-bind?

Chris gives us some hints with his Model II theory-in-use.

  1. Valid information – Study.
  2. Free and informed choice – Plan.
  3. Constant monitoring of the implementation – Do.

The skill required is to question assumptions and break through to fresh understanding, and we can do that with a design-led approach because that is what designers do.

They bring their unconscious assumptions up to awareness and ask “Is that valid?” and “What if” questions.

It is called Improvement-by-Design.

And the good news is that this Model II approach works in health care, and we know that because the evidence is accumulating.

 

Value, Verify and Validate

thinker_figure_unsolve_puzzle_150_wht_18309

Many of the challenges that we face in delivering effective and affordable health care do not have well understood and generally accepted solutions.

If they did there would be no discussion or debate about what to do and the results would speak for themselves.

This lack of understanding is leading us to try to solve a complicated system design challenge in our heads.  Intuitively.

And trying to do it this way is fraught with frustration and risk because our intuition tricks us. It was this sort of challenge that led Professor Rubik to invent his famous 3D Magic Cube puzzle.

It is difficult enough to learn how to solve the Magic Cube puzzle by trial and error; it is even more difficult to attempt to do it inside our heads! Intuitively.


And we know the Rubik Cube puzzle is solvable, so all we need are some techniques, tools and training to improve our Rubik Cube solving capability.  We can all learn how to do it.


Let us return to the challenge of safe and affordable health care, and to the specific problem of unscheduled care, A&E targets, delayed transfers of care (DTOC), finance, fragmentation and chronic frustration.

This is a systems engineering challenge so we need some systems engineering techniques, tools and training before attempting it.  Not after failing repeatedly.

se_vee_diagram

One technique that a systems engineer will use is called a Vee Diagram such as the one shown above.  It shows the sequence of steps in the generic problem solving process and it has the same sequence that we use in medicine for solving problems that patients present to us …

Diagnose, Design and Deliver

which is also known as …

Study, Plan, Do.


Notice that there are three words in the diagram that start with the letter V … value, verify and validate.  These are probably the three most important words in the vocabulary of a systems engineer.


One tool that a systems engineer always uses is a model of the system under consideration.

Models come in many forms from conceptual to physical and are used in two main ways:

  1. To assist the understanding of the past (diagnosis)
  2. To predict the behaviour in the future (prognosis)

And the process of creating a system model, the sequence of steps, is shown in the Vee Diagram.  The systems engineer’s objective is a validated model that can be trusted to make good-enough predictions; ones that support making wiser decisions of which design options to implement, and which not to.


So if a systems engineer presented us with a conceptual model that is intended to assist our understanding, then we will require some evidence that all stages of the Vee Diagram process have been completed.  Evidence that provides assurance that the model predictions can be trusted.  And the scope over which they can be trusted.


Last month a report was published by the Nuffield Trust that is entitled “Understanding patient flow in hospitals”  and it asserts that traffic flow on a motorway is a valid conceptual model of patient flow through a hospital.  Here is a direct quote from the second paragraph in the Executive Summary:

nuffield_report_01
Unfortunately, no evidence is provided in the report to support the validity of the statement and that omission should ring an alarm bell.

The observation that “the hospitals with the least free space struggle the most” is not a validation of the conceptual model.  Validation requires a concrete experiment.


To illustrate why observation is not validation let us consider a scenario where I have a headache and I take a paracetamol and my headache goes away.  I now have some evidence that shows a temporal association between what I did (take paracetamol) and what I got (a reduction in head pain).

But this is not a valid experiment because I have not considered the other seven possible combinations of headache before (Y/N), paracetamol (Y/N) and headache after (Y/N).
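For the curious, the eight combinations are easy to enumerate; a trivial sketch (the labels are mine):

```python
from itertools import product

# Enumerate all eight combinations of the three binary observations
# in the headache example.  The single anecdote ("headache, took
# paracetamol, headache gone") covers only ONE of these rows.
combinations = list(product("YN", repeat=3))
for before, took, after in combinations:
    print(f"headache before: {before}  paracetamol: {took}  headache after: {after}")
print(len(combinations))  # 8 rows in total
```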

An association cannot be used to prove causation; not even a temporal association.

When I do not understand the cause, and I am without evidence from a well-designed experiment, then I might be tempted to intuitively jump to the (invalid) conclusion that “headaches are caused by lack of paracetamol!” and if untested this invalid judgement may persist and even become a belief.


Understanding causality requires an approach called counterfactual analysis; otherwise known as “What if?” And we can start that process with a thought experiment using our rhetorical model.  But we must remember that we must always validate the outcome with a real experiment. That is how good science works.

A famous thought experiment was conducted by Albert Einstein when he asked the question “If I were sitting on a light beam and moving at the speed of light what would I see?” This question led him to the Theory of Relativity which completely changed the way we now think about space and time.  Einstein’s model has been repeatedly validated by careful experiment, and has allowed engineers to design and deliver valuable tools such as the Global Positioning System which uses relativity theory to achieve high positional precision and accuracy.


So let us conduct a thought experiment to explore the ‘faster movement requires more space‘ statement in the case of patient flow in a hospital.

First, we need to define what we mean by the words we are using.

The phrase ‘faster movement’ is ambiguous.  Does it mean higher flow (more patients per day being admitted and discharged) or does it mean shorter length of stay (the interval between the admission and discharge events for individual patients)?

The phrase ‘more space’ is also ambiguous. In a hospital that implies physical space i.e. floor-space that may be occupied by corridors, chairs, cubicles, trolleys, and beds.  So are we actually referring to flow-space or storage-space?

What we have in this over-simplified statement is the conflation of two concepts: flow-capacity and space-capacity. They are different things. They have different units. And the result of conflating them is meaningless and confusing.


However, our stated goal is to improve understanding, so let us consider one combination, and let us be careful to be more precise with our terminology: “higher flow always requires more beds”. Does it? Can we disprove this assertion with an example where higher flow required fewer beds (i.e. less space-capacity)?

The relationship between flow and space-capacity is well understood.

The starting point is Little’s Law which was proven mathematically in 1961 by J.D.C. Little and it states:

Average work in progress = Average lead time  X  Average flow.

In the hospital context, work in progress is the number of occupied beds, lead time is the length of stay and flow is admissions or discharges per time interval (which must be the same on average over a long period of time).

(NB. Engineers are rather pedantic about units so let us check that this makes sense: the unit of WIP is ‘patients’, the unit of lead time is ‘days’, and the unit of flow is ‘patients per day’ so ‘patients’ = ‘days’ * ‘patients / day’. Correct. Verified. Tick.)

So, is there a situation where flow can increase and WIP can decrease? Yes. When lead time decreases. Little’s Law says that is possible. We have disproved the assertion.


Let us take the other interpretation of higher flow as shorter length of stay: i.e. shorter length of stay always requires more beds.  Is this correct? No. If flow remains the same then Little’s Law states that we will require fewer beds. This assertion is disproved as well.
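Little's Law is simple enough to check numerically. A minimal sketch with illustrative (made-up) numbers, not real hospital data:

```python
# A minimal sketch of Little's Law applied to hospital beds.
# Assumption: steady state and long-run averages, the only regime
# in which Little's Law is guaranteed to hold.

def occupied_beds(avg_length_of_stay_days: float,
                  avg_flow_patients_per_day: float) -> float:
    """Little's Law: average WIP = average lead time x average flow."""
    return avg_length_of_stay_days * avg_flow_patients_per_day

# Baseline: 50 admissions/day with an 8-day average stay.
baseline = occupied_beds(8.0, 50.0)   # 400 beds occupied

# Higher flow AND fewer beds, made possible by a shorter stay:
# 60 admissions/day with a 6-day average stay.
improved = occupied_beds(6.0, 60.0)   # 360 beds occupied

print(baseline, improved)
```

Flow rose by 20% while occupied beds fell by 10%, which is exactly the counter-example that disproves “higher flow always requires more beds”.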

And we need to remember that Little’s Law is proven to be valid for averages. Does that shed any light on the source of our confusion? Could the assertion about flow and beds actually be about the variation in flow over time, and not about the average flow?


And this is also well understood. The original work on it was done almost exactly 100 years ago by Agner Krarup Erlang and the problem he looked at was the quality of customer service of the early telephone exchanges. Specifically, how likely was the caller to get the “all lines are busy, please try later” response.

What Erlang showed was that there is a mathematical relationship between the number of calls being made (the demand), the probability of a call being connected first time (the service quality) and the number of telephone circuits and switchboard operators available (the service cost).


So it appears that we already have a validated mathematical model that links flow, quality and cost that we might use if we substitute ‘patients’ for ‘calls’, ‘beds’ for ‘telephone circuits’, and ‘being connected’ for ‘being admitted’.
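To make that substitution concrete, here is a sketch of Erlang's lost-call (Erlang B) formula, computed with the standard recursion. The numbers are illustrative assumptions; note also that Erlang B assumes blocked arrivals are lost, whereas arrivals that wait in a queue would need the related Erlang C model:

```python
def erlang_b(offered_load: float, servers: int) -> float:
    """Erlang B blocking probability: the chance that an arrival
    finds all servers (telephone circuits / beds) occupied.

    offered_load is in erlangs: arrival rate x average service time.
    Uses the standard numerically stable recursion:
      B(0) = 1;  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = (offered_load * b) / (n + offered_load * b)
    return b

# Illustrative: 2 admissions/day x 10 days average stay = 20 erlangs.
load = 2.0 * 10.0
for beds in (20, 25, 30):
    print(f"{beds} beds -> P(no bed free) = {erlang_b(load, beds):.3f}")
```

The non-linear shape of this relationship is the point: near full occupancy, a small change in bed numbers produces a large change in the probability of finding no bed free.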

And this topic of patient flow, A&E performance and Erlang queues has been explored already … here.

So a telephone exchange is a more valid model of a hospital than a motorway.

We are now making progress in deepening our understanding.


The use of an invalid, untested, conceptual model is sloppy systems engineering.

So if the engineering is sloppy we would be unwise to fully trust the conclusions.

And I share this feedback in the spirit of black box thinking because I believe that there are some valuable lessons to be learned here – by us all.


To vote for this topic please click here.
To subscribe to the blog newsletter please click here.
To email the author please click here.

Patient Traffic Engineering

motorway

[Beep] Bob’s computer alerted him to Leslie signing on to the Webex session.

<Bob> Good afternoon Leslie, how are you? It seems a long time since we last chatted.

<Leslie> Hi Bob. I am well and it has been a long time. If you remember, I had to loop out of the Health Care Systems Engineering training because I changed job, and it has taken me a while to bring a lot of fresh skeptics around to the idea of improvement-by-design.

<Bob> Good to hear, and I assume you did that by demonstrating what was possible by doing it, delivering results, and describing the approach.

<Leslie> Yup. And as you know, even with objective evidence of improvement it can take a while because that exposes another gap, the one between intent and impact.  Many people get rather defensive at that point, so I have had to take it slowly. Some people get really fired up though.

 <Bob> Yes. Respect, challenge, patience and persistence are all needed. So, where shall we pick up?

<Leslie> The old chestnut of winter pressures and A&E targets.  Except that it is an all-year problem now and according to what I read in the news, everyone is predicting a ‘melt-down’.

<Bob> Did you see last week’s IS blog on that very topic?

<Leslie> Yes, I did!  And that is what prompted me to contact you and to re-start my CHIPs coaching.  It was a real eye opener.  I liked the black swan code-named “RC9” story, it makes it sound like a James Bond film!

<Bob> I wonder how many people dug deeper into how “RC9” achieved that rock-steady A&E performance despite a rising tide of arrivals and admissions?

<Leslie> I did, and I saw several examples of anti-carve-out design.  I have read through my notes and we have talked about carve out many times.

<Bob> Excellent. Being able to see the signs of competent design is just as important as the symptoms of inept design. So, what shall we talk about?

<Leslie> Well, by co-incidence I was sent a copy of a report entitled “Understanding patient flow in hospitals” published by one of the leading Think Tanks and I confess it made no sense to me.  Can we talk about that?

<Bob> OK. Can you describe the essence of the report for me?

<Leslie> Well, in a nutshell it said that flow needs space so if we want hospitals to flow better we need more space, in other words more beds.

<Bob> And what evidence was presented to support that hypothesis?

<Leslie> The authors equated the flow of patients through a hospital to the flow of traffic on a motorway. They presented a table of numbers that made no sense to me, I think partly because there are no units stated for some of the numbers … I’ll email you a picture.

traffic_flow_dynamics

<Bob> I agree this is not a very informative table.  I am not sure what the definition of “capacity” is here, and it may be that the authors are equating “hospital bed” to “area of tarmac”.  Anyway, the assertion that hospital flow is equivalent to motorway flow is inaccurate.  There are some similarities and traffic engineering is an interesting subject, but they are not equivalent.  A hospital is more like a busy city with junctions, cross-roads, traffic lights, roundabouts, zebra crossings, pelican crossings and all manner of unpredictable factors such as cyclists and pedestrians. Motorways are intentionally designed without these “impediments”, for obvious reasons! A complex adaptive flow system like a hospital cannot be equated to a motorway. It is a dangerous over-simplification.

<Leslie> So, if the hospital-motorway analogy is invalid then the conclusions are also invalid?

<Bob> Sometimes, by accident, we get a valid conclusion from an invalid method. What were the conclusions?

<Leslie> That the solution to improving A&E performance is more space (i.e. hospital beds) but there is no more money to build them or people to staff them.  So the recommendations are to reduce volume, redesign rehabilitation and discharge processes, and improve IT systems.

<Bob> So just re-iterating the habitual exhortations and nothing about using well-understood systems engineering methods to accurately diagnose the actual root cause of the ‘symptoms’, which is likely to be the endemic carveoutosis multiforme, and then treat accordingly?

<Leslie> No. I could not find the term “carve out” anywhere in the document.

<Bob> Oh dear.  Based on that observation, I do not believe this latest Think Tank report is going to be any more effective than the previous ones.  Perhaps asking “RC9” to write an account of what they did and how they learned to do it would be more informative?  They did not reduce volume, and I doubt they opened more beds, and their annual report suggests they identified some space and flow carveoutosis and treated it. That is what a competent systems engineer would do.

<Leslie> Thanks Bob. Very helpful as always. What is my next step?

<Bob> Some ISP-2 brain-teasers, a juicy ISP-2 project, and some one day training workshops for your all-fired-up CHIPs.

<Leslie> Bring it on!


For more posts like this please vote here.
For more information please subscribe here.

Outliers

reading_a_book_pa_150_wht_3136

An effective way to improve is to learn from others who have demonstrated the capability to achieve what we seek.  To learn from success.

Another effective way to improve is to learn from those who are not succeeding … to learn from failures … and that means … to learn from our own failings.

But from an early age we are socially programmed with a fear of failure.

The training starts at school where failure is not tolerated, nor is challenging the given dogma.  Paradoxically, the effect of our fear of failure is that our ability to inquire, experiment, learn, adapt, and to be resilient to change is severely impaired!

So further failure in the future becomes more likely, not less likely. Oops!


Fortunately, we can develop a healthier attitude to failure and we can learn how to harness the gap between intent and impact as a source of energy, creativity, innovation, experimentation, learning, improvement and growing success.

And health care provides us with ample opportunities to explore this unfamiliar terrain. The creative domain of the designer and engineer.


The scatter plot below is a snapshot of the A&E 4 hr target yield for all NHS Trusts in England for the month of July 2016.  The “constitutional” performance requirement is better than 95%.  The delivered whole system average is 85%.  The majority of Trusts are failing, and the Trust-to-Trust variation is rather wide. Oops!

This stark picture of the gap between intent (95%) and impact (85%) prompts some uncomfortable questions:

Q1: How can one Trust achieve 98% and yet another can do no better than 64%?

Q2: What can all Trusts learn from these high and low flying outliers?

[NB. I have not asked the question “Who should we blame for the failures?” because the name-shame-blame-game is also a predictable consequence of our fear-of-failure mindset.]


Let us dig a bit deeper into the information mine, and as we do that we need to be aware of a trap:

A snapshot-in-time tells us very little about how the system and the set of interconnected parts is behaving-over-time.

We need to examine the time-series charts of the outliers, just as we would ask for the temperature, blood pressure and heart rate charts of our patients.

Here are the last six years of month-by-month A&E 4 hr charts for a sample of the high-fliers. They are all slightly different, and we get the impression that the lower two are struggling to stay aloft more than the upper two … especially in winter.


And here are the last six years of month-by-month A&E 4 hr charts for a sample of the low-fliers.  The Mark I Eyeball Test results are clear … these swans are falling out of the sky!


So we need to generate some testable hypotheses to explain these visible differences, and then we need to examine the available evidence to test them.

One hypothesis is “rising demand”.  It says that “the reason our A&E is failing is because demand on A&E is rising“.

Another hypothesis is “slow flow”.  It says that “the reason our A&E is failing is because of the slow flow through the hospital because of delayed transfers of care (DTOCs)“.

So, if these hypotheses account for the behaviour we are observing then we would predict that the “high fliers” are (a) diverting A&E arrivals elsewhere, and (b) reducing admissions to free up beds to hold the DTOCs.

Let us look at the freely available data for the highest flyer … the green dot on the scatter gram … code-named “RC9”.

The top chart is the A&E arrivals per month.

The middle chart is the A&E 4 hr target yield per month.

The bottom chart is the emergency admissions per month.

Both arrivals and admissions are increasing, while the A&E 4 hr target yield is rock steady!

And arranging the charts this way allows us to see the temporal patterns more easily (and the images are deliberately arranged to show the overall pattern-over-time).

Patterns like the change-for-the-better that appears in the middle of the winter of 2013 (i.e. when many other trusts were complaining that their sagging A&E performance was caused by “winter pressures”).

The objective evidence seems to disprove the “rising demand”, “slow flow” and “winter pressure” hypotheses!

So what can we learn from our failure to adequately explain the reality we are seeing?


The trust code-named “RC9” is Luton and Dunstable, and it is an average district general hospital, on the surface.  So to reveal some clues about what actually happened there, we need to read their Annual Report for 2013-14.  It is a public document and it can be downloaded here.

This is just a snippet …

… and there are lots more knowledge nuggets like this in there …

… it is a treasure trove of well-known examples of good system flow design.

The results speak for themselves!


Q: How many black swans does it take to disprove the hypothesis that “all swans are white”?

A: Just one.

“RC9” is a black swan. An outlier. A positive deviant. “RC9” has disproved the “impossibility” hypothesis.

And there is another flock of black swans living in the North East … in the Newcastle area … so the “Big cities are different” hypothesis does not hold water either.


The challenge here is a human one.  A human factor.  Our learned fear of failure.

Learning-how-to-fail is the way to avoid failing-how-to-learn.

And to read more about that radical idea I strongly recommend reading the recently published book called Black Box Thinking by Matthew Syed.

It starts with a powerful story about the impact of human factors in health care … and here is a short video of Martin Bromiley describing what happened.

The “black box” that both Martin and Matthew refer to is the one that is used in air accident investigations to learn from what happened, and to use that learning to design safer aviation systems.

Martin Bromiley has founded a charity to support the promotion of human factors in clinical training, the Clinical Human Factors Group.

So if we can muster the courage and humility to learn how to do this in health care for patient safety, then we can also learn how to do it for flow, quality and productivity.

Our black swan called “RC9” has demonstrated that this goal is attainable.

And the body of knowledge needed to do this already exists … it is called Health and Social Care Systems Engineering (HSCSE).


For more posts like this please vote here.
For more information please subscribe here.
To email the author please click here.


Postscript: And I am pleased to share that Luton & Dunstable features in the House of Commons Health Committee report entitled Winter Pressures in A&E Departments that was published on 3rd Nov 2016.

Here is part of what L&D shared to explain their deviant performance:

luton_nuggets

These points describe rather well the essential elements of a pull design, which is the antidote to the rather more prevalent pressure cooker design.

Righteous Indignation

On 5th July 2018, the NHS will be 70 years old, and like many of those it was created to serve, it has become elderly and frail.

We live much longer, on average, than we used to, and the growing population of frail elderly is presenting an unprecedented health and social care challenge that the NHS was never designed to manage.

The creases and cracks are showing, and each year feels more pressured than the last.


This week a story that illustrates this challenge was shared with me along with permission to broadcast …

“My mother-in-law is 91, in general she is amazingly self-sufficient, able to arrange most of her life with reasonable care at home via a council tendered care provider.

She has had Parkinson’s for years, needing regular medication to enable her to walk and eat (it affects her jaw and swallowing capability). So the care provision is time critical, to get up, have lunch, have tea and get to bed.

She’s also going deaf, profoundly in one ear, pretty bad in the other. She wears a single ‘in-ear’ aid, which has a micro-switch on/off toggle, far too small for her to see or operate. Most of the carers can’t put it in, and fail to switch it off.

Her care package is well drafted, but rarely adhered to. It should be 45 minutes in the morning, 30, 15, 30 through the day. Each time administering the medications from the dossette box. Despite the register in/out process from the carers, many visits are far less time than designed (and paid for by the council), with some lasting 8 minutes instead of 30!

Most carers don’t ensure she takes her meds, which sometimes leads to dropped pills on the floor, with no hope of picking them up!

While the care is supposedly ‘time critical’, the provider doesn’t manage it via allocated time slots; they simply provide lists that imply the order of work but don’t make it clear. My mother-in-law (Mum) cannot be certain when the visit will occur, which makes going out very difficult.

The carers won’t cook food, but will micro-wave it, thus if a cooked meal is to happen, my Mum will start it, with the view of the carers serving it. If they arrive early, the food is under-cooked (“Just put vinegar on it, it will taste better”) and if they arrive late, either she’ll try to get it out herself, or it will be dried out / cremated.

Her medication pattern should be every 4 to 5 hours in the day, but with an 11:40 lunch visit, a 17:45 tea visit, and a 19:30 bed prep visit, she finishes up with too long between meds, followed by far too close together. Her GP has stated that this is making her health and Parkinson’s worse.

Mum also rarely drinks enough through the day; in the hot weather she tends to dehydrate, which we try to persuade her must be avoided. Part of the problem is Parkinson’s related, part the hassle of getting to the toilet more often. Parkinson’s affects swallowing, so she tends to sip, rather than gulp. By sipping often, she deludes herself that she is drinking enough.

She also is stubbornly not adjusting methods to align to issues. She drinks tea and water from her lovely bone china cups. Because her grip is not good and her hand shakes, we can’t fill those cups very high, so her ‘cup of tea’ is only a fraction of what it could be.

As she can walk around most days, there’s no way of telling whether she drinks enough, and she frequently has several different carers in a day.

When Mum gets dehydrated, it affects her memory and her reasoning, similar to the onset of dementia. It also seems to increase her probability of falling, perhaps due to forgetting to be defensive.

When she falls, she cannot get up, thus usually presses her alarm dongle, resulting in me going round to get her up, check for concussion, and check for other injuries, prior to settling her down again. These can be ten weeks apart, through to a few in a week.

When she starts to hallucinate, we do our very best to increase drinking, seeking to re-hydrate.

On Sunday, something exceptional happened, Mum fell out of bed and didn’t press her alarm. The carer found her and immediately called the paramedics and her GP, who later called us in. For the first time ever she was not sufficiently mentally alert to press her alarm switch.

After initial assessment, she was taken to A&E, luckily being early on Sunday morning it was initially quite quiet.

Hospital

The Hospital is on the boundary between two counties, within a large town, a mixture of new build elements, between aging structures. There has been considerable investment within A&E, X-ray etc. due partly to that growth industry and partly due to the closures of cottage hospitals and reducing GP services out of hours.

It took some persuasion to have Mum put on a drip, as she hadn’t had breakfast or any fluids, and dehydration was a probable primary cause of her visit. They took bloods, an X-ray of her chest (to check for fall related damage) and a CT scan of her head, to see if there were issues.

I called the carers to tell them to suspend visits, but the phone simply rang without being answered (not for the first time).

After about six hours, during which time she was awake, but not very lucid, she was transferred to the day ward, where after assessment she was given some meds, a sandwich and another drip.

Later that evening we were informed she was to be kept on a drip for 24 hours.

The next day (Bank Holiday Monday) she was transferred to another ward. When we arrived she was not on a drip, so their decisions had been reversed.

I spoke at length with her assigned staff nurse, and was told the following: Mum could come out soon if she had a 24/7 care package, and that, as well as the known issues, Mum now has COPD. When I asked her what COPD was, she clearly didn’t know, but flustered ‘it is a form of heart failure that affects breathing’. (I looked it up on my phone a few minutes later.)

So, to get mum out, I had to arrange a 24/7 care package, and nowhere was open until the next day.

Trying to escalate care isn’t going to be easy, even in the short term. My emails to ‘usually very good’ social care people achieved nothing to start with on Tuesday, and their phone was on the ‘out of hours’ setting for evenings and weekends, despite being during the day of a normal working week.

Eventually I was told that there would be nothing to achieve until the hospital processed the correct exit papers to Social Care.

When we went in to the hospital (on Tuesday) a more senior nurse was on duty. She explained that mum was now medically fit to leave hospital if care could be re-established. I told her that I was trying to set up 24/7 care as advised. She looked through the notes and said 24/7 care was not needed; the normal 4 x a day was enough. (She was clearly angry.)

I then explained that the newly diagnosed COPD may be part of the problem; she said that she has worked with COPD patients for 16 years, and mum definitely doesn’t have COPD. While she was amending the notes, I noticed that mum’s allergy to aspirin wasn’t there, despite us advising that on entry. The nurse also explained that as the hospital is in one county, but almost half their patients are from another, they are always stymied on ‘joined-up working’.

While we were talking with mum, her meds came round and she was only given paracetamol for her pain, but NOT her meds for Parkinson’s. I asked that nurse why that was the case, and she said that was not on her meds sheet. So I went back to the more senior nurse, she checked the meds as ordered and Parkinson’s was required 4 x a day, but it was NOT transferred onto the administration sheet. The doctor next to us said she would do it straight away, and I was told, “Thank God you are here to get this right!”

Mum was given her food: some soup, which she couldn’t spoon due to lack of meds, and a dry, tough lump of gammon with some mashed sweet potato, which she couldn’t chew.

When I asked why meds were given at five, after the delivery of food, they said ‘That’s our system!’ When I suggested that administering the Parkinson’s meds an hour before food would increase her ability to eat it, they said ‘That’s a really good idea, we should do that!’

On Wednesday I spoke with Social Care to try to re-start care to enable mum to get out. At that time the social worker could neither get through to the hospital nor the carers. We spoke again after I had arrived in hospital, but before I could do anything.

On arrival at the hospital I was amazed to see the white-board declaring that mum would be discharged at noon on Monday (in five days’ time!). I spoke with the assigned staff nurse who said, “That’s the earliest that her carers can re-start, and anyway it’s nearly the weekend”.

I said, “Mum was medically OK for discharge on Tuesday, after only two days in the hospital, and you are complacent to block the bed for another six days. Have you spoken with the discharge team?”

She replied, “No, they’ll have gone home by now, and I’ve not seen them all day.” I told her that they work shifts and that they would be there, and made it quite clear that if she didn’t contact SHEDs I’d go walkabout to find them. A few minutes later she told me a SHED member would be with me in 20 minutes.

While the hospital had resolved her medical issues, she was stuck in a ward with no help to walk, the only TV via a complex pay-for system she had no hope of understanding, no day room, no entertainment and no exercise; just boredom, encouraged to lie in bed and to wear a pad because she won’t be taken to the loo in time.

When the SHED worker arrived I explained the staff nurse’s attitude; she said she would try to improve those thinking processes. She took lots of details, then said that so long as mum could walk with assistance, she could be released after noon, with NHS carer support four times a day from the afternoon. Mum walked around the ward for the first time since being admitted, and while shaky she was fine.

Hopefully all will be better now?”


This story is not exceptional … I have heard it many times from many people in many different parts of the UK.  It is the norm rather than the exception.

It is the story of a fragmented and fractured system of health and social care.

It is the story of frustration for everyone – patients, family, carers, NHS staff, commissioners, and tax-payers.  A fractured care system is unsafe, chaotic, frustrating and expensive.

There are no winners here.  It is not a trade-off, a compromise or the best possible.

It is just poor system design.


What we want has a name … it is called a Frail Safe design … and this is not a new idea.  It is achievable. It has been achieved.

http://www.frailsafe.org.uk

So why is this still happening?

The reason is simple – the NHS does not know any other way.  It does not know how to design itself to be safe, calm, efficient, high quality and affordable.

It does not know how to do this because it has never learned that this is possible.

But it is possible to do, and it is possible to learn, and that learning does not take very long or cost very much.

And the return vastly outweighs the investment.


The title of this blog is Righteous Indignation

… if your frail elderly parents, relatives or friends were forced to endure a system that is far from frail safe; and you learned that this situation was avoidable and that a safer design would be less expensive; and all you hear is “can’t do” and “too busy” and “not enough money” and “not my job” …  wouldn’t you feel a sense of righteous indignation?

I do.


For more posts like this please vote here.
For more information please subscribe here.

Fragmentation Cost

figure_falling_with_arrow_17621

The late Russell Ackoff used to tell a great story. It goes like this:

“A team set themselves the stretch goal of building the World’s Best Car.  So they put their heads together and came up with a plan.

First they talked to drivers and drew up a list of all the things that the World’s Best Car would need to have. Safety, speed, low fuel consumption, comfort, good looks, low emissions and so on.

Then they drew up a list of all the components that go into building a car. The engine, the wheels, the bodywork, the seats, and so on.

Then they set out on a quest … to search the world for the best components … and to bring the best one of each back.

Then they could build the World’s Best Car.

Or could they?

No.  All they built was a pile of incompatible parts. The WBC did not work. It was a futile exercise.


Then the penny dropped. The features in their wish-list were not associated with any of the separate parts. Their desired performance emerged from the way the parts worked together. The working relationships between the parts were as necessary as the parts themselves.

And a pile of average parts that work together will deliver a better performance than a pile of best parts that do not.

So the relationships were more important than the parts!


From this they learned that the quickest, easiest and cheapest way to degrade performance is to make working-well-together a bit more difficult.  Irrespective of the quality of the parts.


Q: So how do we reverse this degradation of performance?

A: Add more failure-avoidance targets of course!

But we just discovered that the performance is the effect of how well the parts work together.  Will another failure-metric-fueled performance target help? How will each part know what it needs to do differently – if anything?  How will each part know if the changes they have made are having the intended impact?

Fragmentation has a cost.  Fear, frustration, futility and ultimately financial failure.

So if performance is fading … the quality of the working relationships is a good place to look for opportunities for improvement.

Precious Life Time

stick_figure_help_button_150_wht_9911

Imagine this scenario:

You develop some non-specific symptoms.

You see your GP who refers you urgently to a 2 week clinic.

You are seen, assessed, investigated and informed that … you have cancer!


The shock, denial, anger, blame, bargaining, depression, acceptance sequence kicks off … it is sometimes called the Kübler-Ross grief reaction … and it is a normal part of the human psyche.

But there is better news. You also learn that your condition is probably treatable, but that it will require chemotherapy, and that there are no guarantees of success.

You know that time is of the essence … the cancer is growing.

And time has a new relevance for you … it is called life time … and you know that you may not have as much left as you had hoped.  Every hour is precious.


So now imagine your reaction when you attend your local chemotherapy day unit (CDU) for your first dose of chemotherapy and have to wait four hours for the toxic but potentially life-saving drugs.

They are very expensive and they have a short shelf-life so the NHS cannot afford to waste any.   The Aseptic Unit team wait until all the safety checks are OK before they proceed to prepare your chemotherapy.  That all takes time, about four hours.

Once the team get to know you it will go quicker. Hopefully.

It doesn’t.

The delays are not the result of unfamiliarity … they are the result of the design of the process.

All your fellow patients seem to suffer repeated waiting too, and you learn that they have been doing so for a long time.  That seems to be the way it is.  The waiting room is well used.

Everyone seems resigned to the belief that this is the best it can be.

They are not happy about it but they feel powerless to do anything.


Then one day someone demonstrates that it is not the best it can be.

It can be better.  A lot better!

And they demonstrate that this better way can be designed.

And they demonstrate that they can learn how to design this better way.

And they demonstrate what happens when they apply their new learning …

… by doing it and by sharing their story of “what-we-did-and-how-we-did-it“.

CDU_Waiting_Room

If life time is so precious, why waste it?

And perhaps the most surprising outcome was that their safer, quicker, calmer design was also 20% more productive.

Resuscitate-Review-Repair

Portsmouth_News_20160609

We form emotional attachments to places where we have lived and worked.  And it catches our attention when we see them in the news.

So this headline caught my eye, because I was a surgical SHO in Portsmouth in the closing years of the Second Millennium.  The good old days when we still did 1:2 on call rotas (i.e. up to 104 hours per week) and we were paid 70% LESS for the on call hours than the Mon-Fri 9-5 work.  We also had stable ‘firms’, superhuman senior registrars, a canteen that served hot food and strong coffee around the clock, and doctors’ mess parties that were … well … messy!  A lot has changed.  And not all for the better.

Here is the link to the fuller story about the emergency failures.

And from it we get the impression that this is a recent problem; and that with a bit of a smack and some name-shame-blame-game feedback from the CQC, all will be restored to robust health. H’mm. I am not so sure that is the full story.


Portsmouth_A&E_4Hr_Yield

Here is the monthly aggregate A&E 4-hour target performance chart for Portsmouth from 2010 to date.

It says “this is not a new problem“.

It also says that the ‘patient’ has been deteriorating spasmodically over six years and is now critically-ill.

And giving a critically-ill hospital a “good telling off” is about as effective as telling a critically-ill patient to “pull themselves together“.  Inept management.

In A&E a critically-ill patient requires competent resuscitation using a tried-and-tested process of ABC.  Airway, Breathing, Circulation.


Also, the A&E 4-hour performance is only a symptom of the sickness in the whole urgent care system.  It is the reading on an emotometer inserted into the A&E orifice of the acute hospital!  Just one piece in a much bigger flow jigsaw.

It only tells us the degree of distress … not the diagnosis … nor the required treatment.


So what level of A&E health can we realistically expect to be able to achieve? What is possible in the current climate of austerity? Just how chilled-out can the A&E cucumber run?

Luton_A&E_4Hr_Yield

This is the corresponding A&E emotometer chart for a different district general hospital somewhere else in NHS England.

Luton & Dunstable Hospital to be specific.

This A&E happiness chart looks a lot healthier and it seems to be getting even healthier over time too.  So this is possible.


Yes, but … if our hospital deteriorates enough to be put on the ‘critical list’ then we need to call in an Emergency Care Intensive Support Team (ECIST) to resuscitate us.

Kettering_A&E_4Hr_Yield

A very good idea.

And how do their critically-ill patients fare?

Here is the chart of one of them. The significant improvement following the ‘resuscitation’ is impressive to be sure!

But, disappointingly, it was not sustained and the patient ‘crashed’ again. Perhaps they were just too poorly? Perhaps the first resuscitation call was sent out too late? But at least they tried their best.

An experienced clinician might comment: Those are indeed plausible explanations, but before we conclude that is the actual cause, can I check that we did not just treat the symptoms and miss the disease?


Q: So is it actually possible to resuscitate and repair a sick hospital?  Is it possible to restore it to sustained health, by diagnosing and treating the cause, and not just the symptoms?


Monklands_A&E_4Hr_Yield

Here is the corresponding A&E emotometer chart of yet another hospital.

It shows the same pattern of deteriorating health. And it shows a dramatic improvement.  It appears to have responded to some form of intervention.

And this time the significant improvement has sustained. The patient did not crash-and-burn again.

So what has happened here that explains this different picture?

This hospital had enough insight and humility to seek the assistance of someone who knew what to do and who had a proven track record of doing it.  Dr Kate Silvester to be specific.  A dual-trained doctor and manufacturing systems engineer.

Dr Kate is now a health care systems engineer (HCSE), and an experienced ‘hospital doctor’.

Dr Kate helped them to learn how to diagnose the root causes of their A&E 4-hr fever, and then she showed them how to design an effective treatment plan.

They did the re-design; they tested it; and they delivered their new design. Because they owned it, they understood it, and they trusted their own diagnosis-and-design competence.

And the evidence of their impact matching their intent speaks for itself.

A Recipe for Chaos

growing_workload_anim_6858

There is an easy and quick-to-cook recipe for chaos.

All we have to do is to ensure that the maximum number of jobs that we can do in a given time is set equal to the average number of jobs that we are required to do in the same period of time.

Eh?

That does not make sense.  Our intuition says that looks like the perfect recipe for a hyper-efficient, zero-waste, zero idle-time design which is what we want.


I know it does, but it isn’t.  Our intuition is tricking us.

It is the recipe for chaos – and to prove it we will have to do a real-world experiment – because proving it using maths is really difficult. So difficult, in fact, that the formula was not revealed until 1962 – by a mathematician called John Kingman while a postgraduate student at Pembroke College, Cambridge.
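For reference, the formula in question is now widely known as Kingman’s approximation for the mean waiting time in a single-server queue with variable arrivals and service times. In standard textbook notation (the symbols here are the conventional ones, not the author’s):

```latex
% Kingman's approximation for the mean wait E[W] in a G/G/1 queue
E[W] \;\approx\; \left(\frac{\rho}{1-\rho}\right)
                 \left(\frac{C_a^2 + C_s^2}{2}\right)\tau
% where:
%   \rho     = utilisation (average demand divided by maximum capacity)
%   C_a, C_s = coefficients of variation of inter-arrival and service times
%   \tau     = mean service time
```

Notice the two behaviours it predicts: with zero variation (C_a = C_s = 0) the wait vanishes, but as utilisation approaches 100% (the “maximum equals average” recipe above) the first factor grows without limit.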

The empirical experiment is very easy to do – all we need is a single step process – and a stream of jobs to do.

And we could do it for real, or we can simulate it using an Excel spreadsheet – which is much quicker.


So we set up our spreadsheet to simulate a new job arriving every X minutes and each job taking X minutes to complete.

Our operator can only do one job at a time so if a job arrives and the operator is busy the job joins the back of a queue of jobs and waits.

When the operator finishes a job it takes the next one from the front of the queue, the one that has been waiting longest.

And if there is no queue the operator will wait until the next job arrives.

Simple.

And when we run the simulation we see that there is indeed no queue, no jobs waiting, and the operator is always busy (i.e. 100% utilised). Perfection!

BUT ….

This is not a realistic scenario.  In reality there is always some random variation.  Not all jobs require the same length of time, and jobs do not arrive at precisely the right intervals.

No matter, our confident intuition tells us. It will average out.  Swings-and-roundabouts. Give-and-take.

It doesn’t.

And if you do not believe me just build the simple Excel model outlined above, verify that it works, then add some random variation to the time it takes to do each job … and observe what happens to the average waiting time.

What you will discover is that as soon as we add even a small amount of random variation we get a queue, and waiting and idle resources as well!

But not a steady, stable, predictable queue … Oh No! … We get an unsteady, unstable and unpredictable queue … we get chaos.

Try it.
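For anyone without a spreadsheet to hand, the same experiment can be sketched in a few lines of Python (a minimal illustration of the idea, not the author’s Excel model):

```python
import random

def mean_wait(n_jobs, next_arrival, next_service, seed=1):
    """Simulate a single-server FIFO queue; return the average wait per job."""
    random.seed(seed)
    t_arrive = free_at = total_wait = 0.0
    for _ in range(n_jobs):
        t_arrive += next_arrival()        # when this job turns up
        start = max(t_arrive, free_at)    # it waits if the operator is busy
        total_wait += start - t_arrive
        free_at = start + next_service()  # operator busy until the job is done
    return total_wait / n_jobs

# No variation: a job arrives every 10 minutes, each takes exactly 10 minutes.
no_variation = mean_wait(10_000, lambda: 10.0, lambda: 10.0)
print(no_variation)  # 0.0 ... no queue, no waiting, 100% utilisation

# Same averages, but random (exponential) variation in both.
with_variation = mean_wait(10_000,
                           lambda: random.expovariate(1 / 10),
                           lambda: random.expovariate(1 / 10))
print(with_variation)  # large, and it keeps growing as n_jobs increases
```

Re-running with different seeds shows the other unpleasant property: the average wait is wildly different from run to run – an unstable, unpredictable queue, just as described.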


So what? How does this abstract ‘queueing theory’ apply to the real world?


Well, suppose we have a single black box system called ‘a hospital’ – patients arrive and we work hard to diagnose and treat them.  And so long as we have enough resource-time to do all the jobs we are OK. No unstable queues. No unpredictable waiting.

But time-costs-money and we have an annual cost improvement programme (CIP) target that we are required to meet, so we need to ‘trim’ resource-time capacity to push up resource utilisation.  And we will call that an ‘efficiency improvement’, which is good … yes?

It isn’t actually.  I can just as easily push up my ‘utilisation’ by working slower, or doing stuff I do not need to, or by making mistakes that I have to check for and then correct.  I can easily make myself busier and delude myself I am working harder.

And we are also a victim of our own success … the better we do our job … the longer people live and the more workload they put on the health and social care system.

So we have the perfect storm … the perfect recipe for chaos … slowly rising demand … slowly shrinking budgets … and an inefficient ‘business’ design.

And that in a nutshell is the reason the NHS is descending into chaos.


So what is the solution?

Reduce demand? Stop people getting sick? Or make them sicker so they die quicker?

Increase budgets? Where will the money come from? Beg? Borrow? Steal? Economic growth?

Improve the design?  Now there’s a thought. But how? By using the same beliefs and behaviours that have created the current chaos?

Maybe we need to challenge some invalid beliefs and behaviours … and replace those that fail the Reality Test with some more effective ones.

The NHS Cockpit Dashboard

A few weeks ago I raised the undiscussable issue that the NHS feels like it is on a downward trajectory … and that what might be needed are some better engines … and to design, test, build and install them we will need some health care system engineers (HCSEs) … and that we do not appear to have enough of those. None, in fact.

The feedback shows that many people resonated with this sentiment.


This week I had the opportunity to peek inside the NHS Cockpit and look at the Dashboard … and this is what I saw on the A&E Performance panel.

UK_Type_1_ED_Monthly_4hr_Yield

This is the monthly aggregate A&E 4-hour performance for England (red), Scotland (purple), Wales (brown) and Northern Ireland (grey) for the last six years.

The trajectory looked alarmingly obvious to me – the NHS is on a predictable path to destruction – a controlled flight into terrain (CFIT).

The repeating up-and-down pattern is the annual cycle of seasons; better in the summer and worse in the winter.  This signal is driven by the celestial clock … the movement of the planets … which is beyond our power to influence.

The downward trajectory is the cumulative effect of our current design … which is the emergent effect of our collective beliefs, behaviours, policies and politics … which are completely within our gift to change.

If we chose to and if we knew how to – which we do not appear to.

Our collective ineptitude is not a topic for discussion. It is a taboo subject.


And I know that because if it were for discussion then this dashboard would be on public view on a website hosted by the NHS.

It isn’t.


George_Donald

It was created by George Donald, a member of the public, a disappointed patient, and a retired IT consultant.  And it was shared, free for all to see and use, via Twitter (@GMDonald).

The information source is open, public, shared NHS data, but it takes a lot of work to winkle it out and present it like this.  So well done George … keep up the great work!


Now have a closer look at the Dashboard Display … look at the most recent data for England and Scotland.  What do you see?

Does it look like Scotland is pulling out of the dive and England is heading down even faster?

Hard to say for sure; there are lots of signals and noise all mixed up.


So we need to use some Systems Engineering tools to help us separate the signals from the noise; and for this a statistical process control (SPC) chart is useless.  We need a system behaviour chart (SBC) and its handy helper the deviation from aim (DFA) chart.

I will not bore you with the technical details but, suffice it to say, it is a tried-and-tested technique called the Method of Residuals.
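For the curious, the essence of the Method of Residuals can be sketched in a few lines of Python. The monthly figures and the trivial mean-only ‘model’ below are invented purely for illustration; the actual charts use a more sophisticated projection:

```python
import statistics

# Invented monthly A&E 4-hour performance figures (%):
# a calibration window, followed by the most recent months.
calibration = [96, 94, 95, 97, 95, 96, 94, 95, 96, 95, 94, 96]
recent      = [92, 91, 90, 89, 88, 90]

# Simplest possible 'predictive model': project the calibration-period mean.
projection = statistics.mean(calibration)

# Deviation From Aim: measured minus projected, month by month (the residuals).
dfa = [month - projection for month in recent]

# A sustained run of residuals with the same sign is a signal, not noise.
sustained_shift = all(d < 0 for d in dfa)
print(projection, sustained_shift)  # 95.25 True
```

The same run-of-residuals logic, pointed at real data, is what separates a genuine change of direction from the seasonal ups-and-downs.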

Scotland_A&E_DFA_02

Exhibit #1 is the DFA chart for Scotland.  The middle four years (2011-2014) are used to create a ‘predictive model’; the model projection is then compared with measured performance; and the difference is plotted as the DFA chart.

What this “says” is that the 2015/16 performance in Scotland is significantly better than projected, and the change of direction seemed to start in the first half of 2015.

This evidence seems to support the results of our Mark I Eyeball test.

England_A&E_DFA_02

Exhibit #2 – the DFA for England suggests the 2015/16 performance is significantly worse than projected, and this deterioration appears to have started later in 2015.

Oh dear! I do not believe that was the intention, but it appears to be the impact.


So what are England and Scotland doing differently?
What can we all learn from this?
What can we all do differently in the future?

Isn’t that a question that more people like you, me and George could reasonably ask of those whom we entrust to design, build and fly our NHS?

Isn’t that a reasonable question that could be asked by the 65 million people in the UK who might, at any time, be unlucky enough to require a trip to their local A&E department?

So, let us all grasp the nettle and get the Elephant in the Room into plain view and say in unison “The Emperor Has No Clothes!”

We are suffering from mass ineptitude and hubris, to use Dr Atul Gawande’s language, and we need a better collective strategy.


And there is hope.

Some innovative hospitals have had the courage to grasp the nettle. They have seen what is coming; they have fully accepted the responsibility for their own fate; they have stepped up to the challenge; they have looked-listened-and-learned from others, and they are proving what is possible.

They have a name. They are called positive deviants.

Have a look at this short video … it is jaw-dropping … it is humbling … it is inspiring … and it is challenging … because it shows what has been achieved already.

It shows what is possible. Now, and here in the UK.

Luton and Dunstable

System of Profound Knowledge

 

Don_Berwick_2016

This week I had the great pleasure of watching Dr Don Berwick sharing the story of his own ‘near religious experience‘ and his conversion to a belief that a Science of Improvement exists.  In 1986, Don attended one of W.Edwards Deming’s famous 4-day workshops.  It was an emotional roller coaster ride for Don! See here for a link to the whole video … it is worth watching all of it … the best bit is at the end.


Don outlines Deming’s System of Profound Knowledge (SoPK) and explores each part in turn. Here is a summary of SoPK from the Deming website.

Deming_SOPK

W.Edwards Deming was a physicist and statistician by training and his deep understanding of variation and appreciation for a system flows from that.  He was not trained as a biologist, psychologist or educationalist and those parts of the SoPK appear to have emerged later.

Here are the summaries of these parts – psychology first …

Deming_SOPK_Psychology

Neurobiologists and psychologists now know that we are the product of our experiences and our learning. What we think consciously is just the emergent tip of a much bigger cognitive iceberg. Most of what is happening is operating out of awareness. It is unconscious.  Our outward behaviour is just a visible manifestation of deeply ingrained values and beliefs that we have learned – and reinforced over and over again.  Our conscious thoughts are emergent effects.


So how do we learn?  How do we accumulate these values and beliefs?

This is the summary of Deming’s Theory of Knowledge …

Deming_SOPK_PDSA

But to a biologist, neuroanatomist, neurophysiologist, doctor, system designer and improvement coach … this does not feel correct.

At the most fundamental biological level we do not learn by starting with a theory; we start with a sensation.  The simplest element of the animal learning system – the nervous system – is called a reflex arc.

Sensor_Processor_Effector

First, we have some form of sensor to gather data from the outside world. Eyes, ears, smell, taste, touch, temperature, pain and so on.  Let us consider pain.

That signal is transmitted via a sensory nerve to the processor, the grey matter in this diagram, where it is filtered, modified, combined with other data, filtered again and a binary output generated. Act or Not.

If the decision is ‘Act’ then this signal is transmitted by a motor nerve to an effector, in this case a muscle, which results in an action.  The muscle twitches or contracts and that modifies the outside world – we pull away from the source of pain.  It is a harm avoidance design. Damage-limitation. Self-preservation.

Another example of this sensor-processor-effector design template is a knee-jerk reflex, so-named because if we tap the tendon just below the knee we can elicit a reflex contraction of the thigh muscle.  It is actually part of a very complicated, dynamic, musculoskeletal stability cybernetic control system that allows us to stand, walk and run … with almost no conscious effort … and no conscious awareness of how we are doing it.

But we are not born able to walk. As youngsters we do not start with a theory of how to walk from which we formulate a plan. We see others do it and we attempt to emulate them. And we fail repeatedly. Waaaaaaah! But we learn.


Human learning starts with study. We then process the sensory data using our internal mental model – our rhetoric; we then decide on an action based on our ‘current theory’; and then we act – on the external world; and then we observe the effect.  And if we sense a difference between our expectation and our experience then that triggers an ‘adjustment’ of our internal model – so next time we may do better because our rhetoric and the reality are more in sync.

The biological sequence is Study-Adjust-Plan-Do-Study-Adjust-Plan-Do and so on, until we have achieved our goal; or until we give up trying to learn.


So where does psychology come in?

Well, sometimes there is a bigger mismatch between our rhetoric and our reality. The world does not behave as we expect and predict. And if the mismatch is too great then we are left with feelings of confusion, disappointment, frustration and fear.  (PS. That is our unconscious mind telling us that there is a big rhetoric-reality mismatch).

We can see the projection of this inner conflict on the face of a child trying to learn to walk.  They screw up their faces in conscious effort, and they fall over, and they hurt themselves and they cry.  But they do not want us to do it for them … they want to learn to do it for themselves. Clumsily at first but better with practice. They get up and try again … and again … learning on each iteration.

Study-Adjust-Plan-Do over and over again.


There is another way to avoid the continual disappointment, frustration and anxiety of learning.  We can distort our sensation of external reality to better fit with our internal rhetoric.  When we do that the inner conflict goes away.

We learn how to tamper with our sensory filters until what we perceive is what we believe. Inner calm is restored (while outer chaos remains or increases). We learn the psychological defence tactics of denial and blame.  And we practise them until they are second nature. Unconscious habitual reflexes. We build a reality-distortion-system (RDS) and it has a name – the Ladder of Inference.


And then one day, just by chance, somebody or something bypasses our RDS … and that is the experience that Don Berwick describes.

Don went to a 4-day workshop to hear the wisdom of W.Edwards Deming first hand … and he was forced by the reality he saw to adjust his inner model of how the world works. His rhetoric.  It was a stormy transition!

The last part of his story is the most revealing.  It exposes that his unconscious mind got there first … and it was his conscious mind that needed to catch up.

Study-(Adjust)-Plan-Do … over-and-over again.


In Don’s presentation he suggests that Frederick W. Taylor is the architect of the failure of modern management. This is a commonly held belief, and everyone is equally entitled to an opinion; that is a definition of mutual respect.

But before forming an individual opinion on such a fundamental belief we should study the raw evidence: the words written by the person who wrote them, not just the words written by those who filtered the reality through their own perceptual lenses.  Which we all do.

Culture – cause or effect?

The Harvard Business Review is worth reading because many of its articles challenge deeply held assumptions, and then back up the challenge with the pragmatic experience of those who have succeeded to overcome the limiting beliefs.

So the heading on the April 2016 copy that awaited me on my return from an Easter break caught my eye: YOU CAN’T FIX CULTURE.


 

HBR_April_2016

The successful leaders of major corporate transformations are agreed … the cultural change follows the technical change … and then the emergent culture sustains the improvement.

The examples presented include the Ford Motor Company, Delta Airlines, Novartis – so these are not corporate small fry!

The evidence suggests that the belief of “we cannot improve until the culture changes” is the mantra of failure of both leadership and management.


A health care system is characterised by a culture of risk avoidance. And for good reason. It is all too easy to harm while trying to heal!  Primum non nocere is a core tenet – first do no harm.

But, change and improvement implies taking risks – and those leaders of successful transformation know that the bigger risk by far is to become paralysed by fear and to do nothing.  Continual learning from many small successes and many small failures is preferable to crisis learning after a catastrophic failure!

The UK healthcare system is in a state of chronic chaos.  The evidence is there for anyone willing to look.  And waiting for the NHS culture to change, or pushing for culture change first, appears to be a guaranteed recipe for further failure.

The HBR article suggests that it is better to stay focussed; to work within our circles of control and influence; to learn from others where knowledge is known, and where it is not – to use small, controlled experiments to explore new ground.


And I know this works because I have done it and I have seen it work.  Just by focussing on what is important to every member on the team; focussing on fixing what we could fix; not expecting or waiting for outside help; gathering and sharing the feedback from patients on a continuous basis; and maintaining patient and team safety while learning and experimenting … we have created a micro-culture of high safety, high efficiency, high trust and high productivity.  And we have shared the evidence via JOIS.

The micro-culture required to maintain the safety, flow, quality and productivity improvements emerged and evolved along with the improvements.

It was part of the effect, not the cause.


So the concept of ‘fix the system design flaws and the continual improvement culture will emerge’ seems to work at macro-system and at micro-system levels.

We just need to learn how to diagnose and treat healthcare system design flaws. And that is known knowledge.

So what is the next excuse?  Too busy?

FrailSafe Design

frailsafe

Safe means avoiding harm, and safety is an emergent property of a well-designed system.

Frail means infirm, poorly, wobbly and at higher risk of harm.

So we want our health care system to be a FrailSafe Design.

But is it? How would we know? And what could we do to improve it?


About ten years ago I was involved in a project to improve the safety design of a specific clinical stream flowing through the hospital that I work in.

The ‘at risk’ group were frail elderly patients admitted as an emergency after a fall, having suffered a fracture of the thigh bone at the neck of the femur.

Historically, the outcome for these patients was poor.  Many did not survive, and many of the survivors never returned to independent living. They became even more frail.


The project was undertaken during an organisational transition: the hospital was being ‘taken over’ by a bigger one.  This created a window of opportunity for some disruptive innovation, and the project was labelled as a ‘Lean’ one because we had been inspired by similar work done at Bolton some years before and Lean was the flavour of the month.

The actual change was small: it was a flow design tweak that cost nothing to implement.

First we asked two flow questions:
Q1: How many of these high-risk frail patients do we admit a year?
A1: About one per day on average.
Q2: What is the safety critical time for these patients?
A2: The first four days.  The sooner they have hip surgery and are able to mobilise actively, the better their outcome.

Second we applied Little’s Law which showed the average number of patients in this critical phase is four. This was the ‘work in progress’ or WIP.
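Little’s Law, as applied here, can be sketched in a couple of lines. This is a minimal sketch in Python, using the one-per-day arrival rate and four-day critical window from the text:

```python
# Little's Law: average work-in-progress = average arrival rate x average time in process
arrival_rate_per_day = 1.0    # about one fractured-neck-of-femur admission per day
critical_window_days = 4.0    # the safety-critical first four days

work_in_progress = arrival_rate_per_day * critical_window_days
print(work_in_progress)       # on average, four patients in the critical phase
```

A six-bedded bay therefore holds the average work in progress of four, with some headroom for the ever-present variation.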

And we knew that variation is always present, and we knew that having all these patients in one place would make it much easier for the multi-disciplinary teams to provide timely care and to avoid potentially harmful delays.

So we suggested that one six-bedded bay on one of the trauma wards be designated the Fractured Neck Of Femur bay.

That was the flow diagnosis and design done.

The safety design was created by the multi-disciplinary teams who looked after these patients: the geriatricians, the anaesthetists, the perioperative emergency care team (PECT), the trauma and orthopaedic team, the physiotherapists, and so on.

They designed checklists to ensure that all #NOF patients got what they needed when they needed it and so that nothing important was left to chance.

And that was basically it.

And the impact was remarkable. The stream flowed. And one measured outcome was a dramatic and highly statistically significant reduction in mortality.

Injury_2011_Results
The full paper was published in Injury 2011; 42: 1234-1237.

We had created a FrailSafe Design … which implied that what was happening before was clearly not safe for these frail patients!


And there was an improved outcome for the patients who survived: A far larger proportion rehabilitated and returned to independent living, and a far smaller proportion required long-term institutional care.

By learning how to create and implement a FrailSafe Design we had added both years-to-life and life-to-years.

It cost nothing to achieve and the message was clear, as this quote from the 2011 paper illustrates …

Injury_2011_Message

What was a bit disappointing was the gap of four years between delivering this dramatic and highly significant patient safety and quality improvement and the sharing of the story.


What is more exciting is that the concept of FrailSafe is growing, evolving and spreading.

Grit in the Oyster

Pearl_and_Oyster

The word pearl is a metaphor for something rare, beautiful, and valuable.

Pearls are formed inside the shell of certain molluscs as a defence mechanism against a potentially threatening irritant.

The mollusc creates a pearl sac to seal off the irritation.


And so it is with change and improvement.  The growth of precious pearls of improvement wisdom – the ones that develop slowly over time – is triggered by an irritant.

Someone asking an uncomfortable question perhaps, or presenting some information that implies that an uncomfortable question needs to be asked.


About seven years ago a question was asked “Would improving healthcare flow and quality result in lower costs?”

It is a good question because some believe that it would and some believe that it would not.  So an experiment to test the hypothesis was needed.

The Health Foundation stepped up to the challenge and funded a three year project to find the answer. The design of the experiment was simple. Take two oysters and introduce an irritant into them and see if pearls of wisdom appeared.

The two ‘oysters’ were Sheffield Hospital and Warwick Hospital and the irritant was Dr Kate Silvester who is a doctor and manufacturing system engineer and who has a bit-of-a-reputation for asking uncomfortable questions and backing them up with irrefutable information.


Two rare and precious pearls did indeed grow.

In Sheffield, it was proved that by improving the design of their elderly care process they improved the outcome for their frail, elderly patients.  More went back to their own homes and fewer left via the mortuary.  That was the quality and safety improvement. They also showed a shorter length of stay and a reduction in the number of beds needed to store the work in progress.  That was the flow and productivity improvement.

What was interesting to observe was how difficult it was to get these profoundly important findings published.  It appeared that a further irritant had been created for the academic peer review oyster!

The case study was eventually published in Age and Ageing 2014; 43: 472-77.

The pearl that grew around this seed is the Sheffield Microsystems Academy.


In Warwick, it was proved that the A&E 4-hour performance could be improved by focussing on improving the design of the processes within the hospital, downstream of A&E.  For example, a redesign of the phlebotomy and laboratory process to ensure that clinical decisions on a ward round are based on today’s blood results.

This specific case study was eventually published as well, but by a different path – one specifically designed for sharing improvement case studies – JOIS 2015; 22:1-30

And the pearls of wisdom that developed as a result of irritating many oysters in the Warwick bed are clearly described by Glen Burley, CEO of Warwick Hospital NHS Trust in this recent video.


Getting the results of all these oyster bed experiments published required irritating the Health Foundation oyster … but a pearl grew there too and emerged as the full Health Foundation report which can be downloaded here.


So if you want to grow a fistful of improvement and a bagful of pearls of wisdom … then you will need to introduce a bit of irritation … and Dr Kate Silvester is a proven source of grit for your oyster!

Raising Awareness

SaveTheNHSGame

The first step in the process of improvement is raising awareness, and this has to be done carefully.

Most of us spend most of our time in a mental state called blissful ignorance.  We are happily unaware of the problems, and of their solutions.

Some of us spend some of our time in a different mental state called denial.

And we enter that from yet another mental state called painful awareness.

By raising awareness we are deliberately nudging ourselves, and others, out of our comfort zones.

But suddenly moving from blissful ignorance to painful awareness is not a comfortable transition. It feels like a shock. We feel confused. We feel vulnerable. We feel frightened. And we have a choice: freeze, flee or fight.

Freeze is shock. We feel paralysed by the mismatch between rhetoric and reality.

Flee is denial.  We run away from a new and uncomfortable reality.

Fight is anger. Directed first at others (blame) and then at ourselves (guilt).

It is this anger-passion that we must learn to channel and focus as determination to listen, learn and then lead.


The picture is of a recent awareness-raising event; it happened this week.

The audience is a group of NHS staff from across the depth and breadth of a health and social care system.

On the screen is the ‘Save the NHS Game’.  It is an interactive, dynamic flow simulation of a whole health care system; and its purpose is educational.  It is designed to illustrate the complex and counter-intuitive flow behaviour of a system of interdependent parts: primary care, an acute hospital, intermediate care, residential care, and so on.

We all became aware of a lot of unfamiliar concepts in a short space of time!

We all learned that a flow system can flip from calm to chaotic very quickly.

We all learned that a small change in one part of a system of interdependent parts can have a big effect in another part – either harmful or beneficial and often both.

We all learned that there is often a long time-lag between the change and the effect.

We all learned that we cannot reverse the effect just by reversing the change.

And we all learned that this high sensitivity to small changes is the result of the design of our system; i.e. our design.


Learning all that in one go was a bit of a shock!  Especially the part where we realised that we had, unintentionally, created near perfect conditions for chaos to emerge. Oh dear!

Denial felt like a very reasonable option; as did blame and guilt.

What emerged was a collective sense of determination.  “Let’s Do It!” captured the mood.


puzzle_lightbulb_build_PA_150_wht_4587

The second step in the process of improvement is to show the door to the next phase of learning; the phase called ‘know how’.

This requires demonstrating that there is another way out of the zone of painful awareness.  An alternative to denial.

This is where how-to-diagnose-and-correct-the-design-flaws needs to be illustrated. A step-at-a-time.

And when that happens it feels like a light bulb has been switched on.  What before was obscure and confusing suddenly becomes clear and understandable; and we say ‘Ah ha!’


So, if we deliberately raise awareness about a problem then, as leaders of change and improvement, we also have the responsibility to raise awareness about feasible solutions.


Because only then are we able to ask “Would we like to learn how to do this ourselves?”

And ‘Yes, please’ is what 68% of the people said after attending the awareness raising event.  Only 15% said ‘No, thank you’ and only 17% abstained.

Raising awareness is the first step to improvement.
Choosing the path out of the pain towards knowledge is the second.
And taking the first step on that path is the third.

The Cost of Chaos

british_pound_money_three_bundled_stack_400_wht_2425

This week I conducted an experiment – on myself.

I set myself the challenge of measuring the cost of chaos, and it was tougher than I anticipated it would be.

It is easy enough to grasp the concept that fire-fighting to maintain patient safety amidst the chaos of healthcare would cost more in terms of tears and time …

… but it is tricky to translate that concept into hard numbers; i.e. cash.


Chaos is an emergent property of a system.  Safety, delivery, quality and cost are also emergent properties of a system. We can measure cost, our finance departments are very good at that. We can measure quality – we just ask “How did your experience match your expectation?”  We can measure delivery – we have created a whole industry of access target monitoring.  And we can measure safety by checking for things we do not want – near misses and never events.

But while we can feel the chaos we do not have an easy way to measure it. And it is hard to improve something that we cannot measure.


So the experiment was to see if I could create some chaos, then if I could calm it, and then if I could measure the cost of the two designs – the chaotic one and the calm one.  The difference, I reasoned, would be the cost of the chaos.

And to do that I needed a typical chunk of a healthcare system: like an A&E department where the relationship between safety, flow, quality and productivity is rather important (and has been a hot topic for a long time).

But I could not experiment on a real A&E department … so I experimented on a simplified but realistic model of one. A simulation.

What I discovered came as a BIG surprise, or more accurately a sequence of big surprises!

  1. First I discovered that it is rather easy to create a design that generates chaos and danger.  All I needed to do was to assume I understood how the system worked and then use some averaged historical data to configure my model.  I could do this on paper or I could use a spreadsheet to do the sums for me.
  2. Then I discovered that I could calm the chaos by reactively adding lots of extra capacity in terms of time (i.e. more staff) and space (i.e. more cubicles).  The downside of this approach was that my costs sky-rocketed; but at least I had restored safety and calm and I had eliminated the fire-fighting.  Everyone was happy … except the people expected to foot the bill. The finance director, the commissioners, the government and the tax-payer.
  3. Then I got a really big surprise!  My safe-but-expensive design was horribly inefficient.  All my expensive resources were now running at rather low utilisation.  Was that the cost of the chaos I was seeing? But when I trimmed the capacity and costs the chaos and danger reappeared.  So was I stuck between a rock and a hard place?
  4. Then I got a really, really big surprise!!  I hypothesised that the root cause might be the fact that the parts of my system were designed to work independently, and I was curious to see what happened when they worked interdependently. In synergy. And when I changed my design to work that way the chaos and danger did not reappear and the efficiency improved. A lot.
  5. And the biggest surprise of all was how difficult this was to do in my head; and how easy it was to do when I used the theory, techniques and tools of Improvement-by-Design.
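The first surprise, the Flaw of Averages trap, can be reproduced with a toy model. This is a minimal sketch, not the simulation used in the experiment; the demand range and capacity figures are illustrative assumptions:

```python
import random

def average_queue(capacity_per_day, days=10_000, seed=1):
    """Carried-over queue when daily demand varies between 5 and 15
    (average 10) and a fixed number of patients can be seen each day."""
    rng = random.Random(seed)
    queue = 0
    total = 0
    for _ in range(days):
        queue += rng.randint(5, 15)               # variable demand, mean 10
        queue = max(0, queue - capacity_per_day)  # fixed flow-capacity
        total += queue
    return total / days

# Capacity set to the average demand: the queue grows and grows.
# A modest amount of headroom above the average: the queue stays small and calm.
print(average_queue(10), average_queue(12))
```

With flow-capacity equal to average demand the carried-over queue keeps growing, which is why a design configured from averaged historical data generates chaos; a little headroom calms it, at a price.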

So if you are curious to learn more … I have written up the full account of the experiment with rationale, methods, results, conclusions and references and I have published it here.

Anti-Chaos

Hypothesis: Chaotic behaviour of healthcare systems is inevitable without more resources.

This appears to be a rather widely held belief, but what is the evidence?

Can we disprove this hypothesis?

Chaos is a predictable, emergent behaviour of many systems, both natural and man-made; a discovery that was made rather recently, in the 1970s.  Chaotic behaviour is not the same as random behaviour.  The fundamental difference is that random implies independence, while chaos requires the opposite: chaotic systems have interdependent parts.

Chaotic behaviour is complex and counter-intuitive, which may explain why it took so long for the penny to drop.
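A classic illustration of this, not from the article, is the logistic map: a one-line deterministic rule whose output looks random, and in which two almost identical starting points rapidly diverge.

```python
def logistic(x, r=4.0):
    """One step of the logistic map; fully deterministic, no randomness."""
    return r * x * (1.0 - x)

a, b = 0.2, 0.2000001      # two almost identical starting points
for _ in range(40):
    a, b = logistic(a), logistic(b)

# The two trajectories have completely diverged: deterministic, yet unpredictable.
print(abs(a - b))
```

There is no noise anywhere in this rule; the unpredictability comes entirely from the interdependence between each step and the next.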


Chaos is a complex behaviour and it is tempting to assume that complicated structures always lead to complex behaviour.  But they do not.  A mechanical clock is a complicated structure but its behaviour is intentionally very stable and highly predictable – that is the purpose of a clock.  It is a fit-for-purpose design.

The healthcare system has many parts; it too is a complicated system; it has a complicated structure.  It is often seen to demonstrate chaotic behaviour.

So we might propose that a complicated system like healthcare could also be stable and predictable. If it were designed to be.


But there is another critical factor to take into account.

A mechanical clock has only inanimate cogs and springs, which obey the Laws of Physics – and they are neither adaptable nor negotiable.

A healthcare system is different. It is a living structure. It has patients, providers and purchasers as essential components. And the rules of how people work together are both negotiable and adaptable.

So when we are thinking about a healthcare system we are thinking about a complex adaptive system or CAS.

And that changes everything!


The good news is that adaptive behaviour can be a very effective anti-chaos strategy, if it is applied wisely.  The not-so-good news is that if it is not applied wisely then it can actually generate even more chaos.


Which brings us back to our hypothesis.

What if the chaos we are observing in our healthcare system is actually iatrogenic?

What if we are unintentionally and unconsciously generating it?

These questions require an answer, because if we are unwittingly contributing to the chaos then, with insight, understanding and wisdom, we can intentionally calm it too.

These questions also challenge us to study our current way of thinking and working.  And in that challenge we will need to demonstrate a behaviour called humility. An ability to acknowledge that there are gaps in our knowledge and our understanding. A willingness to learn.


This all sounds rather too plausible in theory. What about an example?

Let us consider the highest flow process in healthcare: the outpatient clinic stream.

The typical design is a three-step process called the New-Test-Review design. This sequential design is simpler because the steps are largely independent of each other. And this simplicity is attractive because it is easier to schedule so is less likely to be chaotic. The downsides are the queues and delays between the steps and the risk of getting lost in the system. So if we are worried that a patient may have a serious illness that requires prompt diagnosis and treatment (e.g. cancer), then this simpler design is actually a potentially unsafe design.

A one-stop clinic is a better design because the New-Test-Review steps are completed in one visit, and that is better for everyone. But, a one-stop clinic is a more challenging scheduling problem because all the steps are now interdependent, and that is fertile soil for chaos to emerge.  And chaos is exactly what we often see.

Attending a chaotic one-stop clinic is a frustrating experience for both patients and staff, and it is also a less productive use of resources. So the chaos and cost appear to be the price we are asked to pay for a quicker and safer design.

So is the one stop clinic chaos inevitable, or is it avoidable?

Simple observation of a one stop clinic shows that the chaos is associated with queues – which are visible as a waiting room full of patients and front-of-house staff working very hard to manage the queue and to signpost and soothe the disgruntled patients.

What if the one stop clinic queue and chaos is iatrogenic? What if it was avoidable without investing in more resources? Would the chaos evaporate? Would the quality improve?  Could we have a safer, calmer, higher quality and more productive design?

Last week I shared evidence that proved the one-stop clinic chaos was iatrogenic – by showing it was avoidable.

A team of healthcare staff were shown how to diagnose the cause of the queue and were then able to remove that cause, and to deliver the same outcome without the queue and the associated chaos.

And the most surprising lesson that the team learned was that they achieved this improvement using the same resources as before; and that those resources also felt the benefit of the chaos evaporating. Their work was easier, calmer and more predictable.

The impossible-without-more-resources hypothesis had been disproved.

So, where else in our complicated and complex healthcare system might we apply anti-chaos?

Everywhere?


And for more about complexity science see Santa Fe Institute

Melting the Queue

custom_meter_15256

[Drrrrrrring]

<Leslie> Hi Bob, I hope I am not interrupting you.  Do you have five minutes?

<Bob> Hi Leslie. I have just finished what I was working on and a chat would be a very welcome break.  Fire away.

<Leslie> I really just wanted to say how much I enjoyed the workshop this week, and so did all the delegates.  They have been emailing me to say how much they learned and thanking me for organising it.

<Bob> Thank you Leslie. I really enjoyed it too … and I learned lots … I always do.

<Leslie> As you know I have been doing the ISP programme for some time, and I have come to believe that you could not surprise me any more … but you did!  I never thought that we could make such a dramatic improvement in waiting times.  The queue just melted away and I still cannot really believe it.  Was it a trick?

<Bob> Ahhhh, the siren-call of the battle-hardened sceptic! It was no trick. What you all saw was real enough. There were no computers, statistics or smoke-and-mirrors used … just squared paper and a few coloured pens. You saw it with your own eyes; you drew the charts; you made the diagnosis; and you re-designed the policy.  All I did was provide the context and a few nudges.

<Leslie> I know, and that is why I think seeing the before and after data would help me. The process felt so much better, but I know I will need to show the hard evidence to convince others, and to convince myself as well, to be brutally honest.  I have the before data … do you have the after data?

<Bob> I do. And I was just plotting it as BaseLine charts to send to you.  So you have pre-empted me.  Here you are.

StE_OSC_Before_and_After
This is the waiting time run chart for the one stop clinic improvement exercise that you all did.  The leftmost segment is the before, and the rightmost two segments are the after … your two ‘new’ designs.

As you say, the queue and the waiting has melted away despite doing exactly the same work with exactly the same resources.  Surprising and counter-intuitive but there is the evidence.

<Leslie> Wow! That fits exactly with how it felt.  Quick and calm! But I seem to remember that the waiting room was empty, particularly in the case of the design that Team 1 created. How come the waiting is not closer to zero on the chart?

<Bob> You are correct.  This is not just the time in the waiting room, it also includes the time needed to move between the rooms and the changeover time within the rooms.  It is what I call the ‘tween-time.

<Leslie> OK, that makes sense now.  And what also jumps out of the picture for me is the proof that we converted an unstable process into a stable one.  The chaos was calmed.  So what is the root cause of the difference between the two ‘after’ designs?

<Bob> The middle one, the slightly better of the two, is the one where all patients followed the newly designed process.  The rightmost one was where we deliberately threw a spanner in the works by assuming an unpredictable case mix.

<Leslie> Which made very little difference!  The new design was still much, much better than before.

<Bob> Yes. What you are seeing here is the footprint of resilient design. Do you believe it is possible now?

<Leslie> You bet I do!

The Magic Black Box

stick_figure_magic_carpet_150_wht_5040

It was the appointed time for Bob and Leslie’s regular coaching session as part of the improvement science practitioner programme.

<Leslie> Hi Bob, I am feeling rather despondent today so please excuse me in advance if you hear a lot of “Yes, but …” language.

<Bob> I am sorry to hear that Leslie. Do you want to talk about it?

<Leslie> Yes, please.  The trigger for my gloom was being sent on a mandatory training workshop.

<Bob> OK. Training to do what?

<Leslie> Outpatient demand and capacity planning!

<Bob> But you know how to do that already, so what is the reason you were “sent”?

<Leslie> Well, I am no longer sure I know how to do it.  That is why I am feeling so blue.  I went more out of curiosity and I came away utterly confused and with my confidence shattered.

<Bob> Oh dear! We had better start at the beginning.  What was the purpose of the workshop?

<Leslie> To train everyone in how to use an Outpatient Demand and Capacity planning model, an Excel one that we were told to download along with the User Guide.  I think it is part of a national push to improve waiting times for outpatients.

<Bob> OK. On the surface that sounds reasonable. You have designed and built your own Excel flow-models already; so where did the trouble start?

<Leslie> I will attempt to explain.  This was a paragraph in the instructions. I felt OK with this because my Improvement Science training has given me a very good understanding of basic demand and capacity theory.

IST_DandC_Model_01

<Bob> OK.  I am guessing that other delegates may have felt less comfortable with this. Was that the case?

<Leslie> The training workshops are targeted at Operational Managers and the ones I spoke to actually felt that they had a good grasp of the basics.

<Bob> OK. That is encouraging, but a warning bell is ringing for me. So where did the trouble start?

<Leslie> Well, before going to the workshop I decided to read the User Guide so that I had some idea of how this magic tool worked.  This is where I started to wobble – this paragraph specifically …

IST_DandC_Model_02

<Bob> H’mm. What did you make of that?

<Leslie> It was complete gibberish to me and I felt like an idiot for not understanding it.  I went to the workshop in a bit of a panic and hoped that all would become clear. It didn’t.

<Bob> Did the User Guide explain what ‘percentile’ means in this context, ideally with some visual charts to assist?

<Leslie> No and the use of ‘th’ and ‘%’ was really confusing too.  After that I sort of went into a mental fog and none of the workshop made much sense.  It was all about practising using the tool without any understanding of how it worked. Like a black magic box.


<Bob> OK.  I can see why you were confused, and do not worry, you are not an idiot.  It looks like the author of the User Guide has unwittingly used some very confusing and ambiguous terminology here.  So can you talk me through what you have to do to use this magic box?

<Leslie> First we have to enter some of our historical data; the number of new referrals per week for a year; and the referral and appointment dates for all patients for the most recent three months.

<Bob> OK. That sounds very reasonable.  A run chart of historical demand and the raw event data for a Vitals Chart® is where I would start the measurement phase too – so long as the data creates a valid 3 month reporting window.

<Leslie> Yes, I thought so too … but that is not how the black box model seems to work. The weekly demand is used to draw an SPC chart, but the event data seems to disappear into the innards of the black box, and recommendations pop out of it.

<Bob> Ah ha!  And let me guess the relationship between the term ‘percentile’ and the SPC chart of weekly new demand was not explained?

<Leslie> Spot on.  What does percentile mean?


<Bob> It is statistics jargon. Remember that we have talked about the distribution of the data around the average on a BaseLine chart; and how we use the histogram feature of BaseLine to show it visually.  Like this example.

IST_DandC_Model_03

<Leslie> Yes. I recognise that. This chart shows a stable system of demand with an average of around 150 new referrals per week and the variation distributed above and below the average in a symmetrical pattern, falling off to zero around the upper and lower process limits.  I believe that you said that over 99% will fall within the limits.

<Bob> Good.  The blue histogram on this chart is called a probability distribution function, to use the terminology of a statistician.

<Leslie> OK.

<Bob> So, what would happen if we created a Pareto chart of demand using the number of patients per week as the categories and ignoring the time aspect? We are allowed to do that if the behaviour is stable, as this chart suggests.

<Leslie> Give me a minute, I will need to do a rough sketch. Does this look right?

IST_DandC_Model_04

<Bob> Perfect!  So if you now convert the Y-axis to a percentage scale so that 52 weeks is 100% then where does the average weekly demand of about 150 fall? Read up from the X-axis to the line then across to the Y-axis.

<Leslie> At about 26 weeks or 50% of 52 weeks.  Ah ha!  So that is what a percentile means!  The 50th percentile is the average, the zeroth percentile is around the lower process limit and the 100th percentile is around the upper process limit!

<Bob> In this case the 50th percentile is the average, it is not always the case though.  So where is the 85th percentile line?

<Leslie> Um, 52 times 0.85 is 44.2 which, reading across from the Y-axis then down to the X-axis gives a weekly demand of about 170 per week.  That is about the same as the average plus one sigma according to the run chart.

<Bob> Excellent. The Pareto chart that you have drawn is called a cumulative probability distribution function … and that is usually what percentiles refer to. Statisticians love these but often omit to explain their rationale to non-statisticians!
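The percentile arithmetic in this exchange can be checked numerically. A sketch using Python’s statistics module; the mean of 150 and sigma of about 20 are the figures from the conversation, and the normal distribution is an assumption that matches the symmetrical histogram described:

```python
from statistics import NormalDist

demand = NormalDist(mu=150, sigma=20)   # stable weekly referral demand

p50 = demand.inv_cdf(0.50)   # the 50th percentile: the average, 150
p85 = demand.inv_cdf(0.85)   # about 170: close to average plus one sigma

print(round(p50), round(p85))
```

So the 85th percentile sits roughly one sigma above the mean, which is exactly what Leslie read off the sketched chart.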


<Leslie> Phew!  So, now I can see that the 65th percentile is just above average demand, and 85th percentile is above that.  But in the confusing paragraph how does that relate to the phrase “65% and 85% of the time”?

<Bob> It doesn’t. That is the really, really confusing part of  that paragraph. I am not surprised that you looped out at that point!

<Leslie> OK. Let us leave that for another conversation.  If I ignore that bit then does the rest of it make sense?

<Bob> Not yet alas. We need to dig a bit deeper. What would you say are the implications of this message?


<Leslie> Well.  I know that if our flow-capacity is less than our average demand then we will guarantee to create an unstable queue and chaos. That is the Flaw of Averages trap.

<Bob> OK.  The creator of this tool seems to know that.

<Leslie> And my outpatient manager colleagues are always complaining that they do not have enough slots to book into, so I conclude that our current flow-capacity is just above the 50th percentile.

<Bob> A reasonable hypothesis.

<Leslie> So to calm the chaos the message is saying I will need to increase my flow capacity up to the 85th percentile of demand which is from about 150 slots per week to 170 slots per week. An increase of 7% which implies a 7% increase in costs.

<Bob> Good.  I am pleased that you did not fall into the intuitive trap that an increase from the 50th to the 85th percentile implies a 35/50 or 70% increase! Your estimate of 7% is a reasonable one.
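
For a roughly normal demand distribution the size of this step can be checked directly. The sketch below uses assumed figures of mean 159 and sigma 11 slots per week, chosen only to match the chart readings quoted in the conversation (85th percentile ≈ 170 ≈ average + one sigma); it shows why the jump from the 50th to the 85th percentile costs a few percent, not 70%.

```python
from statistics import NormalDist

# Assumed illustrative figures: weekly demand ~ Normal(mean=159, sigma=11),
# chosen to match the chart readings quoted in the conversation.
demand = NormalDist(mu=159, sigma=11)

p50 = demand.inv_cdf(0.50)   # equals the mean for a normal distribution
p85 = demand.inv_cdf(0.85)   # about one sigma above the mean

increase = (p85 - p50) / p50
print(f"50th percentile: {p50:.0f} slots/week")
print(f"85th percentile: {p85:.0f} slots/week")
print(f"flow-capacity increase needed: {increase:.0%}")  # about 7%, not 70%
```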

<Leslie> Well it may be theoretically reasonable but it is not practically possible. We are exhorted to reduce costs by at least that amount.

<Bob> So we have a finance versus governance bun-fight with the operational managers caught in the middle: FOG. That is not the end of the litany of woes … is there anything about Did Not Attends in the model?


<Leslie> Yes indeed! We are required to enter the percentage of DNAs and what we do with them. Do we discharge them or re-book them?

<Bob> OK. Pragmatic reality is always much more interesting than academic rhetoric and this aspect of the real system rather complicates things, at least for a comparative statistician. This is where the smoke and mirrors will appear and they will be hidden inside the black magic box.  To solve this conundrum we need to understand the relationship between demand, capacity, variation and yield … and it is rather counter-intuitive.  So, how would you approach this problem?

<Leslie> I would use the 6M Design® framework and I would start with a map and not with a model; least of all a magic black box one that I did not design, build and verify myself.

<Bob> And how do you know that will work any better?

<Leslie> Because at the One Day ISP Workshop I saw it work with my own eyes. The queues, waits and chaos just evaporated.  And it cost nothing.  We already had more than enough “capacity”.

<Bob> Indeed you did.  So shall we do this one as an ISP-2 project?

<Leslie> An excellent suggestion.  I already feel my confidence flowing back and I am looking forward to this new challenge. Thank you again Bob.

Hot and Cold

stick_figure_on_cloud_150_wht_9604

Last week Bob and Leslie were exploring the data analysis trap called a two-points-in-time comparison: as illustrated by the headline “This winter has not been as bad as last … which proves that our winter action plan has worked.”

Actually it doesn’t.

But just saying that is not very helpful. We need to explain the reason why this conclusion is invalid and therefore potentially dangerous.


So here is the continuation of Bob and Leslie’s conversation.

<Bob> Hi Leslie, have you been reflecting on the two-points-in-time challenge?

<Leslie> Yes indeed, and you were correct, I did know the answer … I just didn’t know I knew if you get my drift.

<Bob> Yes, I do. So, are you willing to share your story?

<Leslie> OK, but before I do that I would like to share what happened when I described what we talked about to some colleagues.  They sort of got the idea but got lost in the unfamiliar language of ‘variance’ and I realized that I needed an example to illustrate.

<Bob> Excellent … what example did you choose?

<Leslie> The UK weather – or more specifically the temperature.  My reasons for choosing this were many: first, it is something that everyone can relate to; secondly, it has a strong seasonal cycle; and thirdly, the data is readily available on the Internet.

<Bob> OK, so what specific question were you trying to answer and what data did you use?

<Leslie> The question was “Are our winters getting warmer?” and my interest in that is because many people assume that the colder the winter the more people suffer from respiratory illness and the more who go to hospital … contributing to the winter A&E and hospital pressures.  The data that I used was the maximum monthly temperature from 1960 to the present recorded at our closest weather station.

<Bob> OK, and what did you do with that data?

<Leslie> Well, what I did not do was to compare this winter with last winter and draw my conclusion from that!  What I did first was just to plot-the-dots … I created a time-series chart … using the BaseLine© software.

MaxMonthTemp1960-2015

And it shows what I expected to see, a strong, regular, 12-month cycle, with peaks in the summer and troughs in the winter.

<Bob> Can you explain what the green and red lines are and why some dots are red?

<Leslie> Sure. The green line is the average for all the data. The red lines are called the upper and lower process limits.  They are calculated from the data and what they say is “if the variation in this data is random then we will expect more than 99% of the points to fall between these two red lines”.

<Bob> So, we have 55 years of monthly data which is nearly 700 points which means we would expect fewer than seven to fall outside these lines … and we clearly have many more than that.  For example, the winter of 1962-63 and the summer of 1976 look exceptional – a run of three consecutive dots outside the red lines. So can we conclude the variation we are seeing is not random?
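
BaseLine© is proprietary, but the standard XmR-chart arithmetic behind “natural process limits” is well known and can be sketched in a few lines of Python: the limits sit at the average plus or minus 2.66 times the average moving range. The data below is invented purely to illustrate the calculation.

```python
def xmr_limits(data):
    """Natural process limits for an XmR chart:
    average +/- 2.66 x mean moving range (the standard XmR constants)."""
    avg = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return avg - 2.66 * mr_bar, avg, avg + 2.66 * mr_bar

# Invented data only: a stable-ish series with one exceptional value.
series = [12, 14, 13, 15, 12, 14, 13, 25, 14, 13, 12, 15]
lpl, avg, upl = xmr_limits(series)
signals = [x for x in series if x < lpl or x > upl]
print(f"average={avg:.1f}, limits=({lpl:.1f}, {upl:.1f}), signals={signals}")
```

If the variation were purely random we would expect almost all points inside the limits; the one flagged point is the candidate signal.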

<Leslie> Yes, and there is more evidence to support that conclusion. First is the reality check … I do not remember either of those exceptionally cold or hot years personally, so I asked Dr Google.

BigFreeze_1963

This picture from January 1963 shows copper telephone lines that are so weighed down with ice, and for so long, that they have stretched down to the ground.  In this era of mobile phones we forget this was what telecommunication was like!

HeatWave_1976

And just look at the young Michael Fish in the Summer of ’76! Did people really wear clothes like that?

And there is more evidence on the chart. The red dots that you mentioned are indicators that BaseLine© has detected other non-random patterns.

So the large number of red dots confirms our Mark I Eyeball conclusion … that there are signals mixed up with the noise.

<Bob> Actually, I do remember the Summer of ’76 – it was the year I did my O Levels!  And your signals-in-the-noise phrase reminds me of SETI – the search for extra-terrestrial intelligence!  I really enjoyed the 1997 film of Carl Sagan’s book Contact with Jodie Foster playing the role of the determined scientist who ends up taking a faster-than-light trip through space in a machine designed by ET and built by humans. And especially the moment about 10 minutes from the end when those-in-high-places who had discounted her story as “unbelievable” realized they may have made an error … the line ‘Yes, that is interesting isn’t it’.

<Leslie> Ha ha! Yes. I enjoyed that film too. It had lots of great characters – her glory seeking boss; the hyper-suspicious head of national security who militarized the project; the charismatic anti-hero; the ranting radical who blew up the first alien machine; and John Hurt as her guardian angel. I must watch it again.

Anyway, back to the story. The problem we have here is that this type of time-series chart is not designed to extract the overwhelming cyclical, annual pattern so that we can search for any weaker signals … such as a smaller change in winter temperature over a longer period of time.

<Bob> Yes, that is indeed the problem with these statistical process control charts.  SPC charts were designed over 60 years ago for process quality assurance in manufacturing, not as a diagnostic tool in a complex adaptive system such as healthcare. So how did you solve the problem?

<Leslie> I realized that it was the regularity of the cyclical pattern that was the key, and that I could use it to separate out the annual cycle and expose the weaker signals.  I did that using the rational grouping feature of BaseLine© with the month-of-the-year as the group.

MaxMonthTemp1960-2015_ByMonth

Now I realize why the designers of the software put this feature in! With just one mouse click the story jumped out of the screen!

<Bob> OK. So can you explain what we are looking at here?

<Leslie> Sure. This chart shows the same data as before except that I asked BaseLine© first to group the data by month and then to create a mini-chart for each month-group independently.  Each group has its own average and process limits.  So if we look at the pattern of the averages, the green lines, we can clearly see the annual cycle.  What is very obvious now is that the process limits for each sub-group are much narrower, and that there are now very few red points  … other than in the groups that are coloured red anyway … a niggle that the designers need to nail in my opinion!
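
The “rational grouping” step is, at heart, a group-by: split the series by month and compute each group’s own average and limits independently. Here is a stdlib-only sketch with invented data and the same standard XmR constants; it is an illustration of the idea, not BaseLine©’s actual implementation.

```python
from collections import defaultdict

def group_limits(samples):
    """samples: list of (group, value) pairs. Returns, per group,
    (lower limit, average, upper limit) using standard XmR arithmetic:
    average +/- 2.66 x mean moving range within the group."""
    groups = defaultdict(list)
    for group, value in samples:
        groups[group].append(value)
    result = {}
    for group, vals in groups.items():
        avg = sum(vals) / len(vals)
        mr_bar = sum(abs(b - a) for a, b in zip(vals, vals[1:])) / (len(vals) - 1)
        result[group] = (avg - 2.66 * mr_bar, avg, avg + 2.66 * mr_bar)
    return result

# Invented data: several years of (month, max temperature) readings.
data = [("Jan", 7), ("Jul", 22), ("Jan", 6), ("Jul", 23),
        ("Jan", 8), ("Jul", 21), ("Jan", 7), ("Jul", 24)]
limits = group_limits(data)
for month, (lpl, avg, upl) in limits.items():
    print(f"{month}: {lpl:.1f} .. {avg:.1f} .. {upl:.1f}")
```

Because each month is compared only with itself, the within-group limits are much narrower than the whole-series limits, which is exactly what the grouped chart shows.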

<Bob> I will pass on your improvement suggestion! So are you saying that the regular annual cycle has accounted for the majority of the signal in the previous chart and that now we have extracted that signal we can look for weaker signals by looking for red flags in each monthly group?

<Leslie> Exactly so.  And the groups I am most interested in are the November to March ones.  So, next I filtered out the November data and plotted it as a separate chart; and I then used another cool feature of BaseLine© called limit locking.

MaxTempNov1960-2015_LockedLimits

What that means is that I have used the November maximum temperature data for the first 30 years to get the baseline average and natural process limits … and we can see that there are no red flags in that section, no obvious signals.  Then I locked these limits at 1990 and this tells BaseLine© to compare the subsequent 25 years of data against these projected limits.  That exposed a lot of signal flags, and we can clearly see that most of the points in the later section are above the projected average from the earlier one.  This confirms that there has been a significant increase in November maximum temperature over this 55 year period.
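
The “limit locking” idea is also generic: compute the limits from an early baseline period only, project them forward, and flag every later point that falls outside. A minimal sketch with invented data (and the same XmR arithmetic of average ± 2.66 × mean moving range):

```python
def locked_limit_flags(data, lock_at):
    """Compute XmR limits from data[:lock_at] only, then flag every
    later point that falls outside those projected limits."""
    baseline = data[:lock_at]
    avg = sum(baseline) / len(baseline)
    mr_bar = sum(abs(b - a) for a, b in zip(baseline, baseline[1:])) / (lock_at - 1)
    lpl, upl = avg - 2.66 * mr_bar, avg + 2.66 * mr_bar
    return [(i, x) for i, x in enumerate(data[lock_at:], start=lock_at)
            if x < lpl or x > upl]

# Invented series: stable around 10 for the baseline, then a shift upwards.
temps = [10, 11, 9, 10, 11, 10, 9, 10, 14, 15, 14, 15, 16, 14]
flags = locked_limit_flags(temps, lock_at=8)
print(flags)  # every post-baseline point sits above the projected limits
```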

<Bob> Excellent! You have answered part of your question. So what about December onwards?

<Leslie> I was on a roll now! I also noticed from my second chart that the December, January and February groups looked rather similar so I filtered that data out and plotted them as a separate chart.

MaxTempDecJanFeb1960-2015_Grouped

These were indeed almost identical so I lumped them together as a ‘winter’ group and compared the earlier half with the later half using another BaseLine© feature called segmentation.

MaxTempDecJanFeb1960-2015-Split

This showed that the more recent winter months have a higher maximum temperature … on average. The difference is just over one degree Celsius. But it also shows that the month-to-month and year-to-year variation still dominates the picture.
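
“Segmentation” here just means giving each half of the series its own average and comparing them. A toy sketch, with invented winter temperatures chosen so that the shift between halves is about one degree while the year-to-year spread within a half is slightly larger, mirroring the pattern Leslie describes:

```python
from statistics import mean, pstdev

# Invented winter maximum temperatures (deg C): earlier half vs later half.
earlier = [4.9, 7.8, 5.2, 8.0, 6.1, 7.3, 4.6, 7.4]
later   = [6.0, 8.9, 6.3, 9.1, 7.2, 8.4, 5.7, 8.5]

shift = mean(later) - mean(earlier)
spread = pstdev(earlier)
print(f"average shift between halves: {shift:+.1f} deg C")
print(f"year-to-year spread within a half: {spread:.1f} deg C")
```

The shift is real, but the noise is at least as big, which is why picking any two individual winters to compare tells you almost nothing.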

<Bob> Which implies?

<Leslie> That, with data like this, a two-points-in-time comparison is meaningless.  If we do that we are just sampling random noise and there is no useful information in noise. Nothing that we can  learn from. Nothing that we can justify a decision with.  This is the reason the ‘this year was better than last year’ statement is meaningless at best; and dangerous at worst.  Dangerous because if we draw an invalid conclusion, then it can lead us to make an unwise decision, then decide a counter-productive action, and then deliver an unintended outcome.

By doing invalid two-point comparisons we can too easily make the problem worse … not better.

<Bob> Yes. This is what W. Edwards Deming, an early guru of improvement science, referred to as ‘tampering‘.  He was a student of Walter A. Shewhart who recognized this problem in manufacturing and, in 1924, invented the first control chart to highlight it, and so prevent it.  My grandmother used the term meddling to describe this same behavior … and I now use that term as one of the eight sources of variation. Well done Leslie!

The Two-Points-In-Time Comparison Trap

comparing_information_anim_5545

[Bzzzzzz] Bob’s phone vibrated to remind him it was time for the regular ISP remote coaching session with Leslie. He flipped the lid of his laptop just as Leslie joined the virtual meeting.

<Leslie> Hi Bob, and Happy New Year!

<Bob> Hello Leslie and I wish you well in 2016 too.  So, what shall we talk about today?

<Leslie> Well, given the time of year I suppose it should be the Winter Crisis.  The regularly repeating annual winter crisis. The one that feels more like the perpetual winter crisis.

<Bob> OK. What specifically would you like to explore?

<Leslie> Specifically? The habit of comparing this year with last year to answer the burning question “Are we doing better, the same or worse?”  Especially given the enormous effort and political attention that has been focused on the hot potato of A&E 4-hour performance.

<Bob> Aaaaah! That old chestnut! Two-Points-In-Time comparison.

<Leslie> Yes. I seem to recall you usually add the word ‘meaningless’ to that phrase.

<Bob> H’mm.  Yes.  It can certainly become that, but there is a perfectly good reason why we do this.

<Leslie> Indeed, it is because we see seasonal cycles in the data so we only want to compare the same parts of the seasonal cycle with each other. The apples and oranges thing.

<Bob> Yes, that is part of it. So what do you feel is the problem?

<Leslie> It feels like a lottery!  It feels like whether we appear to be better or worse is just the outcome of a random toss.

<Bob> Ah!  So we are back to the question “Is the variation I am looking at signal or noise?” 

<Leslie> Yes, exactly.

<Bob> And we need a scientifically robust way to answer it. One that we can all trust.

<Leslie> Yes.

<Bob> So how do you decide that now in your improvement work?  How do you do it when you have data that does not show a seasonal cycle?

<Leslie> I plot-the-dots and use an XmR chart to alert me to the presence of the signals I am interested in – especially a change of the mean.

<Bob> Good.  So why can we not use that approach here?

<Leslie> Because the seasonal cycle is usually a big signal and it can swamp the smaller change I am looking for.

<Bob> Exactly so. Which is why we have to abandon the XmR chart and fall back on the two-points-in-time comparison?

<Leslie> That is what I see. That is the argument I am presented with and I have no answer.

<Bob> OK. It is important to appreciate that the XmR chart was not designed for doing this.  It was designed for monitoring the output quality of a stable and capable process. It was designed to look for early warning signs; small but significant signals that suggest future problems. The purpose is to alert us so that we can identify the root causes, correct them and so avoid a future problem.

<Leslie> So we are using the wrong tool for the job. I sort of knew that. But surely there must be a better way than a two-points-in-time comparison!

<Bob> There is, but first we need to understand why a TPIT is a poor design.

<Leslie> Excellent. I’m all ears.

<Bob> A two point comparison is looking at the difference between two values, and that difference can be positive, zero or negative.  In fact, it is very unlikely to be zero because noise is always present.

<Leslie> OK.

<Bob> Now, both of the values we are comparing are single samples from two bigger pools of data.  It is the difference between the pools that we are interested in but we only have single samples of each one … so they are not measurements … they are estimates.

<Leslie> So, when we do a TPIT comparison we are looking at the difference between two samples that come from two pools that have inherent variation and may or may not actually be different.

<Bob> Well put.  We give that inherent variation a name … we call it variance … and we can quantify it.

<Leslie> So if we do many TPIT comparisons then they will show variation as well … for two reasons; first because the pools we are sampling have inherent variation; and second just from the process of sampling itself.  It was the first lesson in the ISP-1 course.

<Bob> Well done!  So the question is: “How does the variance of the TPIT sample compare with the variance of the pools that the samples are taken from?”

<Leslie> My intuition tells me that it will be less because we are subtracting.

<Bob> Your intuition is half-right.  The effect of the variation caused by the signal will be less … that is the rationale for the TPIT after all … but the same does not hold for the noise.

<Leslie> So the noise variation in the TPIT is the same?

<Bob> No. It is increased.

<Leslie> What! But that would imply that when we do this we are less likely to be able to detect a change because a small shift in signal will be swamped by the increase in the noise!

<Bob> Precisely.  And the degree that the variance increases by is mathematically predictable … it is increased by a factor of two.

<Leslie> So as we usually present variation as the square root of the variance, to get it into the same units as the metric, then that will be increased by the square root of two … 1.414

<Bob> Yes.

<Leslie> I need to put this counter-intuitive theory to the test!

<Bob> Excellent. Accept nothing on faith. Always test assumptions. And how will you do that?

<Leslie> I will use Excel to generate a big series of normally distributed random numbers; then I will calculate a series of TPIT differences using a fixed time interval; then I will calculate the means and variations of the two sets of data; and then I will compare them.
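
Leslie’s Excel experiment can equally be sketched with the Python standard library, using the same Monday arrival statistics she quotes (mean 210, sigma 44); the seed and sample size are arbitrary choices for the sketch.

```python
import random
from statistics import pstdev

random.seed(42)
MEAN, SIGMA, WEEKS = 210, 44, 1000  # Monday A&E arrivals, as in Leslie's example

year1 = [random.gauss(MEAN, SIGMA) for _ in range(WEEKS)]
year2 = [random.gauss(MEAN, SIGMA) for _ in range(WEEKS)]

# Two-points-in-time differences: same week this year minus last year.
tpit = [b - a for a, b in zip(year1, year2)]

ratio = pstdev(tpit) / pstdev(year1 + year2)
print(f"sd of raw data:    {pstdev(year1 + year2):.1f}")
print(f"sd of differences: {pstdev(tpit):.1f}")
print(f"ratio: {ratio:.2f}")  # close to sqrt(2), about 1.41
```

Subtracting two noisy samples doubles the variance, so the standard deviation of the differences comes out about 1.414 times bigger, just as the theory predicts.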

<Bob> Excellent.  Let us reconvene in ten minutes when you have done that.


10 minutes later …


<Leslie> Hi Bob, OK I am ready and I would like to present the results as charts. Is that OK?

<Bob> Perfect!

<Leslie> Here is the first one.  I used our A&E performance data to give me some context. We know that on Mondays we have an average of 210 arrivals with an approximately normal distribution and a standard deviation of 44; so I used these values to generate the random numbers. Here is the simulated Monday Arrivals chart for two years.

TPIT_SourceData

<Bob> OK. It looks stable as we would expect and I see that you have plotted the sigma levels which look to be just under 50 wide.

<Leslie> Yes, it shows that my simulation is working. So next is the chart of the comparison of arrivals for each Monday in Year 2 compared with the corresponding week in Year 1.

TPIT_DifferenceData

<Bob> Oooookaaaaay. What have we here?  Another stable chart with a mean of about zero. That is what we would expect given that there has not been a change in the average from Year 1 to Year 2. And the variation has increased … sigma looks to be just over 60.

<Leslie> Yes!  Just as the theory predicted.  And this is not a spurious answer. I ran the simulation dozens of times and the effect is consistent!  So, I am forced by reality to accept the conclusion that when we do two-points-in-time comparisons to eliminate a cyclical signal we will reduce the sensitivity of our test and make it harder to detect other signals.

<Bob> Good work Leslie!  Now that you have demonstrated this to yourself using a carefully designed and conducted simulation experiment, you will be better able to explain it to others.

<Leslie> So how do we avoid this problem?

<Bob> An excellent question and one that I will ask you to ponder on until our next chat.  You know the answer to this … you just need to bring it to conscious awareness.


 

A Case of Chronic A&E Pain: Part 1

 

Dr_Bob_Thumbnail

The blog last week seems to have caused a bit of a stir … so this week we will continue on the same theme.

I’m Dr Bob and I am a hospital doctor: I help to improve the health of poorly hospitals.

And I do that using the Science of Improvement – which is the same as all sciences, there is a method to it.

Over the next few weeks I will outline, in broad terms, how this is done in practice.

And I will use the example of a hospital presenting with pain in their A&E department.  We will call it St.Elsewhere’s ® Hospital … a fictional name for a real patient.


It is a while since I learned science at school … so I thought a bit of a self-refresher would be in order … just to check that nothing fundamental has changed.

Science_Sequence

This is what I found on page 2 of a current GCSE chemistry textbook.

Note carefully that the process starts with observations; hypotheses come after that; then predictions and finally designing experiments to test them.

The scientific process starts with study.

Which is reassuring because when helping a poorly patient or a poorly hospital that is exactly where we start.

So, first we need to know the symptoms; only then can we start to suggest some hypotheses for what might be causing those symptoms – a differential diagnosis; and then we look for more specific and objective symptoms and signs of those hypothetical causes.


<Dr Bob> What is the presenting symptom?

<StE> “Pain in the A&E Department … or more specifically the pain is being felt by the Executive Department who attribute the source to the A&E Department.  Their pain is that of 4-hour target failure.”

<Dr Bob> Are there any other associated symptoms?

<StE> “Yes, a whole constellation.  Complaints from patients and relatives; low staff morale, high staff turnover, high staff sickness, difficulty recruiting new staff, and escalating locum and agency costs. The list is endless.”

<Dr Bob> How long have these symptoms been present?

<StE> “As long as we can remember.”

<Dr Bob> Are the symptoms staying the same, getting worse or getting better?

<StE> “Getting worse. It is worse in the winter and each winter is worse than the last.”

<Dr Bob> And what have you tried to relieve the pain?

<StE> “We have tried everything and anything – business process re-engineering, balanced scorecards, Lean, Six Sigma, True North, Blue Oceans, Golden Hours, Perfect Weeks, Quality Champions, performance management, pleading, podcasts, huddles, cuddles, sticks, carrots, blogs  and even begging. You name it we’ve tried it! The current recommended treatment is to create a swarm of specialist short-stay assessment units – medical, surgical, trauma, elderly, frail elderly just to name a few.” 

<Dr Bob> And how effective have these been?

<StE> “Well some seemed to have limited and temporary success but nothing very spectacular or sustained … and the complexity and cost of our processes just seem to go up and up with each new initiative. It is no surprise that everyone is change weary and cynical.”


The pattern of symptoms is that of a chronic (longstanding) illness that has seasonal variation, which is getting worse over time and the usual remedies are not working.

And it is obvious that we do not have a clear diagnosis; or know if our unclear diagnosis is incorrect; or know if we are actually dealing with an incurable disease.

So first we need to focus on establishing the diagnosis.

And Dr Bob is already drawing up a list of likely candidates … with carveoutosis at the top.


<Dr Bob> Do you have any data on the 4-hour target pain?  Do you measure it?

<StE> “We are awash with data! I can send the quarterly breach performance data for the last ten years!”

<Dr Bob> Excellent, that will be useful as it should confirm that this is a chronic and worsening problem but it does not help establish a diagnosis.  What we need is more recent, daily data. Just the last six months should be enough. Do you have that?

<StE> “Yes, that is how we calculate the quarterly average that we are performance managed on. Here is the spreadsheet. We are ‘required’ to have fewer than 5% 4-hour breaches on average. Or else.”


This is where Dr Bob needs some diagnostic tools.  He needs to see the pain scores presented as a picture … so he can see the pattern over time … because it is a very effective way to generate plausible causal hypotheses.

Dr Bob can do this on paper, or with an Excel spreadsheet, or use a tool specifically designed for the job. He selects his trusted visualisation tool : BaseLine©.


StE_4hr_Pain_Chart

<Dr Bob> This is your A&E pain data plotted as a time-series chart.  At first glance it looks very chaotic … that is shown by the wide and flat histogram. Is that how it feels?

<StE> “That is exactly how it feels … earlier in the year it was unremitting pain and now we have a constant background ache with sharp, severe, unpredictable stabbing pains on top. I’m not sure what is worse!”

<Dr Bob> We will need to dig a bit deeper to find the root cause of this chronic pain … we need to identify the diagnosis or diagnoses … and your daily pain data should offer us some clues.

StE_4hr_Pain_Chart_RG_DoW

So I have plotted your data in a different way … grouping by day of the week … and this shows there is a weekly pattern to your pain. It looks worse on Mondays and least bad on Fridays.  Is that your experience?

<StE> “Yes, the beginning of the week is definitely worse … because it is like a perfect storm … more people referred by their GPs on Mondays and the hospital is already full with the weekend backlog of delayed discharges so there are rarely beds to admit new patients into until late in the day. So they wait in A&E.”


Dr Bob’s differential diagnosis is firming up … he still suspects acute-on-chronic carveoutosis as the primary cause but he now has identified an additional complication … Forrester’s Syndrome.

And Dr Bob suspects an unmentioned problem … that the patient has been traumatised by a blunt datamower!

So that is the evidence we will look for next … here