Big Data

The Digital Age is changing the context of everything that we do – and that includes how we use information for improvement.

Historically we have used relatively small, but carefully collected, samples of data and we subjected these to rigorous statistical analysis. Or rather the statisticians did.  Statistics is a dark and mysterious art to most people.

As the digital age ramped up in the 1980s, data storage, transmission and processing power became cheap and plentiful. The World Wide Web appeared; desktop computers with graphical user interfaces appeared; data warehouses appeared; and very quickly we were all drowning in the data ocean.

Our natural reaction was to centralise but it became quickly obvious that even an army of analysts and statisticians could not keep up.

So our next step was to automate and Business Intelligence was born; along with its beguiling puppy-faced friend, the Performance Dashboard.

The ocean of data could now be boiled down into a dazzling collection of animated histograms, pie-charts, trend-lines, dials and winking indicators. We could slice-and-dice,  we could zoom in-and-out, and we could drill up-and-down until our brains ached.

And none of it has helped very much in making wiser decisions that lead to effective actions that lead to improved outcomes.

Why?

The reason is that the missing link was not a lack of data processing power … it was a lack of an effective data processing paradigm.

The BI systems are rooted in the closed, linear, static, descriptive statistics of the past … trend lines, associations, correlations, p-values and so on.

Real systems are open, non-linear and dynamic; they are eternally co-evolving. Nothing stays still.

And it is real systems that we live in … so we need a new data processing paradigm that suits our current reality.

Some are starting to call this the Big Data Era and it is very different.

  • Business Intelligence uses descriptive statistics and data with high information density to measure things, detect trends etc.;
  • Big Data uses inductive statistics and concepts from non-linear system identification to infer laws (regressions, non-linear relationships, and causal effects) from large data sets, to reveal relationships and dependencies, and to predict outcomes and behaviours.
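
To make the contrast concrete, here is a sketch using invented data: a descriptive (BI-style) straight trend line versus an inductive approach that infers the underlying non-linear law and uses it to predict. The data, the model, and the 98% load scenario are all illustrative assumptions, not from the text.

```python
import numpy as np

# Hypothetical data: system delay grows non-linearly with load,
# as it does in real queueing systems.
rng = np.random.default_rng(0)
load = np.linspace(0.1, 0.95, 40)
delay = 1.0 / (1.0 - load) + rng.normal(0.0, 0.02, load.size)

# Descriptive (BI-style): summarise what has already happened.
trend = np.polyfit(load, delay, 1)        # straight trend line
bi_forecast = np.polyval(trend, 0.98)     # extrapolate the trend

# Inductive (Big Data-style): infer the underlying non-linear law
# (here 1/delay is linear in load) and use it to predict.
law = np.polyfit(load, 1.0 / delay, 1)
model_forecast = 1.0 / np.polyval(law, 0.98)

print(f"trend-line forecast at 98% load:  {bi_forecast:.1f}")
print(f"non-linear model forecast:        {model_forecast:.1f}")
```

The trend line badly underestimates what happens near full load; the inductive model, having identified the non-linear relationship, does not.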

And each of us already has a powerful Big Data processor … the 1.3 kg of caveman wet-ware sitting between our ears.

Our brain processes billions of bits of data every second and looks for spatio-temporal relationships to identify patterns, to derive models, to create action options, to predict short-term outcomes and to make wise survival decisions.

The problem is that our Brainy Big Data Processor is easily tricked when we start looking at time-dependent systems … data from multiple simultaneous flows that are interacting dynamically with each other.

It did not evolve to do that … it evolved to help us to survive in the Wild – as individuals.

And it has been very successful … as the burgeoning human population illustrates.

But now we have a new collective survival challenge  and we need new tools … and the out-of-date Business Intelligence Performance Dashboard is just not going to cut the mustard!

Big Data on TED Talks

 

The Productive Meeting

The engine of improvement is a productive meeting.

Complex adaptive systems (CAS) are those that  learn and change themselves.

The books of ‘rules’ are constantly revised and refreshed as the CAS co-evolves with its environment.

System improvement is the outcome of effective actions.

Effective actions are the outcomes of wise decisions.

Wise decisions are the output of productive meetings.

So the meeting process must be designed to be productive: which means both effective and efficient.


One of the commonest niggles that individuals report is ‘Death by Meeting’.

That alone is enough evidence that our current design for meetings is flawed.


One common error of omission is lack of clarity about the purpose of the meeting.

This cause has two effects:

1. The wrong sort of meeting design is used for the problem(s) under consideration.

A meeting designed for tactical  (how to) planning will not work well for strategic (why to) problems.

2. A mixed bag of problems is dumped into the all-purpose-less meeting.

Mixing up short term tactical and long term strategic problems on a single overburdened agenda is doomed to fail.


Even when the purpose of  a meeting  is clear and agreed it is common to observe an unproductive meeting process.

The process may be unproductive because it is ineffective … there are no wise decisions made and so no effective actions implemented.

Worse even than that … decisions are made that are unwise and the actions that follow lead to unintended negative consequences.

The process may also be unproductive because it is inefficient … it requires too much input to get any output.

Of course we want both an effective and an efficient meeting process … and we need to be aware that effectiveness  comes first.  Designing the meeting process to be a more efficient generator of unwise decisions is not a good idea! The result is an even bigger problem!


So our meeting design focus is ‘How could we make wise decisions as a group?’

But if we knew the answer to that we would probably already be doing it!

So we can ask the same question another way: ‘How do we make unwise decisions as a group?’

The second question is easier to answer. We just reflect on our current experience.

Some ways we appear to unintentionally generate unwise decisions are:

a) Ensure we have no clarity of purpose – confusion is a good way to defuse effective feedback.
b) Be selective in who we invite to the meeting – group-think facilitates consensus.
c) Ignore pragmatic, actual reality and use only academic, theoretical rhetoric.
d) Encourage the noisy – quiet people are non-contributors.
e) Engage in manipulative styles of behaviour – people cannot be trusted.
f) Encourage the  sceptics and cynics to critique and cull innovative suggestions.
g) Have a trump card – keep the critical ‘any other business’ to the end – just in case.

If we adopt all these tactics we can create meetings that are ‘lively’, frustrating, inefficient and completely unproductive. That of course protects us from making unwise decisions.


So one approach to designing meetings to be more productive is simply to recognise and challenge the unproductive behaviours – first as individuals and then as groups.

The place to start is within our own circle of influence – with those we trust – and to pledge to each other to consciously monitor for unproductive behaviours and to respectfully challenge them.

These behaviours are so habitual that we are often unaware that we are doing them.

And it feels strange at first, but it gets easier with practice and when you see the benefits.

Competent and Conscious

This week I was made mindful again of a simple yet powerful model that goes a long way to explaining why we find change so difficult.

It is the conscious-competent model.

There are two dimensions, which give four combinations; these are illustrated in the diagram.

We all start in the bottom left corner. We do not know what we do not know.  We are ignorant and incompetent and unconscious of the  fact.

Let us call that Blissful Ignorance.

Then suddenly we get a reality check. A shock. A big enough one to start us on the emotional roller coaster ride we call the Nerve Curve.

We become painfully aware of our ignorance (and incompetence). Conscious of it.

That is not a happy place to be and we have a well-developed psychological first line of defence to protect us. It is called Denial.

“That’s a load of rubbish!” we say.

But denial does not change reality and eventually we are reminded. Reality does not go away.

Our next line of defence is to shoot the messenger. We get angry and aggressive.

"Who the **** are you to tell me that I do not know what I am doing!" we say.

Sometimes we are openly aggressive.  More often we use passive aggressive tactics. We resort to below-the-belt behind-the-back corridor-gossip behaviour.

But that does not change reality either.  And we are slowly forced to accept that we need to change. But not yet …

Our next line of defence is to bargain for more time (in the hope that reality will swing back in our favour).

"There may be something in this but I am too busy at the moment … I will look at this tomorrow/next week/next month/after my holiday/next quarter/next financial year/in my next job/when I retire!" we wheedle.

Our strategy usually does not work – it just wastes time – and while we prevaricate the crisis deepens. Reality is relentless.

Our last line of defence has now been breached and now we sink into depression and despair.

"It is too late. Too difficult for me. I need rescuing. Someone help me!" we wail.

That does not work either. There is no one there. It is up to us. It is sink-or-swim time.

What we actually need now is a crumb of humility.

And with that we can start on the road to Know How. We start by unlearning the old stuff and then we can  replace it with the new stuff.  Step-by-step we climb out of the dark depths of Painful Awareness.

And then we get a BIG SURPRISE.

It is not as difficult as we assumed. And we discover that learning-by-doing is fun. And we find that demonstrating to others what we are learning is by far the most effective way to consolidate our new conscious competence.

And by playing to our strengths, with persistence, with practice and with reality-feedback our new know how capability gradually becomes second nature. Business as usual. The way we do things around here. The culture.

Then, and only then, will the improvement sustain … and spread … and grow.

 

N-N-N-N Feedback

4NChartOne of the essential components of an adaptive system is effective feedback.

Without feedback we cannot learn – we can only guess and hope.

So the design of our feedback loops is critical-to-success.

Many people do not like getting feedback because they live in a state of fear: fear of criticism. This is a learned behaviour.

Many people do not like giving feedback because they too live in a state of fear: fear of conflict. This is a learned behaviour.

And what is learned can be unlearned; with training, practice and time.

But before we will engage in unlearning our current habit we need to see the new habit that will replace it. The one that will work better for us. The one that is more effective.  The one that will require less effort. The one that is more efficient use of our most precious resource: life-time.

There is an effective and efficient feedback technique called The 4N Chart®.  And I know it works because I have used it and demonstrated to myself and others that  it works. And I have seen others use it and demonstrate to themselves and others that it works too.

The 4N Chart® has two dimensions – Time (Now and Future) and Emotion (Happy and Unhappy).

This gives four combinations each of which is given a label that begins with the letter ‘N’ – Niggles, Nuggets, NoNos and NiceIfs.

The N has a further significance … it reminds us which order to move through the  chart.

We start bottom left with the Niggles. What is happening now that causes us to feel unhappy? What are the root causes of our niggles? And more importantly, which of these do we have control over? Knowing that gives us a list of actions that we can do that will have the effect of reducing our niggles. And we can start that immediately because we do not need permission.

Next we move top-left to the Nuggets. What is happening now that causes us to feel happy? What are the root causes of our nuggets? Which of these do we control? We need to recognise these too and to celebrate them.  We need to give ourselves a pat on the back for them because that helps reinforce the habit to keep doing them.

Now we look to the future – and we need to consider two things: what we do not want to feel in the future and what we do want to feel in the future. These are our NoNos and our NiceIfs. It does not matter which order we do this … but  we must consider both.

Many prefer to consider dangers and threats first … that is SAFETY FIRST  thinking and is OK. First Do No Harm. Primum non nocere.

So with the four corners of our 4N Chart® filled in we have a balanced perspective and we can set off on the journey of improvement with confidence. Our 4N Chart® will help us stay on track. And we will update it as we go, as we study, as we plan and as we do things. As we convert NiceIfs into Nuggets and  Niggles into NoNos.
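
As a minimal sketch of the structure described above (assuming nothing about the official 4N Chart® tool beyond its four quadrants, and with all item text invented), the chart and the conversion of a NiceIf into a Nugget could be represented as:

```python
from dataclasses import dataclass, field

@dataclass
class FourNChart:
    """Two axes - Time (Now/Future) and Emotion (Happy/Unhappy) -
    giving four quadrants, each a list of observations."""
    niggles: list = field(default_factory=list)   # Now + Unhappy
    nuggets: list = field(default_factory=list)   # Now + Happy
    nonos: list = field(default_factory=list)     # Future + Unhappy (to avoid)
    niceifs: list = field(default_factory=list)   # Future + Happy (to aim for)

    def convert(self, item: str, src: str, dst: str) -> None:
        """Record progress, e.g. a delivered NiceIf becomes a Nugget."""
        getattr(self, src).remove(item)
        getattr(self, dst).append(item)

chart = FourNChart()
chart.niggles.append("clinic letters typed late")        # invented example
chart.niceifs.append("same-day letter turnaround")       # invented example
chart.convert("same-day letter turnaround", "niceifs", "nuggets")
```

Updating the chart as the journey progresses is then just a matter of moving items between quadrants, exactly as the text describes.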

It sounds simple.  It is in theory. It is not quite as easy to do.

It takes practice … particularly the working backwards from the effect (the feeling) to the cause (the facts). This is done step-by-step using Reality as a guide – not our rhetoric. And we must be careful not to make assumptions in lieu of evidence. We must be careful not to jump to unsupported conclusions. That is called pre-judging.  Prejudice.

But when you get the hang of using The 4N Chart® you will be amazed at how much more easily and more quickly you make progress.

Economy-of-Scale vs Economy-of-Flow

This was an interesting headline to see on the front page of a newspaper yesterday!

The Top Man of the NHS is openly challenging the current Centralisation-is-The-Only-Way-Forward Mantra;  and for good reason.

Mass centralisation is poor system design – very poor.

Q: So what is driving the centralisation agenda?

A: Money.

Or to be more precise – rather simplistic thinking about money.

The misguided money logic goes like this:

1. Resources (such as highly trained doctors, nurses and AHPs) cost a lot of money to provide.
[Yes].

2. So we want all these resources to be fully-utilised to get value-for-money.
[No, not all – just the most expensive].

3. So we will gather all the most expensive resources into one place to get the Economy-of-Scale.
[No, not all the most expensive – just the most specialised]

4. And we will suck/push all the work through these super-hubs to keep our expensive specialist resources busy all the time.
[No, what about the growing population of older folks who just need a bit of expert healthcare support, quickly, and close to home?]

This flawed logic confuses two complementary ways to achieve higher system productivity/economy/value-for-money without  sacrificing safety:

Economies of Scale (EoS) and Economies of Flow (EoF).

Of the two the EoF is the more important because by using EoF principles we can increase productivity in huge leaps at almost no cost; and without causing harm and disappointment. EoS are always destructive.

"But that is impossible. You are talking rubbish … because if it were possible we would be doing it!"

It is not impossible and we are doing it … but not at scale and pace in healthcare … and the reason for that is we are not trained in Economy-of-Flow methods.

And those who are trained, and who have experienced the effects of EoF, would not do it any other way.

Example:

In a recent EoF exercise an ISP (Improvement Science Practitioner) helped a surgical team to increase their operating theatre productivity by 30% overnight at no cost. The productivity improvement was measured and sustained for most of the last year. [It did dip a bit when the waiting list evaporated because of the higher throughput, and again after some meddlesome middle management madness was triggered by end-of-financial-year target chasing]. The team achieved the improvement using Economy of Flow principles and by re-designing some historical scheduling policies. The new policies were less antagonistic. They were designed to line the ducks up and as a result the flow improved.


So the specific issue of  Super Hospitals vs Small Hospitals is actually an Economy of Flow design challenge.

But there is another critical factor to take into account.

Specialisation.

Medicine has become super-specialised for a simple reason: it is believed that to get ‘good enough’ at something you have to have a lot of practice. And to get the practice you have to have high volumes of the same stuff – so you need to specialise and then to sort undifferentiated work into separate ‘speciologist’ streams or sequence the work through separate speciologist stages.

Generalists are relegated to second-class-citizen status; mere tripe-skimmers and sign-posters.

Specialisation is certainly one way to get ‘good enough’ at doing something … but it is not the only way.

Another way is to learn the key essentials from someone who already knows (and can teach), and then to continuously improve using feedback on what works and what does not – feedback from everywhere.

This second approach is actually a much more effective and efficient way to develop expertise – but we have not been taught this way.  We have only learned the scrape-the-burned-toast-by-suck-and-see method.

We need to experience another way.

We need to experience rapid acquisition of expertise!

And being able to gain expertise quickly means that we can become expert generalists.

There is good evidence that the broader our skill-set the more resilient we are to change, and the more innovative we are when faced with novel challenges.

In the Navy of the 1800’s sailors were “Jacks of All Trades and Master of One” because if only one person knew how to navigate and they got shot or died of scurvy the whole ship was doomed.  Survival required resilience and that meant multi-skilled teams who were good enough at everything to keep the ship afloat – literally.


Specialisation has another big drawback – it is very expensive and on many dimensions. Not just Finance.

Example:

Suppose we have a six-step process and we have specialised to the point where an individual can only do one step to the required level of performance (safety/flow/quality/productivity). The minimum number of people we need is six, and the process only flows when we have all six people. Our minimum costs are high and they do not scale with flow.

If any one of the six are not there then the whole process stops. There is no flow.  So queues build up and smooth flow is sacrificed.

Our system behaves in an unstable and chaotic feast-or-famine manner, and rapidly shifting priorities create what is technically called ‘thrashing’.

And the special-six do not like the constant battering.

And the special-six have the power to individually hold the whole system to ransom – they do not even need to agree.

And then we aggravate the problem by paying them a high salary that is independent of how much they collectively achieve.

We now have the perfect recipe for a bigger problem!  A bunch of grumpy, highly-paid specialists who blame each other for the chaos and who incessantly clamour for ‘more resources’ at every step.

This is not financially viable, and so it creates the drive for economy-of-scale thinking: to get ‘flow resilience’ we need more than one specialist at each of the six steps, so that if one is on holiday or off sick then the process can still flow. We give these tribes of ‘speciologists’ their own names and budgets, and now we need to put all these departments somewhere – so we will need a big hospital to fit them in – along with the queues of waiting work that they need.

Now we make an even bigger design blunder.  We assume the ‘efficiency’ of our system is the same as the average utilisation of all the departments – so we trim budgets until everyone’s utilisation is high; and we suck any-old work in to ensure there is always something to do to keep everyone busy.

And in so doing we sacrifice all our Economy of Flow opportunities and we then scratch our heads and wonder why our total costs and queues are escalating,  safety and quality are falling, the chaos continues, and our tribes of highly-paid specialists are as grumpy as ever they were!   It must be an impossible-to-solve problem!
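
The claim that chasing high utilisation escalates queues is a standard queueing-theory result. For a single stage with random (Poisson) arrivals and exponential service times – assumptions made here purely for illustration, not stated in the text – the average queue length grows explosively as utilisation approaches 100%:

```python
# Classic M/M/1 queueing result: average number of jobs waiting is
# Lq = rho^2 / (1 - rho), where rho is utilisation (0 <= rho < 1).
def avg_queue_length(rho: float) -> float:
    return rho * rho / (1.0 - rho)

for rho in (0.70, 0.85, 0.95, 0.99):
    print(f"utilisation {rho:.0%}: average queue = {avg_queue_length(rho):.1f}")
```

Pushing utilisation from 85% to 99% does not squeeze out a few percent more value; it multiplies the queue roughly twenty-fold, which is why trimming budgets until everyone is busy backfires.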


Now contrast that with having a pool of generalists – all of whom are multi-skilled and can do any of the six steps to the required level of expertise.  A pool of generalists is a much more resilient-flow design.
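
The resilience argument can be made quantitative with a toy model. Assuming each person is independently available with probability p (say 90%), and comparing six dedicated specialists with a pool of eight cross-trained generalists – both numbers invented for illustration – the flow probabilities are very different:

```python
from math import comb

def flow_probability_specialists(p: float, steps: int = 6) -> float:
    """All six specific specialists must be available: p^6."""
    return p ** steps

def flow_probability_generalists(p: float, pool: int = 8, steps: int = 6) -> float:
    """Any 6 of the pool of 8 cross-trained staff will do:
    the binomial tail P(at least `steps` of `pool` available)."""
    return sum(comb(pool, k) * p**k * (1 - p) ** (pool - k)
               for k in range(steps, pool + 1))

p = 0.9  # each person available 90% of the time (assumed)
print(f"specialists: {flow_probability_specialists(p):.2f}")
print(f"generalists: {flow_probability_generalists(p):.2f}")
```

With specialists the process flows only about half the time; with the generalist pool it flows about 96% of the time – a resilient-flow design, exactly as argued above.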

And the key phrase here is ‘to the required level of expertise‘.

That is how to achieve Economy-of-Flow on a small scale without compromising either safety or quality.

Yes, there is still a need for a super-level of expertise to tackle the small number of complex problems – but that expertise is better delivered as a collective-expertise to an individual problem-focused process.  That is a completely different design.

Designing and delivering a system that can achieve the synergy of the pool-of-generalists and team-of-specialists model requires addressing a key error of omission first: we are not trained how to do this.

We are not trained in Complex-Adaptive-System Improvement-by-Design.

So that is where we must start.

 

The Learning Labyrinth

There is an amazing phenomenon happening right now – a whole generation of people are learning to become system designers and they are doing it by having fun.

There is a game called Minecraft which millions of people of all ages are rapidly discovering.  It is creative, fun and surprisingly addictive.

This is what it says on the website.

“Minecraft is a game about breaking and placing blocks. At first, people built structures to protect against nocturnal monsters, but as the game grew players worked together to create wonderful, imaginative things.”

The principle is that before you can build you have to dig … you have to gather the raw materials you need … and then you have to use what you have gathered in novel and imaginative ways.  You need tools too, and you need to learn what they are used for, and what they are useless for. And the quickest way to learn the necessary survival and creative  skills is by exploring, experimenting, seeking help, and sharing your hard-won knowledge and experience with others.

The same principles hold in the real world of Improvement Science.

The treasure we are looking for is less tangible though … but no less difficult to find … unless you know where to look.

The treasure we seek is learning; how to achieve significant and sustained improvement on all dimensions.

And there is a mountain of opportunity that we can mine into. It is called Reality.

And when we do that we uncover nuggets of knowledge, jewels of understanding, and pearls of wisdom.

There are already many tunnels that have been carved out by others who have gone before us. They branch and join to form a vast cave network. A veritable labyrinth. Complicated and not always well illuminated or signposted.

And stored in the caverns is a vast treasure trove of experience we can dip into – and an even greater hoard of new treasure waiting to be discovered.

But even now there is no comprehensive map of the labyrinth. So it is easy to get confused and to get lost. Not all junctions have signposts and not all the signposts are correct. There are caves with many entrances and exits, there are blind-ending tunnels, and there are many hazards and traps for the unwary.

So to enter the Learning Labyrinth and to return safely with Improvement treasure we need guides: those who know the safe paths and the unsafe ones. And as we explore we all need to improve the signage and add warning signs where hazards lurk.

And we need to work at the edge of knowledge  to extend the tunnels further. We need to seal off the dead-ends, and to draw and share up-to-date maps of the paths.

We need to grow a Community of Improvement Science Minecrafters.

And the first things we need are some basic improvement tools and techniques … and they can be found here.

SuDoKu

An Improvement-by-Design challenge is very like a Sudoku puzzle. The rules are deceptively simple but solving the puzzle is not so simple.

For those who have never tried a Sudoku puzzle the objective is to fill in all the empty boxes with a number between 1 and 9. The constraint is that each row, column and 3×3 box (outlined in bold) must include all the numbers between 1 and 9 i.e. no duplicates.

What you will find when you try is that, at each point in the puzzle-solving process, there is more than one choice for most of the empty cells.

The trick is to find the empty cells that have only one option and fill those in. That changes the puzzle and makes it ‘easier’.

And when you keep following this strategy, and so long as you do not make any mistakes, then you will solve the puzzle.  It just takes concentration, attention to detail, and discipline.

In the example above, the top-right cell in the left-box on the middle-row can only hold a 6; and the top-middle cell in the middle-box on the bottom-row must be a 3.

So we can see already there are three ways ‘into’ the solution – put the 6 in and see where that takes us; put the 3 in and see where that takes us; or put both in and see where that takes us.

The final solution will be the same – so there are multiple paths from where we are to our objective.  Some may involve more mental work than others but all will involve completing the same number of empty cells.

What is also clear is that the order in which we complete the empty cells is not arbitrary. Usually the boxes and rows with the fewest empty cells get completed earlier, and those with the most empty cells at the start get completed later.

And even if the final configuration is the same, if we start with a different set of missing cells the solution path will be different. It may be very easy, very hard or even impossible without some ‘guessing’ and hoping for the best.
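
The "fill the cells that have only one option" strategy described above is what Sudoku solvers call the naked-singles rule. A minimal sketch of it follows, representing the grid as a 9×9 list of lists with 0 for an empty cell – a convention chosen here for illustration, not taken from the text:

```python
def candidates(grid, r, c):
    """Digits 1-9 not already used in row r, column c, or the 3x3 box."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
    return set(range(1, 10)) - used

def solve_by_singles(grid):
    """Repeatedly fill cells that have exactly one candidate.
    Solves any puzzle that yields to the naked-singles rule alone."""
    progress = True
    while progress:
        progress = False
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    opts = candidates(grid, r, c)
                    if len(opts) == 1:
                        grid[r][c] = opts.pop()
                        progress = True
    return grid
```

Each cell filled in changes the puzzle and makes other cells easier – with concentration, attention to detail and discipline (and no guessing), the loop runs until the grid is complete or no single-option cells remain.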


Exactly the same is true of improvement-by-design challenges.

The rules of flow science are rather simple; but when we have a system of parallel streams (the rows) interacting with parallel stages (the columns), and when we have safety, delivery and economy constraints to comply with in every part of the system … then finding an ‘improvement plan’ that will deliver our objective is a tough challenge.

But it is possible with concentration, attention-to-detail and discipline; and that requires some flow science training and some improvement science practice.

OK – I am off for lunch and then maybe indulge in a Sudoku puzzle or two – just for fun – and then maybe design an improvement plan or two – just for fun!

 

Firewall

Fires are destructive, indifferent, and they can grow and spread very fast.

The picture is of  the Buncefield explosion and conflagration that occurred on 11th December 2005 near Hemel Hempstead in the UK.  The root cause was a faulty switch that failed to prevent tank number 912 from being overfilled. This resulted in an initial 300 gallon petrol spill which created the perfect conditions for an air-fuel explosion.  The explosion was triggered by a spark and devastated the facility. Over 2000 local residents needed to be evacuated and the massive fuel fire took days to bring under control. The financial cost of the accident has been estimated to run into tens of millions of pounds.

The Great Fire of London in September 1666 led directly to the adoption of new building standards – notably brick and stone instead of wood because they are more effective barriers to fire.

A common design to limit the spread of a fire is called a firewall.

And we use the same principle in computer systems to limit the spread of damage when a computer system goes out of control.


Money is the fuel that keeps the wheels of healthcare systems turning.  And healthcare is an expensive business so every drop of cash-fuel is precious.  Healthcare is also a risky business – from both a professional and a financial perspective. Mistakes can quickly lead to loss of livelihood, expensive recovery plans and huge compensation claims. The social and financial equivalent of a conflagration.

Financial fires spread just like real ones – quickly. So it makes good sense not to have all the cash-fuel in one big pot.  It makes sense to distribute it to smaller pots – in each department – and to distribute the cash-fuel intermittently. These cash-fuel silos are separated by robust financial firewalls and they are called Budgets.

The social sparks that ignite financial fires are called ‘Niggles‘. They are very numerous but we have effective mechanisms for containing them. The problem happens when multiple sparks happen at the same time and place and together create a small chain reaction. Then we get a complaint. A ‘Not Again‘. And we are required to spend some of our precious cash-fuel investigating and apologizing. We do not deal with the root cause, we just scrape the burned toast.

And then one day the chain reaction goes a bit further and we get a ‘Near Miss‘.  That has a different  reporting mechanism so it stimulates a bigger investigation and it usually culminates in some recommendations that involve more expensive checking, documenting and auditing of the checking and documentation.  The root cause, the Niggles, go untreated – because there are too many of them.

But this check-and-correct reaction is also  expensive and we need even more cash-fuel to keep the organizational engine running – but we do not have any more. Our budgets are capped. So we start cutting corners. A bit here and a bit there. And that increases the risk of more Niggles, Not Agains, and Near Misses.

Then the ‘Never Event‘ happens … a Safety and Quality catastrophe that triggers the financial conflagration and toasts the whole organization.


So although our financial firewalls, the Budgets, are partially effective they also have downsides:

1. Paradoxically they can create the perfect condition for a financial conflagration when too small a budget leads to corner-cutting on safety.

2. They lead to ‘off-loading’ which means that too-expensive-to-solve problems are chucked over the financial firewalls into the next department.  The cost is felt downstream of the source – in a different department – and is often much larger. The sparks are blown downwind.

For example: a waiting list management department is under financial pressure and is running short-staffed because a recruitment freeze has been imposed. The overburdening of the remaining staff leads to errors in booking patients for operations. The knock-on effect is that patients are cancelled on the day and the allocated operating theatre time is wasted. The additional cost of wasted theatre time is orders of magnitude greater than the cost-saving achieved in the upstream stage. The result is a lower quality service, a greater cost to the whole system, and the risk that safety corners will be cut leading to a Near Miss or a Never Event.

The nature of real systems is that small perturbations can be rapidly amplified by a ‘tight’ financial design to create a very large and expensive perturbation called a ‘catastrophe’.  A silo-based financial budget design with a cost-improvement thumbscrew feature increases the likelihood of this universally unwanted outcome.

So if we cannot use one big fuel tank or multiple, smaller, independent fuel tanks then what is the solution?

We want to ensure smooth responsiveness of our healthcare engine, we want healthcare  cash-fuel-efficiency and we want low levels of toxic emissions (i.e. complaints) at the same time. How can we do that?

Fuel-injection.

Electronic Fuel Injection (EFI) designs have now replaced the old-fashioned, inefficient, high-emission carburettor-based engines of the 1970s and 1980s.

The safer, more effective and more efficient cash-flow design is to inject the cash-fuel where and when it is needed and in just the right amount.

And to do that we need to have a robust, reliable and rapid feedback system that controls the cash-injectors.

But we do not have such a feedback system in healthcare so that is where we need to start our design work.

Designing an automated cash-injection system requires understanding how the Seven Flows of any system work together, and the two critical flows are Data Flow and Cash Flow.

And that is possible.

The Improvement Pyramid

The image of a tornado is what many associate with improvement.  An unpredictable, powerful force that sweeps away the wood in its path. It certainly transforms – but it leaves a trail of destruction and disappointment in its wake. It does not discriminate between the green wood and the dead wood.

A whirlwind is created by a combination of powerful forces – but the trigger that unleashes the beast is innocuous. The classic ‘butterfly wing effect’. A spark that creates an inferno.

This is not the safest way to achieve significant and sustained improvement. A transformation tornado is a blunt and destructive tool.  All it can hope to achieve is to clear the way for something more elegant. Improvement Science.

We need to build the capability for improvement progressively, and to build it to be effective, efficient, strong, reliable and resilient. In a word – trustworthy. We need a durable structure.

But what sort of structure?  A tower from whose lofty penthouse we can peer far into the distance?  A bridge between the past and the future? A house with foundations, walls and a roof? Do these man-made edifices meet our criteria?  Well, partly.

Let us see what nature suggests. What are the naturally durable designs?

Suppose we have a bag of dry sand – an unstructured mix of individual grains – and that each grain represents an improvement idea.

Suppose we have a specific issue that we would like to improve – a Niggle.

Let us try dropping the Improvement Sand on the Niggle – not in a great big reactive dollop – but in a proactive, exploratory bit-at-a-time way.  What shape emerges?

What we see is illustrated by the hourglass.  We get a pyramid.

The shape of the pyramid is determined by two factors: how sticky the sand is and how fast we pour it.

What we want is a tall pyramid – one whose sturdy pinnacle gives us the capability to see far and to do much.

The stickier the sand the steeper the sides of our pyramid.  The faster we pour the quicker we get the height we need. But there is a limit. If we pour too quickly we create instability – we create avalanches.

So we need to give the sand time to settle into its stable configuration; time for it to trickle to where it feels most comfortable.
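The sand metaphor can even be simulated. The toy model below is loosely based on the Bak–Tang–Wiesenfeld sandpile: grains are poured one at a time, and any local slope steeper than a critical value triggers toppling – an avalanche. The critical slope plays the role of ‘stickiness’, and all the parameter values are invented for illustration:

```python
# A toy 1-D sandpile, loosely based on the Bak-Tang-Wiesenfeld model.
# Grains are poured one at a time at the centre; any local slope steeper
# than CRITICAL_SLOPE (our 'stickiness') triggers toppling - an avalanche.
# All parameter values here are invented for illustration.
WIDTH, CRITICAL_SLOPE, GRAINS = 21, 2, 500
heights = [0] * WIDTH

def relax(h):
    """Topple grains until every local slope is within the critical slope;
    return the number of grain moves (the avalanche size)."""
    moves, unstable = 0, True
    while unstable:
        unstable = False
        for i in range(len(h) - 1):
            if h[i] - h[i + 1] > CRITICAL_SLOPE:
                h[i] -= 1; h[i + 1] += 1; moves += 1; unstable = True
            elif h[i + 1] - h[i] > CRITICAL_SLOPE:
                h[i + 1] -= 1; h[i] += 1; moves += 1; unstable = True
    return moves

avalanche_sizes = []
for _ in range(GRAINS):
    heights[WIDTH // 2] += 1          # pour one grain at a time
    avalanche_sizes.append(relax(heights))
```

Raising CRITICAL_SLOPE (stickier sand) produces a steeper, taller pile; and because each grain is allowed to settle before the next is poured, the avalanches stay small – the ‘settle before pouring more’ discipline described above.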

And, in translating this metaphor to building improvement capability in a system, we could suggest that the ‘stickiness’ factor is how well ideas hang together, how well individuals get on with each other, and how well they share ideas and learning. How cohesive our people are.  Distrust and conflict represent repulsive forces.  Repulsion creates a large, wide, flat structure – stable maybe, but incapable of vision and improvement. That is not what we need.

So when developing a strategy for building improvement capability we build small pyramids where the niggles point. Over time they will merge and bigger pyramids will appear and merge – until we achieve the height we need. Then we have a stable and capable improvement structure. One that we can use and that we can trust.

Just from sprinkling Improvement Science Sand on our Niggles.

Our Iceberg Is Melting

[Dring Dring] The telephone soundbite announced the start of the coaching session.

<Bob> Good morning Leslie. How are you today?

<Leslie> I have been better.

<Bob> You seem upset. Do you want to talk about it?

<Leslie> Yes, please. The trigger for my unhappiness is that last week I received an email demanding that I justify the time I spend doing improvement work, and a summons to a meeting to ‘discuss some issues that have been raised’.

<Bob> OK. I take it that you do not know what or who has triggered this inquiry.

<Leslie> You are correct. My working hypothesis is that it is the end of the financial year and budget holders are looking for opportunities to do some pruning – to meet their cost improvement program targets!

<Bob> So what is the problem? You have shared the output of your work. You have demonstrated significant improvements in safety, flow, quality and productivity and you have described both them and the methodology clearly.

<Leslie> I know. That is why I was so upset to get this email. It is as if everything that we have achieved has been ignored. It is almost as if it is resented.

<Bob> Ah! You may well be correct.  This is the nature of paradigm shifts. Those who have the greatest vested interest in the current paradigm get spooked when they feel it start to wobble. Each time you share the outcome of your improvement work you create emotional shock-waves. The effects are cumulative and eventually there will be a ‘crisis of confidence’ in those who feel most challenged by the changes that you are demonstrating are possible.  The whole process is well described in Thomas Kuhn’s The Structure of Scientific Revolutions. That is not a book for an impatient reader though – for those who prefer something lighter I recommend “Our Iceberg is Melting” by John Kotter.

<Leslie> Thanks Bob. I will get a copy of Kotter’s book – that sounds more my cup of tea. Will that tell me what to do?

<Bob> It is a parable – a fictional story of a colony of penguins who discover that their iceberg is melting and are suddenly faced with a new and urgent potential risk of not surviving the storms of the approaching winter. It is not a factual account of a real crisis or a step-by-step recipe book for solving all problems  – it describes some effective engagement strategies in general terms.

<Leslie> I will still read it. What I need is something more specific to my actual context.

<Bob> This is an improvement-by-design challenge. The only difference from the challenges you have done already is that this time the outcome you are looking for is a smooth transition from the ‘old’ paradigm to the ‘new’ one.  Kuhn showed that this transition will not start to happen until there is a new paradigm because individuals choose to take the step from the old to the new and they do not all do that at the same time.  Your work is demonstrating that there is a new paradigm. Some will love that message, some will hate it. Rather like Marmite.

<Leslie> Yes, that makes sense.  But how do I deal with an unseen enemy who is stirring up trouble behind my back?

<Bob> Are you referring to those who have ‘raised some issues’?

<Leslie> Yes.

<Bob> They will be the ones who have most invested in the current status quo and they will not be in senior enough positions to challenge you directly so they are going around spooking the inner Chimps of those who can. This is expected behaviour when the relentlessly changing reality starts to wobble the concrete current paradigm.

<Leslie> Yes! That is exactly how it feels.

<Bob> The danger lurking here is that your inner Chimp is getting spooked too and is conjuring up Gremlins and Goblins from the Computer! Left to itself your inner Chimp will steer you straight into the Victim Vortex.  So you need to take it for a long walk, let it scream and wave its hairy arms about, listen to it, and give it lots of bananas to calm it down. Then put your calmed-down Chimp into its cage and your ‘paradigm transition design’ into the Computer. Only then will you be ready for the ‘so-justify-yourself’ meeting.  At the meeting your Chimp will be out of its cage like a shot and interpreting everything as a threat. It will disable you and go straight to the Computer for what to do – and it will read your design and follow the ‘wise’ instructions that you have put in there.

<Leslie> Wow! I see how you are using the Chimp Paradox metaphor to describe an incredibly complex emotional process in really simple language. My inner Chimp is feeling happier already!

<Bob> And remember that you are all in the same race. Your collective goal is to cross the finish line as quickly as possible with the least chaos, pain and cost.  You are not in a battle – that is lose-lose inner Chimp thinking.  The only message that your interrogators must get from you is ‘Win-win is possible and here is how we can do it’. That will be the best way to soothe their inner Chimps – the ones who fear that you are going to sink their boat by rocking it.

<Leslie> That is really helpful. Thank you again Bob. My inner Chimp is now snoring gently in its cage and while it is asleep I have some Improvement-by-Design work to do and then some Computer programming.

Reducing Avoidable Harm

“Primum non nocere” is Latin for “First do no harm”.

It is a warning mantra that has been repeated by doctors for thousands of years – and for good reason.

Doctors can be bad for your health.

I am not referring to the rare case where the doctor deliberately causes harm.  Such people are criminals and deserve to be in prison.

I am referring to the much more frequent situation where the doctor has no intention to cause harm – but harm is the outcome anyway.

Very often the risk of harm is unavoidable. Healthcare is a high risk business. Seriously unwell patients can be very unstable and very unpredictable.  Heroic efforts to do whatever can be done can result in unintended harm and we have to accept those risks. It is the nature of the work.  Much of the judgement in healthcare is balancing benefit with risk on a patient by patient basis. It is not an exact science. It requires wisdom, judgement, training and experience. It feels more like an art than a science.

The focus of this essay is not the above. It is on unintentionally causing avoidable harm.

Or rather unintentionally not preventing avoidable harm which is not quite the same thing.

Safety means prevention of avoidable harm. A safe system is one that does that. There is no evidence of harm to collect. A safe system does not cause harm. Never events never happen.

Safe systems are designed to be safe.  The root causes of harm are deliberately designed out one way or another.  But it is not always easy because to do that we need to understand the cause-and-effect relationships that lead to unintended harm.  Very often we do not.


In 1847 a doctor called Ignaz Semmelweis made a very important discovery. He discovered that if the doctors and medical students washed their hands in disinfectant when they entered the labour ward, then the number of mothers and babies who died from infection was reduced.

And the number dropped a lot.

It fell from an annual average of 10% to less than 2%!  In really bad months the rate had been 30%.

The chart below shows the actual data plotted as a time-series chart. The yellow flag in 1848 is just after Semmelweis enforced a standard practice of hand-washing.

[Figure: Vienna Maternal Mortality 1785–1848]

Semmelweis did not know the mechanism though. This was not a carefully designed randomised controlled trial (RCT). He was desperate. And he was desperate because this horrendous waste of young lives was only happening on the doctors’ ward.  On the nurses’ ward, which was just across the corridor, the maternal mortality was less than 2%.

The hospital authorities explained it away as ‘bad air’ from outside. That was the prevailing belief at the time. Unavoidable. A risk that had to be just accepted.

Semmelweis could not do a randomised controlled trial because they were not invented until a century later.

And Semmelweis suspected that the difference between the mortality on the nurses’ and the doctors’ wards was something to do with the Mortuary. Only the doctors performed the post-mortems, and the practice of teaching anatomy to medical students using post-mortem dissection was an innovation pioneered in Vienna in 1823 (the first yellow flag on the chart above). But Semmelweis did not have this data in 1847.  He collated it later and did not publish it until 1861.

What Semmelweis demonstrated was that the unintended and avoidable deaths were caused by ignorance of the mechanism by which microorganisms cause disease. We know that now. He did not.

It would be another 20 years before Louis Pasteur demonstrated the mechanism using the famous experiment with the swan-neck flask. Pasteur did not discover microorganisms; he proved that they did not appear spontaneously in decaying matter, as was believed. He showed that when the bugs were killed by boiling, the broth in the flask stayed fresh even though it was exposed to the air. That was a big shock but it was a simple and repeatable experiment. He had a mechanism. He was believed. Germ theory was born. A Scottish surgeon called Joseph Lister read of this discovery and surgical antisepsis was born.

Semmelweis suspected that some ‘agent’ may have been unwittingly transported from the dead bodies to the live mothers and babies on the hands of the doctors.  It was a deeply shocking suggestion that the doctors were unwittingly killing their patients.

The other doctors did not take this suggestion well. Not well at all. They went into denial. They discounted the message and they discharged the messenger. Semmelweis never worked in Vienna again. He went back to Hungary and repeated the experiment. It worked.


Even today the message that healthcare practitioners can unwittingly bring avoidable harm to their patients is disturbing. We still seek solace in denial.

Hospital acquired infections (HAI) are a common cause of harm and many are avoidable using simple, cheap and effective measures such as hand-washing.

The harm does not come from what we do. It comes from what we do not do. It happens when we omit to follow the simple safety measures that have been proven to work. Scientifically. Statistically significantly. Understood and avoidable errors of omission.


So how is this “statistically significant scientific proof” acquired?

By doing experiments. Just like the one Ignaz Semmelweis conducted. But the improvement he showed was so large that it did not need statistical analysis to validate it.  And anyway such analysis tools were not available in 1847. If they had been he might have had more success influencing his peers. And if he had achieved that goal then thousands, if not millions, of deaths from hospital acquired infections may have been prevented.  With the clarity of hindsight we now know this harm was avoidable.

The problem we have now is that the improvement that follows a single intervention is often not very large. And when the causal mechanisms are multi-factorial we need more than one intervention to achieve the improvement we want: the big reduction in avoidable harm. How do we do that scientifically and safely?


About 20% of hospital acquired infections occur after surgical operations.

We have learned much since 1847 and we have designed much safer surgical systems and processes. Joseph Lister ushered in the era of safe surgery, and much has happened since.

We routinely use carefully designed, ultra-clean operating theatres, sterilized surgical instruments, gloves and gowns, and aseptic techniques – all to reduce bacterial contamination from outside.

But surgical site infections (SSIs) are still commonplace. Studies show that 5% of patients on average will suffer this complication. Some procedures carry a much higher risk than others, despite the precautions we take.  And many surgeons assume that this risk must simply be accepted.

Others have tried to understand the mechanism of SSI and their research shows that the source of the infections is the patients themselves. We all carry a ‘bacterial flora’ and normally that is no problem. Our natural defence – our skin – is enough.  But when that biological barrier is deliberately breached during a surgical operation then we have a problem. The bugs get in and cause mischief. They cause surgical site infections.

So we have done more research to test interventions to prevent this harm. Each intervention has been subject to well-designed, carefully-conducted, statistically-valid and very expensive randomised controlled trials.  And the results are often equivocal. So we repeat the trials – bigger, better controlled trials. But the effects of the individual interventions are small and they easily get lost in the noise. So we pool the results of many RCTs in what is called a ‘meta-analysis’ and the answer from that is very often ‘not proven’ – either way.  So individual surgeons are left to make the judgement call, and not surprisingly there is wide variation in practice.  So is this the best that medical science can do?

No. There is another way. What we can do is pool all the learning from all the trials and design a multi-faceted intervention. A bundle of care. And the idea of a bundle is that the separate small effects will add, or even synergise, to create one big effect.  We are not so much interested in the mechanism as the outcome. Just like Ignaz Semmelweis.

And we can now do something else. We can test our bundle of care using statistically robust tools that do not require an RCT.  They are just as statistically valid as an RCT, but of a different design.

And the appropriate tool for this is to measure the time interval between adverse events – and then to plot this continuous metric as a time-series chart.

But we must be disciplined. First we must establish the baseline average interval and then we introduce our bundle and then we just keep measuring the intervals.

If our bundle works then the interval between the adverse events gets longer – and we can easily prove that using our time-series chart. The longer the interval the more ‘proof’ we have.  In fact we can even predict how long we need to observe to prove that ‘no events’ is a statistically significant improvement. That is an elegant and efficient design.
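To make the prediction step concrete: if adverse events arrive at a steady random rate then the intervals between them are roughly exponential, and an event-free stretch longer than about three times the baseline average would occur less than 5% of the time by chance. The sketch below shows that calculation – the dates are invented for illustration, and the exponential model and the 5% threshold are assumptions, not a prescription:

```python
import math
from datetime import date

# Invented adverse-event dates for the baseline period (illustration only)
baseline_events = [date(2013, 1, 3), date(2013, 1, 20), date(2013, 2, 1),
                   date(2013, 2, 14), date(2013, 3, 2)]

# Intervals (in days) between consecutive events
intervals = [(b - a).days for a, b in zip(baseline_events, baseline_events[1:])]
baseline_mean = sum(intervals) / len(intervals)

# Assuming roughly exponential intervals, P(interval > t) = exp(-t / mean),
# so an event-free period longer than mean * ln(20) (about 3 x mean) is
# evidence of improvement at the 5% level.
t_05 = baseline_mean * math.log(20)
```

With this invented baseline average of 14.5 days, an event-free run of about 43 days would count as ‘surprisingly long’.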


Here is a real and recent example.

The time-series chart below shows the interval in days between surgical site infections following routine hernia surgery. These are not life-threatening complications. They rarely require re-admission or re-operation. But they are disruptive for patients. They cause pain, require treatment with antibiotics, and delay recovery and return to normal activities. So we would like to avoid them if possible.

[Figure: Hernia SSI Care Bundle time-series chart]

The green and red lines show the baseline period. The green line says that the average interval between SSIs is 14 days.  The red line says that an interval of more than about 60 days would be surprisingly long: valid statistical evidence of an improvement.  The end of the green and red lines indicates when the intervention was made: when the evidence-based designer care bundle was adopted, together with the discipline of applying it to every patient. No judgement. No variation.

The chart tells the story. No complicated statistical analysis is required. It shows a statistically significant improvement.  And the SSI rate fell by over 80%. That is a big improvement.
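For those who want to see how such baseline lines can be computed: one common approach is the XmR (individuals) chart calculation, where the upper natural process limit is the mean plus 2.66 times the average moving range. The sketch below uses invented intervals; it illustrates the method in general, not necessarily the exact calculation behind the chart above:

```python
# Invented baseline intervals (days between consecutive SSIs) - illustration only
intervals = [9, 21, 5, 14, 18, 11, 20, 14]

mean_interval = sum(intervals) / len(intervals)            # the 'green line'

# Moving ranges: absolute differences between successive intervals
moving_ranges = [abs(b - a) for a, b in zip(intervals, intervals[1:])]
avg_moving_range = sum(moving_ranges) / len(moving_ranges)

# XmR upper natural process limit - the 'red line'; a single interval
# above it counts as a statistically surprising signal
upper_limit = mean_interval + 2.66 * avg_moving_range
```

With these invented numbers the red line lands at about 38 days; a longer baseline record gives more reliable limits.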

We still do not know how the care bundle works. We do not know which of the seven simultaneous simple and low-cost interventions we chose are the most important or even if they work independently or in synergy.  Knowledge of the mechanism was not our goal.

Our goal was to improve outcomes for our patients – to reduce avoidable harm – and that has been achieved. The evidence is clear.

That is Improvement Science in action.

And to read the full account of this example of the Science of Improvement please go to:

http://www.journalofimprovementscience.org

It is essay number 18.

And avoid another error of omission. If you have read this far please share this message – it is important.

The Battle of the Chimps

Improvement implies change.
Change implies action.
Action implies decision.

So how is the decision made?
With Urgency?
With Understanding?

Bitter experience teaches us that often there is an argument about what to do and when to do it.  An argument between two factions. Both are motivated by a combination of anger and fear. One side is motivated more by anger than fear. They vote for action because of the urgency of the present problem. The other side is motivated more by fear than anger. They vote for inaction because of their fear of future failure.

The outcome is unhappiness for everyone.

If the ‘action’ party wins the vote and a failure results then there is blame and recrimination. If the ‘inaction’ party wins the vote and a failure results then there is blame and recrimination. If either party achieves a success then there is both gloating and resentment. Lose Lose.

The issue is not the decision, nor how it is reached. The problem is the battle.

Dr Steve Peters is a psychiatrist with 30 years of clinical experience.  He knows how to help people succeed in life through understanding how the caveman wetware between their ears actually works.

In the run-up to the 2012 Olympic Games he was the sports psychologist for the multiple-gold-medal-winning UK Cycling Team.  The World Champions. And what he taught them is described in his book – “The Chimp Paradox”.

Steve brilliantly boils the current scientific understanding of the complexity of the human mind down into a simple metaphor.

One that is accessible to everyone.

The metaphor goes like this:

There are actually two ‘beings’ inside our heads. The Chimp and the Human. The Chimp is the older, stronger, more emotional and more irrational part of our psyche. The Human is the newer, weaker, logical and rational part.  Also inside there is the Computer. It is just a memory where both the Chimp and the Human store information for reference later. Beliefs, values, experience. Stuff like that. Stuff they use to help them make decisions.

And when some new information arrives through our senses – sight and sound for example – the Chimp gets first dibs and uses the Computer to look up what to do.  Long before the Human has had time to analyse the new information logically and rationally. By the time the Human has even started on solving the problem the Chimp has come to a decision and signaled it to the Human and associated it with a strong emotion. Anger, Fear, Excitement and so on. The Chimp operates on basic drives like survival-of-the-self and survival-of-the-species. So if the Chimp gets spooked or seduced then it takes control – and it is the stronger so it always wins the internal argument.

But the human is responsible for the actions of the Chimp. As Steve Peters says ‘If your dog bites someone you cannot blame the dog – you are responsible for the dog‘.  So it is with our inner Chimps. Very often we end up apologising for the bad behaviour of our inner Chimp.

Because our inner Chimp is the stronger we cannot ‘control’ it by force. We have to learn how to manage the animal. We need to learn how to soothe it and to nurture it. And we need to learn how to remove the Gremlins that it has programmed into the Computer. Our inner Chimp is not ‘bad’ or ‘mad’ it is just a Chimp and it is an essential part of us.

Real chimpanzees are social, tribal and territorial.  They live in family groups and the strongest male is the boss. And it is now well known that a troop of chimpanzees in the wild can plan and wage battles to acquire territory from neighbouring troops. With casualties on both sides.  And so it is with people when their inner Chimps are in control.

Which is most of the time.

Scenario:
A hospital is failing one of its performance targets – the 18 week referral-to-treatment one – and is being threatened with fines and potential loss of its autonomy. The fear at the top drives the threat downwards. Operational managers are forced into action and do so using strategies that have not worked in the past. But they do not have time to learn how to design and test new ones. They are bullied into Plan-Do mode. The hospital is also required to provide safe care and the Plan-Do knee-jerk triggers fear-of-failure in the minds of the clinicians who then angrily oppose the diktat or quietly sabotage it.

This lose-lose scenario is being played out in hundreds, if not thousands, of hospitals across the globe as we speak.  The evidence is there for everyone to see.

The inner Chimps are in charge and the outcome is a turf war with casualties on all sides.

So how does The Chimp Paradox help dissolve this seemingly impossible challenge?

First it is necessary to appreciate that both sides are being controlled by their inner Chimps who are reacting from a position of irrational fear and anger. This means that everyone’s behaviour is irrational and their actions likely to be counter-productive.

What is needed is for everyone to be managing their inner Chimps so that the Humans are back in control of the decision making. That way we get wise decisions that lead to effective actions and win-win outcomes. Without chaos and casualties.

To do this we all need to learn how to manage our own inner Chimps … and that is what “The Chimp Paradox” is all about. That is what helped the UK cyclists to become gold medalists.

In the scenario painted above we might observe that the managers are more comfortable in the Pragmatist-Activist (PA) half of the learning cycle. The Plan-Do part of PDSA – to translate into the language of improvement. The clinicians appear more comfortable in the Reflector-Theorist (RT) half. The Study-Act part of PDSA.  And that difference of preference is fuelling the firestorm.

Improvement Science tells us that to achieve and sustain improvement we need all four parts of the learning cycle working  smoothly and in sequence.

So what at first sight looks like a pitched battle that must result in two losers could, in reality, be a three-legged race that results in everyone winning. But only if synergy between the PA and the RT halves can be achieved.

And that synergy is achieved by learning to respect, understand and manage our inner Chimps.

Jiggling

[Dring] Bob’s laptop signaled the arrival of Leslie for their regular ISP remote coaching session.

<Bob> Hi Leslie. Thanks for emailing me with a long list of things to choose from. It looks like you have been having some challenging conversations.

<Leslie> Hi Bob. Yes indeed! The deepening gloom and the last few blog topics seem to be polarising opinion. Some are claiming it is all hopeless and others, perhaps out of desperation, are trying the FISH stuff for themselves and discovering that it works.  The ‘What Ifs’ are engaged in a war of words with the ‘Yes Buts’.

<Bob> I like your metaphor! Where would you like to start on the long list of topics?

<Leslie> That is my problem. I do not know where to start. They all look equally important.

<Bob> So, first we need a way to prioritise the topics to get the horse-before-the-cart.

<Leslie> Sounds like a good plan to me!

<Bob> One of the problems with the traditional improvement approaches is that they seem to start at the most difficult point. They focus on ‘quality’ first – and to be fair that has been the mantra from the gurus like W.E.Deming. ‘Quality Improvement’ is the Holy Grail.

<Leslie> But quality IS important … are you saying they are wrong?

<Bob> Not at all. I am saying that it is not the place to start … it is actually the third step.

<Leslie> So what is the first step?

<Bob> Safety. Eliminating avoidable harm. Primum Non Nocere. The NoNos. The Never Events. The stuff that generates the most fear for everyone. The fear of failure.

<Leslie> You mean having a service that we can trust not to harm us unnecessarily?

<Bob> Yes. It is not a good idea to make an unsafe design more efficient – it will deliver even more cumulative harm!

<Leslie> OK. That makes perfect sense to me. So how do we do that?

<Bob> It does not actually matter.  Well-designed and thoroughly field-tested checklists have been proven to be very effective in the ‘ultra-safe’ industries like aerospace and nuclear.

<Leslie> OK. Something like the WHO Safe Surgery Checklist?

<Bob> Yes, that is a good example – and it is well worth reading Atul Gawande’s book about how that happened – “The Checklist Manifesto”.  Gawande is a surgeon who had published a lot on improvement and even so was quite sceptical that something as simple as a checklist could possibly work in the complex world of surgery. In his book he describes a number of personal ‘Ah Ha!’ moments that illustrate a phenomenon that I call Jiggling.

<Leslie> OK. I have made a note to read Checklist Manifesto and I am curious to learn more about Jiggling – but can we stick to the point? Does quality come after safety?

<Bob> Yes, but not immediately after. As I said, Quality is the third step.

<Leslie> So what is the second one?

<Bob> Flow.

There was a long pause – and just as Bob was about to check that the connection had not been lost – Leslie spoke.

<Leslie> But none of the Improvement Schools teach basic flow science.  They all focus on quality, waste and variation!

<Bob> I know. And attempting to improve quality before improving flow is like papering the walls before doing the plastering.  Quality cannot grow in a chaotic context. The flow must be smooth before that. And the fear of harm must be removed first.

<Leslie> So the ‘Improving Quality through Leadership‘ bandwagon that everyone is jumping on will not work?

<Bob> Well that depends on what the ‘Leaders’ are doing. If they are leading the way to learning how to design-for-safety and then design-for-flow then the bandwagon might be a wise choice. If they are only facilitating collaborative agreement and group-think then they may be making an unsafe and ineffective system more efficient which will steer it over the edge into faster decline.

<Leslie> So, if we can stabilise safety using checklists, do we focus on flow next?

<Bob> Yup.

<Leslie> OK. That makes a lot of sense to me. So what is Jiggling?

<Bob> This is Jiggling. This conversation.

<Leslie> Ah, I see. I am jiggling my understanding through a series of ‘nudges’ from you.

<Bob> Yes. And when the learning cogs are a bit rusty, some Improvement Science Oil and a bit of Jiggling is more effective and much safer than whacking the caveman wetware with a big emotional hammer.

<Leslie> Well the conversation has certainly jiggled Safety-Flow-Quality-and-Productivity into a sensible order for me. That has helped a lot. I will sort my to-do list into that order and start at the beginning. Let me see. I have a plan for safety, now I can focus on flow. Here is my top flow niggle. How do I design the resource capacity I need to ensure the flow is smooth and the waiting times are short enough to avoid ‘persecution’ by the Target Time Police?

<Bob> An excellent question! I will send you the first ISP Brainteaser that will nudge us towards an answer to that question.

<Leslie> I am ready and waiting to have my brain-teased and my niggles-nudged!

The Speed of Trust

Systems are built from intersecting streams of work called processes.

This iconic image of the London Underground shows a system map – a set of intersecting transport streams.

Each stream links a sequence of independent steps – in this case the individual stations.  Each step is a system in itself – it has a set of inner streams.

For a system to exhibit stable and acceptable behaviour the steps must be in synergy – literally ‘together work’. The steps also need to be in synchrony – literally ‘same time’. And to do that they need to be aligned to a common purpose.  In the case of a transport system the design purpose is to get from A to B safely, quickly, in comfort and at an affordable cost.

In large socioeconomic systems called ‘organisations’ the steps represent groups of people with special knowledge and skills that collectively create the desired product or service.  This creates an inevitable need for ‘handoffs’ as partially completed work flows through the system along streams from one step to another. Each step contributes to the output. It is like a series of baton passes in a relay race.

This creates the requirement for a critical design ingredient: trust.

Each step needs to be able to trust the others to do their part:  right-first-time and on-time.  All the steps are directly or indirectly interdependent.  If any one of them is ‘untrustworthy’ then the whole system will suffer to some degree. If too many generate distrust then the system may fail and can literally fall apart. Trust is like social glue.

So a critical part of people-system design is the development and the maintenance of trust-bonds.

And it does not happen by accident. It takes active effort. It requires design.

We are social animals. Our default behaviour is to trust. We learn distrust by experiencing repeated disappointments. We are not born cynical – we learn that behaviour.

The default behaviour for inanimate systems is disorder – and it has a fancy name – it is called ‘entropy’. There is a Law of Physics that says that ‘the average entropy of a system will increase over time’. The critical word is ‘average’.

So, if we are not aware of this and we omit to pay attention to the handoffs between the steps we will observe increasing disorder which leads to repeated disappointments and erosion of trust. Our natural reaction then is ‘self-protect’ which implies ‘check-and-reject’ and ‘check-and-correct’. This adds complexity and bureaucracy and may prevent further decline – which is good – but it comes at a cost – quite literally.

Eventually an equilibrium will be achieved where our system performance is limited by the amount of check-and-correct bureaucracy we can afford.  This is called a ‘mediocrity trap’ and it is very resilient – which means resistant to change in any direction.


To escape from the mediocrity trap we need to break into the self-reinforcing check-and-reject loop and we do that by developing a design that challenges ‘trust eroding behaviour’.  The strategy is to develop a skill called  ‘smart trust’.

To appreciate what smart trust is we need to view trust as a spectrum: not as a yes/no option.

At one end is ‘nonspecific distrust’ – otherwise known as ‘cynical behaviour’. At the other end is ‘blind trust’ – otherwise known as ‘gullible behaviour’.  Neither of these is what we need.

In the middle is the zone of smart trust that spans healthy scepticism  through to healthy optimism.  What we need is to maintain a balance between the two – not to eliminate them. This is because some people are ‘glass-half-empty’ types and some are ‘glass-half-full’. And both views have a value.

The action required to develop smart trust is to respectfully challenge every part of the organisation to demonstrate ‘trustworthiness’ using evidence.  Rhetoric is not enough. Politicians always score very low on ‘most trusted people’ surveys.

The first phase of this smart trust development is for steps to demonstrate trustworthiness to themselves using their own evidence, and then to share this with the steps immediately upstream and downstream of them.

So what evidence is needed?

Safety comes first. If a step cannot be trusted to be safe then that is the first priority. Safe systems need to be designed to be safe.

Flow comes second. If the streams do not flow smoothly then we experience turbulence and chaos which increases stress,  the risk of harm and creates disappointment for everyone. Smooth flow is the result of careful  flow design.

Third is Quality which means ‘setting and meeting realistic expectations’.  This cannot happen in an unsafe, chaotic system.  Quality builds on Flow which builds on Safety. Quality is a design goal – an output – a purpose.

Fourth is Productivity (or profitability) and that does not automatically follow from the other three as some QI Zealots might have us believe. It is possible to have a safe, smooth, high quality design that is unaffordable.  Productivity needs to be designed too.  An unsafe, chaotic, low quality design is always more expensive.  Always. Safe, smooth and reliable can be highly productive and profitable – if designed to be.

So whatever the driver for improvement the sequence of questions is the same for every step in the system: “How can I demonstrate evidence of trustworthiness for Safety, then Flow, then Quality and then Productivity?”

And when that happens improvement will take off like a rocket. That is the Speed of Trust.  That is Improvement Science in Action.

Our Irrational Inner Chimp

The modern era in science started about 500 years ago when an increasing number of people started to challenge the dogma that our future is decided by Fates and Gods. That we had no influence. And to appease the ‘Gods’ we had to do as we were told. That was our only hope of Salvation.

This paradigm came under increasing pressure as the evidence presented by Reality did not match the Rhetoric.  Many early innovators paid for their impertinence with their fortunes, their freedom and often their future. They were burned as heretics.

When the old paradigm finally gave way and the Age of Enlightenment dawned the pendulum swung the other way – and the new paradigm became the ‘mechanical universe’. Isaac Newton showed that it was possible to predict, with very high accuracy, the motion of the planets just by adopting some simple rules and a new form of mathematics called calculus. This opened a door into a more hopeful world – if Nature follows strict rules and we know what they are then we can learn to control Nature and get what we need without having to appease any Gods (or priests).

This was the door to the Industrial Revolutions – there have been more than one – each lasting about 100 years (18th C, 19th C and 20th C). Each was associated with massive population growth as we systematically eliminated the causes of early mortality – starvation and infectious disease.

But not everything behaved like the orderly clockwork of the planets and the pendulums. There was still the capricious and unpredictable behaviour that we call Lady Luck.  Had the Gods retreated but were they still playing dice?

Progress was made here too – and the history of the ‘understanding of chance’ is peppered with precocious and prickly mathematical savants who discovered that chance follows rules too. Probability theory was born and that spawned a troublesome child called Statistics. This was a trickier one to understand. To most people statistics is just mathematical gobbledygook.

But from that emerged a concept called the Rational Man – which underpinned the whole of Economic Theory for 250 years. Until very recently.  The RM hypothesis stated that we make unconscious but rational judgements when presented with uncertain win/lose choices.

And from that seed sprouted concepts such as the Law of Supply and Demand: when the supply of things we demand is limited then we (rationally) value them more and will choose to pay more, so prices go up, so fewer can afford them, so demand drops. Foxes and Rabbits. A negative feedback loop. The economic system becomes self-adjusting and self-stabilising.

The outcome of this assumption is a belief that ‘because people are collectively rational the economic system will be self-stabilising and it will correct the adverse short term effects of any policy blunders we make’.  The ‘let-the-market-decide’ belief that experimental economic meddling is harmless over the long term and that what is learned from ‘laissez-faire’ may even be helpful. A no-lose long term improvement strategy. Losers are just unlucky, stupid or both.
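The self-adjusting loop described above can be sketched in a few lines. The demand and supply curves, starting price and adjustment gain below are illustrative assumptions, not figures from the text – the only point is that a negative feedback loop settles at the balance point where demand meets supply.

```python
# Minimal sketch of the self-stabilising price loop described above.
# The curves and the gain are illustrative assumptions.

def demand(price):
    return max(0.0, 100.0 - 2.0 * price)   # demand falls as price rises

def supply(price):
    return 20.0 + 1.0 * price              # supply grows as price rises

def simulate(price=5.0, gain=0.1, steps=200):
    for _ in range(steps):
        excess = demand(price) - supply(price)
        price += gain * excess             # excess demand pushes price up, surplus pushes it down
    return price

final_price = simulate()
# equilibrium where demand == supply: 100 - 2p = 20 + p  ->  p = 80/3
print(round(final_price, 2))  # prints 26.67
```

Whatever the starting price, the loop converges to the same equilibrium – which is exactly the self-stabilising behaviour the Rational Man hypothesis assumed.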

In 2002 the Nobel Prize for Economics was not awarded to an economist. It was awarded to a psychologist – Daniel Kahneman – who showed that the model of the Rational Man did not stand up to rigorous psychological experiment.  Reality demonstrated we are Irrational Chimps. The economists had omitted to test their hypothesis. Oops!


This lesson has many implications for the Science of Improvement.  One of which is a deeper understanding of the nemesis of improvement – resistance to change.

One of the surprising findings is that our judgements are biased – and our bias operates at an unconscious level – what Kahneman describes as the System One level. Chimp level. We are not aware we are making biased decisions.

For example. Many assume that we prefer certainty to uncertainty. We fear the unpredictable. We avoid it. We seek the predictable and the stable. And we will put up with just about anything so long as it is predictable. We do not like surprises.  And when presented with that assertion most people nod and say ‘Yes’ – that feels right.

We also prefer gain to loss.  We love winning. We hate losing. This ‘competitive spirit’ is socially reinforced from day one by our ‘pushy parents’ – we all know the ones – but we all do it to some degree. Do better! Work harder! Be a success! Optimize! Be the best! Be perfect! Be Perfect! BE PERFECT.

So which is more important to us? Losing or uncertainty? This is one question that Kahneman asked. And the answer he discovered was surprising – because it effectively disproved the Rational Man hypothesis.  And this is how a psychologist earned a Nobel Prize for Economics.

Kahneman discovered that loss is more important to us than uncertainty.

To demonstrate this he presented subjects with a choice between two win/lose options; and he presented the choice in two ways. To a statistician and a Rational Man the outcomes were exactly the same in terms of gain or loss.  He designed the experiment to ensure that it was the unconscious judgement that was being measured – the intuitive gut reaction. So if our gut reactions are Rational then the choice and the way the choice was presented would have no significant effect.

There was an effect. The hypothesis was disproved.

The evidence showed that our gut reactions are biased … and in an interesting way.

If we are presented with the choice between a certain gain and an uncertain gain/loss (so the average gain is the same) then we choose the certain gain much more often.  We avoid uncertainty. Uncertainty = 1, Loss = 0.

BUT …

If we are presented with a choice between certain loss and an uncertain loss/gain (so the average outcome is again the same) then we choose the uncertain option much more often. This is exactly the opposite of what was expected.

And it did not make any difference if the subject knew the results of the experiment before doing it. The judgement is made out of awareness and communicated to our consciousness via an emotion – a feeling – that biases our slower, logical, conscious decision process.

This means that the sense of loss has more influence on our judgement than the sense of uncertainty.
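The two framings above can be made concrete with a small worked example. The amounts below are illustrative, not Kahneman's original stimuli; the point is that in each pair the expected values are identical, so a truly Rational Man would be indifferent both times.

```python
# Illustrative amounts (not Kahneman's original stimuli).  In each
# framing the certain option and the gamble have the same expected
# value, so a 'Rational Man' would be indifferent - real subjects are not.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# Framing 1: certain gain vs uncertain gain with the same average.
certain_gain = [(1.0, 50)]
gamble_gain  = [(0.5, 100), (0.5, 0)]

# Framing 2: certain loss vs uncertain loss with the same average.
certain_loss = [(1.0, -50)]
gamble_loss  = [(0.5, -100), (0.5, 0)]

assert expected_value(certain_gain) == expected_value(gamble_gain) == 50
assert expected_value(certain_loss) == expected_value(gamble_loss) == -50

# Observed bias: most subjects pick certain_gain in framing 1 (avoiding
# uncertainty) but gamble_loss in framing 2 (avoiding the certain loss).
```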

This behaviour is hard-wired. It is part of our Chimp brain design. And once we know this we can see the effect of it everywhere.

1. We will avoid the pain of uncertainty and resist any change that might deliver a gain when we believe that future loss is uncertain. We are conservative and over-optimistic.

2. We will accept the pain of uncertainty and only try something new (and risky) when we believe that to do otherwise will result in certain loss. The Backs Against The Wall scenario.  The Cornered Rat is Unpredictable and Dangerous scenario.

This explains why we resist any change right up until the point when we see Reality clearly enough to believe that we are definitely going to lose something important if we do nothing. Lose our reputation, lose our job, lose our security, lose our freedom or lose our lives. That is a transformational event.  A Road to Damascus moment.

Understanding that we behave like curious, playful, social but irrational chimps is one key to unlocking significant and sustained improvement.

We need to celebrate our inner chimp – it is key to innovation.

And we need to learn how to team up with our inner chimp rather than be hijacked by it.

If we do not we will fail – the Laws of Physics, Probability and Psychology decree it.

What is my P.A.R.T?

Improvement implies change, but change does not imply improvement.

Change follows action. Action follows planning. Effective planning follows from an understanding of the system because it is required to make the wise decisions needed to achieve the purpose.

The purpose is the intended outcome.

Learning follows from observing the effect of change – whatever it is. Understanding follows from learning to predict the effect of both actions and inactions.

All these pieces of the change jigsaw are different and they are inter-dependent. They fit together. They are a system.

And we can pick out four pieces: the Plan piece, the Action piece, the Observation piece and the Learning piece – and they seem to follow that sequence – it looks like a learning cycle.

This is not a new idea.

It is the same sequence as the Scientific Method: hypothesis, experiment, analysis, conclusion. The preferred tool of  Academics – the Thinkers.

It is also the same sequence as the Shewhart Cycle: plan, do, check, act. The preferred tool of the Pragmatists – the Doers.

So where does all the change conflict come from? What is the reason for the perpetual debate between theorists and activists? The incessant game of “Yes … but!”

One possible cause was highlighted by David Kolb  in his work on ‘experiential learning’ which showed that individuals demonstrate a learning style preference.

We tend to be thinkers or doers and only a small proportion of us say that we are equally comfortable with both.

The effect of this natural preference is that real problems bounce back-and-forth between the Tribe of Thinkers and the Tribe of Doers.  Together we are providing separate parts of the big picture – but as two tribes we appear to be unaware of the synergistic power of the two parts. We are blocked by a power struggle.

The Experiential Learning Model (ELM) was promoted and developed by Peter Honey and Alan Mumford (see learning styles) and their work forms the evidence behind the Learning Style Questionnaire that anyone can use to get their ‘score’ on the four dimensions:

  • Pragmatist – the designer and planner
  • Activist – the action person
  • Reflector – the observer and analyst
  • Theorist – the abstracter and hypothesis generator

The evidence from population studies showed that individuals have a preference for one of these styles, sometimes two, occasionally three and rarely all four.

That observation, together with the fact that learning from experience requires moving around the whole cycle, leads to an awareness that both individuals and groups can get ‘stuck’ in their learning preference comfort zone. If the learning wheel is unbalanced it will deliver a bumpy ride when it turns! So it may be more comfortable just to remain stationary and not to learn.

Which means not to change. Which means not to improve.


So if we are embarking on an improvement exercise – be it individual or collective – then we are committed to learning. So where do we start on the learning cycle?

The first step is action. To do something – and the easiest and safest thing to do is just look. Observe what is actually happening out there in the real world – outside the office – outside our comfort zone. We need to look outside our rhetorical inner world of assumptions, intuition and pre-judgements. The process starts with Study.

The next step is to reflect on what we see – we look in the mirror – and we compare what we are actually seeing with what we expected to see. That is not as easy as it sounds – and a useful tool to help is to draw charts. To make it visual. All sorts of charts.

The result is often a shock. There is often a big gap between what we see and what we perceive; between what we expect and what we experience; between what we want and what we get; between our intent and our impact.

That emotional shock is actually what we need to power us through the next phase – the Realm of the Theorist – where we ask three simple questions:
Q1: What could be causing the reality that I am seeing?
Q2: How would I know which of the plausible causes is the actual cause?
Q3: What experiment can I do to answer my question and clarify my understanding of Reality?

This is the world of the Academic.

The third step is to design an experiment to test our new hypothesis.  The real world is messy and complicated and we need to be comfortable with ‘good enough’ and ‘reasonable uncertainty’.  Design is about practicalities – making something that works well enough in practice – in the real world. Something that is fit-for-purpose. We are not expecting perfection; not looking for optimum; not striving for best – just significantly better than what we have now. And the more we can test our design before we implement it the better because we want to know what to expect before we make the change and we want to avoid unintended negative consequences – the NoNos. This is Plan.

Then we act … and the cycle of learning has come one revolution … but we are not back at the start – we have moved forward. Our understanding is already different from when we were at this stage before: it is deeper and wider.  We are following the trajectory of a spiral – our capability for improvement is expanding over time.

So we need to balance our learning wheel before we start the journey or we will have a slow, bumpy and painful ride!

We need to study, then plan, then do, then study the impact.


One plausible approach is to stay inside our comfort zones, play to our strengths and to say “What we need is a team made of people with complementary strengths. We need a Department of Action for the Activists; a Department of Analysis for the Reflectors; a Department of Research for the Theorists and a Department of Planning for the Pragmatists.”

But that is what we have now and what is the impact? The Four Departments have become super-specialised and more polarised.  There is little common ground or shared language.  There is no common direction, no co-ordination, no oil on the axle of the wheel of change. We have ground to a halt. We have chaos. Each part is working but independently of the others in an unsynchronised mess.

We have cultural fibrillation. Change output has dropped to zero.


A better design is for everyone to focus first on balancing their own learning wheel by actively redirecting emotional energy from their comfort zone, their strength,  into developing the next step in their learning cycle.

Pragmatists develop their capability for Action.
Activists develop their capability for Reflection.
Reflectors develop their capability for Hypothesis.
Theorists develop their capability for Design.

The first step in the improvement spiral is Action – so if you are committed to improvement then investing £10 and 20 minutes in the 80-question Learning Style Questionnaire will demonstrate your commitment to yourself.  And that is where change always starts.

The Recipe for Chaos

There are only four ingredients required to create Chaos.

The first is Time.

All processes and systems are time-dependent.

The second ingredient is a Metric of Interest (MoI).

That means a system performance metric that is important to all – such as Safety or Quality or Cost; and usually all three.

The third ingredient is a feedback loop of a specific type – it is called a Negative Feedback Loop.  The NFL  is one that tends to adjust, correct and stabilise the behaviour of the system.

Negative feedback loops are very useful – but they have a drawback. They resist change and they reduce agility. The name is also a disadvantage – the term ‘negative feedback’ is often associated with criticism.

The fourth and final ingredient in our Recipe for Chaos is also a feedback loop but one of a different design – a Positive Feedback Loop (PFL) – one that amplifies variation and change.

Positive feedback loops are also very useful – they are required for agility – quick reactions to unexpected events. Fast reflexes.

The downside of a positive feedback loop is that it increases instability.

The name is also confusing – ‘positive feedback’ is associated with encouragement and praise.

So, in this context it is better to use the terms ‘stabilizing feedback’ and ‘destabilizing feedback’  loops.
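The difference between the two loop types can be sketched with a one-line iteration. The set-point, gains and step count below are illustrative assumptions; the only claim is the qualitative one made above – a stabilizing loop damps deviation from a set-point while a destabilizing loop amplifies it.

```python
# A minimal sketch (illustrative gains, not from the text) of the two
# loop types: a stabilizing loop damps deviation from a set-point,
# a destabilizing loop amplifies it.

def run_loop(gain, x=1.0, target=0.0, steps=20):
    """Iterate x <- x + gain * (target - x); return final deviation."""
    for _ in range(steps):
        x += gain * (target - x)
    return abs(x - target)

stabilizing   = run_loop(gain=0.5)   # deviation shrinks by half each step
destabilizing = run_loop(gain=-0.5)  # deviation grows by half each step

print(stabilizing < 1e-5)   # True: the stabilizing loop has converged
print(destabilizing > 1.0)  # True: the destabilizing loop has diverged
```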

When we mix these four ingredients in just the right amounts we get a system that may behave chaotically. That is surprising and counter-intuitive. But it is how the Universe works.

For example:

Suppose our Metric of Interest is the amount of time that patients spend in an Accident and Emergency Department. We know that the longer this time is the less happy they are and the higher the risk of avoidable harm – so it is a reasonable goal to reduce it.

Longer-than-possible waiting times have many root causes – it is a non-specific metric.  That means there are many things that could be done to reduce waiting time and the most effective actions will vary from case-to-case, day-to-day and even minute-to-minute.  There is no one-size-fits-all solution.

This implies that those best placed to correct the causes of these delays are the people who know the specific system well – because they work in it. Those who actually deliver urgent care. They are the stabilizing ingredient in our Recipe for Chaos.

The destabilizing ingredient is the hit-the-arbitrary-target policy which drives a performance management feedback loop.

This policy typically involves:
(1) Setting a performance target that is desirable but impossible for the current design to achieve reliably;
(2) inspecting how close to the target we are; then
(3) using the real-time data to justify threats of dire consequences for failure.

Now we have a perfect Recipe for Chaos.

The higher the failure rate the more inspections, reports, meetings, exhortations, threats, interruptions, and interventions that are generated.  Fear-fuelled management meddling. This behaviour consumes valuable time – so leaves less time to do the worthwhile work. Less time to devote to safety, flow, and quality. The queues build and the pressure increases and the system becomes hyper-sensitive to small fluctuations. Delays multiply and errors are more likely and spawn more workload, more delays and more errors.  Tempers become frayed and molehills are magnified into mountains. Irritations become arguments.  And all of this makes the problem worse rather than better. Less stable. More variable. More chaotic. More dangerous. More expensive.

It is actually possible to write a simple equation that captures this complex dynamic behaviour characteristic of real systems.  And that was a very surprising finding when it was discovered in 1976 by a mathematician called Robert May.

This equation is called the logistic equation.

Here is the abstract of his seminal paper.

Nature 261, 459-467 (10 June 1976)

Simple mathematical models with very complicated dynamics

First-order difference equations arise in many contexts in the biological, economic and social sciences. Such equations, even though simple and deterministic, can exhibit a surprising array of dynamical behaviour, from stable points, to a bifurcating hierarchy of stable cycles, to apparently random fluctuations. There are consequently many fascinating problems, some concerned with delicate mathematical aspects of the fine structure of the trajectories, and some concerned with the practical implications and applications. This is an interpretive review of them.

The fact that this chaotic behaviour is completely predictable and does not need any ‘random’ element was a big surprise. Chaotic is not the same as random. The observed chaos in the urgent healthcare system is the result of the design of the system – or more specifically of the current healthcare system management policies.
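May's logistic difference equation is simple enough to explore in a few lines. The parameter values below are standard illustrations, not taken from the 1976 paper: a low ‘gain’ settles to a stable point, while a high ‘gain’ produces deterministic chaos with no random ingredient anywhere.

```python
# Robert May's logistic map, x_next = r * x * (1 - x): a one-line
# deterministic rule that can behave chaotically.  Parameter values
# are standard illustrations, not taken from the 1976 paper.

def iterate(r, x, steps):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Low 'gain' (r = 2.5): the orbit settles to the stable point 1 - 1/r.
settled = iterate(r=2.5, x=0.2, steps=100)
print(round(settled, 6))  # prints 0.6

# High 'gain' (r = 4.0): two orbits starting a millionth apart drift
# far apart - sensitive dependence on initial conditions.
def max_separation(r, x0, dx, steps):
    a, b, worst = x0, x0 + dx, 0.0
    for _ in range(steps):
        a, b = r * a * (1 - a), r * b * (1 - b)
        worst = max(worst, abs(a - b))
    return worst

print(max_separation(4.0, 0.2, 1e-6, 100) > 0.1)  # prints True
```

The same rule, with only the gain changed, gives either order or chaos – which is why a small policy ‘tweak’ can have such a large effect on system behaviour.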

This has a number of profound implications – the most important of which is this:

If the chaos we observe in our health care systems is the predictable and inevitable result of the management policies we ourselves have created and adopted – then eliminating the chaos will only require us to re-design these policies.

In fact we only need to tweak one of the ingredients of the Recipe for Chaos – such as to reduce the strength of the destabilizing feedback loop. The gain. The volume control on the variation amplifier!

This is called the MM factor – otherwise known as ‘Management Meddling’.

We need to keep all four ingredients though – because we need our system to have both agility and stability.  It is the balance of ingredients that is critical.

The flaw is not the Managers themselves – it is their learned behaviour – the Meddling.  This is learned so it can be unlearned. We need to keep the Managers but “tweak” their role slightly. As they unlearn their old habits they move from being ‘Policy-Enforcers and Fire-Fighters’ to becoming ‘Policy-Engineers and Chaos-Calmers’. They focus on learning to understand the root causes of variation that come from outside the circle of influence of the non-Managers.   They learn how to rationally and radically redesign system policies to achieve both agility and stability.

And doing that requires developing systemic-thinking and learning Improvement Science skills – because the causes of chaos are counter-intuitive. If it were intuitively-obvious we would have discovered the nature of chaos thousands of years ago. The fact that it was not discovered until 1976 demonstrates the point.

It is our homo sapiens intuition that got us into this mess!  The inherent flaws of the chimp-ware between our ears.  Our current management policies are intuitively-obvious, collectively-agreed, rubber-stamped and wrong! They are part of the Recipe for Chaos.

And when we learn to re-design our system policies and upload the new system software then the chaos evaporates as if a magic wand had been waved.

And that comes as a really BIG surprise!

What also comes as a big surprise is just how small the counter-intuitive policy design tweaks often are.

Safe, smooth, efficient, effective, and productive flow is restored. Calm confidence reigns. Safety, Flow, Quality and Productivity all increase – at the same time.  The emotional storm clouds dissipate and the prosperity sun shines again.

Everyone feels better. Everyone. Patients, managers, and non-managers.

This is Win-Win-Win improvement by design. Improvement Science.

Space-and-Time

<Lesley>Hi Bob! How are you today?

<Bob>OK thanks Lesley. And you?

<Lesley>I am looking forward to our conversation. I have two questions this week.

<Bob>OK. What is the first one?

<Lesley>You have taught me that improvement-by-design starts with the “purpose” question and that makes sense to me. But when I ask that question in a session I get an “eh?” reaction and I get nowhere.

<Bob>Quod facere bonum opus et quomodo te cognovi unum?

<Lesley>Eh?

<Bob>I asked you a purpose question.

<Lesley>Did you? What language is that? Latin? I do not understand Latin.

<Bob>So although you recognize the language you do not understand what I asked, the words have no meaning. So you are unable to answer my question and your reaction is “eh?”. I suspect the same is happening with your audience. Who are they?

<Lesley>Front-line clinicians and managers who have come to me to ask how to solve their problems. Their Niggles. They want a how-to-recipe and they want it yesterday!

<Bob>OK. Remember the Temperament Treacle conversation last week. What is the commonest Myers-Briggs Type preference in your audience?

<Lesley>It is xSTJ – tough minded Guardians.  We did that exercise. It was good fun! Lots of OMG moments!

<Bob>OK – is your “purpose” question framed in a language that the xSTJ preference will understand naturally?

<Lesley>Ah! Probably not! The “purpose” question is future-focused, conceptual, strategic, value-loaded and subjective.

<Bob>Indeed – it is an iNtuitor question. xNTx or xNFx. Pose that question to a roomful of academics or executives and they will debate it ad infinitum.

<Lesley>More Latin – but that phrase I understand. You are right.  And my own preference is xNTP so I need to translate my xNTP “purpose” question into their xSTJ language?

<Bob>Yes. And what language do they use?

<Lesley>The language of facts, figures, jobs-to-do, work-schedules, targets, budgets, rational, logical, problem-solving, tough-decisions, and action-plans. Objective, pragmatic, necessary stuff that keeps the operational-wheels-turning.

<Bob>OK – so what would “purpose” look like in xSTJ language?

<Lesley>Um. Good question. Let me start at the beginning. They came to me in desperation because they are now scared enough to ask for help.

<Bob>Scared of what?

<Lesley>Unintentionally failing. They do not want to fail and they do not need beating with sticks. They are tough enough on themselves and each other.

<Bob>OK that is part of their purpose. The “Avoid” part. The bit they do not want. What do they want? What is the “Achieve” part? What is their “Nice If”?

<Lesley>To do a good job.

<Bob>Yes. And that is what I asked you – but in an unfamiliar language. Translated into English I asked “What is a good job and how do you know you are doing one?”

<Lesley>Ah ha! That is it! That is the question I need to ask. And that links in the first map – The 4N Chart®. And it links in measurement, time-series charts and BaseLine© too. Wow!

<Bob>OK. So what is your second question?

<Lesley>Oh yes! I keep getting asked “How do we work out how much extra capacity we need?” and I answer “I doubt that you need any more capacity.”

<Bob>And their response is?

<Lesley>Anger and frustration! They say “That is obvious rubbish! We have a constant stream of complaints from patients about waiting too long and we are all maxed out so of course we need more capacity! We just need to know the minimum we can get away with – the what, where and when – so we can work out how much it will cost for the business case.”

<Bob>OK. So what do they mean by the word “capacity”. And what do you mean?

<Lesley>Capacity to do a good job?

<Bob>Very quick! Ho ho! That is a bit imprecise and subjective for a process designer though. The Laws of Physics need the terms “capacity”, “good” and “job” clearly defined – with units of measurement that are meaningful.

<Lesley>OK. Let us define “good” as “delivered on time” and “job” as “a patient with a health problem”.

<Bob>OK. So how do we define and measure capacity? What are the units of measurement?

<Lesley>Ah yes – I see what you mean. We touched on that in FISH but did not go into much depth.

<Bob>Now we dig deeper.

<Lesley>OK. FISH talks about three interdependent forms of capacity: flow-capacity, resource-capacity, and space-capacity.

<Bob>Yes. They are the space-and-time capacities. If we are too loose with our use of these and treat them as interchangeable then we will create the confusion and conflict that you have experienced. What are the units of measurement of each?

<Lesley>Um. Flow-capacity will be in the same units as flow, the same units as demand and activity – tasks per unit time.

<Bob>Yes. Good. And space-capacity?

<Lesley>That will be in the same units as work in progress or inventory – tasks.

<Bob>Good! And what about resource-capacity?

<Lesley>Um – Will that be resource-time – so time?

<Bob>Actually it is resource-time per unit time. So they have different units of measurement. It is invalid to mix them up any-old-way. It would be meaningless to add them for example.

<Lesley>OK. So I cannot see how to create a valid combination from these three! I cannot get the units of measurement to work.

<Bob>This is a critical insight. So what does that mean?

<Lesley>There is something missing?

<Bob>Yes. Excellent! Your homework this week is to work out what the missing pieces of the capacity-jigsaw are.

<Lesley>You are not going to tell me the answer?

<Bob>Nope. You are doing ISP training now. You already know enough to work it out.

<Lesley>OK. Now you have got me thinking. I like it. Until next week then.

<Bob>Have a good week.
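The three capacities in the dialogue really do have incompatible units, which is why they cannot simply be added. A minimal sketch of that dimensional check – the class and the numbers are illustrative, not from the text:

```python
# A minimal dimensional-analysis sketch of the three capacity measures.
# Units follow the dialogue: flow-capacity in tasks per unit time,
# space-capacity in tasks, resource-capacity in resource-time per unit time.

class Quantity:
    """A number tagged with a unit; addition checks unit compatibility."""
    def __init__(self, value, unit):
        self.value = value
        self.unit = unit

    def __add__(self, other):
        if self.unit != other.unit:
            raise TypeError(f"cannot add {self.unit} to {other.unit}")
        return Quantity(self.value + other.value, self.unit)

flow_capacity = Quantity(12, "tasks/hour")                 # rate of completion
space_capacity = Quantity(30, "tasks")                     # work-in-progress limit
resource_capacity = Quantity(2.5, "resource-hours/hour")   # staffed time per hour

try:
    flow_capacity + space_capacity  # meaningless: different units
except TypeError as e:
    print(e)  # cannot add tasks/hour to tasks
```

The sketch deliberately stops where the homework starts: it shows only that the three measures cannot be combined as they stand, not what the missing pieces of the capacity-jigsaw are.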

Temperament Treacle

If the headlines in the newspapers are a measure of social anxiety then healthcare in the UK is in a state of panic: “Hospitals Fear The Winter Crisis Is Here Early“.

The Panic Button is being pressed and the Patient Safety Alarms are sounding.

Closer examination of the statement suggests that the winter crisis is not unexpected – it is just here early.  So we are assuming it will be worse than last year – which was bad enough.

The evidence shows this fear is well founded.  Last year was the worst of the last five years and this year is shaping up to be worse still.

So if it is a predictable annual crisis and we have a lot of very intelligent, very committed, very passionate people working on the problem – then why is it getting worse rather than better?

One possible factor is Temperament Treacle.

This is the glacially slow pace of effective change in healthcare – often labelled as “resistance to change” and implying deliberate scuppering of the change boat by powerful forces within the healthcare system.

Resistance to the flow of change is probably a better term. We could call that cultural viscosity.  Treacle has a very high viscosity – it resists flow.  Wading through treacle is very hard work. So pushing change through cultural treacle is hard work. Many give up in exhaustion after a while.

So why the term “Temperament Treacle“?

Improvement Science has three parts – Processes, Politics and Systems.

Process Science is applied physics. It is an objective, logical, rational science. The Laws of Physics are not negotiable. They are absolute.

Political Science is applied psychology. It is a subjective, illogical, irrational science. The Laws of People are totally negotiable.  They are arbitrary.

Systems Science is a combination of Physics and Psychology. A synthesis. A synergy. A greater-than-the-sum-of-the-parts combination.

The Swiss psychiatrist Carl Gustav Jung studied psychological types and in 1921 published “Psychological Types“.  When this ground-breaking work was translated into English in 1923 it was picked up by Katharine Cook Briggs and made popular by her daughter Isabel.  Isabel Briggs married Clarence Myers and in 1942 Isabel Myers learned about the Humm-Wadsworth Scale, a tool for matching people with jobs. Using her knowledge of psychological type differences she set out to develop her own “personality sorting tool”. The first prototype appeared in 1943; in the 1950’s she tested the third iteration and measured the personality types of 5,355 medical students and over 10,000 nurses.  The Myers-Briggs Type Indicator was published in 1962 and since then the MBTI® has been widely tested and validated; it is the most extensively used personality type instrument. In 1980 Isabel Myers finished writing Gifts Differing just before she died, at the age of 82, after a twenty-year battle with cancer.

The essence of Jung’s model is that an individual’s temperament is largely innate and the result of a combination of three dimensions:

1. The input or perceiving  process (P). The poles are Intuitor (N) or Sensor (S).
2. The decision or judging process (J). The poles are Thinker (T) or Feeler (F).
3. The output or doing process. The poles are Extraversion (E) or Introversion (I).

Each of Jung’s dimensions had two “opposite” poles so when combined they gave eight types.  Isabel Myers, as a result of her extensive empirical testing, added a fourth dimension – which gives the four we see in the modern MBTI®.  The fourth dimension links the other three together – it describes whether the J or the P process is the one shown to the outside world. So the MBTI® has sixteen broad personality types.  In his 1998 book “Please Understand Me II”, David Keirsey put the MBTI® into historical context and concluded that there are four broad Temperaments – and that these have been described since ancient times.

When Isabel Myers measured different populations using her new tool she discovered a consistent pattern: that the proportions of the sixteen MBTI® types were consistent across a wide range of societies. Personality type is, as Jung had suggested, an innate part of the “human condition”. She also saw that different types clustered in different occupations. Finding the “right job” appeared to be a process of natural selection: certain types fitted certain roles better than others and people self-selected at an early age.  If their choice was poor then the person would be unhappy and would not achieve their potential.

Isabel’s work also showed that each type had both strengths and weaknesses – and that people performed better and felt happier when their role played to their temperament strengths.  It also revealed that considerable conflict could be attributed to type-mismatch.  Polar opposite types have the least psychological “common ground” – so when they attempt to solve a common problem they do so by different routes and using different methods and language. This generates confusion and conflict.  This is why Isabel Myers gave her book the title of “Gifts Differing” and her message was that just having awareness of and respect for the innate type differences was a big step towards reducing the confusion and conflict.

So what relevance does this have to change and improvement?

Well it turns out that certain types are much more open to change than others and certain types are much more resistant.  If an organisation, by the very nature of its work, attracts the more change resistant types then that organisation will be culturally more viscous to the flow of change. It will exhibit the cultural characteristics of temperament treacle.

The key to understanding Temperament and the MBTI® is to ask a series of questions:

Q1. Does the person have the N or S preference on their perceiving function?

A1=N then Q2: Does the person have a T or F preference on their judging function?
A2=T gives the xNTx combination which is called the Rational or phlegmatic temperament.
A2=F gives the xNFx combination which is called the Idealist or choleric temperament.

A1=S then Q3: Does the person show a J or P preference to the outside world?
A3=J gives the xSxJ combination which is called the Guardian or melancholic temperament.
A3=P gives the xSxP combination which is called the Artisan or sanguine temperament.
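The three questions form a small decision tree that maps any four-letter MBTI® code to one of Keirsey's four temperaments. A sketch of that logic – the function name is illustrative:

```python
def temperament(mbti: str) -> str:
    """Map a four-letter MBTI(R) code (e.g. 'ISTJ') to its Keirsey temperament.

    Q1: N or S on the perceiving function?
    - If N, ask Q2 (T or F on the judging function): NT -> Rational, NF -> Idealist
    - If S, ask Q3 (J or P shown to the world): SJ -> Guardian, SP -> Artisan
    """
    mbti = mbti.upper()
    if mbti[1] == "N":   # Q1 = N, so ask Q2
        return "Rational (phlegmatic)" if mbti[2] == "T" else "Idealist (choleric)"
    else:                # Q1 = S, so ask Q3
        return "Guardian (melancholic)" if mbti[3] == "J" else "Artisan (sanguine)"

print(temperament("ISTJ"))  # Guardian (melancholic)
print(temperament("ENFP"))  # Idealist (choleric)
```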

So which is the most change resistant temperament?  The answer may not be a big surprise. It is the Guardians. The melancholics. The SJ’s.

Bureaucracies characteristically attract SJ types. The upside is that they ensure stability – the downside is that they prevent agility.  Bureaucracies block change.

The NF Idealists are the advocates and the mentors: they love initiating and facilitating transformations with the dream of making the world a better place for everyone. They light the emotional bonfire and upset the apple cart. The NT Rationals are the engineers and the architects. They love designing and building new concepts and things – so once the Idealists have cracked the bureaucratic carapace they can swing into action. The SP Sanguines are the improvisors and expeditors – they love getting the new “concept” designs to actually work in the messy real world.

Unfortunately the grand designs dreamed up by the ‘N’s often do not work in practice – and the scene is set for the we-told-you-so game, and the name-shame-blame game.

So if initiating and facilitating change is the Achilles Heel of the SJ’s then what is their strength?

Let us approach this from a different perspective:

Let us put ourselves in the shoes of patients and ask ourselves: “What do we want from a System of Healthcare and from those who deliver that care – the doctors?”

1. Safe?
2. Reliable?
3. Predictable?
4. Decisive?
5. Dependable?
6. All the above?

These are the strengths of the SJ temperament. So how do doctors measure up?

In a recent observational study, 168 doctors who attended a leadership training course completed their MBTI® self-assessments as part of developing insight into temperament from the perspective of a clinical leader.  From the collective data we can answer our question: “Are there more SJ types in the medical profession than we would expect from the general population?”

The table shows the results – 60% of doctors were SJ compared with 35% expected for the general population.

Statistically this is a highly significant difference (p<0.0001). Doctors are different.
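That p-value can be reproduced with an exact one-sided binomial test: given 168 doctors and an expected SJ proportion of 35%, how likely is it to observe 60% or more (about 101 of 168) by luck alone? A sketch using only the standard library – the count of 101 is 60% of 168 rounded:

```python
from math import comb

def binomial_upper_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 168            # doctors in the observational study
k = 101            # ~60% of 168 observed to be SJ
p_expected = 0.35  # SJ proportion expected in the general population

p_value = binomial_upper_tail(k, n, p_expected)
print(f"P(X >= {k}) = {p_value:.2e}")  # far below 0.0001
```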

It is of enormous practical importance as well.

We are reassured that the majority of doctors have a preference for the very traits that patients want from them. That may explain why the Medical Profession always ranks highest in the league table of “trusted professionals”. We need to be able to trust them – it could literally be a matter of life or death.

The table also shows where the doctors were thin on the ground: in the mediating, improvising, developing, constructing temperaments. The very set of skills needed to initiate and facilitate effective and sustained change.

So when the healthcare system is lurching from one predictable crisis to another – the innate temperament of the very people we trust to deliver our health care are the least comfortable with changing the system of care itself.

That is a problem. A big problem.

Studies have shown that when we get over-stressed, fearful and start to panic then in a desperate act of survival we tend to resort to the aspects of our temperament that are least well developed.  An SJ who is in panic-mode may resort to NP tactics: opinion-led purposeless conceptual discussion and collective decision paralysis. This is called the “headless chicken and rabbit in the headlights” mode. We have all experienced it.

A system that is no longer delivering fit-for-purpose performance because its purpose has shifted requires redesign.  The temperament treacle inhibits the flow of change so the crisis is not averted. The crisis happens, invokes panic and triggers ineffective and counter-productive behaviour. The crisis deepens and performance can drop catastrophically when the red tape is cut. It was the only thing holding the system together!

But while the bureaucracy is in disarray then innovation can start to flourish. And the next cycle starts.

It is a painful, slow, wasteful process called “reactionary evolution by natural selection“.

Improvement Science is different. It operates from a “proactive revolution through collective design” that is enjoyable, quick and efficient but it requires mastery of synergistic political science and process science. We do not have that capability – yet.

The table offers some hope.  It shows the majority of doctors are xSTJ.  They are Logical Guardians. That means that they solve problems using tried-tested-and-trustworthy logic. So they have no problem with the physics. Show them how to diagnose and design processes and they are inside their comfort zone.

Their collective weak spot is managing the politics – the critical cultural dimension of change. Often the result is manipulation rather than motivation. It does not work. The improvement stalls. Cynicism increases. The treacle gets thicker.

System-redesign requires synergistic support, development, improvisation and mediation. These strengths do exist in the medical profession – but they appear to be in short supply – so they need to be identified, and nurtured.  And change teams need to assemble and respect the different gifts.

One further point about temperament.  It is not immutable. We can all develop a broader set of MBTI® capabilities with guidance and practice – especially the ones that fill the gaps between xSTJ and xNFP.  Those whose comfort zone naturally falls nearer the middle of the four dimensions find this easier. And that is one of the goals of Improvement Science training.

And if you are in a hurry then you might start today by identifying the xSFJ “supporters” and the xNFJ “mentors” in your organisation and linking them together to build a temporary bridge over the change culture chasm.

So to find your Temperament just click here to download the Temperament Sorter.

The Mirror

[Dring Dring]

The phone announced the arrival of Leslie for the weekly ISP mentoring conversation with Bob.

<Leslie> Hi Bob.

<Bob> Hi Leslie. What would you like to talk about today?

<Leslie> A new challenge – one that I have not encountered before.

<Bob>Excellent. As ever you have pricked my curiosity. Tell me more.

<Leslie> OK. Up until very recently whenever I have demonstrated the results of our improvement work to individuals or groups the usual response has been “Yes, but“. The habitual discount as you call it. “Yes, but your service is simpler; Yes, but your budget is bigger; Yes, but your staff are less militant.” I have learned to expect it so I do not get angry any more.

<Bob> OK. The mantra of the skeptics is to be expected and you have learned to stay calm and maintain respect. So what is the new challenge?

<Leslie>There are two parts to it.  Firstly, because the habitual discounting is such an effective barrier to the diffusion of learning, our system has not changed; the performance is steadily deteriorating; the chaos is worsening and everything that is ‘obvious’ has been tried and has not worked. More red lights are flashing on the patient-harm dashboard and the Inspectors are on their way. There is an increasing turnover of staff at all levels – including Executive.  There is an anguished call for “A return to compassion first” and “A search for new leaders” and “A cultural transformation“.

<Bob> OK. It sounds like the tipping point of awareness has been reached, enough people now appreciate that their platform is burning and radical change of strategy is required to avoid the ship sinking and them all drowning. What is the second part?

<Leslie> I am getting more emails along the lines of “What would you do?”

<Bob> And your reply?

<Leslie> I say that I do not know because I do not have a diagnosis of the cause of the problem. I do know a lot of possible causes but I do not know which plausible ones are the actual ones.

<Bob> That is a good answer.  What was the response?

<Leslie>The commonest one is “Yes, but you have shown us that Plan-Do-Study-Act is the way to improve – and we have tried that and it does not work for us. So we think that improvement science is just more snake oil!”

<Bob>Ah ha. And how do you feel about that?

<Leslie>I have learned the hard way to respect the opinion of skeptics. PDSA does work for me but not for them. And I do not understand why that is. I would like to conclude that they are not doing it right but that is just discounting them and I am wary of doing that.

<Bob>OK. You are wise to be wary. We have reached what I call the Mirror-on-the-Wall moment.  Let me ask what your understanding of the history of PDSA is?

<Leslie>It was called Plan-Do-Check-Act by Walter Shewhart in the 1930’s and was presented as a form of the scientific method that could be applied on the factory floor to improving the quality of manufactured products.  W Edwards Deming modified it to PDSA where the “Check” was changed to “Study”.  Since then it has been the key tool in the improvement toolbox.

<Bob>Good. That is an excellent summary.  What the Zealots do not talk about are the limitations of their wonder-tool.  Perhaps that is because they believe it has no limitations.  Your experience would seem to suggest otherwise though.

<Leslie>Spot on Bob. I have a nagging doubt that I am missing something here. And not just me.

<Bob>The reason PDSA works for you is because you are using it for the purpose it was designed for: incremental improvement of small bits of the big system; the steps; the points where the streams cross the stages.  You are using your FISH training to come up with change plans that will work because you understand the Physics of Flow better. You make wise improvement decisions.  In fact you are using PDSA in two separate modes: discovery mode and delivery mode.  In discovery mode we use the Study phase to build our competence – and we learn most when what happens is not what we expected.  In delivery mode we use the Study phase to build our confidence – and that grows most when what happens is what we predicted.

<Leslie>Yes, that makes sense. I see the two modes clearly now you have framed it that way – and I see that I am doing both at the same time, almost by second nature.

<Bob>Yes – so when you demonstrate it you describe PDSA generically – not as two complementary but contrasting modes. And by demonstrating success you omit to show that there are some design challenges that cannot be solved with either mode.  That hidden gap attracts some of the “Yes, but” reactions.

<Leslie>Do you mean the challenges that others are trying to solve and failing?

<Bob>Yes. The commonest error is to discount the value of improvement science in general; so nothing is done and the inevitable crisis happens because the system design is increasingly unfit for the evolving needs.  The toast is not just burned, it is on fire, and it is now too late to use the discovery mode of PDSA because prompt and effective action is needed.  So the delivery mode of PDSA is applied to an emergent, ill-understood crisis. The Plan is created using invalid assumptions and guesswork so it is fundamentally flawed, and the Do then just makes the chaos worse.  In the ensuing panic the Study and Act steps are skipped so all hope of learning is lost and a vicious and damaging spiral of knee-jerk Plan-Do-Plan-Do follows. The chaos worsens, quality falls, safety falls, confidence falls, trust falls, expectation falls and depression and despair increase.

<Leslie>That is exactly what is happening and why I feel powerless to help. What do I do?

<Bob>The toughest bit is past. You have looked squarely in the mirror and can now see harsh reality rather than hasty rhetoric. Now you can look out of the window with different eyes.  And you are now looking for a real-world example of where complex problems are solved effectively and efficiently. Can you think of one?

<Leslie>Well medicine is one that jumps to mind.  Solving a complex, emergent clinical problem requires a clear diagnosis and prompt and effective action to stabilise the patient and then to cure the underlying cause: the disease.

<Bob>An excellent example. Can you describe what happens as a PDSA sequence?

<Leslie>That is a really interesting question.  I can say for starters that it does not start with P – we have learned not to have a preconceived idea of what to do at the start because it badly distorts our clinical judgement.  The first thing we do is assess the patient to see how sick and unstable they are – we use the Vital Signs. So that means that we decide to Act first and our first action is to Study the patient.

<Bob>OK – what happens next?

<Leslie>Then we will do whatever is needed to stabilise the patient based on what we have observed – it is called resuscitation – and only then we can plan how we will establish the diagnosis; the root cause of the crisis.

<Bob> So what does that spell?

<Leslie> A-S-D-P.  It is the exact opposite of P-D-S-A … the mirror image!

<Bob>Yes. Now consider the treatment that addresses the root cause and that cures the patient. What happens then?

<Leslie>We use the diagnosis to create a treatment Plan for the specific patient; we then Do that, and we Study the effect of the treatment in that specific patient, using our various charts to compare what actually happens with what we predicted would happen. Then we decide what to do next: the final action.  We may stop because we have achieved our goal, or repeat the whole cycle to achieve further improvement. So that is our old friend P-D-S-A.

<Bob>Yes. And what links the two bits together … what is the bit in the middle?

<Leslie>Once we have a diagnosis we look up the appropriate treatment options that have been proven to work through research trials and experience; and we tailor the treatment to the specific patient. Oh I see! The missing link is design. We design a specific treatment plan using generic principles.

<Bob>Yup.  The design step is the jam in the improvement sandwich and it acts like a mirror: A-S-D-P is reflected back as P-D-S-A

<Leslie>So I need to teach this backwards: P-D-S-A and then Design and then A-S-D-P!

<Bob>Yup – and you know that by another name.

<Leslie> 6M Design®! That is what my Improvement Science Practitioner course is all about.

<Bob> Yup.

<Leslie> If you had told me that at the start it would not have made much sense – it would just have confused me.

<Bob>I know. That is the reason I did not. The Mirror needs to be discovered in order for the true value to be appreciated. At the start we look in the mirror and perceive what we want to see. We have to learn to see what is actually there. Us. Now you can see clearly where P-D-S-A and Design fit together and the missing A-S-D-P component that is needed to assemble a 6M Design® engine. That is Improvement-by-Design in a nine-letter nutshell.

<Leslie> Wow! I can’t wait to share this.

<Bob> And what do you expect the response to be?

<Leslie>”Yes, but”?

<Bob> From the die hard skeptics – yes. It is the ones who do not say “Yes, but” that you want to engage with. The ones who are quiet. It is always the quiet ones that hold the key.

Three Essentials

There are three necessary parts before ANY improvement-by-design effort will gain traction. Omit any one of them and nothing happens.


1. A clear purpose and an outline strategic plan.

2. Tactical measurement of performance-over-time.

3. A generic Improvement-by-Design framework.

These are the necessary minimum requirements to be able to safely delegate the day-to-day and week-to-week tactical stuff that delivers the “what is needed”.

These are necessary minimum requirements to build a self-regulating, self-sustaining, self-healing, self-learning win-win-win system.

And this is not a new idea.  It was described by Joseph Juran in the 1960’s and that description was based on 20 years of hands-on experience of actually doing it in a wide range of manufacturing and service organisations.

That is 20 years before  the terms “Lean” or “Six Sigma” or “Theory of Constraints” were coined.  And the roots of Juran’s journey were 20 years before that – when he started work at the famous Hawthorne Works in Chicago – home of the Hawthorne Effect – and where he learned of the pioneering work of  Walter Shewhart.

And the roots of Shewhart’s innovations were 20 years before that – in the first decade of the 20th Century when innovators like Henry Ford and Henry Gantt were developing the methods of how to design and build highly productive processes.

Ford gave us the one-piece-flow high-quality at low-cost production paradigm. Toyota learned it from Ford.  Gantt gave us simple yet powerful visual charts that give us an understanding-at-a-glance of the progress of the work.  And Shewhart gave us the deceptively simple time-series chart that signals when we need to take more notice.

These nuggets of pragmatic golden knowledge have been buried for decades under a deluge of academic mud.  It is high time to clear away the detritus and get back to the bedrock of pragmatism. The “how-to-do-it” of improvement. Just reading Juran’s 1964 “Managerial Breakthrough” illustrates how much we now take for granted. And how ignorant we have allowed ourselves to become.

Acquired Arrogance is a creeping, silent disease – we slip from second nature to blissful ignorance without noticing when we divorce painful reality and settle down with our own comfortable collective rhetoric.

The wake-up call is all the more painful as a consequence: because it is all the more shocking for each one of us; and because it affects more of us.

The pain is temporary – so long as we treat the cause and not just the symptom.

The first step is to acknowledge the gap – and to start filling it in. It is not technically difficult, time-consuming or expensive.  Whatever our starting point we need to put in place the three foundation stones above:

1. Common purpose.
2. Measurement-over-time.
3. Method for Improvement.

Then the rubber meets the road (rather than the sky) and things start to improve – for real. Lots of little things in lots of places at the same time – facilitated by the Junior Managers. The cumulative effect is dramatic. Chaos is tamed; calm is restored; capability builds; and confidence builds. The cynics have to look elsewhere for their sport and the skeptics are able to remain healthy.

Then the Middle Managers feel the new firmness under their feet – where before there were shifting sands. They are able to exert their influence again – to where it makes a difference. They stop chasing Scotch Mist and start reporting real and tangible improvement – with hard evidence. And they rightly claim a slice of the credit.

And the upwelling of win-win-win feedback frees the Senior Managers from getting sucked into reactive fire-fighting and the Victim Vortex; and that releases the emotional and temporal space to start learning and applying System-level Design.  That is what is needed to deliver a significant and sustained improvement.

And that creates the stable platform for the Executive Team to do Strategy from. Which is their job.

It all starts with the Three Essentials:

1. A Clear and Common Constancy of Purpose.
2. Measurement-over-time of the Vital Metrics.
3. A Generic Method for Improvement-by-Design.

The Black Curtain

A couple of weeks ago an important event happened.  A Masterclass in Demand and Capacity for NHS service managers was run by an internationally renowned and very experienced practitioner of Improvement Science.

The purpose was to assist the service managers to develop their capability for designing quality, flow and cost improvement using tried and tested operations management (OM) theory, techniques and tools.

It was assumed that, as experienced NHS service managers, they already knew the basic principles of OM and the foundation concepts, terminology, techniques and tools.

It was advertised as a Masterclass and designed accordingly.

On the day it was discovered that none of the twenty delegates had heard of two fundamental OM concepts: Little’s Law and Takt Time.

These relate to how processes are designed-to-flow. It was a Demand and Capacity Master Class; not a safety, quality or cost one.  The focus was flow.

And it became clear that none of the twenty delegates were aware before the day that there is a well-known and robust science to designing systems to flow.

So learning this fact came as a bit of a shock.

The implications of this observation are profound and worrying:

if a significant % of senior NHS operational managers are unaware of the foundations of operations management then the NHS may have a problem it was not aware of …

because …

“if transformational change of the NHS into a stable system that is fit-for-purpose (now and into the future) requires the ability to design processes and systems that deliver both high effectiveness and high efficiency ...”

then …

it raises the question of whether the current generation of NHS managers is fit-for-this-future-purpose.

No wonder that discovering a Science of  Improvement actually exists came as a bit of a shock!

And saying “Yes, but clinicians do not know this science either!” is a defensive reaction and not a constructive response. They may not but they do not call themselves “operational managers”.

[PS. If you are reading this and are employed by the NHS and do not know what Little’s Law and Takt Time are then it would be worth looking them up first. Wikipedia is a good place to start].
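For the impatient, the one-line versions are: Little's Law relates average work-in-progress, flow rate, and lead time (WIP = flow rate × lead time), and takt time is the available work time divided by demand. A small sketch with illustrative numbers (none of the figures come from the text):

```python
# Little's Law: average WIP = average flow rate x average lead time.
# Takt time: available work time / demand - the beat the process must keep.
# All numbers below are illustrative.

arrivals_per_hour = 12       # average demand (tasks/hour)
avg_lead_time_hours = 2.5    # average time a task spends in the system

avg_wip = arrivals_per_hour * avg_lead_time_hours
print(f"Average work-in-progress (Little's Law): {avg_wip:.0f} tasks")  # 30 tasks

available_minutes = 8 * 60   # an 8-hour working day
daily_demand = 96            # tasks expected per day

takt_minutes = available_minutes / daily_demand
print(f"Takt time: one task every {takt_minutes:.1f} minutes")  # 5.0 minutes
```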

And now we have another question:

“Given there are thousands of operational managers in the NHS; what does one sample of 20 managers tell us about the whole population?”

Now that is a good question.

It is also a question of statistics. More specifically quite advanced statistics.

And most people who work in the NHS have not studied statistics to that level. So now we have another do-not-know-how problem.

But it is still an important question that we need to understand the answer to – so we need to learn how and that means taking this learning path one step at a time using what we do know, rather than what we do not.

Step 1:

What do we know? We have one sample of 20 NHS service managers. We know something about our sample because our unintended experiment has measured it: that none of them had heard of Little’s Law or Takt Time. That is 0/20 or 0%.

This is called a “sample statistic“.

What we want to know is “What does this information tell us about the proportion of the whole population of all NHS managers who do have this foundation OM knowledge?”

This proportion of interest is called  the unknown “population parameter“.

And we need to estimate this population parameter from our sample statistic because it is impractical to measure a population parameter directly: That would require every NHS manager completing an independent and accurate assessment of their basic OM knowledge. Which seems unlikely to happen.

The good news is that we can get an estimate of a population parameter from measurements made from small samples of that population. That is one purpose of statistics.

Step 2:

But we need to check some assumptions before we attempt this statistical estimation trick.

Q1: How representative is our small sample of the whole population?

If we chose the delegates for the masterclass by putting the names of all NHS managers in a hat and drawing twenty names out at random, as in a tombola or lottery, then we have what is called a “random sample” and we can trust our estimate of the wanted population parameter.  This is called “random sampling”.

That was not the case here. Our sample was self-selecting. We were not conducting a research study. This was the real world … so there is a chance of “bias”. Our sample may not be representative and we cannot say what the most likely bias is.

It is possible that the managers who selected themselves were the ones struggling most and therefore more likely than average to have a gap in their foundation OM knowledge. It is also possible that the managers who selected themselves are the most capable in their generation and are very well aware that there is something else that they need to know.

We may have a biased sample and we need to proceed with some caution.

Step 3:

So given the fact that none of our possibly biased sample of managers were aware of the Foundation OM Knowledge then it is possible that no NHS service managers know this core knowledge.  In other words the actual population parameter is 0%. It is also possible that the managers in our sample were the only ones in the NHS who do not know this.  So, in theory, the sought-for population parameter could be anywhere between 0% and very nearly 100%.  Does that mean it is impossible to estimate the true value?

It is not impossible. In fact we can get an estimate that we can be very confident is accurate. Here is how it is done.

Statistical estimates of population parameters are always presented as ranges with a lower and an upper limit called a “confidence interval” because the sample is not the population. And even if we have an unbiased random sample we can never be 100% confident of our estimate.  The only way to be 100% confident is to measure the whole population. And that is not practical.

So, we know the theoretical limits from consideration of the extreme cases … but what happens when we are more real-world-reasonable and say – “let us assume our sample is actually a representative sample, albeit not a randomly selected one“.  How does that affect the range of our estimate of the elusive number – the proportion of NHS service managers who know basic operation management theory?

Step 4:

To answer that we need to consider two further questions:

Q2. What is the effect of the size of the sample?  What if only 5 managers had come and none of them knew; what if it had been 50 or 500 and none of them knew?

Q3. What if we repeated the experiment more times? With the same or different sample sizes? What could we learn from that?

Our intuition tells us that the larger the sample size and the more often we do the experiment then the more confident we will be of the result. In other words, the narrower the confidence interval around our sample statistic will be.

Our intuition is correct because if our sample was 100% of the population we could be 100% confident.

So given we have not yet found an NHS service manager who has the OM Knowledge then we cannot exclude 0%. Our challenge narrows to finding a reasonable estimate of the upper limit of our confidence interval.

Step 5

Before we move on let us review where we have got to already and our purpose for starting this conversation: We want enough NHS service managers who are knowledgeable enough of design-for-flow methods to catalyse a transition to a fit-for-purpose and self-sustaining NHS.

One path to this purpose is to have a large enough pool of service managers who do understand this Science well enough to act as advocates and to spread both the know-of and the know-how.  This is called the “tipping point“.

There is strong evidence that when about 20% of a population knows about something that is useful for the whole population – then that knowledge  will start to spread through the grapevine. Deeper understanding will follow. Wiser decisions will emerge. More effective actions will be taken. The system will start to self-transform.

And in the Brave New World of social media this message may spread further and faster than in the past. This is good.

So if the NHS needs 20% of its operational managers aware of the Foundations of Operations Management then what value is our morsel of data from one sample of 20 managers who, by chance, were all unaware of the Knowledge?  How can we use that data to say how close to the magic 20% tipping point we are?

Step 6:

To do that we need to ask the question in a slightly different way.

Q4. What is the chance of an NHS manager NOT knowing?

We assume that they either know or do not know; so if 20% know then 80% do not.

This is just like saying: if the chance of rolling a “six” is 1-in-6 then the chance of rolling a “not-a-six” is 5-in-6.

Next we ask:

Q5. What is the likelihood that we, just by chance, selected a group of managers where none of them know – and there are 20 in the group?

This is rather like asking: what is the likelihood of rolling twenty “not-a-sixes” in a row?

Our intuition says “an unlikely thing to happen!”

And again our intuition is sort of correct. How unlikely though? Our intuition is a bit vague on that.

If the actual proportion of NHS managers who have the OM Knowledge is about the same as the chance of rolling a six (about 16%) then we sense that the likelihood of getting a random sample of 20 where not one knows is small. But how small? Exactly?

We sense that 20% is too high an estimate of a reasonable upper limit.  But how much too high?

The answer to these questions is not intuitively obvious.

We need to work it out logically and rationally. And to work this out we need to ask:

Q6. As the % of Managers-who-Know is reduced from 20% towards 0% – what is the effect on the chance of randomly selecting 20 all of whom are not in the Know?  We need to be able to see a picture of that relationship in our minds.

The good news is that we can work that out with a bit of O-level maths. And all NHS service managers, nurses and doctors have done O-level maths. It is a mandatory requirement.

The chance of rolling a “not-a-six” is 5/6 on one throw – about 83%;
and the chance of rolling only “not-a-sixes” in two throws is 5/6 x 5/6 = 25/36 – about 69%
and the chance of rolling only “not-a-sixes” in three throws is 5/6 x 5/6 x 5/6 – about 58%… and so on.

[This is called the “chain rule” and it requires that the throws are independent of each other – i.e. a random, unbiased sample]

If we do this 20 times we find that the chance of rolling no sixes at all in 20 throws is about 2.6% – unlikely but far from impossible.
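The dice arithmetic above is easy to check for ourselves. Here is a minimal Python sketch (my illustration, not part of the original workings) that applies the chain rule for 1, 2, 3 and 20 throws:

```python
# Chain rule for independent events:
# P(no six in n throws) = (5/6) ** n
p_not_six = 5 / 6

for n in (1, 2, 3, 20):
    print(f"P(no six in {n:2d} throws) = {p_not_six ** n:.1%}")
# The 20-throw case comes out at about 2.6% - unlikely but far from impossible.
```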

We need to introduce a bit of O-level algebra now.

Let us call the proportion of NHS service managers who understand basic OM – our unknown population parameter – “p”.

So if p is the chance of a “six” then (1-p) is the chance of a “not-a-six”.

Then the chance of no sixes in one throw is (1-p)

and no sixes after 2 throws is (1-p)(1-p) = (1-p)^2 (where ^ means raise to the power)

and no sixes after three throws is (1-p)(1-p)(1-p) = (1-p)^3 and so on.

So the likelihood of  “no sixes in n throws” is (1-p)^n

Let us call this “t”

So the equation we need to solve to estimate the upper limit of our estimate of “p” is

t=(1-p)^20

Where “t” is a measure of how likely we are to choose 20 managers all of whom do not know – just by chance.  And we want that to be a small number. We want to feel confident that our estimate is reasonable and not just a quirk of chance.

So what threshold do we set for “t” that we feel is “reasonable”? 1 in a million? 1 in 1000? 1 in 100? 1 in 10?

By convention we use 1 in 20 (t=0.05) – but that is arbitrary. If we are more risk-averse we might choose 1:100 or 1:1000. It depends on the context.

Let us be reasonable – let us say we want to be 95% confident of our estimated upper limit for “p” – which means we are calculating the 95% confidence interval. This means that we will accept a 1-in-20 risk of our calculated confidence interval for “p” being wrong: 19:1 odds that the true value of “p” falls inside our calculated range. Pretty good odds! So we will set the likelihood threshold for being “wrong” at 5%.

So now we need to solve:

0.05= (1-p)^20

And we want a picture of this relationship in our minds so let us draw a graph of t for a range of values of p.

We know the value of p must be between 0 and 1.0 so we have all we need and we can generate this graph easily using Excel.  And every senior NHS operational manager knows how to use Excel. It is a requirement. Isn’t it?

Black_Curtain

The Excel-generated chart shows the relationship between p (horizontal axis) and t (vertical axis) using our equation:

t=(1-p)^20.

Step 7

Let us first do a “sanity check” on what we have drawn. Let us “check the extreme values”.

If 0% of managers know then a sample of 20 will always reveal none – i.e. the leftmost point of the chart. Check!

If 100% of managers know then a sample of 20 will never reveal none – i.e. way off to the right. Check!

What is clear from the chart is that the relationship between p and t  is not a straight line; it is non-linear. That explains why we find it difficult to estimate intuitively. Our brains are not very good at doing non-linear analysis. Not very good at all.

So we need a tool to help us: our Excel graph.  We read down the vertical “t” axis from 100% to the 5% point, then trace across to the right until we hit the line we have drawn, then read down to the corresponding value for “p”. It says about 14%.

So that is the upper limit of our 95% confidence interval of the estimate of the true proportion of NHS service managers who know the Foundations of Operations Management.  The lower limit is 0%.

And we cannot say better than somewhere between 0% and 14% with the data we have and the assumptions we have made.
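Reading values off a graph is good for intuition, but the equation can also be rearranged algebraically: t = (1-p)^20 gives p = 1 - t^(1/20). This short Python sketch (my addition, assuming nothing beyond the equation in the text) confirms the result:

```python
# Solve 0.05 = (1 - p) ** 20 for p, the upper limit of the 95% confidence interval
t = 0.05          # the 1-in-20 risk threshold we chose
n = 20            # the sample size
p_upper = 1 - t ** (1 / n)
print(f"Upper limit of the 95% CI: {p_upper:.1%}")  # about 13.9% - the ~14% read off the chart
```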

To get a more precise estimate,  a narrower 95% confidence interval, we need to gather some more data.

[Another way we can use our chart is to ask “If the actual % of Managers who know is x% then what is the chance that no one of our sample of 20 will know?” Solving this manually means marking the x% point on the horizontal axis, tracing a line vertically up until it crosses the drawn line, then tracing a horizontal line to the left until it crosses the vertical axis and reading off the likelihood.]

So if in reality 5% of all managers do Know then the chance of no one knowing in an unbiased sample of 20 is about 36% – really quite likely.

Now we are getting a feel for the likely reality. Much more useful than just dry numbers!

But we can be 95% sure that at least 86% of NHS managers do NOT know the basic language of flow-improvement-science.

And what this chart also tells us is that we can be VERY confident that the true value of p is less than 20% – the proportion we believe we need to get to the transformation tipping point.

Now we need to repeat the experiment and draw a new graph to get a more accurate estimate of just how much less – but stepping back from the statistical nuances – the message is already clear that we do have a Black Curtain problem.

A Black Curtain of Ignorance problem.

Many will now proclaim angrily “This cannot be true! It is just statistical smoke and mirrors. Surely our managers do know this by a different name – how could they not! It is unthinkable to suggest the majority of NHS managers are ignorant of the basic science of what they are employed to do!”

If that were the case though then we would already have an NHS that is fit-for-purpose. That is not what reality is telling us.

And it quickly became apparent at the master class that our sample of 20 did not know-this-by-a-different-name.

The good news is that this knowledge gap could be hiding the opportunity we are all looking for – a door to a path that leads to a radical yet achievable transformation of the NHS into a system that is fit-for-purpose. Now and into the future.

A system that delivers safe, high quality care for those who need it, in full, when they need it and at a cost the country can afford. Now and for the foreseeable future.

And the really good news is that this knowledge gap may be deep and extensive but it is not wide … the Foundations are easy to learn, and to start applying immediately.  The basics can be learned in less than a week – the more advanced skills take a bit longer.  And this is not untested academic theory – it is proven pragmatic real-world problem-solving know-how. It has been known for over 50 years outside healthcare.

Our goal is not acquisition of theoretical knowledge – it is a deep enough understanding to make wise enough decisions to achieve good enough outcomes. For everyone. Starting tomorrow.

And that is the design purpose of FISH. To provide those who want to learn a quick and easy way to do so.

Stop Press: Further feedback from the masterclass is that some of the managers are grasping the nettle, drawing back their own black curtains, opening the door that was always there behind it, and taking a peek through into a magical garden of opportunity. One that was always there but was hidden from view.

Improvement-by-Twitter

Sat 5th October

It started with a tweet.

08:17 [JG] The NHS is its people. If you lose them, you lose the NHS.

09:15 [DO] We are in a PEOPLE business – educating people and creating value.

Sun 6th October

08:32 [SD] Who isn’t in people business? It is only people who buy stuff. Plants, animals, rocks and machines don’t.

09:42 [DO] Very true – it is people who use a service and people who deliver a service and we ALL know what good service is.

09:47 [SD] So onus is on us to walk our own talk. If we don’t all improve our small bits of the NHS then who can do it for us?

Then we were off … the debate was on …

10:04 [DO] True – I can prove I am saving over £160 000.00 a year – roll on PBR !?

10:15 [SD] Bravo David. I recently changed my surgery process: productivity up by 35%. Cost? Zero. How? Process design methods.

11:54 [DO] Exactly – cost neutral because we were thinking differently – so how to persuade the rest?

12:10 [SD] First demonstrate it is possible then show those who want to learn how to do it themselves. http://www.saasoft.com/fish/course

We had hard evidence it was possible … and now MC joined the debate …

12:48 [MC] Simon why are there different FISH courses for safety, quality and efficiency? Shouldn’t good design do all of that?

12:52 [SD] Yes – goal of good design is all three. It just depends where you are starting from: Governance, Operations or Finance.

A number of parallel threads then took off and we all had lots of fun exploring each other’s knowledge and understanding.

17:28 MC registers on the FISH course.

And that gave me an idea. I emailed an offer – that he could have a complimentary pass for the whole FISH course in return for sharing what he learns as he learns it.  He thought it over for a couple of days then said “OK”.

Weds 9th October

06:38 [MC] Over the last 4 years or so, I’ve been involved in incrementally improving systems in hospitals. Today I’m going to start an experiment.

06:40 [MC] I’m going to see if we can do less of the incremental change and more system redesign. To do this I’ve enrolled in FISH

Fri 11th October

06:47 [MC] So as part of my exploration into system design, I’ve done some studies in my clinic this week. Will share data shortly.

21:21 [MC] Here’s a chart showing cycle time of patients in my clinic. Median cycle time 14 mins, but much longer in 2 pic.twitter.com/wu5MsAKk80

20131019_TTchart

21:22 [MC] Here’s the same clinic from patients’ point of view, wait time. Much longer than I thought or would like

20131019_WTchart

21:24 [MC] Two patients needed to discuss surgery or significant news, that takes time and can’t be rushed.

21:25 [MC] So, although I started on time, worked hard and finished on time, people were waiting ages to see me. Template is wrong!

21:27 [MC] By the time I had seen the 3rd patient, people were waiting 45 mins to see me. That’s poor.

21:28 [MC] The wait got progressively worse until the end of the clinic.

Sunday 13th October

16:02 [MC] As part of my homework on systems, I’ve put my clinic study data into a Gantt chart. Red = waiting, green = seeing me pic.twitter.com/iep2PDoruN

20131019_Ganttchart

16:34 [SD] Hurrah! The visual power of the Gantt Chart. Worth adding the booked time too – there are Seven Sins of Scheduling to find.

16:36 [SD] Excellent – good idea to sort into booked time order – it makes the planned rate of demand easier to see.

16:42 [SD] Best chart is Work In Progress – count the number of patients at each time step and plot as a run chart.

17:23 [SD] Yes – just count how many lines you cross vertically at each time interval. It can be automated in Excel

17:38 [MC] Like this? pic.twitter.com/fTnTK7MdOp

20131019_WIPchart

This is the work-in-progress chart. The most useful process monitoring chart of all. It shows the changing size of the queue over time.  Good flow design is associated with small, steady queues.
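The counting rule from the tweets – count how many Gantt lines a vertical line crosses at each time step – is simple to automate. A minimal Python sketch, using made-up arrival and departure times rather than MC’s actual clinic data:

```python
# Work-in-progress = number of patients who have arrived but not yet departed.
# Each tuple is (arrival, departure) in minutes from clinic start - illustrative data only.
patients = [(0, 18), (10, 42), (20, 55), (30, 58), (40, 70)]

def wip_at(t, intervals):
    """Count the intervals that contain time t, i.e. Gantt lines a vertical at t crosses."""
    return sum(1 for start, end in intervals if start <= t < end)

for t in range(0, 80, 10):
    print(f"t = {t:2d} min  WIP = {wip_at(t, patients)}")
```

Plotting the resulting counts as a run chart gives the WIP chart described above.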

18:22 [SD] Perfect! You’re right not to plot as XmR – this is a cusum metric. Not a healthy WIP chart this!

There was more to follow but the “ah ha” moment had been seen and shared.

Weds 16th October

MC completes the Online FISH course and receives his well-earned Certificate of Achievement.

This was his with-the-benefit-of-hindsight conclusion:

I wish I had known some of this before. I will have a totally different approach to improvement projects now. Key is to measure and model well before doing anything radical.

Improvement Science works.
Improvement-by-Design is a skill that can be learned quickly.
FISH is just a first step.

A Treaty with the Lions

This week I heard an inspiring story of applied Improvement Science that has delivered a win-win-win result. Not in a hospital. Not in a factory. In the red-in-tooth-and-claw reality of rural Kenya.

Africa has vast herds of four-hoofed herbivores called zebra and wildebeest which are accompanied by clever and powerful carnivores – called lions. The sun and rain make the grass grow; the herbivores eat the grass and the carnivores eat the herbivores. It is the way of Nature – and has been so for millions of years.

Enter Man a few thousand years ago with his domesticated cattle and the scene is set for conflict.  Domestic cattle are easy pickings for a hungry lion. Why spend a lot of energy chasing a lively zebra or wildebeest and run the risk of injury that would spell death-by-starvation? Lions are strong and smart but they do not have a social security system to look after the injured and sick. So why not go for the easier option?

Maasai_Warriors

So Man protects his valuable cattle from hungry lions. And Man is inventive.  The cattle need to eat and sleep like the rest of us – so during the day the cattle are guarded by brave Maasai warriors armed with spears; and at night the cattle are herded into acacia thorn-ringed kraals and watched over by the boys of the tribe.

The lions come at night. Their sense of smell and sight is much better developed than Man’s.

The boys’ job is to deter the lions from killing the cattle.

And this conflict has been going on for thousands of years.

So when a hungry lion kills a poorly guarded cow or bull – then Man will get revenge and kill the lion.  Everyone loses.

But the application of Improvement Science is changing that ancient conflict.  And it was not done by a scientist or an animal welfare evangelist or a trained Improvementologist. It was done by a young Maasai boy called Richard Turere.

He describes the why, the what and the how  … HERE.

Richard_Turere

So what was his breakthrough?

It was noticing that walking about with a torch  was a more effective lion deterrent than a fire or a scarecrow.

That was the chance discovery.  Chance favours the prepared mind.

So how do we create a prepared mind that is receptive to the hints that chance throws at us?

That is one purpose of learning Improvement Science.

What came after the discovery was not luck … it was design.

Richard used what was to hand to design a solution that achieved the required purpose – an effective lion deterrent – in a way that was also an efficient use of his lifetime.

He had bigger dreams than just protecting his tribe’s cattle. His dream was to fly in one of those silver things that he saw passing high over the savannah every day.

And sitting up every night waving a torch to deter hungry lions from eating his father’s cattle was not going to deliver that dream.

So he had to nail that Niggle before he could achieve his Nice If.

Like many budding inventors and engineers Richard is curious about how things work – and he learned a lot about electronics by dismantling his mother’s radio! It got him into a lot of trouble – but the knowledge and understanding that he gained was put to good use when he designed his “lion lights”.

This true story captures the essence of Improvement Science better than any blog, talk, lecture, course or book could.

That is why it was shared by those who learned of his improvement; then to TED; then to the World; then passed to me and I am passing it on too.  It is an inspiring story. It says that anyone can do this sort of thing if they choose to.

And it shows how Improvement Science spreads.  Through the grapevine.  And understanding how that works is part of the Science.

The Power of the Converted Skeptic

puzzle_lightbulb_build_PA_150_wht_4587

One of the biggest challenges in Improvement Science is diffusion of an improvement outside the circle of control of the innovator.

It is difficult enough to make a significant improvement in one small area – it is an order of magnitude more difficult to spread the word and to influence others to adopt the new idea!

One strategy is to shame others into change by demonstrating that their attitude and behaviour are blocking the diffusion of innovation.

This strategy does not work.  It generates more resistance and amplifies the differences of opinion.

Another approach is to bully others into change by discounting their opinion and just rolling out the “obvious solution” by top-down diktat.

This strategy does not work either.  It generates resentment – even if the solution is fit-for-purpose – which it usually is not!

So what does work?

The key to it is to convert some skeptics because a converted skeptic is a powerful force for change.

But doesn’t that fly in the face of established change management theory?

Innovation diffuses from innovators to early-adopters, then to the silent majority, then to the laggards and maybe even dinosaurs … doesn’t it?

Yes – but that style of diffusion is incremental, slow and has a very high failure rate.  What is very often required is something more radical, much faster and more reliable.  For that it needs both push from the Confident Optimists and pull from some Converted Pessimists.  The tipping point does not happen until the silent majority start to come off the fence in droves: and they do that when the noisy optimists and equally noisy pessimists start to agree.

The fence-sitters jump when the tug-o-war stalemate stops and the force for change becomes aligned in the direction of progress.

So how is a skeptic converted?

Simple. By another Converted Skeptic.


Here is a real example.

We are all skeptical about many things that we would actually like to improve.

Personal health for instance. Something like weight. Yawn! Not that Old Chestnut!

We are bombarded with shroud-waver stories that we are facing an epidemic of obesity, rapidly rising  rates of diabetes, and all the nasty and life-shortening consequences of that. We are exhorted to eat “five portions of fruit and veg a day” …  or else! We are told that we must all exercise our flab away. We are warned of the Evils of Cholesterol and told that overweight children are caused by bad parenting.

The more gullible and fearful are herded en-masse in the direction of the Get-Thin-Quick sharks who then have a veritable feeding frenzy. Their goal is their short-term financial health not the long-term health of their customers.

The more insightful, skeptical and frustrated seek solace in the chocolate Hob Nob jar.

For their part, the healthcare professionals are rewarded for providing ineffective healthcare by being paid-for-activity not for outcome. They dutifully measure the decline and hand out ineffective advice. Their goal is survival too.

The outcome is predictable and seemingly unavoidable.


So when a disruptive innovation comes along that challenges the current dogma and status quo, the healthy skeptics inevitably line up and proclaim that it will not work.

Not that it does not work. They do not know that because they never try it. They are skeptics. Someone else has to prove it to them.

And I am a healthy skeptic about many things.

I am skeptical about diets – the evidence suggests that their proclaimed benefit is difficult to achieve and even more difficult to sustain: and that is the hall-mark of either a poor design or a deliberate, profit-driven, yet legal scam.

So I decided to put an innovative approach to weight loss to the test.  It is not a diet – it is a design to achieve and sustain a healthier weight to height ratio.  And for it to work it must work for me because I am a diet skeptic.

The start of the story is  HERE

I am now a Converted Healthier Skeptic.

I call the innovative design a “2 out of 7 Lo-CHO” policy and what that means is for two days a week I just cut out as much carbohydrate (CHO) as feasible.  Stuff like bread, potatoes, rice, pasta and sugar. The rest of the time I do what I normally do.  There is no need for me to exercise and no need for me to fill up on Five Fruit and Veg.

LoCHO_Design

The chart above is the evidence of what happened. It shows a 7 kg reduction in weight over 140 days – and that is impressive given that it has required no extra exercise and no need to give up tasty treats completely and definitely no need to boost the bottom-line of a Get-Thin-Quick shark!

It also shows what to expect.  The weight loss starts steeper then tails off as it approaches a new equilibrium weight. This is the classic picture of what happens to a “system” when one of its “operational policies” is wisely re-designed.

Patience, persistence and a time-series chart are all that is needed. It takes less than a minute per day to monitor the improvement.

Even I can afford to invest a minute per day.

The BaseLine© chart clearly shows that the day-to-day variation is quite high: and that is expected – it is inherent in the 2-out-of-7 Lo-CHO design. It is not the short-term change that is the measure of success – it is the long-term improvement that is important.

It is important to measure daily – because it is the daily habit that keeps me mindful, aligned, and  on-goal.  It is not the measurement itself that is the most important thing – it is the conscious act of measuring and then plotting the dot in the context of the previous dots. The picture tells the story. No further “statistical” analysis is required.

The power of this chart is that it provides hard evidence that is very effective for nudging other skeptics like me into giving the innovative idea a try.  I know because I have done that many times now.  I have converted other skeptics.  It is an innovation infection.

And the same principle appears to apply to other areas.  What is critical to success is tangible and visible proof of progress. That is what skeptics need. Then a rational and logical method and explanation that respects their individual opinion and requirements. The design has to work for them. And it must make sense.

They will come out with a string of “Yes … buts” and that is OK because that is how skeptics work.  Just answer their questions with evidence and explanations. It can get a bit wearing I admit but it is worth the effort.

An effective Improvement Scientist needs to be a healthy skeptic too – i.e. an open minded one.

Celebrating Achievement

Certificate

One of the best things about improvement is the delight that we feel when someone else acknowledges it.

Particularly someone whose opinion we respect.

We feel a warm glow of pride when they notice the difference and take the time to say “Well done!”

We need this affirmative feedback to fuel our improvement engine.

And we need to learn how to give ourselves affirmative feedback because usually there is a LOT of improvement work to do behind the scenes before any externally visible improvement appears.

It is like an iceberg – most of it is hidden from view.

And improvement is tough. We have to wade through Bureaucracy Treacle that is laced with Cynicide and policed by Skeptics.  We know this.

So we need to learn to celebrate the milestones we achieve and to keep reminding ourselves of what we have already done.  Even if no one else notices or cares.

Like the certificates, cups, and medals that we earned at school – still proudly displayed on our mantelpieces and shelves decades later. They are important. Especially to us.

So it is always a joy to celebrate the achievement of others and to say “Well Done” for reaching a significant milestone on the path of learning Improvement Science.

And that has been my great pleasure this week – to prepare and send the Certificates of Achievement to those who have recently completed the FISH course.

The best part of all has been to hear how many times the word “treasured” is used in the “Thank You” replies.

We display our Certificates with pride – not so much that others can see – more to remind ourselves every day to Celebrate Achievement.

Fear and Fuel

stick_figure_open_cupboard_150_wht_8038

Improvement implies change.

Change requires motivation.

And there are two flavours of motivation juice – Fear and Fuel

Fear is the emotion that comes from anticipated loss in the future.  Loss means some form of damage. Physical, psychological or social harm.  We fear loss of peer-esteem and we fear loss of self-esteem … almost more than we fear physical harm.

Our fear of anticipated loss may be based on reality. Our experience of actual loss in the past.  We remember the emotional pain and we learn from past pain to fear future loss.

Our fear of anticipated loss may also be fueled by rhetoric.  The doom-mongering of the Shroud-Wavers, the Nay-Sayers, the Skeptics and the Cynics.


And there are examples where the rhetorical fear is deliberately generated to drive the fear-of-reality to “the solution” – which of course we have to pay dearly for. This is Machiavellian mass manipulation for commercial gain.

“Fear of germs, fear of fatness, fear of the invisible enemies outside and inside”.

Generating and ameliorating fear is big business. It is a Burn-and-Scrape design.

What we are seeing here is the Drama Triangle operating on a massive scale. The Persecutors create the fear, the Victims run away and the Persecutors then switch role to Rescuers and offer to sell the terrified-and-now-compliant Victims “the  solution” to their fear.  The Victims do not learn.  That is not the purpose – because that would end the Game and derail the Gravy Train.


So fear is not an effective way to motivate for sustained improvement,  and we have ample evidence to support that statement!  It might get us started, but it won’t keep us going.

The Burn-and-Scrape design that we see everywhere is a fear-driven-design.

Any improvements are transitory and usually only achieved at the emotional expense of a passionate idealist. When they get too tired to push any more the toast gets burnt again because the toaster is perfectly designed to burn toast.  Not intentionally designed to burn the toast, but perfectly designed to do so nevertheless.

The use of Delusional Ratios and Arbitrary Targets (DRATs) is a fear-based-design-strategy. It ensures the Fear Game and Gravy Train continue.

And fear has a frightening cost. The cost of checking-and-correcting. The cost of the defensive-bureaucracy that may catch errors before too much local harm results but which itself creates unmeasurable global harm in a different way – by hoovering up the priceless human resource of life-time – like an emotional black hole.

The cost of errors. The cost of queues. The list of fear-based-design costs is long.

A fear-based-design for delivering improvement is a poor design.


So we need a better design.


And a better one is based on a positive-attractive-emotional force pulling us forwards into the future. The anticipation of gains for all. A win-win-win design.

Win-win-win design starts with the Common Purpose: the outcomes that everyone wants; and the outcomes that no-one wants.  We need both.  This balance creates alignment of effort on getting the NiceIfs (the wants) while avoiding the NoNos (the do not wants).

Then we ask the simple question: “What is preventing us having our win-win-win outcome now?

The blockers are the parts of our current design that we need to change: our errors of omission and our errors of commission.  Our gaps and our gaffes.

And to change them we need to be clear what they are; where they are and how they came to be there … and that requires a diagnostic skill that is one of our errors of omission. We have never learned how to diagnose our process design flaws.

Another common blocker is that we believe that a win-win-win outcome is impossible. This is a learned belief. And it is a self-fulfilling prophecy.

We may also believe that all swans are white because we have never seen a black swan – even though we know, in principle, that a black swan could be possible.

Rhetoric and Reality are not the same thing.  Feeling it could be possible and knowing that it actually is possible are different emotions. We need real evidence to challenge our life-limiting rhetoric.

Weary and wary skeptics crave real evidence not rhetorical exhortation.

So when that evidence is presented – and the Impossibility Hypothesis is disproved – then an emotional shock is inevitable.  We are now on the emotional roller-coaster called the Nerve Curve.  And the deeper our skepticism the bigger the shock.


After the shock we characteristically do one of three things:

1. We discount the evidence and go into denial.  We refuse to challenge our own rhetoric. Blissful ignorance is attractive.  The gap between intent and impact is scary.

2. We go quiet because we are now stuck in the painful awareness of the transition zone between the past and the future. The feelings associated with the transition are anxiety and depression. We don’t want to go back and we don’t know how to go forwards.

3. We sit up, we take notice, we listen harder, we rub our chins, our minds race as we become more and more excited. The feelings associated with the stage of resolution are curiosity, excitement and hope.

It is actually a sequence and it is completely normal.


And those who reach Stage 3 of the Nerve Curve say things like “We have food for thought; we feel inspired; our passion is re-ignited; we now have a beacon of hope for the future.”

That is the flavour of motivation-juice that is needed to fuel the improvement-by-design engine and to deliver win-win-win designs that are both surprising and self-sustaining.

And what actually changes our belief of what is possible is when we learn to do it for ourselves. For real.

That is Improvement Science in action. It is a pragmatic science.

Race for the Line

It is surprising how competitive most people are. We are constantly comparing ourselves with others and using what we find to decide what to do next. Groan or Gloat.  Chase or Cruise.

This is because we are social animals.  Comparing ourselves with others is hard-wired into us. We have little choice.

But our natural competitive behaviour can become counter-productive when we learn that we can look better-by-comparison if we block or trip-up our competitors.  In a vainglorious attempt to make ourselves look better-by-comparison we spike the wheels of our competitors’ chariots.  We fight dirty.

It is not usually openly aggressive fighting.  Most of our spiking is done passively. Often by deliberately not doing something.  A deliberate act of omission.  And if we are challenged we often justify our act of omission by claiming we were too busy.

This habitual passive-aggressive learned behaviour is not only toxic to improvement, it creates a toxic culture too. It is toxic to everything.

And it ensures that we stay stuck in The Miserable Job Swamp.  It is a bad design.

So we need a better one.

One idea is to eliminate competition.  This sounds plausible but it does not work. We are hard-wired to compete because it has proven to be a very effective long term survival strategy. The non-competitive have not survived.  To be deliberately non-competitive will guarantee mediocrity and future failure.

A better design is to leverage our competitive nature and this is surprisingly easy to do.

We flip the “battle” into a “race”.

To do that we need:

1) A clear destination – a shared common purpose – that can be measured. We need to be able to plot our progress using objective evidence.

2) A proven, safe, effective and efficient route plan to get us to our destination.

3) A required arrival time that is realistic.  Open-ended time-scales do not work.

4) Regular feedback to measure our individual progress and to compare ourselves with others.  Selective feedback is ineffective.  Secrecy or anonymous feedback is counter-productive at best and toxic at worst.

5) The ability to re-invest our savings on all three win-win-win dimensions: emotional, temporal and financial.  This fuels the engine of improvement. Us.

The rest just happens – but not by magic – it happens because this is a better Improvement-by-Design.

Find and Fill

Many barriers to improvement are invisible.

This is because they are caused by what is not present rather than what is.  They are gaps or omissions.

Some gaps are blindingly obvious.  This is because we expect to see something there, so we notice when it is missing. If a rope bridge across a chasm were missing we would notice the gap, because only the end posts would be visible.

Many gaps are not obvious. This is because we have no experience or expectation.  The gap is invisible.  We are blind to the omission.

These are the gaps that we accidentally stumble into. Such as a gap in our knowledge and understanding that we cannot see. These are the gaps that create the fear of failure. And the fear is especially real because the gap is invisible and we only know when it is too late.

It is like walking across an emotional minefield.  At any moment we could step on an ignorance mine and our confidence would be blasted into fragments.

So our natural and reasonable reaction is to stay outside the emotional minefield and inside our comfort zones – where we feel safe.  We give up trying to learn and trying to improve. Every-one hopes that Some-one or Any-one will do it for us.  No-one does.

The path to Improvement is always across an emotional minefield because improvement implies unlearning. So we need a better design than blundering about hoping not to fall into an invisible gap.  We need a safer design.

There are a number of options:

Option 1. Ask someone who knows the way across the minefield and can demonstrate it. Someone who knows where the mines are and knows how to avoid them. Someone to tell us where to step and where not to.

Option 2. Clear a new path and mark it clearly so others can trust that it is safe.  Remove the ignorance mines. Find and Fill the knowledge map.

Option 1 is quicker but it leaves the ignorance mines in place.  So sooner or later someone will step on one. Boom!

We need to be able to do Option 2.

The obvious  strategy for Option 2 is to clear the ignorance mines.  We could do this by deliberately blundering about setting off the mines. We could adopt the burn-and-scrape or learn-from-mistakes approach.

Or we could detect, defuse and remove them.

The former requires people willing to take emotional risks; the latter does not require such a sacrifice.

And “learn-by-mistakes” only works if people are able to make mistakes visibly so everyone can learn. In an adversarial, competitive, distrustful context this cannot happen – and the result is usually that the unwilling troops are forced into the minefield with the threat of a firing-squad if they refuse!

And where a mistake implies irreversible harm it is not acceptable to learn that way. Mistakes are covered up. The ignorance mines are re-set for the next hapless victim to step on. The emotional carnage continues. Any chance of sustained, system-wide improvement is blocked.

So in a low-trust cultural context the detect-defuse-and-remove strategy is the safer option.

And this requires a proactive approach to finding the gaps in understanding; a proactive approach to filling the knowledge holes; and a proactive approach to sharing what was learned.

Or we could ask someone who knows where the ignorance mines are and work our way through finding and filling our knowledge gaps. By that means any of us can build a safe, effective and efficient path to sustainable improvement.

And the person to ask is someone who can demonstrate a portfolio of improvement in practice – an experienced Improvement Science Practitioner.

And we can all learn to become an ISP and then guide others across their own emotional minefields.

All we need to do is take the first step on a well-trodden path to sustained improvement.

Fudge? We Love Fudge!

It is almost autumn again.  The new school year brings anticipation and excitement. The evenings are drawing in and there is a refreshing chill in the early morning air.

This is the time of year for fudge.

Alas not the yummy sweet sort that Grandma cooked up and gave out as treats.

In healthcare we are already preparing the Winter Fudge – the annual guessing game of attempting to survive the Winter Pressures. By fudging the issues.

This year with three landmark Safety and Quality reports under our belts we have more at stake than ever … yet we seem as ill prepared as usual. Mr Francis, Prof Keogh and Dr Berwick have collectively exhorted us to pull up our socks.

So let us explore how and why we resort to fudging the issues.

Watch the animation of a highly simplified emergency department and follow the thoughts of the manager. You can pause, rewind, and replay as much as you like.  Follow the apparently flawless logic – it is very compelling. The exercise is deliberately simplified to eliminate wriggle room. But it is valid because the behaviour is defined by the Laws of Physics – and they are not negotiable.

http://www.youtube.com/watch?v=geRBGP-u5zg&rel=0&loop=1&modestbranding=1

The problem was a combination of several planning flaws – two in particular.

First is the “Flaw of Averages” which is where the past performance-over-time is boiled down to one number. An average. And that is then used to predict precise future behaviour. This is a very big mistake.

The second is the “Flaw of Fudge Factors” which is an attempt to mitigate the effects of the first error by fudging the answer – by adding an arbitrary “safety margin”.

This pseudo-scientific sleight-of-hand may polish the planning rhetoric and render it more plausible to an unsuspecting Board – but it does not fool Reality.

In reality the flawed design failed – as the animation dramatically demonstrated.  The simulated patients came to harm. Unintended harm to be sure – but harm nevertheless.
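Both flaws can be demonstrated in a few lines of simulation. The sketch below is not the model behind the animation – it is a generic single-server queue with made-up numbers – but it obeys the same Laws of Physics: when variable demand meets capacity that is only matched “on average”, long waits are inevitable, and an arbitrary fudge factor merely shrinks them.

```python
import random

random.seed(42)  # reproducible illustration

def simulate(mean_arrival_gap, mean_service_time, n_patients):
    """Single-server queue with exponentially distributed arrival gaps
    and service times. Returns the average wait before treatment."""
    clock = 0.0           # arrival time of the current patient
    server_free_at = 0.0  # when the server next becomes free
    total_wait = 0.0
    for _ in range(n_patients):
        clock += random.expovariate(1.0 / mean_arrival_gap)
        start = max(clock, server_free_at)
        total_wait += start - clock
        server_free_at = start + random.expovariate(1.0 / mean_service_time)
    return total_wait / n_patients

# Flaw of Averages: arrivals every 10 minutes on average, treatment
# taking 9 minutes on average - the averages say "capacity exceeds
# demand, so no queue" ...
print(simulate(10, 9, 100_000))   # ... yet the average wait is long

# Flaw of Fudge Factors: add a 10% "safety margin" to capacity -
# the wait shrinks but does not disappear.
print(simulate(10, 8.1, 100_000))
```

With the averages “comfortably” matched, the simulated average wait runs to tens of minutes; the fudge factor roughly halves it but the queue remains. Only a design that accounts for variation, not just averages, removes it.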

So what is the alternative?

The alternative is to learn how to avoid Sir Flaw of Averages and his slippery friend Mr Fudge Factor.

And learning how to do that is possible … it is called Improvement Science.

And you can start right now … click HERE.

Taming the Wicked Bull and the OH Effect

“Take the bull by the horns” is a phrase that is often heard in Improvement circles.

The metaphor implies that the system – the bull – is an unpredictable, aggressive, wicked, wild animal with dangerous sharp horns.

“Unpredictable” and “Dangerous” are certainly what the newspapers tell us the NHS system is – and this generates fear.  Fear for our safety drives us to avoid the bad-tempered beast.

It creates fear in the hearts of the very people the NHS is there to serve – the public.  It is not the intended outcome.

“Bullish” is a phrase we use for “aggressive behaviour” and it is disappointing to see those accountable behave in a bullish manner – aggressive, unpredictable and dangerous.

We are taught that bulls are to be avoided and we are told not to wave red flags at them! For our own safety.

But that is exactly what must happen for Improvement to flourish.  We all need regular glimpses of the Red Flag of Reality.  It is called constructive feedback – but it still feels uncomfortable.  Our natural reaction to being shocked out of our complacency is to get angry and to swat the red flag waver.  And the more powerful we are, the sharper our horns, the more swatting we can do and the more fear we can generate.  Often intentionally.

So inexperienced improvement zealots are prodded into “taking the executive bull by the horns” – but it is poor advice.

Improvement Scientists are not bull-fighters. They are not fearless champions who put themselves at personal risk for personal glory and the entertainment of others.  That is what Rescuers do. The fire-fighters; the quick-fixers; the burned-toast-scrapers; the progress-chasers; and the self-appointed-experts. And they all get gored by an angry bull sooner or later.  Which is what the crowd came to see – Bull Fighter Blood and Guts!

So attempting to slay the wicked bullish system is not a realistic option.

What about taming it?

This is the game of Bucking Bronco.  You attach yourself to the bronco like glue and wear it down as it tries to throw you off and trample you under hoof. You need strength, agility, resilience and persistence. All admirable qualities. Eventually the exhausted beast gives in and does what it is told. It is now tamed. You have broken its spirit.  The stallion is no longer a passionate leader; it is just a passive follower. It has become a Victim.

Improvement requires spirit – lots of it.

Improvement requires the spirit-of-courage to challenge dogma and complacency.
Improvement requires the spirit-of-curiosity to seek out the unknown unknowns.
Improvement requires the spirit-of-bravery to take calculated risks.
Improvement requires the spirit-of-action to make  the changes needed to deliver the improvements.
Improvement requires the spirit-of-generosity to share new knowledge, understanding and wisdom.

So taming the wicked bull is not going to deliver sustained improvement.  It will only achieve stable mediocrity.

So what next?

What about asking someone who has actually done it – actually improved something?

Good idea! Who?

What about someone like Don Berwick – founder of the Institute of Healthcare Improvement in the USA?

Excellent idea! We will ask him to come and diagnose the disease in our system – the one that led to the Mid-Staffordshire septic safety carbuncle, and the nasty quality rash in 14 Trusts that Professor Sir Bruce Keogh KBE uncovered when he lifted the bed sheet.

[Click HERE to see Dr Bruce’s investigation].

We need a second opinion because the disease goes much deeper – and we need it from a credible, affable, independent, experienced expert. Like Dr Don B.

So Dr Don has popped over the pond,  examined the patient, formulated his diagnosis and delivered his prescription.

[Click HERE to read Dr Don’s prescription].

Of course if you ask two experts the same question you get two slightly different answers.  If you ask ten you get ten.  This is because if there was only one answer that everyone agreed on then there would be no problem, no confusion, and no need for experts. The experts know this of course. It is not in their interest to agree completely.

One bit of good news is that the reports are getting shorter.  Mr Robert’s report on the failings of one hospital is huge and has 209 recommendations.  A bit of a bucketful.  Dr Bruce’s report is specific to the Naughty Fourteen who have strayed outside the statistical white lines of acceptable mediocrity.

Dr Don’s is even shorter and it has just 10 recommendations. One for each finger – so easy to remember.

1. The NHS should continually and forever reduce patient harm by embracing wholeheartedly an ethic of learning.

2. All leaders concerned with NHS healthcare – political, regulatory, governance, executive, clinical and advocacy – should place quality of care in general, and patient safety in particular, at the top of their priorities for investment, inquiry, improvement, regular reporting, encouragement and support.

3. Patients and their carers should be present, powerful and involved at all levels of healthcare organisations from wards to the boards of Trusts.

4. Government, Health Education England and NHS England should assure that sufficient staff are available to meet the NHS’s needs now and in the future. Healthcare organisations should ensure that staff are present in appropriate numbers to provide safe care at all times and are well-supported.

5. Mastery of quality and patient safety sciences and practices should be part of initial preparation and lifelong education of all health care professionals, including managers and executives.

6. The NHS should become a learning organisation. Its leaders should create and support the capability for learning, and therefore change, at scale, within the NHS.

7. Transparency should be complete, timely and unequivocal. All data on quality and safety, whether assembled by government, organisations, or professional societies, should be shared in a timely fashion with all parties who want it, including, in accessible form, with the public.

8. All organisations should seek out the patient and carer voice as an essential asset in monitoring the safety and quality of care.

9. Supervisory and regulatory systems should be simple and clear. They should avoid diffusion of responsibility. They should be respectful of the goodwill and sound intention of the vast majority of staff. All incentives should point in the same direction.

10. We support responsive regulation of organisations, with a hierarchy of responses. Recourse to criminal sanctions should be extremely rare, and should function primarily as a deterrent to wilful or reckless neglect or mistreatment.

The meat in the sandwich is recommendations 5 and 6, which together say “Learn Improvement Science”.

And what happens when we commit and engage in that learning journey?

Steve Peak has described what happens in this very blog. It is called the OH effect.

OH stands for “Obvious-in-Hindsight”.

Obvious means “understandable” which implies visible, sensible, rational, doable and teachable.

Hindsight means “reflection” which implies having done something and learning from reality.

So if you would like to have a sip of Dr Don’s medicine and want to get started on the path to helping to create a healthier healthcare system you can do so right now by learning how to FISH – the first step to becoming an Improvement Science Practitioner.

The good news is that this medicine is neither dangerous nor nasty tasting – it is actually fun!

And that means it is OK for everyone – clinicians, managers, patients, carers and politicians.  All of us.

 

The Five-and-Two Improvement Plan

One of the reasons that many people find improvement difficult is because they are told that they will undergo a “transformational” change and they will have a “Road-To-Damascus Moment” when the “penny drops” and the “light bulb goes on”.

This is rubbish advice.

The unstated implication is that “and if you do not then there is something wrong with you”.

There is no Improvementologist I know who ever had a massive “ah ha” moment – the insight was gained gradually, bit-by-bit, over a long period of time.

And that is for a good reason.

We are all very weak-willed.

We all very easily slip back into Victim role, and I’m Not OK or They’re not OK thinking.  Especially when bad news is so plentiful and so cheap.

The “Eureka Mantra” does not work when trying to improve physical health by losing weight, so why should it work for anything else?  Diets do not work – if they did we would all be a healthy weight.

A few months ago I ran an experiment – to see if I could lose a significant amount of weight without much effort – certainly without doing any extra exercise.  How?  By “not burning the toast” in the first place. By ingesting fewer carbs.

That experiment has shown it is possible – I have the evidence – hard facts not just fuzzy feelings.

The most surprising lesson was that all I had to do was to reduce carb intake for two days a week. I just skipped the sugar, biscuits, bread, potatoes, crisps etc for two days a week. It was not difficult. In fact it was so easy I am not surprised that the Five-and-Two weight reduction plan is going viral.

So I wonder what would happen if we try the same experiment for other areas of improvement – psychological.  What if we just change the “diet” from “carbs” to “cants”.  What if for two days a week we just restrict our “cant” intake.  What if we turn down the volume of our inner voice that tells us what we cant do?  What if we just ignore the people whose response to every improvement suggestion is “yes …but”?  What if we just do this and measure what happens?

For only two days a week though.

I’m not interested in being suddenly transformed – a gradual metamorphosis is OK by me.  My intuition is that it will be important to maintain a normal diet of whining and denying for the other five days – because I need variation and I do seem to get pleasure from wallowing in my own toxic emotional swamp.

That sounds doable.

I could probably maintain a “negative thought filter” for two days a week – and then return to my curmudgeonly comfort zone for the other five.

I’ll need to choose which days wisely though …  and I had better wear a special hat, tie or badge that indicates which mode I am in – a pessimistic Black Hat five days a week and an optimistic Yellow Hat for the other two perhaps.

I wonder if anyone will notice?

And the idea of choosing your attitude for a day reminds me of a little book called FISH!

The Learning Labyrinth


The mind is a labyrinth of knowledge – a maze with many twists, turns, joins, splits, tunnels, bridges, crevasses and caverns.

Some paths lead to dead ends; others take a long way around but get to the destination in the end.

The shortest path is not obvious – even in hindsight.

And there is another challenge … no two individuals share the same knowledge labyrinth.  An obvious path between problem and solution for one person may be  invisible or incomprehensible to another.

But the greatest challenge, and the greatest opportunity, is that our labyrinth of knowledge can change and does change continuously … through learning.

So if one person can see a path of improvement between current problem and future solution, then how can they guide another who cannot?

This is a challenge that an Improvement Scientist faces every day.

It is not effective to just give a list of instructions – “To get from problem to solution follow this path”.  The path may not exist in the recipient’s knowledge labyrinth. If they just follow the instructions they will come up against a wall or fall into a hole.

It is not realistic to expect the learner to replace their labyrinth of knowledge with that of the teacher – to clone the teacher’s way of thinking. Just reciting the Words of the Guru is not improvement – it is Zealotry.

One way is for a guide to describe their own labyrinth of knowledge.  To lay it out in a way that any other can explore.  A way that is fully signposted, with explanations and maps that the explorer can refer to as they go.  A template against which they can compare their own knowledge labyrinth to reveal the similarities and the differences.

No two people will explore a knowledge labyrinth in the same way … but that does not matter. So long as they are able to uncover any assumptions that misguide them and any gaps in their knowledge that block their progress.  With that feedback they can update their own mental signposts and create safe, effective and efficient paths that they can follow in future at will.

And that  is how the online FISH training is designed.  It is the knowledge labyrinth of an experienced Improvement Scientist that can be explored online.

And it keeps changing  …

The Grape Vine

Improvement Science is a collaborative community activity.

And word about what is possible spreads through The Grape Vine.

And it spreads in a particular way – through stories – personal accounts of “ah ha” moments.

Those “ah ha” moments are generated by a process – a process designed to generate them.

And that process is called the Nerve Curve.  It is rather like an emotional roller-coaster ride.

The Nerve Curve starts comfortably enough with a few gentle ups, downs, twists and turns – just to settle everyone in their seats.

Then it picks up pace and you have to hold on a bit tighter.

Then comes the Challenge – an interactive group-led improvement activity.  Something like the “Save the NHS Game”.

Then comes the Shock!  When the “intuitively obvious” and “collectively agreed” decisions and actions make the problem worse rather than better. The shock is magnified by learning that there is a solution – and that it was hidden from us. We did not know what we did not know. We were blissfully ignorant.

Now we are not. We are painfully aware of what we did not know.

Then we head for Denial like a scared rabbit – but the cars are moving fast now and there is no stopping or going back.  We cannot get off – we cannot go back – so we cover our eyes and ears to block out the New Reality.

It does not work very well.  We quickly realize that it is safer to be able to see where we are heading so we can prepare for what is coming.  An emotional brick wall looms up in front of us – and written on it are the words “Impossibility Hypothesis”.  And we are heading right at it. A new emotion bubbles to the surface.

Anger.

Who’s ****** idea was it to get on this infernal contraption?  Why weren’t we warned?  Who is in charge? Who is to blame?

That does not work very well either. So we try a different strategy.

Bargaining.

We desperately want to limit the damage to our comfort zone and confidence so we try negotiating a compromise, finding an exit option, and looking for the emergency stop cord.  There isn’t one. Reality is relentless and ruthless. Uncompromising.

Now we are really scared and with no viable options for staying where we were and no credible options for avoiding a catastrophe we are emotionally stuck – and we start to sink into Depression which is the path to Hopelessness, Apathy and Despair (HAD). We have run out of options. And we cannot stay in the past.

But the seed of innovation has been sown.  A hidden problem has been uncovered and an unknown option has been demonstrated. The “Way Over The Impossible-for-Me barrier” is clearly signposted. The light at the end of the tunnel has been switched on. We have a choice.

And at the last second we sweep over the Can’t Do Barrier and when we look back it has disappeared – it was a mirage – a perceptual trick our Intuition was playing on us. It only existed in our minds.

That is the “Ah ha”.

And now we can see a way forward – and how with support, guidance, encouragement and effort we can climb up Acceptance Mountain to Resolution Peak. It will not be quick.  It will not be comfortable.  We have some unlearning to do. A few old assumptions and habits that need to be challenged, dismantled and re-designed.

It is hard work but it is surprisingly invigorating as a previously unrecognized inner well of hope, enthusiasm and confidence is tapped. We surprise ourselves with what we can do already.  We realize that the only thing that was actually blocking us before was our belief it was too difficult. And the lack of a guide.

And then we share our “ah ha” with others through The Grape Vine.

Here is a shared “ah ha” from this week:

The Post It® Note exercise was my biggest “Aha” moment on a combination of levels. The aspect that particularly resonated was the range of behaviours and responses from the different pairings, an aspect that would have been hidden had I done the exercise on my own. I’m still smiling at the simple elegance of this particular exercise and the depth of learning I am getting from it. [PD, Consultant Paediatrician. 15th July 2013].

The Post It® Note exercise is part of the FISH course … you can try it yourself here

This blog is part of The Grape Vine.

The Nerve Curve is ready and waiting to take you on an exciting ride through Improvement Science!

The Art of Juggling

Improvement Science is like three-ball juggling.

And there are different sets of three things that an Improvementologist needs to juggle:

the Quality-Flow-Cost set and
the Governance-Operations-Finance set and
the Customer-Staff-Organization set.

But the problem with juggling is that it looks very difficult to do – so almost impossible to learn – so we do not try.  We give up before we start. And if we are foolhardy enough to try (by teaching ourselves using the suck-it-and-see or trial-and-error method) then we drop all the balls very quickly. We succeed in reinforcing our impossible-for-me belief with evidence.  It is a self-fulfilling prophecy. Only the most tenacious, self-motivated and confident people succeed – which further reinforces the I-Can’t-Do belief of everyone else.

The problem here is that we are making an Error of Omission.

We are omitting to ask ourselves two basic questions “How does a juggler learn their art?” and “How long does it take?

The answer is surprising.

It is possible for just about anyone to learn to juggle in about 10 minutes. Yes – TEN MINUTES.


Skeptical?  Sure you are – if it was that easy we would all be jugglers.  That is the “I Can’t Do” belief talking. Let us silence that confidence-sapping voice once and for all.

Here is how …

You do need to have at least one working arm and one working eyeball and something connecting them … and it is a bit easier with two working arms and two working eyeballs and something connecting them.

And you need something to juggle – fruit is quite good – oranges and apples are about the right size, shape, weight and consistency (and you can eat the evidence later too).

And you need something else.

You need someone to teach you.

And that someone must be able to juggle and more importantly they must be able to teach someone else how to juggle which is a completely different skill.

[Photo: juggling at Keele, June 2013]

Those are the necessary-and-sufficient requirements to learn to juggle in 10 minutes.

The recent picture shows an apprentice Improvement Scientist at the “two orange” stage – just about ready to move to the “three orange” stage.

Exactly the same is true of learning the Improvement Science juggling trick.

The ability to improve Quality, Flow and Cost at the same time.

The ability to align Governance, Operations and Finance into a win-win-win synergistic system.

The ability to delight customers, motivate staff and support leaders at the same time.


And the trick to learning to juggle is called step-by-step unlearning. It is counter-intuitive.

To learn to juggle you just “unlearn” what is stopping you from juggling. You unlearn the unconscious assumptions and habits that are getting in the way.

And that is why you need a teacher who knows what needs to be unlearned and how to help you do it.

And for an apprentice Improvement Scientist the first step on the Unlearning Journey is FISH.

Step 6 – Maintain

Anyone with much experience of  change will testify that one of the hardest parts is sustaining the hard won improvement.

The typical story is all too familiar – a big push for improvement, a dramatic improvement, congratulations and presentations then six months later it is back where it was before but worse. The cynics are feeding on the corpse of the dead change effort.

The cause of this recurrent nightmare is a simple error of omission.

Failure to complete the change sequence. Missing out the last and most important step. Step 6 – Maintain.

Regular readers may remember the story of the pharmacy project – where a sceptical department were surprised and delighted to discover that zero-cost improvement was achievable and that a win-win-win outcome was not an impossible dream.

Enough time has now passed to ask the question: “Was the improvement sustained?”

[Chart: TTO yield, Nov 2012 – Jun 2013]

The BaseLine© chart above shows their daily performance data on their 2-hour turnaround target for to-take-out prescriptions (TTOs). The weekends are excluded because the weekend system is different from the weekday system. The first split in the data in Jan 2013 is when the improvement-by-design change was made. Step 4 in the 6M Design® sequence – Modify.

There was an immediate and dramatic improvement in performance that was sustained for about six weeks – then it started to drift back. Bit by Bit.  The time-series chart flags it clearly.
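The bit-by-bit drift that the time-series chart flags can be detected with simple arithmetic. Here is a minimal sketch of an XmR-style (individuals chart) check – this is not the BaseLine© tool itself, and the daily figures are made-up for illustration: compute the natural process limits from a stable baseline period, then flag any later points that fall outside them.

```python
def xmr_limits(baseline):
    """Individuals (XmR) chart: natural process limits are the mean
    plus/minus 2.66 times the average moving range."""
    mean = sum(baseline) / len(baseline)
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

def flag_drift(series, lower, upper):
    """Return the indices of points outside the natural process limits."""
    return [i for i, x in enumerate(series) if x < lower or x > upper]

# Illustrative (made-up) daily % of TTOs turned around within 2 hours:
stable = [92, 94, 91, 95, 93, 92, 96, 94, 93, 95]   # post-change period
later  = [94, 92, 90, 85, 83, 78, 75]               # drifting back

lo, hi = xmr_limits(stable)
print(flag_drift(later, lo, hi))  # -> [3, 4, 5, 6]
```

The last four days breach the lower natural process limit – an unambiguous signal that the system has changed and that Step 6 – Maintain is not in place. A glance at the averages alone would miss it.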


So what happened next?

The 12-week review happened next – and it was done by the change leader – in this case the Inspector/Designer/Educator.  The review data plotted as a time-series chart revealed instability and that justified an investigation of the root cause – which was that the final and critical step had not been completed as recommended. The inner feedback loop was missing. Step 6 – Maintain was not in place.

The outer feedback loop had not been omitted. That was the responsibility of the experienced change leader.

And the effect of closing the outer-loop is clearly shown by the third segment – a restoration of stability and improved capability. The system is again delivering the improvement it was designed to deliver.


What does this lesson teach us?

The message here is that the sponsors of improvement have essential parts to play in the initiation and the maintenance of change and improvement. If they fail in their responsibility then the outcome is inevitable and predictable. Mediocrity and cynicism.

Part 1: Setting the clarity and constancy of common purpose.

Without a clear purpose then alignment, focus and effectiveness are thwarted.  Purpose that changes frequently is not a purpose – it is reactive knee-jerk politics.  Constancy of purpose is required because improvement takes time to achieve and to embed.  There is always a lag so moving the target while the arrow is in flight is both dangerous and leads to disengagement.  Establishing common ground is essential to avoiding the time-wasting discussion and negotiation that is inevitable when opinions differ – which they always do.

Part 2: Respectful challenge.

Effective change leadership requires an ability to challenge from a position of mutual respect.  Telling people what to do is not leadership – it is dictatorship.  Dodging the difficult conversations and passing the buck to others is not leadership – it is ineffective delegation. Asking people what they want to do is not leadership – it is abdication of responsibility.  People need their leaders to challenge them and to respect them at the same time.  It is not a contradiction.  It is possible to do both.

And one way that a leader of change can challenge with respect is to expose the need for change; to create the context for change; and then to commit to holding those charged with change to account – including themselves.  And to make it clear at the start what their expectation is as a leader – and what the consequences of disappointment are.

It is a delight to see individuals,  teams, departments and organisations blossom and grow when the context of change is conducive.  And it is disappointing to see them wither and shrink when the context of change is laced with cynicide – the toxic product of cynicism.


So what is the next step?

What could an aspirant change leader do to get this for themselves and their organisations?

One option is to become a Student of Improvementology® – and they can do that here.

Spreading the Word

Patience is a virtue for an advocate of Improvementology®.

This week Mike Davidge (Head of Measurement for the former NHS Institute for Innovation and Improvement) posted some feedback on the Journal of Improvement Science site.

His feedback is reproduced here in full with Mike’s permission. The rationale for reproducing it is that the activity data show that more people read the Blog than the Journal.

Feedback posted on 15/06/2013 at 07:35:05 for paper entitled:

Dodds S. A Case Study of a Successful One-Stop Clinic Schedule Design using Formal Methods. Journal of Improvement Science 2012:6; 1-13.

“It’s only taken me a year to get round to reading this, an improvement on your 9 years to write it! It was well worth the read. You should make a serious attempt to publish this where it gets a wider audience. Rank = 5/5”

Mike is a world expert in healthcare system measurement and improvement so this is a huge compliment. Thank you Mike. He is right too – 1 year is a big improvement on 9 years. So why did it take 9 years to write up?

One reason is that publication was not the purpose. Improvement was the purpose. Another reason was that this was a step in a bigger improvement project – one that is described in Three Wins.  There is a third reason: the design flaws of the traditional academic peer review process. This is radical stuff and upsets a lot of people so we need to be careful.

The two primary design flaws of conventional peer-reviewed academic journals are:

1) that it has a long lead time and
2) that it has a low yield.

So it is very expensive in author-lifetime.  Improvement is not the same as research.  Perfection is not the goal. Author lifetime is a very valuable resource. If it is wasted with an inefficient publication process design then the result is less output and less dissemination of valuable Improvement Science.

So if any visitors would like to benefit from Mike’s recommendation then you can download the full text of the essay here. It has not been peer-reviewed so you will have to make your own minds up about the value. And if you have any questions then you are free to ask the author.

PS. The visitor who points out the most spelling and grammar errors will earn themselves a copy of BaseLine© the time-series analysis software used to create the charts.

Resistance and Persistence

[Bing-Bong]

The email from Leslie was unexpected.

“Hi Bob, can I change the planned topic of our session today to talk about resistance? We got off to a great start with our improvement project but now I am hitting brick walls and we are losing momentum. I am getting scared we will stall. Leslie”

Bob replied immediately – it was only a few minutes until their regular teleconference call.

“Hi Leslie, no problem. Just firing up the Webex session now. Bob”

[Whoop-Whoop]

The sound bite announced Leslie joining in the teleconference.

<Leslie> Hi Bob. Sorry about the last minute change of plan. Can I describe the scenario?

<Bob> Hi Leslie. Please do.

<Leslie> Well we are at stage 5 of the 6M Design® sequence and we are monitoring the effect of the first set of design changes that we have made. We started by eliminating design flaws that were generating errors and impairing quality. The information coming in confirms what we predicted at stage 3. The problem is that a bunch of “fence-sitters” who said nothing at the start are now saying that the data is a load of rubbish and implying we are cooking the books to make it look better than it is! I am pulling my hair out trying to convince them that it is working.

<Bob> OK. What is your measure for improvement?

<Leslie> The percentage yield from the new quality-by-design process. It is improving. The BaseLine© chart says so.

<Bob> And how is that improvement being reported?

<Leslie> As the average yield per week.  I know we should not aggregate for a month because we need to see the impact of the change as it happens and I know there is a seven-day cycle in the system so we set the report interval at one week.

<Bob> Yes. Those are all valid reasons. What is the essence of the argument against your data?

<Leslie> There is no specific argument – it is just being discounted as “rubbish”.

<Bob> So you are feeling resistance?

<Leslie> You betcha!

<Bob> OK. Let us take a different tack on this. How often do you measure the yield?

<Leslie> Daily.

<Bob> And what is the reason you are using the percentage yield as your metric?

<Leslie> So we can compare one day with the next more easily and plot it on a time-series chart. The denominator is different every day so we cannot use just the count of errors.

<Bob> OK. And how do you calculate the weekly average?

<Leslie> From the daily percentage yields. It is not a difficult calculation!

There was a definite hint of irritation and sarcasm in Leslie’s voice.

<Bob> And how confident are you in your answer?

<Leslie> Completely confident. The team are fantastic. They see the value of this and are collecting the data assiduously. They can feel the improvement. They do not need the data to prove it. The feedback is to convince the fence-sitters and skeptics and they are discounting it.

<Bob> OK so you are confident in the quality of the data going in to your calculation – how confident are you in the data coming out?

<Leslie> What do you mean?! It is a simple calculation – a 12-year-old could do it!

<Bob> How are you feeling Leslie?

<Leslie>Irritated!

<Bob> Does it feel as if I am resisting too?

<Leslie>Yes!!

<Bob> Irritation is anger – the sense of loss in the present. What do you feel you are losing?

<Leslie> My patience and my self-confidence.

<Bob> So what might be my reasons for resisting?

<Leslie> You could be playing games or you could have a good reason.

<Bob> Do I play games?

<Leslie> Not so far! Sorry … no. You do not do that.

<Bob> So what could be my good reason?

<Leslie> Um. You can feel or see something that I cannot. An error?

<Bob> Yes. If I just feel something is not right I cannot do much else but say “That does not feel right”.  If I can see what is not right I can explain my rationale for resisting.  Can I try to illuminate?

<Leslie> Yes please!

<Bob> OK – have you got a spreadsheet handy?

<Leslie> Yes.

<Bob> OK – create a column of twenty random numbers in the range 20-80 and label them “daily successes”. Next to them create a second column of random numbers in the range 20-100 and label them “daily activity”.

<Leslie> OK – done that.

<Bob> OK – calculate the % yield by day then the average of the column of daily % yield.

<Leslie> OK – that is exactly how I do it.

<Bob> OK – now sum the columns of successes and activities and calculate the average % yield from those two totals.

<Leslie> Yes – I could do that and it will give the same final answer but I do not do that because I cannot use that data on my run chart – for the reasons I said before.

<Bob> Does it give the same answer?

<Leslie> Um – no. Wait. I must have made an error. Let me check. No. I have done it correctly. They are not the same. Uh?

<Bob> What are you feeling?

<Leslie> Confused!  But the evidence is right there in front of me.

<Bob> An assumption you have been making has just been exposed to be invalid. Your rhetoric does not match reality.

<Leslie> But everyone does this … it is standard practice.

<Bob> And that makes it valid?

<Leslie> No .. of course not. That is one of the fundamental principles of Improvement Science. Just doing what everyone else does is not necessarily correct.

<Bob> So now we must understand what is happening. Can you now change the Daily Activity column so it is the same every day – say 60.

<Leslie> OK. Now my method works. The yield answers are the same.

<Bob> Yes.

<Leslie> Why is that?

<Bob> The story goes back to 1948 when Claude Shannon described “Information Theory”.  When you create a ratio you start with two numbers and end up with only one which implies that information is lost in the conversion.  Two numbers can only give one ratio, but that same ratio can be created by an infinite set of two numbers.  The relationship is asymmetric. It is not an equality. And it has nothing to do with the precision of the data. When we throw data away we create ambiguity.

<Leslie> And in my data the activity by day does vary. There is a regular weekly cycle and some random noise. So the way I am calculating the average yield is incorrect, and the message I am sharing is distorted, so others can quite reasonably challenge the data, and because I was 100% confident I was correct I have been assuming that their resistance was just due to cussedness!

<Bob> There may be some cussedness too. It is sometimes difficult to separate skepticism and cynicism.

<Leslie> So what is the lesson here? There must be more to your example than just exposing a basic arithmetic error.

<Bob> The message is that when you feel resistance you must accept the possibility that you are making an error that you cannot see.  The person demonstrating resistance can feel the emotional pain of a rhetoric-reality mismatch but can not explain the cause. You need to strive to see the problem through their eyes. It is OK to say “With respect I do not see it that way because …”.

<Leslie> So feeling “resistance” signals an opportunity for learning?

<Bob> Yes. Always.

<Leslie> So the better response is to pull back and to check assumptions, rather than push forward and make the resistance greater – or, worse still, break through the barrier of resistance, celebrate the victory, commit an inevitable and avoidable blunder, and then add insult to injury by blaming someone else – creating even more cynicism in the future.

<Bob> Yes. Well put.

<Leslie> Wow!  And that is why patience and persistence are necessary.  Not persistently pushing but persistently searching for the unconscious assumptions that underpin resistance; consistently using Reality as the arbiter;  and having enough patience to let Reality tell its own story.

<Bob> Yes. And having the patience and persistence to keep learning from our confusion and to keep learning how to explain what we have discovered better and better.

<Leslie> Thanks Bob. Once again you have  opened a new door for me.

<Bob> A door that was always there and yet hidden from view until it was illuminated with an example.
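The spreadsheet experiment in the dialogue can be reproduced in a few lines of code – a minimal sketch in Python rather than a spreadsheet (the constraint that each day's activity is at least that day's successes is an added assumption, just to keep the yields at or below 100%):

```python
import random

# Reproduce the experiment: twenty days of "successes" and "activity".
random.seed(1)
successes = [random.randint(20, 80) for _ in range(20)]
activity = [random.randint(s, 100) for s in successes]

# Leslie's method: average the daily percentage yields.
daily_yields = [100 * s / a for s, a in zip(successes, activity)]
mean_of_ratios = sum(daily_yields) / len(daily_yields)

# Bob's method: one ratio calculated from the two totals.
ratio_of_sums = 100 * sum(successes) / sum(activity)

# The two answers differ because information is lost when each day's
# pair of numbers is collapsed into a single ratio before averaging.
print(round(mean_of_ratios, 1), round(ratio_of_sums, 1))

# With a constant daily activity (the second experiment) they agree.
constant = [60] * len(successes)
mean_constant = sum(100 * s / 60 for s in successes) / len(successes)
ratio_constant = 100 * sum(successes) / sum(constant)
assert abs(mean_constant - ratio_constant) < 1e-9
```

The two printed numbers disagree whenever the denominator varies from day to day – which is exactly why the weekly “average of averages” distorted Leslie's message.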

Closing the Two Loops

Over the past few weeks I have been conducting an Improvement Science Experiment (ISE).  I do that a lot.  This one is a health improvement experiment. I do that a lot too.  Specifically – improving my own health. Ah! Not so diligent with that one.

The domain of health that I am focusing on is weight – for several reasons:
(1) because a stable weight that is within “healthy” limits is a good idea for many reasons and
(2) because weight is very easy to measure objectively and accurately.

But like most people I have constraints: motivation constraints, time constraints and money constraints.  What I need is a weight reduction design that requires no motivation, no time, and no money.  That sounds like a tough design challenge – so some consideration is needed.

Design starts with a specific purpose and a way of monitoring progress.  And I have a purpose – weight within acceptable limits; a method for monitoring progress – a dusty set of digital scales. What I need is a design for delivering the improvement and a method for maintaining it. That is the challenge.

So I need a tested design that will deliver the purpose.  I could invent something here but it is usually quicker to learn from others who have done it, or something very similar.  And there is lots of knowledge and experience out there.  And they fall into two broad schools – Eat Healthier or Exercise More and usually Both.

Eat Healthier is sold as  Eat Less of the Yummy Bad Stuff and more of the Yukky Good Stuff. It sounds like a Puritanical Policy and is not very motivating. So with zero motivation as  a constraint this is a problem.  And Yukky Good Stuff seems to come with a high price tag. So with zero budget as a constraint this is a problem too.

Exercise More is sold as Get off Your Bottom and Go for a Walk. It sounds like a Macho Man Mantra. Not very motivating either. It takes time to build up a “healthy” sweat and I have no desire to expose myself as a health-desperado by jogging around my locality in my moth-eaten track suit.  So with zero time as a constraint this is a problem. Gym subscriptions and the necessary hi-tech designer garb do not come cheap.  So with a zero budget constraint this is another problem.

So far all the conventional wisdom is failing to meet any of my design constraints. On all dimensions.

Oh dear!

The rhetoric is not working.  That packet of Chocolate Hob Nobs is calling to me from the cupboard. And I know I will feel better if I put them out of their misery. Just one will not do any harm. Yum Yum.  Arrrgh!!!  The Guilt. The Guilt.

OK – get a grip – time for Improvement Scientist to step in – we need some Science.

[Improvement Science hat on]

The physics and physiology are easy on this one:

(a) What we eat provides us with energy to do necessary stuff (keep warm, move about, think, etc). Food energy  is measured in “Cals”; work energy is measured in “Ergs”.
(b) If we eat more Cals than we burn as Ergs then the difference is stored for later – ultimately as blubber (=fat).
(c) There are four contributors to our weight: dry (bones and stuff), lean (muscles and glands of various sorts), fluid (blood, wee etc), and blubber (fat).
(d) The sum of the dry, lean, and fluids should be constant – we need them – we do not store energy there.
(e) The fat component varies. It is stored energy. Work-in-progress so to speak.
(f) One kilogram of blubber is equivalent to about 9000 Cals.
(g) An adult of average weight, composition, and activity uses between 2000 and 2500 Cals per day – just to stay at a stable weight.

These facts are all we need to build an energy flow model.

Food Cals = Energy In.
Work Ergs = Energy Out.
Difference between Energy In and Energy Out is converted to-and-from blubber at a rate of 1 gram per 9 Cal.
Some of our weight is the accumulated blubber – the accumulated difference between Cals-In and Ergs-Out
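The energy-flow model above can be sketched in a few lines of code. This is an illustrative stock-and-flow toy, not a clinical tool; the constants come from the text (about 9000 Cals per kg of blubber, and a maintenance burn of 2000-2500 Cals per day – the midpoint 2250 is used here):

```python
# Constants from the text.
CALS_PER_KG_BLUBBER = 9000
MAINTENANCE_CALS_PER_DAY = 2250

def simulate_weight(start_kg, daily_intake_cals, days):
    """Accumulate the daily difference between Cals-in and Ergs-out as blubber."""
    weight = start_kg
    for _ in range(days):
        surplus = daily_intake_cals - MAINTENANCE_CALS_PER_DAY
        weight += surplus / CALS_PER_KG_BLUBBER  # 1 gram per 9 Cals
    return weight

# Example: eating 1000 Cals/day for six weeks from a hypothetical 81 kg start.
print(round(simulate_weight(81.0, 1000, 42), 1))  # -> 75.2
```

The model is deliberately crude – a single stock (blubber) and one flow (the daily energy surplus or deficit) – but it is enough to predict the trajectory on a time-series chart.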

The Laws Of Physics are 100% Absolute and 0% Negotiable. The Behaviours of People are 100% Relative and 100% Negotiable.  Weight loss is more about behaviour. Habits. Lifestyle.

Bit more Science needed now:

Which foods have the Cals?

(1) Fat (9 Cal per gram)
(2) Carbs (4 Cal per gram)
(3) Protein (4 Cal per gram)
(4) Water, Vitamins, Minerals, Fibre, Air, Sunshine, Fags, Motivation (0 Cal per gram).

So how much of each do we get from the stuff we nosh?

It is easy enough to work out – but it is very tedious to do so.  This is how calorie counting weight loss diets work. You weigh everything that goes in, look up the Cal conversions per gram in a big book, do some maths and come up with a number.  That takes lots of time. Then you convert to points and engage in a pseudo-accounting game where you save points up and cash them in as an occasional cream cake.  Time is a constraint and Saving-the-Yummies-for-Later is not changing a habit – it is feeding it!

So it is just easier for me to know what a big bowl of tortilla chips translates to as Cals. Then I can make an informed choice. But I do not know that.

Why not?

Because I never invested time in learning.  Like everyone else I gossip, I guess, and I generalise.  I say “Yummy stuff is bad because it is Hi-Cal; Yukky stuff is good because it is Lo-Cal“.  And from this generalisation I conclude “Cutting Cals feels bad“. Which is a problem because my motivation is already rock bottom.  So I do nothing,  and my weight stays the same, and I still feel bad.

The Get-Thin-Quick industry knows this … so they use Shock Tactics to motivate us.  They scare us with stories of fat young people having heart attacks and dying wracked with regret. Those they leave behind are the real victims. The industry bludgeons us into fearful submission and into coughing up cash for their Get Thin Quick Panaceas.  Their real goal is the repeat work – the loyal customers. And using scare-mongering and a few whale-to-waif conversions from rabble-rousing zealots they cook up the ideal design to achieve that.  They know that, for most of us, as soon as the fear subsides, the will weakens, the chips are down (the neck), the blubber builds, and we are back with our heads hung low and our wallets open.

I have no motivation – that is a constraint.  So flogging an over-weight and under-motivated middle-aged curmudgeon will only get a more over-weight, ego-bruised-and-depressed, middle-aged cynic. I may even seek solace in the Chocolate Hob Nob jar.

Nah! I need a better design.

[Improvement Scientist hat back on]

First Rule of Improvement – Check the Assumptions.

Assumption 1:
Yummy => Hi-Cal => Bad for Health
Yukky => Lo-Cal => Good for Health

It turns out this is a gross over-simplification.  Lots of Yummy things are Lo-Cal; lots of Yukky things are Hi-Cal. Yummy and Yukky are subjective. Cals are not.

OK – that knowledge is really useful because if I know which-is-which then I can make wiser decisions. I can do swaps so that the Yummy Score goes higher and the Cals Score goes lower.  That sounds more like it! My Motiv-o-Meter twitches.

Assumption 2:
Hi-Cal => Cheap => Good for Wealth
Lo-Cal => Expensive => Bad for Wealth

This is a gross over-simplification too. Lots of Expensive things are Hi-Cal; lots of Cheap things are Lo-Cal.

OK so what about the combination?

Bingo!  There are lots of Yummy+Cheap+Lo-Cal things out there !  So my process is to swap the Lose-Lose-Lose for the Win-Win-Win. I feel a motivation surge. The needle on my Motiv-o-Meter definitely moved this time.

But how much? And for how long? And how will I know if it is working?

[Improvement Science hat back on]

Second Rule of Improvement Science – Work from the Purpose

We need an output  specification.  What weight reduction in what time-scale?

OK – I work out my target weight – using something called the BMI (body mass index) which uses my height and a recommended healthy BMI range to give a target weight range. I plump for 75 kg – not just “10% reduction” – I need an absolute goal. (PS. The BMI chart I used is at the end of the blog).
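How a BMI chart converts a height into a target weight range can be shown in two lines. BMI is weight in kg divided by height in metres squared; the 1.78 m height and the 18.5-25.0 “healthy” range below are illustrative assumptions, not the author's figures:

```python
def target_weight_range(height_m, bmi_lo=18.5, bmi_hi=25.0):
    """Healthy weight range (kg) from BMI = weight_kg / height_m ** 2."""
    return bmi_lo * height_m ** 2, bmi_hi * height_m ** 2

lo, hi = target_weight_range(1.78)
print(round(lo, 1), round(hi, 1))  # -> 58.6 79.2
```

For that illustrative height a 75 kg target sits comfortably inside the healthy range – an absolute goal rather than a relative one.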

OK – now I need a time-scale – and I know that motivation theory shows that if significant improvement is not seen within 15 repetitions of a behaviour change then it does not stick. It will not become a new habit. I need immediate feedback. I need to see a significant weight reduction within two weeks. I need a quick win to avoid eroding my fragile motivation.  And so long as I get that I will keep going. And how long to get to target weight?  One or two lunar cycles feels about right. Let us compromise on six weeks.

And what is a “significant improvement”?

Ah ha! Now I am on familiar ground – I have a tool for answering that question – a system behaviour chart (SBC).  I need to measure my weight and plot it on a time-series chart using BaseLine.  And I know that I need 9 points to show a significant shift, and I know I must not introduce variation into my measurements. So I do four things – I ensure my scales have high enough precision (+/- 0.1 kg); I do the weighing under standard conditions (same time of day and same state of dress);  I weigh myself every day or every other day; and I plot-the-dots.
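The “9 points” signal mentioned above is one of the standard run rules for time-series charts: a run of nine consecutive points on the same side of the centre line flags a significant shift. A minimal sketch (the function name and the choice of centre line are illustrative, not BaseLine©'s internals):

```python
def shift_detected(values, centre, run_length=9):
    """True if run_length consecutive points fall on the same side of centre."""
    side, run = 0, 0
    for v in values:
        s = (v > centre) - (v < centre)  # +1 above, -1 below, 0 on the line
        if s != 0 and s == side:
            run += 1
        else:
            side, run = s, (1 if s != 0 else 0)  # a point on the line resets the run
        if run >= run_length:
            return True
    return False

# A stable baseline wobbling around 81 kg, then a sustained downward shift.
baseline = [81.2, 80.9, 81.0, 81.4, 80.8, 81.1, 80.7, 81.3, 81.0, 80.9]
after = [80.4, 80.2, 80.1, 79.9, 79.8, 79.5, 79.6, 79.3, 79.1]
print(shift_detected(baseline + after, centre=81.0))  # -> True
```

The same rule explains the measurement protocol: low-variation, high-precision daily measurements make a genuine shift show up as a clean run rather than being drowned in noise.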

OK – how am I doing on my design checklist?
1. Purpose – check
2. Process – check
3. Progress – check

Anything missing?

Yes – I need to measure the energy input – the Cals per day going in – but I need an easy, quick and low-cost way of doing it.

Time for some brainstorming. What about an App? That fancy new smartphone can earn its living for a change. Yup – lots of free ones for tracking Cals.  Choose one. Works OK. Another flick on the Motiv-o-Meter needle.

OK – next bit of the jigsaw. What is my internal process metric (IPM)?  How many fewer Cals per day on average do I need to achieve … quick bit of beer-mat maths … that many kg reduction times Cal per kg of blubber divided by 6 weeks gives  … 1300 Cals per day less than now (on average).  So what is my daily Cals input now?  I dunno. I do not have a baseline.  And I do not fancy measuring it for a couple of weeks to get one. My feeble motivation will not last that long. I need action. I need a quick win.
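The beer-mat maths can be made explicit. The 9000 Cals per kg figure is from the text; the 6 kg reduction is an assumption inferred from the numbers (it is the amount that reproduces the roughly 1300 Cals/day answer):

```python
CALS_PER_KG_BLUBBER = 9000  # from the text: ~9000 Cals per kg of blubber

def required_daily_deficit(kg_to_lose, weeks):
    """Average Cals/day below maintenance needed to lose kg_to_lose in weeks."""
    return kg_to_lose * CALS_PER_KG_BLUBBER / (weeks * 7)

print(round(required_daily_deficit(6, 6)))  # -> 1286, i.e. roughly 1300 Cals/day
```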

OK – I need to approach this a different way.  What if I just change the input to more Yummy+Cheap+Lo-Cal stuff and less Yummy+Cheap+Hi-Cal stuff and just measure what happens?  What if I just do what I feel able to? I can measure the input Cals accurately enough and also the output weight. My curiosity is now pricked too and my Inner Nerd starts to take notice and chips in “You can work out the rest from that. It is a simple S&F (stock-and-flow) model”. Thanks Inner Nerd – you do come in handy occasionally. My Motiv-o-Meter is now in the green – enough emotional fuel for a decision and some action.

I have all the bits of the design jigsaw – Purpose, Process, Progress and Pieces.  Studying and Planning are over – time for Doing.

So what happened?

It is an ongoing experiment – but so far it has gone exactly as the design dictated (and the nerdy S&F model predicted).

And the experience has helped me move some Get-Thin-Quick mantras to the rubbish bin.

I have counted nine so far:

Mantra 1. Do not weigh yourself every day – rubbish – weigh yourself every day using a consistent method and plot the dots.
Mantra 2. Focus on the fat – rubbish – it is Cals that count whatever the source – fat, carbs, protein (and alcohol).
Mantra 3. Five fresh fruit and veg a day – rubbish – they are just Hi-Cost+Lo-Cal stocking fillers.
Mantra 4. Only eat balanced meals – rubbish – it is OK to increase protein and reduce both carbs and fat.
Mantra 5. It costs money to get healthy – rubbish – it is possible to reduce cost by switching to Yummy+Cheap+Lo-Cal stuff.
Mantra 6. Cholesterol is bad – rubbish – we make more cholesterol than we eat – just stay inside a recommended range.
Mantra 7. Give up all alcohol – rubbish – just be sensible – just stay inside a recommended range.
Mantra 8. Burn the fat with exercise – rubbish – this is scraping-the-burnt-toast thinking – fewer Cals in first.
Mantra 9. Eat less every day – rubbish – it is OK to have Lo-Cal days and OK-Cal days – it is the average Cals that count.

And the thing that has made the biggest difference is the App.  Just being able to quickly look up the Cals in a “Waitrose Potato Croquette” whenever and wherever I want to is what I really needed. I have quickly learned what-is-in-what and that helps me make “Do I need that Chocolate Hob-Nob or not?” decisions on the fly. One tiny, insignificant Chocolate Hob-Nob = 95 Cals. Ouch! Maybe not.

I have been surprised by what I have learned. I now know that before I was making lots of unwise decisions based on completely wrong assumptions. Doh!

The other thing that has helped me build motivation is seeing the effect of those wiser design decisions translated into a tangible improvement – and quickly!  With a low-variation and high-precision weight measurement protocol I can actually see the effect of the Cals ingested yesterday on the Weight recorded today.  Our bodies obey the Laws of Physics. We are what we eat.

So what is the lesson to take away?

That there are two feedback loops that need to be included in all Improvement Science challenges – and both loops need to be closed, so that information flows, if the Improvement exercise is to succeed and to sustain.

First the Rhetoric Feedback loop – where new, specific, knowledge replaces old, generic gossip. We want to expose the myths and mantras and reveal novel options.  Challenge assumptions with scientifically valid evidence. If you do not know then look it up.

Second the Reality Feedback loop – where measured outcomes verify the wisdom of the decision – that the intended purpose was achieved.  Measure the input, internal and output metrics and plot them all as time-series charts. Seeing is believing.

So the design challenge has been achieved and with no motivation, no time and no budget.

Now where is that packet of Chocolate Hob Nobs. I think I have earned one. Yum yum.

[PS. This is not a new idea – it is called “double loop learning”. Never heard of it? It is worth looking up.]


[Figure: BMI chart]

Invisible Design

Improvement Science is all about making some-thing better in some-way by some-means.

There are lots of things that might be improved – almost everything in fact.

There are lots of ways that those things might be improved. If it was a process we might improve safety, quality, delivery, and productivity. If it was a product we might improve reliability, usability, durability and affordability.

There are lots of means by which those desirable improvements might be achieved – lots of different designs.

Multiply that lot together and you get a very big number of options – so it is no wonder we get stuck in the “what to do first?” decision process.

So how do we approach this problem currently?

We use our intuition.

Intuition steers us to the obvious – hence the phrase intuitively obvious. Which means what looks to our mind's eye to be a good option. And that is OK. It is usually a lot better than guessing (but not always).

However, the problem with using “intuitively obvious” is that we end up with mediocrity. We get “about average”. We get “OKish”.  We get “satisfactory”. We get “what we expected”. We get “same as always”. We do not get “significantly better-than-average”. We do not get “reliably good”. We do not get improvement. And we do not because anyone and everyone can do the “intuitively obvious” stuff.

To improve we need a better-than-average functional design. We need a Reliably Good Design. And that is invisible.

By “invisible” I mean not immediately obvious to our conscious awareness.  We do not notice good functional design because it does not get in the way of achieving our intention.  It does not trip us up.

We notice poor functional design because it trips us up. It traps us into making mistakes. It wastes our time. It fails to meet our expectation. And we are left feeling disappointed, irritated, and anxious. We feel Niggled.

We also notice exceptional design – because it works far better than we expected. We are surprised and we are delighted.

We do not notice Good Design because it just works. But there is a trap here. And that is we habitually link expectation to price.  We get what we paid for.  Higher cost => Better design => Higher expectation.

So we take good enough design for granted. And when we take stuff for granted we are on the slippery slope to losing it. As soon as something becomes invisible it is at risk of being discounted and deleted.

If we combine these two aspects of “invisible design” we arrive at an interesting conclusion.

To get from Poor Design to OK Design and then Good Design we have to think “counter-intuitively”.  We have to think “outside the box”. We have to “think laterally”.

And that is not a natural way for us to think. Not for individuals and not for teams. To get improvement we need to learn a method of how to counter our habit of thinking intuitively and we need to practice the method so that we can do it when we need to improve.

To illustrate what I mean let us consider a real example.

Suppose we have 26 cards laid out in a row on a table; each card has a number on it; and our task is to sort the cards into ascending order. The constraint is that we can only move cards by swapping them.  How do we go about doing it?

There are many sorting designs that could achieve the intended purpose – so how do we choose one?

One criterion might be the time it takes to achieve the result. The quicker the better.

One criterion might be the difficulty of the method we use to achieve the result. The easier the better.

When individuals are given this task they usually do something like “scan the cards for the smallest and swap it with the first from the left, then repeat for the second from the left, and so on until we have sorted all the cards”.

This card-sorting-design is fit for purpose.  It is intuitively obvious, it is easy to explain, it is easy to teach and it is easy to do. But is it the quickest?

The answer is NO. Not by a long chalk.  For 26 randomly mixed up cards it will take about 3 minutes if we scan at a rate of 2 per second. If we have 52 cards it will take us about 12 minutes. Four times as long. Using this intuitively obvious design the time taken grows with the square of the number of cards that need sorting.

In reality there are much quicker designs and for this type of task one of the quickest is called Quicksort. It is not intuitively obvious though, it is not easy to describe, but it is easy to do – we just follow the Quicksort Policy.  (For those who are curious you can read about the method here and make up your own mind about how “intuitively obvious” it is.  Quicksort was not invented until 1960 so given that sorting stuff is not a new requirement, it clearly was not obvious for a few thousand years).

Using Quicksort to sort our 52 cards would take less than 3 minutes! That is a four-fold improvement in productivity when we flip from an intuitive to a counter-intuitive design.  And Quicksort was not a chance discovery – it was deliberately designed to address a specific sorting problem – and it was designed using robust design principles.
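The gap between the two designs can be demonstrated by counting comparisons – the step that dominates the sorting time in the card example. This is an illustrative sketch, not the author's code; the quicksort counter tallies one comparison per card examined against the pivot:

```python
import random

def selection_sort_comparisons(cards):
    """The 'intuitively obvious' design: scan for the smallest, swap, repeat."""
    cards = list(cards)
    count = 0
    for i in range(len(cards)):
        smallest = i
        for j in range(i + 1, len(cards)):
            count += 1                      # one look at one card
            if cards[j] < cards[smallest]:
                smallest = j
        cards[i], cards[smallest] = cards[smallest], cards[i]  # one swap per pass
    return count

def quicksort_comparisons(cards):
    """Quicksort: partition around a pivot, then sort each part."""
    if len(cards) <= 1:
        return 0
    pivot, rest = cards[0], cards[1:]
    less = [c for c in rest if c < pivot]
    more = [c for c in rest if c >= pivot]
    # each remaining card is compared with the pivot exactly once
    return len(rest) + quicksort_comparisons(less) + quicksort_comparisons(more)

random.seed(0)
deck = random.sample(range(1000), 52)
print(selection_sort_comparisons(deck))  # 52 * 51 / 2 = 1326, always
print(quicksort_comparisons(deck))       # typically a few hundred
```

The scan-and-swap design always needs n(n-1)/2 comparisons; Quicksort needs roughly n log n on typical inputs – which is where the minutes-versus-hours difference comes from as the pile of cards grows.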

So our natural intuition tends to lead us to solutions that are “effective, easy and inefficient” – and that means expensive in terms of use of resources.

This has an important conclusion – if we are all given the same improvement assignment and we all use our intuition to solve it then we will get similar and mediocre results.  It will feel OK and it will appear obvious but there will be no improvement.

We then conclude that “OK, this is the best we can expect.” which is intuitively obvious, logically invalid, and wrong. It is that sort of intuitive thinking trap that blocked us from inventing Quicksort for thousands of years.

And remember, to decide what is “best” we have to explore all options exhaustively – both the intuitively obvious and the counter-intuitively obscure. That is impossible in practice.  This is why “best” and “optimum” are generally unhelpful concepts in the context of improvement science.

So how do we improve when good design is so counter-intuitive?

The answer is that we learn a set of “good designs” from a teacher who knows and understands them, and then we prove them to ourselves in practice. We leverage the “obvious in retrospect” effect. We practice until we understand. And then we teach others.

So if we wanted to improve the productivity of our designed-by-intuition card sorting process we could:
(a) consult a known list of proven sorting algorithms,
(b) choose one that meets our purpose (our design specification),
(c) compare the measured performance of our current “intuitively obvious” design with the predicted performance of that “counter-intuitively obscure” design,
(d) set about planning how to implement the higher performance design – possibly as a pilot first to confirm the prediction, reassure the fence-sitters, satisfy the skeptics, and silence the cynics.

So if these proven good designs are counter-intuitive then how do we get them?

The simplest and quickest way is to learn from people who already know and understand them. If we adopt the “not invented by us” attitude and attempt to re-invent the wheel then we may get lucky and re-discover a well-known design, we might even discover a novel design; but we are much more likely to waste a lot of time and end up no better off, or worse. This is called “meddling” and is driven by a combination of ignorance and arrogance.

So who are these people who know and understand good design?

They are called Improvement Scientists – and they have learned one-way-or-another what a good design looks like. That also means they can see poor design where others see the only possible design.

That difference of perception creates a lot of tension.

The challenge that Improvement Scientists face is explaining how counter-intuitive good design works: especially to highly intelligent, skeptical people who habitually think intuitively. They are called Academics.  And it is a pointless exercise trying to convince them using rhetoric.

Instead our Improvement Scientists side-step the “theoretical discussion” and the “cynical discounting” by pragmatically demonstrating the measured effect of good design in practice. They use reality to make the case for good design – not rhetoric.

Improvement Scientists are Pragmatists.

And because they have learned how counter-intuitive good design is to the novice – how invisible it is to their intuition – then they are also Voracious Learners. They have enough humility to see themselves as Eternal Novices and enough confidence to be selective students.  They will actively seek learning from those who can demonstrate the “what” and explain the “how”.  They know and understand it is a much quicker and easier way to improve their knowledge and understanding.  It is Good Design.

 

The Green Shoots of Improvement

Improvement is a form of innovation and it obeys the same Laws of Innovation.

One of these Laws describes how innovation diffuses and it is called Rogers’ Law.

The principle is that innovations diffuse according to two opposing forces – the Force of Optimism and the Force of Skepticism.  As individuals we differ in our balance of these two preferences.

When we are in status quo the two forces are exactly balanced.

As the Force of Optimism builds (usually from increasing dissatisfaction with the status quo driving Necessity-the-Mother-of-Invention) then the Force of Skepticism tends to build too. It feels like being in a vice that is slowly closing. The emotional stress builds, the strain starts to show and the cracks begin to appear.  Sometimes the Optimism jaw of the vice shatters first, sometimes the Skepticism jaw does – either way the pent-up-tension is relieved. At least for a while.

The way to avoid the Vice is to align the forces of Optimism and Skepticism so that they both pull towards the common goal, the common purpose, the common vision.  And there always is one. People want a win-win-win outcome, they vary in daring to dream that it is possible. It is.

The importance of pull is critical. When we have push forces and a common goal we do get movement – but there is a danger – because things can veer out of control quickly.  Pull is much easier to steer and control than push.  We all know this from our experience of the real world.

And when the status quo starts to move in the direction of the common vision we are seeing tangible evidence of the Green Shoots of Improvement breaking through the surface into our conscious awareness.  Small signs first, tender green shoots, often invisible among the overgrowth, dead wood and weeds.

Sometimes the improvement is a reduction of the stuff we do not want – and that can be really difficult to detect if it is gradual because we adapt quickly and do not notice diffuse, slow changes.

We can detect the change by recording how it feels now then reviewing our records later (very few of us do that – very few of us keep a personal reflective journal). We can also detect change by comparing ourselves with others – but that is a minefield of hidden traps and is much less reliable (but we do that all the time!).

Improvement scientists prepare the Soil-of-Change, sow the Seeds of Innovation, and wait for the Spring to arrive.  As the soil thaws (the burning platform of a crisis may provide some energy for this) some of the Seeds will germinate and start to grow.  They root themselves in past reality and they shoot for the future rhetoric.  But they have a finite fuel store for growth – they need to get to the surface and to sunlight before their stored energy runs out. The preparation, planting and timing are all critical.

And when the Green Shoots of Improvement appear the Improvement Scientist switches role from Germinator to Grower – providing the seedlings with emotional sunshine in the form of positive feedback, encouragement, essential training, and guidance.  The Grower also has to provide protection from toxic threats that can easily kill a tender improvement seedling – the sources of Cynicide that are always present. The disrespectful sneers of “That will never last!” and “You are wasting your time – nothing good lasts long around here!”

The Improvement Scientist must facilitate harnessing the other parts of the system so that they all pull in the direction of the common vision – at least to some degree.  And the other parts add up to about 85% of the whole, so collectively they have enough muscle to create movement in the direction of the shared vision. If they are aligned.

And each other part has a different, significant and essential role.

The Disruptive Innovators provide the new ideas – they are always a challenge because they are always questioning “Why do we do it that way?” “What if we did it differently?” “How could we change?”  We do not want too many disruptive innovators because they are – disruptive.  Frustrated disruptive innovators can easily flip to being Cynics – so it is wise not to ignore them.

The Early Adopters provide the filter – they test the new ideas; they reject the ones that do not work; and they shape the ones that do. They provide the robust evidence of possibility. We need more Adopters than Innovators because lots of the ideas do not germinate. Duff seed or hostile soil – it does not matter which.  We want Green Shoots of Improvement.

The Majority provide the route to sharing the Adopter-Endorsed ideas, the Green Shoots of Improvement. They will sit on the fence, consider the options, comment, gossip, listen, ponder and eventually they will commit and change. The Early Majority earlier and the Late Majority later. The Late Majority are also known as the Skeptics. They are willing to be convinced but they need the most evidence. They are most risk-averse and for that reason they are really useful – because they can help guide the Shoots of  Improvement around the Traps. They will help if asked and given a clear role – “Tell us if you see gaps and risks and tell us why so that we can avoid them at the design and development stage”.  And you can tell if they are a True Skeptic or a Cynic-in-Skeptic clothing – because the Cynics will decline to help saying that they are too busy.

The last group, the Cynics, are a threat to significant and sustained improvement. And they can be managed using one or more of these four tactics:

1. Ignore them. This has the advantage of not wasting time but it tends to enrage them and they get noisier and more toxic.
2. Isolate them. This is done by establishing peer group ground rules that are based on Respectful Challenge.
3. Remove them. This needs senior intervention and a cast-iron case with ample evidence of bad behaviour. Last resort.
4. Engage them. This is the best option if it can be achieved – invite the Cynics to be Skeptics. The choice is theirs.

It is surprising how much improvement follows from just blocking some of the sources of Cynicide!

So the take home message is a positive one:

  • Look for the Green Shoots of Improvement,
  • Celebrate every one you find,
  • Nurture and Protect them

and they will grow bigger and stronger and one day will flower, fruit and create their own Seeds of Innovation.

The Tyranny of Choice

[Ding-a-Ling]
Bob’s new all-singing-and-dancing touchscreen phone pronounced the arrival of an email from an Improvement Science apprentice. This was always an opportunity for learning so he swiped the flashing icon and read the email. It was from Leslie.

<Leslie>Hi Bob, I have come across a new challenge that I never thought I would see – the team that I am working with are generating so many improvement-by-design ideas that we cannot decide what to try. Can you help?

Bob thumbed a reply immediately:
<Bob>Ah ha! The Tyranny of Choice challenge. Yes, I believe I can help. I am free to talk now if you are.

[“You have a call from Leslie”]
Bob’s new all-singing-and-dancing touchscreen phone said that it was Leslie on the line – (it actually said it in the synthetic robot voice that Bob had set as the default).

<Bob>Hello Leslie.

<Leslie>Hi Bob, thank you for replying so quickly. I gather that you have encountered this challenge before?

<Bob>Yes. It usually appears when a team are nearing the end of a bumpy ride on the Nerve Curve and are starting to see new possibilities that previously were there but hidden.

<Leslie>That is just where we are. The problem is we have flipped from no options to so many we cannot decide what to do.

<Bob>It is often assumed that choice is a good thing, but you can have too much of a good thing. Many studies have shown that when the number of innovative choices is limited then people are more likely to make a decision and actually do something. As the number of choices increases it gets much harder to choose, so we default to the more comfortable and familiar status quo. We avoid making a decision and we do nothing. That is the Tyranny of Choice.

<Leslie>Yes, that is just how it feels. Paralyzed by indecision. So how do we get past this barrier?

<Bob>The same way we get past all barriers. We step back, broaden our situational awareness, list all the obvious things, and then consider doing exactly the opposite of what our intuition tells us. We just follow the tried-and-tested 6M Design script.

<Leslie>Arrgh! Yes, of course. We start with a 4N Chart.

<Bob>Yes, and specifically we start with the Nuggets.  We look for what is working despite the odds. The positive deviants. Who do you know is decisive when faced with a host of confusing and conflicting options? Not tyrannized by choice.

<Leslie>Other than you?

<Bob>It does not matter who. How do they do it?

<Leslie>Well – “they” use a special sort of map that I confess I have not mastered yet – the Right-2-Left Map.

<Bob>Yes, an effective way to avoid getting lost in the Labyrinth of Options. What else?

<Leslie>“They” know what the critical steps are and “they” give clear step-by-step guidance of what to do to complete them.

<Bob>This is called “story-boarding”.  It is rather like sketching each scene of a play – then practicing each scene script individually until they are second nature and ready when needed.

<Leslie>That is just like what the emergency medical teams do. They have scripts that they use for emergent situations where it is dangerous to try to plan what to do in the moment.  They call them “care bundles”. It avoids a lot of time-wasting, debate, prevarication and the evidence shows that it delivers better outcomes and saves lives.

<Bob>In an emergency situation the natural feeling of fear creates the emotional drive to act; but without a well-designed and fully-tested script the same fear can paralyze the decision process. It is the rabbit-in-the-headlights effect.  When the feeling of urgency is less a different approach is needed to engage the emotional-power-train.

<Leslie>Do you mean build engagement?

<Bob>Yes, and how do we do that?

<Leslie>We use a combination of subjective stories and objective evidence – heart stuff and head stuff. It is a very effective combination to break through the Carapace of Complacency as you call it. I have seen that work really well in practice.

<Bob>And the 4N Chart comes in handy here again because it helps us see the emotional-terrain in perspective and to align us in moving away from the Niggles towards the NiceIfs while avoiding the NoNos and leveraging the Nuggets.

<Leslie>Yes! I have seen that too. But what do we do when we are in new territory; when we are faced with a swarm of novel options; when we have no pre-designed scripts to help us?

<Bob>We use a meta-script?

<Leslie>A what?

<Bob>A meta-script is one that we use to design a novel action script when we need it.

<Leslie>You mean a single method for creating a plan that we are confident will work?

<Bob>Yes.

<Leslie>That is what the Right-2-Left Map is!

<Bob>Yes.

<Leslie>So the Tyranny of Choice is the result of our habitual Left-2-Right thinking.

<Bob>Yes.

<Leslie>And when the future choices we see are also shrouded in ambiguity it is even harder to make a decision!

<Bob>Yes. We cannot see past the barrier of uncertainty – so we stop and debate because it feels safer.

<Leslie>Which is why so many really clever people seem to get stuck in the paralysis of analysis and valueless discussion.

<Bob>Yes.

<Leslie>So all we need to do is switch to the counter-intuitive Right-2-Left thinking and the path becomes clear?

<Bob>Not quite.  The choices become a lot easier so the Tyranny of Choice disappears. We still have choices. There are still many possible paths. But it does not matter which we choose because they all lead to the common goal.

<Leslie>Thank you Bob. I am going to have to mull this one over for a while – red wine may help.

<Bob>Yes – mulled wine is a favorite of mine too. Ching-ching!

Do Not Give Up Too Soon

Tangible improvement takes time. Sometimes it takes a long time.

The more fundamental the improvement the more people are affected. The more people involved the greater the psychological inertia. The greater the resistance the longer it takes to show tangible effects.

The advantage of deep-level improvement is that the cumulative benefit is greater – the risk is that the impatient Improvementologist may give up too early – sometimes just before the benefit becomes obvious to all.

The seeds of change need time to germinate and to grow – and not all good ideas will germinate. The green shoots of innovation do not emerge immediately – there is often a long lag and little tangible evidence for a long time.

This inevitable delay is a source of frustration, and the impatient innovator can unwittingly undo their good work.  By pushing too hard they can snatch failure from the jaws of success.

Q: So how do we avoid this trap?

The trick is to understand the effect of the change on the system.  This means knowing where it falls on our Influence Map that is marked with the Circles of Control, Influence and Concern.

Our Circle of Concern includes all those things that we are aware of that present a threat to our future survival – such as a chunk of high-velocity space rock smashing into the Earth and wiping us all out in a matter of milliseconds. Gulp! Very unlikely but not impossible.

Some concerns are less dramatic – such as global warming – and collectively we may have more influence over changing that. But not individually.

Our Circle of Influence lies between the limit of our individual control and the limit of our collective control. This is a broad scope because “collective” can mean two, twenty, two hundred, two thousand, two million, two billion and so on.

Making significant improvements is usually a Circle of Influence challenge and only collectively can we make a difference.  But to deliver improvement at this level we have to influence others to change their knowledge, understanding, attitudes, beliefs and behaviour. That is not easy and that is not quick. It is possible though – with passion, plausibility, persistence, patience – and an effective process.

It is here that we can become impatient and frustrated and are at risk of giving up too soon – and our temperaments influence the risk. Idealists are impatient for fundamental change. Rationals, Guardians and Artisans do not feel the same pain – and it is a rich source of conflict.

So if we need to see tangible results quickly then we have to focus closer to home. We have to work inside our Circle of Individual Influence and inside our Circle of Control.  The scope of individual influence varies from person-to-person but our Circle of Control is the same for all of us: the outer limit is our skin.  We all choose our behaviour and it is that which influences others: for better or for worse.  It is not what we think, it is what we do. We cannot read or control each other’s minds. We can all choose our attitudes and our actions.

So if we want to see tangible improvement quickly then we must limit the scope of our action to our Circle of Individual Influence and get started.  We do what we can and as soon as we can.

Choosing what to do and what not to do requires wisdom. That takes time to develop too.


Making an impact outside the limit of our Circle of Individual Influence is more difficult because it requires influencing many other people.

So it is especially rewarding to see examples of how individual passion, persistence and patience have led to profound collective improvement.  It proves that it is still possible. It provides inspiration and encouragement for others.

One example is the recently published Health Foundation Quality, Cost and Flow Report.

This was a three-year experiment to test if the theory, techniques and tools of Improvement Science work in healthcare: specifically in two large UK acute hospitals – Sheffield and Warwick.

The results showed that Improvement Science does indeed work in healthcare and it worked for tough problems that were believed to be very difficult if not impossible to solve. That is very good news for everyone – patients and practitioners.

But the results have taken some time to appear in published form – so it is really good news to report that the green shoots of improvement are now there for all to see.

The case studies provide hard evidence that win-win-win outcomes are possible and achievable in the NHS.

The Impossibility Hypothesis has been disproved. The cynics can step off the bus. The skeptics have their evidence and can now become adopters.

And the report offers a lot of detail on how to do it including two references that are available here:

  1. A Recipe for Improvement PIE
  2. A Study of Productivity Improvement Tactics using a Two-Stream Production System Model

These references both describe the fundamentals of how to align financial improvement with quality and delivery improvement to achieve the elusive win-win-win outcome.

A previously invisible door has opened to reveal a new Land of Opportunity. A land inhabited by Improvementologists who mark the path to learning and applying this new knowledge and understanding.

There are many who do not know what to do to solve the current crisis in healthcare – they now have a new vista to explore.

Do not give up too soon –  there is a light at the end of the dark tunnel.

And to get there safely and quickly we just need to learn and apply the Foundations of Improvement Science in Healthcare – and we learn to FISH in our own ponds first.


Time-Reversed Insight

Thinking-in-reverse sounds like an odd thing to do but it delivers more insight and solves tougher problems than thinking forwards.  That is the reason it is called Time-Reversed Insight.   And once we have mastered how to do it, we discover that it comes in handy in all sorts of problematic situations where thinking forwards only hits a barrier or even makes things worse.

Time-reversed thinking is not the same thing as undoing what you just did. It is reverse thinking – not reverse acting.

We often hear the advice “Start with the end in mind …” and that certainly sounds like it might be time-reversed thinking, but it is often followed by “… to help guide your first step.” The second part tells us it is not. Jumping from outcome to choosing the first step is actually time-forward thinking.

Time-forward thinking comes in many other disguises: “Seeking your True North” is one and “Blue Sky Thinking” is another. They are certainly better than discounting the future and they certainly do help us to focus and to align our efforts – but they are still time-forward thinking. We know that because the next question is always “What do we do first? And then? And then?” in other words “What is our Plan?”.

This is not time-reversed insightful thinking: it is good old, tried-and-tested, cause-and-effect thinking. Great for implementation but a largely-ineffective and hugely-inefficient way to dissolve “difficult” problems. In those situations it becomes keep-busy behaviour. Plan-Do-Plan-Do-Plan-Do …


In time-reversed thinking the first question looks similar. It is a question about outcome but it is very specific.  It is “What outcome do we want? When do we want it? and How would we know we have got it?”  It is not a direction. It is a destination. The second question in time-reversed thinking is the clincher. It is  “What happened just before?” and is followed by “And before that? And before that?“.

We actually do this all the time but we do it unconsciously and we do it very fast.  It is called the “blindingly obvious in hindsight” phenomenon.  What happens is we feel the good or bad outcome and then we flip to the cause in one unconscious mental leap. Ah ha!

And we do this because thinking backwards in a deliberate, conscious, sequential way is counter-intuitive.

Our unconscious mind seems to have no problem doing it though. And that is because it is wired differently. Some psychologists believe that we literally have “two brains”: one that works sequentially in the direction of forward time – and one that works in parallel, both forwards and backwards in time. It is the sequential one that we associate with conscious thinking; it is the parallel one that we associate with unconscious feeling. We do both and usually they work in synergy – but not always. Sometimes they antagonise each other.

The problem is that our sequential, conscious brain does not  like working backwards. Just like we do not like walking backwards, or driving backwards.  We have evolved to look, think, and move forwards. In time.

So what is so useful about deliberate, conscious, time-reversed thinking?

It can give us a uniquely different perspective – one that generates fresh insight – and that new view enables us to solve problems that we believed were impossible when looked at in a time-forward way.


An example of time-reverse thinking:

The 4N Chart is an emotional mapping tool.  More specifically it is an emotion-over-time mapping technique. The way it is used is quite specific and quite counter-intuitive.  If we ask ourselves the question “What is my top Niggle?” our reply is usually something like “Not enough time!” or “Person x!” or “Too much work!“.  This is not how The 4N Chart is designed to be used.  The question is “What is my commonest negative feeling?” and then the question “What happened just before I felt it?“.  What was the immediately preceding cause of  the Niggle? And then the questions continue deliberately and consciously to think backwards: “And before that?”, “And before that?” until the root causes are laid bare.

A typical Niggle-cause exposing dialog might be:

Q: What is my commonest negative feeling?
A: I feel angry!
Q: What happened just before?
A: My boss gives me urgent jobs to do at half past 4 on Friday afternoon!
Q: And before that?
A: Reactive crisis management meetings are arranged at very short notice!
Q: And before that?
A: We have regular avoidable crises!
Q: And before that?
A: We are too distracted with other important work to spot each crisis developing!
Q: And before that?
A: We were not able to recruit when a valuable member of staff left.
Q: And before that?
A: Our budget was cut!

This is time-reversed  thinking and we can do this reasonably easily because we are working backwards from the present – so we can use our memory to help us. And we can do this individually and collectively. Working backwards from the actual outcome is safer because we cannot change the past.

It is surprisingly effective though because by doing this time-reverse thinking consciously we uncover where best to intervene in the cause-and-effect pathway that generates our negative emotions. Where it crosses the boundary of our Circle of Control. And all of us have the choice to step-in just before the feeling is triggered. We can all choose if we are going to allow the last cause to trigger a negative feeling in us. We can all learn to dodge the emotional hooks. It takes practice but it is possible. And having deflected the stimulus and avoided being hijacked by our negative emotional response we are then able to focus our emotional effort into designing a way to break the cause-effect-sequence further upstream.

We might leave ourselves a reminder to check on something that could develop into a crisis without us noticing. Averting just one crisis would justify all the checking!

This is what calm-in-a-crisis people do. They disconnect their feelings. It is very helpful but it has a risk.

The downside is that they can disconnect all their feelings – including the positive ones. They can become emotionless, rational, logical, tough-minded robots.  And that can be destructive to individual and team morale. It is the antithesis of improvement.

So be careful when disconnecting emotional responses – do it only for defense – never for attack.


A more difficult form of time-reversed thinking is thinking backwards from future-to-present.  It is more difficult for many reasons, one of which is because we do not have a record of what actually happened to help us.  We do however have experience of  similar things from the past so we can make a good guess at the sort of things that could cause a future outcome.

Many people do this sort of thinking in a risk-avoidance way with the objective of blocking all potential threats to safety at an early stage. When taken to extreme it can manifest as turgid, red-taped, blind bureaucracy that impedes all change. For better or worse.

Future-to-present thinking can be used as an improvement engine – by unlocking potential opportunity at an early stage. Innovation is a fragile flower and can easily be crushed. Creative thinking needs to be nurtured long enough to be tested.

Change is deliberately destabilising so this positive form of future-to-present thinking can also be counter-productive if taken to extreme, when it becomes incessant meddling. Change for change’s sake is also damaging to morale.

So, either form of future-to-present thinking is OK in moderation and when used in synergy the effect is like magic!

Synergistic future-to-present time-reversed thinking is called Design Thinking and one formulation is called 6M Design.

The Writing on the Wall – Part II

The retrospectoscope is the favourite instrument of the forensic cynic – the expert in the after-the-event-and-I-told-you-so rhetoric. The rabble-rouser for the lynch-mob.

It feels better to retrospectively nail-to-a-cross the person who committed the Cardinal Error of Omission, and leave them there in emotional and financial pain as a visible lesson to everyone else.

This form of public feedback has been used for centuries.

It is called barbarism, and it has no place in a modern civilised society.


A more constructive question to ask is:

“Could the evolving Mid-Staffordshire crisis have been detected earlier … and avoided?”

And this question exposes a tricky problem: it is much more difficult to predict the future than to explain the past.  And if it could have been detected and avoided earlier, then how is that done?  And if the how-is-known then is everyone else in the NHS using this know-how to detect and avoid their own evolving Mid-Staffs crisis?

To illustrate how it is currently done let us use the actual Mid-Staffs data. It is conveniently available in Figure 1 embedded in Figure 5 on Page 360 in Appendix G of Volume 1 of the first Francis Report.  If you do not have it at your fingertips I have put a copy of it below.

[Figure: MS_RawData]

The message does not exactly leap off the page and smack us between the eyes does it? Even with the benefit of hindsight.  So what is the problem here?

The problem is one of ergonomics. Tables of numbers like this are very difficult for most people to interpret, so they create a risk that we ignore the data or that we just jump to the bottom line and miss the real message. And it is very easy to miss the message when we compare the results for the current period with the previous one – a very bad habit that is spread by accountants.

This was a slowly emerging crisis so we need a way of seeing it evolving and the better way to present this data is as a time-series chart.

As we are most interested in safety and outcomes, then we would reasonably look at the outcome we do not want – i.e. mortality.  I think we will all agree that it is an easy enough one to measure.

[Figure: MS_RawDeaths]
This is the raw mortality data from the table above, plotted as a time-series chart.  The green line is the average and the red lines are a measure of variation-over-time. We can all see that the raw mortality is increasing and the red flags say that this is a statistically significant increase. Oh dear!
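The green average line and red limit lines on this kind of chart are typically those of an XmR (individuals) chart, derived from the point-to-point variation. Here is a minimal sketch using made-up annual death counts, not the actual Mid Staffs figures:

```python
def xmr_chart(values):
    """Centre line and natural process limits for an XmR chart.
    Limits = mean ± 2.66 × average moving range, where 2.66 is the
    standard XmR constant (3 / d2, with d2 = 1.128 for n = 2).
    Points outside the limits are flagged as signals (the red flags)."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    lower = mean - 2.66 * avg_mr
    upper = mean + 2.66 * avg_mr
    signals = [v for v in values if v < lower or v > upper]
    return lower, mean, upper, signals

# Hypothetical annual death counts drifting upwards:
deaths = [820, 840, 835, 850, 845, 870, 890, 960, 1010]
lower, mean, upper, signals = xmr_chart(deaths)
print(f"limits: {lower:.0f} to {upper:.0f}, signals: {signals}")
# → limits: 810 to 950, signals: [960, 1010]
```

The last two counts fall above the upper natural process limit, so they would be flagged as statistically significant – the same message the red flags on the chart above convey.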

But hang on just a minute – using raw mortality data like this is invalid because we all know that the people are getting older, demand on our hospitals is rising, A&Es are busier, older people have more illnesses, and more of them will not survive their visit to our hospital. This rise in mortality may actually just be because we are doing more work.

Good point! Let us plot the activity data and see if there has been an increase.

MS_Activity

Yes – indeed the activity has increased significantly too.

Told you so! And it looks like the activity has gone up more than the mortality. Does that mean we are actually doing a better job at keeping people alive? That sounds like a more positive message for the Board and the Annual Report. But how do we present that message? What about as a ratio of mortality to activity? That will make it easier to compare ourselves with other hospitals.

Good idea! Here is the Raw Mortality Ratio chart.

MS_RawMortality_Ratio

Ah ha. See! The % mortality is falling significantly over time. Told you so.

Careful. There is an unstated assumption here: that the case mix is staying the same over time. This pattern could also be the impact of us doing a greater proportion of lower-complexity and lower-risk work.  So we need to correct this raw mortality data for case-mix complexity – and we can do that by using data from all NHS hospitals to give us a frame of reference. Dr Foster can help us with that because it is quite a complicated statistical modelling process. What comes out of Dr Foster's black-magic box is the Global Hospital Raw Mortality (GHRM), which is the expected number of deaths for our case mix if we were an ‘average’ NHS hospital.

MS_ExpectedMortality_Ratio

What this says is that the NHS-wide raw mortality risk appears to be falling over time (which may be for a wide variety of reasons, but that is outside the scope of this conversation). So what we now need to do is compare this global raw mortality risk with our local raw mortality risk … to give the Hospital Standardised Mortality Ratio (HSMR).

MS_HSMR

This gives us the Mid Staffordshire Hospital HSMR chart.  The blue line at 100 is the reference average – and what this chart says is that Mid Staffordshire hospital had a consistently higher risk than the average case-mix-adjusted mortality risk for the whole NHS. And it says that it got even worse after 2001 and that it stayed consistently 20% higher after 2003.
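
The arithmetic behind the HSMR chart itself is simple once the expected deaths have been estimated by the case-mix model. Here is a minimal sketch in Python, using invented numbers rather than the actual Mid Staffordshire figures:

```python
def hsmr(observed_deaths: int, expected_deaths: float) -> float:
    """Hospital Standardised Mortality Ratio: observed deaths divided by the
    case-mix-adjusted expected deaths, scaled so that 100 = the NHS average."""
    return 100.0 * observed_deaths / expected_deaths

# A hospital with 600 actual deaths when the case-mix model expects 500
# has an HSMR of 120, i.e. a 20% higher risk than the reference average.
print(hsmr(600, 500.0))  # → 120.0
print(hsmr(500, 500.0))  # → 100.0 (exactly 'average')
```

The hard part, of course, is estimating the expected deaths – that is the complicated statistical modelling hidden inside the black-magic box.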

Ah! Oh dear! That is not such a positive message for the Board and the Annual Report. But how did we miss this evolving safety catastrophe?  We had the Dr Foster data from 2001.

This is not a new problem – a similar thing happened in Vienna between 1820 and 1850 with maternal deaths caused by Childbed Fever. The problem was detected by Dr Ignaz Semmelweis who also discovered a simple, pragmatic solution to the problem: hand washing.  He blew the whistle but unfortunately those in power did not like the implication that they had been the cause of thousands of avoidable mother and baby deaths.  Semmelweis was vilified and ignored, and he did not publish his data until 1861. And even then the story was buried in tables of numbers.  Semmelweis went mad trying to convince the World that there was a problem.  Here is the full story.

Also, these statistical time-series charts (control charts) were not invented until 1924 – and not in healthcare, but in manufacturing. These tried-and-tested safety and quality improvement tools are only slowly diffusing into healthcare because the barriers to innovation appear somewhat impervious.

And the pores have been clogged even more by the social poison called “cynicide” – the emotional and political toxin exuded by cynics.

So how could we detect a developing crisis earlier – in time to avoid a catastrophe?

The first step is to estimate the excess-death-equivalent. Dr Foster does this for you.

MS_ExcessDeaths

Here is the data from the table plotted as a time-series chart, showing the estimated excess-death-equivalent per year. The average is 100 per year (that is two per week) when it should be close to zero. More worryingly, the number was increasing steadily over time, up to 200 per year in 2006 – that is about four excess deaths per week, on average.  It is important to remember that HSMR is a risk ratio and mortality is a multi-factorial outcome. So the excess-death-equivalent estimate does not imply that a clear causal chain will be evident in specific deaths. That is a complete misunderstanding of the method.

I am sorry – you are losing me with the statistical jargon here. Can you explain in plain English what you mean?

OK. Let us use an example.

Suppose we set up a tombola at the village fete and we sell 50 tickets with the expectation that the winner bags all the money. Each ticket holder has the same 1 in 50 risk of winning the wad-of-wonga and a 49 in 50 risk of losing their small stake. At the appointed time we spin the barrel to mix up the ticket stubs then we blindly draw one ticket out. At that instant the 50 people with an equal risk changes to one winner and 49 losers. It is as if the grey fog of risk instantly condenses into a precise, black-and-white, yes-or-no, winner-or-loser, reality.
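
The tombola can be sketched in a few lines of Python. The point is that before the draw every ticket carries an identical risk, and after the draw that risk has ‘condensed’ into one winner and 49 losers (the ticket numbers and the fixed random seed are arbitrary choices for illustration):

```python
import random

# 50 tickets, each holder carrying the same 1-in-50 risk of winning.
tickets = list(range(1, 51))
risk_per_ticket = 1 / len(tickets)   # 0.02 for everyone, before the draw

# The draw: the grey fog of risk condenses into one winner and 49 losers.
random.seed(42)                      # fixed seed so the sketch is repeatable
winner = random.choice(tickets)
losers = [t for t in tickets if t != winner]

print(risk_per_ticket)   # → 0.02
print(len(losers))       # → 49
```

Asking afterwards "why did *that* ticket win?" is unanswerable by design – which is exactly the point made below about searching case notes for the specific excess deaths.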

Translating this concept back into HSMR and Mid Staffs – the estimated 1200 deaths are just the "condensed risk of harm equivalent".  So, to conduct a retrospective case-note analysis of specific deaths looking for the specific cause would be equivalent to trying to work out, after the event, why that particular winning ticket in the tombola was picked out. It is a search that is doomed to fail. To then conclude from this fruitless search that HSMR is invalid is only to compound the delusion further.  The actual problem here is ignorance and misunderstanding of the basic Laws of Physics and Probability, because our brains are not good at solving this sort of problem.

But Mid Staffs is a particularly severe example, and it only shows up after years of data has accumulated. How would a hospital that was not as bad as this know they had a risk problem – and know sooner? Waiting for years to accumulate enough data to prove there was an avoidable problem in the past is not much help.

That is an excellent question. This type of time-series chart is not very sensitive to small changes when the data is noisy and sparse – such as when you plot the data on a month-by-month timescale and avoidable deaths are actually an uncommon outcome. Plotting the annual sum smooths out this variation and makes the trend easier to see, but it delays the diagnosis further. One way to increase the sensitivity is to plot the data as a cusum (cumulative sum) chart – which is conspicuous by its absence from the data table. It is the running total of the estimated excess deaths. Rather like the running score relative to par in a game of golf.

MS_ExcessDeaths_CUSUM

This is the cusum chart of excess deaths, and you will notice that it is not plotted with control limits. That is because it is invalid to use standard control limits for cumulative data.  The important features of the cusum chart are the slope and the deviation from zero. What is usually done is that an alert threshold is plotted on the cusum chart, and if the measured cusum crosses this alert-line then the alarm bell should go off – and the search then focuses on the precursor events: the Near Misses, the Not Agains and the Niggles.
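
To make the idea concrete, here is a minimal sketch of the cusum calculation in Python. The yearly observed and expected figures are invented for illustration (not the Mid Staffs data), and the alert threshold of 200 is an arbitrary choice, not a Dr Foster parameter:

```python
# Hypothetical yearly figures: observed deaths and case-mix-expected deaths.
observed = [520, 540, 560, 580, 600]
expected = [480, 470, 465, 460, 455]

# Excess-death-equivalent per year: observed minus expected.
excess = [o - e for o, e in zip(observed, expected)]

# The cusum is simply the running total of the excess.
cusum, total = [], 0
for x in excess:
    total += x
    cusum.append(total)

print(excess)  # → [40, 70, 95, 120, 145]
print(cusum)   # → [40, 110, 205, 325, 470]

# An alert threshold turns the chart into an early-warning system:
ALERT = 200
first_alert = next(i for i, c in enumerate(cusum) if c > ALERT)
print(first_alert)  # → 2, i.e. the alarm rings in year 3, not year 5
```

Notice how the cusum accumulates a persistent small excess into an unmissable upward slope – that is the source of its sensitivity.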

I see. You make it look easy when the data is presented as pictures. But aren’t we still missing the point? Isn’t this still after-the-avoidable-event analysis?

Yes! An avoidable death should be a Never-Event in a designed-to-be-safe healthcare system. It should never happen. There should be no coffins to count. To get to that stage we need to apply exactly the same approach to the Near-Misses, and then the Not-Agains, and eventually the Niggles.

You mean we have to use the SUI data and the IR1 data and the complaint data to do this – and also ask our staff and patients about their Niggles?

Yes. And it is not the number of complaints that is the most useful metric – it is the appearance of the cumulative sum of the complaint severity score. And we need a method for diagnosing and treating the cause of the Niggles too. We need to convert the feedback information into effective action.

Ah ha! Now I understand what the role of the Governance Department is: to apply the tools and techniques of Improvement Science proactively.  But our Governance Department have not been trained to do this!

Then that is one place to start – and their role needs to evolve from Inspectors and Supervisors to Demonstrators and Educators – ultimately everyone in the organisation needs to be a competent Healthcare Improvementologist.

OK – I now know what to do next. But wait a minute. This is going to cost a fortune!

This is just one small first step.  The next step is to redesign the processes so the errors do not happen in the first place. The cumulative cost saving from eliminating the repeated checking, correcting, box-ticking, documenting, investigating, compensating and insuring is much much more than the one-off investment in learning safe system design.

So the Finance Director should be a champion for safety and quality too.

Yup!

Brill. Thanks. And can I ask one more question? I do not want to appear too skeptical, but how do we know we can trust that this risk-estimation system has been designed and implemented correctly? How do we know we are not being bamboozled by statisticians? It has happened before!

That is the best question yet.  It is important to remember that HSMR is counting deaths in hospital, which means that it is not actually the risk of harm to the patient that is measured – it is the risk to the reputation of the hospital! So the answer to your question is that you demonstrate your deep understanding of the rationale and method of risk-of-harm estimation by listing all the ways that such a system could be deliberately "gamed" to make the figures look better for the hospital. And then go out and look for hard evidence of all the "games" that you can invent. It is a sort of creative poacher-becomes-gamekeeper detective exercise.

OK – I sort of get what you mean. Can you give me some examples?

Yes. The HSMR method is based on deaths-in-hospital, so discharging a patient from hospital before they die will make the figures look better. Suppose one hospital has more access to end-of-life care in the community than another: their HSMR figures would look better even though exactly the same number of people died. Another is that the HSMR method is weighted towards admissions classified as "emergencies" – so if a hospital admits more patients as "emergencies" who are not actually very sick, and discharges them quickly, then this will inflate their estimated expected deaths and make their mortality ratio look better – even though the risk-of-harm to patients has not changed.

OMG – so if we have pressure to meet 4-hour A&E targets, and we get paid more for an emergency admission than for an A&E attendance, then admitting to an Assessment Area and discharging within one day will actually reward the hospital financially, operationally and by apparently reducing their HSMR – even though there has been no difference at all to the care that patients actually receive?

Yes. It is an inevitable outcome of the current system design.

But that means that if I am gaming the system and my HSMR is not getting better then the risk-of-harm to patients is actually increasing and my HSMR system is giving me false reassurance that everything is OK.   Wow! I can see why some people might not want that realisation to be public knowledge. So what do we do?

Design the system so that the rewards are aligned with lower risk of harm to patients and improved outcomes.

Is that possible?

Yes. It is called a Win-Win-Win design.

How do we learn how to do that?

Improvement Science.

Footnote I:

The graphs tell a story but they may not create a useful sense of perspective. It has been said that there is a 1 in 300 chance that if you go to hospital you will not leave alive – because of avoidable causes. What! It cannot be as high as 1 in 300, surely?

OK – let us use the published Mid-Staffs data to test this hypothesis. Over 12 years there were about 150,000 admissions and an estimated 1,200 excess deaths (if all the risk were concentrated into the excess deaths, which is not what actually happens). That means odds of about 1 in 125 of an avoidable death for every admission! That is more than twice as bad as the estimated average.

The Mid Staffordshire statistics are bad enough; but the NHS-as-a-whole statistics are cumulatively worse, because there are hundreds of other hospitals that are each generating not-as-obvious avoidable mortality. The data is very ‘noisy’, so it is difficult even for a statistical expert to separate the message from the morass.

And remember – the "expected" mortality is estimated from the average for the whole NHS – which means that if this average is higher than it could be, then there is a statistical bias and we are being falsely reassured by being ‘not statistically significantly different’ from the pack.

And remember too – for every patient and family that suffers an avoidable death there are many more that have to live with the consequences of avoidable but non-fatal harm.  That is called avoidable morbidity.  This is what the risk really means – everyone has a higher risk of some degree of avoidable harm. Psychological and physical harm.

This challenge is not just about preventing another Mid Staffs – it is about preventing thousands of avoidable deaths and hundreds of thousands of patients avoidably harmed every year in ‘average’ NHS trusts.

It is not a mass conspiracy of bad nurses, bad doctors, bad managers or bad politicians that is the root cause.

It is poorly designed processes – and they are poorly designed because the nurses, doctors and managers have not learned how to design better ones.  And we do not know how because we were not trained to.  And that education gap was an accident – an unintended error of omission.

Our urgently-improve-NHS-safety-challenge requires a system-wide safety-by-design educational and cultural transformation.

And that is possible because the knowledge of how to design, test and implement inherently safe processes exists. But it exists outside healthcare.

And that safety-by-design training is a worthwhile investment because safer-by-design processes cost less to run because they require less checking, less documenting, less correcting – and all the valuable nurse, doctor and manager time freed up by that can be reinvested in more care, better care and designing even better processes and systems.

Everyone Wins – except the cynics who have a choice: to eat humble pie or leave.

Footnote II:

In the debate that has followed the publication of the Francis Report a lot of scrutiny has been applied to the method by which an estimated excess mortality number is created and it is necessary to explore this in a bit more detail.

The HSMR is an estimate of relative risk – it does not say that a set of specific patients were the ones who came to harm and the rest were OK. So looking at individual deaths and searching for identifiable cause-and-effect paths is to completely misunderstand the method – it is a misuse of the message.  When very few, if any, are found, to conclude that HSMR is flawed is an error of logic and exposes the ignorance of the analyst further.

HSMR is not perfect though – it has weaknesses.  It is a benchmarking process: the "standard" of 100 is always moving because the collective goal posts are moving – the reference is always changing. HSMR is estimated using data submitted by hospitals themselves – the clinical coding data.  So the main weakness is that it is dependent on the quality of the clinical coding – the errors of commission (wrong codes) and the errors of omission (missing codes). Garbage In, Garbage Out.

Hospitals use clinically coded data for another reason – payment. The way hospitals are now paid is based on the volume and complexity of their activity – Payment By Results (PbR) – using what are called Healthcare Resource Groups (HRGs). This is a better and fairer design because hospitals with more complex (i.e. more costly to manage) case loads get paid more per patient on average.  The HRG for each patient is determined by their clinical codes – including what are called the comorbidities – the other things that the patient has wrong with them. More comorbidities means more complex and more risky, so more money and more risk of death – roughly speaking.  So when PbR came in it became very important to code fully in order to get paid "properly".  The problem was that before PbR the coding errors went largely unnoticed – especially the comorbidity coding. And the errors were biased – a code is more likely to be omitted than to be incorrect, and errors of omission are harder to detect. This meant that more complete coding (to attract more money) made the estimated case-mix complexity go up compared with the historical reference. So, as actual (not estimated) NHS mortality has gone down slightly, the HSMR yardstick becomes even more distorted.  Hospitals that did not keep up with the Coding Game would look worse, even though their actual risk and mortality may be unchanged.  This is the fundamental design flaw in all types of benchmarking based on self-reported data.

The actual problem here is even more serious. PbR is actually a payment for activity – not a payment for outcomes. It is calculated from what it costs to run the average NHS hospital, using a technique called Reference Costing, which is the same method that manufacturing companies used to decide what price to charge for their products. It has another name – Absorption Costing.  The highest performers in the manufacturing world no longer use this out-of-date method. The implications of using Reference Costing and PbR in the NHS are profound and dangerous:

If NHS hospitals in general have poorly designed processes that create internal queues and require more bed-days than actually necessary, then the cost of that "waste" becomes built into the future PbR tariff. This means average length of stay (LOS) is financially rewarded: above-average LOS is financially penalised and below-average LOS makes a profit.  There is no financial pressure to improve beyond average. This is called the Regression to the Mean effect.  Also, LOS is not a measure of quality – so there is a pressure to shorten length of stay for purely financial reasons – to generate a surplus to fund growth and capital investment.  That pressure is non-specific and indiscriminate.  PbR is necessary but it is not sufficient – it requires a quality-of-outcome metric to complete it.

So the PbR system is based on an out-of-date cost-allocation model and therefore leads to the very problems that are contributing to the MidStaffs crisis – financial pressure causing quality failures and increased risk of mortality.  MidStaffs may be a chance victim of a combination of factors coming together like a perfect storm – but those same factors are present throughout the NHS because they are built into the current design.

One solution is to move towards a more up-to-date financial model called stream costing. This uses similar data to reference costing, but it estimates the "ideal" cost of the "necessary" work to achieve the intended outcome. This stream cost becomes the focus for improvement – the streams where there is the biggest gap between the stream cost and the reference cost are the focus of the redesign activity. Very often the root cause is just poor operational policy design; sometimes it is quality and safety design problems. Both are solvable without investment in extra capacity. The result is a higher-quality, quicker, lower-cost stream. Win-win-win. And in the short term that is rewarded by a tariff income that exceeds cost – and a lower HSMR.

Radically redesigning the financial model for healthcare is not a quick fix – and it requires a lot of other changes to happen first. So the sooner we start the sooner we will arrive. 

The Writing On The Wall – Part I

writing_on_the_wall

The writing is on the wall for the NHS.

It is called the Francis Report and there is a lot of it. Just the 290 recommendations runs to 30 pages. It would need a very big wall and very small writing to put it all up there for all to see.

So predictably the speed-readers have latched onto specific words – such as “Inspectors“.

Recommendation 137: "Inspection should remain the central method for monitoring compliance with fundamental standards."

And it goes further by recommending “A specialist cadre of hospital inspectors should be established …”

A predictable wail of anguish rose from the ranks “Not more inspectors! The last lot did not do much good!”

The word “cadre” is not one that is used in common parlance so I looked it up:

Cadre: 1. a core group of people at the center of an organization, especially military; 2. a small group of highly trained people, often part of a political movement.

So it has a military, centralist, specialist, political flavour. No wonder there was a wail of anguish! Perhaps this “cadre of inspectors” has been unconsciously labelled with another name? Persecutors.

Of more interest is the “highly trained” phrase. Trained to do what? Trained by whom? Clearly none of the existing schools of NHS management who have allowed the fiasco to happen in the first place. So who – exactly? Are these inspectors intended to be protectors, persecutors, or educators?

And what would they inspect?

And how would they use the output of such an inspection?

Would the fear of the inspection and its possible unpleasant consequences be the stick to motivate compliance?

Is the language of the Francis Report going to create another brick wall of resistance from the rubble of the ruins of the reputation of the NHS?  Many self-appointed experts are already saying that implementing 290 recommendations is impossible.

They are incorrect.

The number of recommendations is a measure of the breadth and depth of the rot. So the critical-to-success factor is to implement them in a well-designed order. Get the first few in place and working and the rest will follow naturally.  Get the order wrong and the radical cure will kill the patient.

So where do we start?

Let us look at the inspection question again.  Why would we fear an external inspection? What are we resisting? There are three facets to this: first we do not know what is expected of us;  second we do not know if we can satisfy the expectation; and third we fear being persecuted for failing to achieve the impossible.

W Edwards Deming used a very effective demonstration of the dangers of well-intended but badly-implemented quality improvement by inspection: it was called the Red Bead Game.  The purpose of the game was to illustrate how to design an inspection system that actually helps to achieve the intended goal. Sustained improvement.

This is applied Improvement Science and I will illustrate how it is done with a real and current example.


I am assisting a department in a large NHS hospital to improve the quality of their service. I have been sent in as an external inspector.  The specific quality metric they have been tasked to improve is the turnaround time of the specialist work that they do. This is a flow metric because a patient cannot leave hospital until this work is complete – and more importantly it is a flow and quality metric because when the hospital is full then another patient, one who urgently needs to be admitted, will be waiting for the bed to be vacated. One in one out.

The department have been set a standard to meet, a target, a specification, a goal. It is very clear and it is easily measurable. They have to turnaround each job of work in less than 2 hours.  This is called a lead time specification and it is arbitrary.  But it is not unreasonable from the perspective of the patient waiting to leave and for the patient waiting to be admitted. Neither want to wait.

The department has a sophisticated IT system that measures their performance. They use it to record when each job starts and when each job is finished, and from those two events the software calculates the lead time for each job in real-time. At the end of each day the IT system counts how many jobs were completed in less than 2 hours, compares this with how many were done in total, and calculates a ratio which it presents as a percentage in the range 0 to 100. This is called the process yield.  The department are dedicated and they work hard and they do all the work that arrives each day the same day – no matter how long it takes. And at the end of each day they have their score for that day. And it is almost never 100%.  Not never. Almost never. But it is not good enough and they are being blamed for it. In turn they blame others for making their job more difficult. It is a blame-game and it has been going on for years.
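
The daily yield metric the IT system computes can be sketched as follows. The job timestamps are invented for illustration; the 2-hour lead-time specification is the one described above:

```python
from datetime import datetime, timedelta

# The arbitrary but measurable lead-time specification: under 2 hours.
LEAD_TIME_SPEC = timedelta(hours=2)

# One day's jobs as (start, finish) event pairs - hypothetical data.
jobs = [
    (datetime(2013, 1, 7, 9, 0),  datetime(2013, 1, 7, 10, 15)),  # 75 min  - pass
    (datetime(2013, 1, 7, 9, 30), datetime(2013, 1, 7, 12, 0)),   # 150 min - fail
    (datetime(2013, 1, 7, 10, 0), datetime(2013, 1, 7, 11, 45)),  # 105 min - pass
    (datetime(2013, 1, 7, 11, 0), datetime(2013, 1, 7, 14, 30)),  # 210 min - fail
]

# Process yield: the proportion of jobs turned around within specification.
within_spec = sum(1 for start, finish in jobs if finish - start < LEAD_TIME_SPEC)
yield_pct = 100.0 * within_spec / len(jobs)
print(yield_pct)  # → 50.0
```

One number per day – which is exactly why a single day's score, in isolation, tells you so little; the time-series of these numbers is where the message lives.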

So how does an experienced Improvement Science-trained Inspector approach this sort of “wicked” problem?

First we need to get the writing on the wall – we need to see the reality – we need to "plot the dots" – we need to see what the performance is doing over time – we need to see the voice of the process. And that requires only their data, a pencil, some paper and for the chart to be put up on the wall where everyone can see it.

Chart_1

This is what their daily % yield data for three consecutive weeks looked like as a time-series chart. The thin blue line is the 100% yield target.

The 100% target was only achieved on three days – and they were all Sundays. On the other Sunday it was zero (which may mean that there was no data to calculate a ratio from).

There is wide variation from one day to the next, and it is the variation as well as the average that is of interest to an improvement scientist. What is the source of the variation? If 100% yield can be achieved on some days, then what is different about those days?

Chart_2

So our Improvement Science-trained Inspector will now re-plot the data in a different way – as rational groups. This exposes the issue clearly. The variation at weekends is very wide and the performance during the weekdays is much less variable.  What this says is that the weekend system and the weekday system are different, which means that it is invalid to combine the data for both.

It also raises the question of why there is such high variation in yield only at weekends?  The chart cannot answer the question, so our IS-trained Inspector digs a bit deeper and discovers that the volume of work done at the weekend is low, the staffing of the department is different, and that the recording of the events is less reliable. In short – we cannot even trust the weekend data – so we have two reasons to justify excluding it from our chart and just focusing on what happens during the week.

Chart_3

We re-plot our chart, marking the excluded weekend data as not for analysis.

We can now see that the weekday performance of our system is visible, less variable, and the average is a long way from 100%.

The team are working hard and still only achieving mediocre performance. That must mean that they need something that is missing. More motivation maybe. More people maybe. More technology maybe.  But there is no more money for more people or technology, and traditional JFDI motivation does not seem to have helped.

This looks like an impossible task!

Chart_4

So what does our Inspector do now? Mark their paper with a FAIL and put them on the To Be Sacked for Failing to Meet an Externally Imposed Standard heap?

Nope.

Our IS-trained Inspector calculates the limits of expected performance from the data and plots these limits on the chart – the red lines.  The computation is not difficult – it can be done with a calculator and the appropriate formula. It does not need a sophisticated IT system.
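
The "appropriate formula" here is the standard one for an XmR (individuals) chart: the natural process limits sit at the mean plus-or-minus 2.66 times the average moving range between consecutive points. A sketch, using invented daily yield figures rather than the department's actual data:

```python
# Hypothetical daily % yield figures for twelve weekdays.
data = [62, 55, 70, 48, 66, 58, 73, 51, 64, 60, 57, 68]

# The green line: the average.
mean = sum(data) / len(data)

# Average moving range between consecutive points.
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# The red lines: mean +/- 2.66 x average moving range (XmR chart constant).
upper = mean + 2.66 * avg_mr
lower = mean - 2.66 * avg_mr

print(round(mean, 1))                  # → 61.0
print(round(lower, 1), round(upper, 1))  # → 27.6 94.4
```

With these hypothetical figures the process is "capable" of anything between roughly 28% and 94% – so a single bad day inside that range is noise, not a signal, and blaming anyone for it is pointless.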

What this chart now says is "The current design of this process is capable of delivering between 40% and 85% yield. To expect it to do better is unrealistic."  The implication for action is "If we want 100% yield then the process needs to be re-designed." Persecution will not work. Blame will not work. Hoping-for-the-best will not work. The process must be redesigned.

Our improvement scientist then takes off the Inspector’s hat and dons the Designer’s overalls and gets to work. There is a method to this and it is called 6M Design®.

Chart_5

First we need to have a way of knowing if any future design changes have a statistically significant impact – for better or for worse. To do this the chart is extended into the future and the red lines are projected forwards in time as the black lines called locked-limits.  The new data is compared with this projected baseline as it comes in.  The weekends and bank holidays are excluded because we know that they are a different system. On one day (20/12/2012) the yield was surprisingly high. Not 100% but more than the expected upper limit of 85%.
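
The locked-limits comparison amounts to freezing the baseline limits and testing each new point against them. A sketch with invented yield figures (only the 20/12/2012 date is taken from the story; the limit values are the 40%–85% range quoted above):

```python
# Baseline limits computed from the 'before' period, then locked (frozen).
baseline_upper = 85.0
baseline_lower = 40.0

# New daily % yield observations arriving after the baseline period.
new_data = {
    "2012-12-19": 72.0,
    "2012-12-20": 91.0,  # the 'bed crisis' day - hypothetical value
    "2012-12-21": 63.0,
}

# Any point outside the locked limits is a signal worth investigating.
signals = [day for day, y in new_data.items()
           if y > baseline_upper or y < baseline_lower]
print(signals)  # → ['2012-12-20']
```

The signal does not tell us *why* the day was different – only that it was; the investigation that follows is what assigns the cause.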

Chart_6

This alerts us to investigate, and we found that it was a ‘hospital bed crisis’ and an ‘all hands to the pumps’ distress call went out.

Extra capacity was pulled to the process and less urgent work was delayed until later.  It is the habitual reaction-to-a-crisis behaviour called "expediting" or "firefighting".  So after the crisis had waned and the excitement diminished, the performance returned to the expected range. A week later the chart signals us again and we investigate, but this time the cause was different. It was an unusually quiet day and there were more than enough hands on the pumps.

Both of these days are atypically good and we have an explanation for each of them. This is called an assignable cause. So we are justified in excluding these points from our measure of the typical baseline capability of our process – the performance the current design can be expected to deliver.

An inexperienced manager might conclude from these lessons that what is needed is more capacity. That sounds and feels intuitively obvious, and it is correct that adding more capacity may improve the yield – but that does not prove that lack of capacity is the primary cause.  There are many other causes of long lead times, just as there are many causes of headaches other than brain tumours! So before we can decide the best treatment for our under-performing design we need to establish the design diagnosis. And that is done by inspecting the process in detail. And we need to know what we are looking for: the errors of design commission and the errors of design omission. The design flaws.

Only a trained and experienced process designer can spot the flaws in a process design. Intuition will trick the untrained and inexperienced.


Once the design diagnosis is established then the redesign stage can commence. Design always works to a specification and in this case it was clear – to significantly improve the yield to over 90% at no cost.  In other words without needing more people, more skills, more equipment, more space, more anything. The design assignment was made trickier by the fact that the department claimed that it was impossible to achieve significant improvement without adding extra capacity. That is why the Inspector had been sent in. To evaluate that claim.

The design inspection revealed a complex adaptive system – not a linear, deterministic, production-line that manufactures widgets.  The department had to cope with wide variation in demand, wide variation in quality of request, wide variation in job complexity, and wide variation in urgency – all at the same time.  But that is the nature of healthcare and acute hospital work. That is the expected context.

The analysis of the current design revealed that it was not well suited for this requirement – and the low yield was entirely predictable. The analysis also revealed that the root cause of the low yield was not lack of either flow-capacity or space-capacity.

This insight led to the suggestion that it would be possible to improve yield without increasing cost. The department were polite but they did not believe it was possible. They had never seen it, so why should they be expected to just accept this on faith?

So, the next step was to develop, test and demonstrate a new design and that was done in three stages. The final stage was the Reality Test – the actual process design was changed for just one day – and the yield measured and compared with the predicted improvement.

This was the validity test – the proof of the design pudding. And to visualise the impact we used the same technique as before – extending the baseline of our time-series chart, locking the limits, and comparing the “after” with the “before”.

The yellow point marks the day of the design test. The measured yield was well above the upper limit which suggested that the design change had made a significant improvement. A statistically significant improvement.  There was no more capacity than usual and the day was not unusually quiet. At the end of the day we held a team huddle.
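For readers who want to see the mechanics, here is a minimal sketch of how locked limits flag a point like the yellow one. It uses standard XmR-chart arithmetic; all the yield figures are invented for illustration and are not the department's actual data.

```python
# A minimal sketch, not the department's actual analysis: XmR-style
# "natural process limits" computed from a baseline, then locked and
# used to judge later points. All yield figures here are invented.
baseline = [0.62, 0.65, 0.60, 0.64, 0.63, 0.61, 0.66, 0.63, 0.62, 0.64, 0.63, 0.65]

mean = sum(baseline) / len(baseline)

# Average moving range between consecutive baseline points
amr = sum(abs(b - a) for a, b in zip(baseline, baseline[1:])) / (len(baseline) - 1)

# Standard XmR limits: mean +/- 2.66 x average moving range
upper = mean + 2.66 * amr
lower = mean - 2.66 * amr

# A later point above the locked upper limit is a statistical signal of change
test_day_yield = 0.90
signal = test_day_yield > upper
print(f"locked limits {lower:.3f} to {upper:.3f}; test day signal: {signal}")
```

With these invented numbers the locked upper limit sits near 70%, so a 90% test day lands well above it – which is exactly the kind of signal the chart showed.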

Our first question was “How did the new design feel?” The consensus was “Calmer, smoother, fewer interruptions” and best of all “We finished on time – there was no frantic catch up at the end of the day and no one had to stay late to complete the day’s work!”

The next question was “Do we want to continue tomorrow with this new design or revert back to the old one?” The answer was clear “Keep going with the new design. It feels better.”

The same chart was used to show what happened over the next few days – excluding the weekends as before. The improvement was sustained – it did not revert to the original because the process design had been changed. Same work, same capacity, different process – higher yield. The red flags on the charts mark the statistically significant evidence of change and the cluster of red flags is very strong statistical evidence that the improvement is not due to chance.

The next phase of the 6M Design® method is to continue to monitor the new process to establish the new baseline of expectation. That will require at least twelve data points and it is in progress. But we have enough evidence of a significant improvement. This means that we have no credible justification to return to the old design, and it also implies that it is no longer valid to compare the new data against the old projected limits. Our chart tells us that we need to split the data into before-and-after and to calculate new averages and limits for each segment separately. We have changed the voice of the process by changing the design.

And when we split the data at the point-of-change then the red flags disappear – which means that our new design is stable. And it has a new capability – a better one. We have moved closer to our goal of 100% yield. It is still early days and we do not really have enough data to calculate the new capability.
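The split-at-the-point-of-change step can be sketched in a few lines. This is a hedged illustration with invented data, not the department's: each segment gets its own average and its own XmR limits.

```python
# A hedged sketch (invented data): split the series at the point-of-change
# and calculate a separate average and XmR limits for each segment.
def xmr_limits(values):
    """Return (mean, lower, upper) XmR natural process limits."""
    mean = sum(values) / len(values)
    amr = sum(abs(b - a) for a, b in zip(values, values[1:])) / (len(values) - 1)
    return mean, mean - 2.66 * amr, mean + 2.66 * amr

daily_yield = [0.62, 0.65, 0.60, 0.64, 0.63, 0.61, 0.66, 0.63,   # before
               0.91, 0.89, 0.92, 0.90, 0.88, 0.93, 0.90, 0.92]   # after
change_point = 8  # index of the first post-change day

for label, segment in (("before", daily_yield[:change_point]),
                       ("after", daily_yield[change_point:])):
    mean, lo, hi = xmr_limits(segment)
    print(f"{label}: mean {mean:.2f}, limits {lo:.2f} to {hi:.2f}")
```

Once each segment is judged against its own limits the red flags vanish: the "after" points are unremarkable members of the new, better voice of the process.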

What we can say is that we have improved average quality yield from 63% to about 90% at no cost using a sequence of process diagnosis, design and delivery.  Study-Plan-Do.

And we have hard evidence that disproves the impossibility hypothesis.


And that was the goal of the first design change – it was not to achieve 100% yield in one jump. Our design simulation had predicted an improvement to about 90%.  And there are other design changes to follow that need this stable foundation to build on.  The order of implementation is critical – and each change needs time to bed in before the next change is made. That is the nature of the challenge of improving a complex adaptive system.

The cost to the department was zero but the benefit was huge.  The bigger benefit to the organisation was felt elsewhere – the ‘customers’ saw a higher quality, quicker process – and there will be a financial benefit for the whole system. It will be difficult to measure with our current financial monitoring systems but it will be real and it will be there – lurking in the data.

The improvement required a trained and experienced Inspector/Designer/Educator to start the wheel of change turning. There are not many of these in the NHS – but the good news is that the first level of this training is now available.

What this means for the post-Francis Report II NHS is that those who want to can choose to leap over the wall of resistance that is being erected by the massing legions of noisy cynics. It means we can all become our own inspectors. It means we can all become our own improvers. It means we can all learn to redesign our systems so that they deliver higher safety, better quality, more quickly and at no extra one-off or recurring cost.  Then none of us need have anything to fear from the Specialist Cadre of Hospital Inspectors.

The writing is on the wall.


15/02/2013 – Two weeks in and still going strong. The yield has improved from 63% to 92% and is stable. Improvement-by-design works.

10/03/2013 – Six weeks in and a good time to test if the improvement has been sustained.

The chart is the weekly performance plotted for 17 weeks before the change and for 5 weeks after. The advantage of weekly aggregated data is that it removes the weekend/weekday 7-day cycle and reduces the effect of day-to-day variation.
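The weekly aggregation itself is simple: pool each week's numerator and denominator before dividing. Here is a hedged sketch with invented counts (chosen to echo the 63%-to-92% story, not taken from the real data).

```python
# A hedged sketch with invented counts: pool each week's numerator and
# denominator so the weekday/weekend 7-day cycle disappears and day-to-day
# variation is damped. Each entry is (tasks_done_on_time, tasks_total).
daily = [(50, 80), (55, 85), (48, 78), (52, 80), (49, 77),   # a "before" week
         (70, 78), (72, 80), (75, 81), (73, 79), (74, 80)]   # an "after" week

def weekly_yield(days, days_per_week=5):
    """One pooled yield figure per week of daily counts."""
    weeks = [days[i:i + days_per_week] for i in range(0, len(days), days_per_week)]
    return [sum(ok for ok, _ in w) / sum(n for _, n in w) for w in weeks]

print(weekly_yield(daily))  # first week ~63%, second week ~91%
```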

The improvement is obvious, significant and has been sustained. This is the objective improvement. More important is the subjective improvement.

Here is what Chris M (departmental operational manager) wrote in an email this week (quoted with permission):

Hi Simon

It is I who need to thank you for explaining to me how to turn our pharmacy performance around and ultimately improve the day to day work for the pharmacy team (and the trust staff). This will increase job satisfaction and make pharmacy a worthwhile career again instead of working in constant pressure with a lack of achievement that had made the team feel rather disheartened and depressed. I feel we can now move onwards and upwards so thanks for the confidence boost.

Best wishes and many thanks

Chris

This is what Improvement Science is all about!

Robert Francis QC

Today is an important day.

The Robert Francis QC Report and recommendations from the Mid-Staffordshire Hospital Crisis has been published – and it is a sobering read.  The emotions that just the executive summary evoked in me were sadness, shame and anger.  Sadness for the patients, relatives, and staff who have been irreversibly damaged; shame that the clinical professionals turned a blind-eye; and anger that the root cause has still not been exposed to public scrutiny.

Click here to get a copy of the RFQC Report Executive Summary.

Click here to see the video of RFQC describing his findings. 

The root cause is ignorance at all levels of the NHS.  Not stupidity. Not malevolence. Just ignorance.

Ignorance of what is possible and ignorance of how to achieve it.

RFQC rightly focusses his recommendations on putting patients at the centre of healthcare and on making those paid to deliver care accountable for the outcomes.  Disappointingly, the report is notably thin on the financial dimension other than saying that financial targets took priority over safety and quality.  He is correct. They did. But the report does not say that this is unnecessary – it just says “in future put safety before finance” and in so doing he does not challenge the belief that we are playing a zero-sum-game. The assumption that higher-quality-always-costs-more.

This assumption is wrong and can easily be disproved.

A system that has been designed to deliver safety-and-quality-on-time-first-time-and-every-time costs less. And it costs less because the cost of errors, checking, rework, queues, investigation, compensation, inspectors, correctors, fixers, chasers, and all the other expensive-high-level-hot-air-generation-machinery that overburdens the NHS and that RFQC has pointed squarely at is unnecessary.  He says “simplify” which is a step in the right direction. The goal is to render it irrelevant.

The ignorance is ignorance of how to design a healthcare system that works right-first-time. The fact that the Francis Report even exists and is pointing its uncomfortable fingers-of-evidence at every level of the NHS from ward to government is tangible proof of this collective ignorance of system design.

And the good news is that this collective ignorance is also unnecessary … because the knowledge of how to design safe-and-affordable systems already exists. We just have to learn how. I call it 6M Design® – but the label is irrelevant – the knowledge exists and the evidence that it works exists.

So here are some of the RFQC recommendations viewed though a 6M Design® lens:       

1.131 Compliance with the fundamental standards should be policed by reference to developing the CQC’s outcomes into a specification of indicators and metrics by which it intends to monitor compliance. These indicators should, where possible, be produced by the National Institute for Health and Clinical Excellence (NICE) in the form of evidence-based procedures and practice which provide a practical means of compliance and of measuring compliance with fundamental standards.

This is the safety-and-quality outcome specification for a healthcare system design – the required outcome presented as a relevant metric in time-series format and qualified by context.  Only a stable outcome can be compared with a reference standard to assess the system capability. An unstable outcome metric requires inquiry to understand the root cause and an appropriate action to restore stability. A stable but incapable outcome performance requires redesign to achieve both stability and capability. And if the terms used above are unfamiliar then that is further evidence of system-design-ignorance.
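That stability-then-capability logic can be written down explicitly. Here is a hedged sketch using simple XmR-style rules and invented data – an illustration of the decision sequence, not a complete capability analysis.

```python
# A hedged sketch of the stability-then-capability logic (simple XmR-style
# rules, invented data): unstable -> investigate the cause first; stable but
# below specification -> redesign; stable and capable -> maintain.
def assess(values, spec_minimum):
    mean = sum(values) / len(values)
    amr = sum(abs(b - a) for a, b in zip(values, values[1:])) / (len(values) - 1)
    lower, upper = mean - 2.66 * amr, mean + 2.66 * amr
    if not all(lower <= v <= upper for v in values):
        return "unstable: investigate the root cause first"
    if lower >= spec_minimum:
        return "stable and capable: maintain"
    return "stable but not capable: redesign"

print(assess([0.62, 0.64, 0.63, 0.61, 0.65, 0.63, 0.62, 0.64], spec_minimum=0.90))
# → stable but not capable: redesign
```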
 
1.132 The procedures and metrics produced by NICE should include evidence-based tools for establishing the staffing needs of each service. These measures need to be readily understood and accepted by the public and healthcare professionals.

This is the capacity-and-cost specification of any healthcare system design – the financial envelope within which the system must operate. The system capacity design works backwards from this constraint in the manner of “We have this much resource – what design of our system is capable of delivering the required safety and quality outcome with this capacity?”  The essence of this challenge is to identify the components of poor (i.e. wasteful) design in the existing systems and remove or replace them with less wasteful designs that achieve the same or better quality outcomes. This is not impossible but it does require system diagnostic and design capability. If the NHS had enough of those skills then the Francis Report would not exist.

1.133 Adoption of these practices, or at least their equivalent, is likely to help ensure patients’ safety. Where NICE is unable to produce relevant procedures, metrics or guidance, assistance could be sought and commissioned from the Royal Colleges or other third-party organisations, as felt appropriate by the CQC, in establishing these procedures and practices to assist compliance with the fundamental standards.

How to implement evidence-based research in the messy real world is the Elephant in the Room. It is possible but it requires techniques and tools that fall outside the traditional research and audit framework – or rather that sit between research and audit. This is where Improvement Science sits. The fact that the Report only mentions evidence-based practice and audit implies that the NHS is still ignorant of this gap and what fills it – and so it appears is RFQC.   

1.136 Information needs to be used effectively by regulators and other stakeholders in the system wherever possible by use of shared databases. Regulators should ensure that they use the valuable information contained in complaints and many other sources. The CQC’s quality risk profile is a valuable tool, but it is not a substitute for active regulatory oversight by inspectors, and is not intended to be.

Databases store data. Sharing databases will share data. Data is not information. Information requires data and the context for that data.  Furthermore having been informed does not imply either knowledge or understanding. So in addition to sharing information, the capability to convert information-into-decision is also required. And the decisions we want are called “wise decisions” which are those that result in actions and inactions that lead inevitably to the intended outcome.  The knowledge of how to do this exists but the NHS seems ignorant of it. So the challenge is one of education not of yet more investigation.

1.137 Inspection should remain the central method for monitoring compliance with fundamental standards. A specialist cadre of hospital inspectors should be established, and consideration needs to be given to collaborative inspections with other agencies and a greater exploitation of peer review techniques.

This is audit. This is the sixth stage of a 6M Design® – the Maintain step.  Inspectors need to know what they are looking for, the errors of commission and the errors of omission;  and to know what those errors imply and what to do to identify and correct the root cause of these errors when discovered. The first cadre of inspectors will need to be fully trained in healthcare systems design and healthcare systems improvement – in short – they need to be Healthcare Improvementologists. And they too will need to be subject to the same framework of accreditation, and accountability as those who work in the system they are inspecting.  This will be one of the greatest of the challenges. The fact that the Francis report exists implies that we do not have such a cadre. Who will train, accredit and inspect the inspectors? Who has proven themselves competent in reality (not rhetorically)?

1.163 Responsibility for driving improvement in the quality of service should therefore rest with the commissioners through their commissioning arrangements. Commissioners should promote improvement by requiring compliance with enhanced standards that demand more of the provider than the fundamental standards.

This means that commissioners will need to understand what improvement requires and to include that expectation in their commissioning contracts. This challenge is even greater than the creation of a “cadre of inspectors”. What is required is a “generation of competent commissioners” who are also experienced and who have demonstrated competence in healthcare system design. The Commissioners-of-the-Future will need to be experienced healthcare improvementologists.

The NHS is sick – very sick. The medicine it needs to restore its health and vitality does exist – and it will not taste very nice – but to withhold an effective treatment for a serious illness on that basis is clinical negligence.

It is time for the NHS to look in the mirror and take the strong medicine. The effect is quick – it will start to feel better almost immediately. 

To deliver safety and quality and quickly and affordably is possible – and if you do not believe that then you will need to muster the humility to ask to have the how demonstrated.

6MDesign

 

Kicking the Habit

It is not easy to kick a habit. We all know that. And for some reason the ‘bad’ habits are harder to kick than the ‘good’ ones. So what is bad about a ‘bad habit’ and why is it harder to give up? Surely if it was really bad it would be easier to give up?

Improvement is all about giving up old ‘bad’ habits and replacing them with new ‘good’ habits – ones that will sustain the improvement. But there is an invisible barrier that resists us changing any habit – good or bad. And it is that barrier to habit-breaking that we need to understand to succeed. Luck is not a reliable ally.

What does that habit-breaking barrier look like?

The problem is that it is invisible – or rather it is emotional – or to be precise it is chemical.

Our emotions are the output of a fantastically complex chemical system – our brains. And influencing the chemical balance of our brains can have a profound effect on our emotions.  That is how anti-depressants work – they very slightly adjust the chemical balance of every part of our brains. The cumulative effect is that we feel happier.  Nicotine has a similar effect.

And we can achieve the same effect without resorting to drugs or fags – and we can do that by consciously practising some new mental habits until they become ingrained and unconscious. We literally overwrite the old mental habit.

So how do we do this?

First we need to make the mental barrier visible – and then we can focus our attention on eroding it. To do that we need to remove the psychological filter that we all use to exclude our emotions. It is rather like taking off our psychological sunglasses.

When we do that the invisible barrier jumps into view: illuminated by the glare of three negative emotions.  Sadness, fear, and anxiety.  So whenever we feel any of these we know there is a barrier to improvement hiding in the emotional smoke. This is the first stage: tune in to our emotions.

The next step is counter-intuitive. Instead of running away from the negative feeling we consciously flip into a different way of thinking.  We actively engage with our negative feelings – and in a very specific way. We engage in a detached, unemotional, logical, rational, analytical  ‘What caused that negative feeling?’ way.

We then focus on the causes of the negative emotions. And when we have the root causes of our Niggles we design around them, under them, and over them.  We literally design them out of our heads.

The effect is like magic.

And this week I witnessed a real example of this principle in action.

One team I am working with experienced the Power of Improvementology. They saw the effect with their own eyes.  There were no computers in the way, no delays, no distortion and no deletion of data to cloud the issue. They saw the performance of their process jump dramatically – from a success rate of 60% to 96%!  And not just the first day, the second day too.  “Surprised and delighted” sums up their reaction.

So how did we achieve this miracle?

We just looked at the process through a different lens – one not clouded and misshapen by old assumptions and blackened by ignorance of what is possible.  We used the 6M Design® lens – and with the clarity of insight it brings the barriers to improvement became obvious. And they were dissolved. In seconds.

Success then flowed as the Dam of Disbelief crumbled and was washed away.

The chaos has gone. The interruptions have gone. The expediting has gone. The firefighting has gone. The complaining has gone.  These chronic Niggles have been replaced by the Nuggets of calm efficiency, new hope and visible excitement.

And we know that others have noticed the knock-on effect because we got an email from our senior executive that said simply “No one has moaned about TTOs for two days … something has changed.”    

That is Improvementology-in-Action.

 

A Ray Of Hope

It does not seem to take much to bring a real system to an almost standstill.  Six inches of snow falling between 10 AM and 2 PM on a Friday in January seems to be enough!

It was not so much the amount of snow – it was the timing.  The decision to close many schools was not made until after the pupils had arrived – and it created a logistical nightmare for parents. 

Many people suddenly needed to get home earlier than they expected, which created an early rush hour and gridlocked the road system.

The same number of people travelled the same distance in the same way as they would normally – it just took them a lot longer.  And the queues created more problems as people tried to find work-arounds to bypass the traffic jams.

How many thousands of hours of life-time were wasted sitting in near-stationary queues of cars? How many millions of pounds’ worth of productivity were lost? How much will the catch-up cost?

And yet while we grumble we shrug our shoulders and say “It is just one of those things. We cannot control the weather. We just have to grin and bear it.”  

Actually we do not have to. And we do not need a weather machine to control the weather. Mother Nature is what it is.

Exactly the same behaviour happens in many systems – and our conclusion is the same.  We assume the chaos and queues are inevitable.

They are not.

They are symptoms of the system design – and specifically they are the inevitable outcomes of the time-design.

But it is tricky to visualise the time-design of a system.  We can see the manifestations of the poor time-design, the queues and chaos, but we do not so easily perceive the causes. So the poor time-design persists. We are not completely useless though; there are lots of obvious things we can do. We can devise ingenious ways to manage the queues; we can build warehouses to hold the queues; we can track the jobs in the queues using sophisticated and expensive information technology; we can identify the hot spots; we can recruit and deploy expediters, problem-solvers and fire-fighters to facilitate the flow through the hottest of them; and we can pump capacity and money into defences, drains and dramatics. And our efforts seem to work so we congratulate ourselves and conclude that these actions are the only ones that work.  And we keep clamouring for more and more resources. More capacity, MORE capacity, MORE CAPACITY.

Until we run out of money!

And then we have to stop asking for more. And then we start rationing. And then we start cost-cutting. And then the chaos and queues get worse. 

And all the time we are not aware that our initial assumptions were wrong.

The chaos and queues are not inevitable. They are a sign of the time-design of our system. So we do have other options.  We can improve the time-design of our system. We do not need to change the safety-design; nor the quality-design; nor the money-design.  Just improving the time-design will be enough. For now.

So the $64,000,000 question is “How?”

Before we explore that we need to demonstrate what is possible. How big is the prize?

The class of system design problems that cause particular angst is called mixed-priority mixed-complexity crossed-stream designs.  We encounter dozens of them in our daily life and we are not aware of it.  One of particular interest to many is called a hospital. The mixed-priority dimension is the need to manage some patients as emergencies, some as urgent and some as routine. The mixed-complexity dimension is that some patients are easy and some are complex. The crossed-stream dimension is the aggregation of specialised resources into departments. Expensive equipment and specific expertise.  We then attempt to push patients with different priorities along different paths through these different departments. And it is a management nightmare!

Our usual and “obvious” response to this challenge is called a carve-out design. And that means we chop up our available resource capacity into chunks.  And we do that in two ways: chunks of time and chunks of space.  We try to simplify the problem by dissecting it into bits that we can understand. We separate the emergency departments from the  planned-care facilities. We separate outpatients from inpatients. We separate medicine from surgery – and we then intellectually dissect our patients into organ systems: brains, lungs, hearts, guts, bones, skin, and so on – and we create separate departments for each one. Neurology, Respiratory, Cardiology, Gastroenterology, Orthopaedics, Dermatology to list just a few. And then we become locked into the carve-out design silos like prisoners in cages of our own making.

And so it is within the departments that are sub-systems of the bigger system. Simplification, dissection and separation. Ad absurdam.

The major drawback with our carve-up design strategy is that it actually makes the system more complicated.  The number of necessary links between the separate parts grows exponentially.  And each link can hold a small queue of waiting tasks – just as each side road can hold a queue of waiting cars. The collective complexity is incomprehensible. The cumulative queue is enormous. The opportunity for confusion and error grows exponentially. Safety and quality fall and cost rises. Carve-out is an inferior time-design.

But our goal is correct: we do need to simplify the system so that means simplifying the time-design.

To illustrate the potential of this ‘simplify the time-design’ approach we need a real example.

One way to do this is to create a real system with lots of carve-out time-design built into it and then we can observe how it behaves – in reality. A carefully designed Table Top Game is one way to do this – one where the players have defined Roles and by following the Rules they collectively create a real system that we can map, measure and modify. With our Table Top Team trained and ready to go we then pump realistic tasks into our realistic system and measure how long they take in reality to appear out of the other side. And we then use the real data to plot some real time-series charts. Not theoretical general ones – real specific ones. And then we use the actual charts to diagnose the actual causes of the actual queues and actual chaos.

This is the time-series chart of a real Time-Design Game that has been designed using an actual hospital department and real observation data.  Which department it was is not of importance because it could have been one of many. Carve-out is everywhere.

During one run of the Game the Team processed 186 tasks and the chart shows how long each task took from arriving to leaving (the game was designed to do the work in seconds when in the real department it took minutes – and this was done so that one working day could be condensed from 8 hours into 8 minutes!)

There was a mix of priority: some tasks were more urgent than others. There was a mix of complexity: some tasks required more steps than others. The paths crossed at separate steps where different people did defined work using different skills and special equipment.  There were handoffs between all of the steps on all of the streams. There were lots of links. There were many queues. There were ample opportunities for confusion and errors.

But the design of the real process was such that the work was delivered to a high quality – there were very few output errors. The yield was very high. The design was effective. The resources required to achieve this quality were represented by the hours of people-time availability – the capacity. The cost. And the work was stressful, chaotic, pressured, and important – so it got done. Everyone was busy. Everyone pulled together. They helped each other out. They were not idle. They were a good team. The design was efficient.

The thin blue line on the time-series chart is the “time target” set by the Organisation.  But the effective and efficient system design only achieved it 77% of the time.  So the “obvious” solution was to clamour for more people and for more space and for more equipment so that the work can be done more quickly to deliver more jobs on-time.  Unfortunately the Rules of the Time-Design Game do not allow this more-money option. There is no more money.

To succeed at the Time-Design Game the team must find a way to improve their delivery time performance with the capacity they have and also to deliver the same quality.  But this is impossible! If it were possible then the solution would be obvious and they would be doing it already. No one can succeed at the Time-Design Game.

Wrong. It is possible.  And the assumption that the solution is obvious is incorrect. The solution is not obvious – at least to the untrained eye.

To the trained eye the time-series chart shows the characteristic signals of a carve-out time-design. The high task-to-task variation is highly suggestive, as is the pattern of some of the earlier arrivals having a longer lead time. An experienced system designer can diagnose a carve-out time-design from a set of time-series charts of a process just as a doctor can diagnose the disease from the vital signs chart for a patient.  And when the diagnosis is confirmed with a verification test then the Time-ReDesign phase can start.

This chart shows what happened after the time-design of the system was changed – after some of the carve-out design was modified. The Y-axis scale is the same as before – and the delivery time improvement is dramatic. The Time-ReDesigned system is now delivering 98% achievement of the “on time target”.

The important thing to be aware of is that exactly the same work was done, using exactly the same steps, and exactly the same resources. No one had to be retrained, released or recruited.  The quality was not impaired. And the cost was actually less because less overtime was needed to mop up the spillover of work at the end of the day.

And the Time-ReDesigned system feels better to work in. It is not chaotic; flow is much smoother; and it is busy yet relaxed and even fun.  The same activity is achieved by the same people doing the same work in the same sequence. Only the Time-Design has changed. A change that delivered a win for the workers!

What was the impact of this cost-saving improvement on the customers of this service? They can now be 98% confident that they will get their task completed correctly in less than 120 minutes.  Before the Time-ReDesign the 98% confidence limit was 470 minutes! So this is a win for the customers too!
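For the curious, a "98% confidence limit" of this kind can be read as the 98th percentile of the lead-time distribution: the time within which 98% of tasks complete. Here is a hedged sketch with simulated lead times (the 470-minute and 120-minute figures above are real; the distributions below are invented stand-ins).

```python
# A hedged sketch: read the "98% confidence limit" as the 98th percentile
# of the lead-time distribution - the time within which 98% of tasks
# complete. The simulated lead times below are invented stand-ins.
import math
import random

random.seed(1)
before = [random.gauss(250, 90) for _ in range(200)]  # minutes, old design
after = [random.gauss(80, 15) for _ in range(200)]    # minutes, new design

def percentile(values, p):
    """Nearest-rank percentile: the smallest value covering p of the data."""
    ordered = sorted(values)
    k = min(len(ordered), max(1, math.ceil(p * len(ordered))))
    return ordered[k - 1]

print(f"old design: 98% of tasks done within {percentile(before, 0.98):.0f} min")
print(f"new design: 98% of tasks done within {percentile(after, 0.98):.0f} min")
```

The point is not the exact numbers – it is that the tail of the distribution, not the average, is what the customer experiences.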

And the Time-ReDesigned system is less expensive so it is a win for whoever is paying.

Same safety and quality, quicker with less variation, and at lower cost. Win-Win-Win.

And the usual reaction to playing the Time-ReDesign Game is incredulous disbelief.  Some describe it as a “light bulb” moment when they see how the diagnosis of the carve-out time-design is made and how the Time-ReDesign is done. They say “If I had not seen it with my own eyes I would not have believed it.” And they say “The solutions are simple but not obvious!” And they say “I wish I had learned this years ago!”  And they apologise for being so skeptical before.

And there are those who are too complacent, too careful or too cynical to play the Time-ReDesign Game (which is about 80% of people actually) – and who deny themselves the opportunity of a win-win-win outcome. And that is their choice. They can continue to grin and bear it – for a while longer.     

And for the 20% who want to learn how to do Time ReDesign for real in their actual systems there is now a Ray Of Hope.

And the Ray of Hope is illuminating a signpost on which is written “This Way to Improvementology“. 

Quality First or Time First?

Before we explore this question we need to establish something. If the issue is Safety then that always goes First – and by safety we mean “a risk of harm that everyone agrees is unacceptable”.


Many Improvement Zealots state dogmatically that the only way reach the Nirvanah of “Right Thing – On Time – On Budget” is to focus on Quality First.

This is incorrect.  And what makes it incorrect is the word only.

Experience teaches us that it is impossible to divert people to focus on quality when everyone is too busy just keeping afloat. If they stop to do something else then they will drown. And they know it.

The critical word here is busy.

‘Busy’ means that everyone is spending all their time doing stuff – important stuff – the work, the checking, the correcting, the expediting, the problem solving, and the fire-fighting. They are all busy all of the time.

So when a Quality Zealot breezes in and proclaims ‘You should always focus on quality first … that will solve all the problems’ then the reaction they get is predictable. The weary workers listen with their arms crossed, roll their eyes, exchange knowing glances, sigh, shrug, shake their heads, grit their teeth, and trudge back to fire-fighting. Their scepticism and cynicism has been cut a notch deeper. And the weary workers get labelled as ‘Not Interested In Quality’ and ‘Resisting Change’ and ‘Laggards’ by the Quality Zealot who has spent more time studying and regurgitating rhetoric than investing time in observing and understanding reality.

The problem here is the seemingly innocuous word ‘always’. It is too absolute. Too black-and-white. Too dogmatic. Too simple.

Sometimes focussing on Quality First is a wise decision. And that situation is when there is low quality and idle time – when there is some spare capacity to re-invest in understanding the root causes of the quality issues, in designing them out of the process, and in implementing the design changes.

But when everyone is busy – when there is no idle-time – then focussing on quality first is not a wise decision because it can actually make the problem worse!

[The Quality Zealots will now be turning a strange red colour, steam will be erupting from their ears and sparks will be coming from their finger-tips as they reach for their keyboards to silence the heretical anti-quality lunatic. “Burn, burn, burn” they rant]. 

When everyone is busy then the first thing to focus on is Time.

And because everyone is busy then the person doing the Focus-on-Time stuff must be someone else. Someone like an Improvementologist.  The Quality Zealot is a liability at this stage – but they become an asset later when the chaos has calmed.

And what our Improvementologist is looking for are queues – also known as Work-in-Progress or WIP.

Why WIP?  Why not where the work is happening? Why not focus on resource utilisation? Isn’t that a time metric?

Yes, resource utilisation is a time-related metric but because everyone is busy then resource utilisation will be high. So looking at utilisation will only confirm what we already know.  And everyone is busy doing important stuff – they are not stupid – they are busy and they are doing their best given the constraints of their process design.        

The queue is where an Improvementologist will direct attention first.  And the specific focus of their attention is the cause of the queue.

This is because there is only one cause of a queue: a mismatch-over-time between demand and activity.

So, the critical first step to diagnosing the cause of a queue is to make the flow visible – to plot the time-series charts of demand, activity and WIP.  Until that is done no progress will be made with understanding what is happening and it will be impossible to decide what to do. We need a diagnosis before we can treat. And to get a diagnosis we need data from an examination of our process; and we need data on the history of how it has developed. And we need to know how to convert that data into information, and then into understanding, and then into design options, and then into a wise decision, and then into action, and then into improvement.
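The arithmetic behind the ‘one cause of a queue’ claim is simple enough to sketch in a few lines. Here is a minimal illustration (in Python, with invented daily counts) of how even a tiny persistent mismatch between demand and activity makes the queue grow without limit:

```python
# A queue (WIP) changes only through the demand-activity mismatch:
# WIP[t] = WIP[t-1] + demand[t] - activity[t]

def wip_series(demand, activity, wip_start=0):
    """Return the Work-in-Progress after each time period."""
    wip, series = wip_start, []
    for d, a in zip(demand, activity):
        wip += d - a          # the daily mismatch is all that moves the queue
        series.append(wip)
    return series

# Invented example: 21 requests arrive each day but only 20 get completed.
demand = [21] * 10
activity = [20] * 10
print(wip_series(demand, activity))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

A mismatch of just one request per day looks negligible on any single day, yet the WIP chart shows a relentless upward ramp – which is exactly why the queue, not the utilisation figure, is where the diagnostic signal lives.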

And we now know how to spot an experienced Improvementologist: the first thing they will look for is the Queues, not the Quality.

But why bother with the flow and the queues at all? Customers are not interested in them! If time is the focus then surely it is turnaround times and waiting times that we need to measure! Then we can compare our performance with our ‘target’ and if it is out of range we can then apply the necessary ‘pressure’!

This is indeed what we observe. So let us explore the pros and cons of this approach with an example.

We are the manager of a support department that receives requests, processes them and delivers the output back to the sender. We could be one of many support departments in an organisation:  human resources, procurement, supplies, finance, IT, estates and so on. We are the Backroom Brigade. We are the unsung heroes and heroines.

The requests for our service come in different flavours – some are easy to deal with, others are more complex.  They also come with different priorities – urgent, soon and routine. And they arrive as a mixture of dribbles and deluges.  Our job is to deliver high quality work (i.e. no errors) within the delivery time expected by the originator of the request (i.e. on time). If  we do that then we do not get complaints (but we do not get compliments either).

From the outside things look mostly OK.  We deliver mostly on quality and mostly on time. But on the inside our department is in chaos! Every day brings a new fire to fight. Everyone is busy and the pressure and chaos are relentless. We are keeping our head above water – but only just.  We do not enjoy our work-life. It is not fun. Our people are miserable too. Some leave – others complain – others just come to work, do stuff, take the money and go home – like Zombies. They comply.

Once in the past we were seduced by the sweet talk of a Quality Zealot. We were promised Nirvana. We were advised to look at the quality of the requests that we get. And this suggestion resonated with us because we were very aware that the requests were of variable quality. Our people had to spend time checking-and-correcting them before we could process them.  The extra checking had improved the quality of what we deliver – but it had increased our costs too.

So the Quality Zealot told us we should work more closely with our customers and ‘swim upstream’ to prevent the quality problems getting to us in the first place. So we sent some of our most experienced and most expensive Inspectors to paddle upstream. But our customers were also very busy and, much as they would have liked, they did not have time to focus on quality either. So our Inspectors started doing the checking-and-correcting for our customers. Our people are now working for our customers but we still pay their wages.

And we do not have enough Inspectors to check-and-correct all the requests at source, so we still need to keep a skeleton crew of Inspectors in the department. These stay-at-home Inspectors are stretched too thin and their job is too pressured and too stressful. So no one wants to do it. And given the choice they would all rather paddle out to the customers first thing in the morning to give them as much time as possible to check-and-correct the requests so the day’s work can be completed on time.

It all sounds perfectly logical and rational – but it does not seem to have worked as promised. The stay-at-home Inspectors can only keep up with the more urgent work, delivery of the less urgent work suffers, and the chronic chaos and fire-fighting are now aggravated by a stream of interruptions from customers asking when their ‘non-urgent’ requests will be completed.

The Quality Zealot insisted we should always answer the phone to our customers – so we take the calls – we expedite the requests – we solve the problems – and we fight-the-fire.  Day, after day, after day.

We now know what Purgatory means. Retirement with a pension or voluntary redundancy with a package are looking more attractive – if only we can keep going long enough.

And the last thing we need is more external inspection, more targets, and more expensive Quality Zealots telling us what to do! 

And when we go and look we see a workplace that appears just as chaotic and stressful and angry as we feel. There are heaps of work in progress everywhere – the phone is always ringing – and our people are running around like headless chickens, expediting, fire-fighting and getting burned-out: physically and emotionally. And we feel powerless to stop it. So we hide.

Does this fictional fiasco feel familiar? It is called the Miserable Job Purgatory Vortex.

Now we know the characteristic pattern of symptoms and signs:  constant pressure of work, ever present threat of quality failure, everyone busy, just managing to cope, target-stick-and-carrot management, a miserable job, and demotivated people.

The issue here is that the queues are causing some of the low quality. It is not always low quality that causes all of the queues.

Queues create delays, which generate interruptions, which force investigation, which generates expediting, which takes time from doing the work, which consumes required capacity, which reduces activity, which increases the demand-activity mismatch, which increases the queue, which increases the delay – and so on. It is a vicious circle. And interruptions are a fertile source of internally generated errors which generate even more checking and correcting which uses up even more required capacity which makes the queues grow even faster and longer. Round and round.  The cries of ‘we need more capacity’ get louder. It is all hands to the pump – but even then eventually there is a crisis. A big mistake happens. Then Senior Management get named-blamed-and-shamed, money magically appears and is thrown at the problem, capacity increases, the symptoms settle, the cries for more capacity go quiet – but productivity has dropped another notch. Eventually the financial crunch arrives.

One symptom of this ‘reactive fire-fight design’ is that people get used to working late to catch up at the end of the day so that the next day they can start the whole rollercoaster ride again. And again. And again. At least that is a form of stability. We can expect tomorrow to be just as miserable as today and yesterday and the day before that. But TOIL (Time Off In Lieu) costs money.

The way out of the Miserable Job Purgatory Vortex is to diagnose what is causing the queue – and to treat that first.

And that means focussing on Time first – and that means Focussing on Flow first.  And by doing that we will improve delivery, improve quality and improve cost because chaotic systems generate errors which need checking and correcting which costs more. Time first is a win-win-win strategy too.

And we already have everything we need to start. We can easily count what comes in and when and what goes out and when.

The first step is to plot the inflow over time (the demand), the outflow over time (the activity), and from that we work out and plot the Work-in-Progress over time. With these three charts we can start the diagnostic process and by that path we can calm the chaos.
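As a concrete sketch of that first step (in Python, using invented arrival and delivery logs – real departments would substitute their own request records): count the inflow and outflow per day, then derive WIP as the running difference between the two cumulative totals.

```python
from collections import Counter
from itertools import accumulate

# Invented raw logs: the day each request arrived and the day each was delivered.
arrivals   = ["Mon", "Mon", "Tue", "Tue", "Tue", "Wed", "Thu", "Thu"]
deliveries = ["Mon", "Tue", "Wed", "Wed", "Thu", "Thu"]
days = ["Mon", "Tue", "Wed", "Thu"]

demand   = [Counter(arrivals)[d] for d in days]    # inflow per day
activity = [Counter(deliveries)[d] for d in days]  # outflow per day

# WIP is the running difference between cumulative inflow and cumulative outflow.
wip = [cd - ca for cd, ca in zip(accumulate(demand), accumulate(activity))]

for name, series in [("Demand", demand), ("Activity", activity), ("WIP", wip)]:
    print(f"{name:>8}: {series}")
```

Those three lists are the three time-series charts – plotted over weeks rather than four invented days, they reveal whether the queue is growing, stable, or cyclical, which is the diagnostic information the averages and targets hide.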

And then we can set to work on the Quality Improvement.  


13/01/2013: Newspapers report that 17 hospitals are “dangerously understaffed”.  Sound familiar?

Next week we will explore how to diagnose the root cause of a queue using Time charts.

For an example to explore please play the SystemFlow Game by clicking here