“Old habits die hard” – so the saying goes – but not all habits are bad. Most are good.

And in our quest for improvement sometimes we have to challenge a good habit and replace it with an even better one. And doing that is tough – much tougher than challenging a bad habit.

Sometimes the challenge to our comfort zone comes from Reality. We suddenly lose something very dear to us that has become such an integral and important part of our lives that when it is taken away we feel the acute pain of loss. We are left with an open emotional wound and we have to give ourselves time and space to recover and to heal.

With the clarity of hindsight we can see that we knew all along what would happen – we just did not know when it would happen – and we were in a state of hope-for-the-best-for-now. After all, why suffer the perpetual pain of worry when the outcome is inevitable? Well, it may be inevitable but it does not mean it needs to be imminent! So a healthy dose of anxiety is OK. Complacency is the precursor to a catastrophe and most of our catastrophes are preventable. Keeping busy doing what we have always done is not an effective strategy for warding off a preventable catastrophe.

A more effective strategy is to worry just enough to keep our complacency level low and to keep us alert to threats because in averting these we are forced to challenge ourselves and in doing that we discover hidden opportunities.

The outcome is renewal.

Sometimes though we have to learn the lessons of life the hard way.


Improvement requires change and change requires learning – so knowing how to guide learning is an essential skill for an improvement scientist.

There is a common belief that we learn by watching and listening – and therefore that we can teach by showing and talking. This belief is incorrect. We all learn by doing something different and comparing what we perceived with what we predicted. So what prompts us to do something different?  The answer is we are nudged.

We learn and change over time as a result of a series of small nudges – the effects of which add up. We can simulate this behaviour easily.

Find a tray and a piece of kitchen paper and draw two circles on the paper. Put the paper on the tray and then put a heap of granulated sugar on the leftmost circle. It will stay where it is placed. Hold the tray horizontal and nudge the tray repeatedly by tapping on its edge with a finger. The heap of sugar will spread out in all directions – and only a small proportion goes towards the second circle – the intended direction of improvement.

Now repeat the simulation but this time tilt the tray slightly in the direction of improvement so that the heap stays put – and then nudge the tray. The heap of sugar will spread out and more will move in the direction of the second circle – the improvement goal.  The nudging is necessary but it is not sufficient – a tilt in the intended direction of improvement is also necessary but not sufficient. Actual improvement requires both.

Life provides a continuous series of random nudges – so in reality all that is needed to improve is to set the direction of tilt – which implies making it easier to move in the direction of improvement than away from it. Setting the direction of tilt is one facet of leadership – and it requires aligning the reward with the improvement. Very often this is not done and improvement becomes an uphill struggle that is unsustainable and unmaintainable.
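The tilt-and-nudge behaviour can be sketched as a toy random walk: each nudge is a random step, and the tilt is a small bias towards the goal. The grain count, nudge count and bias values below are illustrative assumptions, not measurements.

```python
import random

def simulate(grains=1000, nudges=200, tilt=0.0, seed=42):
    """Track the mean position of 'grains' after repeated random nudges.

    Each nudge moves a grain one step left or right at random;
    'tilt' biases the probability of moving towards the goal
    (the positive direction).
    """
    rng = random.Random(seed)
    positions = [0.0] * grains
    for _ in range(nudges):
        for i in range(grains):
            # nudge: a random step; tilt: a bias towards the goal
            step = 1 if rng.random() < 0.5 + tilt else -1
            positions[i] += step
    return sum(positions) / grains

flat = simulate(tilt=0.0)    # nudges only: spreads, little net progress
tilted = simulate(tilt=0.1)  # nudges plus tilt: net drift towards the goal
print(f"mean position, flat tray:   {flat:+.1f}")
print(f"mean position, tilted tray: {tilted:+.1f}")
```

With no tilt the heap spreads symmetrically and the mean position stays near zero; with even a small tilt the same nudges produce a steady drift in the intended direction.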

Even when the reward is aligned with the improvement we cannot guarantee success – there is another factor.

Now repeat the sugar flow simulation and this time create a physical barrier between the heap and the goal – such as a row of sugar cubes or a fold in the kitchen paper. Create a barrier that the tilting and nudging is not strong enough to move. Now the sugar flow will be blocked by the barrier and our temptation is to increase the tilt and apply bigger nudges – but this increase-the-pressure-by-pushing-harder strategy has a risk because when the barrier eventually breaks the backlog of sugar lurches forward in an uncontrolled surge. Uncontrolled improvement is not what we want.

So the second role of the improvement scientist is to help to remove the barriers – and this requires a more focussed action than a tilt or a nudge. It requires a poke.

Pokes are uncomfortable for the poker and for the pokee – and the skill is to master the art of the positive poke. Negative pokes are surprising, emotionally painful and result in an angry reaction which damages the pokee. Positive pokes are surprising, emotionally uncomfortable and result in an excited proaction which develops the pokee.

So now poke the barrier where it crosses the line that joins the two circles so that it is reduced or removed at that point – and then tilt and nudge as before. The backlog of sugar will funnel through the gap in the barrier in a well-focussed stream in the direction of improvement. The barrier actually helps to direct the flow so a precise poke is necessary.

The effective improvement scientist needs to know how to tilt, when to nudge and where to poke.



Improvement Science is not just about removing the barriers that block improvement and building barriers to prevent deterioration – it is also about maintaining acceptable, stable and predictable performance.

In fact most of the time this is what we need our systems to do so that we can focus our attention on the areas for improvement rather than running around keeping all the plates spinning.  Improving the ability of a system to maintain itself is a worthwhile and necessary objective.

Long term stability cannot be achieved by assuming a stable context and creating a rigid solution because the World is always changing. Long term stability is achieved by creating resilient solutions that can adjust their behaviour, within limits, to their ever-changing context.

This self-adjusting behaviour of a system is called homeostasis.

The foundation for the concept of homeostasis was first proposed by Claude Bernard (1813-1878) who unlike most of his contemporaries, believed that all living creatures were bound by the same physical laws as inanimate matter.  In his words: “La fixité du milieu intérieur est la condition d’une vie libre et indépendante” (“The constancy of the internal environment is the condition for a free and independent life”).

The term homeostasis is attributed to Walter Bradford Cannon (1871-1945) who was a professor of physiology at Harvard medical school and who popularized his theories in a book called The Wisdom of the Body (1932). Cannon described four principles of homeostasis:

  1. Constancy in an open system requires mechanisms that act to maintain this constancy.
  2. Steady-state conditions require that any tendency toward change automatically meets with factors that resist change.
  3. The regulating system that determines the homeostatic state consists of a number of cooperating mechanisms acting simultaneously or successively.
  4. Homeostasis does not occur by chance, but is the result of organised self-government.

Homeostasis is therefore an emergent behaviour of a system and is the result of organised, cooperating, automatic mechanisms. We know this by another name – feedback control – which is passing data from one part of a system to guide the actions of another part. Any system that does not have homeostatic feedback loops as part of its design will be inherently unstable – especially in a changing environment.  And unstable means untrustworthy.
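A feedback loop of this kind can be sketched as a toy proportional controller. The setpoint, gain and drift values below are illustrative assumptions, loosely echoing body-temperature regulation; they are not taken from physiology.

```python
def run(steps=50, setpoint=37.0, gain=0.5, drift=-0.3):
    """A minimal homeostasis sketch: proportional feedback control.

    'drift' is a constant environmental disturbance (e.g. heat loss).
    With gain=0 there is no feedback loop and the state drifts away;
    with gain>0 the mechanism pushes the state back towards the setpoint.
    """
    state = setpoint
    for _ in range(steps):
        error = setpoint - state       # feedback signal: measure the gap
        correction = gain * error      # act on the signal
        state += drift + correction    # environment plus corrective action
    return state

no_feedback = run(gain=0.0)    # open loop: drifts steadily off course
with_feedback = run(gain=0.5)  # closed loop: settles near the setpoint
print(f"without feedback: {no_feedback:.1f}")
print(f"with feedback:    {with_feedback:.1f}")
```

The open-loop system ends far from the setpoint; the closed-loop one settles close to it despite the same relentless disturbance – which is exactly what Cannon's "factors that resist change" describes.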

Take driving for example. Our vehicle and its trusting passengers want to get to their desired destination on time and in one piece. To achieve this we will need to keep our vehicle within the boundaries of the road – the white lines – in order to avoid “disappointment”.

As their trusted driver our feedback loop consists of a view of the road ahead via the front windscreen; our vision connected through a working nervous system to the muscles in our arms and legs; to the steering wheel, accelerator and brakes; then to the engine, transmission, wheels and tyres and finally to the road underneath the wheels. It is quite a complicated multi-step feedback system – but an effective one. The road can change direction and unpredictable things can happen and we can adapt, adjust and remain in control.  An inferior feedback design would be to use only the rear-view mirror and to steer by looking at the white lines emerging from behind us. This design is just as complicated but it is much less effective and much less safe because it is entirely reactive.  We get no early warning of what we are approaching.  So, any system that uses the output performance as the feedback loop to the input decision step is like driving with just a rear-view mirror.  Complex, expensive, unstable, ineffective and unsafe.

As the number of steps in a process increases, the design of the feedback stabilisation becomes more important – as does the number of ways we can get it wrong:  Wrong feedback signal, or from the wrong place, or to the wrong place, or at the wrong time, or with the wrong interpretation – any of which result in the wrong decision, the wrong action and the wrong outcome. Getting it right means getting all of it right all of the time – not just some of it right some of the time. We can’t leave it to chance – we have to design it to work.

Let us consider a real example. The NHS 18-week performance requirement.

The stream map shows a simple system with two parallel streams, A and B, each with two steps, 1 and 2. A typical example would be generic referral of patients for investigations and treatment to one of a number of consultants who offer that service. The two streams do the same thing so the first step of the system is to decide which way to direct new tasks – to Step A1 or to Step B1. The whole system is required to deliver completed tasks in less than 18 weeks (18/52) – irrespective of which stream we direct work into.   What feedback data do we use to decide where to direct the next referral?

The do nothing option is to just allocate work without using any feedback. We might do that randomly, alternately or by some other means that are independent of the system.  This is called a push design and is equivalent to driving with your eyes shut but relying on hope and luck for a favourable outcome. We will know when we have got it wrong – but it is too late then – we have crashed the system! 

A more plausible option is to use the waiting time for the first step as the feedback signal – streaming work to the first step with the shortest waiting time. This makes sense because the time waiting for the first step is part of the lead time for the whole stream so minimising this first wait feels reasonable – and it is – BUT only in one situation: when the first steps are the constraint steps in both streams [the constraint step is the one that defines the maximum stream flow].  If this condition is not met then we are heading for trouble and the map above illustrates why. In this case Stream A is just failing the 18-week performance target but because the waiting time for Step A1 is the shorter we would continue to load more work onto the failing stream – and literally push it over the edge. In contrast Stream B is not failing and because the waiting time for Step B1 is the longer it is not being overloaded – it may even be underloaded.  So this “plausible” feedback design can actually make the system less stable. Oops!

In our transport metaphor – this is like driving too fast at night or in fog – only being able to see what is immediately ahead – and then braking and swerving to get around corners when they “suddenly” appear and running off the road unintentionally! Dangerous and expensive.

With this new insight we might now reasonably suggest using the actual output performance to decide which way to direct new work – but this is back to driving by watching the rear-view mirror!  So what is the answer?

The solution is to design the system to use the most appropriate feedback signal to guide the streaming decision. That feedback signal needs to be forward looking, responsive and to lead to stable and equitable performance of the whole system – and it may originate from inside the system. The diagram above holds the hint: the predicted waiting time for the second step would be a better choice.  Please note that I said the predicted waiting time – which is estimated when the task leaves Step 1 and joins the back of the queue between Step 1 and Step 2. It is not the actual time the most recent task came off the queue: that is rear-view mirror gazing again.
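A toy week-by-week simulation can show the difference between the two routing policies. All the capacities below are invented for illustration – Step 2 is made the constraint step in both streams, as in the scenario above – and are not taken from any real NHS system.

```python
def simulate(policy, weeks=100, arrivals_per_week=7):
    """Two parallel streams (A, B), each with steps 1 and 2.

    Illustrative capacities (tasks served per week): the Step 1
    capacities exceed demand, so the Step 2 queues are where the
    constraint bites - the situation described in the text.
    """
    cap = {"A1": 8, "A2": 3, "B1": 8, "B2": 5}
    q = {"A1": 0, "A2": 0, "B1": 0, "B2": 0}   # queue lengths
    for _ in range(weeks):
        for _ in range(arrivals_per_week):
            q[policy(q, cap) + "1"] += 1        # route each new task
        for s in "AB":                          # then serve each step
            moved = min(q[s + "1"], cap[s + "1"])
            q[s + "1"] -= moved
            q[s + "2"] += moved
            q[s + "2"] -= min(q[s + "2"], cap[s + "2"])
    return q

def by_first_queue(q, cap):
    # plausible but flawed: pick the stream with the shorter Step 1 queue
    return "A" if q["A1"] <= q["B1"] else "B"

def by_predicted_second_wait(q, cap):
    # forward looking: pick the shorter predicted wait at the Step 2 queue
    return "A" if q["A2"] / cap["A2"] <= q["B2"] / cap["B2"] else "B"

naive = simulate(by_first_queue)
better = simulate(by_predicted_second_wait)
print("first-queue policy:    ", naive)
print("predicted-wait policy: ", better)
```

Because the Step 1 queues never build up, the first-queue policy keeps splitting work evenly and the backlog behind the constraint Step A2 grows week after week; the predicted-wait policy steers work away from the congested constraint and every queue stays small and bounded. The model is deliberately crude (all of a week's arrivals are routed on the same snapshot) but the instability it demonstrates is the one described above.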

When driving we look as far ahead as we can, for what we are heading towards, and we combine that feedback with our present speed to predict how much time we have before we need to slow down, when to turn, in which direction, by how much, and for how long. With effective feedback we can behave proactively, avoid surprises, and eliminate sudden braking and swerving! Our passengers will have a more comfortable ride and are more likely to survive the journey! And the better we can do all that the faster we can travel in both comfort and safety – even on an unfamiliar road.  It may be less exciting but excitement is not our objective. On time delivery is our goal.

Excitement comes from anticipating improvement – maintaining what we have already improved is rewarding.  We need both to sustain us and to free us to focus on the improvement work! 


The Safety Line in the Quality Sand

Improvement Science is about getting better – and it is also about not getting worse.

These are not the same thing. Getting better requires dismantling barriers that block improvement. Not getting worse requires building barriers to block deterioration.

When things get tough and people start to panic it is common to see corners being cut and short-term quick fixes taking priority over long-term common sense.  The best defence against this self-defeating behaviour is the courage and discipline to say “This is our safety line in the quality sand and we do not cross it”.  This is not dogma it is discipline. Dogma is blind acceptance; discipline is applied wisdom.

Leaders show their mettle when times are difficult not when times are easy.  A leader who abandons their espoused principles when under pressure is a liability to themselves and to their teams and organisations.

The barrier that prevents descent into chaos is not the leader – it is the principle that there is a minimum level of acceptable quality – the line that will not be crossed. So when a decision needs to be made between safety and money the choice is not open to debate. Safety comes first.  

Only those who believe that higher quality always costs more will argue for compromise. So when the going gets tough those who question the Safety Line in the Quality Sand are the ones to challenge by respectfully reminding them of their own principles.

This challenge will require courage because they may be the ones in the seats of power.  But when leaders compromise their own principles they have sacrificed their credibility and have abdicated their power.