What is my P.A.R.T?

Improvement implies change, but change does not imply improvement.

Change follows action. Action follows planning. Effective planning follows from an understanding of the system, because that understanding is required to make the wise decisions needed to achieve the purpose.

The purpose is the intended outcome.

Learning follows from observing the effect of change – whatever it is. Understanding follows from learning to predict the effect of both actions and inactions.

All these pieces of the change jigsaw are different and they are inter-dependent. They fit together. They are a system.

And we can pick out four pieces: the Plan piece, the Action piece, the Observation piece and the Learning piece – and they seem to follow that sequence – it looks like a learning cycle.

This is not a new idea.

It is the same sequence as the Scientific Method: hypothesis, experiment, analysis, conclusion. The preferred tool of Academics – the Thinkers.

It is also the same sequence as the Shewhart Cycle: plan, do, check, act. The preferred tool of the Pragmatists – the Doers.

So where does all the change conflict come from? What is the reason for the perpetual debate between theorists and activists? The incessant game of “Yes … but!”

One possible cause was highlighted by David Kolb in his work on ‘experiential learning’, which showed that individuals demonstrate a learning style preference.

We tend to be thinkers or doers and only a small proportion of us say that we are equally comfortable with both.

The effect of this natural preference is that real problems bounce back-and-forth between the Tribe of Thinkers and the Tribe of Doers.  Together we are providing separate parts of the big picture – but as two tribes we appear to be unaware of the synergistic power of the two parts. We are blocked by a power struggle.

The Experiential Learning Model (ELM) was promoted and developed by Peter Honey and Alan Mumford (see learning styles) and their work forms the evidence behind the Learning Style Questionnaire that anyone can use to get their ‘score’ on the four dimensions:

  • Pragmatist – the designer and planner
  • Activist – the action person
  • Reflector – the observer and analyst
  • Theorist – the abstracter and hypothesis generator

The evidence from population studies showed that individuals have a preference for one of these styles, sometimes two, occasionally three and rarely all four.

That observation, together with the fact that learning from experience requires moving around the whole cycle, leads to an awareness that both individuals and groups can get ‘stuck’ in their learning preference comfort zone. If the learning wheel is unbalanced it will deliver a bumpy ride when it turns! So it may be more comfortable just to remain stationary and not to learn.

Which means not to change. Which means not to improve.


So if we are embarking on an improvement exercise – be it individual or collective – then we are committed to learning. So where do we start on the learning cycle?

The first step is action. To do something – and the easiest and safest thing to do is just look. Observe what is actually happening out there in the real world – outside the office – outside our comfort zone. We need to look outside our rhetorical inner world of assumptions, intuition and pre-judgements. The process starts with Study.

The next step is to reflect on what we see – we look in the mirror – and we compare what we are actually seeing with what we expected to see. That is not as easy as it sounds – and a useful tool to help is to draw charts. To make it visual. All sorts of charts.

The result is often a shock. There is often a big gap between what we see and what we perceive; between what we expect and what we experience; between what we want and what we get; between our intent and our impact.

That emotional shock is actually what we need to power us through the next phase – the Realm of the Theorist – where we ask three simple questions:
Q1: What could be causing the reality that I am seeing?
Q2: How would I know which of the plausible causes is the actual cause?
Q3: What experiment can I do to answer my question and clarify my understanding of Reality?

This is the world of the Academic.

The third step is to design an experiment to test our new hypothesis. The real world is messy and complicated and we need to be comfortable with ‘good enough’ and ‘reasonable uncertainty’. Design is about practicalities – making something that works well enough in practice – in the real world. Something that is fit-for-purpose. We are not expecting perfection; not looking for optimum; not striving for best – just significantly better than what we have now. And the more we can test our design before we implement it the better, because we want to know what to expect before we make the change and we want to avoid unintended negative consequences – the NoNos. This is Plan.

Then we act … and the cycle of learning has come one revolution … but we are not back at the start – we have moved forward. Our understanding is already different from when we were at this stage before: it is deeper and wider. We are following the trajectory of a spiral – our capability for improvement is expanding over time.

So we need to balance our learning wheel before we start the journey or we will have a slow, bumpy and painful ride!

We need to study, then plan, then do, then study the impact.


One plausible approach is to stay inside our comfort zones, play to our strengths and to say “What we need is a team made of people with complementary strengths. We need a Department of Action for the Activists; a Department of Analysis for the Reflectors; a Department of Research for the Theorists and a Department of Planning for the Pragmatists.”

But that is what we have now and what is the impact? The Four Departments have become super-specialised and more polarised.  There is little common ground or shared language.  There is no common direction, no co-ordination, no oil on the axle of the wheel of change. We have ground to a halt. We have chaos. Each part is working but independently of the others in an unsynchronised mess.

We have cultural fibrillation. Change output has dropped to zero.


A better design is for everyone to focus first on balancing their own learning wheel by actively redirecting emotional energy from their comfort zone, their strength,  into developing the next step in their learning cycle.

Pragmatists develop their capability for Action.
Activists develop their capability for Reflection.
Reflectors develop their capability for Hypothesis.
Theorists develop their capability for Design.

The first step in the improvement spiral is Action – so if you are committed to improvement then investing £10 and 20 minutes in the 80-question Learning Style Questionnaire will demonstrate your commitment to yourself.  And that is where change always starts.

The Time Trap

[Hmmmmmm] The desk amplified the vibration of Bob’s smartphone as it signaled the time for his planned e-mentoring session with Leslie.

[Dring Dring]

<Bob> Hi Leslie, right-on-time, how are you today?

<Leslie> Good thanks Bob. I have a specific topic to explore if that is OK. Can we talk about time traps?

<Bob> OK – do you have a specific reason for choosing that topic?

<Leslie> Yes. The blog last week about ‘Recipe for Chaos‘ set me thinking and I remembered that time-traps were mentioned in the FISH course but I confess, at the time, I did not understand them. I still do not.

<Bob> Can you describe how the ‘Recipe for Chaos‘ blog triggered this renewed interest in time-traps?

<Leslie> Yes – the question that occurred to me was: ‘Is a time-trap a recipe for chaos?’

<Bob> A very good question! What do you feel the answer is?

<Leslie> I feel that time-traps can and do trigger chaos but I cannot explain how. I feel confused.

<Bob> Your intuition is spot on – so can you localize the source of your confusion?

<Leslie> OK. I will try. I confess I got the answer to the MCQ correct by guessing – and I wrote down the answer when I eventually guessed correctly – but I did not understand it.

<Bob> What did you write down?

<Leslie> “The lead time is independent of the flow”.

<Bob> OK. That is accurate – though I agree it is perhaps a bit abstract. One source of confusion may be that there are different causes of time-traps and there is a lot of overlap with other chaos-creating policies. Do you have a specific example we can use to connect theory with reality?

<Leslie> OK – that might explain my confusion.  The example that jumped to mind is the RTT target.

<Bob> RTT?

<Leslie> Oops – sorry – I know I should not use undefined abbreviations. Referral to Treatment Time.

<Bob> OK – can you describe what you have mapped and measured already?

<Leslie> Yes.  When I plot the lead-time for patients in date-of-treatment order the process looks stable but the histogram is multi-modal with a big spike just underneath the RTT target of 18 weeks. What you describe as the ‘Horned Gaussian’ – the sign that the performance target is distorting the behaviour of the system and the design of the system is not capable on its own.

<Bob> OK and have you investigated why there is not just one spike?

<Leslie> Yes – the factor that best explains that is the ‘priority’ of the referral.  The  ‘urgents’ jump in front of the ‘soons’ and both jump in front of the ‘routines’. The chart has three overlapping spikes.

<Bob> That sounds like a reasonable policy for mixed-priority demand. So what is the problem?

<Leslie> The ‘Routine’ group is the one that clusters just underneath the target. The lead time for routines is almost constant but most of the time those patients sit in one queue or another being leap-frogged by other higher-priority patients. Until they become high-priority – then they do the leap frogging.

<Bob> OK – and what is the condition for a time trap again?

<Leslie> That the lead time is independent of flow.

<Bob> Which implies?

<Leslie> Um. let me think. That the flow can be varying but the lead time stays the same?

<Bob> Yup. So is the flow of routine referrals varying?

<Leslie> Not over the long term. The chart is stable.

<Bob> What about over the short term? Is demand constant?

<Leslie> No of course not – it varies – but that is expected for all systems. Constant means ‘over-smoothed data’ – the Flaw of Averages trap!

<Bob> OK. And how close is the average lead time for routines to the RTT maximum allowable target?

<Leslie> Ah! I see what you mean. The average is about 17 weeks and the target is 18 weeks.

<Bob> So what is the flow variation on a week-to-week time scale?

<Leslie> Demand or Activity?

<Bob> Both.

<Leslie> H’mm – give me a minute to re-plot flow as a weekly-aggregated chart. Oh! I see what you mean – the weekly activity and demand are both varying widely and they are not in sync with each other. Work in progress must be wobbling up and down a lot! So how can the lead time variation be so low?

<Bob> What do the flow histograms look like?

<Leslie> Um. Just a second. That is weird! They are both bi-modal with peaks at the extremes and not much in the middle – the exact opposite of what I expected to see! I expected a centered peak.

<Bob> What you are looking at is the characteristic flow fingerprint of a chaotic system – it is called ‘thrashing’.

<Leslie> So I was right!

<Bob> Yes. And now you know the characteristic pattern to look for. So what is the policy design flaw here?

<Leslie> The DRAT – the delusional ratio and arbitrary target?

<Bob> That is part of it – that is the external driver policy. The one you cannot change easily. What is the internally driven policy? The reaction to the DRAT?

<Leslie> The policy of leaving routine patients until they are about to breach then re-classifying them as ‘urgent’.

<Bob> Yes! It is called a ‘Prevarication Policy’ and it is surprisingly and uncomfortably common. Ask yourself – do you ever prevaricate? Do you ever put off ‘lower priority’ tasks until later and then not fill the time freed up with ‘higher priority’ tasks?

<Leslie> OMG! I do that all the time! I put low priority and unexciting jobs on a ‘to do later’ heap but I do not sit idle – I do then focus on the high priority ones.

<Bob> High priority for whom?

<Leslie> Ah! I see what you mean. High priority for me. The ones that give me the biggest reward! The fun stuff or the stuff that I get a pat on the back for doing or that I feel good about.

<Bob> And what happens?

<Leslie> The heap of ‘no-fun-for-me-to-do’ jobs gets bigger and I await the ‘reminders’ and then have to rush round in a mad panic to avoid disappointment, criticism and blame. It feels chaotic. I get grumpy. I make more mistakes and I deliver lower-quality work. If I do not get a reminder I assume that the job was not that urgent after all and if I am challenged I claim I am too busy doing the other stuff.

<Bob> Have you avoided disappointment?

<Leslie> Ah! No – the fact that I needed to be reminded meant that I had already disappointed. And when I do not get a reminder, that does not prove I have not disappointed either. Most people blame rather than complain. I have just managed to erode other people’s trust in my reliability. I have disappointed myself. I have achieved exactly the opposite of what I intended. Drat!

<Bob> So what is the reason that you work this way? There will be a reason.  A good reason.

<Leslie> That is a very good question! I will reflect on that because I believe it will help me understand why others behave this way too.

<Bob> OK – I will be interested to hear your conclusion.  Let us return to the question. What is the  downside of a ‘Prevarication Policy’?

<Leslie> It creates stress, chaos, fire-fighting, last-minute changes, increased risk of errors, more work – and it erodes quality, confidence and trust.

<Bob> Indeed so – and the impact on productivity?

<Leslie> The activity falls, the system productivity falls, revenue falls, queues increase, waiting times increase and the chaos increases!

<Bob> And?

<Leslie> We treat the symptoms by throwing resources at the problem – waiting list initiatives – and that pushes our costs up. Either way we are heading into a spiral of decline and disappointment. We do not address the root cause.

<Bob> So what is the way out of chaos?

<Leslie> Reduce the volume on the destabilizing feedback loop? Stop the managers meddling!

<Bob> Or?

<Leslie> Eh? I do not understand what you mean. The blog last week said management meddling was the problem.

<Bob> It is a problem. How many feedback loops are there?

<Leslie> Two – that need to be balanced.

<Bob> So what is another option?

<Leslie> OMG! I see. Turn UP the volume of the stabilizing feedback loop!

<Bob> Yup. And that is a lot easier to do in reality. So that is your other challenge to reflect on this week. And I am delighted to hear you using the terms ‘stabilizing feedback loop’ and ‘destabilizing feedback loop’.

<Leslie> Thank you. That was a lesson for me after last week – when I used the terms ‘positive and negative feedback’ it was interpreted in the emotional context – positive feedback as encouragement and negative feedback as criticism.  So ‘reducing positive feedback’ in that sense is the exact opposite of what I was intending. So I switched my language to using ‘stabilizing and destabilizing’ feedback loops that are much less ambiguous and the confusion and conflict disappeared.

<Bob> That is very useful learning Leslie … I think I need to emphasize that distinction more in the blog. That is one advantage of online media – it can be updated!

<Leslie> Thanks again Bob! And I have the perfect opportunity to test a new no-prevarication-policy design – in part of the system that I have complete control over – me!
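[Postscript] The ‘lead time is independent of flow’ condition can be made concrete with a few lines of code. This is a deliberately minimal sketch – the weekly demand range, the target and the policy are illustrative assumptions, not data from the RTT example in the dialogue – but it shows how a prevarication policy pins every routine job just under the target no matter how wildly the weekly flow varies.

```python
import random

TARGET_WEEKS = 18  # assumed RTT maximum, as in the example above

def prevarication_lead_times(weeks=200, seed=42):
    """Simulate a 'leave it until it is about to breach' policy.

    Each week a widely varying number of routine referrals arrives.
    Under the prevarication policy each job is simply parked and only
    expedited in the week before it would breach the target, so its
    lead time is pinned just under the target -- independent of flow.
    """
    rng = random.Random(seed)
    lead_times = []
    for _week in range(weeks):
        demand = rng.randint(5, 25)  # the weekly flow varies five-fold
        lead_times.extend([TARGET_WEEKS - 1] * demand)
    return lead_times

lead_times = prevarication_lead_times()
# Every routine job completes in week 17 -- the spike just under the
# 18-week target -- no matter how much the weekly demand varied.
assert set(lead_times) == {17}
```

The histogram of these lead times is a single spike just under the target – the ‘Horned Gaussian’ signature – even though the flow itself is anything but constant.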

Software First

A healthcare system has two inter-dependent parts. Let us call them the ‘hardware’ and the ‘software’ – terms we are more familiar with when referring to computer systems.

In a computer the critical-to-success software is called the ‘operating system’ – and we know that by the brand labels such as Windows, Linux, MacOS, or Android. There are many.

It is the O/S that makes the hardware fit-for-purpose. Without the O/S the computer is just a box of hot chips. A rather expensive room heater.

All the programs and apps that we use to deliver our particular information service require the O/S to manage the actual hardware. Without a coordinator there would be chaos.

In a healthcare system the ‘hardware’ is the buildings, the equipment, and the people.  They are all necessary – but they are not sufficient on their own.

The ‘operating system’ of a healthcare system is its set of management policies: the ‘instructions’ that guide the ‘hardware’ to do what is required, when it is required and sometimes how it is required. These policies are created by managers – they are the healthcare operating system design engineers, so to speak.

Change the O/S and you change the behaviour of the whole system – it may look exactly the same – but it will deliver a different performance. For better or for worse.


The invention of the transistor led, by the mid-1950s, to the first commercially viable transistorised computers. They were faster, smaller, more reliable, cheaper to buy and cheaper to maintain than their predecessors. They were also programmable. And with many separate customer programs demanding hardware resources, an effective and efficient operating system was needed. So the understanding of “good” O/S design developed quickly.

In the 1960’s the first integrated circuits appeared and the computer world became dominated by mainframe computers. They filled air-conditioned rooms with gleaming cabinets tended lovingly by white-coated technicians carrying clipboards. Mainframes were, and still are, very expensive to build and to run! The valuable resource that was purchased by the customers was ‘CPU time’.  So the operating systems of these machines were designed to squeeze every microsecond of value out of the expensive-to-maintain CPU: for very good commercial reasons. Delivering the “data processing jobs” right, on-time and every-time was paramount.

The design of the operating system software was critical to the performance and to the profit.  So a lot of brain power was invested in learning how to schedule jobs; how to orchestrate the parts of the hardware system so that they worked in harmony; how to manage data buffers to smooth out flow and priority variation; how to design efficient algorithms for number crunching, sorting and searching; and how to switch from one task to the next quickly and without wasting time or making errors.

Every modern digital computer has inherited this legacy of learning.

In the 1970’s the first commercial microprocessors appeared – which reduced the size and cost of computers by orders of magnitude again – and increased their speed and reliability even further. Silicon Valley blossomed and although the first micro-chips were rather feeble in comparison with their mainframe equivalents they ushered in the modern era of the desktop-sized personal computer.

In the 1980’s players such as Microsoft and Apple competed to exploit this vast new market – with different strategies: Microsoft offered just the operating system for the new IBM PC hardware (MS-DOS), while Apple created both the hardware and the software as a tightly integrated system.

The ergonomic-seamless-design philosophy at Apple led to the Apple Mac, which revolutionised personal computing. It made computers usable by people who had no interest in the innards or in programming. The Apple Macs were the “designer” computers and were reassuringly more expensive. The innovations that Apple designed into the Mac are now expected in all personal computers as well as the latest generations of smartphones and tablets.

Today we carry more computing power in our top pocket than a mainframe of the 1970’s could deliver! The design of the operating system has hardly changed though.

It was the O/S design that leveraged the maximum potential of the very expensive hardware. And that is still the case – but we take it completely for granted.


Exactly the same principle applies to our healthcare systems.

The only difference is that the flow is not 1’s and 0’s – it is patients and all the things needed to deliver patient care. The ‘hardware’ is the expensive part to assemble and run – and the largest cost is the people.  Healthcare is a service delivered by people to people. Highly-trained nurses, doctors and allied healthcare professionals are expensive.

So the key to healthcare system performance is high quality management policy design – the healthcare operating system (HOS).

And here we hit a snag.

Our healthcare management policies have not been designed with the same rigour as the operating systems of our computers. They have not been designed using the well-understood principles of flow physics. The various parts of our healthcare system do not work well together. The flows are fractured. The silos work independently. And the ubiquitous symptoms of this dysfunction are confusion, chaos and conflict. The managers and the doctors are at each other’s throats. And this is because the management policies have evolved through a largely ineffective and very inefficient strategy called “burn-and-scrape”. Firefighting.

The root cause of the poor design is that neither healthcare managers nor the healthcare workers are trained in operational policy design. Design for Safety. Design for Quality. Design for Delivery. Design for Productivity.

And we are all left with a lose-lose-lose legacy: a system that is no longer fit-for-purpose and a generation of managers and clinicians who have never learned how to design the operational and clinical policies that ensure the system actually delivers what the ‘hardware’ is capable of delivering.


For example:

Suppose we have a simple healthcare system with three stages called A, B and C. All the patients flow through A, then to B and then to C. Let us assume these three parts are managed separately as departments with separate budgets and that they are free to use whatever policies they choose so long as they achieve their performance targets – which are (a) to do all the work, (b) to stay in budget and (c) to deliver on time. So far so good.

Now suppose that the work that arrives at Department B from Department  A is not all the same and different tasks require different pathways and different resources. A Radiology, Pathology or Pharmacy Department for example.

Sorting the work into separate streams and having expensive special-purpose resources sitting idle waiting for work to arrive is inefficient and expensive. It will push up the unit cost – the total cost divided by the total activity. This is called ‘carve-out’.

Switching resources from one pathway to another takes time, and during that change-over time some resources are not able to do the work. These inefficiencies contribute to the total cost and therefore push up the ‘unit cost’ – the total cost for the department divided by the total activity for the department.

So Department B decides to improve its “unit cost” by deploying a policy called ‘batching’.  It starts to sort the incoming work into different types of task and when a big enough batch has accumulated it then initiates the change-over. The cost of the change-over is shared by the whole batch. The “unit cost” falls because Department B is now able to deliver the same activity with fewer resources because they spend less time doing the change-overs. That is good. Isn’t it?
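Department B’s arithmetic can be sketched in a few lines. The figures below are invented purely for illustration – they are not from any real department – but they show why a bigger batch makes the local unit cost (total cost divided by total activity) fall:

```python
def unit_cost(batch_size, jobs=120, cost_per_hour=100.0,
              changeover_hours=2.0, touch_time_hours=0.5):
    """Department B's local unit cost = total cost / total activity.

    Jobs are processed in batches; each batch incurs one change-over,
    so a bigger batch spreads the change-over cost over more jobs.
    All figures are illustrative assumptions.
    """
    batches = -(-jobs // batch_size)  # ceiling division
    total_hours = jobs * touch_time_hours + batches * changeover_hours
    return total_hours * cost_per_hour / jobs

# Department B's local unit cost falls as the batch size grows.
assert unit_cost(1) > unit_cost(5) > unit_cost(20)
```

The fall is real – but it is a local measurement that is blind to the queues the batching policy creates upstream and downstream.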

But what is the impact on Departments A and C and what effect does it have on delivery times and work in progress and the cost of storing the queues?

Department A notices that it can no longer pass work to B when it wants because B will only start the work when it has a full batch of requests. The queue of waiting work sits inside Department A.  That queue takes up space and that space costs money but the queue cost is incurred by Department A – not Department B.

What Department C sees is the order of the work changed by Department B to create a bigger variation in lead times for consecutive tasks. So if the whole system is required to achieve a delivery time specification – then Department C has to expedite the longest waiters and delay the shortest waiters – and that takes work,  time, space and money. That cost is incurred by Department C not by Department B.

The unit costs for Department B go down – and those for A and C both go up. The system is less productive as a whole. The queues and delays caused by the policy change mean that work cannot be completed reliably on time. The blame for the failure falls on Department C. Conflict between the parts of the system is inevitable. Lose-Lose-Lose.
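The upstream effect can also be sketched, under the simplifying assumptions that one job arrives per day and that Department B releases work only when a full batch has accumulated (processing time is ignored):

```python
def waits_with_batching(batch_size, jobs=30):
    """Days each job waits before Department B starts it.

    One job arrives per day (job j arrives on day j). Department B
    releases a batch only when it is full, i.e. on the day its last
    member arrives. Illustrative sketch; processing time ignored.
    """
    waits = []
    for j in range(jobs):
        batch_index = j // batch_size
        release_day = (batch_index + 1) * batch_size - 1
        waits.append(release_day - j)
    return waits

# With no batching nothing waits; with batches of five the first
# arrival in each batch waits four days while the last waits none.
assert max(waits_with_batching(1)) == 0
assert max(waits_with_batching(5)) == 4
```

That spread between zero and four days is exactly the lead-time variation that Department C then has to expedite and delay its way around.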

And conflict is always expensive – on all dimensions – emotional, temporal and financial.


The policy design flaw here looks like it is ‘batching’ – but that policy is just a reaction to a deeper design flaw. It is a symptom. The deeper flaw is not even the use of ‘unit costing’. That is a useful enough tool. The deeper flaw is the incorrect assumption that improving the unit costs of the stages independently will always improve whole-system productivity.

This is incorrect. This error is the result of ‘linear thinking’.

The Laws of Flow Physics do not work like this. Real systems are non-linear.

To design the management policies for a non-linear system using linear-thinking is guaranteed to fail. Disappointment and conflict are inevitable. And that is what we have. As system designers we need to use ‘systems-thinking’.

This discovery comes as a bit of a shock to management accountants. They feel rather challenged by the assertion that some of their cherished “cost improvement policies” are actually making the system less productive. Precisely the opposite of what they are trying to achieve.

And it is the senior management that decide the system-wide financial policies so that is where the linear-thinking needs to be challenged and the ‘software patch’ applied first.

It is not a major management software re-write. Just a minor tweak is all that is required.

And the numbers speak for themselves. It is not a difficult experiment to do.


So that is where we need to start.

We need to learn Healthcare Operating System design and we need to learn it at all levels in healthcare organisations.

And that system-thinking skill has another name – it is called Improvement Science.

The good news is that it is a lot easier to learn than most people believe.

And that is a big shock too – because how to do this has been known for 50 years.

So if you would like to see a real and current example of how poor policy design leads to falling productivity and then how to re-design the policies to reverse this effect have a look at Journal Of Improvement Science 2013:8;1-20.

And if you would like to learn how to design healthcare operating policies that deliver higher productivity with the same resources then the first step is FISH.

Seeing Inside the Black Box

Improvement Science requires the effective, efficient and coordinated use of diagnosis, design and delivery tools.

Experience has also taught us that it is not just about the tools – each must be used as it was designed.

The craftsman knows his tools: which instrument to use, where and when the context dictates, and how to use it with skill.

Some tools are simple and effective – easy to understand and to use. The kitchen knife is a good example. It does not require an instruction manual to use it.

Other tools are more complex, very often because they have a specific purpose. They are not generic. And it may not be intuitively obvious how to use them. Many labour-saving household appliances have specific purposes – the microwave oven, the dish-washer and so on – but they have complex controls and settings that we need to manipulate to direct the “domestic robot” to deliver what we actually want. Very often these controls are not intuitively obvious – we are dealing with a black box – and our understanding of what is happening inside is vague.

Very often we do not understand how the buttons and dials that we can see and touch – the inputs – actually influence the innards of the box to determine the outputs. We do not have a mental model of what is inside the Black Box. We do not know – we are ignorant.

In this situation we may resort to just blindly following the instructions;  or blindly copying what someone else does; or blindly trying random combinations of inputs until we get close enough to what we want. No wiser at the end than we were at the start.  The common thread here is “blind”. The box is black. We cannot see inside.

And the complex black box is deliberately made so – because the supplier of the super-tool does not want their “secret recipe” to be known to all – least of all their competitors.

This is a perfect recipe for confusion and for conflict. Lose-Lose-Lose.

Improvement Science is dedicated to eliminating confusion and conflict – so Black Box Tools are NOT on the menu.

Improvement Scientists need to understand how their tools work – and the best way to achieve that level of understanding is to design and build their own.

This may sound like re-inventing the wheel but it is not about building novel tools – it is about re-creating the tried and tested tools – for the purpose of understanding how they work. And understanding their strengths, their weaknesses, their opportunities and their risks or threats.

And doing that requires guidance from a mentor who has been through this same learning journey. Starting with simple, intuitive tools, and working step-by-step to design, build and understand the more complex ones.

So where do we start?

In the FISH course the first tool we learn to use is a Gantt Chart.

It was invented by Henry Laurence Gantt about 100 years ago and requires nothing more than pencil and paper. Coloured pencils and squared paper are even better.

This is an example of a Gantt Chart for a Day Surgery Unit.

At the top are the “tasks” – patients 1 and 2; and at the bottom are the “resources”.

Time runs left to right.

Each coloured bar appears twice: once on each chart.

The power of a Gantt Chart is that it presents a lot of information in a very compact and easy-to-interpret format. That is what Henry Gantt intended.

A Gantt Chart is like the surgeon’s scalpel. It is a simple, generic easy-to-create tool that has a wide range of uses. The skill is knowing where, when and how to use it: and just as importantly where-not, when-not and how-not.
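A Gantt chart really does need nothing more than pencil and paper – or a dozen lines of code. This sketch renders a text-only version; the task names and times are made-up examples, not the Day Surgery chart above:

```python
def text_gantt(tasks):
    """Render a minimal text Gantt chart.

    Each task is (name, start, duration) in arbitrary time units.
    Time runs left to right; '#' marks the periods when the task is
    active and '.' marks the idle periods.
    """
    horizon = max(start + dur for _, start, dur in tasks)
    rows = []
    for name, start, dur in tasks:
        bar = "." * start + "#" * dur + "." * (horizon - start - dur)
        rows.append(f"{name:<12}|{bar}|")
    return "\n".join(rows)

chart = text_gantt([
    ("Admit P1", 0, 2),    # illustrative tasks only
    ("Operate P1", 2, 3),
    ("Recover P1", 5, 4),
])
print(chart)
```

Three rows, one per task, each bar starting where the previous one ends – the compact, easy-to-interpret format that Henry Gantt intended.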

The second tool that an Improvement Scientist learns to use is the Shewhart or time-series chart.

It was invented about 90 years ago.

This is a more complex tool and as such there is a BIG danger that it is used as a Black Box with no understanding of the innards. The SPC and Six-Sigma Zealots sell it as a Magic Box. It is not.

We could paste any old time-series data into a bit of SPC software; twiddle with the controls until we get the output we want; and copy the chart into our report. We could do that and hope that no-one will ask us to explain what we have done and how we have done it. Most do not because they do not want to appear ‘ignorant’. The elephant is in the room though.  There is a conspiracy of silence.
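The innards are not magic. For the XmR (individuals) flavour of Shewhart chart, the natural process limits are just the mean plus-and-minus 2.66 times the average moving range – the 2.66 being the standard XmR scaling constant (3 divided by d2 = 1.128 for subgroups of two). A transparent-box sketch, using made-up data:

```python
def xmr_limits(data):
    """Natural process limits for an XmR (individuals) chart.

    Computed the transparent way: mean +/- 2.66 * average moving
    range, where the moving ranges are the absolute differences
    between consecutive points.
    """
    mean = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean, mean + 2.66 * avg_mr

# Made-up weekly counts, purely for illustration.
lo, mid, hi = xmr_limits([12, 14, 11, 13, 15, 12, 14, 13])
```

Once we can compute the limits ourselves we can check what any SPC package is actually doing with our data – the Box becomes Transparent.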

The elephant-in-the-room is the risk we take when use Black Box tools – the risk of GIGO. Garbage In Garbage Out.

And unfortunately we have a tendency to blindly trust what comes out of the Black Box that a plausible Zealot tells us is “magic”. This is the Emperor’s New Clothes problem. Another conspiracy of silence follows.

The problem here is not the tool – it is the desperate person blindly wielding it. The Zealots know this and they warn the Desperados of the risk and offer their expensive Magician services. They are not interested in showing how the magic trick is done though! They prefer the Box to stay Black.

So to avoid this cat-and-mouse scenario and to understand both the simpler and the more complex tools, and to be able to use them effectively and safely, we need to be able to build one for ourselves.

And the know-how to do that is not obvious – if it were we would have already done it – so we need guidance.

And once we have built our first one – a rough-and-ready working prototype – then we can use the existing ones that have been polished with long use. And we can appreciate the wisdom that has gone into their design. The Black Box becomes Transparent.
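As an illustration of what “building one for ourselves” can look like, here is a minimal sketch of the core calculation behind an XmR (individuals) time-series chart: the centre line is the mean, and the natural process limits sit at the mean plus and minus 2.66 times the average moving range. The data values are hypothetical:

```python
def xmr_limits(data):
    """Centre line and natural process limits for an XmR chart:
    mean ± 2.66 × average moving range (2.66 ≈ 3/d2 with d2 = 1.128)."""
    mean = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean, mean + 2.66 * avg_mr

# Hypothetical weekly waiting-time measurements (days)
data = [12, 14, 11, 13, 15, 12, 14, 13]
lcl, cl, ucl = xmr_limits(data)
print(f"LCL = {lcl:.1f}  CL = {cl:.1f}  UCL = {ucl:.1f}")
```

Once the innards are understood at this level, the polished software versions stop being magic and start being transparent.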

So learning how to build the essential tools is the first part of the Improvement Science Practitioner (ISP) training – because without that knowledge it is difficult to progress very far. And without that understanding it is impossible to teach anyone anything other than to blindly follow a Black Box recipe.

Of course Magic Black Box Solutions Inc will not warm to this idea – they may not want to reveal what is inside their magic product. They are fearful that their customers may discover that it is much simpler than they are being told.  And we can test that hypothesis by asking them to explain how it works in language that we can understand. If they cannot (or will not) then we may want to keep looking for someone who can and will.

Space-and-Time

<Lesley>Hi Bob! How are you today?

<Bob>OK thanks Lesley. And you?

<Lesley>I am looking forward to our conversation. I have two questions this week.

<Bob>OK. What is the first one?

<Lesley>You have taught me that improvement-by-design starts with the “purpose” question and that makes sense to me. But when I ask that question in a session I get an “eh?” reaction and I get nowhere.

<Bob>Quod facere bonum opus et quomodo te cognovi unum?

<Lesley>Eh?

<Bob>I asked you a purpose question.

<Lesley>Did you? What language is that? Latin? I do not understand Latin.

<Bob>So although you recognize the language you do not understand what I asked, the words have no meaning. So you are unable to answer my question and your reaction is “eh?”. I suspect the same is happening with your audience. Who are they?

<Lesley>Front-line clinicians and managers who have come to me to ask how to solve their problems. Their Niggles. They want a how-to-recipe and they want it yesterday!

<Bob>OK. Remember the Temperament Treacle conversation last week. What is the commonest Myers-Briggs Type preference in your audience?

<Lesley>It is xSTJ – tough minded Guardians.  We did that exercise. It was good fun! Lots of OMG moments!

<Bob>OK – is your “purpose” question framed in a language that the xSTJ preference will understand naturally?

<Lesley>Ah! Probably not! The “purpose” question is future-focused, conceptual, strategic, value-loaded and subjective.

<Bob>Indeed – it is an iNtuitor question. xNTx or xNFx. Pose that question to a roomful of academics or executives and they will debate it ad infinitum.

<Lesley>More Latin – but that phrase I understand. You are right.  And my own preference is xNTP so I need to translate my xNTP “purpose” question into their xSTJ language?

<Bob>Yes. And what language do they use?

<Lesley>The language of facts, figures, jobs-to-do, work-schedules, targets, budgets, rational, logical, problem-solving, tough-decisions, and action-plans. Objective, pragmatic, necessary stuff that keeps the operational-wheels-turning.

<Bob>OK – so what would “purpose” look like in xSTJ language?

<Lesley>Um. Good question. Let me start at the beginning. They came to me in desperation because they are now scared enough to ask for help.

<Bob>Scared of what?

<Lesley>Unintentionally failing. They do not want to fail and they do not need beating with sticks. They are tough enough on themselves and each other.

<Bob>OK that is part of their purpose. The “Avoid” part. The bit they do not want. What do they want? What is the “Achieve” part? What is their “Nice If”?

<Lesley>To do a good job.

<Bob>Yes. And that is what I asked you – but in an unfamiliar language. Translated into English I asked “What is a good job and how do you know you are doing one?”

<Lesley>Ah ha! That is it! That is the question I need to ask. And that links in the first map – The 4N Chart®. And it links in measurement, time-series charts and BaseLine© too. Wow!

<Bob>OK. So what is your second question?

<Lesley>Oh yes! I keep getting asked “How do we work out how much extra capacity we need?” and I answer “I doubt that you need any more capacity.”

<Bob>And their response is?

<Lesley>Anger and frustration! They say “That is obvious rubbish! We have a constant stream of complaints from patients about waiting too long and we are all maxed out so of course we need more capacity! We just need to know the minimum we can get away with – the what, where and when so we can work out how much it will cost for the business case.”

<Bob>OK. So what do they mean by the word “capacity”. And what do you mean?

<Lesley>Capacity to do a good job?

<Bob>Very quick! Ho ho! That is a bit imprecise and subjective for a process designer though. The Laws of Physics need the terms “capacity”, “good” and “job” clearly defined – with units of measurement that are meaningful.

<Lesley>OK. Let us define “good” as “delivered on time” and “job” as “a patient with a health problem”.

<Bob>OK. So how do we define and measure capacity? What are the units of measurement?

<Lesley>Ah yes – I see what you mean. We touched on that in FISH but did not go into much depth.

<Bob>Now we dig deeper.

<Lesley>OK. FISH talks about three interdependent forms of capacity: flow-capacity, resource-capacity, and space-capacity.

<Bob>Yes. They are the space-and-time capacities. If we are too loose with our use of these and treat them as interchangeable then we will create the confusion and conflict that you have experienced. What are the units of measurement of each?

<Lesley>Um. Flow-capacity will be in the same units as flow, the same units as demand and activity – tasks per unit time.

<Bob>Yes. Good. And space-capacity?

<Lesley>That will be in the same units as work in progress or inventory – tasks.

<Bob>Good! And what about resource-capacity?

<Lesley>Um – Will that be resource-time – so time?

<Bob>Actually it is resource-time per unit time. So they have different units of measurement. It is invalid to mix them up any-old-way. It would be meaningless to add them for example.

<Lesley>OK. So I cannot see how to create a valid combination from these three! I cannot get the units of measurement to work.

<Bob>This is a critical insight. So what does that mean?

<Lesley>There is something missing?

<Bob>Yes. Excellent! Your homework this week is to work out what the missing pieces of the capacity-jigsaw are.

<Lesley>You are not going to tell me the answer?

<Bob>Nope. You are doing ISP training now. You already know enough to work it out.

<Lesley>OK. Now you have got me thinking. I like it. Until next week then.

<Bob>Have a good week.
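The units-of-measurement point in the conversation above can be made concrete with a small dimensional-analysis sketch. The quantities and units below are illustrative assumptions, not a design recipe, and it deliberately stops short of Lesley’s homework answer:

```python
# The three space-and-time capacities with explicit units (hypothetical values).
# Attaching the units makes it obvious that adding them blindly is invalid.

flow_capacity     = (10, "tasks/hour")            # maximum flow rate
space_capacity    = (6,  "tasks")                 # maximum work-in-progress
resource_capacity = (4,  "resource-hours/hour")   # staffed time per unit time

def add(a, b):
    """Add two (value, unit) quantities only if their units match."""
    if a[1] != b[1]:
        raise ValueError(f"cannot add {a[1]} to {b[1]}")
    return (a[0] + b[0], a[1])

try:
    add(flow_capacity, space_capacity)
except ValueError as e:
    print(e)
```

The three capacities have three different units, so no simple sum of them can be meaningful – which is exactly the insight that points to a missing piece of the capacity jigsaw.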

The Mirror

[Dring Dring]

The phone announced the arrival of Leslie for the weekly ISP mentoring conversation with Bob.

<Leslie> Hi Bob.

<Bob> Hi Leslie. What would you like to talk about today?

<Leslie> A new challenge – one that I have not encountered before.

<Bob>Excellent. As ever you have piqued my curiosity. Tell me more.

<Leslie> OK. Up until very recently whenever I have demonstrated the results of our improvement work to individuals or groups the usual response has been “Yes, but”. The habitual discount as you call it. “Yes, but your service is simpler; Yes, but your budget is bigger; Yes, but your staff are less militant.” I have learned to expect it so I do not get angry any more.

<Bob> OK. The mantra of the skeptics is to be expected and you have learned to stay calm and maintain respect. So what is the new challenge?

<Leslie>There are two parts to it. Firstly, because the habitual discounting is such an effective barrier to the diffusion of learning, our system has not changed; the performance is steadily deteriorating; the chaos is worsening and everything that is ‘obvious’ has been tried and has not worked. More red lights are flashing on the patient-harm dashboard and the Inspectors are on their way. There is an increasing turnover of staff at all levels – including Executive. There is an anguished call for “A return to compassion first” and “A search for new leaders” and “A cultural transformation”.

<Bob> OK. It sounds like the tipping point of awareness has been reached, enough people now appreciate that their platform is burning and radical change of strategy is required to avoid the ship sinking and them all drowning. What is the second part?

<Leslie> I am getting more emails along the lines of “What would you do?”

<Bob> And your reply?

<Leslie> I say that I do not know because I do not have a diagnosis of the cause of the problem. I do know a lot of possible causes but I do not know which plausible ones are the actual ones.

<Bob> That is a good answer.  What was the response?

<Leslie>The commonest one is “Yes, but you have shown us that Plan-Do-Study-Act is the way to improve – and we have tried that and it does not work for us. So we think that improvement science is just more snake oil!”

<Bob>Ah ha. And how do you feel about that?

<Leslie>I have learned the hard way to respect the opinion of skeptics. PDSA does work for me but not for them. And I do not understand why that is. I would like to conclude that they are not doing it right but that is just discounting them and I am wary of doing that.

<Bob>OK. You are wise to be wary. We have reached what I call the Mirror-on-the-Wall moment.  Let me ask what your understanding of the history of PDSA is?

<Leslie>It was called Plan-Do-Check-Act by Walter Shewhart in the 1930’s and was presented as a form of the scientific method that could be applied on the factory floor to improving the quality of manufactured products.  W Edwards Deming modified it to PDSA where the “Check” was changed to “Study”.  Since then it has been the key tool in the improvement toolbox.

<Bob>Good. That is an excellent summary.  What the Zealots do not talk about are the limitations of their wonder-tool.  Perhaps that is because they believe it has no limitations.  Your experience would seem to suggest otherwise though.

<Leslie>Spot on Bob. I have a nagging doubt that I am missing something here. And not just me.

<Bob>The reason PDSA works for you is because you are using it for the purpose it was designed for: incremental improvement of small bits of the big system; the steps; the points where the streams cross the stages. You are using your FISH training to come up with change plans that will work because you understand the Physics of Flow better. You make wise improvement decisions. In fact you are using PDSA in two separate modes: discovery mode and delivery mode. In discovery mode we use the Study phase to build our competence – and we learn most when what happens is not what we expected. In delivery mode we use the Study phase to build our confidence – and that grows most when what happens is what we predicted.

<Leslie>Yes, that makes sense. I see the two modes clearly now you have framed it that way – and I see that I am doing both at the same time, almost by second nature.

<Bob>Yes – so when you demonstrate it you describe PDSA generically – not as two complementary but contrasting modes. And by demonstrating success you omit to show that there are some design challenges that cannot be solved with either mode. That hidden gap attracts some of the “Yes, but” reactions.

<Leslie>Do you mean the challenges that others are trying to solve and failing?

<Bob>Yes. The commonest error is to discount the value of improvement science in general; so nothing is done and the inevitable crisis happens because the system design is increasingly unfit for the evolving needs. The toast is not just burned, it is on fire, and it is now too late to use the discovery mode of PDSA because prompt and effective action is needed. So the delivery mode of PDSA is applied to an emergent, ill-understood crisis. The Plan is created using invalid assumptions and guesswork so it is fundamentally flawed, and the Do then just makes the chaos worse. In the ensuing panic the Study and Act steps are skipped so all hope of learning is lost and a vicious and damaging spiral of knee-jerk Plan-Do-Plan-Do follows. The chaos worsens, quality falls, safety falls, confidence falls, trust falls, expectation falls and depression and despair increase.

<Leslie>That is exactly what is happening and why I feel powerless to help. What do I do?

<Bob>The toughest bit is past. You have looked squarely in the mirror and can now see harsh reality rather than hasty rhetoric. Now you can look out of the window with different eyes.  And you are now looking for a real-world example of where complex problems are solved effectively and efficiently. Can you think of one?

<Leslie>Well, medicine is one that jumps to mind. Solving a complex, emergent clinical problem requires a clear diagnosis and prompt and effective action to stabilise the patient and then to cure the underlying cause: the disease.

<Bob>An excellent example. Can you describe what happens as a PDSA sequence?

<Leslie>That is a really interesting question. I can say for starters that it does not start with P – we have learned not to start with a preconceived idea of what to do because it badly distorts our clinical judgement. The first thing we do is assess the patient to see how sick and unstable they are – we use the Vital Signs. So that means that we decide to Act first and our first action is to Study the patient.

<Bob>OK – what happens next?

<Leslie>Then we will do whatever is needed to stabilise the patient based on what we have observed – it is called resuscitation – and only then we can plan how we will establish the diagnosis; the root cause of the crisis.

<Bob> So what does that spell?

<Leslie> A-S-D-P.  It is the exact opposite of P-D-S-A … the mirror image!

<Bob>Yes. Now consider the treatment that addresses the root cause and that cures the patient. What happens then?

<Leslie>We use the diagnosis to create a treatment Plan for the specific patient; we then Do that, and we Study the effect of the treatment in that specific patient, using our various charts to compare what actually happens with what we predicted would happen. Then we decide what to do next: the final action. We may stop because we have achieved our goal, or repeat the whole cycle to achieve further improvement. So that is our old friend P-D-S-A.

<Bob>Yes. And what links the two bits together … what is the bit in the middle?

<Leslie>Once we have a diagnosis we look up the appropriate treatment options that have been proven to work through research trials and experience; and we tailor the treatment to the specific patient. Oh I see! The missing link is design. We design a specific treatment plan using generic principles.

<Bob>Yup. The design step is the jam in the improvement sandwich and it acts like a mirror: A-S-D-P is reflected back as P-D-S-A.

<Leslie>So I need to teach this backwards: P-D-S-A and then Design and then A-S-D-P!

<Bob>Yup – and you know that by another name.

<Leslie> 6M Design®! That is what my Improvement Science Practitioner course is all about.

<Bob> Yup.

<Leslie> If you had told me that at the start it would not have made much sense – it would just have confused me.

<Bob>I know. That is the reason I did not. The Mirror needs to be discovered in order for the true value to be appreciated. At the start we look in the mirror and perceive what we want to see. We have to learn to see what is actually there. Us. Now you can see clearly where P-D-S-A and Design fit together and the missing A-S-D-P component that is needed to assemble a 6M Design® engine. That is Improvement-by-Design in a nine-letter nutshell.

<Leslie> Wow! I can’t wait to share this.

<Bob> And what do you expect the response to be?

<Leslie>”Yes, but”?

<Bob> From the die hard skeptics – yes. It is the ones who do not say “Yes, but” that you want to engage with. The ones who are quiet. It is always the quiet ones that hold the key.

Three Essentials

There are three necessary parts before ANY improvement-by-design effort will gain traction. Omit any one of them and nothing happens.


1. A clear purpose and an outline strategic plan.

2. Tactical measurement of performance-over-time.

3. A generic Improvement-by-Design framework.

These are necessary minimum requirements to be able to safely delegate the day-to-day and week-to-week tactical stuff that delivers the “what is needed”.

These are necessary minimum requirements to build a self-regulating, self-sustaining, self-healing, self-learning win-win-win system.

And this is not a new idea. It was described by Joseph Juran in the 1960s, and that description was based on 20 years of hands-on experience of actually doing it in a wide range of manufacturing and service organisations.

That is 20 years before the terms “Lean”, “Six Sigma” or “Theory of Constraints” were coined. And the roots of Juran’s journey were 20 years before that – when he started work at the famous Hawthorne Works in Chicago – home of the Hawthorne Effect – and where he learned of the pioneering work of Walter Shewhart.

And the roots of Shewhart’s innovations were 20 years before that – in the first decade of the 20th Century when innovators like Henry Ford and Henry Gantt were developing the methods of how to design and build highly productive processes.

Ford gave us the one-piece-flow high-quality at low-cost production paradigm. Toyota learned it from Ford.  Gantt gave us simple yet powerful visual charts that give us an understanding-at-a-glance of the progress of the work.  And Shewhart gave us the deceptively simple time-series chart that signals when we need to take more notice.

These nuggets of pragmatic golden knowledge have been buried for decades under a deluge of academic mud. It is high time to clear away the detritus and get back to the bedrock of pragmatism. The “how-to-do-it” of improvement. Just reading Juran’s 1964 “Managerial Breakthrough” illustrates how much we now take for granted. And how ignorant we have allowed ourselves to become.

Acquired Arrogance is a creeping, silent disease – we slip from second nature to blissful ignorance without noticing when we divorce painful reality and settle down with our own comfortable collective rhetoric.

The wake-up call is all the more painful as a consequence: because it is all the more shocking for each one of us; and because it affects more of us.

The pain is temporary – so long as we treat the cause and not just the symptom.

The first step is to acknowledge the gap – and to start filling it in. It is not technically difficult, time-consuming or expensive.  Whatever our starting point we need to put in place the three foundation stones above:

1. Common purpose.
2. Measurement-over-time.
3. Method for Improvement.

Then the rubber meets the road (rather than the sky) and things start to improve – for real. Lots of little things in lots of places at the same time – facilitated by the Junior Managers. The cumulative effect is dramatic. Chaos is tamed; calm is restored; capability builds; and confidence builds. The cynics have to look elsewhere for their sport and the skeptics are able to remain healthy.

Then the Middle Managers feel the new firmness under their feet – where before there were shifting sands. They are able to exert their influence again – to where it makes a difference. They stop chasing Scotch Mist and start reporting real and tangible improvement – with hard evidence. And they rightly claim a slice of the credit.

And the upwelling of win-win-win feedback frees the Senior Managers from getting sucked into reactive fire-fighting and the Victim Vortex; and that releases the emotional and temporal space to start learning and applying System-level Design.  That is what is needed to deliver a significant and sustained improvement.

And that creates the stable platform for the Executive Team to do Strategy from. Which is their job.

It all starts with the Three Essentials:

1. A Clear and Common Constancy of Purpose.
2. Measurement-over-time of the Vital Metrics.
3. A Generic Method for Improvement-by-Design.

The Black Curtain

A couple of weeks ago an important event happened. A Masterclass in Demand and Capacity for NHS service managers was run by an internationally renowned and very experienced practitioner of Improvement Science.

The purpose was to assist the service managers to develop their capability for designing quality, flow and cost improvement using tried and tested operations management (OM) theory, techniques and tools.

It was assumed that, as experienced NHS service managers, the delegates already knew the basic principles of OM and the foundation concepts, terminology, techniques and tools.

It was advertised as a Masterclass and designed accordingly.

On the day it was discovered that none of the twenty delegates had heard of two fundamental OM concepts: Little’s Law and Takt Time.

These relate to how processes are designed-to-flow. It was a Demand and Capacity Master Class; not a safety, quality or cost one.  The focus was flow.

And it became clear that none of the twenty delegates were aware before the day that there is a well-known and robust science to designing systems to flow.

So learning this fact came as a bit of a shock.

The implications of this observation are profound and worrying:

if a significant % of senior NHS operational managers are unaware of the foundations of operations management then the NHS may have a problem it was not aware of …

because …

“if transformational change of the NHS into a stable system that is fit-for-purpose (now and into the future) requires the ability to design processes and systems that deliver both high effectiveness and high efficiency ...”

then …

it raises the question of whether the current generation of NHS managers is fit-for-this-future-purpose.

No wonder that discovering a Science of  Improvement actually exists came as a bit of a shock!

And saying “Yes, but clinicians do not know this science either!” is a defensive reaction and not a constructive response. They may not, but they do not call themselves “operational managers”.

[PS. If you are reading this and are employed by the NHS and do not know what Little’s Law and Takt Time are then it would be worth looking them up first. Wikipedia is a good place to start].
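For readers who want a head start, both concepts fit in a few lines. Little’s Law states that average work-in-progress equals average flow rate multiplied by average lead time; takt time is the available work time divided by the demand. The clinic figures below are hypothetical, for illustration only:

```python
def littles_law_wip(flow_rate, lead_time):
    """Average work-in-progress implied by Little's Law:
    WIP = flow rate (tasks per unit time) × lead time (same time units)."""
    return flow_rate * lead_time

def takt_time(available_time, demand):
    """Time available per task if demand is to be met exactly."""
    return available_time / demand

# A clinic seeing 4 patients/hour, each spending 1.5 hours in the department:
print(littles_law_wip(4, 1.5), "patients present on average")   # 6.0

# An 8-hour (480 minute) session with 20 booked patients:
print(takt_time(480, 20), "minutes per patient")                # 24.0
```

The power of both is their generality: they hold for any steady-state flow process, from factories to outpatient clinics.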

And now we have another question:

“Given there are thousands of operational managers in the NHS; what does one sample of 20 managers tell us about the whole population?”

Now that is a good question.

It is also a question of statistics. More specifically quite advanced statistics.

And most people who work in the NHS have not studied statistics to that level. So now we have another do-not-know-how problem.

But it is still an important question that we need to understand the answer to – so we need to learn how and that means taking this learning path one step at a time using what we do know, rather than what we do not.

Step 1:

What do we know? We have one sample of 20 NHS service managers. We know something about our sample because our unintended experiment has measured it: that none of them had heard of Little’s Law or Takt Time. That is 0/20 or 0%.

This is called a “sample statistic“.

What we want to know is “What does this information tell us about the proportion of the whole population of all NHS managers who do have this foundation OM knowledge?”

This proportion of interest is called  the unknown “population parameter“.

And we need to estimate this population parameter from our sample statistic because it is impractical to measure a population parameter directly: That would require every NHS manager completing an independent and accurate assessment of their basic OM knowledge. Which seems unlikely to happen.

The good news is that we can get an estimate of a population parameter from measurements made from small samples of that population. That is one purpose of statistics.

Step 2:

But we need to check some assumptions before we attempt this statistical estimation trick.

Q1: How representative is our small sample of the whole population?

If we chose the delegates for the masterclass by putting the names of all NHS managers in a hat and drawing twenty names out at random, as in a tombola or lottery, then we have what is called a “random sample” and we can trust our estimate of the wanted population parameter. This is called “random sampling”.

That was not the case here. Our sample was self-selecting. We were not conducting a research study. This was the real world … so there is a chance of “bias”. Our sample may not be representative and we cannot say what the most likely bias is.

It is possible that the managers who selected themselves were the ones struggling most and therefore more likely than average to have a gap in their foundation OM knowledge. It is also possible that the managers who selected themselves are the most capable in their generation and are very well aware that there is something else that they need to know.

We may have a biased sample and we need to proceed with some caution.

Step 3:

So given the fact that none of our possibly biased sample of managers were aware of the Foundation OM Knowledge then it is possible that no NHS service managers know this core knowledge. In other words the actual population parameter is 0%. It is also possible that the managers in our sample were the only ones in the NHS who do not know this. So, in theory, the sought-for population parameter could be anywhere between 0% and very nearly 100%. Does that mean it is impossible to estimate the true value?

It is not impossible. In fact we can get an estimate that we can be very confident is accurate. Here is how it is done.

Statistical estimates of population parameters are always presented as ranges with a lower and an upper limit called a “confidence interval” because the sample is not the population. And even if we have an unbiased random sample we can never be 100% confident of our estimate.  The only way to be 100% confident is to measure the whole population. And that is not practical.

So, we know the theoretical limits from consideration of the extreme cases … but what happens when we are more real-world-reasonable and say – “let us assume our sample is actually a representative sample, albeit not a randomly selected one”. How does that affect the range of our estimate of the elusive number – the proportion of NHS service managers who know basic operations management theory?

Step 4:

To answer that we need to consider two further questions:

Q2. What is the effect of the size of the sample? What if only 5 managers had come and none of them knew; what if it had been 50 or 500 and none of them knew?

Q3. What if we repeated the experiment more times? With the same or different sample sizes? What could we learn from that?

Our intuition tells us that the larger the sample size and the more often we do the experiment then the more confident we will be of the result. In other words, the narrower the range of the confidence interval around our sample statistic.

Our intuition is correct because if our sample was 100% of the population we could be 100% confident.
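This intuition can also be checked numerically. For the special case observed here – zero “knowers” in a sample of n – rearranging the relationship t = (1-p)^n (which the text derives step by step below) with t = 0.05 gives the 95% upper limit directly. A sketch, assuming an unbiased, representative sample:

```python
def upper_95_limit(n, alpha=0.05):
    """95% upper confidence limit for the unknown proportion p when a
    sample of n reveals zero 'knowers' - from rearranging t = (1-p)**n."""
    return 1 - alpha ** (1 / n)

for n in (5, 20, 50, 500):
    print(f"n = {n:>3}: upper limit ≈ {upper_95_limit(n):.1%}")
# n = 5 gives ≈ 45%; n = 20 ≈ 14%; n = 50 ≈ 6%; n = 500 ≈ 0.6%
```

The limit shrinks as the sample grows, exactly as intuition suggests, but not in a straight line.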

So given we have not yet found an NHS service manager who has the OM Knowledge then we cannot exclude 0%. Our challenge narrows to finding a reasonable estimate of the upper limit of our confidence interval.

Step 5

Before we move on let us review where we have got to already and our purpose for starting this conversation: We want enough NHS service managers who are knowledgeable enough of design-for-flow methods to catalyse a transition to a fit-for-purpose and self-sustaining NHS.

One path to this purpose is to have a large enough pool of service managers who do understand this Science well enough to act as advocates and to spread both the know-of and the know-how.  This is called the “tipping point“.

There is strong evidence that when about 20% of a population knows about something that is useful for the whole population – then that knowledge  will start to spread through the grapevine. Deeper understanding will follow. Wiser decisions will emerge. More effective actions will be taken. The system will start to self-transform.

And in the Brave New World of social media this message may spread further and faster than in the past. This is good.

So if the NHS needs 20% of its operational managers aware of the Foundations of Operations Management then what value is our morsel of data from one sample of 20 managers who, by chance, were all unaware of the Knowledge? How can we use that data to say how close to the magic 20% tipping point we are?

Step 6:

To do that we need to ask the question in a slightly different way.

Q4. What is the chance of an NHS manager NOT knowing?

We assume that they either know or do not know; so if 20% know then 80% do not.

This is just like saying: if the chance of rolling a “six” is 1-in-6 then the chance of rolling a “not-a-six” is 5-in-6.

Next we ask:

Q5. What is the likelihood that we, just by chance, selected a group of managers where none of them know – and there are 20 in the group?

This is rather like asking: what is the likelihood of rolling twenty “not-a-sixes” in a row?

Our intuition says “an unlikely thing to happen!”

And again our intuition is sort of correct. How unlikely though? Our intuition is a bit vague on that.

If the actual proportion of NHS managers who have the OM Knowledge is about the same chance of rolling a six (about 16%) then we sense that the likelihood of getting a random sample of 20 where not one knows is small. But how small? Exactly?

We sense that 20% is too high an estimate of a reasonable upper limit. But how much too high?

The answer to these questions is not intuitively obvious.

We need to work it out logically and rationally. And to work this out we need to ask:

Q6. As the % of Managers-who-Know is reduced from 20% towards 0% – what is the effect on the chance of randomly selecting 20 all of whom are not in the Know?  We need to be able to see a picture of that relationship in our minds.

The good news is that we can work that out with a bit of O-level maths. And all NHS service managers, nurses and doctors have done O-level maths. It is a mandatory requirement.

The chance of rolling a “not-a-six” is 5/6 on one throw – about 83%;
and the chance of rolling only “not-a-sixes” in two throws is 5/6 x 5/6 = 25/36 – about 69%
and the chance of rolling only “not-a-sixes” in three throws is 5/6 x 5/6 x 5/6 – about 58%… and so on.

[This is called the “chain rule” and it requires that the throws are independent of each other – i.e. a random, unbiased sample]

If we do this 20 times we find that the chance of rolling no sixes at all in 20 throws is about 2.6% – unlikely but far from impossible.
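The chain-rule arithmetic above is easy to verify with a few lines of code:

```python
# Chance of throwing "not-a-six" n times in a row, assuming independent
# throws (the chain rule for independent events).

def p_no_sixes(n, p_six=1/6):
    return (1 - p_six) ** n

for n in (1, 2, 3, 20):
    print(f"{n:>2} throws: {p_no_sixes(n):.1%}")
# 1 throw ≈ 83.3%; 2 ≈ 69.4%; 3 ≈ 57.9%; 20 ≈ 2.6% - matching the text.
```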

We need to introduce a bit of O-level algebra now.

Let us call the proportion of NHS service managers who understand basic OM – our unknown population parameter – “p”.

So if p is the chance of a “six” then (1-p) is the chance of a “not-a-six”.

Then the chance of no sixes in one throw is (1-p)

and no sixes after 2 throws is (1-p)(1-p) = (1-p)^2 (where ^ means raise to the power)

and no sixes after three throws is (1-p)(1-p)(1-p) = (1-p)^3 and so on.

So the likelihood of  “no sixes in n throws” is (1-p)^n

Let us call this “t”

So the equation we need to solve to estimate the upper limit of our estimate of “p” is

t=(1-p)^20

Where “t” is a measure of how likely we are to choose 20 managers all of whom do not know – just by chance.  And we want that to be a small number. We want to feel confident that our estimate is reasonable and not just a quirk of chance.

So what threshold do we set for “t” that we feel is “reasonable”? 1 in a million? 1 in 1000? 1 in 100? 1 in 10?

By convention we use 1 in 20 (t=0.05) – but that is arbitrary. If we are more risk-averse we might choose 1:100 or 1:1000. It depends on the context.

Let us be reasonable – let us say we want to be 95% confident of our estimated upper limit for “p” – which means we are calculating the 95% confidence interval. This means we will accept a 1:20 risk of our calculated confidence interval for “p” being wrong: 19:1 odds that the true value of “p” falls inside our calculated range. Pretty good odds! So we will set the likelihood threshold for being “wrong” at 5%.

So now we need to solve:

0.05= (1-p)^20

And we want a picture of this relationship in our minds so let us draw a graph of t for a range of values of p.

We know the value of p must be between 0 and 1.0 so we have all we need and we can generate this graph easily using Excel.  And every senior NHS operational manager knows how to use Excel. It is a requirement. Isn’t it?
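For anyone without Excel to hand, the same graph data can be generated with a short Python sketch – and because the equation rearranges exactly, we can solve 0.05 = (1-p)^20 directly instead of reading the answer off the chart:

```python
# Tabulate t = (1-p)^20 for a range of p - the data behind the Excel chart.

def t_of_p(p: float, n: int = 20) -> float:
    """Chance that a random, unbiased sample of n contains no one in the Know."""
    return (1 - p) ** n

for pct in range(0, 31, 2):          # p from 0% to 30% in 2% steps
    print(f"p = {pct:2d}%  ->  t = {t_of_p(pct / 100):.1%}")

# Exact solution of 0.05 = (1-p)^20  ->  p = 1 - 0.05^(1/20)
p_upper = 1 - 0.05 ** (1 / 20)
print(f"Upper 95% limit for p: {p_upper:.1%}")  # about 14%
```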

Black_Curtain

The Excel-generated chart shows the relationship between p (horizontal axis) and t (vertical axis) using our equation:

t=(1-p)^20.

Step 7

Let us first do a “sanity check” on what we have drawn. Let us “check the extreme values”.

If 0% of managers know then a sample of 20 will always reveal none – i.e. the leftmost point of the chart. Check!

If 100% of managers know then a sample of 20 will never reveal none – i.e. way off to the right. Check!

What is clear from the chart is that the relationship between p and t  is not a straight line; it is non-linear. That explains why we find it difficult to estimate intuitively. Our brains are not very good at doing non-linear analysis. Not very good at all.

So we need a tool to help us. Our Excel graph.  We read down the vertical “t” axis from 100% to the 5% point, then trace across to the right until we hit the line we have drawn, then read down to the corresponding value for “p”. It says about 14%.

So that is the upper limit of our 95% confidence interval of the estimate of the true proportion of NHS service managers who know the Foundations of Operations Management.  The lower limit is 0%.

And we cannot say better than somewhere between  0%-14% with the data we have and the assumptions we have made.

To get a more precise estimate,  a narrower 95% confidence interval, we need to gather some more data.

[Another way we can use our chart is to ask “If the actual % of Managers who know is x% then what is the chance that no one in our sample of 20 will know?” Solving this manually means marking the x% point on the horizontal axis, then tracing a line vertically up until it crosses the drawn line, then tracing a horizontal line to the left until it crosses the vertical axis and reading off the likelihood.]

So if in reality 5% of all managers do Know then the chance of no one knowing in an unbiased sample of 20 is about 35% – really quite likely.
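That figure can be verified exactly rather than traced off the chart – a one-line check:

```python
# If the true proportion of managers-who-Know is 5%, the chance that an
# unbiased sample of 20 contains no one who Knows:
t = (1 - 0.05) ** 20
print(f"{t:.1%}")  # 35.8% - consistent with the ~35% read off the chart
```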

Now we are getting a feel for the likely reality. Much more useful than just dry numbers!

But we can be 95% confident that at least 86% of NHS managers do NOT know the basic language of flow-improvement-science.

And what this chart also tells us is that we can be VERY confident that the true value of p is less than 20% – the proportion we believe we need to reach the transformation tipping point.

Now we need to repeat the experiment and draw a new graph to get a more accurate estimate of just how much less – but stepping back from the statistical nuances – the message is already clear that we do have a Black Curtain problem.

A Black Curtain of Ignorance problem.

Many will now proclaim angrily “This cannot be true! It is just statistical smoke and mirrors. Surely our managers do know this by a different name – how could they not! It is unthinkable to suggest the majority of NHS managers are ignorant of the basic science of what they are employed to do!”

If that were the case though then we would already have an NHS that is fit-for-purpose. That is not what reality is telling us.

And it quickly became apparent at the master class that our sample of 20 did not know-this-by-a-different-name.

The good news is that this knowledge gap could be hiding the opportunity we are all looking for – a door to a path that leads to a radical yet achievable transformation of the NHS into a system that is fit-for-purpose. Now and into the future.

A system that delivers safe, high quality care for those who need it, in full, when they need it and at a cost the country can afford. Now and for the foreseeable future.

And the really good news is that this knowledge gap may be deep and extensive but it is not wide … the Foundations are easy to learn, and can be applied immediately.  The basics can be learned in less than a week – the more advanced skills take a bit longer.  And this is not untested academic theory – it is proven pragmatic real-world problem solving know-how. It has been known for over 50 years outside healthcare.

Our goal is not acquisition of theoretical knowledge – it is a deep enough understanding to make wise enough decisions to achieve good enough outcomes. For everyone. Starting tomorrow.

And that is the design purpose of FISH. To provide those who want to learn a quick and easy way to do so.

Stop Press: Further feedback from the masterclass is that some of the managers are grasping the nettle, drawing back their own black curtains, opening the door that was always there behind it, and taking a peek through into a magical garden of opportunity. One that was always there but was hidden from view.

Improvement-by-Twitter

Sat 5th October

It started with a tweet.

08:17 [JG] The NHS is its people. If you lose them, you lose the NHS.

09:15 [DO] We are in a PEOPLE business – educating people and creating value.

Sun 6th October

08:32 [SD] Who isn’t in people business? It is only people who buy stuff. Plants, animals, rocks and machines don’t.

09:42 [DO] Very true – it is people who use a service and people who deliver a service and we ALL know what good service is.

09:47 [SD] So onus is on us to walk our own talk. If we don’t all improve our small bits of the NHS then who can do it for us?

Then we were off … the debate was on …

10:04 [DO] True – I can prove I am saving over £160 000.00 a year – roll on PBR !?

10:15 [SD] Bravo David. I recently changed my surgery process: productivity up by 35%. Cost? Zero. How? Process design methods.

11:54 [DO] Exactly – cost neutral because we were thinking differently – so how to persuade the rest?

12:10 [SD] First demonstrate it is possible then show those who want to learn how to do it themselves. http://www.saasoft.com/fish/course

We had hard evidence it was possible … and now MC joined the debate …

12:48 [MC] Simon why are there different FISH courses for safety, quality and efficiency? Shouldn’t good design do all of that?

12:52 [SD] Yes – goal of good design is all three. It just depends where you are starting from: Governance, Operations or Finance.

A number of parallel threads then took off and we all had lots of fun exploring each other’s knowledge and understanding.

17:28 MC registers on the FISH course.

And that gave me an idea. I emailed an offer – that he could have a complimentary pass for the whole FISH course in return for sharing what he learns as he learns it.  He thought it over for a couple of days then said “OK”.

Weds 9th October

06:38 [MC] Over the last 4 years or so, I’ve been involved in incrementally improving systems in hospitals. Today I’m going to start an experiment.

06:40 [MC] I’m going to see if we can do less of the incremental change and more system redesign. To do this I’ve enrolled in FISH

Fri 11th October

06:47 [MC] So as part of my exploration into system design, I’ve done some studies in my clinic this week. Will share data shortly.

21:21 [MC] Here’s a chart showing cycle time of patients in my clinic. Median cycle time 14 mins, but much longer in 2 pic.twitter.com/wu5MsAKk80

20131019_TTchart

21:22 [MC] Here’s the same clinic from patients’ point of view, wait time. Much longer than I thought or would like

20131019_WTchart

21:24 [MC] Two patients needed to discuss surgery or significant news, that takes time and can’t be rushed.

21:25 [MC] So, although I started on time, worked hard and finished on time, people were waiting ages to see me. Template is wrong!

21:27 [MC] By the time I had seen the 3rd patient, people were waiting 45 mins to see me. That’s poor.

21:28 [MC] The wait got progressively worse until the end of the clinic.

Sunday 13th October

16:02 [MC] As part of my homework on systems, I’ve put my clinic study data into a Gantt chart. Red = waiting, green = seeing me pic.twitter.com/iep2PDoruN

20131019_Ganttchart

16:34 [SD] Hurrah! The visual power of the Gantt Chart. Worth adding the booked time too – there are Seven Sins of Scheduling to find.

16:36 [SD] Excellent – good idea to sort into booked time order – it makes the planned rate of demand easier to see.

16:42 [SD] Best chart is Work In Progress – count the number of patients at each time step and plot as a run chart.

17:23 [SD] Yes – just count how many lines you cross vertically at each time interval. It can be automated in Excel

17:38 [MC] Like this? pic.twitter.com/fTnTK7MdOp

 

20131019_WIPchart

This is the work-in-progress chart. The most useful process monitoring chart of all. It shows the changing size of the queue over time.  Good flow design is associated with small, steady queues.
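Counting the lines crossed can indeed be automated. Here is a minimal Python sketch using made-up arrival and departure times (not MC’s actual clinic data):

```python
# Work-in-progress from Gantt data: at each time step, count how many
# patient intervals (arrival -> departure) are open, i.e. how many
# horizontal Gantt lines a vertical line at time t would cross.

# Hypothetical clinic data: (arrival_minute, departure_minute) per patient
intervals = [(0, 20), (10, 35), (15, 50), (30, 55), (40, 70)]

def wip_at(t: int) -> int:
    """Number of patients arrived but not yet departed at time t."""
    return sum(1 for start, end in intervals if start <= t < end)

for t in range(0, 75, 5):
    print(f"t = {t:2d} min  WIP = {wip_at(t)}")
```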

18:22 [SD] Perfect! You’re right not to plot as XmR – this is a cusum metric. Not a healthy WIP chart this!

There was more to follow but the “ah ha” moment had been seen and shared.

Weds 16th October

MC completes the Online FISH course and receives his well-earned Certificate of Achievement.

This was his with-the-benefit-of-hindsight conclusion:

I wish I had known some of this before. I will have a totally different approach to improvement projects now. Key is to measure and model well before doing anything radical.

Improvement Science works.
Improvement-by-Design is a skill that can be learned quickly.
FISH is just a first step.

Celebrating Achievement

Certificate

One of the best things about improvement is the delight that we feel when someone else acknowledges it.

Particularly someone whose opinion we respect.

We feel a warm glow of pride when they notice the difference and take the time to say “Well done!”

We need this affirmative feedback to fuel our improvement engine.

And we need to learn how to give ourselves affirmative feedback because usually there is a LOT of improvement work to do behind the scenes before any externally visible improvement appears.

It is like an iceberg – most of it is hidden from view.

And improvement is tough. We have to wade through Bureaucracy Treacle that is laced with Cynicide and policed by Skeptics.  We know this.

So we need to learn to celebrate the milestones we achieve and to keep reminding ourselves of what we have already done.  Even if no one else notices or cares.

Like the certificates, cups, and medals that we earned at school – still proudly displayed on our mantlepieces and shelves decades later. They are important. Especially to us.

So it is always a joy to celebrate the achievement of others and to say “Well Done” for reaching a significant milestone on the path of learning Improvement Science.

And that has been my great pleasure this week – to prepare and send the Certificates of Achievement to those who have recently completed the FISH course.

The best part of all has been to hear how many times the word “treasured” is used in the “Thank You” replies.

We display our Certificates with pride – not so much that others can see – more to remind ourselves every day to Celebrate Achievement.

 

DRAT!

[Bing Bong]  The sound bite heralded Leslie joining the regular Improvement Science mentoring session with Bob.  They were now using web-technology to run virtual meetings because it allows a richer conversation and saves a lot of time. It is a big improvement.

<Bob> Hi Leslie, how are you today?

<Leslie> OK thank you Bob.  I have a thorny issue to ask you about today. It has been niggling me ever since we started to share the experience we are gaining from our current improvement-by-design project.

<Bob> OK. That sounds interesting. Can you paint the picture for me?

<Leslie> Better than that – I can show you the picture, I will share my screen with you.

DRAT_01

<Bob> OK. I can see that RAG table. Can you give me a bit more context?

<Leslie> Yes. This is how our performance management team have been asked to produce their 4-weekly reports for the monthly performance committee meetings.

<Bob> OK. I assume the “Period” means sequential four week periods … so what is Count, Fail and Fail%?

<Leslie> Count is the number of discharges in that 4 week period, Fail is the number whose length of stay is longer than the target, and Fail% is the ratio of Fail/Count for each 4 week period.

<Bob> It looks odd that the counts are all 28.  Is there some form of admission slot carve-out policy?

<Leslie> Yes. There is one admission slot per day for this particular stream – that has been worked out from the average historical activity.

<Bob> Ah! And the Red, Amber, Green indicates what?

<Leslie> That depends on where the Fail% falls in a set of predefined target ranges; less than 5% is Green, 5-10% is Amber and more than 10% is Red.

<Bob> OK. So what is the niggle?

<Leslie> Each month when we are in the green we get no feedback – a deafening silence. Each month we are in amber we get a warning email.  Each month we are in the red we have to “go and explain ourselves” and provide a “back-on-track” plan.

<Bob> Let me guess – this feedback design is not helping much.

<Leslie> It is worse than that – it creates a perpetual sense of fear. The risk of breaching the target is distorting people’s priorities and their behaviour.

<Bob> Do you have any evidence of that?

<Leslie> Yes – but it is anecdotal.  There is a daily operational meeting and the highest priority topic is “Which patients are closest to the target length of stay and therefore need to have their discharge expedited?”

<Bob> Ah yes.  The “target tail wagging the quality dog” problem. So what is your question?

<Leslie> How do we focus on the cause of the problem rather than the symptoms?  We want to be rid of the “fear of the stick”.

<Bob> OK. What you have here is a very common system design flaw. It is called a DRAT.

<Leslie> DRAT?

<Bob> “Delusional Ratio and Arbitrary Target”.

<Leslie> Ha! That sounds spot on!  “DRAT” is what we say every time we miss the target!

<Bob> Indeed.  So first plot this yield data as a time series chart.

<Leslie> Here we go.

DRAT_02

<Bob> Good. I see you have added the cut-off thresholds for the RAG chart. These 5% and 10% thresholds are arbitrary and the data shows your current system is unable to meet them. Your design looks incapable.

<Leslie> Yes – and it also shows that the % expressed to one decimal place is meaningless because there are limited possibilities for the value.
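Leslie’s observation is easy to demonstrate: with a fixed count of 28, Fail% can only take 29 discrete values, and a single extra failure can jump a whole RAG band. A sketch of the RAG rule as described in the dialogue:

```python
# The RAG rule from the dialogue: Green < 5%, Amber 5-10%, Red > 10%.

def rag(fail_pct: float) -> str:
    if fail_pct < 5:
        return "Green"
    if fail_pct <= 10:
        return "Amber"
    return "Red"

# The only Fail% values a count of 28 can produce near the thresholds
for fails in range(0, 4):
    pct = 100 * fails / 28
    print(f"{fails} fails -> {pct:.1f}% -> {rag(pct)}")
# 0 -> 0.0% Green, 1 -> 3.6% Green, 2 -> 7.1% Amber, 3 -> 10.7% Red
```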

<Bob> Yes. These are two reasons that this is a Delusional Ratio; there are quite a few more.

DRAT_03

<Leslie> OK – and if I plot this as an Individuals chart I can see that this variation is not exceptional.

<Bob> Careful Leslie. It can be dangerous to do this: an Individuals chart of aggregate yield becomes quite insensitive with aggregated counts of relatively rare events, a small number of levels that go down to zero, and a limited number of points.  The SPC zealots compound the problem: plotting this data as a C-chart or a P-chart makes no difference.

This is all the effect of the common practice of applying  an arbitrary performance target then counting the failures and using that as means of control.

It is poor feedback loop design – but a depressingly common one.

<Leslie> So what do we do? What is a better design?

<Bob> First ask what the purpose of the feedback is?

<Leslie> To reduce the number of beds and save money by forcing down the length of stay so that the bed-day load is reduced and so we can do the same activity with fewer beds and at the same time avoid cancellations.

<Bob> OK. That sounds reasonable from the perspective of a tax-payer and a patient. It would also be a more productive design.

<Leslie> I agree but it seems to be having the opposite effect.  We are focusing on avoiding breaches so much that other patients get delayed who could have gone home sooner and we end up with more patients to expedite. It is like a vicious circle.  And every time we fail we get whacked with the RAG stick again. It is very demoralizing and it generates a lot of resentment and conflict. That is not good for anyone – least of all the patients.

<Bob> Yes.  That is the usual effect of a DRAT design. Remember that senior managers have not been trained in process improvement-by-design either, so blaming them is also counter-productive.  We need to go back to the raw data. Can you plot the actual LOS by patient, in order of discharge, as a run chart?

DRAT_04

<Bob> OK – is the maximum LOS target 8 days?

<Leslie> Yes – and this shows  we are meeting it most of the time.  But it is only with a huge amount of effort.

<Bob> Do you know where 8 days came from?

<Leslie> I think it was the historical average divided by 85% – someone read in a book somewhere that 85%  average occupancy was optimum and put 2 and 2 together.

<Bob> Oh dear! The “85% Occupancy is Best” myth combined with the “Flaw of Averages” trap. Never mind – let me explain the reasons why it is invalid to do this.

<Leslie> Yes please!

<Bob> First plot the data as a run chart and  as a histogram – do not plot the natural process limits yet as you have done. We need to do some validity checks first.

DRAT_05

<Leslie> Here you go.

<Bob> What do you see?

<Leslie> The histogram  has more than one peak – and there is a big one sitting just under the target.

<Bob> Yes. This is called the “Horned Gaussian” and it is the characteristic pattern of an arbitrary lead-time target that is distorting the behaviour of the system.  Just as you have described subjectively. There is a smaller peak with a mode of 4 days and there are a few very long length-of-stay outliers.  This multi-modal pattern means that the mean and standard deviation of this data are meaningless numbers, as are any numbers derived from them. It is like having a bag of mixed fruit and then setting a maximum allowable size for an unspecified piece of fruit. Meaningless.
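The “horn” is easy to see even in a crude text-mode tally. This sketch uses invented LOS data with a deliberate pile-up just under the 8-day target:

```python
from collections import Counter

# Invented length-of-stay data (days) with a pile-up just under the target
los = [4, 3, 5, 7, 8, 4, 8, 7, 2, 8, 5, 7, 8, 3, 8, 12, 4, 8, 7, 15]

# Run chart: the values in discharge order; histogram: a simple tally
tally = Counter(los)
for days in sorted(tally):
    print(f"{days:2d} days | {'#' * tally[days]}")
# The tallest bar sits at 8 days - the "horn" under the arbitrary target
```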

<Leslie> And the cases causing the breaches are completely different and could never realistically achieve that target! So we are effectively being randomly beaten with a stick. That is certainly how it feels.

<Bob> They are certainly different but you cannot yet assume that their longer LOS is inevitable. This chart just says – “go and have a look at these specific cases for a possible cause for the difference“.

<Leslie> OK … so if they are from a different system and I exclude them from the analysis what happens?

<Bob> It will not change reality.  The current design of  this process may not be capable of delivering an 8 day upper limit for the LOS.  Imposing  a DRAT does not help – it actually makes the design worse! As you can see. Only removing the DRAT will remove the distortion and reveal the underlying process behaviour.

<Leslie> So what do we do? There is no way that will happen in the current chaos!

<Bob> Apply the 6M Design® method. Map, Measure and Model it. Understand how it is behaving as it is then design out all the causes of longer LOS and that way deliver with a shorter and less variable LOS. Your chart shows that your process is stable.  That means you have enough flow capacity – so look at the policies. Draw on all your FISH training. That way you achieve your common purpose, and the big nasty stick goes away, and everyone feels better. And in the process you will demonstrate that there is a better feedback design than DRATs and RAGs. A win-win-win design.

<Leslie> OK. That makes complete sense. Thanks Bob!  But what you have described is not part of the FISH course.

<Bob> You are right. It is part of the ISP training that comes after FISH. Improvement Science Practitioner.

<Leslie> I think we will need to get a few more people trained in the theory, techniques and tools of Improvement Science.

<Bob> That would appear to be the case. They will need a real example to see what is possible.

<Leslie> OK. I am on the case!

Race for the Line

stick_figures_pulling_door_150_wht_6913

It is surprising how competitive most people are. We are constantly comparing ourselves with others and using what we find to decide what to do next. Groan or Gloat.  Chase or Cruise.

This is because we are social animals.  Comparing with other is hard-wired into us. We have little choice.

But our natural competitive behaviour can become counter-productive when we learn that we can look better-by-comparison if we block or trip-up our competitors.  In a vainglorious attempt to make ourselves look better-by-comparison we spike the wheels of our competitors’ chariots.  We fight dirty.

It is not usually openly aggressive fighting.  Most of our spiking is done passively. Often by deliberately not doing something.  A deliberate act of omission.  And if we are challenged we often justify our act of omission by claiming we were too busy.

This habitual passive-aggressive learned behaviour is not only toxic to improvement, it creates a toxic culture too. It is toxic to everything.

And it ensures that we stay stuck in The Miserable Job Swamp.  It is a bad design.

So we need a better one.

One idea is to eliminate competition.  This sounds plausible but it does not work. We are hard-wired to compete because it has proven to be a very effective long term survival strategy. The non-competitive have not survived.  To be deliberately non-competitive will guarantee mediocrity and future failure.

A better design is to leverage our competitive nature and this is surprisingly easy to do.

We flip the “battle” into a “race”.

green_leader_running_the_race_150_wht_3444

To do that we need:

1) A clear destination – a shared common purpose – that can be measured. We need to be able to plot our progress using objective evidence.

2) A proven, safe, effective and efficient route plan to get us to our destination.

3) A required arrival time that is realistic.  Open-ended time-scales do not work.

4) Regular feedback to measure our individual progress and to compare ourselves with others.  Selective feedback is ineffective.  Secrecy or anonymous feedback is counter-productive at best and toxic at worst.

5) The ability to re-invest our savings on all three win-win-win dimensions: emotional, temporal and financial.  This fuels the engine of improvement. Us.

The rest just happens – but not by magic – it happens because this is a better Improvement-by-Design.

Find and Fill

Many barriers to improvement are invisible.

This is because they are caused by what is not present rather than what is.  They are gaps or omissions.

Some gaps are blindingly obvious.  This is because we expect to see something there, so we notice when it is missing.  We would notice a missing rope bridge across a chasm because only the end posts are visible.

Many gaps are not obvious. This is because we have no experience or expectation.  The gap is invisible.  We are blind to the omission.

These are the gaps that we accidentally stumble into. Such as a gap in our knowledge and understanding that we cannot see. These are the gaps that create the fear of failure. And the fear is especially real because the gap is invisible and we only know when it is too late.

minefield

It is like walking across an emotional minefield.  At any moment we could step on an ignorance mine and our confidence would be blasted into fragments.

So our natural and reasonable reaction is to stay outside the emotional minefield and inside our comfort zones – where we feel safe.  We give up trying to learn and trying to improve. Every-one hopes that Some-one or Any-one will do it for us.  No-one does.

The path to Improvement is always across an emotional minefield because improvement implies unlearning. So we need a better design than blundering about hoping not to fall into an invisible gap.  We need a safer design.

There are a number of options:

Option 1. Ask someone who knows the way across the minefield and can demonstrate it. Someone who knows where the mines are and knows how to avoid them. Someone to tell us where to step and where not to.

Option 2. Clear a new path and mark it clearly so others can trust that it is safe.  Remove the ignorance mines. Find and Fill the knowledge map.

Option 1 is quicker but it leaves the ignorance mines in place.  So sooner or later someone will step on one. Boom!

We need to be able to do Option 2.

The obvious  strategy for Option 2 is to clear the ignorance mines.  We could do this by deliberately blundering about setting off the mines. We could adopt the burn-and-scrape or learn-from-mistakes approach.

Or we could detect, defuse and remove them.

The former requires people willing to take emotional risks; the latter does not require such a sacrifice.

And “learn-by-mistakes” only works if people are able to make mistakes visibly so everyone can learn. In an adversarial, competitive, distrustful context this can not happen: and the result is usually for the unwilling troops to be forced into the minefield with the threat of a firing-squad if they do not!

And where a mistake implies irreversible harm it is not acceptable to learn that way. Mistakes are covered up. The ignorance mines are re-set for the next hapless victim to step on. The emotional carnage continues. Any chance of sustained, system-wide improvement is blocked.

So in a low-trust cultural context the detect-defuse-and-remove strategy is the safer option.

And this requires a proactive approach to finding the gaps in understanding; a proactive approach to filling the knowledge holes; and a proactive approach to sharing what was learned.

Or we could ask someone who knows where the ignorance mines are and work our way through finding and filling our knowledge gaps. By that means any of us can build a safe, effective and efficient path to sustainable improvement.

And the person to ask is someone who can demonstrate a portfolio of improvement in practice – an experienced Improvement Science Practitioner.

And we can all learn to become an ISP and then guide others across their own emotional minefields.

All we need to do is take the first step on a well-trodden path to sustained improvement.

Fudge? We Love Fudge!

stick_figures_moving_net_150_wht_8609
It is almost autumn again.  The new school year brings anticipation and excitement. The evenings are drawing in and there is a refreshing chill in the early morning air.

This is the time of year for fudge.

Alas not the yummy sweet sort that Grandma cooked up and gave out as treats.

In healthcare we are already preparing the Winter Fudge – the annual guessing game of attempting to survive the Winter Pressures. By fudging the issues.

This year with three landmark Safety and Quality reports under our belts we have more at stake than ever … yet we seem as ill prepared as usual. Mr Francis, Prof Keogh and Dr Berwick have collectively exhorted us to pull up our socks.

So let us explore how and why we resort to fudging the issues.

Watch the animation of a highly simplified emergency department and follow the thoughts of the manager. You can pause, rewind, and replay as much as you like.  Follow the apparently flawless logic – it is very compelling. The exercise is deliberately simplified to eliminate wriggle room. But it is valid because the behaviour is defined by the Laws of Physics – and they are not negotiable.

The problem was a combination of several planning flaws – two in particular.

First is the “Flaw of Averages” which is where the past performance-over-time is boiled down to one number. An average. And that is then used to predict precise future behaviour. This is a very big mistake.

The second is the “Flaw of Fudge Factors” which is an attempt to mitigate the effects of the first error by fudging the answer – by adding an arbitrary “safety margin”.

This pseudo-scientific sleight-of-hand may polish the planning rhetoric and render it more plausible to an unsuspecting Board – but it does not fool Reality.

In reality the flawed design failed – as the animation dramatically demonstrated.  The simulated patients came to harm. Unintended harm to be sure – but harm nevertheless.

So what is the alternative?

The alternative is to learn how to avoid Sir Flaw of Averages and his slippery friend Mr Fudge Factor.

And learning how to do that is possible … it is called Improvement Science.

And you can start right now … click HERE.

Taming the Wicked Bull and the OH Effect

bull_by_the_horns_anim_150_wht_9609

“Take the bull by the horns” is a phrase that is often heard in Improvement circles.

The metaphor implies that the system – the bull – is an unpredictable, aggressive, wicked, wild animal with dangerous sharp horns.

“Unpredictable” and “Dangerous” is certainly what the newspapers tell us the NHS system is – and this generates fear.  Fear for our safety drives us to avoid the bad-tempered beast.

It creates fear in the hearts of the very people the NHS is there to serve – the public.  It is not the intended outcome.

Bullish” is a phrase we use for “aggressive behaviour” and it is disappointing to see those accountable behave in a bullish manner – aggressive, unpredictable and dangerous.

We are taught that bulls are to be avoided and we are told not to wave red flags at them! For our own safety.

But that is exactly what must happen for Improvement to flourish.  We all need regular glimpses of the Red Flag of Reality.  It is called constructive feedback – but it still feels uncomfortable.  Our natural reaction to being shocked out of our complacency is to get angry and to swat the red flag waver.  And the more powerful we are, the sharper our horns are, the more swatting we can do and the more fear we can generate.  Often intentionally.

So inexperienced improvement zealots are prodded into “taking the executive bull by the horns” – but it is poor advice.

Improvement Scientists are not bull-fighters. They are not fearless champions who put themselves at personal risk for personal glory and the entertainment of others.  That is what Rescuers do. The fire-fighters; the quick-fixers; the burned-toast-scrapers; the progress-chasers; and the self-appointed-experts. And they all get gored by an angry bull sooner or later.  Which is what the crowd came to see – Bull Fighter Blood and Guts!

So attempting to slay the wicked bullish system is not a realistic option.

What about taming it?

This is the game of Bucking Bronco.  You attach yourself to the bronco like glue and wear it down as it tries to throw you off and trample you under hoof. You need strength, agility, resilience and persistence. All admirable qualities. Eventually the exhausted beast gives in and does what it is told. It is now tamed. You have broken its spirit.  The stallion is no longer a passionate leader; it is just a passive follower. It has become a Victim.

Improvement requires spirit – lots of it.

Improvement requires the spirit-of-courage to challenge dogma and complacency.
Improvement requires the spirit-of-curiosity to seek out the unknown unknowns.
Improvement requires the spirit-of-bravery to take calculated risks.
Improvement requires the spirit-of-action to make the changes needed to deliver the improvements.
Improvement requires the spirit-of-generosity to share new knowledge, understanding and wisdom.

So taming the wicked bull is not going to deliver sustained improvement.  It will only achieve stable mediocrity.

So what next?

What about asking someone who has actually done it – actually improved something?

Good idea! Who?

What about someone like Don Berwick – founder of the Institute for Healthcare Improvement in the USA?

Excellent idea! We will ask him to come and diagnose the disease in our system – the one that led to the Mid-Staffordshire septic safety carbuncle, and the nasty quality rash in 14 Trusts that Professor Sir Bruce Keogh KBE uncovered when he lifted the bed sheet.

[Click HERE to see Dr Bruce’s investigation].

We need a second opinion because the disease goes much deeper – and we need it from a credible, affable, independent, experienced expert. Like Dr Don B.

So Dr Don has popped over the pond, examined the patient, formulated his diagnosis and delivered his prescription.

[Click HERE to read Dr Don’s prescription].

Of course if you ask two experts the same question you get two slightly different answers.  If you ask ten you get ten.  This is because if there was only one answer that everyone agreed on then there would be no problem, no confusion, and no need for experts. The experts know this of course. It is not in their interest to agree completely.

One bit of good news is that the reports are getting shorter.  Mr Robert’s report on the failings of one hospital is huge and has 290 recommendations.  A bit of a bucketful.  Dr Bruce’s report is specific to the Naughty Fourteen who have strayed outside the statistical white lines of acceptable mediocrity.

Dr Don’s is even shorter and it has just 10 recommendations. One for each finger – so easy to remember.

1. The NHS should continually and forever reduce patient harm by embracing wholeheartedly an ethic of learning.

2. All leaders concerned with NHS healthcare – political, regulatory, governance, executive, clinical and advocacy – should place quality of care in general, and patient safety in particular, at the top of their priorities for investment, inquiry, improvement, regular reporting, encouragement and support.

3. Patients and their carers should be present, powerful and involved at all levels of healthcare organisations from wards to the boards of Trusts.

4. Government, Health Education England and NHS England should assure that sufficient staff are available to meet the NHS’s needs now and in the future. Healthcare organisations should ensure that staff are present in appropriate numbers to provide safe care at all times and are well-supported.

5. Mastery of quality and patient safety sciences and practices should be part of initial preparation and lifelong education of all health care professionals, including managers and executives.

6. The NHS should become a learning organisation. Its leaders should create and support the capability for learning, and therefore change, at scale, within the NHS.

7. Transparency should be complete, timely and unequivocal. All data on quality and safety, whether assembled by government, organisations, or professional societies, should be shared in a timely fashion with all parties who want it, including, in accessible form, with the public.

8. All organisations should seek out the patient and carer voice as an essential asset in monitoring the safety and quality of care.

9. Supervisory and regulatory systems should be simple and clear. They should avoid diffusion of responsibility. They should be respectful of the goodwill and sound intention of the vast majority of staff. All incentives should point in the same direction.

10. We support responsive regulation of organisations, with a hierarchy of responses. Recourse to criminal sanctions should be extremely rare, and should function primarily as a deterrent to wilful or reckless neglect or mistreatment.

The meat in the sandwich is recommendations 5 and 6, which together say “Learn Improvement Science“.

And what happens when we commit and engage in that learning journey?

Steve Peak has described what happens in this very blog. It is called the OH effect.

OH stands for “Obvious-in-Hindsight”.

Obvious means “understandable” which implies visible, sensible, rational, doable and teachable.

Hindsight means “reflection” which implies having done something and learning from reality.

So if you would like to have a sip of Dr Don’s medicine and want to get started on the path to helping to create a healthier healthcare system you can do so right now by learning how to FISH – the first step to becoming an Improvement Science Practitioner.

The good news is that this medicine is neither dangerous nor nasty tasting – it is actually fun!

And that means it is OK for everyone – clinicians, managers, patients, carers and politicians.  All of us.

 

The Learning Labyrinth

labyrinth

The mind is a labyrinth of knowledge – a maze with many twists, turns, joins, splits, tunnels, bridges, crevasses and caverns.

Some paths lead to dead ends; others take a long way around but get to the destination in the end.

The shortest path is not obvious – even in hindsight.

And there is another challenge … no two individuals share the same knowledge labyrinth.  An obvious path between problem and solution for one person may be invisible or incomprehensible to another.

But the greatest challenge, and the greatest opportunity, is that our labyrinth of knowledge can change and does change continuously … through learning.

So if one person can see a path of improvement between current problem and future solution, then how can they guide another who cannot?

This is a challenge that an Improvement Scientist faces every day.

It is not effective to just give a list of instructions – “To get from problem to solution follow this path”.  The path may not exist in the recipient’s knowledge labyrinth. If they just follow the instructions they will come up against a wall or fall into a hole.

It is not realistic to expect the learner to replace their labyrinth of knowledge with that of the teacher – to clone the teacher’s way of thinking. Just reciting the Words of the Guru is not improvement – it is Zealotry.

One way is for a guide to describe their own labyrinth of knowledge.  To lay it out in a way that any other can explore.  A way that is fully signposted, with explanations and maps that the explorer can refer to as they go.  A template against which they can compare their own knowledge labyrinth to reveal the similarities and the differences.

No two people will explore a knowledge labyrinth in the same way … but that does not matter. So long as they are able to uncover any assumptions that misguide them and any gaps in their knowledge that block their progress.  With that feedback they can update their own mental signposts and create safe, effective and efficient paths that they can follow in future at will.

And that is how the online FISH training is designed.  It is the knowledge labyrinth of an experienced Improvement Scientist that can be explored online.

And it keeps changing …

The Art of Juggling

figure_juggling_balls_150_wht_4301

Improvement Science is like three-ball juggling.

And there are different sets of three things that an Improvementologist needs to juggle:

the Quality-Flow-Cost set and
the Governance-Operations-Finance set and
the Customer-Staff-Organization set.

But the problem with juggling is that it looks very difficult to do – so almost impossible to learn – so we do not try.  We give up before we start. And if we are foolhardy enough to try (by teaching ourselves using the suck-it-and-see or trial-and-error method) then we drop all the balls very quickly. We succeed in reinforcing our impossible-for-me belief with evidence.  It is a self-fulfilling prophecy. Only the most tenacious, self-motivated and confident people succeed – which further reinforces the I-Can’t-Do belief of everyone else.

The problem here is that we are making an Error of Omission.

We are omitting to ask ourselves two basic questions: “How does a juggler learn their art?” and “How long does it take?”

The answer is surprising.

It is possible for just about anyone to learn to juggle in about 10 minutes. Yes – TEN MINUTES.


Skeptical?  Sure you are – if it was that easy we would all be jugglers.  That is the “I Can’t Do” belief talking. Let us silence that confidence-sapping voice once and for all.

Here is how …

You do need to have at least one working arm and one working eyeball and something connecting them … and it is a bit easier with two working arms and two working eyeballs and something connecting them.

And you need something to juggle – fruit is quite good – oranges and apples are about the right size, shape, weight and consistency (and you can eat the evidence later too).

And you need something else.

You need someone to teach you.

And that someone must be able to juggle and more importantly they must be able to teach someone else how to juggle which is a completely different skill.

juggling_at_Keele_June_2013

Those are the necessary-and-sufficient requirements to learn to juggle in 10 minutes.

The recent picture shows an apprentice Improvement Scientist at the “two orange” stage – just about ready to move to the “three orange” stage.

Exactly the same is true of learning the Improvement Science juggling trick.

The ability to improve Quality, Flow and Cost at the same time.

The ability to align Governance, Operations and Finance into a win-win-win synergistic system.

The ability to delight customers, motivate staff and support leaders at the same time.


And the trick to learning to juggle is called step-by-step unlearning. It is counter-intuitive.

To learn to juggle you just “unlearn” what is stopping you from juggling. You unlearn the unconscious assumptions and habits that are getting in the way.

And that is why you need a teacher who knows what needs to be unlearned and how to help you do it.

fish
And for an apprentice Improvement Scientist the first step on the Unlearning Journey is FISH.

Step 5 – Monitor

Improvement-by-Design is not the same as Improvement-by-Desire.

Improvement-by-Design has a clear destination and a design that we know can get us there because we have tested it before we implement it.

Improvement-by-Desire has a vague direction and no design – we do not know if the path we choose will take us in the direction we desire to go. We cannot see the twists and turns, the unknown decisions, the forks, the loops, and the dead-ends. We expect to discover those along the way. It is an exercise in hope.

So where pessimists and skeptics dominate the debate then Improvement-by-Design is a safer strategy.

Just over seven weeks ago I started an Improvement-by-Design project – a personal one. The destination was clear: to get my BMI (body mass index) into a “healthy” range by reducing weight by about 5 kg.  The design was clear too – to reduce energy input rather than increase energy output. It is a tried-and-tested method – “avoid burning the toast”.  The physical and physiological model predicted that the goal was achievable in 6 to 8 weeks.
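That prediction is simple linear arithmetic, and it can be sketched in a few lines. The figures below (an assumed 2400 kcal/day energy output and the common approximation of ~7700 kcal stored per kg of body fat) are generic assumptions for illustration, not measurements from the project:

```python
# Minimal linear energy-balance model (illustrative assumptions only).
# ~7700 kcal of stored energy per kg of body fat is a common approximation.
KCAL_PER_KG = 7700

def days_to_lose(target_kg, intake_kcal, expenditure_kcal):
    """Predicted days to lose target_kg at a constant daily energy deficit."""
    daily_deficit = expenditure_kcal - intake_kcal  # kcal drawn from stores/day
    return target_kg * KCAL_PER_KG / daily_deficit

# 5 kg target, 1500 kcal/day in, an assumed 2400 kcal/day out:
days = days_to_lose(5, 1500, 2400)
print(round(days / 7, 1), "weeks")
```

At a 900 kcal/day deficit the model predicts roughly six weeks – consistent with the 6-to-8 week estimate.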

So what has happened?

To answer that question requires two time-series charts. The input chart of calories ingested and the output chart of weight. This is Step 5 of the 6M Design® sequence.

Energy_Weight_Model

Remember that there was another parameter in this personal Energy-Weight system: the daily energy expended.

But that is very difficult to measure accurately – so I could not do that.

What I could do was to estimate the actual energy expended from the model of the system using the measured effect of the change. But that is straying into the Department of Improvement Science Nerds. Let us stay in the real world a bit longer.

Here is the energy input chart …

SRD_EnergyIn_XmR

It shows an average calorie intake of 1500 kcal – the estimated value required to achieve the weight loss given the assumptions of the physiological model. It also shows a wide day-to-day variation.  It does not show any signal flags (red dots) so an inexperienced Improvementologist might conclude that this is just random noise.

It is not.  The data is not homogeneous. There is a signal in the system – a deliberate design change – and without that context it is impossible to correctly interpret the chart.

Remember Rule #1: Data without context is meaningless.
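For readers new to these charts, the limits and the “red dot” signal flags on an XmR (individuals) chart are derived from the data’s own point-to-point variation. A minimal sketch, using invented daily calorie figures rather than the real project data:

```python
# Sketch of XmR (individuals) chart limits: mean +/- 2.66 * average
# moving range. The daily figures below are invented for illustration.
data = [1450, 1620, 1380, 1550, 1700, 1320, 1490, 1560, 1410, 1530]

mean = sum(data) / len(data)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

upper = mean + 2.66 * avg_mr   # upper natural process limit
lower = mean - 2.66 * avg_mr   # lower natural process limit

# Any point outside the limits would earn a "red dot" signal flag.
flags = [x for x in data if x > upper or x < lower]
print(round(mean), round(lower), round(upper), flags)
```

Here no point falls outside the limits – which is exactly the “no flags” picture described above.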

The deliberate process design change was to reduce calorie intake for just two days per week by omitting unnecessary Hi-Cal treats – like those nice-but-naughty Chocolate Hobnobs. But which two days varied – so there is no obvious repeating pattern in the chart. And the intake on all days varied – there were a few meals out and some BBQ action.

To separate out these two parts of the voice-of-the-process we need to rationally group the data into the Lo-cal days (F) and the OK-cal days (N).

SRD_EnergyIn_Grouped_XmR

The grouped BaseLine© chart tells a different story.  The two groups clearly have a different average and both have a lower variation-over-time than the meaningless mixed-up chart.

And we can now see a flag – on the second F day. That is a prompt for an “investigation” which revealed: will-power failure.  Thursday evening beer and peanuts! The counter measure was to avoid Lo-cal on a Thursday!
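Rational grouping itself is simple arithmetic: partition the data by its known context before computing any statistics. A sketch with invented figures:

```python
# Rational grouping: separate the data by its known context (F = Lo-cal
# day, N = OK-cal day) before computing statistics. Figures are invented.
days = [("N", 1650), ("F", 820), ("N", 1720), ("N", 1590),
        ("F", 790), ("N", 1680), ("N", 1610)]

def group_mean(label):
    values = [kcal for tag, kcal in days if tag == label]
    return sum(values) / len(values)

mixed_mean = sum(kcal for _, kcal in days) / len(days)

# The mixed average describes neither group; the grouped averages are
# two distinct, much less variable voices-of-the-process.
print(round(group_mean("F")), round(group_mean("N")), round(mixed_mean))
```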

What we are seeing here is the fifth step of the 6M Design® sequence – the Monitor step.

And as well as monitoring the factor we are changing – the cause – we also monitor the factor we want to influence – the effect.

The effect here is weight. And our design includes a way of monitoring that – the daily weighing.

SRD_WeightOut_XmR

The output metric BaseLine© chart – weight – shows a very different pattern. It is described as “unstable” because there are clusters of flags (red dots) – some at the start and some at the end. The direction of the instability is “falling” – which is the intended outcome.

So we have robust, statistically valid evidence that our modified design is working.

The weight is falling so the energy going in must be less than the energy being put out. I am burning off the excess lard and without doing any extra exercise.  The physics of the system mandate that this is the only explanation. And that was my design specification.

So that is good. Our design is working – but is it working as we designed?  Does observation match prediction? This is Improvement-by-Design.

Remember that we had to estimate the other parameter to our model – the average daily energy output – and we guessed a value of 2400 kcal per day using generic published data.  Now I can refine the model using my specific measured change in weight – and I can work backwards to calculate the third parameter.  And when I did that the number came out at 2300 kcal per day.  Not a huge difference – the equivalent of one yummy Chocolate Hobnob a day – but the effect is cumulative.  Over the 53 days of the 6M Design® project so far that would be a 5300 kcal difference – about 0.6kg of useless blubber.
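Working backwards from a measured weight change to the implied daily energy output is the same linear model run in reverse. A sketch of that calculation – the 5.5 kg and 53-day inputs are hypothetical figures chosen only to illustrate the arithmetic, and the ~7700 kcal/kg energy density is a generic approximation rather than the author’s exact model:

```python
# Back-calculate the implied daily energy output from an observed weight
# change: the linear energy-balance model run in reverse.
KCAL_PER_KG = 7700   # approximate energy stored per kg of body fat

def implied_expenditure(weight_lost_kg, days, avg_intake_kcal):
    """Daily energy output implied by an observed weight loss."""
    total_deficit_kcal = weight_lost_kg * KCAL_PER_KG
    return avg_intake_kcal + total_deficit_kcal / days

# Hypothetical: 5.5 kg lost over 53 days at an average 1500 kcal/day in.
print(round(implied_expenditure(5.5, 53, 1500)))  # close to 2300 kcal/day
```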

So now I have refined my personal energy-weight model using the new data and I can update my prediction and create a new chart – a Deviation from Aim chart.

SRD_WeightOut_DFA
This is the chart I need to watch to see if I am on the predicted track – and it too is unstable, and not in a good direction.  It shows that the deviation-from-aim is increasing over time and this is because my original guesstimate of an unmeasurable model parameter was too high.
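A deviation-from-aim series is simply the measured value minus the predicted value at each point in time. A minimal sketch, with all figures invented for illustration:

```python
# Deviation-from-Aim (DFA): measured minus predicted at each point in
# time. A steadily drifting DFA means the model needs recalibrating.
# All figures below are invented for illustration.
start_kg, predicted_loss_per_day = 78.0, 0.117

predicted = [start_kg - predicted_loss_per_day * d for d in range(7)]
measured = [78.00, 77.95, 77.85, 77.80, 77.70, 77.65, 77.55]

dfa = [round(m - p, 2) for m, p in zip(measured, predicted)]
print(dfa)  # steadily rising: losing weight more slowly than predicted
```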

This means that my current design will not get me to where I want to be, when I want to be there. This tells me I need to tweak my design.  And I have a list of options.

1) I could adjust the target average calories per day down from 1500 to 1400 and cut out a few more calories; or

2) I could just keep doing what I am doing and accept that it will take me longer to get to the destination; or

3) I could do a bit of extra exercise to burn the extra 100 kcals a day off, or

4) I could do a bit of any or all three.

And because I am comparing experience with expectation using a DFA chart I will know very quickly if the design tweak is delivering.

And because some nice weather has finally arrived and the BBQ will be busy, I have chosen to take longer to get there. I will enjoy the weather, have a few beers and some burgers. And that is OK. It is a perfectly reasonable design option – a rational and justifiable choice.

And I need to set my next destination – a weight of about 72 kg according to the BMI chart – and with my calibrated Energy-Weight model I will know exactly how to achieve that weight and how long it will take me. And I also know how to maintain it – by increasing my calorie intake. More beer and peanuts – maybe – or the occasional Chocolate Hobnob even. Hurrah! Win-win-win!


6MDesign

This real-life example illustrates 6M Design® in action and demonstrates that it is a generic framework.

The energy-weight model in this case is a very simple one that can be worked out on the back of a beer mat (which is what I did).

It is called a linear model because the relationship between calories-in and weight-out is approximately a straight line.

Most real-world systems are not like this. Inputs are not linearly related to outputs.  They are called non-linear systems: and that makes a BIG difference.

A very common error is to impose a “linear model” on a “non-linear system” and it is a recipe for disappointment and disaster.  We do that when we commit the Flaw of Averages error. We do it when we plot linear regression lines through time-series data. We do it when we extrapolate beyond the limits of our evidence.  We do it when we equate time with money.

The danger of this error is that our linear model leads us to make unwise decisions and we actually make the problem worse – not better.  We then give up in frustration and label the problem as “impossible” or “wicked” or get sucked into various forms of Snake Oil Sorcery.

The safer approach is to assume the system is non-linear and just let the voice of the system talk to us through our BaseLine© charts. The challenge for us is to learn to understand what the system is saying.

That is why the time-series charts are called System Behaviour Charts and that is why they are an essential component of Improvement-by-Design.

However – there is a step that must happen before this – and that is to get the Foundations in place. The foundation of knowledge on which we can build our new learning. That gap must be filled first.

And anyone who wants to invest in learning the foundations of improvement science can now do so at their own convenience and at their own pace because it is on-line … and it is here.

fish

Step 6 – Maintain

Anyone with much experience of  change will testify that one of the hardest parts is sustaining the hard won improvement.

The typical story is all too familiar – a big push for improvement, a dramatic improvement, congratulations and presentations then six months later it is back where it was before but worse. The cynics are feeding on the corpse of the dead change effort.

The cause of this recurrent nightmare is a simple error of omission.

Failure to complete the change sequence. Missing out the last and most important step. Step 6 – Maintain.

Regular readers may remember the story of the pharmacy project – where a sceptical department were surprised and delighted to discover that zero-cost improvement was achievable and that a win-win-win outcome was not an impossible dream.

Enough time has now passed to ask the question: “Was the improvement sustained?”

TTO_Yield_Nov12_Jun13

The BaseLine© chart above shows their daily performance data on their 2-hour turnaround target for to-take-out prescriptions (TTOs). The weekends are excluded because the weekend system is different from the weekday system. The first split in the data, in Jan 2013, is when the improvement-by-design change was made: Step 4 of the 6M Design® sequence – Modify.

There was an immediate and dramatic improvement in performance that was sustained for about six weeks – then it started to drift back. Bit by Bit.  The time-series chart flags it clearly.


So what happened next?

The 12-week review happened next – and it was done by the change leader – in this case the Inspector/Designer/Educator.  The review data plotted as a time-series chart revealed instability and that justified an investigation of the root cause – which was that the final and critical step had not been completed as recommended. The inner feedback loop was missing. Step 6 – Maintain was not in place.

The outer feedback loop had not been omitted. That was the responsibility of the experienced change leader.

And the effect of closing the outer-loop is clearly shown by the third segment – a restoration of stability and improved capability. The system is again delivering the improvement it was designed to deliver.


What does this lesson teach us?

The message here is that the sponsors of improvement have essential parts to play in the initiation and the maintenance of change and improvement. If they fail in their responsibility then the outcome is inevitable and predictable. Mediocrity and cynicism.

Part 1: Setting the clarity and constancy of common purpose.

Without a clear purpose then alignment, focus and effectiveness are thwarted.  Purpose that changes frequently is not a purpose – it is reactive knee-jerk politics.  Constancy of purpose is required because improvement takes time to achieve and to embed.  There is always a lag so moving the target while the arrow is in flight is both dangerous and leads to disengagement.  Establishing common ground is essential to avoiding the time-wasting discussion and negotiation that is inevitable when opinions differ – which they always do.

Part 2: Respectful challenge.

Effective change leadership requires an ability to challenge from a position of mutual respect.  Telling people what to do is not leadership – it is dictatorship.  Dodging the difficult conversations and passing the buck to others is not leadership – it is ineffective delegation. Asking people what they want to do is not leadership – it is abdication of responsibility.  People need their leaders to challenge them and to respect them at the same time.  It is not a contradiction.  It is possible to do both.

And one way that a leader of change can challenge with respect is to expose the need for change; to create the context for change; and then to commit to holding those charged with change to account – including themselves.  And to make it clear at the start what their expectation is as a leader – and what the consequences of disappointment are.

It is a delight to see individuals, teams, departments and organisations blossom and grow when the context of change is conducive.  And it is disappointing to see them wither and shrink when the context of change is laced with cynicide – the toxic product of cynicism.


So what is the next step?

What could an aspirant change leader do to get this for themselves and their organisations?

One option is to become a Student of Improvementology® – and they can do that here.

Resistance and Persistence

[Bing-Bong]

The email from Leslie was unexpected.

Hi Bob, can I change the planned topic of our session today to talk about resistance. We got off to a great start with our improvement project but now I am hitting brick walls and we are losing momentum. I am getting scared we will stall. Leslie”

Bob replied immediately – it was only a few minutes until their regular teleconference call.

Hi Leslie, no problem. Just firing up the Webex session now. Bob”

[Whoop-Whoop]

The sound bite announced Leslie joining the teleconference.

<Leslie> Hi Bob. Sorry about the last minute change of plan. Can I describe the scenario?

<Bob> Hi Leslie. Please do.

<Leslie> Well we are at Step 5 of the 6M Design® sequence and we are monitoring the effect of the first set of design changes that we have made. We started by eliminating design flaws that were generating errors and impairing quality.  The information coming in confirms what we predicted at Step 3.  The problem is that a bunch of “fence-sitters” who said nothing at the start are now saying that the data is a load of rubbish and implying we are cooking the books to make it look better than it is! I am pulling my hair out trying to convince them that it is working.

<Bob> OK. What is your measure for improvement?

<Leslie> The percentage yield from the new quality-by-design process. It is improving. The BaseLine© chart says so.

<Bob> And how is that improvement being reported?

<Leslie> As the average yield per week.  I know we should not aggregate for a month because we need to see the impact of the change as it happens and I know there is a seven-day cycle in the system so we set the report interval at one week.

<Bob> Yes. Those are all valid reasons. What is the essence of the argument against your data?

<Leslie> There is no specific argument – it is just being discounted as “rubbish”.

<Bob> So you are feeling resistance?

<Leslie> You betcha!

<Bob> OK. Let us take a different tack on this. How often do you measure the yield?

<Leslie> Daily.

<Bob> And what is the reason you are using the percentage yield as your metric?

<Leslie> So we can compare one day with the next more easily and plot it on a time-series chart. The denominator is different every day so we cannot use just the count of errors.

<Bob> OK. And how do you calculate the weekly average?

<Leslie> From the daily percentage yields. It is not a difficult calculation!

There was a definite hint of irritation and sarcasm in Leslie’s voice.

<Bob> And how confident are you in your answer?

<Leslie> Completely confident. The team are fantastic. They see the value of this and are collecting the data assiduously. They can feel the improvement. They do not need the data to prove it. The feedback is to convince the fence-sitters and skeptics and they are discounting it.

<Bob> OK so you are confident in the quality of the data going in to your calculation – how confident are you in the data coming out?

<Leslie> What do you mean?!  It is a simple calculation – a 12-year-old could do it.

<Bob> How are you feeling Leslie?

<Leslie> Irritated!

<Bob> Does it feel as if I am resisting too?

<Leslie> Yes!!

<Bob> Irritation is anger – the sense of loss in the present. What do you feel you are losing?

<Leslie> My patience and my self-confidence.

<Bob> So what might be my reasons for resisting?

<Leslie> You could be playing games or you could have a good reason.

<Bob> Do I play games?

<Leslie> Not so far! Sorry … no. You do not do that.

<Bob> So what could be my good reason?

<Leslie> Um. You can feel or see something that I cannot. An error?

<Bob> Yes. If I just feel something is not right I cannot do much else but say “That does not feel right”.  If I can see what is not right I can explain my rationale for resisting.  Can I try to illuminate?

<Leslie> Yes please!

<Bob> OK – have you got a spreadsheet handy?

<Leslie> Yes.

<Bob> OK – create a column of twenty random numbers in the range 20-80 and label them “daily successes”. Next to them create a second column of random numbers in the range 20-100 and label them “daily activity”.

<Leslie> OK – done that.

<Bob> OK – calculate the % yield by day then the average of the column of daily % yield.

<Leslie> OK – that is exactly how I do it.

<Bob> OK – now sum the columns of successes and activities and calculate the average % yield from those two totals.

<Leslie> Yes – I could do that and it will give the same final answer but I do not do that because I cannot use that data on my run chart – for the reasons I said before.

<Bob> Does it give the same answer?

<Leslie> Um – no. Wait. I must have made an error. Let me check. No. I have done it correctly. They are not the same. Uh?

<Bob> What are you feeling?

<Leslie> Confused!  But the evidence is right there in front of me.

<Bob> An assumption you have been making has just been exposed to be invalid. Your rhetoric does not match reality.

<Leslie> But everyone does this … it is standard practice.

<Bob> And that makes it valid?

<Leslie> No .. of course not. That is one of the fundamental principles of Improvement Science. Just doing what everyone else does is not necessarily correct.

<Bob> So now we must understand what is happening. Can you now change the Daily Activity column so it is the same every day – say 60.

<Leslie> OK. Now my method works. The yield answers are the same.

<Bob> Yes.

<Leslie> Why is that?

<Bob> The story goes back to 1948 when Claude Shannon described “Information Theory”.  When you create a ratio you start with two numbers and end up with only one, which implies that information is lost in the conversion.  Two numbers can only give one ratio, but the same ratio can be created by infinitely many pairs of numbers.  The relationship is asymmetric. It is not an equality. And it has nothing to do with the precision of the data. When we throw data away we create ambiguity.
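Bob’s spreadsheet exercise can be reproduced in a few lines. The mean of the daily ratios only equals the ratio of the totals when the denominator is constant; the figures below are invented:

```python
# Mean of the daily yields vs yield of the totals: they differ when the
# daily activity varies, because forming each ratio discards information.
successes = [30, 60, 45, 20, 70]   # daily successes (invented figures)
activity  = [40, 90, 50, 25, 100]  # daily activity varies day to day

mean_of_ratios = sum(s / a for s, a in zip(successes, activity)) / len(activity)
ratio_of_totals = sum(successes) / sum(activity)
print(round(mean_of_ratios, 3), round(ratio_of_totals, 3))  # not equal

# With a constant daily activity the two calculations agree:
constant = [60] * 5
same = sum(s / c for s, c in zip(successes, constant)) / 5
assert abs(same - sum(successes) / sum(constant)) < 1e-12
```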

<Leslie> And in my data the activity by day does vary. There is a regular weekly cycle and some random noise. So the way I am calculating the average yield is incorrect, and the message I am sharing is distorted, so others can quite reasonably challenge the data, and because I was 100% confident I was correct I have been assuming that their resistance was just due to cussedness!

<Bob> There may be some cussedness too. It is sometimes difficult to separate skepticism and cynicism.

<Leslie> So what is the lesson here? There must be more to your example than just exposing a basic arithmetic error.

<Bob> The message is that when you feel resistance you must accept the possibility that you are making an error that you cannot see.  The person demonstrating resistance can feel the emotional pain of a rhetoric-reality mismatch but cannot explain the cause. You need to strive to see the problem through their eyes. It is OK to say “With respect, I do not see it that way because …”.

<Leslie> So feeling “resistance” signals an opportunity for learning?

<Bob> Yes. Always.

<Leslie> So the better response is to pull back and check assumptions, not to push forward and make the resistance greater – or, worse still, to break through the barrier of resistance, celebrate the victory, and then commit an inevitable and avoidable blunder – and then add insult to injury by blaming someone else, creating even more cynicism in the future.

<Bob> Yes. Well put.

<Leslie> Wow!  And that is why patience and persistence are necessary.  Not persistently pushing but persistently searching for the unconscious assumptions that underpin resistance; consistently using Reality as the arbiter; and having enough patience to let Reality tell its own story.

<Bob> Yes. And having the patience and persistence to keep learning from our confusion and to keep learning how to explain what we have discovered better and better.

<Leslie> Thanks Bob. Once again you have opened a new door for me.

<Bob> A door that was always there and yet hidden from view until it was illuminated with an example.

Middle-Aware

line_figure_phone_400_wht_9858

[Dring Dring]

<Bob> Hi Leslie, how are you today?

<Leslie> Really good thanks. We are making progress and it is really exciting to see tangible and measurable improvement in safety, delivery, quality and financial stability.

<Bob> That is good to hear. So what topic shall we explore today?

<Leslie> I would like to return to the topic of engagement.

<Bob> OK. I am sensing that you have a specific Niggle that you would like to share.

<Leslie> Yes.  Specifically it is engaging the Board.

<Bob> Ah ha. I wondered when we would get to that. Can you describe your Niggle?

<Leslie> Well, the feeling is fear and that follows from the risk of being identified as a trouble-maker which follows from exposing gaps in knowledge and understanding of seniors.

<Bob> Well put.  This is an expected hurdle that all Improvement Scientists have to learn to leap reliably. What is the barrier that you see?

<Leslie> That I do not know how to do it and I have seen a lot of people try and commit career-suicide – like moths to a flame.

<Bob> OK – so it is a real fear based on real evidence. What methods did the “toasted moths” try?

<Leslie> Some got angry and blasted off angry send-to-all emails.  They just succeeded in identifying themselves as “terrorists” and were dismissed – politically and actually. Others channeled  their passion more effectively by heroic acts that held the system together for a while – and they succeeded in burning themselves out. The end result was the same: toasted!

<Bob> So with your understanding of design principles what does that say?

<Leslie> That the design of their engagement process is wrong.

<Bob> Wrong?

<Leslie> I mean “not fit for purpose”.

<Bob> And the difference is?

<Leslie> “Wrong” is a subjective judgement, “not fit for purpose” is an objective assessment.

<Bob> Yes. We need to be careful with words. So what is the “purpose”?

<Leslie> An organisation that is capable of consistently delivering improvement on all dimensions, safety, delivery, quality and affordability.

<Bob> Which requires?

<Leslie> All the parts working in synergy to a common purpose.

<Bob> So what are the parts?

<Leslie> The departments.

<Bob> They are the stages that the streams cross – they are parts of system structure. I am thinking more broadly.

<Leslie> The workers, the managers and the executives?

<Bob> Yes.  And how is that usually perceived?

<Leslie> As a power hierarchy.

<Bob> And do physical systems have power hierarchies?

<Leslie> No … they have components with different and complementary roles.

<Bob> So does that help?

<Leslie> Yes! To achieve synergy each component has to know its complementary role and be competent to do it.

<Bob> And each must understand the roles of the others,  respect the difference, and develop trust in their competence.

<Leslie> And the concepts of understanding, respect and trust appear again.

<Bob> Indeed.  They are always there in one form or another.

<Leslie> So as learning and improvement is a challenge then engagement is respectful challenge …

<Bob> … uh huh …

<Leslie> … and each part is different so requires a different form of respectful challenge?

<Bob> Yes. And with three parts there are six relationships between them – so six different ways of one part respectfully challenging another. Six different designs that have the same purpose but a different context.

<Leslie> Ah ha!  And if we do not use the context-dependent-fit-for-purpose-respectful-challenge-design we do not achieve our purpose?

<Bob> Correct. The principles of design are generic.

<Leslie> So what are the six designs?

<Bob> Let us explore three of them. First the context of a manager respectfully challenging a worker to improve.

<Leslie> That would require some form of training. Either the manager trains the worker or employs someone else to.

<Bob> Yes – and when might a manager delegate training?

<Leslie> When they do not have time to or do not know how to.

<Bob> Yes. So how would the flaw in that design be avoided?

<Leslie> By the manager maintaining their own know-how by doing enough training themselves and delegating the rest.

<Bob> Yup. Well done. OK let us consider a manager respectfully challenging other managers to improve.

<Leslie> I see what you mean. That is a completely different dynamic. The closest I can think of is a coaching arrangement.

<Bob> Yes. Coaching is quite different from training. It is more of a two-way relationship and I prefer to refer to it as “informal co-coaching” because both respectfully challenge each other in different ways; both share knowledge; and both learn and develop.

<Leslie> And that is what you are doing now?

<Bob> Yes. The only difference is that we have agreed a formal coaching contract. So what about a worker respectfully challenging a manager or a manager respectfully challenging an executive?

<Leslie> That is a very different dynamic. It is not training and it is not coaching.

<Bob> What other options are there?

<Leslie> Not formal coaching! An executive is not going to ask a middle manager to coach them!

<Bob> You are right on both counts – so what is the essence of informal coaching?

<Leslie> An informal coach provides a different perspective and will say what they see if asked and will ask questions that help to illustrate alternative perspectives and offer evidence of alternative options. This is just well-structured, judgement-free feedback.

<Bob> Yes. We do it all the time. And we are often “coached” by those much younger than ourselves who have a more modern perspective. Our children for instance.

<Leslie> So the judgement free feedback metaphor is the one that a manager can use to engage an executive.

<Bob> Yes. And look at it from the perspective of the executive – they want feedback that can help them make wiser strategic decisions. That is their role. Boards are always asking for customer feedback, staff feedback and performance feedback. They want to know the Nuggets, the Niggles, the Nice Ifs and the NoNos. They just do not ask for it like that.

<Leslie> So they are no different from the rest of us?

<Bob> Not in respect of an insatiable appetite for unfiltered and undistorted feedback. What is different is their role. They are responsible for the strategic decisions – the ones that affect us all – so we can help ourselves by helping them make those decisions. A well-designed feedback model is fit-for-that-purpose.

<Leslie> And an Improvement Scientist needs to be able to do all three – training, coaching and communicating in a collaborative informal style. Is that leadership?

<Bob> I call it “middle-aware”.

<Leslie> It makes complete sense to me. There is a lot of new stuff here and I will need to reflect on it. Thank you once again for showing me a different perspective on the problem.

<Bob> I enjoyed it too – talking it through helps me to learn to explain it better – and I look forward to hearing the conclusions from your reflections because I know I will learn from that too.

Invisible Design

Improvement Science is all about making some-thing better in some-way by some-means.

There are lots of things that might be improved – almost everything in fact.

There are lots of ways that those things might be improved. If it was a process we might improve safety, quality, delivery, and productivity. If it was a product we might improve reliability, usability, durability and affordability.

There are lots of means by which those desirable improvements might be achieved – lots of different designs.

Multiply that lot together and you get a very big number of options – so it is no wonder we get stuck in the “what to do first?” decision process.
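The arithmetic behind that "very big number" is simple multiplication. A toy illustration (all the counts here are invented purely for the sake of the example):

```python
# Illustrative counts only - the real numbers are far larger.
things = 100   # things we might improve
ways = 4       # ways each might be improved (e.g. safety, quality, delivery, productivity)
means = 50     # candidate designs for achieving each improvement

options = things * ways * means
print(options)  # 20000 distinct "what to do first?" options
```

Even with these deliberately modest counts there are twenty thousand options to choose between – far too many to evaluate one by one.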

So how do we approach this problem currently?

We use our intuition.

Intuition steers us to the obvious – hence the phrase “intuitively obvious” – which means what looks to our mind's eye to be a good option. And that is OK. It is usually a lot better than guessing (but not always).

However, the problem with using “intuitively obvious” is that we end up with mediocrity. We get “about average”. We get “OKish”. We get “satisfactory”. We get “what we expected”. We get “same as always”. We do not get “significantly better-than-average”. We do not get “reliably good”. We do not get improvement. And we do not improve because anyone and everyone can do the “intuitively obvious” stuff.

To improve we need a better-than-average functional design. We need a Reliably Good Design. And that is invisible.

By “invisible” I mean not immediately obvious to our conscious awareness.  We do not notice good functional design because it does not get in the way of achieving our intention.  It does not trip us up.

We notice poor functional design because it trips us up. It traps us into making mistakes. It wastes our time. It fails to meet our expectation. And we are left feeling disappointed, irritated, and anxious. We feel Niggled.

We also notice exceptional design – because it works far better than we expected. We are surprised and we are delighted.

We do not notice Good Design because it just works. But there is a trap here. And that is we habitually link expectation to price.  We get what we paid for.  Higher cost => Better design => Higher expectation.

So we take good enough design for granted. And when we take stuff for granted we are on the slippery slope to losing it. As soon as something becomes invisible it is at risk of being discounted and deleted.

If we combine these two aspects of “invisible design” we arrive at an interesting conclusion.

To get from Poor Design to OK Design and then Good Design we have to think “counter-intuitively”.  We have to think “outside the box”. We have to “think laterally”.

And that is not a natural way for us to think. Not for individuals and not for teams. To get improvement we need to learn a method of how to counter our habit of thinking intuitively, and we need to practice the method so that we can do it when we need to improve.

To illustrate what I mean let us consider a real example.

Suppose we have 26 cards laid out in a row on a table; each card has a number on it; and our task is to sort the cards into ascending order. The constraint is that we can only move cards by swapping them.  How do we go about doing it?

There are many sorting designs that could achieve the intended purpose – so how do we choose one?

One criterion might be the time it takes to achieve the result. The quicker the better.

One criterion might be the difficulty of the method we use to achieve the result. The easier the better.

When individuals are given this task they usually do something like “scan the cards for the smallest and swap it with the first from the left, then repeat for the second from the left, and so on until we have sorted all the cards“.

This card-sorting-design is fit for purpose.  It is intuitively obvious, it is easy to explain, it is easy to teach and it is easy to do. But is it the quickest?

The answer is NO. Not by a long chalk.  For 26 randomly mixed up cards it will take about 3 minutes if we scan at a rate of 2 per second. If we have 52 cards it will take us about 12 minutes. Four times as long. Using this intuitively obvious design the time taken grows with the square of the number of cards that need sorting.
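The "scan for the smallest and swap" design described above is what programmers call a selection sort. A minimal Python sketch (the 2-scans-per-second rate and the card counts are taken from the text) makes the quadratic growth visible:

```python
import random

def selection_sort(cards):
    """The 'intuitively obvious' design: scan for the smallest
    remaining card, swap it into place, then repeat."""
    cards = list(cards)
    scans = 0
    for i in range(len(cards)):
        smallest = i
        for j in range(i + 1, len(cards)):
            scans += 1                      # one comparison per scan step
            if cards[j] < cards[smallest]:
                smallest = j
        cards[i], cards[smallest] = cards[smallest], cards[i]  # one swap
    return cards, scans

for n in (26, 52):
    deck = random.sample(range(1000), n)
    _, scans = selection_sort(deck)
    minutes = scans / 2 / 60                # at 2 scans per second
    print(f"{n} cards: {scans} scans, about {minutes:.1f} minutes")
```

For 26 cards this design always makes 325 scans (about 2.7 minutes at 2 scans per second) and for 52 cards 1,326 scans (about 11 minutes) – doubling the cards roughly quadruples the time, exactly the square-law growth described above.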

In reality there are much quicker designs and for this type of task one of the quickest is called Quicksort. It is not intuitively obvious though, it is not easy to describe, but it is easy to do – we just follow the Quicksort Policy.  (For those who are curious you can read about the method here and make up your own mind about how “intuitively obvious” it is.  Quicksort was not invented until 1960 so given that sorting stuff is not a new requirement, it clearly was not obvious for a few thousand years).

Using Quicksort to sort our 52 cards would take less than 3 minutes! That is a four-fold improvement in productivity when we flip from an intuitive to a counter-intuitive design. And Quicksort was not a chance discovery – it was deliberately designed to address a specific sorting problem – and it was designed using robust design principles.
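For comparison, here is a minimal recursive Python sketch of Quicksort with the same scan counter. (This is the simple textbook form rather than Hoare's original in-place version, and it is intended only to illustrate the comparison counting.)

```python
def quicksort(cards):
    """Pick a pivot card, partition the rest into a 'lower' pile and
    an 'upper' pile, then sort each pile the same way."""
    scans = 0
    def sort(xs):
        nonlocal scans
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        scans += len(rest)                      # one scan per card vs the pivot
        lower = [x for x in rest if x < pivot]
        upper = [x for x in rest if x >= pivot]
        return sort(lower) + [pivot] + sort(upper)
    return sort(list(cards)), scans
```

On a well-shuffled 52-card deck this needs a few hundred scans on average – growth roughly proportional to n log n rather than n squared – instead of the 1,326 scans of the intuitive design, which at 2 scans per second is comfortably under 3 minutes.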

So our natural intuition tends to lead us to solutions that are “effective, easy and inefficient” – and that means expensive in terms of use of resources.

This has an important conclusion – if we are all given the same improvement assignment and we all use our intuition to solve it then we will get similar and mediocre results. It will feel OK and it will appear obvious but there will be no improvement.

We then conclude “OK, this is the best we can expect” – which is intuitively obvious, logically invalid, and wrong. It is that sort of intuitive thinking trap that blocked us from inventing Quicksort for thousands of years.

And remember, to decide what is “best” we have to explore all options exhaustively – both intuitively obvious and counter-intuitively obscure. That is impossible in practice. This is why “best” and “optimum” are generally unhelpful concepts in the context of improvement science.
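To get a feel for why exhaustive exploration is impossible, consider just the possible orderings of the 26 cards – and the space of candidate sorting procedures is vastly larger still. A quick back-of-envelope check (the billion-checks-per-second rate is an invented, generous assumption):

```python
import math

orderings = math.factorial(26)       # 26! possible arrangements of 26 cards
print(orderings)                     # 403291461126605635584000000 (about 4e26)

# Even checking a billion arrangements every second...
seconds = orderings / 1e9
years = seconds / (60 * 60 * 24 * 365)
print(f"about {years:.0e} years")    # on the order of ten billion years
```

That is longer than the current age of the universe – for a mere 26 cards.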

So how do we improve when good design is so counter-intuitive?

The answer is that we learn a set of “good designs” from a teacher who knows and understands them, and then we prove them to ourselves in practice. We leverage the “obvious in retrospect” effect. We practice until we understand. And then we teach others.

So if we wanted to improve the productivity of our designed-by-intuition card sorting process we could:
(a) consult a known list of proven sorting algorithms,
(b) choose one that meets our purpose (our design specification),
(c) compare the measured performance of our current “intuitively obvious” design with the predicted performance of that “counter-intuitively obscure” design,
(d) set about planning how to implement the higher performance design – possibly as a pilot first to confirm the prediction, reassure the fence-sitters, satisfy the skeptics, and silence the cynics.

So if these proven good designs are counter-intuitive then how do we get them?

The simplest and quickest way is to learn from people who already know and understand them. If we adopt the “not invented by us” attitude and attempt to re-invent the wheel then we may get lucky and re-discover a well-known design, we might even discover a novel design; but we are much more likely to waste a lot of time and end up no better off, or worse. This is called “meddling” and is driven by a combination of ignorance and arrogance.

So who are these people who know and understand good design?

They are called Improvement Scientists – and they have learned one-way-or-another what a good design looks like. That also means they can see poor design where others see the only possible design.

That difference of perception creates a lot of tension.

The challenge that Improvement Scientists face is explaining how counter-intuitive good design works: especially to highly intelligent, skeptical people who habitually think intuitively. They are called Academics.  And it is a pointless exercise trying to convince them using rhetoric.

Instead, the Improvement Scientist side-steps the “theoretical discussion” and the “cynical discounting” by pragmatically demonstrating the measured effect of good design in practice. They use reality to make the case for good design – not rhetoric.

Improvement Scientists are Pragmatists.

And because they have learned how counter-intuitive good design is to the novice – how invisible it is to their intuition – then they are also Voracious Learners. They have enough humility to see themselves as Eternal Novices and enough confidence to be selective students.  They will actively seek learning from those who can demonstrate the “what” and explain the “how”.  They know and understand it is a much quicker and easier way to improve their knowledge and understanding.  It is Good Design.


“When the Student is ready …”

Improvement Science is not a new idea.  The principles are enduring and can be traced back as far as recorded memory – for Millennia. This means that there is a deep well of ancient wisdom that we can draw from.  Much of this wisdom is condensed into short sayings which capture a fundamental principle or essence.

One such saying is attributed to Zen Buddhism and goes “When the Student is ready the Teacher will appear.”

This captures the essence of a paradigm shift – a term made popular by Thomas S Kuhn in his seminal 1962 book – The Structure of Scientific Revolutions.  It was written just over 50 years ago.

System-wide change takes time and the first stage is the gradual build up of dissatisfaction with the current paradigm.  The usual reaction from the Guardians of the Status Quo is to silence the first voices of dissent, often brutally. As the pressure grows there are too many voices to silence individually so more repressive Policies and Policing are introduced. This works for a while but does not dissolve the drivers of dissatisfaction. The pressure builds and the cracks start to appear.  This is a dangerous phase.

There are three ways out: repression, revolution, and evolution.  The last one is the preferred option – and it requires effective leadership to achieve.  Effective leaders are both Teachers and Students. Knowledge and understanding flow through them as they acquire Wisdom.

The first essence of the message is that the solutions to the problems are already known – but the reason they are not widely known and used is our natural affection for the familiar and our distrust of the unfamiliar.  If we are comfortable then why change?

It is only when we are uncomfortable enough that we will start to look for ways to regain comfort – physical and psychological.

The second essence of the message is that to change we need to learn something and that means we have to become Students, and to seek the guidance of a Teacher. Someone who understands the problems, their root causes, the solutions, the benefits and most importantly – how to disseminate that knowledge and understanding.  A Teacher that can show us how not just tell us what.

The third essence of the message is that the Students become Teachers themselves as they put into practice what they have learned and prove to themselves that it works, and it is workable.  The new understanding flows along the Optimism-Skepticism gradient until the Tipping Point is reached.  It is then unstoppable and the Paradigm flips. Often remarkably quickly.

The risk is that change means opportunity and there are many who can sniff out an opportunity to cash in on the change chaos. They are the purveyors of Snakeoil – and they prey on the dissatisfied and desperate.

So how does a Student know a True-Teacher from a Snakeoil Salesperson?

Simple – the genuine Teacher will be able to show a portfolio of successes and delighted ex-students; will be able to explain and demonstrate how they were both achieved; will be willing to share their knowledge; and will respectfully decline to teach someone who they feel is not yet ready to learn.

The Green Shoots of Improvement

Improvement is a form of innovation and it obeys the same Laws of Innovation.

One of these Laws describes how innovation diffuses and it is called Rogers’ Law.

The principle is that innovations diffuse according to two opposing forces – the Force of Optimism and the Force of Skepticism.  As individuals we differ in our balance of these two preferences.

When we are in status quo the two forces are exactly balanced.

As the Force of Optimism builds (usually from increasing dissatisfaction with the status quo driving Necessity-the-Mother-of-Invention) then the Force of Skepticism tends to build too. It feels like being in a vice that is slowly closing. The emotional stress builds, the strain starts to show and the cracks begin to appear.  Sometimes the Optimism jaw of the vice shatters first, sometimes the Skepticism jaw does – either way the pent-up-tension is relieved. At least for a while.

The way to avoid the Vice is to align the forces of Optimism and Skepticism so that they both pull towards the common goal, the common purpose, the common vision. And there always is one. People want a win-win-win outcome; they vary only in daring to dream that it is possible. It is.

The importance of pull is critical. When we have push forces and a common goal we do get movement – but there is a danger – because things can veer out of control quickly.  Pull is much easier to steer and control than push.  We all know this from our experience of the real world.

And when the status quo starts to move in the direction of the common vision we are seeing tangible evidence of the Green Shoots of Improvement breaking through the surface into our conscious awareness. Small signs first, tender green shoots, often invisible among the overgrowth, dead wood and weeds.

Sometimes the improvement is a reduction of the stuff we do not want – and that can be really difficult to detect if it is gradual because we adapt quickly and do not notice diffuse, slow changes.

We can detect the change by recording how it feels now then reviewing our records later (very few of us do that – very few of us keep a personal reflective journal). We can also detect change by comparing ourselves with others – but that is a minefield of hidden traps and is much less reliable (but we do that all the time!).

Improvement scientists prepare the Soil-of-Change, sow the Seeds of Innovation, and wait for the Spring to arrive.  As the soil thaws (the burning platform of a crisis may provide some energy for this) some of the Seeds will germinate and start to grow.  They root themselves in past reality and they shoot for the future rhetoric.  But they have a finite fuel store for growth – they need to get to the surface and to sunlight before their stored energy runs out. The preparation, planting and timing are all critical.

And when the Green Shoots of Improvement appear the Improvement Scientist switches role from Germinator to Grower – providing the seedlings with emotional sunshine in the form of positive feedback, encouragement, essential training, and guidance.  The Grower also has to provide protection from toxic threats that can easily kill a tender improvement seedling – the sources of Cynicide that are always present. The disrespectful sneers of “That will never last!” and “You are wasting your time – nothing good lasts long around here!”

The Improvement Scientist must facilitate harnessing the other parts of the system so that they all pull in the direction of the common vision – at least to some degree. And the other parts add up to about 85% of the system, so collectively they have enough muscle to create movement in the direction of the shared vision – if they are aligned.

And each other part has a different, significant and essential role.

The Disruptive Innovators provide the new ideas – they are always a challenge because they are always questioning “Why do we do it that way?” “What if we did it differently?” “How could we change?” We do not want too many disruptive innovators because they are – disruptive. Frustrated disruptive innovators can easily flip to being Cynics – so it is wise not to ignore them.

The Early Adopters provide the filter – they test the new ideas; they reject the ones that do not work; and they shape the ones that do. They provide the robust evidence of possibility. We need more Adopters than Innovators because lots of the ideas do not germinate. Duff seed or hostile soil – it does not matter which.  We want Green Shoots of Improvement.

The Majority provide the route to sharing the Adopter-Endorsed ideas, the Green Shoots of Improvement. They will sit on the fence, consider the options, comment, gossip, listen, ponder and eventually they will commit and change. The Early Majority earlier and the Late Majority later. The Late Majority are also known as the Skeptics. They are willing to be convinced but they need the most evidence. They are the most risk-averse and for that reason they are really useful – because they can help guide the Shoots of Improvement around the Traps. They will help if asked and given a clear role – “Tell us if you see gaps and risks and tell us why so that we can avoid them at the design and development stage”. And you can tell if they are a True Skeptic or a Cynic in Skeptic's clothing – because the Cynics will decline to help, saying that they are too busy.

The last group, the Cynics, are a threat to significant and sustained improvement. And they can be managed using one or more of these four tactics:

1. Ignore them. This has the advantage of not wasting time but it tends to enrage them and they get noisier and more toxic.
2. Isolate them. This is done by establishing peer group ground rules that are based on Respectful Challenge.
3. Remove them. This needs senior intervention and a cast-iron case with ample evidence of bad behaviour. Last resort.
4. Engage them. This is the best option if it can be achieved – invite the Cynics to be Skeptics. The choice is theirs.

It is surprising how much improvement follows from just blocking some of the sources of Cynicide!

So the take home message is a positive one:

  • Look for the Green Shoots of Improvement,
  • Celebrate every one you find,
  • Nurture and Protect them

and they will grow bigger and stronger and one day will flower, fruit and create their own Seeds of Innovation.

Do Not Give Up Too Soon

Tangible improvement takes time. Sometimes it takes a long time.

The more fundamental the improvement the more people are affected. The more people involved the greater the psychological inertia. The greater the resistance the longer it takes to show tangible effects.

The advantage of deep-level improvement is that the cumulative benefit is greater – the risk is that the impatient Improvementologist may give up too early – sometimes just before the benefit becomes obvious to all.

The seeds of change need time to germinate and to grow – and not all good ideas will germinate. The green shoots of innovation do not emerge immediately – there is often a long lag and little tangible evidence for a long time.

This inevitable  delay is a source of frustration, and the impatient innovator can unwittingly undo their good work.  By pushing too hard they can drag a failure from the jaws of success.

Q: So how do we avoid this trap?

The trick is to understand the effect of the change on the system.  This means knowing where it falls on our Influence Map that is marked with the Circles of Control, Influence and Concern.

Our Circle of Concern includes all those things that we are aware of that present a threat to our future survival – such as a chunk of high-velocity space rock smashing into the Earth and wiping us all out in a matter of milliseconds. Gulp! Very unlikely but not impossible.

Some concerns are less dramatic – such as global warming – and collectively we may have more influence over changing that. But not individually.

Our Circle of Influence lies between the limit of our individual control and the limit of our collective control. This is a broad scope because “collective” can mean two, twenty, two hundred, two thousand, two million, two billion and so on.

Making significant improvements is usually a Circle of Influence challenge and only collectively can we make a difference.  But to deliver improvement at this level we have to influence others to change their knowledge, understanding, attitudes, beliefs and behaviour. That is not easy and that is not quick. It is possible though – with passion, plausibility, persistence, patience – and an effective process.

It is here that we can become impatient and frustrated and are at risk of giving up too soon – and our temperaments influence the risk. Idealists are impatient for fundamental change. Rationals, Guardians and Artisans do not feel the same pain – and it is a rich source of conflict.

So if we need to see tangible results quickly then we have to focus closer to home. We have to work inside our Circle of Individual Influence and inside our Circle of Control. The scope of individual influence varies from person-to-person but our Circle of Control is the same for all of us: the outer limit is our skin. We all choose our behaviour and it is that which influences others: for better or for worse. It is not what we think, it is what we do. We cannot read or control each other's minds. We can all choose our attitudes and our actions.

So if we want to see tangible improvement quickly then we must limit the scope of our action to our Circle of Individual Influence and get started.  We do what we can and as soon as we can.

Choosing what to do and what not to do requires wisdom. That takes time to develop too.


Making an impact outside the limit of our Circle of Individual Influence is more difficult because it requires influencing many other people.

So it is especially rewarding to see examples of how individual passion, persistence and patience have led to profound collective improvement. It proves that it is still possible. It provides inspiration and encouragement for others.

One example is the recently published Health Foundation Quality, Cost and Flow Report.

This was a three-year experiment to test if the theory, techniques and tools of Improvement Science work in healthcare: specifically in two large UK acute hospitals – Sheffield and Warwick.

The results showed that Improvement Science does indeed work in healthcare and it worked for tough problems that were believed to be very difficult if not impossible to solve. That is very good news for everyone – patients and practitioners.

But the results have taken some time to appear in published form – so it is really good news to report that the green shoots of improvement are now there for all to see.

The case studies provide hard evidence that win-win-win outcomes are possible and achievable in the NHS.

The Impossibility Hypothesis has been disproved. The cynics can step off the bus. The skeptics have their evidence and can now become adopters.

And the report offers a lot of detail on how to do it including two references that are available here:

  1. A Recipe for Improvement PIE
  2. A Study of Productivity Improvement Tactics using a Two-Stream Production System Model

These references both describe the fundamentals of how to align financial improvement with quality and delivery improvement to achieve the elusive win-win-win outcome.

A previously invisible door has opened to reveal a new Land of Opportunity. A land inhabited by Improvementologists who mark the path to learning and applying this new knowledge and understanding.

There are many who do not know what to do to solve the current crisis in healthcare – they now have a new vista to explore.

Do not give up too soon –  there is a light at the end of the dark tunnel.

And to get there safely and quickly we just need to learn and apply the Foundations of Improvement Science in Healthcare – and we learn to FISH in our own ponds first.


The Seventh Flow

Bing Bong

Bob looked up from the report he was reading and saw the SMS was from Leslie, one of his Improvement Science Practitioners.

It said “Hi Bob, would you be able to offer me your perspective on another barrier to improvement that I have come up against.”

Bob thumbed a reply immediately “Hi Leslie. Happy to help. Free now if you would like to call. Bob”

Ring Ring

<Bob> Hello, Bob here.

<Leslie> Hi Bob. Thank you for responding so quickly. Can I describe the problem?

<Bob> Hi Leslie – Yes, please do.

<Leslie> OK. The essence of it is that I have discovered that our current method of cash-flow control is preventing improvements in safety, quality, delivery and paradoxically in productivity too. I have tried to talk to the Finance department and all I get back is “We have always done it this way. That is what we are taught. It works. The rules are not negotiable and the problem is not Finance”. I am at a loss what to do.

<Bob> OK. Do not worry. This is a common issue that every ISP discovers at some point. What led you to your conclusion that the current methods are creating a barrier to change?

<Leslie> Well, the penny dropped when I started using the modelling tools you have shown me.  In particular when predicting the impact of process improvement-by-design changes on the financial performance of the system.

<Bob> OK. Can you be more specific?

<Leslie> Yes. The project was to design a new ambulatory diagnostic facility that will allow much more of the complex diagnostic work to be done on an outpatient basis.  I followed the 6M Design approach and looked first at the physical space design. We needed that to brief the architect.

<Bob> OK. What did that show?

<Leslie> It showed that the physical layout had a very significant impact on the flow in the process and that by getting all the pieces arranged in the right order we could create a physical design that felt spacious without actually requiring a lot of space. We called it the “Tardis Effect”. The most marked impact was on the size of the waiting areas – they were really small compared with what we have now, which are much bigger and yet still feel cramped and chaotic.

<Bob> OK. So how does that physical space design link to the finance question?

<Leslie> Well, the obvious links were that the new design would have a smaller physical foot-print and at the same time give a higher throughput. It will cost less to build and will generate more activity than if we just copied the old design into a shiny new building.

<Bob> OK. I am sure that the Capital Allocation Committee and the Revenue Generation Committee will have been pleased with that outcome. What was the barrier?

<Leslie> Yes, you are correct. They were delighted because it left more in the Capital Pot for other equally worthy projects. The problem was not capital it was revenue.

<Bob> You said that activity was predicted to increase. What was the problem?

<Leslie> Yes – sorry, I was not clear – it was not the increased activity that was the problem – it was how to price the activity and how to distribute the revenue generated. The Reference Cost Committee and the Budget Allocation Committee were the problem.

<Bob> OK. What was the problem?

<Leslie> Well the estimates for the new operational budgets were basically the current budgets multiplied by the ratio of the future planned and historical actual activity. The rationale was that the major costs are people and consumables so the running costs should scale linearly with activity. They said the price should stay as it is now because the quality of the output is the same.

<Bob> OK. That does sound like a reasonable perspective. The variable costs will track with the activity if nothing else changes. Was it apportioning the overhead costs as part of the Reference Costing that was the problem?

<Leslie> No actually. We have not had that conversation yet. The problem was more fundamental. The problem is that the current budgets are wrong.

<Bob> Ah! That statement might come across as a bit of a challenge to the Finance Department. What was their reaction?

<Leslie> To paraphrase, it was “We are just breaking even in the current financial year so the current budget must be correct. Please do not dabble in things that you clearly do not understand.”

<Bob> OK. You can see their point. How did you reply?

<Leslie> I tried to explain the concepts of the Cost-Of-The-Queue and how that cost was incurred by one part of the system with one budget but that the queue was created by a different part of the system with a different budget. I tried to explain that just because the budgets were 100% utilised does not mean that the budgets were optimal.

<Bob> How was that explanation received?

<Leslie> They did not seem to understand what I was getting at and kept saying “Inventory is an asset on the balance sheet. If profit is zero we must have planned our budgets perfectly. We cannot shift money between budgets within year if the budgets are already perfect. Any variation will average out. We have to stick to the financial plan and projections for the year. It works. The problem is not Finance – the problem is you.”

<Bob> OK. Have you described the Seventh Flow and put it in context?

<Leslie> Arrrgh! No! Of course! That is how I should have approached it. Budgets are Cash-Inventories and what we need is Cash-Flow to where and when it is needed and in just the right amount according to the Principle of Parsimonious Pull. Thank you. I knew you would ask the crunch question. That has given me a fresh perspective on it. I will have another go.

<Bob> Let me know how you get on. I am curious to hear the next instalment of the story.

<Leslie> Will do. Bye for now.

Drrrrrrrr

Creating a productive and stable system design requires considering Seven Flows at the same time. The Seventh Flow is cash flow.

Cash is like energy – it is only doing useful work when it is flowing.

Energy is often described as two forms – potential energy and kinetic energy. The ‘doing’ happens when one form is being converted into the other, from potential to kinetic. Cash in the budget is like potential energy – sitting there ready to do some business. Cash flow is like kinetic energy – it is the business.

The most versatile form of energy that we use is electrical energy. It is versatile because it can easily be converted into other forms – e.g. heat, light and movement. Since the late 1800’s our whole society has become highly dependent on electrical energy.  But electrical energy is tricky to store and even now our battery technology is pretty feeble. So, if we want to store energy we use a different form – chemical energy.  Gas, oil and coal – the fossil fuels – are all ancient stores of chemical energy that were originally derived from sunlight captured by vast carboniferous forests over millions of years. These carbon-rich fossil fuels are convenient to store near where they are needed, and when they are needed. But fossil fuels have a number of drawbacks: One is that they release their stored carbon when they are “burned”.  Another is that they are not renewable.  So, in the future we will need to develop better ways to capture, transport, use and store the energy from the Sun that will flow in glorious abundance for millions of years to come.

Plants discovered millions of years ago how to do this sunlight-to-chemical energy conversion and that biological legacy is built into every cell in every plant on the planet. Animals just do the reverse trick – they convert chemical-to-electrical. Every cell in every animal on the planet is a microscopic electrical generator that “burns” chemical fuel – carbohydrate. The other products are carbon dioxide and water. Plants use sunlight to recycle and store the carbon dioxide. It is a resilient and sustainable design.

Plants seemingly have it easy – the sunlight comes to them – they just sunbathe all day! The animals have to work a bit harder – they have to move about gathering their chemical fuel. Some animals just feed on plants, others feed on other animals, and we do a bit of both. This food-gathering is a more complicated affair – and it creates a problem. Animals need a constant supply of energy – so they have to carry a store of chemical fuel around with them. That store is heavy so it needs energy to move it about. Herbivores can be bigger and less intelligent because their food does not run away. Carnivores need to be more agile, both physically and mentally. A balance is required: a big enough fuel store but not too big. So, some animals have evolved additional strategies. Animals have become very good at not wasting energy – because the more that is wasted the more food is needed and the greater the risk of getting eaten or getting too weak to catch the next meal.

To illustrate how amazing animals are at energy conservation we just need to look at an animal structure like the heart. The heart is there to pump blood around. Blood carries chemical nutrients and waste from one “department” of the body to another – just like ships, rail, roads and planes carry stuff around the world.

Blood is a sticky, viscous fluid that requires considerable energy to pump around the body and, because it is pumped continuously by the heart, even a small improvement in the energy efficiency of the circulation design has a big long-term cumulative effect. The flow of blood to any part of the body must match the requirements of that part. If the blood flow to your brain slows down for even a few seconds the brain cannot work properly and you lose consciousness – it is called “fainting”.

If the flow of blood to the brain is stopped for just a few minutes then the brain cells actually die. That is called a “stroke”. Our brains use a lot of electrical energy to do their job and our brain cells do not have big stores of fuel – so they need constant re-supply. And our brains are electrically active all the time – even when we are sleeping.

Other parts of the body are similar. Muscles, for instance. The difference is that the supply of blood that muscles need is very variable – it is low when resting and goes up with exercise. It has been estimated that the change in blood flow for a muscle can be 30-fold! That variation creates a design problem for the body because we need to maintain the blood flow to the brain at all times, but we only want blood flowing to the muscles in just the amount that they need, where they need it and when they need it. And we want to minimise the energy required to pump the blood at all times. How then is the total and differential allocation of blood flow decided and controlled? It is certainly not a conscious process.

The answer is that the brain and the muscles control their own flow. It is called autoregulation.  They open the tap when needed and just as importantly they close the tap when not needed. It is called the Principle of Parsimonious Pull. The brain directs which muscles are active but it does not direct the blood supply that they need. They are left to do that themselves.

So, if we equate blood-flow and energy-flow to cash-flow then we arrive at a surprising conclusion. The optimal design, the most energy and cash efficient, is where the separate parts of the system continuously determine the energy/cash flow required for them to operate effectively. They control the supply. They autoregulate their cash-flow. They pull only what they need when they need it.

BUT

For this to work then every part of the system needs to have a collaborative and parsimonious pull-design philosophy – one that wastes as little energy and cash as possible.  Minimum waste of energy requires careful design – it is called ergonomic design. Minimum waste of cash requires careful design – it is called economic design.

Many socioeconomic systems are fragmented and have parts that behave in a “greedy” manner and compete with each other for resources. It is a dog-eat-dog design. They use whatever resources they can get for fear of being starved. Greed is Good. Collaboration is Weak. In such a competitive situation a rigid-budget design is a requirement because it helps prevent one part selfishly and blindly destabilising the whole system for all. The problem is that this rigid financial design blocks change, so it blocks improvement.

This means that greedy, competitive, selfish systems are unable to self-improve.

So, when the world changes too much and their survival depends on change then they risk becoming extinct just as the dinosaurs did.

Many will challenge this assertion by saying “But competition drives up performance“.  Actually, it is not as simple as that. Competition will weed out the weakest who “die” and remove themselves from the equation – apparently increasing the average. What actually drives improvement is customer choice. Organisations that are able to self-improve will create higher-quality and lower-cost products and in a globally-connected-economy the customers will vote with their wallets. The greedy and selfish competition lags behind.

So, to ensure survival in a global economy the Seventh Flow cannot be rigidly restricted by annually allocated departmental budgets. It is a dinosaur design.

And there is no difference between public and private organisations. The laws of cash-flow physics are universal.

How then is the cash flow controlled?

The “trick” is to design a monitoring and feedback component into the system design. This is called the Sixth Flow – and it must be designed so that just the right amount of cash is pulled to just the right places, at just the right time, and for just as long as needed to maximise the revenue. The rest of the design – First Flow to Fifth Flow – ensures that the total amount of cash needed is a minimum. All Seven Flows are needed.

So the essential ingredient for financial stability and survival is Sixth and Seventh Flow Design capability. That skill has another name – it is called Value Stream Accounting which is a component of complex adaptive systems engineering (CASE).

What? Never heard of Value Stream Accounting?

Maybe that is just another Error of Omission?

What Can I Do To Help?

The growing debate about the safety of our health care systems is gaining momentum.

This is not just a UK phenomenon.

The same question was being asked 10 years ago across the pond by many people – perhaps the most familiar name is Don Berwick.

The term Improvement Science has been buzzing around for a long time. This is a global – not just a local challenge.

Seeing the shameful reality in black-and-white [the Francis Report] is a nasty shock to everyone. There are no winners here. Our blissful ignorance is gone. Painful awareness has arrived.

The usual emotional reaction to being shoved from blissful ignorance into painful awareness is characteristic;  and it does not matter if it is discovering horse in your beef pie or hearing of 1200 avoidable deaths in a UK hospital.

Our emotional reaction is a predictable sequence that goes something like:

Shock => Denial => Anger => Bargaining => Depression => Acceptance => Resolution

It is the psychological healing process that is called the grief reaction and it is a normal part of the human psyche. We all do it. And we do it both individually and collectively. I remember well the global grief reactions that followed the sudden explosion of Challenger; the sudden death of Princess Diana; and the sudden collapse of the Twin Towers.

Fortunately such avoidable tragedies are uncommon.

The same chain-reaction happens to a lesser degree in any sudden change. We grieve the loss of our old way of thinking – we mourn the passing away our comfortable rhetoric that has been rudely and suddenly disproved by harsh reality. This is the Nerve Curve.  And learning to ride it safely is a critical-to-survival life skill.  Especially in turbulent times.

The UK population has suffered two psychological shocks in recent weeks – the discovery of horse in the beef pie and the fuller public disclosure of the story behind the thousands of avoidable deaths in one of our Trust hospitals. Both are now escalating and the finger of blame is pointing squarely at a common cause: the money-tail-wagging-the-safety-dog.

So what will happen next?  The Wall of Denial has been dynamited with hard evidence. We are now into the Collective Anger phase.

First there will be widespread righteous indignation and a strong desire to blame, to hunt down the evil ones, and to crucify the responsible and accountable. Partly as punishment, partly as a lesson to others, and partly to prevent them doing harm again.  Uncontrolled anger is dangerous especially when there is a lethal weapon to hand. The more controlled, action-oriented and future-focused will want to do something about it. Now! There will be rallies, and soap-boxes, and megaphones. The We-Told-You-So brigade will get shoved aside and trampled in the rush to do something – ANYTHING. Conferences will be hastily arranged and those most fearful for their reputations and jobs will cough up the cash and clear their diaries. They will be expected to be there. They will be. Desperately looking for answers. Anxiously seeking credible leaders. And the snake-oil salesmen will have a bonanza! The calmer, more reflective, phlegmatic, academic types will call for more money for more research so that we can fully analyse and fully understand the problem before we do anything.

And while the noisy bargaining for more cash keeps everyone busy the harm will continue to happen.

Eventually the message will sink in as the majority accept that there is no way to change the past; that we cannot cling to what is out-of-date thinking; and that all of our new-reality-avoiding tactics are fruitless. And we are forced to accept that there is no more cash. Now we are in danger of becoming helpless and hopeless, slipping into depression, and then into despair. We are at risk of giving up and letting ourselves wallow and drown in self-pity. This is a dangerous phase. Depression is understandable but it is avoidable because there is always something that can be done. We can always ask the elephant-in-the-room questions. Inside we usually know the answers.

We accept the new reality; we accept that we cannot change the past, we accept that we have some learning to do; we accept that we have to adjust; and we accept that all of us can do something.

Now we have reached the most important stage – resolution. This is the test of our resolve. Are we all-talk or can we convert talk-to-walk?

We can all ask ourselves one question: “What can I do to help?”

I have asked myself that question and my first answer was “As a system designer I can help by looking at this challenge as a design assignment and describing what I see”.

Design starts with the intended outcome, the vision, the goal, the objective, the specification, the target.

The design goal is: Significant reduction in avoidable harm in the NHS, quickly, and at no extra cost.

[Please note that a design goal is a “what we get” not a “what we do”. It is a purpose and not just a process.]

Now we can invite, gather, dream-up, brain-storm any number of design options and then we can consider logically and rationally how well they might meet our design goal.

What are some of the design options on the table?

Design Option 1. Create a cadre of hospital inspectors.

Nope – that will take time and money and inspection alone does not guarantee better outcomes. We have enough evidence of that.

Design Option 2. Get lots more PhDs funded, do high quality academic research, write papers, publish them and hope the evidence is put into practice.

Nope – that will take time and money too and publication alone does not guarantee adoption of the lessons and delivery of better outcomes. We have enough evidence of that too. What is proven to be efficacious in a research trial is not necessarily effective, practical or affordable  in reality.  

Design Option 3. Put together conferences and courses to teach/train a new generation of competent healthcare improvement practitioners.

Maybe – it has the potential to deliver the outcome but it too will take time and money. We have been doing conferences and courses for decades – they are not very cost-effective. The Internet may have changed things though. 

Design Option 4. All of the above plus broadcast via the Internet the current pragmatic know-how of the basics of safe system design to everyone in the NHS so that they know what is possible and they know how to get started.

Promising – it has the greatest potential to deliver the required outcome, a broadcast will cost nothing and it can start working immediately.

OK – Option 4 it is – here we go …

The Basics of How To Design a Safe System

Definition 1: Safe means free of risk of harm.

Definition 2: Harm is the result of hazards combining with risks.

There are two components to safe system design – the people stuff and the process stuff.

For example a busy main road is designed to facilitate the transport of stuff from A to B. It also represents a hazard – the potential for harm. If the vehicles bump into each other or other things then harm will result. So a lot of the design of the vehicles and the roads is about reducing the risk of bumps or mitigating the effects (e.g. seat-belts).

The risk is multi-factorial. If you drive at high speed, under the influence of recreational drugs, at night, on an icy road then the probability of having a bump is high.  If you step into a busy road without looking then the risk of getting bumped into is high too.

So the path to better safety is to eliminate as many hazards as possible and to reduce the risks as much as possible. And we have to do that without unintentionally creating more hazards, higher risks, excessive delays and higher costs.

So how is this done outside healthcare?

One tried-and-tested method for designing safer processes is called FMEA – Failure Modes and Effects Analysis.

Now that sounds really nerdy and it is.  It is an attention-to-detail exercise that will make your brain ache and your eyes bleed. But it works – so it is worthwhile learning the basic principles.
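To make that less abstract, here is a minimal sketch of the core FMEA arithmetic: each failure mode is scored for Severity, Occurrence and Detection, and the Risk Priority Number (RPN = S × O × D) is used to rank which hazards to tackle first. The failure modes and scores below are invented for illustration only, not taken from any real process.

```python
# Minimal FMEA sketch. Scores conventionally run 1-10 on each dimension.
# All failure modes and scores here are hypothetical examples.

from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int     # 1 (negligible) .. 10 (catastrophic)
    occurrence: int   # 1 (rare) .. 10 (frequent)
    detection: int    # 1 (almost certain to be detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: the product of the three scores
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("Wrong patient identity on sample", 9, 3, 4),
    FailureMode("Result filed without being seen", 8, 5, 7),
    FailureMode("Illegible handwritten prescription", 6, 6, 5),
]

# Rank by RPN, highest risk first - this is the worklist for redesign
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {m.rpn:4d}: {m.description}")
```

The detail in a real FMEA is in scoring honestly and in re-scoring after each design change, but the ranking logic is no more complicated than this.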

For the people part there is the whole body of Human Factors Research to access. This is also a bit nerdy for us hands-on oily-rag pragmatists so if you want something more practical immediately then have a go with The 4N Chart and the Niggle-o-Gram (which is a form of emotional FMEA). This short summary is also free to download, read, print, copy, share, discuss and use.

OK – I am off to design and build something else – an online course for teaching safety-by-design.

What are you going to do to help improve safety in the NHS?

The Writing on the Wall – Part II

The retrospectoscope is the favourite instrument of the forensic cynic – the expert in the after-the-event-and-I-told-you-so rhetoric. The rabble-rouser for the lynch-mob.

It feels better to retrospectively nail-to-a-cross the person who committed the Cardinal Error of Omission, and leave them there in emotional and financial pain as a visible lesson to everyone else.

This form of public feedback has been used for centuries.

It is called barbarism, and it has no place in a modern civilised society.


A more constructive question to ask is:

“Could the evolving Mid-Staffordshire crisis have been detected earlier … and avoided?”

And this question exposes a tricky problem: it is much more difficult to predict the future than to explain the past.  And if it could have been detected and avoided earlier, then how is that done?  And if the how-is-known then is everyone else in the NHS using this know-how to detect and avoid their own evolving Mid-Staffs crisis?

To illustrate how it is currently done let us use the actual Mid-Staffs data. It is conveniently available in Figure 1 embedded in Figure 5 on Page 360 in Appendix G of Volume 1 of the first Francis Report.  If you do not have it at your fingertips I have put a copy of it below.

[Figure: MS_RawData]

The message does not exactly leap off the page and smack us between the eyes does it? Even with the benefit of hindsight.  So what is the problem here?

The problem is one of ergonomics. Tables of numbers like this are very difficult for most people to interpret, so they create a risk that we ignore the data or that we just jump to the bottom line and miss the real message. And it is very easy to miss the message when we compare the results for the current period with the previous one – a very bad habit that is spread by accountants.

This was a slowly emerging crisis so we need a way of seeing it evolving and the better way to present this data is as a time-series chart.

As we are most interested in safety and outcomes, then we would reasonably look at the outcome we do not want – i.e. mortality.  I think we will all agree that it is an easy enough one to measure.

This is the raw mortality data from the table above, plotted as a time-series chart. The green line is the average and the red lines are a measure of variation-over-time. We can all see that the raw mortality is increasing and the red flags say that this is a statistically significant increase. Oh dear!
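For the technically curious: one common way a green centre line and red limit lines like these are computed is the XmR (individuals) chart method. I am assuming that convention here, and the numbers below are invented for illustration, not the actual Mid Staffs figures.

```python
# Sketch of XmR (individuals) chart limits: the centre line is the mean,
# and the limits are the mean plus/minus 2.66 times the average moving
# range (2.66 = 3 / d2, where d2 = 1.128 for a moving range of 2).

def xmr_limits(data):
    n = len(data)
    mean = sum(data) / n
    # average absolute difference between consecutive points
    mr = sum(abs(a - b) for a, b in zip(data[1:], data[:-1])) / (n - 1)
    return mean, mean - 2.66 * mr, mean + 2.66 * mr

deaths_per_year = [780, 800, 820, 850, 900, 950, 1010]  # illustrative only
mean, lo, hi = xmr_limits(deaths_per_year)
print(f"centre={mean:.0f}, limits=({lo:.0f}, {hi:.0f})")

# Points outside the limits are statistically significant signals;
# runs of points on one side of the centre line are signals too.
flags = [x for x in deaths_per_year if x < lo or x > hi]
```

With this invented data the steady upward trend pushes the final year outside the upper limit, which is exactly the kind of red flag the chart is designed to raise.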

But hang on just a minute – using raw mortality data like this is invalid because we all know that people are getting older, demand on our hospitals is rising, A&Es are busier, older people have more illnesses, and more of them will not survive their visit to our hospital. This rise in mortality may actually just be because we are doing more work.

Good point! Let us plot the activity data and see if there has been an increase.

[Figure: MS_Activity]

Yes – indeed the activity has increased significantly too.

Told you so! And it looks like the activity has gone up more than the mortality. Does that mean we are actually doing a better job at keeping people alive? That sounds like a more positive message for the Board and the Annual Report. But how do we present that message? What about as a ratio of mortality to activity? That will make it easier to compare ourselves with other hospitals.

Good idea! Here is the Raw Mortality Ratio chart.

Ah ha. See! The % mortality is falling significantly over time. Told you so.

Careful. There is an unstated assumption here: that the case mix is staying the same over time. This pattern could also be the impact of us doing a greater proportion of lower-complexity and lower-risk work. So we need to correct this raw mortality data for case-mix complexity – and we can do that by using data from all NHS hospitals to give us a frame of reference. Dr Foster can help us with that because it is quite a complicated statistical modelling process. What comes out of Dr Foster’s black magic box is the Global Hospital Raw Mortality (GHRM) which is the expected number of deaths for our case mix if we were an ‘average’ NHS hospital.

[Figure: MS_ExpectedMortality_Ratio]

What this says is that the NHS-wide raw mortality risk appears to be falling over time (which may be for a wide variety of reasons but that is outside the scope of this conversation). So what we now need to do is compare this global raw mortality risk with our local raw mortality risk  … to give the Hospital Standardised Mortality Ratio.

This gives us the Mid Staffordshire Hospital HSMR chart. The blue line at 100 is the reference average – and what this chart says is that Mid Staffordshire hospital had a consistently higher risk than the average case-mix adjusted mortality risk for the whole NHS. And it says that it got even worse after 2001 and that it stayed consistently 20% higher after 2003.
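The final arithmetic step is simple once the case-mix model has produced the expected deaths: HSMR is 100 times observed deaths divided by expected deaths. A minimal sketch, using invented figures rather than the published Mid Staffs data:

```python
# HSMR sketch: 100 * observed / expected, where "expected" is what a
# case-mix adjustment model predicts for an average NHS hospital
# treating the same patients. All numbers below are illustrative.

def hsmr(observed_deaths, expected_deaths):
    return 100.0 * observed_deaths / expected_deaths

# illustrative per-year (observed, expected) pairs - not real data
years = {
    2001: (900, 860),
    2003: (960, 800),
    2006: (1000, 830),
}

for year, (obs, exp) in years.items():
    ratio = hsmr(obs, exp)
    excess = obs - exp   # the "excess-death-equivalent" for that year
    print(f"{year}: HSMR={ratio:.0f}, excess-death-equivalent={excess:+d}")
```

An HSMR of 100 means the hospital matches the case-mix adjusted national average; 120 means a 20% higher mortality risk than expected for its patients.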

Ah! Oh dear! That is not such a positive message for the Board and the Annual Report. But how did we miss this evolving safety catastrophe? We had the Dr Foster data from 2001.

This is not a new problem – a similar thing happened in Vienna between 1820 and 1850 with maternal deaths caused by Childbed Fever. The problem was detected by Dr Ignaz Semmelweis who also discovered a simple, pragmatic solution to the problem: hand washing.  He blew the whistle but unfortunately those in power did not like the implication that they had been the cause of thousands of avoidable mother and baby deaths.  Semmelweis was vilified and ignored, and he did not publish his data until 1861. And even then the story was buried in tables of numbers.  Semmelweis went mad trying to convince the World that there was a problem.  Here is the full story.

Also, statistical process control charts were not invented until 1924 – and not in healthcare but in manufacturing. These tried-and-tested safety and quality improvement tools are only slowly diffusing into healthcare because the barriers to innovation appear somewhat impervious.

And the pores have been clogged even more by the social poison called “cynicide” – the emotional and political toxin exuded by cynics.

So how could we detect a developing crisis earlier – in time to avoid a catastrophe?

The first step is to estimate the excess-death-equivalent. Dr Foster does this for you. Here is the data from the table plotted as a time-series chart, showing the estimated excess-death-equivalent per year. It has an average of 100 (that is two per week) and the average should be close to zero. More worryingly, the number was increasing steadily over time, up to 200 per year in 2006 – that is about four excess deaths per week, on average. It is important to remember that HSMR is a risk ratio and mortality is a multi-factorial outcome. So the excess-death-equivalent estimate does not imply that a clear causal chain will be evident in specific deaths. That is a complete misunderstanding of the method.

I am sorry – you are losing me with the statistical jargon here. Can you explain in plain English what you mean?

OK. Let us use an example.

Suppose we set up a tombola at the village fete and we sell 50 tickets with the expectation that the winner bags all the money. Each ticket holder has the same 1 in 50 risk of winning the wad-of-wonga and a 49 in 50 risk of losing their small stake. At the appointed time we spin the barrel to mix up the ticket stubs then we blindly draw one ticket out. At that instant the 50 people with an equal risk changes to one winner and 49 losers. It is as if the grey fog of risk instantly condenses into a precise, black-and-white, yes-or-no, winner-or-loser, reality.

Translating this concept back into HSMR and Mid Staffs – the estimated 1200 deaths are just the “condensed risk of harm equivalent”. So, to then conduct a retrospective case-note analysis of specific deaths looking for the specific cause would be equivalent to trying to retrospectively work out why the particular winning ticket in the tombola was picked out. It is a search that is doomed to fail. To then conclude from this fruitless search that HSMR is invalid is only to compound the delusion further. The actual problem here is ignorance and misunderstanding of the basic Laws of Physics and Probability, because our brains are not good at solving these sorts of problems.
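A few lines of Python make the tombola point tangible: every ticket carries an identical prior risk, and repeating the draw many times shows that no single draw contains a discoverable “cause”. This simulation is purely illustrative:

```python
import random

# A tombola with 50 tickets: every ticket has an identical 1-in-50
# chance of winning before the draw is made.
N_TICKETS = 50
prior_risk = 1 / N_TICKETS          # 0.02 for every ticket

# Repeat the draw many times: each ticket wins about equally often,
# so there is nothing special to find in why any one ticket won.
rng = random.Random(42)             # fixed seed so the run is repeatable
counts = [0] * N_TICKETS
for _ in range(50_000):
    counts[rng.randrange(N_TICKETS)] += 1

print(min(counts), max(counts))     # every ticket wins roughly 1000 times
```

The uniform spread of wins is the whole point: searching any single winning ticket for the reason it won is the same doomed search as hunting individual case notes for the cause of a statistical excess.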

But Mid Staffs is a particularly severe example and it only shows up after years of data has accumulated. How would a hospital that was not as bad as this know they had a risk problem, and know sooner? Waiting for years to accumulate enough data to prove there was an avoidable problem in the past is not much help.

That is an excellent question. This type of time-series chart is not very sensitive to small changes when the data is noisy and sparse – such as when you plot the data on a month-by-month timescale and avoidable deaths are actually an uncommon outcome. Plotting the annual sum smooths out this variation and makes the trend easier to see, but it delays the diagnosis further. One way to increase the sensitivity is to plot the data as a cusum (cumulative sum) chart – which is conspicuous by its absence from the data table. It is the running total of the estimated excess deaths. Rather like the running total of swings in a game of golf.

This is the cusum chart of excess deaths and you will notice that it is not plotted with control limits. That is because it is invalid to use standard control limits for cumulative data. The important feature of the cusum chart is the slope and the deviation from zero. What is usually done is that an alert threshold is plotted on the cusum chart and if the measured cusum crosses this alert-line then the alarm bell should go off – and the search then focuses on the precursor events: the Near Misses, the Not Agains and the Niggles.
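A cusum is just a running total, so it is easy to sketch. The annual excess figures and the alert threshold below are invented for illustration only:

```python
# Cusum sketch: the running total of annual excess-death-equivalents
# (observed minus expected deaths). A sustained upward slope signals a
# persistent problem long before any single year looks extreme.

from itertools import accumulate

excess_per_year = [10, 25, 40, 60, 90, 120, 150]   # illustrative, not real
cusum = list(accumulate(excess_per_year))          # running total

ALERT_THRESHOLD = 200   # chosen for illustration only

for year, total in enumerate(cusum, start=2000):
    flag = " <-- ALERT" if total > ALERT_THRESHOLD else ""
    print(f"{year}: cumulative excess = {total}{flag}")
```

Notice that no single year's excess is dramatic, yet the cumulative line crosses the alert threshold midway through the series: that is the extra sensitivity the cusum buys.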

I see. You make it look easy when the data is presented as pictures. But aren’t we still missing the point? Isn’t this still after-the-avoidable-event analysis?

Yes! An avoidable death should be a Never-Event in a designed-to-be-safe healthcare system. It should never happen. There should be no coffins to count. To get to that stage we need to apply exactly the same approach to the Near-Misses, and then the Not-Agains, and eventually the Niggles.

You mean we have to use the SUI data and the IR1 data and the complaint data to do this – and also ask our staff and patients about their Niggles?

Yes. And it is not the number of complaints that is the most useful metric – it is the appearance of the cumulative sum of the complaint severity score. And we need a method for diagnosing and treating the cause of the Niggles too. We need to convert the feedback information into effective action.

Ah ha! Now I understand what the role of the Governance Department is: to apply the tools and techniques of Improvement Science proactively.  But our Governance Department have not been trained to do this!

Then that is one place to start – and their role needs to evolve from Inspectors and Supervisors to Demonstrators and Educators – ultimately everyone in the organisation needs to be a competent Healthcare Improvementologist.

OK – I now know what to do next. But wait a minute. This is going to cost a fortune!

This is just one small first step.  The next step is to redesign the processes so the errors do not happen in the first place. The cumulative cost saving from eliminating the repeated checking, correcting, box-ticking, documenting, investigating, compensating and insuring is much much more than the one-off investment in learning safe system design.

So the Finance Director should be a champion for safety and quality too.

Yup!

Brill. Thanks. And can I ask one more question? I do not want to appear too skeptical but how do we know we can trust that this risk-estimation system has been designed and implemented correctly? How do we know we are not being bamboozled by statisticians? It has happened before!

That is the best question yet.  It is important to remember that HSMR is counting deaths in hospital, which means that it is not actually the risk of harm to the patient that is measured – it is the risk to the reputation of the hospital! So the answer to your question is that you demonstrate your deep understanding of the rationale and method of risk-of-harm estimation by listing all the ways that such a system could be deliberately “gamed” to make the figures look better for the hospital. And then go out and look for hard evidence of all the “games” that you can invent. It is a sort of creative poacher-becomes-gamekeeper detective exercise.

OK – I sort of get what you mean. Can you give me some examples?

Yes. The HSMR method is based on deaths-in-hospital, so discharging a patient from hospital before they die will make the figures look better. Suppose one hospital has more access to end-of-life care in the community than another: its HSMR figures would look better even though exactly the same number of people died. Another is that the HSMR method is weighted towards admissions classified as “emergencies” – so if a hospital admits more patients as “emergencies” who are not actually very sick, and discharges them quickly, then this will inflate its expected deaths and make its mortality ratio look better – even though the risk-of-harm to patients has not changed.

OMG – so if we have pressure to meet 4-hour A&E targets, and we get paid more for an emergency admission than for an A&E attendance, then admitting to an Assessment Area and discharging within one day will actually reward the hospital financially, operationally and by apparently reducing its HSMR – even though there has been no difference at all to the care that patients actually receive?

Yes. It is an inevitable outcome of the current system design.

But that means that if I am gaming the system and my HSMR is not getting better then the risk-of-harm to patients is actually increasing and my HSMR system is giving me false reassurance that everything is OK.   Wow! I can see why some people might not want that realisation to be public knowledge. So what do we do?

Design the system so that the rewards are aligned with lower risk of harm to patients and improved outcomes.

Is that possible?

Yes. It is called a Win-Win-Win design.

How do we learn how to do that?

Improvement Science.

Footnote I:

The graphs tell a story but they may not create a useful sense of perspective. It has been said that there is a 1 in 300 chance that if you go to hospital you will not leave alive due to avoidable causes. What! It cannot be as high as 1 in 300 surely?

OK – let us use the published Mid Staffs data to test this hypothesis. Over 12 years there were about 150,000 admissions and an estimated 1,200 excess deaths (if all the risk were concentrated into the excess deaths, which is not what actually happens). That means odds of roughly 1 in 125 of an avoidable death for every admission! That is more than twice as bad as the estimated average.
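The back-of-envelope arithmetic can be checked directly, using the approximate figures quoted above:

```python
# Odds of an avoidable death per admission, from the figures quoted above.
admissions = 150_000     # approximate admissions over 12 years
excess_deaths = 1_200    # estimated excess deaths over the same period

odds = admissions / excess_deaths
print(round(odds))       # → 125, i.e. roughly 1-in-125 odds per admission
```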

The Mid Staffordshire statistics are bad enough; but the NHS-as-a-whole statistics are cumulatively worse because there are 100’s of other hospitals that are each generating not-as-obvious avoidable mortality. The data is very ‘noisy’ so it is difficult even for a statistical expert to separate the message from the morass.

And remember – the “expected” mortality is estimated from the average for the whole NHS – which means that if this average is higher than it could be, then there is a statistical bias and we are being falsely reassured by being ‘not statistically significantly different’ from the pack.

And remember too – for every patient and family that suffers an avoidable death there are many more who have to live with the consequences of avoidable but non-fatal harm.  That is called avoidable morbidity.  This is what the risk really means – everyone has a higher risk of some degree of avoidable harm. Psychological and physical harm.

This challenge is not just about preventing another Mid Staffs – it is about preventing 1000’s of avoidable deaths and 100,000s of patients avoidably harmed every year in ‘average’ NHS trusts.

It is not a mass conspiracy of bad nurses, bad doctors, bad managers or bad politicians that is the root cause.

It is poorly designed processes – and they are poorly designed because the nurses, doctors and managers have not learned how to design better ones.  And we do not know how because we were not trained to.  And that education gap was an accident – an unintended error of omission.  

Our urgently-improve-NHS-safety-challenge requires a system-wide safety-by-design educational and cultural transformation.

And that is possible because the knowledge of how to design, test and implement inherently safe processes exists. But it exists outside healthcare.

And that safety-by-design training is a worthwhile investment because safer-by-design processes cost less to run because they require less checking, less documenting, less correcting – and all the valuable nurse, doctor and manager time freed up by that can be reinvested in more care, better care and designing even better processes and systems.

Everyone Wins – except the cynics who have a choice: to eat humble pie or leave.

Footnote II:

In the debate that has followed the publication of the Francis Report a lot of scrutiny has been applied to the method by which an estimated excess mortality number is created and it is necessary to explore this in a bit more detail.

The HSMR is an estimate of relative risk – it does not say that a set of specific patients were the ones who came to harm and the rest were OK. So looking at individual deaths for identifiable cause-and-effect paths is to completely misunderstand the method – it is a misuse of the message.  And when very few, if any, are found, to conclude from that that HSMR is flawed is an error of logic that exposes the ignorance of the analyst further.

HSMR is not perfect though – it has weaknesses.  It is a benchmarking process: the “standard” of 100 is always moving because the collective goal posts are moving – the reference is always changing. HSMR is estimated using data submitted by hospitals themselves – the clinical coding data.  So the main weakness is that it is dependent on the quality of the clinical coding – the errors of commission (wrong codes) and the errors of omission (missing codes). Garbage In, Garbage Out.

Hospitals use clinically coded data for another reason – payment. The way hospitals are now paid is based on the volume and complexity of their activity – Payment By Results (PbR) – using what are called Healthcare Resource Groups (HRGs). This is a better and fairer design because hospitals with more complex (i.e. more costly to manage) case loads get paid more per patient on average.  The HRG for each patient is determined by their clinical codes – including what are called the comorbidities – the other things that the patient has wrong with them. More comorbidities means more complex and more risky, so more money and more risk of death – roughly speaking.  So when PbR came in it became very important to code fully in order to get paid “properly”.

The problem was that before PbR the coding errors went largely unnoticed – especially the comorbidity coding. And the errors were biased: it is more likely that a code is omitted than that an incorrect code is recorded, and errors of omission are harder to detect. This meant that more complete coding (to attract more money) pushed the estimated casemix complexity up compared with the historical reference. So as actual (not estimated) NHS mortality has fallen slightly, the HSMR yardstick becomes even more distorted.  Hospitals that did not keep up with the Coding Game would look worse even though their actual risk and mortality may be unchanged.  This is the fundamental design flaw in all types of benchmarking based on self-reported data.

The actual problem here is even more serious. PbR is actually a payment for activity – not a payment for outcomes. It is calculated from what it costs to run the average NHS hospital using a technique called Reference Costing, which is the same method that manufacturing companies used to decide what price to charge for their products. It has another name – Absorption Costing.  The highest performers in the manufacturing world no longer use this out-of-date method. The implications of using Reference Costing and PbR in the NHS are profound and dangerous:

If NHS hospitals in general have poorly designed processes that create internal queues and require more bed-days than actually necessary, then the cost of that “waste” becomes built into the future PbR tariff. This means average length of stay (LOS) is financially rewarded: above-average LOS is financially penalised and below-average LOS makes a profit.  There is no financial pressure to improve beyond average. This is called the Regression to the Mean effect.  Also, LOS is not a measure of quality – so there is a pressure to shorten length of stay for purely financial reasons – to generate a surplus to fund growth and capital investment.  That pressure is non-specific and indiscriminate.  PbR is necessary but it is not sufficient – it requires a quality-of-outcome metric to complete it.

So the PbR system is based on an out-of-date cost-allocation model and therefore leads to the very problems that are contributing to the MidStaffs crisis – financial pressure causing quality failures and increased risk of mortality.  MidStaffs may be a chance victim of a combination of factors coming together like a perfect storm – but those same factors are present throughout the NHS because they are built into the current design.

One solution is to move towards a more up-to-date financial model called stream costing. This uses similar data to reference costing but it estimates the “ideal” cost of the “necessary” work to achieve the intended outcome. This stream cost becomes the focus for improvement – the streams where there is the biggest gap between the stream cost and the reference cost become the focus of the redesign activity. Very often the root cause is just poor operational policy design; sometimes it is quality and safety design problems. Both are solvable without investment in extra capacity. The result is a higher-quality, quicker, lower-cost stream. Win-win-win. And in the short term that is rewarded by a tariff income that exceeds cost – and a lower HSMR.

Radically redesigning the financial model for healthcare is not a quick fix – and it requires a lot of other changes to happen first. So the sooner we start the sooner we will arrive. 

The Writing On The Wall – Part I

The writing is on the wall for the NHS.

It is called the Francis Report and there is a lot of it. Just the 290 recommendations runs to 30 pages. It would need a very big wall and very small writing to put it all up there for all to see.

So predictably the speed-readers have latched onto specific words – such as “Inspectors“.

Recommendation 137: “Inspection should remain the central method for monitoring compliance with fundamental standards.”

And it goes further by recommending “A specialist cadre of hospital inspectors should be established …”

A predictable wail of anguish rose from the ranks “Not more inspectors! The last lot did not do much good!”

The word “cadre” is not one that is used in common parlance so I looked it up:

Cadre: 1. a core group of people at the center of an organization, especially military; 2. a small group of highly trained people, often part of a political movement.

So it has a military, centralist, specialist, political flavour. No wonder there was a wail of anguish! Perhaps this “cadre of inspectors” has been unconsciously labelled with another name? Persecutors.

Of more interest is the “highly trained” phrase. Trained to do what? Trained by whom? Clearly none of the existing schools of NHS management who have allowed the fiasco to happen in the first place. So who – exactly? Are these inspectors intended to be protectors, persecutors, or educators?

And what would they inspect?

And how would they use the output of such an inspection?

Would the fear of the inspection and its possible unpleasant consequences be the stick to motivate compliance?

Is the language of the Francis Report going to create another brick wall of resistance from the rubble of the ruins of the reputation of the NHS?  Many self-appointed experts are already saying that implementing 290 recommendations is impossible.

They are incorrect.

The number of recommendations is a measure of the breadth and depth of the rot. So the critical-to-success factor is to implement them in a well-designed order. Get the first few in place and working and the rest will follow naturally.  Get the order wrong and the radical cure will kill the patient.

So where do we start?

Let us look at the inspection question again.  Why would we fear an external inspection? What are we resisting? There are three facets to this: first we do not know what is expected of us;  second we do not know if we can satisfy the expectation; and third we fear being persecuted for failing to achieve the impossible.

W Edwards Deming used a very effective demonstration of the dangers of well-intended but badly-implemented quality improvement by inspection: it was called the Red Bead Game.  The purpose of the game was to illustrate how to design an inspection system that actually helps to achieve the intended goal: sustained improvement.

This is applied Improvement Science and I will illustrate how it is done with a real and current example.


I am assisting a department in a large NHS hospital to improve the quality of their service. I have been sent in as an external inspector.  The specific quality metric they have been tasked to improve is the turnaround time of the specialist work that they do. This is a flow metric because a patient cannot leave hospital until this work is complete – and more importantly it is a flow and quality metric because when the hospital is full then another patient, one who urgently needs to be admitted, will be waiting for the bed to be vacated. One in one out.

The department have been set a standard to meet, a target, a specification, a goal. It is very clear and it is easily measurable. They have to turnaround each job of work in less than 2 hours.  This is called a lead time specification and it is arbitrary.  But it is not unreasonable from the perspective of the patient waiting to leave and for the patient waiting to be admitted. Neither want to wait.

The department has a sophisticated IT system that measures their performance. They use it to record when each job starts and when each job is finished, and from those two events the software calculates the lead time for each job in real-time. At the end of each day the IT system counts how many jobs were completed in less than 2 hours, compares this with how many were done in total, and calculates a ratio which it presents as a percentage in the range 0 to 100. This is called the process yield.

The department are dedicated and they work hard: they do all the work that arrives each day the same day – no matter how long it takes. And at the end of each day they have their score for that day. And it is almost never 100%.  Not never. Almost never. But it is not good enough and they are being blamed for it. In turn they blame others for making their job more difficult. It is a blame-game and it has been going on for years.
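The daily yield calculation amounts to the following sketch. The job records are invented, assuming each job is logged as a (start, finish) pair in hours:

```python
# Daily process yield: the percentage of jobs completed within the
# 2-hour lead time specification. Job times are invented examples.
jobs = [(9.0, 10.5), (9.5, 12.0), (10.0, 11.5), (11.0, 14.0), (13.0, 14.9)]
target_hours = 2.0

# Count jobs whose lead time (finish - start) met the target
within_target = sum(1 for start, finish in jobs if finish - start <= target_hours)
yield_pct = 100 * within_target / len(jobs)
print(f"{yield_pct:.0f}%")   # → 60% (3 of the 5 example jobs met the 2-hour target)
```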

So how does an experienced Improvement Science-trained Inspector approach this sort of “wicked” problem?

First we need to get the writing on the wall – we need to see the reality – we need to “plot the dots” – we need to see what the performance is doing over time – we need to see the voice of the process. And that requires only their data, a pencil, some paper and for the chart to be put on the wall where everyone can see it.

[Chart_1] This is what their daily % yield data for three consecutive weeks looked like as a time-series chart. The thin blue line is the 100% yield target.

The 100% target was only achieved on three days – and they were all Sundays. On the other Sunday it was zero (which may mean that there was no data to calculate a ratio from).

There is wide variation from one day to the next, and it is the variation as well as the average that is of interest to an improvement scientist. What is the source of the variation? If 100% yield can be achieved on some days then what is different about those days?

[Chart_2]

So our Improvement Science-trained Inspector will now re-plot the data in a different way – as rational groups. This exposes the issue clearly. The variation at weekends is very wide, while the performance during weekdays is much less variable.  What this says is that the weekend system and the weekday system are different – which means it is invalid to combine the data for both.

It also raises the question of why there is such high variation in yield only at weekends?  The chart cannot answer the question, so our IS-trained Inspector digs a bit deeper and discovers that the volume of work done at the weekend is low, the staffing of the department is different, and that the recording of the events is less reliable. In short – we cannot even trust the weekend data – so we have two reasons to justify excluding it from our chart and just focusing on what happens during the week.

[Chart_3] We re-plot our chart, marking the excluded weekend data as not for analysis.

We can now see that the weekday performance of our system is visible, less variable, and the average is a long way from 100%.

The team are working hard and still only achieving mediocre performance. That must mean that they need something that is missing. More motivation maybe. More people maybe. More technology maybe.  But there is no more money for more people or technology, and traditional JFDI motivation does not seem to have helped.

This looks like an impossible task!

[Chart_4]

So what does our Inspector do now? Mark their paper with a FAIL and put them on the To Be Sacked for Failing to Meet an Externally Imposed Standard heap?

Nope.

Our IS-trained Inspector calculates the limits of expected performance from the data  and plots these limits on the chart – the red lines.  The computation is not difficult – it can be done with a calculator and the appropriate formula. It does not need a sophisticated IT system.
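The text does not spell out which formula is used; a common choice for an individuals chart of this kind is the XmR computation, mean ± 2.66 × mean moving range. Here is a sketch using invented daily yield figures:

```python
# Expected-performance limits for an individuals (XmR) chart:
# mean ± 2.66 × mean moving range. Daily yields are invented examples.
daily_yield = [55, 70, 48, 62, 75, 58, 66, 52, 71, 60, 64, 57]

mean = sum(daily_yield) / len(daily_yield)
# Moving range: absolute difference between consecutive points
moving_ranges = [abs(b - a) for a, b in zip(daily_yield, daily_yield[1:])]
mean_mr = sum(moving_ranges) / len(moving_ranges)

lower_limit = mean - 2.66 * mean_mr
upper_limit = mean + 2.66 * mean_mr
print(round(lower_limit, 1), round(mean, 1), round(upper_limit, 1))   # → 26.7 61.5 96.3
```

The 2.66 constant converts the mean moving range into an estimate of three standard deviations of routine variation – which is why a calculator is all that is needed.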

What this chart now says is “The current design of this process is capable of delivering between 40% and 85% yield. To expect it to do better is unrealistic”.  The implication for action is “If we want 100% yield then the process needs to be re-designed.” Persecution will not work. Blame will not work. Hoping-for-the-best will not work. The process must be redesigned.

Our improvement scientist then takes off the Inspector’s hat and dons the Designer’s overalls and gets to work. There is a method to this and it is called 6M Design®.

[Chart_5]

First we need to have a way of knowing if any future design changes have a statistically significant impact – for better or for worse. To do this the chart is extended into the future and the red lines are projected forwards in time as the black lines called locked-limits.  The new data is compared with this projected baseline as it comes in.  The weekends and bank holidays are excluded because we know that they are a different system. On one day (20/12/2012) the yield was surprisingly high. Not 100% but more than the expected upper limit of 85%.
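Comparing new data against the projected locked limits is then a simple screen. The limits and daily yields below are illustrative only:

```python
# Locked-limits comparison: the baseline limits are frozen and projected
# forward; any new point outside them is a signal worth investigating.
# Limits and yield figures here are invented for illustration.
locked_lower, locked_upper = 40.0, 85.0
new_data = {"18/12/2012": 63, "19/12/2012": 58, "20/12/2012": 91, "21/12/2012": 70}

signals = {day: value for day, value in new_data.items()
           if not (locked_lower <= value <= locked_upper)}
print(signals)   # → {'20/12/2012': 91}: a surprisingly good day to investigate
```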

[Chart_6] This alerted us to investigate, and we found that there was a ‘hospital bed crisis’ and an ‘all hands to the pumps’ distress call went out.

Extra capacity was pulled to the process and less urgent work was delayed until later.  It is the habitual reaction-to-a-crisis behaviour called “expediting” or “firefighting”.  So after the crisis had waned and the excitement diminished the performance returned to the expected range. A week later the chart signals us again and we investigate but this time the cause was different. It was an unusually quiet day and there was more than enough hands on the pumps.

Both of these days are atypically good and we have an explanation for each of them. This is called an assignable cause. So we are justified in excluding these points from our measure of the typical baseline capability of our process – the performance the current design can be expected to deliver.

An inexperienced manager might conclude from these lessons that what is needed is more capacity. That sounds and feels intuitively obvious and it is correct that adding more capacity may improve the yield – but that does not prove that lack of capacity is the primary cause.  There are many other causes of long lead times  just as there are many causes of headaches other than brain tumours! So before we can decide the best treatment for our under-performing design we need to establish the design diagnosis. And that is done by inspecting the process in detail. And we need to know what we are looking for; the errors of design commission and the errors of design omission. The design flaws.

Only a trained and experienced process designer can spot the flaws in a process design. Intuition will trick the untrained and inexperienced.


Once the design diagnosis is established then the redesign stage can commence. Design always works to a specification and in this case it was clear – to significantly improve the yield to over 90% at no cost.  In other words without needing more people, more skills, more equipment, more space, more anything. The design assignment was made trickier by the fact that the department claimed that it was impossible to achieve significant improvement without adding extra capacity. That is why the Inspector had been sent in. To evaluate that claim.

The design inspection revealed a complex adaptive system – not a linear, deterministic, production-line that manufactures widgets.  The department had to cope with wide variation in demand, wide variation in quality of request, wide variation in job complexity, and wide variation in urgency – all at the same time.  But that is the nature of healthcare and acute hospital work. That is the expected context.

The analysis of the current design revealed that it was not well suited for this requirement – and the low yield was entirely predictable. The analysis also revealed that the root cause of the low yield was not lack of either flow-capacity or space-capacity.

This insight led to the suggestion that it would be possible to improve yield without increasing cost. The department were polite but they did not believe it was possible. They had never seen it, so why should they be expected to just accept this on faith?

[Chart_7] So, the next step was to develop, test and demonstrate a new design, and that was done in three stages. The final stage was the Reality Test – the actual process design was changed for just one day – and the yield measured and compared with the predicted improvement.

This was the validity test – the proof of the design pudding. And to visualise the impact we used the same technique as before – extending the baseline of our time-series chart, locking the limits, and comparing the “after” with the “before”.

The yellow point marks the day of the design test. The measured yield was well above the upper limit which suggested that the design change had made a significant improvement. A statistically significant improvement.  There was no more capacity than usual and the day was not unusually quiet. At the end of the day we held a team huddle.

Our first question was “How did the new design feel?” The consensus was “Calmer, smoother, fewer interruptions” and best of all “We finished on time – there was no frantic catch up at the end of the day and no one had to stay late to complete the days work!”

The next question was “Do we want to continue tomorrow with this new design or revert back to the old one?” The answer was clear “Keep going with the new design. It feels better.”

The same chart was used to show what happened over the next few days – excluding the weekends as before. The improvement was sustained – it did not revert to the original because the process design had been changed. Same work, same capacity, different process – higher yield. The red flags on the charts mark the statistically significant evidence of change and the cluster of red flags is very strong statistical evidence that the improvement is not due to chance.

The next phase of the 6M Design® method is to continue to monitor the new process to establish the new baseline of expectation. That will require at least twelve data points and it is in progress. But we have enough evidence of a significant improvement. This means that we have no credible justification to return to the old design, and it also implies that it is no longer valid to compare the new data against the old projected limits. Our chart tells us that we need to split the data into before-and-after and to calculate new averages and limits for each segment separately. We have changed the voice of the process by changing the design.

[Chart_8] And when we split the data at the point-of-change, the red flags disappear – which means that our new design is stable. And it has a new capability – a better one. We have moved closer to our goal of 100% yield. It is still early days and we do not really have enough data to calculate the new capability.
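Splitting the chart at the point of change means computing a separate centre line (and separate limits) for each segment. A sketch with invented before-and-after yields:

```python
# Split the series at the known point of change and compute each
# segment's own centre line and XmR limits. Figures are invented.
yields = [62, 58, 65, 60, 63, 61,    # before the design change
          88, 92, 90, 89, 91, 93]    # after the design change
change_point = 6

def segment_stats(data):
    """Return (mean, lower limit, upper limit) for one segment."""
    mean = sum(data) / len(data)
    mean_mr = sum(abs(b - a) for a, b in zip(data, data[1:])) / (len(data) - 1)
    return mean, mean - 2.66 * mean_mr, mean + 2.66 * mean_mr

before_mean, _, _ = segment_stats(yields[:change_point])
after_mean, _, _ = segment_stats(yields[change_point:])
print(round(before_mean, 1), round(after_mean, 1))   # → 61.5 90.5
```

Each segment now gets its own voice of the process, so the new design is judged against its own expected variation rather than against the old one.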

What we can say is that we have improved average quality yield from 63% to about 90% at no cost using a sequence of process diagnose, design, deliver.  Study-Plan-Do.

And we have hard evidence that disproves the impossibility hypothesis.


And that was the goal of the first design change – it was not to achieve 100% yield in one jump. Our design simulation had predicted an improvement to about 90%.  And there are other design changes to follow that need this stable foundation to build on.  The order of implementation is critical – and each change needs time to bed in before the next change is made. That is the nature of the challenge of improving a complex adaptive system.

The cost to the department was zero but the benefit was huge.  The bigger benefit to the organisation was felt elsewhere – the ‘customers’ saw a higher quality, quicker process – and there will be a financial benefit for the whole system. It will be difficult to measure with our current financial monitoring systems but it will be real and it will be there – lurking in the data.

The improvement required a trained and experienced Inspector/Designer/Educator to start the wheel of change turning. There are not many of these in the NHS – but the good news is that the first level of this training is now available.

What this means for the post-Francis Report II NHS is that those who want to can choose to leap over the wall of resistance that is being erected by the massing legions of noisy cynics. It means we can all become our own inspectors. It means we can all become our own improvers. It means we can all learn to redesign our systems so that they deliver higher safety and better quality, more quickly and at no extra one-off or recurring cost.  And then none of us need fear the Specialist Cadre of Hospital Inspectors.

The writing is on the wall.


15/02/2013 – Two weeks in and still going strong. The yield has improved from 63% to 92% and is stable. Improvement-by-design works.

10/03/2013 – Six weeks in and a good time to test if the improvement has been sustained.

[TTO_Yield_Weekly] The chart shows the weekly performance plotted for 17 weeks before the change and for 5 weeks after. The advantage of weekly aggregated data is that it removes the weekend/weekday 7-day cycle and reduces the effect of day-to-day variation.

The improvement is obvious, significant and has been sustained. This is the objective improvement. More important is the subjective improvement.

Here is what Chris M (departmental operational manager) wrote in an email this week (quoted with permission):

Hi Simon

It is I who need to thank you for explaining to me how to turn our pharmacy performance around and ultimately improve the day to day work for the pharmacy team (and the trust staff). This will increase job satisfaction and make pharmacy a worthwhile career again instead of working in constant pressure with a lack of achievement that had made the team feel rather disheartened and depressed. I feel we can now move onwards and upwards so thanks for the confidence boost.

Best wishes and many thanks

Chris

This is what Improvement Science is all about!

Robert Francis QC

Today is an important day.

The Robert Francis QC Report and recommendations from the Mid-Staffordshire Hospital Crisis have been published – and they make a sobering read.  The emotions that just the executive summary evoked in me were sadness, shame and anger.  Sadness for the patients, relatives, and staff who have been irreversibly damaged; shame that the clinical professionals turned a blind eye; and anger that the root cause has still not been exposed to public scrutiny.

Click here to get a copy of the RFQC Report Executive Summary.

Click here to see the video of RFQC describing his findings. 

The root cause is ignorance at all levels of the NHS.  Not stupidity. Not malevolence. Just ignorance.

Ignorance of what is possible and ignorance of how to achieve it.

RFQC rightly focusses his recommendations on putting patients at the centre of healthcare and on making those paid to deliver care accountable for the outcomes.  Disappointingly, the report is notably thin on the financial dimension, other than saying that financial targets took priority over safety and quality.  He is correct. They did. But the report does not say that this is unnecessary – it just says “in future put safety before finance” – and in so doing it does not challenge the belief that we are playing a zero-sum game: the assumption that higher-quality-always-costs-more.

This assumption is wrong and can easily be disproved.

A system that has been designed to deliver safety-and-quality-on-time-first-time-and-every-time costs less. And it costs less because the cost of errors, checking, rework, queues, investigation, compensation, inspectors, correctors, fixers, chasers, and all the other expensive-high-level-hot-air-generation-machinery that overburdens the NHS and that RFQC has pointed squarely at is unnecessary.  He says “simplify” which is a step in the right direction. The goal is to render it irrelevant.

The ignorance is ignorance of how to design a healthcare system that works right-first-time. The fact that the Francis Report even exists and is pointing its uncomfortable fingers-of-evidence at every level of the NHS from ward to government is tangible proof of this collective ignorance of system design.

And the good news is that this collective ignorance is also unnecessary … because the knowledge of how to design safe-and-affordable systems already exists. We just have to learn how. I call it 6M Design® – but the label is irrelevant – the knowledge exists and the evidence that it works exists.

So here are some of the RFQC recommendations viewed through a 6M Design® lens:

1.131 Compliance with the fundamental standards should be policed by reference to developing the CQC’s outcomes into a specification of indicators and metrics by which it intends to monitor compliance. These indicators should, where possible, be produced by the National Institute for Health and Clinical Excellence (NICE) in the form of evidence-based procedures and practice which provide a practical means of compliance and of measuring compliance with fundamental standards.

This is the safety-and-quality outcome specification for a healthcare system design – the required outcome presented as a relevant metric in time-series format and qualified by context.  Only a stable outcome can be compared with a reference standard to assess the system capability. An unstable outcome metric requires inquiry to understand the root cause and an appropriate action to restore stability. A stable but incapable outcome performance requires redesign to achieve both stability and capability. And if the terms used above are unfamiliar then that is further evidence of system-design-ignorance.
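The stability-then-capability logic described above can be sketched in a few lines of code. This is a minimal illustration only (not part of the report or of 6M Design®), using the natural process limits of Wheeler’s XmR individuals chart; the weekly data and the 90% standard are invented for the example:

```python
# Minimal sketch: classify a time-series outcome metric as
# unstable, stable-but-incapable, or stable-and-capable.
# Uses the natural process limits of an XmR (individuals) chart.

def xmr_limits(data):
    """Natural process limits: mean +/- 2.66 x average moving range."""
    mean = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

def assess(data, spec_min):
    lo, hi = xmr_limits(data)
    if any(x < lo or x > hi for x in data):
        return "unstable: investigate the root cause first"
    if lo < spec_min:
        return "stable but incapable: redesign needed"
    return "stable and capable"

# Invented example: weekly % of patients treated within the standard.
weekly_pct = [92, 94, 91, 93, 95, 92, 94, 93]
print(assess(weekly_pct, spec_min=90))  # stable but incapable: redesign needed
```

The invented data gives a metric that is predictable but whose natural limits dip below the standard – the ‘stable but incapable’ case where redesign, not exhortation, is the appropriate response.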
 
1.132 The procedures and metrics produced by NICE should include evidence-based tools for establishing the staffing needs of each service. These measures need to be readily understood and accepted by the public and healthcare professionals.

This is the capacity-and-cost specification of any healthcare system design – the financial envelope within which the system must operate. The system capacity design works backwards from this constraint in the manner of “We have this much resource – what design of our system is capable of delivering the required safety and quality outcome with this capacity?”  The essence of this challenge is to identify the components of poor (i.e. wasteful) design in the existing systems and remove or replace them with less wasteful designs that achieve the same or better quality outcomes. This is not impossible but it does require system diagnostic and design capability. If the NHS had enough of those skills then the Francis Report would not exist.

1.133 Adoption of these practices, or at least their equivalent, is likely to help ensure patients’ safety. Where NICE is unable to produce relevant procedures, metrics or guidance, assistance could be sought and commissioned from the Royal Colleges or other third-party organisations, as felt appropriate by the CQC, in establishing these procedures and practices to assist compliance with the fundamental standards.

How to implement evidence-based research in the messy real world is the Elephant in the Room. It is possible but it requires techniques and tools that fall outside the traditional research and audit framework – or rather that sit between research and audit. This is where Improvement Science sits. The fact that the Report only mentions evidence-based practice and audit implies that the NHS is still ignorant of this gap and of what fills it – and so, it appears, is RFQC.

1.136 Information needs to be used effectively by regulators and other stakeholders in the system wherever possible by use of shared databases. Regulators should ensure that they use the valuable information contained in complaints and many other sources. The CQC’s quality risk profile is a valuable tool, but it is not a substitute for active regulatory oversight by inspectors, and is not intended to be.

Databases store data. Sharing databases will share data. Data is not information. Information requires data and the context for that data.  Furthermore having been informed does not imply either knowledge or understanding. So in addition to sharing information, the capability to convert information-into-decision is also required. And the decisions we want are called “wise decisions” which are those that result in actions and inactions that lead inevitably to the intended outcome.  The knowledge of how to do this exists but the NHS seems ignorant of it. So the challenge is one of education not of yet more investigation.

1.137 Inspection should remain the central method for monitoring compliance with fundamental standards. A specialist cadre of hospital inspectors should be established, and consideration needs to be given to collaborative inspections with other agencies and a greater exploitation of peer review techniques.

This is audit. This is the sixth stage of a 6M Design® – the Maintain step.  Inspectors need to know what they are looking for, the errors of commission and the errors of omission; and to know what those errors imply and what to do to identify and correct the root cause of these errors when discovered. The first cadre of inspectors will need to be fully trained in healthcare systems design and healthcare systems improvement – in short, they need to be Healthcare Improvementologists. And they too will need to be subject to the same framework of accreditation and accountability as those who work in the system they are inspecting.  This will be one of the greatest of the challenges. The fact that the Francis report exists implies that we do not have such a cadre. Who will train, accredit and inspect the inspectors? Who has proven themselves competent in reality (not rhetorically)?

1.163 Responsibility for driving improvement in the quality of service should therefore rest with the commissioners through their commissioning arrangements. Commissioners should promote improvement by requiring compliance with enhanced standards that demand more of the provider than the fundamental standards.

This means that commissioners will need to understand what improvement requires and to include that expectation in their commissioning contracts. This challenge is even greater than the creation of a “cadre of inspectors”. What is required is a “generation of competent commissioners” who are also experienced and who have demonstrated competence in healthcare system design. The Commissioners-of-the-Future will need to be experienced healthcare improvementologists.

The NHS is sick – very sick. The medicine it needs to restore its health and vitality does exist – and it will not taste very nice – but to withhold an effective treatment for a serious illness on that basis is clinical negligence.

It is time for the NHS to look in the mirror and take the strong medicine. The effect is quick – it will start to feel better almost immediately. 

To deliver safety and quality and quickly and affordably is possible – and if you do not believe that then you will need to muster the humility to ask to have the how demonstrated.

6MDesign

 

Kicking the Habit

It is not easy to kick a habit. We all know that. And for some reason the ‘bad’ habits are harder to kick than the ‘good’ ones. So what is bad about a ‘bad habit’ and why is it harder to give up? Surely if it was really bad it would be easier to give up?

Improvement is all about giving up old ‘bad’ habits and replacing them with new ‘good’ habits – ones that will sustain the improvement. But there is an invisible barrier that resists us changing any habit – good or bad. And it is that barrier to habit-breaking that we need to understand to succeed. Luck is not a reliable ally.

What does that habit-breaking barrier look like?

The problem is that it is invisible – or rather it is emotional – or to be precise it is chemical.

Our emotions are the output of a fantastically complex chemical system – our brains. And influencing the chemical balance of our brains can have a profound effect on our emotions.  That is how anti-depressants work – they very slightly adjust the chemical balance of every part of our brains. The cumulative effect is that we feel happier.  Nicotine has a similar effect.

And we can achieve the same effect without resorting to drugs or fags – and we can do that by consciously practising some new mental habits until they become ingrained and unconscious. We literally overwrite the old mental habit.

So how do we do this?

First we need to make the mental barrier visible – and then we can focus our attention on eroding it. To do that we need to remove the psychological filter that we all use to exclude our emotions. It is rather like taking off our psychological sunglasses.

When we do that the invisible barrier jumps into view: illuminated by the glare of three negative emotions.  Sadness, fear, and anxiety.  So whenever we feel any of these we know there is a barrier to improvement hiding in the emotional smoke. This is the first stage: tune in to our emotions.

The next step is counter-intuitive. Instead of running away from the negative feeling we consciously flip into a different way of thinking.  We actively engage with our negative feelings – and in a very specific way. We engage in a detached, unemotional, logical, rational, analytical  ‘What caused that negative feeling?’ way.

We then focus on the causes of the negative emotions. And when we have the root causes of our Niggles we design around them, under them, and over them.  We literally design them out of our heads.

The effect is like magic.

And this week I witnessed a real example of this principle in action.

One team I am working with experienced the Power of Improvementology. They saw the effect with their own eyes.  There were no computers in the way, no delays, no distortion and no deletion of data to cloud the issue. They saw the performance of their process jump dramatically – from a success rate of 60% to 96%!  And not just the first day, the second day too.  “Surprised and delighted” sums up their reaction.

So how did we achieve this miracle?

We just looked at the process through a different lens – one not clouded and misshapen by old assumptions and blackened by ignorance of what is possible.  We used the 6M Design® lens – and with the clarity of insight it brings the barriers to improvement became obvious. And they were dissolved. In seconds.

Success then flowed as the Dam of Disbelief crumbled and was washed away.

The chaos has gone. The interruptions have gone. The expediting has gone. The firefighting has gone. The complaining has gone.  These chronic Niggles have been replaced by the Nuggets of calm efficiency, new hope and visible excitement.

And we know that others have noticed the knock-on effect because we got an email from our senior executive that said simply “No one has moaned about TTOs for two days … something has changed.”    

That is Improvementology-in-Action.

 

Shifting, Shaking and Shaping

Stop Press: For those who prefer cartoons to books please skip to the end to watch the Who Moved My Cheese video first.


In 1962 – that is half a century ago – a controversial book was published. The title was “The Structure of Scientific Revolutions” and the author was Thomas S Kuhn (1922-1996), a physicist and historian at Harvard University.  The book ushered in the concept of a ‘paradigm shift’ and it upset a lot of people.

In particular it upset a lot of scientists because it suggested that the growth of knowledge and understanding is not smooth – it is jerky. And Kuhn showed that the scientists were causing the jerking.

Kuhn described the process of scientific progress as having three phases: pre-science, normal science and revolutionary science.  Most of the work scientists do is normal science, which means exploring, consolidating, and applying the current paradigm: the current conceptual model of how things work.  Anyone who argues against the paradigm is regarded as ‘mistaken’ because the paradigm represents the ‘truth’.  Kuhn draws on the history of science for his evidence, quoting examples of how innovators such as Galileo, Copernicus, Newton, Einstein and Hawking radically changed the way that we now view the Universe. But their different models were not accepted immediately and enthusiastically because they challenged the status quo. Galileo was under house arrest for much of his life because his ‘heretical’ writings challenged the Church.

Each revolution in thinking was both disruptive and at the same time constructive because it opened a door to allow rapid expansion of knowledge and understanding. And that foundation of knowledge that has been built over the centuries is one that we all take for granted.  It is a fragile foundation though. It could be all lost and forgotten in one generation because none of us are born with this knowledge and understanding. It is not obvious. We all have to learn it.  Even scientists.

Kuhn’s book was controversial because it suggested that scientists spend most of their time blocking change. This is not necessarily a bad thing. Stability for a while is very useful and the output of normal science is mostly positive. For example the revolution in thinking introduced by Isaac Newton (1643-1727) led directly to the Industrial Revolution and to far-reaching advances in every sphere of human knowledge. Most of modern engineering is built on Newtonian mechanics and it is only at the scales of the very large, the very small and the very quick that it falls over. Relativistic and quantum physics are more recent and very profound shifts in thinking and they have given us the digital computer and the information revolution. This blog is a manifestation of the quantum paradigm.

Kuhn concluded that the progress of change is jerky because scientists create resistance to change to create stability while doing normal science experiments.  But these same experiments produce evidence that suggests that the current paradigm is flawed. Over time the pressure of conflicting evidence accumulates, disharmony builds, conflict is inevitable and intellectual battle lines are drawn.  The deeper and more fundamental the flaw the more bitter the battle.

In contrast, newcomers seek harmony in the cacophony and propose new theories that explain both the old and the new. New paradigms. The stage is now set for a drama and the public watch bemused as the academic heavyweights slug it out. Eventually a tipping point is reached and one of the new paradigms becomes dominant. Often the transition is triggered by one crucial experiment.

There is a sudden release of the tension and a painful and disruptive conceptual  lurch – a paradigm shift. Then the whole process starts over again. The creators of the new paradigm become the consolidators and in time the defenders and eventually the dogmatics!  And it can take decades and even generations for the transition to be completed.

It is said that Albert Einstein (1879-1955) never fully accepted quantum physics even though his work planted the seeds for it and experience showed that it explained the experimental observations better. [For more about Einstein click here].              

The message that some take from Kuhn’s book is that paradigm shifts are the only way that knowledge can advance.  With this assumption, getting change to happen requires creating a crisis – a burning platform. Unfortunately this is an error of logic – it is an unverified generalisation from an observed specific. The evidence is growing that this we-always-need-a-burning-platform assumption is incorrect.  It appears that the growth of knowledge and understanding can be smoother, less damaging and more effective without creating a crisis.

So what is the evidence that this is possible?

Well, what pattern would you look for to illustrate that it is possible to improve smoothly and continually? A smooth growth curve of some sort? Yes – but it is more than that.  It is a smooth curve that is steeper than anyone else’s and one that is growing steeper over time.  Evidence that someone is learning to improve faster than their peers – and learning painlessly and continuously without crises; not painfully and intermittently using crises.

Two examples are Toyota and Apple.

Toyota is a Japanese car manufacturer that has out-performed other car manufacturers consistently for 40 years – despite the global economic boom-bust cycles. What is the secret formula for their success?

We need a bit of history. In the 1980’s a crisis-of-confidence hit the US economy. It was suddenly threatened by higher-quality and lower-cost imported Japanese products – for example cars.

The switch to buying Japanese cars had been triggered by the Oil Crisis of 1973 when the cost of crude oil quadrupled almost overnight – triggering a rush for smaller, less fuel hungry vehicles.  This is exactly what Toyota was offering.

This crisis was also a rude awakening for the US to the existence of a significant economic threat from their former adversary.  It was even more shocking to learn that W Edwards Deming, an American statistician, had sown the seed of Japan’s success thirty years earlier and that Toyota had taken much of its inspiration from Henry Ford.  The knee-jerk reaction of the automotive industry academics was to copy how Toyota was doing it, the Toyota Production System (TPS) and from that the school of Lean Tinkering was born.

This knowledge transplant has been both slow and painful and although learning to use the Lean Toolbox has improved Western manufacturing productivity and given us all more reliable, cheaper-to-run cars – no other company has been able to match the continued success of Japan.  And the reason is that the automotive industry academics did not copy the paradigm – the intangible, subjective, unspoken mental model that created the context for success.  They just copied the tangible manifestation of that paradigm.  The tools. That is just cynically copying information and knowledge to gain a competitive advantage – it is not respectfully growing understanding and wisdom to reach a collaborative vision.

Apple is now one of the largest companies in the world and it has become so because Steve Jobs (1955-2011), its Californian, technophilic, Zen Buddhist, entrepreneurial co-founder, had a very clear vision: To design products for people.  And to do that they continually challenged their own and their customers’ paradigms. Design is a logical-rational exercise. It is the deliberate use of explicit knowledge to create something that delivers what is needed but in a different way. Higher quality and lower cost. It is normal science.

Continually challenging our current paradigm is not normal science. It is revolutionary science. It is deliberately disruptive innovation. But continually challenging the current paradigm is uncomfortable for many and, by all accounts, Steve Jobs was not an easy person to work for because he was future-looking and demanded perfection in the present. But the success of this paradigm is a matter of fact: 

“In its fiscal year ending in September 2011, Apple Inc. hit new heights financially with $108 billion in revenues (increased significantly from $65 billion in 2010) and nearly $82 billion in cash reserves. Apple achieved these results while losing market share in certain product categories. On August 20, 2012 Apple closed at a record share price of $665.15 with 936,596,000 outstanding shares it had a market capitalization of $622.98 billion. This is the highest nominal market capitalization ever reached by a publicly traded company and surpasses a record set by Microsoft in 1999.”

And remember – Apple almost went bust. Steve Jobs had been ousted from the company he co-founded in a boardroom coup in 1985.  After he left Apple floundered and Steve Jobs proved it was his paradigm that was the essential ingredient by setting up NeXT computers and then Pixar. Apple’s fortunes only recovered after 1998 when Steve Jobs was invited back. The rest is history so click to see and hear Steve Jobs describing the Apple paradigm.

So the evidence states that Toyota and Apple are doing something very different from the rest of the pack and it is not just very good product design. They are continually updating their knowledge and understanding – and they are doing this using a very different paradigm.  They are continually challenging themselves to learn. To illustrate how they do it – here is a list of the five principles that underpin Toyota’s approach:

  • Challenge
  • Improvement
  • Go and see
  • Teamwork
  • Respect

This is Win-Win-Win thinking. This is the Science of Improvement. This is Improvementology®.


So what is the reason that this proven paradigm seems so difficult to replicate? It sounds easy enough in theory! Why is it not so simple to put into practice?

The requirements are clearly listed: Respect for people (challenge). Respect for learning (improvement). Respect for reality (go and see). Respect for systems (teamwork).

In a word – Respect.

Respect is a big challenge for the individualist mindset which is fundamentally disrespectful of others. The individualist mindset underpins the I-Win-You-Lose Paradigm; the Zero-Sum-Game Paradigm; the Either-Or Paradigm; the Linear-Thinking Paradigm; the Whole-Is-The-Sum-Of-The-Parts Paradigm; the Optimise-The-Parts-To-Optimise-The-Whole Paradigm.

Unfortunately this is the current management paradigm in much of the private and public worlds and the evidence is accumulating that it is failing. It may have been adequate when times were better, but it is inadequate for our current needs and inappropriate for our future needs.


So how can we avoid having to set fire to the current failing management paradigm to force a leap into the cold and uninviting reality of impending global economic failure?  How can we harness our burning desire for survival, security and stability? How can we evolve our paradigm pro-actively and safely rather than re-actively and dangerously?

We need something tangible to hold on to that will keep us from drowning while the old I-am-OK-You-are-Not-OK Paradigm is dissolved and re-designed. Like the body of the caterpillar that is dissolved and re-assembled inside the pupa as the body of a completely different thing – a butterfly.

We need a robust  and resilient structure that will keep us safe in the transition from old to new and we also need something stable that we can steer to a secure haven on a distant shore.

We need a conceptual lifeboat. Not just some driftwood,  a bag of second-hand tools and no instructions! And we need that lifeboat now.

But why the urgency?

The answer is basic economics.

The UK population is growing and the proportion of people over 65 years old is growing faster.  Advances in healthcare mean that more of us survive age-related illnesses such as cancer and heart disease. We live longer and with better quality of life – which is great.

But this silver-lining hides a darker cloud.

The proportion of elderly and very elderly will increase over the next 20 years as the post WWII baby-boom reaches retirement age. The number of people who are living on pensions is increasing and the demands on health and social services are increasing.  Pensions and public services are not paid out of past savings; they are paid out of current earnings.  So the country will need to earn more to pay the bills. The UK economy will need to grow.

But the UK economy is not growing.  Our Gross Domestic Product (GDP) is currently about £380 billion and flat as a pancake. This sounds like a lot of dosh – but when shared out across the population of 56 million it gives a more modest figure of just over £100 per person per week.  And the time-series chart for the last 20 years shows that the past growth of about 1% per quarter took a big dive in 2008 and went negative! That means serious recession. It recovered briefly but is now sagging towards zero.
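As a back-of-envelope check of that arithmetic (a sketch only – it treats the quoted £380 billion as an annual figure and uses the quoted population of 56 million):

```python
# Rough per-person arithmetic using the figures quoted above.
gdp = 380e9            # pounds per year (assumption: annual figure)
population = 56e6      # people (figure quoted in the text)
weeks_per_year = 52

per_person_per_week = gdp / population / weeks_per_year
print(f"roughly {per_person_per_week:.0f} pounds per person per week")
```

On those assumptions the figure comes out at roughly £130 per person per week – the same order of magnitude as the sum quoted, and still a modest amount from which everything the country consumes has to be paid.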

So we are heading for a big economic crunch and hiding our heads in the sand and hoping for the best is not a rational strategy. The only way to survive is to cut public services or for tax-funded services to become more productive. And more productive means increasing the volume of goods and services for the same cost. These are the services that we will need to support the growing population of  dependents but without increasing the cost to the country – which means the taxpayer.

The success of Toyota and Apple stemmed from learning how to do just that: how to design and deliver what is needed; and how to eliminate what is not; and how to wisely re-invest the released cash. The difference can translate into higher profit, or into growth, or into more productivity. It just depends on the context.  Toyota and Apple went for profit and growth. Tax-funded public services will need to opt for productivity. 

And the learning-productivity-improvement-by-design paradigm will be a critical-to-survival factor in tax-payer funded public services such as the NHS and Social Care.  We do not have a choice if we want to maintain what we take for granted now.  We have to proactively evolve our out-of-date public sector management paradigm. We have to evolve it into one that can support dramatic growth in productivity without sacrificing quality and safety.

We cannot use the burning platform approach. And we have to act with urgency.

We need a lifeboat!

Our current public sector management paradigm is sinking fast and is being defended and propped up by the old school managers who were brought up in it.  Unfortunately the evidence of 500 years of change says that the old school cannot unlearn. Their mental models go too deep.  The captains and their crews will go down with their ships.  [Remember the Titanic – the ‘unsinkable’ ship that sank in 1912 on her maiden voyage. That was a victory of reality over rhetoric.]

Those of us who want to survive are the ‘rats’. We know when it is time to leave the sinking ship.  We know we need lifeboats because it could be a long swim! We do not want to freeze and drown during the transition to the new paradigm.

So where are the lifeboats?

One possibility is an unfamiliar looking boat called “6M Design”. This boat looks odd when viewed through the lens of the conventional management paradigm because it combines three apparently contradictory things: the rational-logical elements of system design; the respect-for-people and learning-through-challenge principles embodied by Toyota and Apple; and the counter-intuitive technique of systems thinking.

Another reason it feels odd is because “6M Design” is not a solution; it is a meta-solution. 6M Design is a way of creating a good-enough-for-now solution by changing the current paradigm a bit at a time. It is a how-to-design framework; it is not a what-to-do solution. 6M Design is a paradigm shaper – not a paradigm shaker or a paradigm shifter.

And there is yet another reason why 6M Design does not float the current management boat.  It does not need to be controlled by self-appointed experts.  Business schools and management consultants, who have a vested interest in defending the current management paradigm, cannot make a quick buck from it because it renders them irrelevant. 6M Design is intended to be used by anyone and everyone as a common language for collectively engaging in respectful challenge and lifelong learning. Anyone can learn to use it. Anyone.

We do not need a crisis to change. But without changing we will get the crisis we do not want. If we choose to change then we can choose a safer and smoother path of change.

The choice seems clear.  Do you want to go down with the ship or stay afloat aboard an innovation boat?

And we will need something to help us navigate our boat.

If you are a reflective, conceptual learner then you might like to read a synopsis of Thomas Kuhn’s book.  You can download a copy here. [There is also a 50 year anniversary edition of the original that was published this year].

And if you prefer learning from stories then there is an excellent one called “Who Moved My Cheese” that describes the same challenge of change. And with the power of the digital paradigm you can watch the video here.


Defusing Trust Eroders – Part III

<Bing Bong>

Leslie’s computer heralded the arrival of yet another email!  They were coming in faster and faster – now that the word had got out on the grapevine about Improvementology.

Leslie glanced at the sender. It was from Bob. That was a surprise. Bob had never emailed out-of-the-blue before.  Leslie was too impatient to wait until later to read the email.

<Dear Leslie, could I trouble you to ask your advice on something. It is not urgent.  A ten minute chat on the phone would be all I need. If that is OK please let me know a good time and I will ring you. Bob>

Leslie was consumed with curiosity. What could Bob possibly want advice on? It was Leslie who sought advice from Bob – not the other way around.

Leslie could not wait and emailed back immediately that it was OK to talk now.

<Ring Ring>

Hello Bob, what a pleasant surprise! I am very curious to know what you need my advice about.

? Thank you Leslie.  What I would like your counsel on is how to engage in learning the science of improvement.

Wow!  That is a surprising question. I am really confused now. You helped me to learn this new thinking and now you are asking me to teach you?

? Yes. On the surface it seems counter-intuitive. It is a genuine request though. I need to learn and understand what works for you and what does not.

OK. I think I am getting an idea of what you are asking.  But I am only just getting to grips with the basics. I do not know how to engage others yet and I certainly would not be able to teach anyone!

? I must apologise. I was not clear in my request. I need to understand how you engaged yourself in learning. I only provided the germ of the idea – it was you who added what was needed for it to develop into something tangible and valuable for you.  I need to understand how that happened.

Ahhhh! I see what you mean. Yes. Let me think. Would it help if I describe my current mental metaphor?

? That sounds like an excellent plan.

OK. Well your phrase ‘germ of an idea’ was a trigger. I see the science of improvement as a seed of information that grows into a sturdy tree of understanding.  Just like the ‘tiny acorn into the mighty oak’ concept.  Using that seed-to-tree metaphor helped me to appreciate that the seed is necessary but it is not sufficient. There are other things that are needed too. Soil, water, air, sunlight, and protection from hazards and predators.

I then realised that the seed-to-tree metaphor goes deeper.  One insight was that the first few leaves are critical to success – because they provide the ongoing energy and food to support the growth of more leaves, and the twigs, branches, trunk, and roots that support the leaves and supply them with water and nutrients.  I see the tree as a synergistic system that has a common purpose: to become big enough and stable enough to be able to survive the inevitable ups-and-downs of reality. To weather the winter storms and survive the summer droughts.

It seemed to me that the first leaf needed to be labelled ‘safety’ because in our industry if we damage our customers or our staff we do not get a second chance!  The next leaf to grow is labelled ‘quality’ and that means quality-by-design.  Doing the right thing and doing it right first time without needing inspection-and-correction. The safety and quality leaves provide the resources needed to grow the next leaf which I labelled ‘delivery’.  Getting the work done in time, on time, every time.  Together these three leaves support the growth of the fourth – ‘economy’ – which means using only what is necessary and also having just enough reserve to ride over the inevitable rocks and ruts in the road of reality.

I then reflected on what the water and the sunshine would represent when applying improvement science in the real world.

It occurred to me that the water in the tree is like money in a real system.  It is required for both growth and health; it must flow to where it is needed, when it is needed and as much as needed. Too little will prevent growth, and too much water at the wrong time and wrong place is just as unhealthy.  I did some reading about the biology of trees and I learned that the water is pulled up the tree! The ‘suck’ is created by the water evaporating from the leaves. The plant does not have a committee that decides where the available water should go! It is a simple self-adjusting system.  

The sunshine for the tree is like feedback for people. In a plant the sun’s energy provides the motive force for the whole system.  In our organisations we call it motivation and the feedback loop is critical to success. Keeping people in the dark about what is required and how they are doing is demotivating.  Healthy organisations are feedback-fuelled!

? Yes. I see the picture in my mind clearly. That is a powerful metaphor. How did it help overcome the natural resistance to change?

Well using the 6M Design method and taking the ‘sturdy tree of understanding’ as the objective of the seed-to-tree process I then considered what the possible ways it could fail – the failure modes and effects analysis method that you taught me.

? OK. Yes I see how that approach would help – approaching the problem from the far side of the invisible barrier. What insights did that lead to?

Well it highlighted that just having enough water and enough sunshine was not sufficient – it had to be clean water and the right sort of sunshine.  The quality is as critical as the quantity. A toxic environment will kill tender new shoots of improvement long before they can get established.  Cynicism is like cyanide! Non-specific cost cutting is like blindly wielding a pair of sharp secateurs. Ignoring the competition from wasteful weeds and political predators is a guaranteed recipe-for-failure too.

This metaphor really helped because it allowed me to draw up a checklist of necessary conditions for successful growth of knowledge and understanding.  Rather like the shopping list that a gardener might have. Viable seeds, fertile soil, clean water, enough sunlight, and protection from threats and hazards, especially in the early stages. And patience. Growing from seed takes time. Not all seeds will germinate. Not all seeds can thrive in the context our gardener is able to create.  And the harsher the elements the fewer the types of seed that have any chance of survival. The conditions select the successful seeds. Deserts select plants that hoard water so the desert remains a desert. If money is too tight the miserly will thrive at the expense of the charitable – and money remains hoarded and fought over as the organisation withers. And the timing is crucial – the seeds need to be planted at the right time in the cycle of change.  Too early and they cannot germinate, too late and they do not have time to become strong enough to survive in the real world.

? Yes. I see. The deeper you dig into your seeds-to-trees metaphor the more insightful it becomes.

Bob, you just said something really profound then that has unlocked something for me.

? Did I? What was it?

You said ‘seeds-to-trees’.  Up until you said that I was unconsciously limiting myself to one-seed-to-one-tree. Of course! If it works for the individual it can work for the collective.  Woods and forests are collectives. The best example I can think of is a tropical rainforest.  With ample water and sunshine the plant-collective creates a synergistic system that has endured millions of years of global climate change. And one of the striking features of the tropical rainforest is the diversity of species. It is as if that diversity is an important part of the design. Competition is ever present though – all the trees compete for sunlight – but it is healthy competition. Trees do not succeed individually by hunting each other down. And the diversity seems to be an important component of healthy competition too. It is as if they are in a shared race to the sun and their differences are an asset rather than a liability. If all the trees were the same the forest would be at greater risk of all making the same biological blunder and suddenly becoming extinct if their environment changes unpredictably.  Uniformity only seems to work in harsh conditions.

? That is a profound observation Leslie. I had not consciously made that distinction.

So have I answered your question? Have I helped you? It has certainly helped me to be asked to put my thoughts into words. I see it more clearly now too.

? Yes. You are a good teacher. I believe others will resonate with your seeds-to-trees metaphor just as I have.

Thank you Bob. I believe I am beginning to understand something you said in a previous conversation – “the teacher is the person who learns the most”.  I am going to test our seeds-to-trees metaphor on the real world! And I will feedback what I learn – because in doing that I will amplify and clarify my own learning.

? Thank you Leslie. I look forward to learning with you.




Defusing Trust Eroders – Part I

<Beep><Beep>

Bob heard the beep and looked at his phone. There was a text message from Leslie, one of his Improvementology mentees.

It said:

Hi Bob, Do you have time to help me with a behaviour barrier that I keep hitting and cannot see a way around?

Bob thumbed his reply:

?Yes. I am free at the moment – please feel free to call.

<Ring><Ring>

?Hello Leslie. How can I help?

Hi Bob. I really hope you can help me with this recurring Niggle. I have looked through my Foundation notes and I cannot see where it is described and it does not seem to be a Nerve Curve problem.

?I will do my best. Can you outline the context or give me an example?

It is easier to give you an example.  This week I was working with a team in my organisation who approached me to help them with recurring niggles in their process. I went to see for myself and I mapped their process and identified where their niggles were and what was driving them.  That was the easy bit.  But when I started to make suggestions of what they could do to resolve their problems they started to give me a hard time and kept saying ‘Yes, but …’.  It was as if they were asking for help but did not really want it.  They kept emphasising that all their problems were caused by other people outside their department and kept asking me what I could do about it. I felt as if they were pushing the problem onto me and I was also feeling guilty for not being able to sort it out for them.

There was a pause. Then Bob said.

?You are correct Leslie. This is not a Nerve Curve issue.  It is a different people-related system issue. It is ubiquitous and it is a potentially deadly organisational disease. We call it Trust Eroding Behaviour.

That sounds exactly how it felt for me. I went to help in good faith and quickly started to feel distrustful of their motives. It was not a good feeling and I do not know if I want to go back. One part of me says ‘It is your duty – you have made a commitment’ and another part of me says ‘Stop – you are being suckered.’  What is happening?

?Do you remember that the Improvement Science framework has three parts – Processes, People and Systems?

Yes.

?OK. This is part of the People component and it is similar to but different from the Nerve Curve.  The Nerve Curve is a hard-wired emotional response to any change. The Fright, Fight, Flight response. It is just the way we are and it is not ‘correctable’. This is different. This is a learned behaviour.  Which means it can be unlearned.

Unlearned? That is not a concept that I am familiar with. Can you explain? Is it the same as forgetting?

?Forgetting means that you cannot bring something to conscious awareness.  Unlearning is different – it operates at a deeper psychological and emotional level.  Have you ever tried to change a bad habit?

Yes I have. I used to smoke which is definitely a bad habit and I managed to give up but it was really tough.

?What you did was to unlearn the smoking habit.  You did not forget about smoking.  You could not because you are repeatedly reminded by other people who still indulge in the habit.

Ah ha! I see what you mean. Yes – after I kicked the habit I became a bit of a Stop-Smoking evangelist. I even had a tee shirt. It did not seem to make much impact on the still-smokers though.  If anything it seemed to make them more determined to keep doing it – just to spite me!

?Yes. What you describe is what many people report. It is part of the same learned behaviour pattern. The habit that is causing the issue is rather like smoking because it causes short-term pleasure and long-term pain. It is both attractive and destructive.  The behaviour feels good briefly but it is toxic to trust, which is why we call it Trust Eroding Behaviour.

What is the habit? I do not recognise the behaviour that you are referring to.

?The habit is called discounting.  The reason we are not aware of it is we do it unconsciously. 

What is it that we do?

?It is easier to give you some examples.  How do you feel when all the feedback you get is silence? How do you feel when someone complains that their mistake was not their fault? How do you feel when you try to help but you hit invisible barriers that block your progress?

Ouch! Those are uncomfortable questions. When I get no feedback I feel anxious and even fearful that I have made a mistake, and no one is telling me, and a nasty surprise is on its way. When someone keeps complaining that even though they made the mistake they are not to blame I feel angry. When I try to help others and fail I feel sad because my reputation, credibility and self-confidence are damaged.

?OK. Do not panic. These negative emotional reactions are the normal response to discounting behaviour.  Another word for discounting is disrespect. The three primary emotions we feel are fear, anger and sadness. Fear is the sense of impending loss; anger is the sense of present loss; and sadness is the sense of past loss.  They are the same emotions that we feel on the Nerve Curve.  What is different is the cause. Discounting is a learned disrespectful behaviour.

Oooo! That really resonates with me. Just reflecting on one day at work I can think of lots of examples of all of those negative feelings. So when do we learn this discounting habit?

?It is believed that we learn this behaviour when we are very young – before the age of seven.  And because we learn it so young we internalise it and we become unaware of it.  It then becomes a habit that is reinforced with years of practice.

Wow! That rings true for me – and it may explain why I actively avoided some people at school – they were just toxic.  But they had friends, went to college, got jobs, married and started families – just like me. Does that mean we grow out of it?

?Most people unlearn some of these behavioural habits because life-experience teaches them that they are counter-productive. We all carry some of them though and they tend to emerge when we are tired and under pressure. Some people get sort of stuck and carry these behaviours into their adult life. Their behaviour can be toxic to organisations.

I definitely resonate with that statement! Is there a way to unlearn this discounting habit?

?Yes – just becoming aware of its existence is the first step. There are some strategies that we can learn, practise and use to defuse the discounting behaviour and over time our bad habit can be kicked.

Wow! That sounds really useful.  And not just at work – I can see benefits in other areas of my life too.

?Yes. Improvement science is powerful medicine.

So what do I need to do?

?You have learned the 6M Design framework for resolving process niggles. There is an equivalent one for dissolving people niggles.  I will send you some material to read and then we can talk again.

Will it help me resolve the problem that I have with the department that asked for my help who are behaving like Victims?

?Yes.

OK – please send me the material. I promise to read it, reflect on it and I will arrange another conversation. I cannot wait to learn how to nail this niggle! I can see a huge win-win-win opportunity here.

?OK. The material is on its way. I look forward to our next conversation.




The Six Dice Game

<Ring Ring><Ring Ring>

?Hello, you are through to the Improvement Science Helpline. How can we help?

This is Leslie, one of your FISH apprentices.  Could I speak to Bob – my ISP coach?

?Yes, Bob is free. I will connect you now.

<Ring Ring><Ring Ring>

?Hello Leslie, Bob here. How can I help?

Hi Bob, I have a problem that I do not feel my Foundation training has equipped me to solve. Can I talk it through with you?

?Of course. Can you outline the context for me?

Yes. The context is a department that is delivering an acceptable quality-of-service and is delivering on-time but is failing financially. As you know we are all being forced to adopt austerity measures and I am concerned that if their budget is cut then they will fail on delivery and may start cutting corners and then fail on quality too.  We need a win-win-win outcome and I do not know where to start with this one.

?OK – are you using the 6M Design method?

Yes – of course!

?OK – have you done The 4N Chart for the customer of their service?

Yes – it was their customers who asked me if I could help and that is what I used to get the context.

?OK – have you done The 4N Chart for the department?

Yes. And that is where my major concerns come from. They feel under extreme pressure; they feel they are working flat out just to maintain the current level of quality and on-time delivery; they feel undervalued and frustrated that their requests for more resources are refused; they feel demoralised, demotivated and scared that their service may be ‘outsourced’. On the positive side they feel that they work well as a team and are willing to learn. I do not know what to do next.

?OK. Do not panic. This sounds like a very common and treatable system illness.  It is a stream design problem which may be the reason your Foundation training feels insufficient. Would you like to see how a Practitioner would approach this?

Yes please!

?OK. Have you mapped their internal process?

Yes. It is a six-step process for each job. Each step has different requirements and is done by different people with different skills. In the past they had a problem with poor service quality so extra safety and quality checks were imposed by the Governance department.  Now the quality of each step is measured on a 1-6 scale and the quality of the whole process is the sum of the individual steps, so is measured on a scale of 6 to 36. They have now been given a minimum quality target of 21 to achieve for every job. How they achieve that is not specified – it was left up to them.

?OK – do they record their quality measurement data?

Yes – I have their report.

?OK – how is the information presented?

As an average for the previous month which is reported up to the Quality Performance Committee.

?OK – what was the average for last month?

Their average was 24 – so they do not have an issue delivering the required quality. The problem is the costs they are incurring and they are being labelled by others as ‘inefficient’. Especially by the departments who are in budget and are annoyed that this department keeps getting ‘bailed out’.

?OK. One issue here is the quality reporting process is not alerting you to the real issue. It sounds from what you say that you have fallen into the Flaw of Averages trap.

I don’t understand. What is the Flaw of Averages trap?

?The answer to your question will become clear. The finance issue is a symptom – an effect – it is unlikely to be the cause. When did this finance issue appear?

Just after the Safety and Quality Review. They needed to employ more agency staff to do the extra work created by having to meet the new Minimum Quality target.

?OK. I need to ask you a personal question. Do you believe that improving quality always costs more?

I have to say that I am coming to that conclusion. Our Governance and Finance departments are always arguing about it. Governance states ‘a minimum standard of safety and quality is not optional’ and Finance says ‘but we are going out of business’. They are at loggerheads. The departments get caught in the cross-fire.

?OK. We will need to use reality to demonstrate that this belief is incorrect. Rhetoric alone does not work. If it did then we would not be having this conversation. Do you have the raw data from which the averages are calculated?

Yes. We have the data. The quality inspectors are very thorough!

?OK – can you plot the quality scores for the last fifty jobs as a BaseLine chart?

Yes – give me a second. The average is 24 as I said.

?OK – is the process stable?

Yes – there is only one flag for the fifty. I know from my FISH training that is not a cause for alarm.

?OK – what is the process capability?

I am sorry – I don’t know what you mean by that?

?My apologies. I forgot that you have not completed the Practitioner training yet. The capability is the range between the red lines on the chart.

Um – the lower line is at 17 and the upper line is at 31.

?OK – how many points lie below the target of 21.

None of course. They are meeting their Minimum Quality target. The issue is not quality – it is money.

There was a pause.  Leslie knew from experience that when Bob paused there was a surprise coming.

?Can you email me your chart?

A cold-shiver went down Leslie’s back. What was the problem here? Bob had never asked to see the data before.

Sure. I will send it now.  The recent fifty is on the right; the data on the left is from after the quality inspectors went in and before the Minimum Quality target was imposed. This is the chart that Governance has been using as evidence to justify their existence, because they are claiming the credit for improving the quality.

?OK – thanks. I have got it – let me see.  Oh dear.

Leslie was shocked. She had never heard Bob use language like ‘Oh dear’.

There was another pause.

?Leslie, what is the context for this data? What does the X-axis represent?

Leslie looked at the chart again – more closely this time. Then she saw what Bob was getting at. There were fifty points in the first group, and about the same number in the second group. That was not the interesting part. In the first group the X-axis went up to 50 in regular steps of five; in the second group it went from 50 to just over 149 and was no longer regularly spaced. Eventually she replied.

Bob, that is a really good question. My guess is that it is the quality of the completed work.

?It is unwise to guess. It is better to go and see reality.

You are right. I knew that. It is drummed into us during the Foundation training! I will go and ask. Can I call you back?

?Of course. I will email you my direct number.


[reveal heading=”Click here to read the rest of the story“]


<Ring Ring><Ring Ring>

?Hello, Bob here.

Bob – it is Leslie. I am so excited! I have discovered something amazing.

?Hello Leslie. That is good to hear. Can you tell me what you have discovered?

I have discovered that better quality does not always cost more.

?That is a good discovery. Can you prove it with data?

Yes I can!  I am emailing you the chart now.

?OK – I am looking at your chart. Can you explain to me what you have discovered?

Yes. When I went to see for myself I saw that when a job failed the Minimum Quality check at the end then the whole job had to be re-done because there was no time to investigate and correct the causes of the failure.  The people doing the work said that they were helpless victims of errors that were made upstream of them – and they could not predict from one job to the next what the error would be. They said it felt like quality was a lottery and that they were just firefighting all the time. They knew that just repeating the work was not solving the problem but they had no other choice because they were under enormous pressure to deliver on-time as well. The only solution they could see was to get more resources but their requests were being refused by Finance on the grounds that there is no more money. They felt completely trapped.

?OK. Can you describe what you did?

Yes. I saw immediately that there were so many sources of errors that it would be impossible for me to tackle them all. So I used the tool that I had learned in the Foundation training: the Niggle-o-Gram. That focussed us and led to a surprisingly simple, quick, zero-cost process design change. We deliberately did not remove the Inspection-and-Correction policy because we needed to know what the impact of the change would be. Oh, and we did one other thing that challenged the current methods. We plotted both the successes and the failures on the BaseLine chart so we could see both the quality and the work done on one chart.  And we updated the chart every day and posted it on the notice board so everyone in the department could see the effect of the change that they had designed. It worked like magic! They have already slashed their agency staff costs, the whole department feels calmer and they are still delivering on-time. And best of all they now feel that they have the energy and time to start looking at the next niggle. Thank you so much! Now I see how the tools and techniques I learned in FISH school are so powerful and now I understand better the reason we learned them first.

?Well done Leslie. You have taken an important step to becoming a fully fledged Improvement Science Practitioner. There are many more but you have learned some critical lessons in this challenge.


This scenario is fictional but realistic.

And it has been designed so that it can be replicated easily using a simple game that requires only pencil, paper and some dice.

If you do not have some dice handy then you can use this little program that simulates rolling six dice.

The Six Digital Dice program (for PC only).

Instructions
1. Prepare a piece of A4 squared paper with the Y-axis marked from zero to 40 and the X-axis from 1 to 80.
2. Roll six dice and record the score on each (or one die six times) – then calculate the total.
3. Plot the total on your graph. Left-to-right in time order. Link the dots with lines.
4. After 25 dots look at the chart. It should resemble the leftmost data in the charts above.
5. Now draw a horizontal line at 21. This is the Minimum Quality Target.
6. Keep rolling the dice – six per cycle, adding the totals to the right of your previous data.

But this time if the total is less than 21 then repeat the cycle of six dice rolls until the score is 21 or more. Record on your chart the output of all the cycles – not just the acceptable ones.

7. Keep going until you have 25 acceptable outcomes. As long as it takes.

Now count how many cycles you needed to complete in order to get 25 acceptable outcomes.  You should find that it is about twice as many as before you “imposed” the Inspect-and-Correct QI policy.
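If rolling dice by hand is too slow, the same experiment can be simulated. Here is a minimal Python sketch (not part of the original game – the run of 10,000 acceptable outcomes is an arbitrary choice, large enough for the ratio to settle):

```python
import random

def roll_cycle(rng):
    """One cycle of the game: roll six dice and return the total (6 to 36)."""
    return sum(rng.randint(1, 6) for _ in range(6))

def run_game(n_acceptable, target=21, seed=42):
    """Apply the Inspect-and-Correct policy: repeat cycles until enough
    totals meet the Minimum Quality target, recording every cycle."""
    rng = random.Random(seed)
    totals = []
    accepted = 0
    while accepted < n_acceptable:
        total = roll_cycle(rng)
        totals.append(total)      # record ALL cycles, not just acceptable ones
        if total >= target:
            accepted += 1
    return totals

totals = run_game(10000)
print(f"cycles per acceptable outcome: {len(totals) / 10000:.2f}")
```

The total of six dice is distributed symmetrically around 21, so only a little over half of all cycles meet the target, and on average just under two cycles are needed for each acceptable outcome.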

This illustrates the problem of an Inspection-and-Correction design for quality improvement.  It does improve the quality of the output – but at a higher cost.  We are treating the symptoms and ignoring the disease.

The internal design of the process is unchanged – and it is still generating mistakes.

How much quality improvement you get and how much it costs you is determined by the design of the underlying process – which has not changed. There is a Law of Diminishing Returns here – and a risk.

The risk is that if quality improves as the result of applying a quality target then it encourages the Governance thumbscrews to be tightened further and forces the people further into cross-fire between Governance and Finance.

The other negative consequence of the Inspection-and-Correction approach is that it increases both the average and the variation in lead time, which in turn fuels the calls for more targets, more sticks and more resources, and pushes costs up even further.

The lesson from this simple reality check seems clear.

The better strategy for improving quality is to design the root causes of errors out of the processes because then we will get improved quality and improved delivery and improved productivity and we will discover that we have improved safety as well.

The Six Dice Game is a simpler version of the famous Red Bead Game that W Edwards Deming used to explain why the arbitrary-target-driven-stick-and-carrot style of management creates more problems than it solves.

The illusion of short-term gain but the reality of long-term pain.

And if you would like to see and hear Deming talking about the science of improvement there is a video of him speaking in 1984. He is at the bottom of the page.  Click here.

[/reveal]

The Three R’s

Processes are like people – they get poorly – sometimes very poorly.

Poorly processes present with symptoms. Symptoms such as criticism, complaints, and even catastrophes.

Poorly processes show signs. Signs such as fear, queues and deficits.

So when a process gets very poorly what do we do?

We follow the Three R’s

1-Resuscitate
2-Review
3-Repair

Resuscitate means to stabilize the process so that it is not getting sicker.

Review means to quickly and accurately diagnose the root cause of the process sickness.

Repair means to make changes that will return the process to a healthy and stable state.

So the concept of ‘stability’ is fundamental and we need to understand what that means in practice.

Stability means ‘predictable within limits’. It is not the same as ‘constant’. Constant is stable but stable is not necessarily constant.

Predictable implies time – so any measure of process health must be presented as time-series data.

We are now getting close to a working definition of stability: “a useful metric of system performance that is predictable within limits over time”.
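For illustration, one common way to turn ‘predictable within limits’ into a calculation is the XmR (individuals) chart, where the limits are the mean plus or minus 2.66 times the average moving range. This is an assumption on my part – the text does not prescribe a particular method – but it is a widely used one:

```python
def xmr_limits(data):
    """Natural process limits for an XmR (individuals) chart.

    A point outside these limits signals that the process is not
    'predictable within limits', i.e. not stable."""
    mean = sum(data) / len(data)
    # average absolute difference between consecutive points
    avg_mr = sum(abs(a - b) for a, b in zip(data, data[1:])) / (len(data) - 1)
    spread = 2.66 * avg_mr   # 2.66 = 3 / d2, with d2 = 1.128 for n = 2
    return mean - spread, mean + spread

# hypothetical daily lead-time measurements (hours)
lead_times = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
lower, upper = xmr_limits(lead_times)
print(f"stable if every point lies between {lower:.1f} and {upper:.1f}")
```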

So what is a ‘useful metric’?

There will be at least three useful metrics for every system: a quality metric, a time metric and a money metric.

Quality is subjective. Money is objective. Time is both.

Time is the one to start with – because it is the easiest to measure.

And if we treat our system as a ‘black box’ then from the outside there are three inter-dependent time-related metrics. These are external process metrics (EPMs) – sometimes called Key Performance Indicators (KPIs).

Flow in – also called demand
Flow out – also called activity
Delivery time – which is the time a task spends inside our system – also called the lead time.

But this is all starting to sound like rather dry, conceptual, academic mumbo-jumbo … so let us add a bit of realism and drama – let us tell this as a story …

[reveal heading=”Click here to reveal the story …“] 


Picture yourself as the manager of a service that is poorly. Very poorly. You are getting a constant barrage of criticism and complaints and the occasional catastrophe. Your service is struggling to meet the required delivery time performance. Your service is struggling to stay in budget – let alone meet future cost improvement targets. Your life is a constant fire-fight and you are getting very tired and depressed. Nothing you try seems to make any difference. You are starting to think that anything is better than this – even unemployment! But you have a family to support and jobs are hard to come by in austere times so jumping is not an option. There is no way out. You feel you are going under. You feel you are drowning. You feel terrified and helpless!

In desperation you type “Management fire-fighting” into your web search box and among the list of hits you see “Process Improvement Emergency Service”.  That looks hopeful. The link takes you to a website and a phone number. What have you got to lose? You dial the number.

It rings twice and a calm voice answers.

?“You are through to the Process Improvement Emergency Service – what is the nature of the process emergency?”

“Um – my service feels like it is on fire and I am drowning!”

The calm voice continues in a reassuring tone.

?“OK. Have you got a minute to answer three questions?”

“Yes – just about”.

?“OK. First question: Is your service safe?”

“Yes – for now. We have had some catastrophes but have put in lots of extra safety policies and checks which seems to be working. But they are creating a lot of extra work and pushing up our costs and even then we still have lots of criticism and complaints.”

?“OK. Second question: Is your service financially viable?”

“Yes, but not for long. Last year we just broke even, this year we are projecting a big deficit. The cost of maintaining safety is ‘killing’ us.”

?“OK. Third question: Is your service delivering on time?”

“Mostly but not all of the time, and that is what is causing us the most pain. We keep getting beaten up for missing our targets.  We constantly ask, argue and plead for more capacity and all we get back is ‘that is your problem and your job to fix – there is no more money’. The system feels chaotic. There seems to be no rhyme or reason to when we have a good day or a bad day. All we can hope to do is to spot the jobs that are about to slip through the net in time; to expedite them; and to just avoid failing the target. We are fire-fighting all of the time and it is not getting better. In fact it feels like it is getting worse. And no one seems to be able to do anything other than blame each other.”

There is a short pause then the calm voice continues.

?“OK. Do not panic. We can help – and you need to do exactly what we say to put the fire out. Are you willing to do that?”

“I do not have any other options! That is why I am calling.”

The calm voice replied without hesitation. 

?“We always have the option of walking away from the fire. We all need to be prepared to exercise that option at any time. For me to be able to help, you will need to understand that and you will need to commit to tackling the fire. Are you willing to commit to that?”

You are surprised and strangely reassured by the clarity and confidence of this response and you take a moment to compose yourself.

“I see. Yes, I agree that I do not need to get toasted personally and I understand that you cannot parachute in to rescue me. I do not want to run away from my responsibility – I will tackle the fire.”

?“OK. First we need to know how stable your process is on the delivery time dimension. Do you have historical data on demand, activity and delivery time?”

“Hey! Data is one thing I do have – I am drowning in the stuff! RAG charts that blink at me like evil demons! None of it seems to help though – the more data I get sent the more confused I become!”

?“OK. Do not panic.  The data you need is very specific. We need the start and finish events for the most recent one hundred completed jobs. Do you have that?”

“Yes – I have it right here on a spreadsheet – do I send the data to you to analyse?”

?“There is no need to do that. I will talk you through how to do it.”

“You mean I can do it now?”

?“Yes – it will only take a few minutes.”

“OK, I am ready – I have the spreadsheet open – what do I do?”

?“Step 1. Arrange the start and finish events into two columns with a start and finish event for each task on each row.”

You copy and paste the data you need into a new worksheet. 

“OK – done that”.

?“Step 2. Sort the two columns into ascending order using the start event.”

“OK – that is easy”.

?“Step 3. Create a third column and for each row calculate the difference between the start and the finish event for that task. Please label it ‘Lead Time’”.

“OK – do you want me to calculate the average Lead Time next?”

There was a pause. Then the calm voice continued but with a slight tinge of irritation.

?“That will not help. First we need to see if your system is unstable. We need to avoid the Flaw of Averages trap. Please follow the instructions exactly. Are you OK with that?”

This response was a surprise and you are starting to feel a bit confused.    

“Yes – sorry. What is the next step?”

?“Step 4: Plot a graph. Put the Lead Time on the vertical axis and the start time on the horizontal axis”.

“OK – done that.”

?“Step 5: Please describe what you see.”

“Um – it looks to me like a cave full of stalactites. The top is almost flat, there are some spikes, but the bottom is all jagged.”

?“OK. Step 6: Does the pattern on the left-side and on the right-side look similar?”

“Yes – it does not seem to be rising or falling over time. Do you want me to plot the smoothed average over time or a trend line? They are options on the spreadsheet software. I use them all the time!”

The calm voice paused then continued with the irritated overtone again.

?“No. There is no value in doing that. Please stay with me here. A linear regression line is meaningless on a time series chart. You may be feeling a bit confused. It is common to feel confused at this point but the fog will clear soon. Are you OK to continue?”
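
(An aside for readers following along at a keyboard rather than in a spreadsheet: Steps 1 to 5 can be sketched in a few lines of Python. The start and finish events below are invented for illustration; real data would be the last one hundred completed tasks.)

```python
# A sketch of Steps 1-5 using made-up event data (times in days).
import random

random.seed(1)

# Step 1: a start and finish event for each of 100 completed tasks
starts = [t + random.random() for t in range(100)]
finishes = [s + random.uniform(1, 42) for s in starts]
tasks = list(zip(starts, finishes))

# Step 2: sort into ascending order by the start event
tasks.sort(key=lambda task: task[0])

# Step 3: Lead Time = finish event minus start event for each task
lead_times = [finish - start for start, finish in tasks]

# Step 4 would plot lead_times (vertical) against start times
# (horizontal); Step 5: the 'cave roof' is the flat top of that plot
print(f"Cave roof (longest lead time): {max(lead_times):.1f} days")
```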

An odd feeling starts to grow in you: a mixture of anger, sadness and excitement. You find yourself muttering “But I spent my own hard-earned cash on that expensive MBA where I learned how to do linear regression and data smoothing because I was told it would be good for my career progression!”

?“I am sorry I did not catch that? Could you repeat it for me?”

“Um – sorry. I was talking to myself. Can we proceed to the next step?”

?“OK. From what you say it sounds as if your process is stable – for now. That is good.  It means that you do not need to Resuscitate your process and we can move to the Review phase and start to look for the cause of the pain. Are you OK to continue?”

An uncomfortable feeling is starting to form – one that you cannot quite put your finger on.

“Yes – please”. 

?“Step 7: What is the value of the Lead Time at the ‘cave roof’?”

“Um – about 42”

?“OK – Step 8: What is your delivery time target?”

“42”

?“OK – Step 9: How is your delivery time performance measured?”

“By the percentage of tasks that are delivered on time each month. Our target is better than 95%. If we fail any month then we are named-and-shamed at the monthly performance review meeting and we have to explain why and what we are going to do about it. If we succeed then we are spared the ritual humiliation and we are rewarded by watching someone else being mauled instead. There is always someone in the firing line and attendance at the meeting is not optional!”

You also wanted to say that the data you submit is not always completely accurate and that you often expedite tasks just to avoid missing the target – in full knowledge that the work had not been completed to the required standard. But you hold that back. Someone might be listening.

There was a pause. Then the calm voice continued with no hint of surprise. 

?“OK. Step 10. The most likely diagnosis here is a DRAT. You have probably developed a Gaussian Horn that is creating the emotional pain and that is fuelling the fire-fighting. Do not panic. This is a common and curable process illness.”

You look at the clock. The conversation has taken only a few minutes. Your feeling of panic is starting to fade and a sense of relief and curiosity is growing. Who are these people?

“Can you tell me more about a DRAT? I am not familiar with that term.”

?“Yes.  Do you have two minutes to continue the conversation?”

“Yes indeed! You have my complete attention for as long as you need. The emails can wait.”

The calm voice continues.

?“OK. I may need to put you on hold or call you back if another emergency call comes in. Are you OK with that?”

“You mean I am not the only person feeling like this?”

?“You are not the only person feeling like this. The process improvement emergency service, or PIES as we call it, receives dozens of calls like this every day – from organisations of every size and type.”

“Wow! And what is the outcome?”

There was a pause. Then the calm voice continued with an unmistakeable hint of pride.

?“We have a 100% success rate to date – for those who commit. You can look at our performance charts and the client feedback on the website.”

“I certainly will! So can you explain what a DRAT is?” 

And as you ask this you are thinking to yourself ‘I wonder what happened to those who did not commit?’ 

The calm voice interrupts your train of thought with a well-practiced explanation.

?“DRAT stands for Delusional Ratio and Arbitrary Target. It is a very common management reaction to unintended negative outcomes such as customer complaints. The concept of metric-ratios-and-performance-specifications is not wrong; it is just applied indiscriminately. Using DRATs can drive short-term improvements but over a longer time-scale they always make the problem worse.”

One thought is now reverberating in your mind. “I knew that! I just could not explain why I felt so uneasy about how my service was being measured.” And now you have a new feeling growing – anger.  You control the urge to swear and instead you ask:

“And what is a Horned Gaussian?”

The calm voice was expecting this question.

?“It is easier to demonstrate than to explain. Do you still have your spreadsheet open and do you know how to draw a histogram?”

“Yes – what do I need to plot?”

?“Use the Lead Time data and set up ten bins in the range 0 to 50 with equal intervals. Please describe what you see”.

It takes you only a few seconds to do this.  You draw lots of histograms – most of them very colourful but meaningless. No one seems to mind though.

“OK. The histogram shows a sort of heap with a big spike on the right hand side – at 42.”

The calm voice continued – this time with a sense of satisfaction.

?“OK. You are looking at the Horned Gaussian. The hump is the Gaussian and the spike is the Horn. It is a sign that your complex adaptive system behaviour is being distorted by the DRAT. It is the Horn that causes the pain and the perpetual fire-fighting. It is the DRAT that causes the Horn.”
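
(For readers reproducing this outside a spreadsheet, here is a hypothetical Python sketch of the ten-bin histogram. The lead-time data is invented: a Gaussian hump plus a deliberate pile-up just under the target of 42 – the Horn that expediting-to-target creates.)

```python
# Bin synthetic Lead Times into ten equal bins covering 0 to 50.
import random

random.seed(2)

target = 42
# 80 ordinary tasks: a Gaussian hump, clamped to the 0-49 range
lead_times = [min(max(random.gauss(25, 8), 0), 49) for _ in range(80)]
# 20 expedited tasks delivered just inside the target: the Horn
lead_times += [target - random.random() for _ in range(20)]

# Ten bins of width 5 covering 0 to 50
bins = [0] * 10
for lt in lead_times:
    bins[min(int(lt // 5), 9)] += 1

# A crude text histogram: the spike in the 40-45 bin is the Horn
for i, count in enumerate(bins):
    print(f"{i * 5:2d}-{i * 5 + 5:2d}: {'#' * count}")
```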

“Is it possible to remove the Horn and put out the fire?”

?“Yes.”

This is what you wanted to hear and you cannot help cutting to the closure question.

“Good. How long does that take and what does it involve?”

The calm voice was clearly expecting this question too.

?“The Gaussian Horn is a non-specific reaction – it is an effect – it is not the cause. To remove it and to ensure it does not come back requires treating the root cause. The DRAT is not the root cause – it is also a knee-jerk reaction to the symptoms – the complaints. Treating the cause requires learning how to diagnose the specific root cause of the lead time performance failure. There are many possible contributors to lead time and you need to know which are present, because if you get the diagnosis wrong you will make an unwise decision, take the wrong action and exacerbate the problem.”

Something goes ‘click’ in your head and suddenly your fog of confusion evaporates. It is like someone just switched a light on.

“Ah Ha! You have just explained why nothing we try seems to work for long – if at all.  How long does it take to learn how to diagnose and treat the specific root causes?”

The calm voice was expecting this question and seemed to switch to the next part of the script.

?“It depends on how committed the learner is and how much unlearning they have to do in the process. Our experience is that it takes a few hours of focussed effort over a few weeks. It is rather like learning any new skill. Guidance, practice and feedback are needed. Just about anyone can learn how to do it – but paradoxically it takes longer for the more experienced and, can I say, cynical managers. We believe they have more unlearning to do.”

You are now feeling a growing sense of urgency and excitement.

“So it is not something we can do now on the phone?”

?“No. This conversation is just the first step.”

You are eager now – sitting forward on the edge of your chair and completely focussed.

“OK. What is the next step?”

There is a pause. You sense that the calm voice is reviewing the conversation and coming to a decision.

?“Before I can answer your question I need to ask you something. I need to ask you how you are feeling.”

That was not the question you expected! You are not used to talking about your feelings – especially to a complete stranger on the phone – yet strangely you do not sense that you are being judged. You have a growing feeling of trust in the calm voice.

You pause, collect your thoughts and attempt to put your feelings into words. 

“Er – well – a mixture of feelings actually – and they changed over time. First I had a feeling of surprise that this seems so familiar and straightforward to you; then a sense of resistance to the idea that my problem is fixable; and then a sense of confusion because what you have shown me challenges everything I have been taught; and then a feeling of distrust that there must be a catch and then a feeling of fear of embarrassment if I do not spot the trick. Then when I put my natural skepticism to one side and considered the possibility as real there was a feeling of anger that I was not taught any of this before; and then a feeling of sadness for the years of wasted time and frustration from battling something I could not explain.  Eventually I started to feel that my cherished impossibility belief was being shaken to its roots. And then I felt a growing sense of curiosity, optimism and even excitement that is also tinged with a feeling of fear of disappointment and of having my hopes dashed – again.”

There was a pause – as if the calm voice was digesting this hearty meal of feelings. Then the calm voice stated:

?“You are experiencing the Nerve Curve. It is normal and expected. It is a healthy sign. It means that the healing process has already started. You are part of your system. You feel what it feels – it feels what you do. The sequence of negative feelings: the shock, denial, anger, sadness, depression and fear will subside with time and the positive feelings of confidence, curiosity and excitement will replace them. Do not worry. This is normal and it takes time. I can now suggest the next step.”

You now feel like you have just stepped off an emotional rollercoaster – scary yet exhilarating at the same time. A sense of relief sweeps over you. You have shared your private emotional pain with a stranger on the phone and the world did not end! There is hope.

“What is the next step?”

This time there was no pause.

?“To commit to learning how to diagnose and treat your process illnesses yourself.”

“You mean you do not sell me an expensive training course or send me a sharp-suited expert who will come tell me what to do and charge me a small fortune?”

There is an almost sarcastic tone to your reply that you regret as soon as you have spoken.

Another pause.  An uncomfortably long one this time. You sense the calm voice knows that you know the answer to your own question and is waiting for you to answer it yourself.

You answer your own question.  

“OK. I guess not. Sorry for that. Yes – I am definitely up for learning how! What do I need to do?”

?“Just email us. The address is on the website. We will outline the learning process. It is neither difficult nor expensive.”

The way this reply was delivered – calmly and matter-of-factly – was reassuring but it also prompted a new niggle – a flash of fear.

“How long have I got to learn this?”

This time the calm voice had an unmistakable sense of urgency that sent cold prickles down your spine.

?“Delay will add no value. You are being stalked by the Horned Gaussian. This means your system is on the edge of a catastrophe cliff. It could tip over at any time. You cannot afford to relax. You must maintain all your current defences. It is a learning-by-doing process. The sooner you start to learn-by-doing the sooner the fire starts to fade and the sooner you move away from the edge of the cliff.”

“OK – I understand – and I do not know why I did not seek help a long time ago.”

The calm voice replied simply.

?“Many people find seeking help difficult. Especially senior people.”

Sensing that the conversation is coming to an end you feel compelled to ask:

“I am curious. Where do the DRATs come from?”

?“Curiosity is a healthy attitude to nurture. We believe that DRATs originated in finance departments – where they were originally called Fiscal Averages, Ratios and Targets.  At some time in the past they were sucked into operations and governance departments by a knowledge vacuum created by an unintended error of omission.”

You are not quite sure what this unfamiliar language means and you sense that you have strayed outside the scope of the “emergency script” but the phrase ‘error of omission’ sounds interesting and pricks your curiosity. You ask:

“What was the error of omission?”

?“We believe it was not investing in learning how to design complex adaptive value systems to deliver capable win-win-win performance. Not investing in learning the Science of Improvement.”

“I am not sure I understand everything you have said.”

?“That is OK. Do not worry. You will. We look forward to your email.  My name is Bob by the way.”

“Thank you so much Bob. I feel better just having talked to someone who understands what I am going through and I am grateful to learn that there is a way out of this dark pit of despair. I will look at the website and send the email immediately.”

?“I am happy to have been of assistance.”


Systems within Systems

Each of us is a small part of a big system.  Each of us is a big system made of smaller parts. The concept of a system is the same at all scales – this property is called scale invariance.

When we put a system under a microscope we see parts that are also systems. And when we zoom in on those we see their parts are also systems. And if we look outwards with a telescope we see that we are part of a bigger system which in turn is part of an even bigger system.

This concept of systems-within-systems has a down-side and an up-side.

The down-side is that it quickly becomes impossible to create a mental picture of the whole system-of-systems. Our caveman brains are just not up to the job. So we just focus our impressive-but-limited cognitive capacity on the bit that affects us most. The immediate day-to-day people-and-process here-and-now stuff. And we ignore the ‘rest’. We deliberately become ignorant – and for good reason. We do not ask about the ‘rest’ because we do not want to know because we cannot comprehend the complexity. We create cognitive comfort zones and personal silos.

And we stay inside our comfort zones and we hide inside our silos.


Unfortunately – ignoring the ‘rest’ does not make it go away.

We are part of a system – we are affected by it and it is affected by us. That is how systems work.


The up-side is that all systems behave in much the same way – irrespective of the level.  This is very handy because if we can master a method for understanding and improving a system at one level – then we can use the same method at any level.  The only change is the degree of detail. We can chunk up and down and still use the same method.  

The improvement scientist needs to be a master of one method and to be aware of three levels: the system level, the stream level and the step level.

The system provides the context for the streams. The steps provide the content of the streams.

  1. Direction operates at the system level.
  2. Delivery operates at the stream level.
  3. Doing operates at the step level.

So an effective and efficient improvement science method must work at all three levels – and one method that has been demonstrated to do that is called 6M Design®.


6M Design® is not the only improvement science method, and it is not intended to be the best. Being the best is not the purpose because it is not necessary. Having something better than what we had before is the purpose because it is sufficient. That is improvement.


6M Design® works at all three levels.  It is sufficient for system-wide and system-deep improvement. So that is what I use.


The first M stands for Map.

Maps are designed to be visual and two-dimensional because that is how our Mark-I eyeballs and visual sensory systems work. Our caveman brains are good at using pictures and at extracting meaning from the detail. It is a survival skill.

All real systems have a lot more than two dimensions. Safety, Quality, Flow and Cost are four dimensions to start with, and there are many more. So we need lots of maps. Each one looking at just two of the dimensions.  It is our set of maps that provide us with a multi-dimensional picture of the system we want to improve.

One dimension features more often in the maps than any other – and that dimension is time.

The Western cultural convention is to put time on the horizontal axis with the past on the left and the future on the right. Left-to-right means looking forward in time.  Right-to-left means looking backwards in time.


We have already seen one of the time-dependent maps – The 4N Chart®.

It is an Emotion-Time map. How do we feel now and why? What do we want to feel in the future and why? It is a status-at-a-glance map. A static map. A snapshot.

The emotional roller coaster of change – the Nerve Curve – is an Emotion-Time map too. It is a dynamic map – an expected trajectory map.  The emotional ups and downs that we expect to encounter when we engage in significant change.

Change usually involves several threads at the same time – each with its own Nerve Curve. 

The 4N Charts® are snapshots of all the parallel threads of change – they evolve over time – they are our day-to-day status-at-a-glance maps – and they guide us to which Nerve Curve to pay attention to next and what to do. 

The map that links the three – the purposes, the pathways and the parts – is the map that underpins 6M Design®. A map that most people are not familiar with because it represents a counter-intuitive way of thinking.

And it is that critical-to-success map which differentiates innovative design from incremental improvement.

And using that map can be learned quite quickly – if you have a guide – an Improvement Scientist.

A Recipe for Improvement PIE.

Most of us are realists. We have to solve problems in the real world so we prefer real examples and step-by-step how-to-do recipes.

A minority of us are theorists and are more comfortable with abstract models and solving rhetorical problems.

Many of these Improvement Science blog articles debate abstract concepts – because I am a strong iNtuitor by nature. Most realists are Sensors – so by popular request here is a “how-to-do” recipe for a Productivity Improvement Exercise (PIE)

Step 1 – Define Productivity.

There are many definitions we could choose because productivity simply means the results delivered divided by the resources used.  We could use any of the three currencies – quality, time or money – but the easiest is money. That is because it is easier to measure and we have a well-established department for doing it – Finance – the guardians of the money.  There are two other departments who may need to be involved – Governance (the guardians of safety) and Operations (the guardians of delivery).

So the definition we will use is productivity = revenue generated divided by cost incurred.

Step 2 – Draw a map of the process we want to make more productive.

This means creating a picture of the parts and their relationships to each other – in particular what the steps in the process are; who does what, where and when; what is done in parallel and what is done in sequence; what feeds into what and what depends on what. The output of this step is a diagram with boxes and arrows and annotations – called a process map. It tells us at a glance how complex our process is – the number of boxes and the number of arrows.  The simpler the process the easier it is to demonstrate a productivity improvement quickly and unambiguously.

Step 3 – Decide the objective metrics that will tell us our productivity.

We have chosen a financial measure of productivity so we need to measure revenue and cost over time – and our Finance department do that already so we do not need to do anything new. We just ask them for the data. It will probably come as a monthly report because that is how Finance processes are designed – the calendar month accounting cycle is not negotiable.

We will also need some internal process metrics (IPMs) that will link to the end of month productivity report values because we need to be observing our process more often than monthly. Weekly, daily or even task-by-task may be necessary – and our monthly finance reports will not meet that time-granularity requirement.

These internal process metrics will be time metrics.

Start with objective metrics and avoid the subjective ones at this stage. They are necessary but they come later.

Step 4 – Measure the process.

There are three essential measures we usually need for each step in the process: a measure of quality, a measure of time and a measure of cost.  For the purposes of this example we will simplify by making three assumptions. Quality is 100% (no mistakes), Predictability is 100% (no variation) and Necessity is 100% (no worthless steps). This means that we are considering a simplified and theoretical situation but we are novices and we need to start with the wood and not get lost in the trees.

The 100% Quality means that we do not need to worry about Governance for the purposes of this basic recipe.

The 100% Predictability means that we can use averages – so long as we are careful.

The 100% Necessity means that we must have all the steps in there or the process will not work.

The best way to measure the process is to observe it and record the events as they happen. There is no place for rhetoric here. Only reality is acceptable. And avoid computers getting in the way of the measurement. The place for computers is to assist the analysis – and only later may they be used to assist the maintenance – after the improvement has been achieved.

Many attempts at productivity improvement fail at this point – because there is a strong belief that the more computers we add the better. Experience shows the opposite is usually the case – adding computers adds complexity, cost and the opportunity for errors – so beware.

Step 5 – Identify the Constraint Step.

The meaning of the term constraint in this context is very specific – it means the step that controls the flow in the whole process.  The critical word here is flow. We need to identify the current flow constraint.

A tap or valve on a pipe is a good example of a flow constraint – we adjust the tap to control the flow in the whole pipe. It makes no difference how long or fat the pipe is or where the tap is – beginning, middle or end. (So long as the pipe is not too long or too narrow or the fluid too gloopy, because then the pipe itself would become the flow constraint and we do not want that.)

The way to identify the constraint in the system is to look at the time measurements. The step that shows the same flow as the output is the constraint step. (And remember we are using the simplified example of no errors and no variation – in real life there is a bit more to identifying the constraint step).
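
The simplified rule above can be sketched in code. In the no-error, no-variation case the flow of the whole process is set by the step with the longest cycle time; the step names and times here are invented for illustration.

```python
# Identify the constraint step in the simplified (no errors, no
# variation) case: a serial process can flow no faster than its
# slowest step.
cycle_times = {            # minutes per task at each step (invented)
    "Refer":      5,
    "Assess":    12,
    "Treat":      8,
    "Discharge":  4,
}

# The step with the longest cycle time controls the whole flow
constraint = max(cycle_times, key=cycle_times.get)
flow = 1 / cycle_times[constraint]   # tasks per minute, whole process

print(f"Constraint step: {constraint}")
print(f"Maximum flow: {flow * 60:.0f} tasks per hour")
```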

Step 6 – Identify the ideal place for the Constraint Step.

This is the critical-to-success step in the PIE recipe. Get this wrong and it will not work.

This step requires two pieces of measurement data for each step – the time data and the cost data. So the Operational team and the Finance team will need to collaborate here. Tricky I know but if we want improved productivity then there is no alternative.

Lots of productivity improvement initiatives fall at the Sixth Fence – so beware.  If our Finance and Operations departments are at war then we should not consider even starting the race. It will only make the bad situation even worse!

If they are able to maintain an adult and respectful face-to-face conversation then we can proceed.

The time measure for each step we need is called the cycle time – which is the time interval from starting one task to being ready to start the next one. Please note this is a precise definition and it should be used exactly as defined.

The money measure for each step we need is the fully absorbed cost of time of providing the resource.  Your Finance department will understand that – they are Masters of FACTs!

The magic number we need to identify the Ideal Constraint is the product of the Cycle Time and the FACT – the step with the highest magic number should be the constraint step. It should control the flow in the whole process. (In reality there is a bit more to it than this but I am trying hard to stay out of the trees).
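
As a sketch, and with invented figures, the magic number calculation looks like this:

```python
# Step 6 sketch: magic number = cycle time x fully absorbed cost of
# time (FACT) for each step. The step with the highest magic number
# should be the Ideal Constraint. All figures are invented.
steps = {
    # name: (cycle time in minutes, FACT in pounds per minute)
    "Refer":     (5,  1.0),
    "Assess":   (12,  4.0),
    "Treat":     (8,  9.0),
    "Discharge": (4,  1.5),
}

magic = {name: ct * fact for name, (ct, fact) in steps.items()}
ideal_constraint = max(magic, key=magic.get)

for name, m in magic.items():
    print(f"{name:9s} magic number = {m:5.1f}")
print(f"Ideal constraint: {ideal_constraint}")
```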

Step 7 – Design the capacity so that the Ideal Constraint is the Actual Constraint.

We are using a precise definition of the term capacity here – the amount of resource-time available – not just the number of resources available. Again this is a precise definition and should be used as defined.

The capacity design sequence means adding and removing capacity to and from steps so that the constraint moves to where we want it.

The sequence is:
7a) Set the capacity of the Ideal Constraint so it is capable of delivering the required activity and revenue.
7b) Increase the capacity of all the other steps so that the Ideal Constraint actually controls the flow.
7c) Reduce the capacity of each step in turn, a click at a time until it becomes the constraint then back off one click.
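
The 7a-7c sequence can be sketched as follows. The activity target, step names and the 10% trimming margin are all invented for illustration; in practice 7c is done click by click.

```python
# Capacity here is resource-time available per day; a step's
# throughput limit is its capacity divided by its cycle time.
required_activity = 40          # tasks per day the revenue plan needs
cycle_times = {"Refer": 5, "Assess": 12, "Treat": 8, "Discharge": 4}
ideal_constraint = "Treat"      # from the Step 6 magic number

capacity = {}
# 7a: size the Ideal Constraint to exactly the required activity
capacity[ideal_constraint] = required_activity * cycle_times[ideal_constraint]

# 7b: give every other step enough headroom that the Ideal
# Constraint controls the flow ...
# 7c: ... then trim that headroom back; here we jump straight to a
# small (10%, invented) margin above the constraint's rate
for name, ct in cycle_times.items():
    if name != ideal_constraint:
        capacity[name] = required_activity * ct * 1.10

throughput_limit = {n: capacity[n] / cycle_times[n] for n in capacity}
actual_constraint = min(throughput_limit, key=throughput_limit.get)
print(f"Actual constraint: {actual_constraint}")
```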

Step 8 – Model your whole design to predict the expected productivity improvement.

This is critical because we are not interested in suck-it-and-see incremental improvement. We need to be able to decide if the expected benefit is worth the effort before we authorise and action any changes.  And we will be asked for a business case. That necessity is not negotiable either.

Lots of productivity improvement projects try to dodge this particularly thorny fence behind a smoke screen of a plausible looking business case that is more fiction than fact. This happens when any of Steps 2 to 7 are omitted or done incorrectly.  What we need here is a model and if we are not prepared to learn how to build one then we should not start. It may only need a simple model – but it will need one. Intuition is too unreliable.
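
A model really can be this simple. Here is a pen-and-paper style sketch, with invented before-and-after figures, showing the shape of the prediction a business case needs:

```python
# The simplest possible productivity model: revenue divided by cost,
# before and after the proposed design. All numbers are invented.
before = {"tasks_per_day": 30, "revenue_per_task": 100, "monthly_cost": 26000}
after  = {"tasks_per_day": 40, "revenue_per_task": 100, "monthly_cost": 28000}

WORKING_DAYS = 20   # per month (assumption)

def productivity(p):
    revenue = p["tasks_per_day"] * p["revenue_per_task"] * WORKING_DAYS
    return revenue / p["monthly_cost"]

gain = productivity(after) / productivity(before) - 1
print(f"Predicted productivity improvement: {gain:.0%}")
```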

A model is defined as a simplified representation of reality used for making predictions.

All models are approximations of reality. That is OK.

The art of modelling is to define the questions the model needs to be designed to answer (and the precision and accuracy needed) and then design, build and test the model so that it is just simple enough and no simpler. Adding unnecessary complexity is difficult, time consuming, error prone and expensive. Using a computer model when a simple pen-and-paper model would suffice is a good example of over-complicating the recipe!

Many productivity improvement projects that get this far still fall at this fence.  There is a belief that modelling can only be done by Marvins with brains the size of planets. This is incorrect.  There is also a belief that just using a spreadsheet or modelling software is all that is needed. This is incorrect too. Competent modelling requires tools and training – and experience, because it is as much art as science.

Step 9 – Modify your system as per the tested design.

Once you have demonstrated how the proposed design will deliver a valuable increase in productivity then get on with it.

Not by imposing it as a fait accompli – but by sharing the story along with the rationale, real data, explanation and results. Ask for balanced, reasoned and respectful feedback. The question to ask is “Can you think of any reasons why this would not work?” Very often the reply is “It all looks OK in theory but I bet it won’t work in practice but I can’t explain why”. This is an emotional reaction which may have some basis in fact. It may also just be habitual skepticism or cynicism. Further debate is usually worthless – the only way to know for sure is to do the experiment – a small-scale and time-limited pilot. Set the date and do it. Waiting and debating will add no value. The proof of the pie is in the eating.

Step 10 – Measure and maintain your system productivity.

Keep measuring the same metrics that you need to calculate productivity and in addition monitor the old constraint step and the new constraint step like a hawk – capturing their time metrics for every task – and tracking what you see against what the model predicted you should see.

The correct tool to use here is a system behaviour chart for each constraint metric.  The before-the-change data is the baseline from which improvement is measured over time; a dot is plotted for each task in real time and made visible to all the stakeholders. This is the voice of the process (VoP).
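
One common construction of a system behaviour chart is the XmR chart: a centre line at the mean and natural process limits at the mean plus or minus 2.66 times the average moving range. A minimal sketch with invented task lead times:

```python
# XmR-style system behaviour chart limits for a constraint metric.
# The values are invented lead times, one per task, in time order.
values = [32, 35, 31, 38, 33, 36, 30, 34, 37, 33, 35, 32]

mean = sum(values) / len(values)
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Natural process limits: mean +/- 2.66 x average moving range
upper = mean + 2.66 * avg_mr
lower = mean - 2.66 * avg_mr
print(f"Centre {mean:.1f}, limits {lower:.1f} to {upper:.1f}")

# Points outside the limits are signals worth investigating;
# otherwise leave the process alone and do not meddle.
signals = [v for v in values if v > upper or v < lower]
print("Investigate" if signals else "Stable - do not meddle")
```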

A review after three months with a retrospective financial analysis will not be enough. The feedback needs to be immediate. The voice of the process will dictate if and when to celebrate. (There is a bit more to this step too and the trees are clamouring for attention but we must stay out of them a bit longer.)

And after the charts-on-the-wall have revealed the expected improvement has actually happened; and after the skeptics have deleted their ‘we told you so’ emails; and after the cynics have slunk off to sulk; and after the celebration party is over; and after the fame and glory has been snatched by the non-participants – after all of that expected change management stuff has happened …. there is a bit more work to do.

And that is to establish the new higher productivity design as business-as-usual which means tearing up all the old policies and writing new ones: New Policies that capture the New Reality. Bin the out-of-date rubbish.

This is an essential step because culture changes slowly.  If this step is omitted then out-of-date beliefs, attitudes, habits and behaviours will start to diffuse back in, poison the pond, and undo all the good work.  The New Policies are the reference – but they alone will not ensure the improvement is maintained. What is also needed is a PFL – a performance feedback loop.

And we have already demonstrated what that needs to be – the tactical system behaviour charts for the Intended Constraint step.

The financial productivity metric is the strategic output and is reported monthly – as a system behaviour chart! Just comparing this month with last month is meaningless.  The tactical SBCs for the constraint step must be maintained continuously by the people who own the constraint step – because they control the productivity of the whole process.  They are the guardians of the productivity improvement and their SBCs are the Early Warning System (EWS).

If the tactical SBCs set off an alarm then investigate the root cause immediately – and address it. If they do not then leave it alone and do not meddle.

This is the simplified version of the recipe. The essential framework.

Reality is messier. More complicated. More fun!

Reality throws in lots of rusty spanners so we do also need to understand how to manage the complexity; the unnecessary steps; the errors; the meddlers; and the inevitable variation.  It is possible (though not trivial) to design real systems to deliver much higher productivity by using the framework above and by mastering a number of other tools and techniques.  And for that to succeed the Governance, Operations and Finance functions need to collaborate closely with the People and the Process – initially with guidance from an experienced and competent Improvement Scientist. But only initially. This is a learnable skill. And it takes practice to master – so start with easy ones and work up.

If any of these bits are missing or are dysfunctional the recipe will not work. So that is the first nettle the Executive must grasp. Get everyone who is necessary on the same bus going in the same direction – and show the cynics the exit. Skeptics are OK – they will counter-balance the Optimists. Cynics add no value and are a liability.

What you may have noticed is that 8 of the 10 steps happen before any change is made. 80% of the effort is in the design – only 20% is in the doing.

If we get the design wrong then the doing will be an ineffective and inefficient waste of effort, time and money.


The best complement to real Improvement PIE is a FISH course.


The Frightening Cost Of Fear

The recurring theme this week has been safety and risk.

Specifically in a healthcare context. Most people are not aware just how risky our current healthcare systems are. Those who work in healthcare are much more aware of the dangers but they seem powerless to do much to make their systems safer for patients.


The shroud-waving zealots who rant on about safety often use a very unhelpful quotation. They say “Every system is perfectly designed to deliver the performance it does“. The implication is that when the evidence shows that our healthcare systems are dangerous … then … we designed them to be dangerous.  The reaction from the audience is emotional and predictable: “We did not intend this so do not try to pin the blame on us!”  The well-intentioned shroud-waving safety zealot loses whatever credibility they had and the collective swamp of cynicism and despair gets a bit deeper.


The warning-word here is design – because it has many meanings.  The design of a system can mean “what the system is” in the sense of a blueprint. The design of a system can also mean “how the blueprint was created”.  This process sense is the trap – because it implies intention.  Design needs a purpose – the intended outcome – so to say an unsafe system has been designed is to imply that it was intended to be unsafe. This is incorrect.

The message in the emotional backlash that our well-intended zealot provoked is “You said we intended bad things to happen which is not correct so if you are wrong on that fundamental belief then how can I trust anything else you say?“. This is the reason zealots lose credibility and actually make improvement less likely to happen.


The reality is not that the system was designed to be unsafe – it is that it was not designed not to be. The double negatives are intentional. The two statements are not the same.


The default way of the Universe is evolutionary (which is unintentional and reactive) and chaotic (which is unstable and unsafe). To design a system to be not-unsafe we need to understand Two Sciences – Design Science and Safety Science. Only then can we proactively and intentionally design safe, stable, and trustable systems.    If we do nothing and do not invest in mastering the Two Sciences then we will get the default outcome: unintended unsafety.  This is what the uncomfortable  evidence says we have.


So where does the Frightening Cost of Fear come in?

If our system is unintentionally and unpredictably unsafe then of course we will try to protect ourselves from the blame which inevitably will follow from disappointed customers.  We fear the blame partly because we know it is justified and partly because we feel powerless to avoid it. So we cover our backs. We invent and implement complex check-and-correct systems and we document everything we do so that we have the evidence in the inevitable event of a bad outcome and the backlash it unleashes. The evidence that proves we did our best; it shows we did what the safety zealots told us to do; it shows that we cannot be held responsible for the bad outcome.

Unfortunately this strategy does little to prevent bad outcomes. In fact it can have exactly the opposite effect of what is intended. The added complexity and cost of our cover-my-back bureaucracy actually increases the stress and chaos and makes bad outcomes more likely to happen. It makes the system even less safe. It does not deflect the blame. It just demonstrates that we do not understand how to design a not-unsafe system.


And the financial cost of our fear is frighteningly high.

Studies have shown that over 60% of nursing time is spent on documentation – and about 70% of healthcare cost is on hospital nurse salaries. The maths is easy – at least 42% of total healthcare cost is spent on back-covering-blame-deflection-bureaucracy.

It gets worse though.

Those legal documents called clinical records need to be moved around and stored for a minimum of seven years. That is expensive. Converting them into an electronic format misses the point entirely. Finding the few shreds of valuable clinical information amidst the morass of back-covering-bureaucracy uses up valuable specialist time and has a high risk of failure. Inevitably the risk of decision errors increases – but this risk is unmeasured and is possibly unmeasurable. The frustration and fear it creates is very obvious though: to anyone willing to look.

The cost of correcting the Niggles that have been detected before they escalate to Not Agains, Near Misses and Never Events can itself account for half the workload. And the cost of clearing up the mess after the uncommon but inevitable disaster becomes built into the system too – as insurance premiums to pay for future litigation and compensation. It is no great surprise that we have unintentionally created a compensation culture! Patient expectation is rising.

Add all those costs up and it becomes plausible to suggest that the Cost of Fear could be a terrifying 80% of the total cost!


Of course we cannot just flick a switch and say “Right – let us train everyone in safe system design science“.  What would all the people who make a living from feeding on the present dung-heap do? What would the checkers and auditors and litigators and insurers do to earn a crust? Join the already swollen ranks of the unemployed?


If we step back and ask “Does the Cost of Fear principle apply to everything?” then we are faced with the uncomfortable conclusion that it most likely does.  So the cost of everything we buy will have a Cost of Fear component in it. We will not see it written down like that but it will be in there – it must be.

This leads us to a profound idea.  If we collectively invested in learning how to design not-unsafe systems then the cost of everything could fall. This means we would not need to work as many hours to earn enough to pay for what we need to live. We could all have less fear and stress. We could all have more time to do what we enjoy. We could all have both of these and be no worse off in terms of financial security.

This Win-Win-Win outcome feels counter-intuitive enough to deserve serious consideration.


So here are some other blog topics on the theme of Safety and Design:

Never Events, Near Misses, Not Agains and Nailing Niggles

The Safety Line in the Quality Sand

Safety By Design

Standard Ambiguity

One of the words that causes the most debate and confusion in the world of Improvement is the word standard – because it has so many different yet inter-related meanings.  It is an ambiguous word and a multi-faceted concept.

For example standard method can be the normal way of doing something (as in a standard operating procedure  or SOP); standard can be the expected outcome of doing something; standard can mean the minimum acceptable quality of the output (as in a safety standard); standard can mean an aspirational performance target; standard can mean an absolute reference or yardstick (as in the standard kilogram); standard can mean average; and so on.  It is an ambiguous word.

So it is no surprise that we get confused. And when we are confused we get scared, and we try to relieve our fear by asking questions – which does not help because we do not get clear answers – so we start to discuss, debate and argue, and all this takes effort, time and, inevitably, money. But the fog of confusion does not lift.  If anything it gets denser.  And the reason? Standard Ambiguity.


One cause of this is the perennial confusion between purpose and process. Purpose is the Why. Process is the How.  The concept of standard applied to the Purpose will include the outcomes: the minimum acceptable (safety standard), the expected (the specification standard) and the actual (the de facto standard).  The concept of standard applied to the process would include the standard operating procedures and the reference standards for accurate process measurement (e.g. a gold standard).


To illustrate the problems that result from confusing purpose standards with process standards we need look no further than education.  What is the purpose of a school? To deliver pupils who have achieved their highest educational potential perhaps. What is the purpose of an exam board? To have a common educational reference standard and to have a reliable method for comparing individual pupils against that reference standard perhaps.  So where does the idea of “Being the school that achieved the highest percentage of top grades?” fit with these two purpose standards?  Where does the league table concept fit? It is hard to see immediately. But we do want to improve the educational capability of our population because that is a national and global asset in an increasingly complex, rapidly changing, high technology world. So a league table will drive up the quality of education surely? But it doesn’t seem to be turning out that way. So what is getting in the way?


What is getting in the way is how we confuse collaboration and competition.  It seems that many believe we must have either collaboration or competition. Either-Or thinking is a trap for the unwary and whenever these words are uttered a small alarm bell should ring.  Are collaboration and competition mutually exclusive? Or are we just making this assumption to simplify the problem? We do that a lot.


Suppose the exam boards were both competing and collaborating with each other. Suppose they collaborated to set and to maintain a stable and trusted reference standard; and suppose that they competed to provide the highest quality service to the schools – in terms of setting and marking exams. What would happen?  An exam board that stepped out of line in terms of the standard would lose its authority to set and mark exams – it would cut its own commercial throat.  And the quality of the examination process would go up because those who invest in that will attract more of the market.  What about the schools – what if they collaborated and competed too?  What if they collaborated to set and maintain a stable and trusted reference standard of conduct and competency of their teachers – and what if they competed to improve the quality of their educational process? They would attract the most pupils. What could happen if we combine competition and collaboration so the sum becomes greater than the parts?


A similar situation exists in healthcare.  Some hospitals are talking about competing to be the safest hospitals and collaborating to improve quality.  It sounds plausible but is it rational?

Safety is an absolute standard – it is the common minimum acceptable quality. No hospital should fail on safety so this is not a suitable subject for competition.  All hospitals should collaborate to set and to maintain safety – helping each other by sharing data, information, knowledge, and understanding.  And with that Foundation of Trust they can then compete on quality – using the competitive spirit to pull them ever higher. Better quality of service, better quality of delivery and better quality of performance – including financial. Win-win-win.  So when the quality of everyone improves through competitive upwards pull then the level of minimum acceptable quality increases – so the Safety Standard improves too.


A win-win-win outcome is the purpose of the application of the process of Improvement Science.

The Challenge of Wicked Problems

“Wicked problem” is a phrase used to describe a problem that is difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often not recognised.
The term ‘wicked’ is used, not in the sense of evil, but rather in the sense that it is resistant to resolution.
The complex inter-dependencies imply that an effort to solve one aspect of a wicked problem may reveal or create other problems.

System-level improvement is a very common example of a wicked problem, so an Improvement Scientist needs to be able to sort the wicked problems from the tame ones.

Tame problems can be solved using well known and understood methods and the solution is either right or wrong. For example – working out how much resource capacity is needed to deliver a defined demand is a tame problem.  Designing a booking schedule to avoid excessive waiting is a tame problem.  The fact that many people do not know how to solve these tame problems does not make them wicked ones.  Ignorance is not the same as intransigence.
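
To show just how tame that capacity question is, here is a minimal sketch with made-up numbers: the offered load is simply demand rate multiplied by average task time, rounded up to whole resources.

```python
# Tame problem: how much resource capacity is needed for a defined demand?
# Offered load = demand rate x average task time; round up to whole resources.
import math

def required_capacity(demand_per_hour, avg_task_minutes):
    load = demand_per_hour * (avg_task_minutes / 60.0)  # concurrent work content
    return math.ceil(load)                              # whole resources needed

# e.g. 10 tasks arriving per hour, each taking 24 minutes on average
print(required_capacity(10, 24))   # → 4
```

This is the right-or-wrong core of the calculation; the article's later point stands that real designs must also allow headroom for the inevitable variation, which this deliberately simple sketch ignores.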

Wicked problems do not have right or wrong solutions – they have better or worse outcomes.  Wicked problems cannot be precisely defined, dissected, analysed and solved. They are messy. They are more than complicated – they are complex.  A mechanical clock is a complicated mechanism but designing, building, operating and even repairing a clock is a tame problem not a wicked one.

So how can we tell a wicked problem from a tame one?

If a problem has been solved and there is a known and repeatable solution then it is, by definition, a tame problem.  If a problem has never been solved then it might be tame – and the only way to find out is to try solving it.
The barrier we then discover is that each of us gets stuck in the mud of our habitual, unconscious assumptions. Experience teaches us that just taking a different perspective can be enough to create the breakthrough insight – the “Ah ha!” moment. Seeking other perspectives and opinions is an effective strategy when stuck.

So, if two-heads-are-better-than-one then many heads must be even better! Do we need a committee to solve wicked problems?
Experience teaches us that when we try it we find that it often does not work!
The different perspectives also come with different needs, different assumptions, and different agendas and we end up with a different wicked problem. The committee is rendered ineffective and inefficient by rhetorical discussion and argument.

This is where a very useful Improvement Science technique comes in handy. It is called Argument Free Problem Solving (AFPS) and it was intentionally designed to facilitate groups working on complex problems.

The trick to AFPS is to understand what generates the arguments and to design these causes out of the problem solving process. There are several contributors.

First there is just good old fashioned disrespectful skepticism – otherwise known as cynicism.  The antidote to this poison is to respectfully challenge the disrespectful component of the cynical behaviour – the personal discounting bit.  And it is surprisingly effective!

Second there is the well known principle that different people approach life and problems in different ways.  Some call this temperament and others call it personality. Whatever the label, knowing our preferred style and how different styles can conflict is useful because it leads to mutual respect for our different gifts.  One tried and tested method is Jungian Typology which comes in various brands such as the MBTI® (Myers Briggs Type Indicator).

Third there is the deepening understanding of how the 1.3 kg of caveman wetware between our ears actually works.  The ongoing advances in neuroscience are revealing fascinating insights into how “irrational” we really are and how easy it is to fool the intuition. Stage magicians and hypnotists make a living out of this inherent “weakness”. One of the lessons from neuroscience is that we find it easier to communicate when we are all in the same mental state – even if we have different temperaments.  It is called cognitive  resonance.  Being on the same wavelength.  Arguments arise when different people are in conflicting mental states – cognitive dissonance.

So an effective problem solving team is more akin to a flock of birds or a shoal of fish – that can change direction quickly and as one – without a committee, without an argument, and without creating chaos.  For birds and fish it is an effective survival strategy because it confounds the predators. The ones that do not join in … get eaten!

When a group are able to change perspective together and still stay focused on the problem then the tame ones get resolved and the wicked ones start to be dissolved.
And that is all we can expect for wicked problems.

The AFPS method can be learned quickly – and experience shows that just one demonstration is usually enough to convince the participants when a team is hopelessly entangled in a wicked-looking problem!

The Surprising Science of Motivation

Intended improvement requires focussed change which requires systemic design which requires collaborative action which requires motivation. So where does the motivation come from? Money? or Meaning?  This animated talk by Dan Pink from RSA is so much more effective than a feeble blog!

Design work is the antithesis of the repetitive, mechanical, uninspiring, mundane, day-to-day work that we do for money. Design work is always unique, always challenging, and always fun – and hard – and many people do it in their own time for nothing. The whole Open Source Software movement is testament to that.

But why should the designers have all the fun? The question misses the point – we are all designers and we can all become better designers. We can mix up the designing and the delivering. And when we do that it gets even better because we get the fun of the design bit and the reward of the delivery bit too.

So how can we justify staying as we are when we can see how much fun is feasible?

Pruning the Niggle Tree

Sometimes our daily existence feels like a perpetual struggle between two opposing forces: the positive force of innovation, learning, progress and success; and the opposing force of cynicism, complacency, stagnation and failure.  Often the balance-of-opposing-forces is so close that even small differences of opinion can derail us – especially if they are persistent. And we want to stay on course to improvement.

Niggles are the irritating things that happen every day. Day after day. Niggles are persistent. So when we are in our “yin-yang” equilibrium and “balanced on the edge” then just one extra niggle can push us off our emotional tight-rope. And we know it. The final straw!

So to keep ourselves on track to success we need to “nail” niggles.  But which ones? There seem to be so many! Where do we start?

If we recorded just one day and from that we listed all the positive things that happened on green PostIt® notes and all the negative things on red ones – then we would be left with a random-looking pile of red and green notes. Good days would have more green, and bad days would have more red – and all days would have both. And that is just the way it is. Yes? But are they actually random? Is there a deeper connection?

Experience teaches us that when we Investigate-a-Niggle we find it is connected to other niggles. The “cannot find a parking place” niggle is because of the “car park is full” niggle which also causes the “someone arrived late for my important meeting” niggle. The red leaf is attached to a red twig which in turn sprouts other red leaves. The red leaves connect to other red leaves; not to green ones.

If we tug on a green leaf – a Nugget – we find that it too is connected to other nuggets. The “congratulations on a job well done” nugget is connected to the “feedback is important” nugget from which sprouts the “opportunities for learning” nugget. Our green leaf is attached, indirectly, to many other green leaves; not to red ones.

It seems that our red leaves (niggles) and our green leaves (nuggets) are connected – but not directly to each other. It is as if we have two separate but tightly intertwined plants competing with each other for space and light. So if we want a tree that is more green than red and if we want to progress steadily in the direction of sustained improvement – then we need to prune the niggle tree (red leaves) and leave the nugget tree (green leaves) unscathed.

The problem is that if we just cut off one or two red leaves, new ones sprout quickly from the red twigs to replace them. We quickly learn that this approach is futile. We suspect that if we were able to cut all the red leaves off at once then the niggle tree might shrivel and die – but that looks impossible. We need to be creative and we need to search deeper. With the knowledge that the red leaves are part of one tree, we can remove multiple red leaves in one snip by working our way back from the leaves, up the red twigs and to the red branches. If we prune far enough back then we can expect a large number of interconnected red leaves to wither and fall off – leaving the healthy green leaves more space and more light to grow on that part of the tree.

Improvement Science is about pruning the Niggle tree to make space for the Nugget tree to grow. It is about creating an environment for the Green shoots of innovation to sprout.  Most resistance comes from those who feed on the Red leaves – the Cynics – and if we remove enough red branches then they will go hungry. And now the Cynics have a choice: learn to taste and appreciate the Green leaves or “find another tree”.

We want a Greener tree – with fewer poisonous Red leaves on it.

Negotiate, Negotiate, Negotiate.

One of the most important skills that an Improvement Scientist needs is the ability to negotiate.  We are all familiar with one form of negotiation, called distributive negotiation, which is where the parties carve up the pie in a low-trust compromise. That is not the form we need – what we need is called integrative negotiation. The goal of integrative negotiation is to join several parts into a greater whole and it implies a higher level of trust and a greater degree of collaboration.

Organisations of more than about 90 people are usually split into departments – and for good reasons. The complex organisation requires specialist aptitudes, skills, and know-how and it is easier to group people together who share the specialist skills needed to deliver that service to the organisation – such as financial services in the accounts department.  The problem is that this division also creates barriers and as the organisation increases in size these barriers have a cumulative effect that can severely limit the capability of the organisation.  The mantra that is often associated with this problem is “communication, communication, communication” … which is too non-specific and therefore usually ineffective.

The products and services that an organisation is designed to deliver are rarely the output of one department – so the parts need to align and to integrate to create an effective and efficient delivery system. This requires more than just communication – it requires integrative negotiation – and it is not a natural skill or one that is easy to develop. It requires investment of effort and time.

To facilitate the process we need to provide three things: a common goal, a common language and a common ground.  The common goal is what all parts of the system are aligned to; the common language is how the dialog is communicated; and the common ground is our launch pad.

Integrative negotiation starts with finding the common ground – the areas of agreement. Very often these are taken for granted because we are psychologically tuned to notice differences rather than similarities. We have to make the “assumed” and “obvious” explicit before we turn our attention on our differences.

Integrative negoation proceeds with defining the common niggles and nice-ifs that could be resolved by a single change; the win-win-win opportunities.

Integrative negotiation concludes with identifying changes that are wholly within the circle of influence of the parties involved – the changes that they have the power to make individually and collectively.

After negotiation comes decision and after decision comes action and that is when improvement happens.

The Nerve Curve

The Nerve Curve is the emotional roller-coaster ride that everyone who engages in Improvement needs to become confident to step onto.

Just like a theme park ride it has ups and downs, twists and turns, surprises and challenges, an element of danger and a splash of excitement.  If it did not have all of those components then it would not be fun and there would not be queues of people wanting to ride, again and again.  And the reason that theme parks are so successful is because their rides have been very carefully designed – to be challenging, exciting, fun and safe – all at the same time.

So, when we challenge others to step aboard our Improvement Nerve Curve then we need to ensure that our ride is safe – and to do that we need to understand where the emotional dangers lurk, to actively point them out and then avoid them.

A big danger hides right at the start.  To get aboard the Nerve Curve we have to ask questions that expose the Elephant-in-the-Room issues.  Everyone knows they are there – but no one wants to talk about them.   The biggest one is called Distrust – which is wrapped up in all sorts of different ways and inside the nut is the  Kernel of Cynicism.  The inexperienced improvement facilitator may blunder straight into this trap just by using one small word … the word “Why”?  Arrrrrgh!  Kaboom!  Splat!  Game Over.

The “Why” question is like throwing a match into a barrel of emotional gunpowder – because it is interpreted as “What is your purpose?” and in a low-trust climate no one will want to reveal what their real purpose or intention is.  They have learned from experience to keep their cards close to their chest – it is safer to keep agendas hidden.

A much safer question is “What?”  What are the facts?  What are the effects? What are the causes? What works well? What does not? What do we want? What don’t we want? What are the constraints? What are our change options? What would each deliver? What are everyone’s views?  What is our decision?  What is our first action? What is the deadline?

Sticking to the “What” question helps to avoid everyone diving for the Political Panic Button and pulling the Emotional Emergency Brake before we have even got started.

The first part of the ride is the “Awful Reality Slope” that swoops us down into “Painful Awareness Canyon” which is the emotional low-point of the ride.  This is where the elephants-in-the-room roam for all to see and where passengers realise that, once the issues are in plain view, there is no way back.

The next danger is at the far end of the Canyon and is called the Black Chasm of Ignorance and the roller-coaster track goes right to the edge of it.  Arrrgh – we are going over the edge of the cliff – quick grab the Wilful Blindness Goggles and Denial Bag from under the seat, apply the Blunder Onwards Blind Fold and the Hope-for-the-Best Smoke Hood.

So, before our carriage reaches the Black Chasm we need to switch on the headlights to reveal the Bridge of How:  The structure and sequence that spans the chasm and that is copiously illuminated with stories from those who have gone before.  The first part is steep though and the climb is hard work.  Our carriage clanks and groans and it seems to take forever but at the top we are rewarded by a New Perspective and the exhilarating ride down into the Plateau of Understanding where we stop to reflect and to celebrate our success.

Here we disembark and discover the Forest of Opportunity which conceals many more Nerve Curves going off in all directions – rides that we can board when we feel ready for a new challenge.  There is danger lurking here too though – hidden in the Forest is Complacency Swamp – which looks innocent except that the Bridge of How is hidden from view.   Here we can get lured by the pungent perfume of Power and the addictive aroma of Arrogance and we can become too comfortable in the Zone.   As we snooze in the Hammock of Calm we do not notice that the world around us is changing.  In reality we are slipping backwards into Blissful Ignorance and we do not notice – until we suddenly find ourselves in an unfamiliar Canyon of Painful Awareness.  Ouch!

Being forewarned is our best defense.  So, while we are encouraged to explore the Forest of Opportunity,  we learn that we must also return regularly to the Plateau of Understanding to don the Habit of Humility.  We must  regularly refresh ourselves from the Fountain of New Knowledge by showing others what we have learned and learning from them in return.  And when we start to crave more excitement we can board another Nerve Curve to a new Plateau of Understanding.

The Safety Harness of our Improvement journey is called See-Do-Teach and the most important part is Teach.  Our educators need to have more than just a knowledge of how-to-do, they also need to have enough understanding to be able to explore the why-to-do. The Quest for Purpose.

To convince others to get onboard the Nerve Curve we must be able to explain why the Issues still exist and why the current methods are not sufficient.  Those who have been on the ride are the only ones who are credible because they understand.  They have learned by doing.

And that understanding grows with practice and it grows more quickly when we take on the challenge of learning how to explore purpose and explain why.  This is Nerve Curve II.

All aboard for the greatest ride of all.

Knowledge and Understanding

Knowledge is not the same as Understanding.

We all know that the sun rises in the East and sets in the West; most of us know that the oceans have a twice-a-day tidal cycle and some of us know that these tides also have a monthly cycle that is associated with the phase of the moon. We know all of this just from taking notice; remembering what we see; and being able to recognise the patterns. We use this knowledge to make reliable predictions of the future times and heights of the tides; and we can do all of this without any understanding of how tides are caused.
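
The tide example can be made concrete. Knowing only the observed semi-diurnal rhythm of roughly 12 hours 25 minutes is enough to predict the next high tides – no gravity, no calculus, just the remembered pattern (a toy sketch with an illustrative starting time):

```python
# Prediction from pattern alone: no physics, just the observed
# semi-diurnal cycle of roughly 12 hours 25 minutes between high tides.
TIDAL_CYCLE_HOURS = 12 + 25 / 60   # observed average interval

def next_high_tides(last_high_hour, count=3):
    """Predict the next high-tide times (hours from midnight; may exceed 24)."""
    return [round(last_high_hour + (i + 1) * TIDAL_CYCLE_HOURS, 2)
            for i in range(count)]

print(next_high_tides(6.0))   # e.g. last observed high tide at 06:00
```

This is knowledge without understanding in miniature: the sketch describes what will happen but cannot explain why, and it would fail silently if the underlying causes ever changed.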

Our lack of understanding means that we can only describe what has happened. We cannot explain how it happened. We cannot extract meaning – the why it happened.

People have observed and described the movements of the sun, sea, moon, and stars for millennia and a few could even predict them with surprising accuracy – but it was not until the 17th century that we began to understand what caused the tides. Isaac Newton developed enough of an understanding to explain how it worked and he did it using a new concept called gravity and a new tool called calculus.  He then used this understanding to explain a lot of other unexplained things and suddenly the Universe started to make a lot more sense to everyone. Nowadays we teach this knowledge at school and we take it for granted. We assume it is obvious and it is not. We are no smarter now than people in the 17th Century – we just have a deeper understanding (of physics).

Understanding enables things that have not been observed or described to be predicted and explained. Understanding is necessary if we want to make rational and reliable decisions that will lead to changes for the better in a changing world.

So, how can we test if we only know what to do or if we actually understand what to do?

If we understand then we can demonstrate the application of our knowledge by solving old and new problems effectively and we can explain how we do it.  If we do not understand then we may still be able to apply our knowledge to old problems but we do not solve new problems effectively or efficiently and we are not able to explain why.

But we do not want to risk making a mistake just to test whether we have an understanding-gap, so how can we find out? What we look for is the tell-tale sign of an excess of knowledge and a dearth of understanding – and it has a name – it is called “bureaucracy”.

Suppose we have a system where the decision-makers do not make effective decisions when faced with new challenges – which means that their decisions lead to unintended adverse outcomes. It does not take very long for the system to know that the decision process is ineffective – so to protect itself the system reacts by creating bureaucracy – a sort of organisational damage-limitation circle of sand-bags that limits the negative consequences of the poor decisions. A bureaucratic firewall so to speak.

Unfortunately, while bureaucracy is effective it is non-specific, it uses up resources and it slows everything down. Bureaucracy is inefficiency. What we get as a result is a system that costs more and appears to do less and that is resistant to any change – not just poor decisions – it slows down good ones too.

The bureaucratic barrier is important though; doing less bad stuff is actually a reasonable survival strategy – until the cost of the bureaucracy threatens the system’s viability. Then it becomes a liability.

So what happens when a last-saloon-in-town “efficiency” drive is started in desperation and the “bureaucratic red tape” is slashed? The poor decisions that the red tape was ensnaring are free to spread virally and when implemented they create a big-bang unintended adverse consequence! The safety and quality performance of the system drops sharply and that triggers the reflex “we-told-you-so” and rapid re-introduction of the red-tape, plus some extra to prevent it happening again.  The system learns from its experience and concludes that “higher quality always costs more”, “don’t trust our decision-makers”, “the only way to avoid a bad decision is not to make or implement any decisions” and “the safest way to maintain quality is to add extra checks and increase the price”. The system then remembers this new knowledge for future reference; the bureaucratic concrete sets hard; and the whole cycle repeats itself. Ad infinitum.

So, with this clearer insight into the value of bureaucracy and its root cause we can now design an alternative system: to develop knowledge into understanding and by that route to improve our capability to make better decisions that lead to predictable, reliable, demonstrable and explainable benefits for everyone. When we do that the non-specific bureaucracy is seen to impede progress so it makes sense to dismantle the bits that block improvement – and keep the bits that block poor decisions and that maintain performance. We now get improved quality and lower costs at the same time, quickly, predictably and without taking big risks, and we can reinvest what we have saved in making further improvements and developing more knowledge, a deeper understanding and wiser decisions. Ad infinitum.

The primary focus of Improvement Science is to expand understanding – our ability to decide what to do, and what not to; where and where not to; and when and when not to – and to be able to explain and to demonstrate the “how” and to some extent the “why”.

One proven method is to See, then to Do, and then to Teach. And when we try that we discover to our surprise that the person whose understanding increases the most is the teacher!  Which is good because the deeper the teacher’s understanding the more flexible, adaptable and open to new learning they become.  Education and bureaucracy are poor partners.

Resistance to Change

Many people who are passionate about improvement become frustrated when they encounter resistance-to-change.

It does not matter what sort of improvement is desired – safety, delivery, quality, costs, revenue, productivity or all of them.

The natural and intuitive reaction to meeting resistance is to push harder – and our experience of the physical world has taught us that if we apply enough pressure at the right place then resistance will be overcome and we will move forward.

Unfortunately we sometimes discover that we are pushing against an immovable object and even our maximum effort is futile – so we give up and label it as “impossible”.

Much of Improvement Science appears counter-intuitive at first sight and the challenge of resistance is no different.  The counter-intuitive response to feeling resistance is to pull back, and that is exactly what works better. But why does it work better? Isn’t that just giving up and giving in? How can that be better?

To explain the rationale it is necessary to examine the nature of resistance more closely.

Resistance to change is an emotional reaction to an unconsciously perceived threat that is translated into a conscious decision, action and justification: the response. The range of verbal responses is large, as illustrated in the caption, and the range of non-verbal responses is just as large.  Attempting to deflect or defuse all of them is impractical, ineffective and leads to a feeling of frustration and futility.

This negative emotional reaction we call resistance is non-specific because that is how our emotions work – and it is triggered as much by the way the change is presented as by what the change is.

Many change “experts” recommend that the better method of “driving” change is selling-versus-telling, and recommend learning psycho-manipulation techniques to achieve it – close-the-deal sales training for example. Unfortunately this strategy can create a psychological “arms race” which can escalate just as quickly and lead to the same outcome: an emotional battle and psychological casualties. This outcome is often given the generic label of “stress”.

An alternative approach is to regard resistance behaviour as multi-factorial and one model separates the non-specific resistance response into separate categories: Why Do – Don’t Do – Can’t Do – Won’t Do.

The Why Do response is valuable feedback because it says “we do not understand the purpose of the proposed change” and it is not unusual for proposals to be purposeless. This is sometimes called “meddling”.  This is fear of the unknown.

The Don’t Do response is valuable feedback that is saying “there is a risk with this proposed change – an unintended negative consequence that may be greater than the intended positive outcome“.  Often it is very hard to explain this No-No reaction because it is the output of an unconscious thought process that operates out of awareness. It just doesn’t feel good. And some people are better at spotting the risks – they prefer to wear the Black Hat – they are called skeptics.  This is fear of failure.

The Can’t Do is also valuable feedback that is saying “we get the purpose and we can see the problem and the benefit of a change – we just cannot see the path that links the two because it is blocked by something.” This reaction is often triggered by an unconscious recognition that some form of collaborative working will be required but the cultural context is low on respect and trust. It can also just be a manifestation of a knowledge, skill or experience gap – the “I don’t know how to do” gap. Some people habitually adopt the Victim role – most are genuine and do not know how.

The Won’t Do response is also valuable feedback that is saying “we can see the purpose, the problem, the benefit, and the path but we won’t do it because we don’t trust you“. This reaction is common in a low-trust culture where manipulation, bullying and game-playing are the observed and expected behaviours. The role being adopted here is the Persecutor role – and the psychological discount is caring for others. Persecutors lack empathy.

The common theme here is that all resistance-to-change responses represent valuable feedback, and this explains why the better reaction to resistance is to stop talking and start listening: to make progress we must use the feedback to diagnose which components of resistance are present. This is necessary because each category requires a different approach.

For example, Why Do requires making both the problem and the purpose explicit; Don’t Do requires exploring the fear and bringing to awareness what is fuelling it; Can’t Do requires searching for the skill gaps and filling them; and Won’t Do requires identifying the trust-eroding beliefs, attitudes and behaviours and making it safe to talk about them.

Resistance-to-change is generalised as a threat when in reality it represents an opportunity to learn and to improve – which is what Improvement Science is all about.

Targets, Tyrannies and Traps.

If we are required to place a sensitive part of our anatomy into a device that is designed to apply significant and sustained pressure, then the person controlling the handle would have our complete attention!

Our sole objective would be to avoid the crushing and relentless pain and this would most definitely bias our behaviour.

We might say or do things that ordinarily we would not – just to escape from the pain.

The requirement to meet well-intentioned but poorly-designed performance targets can create the organisational equivalent of a medieval thumbscrew; and the distorting effect on behaviour is the same.  Some people even seem to derive pleasure from turning the screw!

But what if we do not know how to achieve the performance target? We might then act to deflect the pain onto others – we might become tyrants too – and we might start to apply our own thumbscrews further along the chain of command.  Those unfortunate enough to be at the end of the pecking order have nowhere to hide – and that is a deeply distressing place to be – helpless and hopeless.

Fortunately there is a way out of the corporate torture chamber: It is to learn how to design systems to deliver the required performance specification – and learning how to do this is much easier than many believe.

For example, most assume without question that big queues and long waits are always caused by inefficient use of available capacity – because that is what their monitoring systems report. So out come the thumbscrews heralded by the chanted mantra “increase utilisation, increase utilisation”.  Unfortunately, this belief is only partially correct: low utilisation of available capacity can and does lead to big queues and long waits but there is a much more prevalent and insidious cause of long waits that has nothing to do with capacity or utilisation. These little beasties are called time-traps.

The essential feature of a time trap is that it is independent of both flow and time – it adds the same amount of delay irrespective of whether the flow is low or high and irrespective of when the work arrives. In contrast waits caused by insufficient capacity are flow and time dependent – the higher the flow the longer the wait – and the effect is cumulative over time.
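This distinction can be illustrated with a minimal simulation sketch (the function names, the 7-unit trap delay and the arrival rates are illustrative assumptions, not from the original): a time-trap adds the same fixed delay to every task whatever the flow, whereas a capacity-constrained step generates waits that grow as flow approaches the step’s capacity.

```python
import random

random.seed(1)

def time_trap_delay(arrivals, trap_delay=7.0):
    # A time-trap adds the same fixed delay to every task,
    # irrespective of how many tasks arrive or when they arrive.
    return [trap_delay for _ in arrivals]

def capacity_constrained_delays(arrivals, service_time=1.0):
    # A single-server step: each task waits until the server is free,
    # so waits accumulate as flow rises towards capacity.
    free_at = 0.0
    waits = []
    for t in arrivals:
        start = max(t, free_at)
        waits.append(start - t)
        free_at = start + service_time
    return waits

for gap in (2.0, 1.1):  # mean inter-arrival gap: low flow vs high flow
    arrivals, t = [], 0.0
    for _ in range(1000):
        t += random.expovariate(1.0 / gap)
        arrivals.append(t)
    trap = sum(time_trap_delay(arrivals)) / len(arrivals)
    queue = sum(capacity_constrained_delays(arrivals)) / len(arrivals)
    print(f"gap={gap}: time-trap wait={trap:.1f}, queue wait={queue:.1f}")
```

The time-trap wait stays at 7.0 in both runs, while the queue wait rises sharply as the arrival gap closes on the 1.0-unit service time – two very different footprints needing two very different treatments.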

Many confuse the time-trap with its close relative, the batch – but they are not the same thing at all – and most confuse both of these with capacity-constraints, which are a completely different delay-generating beast altogether.

The distinction is critical because the treatments for time-traps, batches and capacity-constraints are different – and if we get the diagnosis wrong then we will make the wrong decision, choose the wrong action, and our system will get sicker, or at least no better. The corporate pain will continue and possibly get worse – leading to even more bad behaviour and more desperate and self-destructive strategies.

So when we want to reduce lead times by reducing waiting-in-queues then the first thing we need to do is to search for the time-traps, and to do that we need to be able to recognise their characteristic footprint on our time-series charts; the vital signs of our system.

We need to learn how to create and interpret the charts – and to do that quickly we need guidance from someone who can explain what to look for and how to interpret the picture.

If we lack insight and humility and choose not to learn then we are choosing to stay in the target-tyranny-trap and our pain will continue.

Seeing Is Believing or Is It?

Do we believe what we see or do we see what we believe?  It sounds like a chicken-and-egg question – so what is the answer? One, the other or both?

Before we explore further we need to be clear about what we mean by the concept “see”.  I objectively see with my real eyes but I subjectively see with my mind’s eye. To use the word see for both is likely to result in confusion and conflict, so to side-step this we will use the word perceive for seeing-with-our-mind’s-eye.

When we are sure of our belief then we perceive what we believe. This may sound incorrect but psychologists know better – they have studied sensation and perception in great depth and they have proved that we are all susceptible to “perceptual bias”. What we believe we will see distorts what we actually perceive – and we do it unconsciously. Our expectation acts like a bit of ancient stained glass that obscures and distorts some things and paints in a false picture of the rest.  And that is just during the perception process: when we recall what we perceived we can add a whole extra layer of distortion and can actually modify our original memory! If we do that often enough we can become 100% sure we saw something that never actually happened. This is why eye-witness accounts are notoriously inaccurate!

But we do not do this all of the time.  Sometimes we are open-minded, we have no expectation of what we will see or we actually expect to be surprised by what we will see. We like the feeling of anticipation and excitement – of not knowing what will happen next.   That is the psychological basis of entertainment, of exploration, of discovery, of learning, and of improvement science.

An experienced improvement facilitator knows this – and knows how to create a context where deeply held beliefs can be explored with sensitivity and respect; how to celebrate what works and how and why it does; how to challenge what does not; and how to create novel experiences; foster creativity and release new ideas that enhance what is already known, understood and believed.

Through this exploration process our perception broadens, sharpens and becomes more attuned with reality. We achieve both greater clarity and deeper understanding – and it is these that enable us to make wiser decisions and commit to more effective action.

Sometimes we have an opportunity to see for real what we would like to believe is possible – and that can be the pivotal event that releases our passion and generates our commitment to act. It is called the Black Swan effect because seeing just one black swan dispels our belief that all swans are white.

A practical manifestation of this principle is in the rational design of effective team communication – and one of the most effective I have seen is the Communication Cell – a standardised layout of visual information that is easy-to-see and that creates an undistorted perception of reality.  I first saw it many years ago as a trainee pilot when we used it as the focus for briefings and debriefings; I saw it again a few years ago at Unipart where it is used for daily communication; and I have seen it again this week in the NHS where it is being used as part of a service improvement programme.

So if you do not believe then come and see for yourself.

Homeostasis

Improvement Science is not just about removing the barriers that block improvement and building barriers to prevent deterioration – it is also about maintaining acceptable, stable and predictable performance.

In fact most of the time this is what we need our systems to do so that we can focus our attention on the areas for improvement rather than running around keeping all the plates spinning.  Improving the ability of a system to maintain itself is a worthwhile and necessary objective.

Long term stability cannot be achieved by assuming a stable context and creating a rigid solution because the World is always changing. Long term stability is achieved by creating resilient solutions that can adjust their behaviour, within limits, to their ever-changing context.

This self-adjusting behaviour of a system is called homeostasis.

The foundation for the concept of homeostasis was first proposed by Claude Bernard (1813-1878) who unlike most of his contemporaries, believed that all living creatures were bound by the same physical laws as inanimate matter.  In his words: “La fixité du milieu intérieur est la condition d’une vie libre et indépendante” (“The constancy of the internal environment is the condition for a free and independent life”).

The term homeostasis is attributed to Walter Bradford Cannon (1871 – 1945) who was a professor of physiology at Harvard Medical School and who popularized his theories in a book called The Wisdom of the Body (1932). Cannon described four principles of homeostasis:

  1. Constancy in an open system requires mechanisms that act to maintain this constancy.
  2. Steady-state conditions require that any tendency toward change automatically meets with factors that resist change.
  3. The regulating system that determines the homeostatic state consists of a number of cooperating mechanisms acting simultaneously or successively.
  4. Homeostasis does not occur by chance, but is the result of organised self-government.

Homeostasis is therefore an emergent behaviour of a system and is the result of organised, cooperating, automatic mechanisms. We know this by another name – feedback control – which is passing data from one part of a system to guide the actions of another part. Any system that does not have homeostatic feedback loops as part of its design will be inherently unstable – especially in a changing environment.  And unstable means untrustworthy.
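A feedback loop of this kind can be sketched in a few lines of code – a minimal illustration only, in which the setpoint, the constant “cooling” disturbance and the gain value are invented numbers, not part of the original text:

```python
def regulate(setpoint, disturbance, gain=0.5, steps=50):
    # A minimal proportional feedback loop: measure the state, compare
    # it with the setpoint, and feed the error back as a corrective
    # action - Cannon's "mechanisms that resist change".
    state = setpoint
    history = []
    for _ in range(steps):
        state += disturbance       # the environment pushes the state away
        error = setpoint - state   # the feedback signal
        state += gain * error      # the corrective action
        history.append(state)
    return history

# With feedback the state settles near the setpoint of 37 despite the
# constant push; with the feedback loop removed (gain=0) the same system
# drifts without limit - unstable, and therefore untrustworthy.
with_feedback = regulate(setpoint=37.0, disturbance=-0.5)
without_feedback = regulate(setpoint=37.0, disturbance=-0.5, gain=0.0)
print(round(with_feedback[-1], 2), round(without_feedback[-1], 2))
```

The design choice being illustrated is exactly Cannon’s second principle: any tendency toward change automatically meets a factor (the error-driven correction) that resists it.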

Take driving for example. Our vehicle and its trusting passengers want to get to their desired destination on time and in one piece. To achieve this we will need to keep our vehicle within the boundaries of the road – the white lines – in order to avoid “disappointment”.

As their trusted driver our feedback loop consists of a view of the road ahead via the front windscreen; our vision connected through a working nervous system to the muscles in our arms and legs; to the steering wheel, accelerator and brakes; then to the engine, transmission, wheels and tyres and finally to the road underneath the wheels. It is quite a complicated multi-step feedback system – but an effective one. The road can change direction and unpredictable things can happen and we can adapt, adjust and remain in control.  An inferior feedback design would be to use only the rear-view mirror and to steer by looking at the white lines emerging from behind us. This design is just as complicated but it is much less effective and much less safe because it is entirely reactive.  We get no early warning of what we are approaching.  So, any system that uses the output performance as the feedback loop to the input decision step is like driving with just a rear-view mirror.  Complex, expensive, unstable, ineffective and unsafe.

As the number of steps in a process increases the more important the design of  the feedback stabilisation becomes – as does the number of ways we can get it wrong:  Wrong feedback signal, or from the wrong place, or to the wrong place, or at the wrong time, or with the wrong interpretation – any of which result in the wrong decision, the wrong action and the wrong outcome. Getting it right means getting all of it right all of the time – not just some of it right some of the time. We can’t leave it to chance – we have to design it to work.

Let us consider a real example. The NHS 18-week performance requirement.

The stream map shows a simple system with two parallel streams, A and B, each with two steps, 1 and 2. A typical example would be generic referral of patients for investigation and treatment to one of a number of consultants who offer that service. The two streams do the same thing, so the first step of the system is to decide which way to direct new tasks – to Step A1 or to Step B1. The whole system is required to deliver completed tasks in less than 18 weeks (18/52) – irrespective of which stream we direct work into.  What feedback data do we use to decide where to direct the next referral?

The do nothing option is to just allocate work without using any feedback. We might do that randomly, alternately or by some other means that are independent of the system.  This is called a push design and is equivalent to driving with your eyes shut but relying on hope and luck for a favourable outcome. We will know when we have got it wrong – but it is too late then – we have crashed the system! 

A more plausible option is to use the waiting time for the first step as the feedback signal – streaming work to the first step with the shortest waiting time. This makes sense because the time waiting for the first step is part of the lead time for the whole stream so minimising this first wait feels reasonable – and it is – BUT only in one situation: when the first steps are the constraint steps in both streams [the constraint step is the one that defines the maximum stream flow].  If this condition is not met then we are heading for trouble, and the map above illustrates why. In this case Stream A is just failing the 18-week performance target but because the waiting time for Step A1 is the shorter we would continue to load more work onto the failing stream – and literally push it over the edge. In contrast Stream B is not failing and because the waiting time for Step B1 is the longer it is not being overloaded – it may even be underloaded.  So this “plausible” feedback design can actually make the system less stable. Oops!

In our transport metaphor – this is like driving too fast at night or in fog – only being able to see what is immediately ahead – and then braking and swerving to get around corners when they “suddenly” appear and running off the road unintentionally! Dangerous and expensive.

With this new insight we might now reasonably suggest using the actual output performance to decide which way to direct new work – but this is back to driving by watching the rear-view mirror!  So what is the answer?

The solution is to design the system to use the most appropriate feedback signal to guide the streaming decision. That feedback signal needs to be forward looking, responsive and to lead to stable and equitable performance of the whole system – and it may originate from inside the system. The diagram above holds the hint: the predicted waiting time for the second step would be a better choice.  Please note that I said the predicted waiting time – which is estimated when the task leaves Step 1 and joins the back of the queue between Step 1 and Step 2. It is not the actual time the most recent task came off the queue: that is rear-view mirror gazing again.
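As a sketch of this streaming decision (the stream data, field names and numbers below are invented for illustration, not taken from the original system), the predicted wait is estimated from the work already sitting in the second-step queue – forward-looking feedback, rather than the first-step wait or the historical lead time:

```python
def predicted_wait(queue_len, mean_task_time):
    # Forward-looking feedback: a task joining the back of a queue
    # expects to wait for everything already in front of it.
    return queue_len * mean_task_time

def choose_stream(streams):
    # Route the next referral to the stream with the shortest PREDICTED
    # wait for its second step - not the shortest current wait at the
    # first step, and not the historical lead time (the rear-view mirror).
    return min(streams, key=lambda s: predicted_wait(s["queue2_len"], s["step2_time"]))

streams = [
    {"name": "A", "queue2_len": 40, "step2_time": 0.4},  # 16 weeks predicted
    {"name": "B", "queue2_len": 20, "step2_time": 0.5},  # 10 weeks predicted
]
print(choose_stream(streams)["name"])  # prints "B" - the shorter predicted wait
```

Note that a first-step-wait policy could easily have chosen Stream A here; it is the downstream prediction that keeps the loading of the two streams stable and equitable.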

When driving we look as far ahead as we can, for what we are heading towards, and we combine that feedback with our present speed to predict how much time we have before we need to slow down, when to turn, in which direction, by how much, and for how long. With effective feedback we can behave proactively, avoid surprises, and eliminate sudden braking and swerving! Our passengers will have a more comfortable ride and are more likely to survive the journey! And the better we can do all that the faster we can travel in both comfort and safety – even on an unfamiliar road.  It may be less exciting but excitement is not our objective. On time delivery is our goal.

Excitement comes from anticipating improvement – maintaining what we have already improved is rewarding.  We need both to sustain us and to free us to focus on the improvement work!