Purpose-Process-Pilot-Policy-Police

When it comes to light that things are not going well, a common reaction from the top is to send in more inspectors.

This may give the impression that something decisive is being done, but it almost never works … for two reasons.

The first is that it attempts to treat the symptom and not the cause.

The second is that the inspectors are created in the same paradigm that created the problem.

That is not to say that inspectors are not required … they are … when the system is working … not when it is failing.

The inspection police actually come last – and just before them comes the Policy that the Police enforce.

Policy comes next to last. Not first.

A rational Policy can only be written once there is proof of  effectiveness … and that requires a Pilot study … in the real world.

A small scale reality check of the rhetoric.

Cooking up Policy and delivery plans based on untested rhetoric from the current paradigm is a recipe for disappointment.


Working backwards we can see that the Pilot needs something to pilot … and that is a new Process, to replace the old process that is failing to deliver.

And any Process needs to be designed to be fit-for-purpose.  Cutting-and-pasting someone else’s design usually does not work. The design process is more important than the design it creates.

So this brings us to the first essential requirement … the Purpose.

And that is where we very often find a big gap … an error of omission … no clarity or constancy of common Purpose.

And that is where leaders must start. It is their job to clarify and communicate the common Purpose. And if the leaders are not cohesive and the board cannot agree the Purpose then the political cracks will spread through the whole organisation and destabilize it.

And with a Purpose the system and process designers can get to work.

But here we hit another gap. There is virtually no design capability in most organisations.

There is usually lots of delivery capability … but efficiently delivering an ineffective design will amplify the chaos not dissolve it.

So in parallel with clarifying the purpose, the leaders must  endorse the creation of a cohort of process designers.

And from the organisation a cohort of process inspectors … but of a different calibre … inspectors who are able to find the root causes and able to guide the improvement process because they have done this themselves many times before.

And perhaps to draw a line between the future and the past we could give them a different name – Mentors.

The Productive Meeting

The engine of improvement is a productive meeting.

Complex adaptive systems (CAS) are those that  learn and change themselves.

The books of ‘rules’ are constantly revised and refreshed as the CAS co-evolves with its environment.

System improvement is the outcome of effective actions.

Effective actions are the outcomes of wise decisions.

Wise decisions are the output of productive meetings.

So the meeting process must be designed to be productive: which means both effective and efficient.


One of the commonest niggles that individuals report is ‘Death by Meeting’.

That alone is enough evidence that our current design for meetings is flawed.


One common error of omission is lack of clarity about the purpose of the meeting.

This cause has two effects:

1. The wrong sort of meeting design is used for the problem(s) under consideration.

A meeting designed for tactical  (how to) planning will not work well for strategic (why to) problems.

2. A mixed bag of problems is dumped into the all-purpose-less meeting.

Mixing up short term tactical and long term strategic problems on a single overburdened agenda is doomed to fail.


Even when the purpose of  a meeting  is clear and agreed it is common to observe an unproductive meeting process.

The process may be unproductive because it is ineffective … there are no wise decisions made and so no effective actions implemented.

Worse even than that … decisions are made that are unwise and the actions that follow lead to unintended negative consequences.

The process may also be unproductive because it is inefficient … it requires too much input to get any output.

Of course we want both an effective and an efficient meeting process … and we need to be aware that effectiveness  comes first.  Designing the meeting process to be a more efficient generator of unwise decisions is not a good idea! The result is an even bigger problem!


So our meeting design focus is ‘How could we make wise decisions as a group?’

But if we knew the answer to that we would probably already be doing it!

So we can ask the same question another way: ‘How do we make unwise decisions as a group?’

The second question is easier to answer. We just reflect on our current experience.

Some ways we appear to unintentionally generate unwise decisions are:

a) Ensure we have no clarity of purpose – confusion is a good way to defuse effective feedback.
b) Be selective in who we invite to the meeting – group-think facilitates consensus.
c) Ignore the pragmatic, actual, reality and only use academic, theoretical, rhetoric.
d) Encourage the noisy – quiet people are non-contributors.
e) Engage in manipulative styles of behaviour – people cannot be trusted.
f) Encourage the  sceptics and cynics to critique and cull innovative suggestions.
g) Have a trump card – keep the critical ‘any other business’ to the end – just in case.

If we adopt all these tactics we can create meetings that are ‘lively’, frustrating, inefficient and completely unproductive. That of course protects us from making unwise decisions.


So one approach to designing meetings to be more productive is simply to recognise and challenge the unproductive behaviours – first as individuals and then as groups.

The place to start is within our own circle of influence – with those we trust – and to pledge to each other to consciously monitor for unproductive behaviours and to respectfully challenge them.

These behaviours are so habitual that we are often unaware that we are doing them.

And it feels strange at first but it gets easier with practice and when you see the benefits.

Seeing-by-Doing

Flow improvement-by-design requires being able to see the flows; and that is trickier than it first appears.

We can see movement very easily.

Seeing flows is not so easy – particularly when they are mixed-up and unsteady.

One of the most useful tools for visualising flow was invented over 100 years ago by Henry Laurence Gantt (1861-1919).

Henry Gantt was a mechanical engineer from Johns Hopkins University and an early associate of Frederick Taylor. Gantt parted ways with Taylor because he disagreed with the philosophy of Taylorism, which was that workers should be instructed what to do by managers (=parent-child).  Gantt saw that workers and managers could work together for the mutual benefit of themselves and their companies (=adult-adult).  At one point Gantt was invited to streamline the production of munitions for the war effort and his methods were so successful that the Ordnance Department was the most productive department of the armed forces.  Gantt favoured democracy over autocracy and is quoted as saying “Our most serious trouble is incompetence in high places. The manager who has not earned his position and who is immune from responsibility will fail time and again, at the cost of the business and the workman”.

Henry Gantt invented a number of different charts – not just the one used in project management, which was actually invented 20 years earlier by Karol Adamiecki and re-invented by Gantt. It became popularised when it was used in the management of the Hoover Dam project; but that was after Gantt’s death in 1919.

The form of Gantt chart above is called a process template chart and it is designed to show the flow of tasks through  a process. Each horizontal line is a task; each vertical column is an interval of time. The colour code in each cell indicates what the task is doing and which resource the task is using during that time interval. Red indicates that the task is waiting. White means that the task is outside the scope of the chart (e.g. not yet arrived or already departed).

The Gantt chart shows two “red wedges”.  A red wedge that is getting wider from top to bottom is the pattern created by a flow constraint.  A red wedge that is getting narrower from top to bottom is the pattern of a policy constraint.  Both are signs of poor scheduling design.
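
As a concrete illustration, here is a minimal sketch (with invented task names, steps and timings) of how a process template chart can be assembled from task event data: one row per task, one column per time interval, with “W” standing in for the red waiting cells and “.” for time outside the scope of the chart.

```python
# Minimal sketch (invented data) of a process template chart: one row per
# task, one column per time interval. Letters show which step the task is
# in, "W" marks waiting (the "red" cells) and "." is outside the chart's scope.

tasks = {
    "Task A": [("T", 0, 1), ("D", 3, 5)],   # T = triage step, D = doctor step
    "Task B": [("T", 1, 2), ("D", 5, 7)],
    "Task C": [("T", 2, 3), ("D", 7, 9)],
}
HORIZON = 10  # number of time columns on the chart

def template_chart(tasks, horizon):
    chart = {}
    for name, steps in tasks.items():
        arrive = min(start for _, start, _ in steps)
        depart = max(end for _, _, end in steps)
        row = []
        for t in range(horizon):
            if t < arrive or t >= depart:
                row.append(".")                                    # out of scope
            else:
                active = [code for code, start, end in steps if start <= t < end]
                row.append(active[0] if active else "W")           # "W" = waiting
        chart[name] = row
    return chart

for name, row in template_chart(tasks, HORIZON).items():
    print(name, " ".join(row))
```

With these made-up numbers the waits lengthen from two intervals for Task A to four for Task C – the widening red wedge of a flow constraint described above.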

A Gantt chart like this has three primary uses:
1) Diagnosis – understanding how the current flow design is creating the queues and delays.
2) Design – inventing new design options.
3) Prognosis – testing the innovative designs so the ‘fittest’ can be chosen for implementation.

These three steps are encapsulated in the third “M” of 6M Design® – the Model step.

In this example the design flaw was the scheduling policy.  When that was redesigned the outcome was zero-wait performance. No red on the chart at all.  The same number of tasks were completed in the same time with the same resources used. Just less waiting. Which means less space is needed to store the queue of waiting work (i.e. none in this case).

That this is even possible comes as a big surprise to most people. It feels counter-intuitive. It is however an easy to demonstrate fact. Our intuition tricks us.

And that reduction in the size of the queue implies a big cost reduction when the work-in-progress is perishable and needs constant attention [such as patients lying on A&E trolleys and in hospital beds].

So what was the cost of re-designing this schedule?

A pinch of humility. A few bits of squared paper and some coloured pens. A couple of hours of time. And a one-off investment in learning how to do it.  Peanuts in comparison with the recurring benefit gained.

 

A Bit Of A Shock

It comes as a bit of a shock to learn that some of our habitual assumptions and actions are worthless.

Improvement implies change. Change requires doing things differently. That requires making different decisions. And that requires innovative thinking. And that requires new knowledge.

We are comfortable with the idea of adding  new knowledge to the vast store we have already accumulated.

We are less comfortable with the idea of removing old knowledge when it has grown out-of-date.

We are shocked when we discover that some of our knowledge is just wrong and it always has been. Since the start of time.

So we need to prepare ourselves for those sorts of shocks. We need to be resilient so that we are not knocked off our feet by them.  We need to practice a different emotional reaction to our habitual fright-flight-or-fight reaction.

We need to cultivate our curiosity.

For example:

It comes as a big shock to many when they learn that it is impossible to determine the cause from an analysis of the observed effect.  Not just difficult. Impossible.

“No Way!”  We shout angrily.  “We do that all the time!”

But do we?

What we do is we observe temporal associations.  We notice that Y happened after X and we conclude that X caused Y.

This is an incorrect conclusion.  We can only conclude from this observation that ‘X may have played a part in causing Y’ but we cannot prove it.

Not by observation alone.

What we can definitely say is that Y did not cause X – because time does not go backwards. At least it does not appear to.

Another thing that does not go backwards is information.

Q: What is 2 + 2?  Four. Easy. There is only one answer. Two numbers become one.

Let us try this in reverse …

Q: What two numbers when added together give 4? Tricky. There are countless answers.  One number cannot become two without adding uncertainty. Guessing.

So when we look at the information coming out of a system – the effects – and we attempt to analyse it to reveal the causes, we hit a problem. It is impossible.

And learning that is a big shock to people who describe themselves as ‘information analysts’ … the whole foundation of what they do appears to evaporate.

So we need to outline what we can reasonably do with the retrospective analysis of effect data.

We can look for patterns.

Patterns that point to plausible causes.

Just like patterns of symptoms that point to possible diseases.

But how do we learn what patterns to look for?

Simple. We experiment. We do things and observe what happens immediately afterwards – the immediate effects. We conduct lots and lots of small experiments. And we learn the repeating patterns. “If the context is this and I do that then I always see this effect”.

If we observe a young child learning that is what we see … they are experimenting all the time.  They are curious. They delight in discovery. Novelty is fun. Learning to walk is a game.  Learning to talk is a game.  Learning to be a synergistic partner in a social group is a game.

And that same child-like curiosity is required for effective improvement.

And we know when we are doing improvement right: it feels good. It is fun. Learning is fun.

A Stab At The Vitals

[Drrring Drrring] The phone heralded the start of the weekly ISP mentoring session.

<Bob> Hi Leslie, how are you today?

<Leslie> Hi Bob. To be honest I am not good. I am drowning. Drowning in data!

<Bob> Oh dear! I am sorry to hear that. Can I help? What led up to this?

<Leslie> Well, it was sort of triggered by our last chat, after you opened my eyes to the fact that we habitually throw most of our valuable information away by thresholding, aggregating and normalising.  Then we wonder why we make poor decisions … and then we get frustrated because nothing seems to improve.

<Bob> OK. What happened next?

<Leslie> I phoned our Performance Team and asked for some raw data. Three months worth.

<Bob> And what was their reaction?

<Leslie> They said “OK, here you go!” and sent me a twenty megabyte Excel spreadsheet that clogged my email inbox!  I did manage to unclog it eventually by deleting loads of old junk.  But I could swear that I heard the whole office laughing as they hung up the phone! Maybe I am paranoid?

<Bob> OK. And what happened next?

<Leslie> I started drowning!  The mega-file had a row of data for every patient that has attended A&E for the last three months as I had requested, but there were dozens of columns!  Trying to slice-and-dice it was a nightmare! My computer was smoking and each step took ages for it to complete.  In the end I gave up in frustration.  I now have a lot more respect for the Performance Team I can tell you! They do this for a living?

<Bob> OK.  It sounds like you are ready for a Stab At the Vitals.

<Leslie> What?  That sounds rather piratical!  Are you making fun of my slicing-and-dicing metaphor?

<Bob> No indeed.  I am deadly serious!  Before we leap into the data ocean we need to be able to swim; and we also need a raft that will keep us afloat;  and we need a sail to power our raft; and we need a way to navigate our raft to our desired destination.

<Leslie> OK. I like the nautical metaphor but how does it help?

<Bob> Let me translate. Learning to use system behaviour charts is equivalent to learning the skill of swimming. We have to do that first and practice until we are competent and confident.  Let us call our raft “ISP” – you are already aboard.  The sail you also have already – your Excel software.  The navigation aid is what I refer to as Vitals. So we need to have a “stab at the vitals”.

<Leslie> Do you mean we use a combination of time-series charts, ISP and Excel to create a navigation aid that helps avoid the Depths of Data and the Rocks of DRAT?

<Bob> Exactly.

<Leslie> Can you demonstrate with an example?

<Bob> Sure. Send me some of your data … just the arrival and departure events for one day – a typical one.

<Leslie> OK … give me a minute!  …  It is on its way.  How long will it take for you to analyse it?

<Bob> About 2 seconds. OK, here is your email … um … copy … paste … copy … reply

<Leslie> What the ****? That was quick! Let me see what this is … the top left chart is the demand, activity and work-in-progress for each hour; the top right chart is the lead time by patient plotted in discharge order; the table bottom left includes the 4 hour breach rate.  Those I do recognise. What is the chart on the bottom right?

<Bob> It is a histogram of the lead times … and it shows a problem.  Can you see the spike at 225 to 240 minutes?

<Leslie> Is that the fabled Horned Gaussian?

<Bob> Yes.  That is the sign that the 4-hour performance target is distorting the behaviour of the system.  And this is yet another reason why the  Breach Rate is a dangerous management metric. The adaptive reaction it triggers amplifies the variation and fuels the chaos.

<Leslie> Wow! And you did all that in Excel using my data in two seconds?  That must need a whole host of clever macros and code!

<Bob> “Yes” it was done in Excel and “No” it does not need any macros or code.  It is all done using simple formulae.

<Leslie> That is fantastic! Can you send me a copy of your Excel file?

<Bob> Nope.

<Leslie>Whaaaat? Why not? Is this some sort of evil piratical game?

<Bob> Nope. You are going to learn how to do this yourself – you are going to build your own Vitals Chart Generator – because that is the only way to really understand how it works.

<Leslie> Phew! You had me going for a second there! Bring it on! What do I do next?

<Bob> I will send you the step-by-step instructions of how to build, test and use a Vitals Chart Generator.

<Leslie> Thanks Bob. I cannot wait to get started! Weigh anchor and set the sails! Ha’ harrrr me hearties.
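
A minimal sketch of the calculations behind such a chart – written in Python rather than Excel, and fed with a handful of invented arrival and departure times – might look like this. It is only the same arithmetic (hourly demand, activity and work-in-progress, lead times in discharge order, the 4-hour breach rate, and a lead-time histogram), not the Vitals Chart Generator itself.

```python
# Minimal sketch of the Vitals-style calculations, assuming nothing more than
# a list of (arrival, departure) times in hours for one day. The timestamps
# below are invented purely for illustration.
from collections import Counter

events = [(8.1, 10.9), (8.4, 12.2), (9.0, 9.8), (9.5, 13.3),
          (10.2, 11.1), (10.8, 14.6), (11.3, 15.1), (12.0, 13.1)]

# Lead time per patient, in discharge order (the top-right chart)
completed = sorted(events, key=lambda e: e[1])
lead_times = [depart - arrive for arrive, depart in completed]

# Hourly demand (arrivals), activity (departures) and work-in-progress (top-left)
hours = range(8, 16)
demand   = Counter(int(a) for a, _ in events)
activity = Counter(int(d) for _, d in events)
wip      = {h: sum(1 for a, d in events if a < h + 1 and d > h) for h in hours}

# The 4-hour breach rate (the one number a DRAT report would keep)
breach_rate = sum(1 for lt in lead_times if lt > 4) / len(lead_times)

# Lead-time histogram in 15-minute bins (bottom-right) - a spike in the bin
# just under 4 hours is the "Horned Gaussian" described in the dialogue
histogram = Counter(round(lt / 0.25) * 0.25 for lt in lead_times)

print("lead times (h):", [round(lt, 2) for lt in lead_times])
print("work-in-progress by hour:", wip)
print("lead-time histogram (h):", dict(sorted(histogram.items())))
print(f"4-hour breach rate: {breach_rate:.0%}")
```

With these made-up timestamps the histogram piles up in the bin just under 4 hours – the shape Bob refers to as the Horned Gaussian.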

Ratio Hazards

[Bzzzzz Bzzzzz] Bob’s phone was on silent but the desktop amplified the vibration and heralded the arrival of Leslie’s weekly ISP coaching call.

<Bob> Hi Leslie.  How are you today and what would you like to talk about?

<Leslie> Hi Bob.  I am well and I have an old chestnut to roast today … target-driven-behaviour!

<Bob> Excellent. That is one of my favorite topics. Is there a specific context?

<Leslie> Yes.  The usual desperate directive from on-high exhorting everyone to “work harder to hit the target” and usually accompanied by a RAG table of percentages that show just who is failing and how badly they are doing.

<Bob> OK. Red RAGs irritating the Bulls eh? Percentages eh? Have we talked about Ratio Hazards?

<Leslie> We have talked about DRATs … Delusional Ratios and Arbitrary Targets as you call them. Is that the same thing?

<Bob> Sort of. What happened when you tried to explain DRATs to those who are reacting to these ‘desperate directives’?

<Leslie> The usual reply is ‘Yes, but that is how we are required to report our performance to our Commissioners and Regulatory Bodies.’

<Bob> And are the key performance indicators that are reported upwards and outwards also being used to manage downwards and inwards?  If so, then that is poor design and is very likely to be contributing to the chaos.

<Leslie> Can you explain that a bit more? It feels like a very fundamental point you have just made.

<Bob> OK. To do that let us work through the process by which the raw data from your system is converted into the externally reported KPI. Choose any one of your KPIs.

<Leslie> Easy! The 4-hour A&E target performance.

<Bob> What is the raw data that goes in to that?

<Leslie> The percentage of patients who breach 4-hours per day.

<Bob> And where does that ratio come from?

<Leslie> Oh! I see what you mean. That comes from a count of the number of patients who are in A&E for more than 4 hours divided by a count of the number of patients who attended.

<Bob> And where do those counts come from?

<Leslie> We calculate the time the patient is in A&E and use the 4-hour target to label them as breaches or not.

<Bob> And what data goes into the calculation of that time?

<Leslie>The arrival and departure times for each patient. The arrive and depart events.

<Bob>OK. Is that the raw data?

<Leslie>Yes. Everything follows from that.

<Bob> Good.  Each of these two events is a time – which is a continuous metric.  In principle, we could record it to any degree of precision we like – milliseconds if we had a good enough clock.

<Leslie> Yes. We record it to an accuracy of seconds – it is when the patient is ‘clicked through’ on the computer.

<Bob> Careful Leslie, do not confuse precision with accuracy. We need both.

<Leslie> Oops! Yes I remember we had that conversation before.

<Bob> And how often is the A&E 4-hour target KPI reported externally?

<Leslie> Quarterly. We either succeed or fail each quarter of the financial year.

<Bob> That is a binary metric. An “OK or not OK”. No gray zone.

<Leslie> Yes. It is rather blunt but that is how we are contractually obliged to report our performance.

<Bob> OK. And how many patients per day on average come to A&E?

<Leslie> About 200 per day.

<Bob> So the data analysis process is boiling down about 36,000 pieces of continuous data into one Yes-or-No bit of binary data.

<Leslie> Yes.

<Bob> And then that one bit is used to drive the action of the Board: if it is ‘OK last quarter’ then there is no ‘desperate directive’ and if it is a ‘Not OK last quarter’ then there is.

<Leslie> Yes.

<Bob> So you are throwing away 99.9999% of your data and wondering why what is left is not offering much insight into what to do.

<Leslie>Um, I guess so … when you say it like that.  But how does that relate to your phrase ‘Ratio Hazards’?

<Bob> A ratio is just one of the many ways that we throw away information. A ratio requires two numbers to calculate it; and it gives one number as an output so we are throwing half our information away.  And this is an irreversible act.  Two specific numbers will give one ratio; but that ratio can be created by an infinite number of possible pairs of numbers and we have no way of knowing from the ratio what specific pair was used to create it.

<Leslie> So a ratio is an exercise in obfuscation!

<Bob> Well put! And there is an even more data-wasteful behaviour that we indulge in. We aggregate.

<Leslie> By that do you mean we summarise a whole set of numbers with an average?

<Bob> Yes. When we average we throw most of the data away and when we average over time then we abandon our ability to react in a timely way.

<Leslie>The Flaw of Averages!

<Bob> Yes. One of them. There are many.

<Leslie>No wonder it feels like we are flying blind and out of control!

<Bob> There is more. There is an even worse data-wasteful behaviour. We threshold.

<Leslie>Is that when we use a target to decide if the lead time is OK or Not OK?

<Bob> Yes. And using an arbitrary target makes it even worse.

<Leslie> Ah ha! I see what you are getting at.  The raw event data that we painstakingly collect is a treasure trove of information and potential insight that we could use to help us diagnose, design and deliver a better service. But we throw away all but one single solitary binary digit when we put it through the DRAT Processor.

<Bob> Yup.

<Leslie> So why could we not do both? Why could we not use the raw data for ourselves and the DRAT-processed data for external reporting?

<Bob> We could.  So what is stopping us doing just that?

<Leslie> We do not know how to effectively and efficiently interpret the vast ocean of raw data.

<Bob> That is what a time-series chart is for. It turns the thousands of pieces of valuable information into a picture that tells a story – without throwing the information away in the process. We just need to learn how to interpret the pictures.

<Leslie> Wow! Now I understand much better why you insist we ‘plot the dots’ first.

<Bob> And now you understand the Ratio Hazards a bit better too.

<Leslie> Indeed so.  And once again I have much to ponder on. Thank you again Bob.
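
As an illustration of the ‘DRAT processor’ described above, here is a minimal sketch of the whole pipeline from raw events to a single quarterly bit. The lead times are randomly generated, and the 5% quarterly breach standard is an assumption used only to show the final thresholding step.

```python
# Minimal sketch of the "DRAT processor" described above: ~90 days of raw
# arrive/depart events boiled down to a single quarterly pass/fail bit.
# The lead times are randomly generated purely for illustration.
import random
random.seed(1)

DAYS, PATIENTS_PER_DAY, TARGET_H = 90, 200, 4.0

# Step 0: raw data - two timestamps per patient (represented here already
# as the lead times they would yield)
daily_lead_times = [[random.lognormvariate(0.9, 0.6) for _ in range(PATIENTS_PER_DAY)]
                    for _ in range(DAYS)]
raw_values = DAYS * PATIENTS_PER_DAY * 2          # one arrive + one depart event each

# Step 1: threshold each lead time against the 4-hour target (continuous -> binary)
daily_breaches = [sum(lt > TARGET_H for lt in day) for day in daily_lead_times]

# Step 2: ratio - daily breach percentage (two counts -> one number)
daily_ratios = [b / PATIENTS_PER_DAY for b in daily_breaches]

# Step 3: aggregate over the quarter (90 numbers -> one number)
quarterly_ratio = sum(daily_ratios) / DAYS

# Step 4: threshold again against an assumed 5% contractual standard (-> one bit)
quarter_ok = quarterly_ratio <= 0.05

print(f"raw data points collected : {raw_values}")
print(f"quarterly breach rate     : {quarterly_ratio:.1%}")
print(f"reported externally       : {'OK' if quarter_ok else 'Not OK'}  (1 bit)")
```

Each step – threshold, ratio, aggregate, threshold again – discards information, and none of them can be reversed.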

The Learning Labyrinth

There is an amazing phenomenon happening right now – a whole generation of people are learning to become system designers and they are doing it by having fun.

There is a game called Minecraft which millions of people of all ages are rapidly discovering.  It is creative, fun and surprisingly addictive.

This is what it says on the website.

“Minecraft is a game about breaking and placing blocks. At first, people built structures to protect against nocturnal monsters, but as the game grew players worked together to create wonderful, imaginative things.”

The principle is that before you can build you have to dig … you have to gather the raw materials you need … and then you have to use what you have gathered in novel and imaginative ways.  You need tools too, and you need to learn what they are used for, and what they are useless for. And the quickest way to learn the necessary survival and creative  skills is by exploring, experimenting, seeking help, and sharing your hard-won knowledge and experience with others.

The same principles hold in the real world of Improvement Science.

The treasure we are looking for is less tangible … but no less difficult to find … unless you know where to look.

The treasure we seek is learning; how to achieve significant and sustained improvement on all dimensions.

And there is a mountain of opportunity that we can mine into. It is called Reality.

And when we do that we uncover nuggets of knowledge, jewels of understanding, and pearls of wisdom.

There are already many tunnels that have been carved out by others who have gone before us. They branch and join to form a vast cave network. A veritable labyrinth. Complicated and not always well illuminated or signposted.

And stored in the caverns is a vast treasure trove of experience we can dip into – and an even greater hoard of new treasure waiting to be discovered.

But even now there is no comprehensive map of the labyrinth. So it is easy to get confused and to get lost. Not all junctions have signposts and not all the signposts are correct. There are caves with many entrances and exits, there are blind-ending tunnels, and there are many hazards and traps for the unwary.

So to enter the Learning Labyrinth and to return safely with Improvement treasure we need guides. Those who know the safe paths and the unsafe ones. And as we explore we all need to improve the signage and add warning signs where hazards lurk.

And we need to work at the edge of knowledge  to extend the tunnels further. We need to seal off the dead-ends, and to draw and share up-to-date maps of the paths.

We need to grow a Community of Improvement Science Minecrafters.

And the first things we need are some basic improvement tools and techniques … and they can be found here.

Firewall

Fires are destructive, indifferent, and they can grow and spread very fast.

The picture is of  the Buncefield explosion and conflagration that occurred on 11th December 2005 near Hemel Hempstead in the UK.  The root cause was a faulty switch that failed to prevent tank number 912 from being overfilled. This resulted in an initial 300 gallon petrol spill which created the perfect conditions for an air-fuel explosion.  The explosion was triggered by a spark and devastated the facility. Over 2000 local residents needed to be evacuated and the massive fuel fire took days to bring under control. The financial cost of the accident has been estimated to run into tens of millions of pounds.

The Great Fire of London in September 1666 led directly to the adoption of new building standards – notably brick and stone instead of wood because they are more effective barriers to fire.

A common design to limit the spread of a fire is called a firewall.

And we use the same principle in computer systems to limit the spread of damage when a computer system goes out of control.


Money is the fuel that keeps the wheels of healthcare systems turning.  And healthcare is an expensive business so every drop of cash-fuel is precious.  Healthcare is also a risky business – from both a professional and a financial perspective. Mistakes can quickly lead to loss of livelihood, expensive recovery plans and huge compensation claims. The social and financial equivalent of a conflagration.

Financial fires spread just like real ones – quickly. So it makes good sense not to have all the cash-fuel in one big pot.  It makes sense to distribute it to smaller pots – in each department – and to distribute the cash-fuel intermittently. These cash-fuel silos are separated by robust financial firewalls and they are called Budgets.

The social sparks that ignite financial fires are called ‘Niggles‘.  They are very numerous but we have effective mechanisms for containing them. The problem happens when multiple sparks happen at the same time and place and together create a small chain reaction. Then we get a complaint. A ‘Not Again‘.  And we are required to spend some of our precious cash-fuel investigating and apologizing.  We do not deal with the root cause, we just scrape the burned toast.

And then one day the chain reaction goes a bit further and we get a ‘Near Miss‘.  That has a different  reporting mechanism so it stimulates a bigger investigation and it usually culminates in some recommendations that involve more expensive checking, documenting and auditing of the checking and documentation.  The root cause, the Niggles, go untreated – because there are too many of them.

But this check-and-correct reaction is also  expensive and we need even more cash-fuel to keep the organizational engine running – but we do not have any more. Our budgets are capped. So we start cutting corners. A bit here and a bit there. And that increases the risk of more Niggles, Not Agains, and Near Misses.

Then the ‘Never Event‘ happens … a Safety and Quality catastrophe that triggers the financial conflagration and toasts the whole organization.


So although our financial firewalls, the Budgets, are partially effective they also have downsides:

1. Paradoxically they can create the perfect condition for a financial conflagration when too small a budget leads to corner-cutting on safety.

2. They lead to ‘off-loading’ which means that too-expensive-to-solve problems are chucked over the financial firewalls into the next department.  The cost is felt downstream of the source – in a different department – and is often much larger. The sparks are blown downwind.

For example: a waiting list management department is under financial pressure and is running short staffed as a recruitment freeze has been imposed. The overburdening of the remaining staff leads to errors in booking patients for operations. The knock-on effect is that patients are cancelled on the day and the allocated operating theatre time is wasted.  The additional cost of wasted theatre time is orders of magnitude greater than the cost-saving achieved in the upstream stage.  The result is a lower quality service, a greater cost to the whole system, and the risk that safety corners will be cut leading to a Near Miss or a Never Event.

The nature of real systems is that small perturbations can be rapidly amplified by a ‘tight’ financial design to create a very large and expensive perturbation called a ‘catastrophe’.  A silo-based financial budget design with a cost-improvement thumbscrew feature increases the likelihood of this universally unwanted outcome.

So if we cannot use one big fuel tank or multiple, smaller, independent fuel tanks then what is the solution?

We want to ensure smooth responsiveness of our healthcare engine, we want healthcare  cash-fuel-efficiency and we want low levels of toxic emissions (i.e. complaints) at the same time. How can we do that?

Fuel-injection.

Electronic Fuel Injection (EFI) designs have now replaced the old-fashioned, inefficient, high-emission carburettor-based engines of the 1970s and 1980s.

The safer, more effective and more efficient cash-flow design is to inject the cash-fuel where and when it is needed and in just the right amount.

And to do that we need to have a robust, reliable and rapid feedback system that controls the cash-injectors.

But we do not have such a feedback system in healthcare so that is where we need to start our design work.

Designing an automated cash-injection system requires understanding how the Seven Flows of any  system work together and the two critical flows are Data Flow and Cash Flow.

And that is possible.

Our Iceberg Is Melting

[Dring Dring] The telephone soundbite announced the start of the coaching session.

<Bob> Good morning Leslie. How are you today?

<Leslie> I have been better.

<Bob> You seem upset. Do you want to talk about it?

<Leslie> Yes, please. The trigger for my unhappiness is that last week I received an email demanding that I justify the time I spend doing improvement work and  a summons to a meeting to ‘discuss some issues that have been raised‘.

<Bob> OK. I take it that you do not know what or who has triggered this inquiry.

<Leslie> You are correct. My working hypothesis is that it is the end of the financial year and budget holders are looking for opportunities to do some pruning – to meet their cost improvement program targets!

<Bob> So what is the problem? You have shared the output of your work. You have demonstrated significant improvements in safety, flow, quality and productivity and you have described both them and the methodology clearly.

<Leslie> I know. That is why I was so upset to get this email. It is as if everything that we have achieved has been ignored. It is almost as if it is resented.

<Bob> Ah! You may well be correct.  This is the nature of paradigm shifts. Those who have the greatest vested interest in the current paradigm get spooked when they feel it start to wobble. Each time you share the outcome of your improvement work you create emotional shock-waves. The effects are cumulative and eventually there will be a ‘crisis of confidence’ in those who feel most challenged by the changes that you are demonstrating are possible.  The whole process is well described in Thomas Kuhn’s The Structure of Scientific Revolutions. That is not a book for an impatient reader though – for those who prefer something lighter I recommend “Our Iceberg is Melting” by John Kotter.

<Leslie> Thanks Bob. I will get a copy of Kotter’s book – that sounds more my cup of tea. Will that tell me what to do?

<Bob> It is a parable – a fictional story of a colony of penguins who discover that their iceberg is melting and are suddenly faced with a new and urgent potential risk of not surviving the storms of the approaching winter. It is not a factual account of a real crisis or a step-by-step recipe book for solving all problems  – it describes some effective engagement strategies in general terms.

<Leslie> I will still read it. What I need is something more specific to my actual context.

<Bob> This is an improvement-by-design challenge. The only difference from the challenges you have done already is that this time the outcome you are looking for is a smooth transition from the ‘old’ paradigm to the ‘new’ one.  Kuhn showed that this transition will not start to happen until there is a new paradigm because individuals choose to take the step from the old to the new and they do not all do that at the same time.  Your work is demonstrating that there is a new paradigm. Some will love that message, some will hate it. Rather like Marmite.

<Leslie> Yes, that makes sense.  But how do I deal with an unseen enemy who is stirring up trouble behind my back?

<Bob> Are you referring to those who have ‘raised some issues‘?

<Leslie> Yes.

<Bob> They will be the ones who have most invested in the current status quo and they will not be in senior enough positions to challenge you directly so they are going around spooking the inner Chimps of those who can. This is expected behaviour when the relentlessly changing reality starts to wobble the concrete current paradigm.

<Leslie> Yes! That is  exactly how it feels.

<Bob> The danger lurking here is that your inner Chimp is getting spooked too and is conjuring up Gremlins and Goblins from the Computer! Left to itself your inner Chimp will steer you straight into the Victim Vortex.  So you need to take it for a long walk, let it scream and wave its hairy arms about, listen to it, and give it lots of bananas to calm it down. Then put your calmed-down Chimp into its cage and your ‘paradigm transition design’ into the Computer. Only then will you be ready for the ‘so-justify-yourself’ meeting.  At the meeting your Chimp will be out of its cage like a shot and interpreting everything as a threat. It will disable you and go straight to the Computer for what to do – and it will read your design and follow the ‘wise’ instructions that you have put in there.

<Leslie> Wow! I see how you are using the Chimp Paradox metaphor to describe an incredibly complex emotional process in really simple language. My inner Chimp is feeling happier already!

<Bob> And remember that you are all in the same race. Your collective goal is to cross the finish line as quickly as possible with the least chaos, pain and cost.  You are not in a battle – that is lose-lose inner Chimp thinking.  The only message that your interrogators must get from you is ‘Win-win is possible and here is how we can do it‘. That will be the best way to soothe their inner Chimps – the ones who fear that you are going to sink their boat by rocking it.

<Leslie> That is really helpful. Thank you again Bob. My inner Chimp is now snoring gently in its cage and while it is asleep I have some Improvement-by-Design work to do and then some Computer programming.

Jiggling

[Dring] Bob’s laptop signaled the arrival of Leslie for their regular ISP remote coaching session.

<Bob> Hi Leslie. Thanks for emailing me with a long list of things to choose from. It looks like you have been having some challenging conversations.

<Leslie> Hi Bob. Yes indeed! The deepening gloom and the last few blog topics seem to be polarising opinion. Some are claiming it is all hopeless and others, perhaps out of desperation, are trying the FISH stuff for themselves and discovering that it works.  The ‘What Ifs’ are engaged in a war of words with the ‘Yes Buts’.

<Bob> I like your metaphor! Where would you like to start on the long list of topics?

<Leslie> That is my problem. I do not know where to start. They all look equally important.

<Bob> So, first we need a way to prioritise the topics to get the horse-before-the-cart.

<Leslie> Sounds like a good plan to me!

<Bob> One of the problems with the traditional improvement approaches is that they seem to start at the most difficult point. They focus on ‘quality’ first – and to be fair that has been the mantra from the gurus like W.E.Deming. ‘Quality Improvement’ is the Holy Grail.

<Leslie>But quality IS important … are you saying they are wrong?

<Bob> Not at all. I am saying that it is not the place to start … it is actually the third step.

<Leslie>So what is the first step?

<Bob> Safety. Eliminating avoidable harm. Primum Non Nocere. The NoNos. The Never Events. The stuff that generates the most fear for everyone. The fear of failure.

<Leslie> You mean having a service that we can trust not to harm us unnecessarily?

<Bob> Yes. It is not a good idea to make an unsafe design more efficient – it will deliver even more cumulative harm!

<Leslie> OK. That makes perfect sense to me. So how do we do that?

<Bob> It does not actually matter.  Well-designed and thoroughly field-tested checklists have been proven to be very effective in the ‘ultra-safe’ industries like aerospace and nuclear.

<Leslie> OK. Something like the WHO Safe Surgery Checklist?

<Bob> Yes, that is a good example – and it is well worth reading Atul Gawande’s book about how that happened – “The Checklist Manifesto“.  Gawande is a surgeon who had published a lot on improvement and even so was quite skeptical that something as simple as a checklist could possibly work in the complex world of surgery. In his book he describes a number of personal ‘Ah Ha!’ moments that illustrate a phenomenon that I call Jiggling.

<Leslie> OK. I have made a note to read Checklist Manifesto and I am curious to learn more about Jiggling – but can we stick to the point? Does quality come after safety?

<Bob> Yes, but not immediately after. As I said, Quality is the third step.

<Leslie> So what is the second one?

<Bob> Flow.

There was a long pause – and just as Bob was about to check that the connection had not been lost – Leslie spoke.

<Leslie> But none of the Improvement Schools teach basic flow science.  They all focus on quality, waste and variation!

<Bob> I know. And attempting to improve quality before improving flow is like papering the walls before doing the plastering.  Quality cannot grow in a chaotic context. The flow must be smooth before that. And the fear of harm must be removed first.

<Leslie> So the ‘Improving Quality through Leadership‘ bandwagon that everyone is jumping on will not work?

<Bob> Well that depends on what the ‘Leaders’ are doing. If they are leading the way to learning how to design-for-safety and then design-for-flow then the bandwagon might be a wise choice. If they are only facilitating collaborative agreement and group-think then they may be making an unsafe and ineffective system more efficient which will steer it over the edge into faster decline.

<Leslie>So, if we can stabilize safety using checklists do we focus on flow next?

<Bob>Yup.

<Leslie> OK. That makes a lot of sense to me. So what is Jiggling?

<Bob> This is Jiggling. This conversation.

<Leslie> Ah, I see. I am jiggling my understanding through a series of ‘nudges’ from you.

<Bob>Yes. And when the learning cogs are a bit rusty, some Improvement Science Oil and a bit of Jiggling is more effective and much safer than whacking the caveman wetware with a big emotional hammer.

<Leslie>Well the conversation has certainly jiggled Safety-Flow-Quality-and-Productivity into a sensible order for me. That has helped a lot. I will sort my to-do list into that order and start at the beginning. Let me see. I have a plan for safety, now I can focus on flow. Here is my top flow niggle. How do I design the resource capacity I need to ensure the flow is smooth and the waiting times are short enough to avoid ‘persecution’ by the Target Time Police?

<Bob> An excellent question! I will send you the first ISP Brainteaser that will nudge us towards an answer to that question.

<Leslie> I am ready and waiting to have my brain-teased and my niggles-nudged!

The Time Trap

[Hmmmmmm]

The desk amplified the vibration of Bob’s smartphone as it signaled the time for his planned e-mentoring session with Leslie.

<Bob> Hi Leslie, right-on-time, how are you today?

<Leslie> Good thanks Bob. I have a specific topic to explore if that is OK. Can we talk about time-traps?

<Bob> OK – do you have a specific reason for choosing that topic?

<Leslie> Yes. The blog last week about ‘Recipe for Chaos‘ set me thinking and I remembered that time-traps were mentioned in the FISH course but I confess, at the time, I did not understand them. I still do not.

<Bob> Can you describe how the ‘Recipe for Chaos‘ blog triggered this renewed interest in time-traps?

<Leslie> Yes – the question that occurred to me was: ‘Is a time-trap a recipe for chaos?’

<Bob> A very good question! What do you feel the answer is?

<Leslie> I feel that time-traps can and do trigger chaos but I cannot explain how. I feel confused.

<Bob> Your intuition is spot on – so can you localize the source of your confusion?

<Leslie> OK. I will try. I confess I got the answer to the MCQ correct by guessing – and I wrote down the answer when I eventually guessed correctly – but I did not understand it.

<Bob> What did you write down?

<Leslie> “The lead time is independent of the flow”.

<Bob> OK. That is accurate – though I agree it is perhaps a bit abstract. One source of confusion may be that there are different causes of time-traps and there is a lot of overlap with other chaos-creating policies. Do you have a specific example we can use to connect theory with reality?

<Leslie> OK – that might explain my confusion.  The example that jumped to mind is the RTT target.

<Bob> RTT?

<Leslie> Oops – sorry – I know I should not use undefined abbreviations. Referral to Treatment Time.

<Bob> OK – can you describe what you have mapped and measured already?

<Leslie> Yes.  When I plot the lead-time for patients in date-of-treatment order the process looks stable but the histogram is multi-modal with a big spike just underneath the RTT target of 18 weeks. What you describe as the ‘Horned Gaussian’ – the sign that the performance target is distorting the behaviour of the system and the design of the system is not capable on its own.

<Bob> OK, and have you investigated why there is not just one spike?

<Leslie> Yes – the factor that best explains that is the ‘priority’ of the referral.  The  ‘urgents’ jump in front of the ‘soons’ and both jump in front of the ‘routines’. The chart has three overlapping spikes.

<Bob> That sounds like a reasonable policy for mixed-priority demand. So what is the problem?

<Leslie> The ‘Routine’ group is the one that clusters just underneath the target. The lead time for routines is almost constant but most of the time those patients sit in one queue or another being leap-frogged by other higher-priority patients. Until they become high-priority – then they do the leap frogging.

<Bob> OK – and what is the condition for a time trap again?

<Leslie> That the lead time is independent of flow.

<Bob> Which implies?

<Leslie> Um. Let me think. That the flow can be varying but the lead time stays the same?

<Bob> Yup. So is the flow of routine referrals varying?

<Leslie> Not over the long term. The chart is stable.

<Bob> What about over the short term? Is demand constant?

<Leslie> No of course not – it varies – but that is expected for all systems. Constant means ‘over-smoothed data’ – the Flaw of Averages trap!

<Bob> OK. And how close is the average lead time for routines to the RTT maximum allowable target?

<Leslie> Ah! I see what you mean. The average is about 17 weeks and the target is 18 weeks.

<Bob> So, what is the flow variation on a week-to-week time scale?

<Leslie> Demand or Activity?

<Bob> Both.

<Leslie> H’mm – give me a minute to re-plot flow as a weekly-aggregated chart. Oh! I see what you mean – the weekly activity and demand are both varying widely and they are not in sync with each other. Work in progress must be wobbling up and down a lot! So how can the lead time variation be so low?

<Bob> What do the flow histograms look like?

<Leslie> Um. Just a second. That is weird! They are both bi-modal with peaks at the extremes and not much in the middle – the exact opposite of what I expected to see! I expected a centered peak.

<Bob> What you are looking at is the characteristic flow fingerprint of a chaotic system – it is called ‘thrashing’.

<Leslie> So, I was right!

<Bob> Yes. And now you know the characteristic pattern to look for. So, what is the policy design flaw here?

<Leslie> The DRAT – the delusional ratio and arbitrary target?

<Bob> That is part of it – that is the external driver policy. The one you cannot change easily. What is the internally driven policy? The reaction to the DRAT?

<Leslie> The policy of leaving routine patients until they are about to breach then re-classifying them as ‘urgent’.

<Bob> Yes! It is called a ‘Prevarication Policy’ and it is surprisingly and uncomfortably common. Ask yourself – do you ever prevaricate? Do you ever put off ‘lower priority’ tasks until later and then not fill the time freed up with ‘higher priority tasks’?

<Leslie> OMG! I do that all the time! I put low priority and unexciting jobs on a ‘to do later’ heap but I do not sit idle – I do then focus on the high priority ones.

<Bob> High priority for whom?

<Leslie> Ah! I see what you mean. High priority for me. The ones that give me the biggest reward! The fun stuff or the stuff that I get a pat on the back for doing or that I feel good about.

<Bob> And what happens?

<Leslie> The heap of ‘no-fun-for-me-to-do’ jobs gets bigger and I await the ‘reminders’ and then have to rush round in a mad panic to avoid disappointment, criticism and blame. It feels chaotic. I get grumpy. I make more mistakes and I deliver lower-quality work. If I do not get a reminder I assume that the job was not that urgent after all and if I am challenged I claim I am too busy doing the other stuff.

<Bob> And have you avoided disappointment?

<Leslie> Ah! No – that I needed to be reminded meant that I had already disappointed. And when I do not get a reminder it does not prove that I have not disappointed either. Most people blame rather than complain. I have just managed to erode other people’s trust in my reliability. I have disappointed myself. I have achieved exactly the opposite of what I intended. Drat!

<Bob> So, what is the reason that you work this way? There will be a reason.  A good reason.

<Leslie> That is a very good question! I will reflect on that because I believe it will help me understand why others behave this way too.

<Bob> OK – I will be interested to hear your conclusion.  Let us return to the question. What is the  downside of a ‘Prevarication Policy’?

<Leslie> It creates stress, chaos, fire-fighting, last minute changes, increased risk of errors, more work and it erodes quality, confidence and trust.

<Bob> Indeed so – and the impact on productivity?

<Leslie> The activity falls, the system productivity falls, revenue falls, queues increase, waiting times increase and the chaos increases!

<Bob> And?

<Leslie> We treat the symptoms by throwing resources at the problem – waiting list initiatives – and that pushes our costs up. Either way we are heading into a spiral of decline and disappointment. We do not address the root cause.

<Bob> So what is the way out of chaos?

<Leslie> Reduce the volume on the destabilizing feedback loop? Stop the managers meddling!

<Bob> Or?

<Leslie> Eh? I do not understand what you mean. The blog last week said management meddling was the problem.

<Bob> It is a problem. How many feedback loops are there?

<Leslie> Two – that need to be balanced.

<Bob> So, what is another option?

<Leslie> OMG! I see. Turn UP the volume of the stabilizing feedback loop!

<Bob> Yup. And that is a lot easier to do in reality. So, that is your other challenge to reflect on this week. And I am delighted to hear you using the terms ‘stabilizing feedback loop’ and ‘destabilizing feedback loop’.

<Leslie> Thank you. That was a lesson for me after last week – when I used the terms ‘positive and negative feedback’ it was interpreted in the emotional context – positive feedback as encouragement and negative feedback as criticism.  So ‘reducing positive feedback’ in that sense is the exact opposite of what I was intending. So I switched my language to using ‘stabilizing and destabilizing’ feedback loops that are much less ambiguous and the confusion and conflict disappeared.

<Bob> That is very useful learning Leslie … I think I need to emphasize that distinction more in the blog. That is one advantage of online media – it can be updated!

 <Leslie> Thanks again Bob!  And I have the perfect opportunity to test a new no-prevarication-policy design – in part of the system that I have complete control over – me!
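
As a footnote to this conversation, here is a deliberately crude toy simulation (all numbers invented) of the ‘Prevarication Policy’: routine referrals are leap-frogged by urgents and soons every week until they are about to breach the 18-week target, at which point they are re-classified as urgent. The point is the mechanism rather than the exact figures: given a standing backlog, the routine lead time is set by the escalation policy rather than by the week-to-week flow.

```python
# Toy simulation (all numbers invented, deliberately crude) of a prevarication
# policy: routines are leap-frogged every week until they are about to breach
# the 18-week target, then re-classified as urgent.
import random
random.seed(42)

CAPACITY, ESCALATE_AT, WEEKS = 30, 17, 200

# job = [current_priority, original_priority, arrival_week]; 0=urgent, 1=soon, 2=routine
# Assume a standing backlog of routine referrals already on the waiting list.
waiting = [[2, 2, w] for w in range(-ESCALATE_AT, 0) for _ in range(15)]
routine_leads = []

for week in range(WEEKS):
    # Variable weekly demand: on average ~6 urgent, ~9 soon, ~15 routine referrals
    for _ in range(random.randint(20, 40)):
        p = random.choices([0, 1, 2], weights=[2, 3, 5])[0]
        waiting.append([p, p, week])

    # The prevarication step: escalate any routine that is about to breach
    for job in waiting:
        if job[0] == 2 and week - job[2] >= ESCALATE_AT:
            job[0] = 0

    # Serve strictly in priority order, oldest first within each priority
    waiting.sort(key=lambda j: (j[0], j[2]))
    for job in waiting[:CAPACITY]:
        if job[1] == 2:                      # it started life as a routine
            routine_leads.append(week - job[2])
    waiting = waiting[CAPACITY:]

avg = sum(routine_leads) / len(routine_leads)
near = sum(1 for lt in routine_leads if ESCALATE_AT - 2 <= lt <= 18) / len(routine_leads)
print(f"routines completed: {len(routine_leads)}, average lead time: {avg:.1f} weeks")
print(f"share completing in the 15-18 week band: {near:.0%}")
```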

The Recipe for Chaos

There are only four ingredients required to create Chaos.

The first is Time.

All processes and systems are time-dependent.

The second ingredient is a Metric of Interest (MoI).

That means a system performance metric that is important to all – such as Safety or Quality or Cost; and usually all three.

The third ingredient is a feedback loop of a specific type – it is called a Negative Feedback Loop.  The NFL  is one that tends to adjust, correct and stabilise the behaviour of the system.

Negative feedback loops are very useful – but they have a drawback. They resist change and they reduce agility. The name is also a disadvantage – the term ‘negative feedback’ is often associated with criticism.

The fourth and final ingredient in our Recipe for Chaos is also a feedback loop but one of a different design – a Positive Feedback Loop (PFL) – one that amplifies variation and change.

Positive feedback loops are also very useful – they are required for agility – quick reactions to unexpected events. Fast reflexes.

The downside of a positive feedback loop is that it increases instability.

The name is also confusing – ‘positive feedback’ is associated with encouragement and praise.

So, in this context it is better to use the terms ‘stabilizing feedback’ and ‘destabilizing feedback’  loops.

When we mix these four ingredients in just the right amounts we get a system that may behave chaotically. That is surprising and counter-intuitive. But it is how the Universe works.

For example:

Suppose our Metric of Interest is the amount of time that patients spend in an Accident and Emergency Department. We know that the longer this time is the less happy they are and the higher the risk of avoidable harm – so it is a reasonable goal to reduce it.

Longer-than-possible waiting times have many root causes – it is a non-specific metric.  That means there are many things that could be done to reduce waiting time and the most effective actions will vary from case-to-case, day-to-day and even minute-to-minute.  There is no one-size-fits-all solution.

This implies that those best placed to correct the causes of these delays are the people who know the specific system well – because they work in it. Those who actually deliver urgent care. They are the stabilizing ingredient in our Recipe for Chaos.

The destabilizing ingredient is the hit-the-arbitrary-target policy which drives a performance management feedback loop.

This policy typically involves:
(1) Setting a performance target that is desirable but impossible for the current design to achieve reliably;
(2) inspecting how close to the target we are; then
(3) using the real-time data to justify threats of dire consequences for failure.

Now we have a perfect Recipe for Chaos.

The higher the failure rate the more inspections, reports, meetings, exhortations, threats, interruptions, and interventions that are generated.  Fear-fuelled management meddling. This behaviour consumes valuable time – so leaves less time to do the worthwhile work. Less time to devote to safety, flow, and quality. The queues build and the pressure increases and the system becomes hyper-sensitive to small fluctuations. Delays multiply and errors are more likely and spawn more workload, more delays and more errors.  Tempers become frayed and molehills are magnified into mountains. Irritations become arguments.  And all of this makes the problem worse rather than better. Less stable. More variable. More chaotic. More dangerous. More expensive.

It is actually possible to write a simple equation that captures this complex dynamic behaviour characteristic of real systems.  And that was a very surprising finding when it was discovered in 1976 by the theoretical ecologist Robert May.

This equation is called the logistic equation.

Here is the abstract of his seminal paper.

Nature 261, 459-467 (10 June 1976)

Simple mathematical models with very complicated dynamics

First-order difference equations arise in many contexts in the biological, economic and social sciences. Such equations, even though simple and deterministic, can exhibit a surprising array of dynamical behaviour, from stable points, to a bifurcating hierarchy of stable cycles, to apparently random fluctuations. There are consequently many fascinating problems, some concerned with delicate mathematical aspects of the fine structure of the trajectories, and some concerned with the practical implications and applications. This is an interpretive review of them.

The fact that this chaotic behaviour is completely predictable and does not need any ‘random’ element was a big surprise. Chaotic is not the same as random. The observed chaos in the urgent healthcare system is the result of the design of the system – or more specifically the current healthcare system management policies.
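
A minimal sketch of that equation, x[n+1] = r × x[n] × (1 − x[n]), makes the point: the same simple, deterministic rule produces a stable point, a repeating cycle, or apparently random chaos depending only on the value of the feedback gain r.

```python
# Minimal sketch of the logistic equation from May's 1976 paper:
#   x[n+1] = r * x[n] * (1 - x[n])
# The rule is simple and completely deterministic, yet its behaviour
# depends dramatically on the feedback gain r.

def logistic_series(r, x0=0.5, n=60):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

for r in (2.8, 3.2, 3.9):            # stable point, 2-cycle, chaos
    tail = logistic_series(r)[-6:]   # last few values, after the transient
    print(f"r = {r}: " + ", ".join(f"{x:.3f}" for x in tail))
```

Turning the gain r down is the ‘volume control’ idea described below: the same system, with a weaker destabilizing feedback, settles down.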

This has a number of profound implications – the most important of which is this:

If the chaos we observe in our health care systems is the predictable and inevitable result of the management policies we ourselves have created and adopted – then eliminating the chaos will only require us to re-design these policies.

In fact we only need to tweak one of the ingredients of the Recipe for Chaos – so as to reduce the strength of the destabilizing feedback loop. The gain. The volume control on the variation amplifier!

This is called the MM factor – otherwise known as ‘Management Meddling‘.

We need to keep all four ingredients though – because we need our system to have both agility and stability.  It is the balance of ingredients that is critical.

The flaw is not the Managers themselves – it is their learned behaviour – the Meddling.  This is learned so it can be unlearned. We need to keep the Managers but “tweak” their role slightly. As they unlearn their old habits they move from being ‘Policy-Enforcers and Fire-Fighters’ to becoming ‘Policy-Engineers and Chaos-Calmers’. They focus on learning to understand the root causes of variation that come from outside the circle of influence of the non-Managers.   They learn how to rationally and radically redesign system policies to achieve both agility and stability.

And doing that requires developing systems-thinking and learning Improvement Science skills – because the causes of chaos are counter-intuitive. If it were intuitively obvious we would have discovered the nature of chaos thousands of years ago. The fact that it was not discovered until 1976 demonstrates this.

It is our homo sapiens intuition that got us into this mess!  The inherent flaws of the chimp-ware between our ears.  Our current management policies are intuitively-obvious, collectively-agreed, rubber-stamped and wrong! They are part of the Recipe for Chaos.

And when we learn to re-design our system policies and upload the new system software then the chaos evaporates as if a magic wand had been waved.

And that comes as a really BIG surprise!

What also comes as a big surprise is just how small the counter-intuitive policy design tweaks often are.

Safe, smooth, efficient, effective, and productive flow is restored. Calm confidence reigns. Safety, Flow, Quality and Productivity all increase – at the same time.  The emotional storm clouds dissipate and the prosperity sun shines again.

Everyone feels better. Everyone. Patients, managers, and non-managers.

This is Win-Win-Win improvement by design. Improvement Science.

Software First

A healthcare system has two inter-dependent parts. Let us call them the ‘hardware’ and the ‘software’ – terms we are more familiar with when referring to computer systems.

In a computer the critical-to-success software is called the ‘operating system’ – and we know that by the brand labels such as Windows, Linux, MacOS, or Android. There are many.

It is the O/S that makes the hardware fit-for-purpose. Without the O/S the computer is just a box of hot chips. A rather expensive room heater.

All the programs and apps that we use to deliver our particular information service require the O/S to manage the actual hardware. Without a coordinator there would be chaos.

In a healthcare system the ‘hardware’ is the buildings, the equipment, and the people.  They are all necessary – but they are not sufficient on their own.

The ‘operating system’ of a healthcare system is its set of management policies: the ‘instructions’ that guide the ‘hardware’ to do what is required, when it is required and sometimes how it is required.  These policies are created by managers – they are the healthcare operating system design engineers so-to-speak.

Change the O/S and you change the behaviour of the whole system – it may look exactly the same – but it will deliver a different performance. For better or for worse.


The invention of the transistor in 1947 led, within a decade, to the first commercially viable transistorised computers. They were faster, smaller, more reliable, cheaper to buy and cheaper to maintain than their predecessors. They were also programmable.  And with many separate customer programs demanding hardware resources – an effective and efficient operating system was needed. So the understanding of “good” O/S design developed quickly.

In the 1960’s the first integrated circuits appeared and the computer world became dominated by mainframe computers. They filled air-conditioned rooms with gleaming cabinets tended lovingly by white-coated technicians carrying clipboards. Mainframes were, and still are, very expensive to build and to run! The valuable resource that was purchased by the customers was ‘CPU time’.  So the operating systems of these machines were designed to squeeze every microsecond of value out of the expensive-to-maintain CPU: for very good commercial reasons. Delivering the “data processing jobs” right, on-time and every-time was paramount.

The design of the operating system software was critical to the performance and to the profit.  So a lot of brain power was invested in learning how to schedule jobs; how to orchestrate the parts of the hardware system so that they worked in harmony; how to manage data buffers to smooth out flow and priority variation; how to design efficient algorithms for number crunching, sorting and searching; and how to switch from one task to the next quickly and without wasting time or making errors.

Every modern digital computer has inherited this legacy of learning.

In the 1970’s the first commercial microprocessors appeared – which reduced the size and cost of computers by orders of magnitude again – and increased their speed and reliability even further. Silicon Valley blossomed and although the first micro-chips were rather feeble in comparison with their mainframe equivalents they ushered in the modern era of the desktop-sized personal computer.

In the 1980’s players such as Microsoft and Apple moved to exploit this vast new market. The key difference was that Microsoft offered just the operating system for the new IBM-PC hardware (called MS-DOS); while Apple created both the hardware and the software as a tightly integrated system – the Apple II.

The ergonomic-seamless-design philosophy at Apple then led to the Apple Mac, which revolutionised personal computing. It made computers usable by people who had no interest in the innards or in programming. The Apple Macs were the “designer” computers and were reassuringly more expensive. The innovations that Apple designed into the Mac are now expected in all personal computers as well as the latest generations of smartphones and tablets.

Today we carry more computing power in our top pocket than a mainframe of the 1970’s could deliver! The design of the operating system has hardly changed though.

It was the O/S design that leveraged the maximum potential of the very expensive hardware.  And that is still the case – but we take it completely for granted.


Exactly the same principle applies to our healthcare systems.

The only difference is that the flow is not 1’s and 0’s – it is patients and all the things needed to deliver patient care. The ‘hardware’ is the expensive part to assemble and run – and the largest cost is the people.  Healthcare is a service delivered by people to people. Highly-trained nurses, doctors and allied healthcare professionals are expensive.

So the key to healthcare system performance is high quality management policy design – the healthcare operating system (HOS).

And here we hit a snag.

Our healthcare management policies have not been designed with the same rigor as the operating systems for our computers. They have not been designed using the well-understood principles of flow physics. The various parts of our healthcare system do not work well together. The flows are fractured. The silos work independently. And the ubiquitous symptom of this dysfunction is confusion, chaos and conflict.  The managers and the doctors are at each other’s throats. And this is because the management policies have evolved through a largely ineffective and very inefficient strategy called “burn-and-scrape”. Firefighting.

The root cause of the poor design is that neither healthcare managers nor the healthcare workers are trained in operational policy design. Design for Safety. Design for Quality. Design for Delivery. Design for Productivity.

And we are all left with a lose-lose-lose legacy: a system that is no longer fit-for-purpose and a generation of managers and clinicians who have never learned how to design the operational and clinical policies that ensure the system actually delivers what the ‘hardware’ is capable of delivering.


For example:

Suppose we have a simple healthcare system with three stages called A, B and C.  All the patients flow through A, then to B and then to C.  Let us assume these three parts are managed separately as departments with separate budgets and that they are free to use whatever policies they choose so long as they achieve their performance targets – which are (a) to do all the work, (b) to stay in budget and (c) to deliver on time.  So far so good.

Now suppose that the work that arrives at Department B from Department  A is not all the same and different tasks require different pathways and different resources. A Radiology, Pathology or Pharmacy Department for example.

Sorting the work into separate streams and having expensive special-purpose resources sitting idle waiting for work to arrive is inefficient and expensive. It will push up the unit cost – the total cost divided by the total activity. This is called ‘carve-out’.

Switching resources from one pathway to another takes time and that change-over time implies some resources are not able to do the work for a while.  These inefficiencies will contribute to the total cost and therefore push up the “unit-cost”. The total cost for the department divided by the total activity for the department.

So Department B decides to improve its “unit cost” by deploying a policy called ‘batching’.  It starts to sort the incoming work into different types of task and when a big enough batch has accumulated it then initiates the change-over. The cost of the change-over is shared by the whole batch. The “unit cost” falls because Department B is now able to deliver the same activity with fewer resources because they spend less time doing the change-overs. That is good. Isn’t it?

But what is the impact on Departments A and C and what effect does it have on delivery times and work in progress and the cost of storing the queues?

Department A notices that it can no longer pass work to B when it wants because B will only start the work when it has a full batch of requests. The queue of waiting work sits inside Department A.  That queue takes up space and that space costs money but the queue cost is incurred by Department A – not Department B.

What Department C sees is the order of the work changed by Department B to create a bigger variation in lead times for consecutive tasks. So if the whole system is required to achieve a delivery time specification – then Department C has to expedite the longest waiters and delay the shortest waiters – and that takes work,  time, space and money. That cost is incurred by Department C not by Department B.

The unit costs for Department B go down – and those for A and C both go up. The system is less productive as a whole.  The queues and delays caused by the policy change mean that work cannot be completed reliably on time. The blame for the failure falls on Department C.  Conflict between the parts of the system is inevitable. Lose-Lose-Lose.
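A toy model makes the trade-off visible. The sketch below is illustrative only – the arrival pattern, change-over time and service time are invented numbers, not data from any real department – but it shows the generic pattern: bigger batches mean fewer change-overs for Department B, while the lead-times seen by the rest of the system get longer and far more variable.

# Toy model (invented numbers, for illustration only).
# Tasks arrive at Department B one per time unit, strictly alternating between
# two types. Each change-over costs 0.25 time units and each task takes 0.6
# time units of work. Compare batch-of-1 working with batch-of-8 working.

def simulate(batch_size, n_tasks=40, changeover=0.25, service=0.6):
    queue = [{"type": "X" if i % 2 == 0 else "Y", "arrive": float(i)}
             for i in range(n_tasks)]
    pending = {"X": [], "Y": []}
    clock, changeovers, lead_times = 0.0, 0, []

    while queue or pending["X"] or pending["Y"]:
        # move tasks that have arrived by now into the pending pools
        while queue and queue[0]["arrive"] <= clock:
            t = queue.pop(0)
            pending[t["type"]].append(t)
        # a batch is ready when enough of one type has accumulated
        # (or when nothing more will ever arrive)
        ready = [k for k in ("X", "Y")
                 if len(pending[k]) >= batch_size or (not queue and pending[k])]
        if not ready:
            clock = queue[0]["arrive"]       # sit idle until the next arrival
            continue
        batch, pending[ready[0]] = (pending[ready[0]][:batch_size],
                                    pending[ready[0]][batch_size:])
        clock += changeover                   # set up for this task type
        changeovers += 1
        for t in batch:
            clock += service
            lead_times.append(clock - t["arrive"])

    return changeovers, sum(lead_times) / len(lead_times), max(lead_times)

for b in (1, 8):
    c, mean_lt, max_lt = simulate(b)
    print(f"batch of {b}: change-overs={c:2d}  "
          f"mean lead-time={mean_lt:4.1f}  worst lead-time={max_lt:4.1f}")

Department B’s change-over count (its cost driver) collapses with the bigger batch, while the average and worst-case lead-times seen downstream grow many-fold – exactly the carve-out-and-batching effect described above.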

And conflict is always expensive – on all dimensions – emotional, temporal and financial.


The policy design flaw here looks like it is ‘batching’ – but that policy is just a reaction to a deeper design flaw. It is a symptom.  The deeper flaw is not even the use of ‘unit costing’. That is a useful enough tool. The deeper flaw is the incorrect assumption that improving the unit costs of the stages independently will always deliver an improvement in whole-system productivity.

This is incorrect. This error is the result of ‘linear thinking’.

The Laws of Flow Physics do not work like this. Real systems are non-linear.

To design the management policies for a non-linear system using linear-thinking is guaranteed to fail. Disappointment and conflict are inevitable. And that is what we have. As system designers we need to use ‘systems-thinking’.

This discovery comes as a bit of a shock to management accountants. They feel rather challenged by the assertion that some of their cherished “cost improvement policies” are actually making the system less productive. Precisely the opposite of what they are trying to achieve.

And it is the senior management that decide the system-wide financial policies so that is where the linear-thinking needs to be challenged and the ‘software patch’ applied first.

It is not a major management software re-write. Just a minor tweak is all that is required.

And the numbers speak for themselves. It is not a difficult experiment to do.


So that is where we need to start.

We need to learn Healthcare Operating System design and we need to learn it at all levels in healthcare organisations.

And that system-thinking skill has another name – it is called Improvement Science.

The good news is that it is a lot easier to learn than most people believe.

And that is a big shock too – because how to do this has been known for 50 years.

So if you would like to see a real and current example of how poor policy design leads to falling productivity and then how to re-design the policies to reverse this effect have a look at Journal Of Improvement Science 2013:8;1-20.

And if you would like to learn how to design healthcare operating policies that deliver higher productivity with the same resources then the first step is FISH.

Seeing Inside the Black Box

Improvement Science requires the effective, efficient and coordinated use of diagnosis, design and delivery tools.

Experience has also taught us that it is not just about the tools – each must be used as it was designed.

The craftsman knows his tools and knows what instrument to use, where and when the context dictates; and how to use it with skill.

Some tools are simple and effective – easy to understand and to use. The kitchen knife is a good example. It does not require an instruction manual to use it.

Other tools are more complex. Very often because they have a specific purpose. They are not generic. And how to use them may not be intuitively obvious.  Many labour-saving household appliances have specific purposes: the microwave oven, the dish-washer and so on – but they have complex controls and settings that we need to manipulate to direct the “domestic robot” to deliver what we actually want.  Very often these controls are not intuitively obvious – we are dealing with a black box – and our understanding of what is happening inside is vague.

Very often we do not understand how the buttons and dials that we can see and touch – the inputs – actually influence the innards of the box to determine the outputs. We do not have a mental model of what is inside the Black Box. We do not know – we are ignorant.

In this situation we may resort to just blindly following the instructions;  or blindly copying what someone else does; or blindly trying random combinations of inputs until we get close enough to what we want. No wiser at the end than we were at the start.  The common thread here is “blind”. The box is black. We cannot see inside.

And the complex black box is deliberately made so – because the supplier of the super-tool does not want their “secret recipe” to be known to all – least of all their competitors.

This is a perfect recipe for confusion and for conflict. Lose-Lose-Lose.

Improvement Science is dedicated to eliminating confusion and conflict – so Black Box Tools are NOT on the menu.

Improvement Scientists need to understand how their tools work – and the best way to achieve that level of understanding is to design and build their own.

This may sound like re-inventing the wheel but it is not about building novel tools – it is about re-creating the tried and tested tools – for the purpose of understanding how they work. And understanding their strengths, their weaknesses, their opportunities and their risks or threats.

And doing that requires guidance from a mentor who has been through this same learning journey. Starting with simple, intuitive tools, and working step-by-step to design, build and understand the more complex ones.

So where do we start?

In the FISH course the first tool we learn to use is a Gantt Chart.

It was invented by Henry Laurence Gantt about 100 years ago and requires nothing more than pencil and paper. Coloured pencils and squared paper are even better.

This is an example of a Gantt Chart for a Day Surgery Unit.

At the top are the “tasks” – patients 1 and 2; and at the bottom are the “resources”.

Time runs left to right.

Each coloured bar appears twice: once on each chart.

The power of a Gantt Chart is that it presents a lot of information in a very compact and easy-to-interpret format. That is what Henry Gantt intended.

A Gantt Chart is like the surgeon’s scalpel. It is a simple, generic easy-to-create tool that has a wide range of uses. The skill is knowing where, when and how to use it: and just as importantly where-not, when-not and how-not.
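For anyone who wants to try this with a keyboard instead of coloured pencils, here is a tiny text-only sketch (the patients, resources and times are all invented) that prints the same schedule twice – once by task and once by resource:

# Text-only Gantt sketch. Each step is (task, resource, start_min, end_min)
# and - as described above - every bar appears twice: once in the task view
# and once in the resource view. All names and times are invented.
steps = [
    ("Patient 1", "Nurse",        0, 15),
    ("Patient 1", "Anaesthetist", 15, 30),
    ("Patient 1", "Surgeon",      30, 60),
    ("Patient 2", "Nurse",        15, 30),
    ("Patient 2", "Anaesthetist", 30, 45),
    ("Patient 2", "Surgeon",      60, 90),
]

def gantt(rows, key, scale=5):
    # one text row per label; each '#' represents `scale` minutes of work
    horizon = max(s[3] for s in rows) // scale
    for label in sorted({key(s) for s in rows}):
        line = [" "] * horizon
        for s in (s for s in rows if key(s) == label):
            line[s[2] // scale: s[3] // scale] = "#" * ((s[3] - s[2]) // scale)
        print(f"{label:<13}|{''.join(line)}|")

print("By task (patient):")
gantt(steps, key=lambda s: s[0])
print("\nBy resource:")
gantt(steps, key=lambda s: s[1])

Reading across a task row shows a patient’s journey and any waits; reading across a resource row shows utilisation and idle time – the same data, two views.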

The second tool that an Improvement Scientist learns to use is the Shewhart or time-series chart.

It was invented about 90 years ago.

This is a more complex tool and as such there is a BIG danger that it is used as a Black Box with no understanding of the innards.  The SPC  and Six-Sigma Zealots sell it as a Magic Box. It is not.

We could paste any old time-series data into a bit of SPC software; twiddle with the controls until we get the output we want; and copy the chart into our report. We could do that and hope that no-one will ask us to explain what we have done and how we have done it. Most do not because they do not want to appear ‘ignorant’. The elephant is in the room though.  There is a conspiracy of silence.

The elephant-in-the-room is the risk we take when use Black Box tools – the risk of GIGO. Garbage In Garbage Out.

And unfortunately we have a tendency to blindly trust what comes out of the Black Box that a plausible Zealot tells us is “magic”. This is the Emperor’s New Clothes problem.  Another conspiracy of silence follows.

The problem here is not the tool – it is the desperate person blindly wielding it. The Zealots know this and they warn the Desperados of the risk and offer their expensive Magician services. They are not interested in showing how the magic trick is done though! They prefer the Box to stay Black.

So to avoid this cat-and-mouse scenario and to understand both the simpler and the more complex tools, and to be able to use them effectively and safely, we need to be able to build one for ourselves.

And the know-how to do that is not obvious – if it were we would have already done it – so we need guidance.

And once we have  built our first one – a rough-and-ready working prototype – then we can use the existing ones that have been polished with long use. And we can appreciate the wisdom that has gone into their design. The Black Box becomes Transparent.
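To show just how un-magic the innards are, here is a rough-and-ready prototype of the XmR (Individuals) chart calculation. The data are made up for illustration, and the 2.66 multiplier is the standard XmR constant.

# Rough-and-ready XmR (Individuals) chart prototype - the "innards" are just
# the mean and the average of the point-to-point moving ranges.
# The data below are made up for illustration.
data = [32, 35, 31, 38, 36, 33, 40, 37, 34, 39, 36, 55, 35, 33, 38]

mean = sum(data) / len(data)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# 2.66 is the standard XmR constant (3 / d2, with d2 = 1.128 for ranges of two)
upper = mean + 2.66 * avg_mr
lower = mean - 2.66 * avg_mr

print(f"mean = {mean:.1f}, average moving range = {avg_mr:.1f}")
print(f"natural process limits: {lower:.1f} to {upper:.1f}")
for i, x in enumerate(data, start=1):
    flag = "  <-- outside the limits: worth investigating" if not lower <= x <= upper else ""
    print(f"{i:2d}: {x}{flag}")

That is the core of it: a centre line plus natural process limits derived from the point-to-point variation. The obvious spike gets flagged; the routine variation does not.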

So learning how to build the essential tools is the first part of the Improvement Science Practitioner (ISP) training – because without that knowledge it is difficult to progress very far. And without that understanding it is impossible to teach anyone anything other than to blindly follow a Black Box recipe.

Of course Magic Black Box Solutions Inc will not warm to this idea – they may not want to reveal what is inside their magic product. They are fearful that their customers may discover that it is much simpler than they are being told.  And we can test that hypothesis by asking them to explain how it works in language that we can understand. If they cannot (or will not) then we may want to keep looking for someone who can and will.

Space-and-Time

<Lesley>Hi Bob! How are you today?

<Bob>OK thanks Lesley. And you?

<Lesley>I am looking forward to our conversation. I have two questions this week.

<Bob>OK. What is the first one?

<Lesley>You have taught me that improvement-by-design starts with the “purpose” question and that makes sense to me. But when I ask that question in a session I get an “eh?” reaction and I get nowhere.

<Bob>Quod facere bonum opus et quomodo te cognovi unum?

<Lesley>Eh?

<Bob>I asked you a purpose question.

<Lesley>Did you? What language is that? Latin? I do not understand Latin.

<Bob>So although you recognize the language you do not understand what I asked, the words have no meaning. So you are unable to answer my question and your reaction is “eh?”. I suspect the same is happening with your audience. Who are they?

<Lesley>Front-line clinicians and managers who have come to me to ask how to solve their problems. Their Niggles. They want a how-to-recipe and they want it yesterday!

<Bob>OK. Remember the Temperament Treacle conversation last week. What is the commonest Myers-Briggs Type preference in your audience?

<Lesley>It is xSTJ – tough minded Guardians.  We did that exercise. It was good fun! Lots of OMG moments!

<Bob>OK – is your “purpose” question framed in a language that the xSTJ preference will understand naturally?

<Lesley>Ah! Probably not! The “purpose” question is future-focused, conceptual , strategic, value-loaded and subjective.

<Bob>Indeed – it is an iNtuitor question. xNTx or xNFx. Pose that question to a roomful of academics or executives and they will debate it ad infinitum.

<Lesley>More Latin – but that phrase I understand. You are right.  And my own preference is xNTP so I need to translate my xNTP “purpose” question into their xSTJ language?

<Bob>Yes. And what language do they use?

<Lesley>The language of facts, figures, jobs-to-do, work-schedules, targets, budgets, rational, logical, problem-solving, tough-decisions, and action-plans. Objective, pragmatic, necessary stuff that keeps the operational-wheels-turning.

<Bob>OK – so what would “purpose” look like in xSTJ language?

<Lesley>Um. Good question. Let me start at the beginning. They came to me in desperation because they are now scared enough to ask for help.

<Bob>Scared of what?

<Lesley>Unintentionally failing. They do not want to fail and they do not need beating with sticks. They are tough enough on themselves and each other.

<Bob>OK that is part of their purpose. The “Avoid” part. The bit they do not want. What do they want? What is the “Achieve” part? What is their “Nice If”?

<Lesley>To do a good job.

<Bob>Yes. And that is what I asked you – but in an unfamiliar language. Translated into English I asked “What is a good job and how do you know you are doing one?”

<Lesley>Ah ha! That is it! That is the question I need to ask. And that links in the first map – The 4N Chart®. And it links in measurement, time-series charts and BaseLine© too. Wow!

<Bob>OK. So what is your second question?

<Lesley>Oh yes! I keep getting asked “How do we work out how much extra capacity we need?” and I answer “I doubt that you need any more capacity.”

<Bob>And their response is?

<Lesley>Anger and frustration! They say “That is obvious rubbish! We have a constant stream of complaints from patients about waiting too long and we are all maxed out so of course we need more capacity! We just need to know the minimum we can get away with – the what, where and when – so we can work out how much it will cost for the business case.”

<Bob>OK. So what do they mean by the word “capacity”. And what do you mean?

<Lesley>Capacity to do a good job?

<Bob>Very quick! Ho ho! That is a bit imprecise and subjective for a process designer though. The Laws of Physics need the terms “capacity”, “good” and “job” clearly defined – with units of measurement that are meaningful.

<Lesley>OK. Let us define “good” as “delivered on time” and “job” as “a patient with a health problem”.

<Bob>OK. So how do we define and measure capacity? What are the units of measurement?

<Lesley>Ah yes – I see what you mean. We touched on that in FISH but did not go into much depth.

<Bob>Now we dig deeper.

<Lesley>OK. FISH talks about three interdependent forms of capacity: flow-capacity, resource-capacity, and space-capacity.

<Bob>Yes. They are the space-and-time capacities. If we are too loose with our use of these and treat them as interchangeable then we will create the confusion and conflict that you have experienced. What are the units of measurement of each?

<Lesley>Um. Flow-capacity will be in the same units as flow, the same units as demand and activity – tasks per unit time.

<Bob>Yes. Good. And space-capacity?

<Lesley>That will be in the same units as work in progress or inventory – tasks.

<Bob>Good! And what about resource-capacity?

<Lesley>Um – Will that be resource-time – so time?

<Bob>Actually it is resource-time per unit time. So they have different units of measurement. It is invalid to mix them up any-old-way. It would be meaningless to add them for example.

<Lesley>OK. So I cannot see how to create a valid combination from these three! I cannot get the units of measurement to work.
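As an aside, Lesley’s difficulty can be made concrete with a few lines of code – the numbers below are invented; only the units matter:

# Illustrative only: attach units to the three capacities from the conversation
# above (the numbers are invented) and show why adding them is meaningless.
flow_capacity     = (10.0, "tasks per hour")             # e.g. tasks arriving or completed per hour
space_capacity    = (6.0,  "tasks")                       # e.g. trolleys, chairs or slots occupied
resource_capacity = (2.0,  "resource-hours per hour")     # e.g. two clinicians available all hour

def add(a, b):
    value_a, unit_a = a
    value_b, unit_b = b
    if unit_a != unit_b:
        raise ValueError(f"cannot add '{unit_a}' to '{unit_b}' - the units differ")
    return (value_a + value_b, unit_a)

print(add(flow_capacity, (5.0, "tasks per hour")))        # fine: same units
try:
    print(add(flow_capacity, space_capacity))             # meaningless: different units
except ValueError as err:
    print("Error:", err)

Any valid capacity calculation has to combine these quantities so that the units come out right – which is exactly the jigsaw the rest of the conversation points at.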

<Bob>This is a critical insight. So what does that mean?

<Lesley>There is something missing?

<Bob>Yes. Excellent! Your homework this week is to work out what the missing pieces of the capacity-jigsaw are.

<Lesley>You are not going to tell me the answer?

<Bob>Nope. You are doing ISP training now. You already know enough to work it out.

<Lesley>OK. Now you have got me thinking. I like it. Until next week then.

<Bob>Have a good week.

The Mirror

[Dring Dring]

The phone announced the arrival of Leslie for the weekly ISP mentoring conversation with Bob.

<Leslie> Hi Bob.

<Bob> Hi Leslie. What would you like to talk about today?

<Leslie> A new challenge – one that I have not encountered before.

<Bob>Excellent. As ever you have piqued my curiosity. Tell me more.

<Leslie> OK. Up until very recently whenever I have demonstrated the results of our improvement work to individuals or groups the usual response has been “Yes, but“. The habitual discount as you call it. “Yes, but your service is simpler; Yes, but your budget is bigger; Yes, but your staff are less militant.” I have learned to expect it so I do not get angry any more.

<Bob> OK. The mantra of the skeptics is to be expected and you have learned to stay calm and maintain respect. So what is the new challenge?

<Leslie>There are two parts to it.  Firstly, because the habitual discounting is such an effective barrier to the diffusion of learning, our system has not changed; the performance is steadily deteriorating; the chaos is worsening and everything that is ‘obvious’ has been tried and has not worked. More red lights are flashing on the patient-harm dashboard and the Inspectors are on their way. There is an increasing turnover of staff at all levels – including Executive.  There is an anguished call for “A return to compassion first” and “A search for new leaders” and “A cultural transformation“.

<Bob> OK. It sounds like the tipping point of awareness has been reached, enough people now appreciate that their platform is burning and radical change of strategy is required to avoid the ship sinking and them all drowning. What is the second part?

<Leslie> I am getting more emails along the lines of “What would you do?”

<Bob> And your reply?

<Leslie> I say that I do not know because I do not have a diagnosis of the cause of the problem. I do know a lot of possible causes but I do not know which plausible ones are the actual ones.

<Bob> That is a good answer.  What was the response?

<Leslie>The commonest one is “Yes, but you have shown us that Plan-Do-Study-Act is the way to improve – and we have tried that and it does not work for us. So we think that improvement science is just more snake oil!”

<Bob>Ah ha. And how do you feel about that?

<Leslie>I have learned the hard way to respect the opinion of skeptics. PDSA does work for me but not for them. And I do not understand why that is. I would like to conclude that they are not doing it right but that is just discounting them and I am wary of doing that.

<Bob>OK. You are wise to be wary. We have reached what I call the Mirror-on-the-Wall moment.  Let me ask what your understanding of the history of PDSA is?

<Leslie>It was called Plan-Do-Check-Act by Walter Shewhart in the 1930’s and was presented as a form of the scientific method that could be applied on the factory floor to improving the quality of manufactured products.  W Edwards Deming modified it to PDSA where the “Check” was changed to “Study”.  Since then it has been the key tool in the improvement toolbox.

<Bob>Good. That is an excellent summary.  What the Zealots do not talk about are the limitations of their wonder-tool.  Perhaps that is because they believe it has no limitations.  Your experience would seem to suggest otherwise though.

<Leslie>Spot on Bob. I have a nagging doubt that I am missing something here. And not just me.

<Bob>The reason PDSA works for you is because you are using it for the purpose it was designed for: incremental improvement of small bits of the big system; the steps; the points where the streams cross the stages.  You are using your FISH training to come up with change plans that will work because you understand the Physics of Flow better. You make wise improvement decisions.  In fact you are using PDSA in two separate modes: discovery mode and delivery mode.  In discovery mode we use the Study phase to build our competence – and we learn most when what happens is not what we expected.  In delivery mode we use the Study phase to build our confidence – and that grows most when what happens is what we predicted.

<Leslie>Yes, that makes sense. I see the two modes clearly now you have framed it that way – and I see that I am doing both at the same time, almost by second nature.

<Bob>Yes – so when you demonstrate it you describe PDSA generically – not as two complementary but contrasting modes. And by demonstrating success you omit to show that there are some design challenges that cannot be solved with either mode.  That hidden gap attracts some of the “Yes, but” reactions.

<Leslie>Do you mean the challenges that others are trying to solve and failing?

<Bob>Yes. The commonest error is to discount the value of improvement science in general; so nothing is done and the inevitable crisis happens because the system design is increasingly unfit for the evolving needs.  The toast is not just burned, it is on fire, and it is now too late to use the discovery mode of PDSA because prompt and effective action is needed.  So the delivery mode of PDSA is applied to an emergent, ill-understood crisis. The Plan is created using invalid assumptions and guesswork so it is fundamentally flawed and the Do then just makes the chaos worse.  In the ensuing panic the Study and Act steps are skipped so all hope of learning is lost and a vicious and damaging spiral of knee-jerk Plan-Do-Plan-Do follows. The chaos worsens, quality falls, safety falls, confidence falls, trust falls, expectation falls and depression and despair increase.

<Leslie>That is exactly what is happening and why I feel powerless to help. What do I do?

<Bob>The toughest bit is past. You have looked squarely in the mirror and can now see harsh reality rather than hasty rhetoric. Now you can look out of the window with different eyes.  And you are now looking for a real-world example of where complex problems are solved effectively and efficiently. Can you think of one?

<Leslie>Well medicine is one that jumps to mind.  Solving a complex, emergent clinical problem requires a clear diagnosis and prompt and effective action to stabilise the patient and then to cure the underlying cause: the disease.

<Bob>An excellent example. Can you describe what happens as a PDSA sequence?

<Leslie>That is a really interesting question.  I can say for starters that it does not start with P – we have learned not to have a preconceived idea of what to do at the start because it badly distorts our clinical judgement.  The first thing we do is assess the patient to see how sick and unstable they are – we use the Vital Signs. So that means that we decide to Act first and our first action is to Study the patient.

<Bob>OK – what happens next?

<Leslie>Then we will do whatever is needed to stabilise the patient based on what we have observed – it is called resuscitation – and only then we can plan how we will establish the diagnosis; the root cause of the crisis.

<Bob> So what does that spell?

<Leslie> A-S-D-P.  It is the exact opposite of P-D-S-A … the mirror image!

<Bob>Yes. Now consider the treatment that addresses the root cause and that cures the patient. What happens then?

<Leslie>We use the diagnosis to create a treatment Plan for the specific patient; we then Do that, and we Study the effect of the treatment in that specific patient, using our various charts to compare what actually happens with what we predicted would happen. Then we decide what to do next: the final action.  We may stop because we have achieved our goal, or repeat the whole cycle to achieve further improvement. So that is our old friend P-D-S-A.

<Bob>Yes. And what links the two bits together … what is the bit in the middle?

<Leslie>Once we have a diagnosis we look up the appropriate treatment options that have been proven to work through research trials and experience; and we tailor the treatment to the specific patient. Oh I see! The missing link is design. We design a specific treatment plan using generic principles.

<Bob>Yup.  The design step is the jam in the improvement sandwich and it acts like a mirror: A-S-D-P is reflected back as P-D-S-A.

<Leslie>So I need to teach this backwards: P-D-S-A and then Design and then A-S-D-P!

<Bob>Yup – and you know that by another name.

<Leslie> 6M Design®! That is what my Improvement Science Practitioner course is all about.

<Bob> Yup.

<Leslie> If you had told me that at the start it would not have made much sense – it would just have confused me.

<Bob>I know. That is the reason I did not. The Mirror needs to be discovered in order for its true value to be appreciated. At the start we look in the mirror and perceive what we want to see. We have to learn to see what is actually there. Us. Now you can see clearly where P-D-S-A and Design fit together and the missing A-S-D-P component that is needed to assemble a 6M Design® engine. That is Improvement-by-Design in a nine-letter nutshell.

<Leslie> Wow! I can’t wait to share this.

<Bob> And what do you expect the response to be?

<Leslie>”Yes, but”?

<Bob> From the die hard skeptics – yes. It is the ones who do not say “Yes, but” that you want to engage with. The ones who are quiet. It is always the quiet ones that hold the key.

Three Essentials

There are three necessary parts before ANY improvement-by-design effort will gain traction. Omit any one of them and nothing happens.


1. A clear purpose and an outline strategic plan.

2. Tactical measurement of performance-over-time.

3. A generic Improvement-by-Design framework.

These are necessary minimum requirements to be able to safely delegate the day-to-day and week-to-week tactical stuff that delivers the “what is needed”.

These are necessary minimum requirements to build a self-regulating, self-sustaining, self-healing, self-learning win-win-win system.

And this is not a new idea.  It was described by Joseph Juran in the 1960’s and that description was based on 20 years of hands-on experience of actually doing it in a wide range of manufacturing and service organisations.

That is 20 years before  the terms “Lean” or “Six Sigma” or “Theory of Constraints” were coined.  And the roots of Juran’s journey were 20 years before that – when he started work at the famous Hawthorne Works in Chicago – home of the Hawthorne Effect – and where he learned of the pioneering work of  Walter Shewhart.

And the roots of Shewhart’s innovations were 20 years before that – in the first decade of the 20th Century when innovators like Henry Ford and Henry Gantt were developing the methods of how to design and build highly productive processes.

Ford gave us the one-piece-flow high-quality at low-cost production paradigm. Toyota learned it from Ford.  Gantt gave us simple yet powerful visual charts that give us an understanding-at-a-glance of the progress of the work.  And Shewhart gave us the deceptively simple time-series chart that signals when we need to take more notice.

These nuggets of pragmatic golden knowledge have been buried for decades under a deluge of academic mud.  It is high time to clear away the detritus and get back to the bedrock of pragmatism. The “how-to-do-it” of improvement. Reading Juran’s 1964 “Managerial Breakthrough” illustrates just how much we now take for granted. And how ignorant we have allowed ourselves to become.

Acquired Arrogance is a creeping, silent disease – we slip from second nature to blissful ignorance without noticing when we divorce painful reality and settle down with our own comfortable collective rhetoric.

The wake-up call is all the more painful as a consequence: because it is all the more shocking for each one of us; and because it affects more of us.

The pain is temporary – so long as we treat the cause and not just the symptom.

The first step is to acknowledge the gap – and to start filling it in. It is not technically difficult, time-consuming or expensive.  Whatever our starting point we need to put in place the three foundation stones above:

1. Common purpose.
2. Measurement-over-time.
3. Method for Improvement.

Then the rubber meets the road (rather than the sky) and things start to improve – for real. Lots of little things in lots of places at the same time – facilitated by the Junior Managers. The cumulative effect is dramatic. Chaos is tamed; calm is restored; capability builds; and confidence builds. The cynics have to look elsewhere for their sport and the skeptics are able to remain healthy.

Then the Middle Managers feel the new firmness under their feet – where before there were shifting sands. They are able to exert their influence again – to where it makes a difference. They stop chasing Scotch Mist and start reporting real and tangible improvement – with hard evidence. And they rightly claim a slice of the credit.

And the upwelling of win-win-win feedback frees the Senior Managers from getting sucked into reactive fire-fighting and the Victim Vortex; and that releases the emotional and temporal space to start learning and applying System-level Design.  That is what is needed to deliver a significant and sustained improvement.

And that creates the stable platform for the Executive Team to do Strategy from. Which is their job.

It all starts with the Three Essentials:

1. A Clear and Common Constancy of Purpose.
2. Measurement-over-time of the Vital Metrics.
3. A Generic Method for Improvement-by-Design.

Improvement-by-Twitter

Sat 5th October

It started with a tweet.

08:17 [JG] The NHS is its people. If you lose them, you lose the NHS.

09:15 [DO] We are in a PEOPLE business – educating people and creating value.

Sun 6th October

08:32 [SD] Who isn’t in people business? It is only people who buy stuff. Plants, animals, rocks and machines don’t.

09:42 [DO] Very true – it is people who use a service and people who deliver a service and we ALL know what good service is.

09:47 [SD] So onus is on us to walk our own talk. If we don’t all improve our small bits of the NHS then who can do it for us?

Then we were off … the debate was on …

10:04 [DO] True – I can prove I am saving over £160 000.00 a year – roll on PBR !?

10:15 [SD] Bravo David. I recently changed my surgery process: productivity up by 35%. Cost? Zero. How? Process design methods.

11:54 [DO] Exactly – cost neutral because we were thinking differently – so how to persuade the rest?

12:10 [SD] First demonstrate it is possible then show those who want to learn how to do it themselves. http://www.saasoft.com/fish/course

We had hard evidence it was possible … and now MC joined the debate …

12:48 [MC] Simon why are there different FISH courses for safety, quality and efficiency? Shouldn’t good design do all of that?

12:52 [SD] Yes – goal of good design is all three. It just depends where you are starting from: Governance, Operations or Finance.

A number of parallel threads then took off and we all had lots of fun exploring each other’s knowledge and understanding.

17:28 MC registers on the FISH course.

And that gave me an idea. I emailed an offer – that he could have a complimentary pass for the whole FISH course in return for sharing what he learns as he learns it.  He thought it over for a couple of days then said “OK”.

Weds 9th October

06:38 [MC] Over the last 4 years or so, I’ve been involved in incrementally improving systems in hospitals. Today I’m going to start an experiment.

06:40 [MC] I’m going to see if we can do less of the incremental change and more system redesign. To do this I’ve enrolled in FISH

Fri 11th October

06:47 [MC] So as part of my exploration into system design, I’ve done some studies in my clinic this week. Will share data shortly.

21:21 [MC] Here’s a chart showing cycle time of patients in my clinic. Median cycle time 14 mins, but much longer in 2 pic.twitter.com/wu5MsAKk80

[Chart: clinic cycle times]

21:22 [MC] Here’s the same clinic from patients’ point of view, wait time. Much longer than I thought or would like

[Chart: patient waiting times]

21:24 [MC] Two patients needed to discuss surgery or significant news, that takes time and can’t be rushed.

21:25 [MC] So, although I started on time, worked hard and finished on time, people waited ages to see me. Template is wrong!

21:27 [MC] By the time I had seen the 3rd patient, people were waiting 45 mins to see me. That’s poor.

21:28 [MC] The wait got progressively worse until the end of the clinic.

Sunday 13th October

16:02 [MC] As part of my homework on systems, I’ve put my clinic study data into a Gantt chart. Red = waiting, green = seeing me pic.twitter.com/iep2PDoruN

[Chart: clinic Gantt chart – red = waiting, green = being seen]

16:34 [SD] Hurrah! The visual power of the Gantt Chart. Worth adding the booked time too – there are Seven Sins of Scheduling to find.

16:36 [SD] Excellent – good idea to sort into booked time order – it makes the planned rate of demand easier to see.

16:42 [SD] Best chart is Work In Progress – count the number of patients at each time step and plot as a run chart.

17:23 [SD] Yes – just count how many lines you cross vertically at each time interval. It can be automated in Excel

17:38 [MC] Like this? pic.twitter.com/fTnTK7MdOp

 

[Chart: clinic work-in-progress over time]

This is the work-in-progress chart. The most useful process monitoring chart of all. It shows the changing size of the queue over time.  Good flow design is associated with small, steady queues.
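The counting rule described in the tweets above really is just a few lines of code (or a couple of spreadsheet formulas). Here is a minimal sketch with invented arrival and departure times:

# Minimal work-in-progress calculation: at each time step, count how many
# patients have arrived but not yet left. The times below are invented.
visits = [(0, 25), (10, 40), (15, 70), (30, 55), (35, 90), (50, 80)]  # (arrive, leave) in minutes

def wip_chart(visits, step=5):
    horizon = max(leave for _, leave in visits)
    for t in range(0, horizon + step, step):
        in_progress = sum(1 for arrive, leave in visits if arrive <= t < leave)
        print(f"t={t:3d} min  WIP={in_progress}  {'#' * in_progress}")

wip_chart(visits)

Good flow design shows up here as a short, steady bar; a bar that ramps up through the session – as in the clinic above – is the signature of a schedule that loads work faster than it can be done.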

18:22 [SD] Perfect! You’re right not to plot as XmR – this is a cusum metric. Not a healthy WIP chart this!

There was more to follow but the “ah ha” moment had been seen and shared.

Weds 16th October

MC completes the Online FISH course and receives his well-earned Certificate of Achievement.

This was his with-the-benefit-of-hindsight conclusion:

I wish I had known some of this before. I will have a totally different approach to improvement projects now. Key is to measure and model well before doing anything radical.

Improvement Science works.
Improvement-by-Design is a skill that can be learned quickly.
FISH is just a first step.

The Power of the Converted Skeptic

One of the biggest challenges in Improvement Science is diffusion of an improvement outside the circle of control of the innovator.

It is difficult enough to make a significant improvement in one small area – it is an order of magnitude more difficult to spread the word and to influence others to adopt the new idea!

One strategy is to shame others into change by demonstrating that their attitude and behaviour are blocking the diffusion of innovation.

This strategy does not work.  It generates more resistance and amplifies the differences of opinion.

Another approach is to bully others into change by discounting their opinion and just rolling out the “obvious solution” by top-down diktat.

This strategy does not work either.  It generates resentment – even if the solution is fit-for-purpose – which it usually is not!

So what does work?

The key to it is to convert some skeptics because a converted skeptic is a powerful force for change.

But doesn’t that fly in the face of established change management theory?

Innovation diffuses from innovators to early-adopters, then to the silent majority, then to the laggards and maybe even dinosaurs … doesn’t it?

Yes – but that style of diffusion is incremental, slow and has a very high failure rate.  What is very often required is something more radical, much faster and more reliable.  For that it needs both push from the Confident Optimists and pull from some Converted Pessimists.  The tipping point does not happen until the silent majority start to come off the fence in droves: and they do that when the noisy optimists and equally noisy pessimists start to agree.

The fence-sitters jump when the tug-o-war stalemate stops and the force for change becomes aligned in the direction of progress.

So how is a skeptic converted?

Simple. By another Converted Skeptic.


Here is a real example.

We are all skeptical about many things that we would actually like to improve.

Personal health for instance. Something like weight. Yawn! Not that Old Chestnut!

We are bombarded with shroud-waver stories that we are facing an epidemic of obesity, rapidly rising  rates of diabetes, and all the nasty and life-shortening consequences of that. We are exhorted to eat “five portions of fruit and veg a day” …  or else! We are told that we must all exercise our flab away. We are warned of the Evils of Cholesterol and told that overweight children are caused by bad parenting.

The more gullible and fearful are herded en-masse in the direction of the Get-Thin-Quick sharks who then have a veritable feeding frenzy. Their goal is their short-term financial health not the long-term health of their customers.

The more insightful, skeptical and frustrated seek solace in the chocolate Hob Nob jar.

For their part, the healthcare professionals are rewarded for providing ineffective healthcare by being paid-for-activity not for outcome. They dutifully measure the decline and hand out ineffective advice. Their goal is survival too.

The outcome is predictable and seemingly unavoidable.


So when a disruptive innovation comes along that challenges the current dogma and status quo, the healthy skeptics inevitably line up and proclaim that it will not work.

Not that it does not work. They do not know that because they never try it. They are skeptics. Someone else has to prove it to them.

And I am a healthy skeptic about many things.

I am skeptical about diets – the evidence suggests that their proclaimed benefit is difficult to achieve and even more difficult to sustain: and that is the hall-mark of either a poor design or a deliberate, profit-driven, yet legal scam.

So I decided to put an innovative approach to weight loss to the test.  It is not a diet – it is a design to achieve and sustain a healthier weight to height ratio.  And for it to work it must work for me because I am a diet skeptic.

The start of the story is  HERE

I am now a Converted Healthier Skeptic.

I call the innovative design a “2 out of 7 Lo-CHO” policy and what that means is for two days a week I just cut out as much carbohydrate (CHO) as feasible.  Stuff like bread, potatoes, rice, pasta and sugar. The rest of the time I do what I normally do.  There is no need for me to exercise and no need for me to fill up on Five Fruit and Veg.

[Chart: daily weight over 140 days of the 2-out-of-7 Lo-CHO design]

The chart above is the evidence of what happened. It shows a 7 kg reduction in weight over 140 days – and that is impressive given that it has required no extra exercise and no need to give up tasty treats completely and definitely no need to boost the bottom-line of a Get-Thin-Quick shark!

It also shows what to expect.  The weight loss starts steeper then tails off as it approaches a new equilibrium weight. This is the classic picture of what happens to a “system” when one of its “operational policies” is wisely re-designed.

Patience, persistence and a time-series chart are all that is needed. It takes less than a minute per day to monitor the improvement.

Even I can afford to invest a minute per day.

The BaseLine© chart clearly shows that the day-to-day variation is quite high: and that is expected – it is inherent in the 2-out-of-7 Lo-CHO design. It is not the short-term change that is the measure of success – it is the long-term improvement that is important.

It is important to measure daily – because it is the daily habit that keeps me mindful, aligned, and  on-goal.  It is not the measurement itself that is the most important thing – it is the conscious act of measuring and then plotting the dot in the context of the previous dots. The picture tells the story. No further “statistical” analysis is required.
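The one-minute daily habit can even be semi-automated. A minimal sketch, assuming a plain text log file – the file name and today’s value are just examples:

# A minimal "plot the dot" habit: append today's reading to a plain text log
# and redraw the run chart. File name and values are examples only.
from datetime import date

LOG = "weight_log.csv"     # one "YYYY-MM-DD,kg" line per day

def record_and_show(todays_kg):
    with open(LOG, "a") as f:
        f.write(f"{date.today().isoformat()},{todays_kg}\n")
    with open(LOG) as f:
        readings = [(d, float(kg))
                    for d, kg in (line.strip().split(",") for line in f if line.strip())]
    lowest = min(kg for _, kg in readings)
    for day, kg in readings:                      # crude text run chart
        print(f"{day}  {kg:5.1f}  " + "#" * int((kg - lowest) * 4) + "o")

record_and_show(81.4)      # example reading

The point is not the code – it is the act of adding today’s dot and looking at it in the context of all the previous dots.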

The power of this chart is that it provides hard evidence that is very effective for nudging other skeptics like me into giving the innovative idea a try.  I know because I have done that many times now.  I have converted other skeptics.  It is an innovation infection.

And the same principle appears to apply to other areas.  What is critical to success is tangible and visible proof of progress. That is what skeptics need. Then a rational and logical method and explanation that respects their individual opinion and requirements. The design has to work for them. And it must make sense.

They will come out with a string of “Yes … buts” and that is OK because that is how skeptics work.  Just answer their questions with evidence and explanations. It can get a bit wearing I admit but it is worth the effort.

An effective Improvement Scientist needs to be a healthy skeptic too – i.e. an open minded one.

Fear and Fuel

Improvement implies change.

Change requires motivation.

And there are two flavours of motivation juice – Fear and Fuel

Fear is the emotion that comes from anticipated loss in the future.  Loss means some form of damage. Physical, psychological or social harm.  We fear loss of peer-esteem and we fear loss of self-esteem … almost more than we fear physical harm.

Our fear of anticipated loss may be based on reality. Our experience of actual loss in the past.  We remember the emotional pain and we learn from past pain to fear future loss.

Our fear of anticipated loss may also be fueled by rhetoric.  The doom-mongering of the Shroud-Wavers, the Nay-Sayers, the Skeptics and the Cynics.


And there are examples where the rhetorical fear is deliberately generated to drive the fear-of-reality to “the solution” – which of course we have to pay dearly for. This is Machiavellian mass manipulation for commercial gain.

“Fear of germs, fear of fatness, fear of the invisible enemies outside and inside”.

Generating and ameliorating fear is big business. It is a Burn-and-Scrape design.

What we are seeing here is the Drama Triangle operating on a massive scale. The Persecutors create the fear, the Victims run away and the Persecutors then switch role to Rescuers and offer to sell the terrified-and-now-compliant Victims “the  solution” to their fear.  The Victims do not learn.  That is not the purpose – because that would end the Game and derail the Gravy Train.


So fear is not an effective way to motivate for sustained improvement,  and we have ample evidence to support that statement!  It might get us started, but it won’t keep us going.

The Burn-and-Scrape design that we see everywhere is a fear-driven-design.

Any improvements are transitory and usually only achieved at the emotional expense of a passionate idealist. When they get too tired to push any more the toast gets burnt again because the toaster is perfectly designed to burn toast.  Not intentionally designed to burn the toast but perfectly designed to nevertheless.

The use of Delusional Ratios and Arbitrary Targets (DRATs) is a fear-based-design-strategy. It ensures the Fear Game and Gravy Train continue.

And fear has a frightening cost. The cost of checking-and-correcting. The cost of the defensive-bureaucracy that may catch errors before too much local harm results but which itself creates unmeasurable global harm in a different way – by hoovering up the priceless human resource of life-time – like an emotional black hole.

The cost of errors. The cost of queues. The list of fear-based-design costs is long.

A fear-based-design for delivering improvement is a poor design.


So we need a better design.


And a better one is based on a positive-attractive-emotional force pulling us forwards into the future. The anticipation of gains for all. A win-win-win design.

Win-win-win design starts with the Common Purpose: the outcomes that everyone wants; and the outcomes that no-one wants.  We need both.  This balance creates alignment of effort on getting the NiceIfs (the wants) while avoiding the NoNos (the do not wants).

Then we ask the simple question: “What is preventing us having our win-win-win outcome now?”

The blockers are the parts of our current design that we need to change: our errors of omission and our errors of commission.  Our gaps and our gaffes.

And to change them we need to be clear what they are; where they are and how they came to be there … and that requires a diagnostic skill that is one of our errors of omission. We have never learned how to diagnose our process design flaws.

Another common blocker is that we believe that a win-win-win outcome is impossible. This is a learned belief. And it is a self-fulfilling prophesy.

We may also believe that all swans are white because we have never seen a black swan – even though we know, in principle, that a black swan could be possible.

Rhetoric and Reality are not the same thing.  Feeling it could be possible and knowing that it actually is possible are different emotions. We need real evidence to challenge our life-limiting rhetoric.

Weary and wary skeptics crave real evidence not rhetorical exhortation.

So when that evidence is presented – and the Impossibility Hypothesis is disproved – then an emotional shock is inevitable.  We are now on the emotional roller-coaster called the Nerve Curve.  And the deeper our skepticism the bigger the shock.


After the shock we characteristically do one of three things:

1. We discount the evidence and go into denial.  We refuse to challenge our own rhetoric. Blissful ignorance is attractive.  The gap between intent and impact is scary.

2. We go quiet because we are now stuck in the painful awareness of the transition zone between the past and the future. The feelings associated with the transition are anxiety and depression. We don’t want to go back and we don’t know how to go forwards.

3. We sit up, we take notice, we listen harder, we rub our chins, our minds race as we become more and more excited. The feelings associated with the stage of resolution are curiosity, excitement and hope.

It is actually a sequence and it is completely normal.


And those who reach Stage 3 of the Nerve Curve say things like “We have food for thought; we feel inspired; our passion is re-ignited; we now have a beacon of hope for the future.”

That is the flavour of motivation-juice that is needed to fuel the improvement-by-design engine and to deliver win-win-win designs that are both surprising and self-sustaining.

And what actually changes our belief of what is possible is when we learn to do it for ourselves. For real.

That is Improvement Science in action. It is a pragmatic science.

DRAT!

[Bing Bong]  The sound bite heralded Leslie joining the regular Improvement Science mentoring session with Bob.  They were now using web-technology to run virtual meetings because it allows a richer conversation and saves a lot of time. It is a big improvement.

<Bob> Hi Leslie, how are you today?

<Leslie> OK thank you Bob.  I have a thorny issue to ask you about today. It has been niggling me ever since we started to share the experience we are gaining from our current improvement-by-design project.

<Bob> OK. That sounds interesting. Can you paint the picture for me?

<Leslie> Better than that – I can show you the picture, I will share my screen with you.

DRAT_01 <Bob> OK. I can see that RAG table. Can you give me a bit more context?

<Leslie> Yes. This is how our performance management team have been asked to produce their 4-weekly reports for the monthly performance committee meetings.

<Bob> OK. I assume the “Period” means sequential four week periods … so what is Count, Fail and Fail%?

<Leslie> Count is the number of discharges in that 4 week period, Fail is the number whose length of stay is longer than the target, and Fail% is the ratio of Fail/Count for each 4 week period.

<Bob> It looks odd that the counts are all 28.  Is there some form of admission slot carve-out policy?

<Leslie> Yes. There is one admission slot per day for this particular stream – that has been worked out from the average historical activity.

<Bob> Ah! And the Red, Amber, Green indicates what?

<Leslie> That depends on where the Fail% falls in a set of predefined target ranges: less than 5% is Green, 5-10% is Amber and more than 10% is Red.

<Bob> OK. So what is the niggle?

<Leslie>Each month when we are in the green we get no feedback – a deafening silence. Each month we are in amber we get a warning email.  Each month we are in the red we have to “go and explain ourselves” and provide a “back-on-track” plan.

<Bob> Let me guess – this feedback design is not helping much.

<Leslie> It is worse than that – it creates a perpetual sense of fear. The risk of breaching the target is distorting people’s priorities and their behaviour.

<Bob> Do you have any evidence of that?

<Leslie> Yes – but it is anecdotal.  There is a daily operational meeting and the highest priority topic is “Which patients are closest to the target length of stay and therefore need to have their  discharge expedited?“.

<Bob> Ah yes.  The “target tail wagging the quality dog” problem. So what is your question?

<Leslie> How do we focus on the cause of the problem rather than the symptoms?  We want to be rid of the “fear of the stick”.

<Bob> OK. What you have here is a very common system design flaw. It is called a DRAT.

<Leslie> DRAT?

<Bob> “Delusional Ratio and Arbitrary Target”.

<Leslie> Ha! That sounds spot on!  “DRAT” is what we say every time we miss the target!

<Bob> Indeed.  So first plot this yield data as a time series chart.

<Leslie> Here we go.

DRAT_02

<Bob> Good. I see you have added the cut-off thresholds for the RAG chart. These 5% and 10% thresholds are arbitrary and the data shows your current system is unable to meet them. Your design looks incapable.

<Leslie>Yes – and it also shows that the % expressed to one decimal place is meaningless because there are limited possibilities for the value.

<Bob> Yes. These are two reasons that this is a Delusional Ratio; there are quite a few more.
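
As an aside for readers who want to reproduce the arithmetic: with a fixed count of 28 discharges per period the Fail% can only take a small set of discrete values, and the 5% and 10% RAG thresholds fall arbitrarily between them. The minimal sketch below (Python; the count and thresholds come from the dialogue, everything else is illustrative) makes that visible.

```python
# With 28 discharges per period the Fail% can only take 29 discrete values,
# so quoting it to one decimal place implies a precision that does not exist,
# and a single extra failure can jump the report a whole RAG band.
COUNT = 28  # one admission slot per day for four weeks, as described above

def rag_status(fail_pct):
    # RAG thresholds as described: <5% Green, 5-10% Amber, >10% Red
    if fail_pct < 5.0:
        return "Green"
    if fail_pct <= 10.0:
        return "Amber"
    return "Red"

for fails in range(5):  # the first few possible outcomes
    fail_pct = 100.0 * fails / COUNT
    print(f"{fails} fails -> {fail_pct:.1f}% -> {rag_status(fail_pct)}")

# 0 fails -> 0.0% -> Green
# 1 fails -> 3.6% -> Green
# 2 fails -> 7.1% -> Amber
# 3 fails -> 10.7% -> Red
# 4 fails -> 14.3% -> Red
```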

DRAT_03

<Leslie> OK and if I plot this as an Individuals chart I can see that this variation is not exceptional.

<Bob> Careful Leslie. It can be dangerous to do this: an Individuals chart of aggregate yield becomes quite insensitive with aggregated counts of relatively rare events, a small number of possible levels that go down to zero, and a limited number of points. The SPC zealots compound the problem: plotting this data as a C-chart or a P-chart makes no difference.

This is all the effect of the common practice of applying an arbitrary performance target, then counting the failures and using that count as a means of control.

It is poor feedback loop design – but a depressingly common one.
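
For context, the Individuals (XmR) chart Leslie mentions computes its natural process limits from the mean moving range. A minimal sketch is below (Python; the yield values are invented and 2.66 is the standard XmR constant) – it also hints at why a short series of coarse, bounded yield values gives such a chart very little to detect.

```python
# Natural process limits for an Individuals (XmR) chart of period Fail%.
# The values are illustrative; 2.66 is the standard XmR constant.
fail_pct = [3.6, 7.1, 3.6, 10.7, 7.1, 3.6, 7.1, 10.7, 3.6, 7.1]

mean = sum(fail_pct) / len(fail_pct)
moving_ranges = [abs(a - b) for a, b in zip(fail_pct[1:], fail_pct[:-1])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

unpl = mean + 2.66 * avg_mr              # upper natural process limit
lnpl = max(0.0, mean - 2.66 * avg_mr)    # a yield cannot fall below zero

print(f"mean {mean:.1f}%, limits {lnpl:.1f}% to {unpl:.1f}%")
# With only a handful of coarse, bounded values the limits are wide and
# almost nothing gets flagged - Bob's point about insensitivity.
```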

<Leslie> So what do we do? What is a better design?

<Bob> First ask what the purpose of the feedback is?

<Leslie> To reduce the number of beds and save money by forcing down the length of stay so that the bed-day load is reduced and so we can do the same activity with fewer beds and at the same time avoid cancellations.

<Bob> OK. That sounds reasonable from the perspective of a tax-payer and a patient. It would also be a more productive design.

<Leslie> I agree but it seems to be having the opposite effect.  We are focusing on avoiding breaches so much that other patients get delayed who could have gone home sooner and we end up with more patients to expedite. It is like a vicious circle.  And every time we fail we get whacked with the RAG stick again. It is very demoralizing and it generates a lot of resentment and conflict. That is not good for anyone – least of all the patients.

<Bob> Yes. That is the usual effect of a DRAT design. Remember that senior managers have not been trained in process improvement-by-design either, so blaming them is also counter-productive. We need to go back to the raw data. Can you plot the actual LOS by patient, in order of discharge, as a run chart?

DRAT_04

<Bob> OK – is the maximum LOS target 8 days?

<Leslie> Yes – and this shows we are meeting it most of the time. But it is only achieved with a huge amount of effort.

<Bob> Do you know where 8 days came from?

<Leslie> I think it was the historical average divided by 85% – someone read in a book somewhere that 85%  average occupancy was optimum and put 2 and 2 together.

<Bob> Oh dear! The “85% Occupancy is Best” myth combined with the “Flaw of Averages” trap. Never mind – let me explain the reasons why it is invalid to do this.

<Leslie> Yes please!

<Bob> First plot the data as a run chart and  as a histogram – do not plot the natural process limits yet as you have done. We need to do some validity checks first.

DRAT_05

<Leslie> Here you go.

<Bob> What do you see?

<Leslie> The histogram  has more than one peak – and there is a big one sitting just under the target.

<Bob> Yes. This is called the “Horned Gaussian” and it is the characteristic pattern of an arbitrary lead-time target that is distorting the behaviour of the system – just as you have described subjectively. There is a smaller peak with a mode of 4 days and there are a few very long length-of-stay outliers. This multi-modal pattern means that the mean and standard deviation of this data are meaningless numbers, as are any numbers derived from them. It is like having a bag of mixed fruit and then setting a maximum allowable size for an unspecified piece of fruit. Meaningless.
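
A small illustration of the “bag of mixed fruit” point – a minimal sketch (Python, with entirely made-up LOS values) showing that the mean and standard deviation of a multi-modal mixture describe none of its parts:

```python
# Entirely illustrative LOS data: a short-stay group (mode ~4 days), a group
# piled up just under the 8-day target, and a couple of long-stay outliers.
import statistics

short_stay = [3, 4, 4, 4, 5, 4, 3, 5, 4, 4]
near_target = [7, 8, 7, 8, 8, 7, 8, 7]
outliers = [18, 25]
mixture = short_stay + near_target + outliers

print("mixture mean:", round(statistics.mean(mixture), 1))   # about 7 days
print("mixture sd:  ", round(statistics.stdev(mixture), 1))  # about 5 days
# Neither number describes the short-stay group, the near-target group or the
# outliers - so any target derived from them is built on sand.
```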

<Leslie> And the cases causing the breaches are completely different and could never realistically achieve that target! So we are effectively being randomly beaten with a stick. That is certainly how it feels.

<Bob> They are certainly different but you cannot yet assume that their longer LOS is inevitable. This chart just says – “go and have a look at these specific cases for a possible cause for the difference“.

<Leslie> OK … so if they are from a different system and I exclude them from the analysis what happens?

<Bob> It will not change reality.  The current design of  this process may not be capable of delivering an 8 day upper limit for the LOS.  Imposing  a DRAT does not help – it actually makes the design worse! As you can see. Only removing the DRAT will remove the distortion and reveal the underlying process behaviour.

<Leslie> So what do we do? There is no way that will happen in the current chaos!

<Bob> Apply the 6M Design® method. Map, Measure and Model it. Understand how it is behaving as it is, then design out the causes of longer LOS and that way deliver a shorter and less variable LOS. Your chart shows that your process is stable. That means you have enough flow capacity – so look at the policies. Draw on all your FISH training. That way you achieve your common purpose, the big nasty stick goes away, and everyone feels better. And in the process you will demonstrate that there is a better feedback design than DRATs and RAGs. A win-win-win design.

<Leslie> OK. That makes complete sense. Thanks Bob!  But what you have described is not part of the FISH course.

<Bob> You are right. It is part of the ISP training that comes after FISH. Improvement Science Practitioner.

<Leslie> I think we will need to get a few more people trained in the theory, techniques and tools of Improvement Science.

<Bob> That would appear to be the case. They will need a real example to see what is possible.

<Leslie> OK. I am on the case!

Race for the Line

It is surprising how competitive most people are. We are constantly comparing ourselves with others and using what we find to decide what to do next. Groan or Gloat. Chase or Cruise.

This is because we are social animals. Comparing with others is hard-wired into us. We have little choice.

But our natural competitive behaviour can become counter-productive when we learn that we can look better-by-comparison if we block or trip-up our competitors.  In a vainglorious attempt to make ourselves look better-by-comparison we spike the wheels of our competitors’ chariots.  We fight dirty.

It is not usually openly aggressive fighting.  Most of our spiking is done passively. Often by deliberately not doing something.  A deliberate act of omission.  And if we are challenged we often justify our act of omission by claiming we were too busy.

This habitual passive-aggressive learned behaviour is not only toxic to improvement, it creates a toxic culture too. It is toxic to everything.

And it ensures that we stay stuck in The Miserable Job Swamp.  It is a bad design.

So we need a better one.

One idea is to eliminate competition.  This sounds plausible but it does not work. We are hard-wired to compete because it has proven to be a very effective long term survival strategy. The non-competitive have not survived.  To be deliberately non-competitive will guarantee mediocrity and future failure.

A better design is to leverage our competitive nature and this is surprisingly easy to do.

We flip the “battle” into a “race”.

To do that we need:

1) A clear destination – a shared common purpose – that can be measured. We need to be able to plot our progress using objective evidence.

2) A proven, safe, effective and efficient route plan to get us to our destination.

3) A required arrival time that is realistic.  Open-ended time-scales do not work.

4) Regular feedback to measure our individual progress and to compare ourselves with others.  Selective feedback is ineffective.  Secrecy or anonymous feedback is counter-productive at best and toxic at worst.

5) The ability to re-invest our savings on all three win-win-win dimensions: emotional, temporal and financial.  This fuels the engine of improvement. Us.

The rest just happens – but not by magic – it happens because this is a better Improvement-by-Design.

Find and Fill

Many barriers to improvement are invisible.

This is because they are caused by what is not present rather than what is.  They are gaps or omissions.

Some gaps are blindingly obvious. This is because we expect to see something there, so we notice when it is missing. We would notice the gap if the rope bridge across a chasm were missing and only the end posts were visible.

Many gaps are not obvious. This is because we have no experience or expectation.  The gap is invisible.  We are blind to the omission.

These are the gaps that we accidentally stumble into. Such as a gap in our knowledge and understanding that we cannot see. These are the gaps that create the fear of failure. And the fear is especially real because the gap is invisible and we only know when it is too late.

It is like walking across an emotional minefield. At any moment we could step on an ignorance mine and our confidence would be blasted into fragments.

So our natural and reasonable reaction is to stay outside the emotional minefield and inside our comfort zones – where we feel safe.  We give up trying to learn and trying to improve. Every-one hopes that Some-one or Any-one will do it for us.  No-one does.

The path to Improvement is always across an emotional minefield because improvement implies unlearning. So we need a better design than blundering about hoping not to fall into an invisible gap.  We need a safer design.

There are a number of options:

Option 1. Ask someone who knows the way across the minefield and can demonstrate it. Someone who knows where the mines are and knows how to avoid them. Someone to tell us where to step and where not to.

Option 2. Clear a new path and mark it clearly so others can trust that it is safe.  Remove the ignorance mines. Find and Fill the knowledge map.

Option 1 is quicker but it leaves the ignorance mines in place.  So sooner or later someone will step on one. Boom!

We need to be able to do Option 2.

The obvious  strategy for Option 2 is to clear the ignorance mines.  We could do this by deliberately blundering about setting off the mines. We could adopt the burn-and-scrape or learn-from-mistakes approach.

Or we could detect, defuse and remove them.

The former requires people willing to take emotional risks; the latter does not require such a sacrifice.

And “learn-by-mistakes” only works if people are able to make mistakes visibly so everyone can learn. In an adversarial, competitive, distrustful context this can not happen: and the result is usually for the unwilling troops to be forced into the minefield with the threat of a firing-squad if they do not!

And where a mistake implies irreversible harm it is not acceptable to learn that way. Mistakes are covered up. The ignorance mines are re-set for the next hapless victim to step on. The emotional carnage continues. Any chance of sustained, system-wide improvement is blocked.

So in a low-trust cultural context the detect-defuse-and-remove strategy is the safer option.

And this requires a proactive approach to finding the gaps in understanding; a proactive approach to filling the knowledge holes; and a proactive approach to sharing what was learned.

Or we could ask someone who knows where the ignorance mines are and work our way through finding and filling our knowledge gaps. By that means any of us can build a safe, effective and efficient path to sustainable improvement.

And the person to ask is someone who can demonstrate a portfolio of improvement in practice – an experienced Improvement Science Practitioner.

And we can all learn to become an ISP and then guide others across their own emotional minefields.

All we need to do is take the first step on a well-trodden path to sustained improvement.

Fudge? We Love Fudge!

It is almost autumn again.  The new school year brings anticipation and excitement. The evenings are drawing in and there is a refreshing chill in the early morning air.

This is the time of year for fudge.

Alas not the yummy sweet sort that Grandma cooked up and gave out as treats.

In healthcare we are already preparing the Winter Fudge – the annual guessing game of attempting to survive the Winter Pressures. By fudging the issues.

This year with three landmark Safety and Quality reports under our belts we have more at stake than ever … yet we seem as ill prepared as usual. Mr Francis, Prof Keogh and Dr Berwick have collectively exhorted us to pull up our socks.

So let us explore how and why we resort to fudging the issues.

Watch the animation of a highly simplified emergency department and follow the thoughts of the manager. You can pause, rewind, and replay as much as you like.  Follow the apparently flawless logic – it is very compelling. The exercise is deliberately simplified to eliminate wriggle room. But it is valid because the behaviour is defined by the Laws of Physics – and they are not negotiable.

http://www.youtube.com/watch?v=geRBGP-u5zg&rel=0&loop=1&modestbranding=1

The problem was a combination of several planning flaws – two in particular.

First is the “Flaw of Averages” which is where the past performance-over-time is boiled down to one number. An average. And that is then used to predict precise future behaviour. This is a very big mistake.

The second is the “Flaw of Fudge Factors” which is an attempt to mitigate the effects of the first error by fudging the answer – by adding an arbitrary “safety margin”.

This pseudo-scientific sleight-of-hand may polish the planning rhetoric and render it more plausible to an unsuspecting Board – but it does not fool Reality.

In reality the flawed design failed – as the animation dramatically demonstrated.  The simulated patients came to harm. Unintended harm to be sure – but harm nevertheless.

So what is the alternative?

The alternative is to learn how to avoid Sir Flaw of Averages and his slippery friend Mr Fudge Factor.

And learning how to do that is possible … it is called Improvement Science.

And you can start right now … click HERE.

Step 5 – Monitor

Improvement-by-Design is not the same as Improvement-by-Desire.

Improvement-by-Design has a clear destination and a design that we know can get us there because we have tested it before we implement it.

Improvement-by-Desire has a vague direction and no design – we do not know if the path we choose will take us in the direction we desire to go. We cannot see the twists and turns, the unknown decisions, the forks, the loops, and the dead-ends. We expect to discover those along the way. It is an exercise in hope.

So where pessimists and skeptics dominate the debate then Improvement-by-Design is a safer strategy.

Just over seven weeks ago I started an Improvement-by-Design project – a personal one. The destination was clear: to get my BMI (body mass index) into a “healthy” range by reducing weight by about 5 kg.  The design was clear too – to reduce energy input rather than increase energy output. It is a tried-and-tested method – “avoid burning the toast”.  The physical and physiological model predicted that the goal was achievable in 6 to 8 weeks.

So what has happened?

To answer that question requires two time-series charts. The input chart of calories ingested and the output chart of weight. This is Step 5 of the 6M Design® sequence.

Energy_Weight_Model

Remember that there was another parameter in this personal Energy-Weight system: the daily energy expended.

But that is very difficult to measure accurately – so I could not do that.

What I could do was to estimate the actual energy expended from the model of the system using the measured effect of the change. But that is straying into the Department of Improvement Science Nerds. Let us stay in the real world a  bit longer.

Here is the energy input chart …

SRD_EnergyIn_XmR

It shows an average calorie intake of 1500 kcal – the estimated value required to achieve the weight loss given the assumptions of the physiological model. It also shows a wide day-to-day variation. It does not show any signal flags (red dots), so an inexperienced Improvementologist might conclude that this is just random noise.

It is not.  The data is not homogeneous. There is a signal in the system – a deliberate design change – and without that context it is impossible to correctly interpret the chart.

Remember Rule #1: Data without context is meaningless.

The deliberate process design change was to reduce calorie intake for just two days per week by omitting unnecessary Hi-Cal treats – like those nice-but-naughty Chocolate Hobnobs. But which two days varied – so there is no obvious repeating pattern in the chart. And the intake on all days varied – there were a few meals out and some BBQ action.

To separate out these two parts of the voice-of-the-process we need to rationally group the data into the Lo-cal days (F) and the OK-cal days (N).

SRD_EnergyIn_Grouped_XmR

The grouped BaseLine© chart tells a different story.  The two groups clearly have a different average and both have a lower variation-over-time than the meaningless mixed-up chart.

And we can now see a flag – on the second F day. That is a prompt for an “investigation” which revealed: will-power failure.  Thursday evening beer and peanuts! The counter measure was to avoid Lo-cal on a Thursday!
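
Rational grouping is nothing more exotic than splitting one series by a known cause before computing any statistics. A minimal sketch (Python, with invented intake values) of the F/N split described above:

```python
# Rational grouping: split the single intake series by the known design
# change (F = Lo-cal day, N = OK-cal day) before averaging anything.
# All intake values are invented for illustration.
daily = [("N", 1650), ("F", 820), ("N", 1720), ("N", 1590),
         ("F", 1400),  # the Thursday beer-and-peanuts lapse
         ("N", 1680), ("N", 1750)]

groups = {}
for label, kcal in daily:
    groups.setdefault(label, []).append(kcal)

for label, values in sorted(groups.items()):
    print(label, "average:", round(sum(values) / len(values)), "kcal/day")
# One mixed-up average hides the design change; two grouped averages reveal
# it - and make the will-power-failure day stand out within its own group.
```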

What we are seeing here is the fifth step of 6M Design® exercise  – the Monitor step.

And as well as monitoring the factor we are changing – the cause;  we also monitor the factor we want to influence – the effect.

The effect here is weight. And our design includes a way of monitoring that – the daily weighing.

SRD_WeightOut_XmR

The output metric BaseLine© chart – weight – shows a very different pattern. It is described as “unstable” because there are clusters of flags (red dots) – some at the start and some at the end. The direction of the instability is “falling” – which is the intended outcome.
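
For readers new to these charts: in its simplest form a “flag” is a point that falls outside the natural process limits. A minimal sketch (Python, hypothetical daily weights, the same 2.66 XmR constant as before) of that rule:

```python
# Simplest signal rule on an Individuals chart: flag any point that falls
# outside the natural process limits. The weights are hypothetical.
weights = [79.0, 78.9, 78.9, 78.7, 78.6, 78.4, 78.1, 77.9, 77.6, 77.2]

mean = sum(weights) / len(weights)
avg_mr = sum(abs(a - b) for a, b in zip(weights[1:], weights[:-1])) / (len(weights) - 1)
unpl, lnpl = mean + 2.66 * avg_mr, mean - 2.66 * avg_mr

flags = [(day, w) for day, w in enumerate(weights) if w > unpl or w < lnpl]
print(f"limits {lnpl:.2f}-{unpl:.2f} kg, flagged points: {flags}")
# A steadily falling series flags points at the start (above the upper limit)
# and at the end (below the lower limit) - the "unstable" pattern described above.
```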

So we have robust, statistically valid evidence that our modified design is working.

The weight is falling so the energy going in must be less than the energy going out. I am burning off the excess lard without doing any extra exercise. The physics of the system mandate that this is the only explanation. And that was my design specification.

So that is good. Our design is working – but is it working as we designed?  Does observation match prediction? This is Improvement-by-Design.

Remember that we had to estimate the other parameter of our model – the average daily energy output – and we guessed a value of 2400 kcal per day using generic published data. Now I can refine the model using my specific measured change in weight – and I can work backwards to calculate the third parameter. And when I did that the number came out at 2300 kcal per day. Not a huge difference – the equivalent of one yummy Chocolate Hobnob a day – but the effect is cumulative. Over the 53 days of the 6M Design® project so far that would be a 5300 kcal difference – about 0.6 kg of useless blubber.
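
The beer-mat calibration described above fits in a few lines. In the sketch below (Python) the 9000 kcal per kg energy density of body fat is an assumed round figure chosen to be consistent with the ~0.6 kg quoted above, and the observed weight loss is a hypothetical value; the 53 days, the 1500 kcal intake and the 2400 kcal guess come from the story.

```python
# Calibrating the unmeasurable parameter (daily energy output) by working
# backwards from the measured weight change. 9000 kcal/kg is an assumed
# round figure for body fat; the observed loss is hypothetical.
DAYS = 53                # length of the project so far
INTAKE = 1500            # measured average intake, kcal/day
GUESSED_OUTPUT = 2400    # generic published estimate, kcal/day
KCAL_PER_KG = 9000       # assumed energy density of body fat

# Forward prediction from the original guess ...
predicted_loss_kg = (GUESSED_OUTPUT - INTAKE) * DAYS / KCAL_PER_KG   # about 5.3 kg

# ... and the reverse calculation from what actually happened.
observed_loss_kg = 4.7   # hypothetical measured weight change
calibrated_output = INTAKE + observed_loss_kg * KCAL_PER_KG / DAYS   # about 2300 kcal/day

print(f"predicted loss {predicted_loss_kg:.1f} kg, calibrated output {calibrated_output:.0f} kcal/day")
```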

So now I have refined my personal energy-weight model using the new data and I can update my prediction and create a new chart – a Deviation from Aim chart.

SRD_WeightOut_DFA
This is the chart I need to watch to see if I am on the predicted track – and it too is unstable, and not in a good direction. It shows that the deviation-from-aim is increasing over time, and this is because my original guesstimate of an unmeasurable model parameter was too high.
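
A deviation-from-aim series is simply the observed value minus the value the plan predicted for that day. A minimal sketch (Python, all numbers hypothetical) shows how a too-optimistic model makes the DFA drift away from zero:

```python
# Deviation-from-Aim: observed weight minus the weight the original plan
# predicted for each day. All numbers are hypothetical.
start_weight = 79.0                                # kg, assumed starting point
planned_daily_loss = (2400 - 1500) / 9000          # kg/day from the original guess

observed = [79.0, 78.9, 78.8, 78.8, 78.7]          # hypothetical daily weights
dfa = [w - (start_weight - planned_daily_loss * day)
       for day, w in enumerate(observed)]
print([round(d, 2) for d in dfa])                  # drifts upwards from zero
# A DFA that drifts steadily away from zero says the plan (or the model behind
# it) needs revising - which is exactly the decision point described above.
```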

This means that my current design will not get me to where I want to be, when I want to be there. This tells me I need to tweak my design. And I have a list of options.

1) I could adjust the target average calories per day down from 1500 to 1400 and cut out a few more calories; or

2) I could just keep doing what I am doing and accept that it will take me longer to get to the destination; or

3) I could do a bit of extra exercise to burn off the extra 100 kcal a day; or

4) I could do a bit of any or all three.

And because I am comparing experience with expectation using a DFA chart I will know very quickly if the design tweak is delivering.

And because some nice weather has finally arrived and the BBQ will be busy, I have chosen to take longer to get there. I will enjoy the weather, have a few beers and some burgers. And that is OK. It is a perfectly reasonable design option – a rational and justifiable choice.

And I need to set my next destination – a weight of about 72 kg according to the BMI chart – and with my calibrated Energy-Weight model I will know exactly how to achieve that weight and how long it will take me. And I also know how to maintain it – by increasing my calorie intake. More beer and peanuts – maybe – or the occasional Chocolate Hobnob even. Hurrah! Win-win-win!


6MDesign This real-life example illustrates 6M Design® in action and demonstrates that it is a generic framework.

The energy-weight model in this case is a very simple one that can be worked out on the back of a beer mat (which is what I did).

It is called a linear model because the relationship between calories-in and weight-out is approximately a straight line.

Most real-world systems are not like this. Inputs are not linearly related to outputs.  They are called non-linear systems: and that makes a BIG difference.

A very common error is to impose a “linear model” on a “non-linear system” and it is a recipe for disappointment and disaster.  We do that when we commit the Flaw of Averages error. We do it when we plot linear regression lines through time-series data. We do it when we extrapolate beyond the limits of our evidence.  We do it when we equate time with money.

The danger of this error is that our linear model leads us to make unwise decisions and we actually make the problem worse – not better. We then give up in frustration and label the problem as “impossible” or “wicked” or get sucked into various forms of Snake Oil Sorcery.

The safer approach is to assume the system is non-linear and just let the voice of the system talk to us through our BaseLine© charts. The challenge for us is to learn to understand what the system is saying.

That is why the time-series charts are called System Behaviour Charts and that is why they are an essential component of Improvement-by-Design.

However – there is a step that must happen before this – and that is to get the Foundations in place. The foundation of knowledge on which we can build our new learning. That gap must be filled first.

And anyone who wants to invest in learning the foundations of improvement science can now do so at their own convenience and at their own pace because it is on-line …. and it is here.


Step 6 – Maintain

Anyone with much experience of  change will testify that one of the hardest parts is sustaining the hard won improvement.

The typical story is all too familiar – a big push for improvement, a dramatic improvement, congratulations and presentations then six months later it is back where it was before but worse. The cynics are feeding on the corpse of the dead change effort.

The cause of this recurrent nightmare is a simple error of omission.

Failure to complete the change sequence. Missing out the last and most important step. Step 6 – Maintain.

Regular readers may remember the story of the pharmacy project – where a sceptical department were surprised and delighted to discover that zero-cost improvement was achievable and that a win-win-win outcome was not an impossible dream.

Enough time has now passed to ask the question: “Was the improvement sustained?”

TTO_Yield_Nov12_Jun13

The BaseLine© chart above shows their daily performance data on their 2-hour turnaround target for to-take-out prescriptions (TTOs). The weekends are excluded because the weekend system is different from the weekday system. The first split in the data, in Jan 2013, is when the improvement-by-design change was made. Step 4 of the 6M Design® sequence – Modify.

There was an immediate and dramatic improvement in performance that was sustained for about six weeks – then it started to drift back. Bit by Bit.  The time-series chart flags it clearly.


So what happened next?

The 12-week review happened next – and it was done by the change leader – in this case the Inspector/Designer/Educator.  The review data plotted as a time-series chart revealed instability and that justified an investigation of the root cause – which was that the final and critical step had not been completed as recommended. The inner feedback loop was missing. Step 6 – Maintain was not in place.

The outer feedback loop had not been omitted. That was the responsibility of the experienced change leader.

And the effect of closing the outer-loop is clearly shown by the third segment – a restoration of stability and improved capability. The system is again delivering the improvement it was designed to deliver.


What does this lesson teach us?

The message here is that the sponsors of improvement have essential parts to play in the initiation and the maintenance of change and improvement. If they fail in their responsibility then the outcome is inevitable and predictable. Mediocrity and cynicism.

Part 1: Setting the clarity and constancy of common purpose.

Without a clear purpose, alignment, focus and effectiveness are thwarted. Purpose that changes frequently is not a purpose – it is reactive knee-jerk politics. Constancy of purpose is required because improvement takes time to achieve and to embed. There is always a lag, so moving the target while the arrow is in flight is dangerous and leads to disengagement. Establishing common ground is essential to avoiding the time-wasting discussion and negotiation that is inevitable when opinions differ – which they always do.

Part 2: Respectful challenge.

Effective change leadership requires an ability to challenge from a position of mutual respect.  Telling people what to do is not leadership – it is dictatorship.  Dodging the difficult conversations and passing the buck to others is not leadership – it is ineffective delegation. Asking people what they want to do is not leadership – it is abdication of responsibility.  People need their leaders to challenge them and to respect them at the same time.  It is not a contradiction.  It is possible to do both.

And one way that a leader of change can challenge with respect is to expose the need for change; to create the context for change; and then to commit to holding those charged with change to account – including themselves.  And to make it clear at the start what their expectation is as a leader – and what the consequences of disappointment are.

It is a delight to see individuals,  teams, departments and organisations blossom and grow when the context of change is conducive.  And it is disappointing to see them wither and shrink when the context of change is laced with cynicide – the toxic product of cynicism.


So what is the next step?

What could an aspirant change leader do to get this for themselves and their organisations?

One option is to become a Student of Improvementology® – and they can do that here.

Six Weeks

There seems to be a natural cycle to change and improvement.

A pace that feels right and that works well. Try to push faster and resistance increases. Relax and pull slower and interest wanders.

The pace that feels about right is a six week cycle.

So why six weeks? Is it 42 days that is important, or is there something about a seven-day week and the number six?

The daily and the weekly cycles are dictated by the Celestial Clockwork. The day is the Earth’s rotation and the week is one quarter of the 28-day Lunar cycle. These are not arbitrary policies – they are celestial physics. Not negotiable.

So where does the Six come from? That does seem to be something to do with people and psychology.

Remember the Nerve Curve?

The predictable sequence of emotional states that accompanies significant change? The sequence of Shock-Denial-Anger-Bargaining-Depression-Resolution?  It has six stages.  Is that just a co-incidence?

Remember 6M Design®?

The required sequence of steps that structure any improvement-by-design challenge? It has six stages.

Is that just a co-incidence too?

And is seven days a convenient size? It was originally six-days-of-work and one-day-of-rest. The modern 5-and-2 design is a recent invention.

And if each stage requires at least one week to complete and we require six stages then we get a Six Week cycle.

It sounds like a plausible hypothesis, but is that what happens in reality?

There is a lot of empirical evidence to suggest that it does. It seems we feel comfortable working with six-week chunks of time.  We plan about six weeks ahead.  School terms are divided into about six week chunks. A financial “quarter” is about two chunks. We can fit four of those into a Year with a bit left over.  Action learning seems to work well in six week cycles. Courses are very often carved up into six week modules. It feels OK.

So what does this mean for the Improvement Scientist?

First it suggests that doing something every week makes sense. Leaving it all to the last minute does not.
Second it suggests that each week the step required and the emotional reaction is predictable.
Third it suggests that five weeks of facilitative investment are required.
Fourth it suggests that if something throws a spanner into the sequence then we need to add extra weeks.

And it suggests that in the Seventh Week we can rest, reflect, share and prepare for the next Six Week change cycle.

So maybe Douglas Adams was correct – the Answer to Life the Universe and Everything is Forty Two.

Burn-and-Scrape


[Ring Ring]

<Bob> Hi Leslie how are you to today?

<Leslie> I am good thanks Bob and looking forward to today’s session. What is the topic?

<Bob> We will use your Niggle-o-Gram® to choose something. What is top of the list?

<Leslie> Let me see.  We have done “Engagement” and “Productivity” so it looks like “Near-Misses” is next.

<Bob> OK. That is an excellent topic. What is the specific Niggle?

<Leslie> “We feel scared when we have a safety near-miss because we know that there is a catastrophe waiting to happen.”

<Bob> OK so the Purpose is to have a system that we can trust not to generate avoidable harm. Is that OK?

<Leslie> Yes – well put. When I asked myself the purpose question I got a “do” answer rather than a “have” one. The word trust is key too.

<Bob> OK – what is the current safety design used in your organisation?

<Leslie> We have a computer system for reporting near misses – but it does not deliver the purpose above. If the issue is ranked as low harm it is just counted, if medium harm then it may be mentioned in a report, and if serious harm then all hell breaks loose and there is a root cause investigation conducted by a committee that usually results in a new “you must do this extra check” policy.

<Bob> Ah! The Burn-and-Scrape model.

<Leslie>Pardon? What was that? Our Governance Department call it the Swiss Cheese model.

<Bob> Burn-and-Scrape is where we wait for something to go wrong – we burn the toast – and then we attempt to fix it – we scrape the burnt toast to make it look better. It still tastes burnt though and badly burnt toast is not salvageable.

<Leslie>Yes! That is exactly what happens all the time – most issues never get reported – we just “scrape the burnt toast” at all levels.

<Bob> One flaw with the Burn-and-Scrape design is that harm has to happen for the design to work.

It is all reactive.

Another design flaw is that it focuses attention on the serious harm first – avoidable mortality for example. Counting the extra body bags completely misses the purpose. Avoidable death means avoidably shortened lifetime. Avoidable non-fatal harm will also shorten lifetime – and it is even harder to measure. Just consider the cumulative effect of all that non-fatal, life-shortening, avoidable-but-ignored harm.

Most of the reason that we live longer today is that we have removed a lot of lifetime-shortening hazards – like infectious disease and severe malnutrition.

Take health care as an example – accurately measuring avoidable mortality in an inherently high-risk system is rather difficult.  And to conclude “no action needed” from “no statistically significant difference in mortality between us and the global average” is invalid and it leads to a complacent delusion that what we have is good enough.  When it comes to harm it is never “good enough”.

<Leslie> But we do not have the resources to investigate the thousands of cases of minor harm – we have to concentrate on the biggies.

<Bob> And do the near misses keep happening?

<Leslie> Yes – that is why they are top rank  on the Niggle-o-Gram®.

<Bob> So the Burn-and-Scrape design is not fit-for-purpose.

<Leslie> So it seems. But what is the alternative? If there was one we would be using it – surely?

<Bob> Look back Leslie. How many of the Improvement Science methods that you have already learned are business-as-usual?

<Leslie> Good point. Almost none.

<Bob> And do they work?

<Leslie> You betcha!

<Bob> This is another example.  It is possible to design systems to be safe – so the frequent near misses become rare events.

<Leslie> Is it?  Wow! That know-how would be really useful to have. Can you teach me?

<Bob> Yes. First we need to explore what the benefits would be.

<Leslie> OK – well first there would be no avoidable serious harm and we could trust in the safety of our system – which is the purpose.

<Bob> Yes …. and?

<Leslie> And … all the effort, time and cost spent “scraping the burnt toast” would be released.

<Bob> Yes …. and?

<Leslie> The safer-by-design processes would be quicker and smoother, a more enjoyable experience for both customers and suppliers, and probably less expensive as well!

<Bob> Yes. So what does that all add up to?

<Leslie> A win-win-win-win outcome!

<Bob> Indeed. So a one-off investment of effort, time and money in learning Safety-by-Design methods would appear to be a wise business decision.

<Leslie> Yes indeed!  When do we start?

<Bob> We have already started.


For a real-world example of this approach delivering a significant and sustained improvement in safety click here.

Invisible Design

Improvement Science is all about making some-thing better in some-way by some-means.

There are lots of things that might be improved – almost everything in fact.

There are lots of ways that those things might be improved. If it was a process we might improve safety, quality, delivery, and productivity. If it was a product we might improve reliability, usability, durability and affordability.

There are lots of means by which those desirable improvements might be achieved – lots of different designs.

Multiply that lot together and you get a very big number of options – so it is no wonder we get stuck in the “what to do first?” decision process.

So how do we approach this problem currently?

We use our intuition.

Intuition steers us to the obvious – hence the phrase “intuitively obvious” – which means what looks to our mind’s eye to be a good option. And that is OK. It is usually a lot better than guessing (but not always).

However, the problem with using “intuitively obvious” is that we end up with mediocrity. We get “about average”. We get “OKish”. We get “satisfactory”. We get “what we expected”. We get “same as always”. We do not get “significantly better-than-average”. We do not get “reliably good”. We do not get improvement. And we do not, because anyone and everyone can do the “intuitively obvious” stuff.

To improve we need a better-than-average functional design. We need a Reliably Good Design. And that is invisible.

By “invisible” I mean not immediately obvious to our conscious awareness.  We do not notice good functional design because it does not get in the way of achieving our intention.  It does not trip us up.

We notice poor functional design because it trips us up. It traps us into making mistakes. It wastes our time. It fails to meet our expectation. And we are left feeling disappointed, irritated, and anxious. We feel Niggled.

We also notice exceptional design – because it works far better than we expected. We are surprised and we are delighted.

We do not notice Good Design because it just works. But there is a trap here. And that is we habitually link expectation to price.  We get what we paid for.  Higher cost => Better design => Higher expectation.

So we take good enough design for granted. And when we take stuff for granted we are on the slippery slope to losing it. As soon as something becomes invisible it is at risk of being discounted and deleted.

If we combine these two aspects of “invisible design” we arrive at an interesting conclusion.

To get from Poor Design to OK Design and then Good Design we have to think “counter-intuitively”.  We have to think “outside the box”. We have to “think laterally”.

And that is not a natural way for us to think. Not for individuals and not for teams. To get improvement we need to learn a method of how to counter our habit of thinking intuitively, and we need to practice that method so that we can do it when we need to. When we need to improve.

To illustrate what I mean let us consider a real example.

Suppose we have 26 cards laid out in a row on a table; each card has a number on it; and our task is to sort the cards into ascending order. The constraint is that we can only move cards by swapping them.  How do we go about doing it?

There are many sorting designs that could achieve the intended purpose – so how do we choose one?

One criteria might be the time it takes to achieve the result. The quicker the better.

One criteria might be the difficulty of the method we use to achieve the result. The easier the better.

When individuals are given this task they usually do something like “scan the cards for the smallest and swap it with the first from the left, then repeat for the second from the left, and so on until we have sorted all the cards”.

This card-sorting-design is fit for purpose.  It is intuitively obvious, it is easy to explain, it is easy to teach and it is easy to do. But is it the quickest?

The answer is NO. Not by a long chalk.  For 26 randomly mixed up cards it will take about 3 minutes if we scan at a rate of 2 per second. If we have 52 cards it will take us about 12 minutes. Four times as long. Using this intuitively obvious design the time taken grows with the square of the number of cards that need sorting.

In reality there are much quicker designs and for this type of task one of the quickest is called Quicksort. It is not intuitively obvious though, it is not easy to describe, but it is easy to do – we just follow the Quicksort Policy.  (For those who are curious you can read about the method here and make up your own mind about how “intuitively obvious” it is.  Quicksort was not invented until 1960 so given that sorting stuff is not a new requirement, it clearly was not obvious for a few thousand years).

Using Quicksort to sort our 52 cards would take less than 3 minutes! That is a 400% improvement in productivity when we flip from an intuitive to a counter-intuitive design. And Quicksort was not a chance discovery – it was deliberately designed to address a specific sorting problem – and it was designed using robust design principles.
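
To make the comparison concrete, here is a minimal sketch (Python) that counts the comparisons made by the intuitive “scan for the smallest and swap” design and by a simple Quicksort on the same shuffled 52-card deck, then converts them to minutes at the 2-comparisons-per-second scan rate used above. The Quicksort count is an approximation (a functional variant, not Hoare’s in-place partition), so treat the output as indicative rather than exact.

```python
import random

def selection_sort_comparisons(cards):
    """The intuitive 'scan for the smallest and swap it into place' design."""
    cards, comparisons = list(cards), 0
    for i in range(len(cards)):
        smallest = i
        for j in range(i + 1, len(cards)):
            comparisons += 1
            if cards[j] < cards[smallest]:
                smallest = j
        cards[i], cards[smallest] = cards[smallest], cards[i]
    return comparisons

def quicksort_comparisons(cards):
    """Approximate comparison count for a simple recursive Quicksort."""
    if len(cards) <= 1:
        return 0
    pivot = cards[len(cards) // 2]
    left = [c for c in cards if c < pivot]
    right = [c for c in cards if c > pivot]
    # roughly one comparison per card against the pivot at this level
    return len(cards) + quicksort_comparisons(left) + quicksort_comparisons(right)

deck = random.sample(range(1, 53), 52)   # a shuffled deck of 52 numbered cards
for name, counter in [("intuitive design", selection_sort_comparisons),
                      ("Quicksort", quicksort_comparisons)]:
    n = counter(deck)
    print(f"{name}: {n} comparisons ~ {n / 2 / 60:.1f} minutes at 2 per second")
```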

So our natural intuition tends to lead us to solutions that are “effective, easy and inefficient” – and that means expensive in terms of use of resources.

This has an important conclusion – if we are all given the same improvement assignment and we all use our intuition to solve it then we will get similar and mediocre results. It will feel OK and it will appear obvious, but there will be no improvement.

We then conclude “OK, this is the best we can expect”, which is intuitively obvious, logically invalid, and wrong. It is that sort of intuitive thinking trap that blocked us from inventing Quicksort for thousands of years.

And remember, to decide what is “best” we have to explore all options exhaustively – both the intuitively obvious and the counter-intuitively obscure. That is impossible in practice. This is why “best” and “optimum” are generally unhelpful concepts in the context of improvement science.

So how do we improve when good design is so counter-intuitive?

The answer is that we learn a set of “good designs” from a teacher who knows and understands them, and then we prove them to ourselves in practice. We leverage the “obvious in retrospect” effect. And we practice until we understand. And then we teach others.

So if we wanted to improve the productivity of our designed-by-intuition card sorting process we could:
(a) consult a known list of proven sorting algorithms,
(b) choose one that meets our purpose (our design specification),
(c) compare the measured performance of our current “intuitively obvious” design with the predicted performance of that “counter-intuitively obscure” design,
(d) set about planning how to implement the higher performance design – possibly as a pilot first to confirm the prediction, reassure the fence-sitters, satisfy the skeptics, and silence the cynics.

So if these proven good designs are counter-intuitive then how do we get them?

The simplest and quickest way is to learn from people who already know and understand them. If we adopt the “not invented by us” attitude and attempt to re-invent the wheel then we may get lucky and re-discover a well-known design, we might even discover a novel design; but we are much more likely to waste a lot of time and end up no better off, or worse. This is called “meddling” and is driven by a combination of ignorance and arrogance.

So who are these people who know and understand good design?

They are called Improvement Scientists – and they have learned, one way or another, what a good design looks like. That also means they can see poor design where others see the only possible design.

That difference of perception creates a lot of tension.

The challenge that Improvement Scientists face is explaining how counter-intuitive good design works: especially to highly intelligent, skeptical people who habitually think intuitively. They are called Academics.  And it is a pointless exercise trying to convince them using rhetoric.

Instead our Improvement Scientists side-step the “theoretical discussion” and the “cynical discounting” by pragmatically demonstrating the measured effect of good design in practice. They use reality to make the case for good design – not rhetoric.

Improvement Scientists are Pragmatists.

And because they have learned how counter-intuitive good design is to the novice – how invisible it is to their intuition – then they are also Voracious Learners. They have enough humility to see themselves as Eternal Novices and enough confidence to be selective students.  They will actively seek learning from those who can demonstrate the “what” and explain the “how”.  They know and understand it is a much quicker and easier way to improve their knowledge and understanding.  It is Good Design.

 

The Tyranny of Choice

[Ding-a-Ling]
Bob’s new all-singing-and-dancing touchscreen phone announced the arrival of an email from an Improvement Science apprentice. This was always an opportunity for learning so he swiped the flashing icon and read the email. It was from Leslie.

<Leslie>Hi Bob, I have come across a new challenge that I never thought I would see – the team that I am working with are generating so many improvement-by-design ideas that we cannot decide what to try. Can you help?

Bob thumbed a reply immediately:
<Bob>Ah ha! The Tyranny of Choice challenge. Yes, I believe I can help. I am free to talk now if you are.

[“You have a call from Leslie”]
Bob’s new all-singing-and-dancing touchscreen phone said that it was Leslie on the line – (it actually said it in the synthetic robot voice that Bob had set as the default).

<Bob>Hello Leslie.

<Leslie>Hi Bob, thank you for replying so quickly. I gather that you have encountered this challenge before?

<Bob>Yes. It usually appears when a team are nearing the end of a bumpy ride on the Nerve Curve and are starting to see new possibilities that previously were there but hidden.

<Leslie>That is just where we are. The problem is we have flipped from no options to so many we cannot decide what to do.

<Bob>It is often assumed that choice is a good thing, but you can have too much of a good thing. Many studies have shown that when the number of innovative choices is limited then people are more likely to make a decision and actually do something. As the number of choices increases it gets much harder to choose, so we default to the more comfortable and familiar status quo. We avoid making a decision and we do nothing. That is the Tyranny of Choice.

<Leslie>Yes, that is just how it feels. Paralyzed by indecision. So how do we get past this barrier?

<Bob>The same way we get past all barriers. We step back, broaden our situational awareness, list all the obvious things and then consider doing exactly the opposite of what our intuition tells us. We just follow the tried-and-tested 6M Design script.

<Leslie>Arrgh! Yes, of course. We start with a 4N Chart.

<Bob>Yes, and specifically we start with the Nuggets.  We look for what is working despite the odds. The positive deviants. Who do you know is decisive when faced with a host of confusing and conflicting options? Not tyrannized by choice.

<Leslie>Other than you?

<Bob>It does not matter who. How do they do it?

<Leslie>Well – “they” use a special sort of map that I confess I have not mastered yet – the Right-2-Left Map.

<Bob>Yes, an effective way to avoid getting lost in the Labyrinth of Options. What else?

<Leslie>“They” know what the critical steps are and “they” give clear step-by-step guidance of what to do to complete them.

<Bob>This is called “story-boarding”.  It is rather like sketching each scene of a play – then practicing each scene script individually until they are second nature and ready when needed.

<Leslie>That is just like what the emergency medical teams do. They have scripts that they use for emergent situations where it is dangerous to try to plan what to do in the moment. They call them “care bundles”. It avoids a lot of time-wasting, debate and prevarication, and the evidence shows that it delivers better outcomes and saves lives.

<Bob>In an emergency situation the natural feeling of fear creates the emotional drive to act; but without a well-designed and fully-tested script the same fear can paralyze the decision process. It is the rabbit-in-the-headlights effect.  When the feeling of urgency is less a different approach is needed to engage the emotional-power-train.

<Leslie>Do you mean build engagement?

<Bob>Yes, and how do we do that?

<Leslie>We use a combination of subjective stories and objective evidence – heart stuff and head stuff. It is a very effective combination to break through the Carapace of Complacency as you call it. I have seen that work really well in practice.

<Bob>And the 4N Chart comes in handy here again because it helps us see the emotional-terrain in perspective and to align us in moving away from the Niggles towards the NiceIfs while avoiding the NoNos and leveraging the Nuggets.

<Leslie>Yes! I have seen that too. But what do we do when we are in new territory; when we are faced with a swarm of novel options; when we have no pre-designed scripts to help us?

<Bob>We use a meta-script?

<Leslie>A what?

<Bob>A meta-script is one that we use to design a novel action script when we need it.

<Leslie>You mean a single method for creating a plan that we are confident will work?

<Bob>Yes.

<Leslie>That is what the Right-2-Left Map is!

<Bob>Yes.

<Leslie>So the Tyranny of Choice is the result of our habitual Left-2-Right thinking.

<Bob>Yes.

<Leslie>And when the future choices we see are also shrouded in ambiguity it is even harder to make a decision!

<Bob>Yes. We cannot see past the barrier of uncertainty – so we stop and debate because it feels safer.

<Leslie>Which is why so many really clever people seem to get stuck in the paralysis of analysis and valueless discussion.

<Bob>Yes.

<Leslie>So all we need to do is switch to the counter-intuitive Right-2-Left thinking and the path becomes clear?

<Bob>Not quite.  The choices become a lot easier so the Tyranny of Choice disappears. We still have choices. There are still many possible paths. But it does not matter which we choose because they all lead to the common goal.

<Leslie>Thank you Bob. I am going to have to mull this one over for a while – red wine may help.

<Bob>Yes – mulled wine is a favorite of mine too. Ching-ching!

Time-Reversed Insight

Thinking-in-reverse sounds like an odd thing to do but it delivers more insight and solves tougher problems than thinking forwards. That is the reason it is called Time-Reversed Insight. And once we have mastered how to do it, we discover that it comes in handy in all sorts of problematic situations where thinking forwards only hits a barrier or even makes things worse.

Time-reversed thinking is not the same thing as undoing what you just did. It is reverse thinking – not reverse acting.

We often hear the advice “Start with the end in mind …” and that certainly sounds like it might be time-reversed thinking, but it is often followed by “… to help guide your first step.” The second part tells us it is not. Jumping from outcome to choosing the first step is actually time-forward thinking.

Time-forward thinking comes in many other disguises: “Seeking your True North” is one and “Blue Sky Thinking” is another. They are certainly better than discounting the future and they certainly do help us to focus and to align our efforts – but they are still time-forward thinking. We know that because the next question is always “What do we do first? And then? And then?” in other words “What is our Plan?”.

This is not time-reversed insightful thinking: it is good old, tried-and-tested, cause-and-effect thinking. Great for implementation, but a largely ineffective and hugely inefficient way to dissolve “difficult” problems. In those situations it becomes keep-busy behaviour. Plan-Do-Plan-Do-Plan-Do ……..


In time-reversed thinking the first question looks similar. It is a question about outcome but it is very specific. It is “What outcome do we want? When do we want it? And how would we know we have got it?” It is not a direction. It is a destination. The second question in time-reversed thinking is the clincher. It is “What happened just before?” followed by “And before that? And before that?”

We actually do this all the time but we do it unconsciously and we do it very fast.  It is called the “blindingly obvious in hindsight” phenomenon.  What happens is we feel the good or bad outcome and then we flip to the cause in one unconscious mental leap. Ah ha!

And we do this because thinking backwards in a deliberate, conscious, sequential way is counter-intuitive.

Our unconscious mind seems to have no problem doing it though. And that is because it is wired differently. Some psychologists believe that we literally have “two brains”: one that works sequentially in the direction of forward time – and one that works in parallel, both forwards and backwards in time. It is the sequential one that we associate with conscious thinking; it is the parallel one that we associate with unconscious feeling. We do both and usually they work in synergy – but not always. Sometimes they antagonise each other.

The problem is that our sequential, conscious brain does not  like working backwards. Just like we do not like walking backwards, or driving backwards.  We have evolved to look, think, and move forwards. In time.

So what is so useful about deliberate, conscious, time-reversed thinking?

It can give us a uniquely different perspective – one that generates fresh insight – and that new view enables us to solve problems that we believed were impossible when looked at in a time-forward way.


An example of time-reverse thinking:

The 4N Chart is an emotional mapping tool. More specifically it is an emotion-over-time mapping technique. The way it is used is quite specific and quite counter-intuitive. If we ask ourselves the question “What is my top Niggle?” our reply is usually something like “Not enough time!” or “Person x!” or “Too much work!”. This is not how The 4N Chart is designed to be used. The question is “What is my commonest negative feeling?” and then the question “What happened just before I felt it?”. What was the immediately preceding cause of the Niggle? And then the questions continue deliberately and consciously to think backwards: “And before that?”, “And before that?” until the root causes are laid bare.

A typical Niggle-cause exposing dialog might be:

Q: What is my commonest negative feeling?
A: I feel angry!
Q: What happened just before?
A: My boss gives me urgent jobs to do at half past 4 on Friday afternoon!
Q: And before that?
A: Reactive crisis management meetings are arranged at very short notice!
Q: And before that?
A: We have regular avoidable crises!
Q: And before that?
A: We are too distracted with other important work to spot each crisis developing!
Q: And before that?
A: We were not able to recruit when a valuable member of staff left.
Q: And before that?
A: Our budget was cut!

This is time-reversed  thinking and we can do this reasonably easily because we are working backwards from the present – so we can use our memory to help us. And we can do this individually and collectively. Working backwards from the actual outcome is safer because we cannot change the past.

It is surprisingly effective though, because by doing this time-reversed thinking consciously we uncover where best to intervene in the cause-and-effect pathway that generates our negative emotions. Where it crosses the boundary of our Circle of Control. And all of us have the choice to step in just before the feeling is triggered. We can all choose whether we are going to allow the last cause to trigger a negative feeling in us. We can all learn to dodge the emotional hooks. It takes practice but it is possible. And having deflected the stimulus and avoided being hijacked by our negative emotional response we are then able to focus our emotional effort into designing a way to break the cause-effect sequence further upstream.

We might leave ourselves a reminder to check on something that could develop into a crisis without us noticing. Averting just one crisis would justify all the checking!

This is what calm-in-a-crisis people do. They disconnect their feelings. It is very helpful but it has a risk.

The downside is that they can disconnect all their feelings – including the positive ones. They can become emotionless, rational, logical, tough-minded robots. And that can be destructive to individual and team morale. It is the antithesis of improvement.

So be careful when disconnecting emotional responses – do it only for defense – never for attack.


A more difficult form of time-reversed thinking is thinking backwards from future-to-present.  It is more difficult for many reasons, one of which is because we do not have a record of what actually happened to help us.  We do however have experience of  similar things from the past so we can make a good guess at the sort of things that could cause a future outcome.

Many people do this sort of thinking in a risk-avoidance way with the objective of blocking all potential threats to safety at an early stage. When taken to extreme it can manifest as turgid, red-taped, blind bureaucracy that impedes all change. For better or worse.

Future-to-present thinking can be used as an improvement engine – by unlocking potential opportunity at an early stage. Innovation is a fragile flower and can easily be crushed. Creative thinking needs to be nurtured long enough to be tested.

Change is deliberately destabilising, so this positive form of future-to-present thinking can also be counter-productive if taken to extreme, when it becomes incessant meddling. Change for change's sake is also damaging to morale.

So, either form of future-to-present thinking is OK in moderation and when used in synergy the effect is like magic!

Synergistic future-to-present time-reversed thinking is called Design Thinking and one formulation is called 6M Design.

The Seventh Flow

Bing Bong

Bob looked up from the report he was reading and saw the SMS was from Leslie, one of his Improvement Science Practitioners.

It said “Hi Bob, would you be able to offer me your perspective on another barrier to improvement that I have come up against.”

Bob thumbed a reply immediately “Hi Leslie. Happy to help. Free now if you would like to call. Bob”

Ring Ring

<Bob> Hello, Bob here.

<Leslie> Hi Bob. Thank you for responding so quickly. Can I describe the problem?

<Bob> Hi Leslie – Yes, please do.

<Leslie> OK. The essence of it is that I have discovered that our current method of cash-flow control is preventing improvements in safety, quality, delivery and paradoxically in productivity too. I have tried to talk to the Finance department and all I get back is “We have always done it this way. That is what we are taught. It works. The rules are not negotiable and the problem is not Finance“. I am at a loss what to do.

<Bob> OK. Do not worry. This is a common issue that every ISP discovers at some point. What led you to your conclusion that the current methods are creating a barrier to change?

<Leslie> Well, the penny dropped when I started using the modelling tools you have shown me.  In particular when predicting the impact of process improvement-by-design changes on the financial performance of the system.

<Bob> OK. Can you be more specific?

<Leslie> Yes. The project was to design a new ambulatory diagnostic facility that will allow much more of the complex diagnostic work to be done on an outpatient basis.  I followed the 6M Design approach and looked first at the physical space design. We needed that to brief the architect.

<Bob> OK. What did that show?

<Leslie> It showed that the physical layout had a very significant impact on the flow in the process and that by getting all the pieces arranged in the right order we could create a physical design that felt spacious without actually requiring a lot of space. We called it the “Tardis Effect“. The most marked impact was on the size of the waiting areas – they were really small compared with what we have now which are much bigger and yet still feel cramped and chaotic.

<Bob> OK. So how does that physical space design link to the finance question?

<Leslie> Well, the obvious links were that the new design would have a smaller physical foot-print and at the same time give a higher throughput. It will cost less to build and will generate more activity than if we just copied the old design into a shiny new building.

<Bob> OK. I am sure that the Capital Allocation Committee and the Revenue Generation Committee will have been pleased with that outcome. What was the barrier?

<Leslie> Yes, you are correct. They were delighted because it left more in the Capital Pot for other equally worthy projects. The problem was not capital it was revenue.

<Bob> You said that activity was predicted to increase. What was the problem?

<Leslie> Yes – sorry, I was not clear – it was not the increased activity that was the problem – it was how to price the activity and how to distribute the revenue generated. The Reference Cost Committee and Budget Allocation Committee were the problem.

<Bob> OK. What was the problem?

<Leslie> Well the estimates for the new operational budgets were basically the current budgets multiplied by the ratio of the future planned and historical actual activity. The rationale was that the major costs are people and consumables so the running costs should scale linearly with activity. They said the price should stay as it is now because the quality of the output is the same.

<Bob> OK. That does sound like a reasonable perspective. The variable costs will track with the activity if nothing else changes. Was it apportioning the overhead costs as part of the Reference Costing that was the problem?

<Leslie> No actually. We have not had that conversation yet. The problem was more fundamental. The problem is that the current budgets are wrong.

<Bob> Ah! That statement might come across as a bit of a challenge to the Finance Department. What was their reaction?

<Leslie> To paraphrase, it was “We are just breaking even in the current financial year so the current budget must be correct. Please do not dabble in things that you clearly do not understand.”

<Bob> OK. You can see their point. How did you reply?

<Leslie> I tried to explain the concepts of the Cost-Of-The-Queue and how that cost was incurred by one part of the system with one budget but that the queue was created by a different part of the system with a different budget. I tried to explain that just because the budgets were 100% utilised does not mean that the budgets were optimal.

<Bob> How was that explanation received?

<Leslie> They did not seem to understand what I was getting at and kept saying “Inventory is an asset on the balance sheet. If profit is zero we must have planned our budgets perfectly. We cannot shift money between budgets within year if the budgets are already perfect. Any variation will average out. We have to stick to the financial plan and projections for the year. It works. The problem is not Finance – the problem is you.”

<Bob> OK. Have you described the Seventh Flow and put it in context?

<Leslie> Arrrgh! No! Of course! That is how I should have approached it. Budgets are Cash-Inventories and what we need is Cash-Flow to where and when it is needed and in just the right amount according to the Principle of Parsimonious Pull. Thank you. I knew you would ask the crunch question. That has given me a fresh perspective on it. I will have another go.

<Bob> Let me know how you get on. I am curious to hear the next instalment of the story.

<Leslie> Will do. Bye for now.

Drrrrrrrr

Creating a productive and stable system design requires considering Seven Flows at the same time. The Seventh Flow is cash flow.

Cash is like energy – it is only doing useful work when it is flowing.

Energy is often described as having two forms – potential energy and kinetic energy. The ‘doing’ happens when potential energy is converted into kinetic energy. Cash in the budget is like potential energy – sitting there ready to do some business. Cash flow is like kinetic energy – it is the business.

The most versatile form of energy that we use is electrical energy. It is versatile because it can easily be converted into other forms – e.g. heat, light and movement. Since the late 1800’s our whole society has become highly dependent on electrical energy.  But electrical energy is tricky to store and even now our battery technology is pretty feeble. So, if we want to store energy we use a different form – chemical energy.  Gas, oil and coal – the fossil fuels – are all ancient stores of chemical energy that were originally derived from sunlight captured by vast carboniferous forests over millions of years. These carbon-rich fossil fuels are convenient to store near where they are needed, and when they are needed. But fossil fuels have a number of drawbacks: One is that they release their stored carbon when they are “burned”.  Another is that they are not renewable.  So, in the future we will need to develop better ways to capture, transport, use and store the energy from the Sun that will flow in glorious abundance for millions of years to come.

Plants discovered millions of years ago how to do this sunlight-to-chemical energy conversion and that biological legacy is built into every cell in every plant on the planet. Animals just do the reverse trick – they convert chemical-to-electrical. Every cell in every animal on the planet is a microscopic electrical generator that “burns” chemical fuel – carbohydrate. The other products are carbon dioxide and water. Plants use sunlight to recycle and store the carbon dioxide. It is a resilient and sustainable design.

Plants seemingly have it easy – the sunlight comes to them – they just sunbathe all day!  The animals have to work a bit harder – they have to move about gathering their chemical fuel. Some animals just feed on plants, others feed on other animals, and we do a bit of both. This food-gathering is a more complicated affair – and it creates a problem. Animals need a constant supply of energy – so they have to carry a store of chemical fuel around with them. That store is heavy so it needs energy to move it about. Herbivores can be bigger and less intelligent because their food does not run away. Carnivores need to be more agile, both physically and mentally. A balance is required. A big enough fuel store but not too big. So, some animals have evolved additional strategies. Animals have become very good at not wasting energy – because the more that is wasted the more food that is needed and the greater the risk of getting eaten or getting too weak to catch the next meal.

To illustrate how amazing animals are at energy conservation we just need to look at an animal structure like the heart. The heart is there to pump blood around. Blood carries chemical nutrients and waste from one “department” of the body to another – just like ships, rail, roads and planes carry stuff around the world.

Blood is a sticky, viscous fluid that requires considerable energy to pump around the body and, because it is pumped continuously by the heart, even a small improvement in the energy efficiency of the circulation design has a big long-term cumulative effect. The flow of blood to any part of the body must match the requirements of that part. If the blood flow to your brain slows down for even a few seconds the brain cannot work properly and you lose consciousness – it is called “fainting”.

If the flow of blood to the brain is stopped for just a few minutes then the brain cells actually die. That is called a “stroke”. Our brains use a lot of electrical energy to do their job and our brain cells do not have big stores of fuel – so they need constant re-supply. And our brains are electrically active all the time – even when we are sleeping.

Other parts of the body are similar. Muscles for instance. The difference is that the supply of blood that muscles need is very variable – it is low when resting and goes up with exercise. It has been estimated that the change in blood flow for a muscle can be 30-fold!  That variation creates a design problem for the body because we need to maintain the blood flow to the brain at all times but we only want blood to be flowing to the muscles in just the amount that they need, where they need it and when they need it. And we want to minimise the energy required to pump the blood at all times. How then is the total and differential allocation of blood flow decided and controlled?  It is certainly not a conscious process.

The answer is that the brain and the muscles control their own flow. It is called autoregulation. They open the tap when needed and just as importantly they close the tap when not needed. It is called the Principle of Parsimonious Pull. The brain directs which muscles are active but it does not direct the blood supply that they need. They are left to do that themselves.

So, if we equate blood-flow and energy-flow to cash-flow then we arrive at a surprising conclusion. The optimal design, the most energy and cash efficient, is where the separate parts of the system continuously determine the energy/cash flow required for them to operate effectively. They control the supply. They autoregulate their cash-flow. They pull only what they need when they need it.

BUT

For this to work, every part of the system needs to have a collaborative and parsimonious pull-design philosophy – one that wastes as little energy and cash as possible. Minimum waste of energy requires careful design – it is called ergonomic design. Minimum waste of cash requires careful design – it is called economic design.

Many socioeconomic systems are fragmented and have parts that behave in a “greedy” manner and that compete with each other for resources. It is a dog-eat-dog design. They use whatever resources they can get for fear of being starved. Greed is Good. Collaboration is Weak. In such a competitive situation a rigid-budget design is a requirement because it helps prevent one part selfishly and blindly destabilising the whole system for all. The problem is that this rigid financial design blocks change so it blocks improvement.

This means that greedy, competitive, selfish systems are unable to self-improve.

So, when the world changes too much and their survival depends on change then they risk becoming extinct just as the dinosaurs did.

Many will challenge this assertion by saying “But competition drives up performance”.  Actually, it is not as simple as that. Competition will weed out the weakest who “die” and remove themselves from the equation – apparently increasing the average. What actually drives improvement is customer choice. Organisations that are able to self-improve will create higher-quality and lower-cost products and in a globally-connected economy the customers will vote with their wallets. The greedy and selfish competition lags behind.

So, to ensure survival in a global economy the Seventh Flow cannot be rigidly restricted by annually allocated departmental budgets. It is a dinosaur design.

And there is no difference between public and private organisations. The laws of cash-flow physics are universal.

How then is the cash flow controlled?

The “trick” is to design a monitoring and feedback component into the system design. This is called the Sixth Flow – and it must be designed so that just the right amount of cash is pulled to just the right places, at just the right time, and for just as long as needed to maximise the revenue. The rest of the design – First Flow to Fifth Flow – ensures that the total amount of cash needed is a minimum. All Seven Flows are needed.

So the essential ingredient for financial stability and survival is Sixth and Seventh Flow Design capability. That skill has another name – it is called Value Stream Accounting which is a component of complex adaptive systems engineering (CASE).

What? Never heard of Value Stream Accounting?

Maybe that is just another Error of Omission?

The Writing on the Wall – Part II

The retrospectoscope is the favourite instrument of the forensic cynic – the expert in the after-the-event-and-I-told-you-so rhetoric. The rabble-rouser for the lynch-mob.

It feels better to retrospectively nail-to-a-cross the person who committed the Cardinal Error of Omission, and leave them there in emotional and financial pain as a visible lesson to everyone else.

This form of public feedback has been used for centuries.

It is called barbarism, and it has no place in a modern civilised society.


A more constructive question to ask is:

“Could the evolving Mid-Staffordshire crisis have been detected earlier … and avoided?”

And this question exposes a tricky problem: it is much more difficult to predict the future than to explain the past.  And if it could have been detected and avoided earlier, then how is that done?  And if the how-is-known then is everyone else in the NHS using this know-how to detect and avoid their own evolving Mid-Staffs crisis?

To illustrate how it is currently done let us use the actual Mid-Staffs data. It is conveniently available in Figure 1 embedded in Figure 5 on Page 360 in Appendix G of Volume 1 of the first Francis Report.  If you do not have it at your fingertips I have put a copy of it below.

MS_RawData

The message does not exactly leap off the page and smack us between the eyes does it? Even with the benefit of hindsight.  So what is the problem here?

The problem is one of ergonomics. Tables of numbers like this are very difficult for most people to interpret, so they create a risk that we ignore the data or that we just jump to the bottom line and miss the real message. And it is very easy to miss the message when we compare the results for the current period with the previous one – a very bad habit that is spread by accountants.

This was a slowly emerging crisis so we need a way of seeing it evolving and the better way to present this data is as a time-series chart.

As we are most interested in safety and outcomes, then we would reasonably look at the outcome we do not want – i.e. mortality.  I think we will all agree that it is an easy enough one to measure.

MS_RawDeaths
This is the raw mortality data from the table above, plotted as a time-series chart. The green line is the average and the red lines are a measure of variation-over-time. We can all see that the raw mortality is increasing and the red flags say that this is a statistically significant increase. Oh dear!
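
For the technically curious, here is a rough sketch of how the average and the variation limits on such a chart can be calculated. The counts are invented for illustration (they are not the Mid Staffs figures) and I am assuming the common XmR individuals-chart formula – the point is simply that the arithmetic needs nothing more than a calculator.

```python
# Sketch: XmR (individuals) time-series chart limits for annual death counts.
# The counts below are invented for illustration - they are not the Mid Staffs data.

deaths_per_year = [380, 395, 410, 402, 430, 455, 470, 490, 510, 520]

average = sum(deaths_per_year) / len(deaths_per_year)

# Moving ranges between consecutive years, then the usual XmR limits:
# average +/- 2.66 * average moving range
moving_ranges = [abs(b - a) for a, b in zip(deaths_per_year, deaths_per_year[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)
upper, lower = average + 2.66 * avg_mr, average - 2.66 * avg_mr

for year, deaths in zip(range(1997, 2007), deaths_per_year):
    flag = "  <-- outside expected variation" if not (lower <= deaths <= upper) else ""
    print(f"{year}: {deaths}{flag}")
print(f"average = {average:.0f}, limits = ({lower:.0f}, {upper:.0f})")
```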

But hang on just a minute – using raw mortality data like this is invalid because we all know that the people are getting older, demand on our hospitals is rising, A&Es are busier, older people have more illnesses, and more of them will not survive their visit to our hospital. This rise in mortality may actually just be because we are doing more work.

Good point! Let us plot the activity data and see if there has been an increase.

MS_Activity

Yes – indeed the activity has increased significantly too.

Told you so! And it looks like the activity has gone up more than the mortality. Does that mean we are actually doing a better job at keeping people alive? That sounds like a more positive message for the Board and the Annual Report. But how do we present that message? What about as a ratio of mortality to activity? That will make it easier to compare ourselves with other hospitals.

Good idea! Here is the Raw Mortality Ratio chart.

MS_RawMortality_Ratio
Ah ha. See! The % mortality is falling significantly over time. Told you so.

Careful. There is an unstated assumption here. The assumption that the case mix is staying the same over time. This pattern could also be the impact of us doing a greater proportion of lower complexity and lower risk work. So we need to correct this raw mortality data for case mix complexity – and we can do that by using data from all NHS hospitals to give us a frame of reference. Dr Foster can help us with that because it is quite a complicated statistical modelling process. What comes out of Dr Foster’s black magic box is the Global Hospital Raw Mortality (GHRM) which is the expected number of deaths for our case mix if we were an ‘average’ NHS hospital.

MS_ExpectedMortality_Ratio

What this says is that the NHS-wide raw mortality risk appears to be falling over time (which may be for a wide variety of reasons but that is outside the scope of this conversation). So what we now need to do is compare this global raw mortality risk with our local raw mortality risk  … to give the Hospital Standardised Mortality Ratio.

MS_HSMR
This gives us the Mid Staffordshire Hospital HSMR chart. The blue line at 100 is the reference average – and what this chart says is that Mid Staffordshire hospital had a consistently higher risk than the average case-mix adjusted mortality risk for the whole NHS. And it says that it got even worse after 2001 and that it stayed consistently 20% higher after 2003.
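
The ratio itself is simple arithmetic once the expected deaths have been estimated – the hard work is hidden inside the casemix model that produces the expected figure. A sketch with invented numbers:

```python
# Sketch: the HSMR arithmetic. The expected figure comes from a casemix-adjustment
# model (the complicated part, not reproduced here); the numbers are invented.

def hsmr(observed_deaths: int, expected_deaths: float) -> float:
    """HSMR = 100 * observed / expected; 100 means 'about average for this casemix'."""
    return 100.0 * observed_deaths / expected_deaths

print(hsmr(480, 400.0))  # 120.0 -> 20% more deaths than expected for this casemix
print(hsmr(380, 400.0))  # 95.0  -> slightly fewer than expected
```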

Ah! Oh dear! That is not such a positive message for the Board and the Annual Report. But how did we miss this evolving safety catastrophe?  We had the Dr Foster data from 2001

This is not a new problem – a similar thing happened in Vienna between 1820 and 1850 with maternal deaths caused by Childbed Fever. The problem was detected by Dr Ignaz Semmelweis who also discovered a simple, pragmatic solution to the problem: hand washing.  He blew the whistle but unfortunately those in power did not like the implication that they had been the cause of thousands of avoidable mother and baby deaths.  Semmelweis was vilified and ignored, and he did not publish his data until 1861. And even then the story was buried in tables of numbers.  Semmelweis went mad trying to convince the World that there was a problem.  Here is the full story.

Also, time-series charts were not invented until 1924 – and it was not in healthcare – it was in manufacturing. These tried-and-tested safety and quality improvement tools are only slowly diffusing into healthcare because the barriers to innovation appear somewhat impervious.

And the pores have been clogged even more by the social poison called “cynicide” – the emotional and political toxin exuded by cynics.

So how could we detect a developing crisis earlier – in time to avoid a catastrophe?

The first step is to estimate the excess-death-equivalent. Dr Foster does this for you.
MS_ExcessDeaths
Here is the data from the table plotted as a time-series chart showing the estimated excess-death-equivalent per year. It has an average of 100 (that is two per week) when the average should be close to zero. More worryingly, the number was increasing steadily over time, up to 200 per year in 2006 – that is about four excess deaths per week, on average. It is important to remember that HSMR is a risk ratio and mortality is a multi-factorial outcome. So the excess-death-equivalent estimate does not imply that a clear causal chain will be evident in specific deaths. That is a complete misunderstanding of the method.

I am sorry – you are losing me with the statistical jargon here. Can you explain in plain English what you mean?

OK. Let us use an example.

Suppose we set up a tombola at the village fete and we sell 50 tickets with the expectation that the winner bags all the money. Each ticket holder has the same 1 in 50 risk of winning the wad-of-wonga and a 49 in 50 risk of losing their small stake. At the appointed time we spin the barrel to mix up the ticket stubs then we blindly draw one ticket out. At that instant the 50 people with an equal risk changes to one winner and 49 losers. It is as if the grey fog of risk instantly condenses into a precise, black-and-white, yes-or-no, winner-or-loser, reality.

Translating this concept back into HSMR and Mid Staffs – the estimated 1200 deaths are just the “condensed risk-of-harm equivalent”. So, to then conduct a retrospective case note analysis of specific deaths looking for the specific cause would be equivalent to trying to retrospectively work out the reason the particular winning ticket in the tombola was picked out. It is a search that is doomed to fail. To then conclude from this fruitless search that HSMR is invalid is only to compound the delusion further. The actual problem here is ignorance and misunderstanding of the basic Laws of Physics and Probability, because our brains are not good at solving these sorts of problems.

But Mid Staffs is a particularly severe example and it only shows up after years of data has accumulated. How would a hospital that was not as bad as this know they had a risk problem, and know sooner? Waiting for years to accumulate enough data to prove there was an avoidable problem in the past is not much help.

That is an excellent question. This type of time-series chart is not very sensitive to small changes when the data is noisy and sparse – such as when you plot the data on a month-by-month timescale and avoidable deaths are actually an uncommon outcome. Plotting the annual sum smooths out this variation and makes the trend easier to see, but it delays the diagnosis further. One way to increase the sensitivity is to plot the data as a cusum (cumulative sum) chart – which is conspicuous by its absence from the data table. It is the running total of the estimated excess deaths. Rather like the running total of strokes in a game of golf.

MS_ExcessDeaths_CUSUM
This is the cusum chart of excess deaths and you will notice that it is not plotted with control limits. That is because it is invalid to use standard control limits for cumulative data. The important feature of the cusum chart is the slope and the deviation from zero. What is usually done is that an alert threshold is plotted on the cusum chart and if the measured cusum crosses this alert-line then the alarm bell should go off – and the search then focuses on the precursor events: the Near Misses, the Not Agains and the Niggles.
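
Here is a sketch of the cusum arithmetic using invented figures; the alert threshold shown is an assumption for illustration, not a published standard.

```python
# Sketch: cusum of the estimated excess-death-equivalent (observed minus expected
# per period). The figures and the alert threshold are invented for illustration.

observed = [410, 415, 430, 445, 460, 480, 500, 515]
expected = [400, 405, 410, 415, 420, 425, 430, 435]
alert_threshold = 100  # an assumed alert line, not a published standard

cusum = 0.0
for period, (o, e) in enumerate(zip(observed, expected), start=1):
    cusum += o - e
    alarm = "  ALERT: investigate the Near Misses, Not Agains and Niggles" if cusum > alert_threshold else ""
    print(f"period {period}: excess = {o - e:+d}, cusum = {cusum:+.0f}{alarm}")
```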

I see. You make it look easy when the data is presented as pictures. But aren’t we still missing the point? Isn’t this still after-the-avoidable-event analysis?

Yes! An avoidable death should be a Never-Event in a designed-to-be-safe healthcare system. It should never happen. There should be no coffins to count. To get to that stage we need to apply exactly the same approach to the Near-Misses, and then the Not-Agains, and eventually the Niggles.

You mean we have to use the SUI data and the IR1 data and the complaint data to do this – and also ask our staff and patients about their Niggles?

Yes. And it is not the number of complaints that is the most useful metric – it is the appearance of the cumulative sum of the complaint severity score. And we need a method for diagnosing and treating the cause of the Niggles too. We need to convert the feedback information into effective action.

Ah ha! Now I understand what the role of the Governance Department is: to apply the tools and techniques of Improvement Science proactively.  But our Governance Department have not been trained to do this!

Then that is one place to start – and their role needs to evolve from Inspectors and Supervisors to Demonstrators and Educators – ultimately everyone in the organisation needs to be a competent Healthcare Improvementologist.

OK – I now know what to do next. But wait a minute. This is going to cost a fortune!

This is just one small first step.  The next step is to redesign the processes so the errors do not happen in the first place. The cumulative cost saving from eliminating the repeated checking, correcting, box-ticking, documenting, investigating, compensating and insuring is much much more than the one-off investment in learning safe system design.

So the Finance Director should be a champion for safety and quality too.

Yup!

Brill. Thanks. And can I ask one more question? I do not want to appear too skeptical but how do we know we can trust that this risk-estimation system has been designed and implemented correctly? How do we know we are not being bamboozled by statisticians? It has happened before!

That is the best question yet. It is important to remember that HSMR is counting deaths in hospital, which means that it is not actually the risk of harm to the patient that is measured – it is the risk to the reputation of the hospital! So the answer to your question is that you demonstrate your deep understanding of the rationale and method of risk-of-harm estimation by listing all the ways that such a system could be deliberately “gamed” to make the figures look better for the hospital. And then go out and look for hard evidence of all the “games” that you can invent. It is a sort of creative poacher-becomes-gamekeeper detective exercise.

OK – I sort of get what you mean. Can you give me some examples?

Yes. The HSMR method is based on deaths-in-hospital so discharging a patient from hospital before they die will make the figures look better. Suppose one hospital has more access to end-of-life care in the community than another: their HSMR figures would look better even though exactly the same number of people died. Another is that the HSMR method is weighted towards admissions classified as “emergencies” – so if a hospital admits more patients as “emergencies” who are not actually very sick and discharges them quickly then this will inflate their estimated deaths and make their actual mortality ratio look better – even though the risk-of-harm to patients has not changed.

OMG – so if we have pressure to meet 4-hour A&E targets and we get paid more for an emergency admission than an A&E attendance then admitting to an Assessment Area and discharging within one day will actually reward the hospital financially, operationally and by apparently reducing their HSMR even though there has been no difference at all to the care that patients actually receive?

Yes. It is an inevitable outcome of the current system design.

But that means that if I am gaming the system and my HSMR is not getting better then the risk-of-harm to patients is actually increasing and my HSMR system is giving me false reassurance that everything is OK.   Wow! I can see why some people might not want that realisation to be public knowledge. So what do we do?

Design the system so that the rewards are aligned with lower risk of harm to patients and improved outcomes.

Is that possible?

Yes. It is called a Win-Win-Win design.

How do we learn how to do that?

Improvement Science.

Footnote I:

The graphs tell a story but they may not create a useful sense of perspective. It has been said that there is a 1 in 300 chance that if you go to hospital you will not leave alive because of avoidable causes. What! It cannot be as high as 1 in 300 surely?

OK – let us use the published Mid-Staffs data to test this hypothesis. Over 12 years there were about 150,000 admissions and an estimated 1,200 excess deaths (if all the risk were concentrated into the excess deaths, which is not what actually happens). That means odds of about 1 in 125 of an avoidable death for every admission! That is more than twice as bad as the estimated average.
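
As a quick sanity check of that arithmetic:

```python
# Quick check of the odds quoted above, using the published Mid Staffs figures.
admissions = 150_000
excess_deaths = 1_200
print(f"about 1 in {admissions / excess_deaths:.0f} admissions")  # about 1 in 125
```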

The Mid Staffordshire statistics are bad enough; but the NHS-as-a-whole statistics are cumulatively worse because there are hundreds of other hospitals that are each generating not-as-obvious avoidable mortality. The data is very ‘noisy’ so it is difficult even for a statistical expert to separate the message from the morass.

And remember that the “expected” mortality is estimated from the average for the whole NHS – which means that if this average is higher than it could be, then there is a statistical bias and we are being falsely reassured by being ‘not statistically significantly different’ from the pack.

And remember too – for every patient and family that suffers an avoidable death there are many more that have to live with the consequences of avoidable but non-fatal harm. That is called avoidable morbidity. This is what the risk really means – everyone has a higher risk of some degree of avoidable harm. Psychological and physical harm.

This challenge is not just about preventing another Mid Staffs – it is about preventing 1000’s of avoidable deaths and 100,000s of patients avoidably harmed every year in ‘average’ NHS trusts.

It is not a mass conspiracy of bad nurses, bad doctors, bad managers or bad politicians that is the root cause.

It is poorly designed processes – and they are poorly designed because the nurses, doctors and managers have not learned how to design better ones.  And we do not know how because we were not trained to.  And that education gap was an accident – an unintended error of omission.  

Our urgently-improve-NHS-safety-challenge requires a system-wide safety-by-design educational and cultural transformation.

And that is possible because the knowledge of how to design, test and implement inherently safe processes exists. But it exists outside healthcare.

And that safety-by-design training is a worthwhile investment because safer-by-design processes cost less to run because they require less checking, less documenting, less correcting – and all the valuable nurse, doctor and manager time freed up by that can be reinvested in more care, better care and designing even better processes and systems.

Everyone Wins – except the cynics who have a choice: to eat humble pie or leave.

Footnote II:

In the debate that has followed the publication of the Francis Report a lot of scrutiny has been applied to the method by which an estimated excess mortality number is created and it is necessary to explore this in a bit more detail.

The HSMR is an estimate of relative risk – it does not say that a specific set of patients came to harm and the rest were OK. So examining the actual deaths individually in search of identifiable cause-and-effect paths is a misuse of the message and a misunderstanding of the method. And when very few, if any, are found, to conclude that HSMR is flawed is an error of logic that exposes the ignorance of the analyst further.

HSMR is not perfect though – it has weaknesses. It is a benchmarking process: the “standard” of 100 is always moving because the collective goalposts are moving – the reference is always changing. HSMR is estimated using data submitted by hospitals themselves – the clinical coding data. So the main weakness is that it is dependent on the quality of the clinical coding – the errors of commission (wrong codes) and the errors of omission (missing codes). Garbage In, Garbage Out.

Hospitals use clinically coded data for other reasons – payment. The way hospitals are now paid is based on the volume and complexity of that activity – Payment By Results (PbR) – using what are called Healthcare Resource Groups (HRGs). This is a better and fairer design because hospitals with more complex (i.e. costly to manage) case loads get paid more per patient on average. The HRG for each patient is determined by their clinical codes – including what are called the comorbidities – the other things that the patient has wrong with them. More comorbidities means more complex and more risky, so more money and more risk of death – roughly speaking. So when PbR came in it became very important to code fully in order to get paid “properly”. The problem was that before PbR the coding errors went largely unnoticed – especially the comorbidity coding. And the errors were biased – it is more likely to omit a code than to record an incorrect one. Errors of omission are harder to detect. This meant that more complete coding (to attract more money) pushed the estimated casemix complexity up compared with the historical reference. So, as actual (not estimated) NHS mortality has gone down slightly, the HSMR yardstick has become even more distorted. Hospitals that did not keep up with the Coding Game would look worse even though their actual risk and mortality may be unchanged. This is the fundamental design flaw in all types of benchmarking based on self-reported data.
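
A small numeric sketch, with invented figures, shows why the Coding Game bends the yardstick: fuller coding raises the expected deaths while the observed deaths stay the same, so the reported ratio falls even though nothing about the actual risk of harm has changed.

```python
# Sketch: the 'Coding Game'. Fuller comorbidity coding raises the estimated expected
# deaths while the observed deaths are unchanged, so the reported ratio falls even
# though the actual risk of harm has not changed. All numbers are invented.

def hsmr(observed: int, expected: float) -> float:
    return 100.0 * observed / expected

observed_deaths = 480
expected_sparse_coding = 400.0          # incomplete comorbidity coding
expected_full_coding = 400.0 * 1.15     # same patients, coded more fully

print(f"HSMR with sparse coding: {hsmr(observed_deaths, expected_sparse_coding):.0f}")  # 120
print(f"HSMR with fuller coding: {hsmr(observed_deaths, expected_full_coding):.0f}")    # 104 - 'better' on paper only
```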

The actual problem here is even more serious. PbR is actually a payment for activity – not a payment for outcomes. It is calculated from what it cost to run the average NHS hospital using a technique called Reference Costing, which is the same method that manufacturing companies used to decide what price to charge for their products. It has another name – Absorption Costing. The highest performers in the manufacturing world no longer use this out-of-date method. The implications of using Reference Costing and PbR in the NHS are profound and dangerous:

If NHS hospitals in general have poorly designed processes that create internal queues and require more bed days than actually necessary then the cost of that “waste” becomes built into the future PbR tariff. This means average length of stay (LOS) is financially rewarded. Above-average LOS is financially penalised and below-average LOS makes a profit. There is no financial pressure to improve beyond average. This is called the Regression to the Mean effect. Also, LOS is not a measure of quality – so there is a pressure to shorten length of stay for purely financial reasons – to generate a surplus to use to fund growth and capital investment. That pressure is non-specific and indiscriminate. PbR is necessary but it is not sufficient – it requires a quality-of-outcome metric to complete it.

So the PbR system is based on an out-of-date cost-allocation model and therefore leads to the very problems that are contributing to the MidStaffs crisis – financial pressure causing quality failures and increased risk of mortality.  MidStaffs may be a chance victim of a combination of factors coming together like a perfect storm – but those same factors are present throughout the NHS because they are built into the current design.

One solution is to move towards a more up-to-date financial model called stream costing. This uses similar data to reference costing but it estimates the “ideal” cost of the “necessary” work to achieve the intended outcome. This stream cost becomes the focus for improvement – the streams where there is the biggest gap between the stream cost and the reference cost are the focus of the redesign activity. Very often the root cause is just poor operational policy design; sometimes it is quality and safety design problems. Both are solvable without investment in extra capacity. The result is a higher quality, quicker, lower-cost stream. Win-win-win. And in the short term that is rewarded by a tariff income that exceeds cost and a lower HSMR.

Radically redesigning the financial model for healthcare is not a quick fix – and it requires a lot of other changes to happen first. So the sooner we start the sooner we will arrive. 

The Writing On The Wall – Part I

The writing is on the wall for the NHS.

It is called the Francis Report and there is a lot of it. Just the 290 recommendations run to 30 pages. It would need a very big wall and very small writing to put it all up there for all to see.

So predictably the speed-readers have latched onto specific words – such as “Inspectors“.

Recommendation 137: “Inspection should remain the central method for monitoring compliance with fundamental standards.”

And it goes further by recommending “A specialist cadre of hospital inspectors should be established …”

A predictable wail of anguish rose from the ranks “Not more inspectors! The last lot did not do much good!”

The word “cadre” is not one that is used in common parlance so I looked it up:

Cadre: 1. a core group of people at the center of an organization, especially military; 2. a small group of highly trained people, often part of a political movement.

So it has a military, centralist, specialist, political flavour. No wonder there was a wail of anguish! Perhaps this “cadre of inspectors” has been unconsciously labelled with another name? Persecutors.

Of more interest is the “highly trained” phrase. Trained to do what? Trained by whom? Clearly none of the existing schools of NHS management who have allowed the fiasco to happen in the first place. So who – exactly? Are these inspectors intended to be protectors, persecutors, or educators?

And what would they inspect?

And how would they use the output of such an inspection?

Would the fear of the inspection and its possible unpleasant consequences be the stick to motivate compliance?

Is the language of the Francis Report going to create another brick wall of resistance from the rubble of the ruins of the reputation of the NHS?  Many self-appointed experts are already saying that implementing 290 recommendations is impossible.

They are incorrect.

The number of recommendations is a measure of the breadth and depth of the rot. So the critical-to-success factor is to implement them in a well-designed order. Get the first few in place and working and the rest will follow naturally.  Get the order wrong and the radical cure will kill the patient.

So where do we start?

Let us look at the inspection question again.  Why would we fear an external inspection? What are we resisting? There are three facets to this: first we do not know what is expected of us;  second we do not know if we can satisfy the expectation; and third we fear being persecuted for failing to achieve the impossible.

W Edwards Deming used a very effective demonstration of the dangers of well-intended but badly-implemented quality improvement by inspection: it was called the Red Bead Game.  The purpose of the game was to illustrate how to design an inspection system that actually helps to achieve the intended goal. Sustained improvement.

This is applied Improvement Science and I will illustrate how it is done with a real and current example.


I am assisting a department in a large NHS hospital to improve the quality of their service. I have been sent in as an external inspector.  The specific quality metric they have been tasked to improve is the turnaround time of the specialist work that they do. This is a flow metric because a patient cannot leave hospital until this work is complete – and more importantly it is a flow and quality metric because when the hospital is full then another patient, one who urgently needs to be admitted, will be waiting for the bed to be vacated. One in one out.

The department have been set a standard to meet, a target, a specification, a goal. It is very clear and it is easily measurable. They have to turnaround each job of work in less than 2 hours.  This is called a lead time specification and it is arbitrary.  But it is not unreasonable from the perspective of the patient waiting to leave and for the patient waiting to be admitted. Neither want to wait.

The department has a sophisticated IT system that measures their performance. They use it to record when each job starts and when each job is finished and from those two events the software calculates the lead time for each job in real-time. At the end of each day the IT system counts how many jobs were completed in less than 2 hours and compares this with how many were done in total and calculates a ratio which it presents as a percentage in the range of 0 and 100. This is called the process yield.  The department are dedicated and they work hard and they do all the work that arrives each day the same day – no matter how long it takes. And at the end of each day they have their score for that day. And it is almost never 100%.  Not never. Almost never. But it is not good enough and they are being blamed for it. In turn they blame others for making their job more difficult. It is a blame-game and it has been going on for years.
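
The yield calculation itself is trivial – a sketch with invented timestamps:

```python
# Sketch: the daily process yield - the percentage of jobs turned around within the
# 2-hour standard. The job timestamps are invented for illustration.

from datetime import datetime, timedelta

standard = timedelta(hours=2)

jobs = [  # (start, finish) as recorded by the IT system
    (datetime(2013, 1, 7, 9, 0),  datetime(2013, 1, 7, 10, 15)),  # 1h15 -> within standard
    (datetime(2013, 1, 7, 9, 30), datetime(2013, 1, 7, 12, 5)),   # 2h35 -> too slow
    (datetime(2013, 1, 7, 11, 0), datetime(2013, 1, 7, 12, 45)),  # 1h45 -> within standard
]

within = sum(1 for start, finish in jobs if finish - start <= standard)
print(f"process yield = {100.0 * within / len(jobs):.0f}%")  # 67%
```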

So how does an experienced Improvement Science-trained Inspector approach this sort of “wicked” problem?

First we need to get the writing on the wall – we need to see the reality – we need to “plot the dots” – we need to see what the performance is doing over time – we need to see the voice of the process. And that requires only their data, a pencil, some paper and for the chart to be put on the wall where everyone can see it.

Chart_1
This is what their daily % yield data for three consecutive weeks looked like as a time-series chart. The thin blue line is the 100% yield target.

The 100% target was only achieved on three days – and they were all Sundays. On the other Sunday it was zero (which may mean that there was no data to calculate a ratio from).

There is wide variation from one day to the next and it is the variation as well as the average that is of interest to an improvement scientist. What is the source of the variation? If 100% yield can be achieved some days then what is different about those days?

Chart_2

So our Improvement Science-trained Inspector will now re-plot the data in a different way – as rational groups. This exposes the issue clearly. The variation on weekends is very wide and the performance during the weekdays is much less variable. What this says is that the weekend system and the weekday system are different. This means that it is invalid to combine the data for both.

It also raises the question of why there is such high variation in yield only at weekends?  The chart cannot answer the question, so our IS-trained Inspector digs a bit deeper and discovers that the volume of work done at the weekend is low, the staffing of the department is different, and that the recording of the events is less reliable. In short – we cannot even trust the weekend data – so we have two reasons to justify excluding it from our chart and just focusing on what happens during the week.
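
Here is a sketch of that rational-grouping step, using invented figures:

```python
# Sketch: rational grouping - separate the weekday and weekend yields before analysing
# them, because they come from two different systems. The figures are invented.

from datetime import date
from statistics import mean, pstdev

daily_yield = [
    (date(2013, 1, 7), 62.0), (date(2013, 1, 8), 71.0), (date(2013, 1, 9), 58.0),
    (date(2013, 1, 10), 66.0), (date(2013, 1, 11), 70.0),          # Mon-Fri
    (date(2013, 1, 12), 100.0), (date(2013, 1, 13), 0.0),          # Sat-Sun: sparse, unreliable
]

weekday = [y for d, y in daily_yield if d.weekday() < 5]
weekend = [y for d, y in daily_yield if d.weekday() >= 5]

print(f"weekday: average = {mean(weekday):.0f}%, spread = {pstdev(weekday):.1f}")
print(f"weekend: average = {mean(weekend):.0f}%, spread = {pstdev(weekend):.1f}")
# Wide weekend variation plus unreliable recording -> exclude weekends from the analysis.
```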

Chart_3
We re-plot our chart, marking the excluded weekend data as not for analysis.

We can now see that the weekday performance of our system is visible, less variable, and the average is a long way from 100%.

The team are working hard and still only achieving mediocre performance. That must mean that they need something that is missing. More motivation maybe. More people maybe. More technology maybe. But there is no more money for more people or technology, and traditional JFDI motivation does not seem to have helped.

This looks like an impossible task!

Chart_4

So what does our Inspector do now? Mark their paper with a FAIL and put them on the To Be Sacked for Failing to Meet an Externally Imposed Standard heap?

Nope.

Our IS-trained Inspector calculates the limits of expected performance from the data  and plots these limits on the chart – the red lines.  The computation is not difficult – it can be done with a calculator and the appropriate formula. It does not need a sophisticated IT system.

What this chart now says is “The current design of this process is capable of delivering between 40% and 85% yield. To expect it to do better is unrealistic”.  The implication for action is “If we want 100% yield then the process needs to be re-designed.” Persecution will not work. Blame will not work. Hoping-for-the-best will not work. The process must be redesigned.

Our improvement scientist then takes off the Inspector’s hat and dons the Designer’s overalls and gets to work. There is a method to this and it is called 6M Design®.

Chart_5

First we need to have a way of knowing if any future design changes have a statistically significant impact – for better or for worse. To do this the chart is extended into the future and the red lines are projected forwards in time as the black lines called locked-limits.  The new data is compared with this projected baseline as it comes in.  The weekends and bank holidays are excluded because we know that they are a different system. On one day (20/12/2012) the yield was surprisingly high. Not 100% but more than the expected upper limit of 85%.
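
Here is a sketch of that locked-limits comparison, using the 40%–85% baseline quoted above and invented daily figures:

```python
# Sketch: comparing new daily yields against the locked baseline limits. The limits
# are the 40%-85% range quoted above; the new daily figures are invented.

locked_lower, locked_upper = 40.0, 85.0

new_daily_yield = [63.0, 70.0, 88.0, 61.0, 55.0, 91.0]

for day, y in enumerate(new_daily_yield, start=1):
    if y > locked_upper:
        note = "above the expected range - look for an assignable cause"
    elif y < locked_lower:
        note = "below the expected range - look for an assignable cause"
    else:
        note = "within the expected range for the current design"
    print(f"day {day}: {y:.0f}% -> {note}")
```

Once a shift is confirmed as real and sustained, the data is split at the point of change and new averages and limits are calculated for each segment separately – which is exactly what the charts further on show.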

Chart_6
The chart alerts us to investigate, and we found that it was a ‘hospital bed crisis’ and that an ‘all hands to the pumps’ distress call had gone out.

Extra capacity was pulled to the process and less urgent work was delayed until later.  It is the habitual reaction-to-a-crisis behaviour called “expediting” or “firefighting”.  So after the crisis had waned and the excitement diminished the performance returned to the expected range. A week later the chart signals us again and we investigate but this time the cause was different. It was an unusually quiet day and there was more than enough hands on the pumps.

Both of these days are atypically good and we have an explanation for each of them. This is called an assignable cause. So we are justified in excluding these points from our measure of the typical baseline capability of our process – the performance the current design can be expected to deliver.

An inexperienced manager might conclude from these lessons that what is needed is more capacity. That sounds and feels intuitively obvious and it is correct that adding more capacity may improve the yield – but that does not prove that lack of capacity is the primary cause.  There are many other causes of long lead times  just as there are many causes of headaches other than brain tumours! So before we can decide the best treatment for our under-performing design we need to establish the design diagnosis. And that is done by inspecting the process in detail. And we need to know what we are looking for; the errors of design commission and the errors of design omission. The design flaws.

Only a trained and experienced process designer can spot the flaws in a process design. Intuition will trick the untrained and inexperienced.


Once the design diagnosis is established then the redesign stage can commence. Design always works to a specification and in this case it was clear – to significantly improve the yield to over 90% at no cost.  In other words without needing more people, more skills, more equipment, more space, more anything. The design assignment was made trickier by the fact that the department claimed that it was impossible to achieve significant improvement without adding extra capacity. That is why the Inspector had been sent in. To evaluate that claim.

The design inspection revealed a complex adaptive system – not a linear, deterministic, production-line that manufactures widgets.  The department had to cope with wide variation in demand, wide variation in quality of request, wide variation in job complexity, and wide variation in urgency – all at the same time.  But that is the nature of healthcare and acute hospital work. That is the expected context.

The analysis of the current design revealed that it was not well suited for this requirement – and the low yield was entirely predictable. The analysis also revealed that the root cause of the low yield was not lack of either flow-capacity or space-capacity.

This insight led to the suggestion that it would be possible to improve yield without increasing cost. The department were polite but they did not believe it was possible. They had never seen it, so why should they be expected to just accept this on faith?

Chart_7
So, the next step was to develop, test and demonstrate a new design and that was done in three stages. The final stage was the Reality Test – the actual process design was changed for just one day – and the yield measured and compared with the predicted improvement.

This was the validity test – the proof of the design pudding. And to visualise the impact we used the same technique as before – extending the baseline of our time-series chart, locking the limits, and comparing the “after” with the “before”.

The yellow point marks the day of the design test. The measured yield was well above the upper limit which suggested that the design change had made a significant improvement. A statistically significant improvement.  There was no more capacity than usual and the day was not unusually quiet. At the end of the day we held a team huddle.

Our first question was “How did the new design feel?” The consensus was “Calmer, smoother, fewer interruptions” and best of all “We finished on time – there was no frantic catch-up at the end of the day and no one had to stay late to complete the day’s work!”

The next question was “Do we want to continue tomorrow with this new design or revert back to the old one?” The answer was clear “Keep going with the new design. It feels better.”

The same chart was used to show what happened over the next few days – excluding the weekends as before. The improvement was sustained – it did not revert to the original because the process design had been changed. Same work, same capacity, different process – higher yield. The red flags on the charts mark the statistically significant evidence of change and the cluster of red flags is very strong statistical evidence that the improvement is not due to chance.

The next phase of the 6M Design® method is to continue to monitor the new process to establish the new baseline of expectation. That will require at least twelve data points and it is in progress. But we have enough evidence of a significant improvement. This means that we have no credible justification to return to the old design, and it also implies that it is no longer valid to compare the new data against the old projected limits. Our chart tells us that we need to split the data into before-and-after and to calculate new averages and limits for each segment separately. We have changed the voice of the process by changing the design.

Chart_8
And when we split the data at the point-of-change then the red flags disappear – which means that our new design is stable. And it has a new capability – a better one. We have moved closer to our goal of 100% yield. It is still early days and we do not really have enough data to calculate the new capability.

What we can say is that we have improved average quality yield from 63% to about 90% at no cost using a sequence of process diagnose, design, deliver.  Study-Plan-Do.

And we have hard evidence that disproves the impossibility hypothesis.


And that was the goal of the first design change – it was not to achieve 100% yield in one jump. Our design simulation had predicted an improvement to about 90%.  And there are other design changes to follow that need this stable foundation to build on.  The order of implementation is critical – and each change needs time to bed in before the next change is made. That is the nature of the challenge of improving a complex adaptive system.

The cost to the department was zero but the benefit was huge.  The bigger benefit to the organisation was felt elsewhere – the ‘customers’ saw a higher quality, quicker process – and there will be a financial benefit for the whole system. It will be difficult to measure with our current financial monitoring systems but it will be real and it will be there – lurking in the data.

The improvement required a trained and experienced Inspector/Designer/Educator to start the wheel of change turning. There are not many of these in the NHS – but the good news is that the first level of this training is now available.

What this means for the post-Francis Report II NHS is that those who want to can choose to leap over the wall of resistance that is being erected by the massing legions of noisy cynics. It means we can all become our own inspectors. It means we can all become our own improvers. It means we can all learn to redesign our systems so that they deliver higher safety, better quality, more quickly and at no extra one-off or recurring cost. Then none of us need have anything to fear from the Specialist Cadre of Hospital Inspectors.

The writing is on the wall.


15/02/2013 – Two weeks in and still going strong. The yield has improved from 63% to 92% and is stable. Improvement-by-design works.

10/03/2013 – Six weeks in and a good time to test if the improvement has been sustained.

TTO_Yield_Weekly
The chart is the weekly performance plotted for 17 weeks before the change and for 5 weeks after. The advantage of weekly aggregated data is that it removes the weekend/weekday 7-day cycle and reduces the effect of day-to-day variation.
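
A sketch of that weekly aggregation, with invented daily counts:

```python
# Sketch: aggregating daily counts into a weekly yield, which removes the weekday/weekend
# 7-day cycle. The daily (within-standard, total) job counts are invented.

daily_counts = [
    (20, 32), (25, 33), (18, 30), (22, 34), (24, 33), (3, 3), (0, 0),   # an early week
    (28, 31), (27, 30), (25, 29), (26, 30), (27, 30), (2, 2), (1, 1),   # a week after the change
]

def weekly_yield(days):
    passed = sum(p for p, t in days)
    total = sum(t for p, t in days)
    return 100.0 * passed / total if total else float("nan")

for w in range(0, len(daily_counts), 7):
    print(f"week {w // 7 + 1}: {weekly_yield(daily_counts[w:w + 7]):.0f}%")
```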

The improvement is obvious, significant and has been sustained. This is the objective improvement. More important is the subjective improvement.

Here is what Chris M (departmental operational manager) wrote in an email this week (quoted with permission):

Hi Simon

It is I who need to thank you for explaining to me how to turn our pharmacy performance around and ultimately improve the day to day work for the pharmacy team (and the trust staff). This will increase job satisfaction and make pharmacy a worthwhile career again instead of working in constant pressure with a lack of achievement that had made the team feel rather disheartened and depressed. I feel we can now move onwards and upwards so thanks for the confidence boost.

Best wishes and many thanks

Chris

This is what Improvement Science is all about!

Robert Francis QC

Today is an important day.

The Robert Francis QC Report and recommendations from the Mid-Staffordshire Hospital Crisis have been published – and they make for sobering reading.  The emotions that just the executive summary evoked in me were sadness, shame and anger.  Sadness for the patients, relatives, and staff who have been irreversibly damaged; shame that the clinical professionals turned a blind-eye; and anger that the root cause has still not been exposed to public scrutiny.

Click here to get a copy of the RFQC Report Executive Summary.

Click here to see the video of RFQC describing his findings. 

The root cause is ignorance at all levels of the NHS.  Not stupidity. Not malevolence. Just ignorance.

Ignorance of what is possible and ignorance of how to achieve it.

RFQC rightly focusses his recommendations on putting patients at the centre of healthcare and on making those paid to deliver care accountable for the outcomes.  Disappointingly, the report is notably thin on the financial dimension other than saying that financial targets took priority over safety and quality.  He is correct. They did. But the report does not say that this is unnecessary – it just says “in future put safety before finance” and in so doing he does not challenge the belief that we are playing a zero-sum game: the assumption that higher-quality-always-costs-more.

This assumption is wrong and can easily be disproved.

A system that has been designed to deliver safety-and-quality-on-time-first-time-and-every-time costs less. And it costs less because the cost of errors, checking, rework, queues, investigation, compensation, inspectors, correctors, fixers, chasers, and all the other expensive-high-level-hot-air-generation-machinery that overburdens the NHS and that RFQC has pointed squarely at becomes unnecessary.  He says “simplify” which is a step in the right direction. The goal is to render it irrelevant.

The ignorance is ignorance of how to design a healthcare system that works right-first-time. The fact that the Francis Report even exists and is pointing its uncomfortable fingers-of-evidence at every level of the NHS from ward to government is tangible proof of this collective ignorance of system design.

And the good news is that this collective ignorance is also unnecessary … because the knowledge of how to design safe-and-affordable systems already exists. We just have to learn how. I call it 6M Design® – but the label is irrelevant – the knowledge exists and the evidence that it works exists.

So here are some of the RFQC recommendations viewed through a 6M Design® lens:

1.131 Compliance with the fundamental standards should be policed by reference to developing the CQC’s outcomes into a specification of indicators and metrics by which it intends to monitor compliance. These indicators should, where possible, be produced by the National Institute for Health and Clinical Excellence (NICE) in the form of evidence-based procedures and practice which provide a practical means of compliance and of measuring compliance with fundamental standards.

This is the safety-and-quality outcome specification for a healthcare system design – the required outcome presented as a relevant metric in time-series format and qualified by context.  Only a stable outcome can be compared with a reference standard to assess the system capability. An unstable outcome metric requires inquiry to understand the root cause and an appropriate action to restore stability. A stable but incapable outcome performance requires redesign to achieve both stability and capability. And if the terms used above are unfamiliar then that is further evidence of system-design-ignorance.
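That decision logic can be sketched in a few lines. This is an illustrative simplification only – the limits would come from an XmR chart, the reference standard from the fundamental standards, and a real stability check would also use run rules, not just points outside the limits:

```python
# Illustrative decision logic for an outcome metric where higher is better.
# Inputs are assumed: XmR natural process limits and a reference standard.
def assess_outcome(values, lower_limit, upper_limit, reference_standard):
    # Simplified stability test: every point inside the natural process limits.
    stable = all(lower_limit <= v <= upper_limit for v in values)
    if not stable:
        return "unstable: investigate the root cause and restore stability"
    if lower_limit >= reference_standard:
        return "stable and capable: maintain and monitor"
    return "stable but incapable: redesign for both stability and capability"

print(assess_outcome([88, 91, 90, 92, 89],
                     lower_limit=85, upper_limit=95, reference_standard=80))
```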
 
1.132 The procedures and metrics produced by NICE should include evidence-based tools for establishing the staffing needs of each service. These measures need to be readily understood and accepted by the public and healthcare professionals.

This is the capacity-and-cost specification of any healthcare system design – the financial envelope within which the system must operate. The system capacity design works backwards from this constraint in the manner of “We have this much resource – what design of our system is capable of delivering the required safety and quality outcome with this capacity?”  The essence of this challenge is to identify the components of poor (i.e. wasteful) design in the existing systems and remove or replace them with less wasteful designs that achieve the same or better quality outcomes. This is not impossible but it does require system diagnostic and design capability. If the NHS had enough of those skills then the Francis Report would not exist.

1.133 Adoption of these practices, or at least their equivalent, is likely to help ensure patients’ safety. Where NICE is unable to produce relevant procedures, metrics or guidance, assistance could be sought and commissioned from the Royal Colleges or other third-party organisations, as felt appropriate by the CQC, in establishing these procedures and practices to assist compliance with the fundamental standards.

How to implement evidence-based research in the messy real world is the Elephant in the Room. It is possible but it requires techniques and tools that fall outside the traditional research and audit framework – or rather that sit between research and audit. This is where Improvement Science sits. The fact that the Report only mentions evidence-based practice and audit implies that the NHS is still ignorant of this gap and what fills it – and so it appears is RFQC.   

1.136 Information needs to be used effectively by regulators and other stakeholders in the system wherever possible by use of shared databases. Regulators should ensure that they use the valuable information contained in complaints and many other sources. The CQC’s quality risk profile is a valuable tool, but it is not a substitute for active regulatory oversight by inspectors, and is not intended to be.

Databases store data. Sharing databases will share data. Data is not information. Information requires data and the context for that data.  Furthermore having been informed does not imply either knowledge or understanding. So in addition to sharing information, the capability to convert information-into-decision is also required. And the decisions we want are called “wise decisions” which are those that result in actions and inactions that lead inevitably to the intended outcome.  The knowledge of how to do this exists but the NHS seems ignorant of it. So the challenge is one of education not of yet more investigation.

1.137 Inspection should remain the central method for monitoring compliance with fundamental standards. A specialist cadre of hospital inspectors should be established, and consideration needs to be given to collaborative inspections with other agencies and a greater exploitation of peer review techniques.

This is audit. This is the sixth stage of a 6M Design® – the Maintain step.  Inspectors need to know what they are looking for, the errors of commission and the errors of omission;  and to know what those errors imply and what to do to identify and correct the root cause of these errors when discovered. The first cadre of inspectors will need to be fully trained in healthcare systems design and healthcare systems improvement – in short – they need to be Healthcare Improvementologists. And they too will need to be subject to the same framework of accreditation, and accountability as those who work in the system they are inspecting.  This will be one of the greatest of the challenges. The fact that the Francis report exists implies that we do not have such a cadre. Who will train, accredit and inspect the inspectors? Who has proven themselves competent in reality (not rhetorically)?

1.163 Responsibility for driving improvement in the quality of service should therefore rest with the commissioners through their commissioning arrangements. Commissioners should promote improvement by requiring compliance with enhanced standards that demand more of the provider than the fundamental standards.

This means that commissioners will need to understand what improvement requires and to include that expectation in their commissioning contracts. This challenge is even greater than the creation of a “cadre of inspectors”. What is required is a “generation of competent commissioners” who are also experienced and who have demonstrated competence in healthcare system design. The Commissioners-of-the-Future will need to be experienced healthcare improvementologists.

The NHS is sick – very sick. The medicine it needs to restore its health and vitality does exist – and it will not taste very nice – but to withhold an effective treatment for a serious illness on that basis is clinical negligence.

It is time for the NHS to look in the mirror and take the strong medicine. The effect is quick – it will start to feel better almost immediately. 

To deliver safety and quality and quickly and affordably is possible – and if you do not believe that then you will need to muster the humility to ask to have the how demonstrated.

6MDesign

 

Kicking the Habit

It is not easy to kick a habit. We all know that. And for some reason the ‘bad’ habits are harder to kick than the ‘good’ ones. So what is bad about a ‘bad habit’ and why is it harder to give up? Surely if it was really bad it would be easier to give up?

Improvement is all about giving up old ‘bad’ habits and replacing them with new ‘good’ habits – ones that will sustain the improvement. But there is an invisible barrier that resists us changing any habit – good or bad. And it is that barrier to habit-breaking that we need to understand to succeed. Luck is not a reliable ally.

What does that habit-breaking barrier look like?

The problem is that it is invisible – or rather it is emotional – or to be precise it is chemical.

Our emotions are the output of a fantastically complex chemical system – our brains. And influencing the chemical balance of our brains can have a profound effect on our emotions.  That is how anti-depressants work – they very slightly adjust the chemical balance of every part of our brains. The cumulative effect is that we feel happier.  Nicotine has a similar effect.

And we can achieve the same effect without resorting to drugs or fags – and we can do that by consciously practising some new mental habits until they become ingrained and unconscious. We literally overwrite the old mental habit.

So how do we do this?

First we need to make the mental barrier visible – and then we can focus our attention on eroding it. To do that we need to remove the psychological filter that we all use to exclude our emotions. It is rather like taking off our psychological sunglasses.

When we do that the invisible barrier jumps into view: illuminated by the glare of three negative emotions.  Sadness, fear, and anxiety.  So whenever we feel any of these we know there is a barrier to improvement hiding in the emotional smoke. This is the first stage: tune in to our emotions.

The next step is counter-intuitive. Instead of running away from the negative feeling we consciously flip into a different way of thinking.  We actively engage with our negative feelings – and in a very specific way. We engage in a detached, unemotional, logical, rational, analytical  ‘What caused that negative feeling?’ way.

We then focus on the causes of the negative emotions. And when we have the root causes of our Niggles we design around them, under them, and over them.  We literally design them out of our heads.

The effect is like magic.

And this week I witnessed a real example of this principle in action.

One team I am working with experienced the Power of Improvementology. They saw the effect with their own eyes.  There were no computers in the way, no delays, no distortion and no deletion of data to cloud the issue. They saw the performance of their process jump dramatically – from a success rate of 60% to 96%!  And not just the first day, the second day too.  “Surprised and delighted” sums up their reaction.

So how did we achieve this miracle?

We just looked at the process through a different lens – one not clouded and misshapen by old assumptions and blackened by ignorance of what is possible.  We used the 6M Design® lens – and with the clarity of insight it brings the barriers to improvement became obvious. And they were dissolved. In seconds.

Success then flowed as the Dam of Disbelief crumbled and was washed away.

The chaos has gone. The interruptions have gone. The expediting has gone. The firefighting has gone. The complaining has gone.  These chronic Niggles have been replaced by the Nuggets of calm efficiency, new hope and visible excitement.

And we know that others have noticed the knock-on effect because we got an email from our senior executive that said simply “No one has moaned about TTOs for two days … something has changed.”    

That is Improvementology-in-Action.

 

Curing Chronic Carveoutosis

Last week the Ray Of Hope briefly illuminated a very common system design disease called carveoutosis.  This week the RoH will tarry a little longer to illuminate an example that reveals the value of diagnosing and treating this endemic process ailment.

Do you remember the days when we used to have to visit the Central Post Office in our lunch hour to access a quality-of-life-critical service that only a Central Post Office could provide – like getting a new road tax disc for our car?  On walking through the impressive Victorian entrances of these stalwart high street institutions our primary challenge was to decide which queue to join.

In front of each gleaming mahogany, brass and glass counter was a queue of waiting customers. Behind was the Post Office operative. We knew from experience that to be in-and-out before our lunch hour expired required deep understanding of the ways of people and processes – and a savvy selection.  Some queues were longer than others. Was that because there was a particularly slow operative behind that counter? Or was it because there was a particularly complex postal problem being processed? Or was it because the customers who had been waiting longer had identified that queue was fast flowing and had defected to it from their more torpid streams? We know that size is not a reliable indicator of speed or quality.

The social pressure is now mounting … we must choose … dithering is a sign of weakness … and swapping queues later is another abhorrent behaviour. So we employ our most trusted heuristic – we join the end of the shortest queue. Sometimes it is a good choice, sometimes not so good!  But intuitively it feels like the best option.

Of course if we choose wisely and we succeed in leap-frogging our fellow customers then we can swagger (just a bit) on the way out. And if not we can scowl and mutter oaths at others who (by sheer luck) leap-frog us. The Post Office Game is fertile soil for the Ain’t It Awful game which we play when we arrive back at work.

But those days are past and now we are more likely to encounter a single-queue when we are forced by necessity to embark on a midday shopping sortie. As we enter we see the path of the snake thoughtfully marked out with rope barriers or with shelves hopefully stacked with just-what-we-need bargains to stock up on as we drift past.  We are processed FIFO (first-in-first-out) which is fairer-for-all and avoids the challenge of the dreaded choice-of-queue. But the single-queue snake brings a new challenge: when we reach the head of the snake we must identify which operative has become available first – and quickly!

Because if we falter then we will incur the shame of the finger-wagging or the flashing red neon arrow that is easily visible to the whole snake; and a painful jab in the ribs from the impatient snaker behind us; and a chorus of tuts from the tail of the snake. So as we frantically scan left and right along the line of bullet-proof glass cells looking for clues of imminent availability we run the risk of developing acute vertigo or a painful repetitive-strain neck injury!

So is the single-queue design better?  Do we actually wait less time, the same time or more time? Do we pay a fair price for fair-for-all queue design? The answer is not intuitively obvious because when we are forced to join a lone and long queue it goes against our gut instinct. We feel the urge to push.

The short answer is “Yes”.  A single-queue feeding tasks to parallel-servers is actually a better design. And if we ask the Queue Theorists then they will dazzle us with complex equations that prove it is a better design – in theory.  But the scary-maths does not help us to understand how it is a better design. Most of us are not able to convert equations into experience; academic rhetoric into pragmatic reality. We need to see it with our own eyes to know it and understand it. Because we know that reality is messier than theory.    

And if it is a better design then just how much better is it?

To illustrate the potential advantage of a single-queue design we need to push the competing candidates to their performance limits and then measure the difference. We need a real example and some real data. We are Improvementologists!

First we need to map our Post Office process – and that reveals that we have a single step process – just the counter. That is about as simple as a process gets. Our map also shows that we have a row of counters of which five are manned by fully trained Post Office service operatives.

Now we can measure our process and when we do that we find that we get an average of 30 customers per hour walking in the entrance and an average of 30 customers an hour walking out. Flow-out equals flow-in. Activity equals demand. And the average flow is one every 2 minutes. So far so good. We then observe our five operatives and we find that the average time from starting to serve one customer to starting to serve the next is 10 minutes. We know from our IS training that this is the cycle time. Good.

So we do a quick napkin calculation to check that the numbers make sense: our system of five operatives working in parallel, each with an average cycle time of 10 minutes, can collectively process a customer on average every 2 minutes – that is 30 per hour on average. So it appears we have just enough capacity to keep up with the flow of work – we are at the limit of efficiency.  Good.
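Written out as a formula, using the figures above, the napkin calculation is:

\[
\text{system capacity} \;=\; \frac{n_{\text{operatives}}}{\text{average cycle time}} \;=\; \frac{5}{10\ \text{min}} \;=\; 0.5\ \text{customers per minute} \;=\; 30\ \text{customers per hour}
\]

\[
\text{utilisation} \;=\; \frac{\text{demand}}{\text{capacity}} \;=\; \frac{30\ \text{per hour}}{30\ \text{per hour}} \;=\; 100\%
\]

Which is why the system is described as being at the limit of efficiency – there is no slack at all.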

[CarveOut_00] We also notice that there is variation in the cycle time from customer to customer – so we plot our individual measurements as a time-series chart. There does not seem to be an obvious pattern – it looks random – and BaseLine says that it is statistically stable. Our chart tells us that a range of 5 to 15 minutes is a reasonable expectation to set.

We also observe that there is always a queue of waiting customers somewhere – and although the queues fluctuate in size and location they are always there.

 So there is always a wait for some customers. A variable wait; an unpredictable wait. And that is a concern for us because when the queues are too numerous and too long then we see customers get agitated, look at their watches, shrug their shoulders and leave – taking their custom and our income with them and no doubt telling all their friends of their poor experience. Long queues and long waits are bad for business.

And we do not want zero queues either because if there is no queue and our operatives run out of work then they become under-utilised and our system efficiency and productivity falls.  That means we are incurring a cost but not generating an income. No queues and idle resources are bad for business too.

And we do not want a mixture of quick queues and slow queues because that causes complaints and conflict.  A high-conflict customer complaint experience is bad for business too! 

What we want is a design that creates small and stable queues; ones that are just big enough to keep our operatives busy and our customers not waiting too long.

So which is the better design and how much better is it? Five-queues or a single-queue? Carve-out or no-carve-out?

To find the answer we decide to conduct a week-long series of experiments on our system and use real data to reveal the answer. We choose the time from a customer arriving to the same customer leaving as our measure of quality and performance – and we know that the best we can expect is somewhere between 5 and 15 minutes.  We know from our IS training that this is called the Lead Time.

On day #1 we arrange our Post Office with five queues – clearly roped out – one for each manned counter.  We know from our mapping and measuring that customers do not arrive in a steady stream and we fear that may confound our experiment so we arrange to admit only one of our loyal and willing customers every 2 minutes. We also advise our loyal and willing customers which queue they must join before they enter to avoid the customer choice challenges.  We decide which queue using a random number generator – we toss a dice until we get a number between 1 and 5.  We record the time the customer enters on a slip of paper and we ask the customer to give it to the operative and we instruct our service operatives to record the time they completed their work on the same slip and keep it for us to analyse later. We run the experiment for only 1 hour so that we have a sample of 30 slips and then we collect the slips,  calculate the difference between the arrival and departure times and plot them on a time-series chart in the order of arrival.

[CarveOut_01] This is what we found.  Given that the time at the counter is an average of 10 minutes then some of these lead times seem quite long. Some customers spend more time waiting than being served. And we sense that the performance is getting worse over time.

So for the next experiment we decide to open a sixth counter and to rope off a sixth queue. We expect that increasing capacity will reduce waiting time and we confidently expect the performance to improve.

On day #2 we run our experiment again, letting customers in one every 2 minutes as before and this time we use all the numbers on the dice to decide which queue to direct each customer to.  At the end of the hour we collect the slips, calculate the lead times and plot the data – on the same chart.

[CarveOut_02] This is what we see.

It does not look much better and that is a big surprise!

The wide variation from customer to customer looks about the same but with the Eye of Optimism we get a sense that the overall performance looks a bit more stable.

So we conclude that adding capacity (and cost) may make a small difference.

But then we remember that we still only served 30 customers – which means that our income stayed the same while our cost increased by 20%. That is definitely NOT good for business: it is not going to look good in a business case – “possibly marginally better quality and a 20% increase in cost and therefore price!”

So on day #3 we change the layout. This time we go back to five counters but we re-arrange the ropes to create a single-queue so the customer at the front can be ‘pulled’ to the first available counter. Everything else stays the same – one customer arriving every 2 minutes, the dice, the slips of paper, everything.  At the end of the hour we collect the slips, do our sums and plot our chart.

[CarveOut_03] And this is what we get! The improvement is dramatic. Both the average and the variation have fallen – especially the variation. But surely this cannot be right. The improvement is too good to be true. We check our data again. Yes, our customers arrived and departed on average one every 2 minutes as before; and all our operatives did the work in an average of 10 minutes just as before. And we had exactly the same capacity as we had on day #1. And we finished on time. It is correct. We are gobsmacked. It is like a magic wand has been waved over our process. We never would have predicted that just moving the ropes around could have such a big impact.  The Queue Theorists were correct after all!
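If you cannot get to a real Post Office (or a pad of Post It® Notes), the same experiment can be sketched in a few lines of code. This is an illustrative simulation, not the actual game data: one customer arrives every 2 minutes, service times are drawn uniformly from 5 to 15 minutes (average 10), and it compares random allocation to five separate queues against a single queue feeding the first free counter.

```python
# Illustrative comparison of the five-queue (carve-out) design with the
# single-queue (no-carve-out) design. Not the real Post Office data.
import random

def simulate(n_customers=240, n_servers=5, single_queue=True, seed=1):
    rng = random.Random(seed)
    arrivals = [i * 2.0 for i in range(n_customers)]          # one every 2 minutes
    service = [rng.uniform(5.0, 15.0) for _ in range(n_customers)]  # 5-15 min each
    server_free_at = [0.0] * n_servers                         # when each counter is next free
    lead_times = []
    for i, arrive in enumerate(arrivals):
        if single_queue:
            # head of the single queue is pulled to the first available counter
            s = min(range(n_servers), key=lambda k: server_free_at[k])
        else:
            # customer is allocated one of the dedicated queues at random (the dice)
            s = rng.randrange(n_servers)
        start = max(arrive, server_free_at[s])
        server_free_at[s] = start + service[i]
        lead_times.append(server_free_at[s] - arrive)          # arrival-to-departure time
    return lead_times

for design in (False, True):
    lt = simulate(single_queue=design)
    label = "single-queue" if design else "five-queue  "
    print(f"{label}: average lead time {sum(lt)/len(lt):5.1f} min, "
          f"longest {max(lt):5.1f} min")
```

Running it shows the same pattern as the charts above: the single-queue design has a lower average lead time and, more importantly, a much smaller worst case – with exactly the same capacity.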

But wait a minute! We are delivering a much better customer experience in terms of waiting time and at the same cost. So could we do even better with six counters open? What will happen if we keep the single-queue design and open the sixth desk?  Before it made little difference but now we doubt our ability to guess what will happen. Our intuition seems to keep tricking us. We are losing our confidence in predicting what the impact will be. We are in counter-intuitive land! We need to run the experiment for real.

So on day #4 we keep the single-queue and we open six desks. We await the data eagerly.

[CarveOut_04] And this is what happened. Increasing the capacity by 20% has made virtually no difference – again. So we now have two pieces of evidence that say – adding extra capacity did not make a difference to waiting times. The variation looks a bit less, but it is marginal.

It was changing the Queue Design that made the difference! And that change cost nothing. Rien. Nada. Zippo!

That will look much better in our report but now we have to face the emotional discomfort of having to re-evaluate one of our deepest held assumptions.

Reality is telling us that we are delivering a better quality experience using exactly the same resources and it cost nothing to achieve. Higher quality did NOT cost more. In fact we can see that with a carve-out design when we added capacity we just increased the cost we did NOT improve quality. Wow!  That is a shock. Everything we have been led to believe seems to be flawed.

Our senior managers are not going to like this message at all! We will be challenging their dogma directly. And they do not like that. Oh dear!

Now we can see how much better a no-carveout single-queue pull-design can work; and now we can explain why single-queue designs  are used; and now we can show others our experiment and our data and if they do not believe us they can repeat the experiment themselves.  And we can see that it does not need a real Post Office – a pad of Post It® Notes, a few stopwatches and some willing helpers is all we need.

And even though we have seen it with our own eyes we still struggle to explain how the single-queue design works better. What actually happens? And we still have that niggling feeling that the performance on day #1 was unstable.  We need to do some more exploring.

So we run the day #1 experiment again – the five queues – but this time we run it for a whole day, not just an hour.

[CarveOut_06]

Ah ha!   Our hunch was right.  It is an unstable design. Over time the variation gets bigger and bigger.

But how can that happen?

Then we remember. We told the customers that they could not choose the shortest queue or change queue after they had joined it.  In effect we said “do not look at the other queues“.

And that happens all the time on our systems when we jealously hide performance data from each other! If we are seen to have a smaller queue we get given extra work by the management or told to slow down by the union rep!  

So what do we do now?  All we are doing is trying to improve the service and all we seem to be achieving is annoying more and more people.

What if we apply a maximum waiting time target, say of 1 hour, and allow customers to jump to the front of their queue if they are at risk of breaching the target? That will smooth out spikes and give everyone a fair chance. Customers will understand. It is intuitively obvious and common sense. But our intuition has tricked us before …

So we run the experiment again and this time we tell our customers that if they wait 50 minutes then they can jump to the front of their queue. They appreciate this because they now have an upper limit on the time they will wait.

[CarveOut_07] And this is what we observe. It looks better than before, at least initially, and then it goes pear-shaped.

All we have done with our ‘carve-out-and-expedite-the-long-waiters’ design is to defer the inevitable – the crunch. We cannot keep our promise. By the end everyone is pushing to the front of the queue. It is a riot!

And there is more. Look at the lead time for the last few customers – two hours. Not only have they waited a long time, but we have had to stay open for two hours longer. That is a BIG cost pressure in overtime payments.

So, whatever way we look at it: a single-queue design is better.  And no one loses out! The customers have a short and predictable waiting time; the operatives are kept occupied and go home on time; and the executives bask in the reflected glory of the excellent customer feedback.  It is a Three Wins® design.

Seeing is believing – and we now know that it is worth diagnosing and treating carveoutosis.

And the only thing left to do is to explain how a single-queue design works better. It is not obvious is it?

And the best way to do that is to play the Post Office Game and see what actually happens.

A big light-bulb moment awaits!

 

 

Update: My little Sylvanian friends have tried the Post Office Game and kindly sent me this video of the before  Sylvanian Post Office Before and the after Sylvanian Post Office After. They say they now know how the single-queue design works better. 

 

A Ray Of Hope

It does not seem to take much to bring a real system to an almost standstill.  Six inches of snow falling between 10 AM and 2 PM on a Friday in January seems to be enough!

It was not so much the amount of snow – it was the timing.  The decision to close many schools was not made until after the pupils had arrived – and it created a logistical nightmare for parents. 

Many people suddenly needed to get home earlier than they expected, which created an early rush hour and gridlocked the road system.

The same number of people travelled the same distance in the same way as they would normally – it just took them a lot longer.  And the queues created more problems as people tried to find work-arounds to bypass the traffic jams.

How many thousands of hours of life-time were wasted sitting in near-stationary queues of cars? How many millions of pounds’ worth of productivity were lost? How much will the catch-up cost?

And yet while we grumble we shrug our shoulders and say “It is just one of those things. We cannot control the weather. We just have to grin and bear it.”  

Actually we do not have to. And we do not need a weather machine to control the weather. Mother Nature is what it is.

Exactly the same behaviour happens in many systems – and our conclusion is the same.  We assume the chaos and queues are inevitable.

They are not.

They are symptoms of the system design – and specifically they are the inevitable outcomes of the time-design.

But it is tricky to visualise the time-design of a system.  We can see the manifestations of the poor time-design, the queues and chaos, but we do not so easily perceive the causes. So the poor time-design persists. We are not completely useless though; there are lots of obvious things we can do. We can devise ingenious ways to manage the queues; we can build warehouses to hold the queues; we can track the jobs in the queues using sophisticated and expensive information technology; we can identify the hot spots; we can recruit and deploy expediters, problem-solvers and fire-fighters to facilitate the flow through the hottest of them; and we can pump capacity and money into defences, drains and dramatics. And our efforts seem to work so we congratulate ourselves and conclude that these actions are the only ones that work.  And we keep clamouring for more and more resources. More capacity, MORE capacity, MORE CAPACITY.

Until we run out of money!

And then we have to stop asking for more. And then we start rationing. And then we start cost-cutting. And then the chaos and queues get worse. 

And all the time we are not aware that our initial assumptions were wrong.

The chaos and queues are not inevitable. They are a sign of the time-design of our system. So we do have other options.  We can improve the time-design of our system. We do not need to change the safety-design; nor the quality-design; nor the money-design.  Just improving the time-design will be enough. For now.

So the $64,000,000 question is “How?”

Before we explore that we need to demonstrate what is possible. How big is the prize?

The class of system design problems that causes particular angst is called the mixed-priority mixed-complexity crossed-stream design.  We encounter dozens of them in our daily life and we are not aware of it.  One of particular interest to many is called a hospital. The mixed-priority dimension is the need to manage some patients as emergencies, some as urgent and some as routine. The mixed-complexity dimension is that some patients are easy and some are complex. The crossed-stream dimension is the aggregation of specialised resources into departments. Expensive equipment and specific expertise.  We then attempt to push patients with different priorities along different paths through these different departments. And it is a management nightmare!

Our usual and “obvious” response to this challenge is called a carve-out design. And that means we chop up our available resource capacity into chunks.  And we do that in two ways: chunks of time and chunks of space.  We try to simplify the problem by dissecting it into bits that we can understand. We separate the emergency departments from the planned-care facilities. We separate outpatients from inpatients. We separate medicine from surgery – and we then intellectually dissect our patients into organ systems: brains, lungs, hearts, guts, bones, skin, and so on – and we create separate departments for each one. Neurology, Respiratory, Cardiology, Gastroenterology, Orthopaedics, Dermatology to list just a few. And then we become locked into the carve-out design silos like prisoners in cages of our own making.

And so it is within the departments that are sub-systems of the bigger system. Simplification, dissection and separation. Ad absurdum.

The major drawback with our carve-up design strategy is that it actually makes the system more complicated.  The number of necessary links between the separate parts grows combinatorially – with n separate parts there can be up to n(n-1)/2 links.  And each link can hold a small queue of waiting tasks – just as each side road can hold a queue of waiting cars. The collective complexity is incomprehensible. The cumulative queue is enormous. The opportunity for confusion and error grows just as fast. Safety and quality fall and cost rises. Carve-out is an inferior time-design.

But our goal is correct: we do need to simplify the system so that means simplifying the time-design.

To illustrate the potential of this ‘simplify the time-design’ approach we need a real example.

One way to do this is to create a real system with lots of carve-out time-design built into it and then we can observe how it behaves – in reality. A carefully designed Table Top Game is one way to do this – one where the players have defined Roles and by following the Rules they collectively create a real system that we can map, measure and modify. With our Table Top Team trained and ready to go we then pump realistic tasks into our realistic system and measure how long they take in reality to appear out of the other side. And we then use the real data to plot some real time-series charts. Not theoretical general ones – real specific ones. And then we use the actual charts to diagnose the actual causes of the actual queues and actual chaos.

[TimeDesign_Before] This is the time-series chart of a real Time-Design Game that has been designed using an actual hospital department and real observation data.  Which department it was is not important because it could have been one of many. Carve-out is everywhere.

During one run of the Game the Team processed 186 tasks and the chart shows how long each task took from arriving to leaving (the game was designed to do the work in seconds when in the real department it took minutes – and this was done so that one working day could be condensed from 8 hours into 8 minutes!)

There was a mix of priority: some tasks were more urgent than others. There was a mix of complexity: some tasks required more steps than others. The paths crossed at separate steps where different people did defined work using different skills and special equipment.  There were handoffs between all of the steps on all of the streams. There were lots of links. There were many queues. There were ample opportunities for confusion and errors.

But the design of the real process was such that the work was delivered to a high quality – there were very few output errors. The yield was very high. The design was effective. The resources required to achieve this quality were represented by the hours of people-time availability – the capacity. The cost. And the work was stressful, chaotic, pressured, and important – so it got done. Everyone was busy. Everyone pulled together. They helped each other out. They were not idle. They were a good team. The design was efficient.

The thin blue line on the time-series chart is the “time target” set by the Organisation.  But the effective and efficient system design only achieved it 77% of the time.  So the “obvious” solution was to clamour for more people and for more space and for more equipment so that the work can be done more quickly to deliver more jobs on-time.  Unfortunately the Rules of the Time-Design Game do not allow this more-money option. There is no more money.

To succeed at the Time-Design Game the team must find a way to improve their delivery time performance with the capacity they have and also to deliver the same quality.  But this is impossible! If it were possible then the solution would be obvious and they would be doing it already. No one can succeed on the Time-Design Game. 

Wrong. It is possible.  And the assumption that the solution is obvious is incorrect. The solution is not obvious – at least to the untrained eye.

To the trained eye the time-series chart shows the characteristic signals of a carve-out time-design. The high task-to-task variation is highly suggestive, as is the pattern of some of the earlier arrivals having a longer lead time. An experienced system designer can diagnose a carve-out time-design from a set of time-series charts of a process just as a doctor can diagnose the disease from the vital signs chart for a patient.  And when the diagnosis is confirmed with a verification test then the Time-ReDesign phase can start.

[TimeDesign_AfterPhase1] This chart shows what happened after the time-design of the system was changed – after some of the carve-out design was modified. The Y-axis scale is the same as before – and the delivery time improvement is dramatic. The Time-ReDesigned system is now delivering 98% achievement of the “on time target”.

The important thing to be aware of is that exactly the same work was done, using exactly the same steps, and exactly the same resources. No one had to be retrained, released or recruited.  The quality was not impaired. And the cost was actually less because less overtime was needed to mop up the spillover of work at the end of the day.

And the Time-ReDesigned system feels better to work in. It is not chaotic; flow is much smoother; and it is busy yet relaxed and even fun.  The same activity is achieved by the same people doing the same work in the same sequence. Only the Time-Design has changed. A change that delivered a win for the workers!

What was the impact of this cost-saving improvement on the customers of this service? They can now be 98% confident that they will get their task completed correctly in less than 120 minutes.  Before the Time-Redesign the 98% confidence limit was 470 minutes! So this is a win for the customers too!

And the Time-ReDesigned system is less expensive so it is a win for whoever is paying.

Same safety and quality, quicker with less variation, and at lower cost. Win-Win-Win.

And the usual reaction to playing the Time-ReDesign Game is incredulous disbelief.  Some describe it as a “light bulb” moment when they see how the diagnosis of the carve-out time-design is made and how the Time-ReDesign is done. They say “If I had not seen it with my own eyes I would not have believed it.” And they say “The solutions are simple but not obvious!” And they say “I wish I had learned this years ago!”  And they apologise for being so skeptical before.

And there are those who are too complacent, too careful or too cynical to play the Time-ReDesign Game (which is about 80% of people actually) – and who deny themselves the opportunity of a win-win-win outcome. And that is their choice. They can continue to grin and bear it – for a while longer.     

And for the 20% who want to learn how to do Time ReDesign for real in their actual systems there is now a Ray Of Hope.

And the Ray of Hope is illuminating a signpost on which is written “This Way to Improvementology“. 

Quality First or Time First?

Before we explore this question we need to establish something. If the issue is Safety then that always goes First – and by safety we mean “a risk of harm that everyone agrees is unacceptable”.


Many Improvement Zealots state dogmatically that the only way to reach the Nirvanah of “Right Thing – On Time – On Budget” is to focus on Quality First.

This is incorrect.  And what makes it incorrect is the word only.

Experience teaches us that it is impossible to divert people to focus on quality when everyone is too busy just keeping afloat. If they stop to do something else then they will drown. And they know it.

The critical word here is busy.

‘Busy’ means that everyone is spending all their time doing stuff – important stuff – the work, the checking, the correcting, the expediting, the problem solving, and the fire-fighting. They are all busy all of the time.

So when a Quality Zealot breezes in and proclaims ‘You should always focus on quality first … that will solve all the problems’ then the reaction they get is predictable. The weary workers listen with their arms crossed, roll their eyes, exchange knowing glances, sigh, shrug, shake their heads, grit their teeth, and trudge back to fire-fighting. Their scepticism and cynicism has been cut a notch deeper. And the weary workers get labelled as ‘Not Interested In Quality’ and ‘Resisting Change’ and ‘Laggards’ by the Quality Zealot who has spent more time studying and regurgitating rhetoric than investing time in observing and understanding reality.

The problem here is the seemingly innocuous word ‘always’. It is too absolute. Too black-and-white. Too dogmatic. Too simple.

Sometimes focussing on Quality First is a wise decision. And that situation is when there is low-quality and idle-time. There is some spare capacity to re-invest in understanding the root causes of the quality issues,  in designing them out of the process, and in implementing the design changes.

But when everyone is busy – when there is no idle-time – then focussing on quality first is not a wise decision because it can actually make the problem worse!

[The Quality Zealots will now be turning a strange red colour, steam will be erupting from their ears and sparks will be coming from their finger-tips as they reach for their keyboards to silence the heretical anti-quality lunatic. “Burn, burn, burn” they rant]. 

When everyone is busy then the first thing to focus on is Time.

And because everyone is busy then the person doing the Focus-on-Time stuff must be someone else. Someone like an Improvementologist.  The Quality Zealot is a liability at this stage – but they become an asset later when the chaos has calmed.

And what our Improvementologist is looking for are queues – also known as Work-in-Progress or WIP.

Why WIP?  Why not where the work is happening? Why not focus on resource utilisation? Isn’t that a time metric?

Yes, resource utilisation is a time-related metric but because everyone is busy then resource utilisation will be high. So looking at utilisation will only confirm what we already know.  And everyone is busy doing important stuff – they are not stupid – they are busy and they are doing their best given the constraints of their process design.        

The queue is where an Improvementologist will direct attention first.  And the specific focus of their attention is the cause of the queue.

This is because there is only one cause of a queue: a mismatch-over-time between demand and activity.

So, the critical first step to diagnosing the cause of a queue is to make the flow visible – to plot the time-series charts of demand, activity and WIP.  Until that is done no progress will be made with understanding what is happening and it will be impossible to decide what to do. We need a diagnosis before we can treat. And to get a diagnosis we need data from an examination of our process; and we need data on the history of how it has developed. And we need to know how to convert that data into information, and then into understanding, and then into design options, and then into a wise decision, and then into action, and then into improvement.

And we now know how to spot an experienced Improvementologist because the first thing they will look for are the Queues not the Quality.

But why bother with the flow and the queues at all? Customers are not interested in them! If time is the focus then surely it is turnaround times and waiting times that we need to measure! Then we can compare our performance with our ‘target’ and if it is out of range we can then apply the necessary ‘pressure’!

This is indeed what we observe. So let us explore the pros and cons of this approach with an example.

We are the manager of a support department that receives requests, processes them and delivers the output back to the sender. We could be one of many support departments in an organisation:  human resources, procurement, supplies, finance, IT, estates and so on. We are the Backroom Brigade. We are the unsung heroes and heroines.

The requests for our service come in different flavours – some are easy to deal with, others are more complex.  They also come with different priorities – urgent, soon and routine. And they arrive as a mixture of dribbles and deluges.  Our job is to deliver high quality work (i.e. no errors) within the delivery time expected by the originator of the request (i.e. on time). If  we do that then we do not get complaints (but we do not get compliments either).

From the outside things look mostly OK.  We deliver mostly on quality and mostly on time. But on the inside our department is in chaos! Every day brings a new fire to fight. Everyone is busy and the pressure and chaos are relentless. We are keeping our head above water – but only just.  We do not enjoy our work-life. It is not fun. Our people are miserable too. Some leave – others complain – others just come to work, do stuff, take the money and go home – like Zombies. They comply.

Once in the past we were seduced by the sweet talk of a Quality Zealot. We were promised Nirvanah. We were advised to look at the quality of the requests that we get. And this suggestion resonated with us because we were very aware that the requests were of variable quality. Our people had to spend time checking-and-correcting them before we could process them.  The extra checking had improved the quality of what we deliver – but it had increased our costs too.

So the Quality Zealot told us we should work more closely with our customers and ‘swim upstream’ to prevent the quality problems getting to us in the first place. So we sent some of our most experienced and most expensive Inspectors to paddle upstream. But our customers were also very busy and, much as they would have liked, they did not have time to focus on quality either. So our Inspectors started doing the checking-and-correcting for our customers. Our people are now working for our customers but we still pay their wages. And we do not have enough Inspectors to check-and-correct all the requests at source so we still need to keep a skeleton crew of Inspectors in the department. And these stay-at-home Inspectors are stretched too thin and their job is too pressured and too stressful. So no one wants to do it. And given the choice they would all rather paddle out to the customers first thing in the morning to give them as much time as possible to check-and-correct the requests so the day’s work can be completed on time.

It all sounds perfectly logical and rational – but it does not seem to have worked as promised. The stay-at-home Inspectors can only keep up with the more urgent work, delivery of the less urgent work suffers, and the chronic chaos and fire-fighting are now aggravated by a stream of interruptions from customers asking when their ‘non-urgent’ requests will be completed.

The Quality Zealot insisted we should always answer the phone to our customers – so we take the calls – we expedite the requests – we solve the problems – and we fight-the-fire.  Day, after day, after day.

We now know what Purgatory means. Retirement with a pension or voluntary redundancy with a package are looking more attractive – if only we can keep going long enough.

And the last thing we need is more external inspection, more targets, and more expensive Quality Zealots telling us what to do! 

And when we go and look we see a workplace that appears just as chaotic and stressful and angry as we feel. There are heaps of work in progress everywhere – the phone is always ringing – and our people are running around like headless chickens, expediting, fire-fighting and getting burned-out: physically and emotionally. And we feel powerless to stop it. So we hide.

Does this fictional fiasco feel familiar? It is called the Miserable Job Purgatory Vortex.

Now we know the characteristic pattern of symptoms and signs:  constant pressure of work, ever present threat of quality failure, everyone busy, just managing to cope, target-stick-and-carrot management, a miserable job, and demotivated people.

The issue here is that the queues are causing some of the low quality. It is not always low quality that causes all of the queues.

Queues create delays, which generate interruptions, which force investigation, which generates expediting, which takes time from doing the work, which consumes required capacity, which reduces activity, which increases the demand-activity mismatch, which increases the queue, which increases the delay – and so on. It is a vicious circle. And interruptions are a fertile source of internally generated errors which generate even more checking and correcting which uses up even more required capacity which makes the queues grow even faster and longer. Round and round.  The cries for ‘we need more capacity’ get louder. It is all hands to the pump – but even then eventually there is a crisis. A big mistake happens. Then Senior Management get named-blamed-and-shamed, money magically appears and is thrown at the problem, capacity increases, the symptoms settle, the cries for more capacity go quiet – but productivity has dropped another notch. Eventually the financial crunch arrives.

One symptom of this ‘reactive fire-fight design’ is that people get used to working late to catch up at the end of the day so that the next day they can start the whole rollercoaster ride again. And again. And again. At least that is a form of stability. We can expect tomorrow to be just as miserable as today and yesterday and the day before that. But TOIL (Time Off In Lieu) costs money.

The way out of the Miserable Job Purgatory Vortex is to diagnose what is causing the queue – and to treat that first.

And that means focussing on Time first – and that means Focussing on Flow first.  And by doing that we will improve delivery, improve quality and improve cost because chaotic systems generate errors which need checking and correcting which costs more. Time first is a win-win-win strategy too.

And we already have everything we need to start. We can easily count what comes in and when and what goes out and when.

The first step is to plot the inflow over time (the demand), the outflow over time (the activity), and from that we work out and plot the Work-in-Progress over time. With these three charts we can start the diagnostic process and by that path we can calm the chaos.
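Here is a minimal sketch of that first step, assuming all we have are simple daily counts of requests received and requests completed (the numbers are invented): the WIP series is just the running total of the demand-activity mismatch, added to whatever was already in progress.

```python
# Illustrative demand / activity / WIP calculation from daily counts.
# The figures below are invented for the example, not real department data.
demand   = [12, 15, 11, 18, 14, 16, 13]   # requests arriving each day
activity = [12, 13, 12, 14, 14, 14, 13]   # requests completed each day
wip_start = 5                              # work already in progress on day 0

wip = []
current = wip_start
for d, a in zip(demand, activity):
    current += d - a                       # the mismatch between demand and activity
    wip.append(current)

for day, (d, a, w) in enumerate(zip(demand, activity, wip), start=1):
    print(f"day {day}: demand={d:2d}  activity={a:2d}  WIP={w:2d}")
```

Plot those three series as time-series charts and the cause of a growing queue – a sustained mismatch between demand and activity – becomes visible.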

And then we can set to work on the Quality Improvement.  


13/01/2013 – Newspapers report that 17 hospitals are “dangerously understaffed”. Sound familiar?

Next week we will explore how to diagnose the root cause of a queue using Time charts.

For an example to explore please play the SystemFlow Game by clicking here

 

The Heart of Change

In 1628 a courageous and paradigm shifting act happened. A small 72-page book was published in Frankfurt that openly challenged 1500 years of medical dogma. The book challenged the authority of Galen (129-200) the most revered medical researcher of antiquity and Hippocrates (460 BC – 370 BC) the Father of Medicine.

The writer of the book was a respected and influential English doctor called William Harvey (1578-1657) who was physician to King James I and who became personal physician to King Charles I.

[William_Harvey] William Harvey was from yeoman stock. The salt-of-the-earth. Loyal, honest and hard-working free men who often owned their land – but who were way down the social pecking order. They were the servant class.

William was the eldest son of Thomas Harvey from Folkstone who had a burning ambition to raise the station of his family from yeoman to gentry. This implied that the family was allowed to have their own coat of arms. To the modern mind this is almost meaningless – in the 17th Century it was not!

And Thomas was wealthy enough to have William formally educated and the dutiful William worked hard at his studies and was rewarded by gaining a place at Caius College in Cambridge University.  John Caius (1510-1573) was a physician who had studied in Padua, Italy – the birthplace of modern medicine. William did well and after graduating from Cambridge in 1597 he too travelled through Europe to study in Padua. There he saw Galenic dogma challenged and defused using empirical evidence. This was at the same time that Galileo Galilei (1564-1642) was challenging the geocentric dogma of the Catholic Church using empirical evidence gained by simple celestial observation with his new telescope. This was the Renaissance. The Rebirth of Learning. This was the end of the Dark Ages of Dogma.

Harvey brought this “new thinking” back to Elizabethan England and decided to focus his attention on the heart. And what Harvey discovered was that the accepted truth from the ancients about how the heart worked was wrong. Galen was wrong. Hippocrates was wrong.

But this was not the most interesting part of the story.  It was how he proved it that was radically different. He used evidence from reality to disprove the rhetoric. He used the empirical method espoused by Francis Bacon (1561-1626): what we now call the Scientific Method. In effect what Harvey said was “If you do not believe or agree with me then all you need to do is repeat the observation yourself.  Do an autopsy“.  [aut=self and opsy=see]. William Harvey saw and conducted human dissection in Padua, and practiced both it and animal vivisection back in England – and by that means he discovered how the heart actually worked.

Harvey opened a crack in the cultural ice that had frozen medical innovation for 1500 years. The crack in the paradigm was a seed of doubt planted by a combination of curiosity and empirical experimentation:

Q1: If Galen was wrong about the heart then what else was he wrong about? The Four Humours too?
Q2: If the heart is just a simple pump then where does the Spirit reside?

Looking back with our 21st century perspective these are meaningless questions.  To a person in the 17th Century these were fundamental paradigm-challenging questions.  They rocked the whole foundation of their belief system.  They believed that illness was a natural phenomenon and was not caused by magic, curses and evil spirits; but they believed that celestial objects, the stars and planets, were influential. In 1628 astronomy and astrology were the same thing.

And Harvey was savvy. He was both religious and a devout Royalist, and he knew that he would need the support of the most powerful person in England – the monarch. And he knew that he needed to be a respectable member of a powerful institution – the Royal College of Physicians (RCP) – which he joined in 1604. A remarkable achievement in itself for someone of yeoman stock. With this ticket he was able to secure a position at St Bartholomew’s Hospital in Smithfield, London and in 1615 he became the RCP Lumleian Lecturer, which involved lecturing on anatomy – which he did from 1616. By virtue of his position Harvey was able to develop a lucrative private practice in London and by that route was introduced to the Court. In 1618 he was appointed as Physician Extraordinary to King James I. [The Physician Ordinary was the top job].

And even with this level of influence, credibility and royal support his paradigm-challenging message met massive cultural and political resistance because he was challenging a 1500 year old belief.

Over the 12 years between 1616 and 1628 Harvey invested a lot of time sharing his ideas and the evidence with influential friends and he used their feedback to deepen his understanding, to guide his experiments, and to sharpen his arguments. He had learned how to debate at school and had developed his skill at Cambridge, so he knew how to turn arguments-against into arguments-for.

Harvey was intensely curious, he knew how to challenge himself, to learn, to influence others, and to change their worldview. He knew that easily observable phenomena could help spread the message – such as the demonstration of venous valves in the arm illustrated in his book.

After the publication of De Motu Cordis in 1628 his personal credibility and private practice suffered massively because as a self-declared challenger of the current paradigm he was treated with skepticism and distrust by his peers. Gossip is effective.

And even with all his passion, education, evidence, influence and effort it still took 20 years for his message to become widely enough accepted to survive him. And it did so because others resonated with the message; others like René Descartes (1596-1650).

William Harvey is now remembered as one of the founders of modern medical science. When he published De Motu Cordis he triggered a paradigm shift – one that we take for granted today. Harvey showed that the path to improvement is through respectfully challenging accepted dogma with a combination of curiosity, humility, hard work, and empirical evidence. Reality reinforced rhetoric.

Today we are used to having the freedom of speech and we are familiar with using experimental data to test our hypotheses.  In 1628 this was new thinking and was very risky. People were burned at the stake for challenging the authority of the Catholic Church and the Holy Roman Inquisition was still active well into the 18th Century!

Harvey was also innovative in the use of arithmetic. He showed that the volume of blood pumped by the heart in a day was far more than the liver could reasonably generate.  But at that time arithmetic was the domain of merchants, accountants and money-lenders and was not seen as a tool that a self-respecting natural philosopher would use!  The use of mathematics as a scientific tool did not really take off until after Sir Isaac Newton (1642-1727) published the Principia in 1687 – 30 years after Harvey’s death. [To read more about William Harvey click here].
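Harvey’s own estimates were deliberately conservative, but even a rough sketch with modern round figures shows why the arithmetic was so persuasive. The numbers below are illustrative assumptions for this sketch, not values from Harvey’s text:

```python
# A rough modern restatement of Harvey's arithmetic.
# These round figures are illustrative assumptions, not Harvey's own estimates.
stroke_volume_ml   = 70    # blood ejected per heartbeat (approx.)
beats_per_minute   = 70    # resting heart rate (approx.)
total_blood_litres = 5     # total blood volume of an adult (approx.)

litres_per_day = stroke_volume_ml * beats_per_minute * 60 * 24 / 1000
print(f"Blood pumped per day: about {litres_per_day:,.0f} litres")
print(f"That is roughly {litres_per_day / total_blood_litres:,.0f} times the "
      "total blood volume - far too much for the liver to create afresh each day.")
```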

William Harvey was an Improvementologist.

 So what lessons can modern Improvement Scientists draw from his story?

  • The first is that all significant challenges to current thinking will meet emotional and political resistance. They will be discounted and ridiculed because they challenge the authority of experts.
  • The second is that challenges must be made respectfully. The current thinking has both purpose and value. Improvements build on the foundation of knowledge and only challenge what is not fit for purpose.
  • The third is that the challenge must be more than rhetorical – it must be backed with replicable evidence. A difference of opinion is just that. Reality is the ultimate arbiter.
  • The fourth is that having an idea is not enough – testing, proving, explaining and demonstrating are needed too. It is hard work to change a mental paradigm and it requires an emotionally secure context to do it. People who are under pressure will find it more difficult and more traumatic.
  • The fifth is that patience and persistence are needed. Worldview change takes time and happens in small steps. The new paradigm needs to find its place.

And Harvey did not say that Galen and Hippocrates were completely wrong – just partly wrong. And he explained that the reason that Hippocrates and Galen could not test their ideas about human anatomy was because dissection of human bodies was illegal in Greek and Roman societies. Padua in Renaissance Italy was one of the first places where dissection was permitted by Law.   

So which part of the Galenic dogma did Harvey challenge?

He challenged the dogma that blood was created continuously by the liver. He challenged the dogma that there were invisible pores between the right and left sides of the heart. He challenged the dogma that the arteries ‘sucked’ the blood from the heart. He challenged the dogma that the ‘vitalised’ arterial blood was absorbed by the tissues. And he challenged these beliefs with empirical evidence. He showed evidence that the blood circulated from the right heart to the lungs to the left heart to the body and back to the right heart. He showed evidence that the heart was a muscular pump. And he showed evidence that it worked the same way in man and in animals.

In so doing he undermined the foundation of the whole paradigm of ancient belief that illness was the result of an imbalance between the Four Humours: Yellow Bile (associated with the liver), Black Bile (associated with the spleen), Blood (associated with the heart) and Phlegm (associated with the lungs).

We still have the remnants of this ancient belief in our language.  The Four Humours were also associated with Four Temperaments – four observable personality types. The phlegmatic type (excess phlegm), the sanguine type (excess blood), the choleric type (excess yellow bile), and the melancholic type (excess black bile).

We still talk about “the heart of the matter” and being “heartless”, “heartfelt” and having a “change of heart” because the heart was believed to be where emotion and passion resided. Sanguine is the term given to people who show warmth, passion, a live-now-pay-later, optimistic and energetic disposition. And this is not an unreasonable hypothesis given that we are all very aware of changes in how our heart beats when we are emotionally aroused; and how the colour of our skin changes.

So when Harvey suggested that blood flowed in a circle from the heart to the arteries and back to the heart via the veins, and that the heart was just a pump, the idea shook the current paradigm on many levels – right down to its roots.

And the ancient justification for a whole raft of medical diagnoses, prognoses and treatments was challenged. The House of Cards was challenged. And many people owed their livelihoods to these ancient beliefs – so it is no surprise that his peers were not jumping  for joy to hear what Harvey said.

But Harvey had reality on his side – and reality trumps rhetoric.

And the same is true today, almost 400 years later.

The current paradigm is being shaken. The belief that we can all live today and pay tomorrow. The belief that our individual actions have no global impact and no long lasting consequences. The belief that competition is the best route to contentment.

The evidence is accumulating that these beliefs are wrong.

The difference is that today the paradigm is being challenged by a collective voice – not by a lone voice.


Shifting, Shaking and Shaping

Stop Press: For those who prefer cartoons to books please skip to the end to watch the Who Moved My Cheese video first.


In 1962 – that is half a century ago – a controversial book was published. The title was “The Structure of Scientific Revolutions” and the author was Thomas S Kuhn (1922-1996), a physicist and historian at Harvard University. The book ushered in the concept of a ‘paradigm shift’ and it upset a lot of people.

In particular it upset a lot of scientists because it suggested that the growth of knowledge and understanding is not smooth – it is jerky. And Kuhn showed that the scientists were causing the jerking.

Kuhn described the process of scientific progress as having three phases: pre-science, normal science and revolutionary science. Most of the work scientists do is normal science, which means exploring, consolidating, and applying the current paradigm – the current conceptual model of how things work. Anyone who argues against the paradigm is regarded as ‘mistaken’ because the paradigm represents the ‘truth’. Kuhn drew on the history of science for his evidence, and that history offers examples of how innovators such as Galileo, Copernicus, Newton, Einstein and Hawking radically changed the way that we now view the Universe. But their different models were not accepted immediately and enthusiastically because they challenged the status quo. Galileo spent the final years of his life under house arrest because his ‘heretical’ writings challenged the Church.

Each revolution in thinking was both disruptive and at the same time constructive because it opened a door to allow rapid expansion of knowledge and understanding. And that foundation of knowledge that has been built over the centuries is one that we all take for granted.  It is a fragile foundation though. It could be all lost and forgotten in one generation because none of us are born with this knowledge and understanding. It is not obvious. We all have to learn it.  Even scientists.

Kuhn’s book was controversial because it suggested that scientists spend most of their time blocking change. This is not necessarily a bad thing. Stability for a while is very useful and the output of normal science is mostly positive. For example the revolution in thinking introduced by Isaac Newton (1643-1727) led directly to the Industrial Revolution and to far-reaching advances in every sphere of human knowledge. Most of modern engineering is built on Newtonian mechanics and it is only at the scales of the very large, the very small and the very quick that it falls over. Relativistic and quantum physics are more recent and very profound shifts in thinking and they have given us the digital computer and the information revolution. This blog is a manifestation of the quantum paradigm.

Kuhn concluded that the progress of change is jerky because scientists resist change in order to create the stability needed for doing normal science experiments. But these same experiments produce evidence that suggests that the current paradigm is flawed. Over time the pressure of conflicting evidence accumulates, disharmony builds, conflict is inevitable and intellectual battle lines are drawn. The deeper and more fundamental the flaw the more bitter the battle.

In contrast, newcomers seek harmony in the cacophony and propose new theories that explain both the old and the new. New paradigms. The stage is now set for a drama and the public watch bemused as the academic heavyweights slug it out. Eventually a tipping point is reached and one of the new paradigms becomes dominant. Often the transition is triggered by one crucial experiment.

There is a sudden release of the tension and a painful and disruptive conceptual  lurch – a paradigm shift. Then the whole process starts over again. The creators of the new paradigm become the consolidators and in time the defenders and eventually the dogmatics!  And it can take decades and even generations for the transition to be completed.

It is said that Albert Einstein (1879-1955) never fully accepted quantum physics even though his work planted the seeds for it and experience showed that it explained the experimental observations better. [For more about Einstein click here].              

The message that some take from Kuhn’s book is that paradigm shifts are the only way that knowledge can advance. With this assumption, getting change to happen requires creating a crisis – a burning platform. Unfortunately this is an error of logic – it is an unverified generalisation from an observed specific. The evidence is growing that this we-always-need-a-burning-platform assumption is incorrect. It appears that the growth of knowledge and understanding can be smoother, less damaging and more effective without creating a crisis.

So what is the evidence that this is possible?

Well, what pattern would you look for to illustrate that it is possible to improve smoothly and continually? A smooth growth curve of some sort? Yes – but it is more than that.  It is a smooth curve that is steeper than anyone else’s and one that is growing steeper over time.  Evidence that someone is learning to improve faster than their peers – and learning painlessly and continuously without crises; not painfully and intermittently using crises.

Two examples are Toyota and Apple.

Toyota is a Japanese car manufacturer that has out-performed other car manufacturers consistently for 40 years – despite the global economic boom-bust cycles. What is the secret formula for their success?

We need a bit of history. In the 1980’s a crisis-of-confidence hit the US economy. It was suddenly threatened by higher-quality and lower-cost imported Japanese products – for example cars.

The switch to buying Japanese cars had been triggered by the Oil Crisis of 1973 when the cost of crude oil quadrupled almost overnight – triggering a rush for smaller, less fuel hungry vehicles.  This is exactly what Toyota was offering.

This crisis was also a rude awakening for the US to the existence of a significant economic threat from their former adversary.  It was even more shocking to learn that W Edwards Deming, an American statistician, had sown the seed of Japan’s success thirty years earlier and that Toyota had taken much of its inspiration from Henry Ford.  The knee-jerk reaction of the automotive industry academics was to copy how Toyota was doing it, the Toyota Production System (TPS) and from that the school of Lean Tinkering was born.

This knowledge transplant has been both slow and painful and although learning to use the Lean Toolbox has improved Western manufacturing productivity and given us all more reliable, cheaper-to-run cars – no other company has been able to match the continued success of Toyota. And the reason is that the automotive industry academics did not copy the paradigm – the intangible, subjective, unspoken mental model that created the context for success. They just copied the tangible manifestation of that paradigm. The tools. That is just cynically copying information and knowledge to gain a competitive advantage – it is not respectfully growing understanding and wisdom to reach a collaborative vision.

Apple is now one of the largest companies in the world and it has become so because Steve Jobs (1955-2011), its Californian, technophilic, Zen Buddhist, entrepreneurial co-founder, had a very clear vision: to design products for people. And to do that they continually challenged their own and their customers’ paradigms. Design is a logical-rational exercise. It is the deliberate use of explicit knowledge to create something that delivers what is needed but in a different way. Higher quality and lower cost. It is normal science.

Continually challenging our current paradigm is not normal science. It is revolutionary science. It is deliberately disruptive innovation. But continually challenging the current paradigm is uncomfortable for many and, by all accounts, Steve Jobs was not an easy person to work for because he was future-looking and demanded perfection in the present. But the success of this paradigm is a matter of fact: 

“In its fiscal year ending in September 2011, Apple Inc. hit new heights financially with $108 billion in revenues (increased significantly from $65 billion in 2010) and nearly $82 billion in cash reserves. Apple achieved these results while losing market share in certain product categories. On August 20, 2012 Apple closed at a record share price of $665.15 with 936,596,000 outstanding shares it had a market capitalization of $622.98 billion. This is the highest nominal market capitalization ever reached by a publicly traded company and surpasses a record set by Microsoft in 1999.”

And remember – Apple almost went bust. Steve Jobs had been ousted from the company he co-founded in a boardroom coup in 1985. After he left, Apple floundered, and Steve Jobs proved it was his paradigm that was the essential ingredient by setting up NeXT computers and then Pixar. Apple’s fortunes only recovered after Steve Jobs was invited back in 1997. The rest is history so click to see and hear Steve Jobs describing the Apple paradigm.

So the evidence suggests that Toyota and Apple are doing something very different from the rest of the pack and it is not just very good product design. They are continually updating their knowledge and understanding – and they are doing this using a very different paradigm. They are continually challenging themselves to learn. To illustrate how they do it – here is a list of the five principles that underpin Toyota’s approach:

  • Challenge
  • Improvement
  • Go and see
  • Teamwork
  • Respect

This is Win-Win-Win thinking. This is the Science of Improvement. This is Improvementology®.


So what is the reason that this proven paradigm seems so difficult to replicate? It sounds easy enough in theory! Why is it not so simple to put into practice?

The requirements are clearly listed: Respect for people (challenge). Respect for learning (improvement). Respect for reality (go and see). Respect for systems (teamwork).

In a word – Respect.

Respect is a big challenge for the individualist mindset which is fundamentally disrespectful of others. The individualist mindset underpins the I-Win-You-Lose Paradigm; the Zero-Sum-Game Paradigm; the Either-Or Paradigm; the Linear-Thinking Paradigm; the Whole-Is-The-Sum-Of-The-Parts Paradigm; the Optimise-The-Parts-To-Optimise-The-Whole Paradigm.

Unfortunately these are the current management paradigms in much of the private and public worlds and the evidence is accumulating that this paradigm is failing. It may have been adequate when times were better, but it is inadequate for our current needs and inappropriate for our future needs. 


So how can we avoid having to set fire to the current failing management paradigm to force a leap into the cold and uninviting reality of impending global economic failure?  How can we harness our burning desire for survival, security and stability? How can we evolve our paradigm pro-actively and safely rather than re-actively and dangerously?

We need something tangible to hold on to that will keep us from drowning while the old I-am-OK-You-are-Not-OK Paradigm is dissolved and re-designed. Like the body of the caterpillar that is dissolved and re-assembled inside the pupa as the body of a completely different thing – a butterfly.

We need a robust  and resilient structure that will keep us safe in the transition from old to new and we also need something stable that we can steer to a secure haven on a distant shore.

We need a conceptual lifeboat. Not just some driftwood,  a bag of second-hand tools and no instructions! And we need that lifeboat now.

But why the urgency?

The answer is basic economics.

The UK population is growing and the proportion of people over 65 years old is growing faster. Advances in healthcare mean that more of us survive age-related illnesses such as cancer and heart disease. We live longer and with better quality of life – which is great.

But this silver-lining hides a darker cloud.

The proportion of elderly and very elderly will increase over the next 20 years as the post-WWII baby-boom reaches retirement age. The number of people who are living on pensions is increasing and the demand on health and social services is increasing. Pensions and public services are not paid out of past savings; they are paid out of current earnings. So the country will need to earn more to pay the bills. The UK economy will need to grow.

But the UK economy is not growing. Our Gross Domestic Product (GDP) is currently about £380 billion and flat as a pancake. This sounds like a lot of dosh – but when shared out across the population of 56 million it gives a more modest figure of just over £100 per person per week. And the time-series chart for the last 20 years shows that the past growth of about 1% per quarter took a big dive in 2008 and went negative! That means serious recession. It recovered briefly but is now sagging towards zero.
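A quick back-of-the-envelope check of that arithmetic, taking the quoted £380 billion and 56 million at face value (treating the GDP figure as an annual total is an assumption of this sketch):

```python
# Back-of-the-envelope check of the figures quoted above (all assumptions).
gdp_pounds     = 380e9   # quoted GDP figure, treated here as an annual total
population     = 56e6    # quoted population figure
weeks_per_year = 52

per_person_per_week = gdp_pounds / population / weeks_per_year
print(f"About £{per_person_per_week:.0f} per person per week")  # roughly £130
```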

So we are heading for a big economic crunch and hiding our heads in the sand and hoping for the best is not a rational strategy. The only way to survive is to cut public services or for tax-funded services to become more productive. And more productive means increasing the volume of goods and services for the same cost. These are the services that we will need to support the growing population of  dependents but without increasing the cost to the country – which means the taxpayer.

The success of Toyota and Apple stemmed from learning how to do just that: how to design and deliver what is needed; and how to eliminate what is not; and how to wisely re-invest the released cash. The difference can translate into higher profit, or into growth, or into more productivity. It just depends on the context.  Toyota and Apple went for profit and growth. Tax-funded public services will need to opt for productivity. 

And the learning-productivity-improvement-by-design paradigm will be a critical-to-survival factor in tax-payer funded public services such as the NHS and Social Care.  We do not have a choice if we want to maintain what we take for granted now.  We have to proactively evolve our out-of-date public sector management paradigm. We have to evolve it into one that can support dramatic growth in productivity without sacrificing quality and safety.

We cannot use the burning platform approach. And we have to act with urgency.

We need a lifeboat!

Our current public sector management paradigm is sinking fast and is being defended and propped up by the old school managers who were brought up in it. Unfortunately the evidence of 500 years of change says that the old school cannot unlearn. Their mental models go too deep. The captains and their crews will go down with their ships. [Remember the Titanic – the ‘unsinkable’ ship that sank in 1912 on her maiden voyage. That was a victory of reality over rhetoric.]

Those of us who want to survive are the ‘rats’. We know when it is time to leave the sinking ship.  We know we need lifeboats because it could be a long swim! We do not want to freeze and drown during the transition to the new paradigm.

So where are the lifeboats?

One possibility is an unfamiliar looking boat called “6M Design”. This boat looks odd when viewed through the lens of the conventional management paradigm because it combines three apparently contradictory things: the rational-logical elements of system design; the respect-for-people and learning-through-challenge principles embodied by Toyota and Apple; and the counter-intuitive technique of systems thinking.

Another reason it feels odd is because “6M Design” is not a solution; it is a meta-solution. 6M Design is a way of creating a good-enough-for-now solution by changing the current paradigm a bit at a time. It is a how-to-design framework; it is not a what-to-do solution. 6M Design is a paradigm shaper – not a paradigm shaker or a paradigm shifter.

And there is yet another reason why 6M Design does not float the current management boat.  It does not need to be controlled by self-appointed experts.  Business schools and management consultants, who have a vested interest in defending the current management paradigm, cannot make a quick buck from it because they are irrelevant. 6M Design is intended to be used by anyone and everyone as a common language for collectively engaging in respectful challenge and lifelong learning. Anyone can learn to use it. Anyone.

We do not need a crisis to change. But without changing we will get the crisis we do not want. If we choose to change then we can choose a safer and smoother path of change.

The choice seems clear.  Do you want to go down with the ship or stay afloat aboard an innovation boat?

And we will need something to help us navigate our boat.

If you are a reflective, conceptual learner then you might like to read a synopsis of Thomas Kuhn’s book. You can download a copy here. [There is also a 50 year anniversary edition of the original that was published this year].

And if you prefer learning from stories then there is an excellent one called “Who Moved My Cheese” that describes the same challenge of change. And with the power of the digital paradigm you can watch the video here.


Defusing Trust Eroders – Part II

<Ring Ring><Ring Ring>

B: Hello Leslie. How are you today?

L: Hi Bob – I am OK.  Thank you for your time today.  Is 15 minutes going to be enough?

B: Yes. There is evidence that the ideal chunk of time for effective learning is around 15 minutes.

L: OK.  I said I would read the material you sent me and reflect on it.

B: Yes.  Can you retell your Nerve Curve experience as a storyboard and highlight your ‘ah ha’ moments?

L: OK.  And that was the first ‘ah ha’.  I found the storyboard format a really effective way to capture my sequence of emotional states.

B: Yes. There are close links between stories, communication, learning and improvement. Before we learned to write we used campfire stories to pass collective knowledge from generation to generation. It is an ancient, in-built skill we all have and we all enjoy a good story.

L: Yes.  My first reaction was to the way you described the Victim role.  It really resonated with how I was feeling and how I was part of the dynamic.  You were spot on with the feelings that dominated my thinking – anxiety and fear. The big ‘ah ha’ for me was to understand the discount that I was making.  Not of others – of myself.

B: OK.  What was the image that you sketched on your storyboard?

L: I am embarrassed to say – you will think I am silly.

B: I will not think you are silly.

L: I know. And I knew that as soon as I said it. I think I was actually saying it to myself – or part of myself. Like I was trying to appease part of myself. Anyway, the picture I sketched was me as a small child at school standing with my head down, hands by my sides, and being told off in front of the whole class for getting a sum wrong. I was crying. I was not very good at maths and even now my mind sort of freezes and I get tears in my eyes and feel scared whenever someone tries to explain something using equations! I can feel the terror starting to well up just talking about it.

B: OK. No need to panic. Take a long breath and exhale slowly.  The story you have told is very common.  Many of our fears of failure originate from early memories of experiencing ‘education by humiliation’.  It is a blunt and ineffective motivational tool that causes untold and long lasting damage.  It is a symptom of a low quality education system design. Education is an exercise in improvement of knowledge, understanding, capability and confidence.  The unintended outcome of this clumsy teaching tactic is a belief that we cannot solve problems ourselves and it is that invalid belief that creates the self-fulfilling prophecy of repeated failure.

L: Yes! And I know I can solve maths problems – I do it all the time – and I help my children with their maths homework.  So, it is not the maths that is triggering my fear.  What is it?

B: The answer to your question will become clear.  What is the next picture on your storyboard?

L: The next picture was of the teacher who was telling me off. Or rather the face of the teacher. It was a face of frustration and anger. I drew a thought bubble and wrote in it “This small, irritating child cannot solve even a simple maths problem and is slowing down the whole lesson by bursting into tears every time they get stuck. I blame the parents who are clearly too soft. They all need to learn some discipline – the hard way.”

B: Does this shed any light on your question?

L: Wow! Yes! It is not the maths that I am reacting to – it is the behaviour of the teacher. I am scared of the behaviour. I feel powerless. They are the teacher, I am just a small, incompetent, stupid, blubbing child. They do not care that I do not understand the question, and that I am in distress, and that I am scared that I will be embarrassed in front of the whole class, and that I am scared that my parents will see a bad mark on my school report. And I feel trapped. I need to rationalise this. To make sense of it. Maybe I am stupid? That would explain why I cannot solve the maths problem. Maybe I should just give in and accept that I am a failure and too stupid to do maths?

There was a pause.  Then Leslie continued in a different tone.  A more determined tone.

L: But I am not a failure.  This is just my knee jerk habitual reaction to an authority figure displaying anger towards me.  I can decide how I react.  I have complete control over that.  I can disconnect the behaviour I experience and my reaction to it.  I can choose.  Wow!

B: OK. How are you feeling right now?  Can you describe it using a visual metaphor?

L: Um – weird. Mixed feelings. I am picturing myself sitting on a giant catapult. The ends of the huge elastic bands are anchored in the present and I am sitting in the loop but it is stretched way back into the past. There is something formless in the past that has been holding me back and the tension has been slowly building over time. And it feels that I have just cut that tie to the past, and I am free, and I am now being accelerated into the future. I did that. I am in control of my own destiny and it suddenly feels fun and exciting.

B: OK. How do you feel right now about the memory of the authority figure from the past?

L: OK actually.  That is really weird.  I thought that I would feel angry but I do not.  I just feel free.  It was not them that was the problem.  Their behaviour was not my fault – and it was my reaction to their behaviour that was the issue.  My habitual behaviour.  No, wait a second. Our habitual behaviour.  It is a dynamic.  It takes both people to play the game.

There was a pause.  Leslie sensed that Bob knew that some time was needed to let the emotions settle a bit.

B: Are you OK to continue with your storyboard?

L: Yes. The next picture is of the faces of my parents. They are looking at my school report. They look sad and are saying “We always dreamed that Leslie would be a doctor or something like that. I suppose we will have to settle for something less ambitious. Do not worry Leslie, it is not your fault, it will be OK, we will help you.” I felt like I had let them down and I had shattered their dream. I felt so ashamed. They had given me everything I had ever asked for. I also felt angry with myself and with them. And that is when I started beating myself up. I no longer needed anyone else to do that! I could persecute myself. I could play both parts of the game in my own head. That is what I did just now when it felt like I was talking to myself.

B: OK.  You have now outlined the three roles that together create the dynamic for a stable system of learned behaviour.  A system that is very resistant to change.  It is like a triangular role-playing-game.  We pass the role-hats as we swap places in the triangle and we do it in collusion with others and ourselves and we do it unconsciously.  The purpose of the game is to create opportunities for social interaction – which we need and crave – the process has a clear purpose.  The unintended outcome of this design is that it generates bad feelings, it erodes trust and it blocks personal and organisational development and improvement.  We get stuck in it – rather like a small boat in a whirlpool.  And we cannot see that we are stuck in it.  We just feel bad as we spin around in an emotional maelstrom.  And we feel cheated out of something better but we do not know what it is and how to get it.

There was a long pause.  Leslie’s mind was racing.  The world had just changed.  The pieces had been blown apart and were now re-assembling in a different configuration.  A simpler, clearer and more elegant design.

L: So, tell me if I have this right.  Each of the three roles involves a different discount?

B: Yes.

L: And each discount requires a different – um – tactic to defuse?

B: Yes.

L: So, the way to break out of this trust eroding behavioural hamster-wheel is to learn to recognise which role we are in and to consciously deploy the discount defusing tactic.

B: Yes.

L: And by doing that enough times we learn how to spot the traps that other people are creating and avoid getting sucked into them.

B: Yes. And we also avoid starting them ourselves.

L: Of course! And by doing that we develop growing respect for ourselves and for each other and a growing level of trust in ourselves and in others?  We have started to defuse the trust eroding behaviour and that lowers the barrier to personal and organisational development and improvement.

B: Yes.

L: So what are the three discount defusing tactics?

There was a pause.  Leslie knew what was coming next.  It would be a question.

B: What role are you in now?

L: Oh! Yes. I see. I am still feeling like that small child at school but now I am asking for the answer and I am discounting myself by assuming that I cannot solve this problem myself. I am assuming that I need you to rescue me by telling me the answer. I am still in the trust eroding game, I do not trust myself and I am inviting you to play too, and to reinforce my belief that I cannot solve the problem.

B: And do you need me to tell you the answer?

L: No.  I can probably work this out myself.  And if I do get stuck then I can ask for hints or nudges – not for the answer.  I need to do the learning work and I want to do it.

B: OK.  I will commit to hinting and nudging if asked, and if I do not know the answer I will say so.

L: Phew!  That was definitely an emotional rollercoaster ride on the Nerve Curve.  Looking back it all makes complete sense and I now know what to do – but at the start it felt like I was heading into the Dark Unknown.  You are right.  It is liberating and exhilarating!

B: That feeling of clarity-of-hindsight and exhilaration from learning is what we always strive for.  Both as teachers and students.

L: You mean it is the same for you?  You are still riding the Nerve Curve?  Still feeling surprised, confused, scared, resolved, enlightened then delighted?

B: Ha ha!  Yes.  Every day.  It is fun.  I believe that there is No Limit to Learning so there is an inexhaustible Font of Fun.

L: Wow! I am off to have more Fun from Learning. Thank you so much yet again.

B: Thank you Leslie.


Defusing Trust Eroders – Part I

<Beep><Beep>

Bob heard the beep and looked at his phone. There was a text message from Leslie, one of his Improvementology coachees.

It said:

“Hi Bob, Do you have time to help me with a behaviour barrier that I keep hitting and cannot see a way around?”

Bob thumbed his reply:

“Yes. I am free at the moment – please feel free to call.”

<Ring><Ring>

B: Hello Leslie. What’s on your mind?

L: Hi Bob.  I really hope  you can help me with this recurring Niggle.  I have looked through my Foundation notes and I cannot see where it is described and it does not seem to be a Nerve Curve problem.

B: I will do my best. Can you outline the context or give me an example?

L: It is easier to give you an example.  This week I was working with a team in my organisation who approached me to help them with recurring niggles in their process.  I went to see for myself and I mapped their process and identified where their Niggles were and what was driving them.  That was the easy bit.  But when I started to make suggestions of what they could do to resolve their problems they started to give me a hard time and kept saying ‘Yes, but …”.  It was as if they were asking for help but did not really want it.  They kept emphasising that all their problems were caused by other people outside their department and kept asking me what I could do about it.  I felt as if they were pushing the problem onto me and I was also feeling guilty for not being able to sort it out for them.

There was a pause. Then Bob said.

B: You are correct Leslie.  This is not a Nerve Curve issue.   It is a different people-related system issue.  It is ubiquitous and it is a potentially deadly organisational disease.  We call it Trust Eroding Behaviour.

L: That sounds exactly how it felt for me.  I went to help in good faith and quickly started to feel distrustful of their motives.  It was not a good feeling and I do not know if I want to go back.  One part of me says “Keep going – you have made a commitment” and another part of me says “Stop – you are being suckered”.  What is happening?

B: Do you remember that the Improvement Science framework has three parts – Processes, People and Systems?

L: Yes.

B: OK.  This is part of the People component and it is similar to but different from the Nerve Curve.   The Nerve Curve is a hard-wired emotional response to any change.  The Fright, Freeze, Fight, Flight response.  It is just the way we are and it is not ‘correctable’.  This is different.  This is a learned behaviour.   Which means it can be unlearned.

L: Unlearned?  That is not a concept that I am familiar with.  Can you explain?  Is it the same as forgetting?

B: Forgetting means that you cannot bring something to conscious awareness.   Unlearning is different – it operates at a deeper psychological and emotional level.  Have you ever tried to change a bad habit?

L: Yes, I have!  I used to smoke which is definitely a bad habit and I managed to give up but it was really tough.

B: What you did was to unlearn the smoking habit and replaced it with a healthier one.  You did not forget about smoking.  You could not because you are repeatedly reminded by other people who still indulge in the habit.

L:  Ah ha! I see what you mean.  Yes – after I kicked the habit I became a bit of a Stop-Smoking evangelist.  It did not seem to make much impact on the still-smokers though.  If anything my behaviour seemed to make them more determined to keep doing it – just to spite me!

B: Yes. What you describe is what many people report. It is part of the same learned behaviour patterns. The habit that is causing the issue is rather like smoking because it causes short-term pleasure and long-term pain. It is both attractive and destructive. The reactive behaviour generates a positive feeling briefly but it is toxic to trust over the longer term, which is why we call it a Trust Eroding Behaviour.

L: What is the bad habit? I do not recognise the behaviour that you are referring to.

B: The habit is called discounting.  The reason we are not aware of it is because we do it unconsciously.

L: What is it that we do?

B: I will give you some examples. How do you feel when all the feedback you get is silence? How do you feel when someone complains that their mistake was not their fault? How do you feel when you try to help but you hit invisible barriers that block your progress?

L: Ouch! Those are uncomfortable questions. When I get no feedback I feel anxious and even fearful that I have made a mistake and no one is telling me. There is a conspiracy of silence and a nasty surprise is on its way. When someone keeps complaining that even though they made the mistake they are not to blame, I feel angry. When I try to help others and I fail, then I feel anxious and sad because my reputation, credibility and self-confidence are damaged.

B: OK. No need to panic. These negative emotional reactions are the normal reaction to discounting behaviour. Another word for discounting is disrespect. The three primary emotions we feel are sadness, anger and fear. Fear is the sense of impending loss; anger is the sense of present loss; and sadness is the sense of past loss. They are the same emotions that we feel on the Nerve Curve. What is different is the cause. Discounting is a disrespectful behaviour that is learned. So, it can be unlearned.

L: Oooo!  That really resonates with me.  Just reflecting on one day at work I can think of lots of examples of all of those negative feelings.  So, when and how do we learn this discounting habit?

B: It is believed that we learn this behaviour when we are very young – before the age of seven.  And because we learn it so young we internalise it and we become unaware of it.  It then becomes a habit that is reinforced with years of experience and practice.

L: Wow!  That rings true for me – and it may explain why I actively avoided some people at school – they were just toxic.  But they had friends, went to college, got jobs, married and started families – just like me.  Does that mean we grow out of it?

B: Most people unlearn some of these behavioural habits because life-experience teaches them that they are counter-productive.  We all carry some of them though, and they tend to emerge when we are tired and under pressure.  Some people get sort of stuck and carry these behaviours into their adult life.  Their behaviour can be toxic to their relationships and their organisations.

L: I definitely resonate with that statement!  Is there a way to unlearn this discounting habit?

B: Yes – just becoming aware of its existence is the first step.  There are some strategies that we can learn, practice and use to defuse the discounting behaviour and over time our bad habit can be “kicked”.

L: Wow! That sounds really useful.  And not just at work – I can see benefits in other areas of my life too.

B: Yes. Improvement science is powerful medicine.

L: So what do I need to do?

B: You have learned the 6M Design framework for resolving process niggles. There is an equivalent one for dissolving people niggles.  I will send you some links to material to read and then we can talk again.

L: Will it help me resolve the problem that I have with the department that asked for my help who are behaving like Victims?

B: Yes.

L: OK – please send me the material.  I promise to read it, reflect on it and I will arrange another conversation.  I cannot wait to learn how to nail this niggle!  I can see a huge win-win-win opportunity here.

B: OK.  The email is on its way.  I look forward to our next conversation.


The Six Dice Game

<Ring Ring><Ring Ring>

Hello, you are through to the Improvement Science Helpline. How can we help?

This is Leslie, one of your apprentices.  Could I speak to Bob – my Improvement Science coach?

Yes, Bob is free. I will connect you now.

<Ring Ring><Ring Ring>

B: Hello Leslie, Bob here. What is on your mind?

L: Hi Bob, I have a problem that I do not feel my Foundation training has equipped me to solve. Can I talk it through with you?

B: Of course. Can you outline the context for me?

L: OK. The context is a department that is delivering an acceptable quality-of-service and is delivering on-time but is failing financially. As you know we are all being forced to adopt austerity measures and I am concerned that if their budget is cut then they will fail on delivery and may start cutting corners and then fail on quality too.  We need a win-win-win outcome and I do not know where to start with this one.

B: OK – are you using the 6M Design method?

L: Yes – of course!

B: OK – have you done The 4N Chart for the customer of their service?

L: Yes – it was their customers who asked me if I could help and that is what I used to get the context.

B: OK – have you done The 4N Chart for the department?

L: Yes. And that is where my major concerns come from. They feel under extreme pressure; they feel they are working flat out just to maintain the current level of quality and on-time delivery; they feel undervalued and frustrated that their requests for more resources are refused; they feel demoralized, demotivated and scared that their service may be ‘outsourced’. On the positive side they feel that they work well as a team and are willing to learn. I do not know what to do next.

B: OK. Despair not. This sounds like a very common and treatable system illness. It is a stream design problem, which may be the reason your Foundation training feels insufficient. Would you like to see how a Practitioner would approach this?

L: Yes please!

B: OK. Have you mapped their internal process?

L: Yes. It is a six-step process for each job. Each step has different requirements and is done by different people with different skills. In the past they had a problem with poor service quality so extra safety and quality checks were imposed by the Governance department. Now the quality of each step is measured on a 1-6 scale and the quality of the whole process is the sum of the individual steps, so it is measured on a scale of 6 to 36. They have now been given a minimum quality target of 21 to achieve for every job. How they achieve that is not specified – it was left up to them.

B: OK – do they record their quality measurement data?

L: Yes – I have their report.

B: OK – how is the information presented?

L: As an average for the previous month which is reported up to the Quality Performance Committee.

B: OK – what was the average for last month?

L: Their average was 24 – so they do not have an issue delivering the required quality. The problem is the costs they are incurring and they are being labelled by others as ‘inefficient’ – especially by the departments who are within budget and are annoyed that this failing department keeps getting ‘bailed out’.

B: OK. One issue here is the quality reporting process is not alerting you to the real issue. It sounds from what you say that you have fallen into the Flaw of Averages trap.

L: I don’t understand. What is the Flaw of Averages trap?

B: The answer to your question will become clear. The finance issue is a symptom – an effect – it is unlikely to be the cause. When did this finance issue appear?

L: Just after the Safety and Quality Review. They needed to employ more agency staff to do the extra work created by having to meet the new Minimum Quality target.

B: OK. I need to ask you a personal question. Do you believe that improving quality always costs more?

L: I have to say that I am coming to that conclusion. Our Governance and Finance departments are always arguing about it. Governance state ‘a minimum standard of safety and quality is not optional’ and finance say ‘but we are going out of business’. They are at loggerheads. The service departments get caught in the cross-fire.

B: OK. We will need to use reality to demonstrate that this belief is incorrect. Rhetoric alone does not work. If it did then we would not be having this conversation. Do you have the raw data from which the averages are calculated?

L: Yes. We have the data. The quality inspectors are very thorough!

B: OK – can you plot the quality scores for the last fifty jobs as a BaseLine chart?

L: Yes – give me a second. The average is 24 as I said.

B: OK – is the process stable?

L: Yes – there is only one flag for the fifty. I know from my Foundations training that is not a cause for alarm.

B: OK – what is the process capability?

L: I am sorry – I don’t know what you mean by that?

B: My apologies. I forgot that you have not completed the Practitioner training yet. The capability is the range between the red lines on the chart.

L: Um – the lower line is at 17 and the upper line is at 31.

B: OK – how many points lie below the target of 21?

L: None of course. They are meeting their Minimum Quality target. The issue is not quality – it is money.

There was a pause.  Leslie knew from experience that when Bob paused there was a surprise coming.

B: Can you email me your chart?

A cold-shiver went down Leslie’s back. What was the problem here? Bob had never asked to see the data before.

L: Sure. I will send it now. The recent fifty is on the right, the data on the left is from after the quality inspectors went in and before the Minimum Quality target was imposed. This is the chart that Governance has been using as evidence to justify their existence because they are claiming the credit for improving the quality.

B: OK – thanks. I have got it – let me see.  Oh dear.

Leslie was shocked. She had never heard Bob use language like ‘Oh dear’.

There was another pause.

B: Leslie, what is the context for this data? What does the X-axis represent?

Leslie looked at the chart again – more closely this time. Then she saw what Bob was getting at. There were fifty points in the first group, and about the same number in the second group. That was not the interesting part. In the first group the X-axis went up to 50 in regular steps of five; in the second group it went from 50 to just over 149 and was no longer regularly spaced. Eventually she replied.

L: Bob, that is a really good question. My guess is that it is the quality of the completed work.

B: It is unwise to guess. It is better to go and see reality.

L: You are right. I knew that. It is drummed into us during the Foundations training! I will go and ask. Can I call you back?

B: Of course. I will email you my direct number.


<Ring Ring><Ring Ring>

B: Hello, Bob here.

L: Bob – it is Leslie. I am  so excited! I have discovered something amazing.

B: Hello Leslie. That is good to hear. Can you tell me what you have discovered?

L: I have discovered that better quality does not always cost more.

B: That is a good discovery. Can you prove it with data?

L: Yes I can!  I am emailing you the chart now.

B: OK – I am looking at your chart. Can you explain to me what you have discovered?

L: Yes. When I went to see for myself I saw that when a job failed the Minimum Quality check at the end then the whole job had to be re-done because there was no time to investigate and correct the causes of the failure. The people doing the work said that they were helpless victims of errors that were made upstream of them – and they could not predict from one job to the next what the error would be. They said it felt like quality was a lottery and that they were just firefighting all the time. They knew that just repeating the work was not solving the problem but they had no other choice because they were under enormous pressure to deliver on-time as well. The only solution they could see was to get more resources but their requests were being refused by Finance on the grounds that there is no more money. They felt completely trapped.

B: OK. Can you describe what you did?

L: Yes. I saw immediately that there were so many sources of errors that it would be impossible for me to tackle them all. So I used the tool that I had learned in the Foundations training: the Niggle-o-Gram. That focussed us and led to a surprisingly simple, quick, zero-cost process design change. We deliberately did not remove the Inspection-and-Correction policy because we needed to know what the impact of the change would be. Oh, and we did one other thing that challenged the current methods. We plotted every attempt, both the successes and the failures, on the BaseLine chart so we could see both the quality and the work done on one chart. And we updated the chart every day and posted the chart on the notice board so everyone in the department could see the effect of the change that they had designed. It worked like magic! They have already slashed their agency staff costs, the whole department feels calmer and they are still delivering on-time. And best of all they now feel that they have the energy and time to start looking at the next niggle. Thank you so much! Now I see how the tools and techniques I learned in Foundations are so powerful and now I understand better the reason we learned them first.

B: Well done Leslie. You have taken an important step to becoming a fully fledged Practitioner. You have learned some critical lessons in this challenge.


This scenario is fictional but realistic.

And it has been designed so that it can be replicated easily using a simple game that requires only pencil, paper and some dice.

If you do not have some dice handy then you can use this little program that simulates rolling six dice.

The Six Digital Dice program (for PC only).

Instructions
1. Prepare a piece of A4 squared paper with the Y-axis marked from zero to 40 and the X-axis from 1 to 80.
2. Roll six dice and record the score on each (or roll one die six times) – then calculate the total.
3. Plot the total on your graph. Left-to-right in time order. Link the dots with lines.
4. After 25 dots look at the chart. It should resemble the leftmost data in the charts above.
5. Now draw a horizontal line at 21. This is the Minimum Quality Target.
6. Keep rolling the dice – six per cycle, adding the totals to the right of your previous data.

But this time if the total is less than 21 then repeat the cycle of six dice rolls until the score is 21 or more. Record on your chart the output of all the cycles – not just the acceptable ones.

7. Keep going until you have 25 acceptable outcomes. As long as it takes.

Now count how many cycles you needed to complete in order to get 25 acceptable outcomes.  You should find that it is about twice as many as before you “imposed” the Inspect-and-Correct QI policy.
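If you would rather let a computer do the rolling, here is a minimal sketch of the same game in Python. It is an illustration of the exercise above, not the Six Digital Dice program itself:

```python
import random

def one_cycle():
    """Roll six dice and return the total quality score (range 6 to 36)."""
    return sum(random.randint(1, 6) for _ in range(6))

def baseline(n_jobs=25):
    """Phase 1: every cycle counts as a completed job."""
    return [one_cycle() for _ in range(n_jobs)]

def inspect_and_correct(n_jobs=25, target=21):
    """Phase 2: any cycle scoring below the Minimum Quality Target is redone.
    Returns the accepted scores and the total number of cycles worked."""
    accepted, cycles = [], 0
    while len(accepted) < n_jobs:
        score = one_cycle()
        cycles += 1
        if score >= target:
            accepted.append(score)
    return accepted, cycles

random.seed(1)  # fix the seed so the run is repeatable
before = baseline()
after, work = inspect_and_correct()

print(f"Baseline average quality:      {sum(before) / len(before):.1f}")  # about 21
print(f"Accepted average quality:      {sum(after) / len(after):.1f}")    # about 24
print(f"Cycles worked for 25 accepted: {work}")                           # roughly 45-50
```

A typical run gives a baseline average of about 21, an accepted average of about 24 (much as the department in the story reported), and roughly twice as many cycles of work as accepted outcomes.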

This illustrates the problem of an Inspection-and-Correction design for quality improvement.  It does improve the quality of the final output – but at a higher cost.

We are treating the symptoms (effects) and ignoring the disease (causes).

The internal design of the process is unchanged so it is still generating mistakes.

How much quality improvement you get and how much it costs you is determined by the design of the underlying process – which has not changed. There is a Law of Diminishing returns here – and a big risk.

The risk is that if quality improves as the result of applying a quality target then it encourages the Governance thumbscrews to be tightened further and forces those delivering the service further into cross-fire between Governance and Finance.

The other negative consequence of the Inspect-and-Correct approach is that it increases both the average and the variation in lead time which also fuels the calls for more targets, more sticks, calls for  more resources and pushes costs up even further.

The lesson from this simple exercise seems clear.

The better strategy for improving quality is to design the root causes of errors out of the processes  because then we will get improved quality and improved delivery and improved productivity and we will discover that we have improved safety as well.  Win-win-win-win.

The Six Dice Game is a simpler version of the famous Red Bead Game that W Edwards Deming used to explain why, in the modern world, the arbitrary-target-driven-command-and-control-stick-and-carrot style of performance management creates more problems than it solves.

The illusion is of short-term gain but the reality is of long-term pain.

And if you would like to see and hear Deming talking about the science of improvement there is a video of him speaking in 1984. He is at the bottom of the page.  Click here.

The Three R’s

Processes are like people – they get poorly – sometimes very poorly.

Poorly processes present with symptoms. Symptoms such as criticism, complaints, and even catastrophes.

Poorly processes show signs. Signs such as fear, queues and deficits.

So when a process gets very poorly what do we do?

We follow the Three R’s

1-Resuscitate
2-Review
3-Repair

Resuscitate means to stabilize the process so that it is not getting sicker.

Review means to quickly and accurately diagnose the root cause of the process sickness.

Repair means to make changes that will return the process to a healthy and stable state.

So the concept of ‘stability’ is fundamental and we need to understand what that means in practice.

Stability means ‘predictable within limits’. It is not the same as ‘constant’. Constant is stable but stable is not necessarily constant.

Predictable implies time – so any measure of process health must be presented as time-series data.

We are now getting close to a working definition of stability: “a useful metric of system performance that is predictable within limits over time”.

So what is a ‘useful metric’?

There will be at least three useful metrics for every system: a quality metric, a time metric and a money metric.

Quality is subjective. Money is objective. Time is both.

Time is the one to start with – because it is the easiest to measure.

And if we treat our system as a ‘black box’ then from the outside there are three inter-dependent time-related metrics. These are external process metrics (EPMs) – sometimes called Key Performance Indicators (KPIs).

Flow in – also called demand
Flow out – also called activity
Delivery time – which is the time a task spends inside our system – also called the lead time.

But this is all starting to sound like rather dry, conceptual, academic mumbo-jumbo … so let us add a bit of realism and drama – let us tell this as a story …



Picture yourself as the manager of a service that is poorly. Very poorly. You are getting a constant barrage of criticism and complaints and the occasional catastrophe. Your service is struggling to meet the required delivery time performance. Your service is struggling to stay in budget – let alone meet future cost improvement targets. Your life is a constant fire-fight and you are getting very tired and depressed. Nothing you try seems to make any difference. You are starting to think that anything is better than this – even unemployment! But you have a family to support and jobs are hard to come by in austere times so jumping is not an option. There is no way out. You feel you are going under. You feel you are drowning. You feel terrified and helpless!

In desperation you type “Management fire-fighting” into your web search box and among the list of hits you see “Process Improvement Emergency Service”.  That looks hopeful. The link takes you to a website and a phone number. What have you got to lose? You dial the number.

It rings twice and a calm voice answers.

?“You are through to the Process Improvement Emergency Service – what is the nature of the process emergency?”

“Um – my service feels like it is on fire and I am drowning!”

The calm voice continues in a reassuring tone.

?“OK. Have you got a minute to answer three questions?”

“Yes – just about”.

?“OK. First question: Is your service safe?”

“Yes – for now. We have had some catastrophes but have put in lots of extra safety policies and checks which seems to be working. But they are creating a lot of extra work and pushing up our costs and even then we still have lots of criticism and complaints.”

?“OK. Second question: Is your service financially viable?”

“Yes, but not for long. Last year we just broke even, this year we are projecting a big deficit. The cost of maintaining safety is ‘killing’ us.”

?“OK. Third question: Is your service delivering on time?”

“Mostly but not all of the time, and that is what is causing us the most pain. We keep getting beaten up for missing our targets.  We constantly ask, argue and plead for more capacity and all we get back is ‘that is your problem and your job to fix – there is no more money’. The system feels chaotic. There seems to be no rhyme nor reason to when we have a good day or a bad day. All we can hope to do is to spot the jobs that are about to slip through the net in time; to expedite them; and to just avoid failing the target. We are fire-fighting all of the time and it is not getting better. In fact it feels like it is getting worse. And no one seems to be able to do anything other than blame each other.”

There is a short pause then the calm voice continues.

?“OK. Do not panic. We can help – and you need to do exactly what we say to put the fire out. Are you willing to do that?”

“I do not have any other options! That is why I am calling.”

The calm voice replied without hesitation. 

?“We all always have the option of walking away from the fire. We all need to be prepared to exercise that option at any time. To be able to help then you will need to understand that and you will need to commit to tackling the fire. Are you willing to commit to that?”

You are surprised and strangely reassured by the clarity and confidence of this response and you take a moment to compose yourself.

“I see. Yes, I agree that I do not need to get toasted personally and I understand that you cannot parachute in to rescue me. I do not want to run away from my responsibility – I will tackle the fire.”

?“OK. First we need to know how stable your process is on the delivery time dimension. Do you have historical data on demand, activity and delivery time?”

“Hey! Data is one thing I do have – I am drowning in the stuff! RAG charts that blink at me like evil demons! None of it seems to help though – the more data I get sent the more confused I become!”

?“OK. Do not panic.  The data you need is very specific. We need the start and finish events for the most recent one hundred completed jobs. Do you have that?”

“Yes – I have it right here on a spreadsheet – do I send the data to you to analyse?”

?“There is no need to do that. I will talk you through how to do it.”

“You mean I can do it now?”

?“Yes – it will only take a few minutes.”

“OK, I am ready – I have the spreadsheet open – what do I do?”

?“Step 1. Arrange the start and finish events into two columns with a start and finish event for each task on each row.”

You copy and paste the data you need into a new worksheet. 

“OK – done that”.

?“Step 2. Sort the two columns into ascending order using the start event.”

“OK – that is easy”.

?“Step 3. Create a third column and for each row calculate the difference between the start and the finish event for that task. Please label it ‘Lead Time’”.

“OK – do you want me to calculate the average Lead Time next?”

There was a pause. Then the calm voice continued but with a slight tinge of irritation.

?“That will not help. First we need to see if your system is unstable. We need to avoid the Flaw of Averages trap. Please follow the instructions exactly. Are you OK with that?”

This response was a surprise and you are starting to feel a bit confused.    

“Yes – sorry. What is the next step?”

?“Step 4: Plot a graph. Put the Lead Time on the vertical axis and the start time on the horizontal axis”.

“OK – done that.”

?“Step 5: Please describe what you see?”

“Um – it looks to me like a cave full of stalactites. The top is almost flat, there are some spikes, but the bottom is all jagged.”

?“OK. Step 6: Does the pattern on the left-side and on the right-side look similar?”

“Yes – it does not seem to be rising or falling over time. Do you want me to plot the smoothed average over time or a trend line? They are options on the spreadsheet software. I use them all the time!”

The calm voice paused then continued with the irritated overtone again.

?“No. There is no value in doing that. Please stay with me here. A linear regression line is meaningless on a time series chart. You may be feeling a bit confused. It is common to feel confused at this point but the fog will clear soon. Are you OK to continue?”

An odd feeling starts to grow in you: a mixture of anger, sadness and excitement. You find yourself muttering “But I spent my own hard-earned cash on that expensive MBA where I learned how to do linear regression and data smoothing because I was told it would be good for my career progression!”

?“I am sorry I did not catch that? Could you repeat it for me?”

“Um – sorry. I was talking to myself. Can we proceed to the next step?”

?”OK. From what you say it sounds as if your process is stable – for now. That is good.  It means that you do not need to Resuscitate your process and we can move to the Review phase and start to look for the cause of the pain. Are you OK to continue?”

An uncomfortable feeling is starting to form – one that you cannot quite put your finger on.

“Yes – please”. 

?“Step 7: What is the value of the Lead Time at the ‘cave roof’?”

“Um – about 42”

?“OK – Step 8: What is your delivery time target?”

“42”

?“OK – Step 9: How is your delivery time performance measured?”

“By the percentage of tasks that are delivered on time each month. Our target is better than 95%. If we fail any month then we are named-and-shamed at the monthly performance review meeting and we have to explain why and what we are going to do about it. If we succeed then we are spared the ritual humiliation and we are rewarded by watching someone else being mauled instead. There is always someone in the firing line and attendance at the meeting is not optional!”

You also want to say that the data you submit is not always completely accurate and that you often expedite tasks just to avoid missing the target – in full knowledge that the work has not been completed to the required standard. But you hold that back. Someone might be listening.

There was a pause. Then the calm voice continued with no hint of surprise. 

?“OK. Step 10. The most likely diagnosis here is a DRAT. You have probably developed a Gaussian Horn that is creating the emotional pain and that is fuelling the fire-fighting. Do not panic. This is a common and curable process illness.”

You look at the clock. The conversation has taken only a few minutes. Your feeling of panic is starting to fade and a sense of relief and curiosity is growing. Who are these people?

“Can you tell me more about a DRAT? I am not familiar with that term.”

?“Yes.  Do you have two minutes to continue the conversation?”

“Yes indeed! You have my complete attention for as long as you need. The emails can wait.”

The calm voice continues.

?“OK. I may need to put you on hold or call you back if another emergency call comes in. Are you OK with that?”

“You mean I am not the only person feeling like this?”

?“You are not the only person feeling like this. The process improvement emergency service, or PIES as we call it, receives dozens of calls like this every day – from organisations of every size and type.”

“Wow! And what is the outcome?”

There was a pause. Then the calm voice continued with an unmistakeable hint of pride.

?“We have a 100% success rate to date – for those who commit. You can look at our performance charts and the client feedback on the website.”

“I certainly will! So can you explain what a DRAT is?” 

And as you ask this you are thinking to yourself ‘I wonder what happened to those who did not commit?’ 

The calm voice interrupts your train of thought with a well-practiced explanation.

?“DRAT stands for Delusional Ratio and Arbitrary Target. It is a very common management reaction to unintended negative outcomes such as customer complaints. The concept of metric-ratios-and-performance-specifications is not wrong; it is just applied indiscriminately. Using DRATs can drive short-term improvements but over a longer time-scale they always make the problem worse.”

One thought is now reverberating in your mind. “I knew that! I just could not explain why I felt so uneasy about how my service was being measured.” And now you have a new feeling growing – anger.  You control the urge to swear and instead you ask:

“And what is a Horned Gaussian?”

The calm voice was expecting this question.

?“It is easier to demonstrate than to explain. Do you still have your spreadsheet open and do you know how to draw a histogram?”

“Yes – what do I need to plot?”

?“Use the Lead Time data and set up ten bins in the range 0 to 50 with equal intervals. Please describe what you see”.

It takes you only a few seconds to do this.  You draw lots of histograms – most of them very colourful but meaningless. No one seems to mind though.

“OK. The histogram shows a sort of heap with a big spike on the right hand side – at 42.”

The calm voice continued – this time with a sense of satisfaction.

?“OK. You are looking at the Horned Gaussian. The hump is the Gaussian and the spike is the Horn. It is a sign that your complex adaptive system behaviour is being distorted by the DRAT. It is the Horn that causes the pain and the perpetual fire-fighting. It is the DRAT that causes the Horn.”

“Is it possible to remove the Horn and put out the fire?”

?“Yes.”

This is what you wanted to hear and you cannot help cutting to the closure question.

“Good. How long does that take and what does it involve?”

The calm voice was clearly expecting this question too.

?“The Gaussian Horn is a non-specific reaction – it is an effect – it is not the cause. To remove it and to ensure it does not come back requires treating the root cause. The DRAT is not the root cause – it is also a knee-jerk reaction to the symptoms – the complaints. Treating the disease requires learning how to diagnose the specific root cause of the lead time performance failure. There are many possible contributors to lead time and you need to know which are present because if you get the diagnosis wrong you will make an unwise decision, take the wrong action and exacerbate the problem.”

Something goes ‘click’ in your head and suddenly your fog of confusion evaporates. It is like someone just switched a light on.

“Ah Ha! You have just explained why nothing we try seems to work for long – if at all.  How long does it take to learn how to diagnose and treat the specific root causes?”

The calm voice was expecting this question and seemed to switch to the next part of the script.

?“It depends on how committed the learner is and how much unlearning they have to do in the process. Our experience is that it takes a few hours of focussed effort over a few weeks. It is rather like learning any new skill. Guidance, practice and feedback are needed. Just about anyone can learn how to do it – but paradoxically it takes longer for the more experienced and, can I say, cynical managers. We believe they have more unlearning to do.”

You are now feeling a growing sense of urgency and excitement.

“So it is not something we can do now on the phone?”

?“No. This conversation is just the first step.”

You are eager now – sitting forward on the edge of your chair and completely focussed.

“OK. What is the next step?”

There is a pause. You sense that the calm voice is reviewing the conversation and coming to a decision.

?“Before I can answer your question I need to ask you something. I need to ask you how you are feeling.”

That was not the question you expected! You are not used to talking about your feelings – especially to a complete stranger on the phone – yet strangely you do not sense that you are being judged. You have a growing feeling of trust in the calm voice.

You pause, collect your thoughts and attempt to put your feelings into words. 

“Er – well – a mixture of feelings actually – and they changed over time. First I had a feeling of surprise that this seems so familiar and straightforward to you; then a sense of resistance to the idea that my problem is fixable; and then a sense of confusion because what you have shown me challenges everything I have been taught; and then a feeling of distrust that there must be a catch; and then a feeling of fear of embarrassment if I do not spot the trick. Then, when I put my natural skepticism to one side and considered the possibility as real, there was a feeling of anger that I was not taught any of this before; and then a feeling of sadness for the years of wasted time and frustration from battling something I could not explain.  Eventually I started to feel that my cherished impossibility belief was being shaken to its roots. And then I felt a growing sense of curiosity, optimism and even excitement that is also tinged with a feeling of fear of disappointment and of having my hopes dashed – again.”

There was a pause – as if the calm voice was digesting this hearty meal of feelings. Then the calm voice stated:

?“You are experiencing the Nerve Curve. It is normal and expected. It is a healthy sign. It means that the healing process has already started. You are part of your system. You feel what it feels – it feels what you do. The sequence of negative feelings: the shock, denial, anger, sadness, depression and fear will subside with time and the positive feelings of confidence, curiosity and excitement will replace them. Do not worry. This is normal and it takes time. I can now suggest the next step.”

You now feel like you have just stepped off an emotional rollercoaster – scary yet exhilarating at the same time. A sense of relief sweeps over you. You have shared your private emotional pain with a stranger on the phone and the world did not end! There is hope.

“What is the next step?”

This time there was no pause.

?“To commit to learning how to diagnose and treat your process illnesses yourself.”

“You mean you do not sell me an expensive training course or send me a sharp-suited expert who will come tell me what to do and charge me a small fortune?”

There is an almost sarcastic tone to your reply that you regret as soon as you have spoken.

Another pause.  An uncomfortably long one this time. You sense the calm voice knows that you know the answer to your own question and is waiting for you to answer it yourself.

You answer your own question.  

“OK. I guess not. Sorry for that. Yes – I am definitely up for learning how! What do I need to do?”

?“Just email us. The address is on the website. We will outline the learning process. It is neither difficult nor expensive.”

The way this reply was delivered – calmly and matter-of-factly – was reassuring but it also prompted a new niggle – a flash of fear.

“How long have I got to learn this?”

This time the calm voice had an unmistakable sense of urgency that sent a cold prickle down your spine.

?”Delay will add no value. You are being stalked by the Horned Gaussian. This means your system is on the edge of a catastrophe cliff. It could tip over any time. You cannot afford to relax. You must maintain all your current defenses. It is a learning-by-doing process. The sooner you start to learn-by-doing the sooner the fire starts to fade and the sooner you move away from the edge of the cliff.”       

“OK – I understand – and I do not know why I did not seek help a long time ago.”

The calm voice replied simply.

?”Many people find seeking help difficult. Especially senior people”.

Sensing that the conversation is coming to an end you feel compelled to ask:

“I am curious. Where do the DRATs come from?”

?“Curiosity is a healthy attitude to nurture. We believe that DRATs originated in finance departments – where they were originally called Fiscal Averages, Ratios and Targets.  At some time in the past they were sucked into operations and governance departments by a knowledge vacuum created by an unintended error of omission.”

You are not quite sure what this unfamiliar language means and you sense that you have strayed outside the scope of the “emergency script” but the phrase ‘error of omission’ sounds interesting and pricks your curiosity. You ask:

“What was the error of omission?”

?“We believe it was not investing in learning how to design complex adaptive value systems to deliver capable win-win-win performance. Not investing in learning the Science of Improvement.”

“I am not sure I understand everything you have said.”

?“That is OK. Do not worry. You will. We look forward to your email.  My name is Bob by the way.”

“Thank you so much Bob. I feel better just having talked to someone who understands what I am going through and I am grateful to learn that there is a way out of this dark pit of despair. I will look at the website and send the email immediately.”

?”I am happy to have been of assistance.”

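And if you would like to repeat the analysis from the story on your own data, here is a minimal sketch in Python. The file name tasks.csv and its start and finish columns are hypothetical placeholders for your own completed-task events, and lead time is shown in days – use whatever time units your events are recorded in.

    # A minimal sketch of Steps 1 to 7 from the story, plus the histogram that
    # reveals a Horned Gaussian. 'tasks.csv', 'start' and 'finish' are
    # hypothetical placeholders for your own completed-task event data.
    import pandas as pd
    import matplotlib.pyplot as plt

    tasks = pd.read_csv("tasks.csv", parse_dates=["start", "finish"])
    tasks = tasks.sort_values("start")                               # sort by the start event
    tasks["lead_time"] = (tasks["finish"] - tasks["start"]).dt.days  # lead time for each task

    fig, (run_chart, histogram) = plt.subplots(1, 2, figsize=(10, 4))

    # Run chart: lead time in start-time order - look for stability, not averages
    run_chart.plot(tasks["start"], tasks["lead_time"], marker="o")
    run_chart.axhline(42, color="red", linestyle="--", label="delivery time target")
    run_chart.set_xlabel("start event")
    run_chart.set_ylabel("lead time (days)")
    run_chart.legend()

    # Histogram: ten equal bins from 0 to 50 - a hump with a spike at the target
    histogram.hist(tasks["lead_time"], bins=range(0, 55, 5))
    histogram.set_xlabel("lead time (days)")
    histogram.set_ylabel("count")

    plt.tight_layout()
    plt.show()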

Systems within Systems

Each of us is a small part of a big system.  Each of us is a big system made of smaller parts. The concept of a system is the same at all scales – a property called scale invariance.

When we put a system under a microscope we see parts that are also systems. And when we zoom in on those we see their parts are also systems. And if we look outwards with a telescope we see that we are part of a bigger system which in turn is part of an even bigger system.

This concept of systems-within-systems has a down-side and an up-side.

The down-side is that it quickly becomes impossible to create a mental picture of the whole system-of-systems. Our caveman brains are just not up to the job. So we just focus our impressive-but-limited cognitive capacity on the bit that affects us most. The immediate day-to-day people-and-process here-and-now stuff. And we ignore the ‘rest’. We deliberately become ignorant – and for good reason. We do not ask about the ‘rest’ because we do not want to know because we cannot comprehend the complexity. We create cognitive comfort zones and personal silos.

And we stay inside our comfort zones and we hide inside our silos.


Unfortunately – ignoring the ‘rest’ does not make it go away.

We are part of a system – we are affected by it and it is affected by us. That is how systems work.


The up-side is that all systems behave in much the same way – irrespective of the level.  This is very handy because if we can master a method for understanding and improving a system at one level – then we can use the same method at any level.  The only change is the degree of detail. We can chunk up and down and still use the same method.  

The improvement scientist needs to be a master of one method and to be aware of three levels: the system level, the stream level and the step level.

The system provides the context for the streams. The steps provide the content of the streams.

  1. Direction operates at the system level.
  2. Delivery operates at the stream level.
  3. Doing operates at the step level.

So an effective and efficient improvement science method must work at all three levels – and one method that has been demonstrated to do that is called 6M Design®.


6M Design® is not the only improvement science method, and it is not intended to be the best. Being the best is not the purpose because it is not necessary. Having something better than what we had before is the purpose because that is sufficient. That is improvement.


6M Design® works at all three levels.  It is sufficient for system-wide and system-deep improvement. So that is what I use.


The first M stands for Map.

Maps are designed to be visual and two-dimensional because that is how our Mark-I eyeballs and visual sensory systems work. Our caveman brains are good at using pictures and at extracting meaning from the detail. It is a survival skill.

All real systems have a lot more than two dimensions. Safety, Quality, Flow and Cost are four dimensions to start with, and there are many more. So we need lots of maps. Each one looking at just two of the dimensions.  It is our set of maps that provide us with a multi-dimensional picture of the system we want to improve.

One dimension features more often in the maps than any other – and that dimension is time.

The Western cultural convention is to put time on the horizontal axis with the past on the left and the future on the right. Left-to-right means looking forward in time.  Right-to-left means looking backwards in time.


We have already seen one of the time-dependent maps – The 4N Chart®.

It is an Emotion-Time map. How do we feel now and why? What do we want to feel in the future and why? It is a status-at-a-glance map. A static map. A snapshot.

The emotional roller coaster of change – the Nerve Curve – is an Emotion-Time map too. It is a dynamic map – an expected trajectory map.  The emotional ups and downs that we expect to encounter when we engage in significant change.

Change usually involves several threads at the same time – each with its own Nerve Curve. 

The 4N Charts® are snapshots of all the parallel threads of change – they evolve over time – they are our day-to-day status-at-a-glance maps – and they guide us to which Nerve Curve to pay attention to next and what to do. 

The map that links the three – the purposes, the pathways and the parts – is the map that underpins 6M Design®. A map that most people are not familiar with because it represents a counter-intuitive way of thinking.

And it is that critical-to-success map which differentiates innovative design from incremental improvement.

And using that map can be learned quite quickly – if you have a guide – an Improvement Scientist.

A Recipe for Improvement PIE.

Most of us are realists. We have to solve problems in the real world so we prefer real examples and step-by-step how-to-do recipes.

A minority of us are theorists and are more comfortable with abstract models and solving rhetorical problems.

Many of these Improvement Science blog articles debate abstract concepts – because I am a strong iNtuitor by nature. Most realists are Sensors – so by popular request here is a “how-to-do” recipe for a Productivity Improvement Exercise (PIE)

Step 1 – Define Productivity.

There are many definitions we could choose because productivity means the results delivered divided by the resources used.  We could use any of the three currencies – quality, time or money – but the easiest is money. And that is because it is easier to measure and we have a well-established department for doing it – Finance – the guardians of the money.  There are two other departments who may need to be involved – Governance (the guardians of the safety) and Operations (the guardians of the delivery).

So the definition we will use is productivity = revenue generated divided by cost incurred.

Step 2 – Draw a map of the process we want to make more productive.

This means creating a picture of the parts and their relationships to each other – in particular what the steps in the process are; who does what, where and when; what is done in parallel and what is done in sequence; what feeds into what and what depends on what. The output of this step is a diagram with boxes and arrows and annotations – called a process map. It tells us at a glance how complex our process is – the number of boxes and the number of arrows.  The simpler the process the easier it is to demonstrate a productivity improvement quickly and unambiguously.

Step 3 – Decide the objective metrics that will tell us our productivity.

We have chosen a financial measure of productivity so we need to measure revenue and cost over time – and our Finance department do that already so we do not need to do anything new. We just ask them for the data. It will probably come as a monthly report because that is how Finance processes are designed – the calendar month accounting cycle is not negotiable.

We will also need some internal process metrics (IPMs) that will link to the end of month productivity report values because we need to be observing our process more often than monthly. Weekly, daily or even task-by-task may be necessary – and our monthly finance reports will not meet that time-granularity requirement.

These internal process metrics will be time metrics.

Start with objective metrics and avoid the subjective ones at this stage. They are necessary but they come later.

Step 4 – Measure the process.

There are three essential measures we usually need for each step in the process: A measure of quality, a measure of time and a measure of cost.  For the purposes of this example we will simplify by making three assumptions. Quality is 100% (no mistakes) and Predictability is 100% (no variation) and Necessity is 100% (no worthless steps). This means that we are considering a simplified and theoretical situation but we are novices and we need to start with the wood and not get lost in the trees.

The 100% Quality means that we do not need to worry about Governance for the purposes of this basic recipe.

The 100% Predictability means that we can use averages – so long as we are careful.

The 100% Necessity means that we must have all the steps in there or the process will not work.

The best way to measure the process is to observe it and record the events as they happen. There is no place for rhetoric here. Only reality is acceptable. And avoid computers getting in the way of the measurement. The place for computers is to assist the analysis – and only later may they be used to assist the maintenance – after the improvement has been achieved.

Many attempts at productivity improvement fail at this point – because there is a strong belief that the more computers we add the better. Experience shows the opposite is usually the case – adding computers adds complexity, cost and the opportunity for errors – so beware.

Step 5 – Identify the Constraint Step.

The meaning of the term constraint in this context is very specific – it means the step that controls the flow in the whole process.  The critical word here is flow. We need to identify the current flow constraint.

A tap or valve on a pipe is a good example of a flow constraint – we adjust the tap to control the flow in the whole pipe. It makes no difference how long or fat the pipe is or where the tap is – beginning, middle or end. (So long as the pipe is not too long or too narrow or the fluid too gloopy, because if they are then the pipe will become the flow constraint and we do not want that).

The way to identify the constraint in the system is to look at the time measurements. The step that shows the same flow as the output is the constraint step. (And remember we are using the simplified example of no errors and no variation – in real life there is a bit more to identifying the constraint step).

Step 6 – Identify the ideal place for the Constraint Step.

This is the critical-to-success step in the PIE recipe. Get this wrong and it will not work.

This step requires two pieces of measurement data for each step – the time data and the cost data. So the Operational team and the Finance team will need to collaborate here. Tricky I know but if we want improved productivity then there is no alternative.

Lots of productivity improvement initiatives fall at the Sixth Fence – so beware.  If our Finance and Operations departments are at war then we should not consider even starting the race. It will only make the bad situation even worse!

If they are able to maintain an adult and respectful face-to-face conversation then we can proceed.

The time measure for each step we need is called the cycle time – which is the time interval from starting one task to being ready to start the next one. Please note this is a precise definition and it should be used exactly as defined.

The money measure for each step we need is the fully absorbed cost of time of providing the resource.  Your Finance department will understand that – they are Masters of FACTs!

The magic number we need to identify the Ideal Constraint is the product of the Cycle Time and the FACT – the step with the highest magic number should be the constraint step. It should control the flow in the whole process. (In reality there is a bit more to it than this but I am trying hard to stay out of the trees).
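Here is a minimal sketch of Steps 5 and 6 in code. The step names, cycle times and FACTs are hypothetical numbers invented purely to illustrate the arithmetic.

    # A minimal sketch of Steps 5 and 6. All names and numbers are hypothetical.
    # cycle_time = minutes from starting one task to being ready to start the next
    # fact       = fully absorbed cost of time (cost per minute of the resource)
    steps = {
        "Refer":     {"cycle_time": 5,  "fact": 0.5},
        "Assess":    {"cycle_time": 20, "fact": 2.0},
        "Treat":     {"cycle_time": 15, "fact": 3.0},
        "Discharge": {"cycle_time": 10, "fact": 1.0},
    }

    # Step 5: in this simplified no-error, no-variation example the current flow
    # constraint is the step with the longest cycle time.
    current_constraint = max(steps, key=lambda name: steps[name]["cycle_time"])

    # Step 6: the 'magic number' is cycle time x FACT; the step with the highest
    # magic number is where the constraint should ideally sit.
    for step in steps.values():
        step["magic"] = step["cycle_time"] * step["fact"]
    ideal_constraint = max(steps, key=lambda name: steps[name]["magic"])

    print("Current constraint:", current_constraint)  # Assess (20 minutes)
    print("Ideal constraint:  ", ideal_constraint)    # Treat (15 minutes x 3.0 = 45)

In this toy example the constraint currently sits at the step with the longest cycle time, but the highest cycle-time-times-FACT product sits at a different step – and that mismatch is exactly what Step 7 is designed to correct.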

Step 7 – Design the capacity so that the Ideal Constraint is the Actual Constraint.

We are using a precise definition of the term capacity here – the amount of resource-time available – not just the number of resources available. Again this is a precise definition and should be used as defined.

The capacity design sequence  means adding and removing capacity to and from steps so that the constraint moves to where we want it.

The sequence  is:
7a) Set the capacity of the Ideal Constraint so it is capable of delivering the required activity and revenue.
7b) Increase the capacity of the all the other steps so that the Ideal Constraint actually controls the flow.
7c) Reduce the capacity of each step in turn, a click at a time until it becomes the constraint then back off one click.

Step 8 – Model your whole design to predict the expected productivity improvement.

This is critical because we are not interested in suck-it-and-see incremental improvement. We need to be able to decide if the expected benefit is worth the effort before we authorise and action any changes.  And we will be asked for a business case. That necessity is not negotiable either.

Lots of productivity improvement projects try to dodge this particularly thorny fence behind a smoke screen of a plausible looking business case that is more fiction than fact. This happens when any of Steps 2 to 7 are omitted or done incorrectly.  What we need here is a model and if we are not prepared to learn how to build one then we should not start. It may only need a simple model – but it will need one. Intuition is too unreliable.

A model is defined as a simplified representation of reality used for making predictions.

All models are approximations of reality. That is OK.

The art of modeling is to define the questions the model needs to be designed to answer (and the precision and accuracy needed) and then design, build and test the model so that it is just simple enough and no simpler. Adding unnecessary complexity is difficult, time consuming, error prone and expensive. Using a computer model when a simple pen-and-paper model would suffice is a good example of over-complicating the recipe!
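As an illustration of just how small “just simple enough” can be, here is a pen-and-paper-sized sketch with entirely hypothetical numbers. It predicts the productivity before and after moving the constraint to a step with a shorter cycle time, and it deliberately ignores the capacity-cost adjustments that Step 7 would introduce in a real design.

    # A deliberately tiny model with hypothetical numbers, continuing the example
    # above: what happens to productivity if the constraint cycle time falls from
    # 20 minutes (Assess) to 15 minutes (Treat)?
    MINUTES_PER_WEEK = 5 * 8 * 60    # assumed available time per week
    REVENUE_PER_TASK = 100.0         # assumed revenue per completed task
    WEEKLY_COST = 12_000.0           # assumed fully absorbed cost of the whole process

    def weekly_productivity(constraint_cycle_time):
        """Throughput is set by the constraint step; productivity = revenue / cost."""
        tasks_per_week = MINUTES_PER_WEEK / constraint_cycle_time
        revenue = tasks_per_week * REVENUE_PER_TASK
        return revenue / WEEKLY_COST

    print("Before:", round(weekly_productivity(20), 2))  # 1.0
    print("After: ", round(weekly_productivity(15), 2))  # 1.33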

Many productivity improvement projects that get this far still fall at this fence.  There is a belief that modeling can only be done by Marvins with brains the size of planets. This is incorrect.  There is also a belief that just using a spreadsheet or modelling software is all that is needed. This is incorrect too. Competent modelling requires tools and training – and experience because it is as much art as science.

Step 9 – Modify your system as per the tested design.

Once you have demonstrated how the proposed design will deliver a valuable increase in productivity then get on with it.

Not by imposing it as a fait accompli – but by sharing the story along with the rationale, real data, explanation and results. Ask for balanced, reasoned and respectful feedback. The question to ask is “Can you think of any reasons why this would not work?” Very often the reply is “It all looks OK in theory but I bet it won’t work in practice but I can’t explain why”. This is an emotional reaction which may have some basis in fact. It may also just be habitual skepticism/cynicism. Further debate is usually  worthless – the only way to know for sure is by doing the experiment. As an experiment – as a small-scale and time-limited pilot. Set the date and do it. Waiting and debating will add no value. The proof of the pie is in the eating.

Step 10 – Measure and maintain your system productivity.

Keep measuring the same metrics that you need to calculate productivity and in addition monitor the old constraint step and the new constraint steps like a hawk – capturing their time metrics for every task – and tracking what you see against what the model predicted you should see.

The correct tool to use here is a system behaviour chart for each constraint metric.  The before-the-change data is the baseline from which improvement is measured over time, with a dot plotted for each task in real time and made visible to all the stakeholders. This is the voice of the process (VoP).
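For those who have never drawn a system behaviour chart, here is a minimal sketch of the arithmetic behind one – an XmR-style chart with natural process limits calculated from the data itself rather than imposed as targets. The cycle time values are hypothetical.

    # A minimal sketch of a system behaviour chart (XmR chart) for the cycle time
    # of the constraint step. The data values are hypothetical.
    import matplotlib.pyplot as plt

    cycle_times = [14, 16, 15, 13, 17, 15, 14, 16, 18, 15, 14, 16]  # task-by-task, in time order

    mean = sum(cycle_times) / len(cycle_times)
    moving_ranges = [abs(b - a) for a, b in zip(cycle_times, cycle_times[1:])]
    average_mr = sum(moving_ranges) / len(moving_ranges)
    upper = mean + 2.66 * average_mr   # natural process limits use the XmR constant 2.66
    lower = mean - 2.66 * average_mr

    plt.plot(cycle_times, marker="o")
    plt.axhline(mean, color="green", label="average")
    plt.axhline(upper, color="red", linestyle="--", label="upper natural process limit")
    plt.axhline(lower, color="red", linestyle="--", label="lower natural process limit")
    plt.xlabel("task number")
    plt.ylabel("constraint cycle time (minutes)")
    plt.legend()
    plt.show()

A point outside the natural process limits, or an unusual run of points, is the alarm that warrants an immediate root cause investigation; everything else is noise and is best left alone.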

A review after three months with a retrospective financial analysis will not be enough. The feedback needs to be immediate. The voice of the process will dictate if and when to celebrate. (There is a bit more to this step too and the trees are clamoring for attention but we must stay out of the wood a bit longer).

And after the charts-on-the-wall have revealed the expected improvement has actually happened; and after the skeptics have deleted their ‘we told you so’ emails; and after the cynics have slunk off to sulk; and after the celebration party is over; and after the fame and glory has been snatched by the non-participants – after all of that expected change management stuff has happened …. there is a bit more work to do.

And that is to establish the new higher productivity design as business-as-usual which means tearing up all the old policies and writing new ones: New Policies that capture the New Reality. Bin the out-of-date rubbish.

This is an essential step because culture changes slowly.  If this step is omitted then out-of-date beliefs, attitudes, habits and behaviours will start to diffuse back in, poison the pond, and undo all the good work.  The New Policies are the reference – but they alone will not ensure the improvement is maintained. What is also needed is a PFL – a performance feedback loop.

And we have already demonstrated what that needs to be – the tactical system behaviour charts for the Intended Constraint step.

The financial productivity metric is the strategic output and is reported monthly – as a system behaviour chart! Just comparing this month with last month is meaningless.  The tactical SBCs for the constraint step must be maintained continuously by the people who own the constraint step – because they control the productivity of the whole process.  They are the guardians of the productivity improvement and their SBCs are the Early Warning System (EWS).

If the tactical SBCs set off an alarm then investigate the root cause immediately – and address it. If they do not then leave it alone and do not meddle.

This is the simplified version of the recipe. The essential framework.

Reality is messier. More complicated. More fun!

Reality throws in lots of rusty spanners so we do also need to understand how to manage the complexity; the unnecessary steps; the errors; the meddlers; and the inevitable variation.  It is possible (though not trivial) to design real systems to deliver much higher productivity by using the framework above and by mastering a number of other tools and techniques.  And for that to succeed the Governance, Operations and Finance functions need to collaborate closely with the People and the Process – initially with guidance from an experienced and competent Improvement Scientist. But only initially. This is a learnable skill. And it takes practice to master – so start with easy ones and work up.

If any of these bits are missing or are dysfunctional the recipe will not work. So that is the first nettle the Executive must grasp. Get everyone who is necessary on the same bus going in the same direction – and show the cynics the exit. Skeptics are OK – they will counter-balance the Optimists. Cynics add no value and are a liability.

What you may have noticed is that 8 of the 10 steps happen before any change is made. 80% of the effort is in the design – only 20% is in the doing.

If we get the design wrong then the doing will be an ineffective and inefficient waste of effort, time and money.


The best complement to real Improvement PIE is a FISH course.


NIGYYSOB

Think of the infamous headline printed on May 4th 1982 in a well-known UK newspaper.  It refers to the sinking of the General Belgrano in the Falklands war.

It is the clarion call of revenge – the payback for past grievances.

The full title is NIGYYSOB which stands for Now I Gotcha You Son Of a B**** and is the name of one of Eric Berne’s Games that People Play.  In this case it is a Level 4 Game – played out on the global stage by the armed forces of the protagonists and resulting in both destruction and death.


The NIGYYSOB game is played out much more frequently at Level 1 – in the everyday interactions between people – people who believe that revenge has a sweet taste.

The reason this is important to the world of Improvement Science is because sometimes a well-intentioned improvement can get unintentionally entangled in a game of NIGYYSOB.

Here is how the drama unfolds.

Someone complains frequently about something that is not working, a Niggle, which they believe they are powerless to solve. Their complaints are either ignored, discounted or not acted upon because the person with the assumed authority to resolve it does not know how and will not admit that.  This stalemate can fester for a long time and can build up a Reservoir of Resentment. The Niggle persists and keeps irritating the emotional wound which remains an open cultural sore.  It is not unusual for a well-intentioned third party to intervene to resolve the standoff, but they too are unable to resolve the underlying problem, and all that results is either meddling or diktat, which can actually make the problem worse.

The outcome is a festering three-way stalemate with a history of failed expectations and a deepening Well of Cynicism.

Then someone with an understanding of Improvement Science appears on the scene – and the stage is set for a new chapter of the drama because they risk being “hooked” into The Game.  The newcomer knows how to resolve the problem and, with the grudging consent of the three protagonists, as if by magic, the Niggle is dissolved.  Wow!   The walls of the Well of Cynicism are breached by the new reality and the three protagonists suddenly realise that they may need to radically re-evaluate their worldviews.  That was not expected!

What can happen next is an emotional backlash – rather like a tight elastic band being released at one end. Twang! Snap! Ouch!


We all have the same psychological reaction to a sudden and surprising change in our reality – be it for the better or for the worse. It takes time to adjust to a new worldview and that transition phase is both fragile and unstable; so there is a risk of going off course.

Experience teaches us that it does not take much to knock the tentative improvement over.


The application of Improvement Science will generate transitions that need to be anticipated and proactively managed because if this is not done then there is a risk that the emotional backlash will upset the whole improvement apple-cart.

What appears to occur is: after reality shows that the improvement has worked then the realisation dawns that the festering problem was always solvable, and the chronic emotional pain was avoidable. This comes as a psychological shock that can trigger a reflex emotional response called anger: the emotion that signals the unconscious perception of sudden loss of the old, familiar, worldview. The anger is often directed externally and at the perceived obstruction that blocked the improvement; the person who “should” have known what to do; often the “boss”.  This backlash, the emotional payoff, carries the implied message of “You are not OK because you hold the power, and you could not solve this, and you were too arrogant to ask for help and now I have proved you wrong and that I was right all the time!”  Sweet-tasting revenge?

Unfortunately not. The problem is that this emotional backlash damages the fragile, emerging, respectful relationship and can effectively scupper any future tentative inclinations to improve. The chronic emotional pain returns even worse than before; the Well of Cynicism deepens; and the walls are strengthened and become less porous.

The improvement is not maintained and it dies of neglect.


The reality of the situation was that none of the three protagonists actually knew what to do – hence the stalemate – and the only way out of that situation is for them all to recognise and accept the reality of their collective ignorance – and then to learn together.

Managing the improvement transition is something that an experienced facilitator needs to understand. If there is a them-and-us cultural context; a frustrated standoff; a high pressure store of accumulated bad feeling; and a deep well of cynicism then that emotional abscess needs to be diagnosed, incised and drained before any attempt at sustained improvement can be made.

If we apply direct pressure on an emotional abscess then it is likely to rupture and squirt you with cynicide; or worse still force the emotional toxin back into the organisation and poison the whole system. (Email is a common path-of-low-resistance for emotional toxic waste!).

One solution is to appreciate that the toxic emotional pressure needs to be released in a safe and controlled way before the healing process can start.  Most of the pain goes away as soon as the abscess is lanced – the rest dissipates as the healing process engages.

One model that is helpful in proactively managing this dynamic is the Elisabeth Kübler-Ross model of grief which describes the five stages: denial, anger, bargaining, depression, and acceptance.  Grief is the normal emotional reaction to a sudden change in reality – such as the loss of a loved one – and the same psychological process operates for all emotionally significant changes.  The facilitator just needs to provide a game-free and constructive way to manage the anger by reinvesting the passion into the next cycle of improvement.  A more recent framework for this is the Lewis-Parker model which has seven stages:

  1. Immobilisation – Shock. Overwhelmed mismatch: expectations vs reality.
  2. Denial of Change – Temporary retreat. False competence.
  3. Incompetence – Awareness and frustration.
  4. Acceptance of Reality – ‘Letting go’.
  5. Testing – New ways to deal with new reality.
  6. Search for Meaning – Internalisation and seeking to understand.
  7. Integration – Incorporation of meanings within behaviours.

An effective tool for getting the emotional rollercoaster moving is The 4N Chart® – it allows the emotional pressure and pain to be released in a safe way. The complementary tool for diagnosing and treating the cultural abscess is called AFPS (Argument Free Problem Solving) which is a version of Edward De Bono’s Six Thinking Hats®.

The two are part of the improvement-by-design framework called 6M Design® which in turn is a rational, learnable, applicable and teachable manifestation of Improvement Science.

 

The Rubik Cube Problem

Look what popped out of Santa’s sack!

I have not seen one of these for years and it brought back memories of hours of frustration and time wasted in attempting to solve it myself; a sense of failure when I could not; a feeling of envy for those who knew how to; and a sense of indignation when they jealously guarded the secret of their “magical” power.

The Rubik Cube got me thinking – what sort of problem is this?

At first it is easy enough but it quickly becomes apparent that the task gets more difficult the closer we get to the final solution – because our attempts to reach perfection undo our previous good work.  It is very difficult to maintain our initial improvement while exploring new options.

This insight struck me as very similar to many of the problems we face in life and the sense of futility that creates a powerful force that resists further attempts at change.  Fortunately, we know that it is possible to solve the Rubik cube – so the question this raises is “Is there a way to solve it in a rational, reliable and economical way from any starting point?”

One approach is to try every possible combination of moves until we find the solution. That is the way a computer might be programmed to solve it – the zero intelligence or brute force approach.

The problem here is that it works in theory but fails in practice because of the number of possible combinations of moves. At each step you can move one of the six faces in one of two directions – that is 12 possible options; and for each of these there are 12 second moves or 12 x 12 possible two-move paths; 12 x 12 x 12 = 1728 possible three-move paths; about 3 million six-move paths; and nearly half a billion eight-move paths!

You get the idea – solving it this way is not feasible unless you are already very close to the solution.
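If you want to check that arithmetic, a couple of lines of code will do it:

    # 12 options per move (6 faces x 2 directions) - the path count explodes
    for moves in (1, 2, 3, 6, 8):
        print(f"{moves}-move paths: {12 ** moves:,}")
    # 12; 144; 1,728; 2,985,984; 429,981,696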

So how do we actually solve the Rubik Cube?  Well, the instructions that come with a new one tell you – a combination of two well-known ingredients: strategy and tactics. The strategy is called goal-directed, and in my instructions the recommended strategy is to solve each layer in sequence. The tactics are called heuristics: tried-tested-and-learned sequences of actions that are triggered by specific patterns.

At each step we look for a small set of patterns and when we find one we follow the pre-designed heuristic and that moves us forward along the path towards the next goal. Of the billions of possible heuristics we only learn, remember, use and teach the small number that preserve the progress we have already made – these are our magic spells.

So where do these heuristics come from?

Well, we can search for them ourselves or we can learn them from someone else.  The first option holds the opportunity for new insights and possible breakthroughs – the second option is quicker!  Someone who designs or discovers a better heuristic is assured a place in history – most of us only ever learn ones that have been discovered or taught by others – it is a much quicker way to solve problems.  

So, for a bit of fun I compared the two approaches using a computer: the competitive-zero-intelligence-brute-force versus the collaborative-goal-directed-learned-and-shared-heuristics.  The heuristic method won easily every time!

The Rubik Cube is an example of a mechanical system: each of the twenty-six parts is interdependent; we cannot move one facet independently of the others, we can only move groups of nine at a time. Every action we make has nine consequences – not just one.  To solve the whole Rubik Cube system problem we must be mindful of the interdependencies and adopt methods that preserve what works while improving what does not.

The human body is a complex biological system. In medicine we have a phrase for this concept of preserving what works while improving what does not: “primum non nocere” which means “first of all do no harm”.  Doctors are masters of goal-directed heuristics; the medical model of diagnosis before prognosis before treatment is a goal-directed strategy and the common tactic is to quickly and accurately pattern-match from a small set of carefully selected data. 

In reality we all employ goal-directed-heuristics all of the time – it is the way our caveman brains have evolved.  Relative success comes from having a more useful set of heuristics – and these can be learned.  Just as with the Rubik Cube – it is quicker to learn what works from someone who can demonstrate that it works and can explain how it works – than to always laboriously work it out for ourselves.

An organisation is a bio-psycho-socio-economic system: a set of interdependent parts called people connected together by relationships and communication processes we call culture.  Improvement Science is a set of heuristics that have been discovered or designed to guide us safely and reliably towards any goal we choose to select – preserving what has been shown to work and challenging what does not.  Improvement Science does not define the path it only helps us avoid getting stuck, or going around in circles, or getting hopelessly lost while we are on the life-journey to our chosen goal.

And Improvement Science is learnable.

Lies, Damned Lies and Statistics!

Most people are confused by statistics and because of this experts often regard them as ignorant, stupid or both.  However, those who claim to be experts in statistics need to proceed with caution – and here is why.

The people who are confused by statistics are confused for a reason – the statistics they see presented do not make sense to them in their world.  They are not stupid – many are graduates and have high IQs – so this means they must be ignorant and the obvious solution is to tell them to go and learn statistics. This is the strategy adopted in medicine: Trainees are expected to invest some time doing research and in the process they are expected to learn how to use statistics in order to develop their critical thinking and decision making.  So far so good – so what is the outcome?

Well, we have been running this experiment for decades now – there are millions of peer-reviewed papers published – each one having passed the scrutiny of a statistical expert – and yet we still have a health care system that is not delivering what we need at a cost we can afford. So, there must be someone else at fault – maybe the managers! They are not expected to learn or use statistics so that statistically-ignorant rabble must be the problem – so the next plan is “Beat up the managers” and “Put statistically trained doctors in charge”.

Hang on a minute! Before we nail the managers and restructure the system let us step back and consider another more radical hypothesis. What if there is something not right about the statistics we are using? The medical statistics experts will rise immediately and state “Research statistics is a rigorous science derived from first principles and is mathematically robust!”  They are correct. It is. But all mathematical derivations are based on some initial fundamental assumptions so when the output does not seem to work in all cases then it is always worth re-examining the initial assumptions. That is the tried-and-tested path to new breakthroughs and new understanding.

The basic assumption that underlies research statistics is that all measurements are independent of each other which also implies that order and time can be ignored.  This is the reason that so much effort, time and money is invested in the design of a research trial – to ensure that the statistical analysis will be correct and the conclusions will be valid. In other words the research trial is designed around the statistical analysis method and its founding assumption. And that is OK when we are doing research.

However, when we come to apply the output of our research trials to the Real World we have a problem.

How do we demonstrate that implementing the research recommendation has resulted in an improvement? We are outside the controlled environment of research now and we cannot distort the Real World to suit our statistical paradigm.  Are the statistical tools we used for the research still OK? Is the founding assumption still valid? Can we still ignore time? Our answer is clearly “NO” because we are looking for a change over time! So can we assume the measurements are independent – again our answer is “NO” because for a process the measurement we make now is influenced by the system before, and the same system will also influence the next measurement. The measurements are NOT independent of each other.
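To make that loss of independence concrete, here is a minimal sketch comparing truly independent samples with the output of a deliberately over-simplified, hypothetical process in which today’s backlog carries over into tomorrow. The lag-1 correlation – each value against the one that follows it – is near zero for the independent samples and close to one for the process output.

    import random

    def lag1_correlation(xs):
        """Pearson correlation between each value and the one that follows it."""
        a, b = xs[:-1], xs[1:]
        mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
        cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
        ss_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
        ss_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
        return cov / (ss_a * ss_b)

    random.seed(42)  # repeatable illustration

    # Independent measurements - the research statistics assumption.
    independent = [random.gauss(21, 3) for _ in range(500)]

    # A toy process: today's backlog carries over into tomorrow, so each
    # measurement is influenced by the ones that came before it.
    process, backlog = [], 0.0
    for _ in range(500):
        backlog = max(0.0, backlog + random.gauss(0, 3))
        process.append(21 + backlog)

    print("independent samples:", round(lag1_correlation(independent), 2))  # close to 0
    print("process output:     ", round(lag1_correlation(process), 2))      # close to 1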

Our statistical paradigm suddenly falls apart because the founding assumption on which it is built is no longer valid. We cannot use the statistics that we used in the research when we attempt to apply the output of the research to the Real World. We need a new and complementary statistical approach.

Fortunately for us it already exists and it is called improvement statistics and we use it all the time – unconsciously. No doctor would manage the blood pressure of a patient on Ward A  based on the average blood pressure of the patients on Ward B – it does not make sense and would not be safe.  This single flash of insight is enough to explain our confusion. There is more than one type of statistics!

New insights also offer new options and new actions. One action would be that the Academics learn improvement statistics so that they can better understand the world outside research; another action would be that the Pragmatists learn improvement statistics so that they can apply the output of well-conducted research in the Real World in a rational, robust and safe way. When both groups have a common language the opportunities for systemic improvement increase.

BaseLine© is a tool designed specifically to offer the novice a path into the world of improvement statistics.