Burn-and-Scrape


[Ring Ring]

<Bob> Hi Leslie, how are you today?

<Leslie> I am good thanks, Bob, and looking forward to today’s session. What is the topic?

<Bob> We will use your Niggle-o-Gram® to choose something. What is top of the list?

<Leslie> Let me see.  We have done “Engagement” and “Productivity” so it looks like “Near-Misses” is next.

<Bob> OK. That is an excellent topic. What is the specific Niggle?

<Leslie> “We feel scared when we have a safety near-miss because we know that there is a catastrophe waiting to happen.”

<Bob> OK so the Purpose is to have a system that we can trust not to generate avoidable harm. Is that OK?

<Leslie> Yes – well put. When I asked myself the purpose question I got a “do” answer rather than a “have” one. The word trust is key too.

<Bob> OK – what is the current safety design used in your organisation?

<Leslie> We have a computer system for reporting near misses – but it does not deliver the purpose above. If the issue is ranked as low harm it is just counted, if medium harm then it may be mentioned in a report, and if serious harm then all hell breaks loose and there is a root cause investigation conducted by a committee that usually results in a new “you must do this extra check” policy.

<Bob> Ah! The Burn-and-Scrape model.

<Leslie> Pardon? What was that? Our Governance Department call it the Swiss Cheese model.

<Bob> Burn-and-Scrape is where we wait for something to go wrong – we burn the toast – and then we attempt to fix it – we scrape the burnt toast to make it look better. It still tastes burnt though and badly burnt toast is not salvageable.

<Leslie> Yes! That is exactly what happens all the time – most issues never get reported – we just “scrape the burnt toast” at all levels.

<Bob> One flaw with the Burn-and-Scrape design is that harm has to happen for the design to work.

It is all reactive.

Another design flaw is that it focuses attention on the serious harm first – avoidable mortality for example.  Counting the extra body bags completely misses the purpose.  Avoidable death means avoidably shortened lifetime.  Avoidable non-fatal harm will also shorten lifetime – and it is even harder to measure.  Just consider the cumulative effect of all that non-fatal, life-shortening, avoidable-but-ignored harm.

One of the main reasons that we live longer today is that we have removed a lot of lifetime-shortening hazards – like infectious disease and severe malnutrition.

Take health care as an example – accurately measuring avoidable mortality in an inherently high-risk system is rather difficult.  And to conclude “no action needed” from “no statistically significant difference in mortality between us and the global average” is invalid and it leads to a complacent delusion that what we have is good enough.  When it comes to harm it is never “good enough”.

<Leslie> But we do not have the resources to investigate the thousands of cases of minor harm – we have to concentrate on the biggies.

<Bob> And do the near misses keep happening?

<Leslie> Yes – that is why they are top-ranked on the Niggle-o-Gram®.

<Bob> So the Burn-and-Scrape design is not fit-for-purpose.

<Leslie> So it seems. But what is the alternative? If there was one we would be using it – surely?

<Bob> Look back Leslie. How many of the Improvement Science methods that you have already learned are business-as-usual?

<Leslie> Good point. Almost none.

<Bob> And do they work?

<Leslie> You betcha!

<Bob> This is another example.  It is possible to design systems to be safe – so the frequent near misses become rare events.

<Leslie> Is it?  Wow! That know-how would be really useful to have. Can you teach me?

<Bob> Yes. First we need to explore what the benefits would be.

<Leslie> OK – well first there would be no avoidable serious harm and we could trust in the safety of our system – which is the purpose.

<Bob> Yes …. and?

<Leslie> And … all the effort, time and cost spent “scraping the burnt toast” would be released.

<Bob> Yes …. and?

<Leslie> The safer-by-design processes would be quicker and smoother, a more enjoyable experience for both customers and suppliers, and probably less expensive as well!

<Bob> Yes. So what does that all add up to?

<Leslie> A win-win-win-win outcome!

<Bob> Indeed. So a one-off investment of effort, time and money in learning Safety-by-Design methods would appear to be a wise business decision.

<Leslie> Yes indeed!  When do we start?

<Bob> We have already started.


For a real-world example of this approach delivering a significant and sustained improvement in safety click here.

Invisible Design

Improvement Science is all about making some-thing better in some-way by some-means.

There are lots of things that might be improved – almost everything in fact.

There are lots of ways that those things might be improved. If it was a process we might improve safety, quality, delivery, and productivity. If it was a product we might improve reliability, usability, durability and affordability.

There are lots of means by which those desirable improvements might be achieved – lots of different designs.

Multiply that lot together and you get a very big number of options – so it is no wonder we get stuck in the “what to do first?” decision process.

So how do we approach this problem currently?

We use our intuition.

Intuition steers us to the obvious – hence the phrase “intuitively obvious” – which means what looks to our mind’s eye to be a good option. And that is OK. It is usually a lot better than guessing (but not always).

However, the problem with using “intuitively obvious” is that we end up with mediocrity. We get “about average”. We get “OKish”.  We get “satisfactory”. We get “what we expected”. We get “same as always”. We do not get “significantly better-than-average”. We do not get “reliably good”. We do not get improvement. And we do not improve because anyone and everyone can do the “intuitively obvious” stuff.

To improve we need a better-than-average functional design. We need a Reliably Good Design. And that is invisible.

By “invisible” I mean not immediately obvious to our conscious awareness.  We do not notice good functional design because it does not get in the way of achieving our intention.  It does not trip us up.

We notice poor functional design because it trips us up. It traps us into making mistakes. It wastes our time. It fails to meet our expectation. And we are left feeling disappointed, irritated, and anxious. We feel Niggled.

We also notice exceptional design – because it works far better than we expected. We are surprised and we are delighted.

We do not notice Good Design because it just works. But there is a trap here. And that is we habitually link expectation to price.  We get what we paid for.  Higher cost => Better design => Higher expectation.

So we take good enough design for granted. And when we take stuff for granted we are on the slippery slope to losing it. As soon as something becomes invisible it is at risk of being discounted and deleted.

If we combine these two aspects of “invisible design” we arrive at an interesting conclusion.

To get from Poor Design to OK Design and then Good Design we have to think “counter-intuitively”.  We have to think “outside the box”. We have to “think laterally”.

And that is not a natural way for us to think. Not for individuals and not for teams. To get improvement we need to learn a method of how to counter our habit of thinking intuitively and we need to practice the method so that we can do it when we need to. When we need to improve.

To illustrate what I mean let us consider a real example.

Suppose we have 26 cards laid out in a row on a table; each card has a number on it; and our task is to sort the cards into ascending order. The constraint is that we can only move cards by swapping them.  How do we go about doing it?

There are many sorting designs that could achieve the intended purpose – so how do we choose one?

One criterion might be the time it takes to achieve the result. The quicker the better.

One criterion might be the difficulty of the method we use to achieve the result. The easier the better.

When individuals are given this task they usually do something like “scan the cards for the smallest and swap it with the first from the left, then repeat for the second from the left, and so on until we have sorted all the cards“.

This card-sorting-design is fit for purpose.  It is intuitively obvious, it is easy to explain, it is easy to teach and it is easy to do. But is it the quickest?

The answer is NO. Not by a long chalk.  For 26 randomly mixed up cards it will take about 3 minutes if we scan at a rate of 2 per second. If we have 52 cards it will take us about 12 minutes. Four times as long. Using this intuitively obvious design the time taken grows with the square of the number of cards that need sorting.

In reality there are much quicker designs and for this type of task one of the quickest is called Quicksort. It is not intuitively obvious though; it is not easy to describe; but it is easy to do – we just follow the Quicksort Policy.  (For those who are curious you can read about the method here and make up your own mind about how “intuitively obvious” it is.  Quicksort was not invented until 1960, so given that sorting stuff is not a new requirement, it clearly was not obvious for a few thousand years).

Using Quicksort to sort our 52 cards would take less than 3 minutes! That is a four-fold improvement in productivity when we flip from an intuitive to a counter-intuitive design.  And Quicksort was not a chance discovery – it was deliberately designed to address a specific sorting problem – and it was designed using robust design principles.
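To make the comparison concrete, here is a minimal Python sketch (an illustration of the reasoning, not the original card experiment) that counts the comparisons made by the intuitive scan-and-swap design – known to computer scientists as selection sort – and by Quicksort:

```python
import random

def selection_sort_comparisons(cards):
    """The intuitive design: scan for the smallest card, swap it into
    place, repeat. Comparisons grow with the square of the card count."""
    cards = list(cards)
    comparisons = 0
    for i in range(len(cards)):
        smallest = i
        for j in range(i + 1, len(cards)):
            comparisons += 1
            if cards[j] < cards[smallest]:
                smallest = j
        cards[i], cards[smallest] = cards[smallest], cards[i]
    return comparisons

def quicksort_comparisons(cards):
    """Hoare's 1960 design: partition around a pivot, then sort the two
    parts. On average comparisons grow with n times log(n)."""
    if len(cards) <= 1:
        return 0
    pivot, rest = cards[0], cards[1:]
    left = [c for c in rest if c < pivot]
    right = [c for c in rest if c >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

for n in (26, 52):
    deck = random.sample(range(1000), n)
    print(n, selection_sort_comparisons(deck), quicksort_comparisons(deck))
```

Scan-and-swap always makes 325 comparisons for 26 cards (about 3 minutes at 2 comparisons per second) and 1,326 for 52 cards (about 11 minutes); Quicksort typically needs only a few hundred comparisons for 52 cards – minutes rather than tens of minutes.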

So our natural intuition tends to lead us to solutions that are “effective, easy and inefficient” – and that means expensive in terms of use of resources.

This has an important conclusion – if we are all given the same improvement assignment and we all use our intuition to solve it then we will get similar and mediocre results.  It will feel OK and it will appear obvious but there will be no improvement.

We then conclude that “OK, this is the best we can expect.” which is intuitively obvious, logically invalid, and wrong. It is that sort of intuitive thinking trap that blocked us from inventing Quicksort for thousands of years.

And remember, to decide what is “best” we have to explore all options exhaustively – both intuitively obvious and counter-intuitively obscure. That is impossible in practice.  This is why “best” and “optimum” are generally unhelpful concepts in the context of improvement science.

So how do we improve when good design is so counter-intuitive?

The answer is that we learn a set of “good designs” from a teacher who knows and understands them, and then we prove them to ourselves in practice. We leverage the “obvious in retrospect” effect. And we practice until we understand. And then we teach others.

So if we wanted to improve the productivity of our designed-by-intuition card sorting process we could:
(a) consult a known list of proven sorting algorithms,
(b) choose one that meets our purpose (our design specification),
(c) compare the measured performance of our current “intuitively obvious” design with the predicted performance of that “counter-intuitively obscure” design,
(d) set about planning how to implement the higher performance design – possibly as a pilot first to confirm the prediction, reassure the fence-sitters, satisfy the skeptics, and silence the cynics.

So if these proven good designs are counter-intuitive then how do we get them?

The simplest and quickest way is to learn from people who already know and understand them. If we adopt the “not invented by us” attitude and attempt to re-invent the wheel then we may get lucky and re-discover a well-known design, we might even discover a novel design; but we are much more likely to waste a lot of time and end up no better off, or worse. This is called “meddling” and is driven by a combination of ignorance and arrogance.

So who are these people who know and understand good design?

They are called Improvement Scientists – and they have learned one-way-or-another what a good design looks like. That also means they can see poor design where others see the only possible design.

That difference of perception creates a lot of tension.

The challenge that Improvement Scientists face is explaining how counter-intuitive good design works: especially to highly intelligent, skeptical people who habitually think intuitively. They are called Academics.  And it is a pointless exercise trying to convince them using rhetoric.

Instead our Improvement Scientists side-step the “theoretical discussion” and the “cynical discounting” by pragmatically demonstrating the measured effect of good design in practice. They use reality to make the case for good design – not rhetoric.

Improvement Scientists are Pragmatists.

And because they have learned how counter-intuitive good design is to the novice – how invisible it is to their intuition – then they are also Voracious Learners. They have enough humility to see themselves as Eternal Novices and enough confidence to be selective students.  They will actively seek learning from those who can demonstrate the “what” and explain the “how”.  They know and understand it is a much quicker and easier way to improve their knowledge and understanding.  It is Good Design.

 

Do Not Give Up Too Soon

Tangible improvement takes time. Sometimes it takes a long time.

The more fundamental the improvement the more people are affected. The more people involved the greater the psychological inertia. The greater the resistance the longer it takes to show tangible effects.

The advantage of deep-level improvement is that the cumulative benefit is greater – the risk is that the impatient Improvementologist may give up too early – sometimes just before the benefit becomes obvious to all.

The seeds of change need time to germinate and to grow – and not all good ideas will germinate. The green shoots of innovation do not emerge immediately – there is often a long lag and little tangible evidence for a long time.

This inevitable delay is a source of frustration, and the impatient innovator can unwittingly undo their good work.  By pushing too hard they can drag a failure from the jaws of success.

Q: So how do we avoid this trap?

The trick is to understand the effect of the change on the system.  This means knowing where it falls on our Influence Map that is marked with the Circles of Control, Influence and Concern.

Our Circle of Concern includes all those things that we are aware of that present a threat to our future survival – such as a chunk of high-velocity space rock smashing into the Earth and wiping us all out in a matter of milliseconds. Gulp! Very unlikely but not impossible.

Some concerns are less dramatic – such as global warming – and collectively we may have more influence over changing that. But not individually.

Our Circle of Influence lies between the limit of our individual control and the limit of our collective control. This is a broad scope because “collective” can mean two, twenty, two hundred, two thousand, two million, two billion and so on.

Making significant improvements is usually a Circle of Influence challenge and only collectively can we make a difference.  But to deliver improvement at this level we have to influence others to change their knowledge, understanding, attitudes, beliefs and behaviour. That is not easy and that is not quick. It is possible though – with passion, plausibility, persistence, patience – and an effective process.

It is here that we can become impatient and frustrated and are at risk of giving up too soon – and our temperaments influence the risk. Idealists are impatient for fundamental change. Rationals, Guardians and Artisans do not feel the same pain – and it is a rich source of conflict.

So if we need to see tangible results quickly then we have to focus closer to home. We have to work inside our Circle of Individual Influence and inside our Circle of Control.  The scope of individual influence varies from person-to-person but our Circle of Control is the same for all of us: the outer limit is our skin.  We all choose our behaviour and it is that which influences others: for better or for worse.  It is not what we think, it is what we do. We cannot read or control each other’s minds. We can all choose our attitudes and our actions.

So if we want to see tangible improvement quickly then we must limit the scope of our action to our Circle of Individual Influence and get started.  We do what we can and as soon as we can.

Choosing what to do and what not to do requires wisdom. That takes time to develop too.


Making an impact outside the limit of our Circle of Individual Influence is more difficult because it requires influencing many other people.

So it is especially rewarding to see examples of how individual passion, persistence and patience have led to profound collective improvement.  It proves that it is still possible. It provides inspiration and encouragement for others.

One example is the recently published Health Foundation Quality, Cost and Flow Report.

This was a three-year experiment to test if the theory, techniques and tools of Improvement Science work in healthcare: specifically in two large UK acute hospitals – Sheffield and Warwick.

The results showed that Improvement Science does indeed work in healthcare and it worked for tough problems that were believed to be very difficult if not impossible to solve. That is very good news for everyone – patients and practitioners.

But the results have taken some time to appear in published form – so it is really good news to report that the green shoots of improvement are now there for all to see.

The case studies provide hard evidence that win-win-win outcomes are possible and achievable in the NHS.

The Impossibility Hypothesis has been disproved. The cynics can step off the bus. The skeptics have their evidence and can now become adopters.

And the report offers a lot of detail on how to do it including two references that are available here:

  1. A Recipe for Improvement PIE
  2. A Study of Productivity Improvement Tactics using a Two-Stream Production System Model

These references both describe the fundamentals of how to align financial improvement with quality and delivery improvement to achieve the elusive win-win-win outcome.

A previously invisible door has opened to reveal a new Land of Opportunity. A land inhabited by Improvementologists who mark the path to learning and applying this new knowledge and understanding.

There are many who do not know what to do to solve the current crisis in healthcare – they now have a new vista to explore.

Do not give up too soon – there is a light at the end of the dark tunnel.

And to get there safely and quickly we just need to learn and apply the Foundations of Improvement Science in Healthcare – and we learn to FISH in our own ponds first.


The Seventh Flow

Bing Bong

Bob looked up from the report he was reading and saw the SMS was from Leslie, one of his Improvement Science Practitioners.

It said “Hi Bob, would you be able to offer me your perspective on another barrier to improvement that I have come up against.”

Bob thumbed a reply immediately “Hi Leslie. Happy to help. Free now if you would like to call. Bob”

Ring Ring

<Bob> Hello, Bob here.

<Leslie> Hi Bob. Thank you for responding so quickly. Can I describe the problem?

<Bob> Hi Leslie – Yes, please do.

<Leslie> OK. The essence of it is that I have discovered that our current method of cash-flow control is preventing improvements in safety, quality, delivery and paradoxically in productivity too. I have tried to talk to the Finance department and all I get back is “We have always done it this way. That is what we are taught. It works. The rules are not negotiable and the problem is not Finance“. I am at a loss what to do.

<Bob> OK. Do not worry. This is a common issue that every ISP discovers at some point. What led you to your conclusion that the current methods are creating a barrier to change?

<Leslie> Well, the penny dropped when I started using the modelling tools you have shown me.  In particular when predicting the impact of process improvement-by-design changes on the financial performance of the system.

<Bob> OK. Can you be more specific?

<Leslie> Yes. The project was to design a new ambulatory diagnostic facility that will allow much more of the complex diagnostic work to be done on an outpatient basis.  I followed the 6M Design approach and looked first at the physical space design. We needed that to brief the architect.

<Bob> OK. What did that show?

<Leslie> It showed that the physical layout had a very significant impact on the flow in the process and that by getting all the pieces arranged in the right order we could create a physical design that felt spacious without actually requiring a lot of space. We called it the “Tardis Effect“. The most marked impact was on the size of the waiting areas – they were really small compared with what we have now which are much bigger and yet still feel cramped and chaotic.

<Bob> OK. So how does that physical space design link to the finance question?

<Leslie> Well, the obvious links were that the new design would have a smaller physical foot-print and at the same time give a higher throughput. It will cost less to build and will generate more activity than if we just copied the old design into a shiny new building.

<Bob> OK. I am sure that the Capital Allocation Committee and the Revenue Generation Committee will have been pleased with that outcome. What was the barrier?

<Leslie> Yes, you are correct. They were delighted because it left more in the Capital Pot for other equally worthy projects. The problem was not capital it was revenue.

<Bob> You said that activity was predicted to increase. What was the problem?

<Leslie> Yes – sorry, I was not clear – it was not the increased activity that was the problem – it was how to price the activity and how to distribute the revenue generated. The Reference Cost Committee and Budget Allocation Committee were the problem.

<Bob> OK. What was the problem?

<Leslie> Well the estimates for the new operational budgets were basically the current budgets multiplied by the ratio of the future planned and historical actual activity. The rationale was that the major costs are people and consumables so the running costs should scale linearly with activity. They said the price should stay as it is now because the quality of the output is the same.

<Bob> OK. That does sound like a reasonable perspective. The variable costs will track with the activity if nothing else changes. Was it apportioning the overhead costs as part of the Reference Costing that was the problem?

<Leslie> No actually. We have not had that conversation yet. The problem was more fundamental. The problem is that the current budgets are wrong.

<Bob> Ah! That statement might come across as a bit of a challenge to the Finance Department. What was their reaction?

<Leslie> To paraphrase, it was “We are just breaking even in the current financial year so the current budget must be correct. Please do not dabble in things that you clearly do not understand.”

<Bob> OK. You can see their point. How did you reply?

<Leslie> I tried to explain the concepts of the Cost-Of-The-Queue and how that cost was incurred by one part of the system with one budget but that the queue was created by a different part of the system with a different budget. I tried to explain that just because the budgets were 100% utilised does not mean that the budgets were optimal.

<Bob> How was that explanation received?

<Leslie> They did not seem to understand what I was getting at and kept saying “Inventory is an asset on the balance sheet. If profit is zero we must have planned our budgets perfectly. We cannot shift money between budgets within year if the budgets are already perfect. Any variation will average out. We have to stick to the financial plan and projections for the year. It works. The problem is not Finance – the problem is you.”

<Bob> OK. Have you described the Seventh Flow and put it in context?

<Leslie> Arrrgh! No! Of course! That is how I should have approached it. Budgets are Cash-Inventories and what we need is Cash-Flow to where and when it is needed and in just the right amount according to the Principle of Parsimonious Pull. Thank you. I knew you would ask the crunch question. That has given me a fresh perspective on it. I will have another go.

<Bob> Let me know how you get on. I am curious to hear the next instalment of the story.

<Leslie> Will do. Bye for now.

Drrrrrrrr

Creating a productive and stable system design requires considering Seven Flows at the same time. The Seventh Flow is cash flow.

Cash is like energy – it is only doing useful work when it is flowing.

Energy is often described as having two forms – potential energy and kinetic energy.  The ‘doing’ happens when one form is being converted from potential to kinetic. Cash in the budget is like potential energy – sitting there ready to do some business.  Cash flow is like kinetic energy – it is the business.

The most versatile form of energy that we use is electrical energy. It is versatile because it can easily be converted into other forms – e.g. heat, light and movement. Since the late 1800s our whole society has become highly dependent on electrical energy.  But electrical energy is tricky to store and even now our battery technology is pretty feeble. So, if we want to store energy we use a different form – chemical energy.  Gas, oil and coal – the fossil fuels – are all ancient stores of chemical energy that were originally derived from sunlight captured by vast carboniferous forests over millions of years. These carbon-rich fossil fuels are convenient to store near where they are needed, and when they are needed. But fossil fuels have a number of drawbacks. One is that they release their stored carbon when they are “burned”.  Another is that they are not renewable.  So, in the future we will need to develop better ways to capture, transport, use and store the energy from the Sun that will flow in glorious abundance for millions of years to come.

Plants discovered millions of years ago how to do this sunlight-to-chemical energy conversion and that biological legacy is built into every cell in every plant on the planet. Animals just do the reverse trick – they convert chemical-to-electrical. Every cell in every animal on the planet is a microscopic electrical generator that “burns” chemical fuel – carbohydrate. The other products are carbon dioxide and water. Plants use sunlight to recycle and store the carbon dioxide. It is a resilient and sustainable design.

Plants seemingly have it easy – the sunlight comes to them – they just sunbathe all day!  The animals have to work a bit harder – they have to move about gathering their chemical fuel. Some animals just feed on plants, others feed on other animals, and we do a bit of both. This food-gathering is a more complicated affair – and it creates a problem. Animals need a constant supply of energy – so they have to carry a store of chemical fuel around with them. That store is heavy so it needs energy to move it about.  Herbivores can be bigger and less intelligent because their food does not run away.  Carnivores need to be more agile; both physically and mentally. A balance is required. A big enough fuel store but not too big.  So, some animals have evolved additional strategies. Animals have become very good at not wasting energy – because the more that is wasted the more food that is needed and the greater the risk of getting eaten or getting too weak to catch the next meal.

To illustrate how amazing animals are at energy conservation we just need to look at an animal structure like the heart. The heart is there to pump blood around. Blood carries chemical nutrients and waste from one “department” of the body to another – just like ships, rail, roads and planes carry stuff around the world.

Blood is a sticky, viscous fluid that requires considerable energy to pump around the body and, because it is pumped continuously by the heart, even a small improvement in the energy efficiency of the circulation design has a big long-term cumulative effect. The flow of blood to any part of the body must match the requirements of that part.  If the blood flow to your brain slows down for even a few seconds the brain cannot work properly and you lose consciousness – it is called “fainting”.

If the flow of blood to the brain is stopped for just a few minutes then the brain cells actually die. That is called a “stroke”. Our brains use a lot of electrical energy to do their job and our brain cells do not have big stores of fuel – so they need constant re-supply. And our brains are electrically active all the time – even when we are sleeping.

Other parts of the body are similar. Muscles for instance. The difference is that the supply of blood that muscles need is very variable – it is low when resting and goes up with exercise. It has been estimated that the change in blood flow for a muscle can be 30 fold!  That variation creates a design problem for the body because we need to maintain the blood flow to the brain at all times but we only want blood to be flowing to the muscles in just the amount that they need, where they need it and when they need it. And we want to minimise the energy required to pump the blood at all times. How then is the total and differential allocation of blood flow decided and controlled?  It is certainly not a conscious process.

The answer is that the brain and the muscles control their own flow. It is called autoregulation.  They open the tap when needed and just as importantly they close the tap when not needed. It is called the Principle of Parsimonious Pull. The brain directs which muscles are active but it does not direct the blood supply that they need. They are left to do that themselves.

So, if we equate blood-flow and energy-flow to cash-flow then we arrive at a surprising conclusion. The optimal design, the most energy and cash efficient, is where the separate parts of the system continuously determine the energy/cash flow required for them to operate effectively. They control the supply. They autoregulate their cash-flow. They pull only what they need when they need it.

BUT

For this to work then every part of the system needs to have a collaborative and parsimonious pull-design philosophy – one that wastes as little energy and cash as possible.  Minimum waste of energy requires careful design – it is called ergonomic design. Minimum waste of cash requires careful design – it is called economic design.

Many socioeconomic systems are fragmented and have parts that behave in a “greedy” manner and that compete with each other for resources. It is a dog-eat-dog design. They will use whatever resources they can get for fear of being starved. Greed is Good. Collaboration is Weak.  In such a competitive situation a rigid-budget design is a requirement because it helps prevent one part selfishly and blindly destabilising the whole system for all. The problem is that this rigid financial design blocks change so it blocks improvement.

This means that greedy, competitive, selfish systems are unable to self-improve.

So, when the world changes too much and their survival depends on change then they risk becoming extinct just as the dinosaurs did.

Many will challenge this assertion by saying “But competition drives up performance”.  Actually, it is not as simple as that. Competition will weed out the weakest who “die” and remove themselves from the equation – apparently increasing the average. What actually drives improvement is customer choice. Organisations that are able to self-improve will create higher-quality and lower-cost products and in a globally-connected-economy the customers will vote with their wallets. The greedy and selfish competition lags behind.

So, to ensure survival in a global economy the Seventh Flow cannot be rigidly restricted by annually allocated departmental budgets. It is a dinosaur design.

And there is no difference between public and private organisations. The laws of cash-flow physics are universal.

How then is the cash flow controlled?

The “trick” is to design a monitoring and feedback component into the system design. This is called the Sixth Flow – and it must be designed so that just the right amount of cash is pulled to just the right places, at just the right time, and for just as long as needed to maximise the revenue.  The rest of the design – First Flow to Fifth Flow – ensures the total amount of cash needed is a minimum.  All Seven Flows are needed.

So the essential ingredient for financial stability and survival is Sixth and Seventh Flow Design capability. That skill has another name – it is called Value Stream Accounting which is a component of complex adaptive systems engineering (CASE).

What? Never heard of Value Stream Accounting?

Maybe that is just another Error of Omission?

Creep-Crack-Crunch

The current crisis of confidence in the NHS has all the hallmarks of a classic system behaviour called creep-crack-crunch.

The first obvious crunch may feel like a sudden shock but it is usually not a complete surprise and it is actually one of a series of cracks that are leading up to a BIG CRUNCH. These cracks are an early warning sign of pressure building up in parts of the system and causing localised failures. These cracks weaken the whole system. The underlying cause is called creep.

[Panorama: San Francisco after the 1906 earthquake]

Earthquakes are a perfect example of this phenomenon. Geological time scales are measured in thousands of years and we now know that the surface of the earth is a dynamic structure with vast continent-sized plates of solid rock floating on a liquid core of molten magma. Over millions of years the continents have moved huge distances and the world we see today on our satellite images is just a single frame in a multi-billion year geological video.  That is the geological creep bit. The cracks first appear at the edges of these tectonic plates where they smash into each other, grind past each other or are pulled apart from each other.  The geological hot-spots are marked out on our global map by lofty mountain ranges, fissured earthquake zones, and deep mid-ocean trenches. And we know that when a geological crunch arrives it happens in a blink of the geological eye.

The panorama above shows the devastation of San Francisco caused by the 1906 earthquake. San Francisco is built on the San Andreas Fault – the junction between the Pacific plate and the North American plate. The dramatic volcanic eruption in Iceland in 2010 came and went in a matter of weeks but the irreversible disruption it caused for global air traffic will be felt for years. The undersea earthquakes that caused the devastating tsunamis in 2004 and 2011 lasted only a few minutes; the deadly shock waves crossed an ocean in a matter of hours; and when they arrived the silent killer wiped out whole shoreside communities in seconds. Tens of thousands of lives were lost and the social after-shocks of those geological-crunches will be felt for decades.

These are natural disasters. We have little or no influence over them. Human-engineered disasters are a different matter – and they are just as deadly.

The NHS is an example. We are all painfully aware of the recent crisis of confidence triggered by the Francis Report. Many could see the cracks appearing and tried to blow their warning whistles but with little effect – they were silenced with legal gagging clauses and the opening cracks were papered over. It was only after the crunch that we finally acknowledged what we already knew and we started to search for the creep. Remorse and revenge do not bring back those who have been lost.  We need to focus on the future and not just point at the past.

[Figure: UK population pyramid, 2013]

Socio-economic systems evolve at a pace that is measured in years. So when a social crunch happens it is necessary to look back several decades for the tell-tale symptoms of creep and the early signs of cracks appearing.

Two objective measures of a socio-economic system are population and expenditure.

Population is people-in-progress; and national expenditure is the flow of the cash required to keep the people-in-progress watered, fed, clothed, housed, healthy and occupied.

The diagram above is called a population pyramid and it shows the distribution by gender and age of the UK population in 2013. The wobbles tell a story. It does rather look like the profile of a bushy-eyebrowed, big-nosed, pointy-chinned old couple standing back-to-back and maybe there is a hidden message for us there?

The “eyebrow” between ages 62 and 67 is the increase in births that happened 62 to 67 years ago: between 1946 and 1951. The post-WWII baby boom.  The “nose” of 42-52 year olds is the “children of the 60’s” which was a period of rapid economic growth and new optimism. The “upper lip” at 32-42 correlates with the 1970’s which was a period of stagnant growth, high inflation, strikes, civil unrest and the dark threat of global thermonuclear war. This “stagflation” is now believed to have been triggered by political meddling in the Middle-East that led to the 1974 OPEC oil crisis and culminated in the “winter of discontent” in 1979.  The “chin” signals there was another population expansion in the 1980s when optimism returned (SALT-II was signed in 1979) and the economy was growing again. Then the “neck” contraction in the 1990’s after the 1987 Black Monday global stock market crash.  Perhaps the new optimism of the Third Millennium led to the “chest” expansion but the financial crisis that followed the bursting of the sub-prime bubble in 2008 has yet to show its impact on the population chart. This static chart only tells part of the story – the animated chart reveals a significant secondary expansion of the 20-30 year old age group over the last decade. This cannot have been caused by births and is evidence of immigration of a large number of young couples – probably from the expanding European Union.

If this “yo-yo” population pattern is repeated then the current economic downturn will be followed by a contraction at the birth end of the spectrum and possibly also net emigration. And that is a big worry because each population wave takes about 100 years to propagate through the system. The most economically productive population – the 20-60 year olds – are the ones who pay the care bills for the rest. So having a population curve with lots of wobbles in it causes long-term socio-economic instability.

Armed with this big-picture, long-timescale perspective – the evidence of an NHS safety and quality crunch, and the silenced voices of cracks being papered over – let us look for the historical evidence of the creep.

Nowadays the data we need is literally at our fingertips – and there is a vast ocean of it to swim around in – and to drown in if we are not careful.  The Office of National Statistics (ONS) is a rich mine of UK socioeconomic data – it is the source of the histogram above.  The trick is to find the nuggets of knowledge in the haystack of facts and then to convert the tables of numbers into something that is a bit more digestible and meaningful. This is what Russ Ackoff describes as the difference between Data and Information. The data-to-information conversion needs context.

Rule #1: Data without context is meaningless – and is at best worthless and at worst is dangerous.

With respect to the NHS there is a Minotaur’s Labyrinth of data warehouses – it is fragmented but it is out there – in cyberspace. The Department of Health publishes some on public sites but it is a bit thin on context so it can be difficult to extract the meaning.

Relying on our memories to provide the necessary context is fraught with problems. Memories are subject to a whole range of distortions, deletions, denials and delusions.  The NHS has been in existence since 1948 and there are not many people who can personally remember the whole story with objective clarity.  Fortunately cyberspace again provides some of what we need and with a few minutes of surfing we can discover something like a website that chronicles the history of the NHS in decades from its creation in 1948 – http://www.nhshistory.net/ – created and maintained by one person and a goldmine of valuable context. The decade that is of particular interest is 1998-2007 – Chapter 6.

With just some data and some context it is possible to pull together the outline of the bigger picture of the decade that led up to the Mid Staffordshire healthcare quality crunch.

We will look at this as a NHS system evolving over time within its broader UK context. Here is the time-series chart of the population of England – the source of the demand on the NHS.

[Figure: Population of England, 1984-2010]

This shows a significant and steady increase in population – 12% overall between 1984 and 2012.

This aggregate hides a 9% increase in the under 65 population and 29% growth in the over 65 age group.

This is hard evidence of demographic creep – a ticking health and social care time bomb. And the curve is getting steeper. The pressure is building.

The next bit of the map we need is a measure of the flow through hospitals – the activity – and this data is available as the annual HES (Hospital Episodes Statistics) reports.  The full reports are hundreds of pages of fine detail but the headline summaries contain enough for our present purpose.

[Figure: NHS HES admissions, 1997-2011]

The time-series chart shows a steady increase in hospital admissions. Drilling into the summaries revealed that just over a third are emergency admissions and the rest are planned or maternity.

In the decade from 1998 to 2008 there was a 25% increase in hospital activity. This means more work for someone – but how much more and who for?

But does it imply more NHS beds?

Beds require wards, buildings and infrastructure – but it is the staff that deliver the health care. The bed is just a means of storage.  One measure of capacity and cost is the number of staffed beds available to be filled.  But this is like measuring the number of spaces in a car park – it does not say much about flow – it is just a measure of the maximum possible work in progress – the available space to hold the queue of patients who are somewhere between admission and discharge.

Here is the time-series chart of the number of NHS beds from 1984 to 2006. There was a big fall in the number of beds in the decade after 1984 [Why was that?]

[Figure: NHS beds, 1984-2006]

Between 1997 and 2007 there was about a 10% fall in the number of beds. The NHS patient warehouse was getting smaller.

But the activity – the flow – grew by 25% over the same time period: so the Laws Of Physics say that the flow must have been faster.

The average length of stay must have been falling.
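We can put a number on that with Little’s Law (work-in-progress = flow × lead time). A minimal sketch of the arithmetic, assuming bed occupancy stayed roughly constant over the period:

```python
beds_ratio = 0.90   # beds fell by about 10%
flow_ratio = 1.25   # admissions rose by about 25%

# Little's Law: occupied beds = admission rate x average length of stay,
# so (if occupancy held steady) the relative length of stay is:
los_ratio = beds_ratio / flow_ratio
print(f"relative average length of stay: {los_ratio:.2f}")  # ~0.72, about 28% shorter
```

So the implied average length of stay fell by roughly a quarter – the flow did indeed get faster.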

This insight has another implication – fewer beds must mean smaller hospitals and lower costs – yes?  After all, everyone seems to equate beds to cost: more-beds-cost-more, less-beds-cost-less. It sounds reasonable. But higher flow means more demand and more workload so that would require more staff – and that means higher costs. So which is it? Less, the same or more cost?

[Figure: NHS employees, 1996-2007]

The published data says that staff headcount went up by 25% – which correlates with the increase in activity. That makes sense.

And it looks like it “jumped” up in 2003 so something must have triggered that. More cash pumped into the system perhaps? Was that the effect of the Wanless Report?

But what type of staff? Doctors? Nurses? Admin and Clerical? Managers?  The European Working Time Directive (EWTD) forced junior doctors hours down and prompted an expansion of consultants to take on the displaced service work. There was also a gradual move towards specialisation and multi-disciplinary teams. What impact would that have on cost? Higher most likely. The system is getting more complex.

Of course not all costs have the same impact on the system. About 4% of staff are classified as “management” and it is this group that is responsible for strategic and tactical planning. Managers plan the work – workers work the plan.  The cost and efficiency of the management component of the system is not as useful a metric as the effectiveness of its collective decision making. Unfortunately there does not appear to be any published data on management decision-making quality and effectiveness. So we cannot estimate cost-effectiveness. Perhaps that is because it is not as easy to measure effectiveness as it is to count admissions, discharges, head counts, costs and deaths. Some things that count cannot easily be counted. The 4% number is also meaningless. The human head represents about 4% of the bodyweight of an adult person – and we all know that it is not the size of our heads that is important, it is the effectiveness of the decisions that it makes which really counts!  Effectiveness, efficiency and costs are not the same thing.

Back to the story. The number of beds went down by 10% and the number of staff went up by 25% which means that the staff-per-bed ratio went up by nearly 40%.  Does this mean that each bed has become 25% more productive or 40% more productive or less productive? [What exactly do we mean by “productivity”?]

To answer that we need to know what the beds produced – the discharges from hospital and not just the total number, we need the “last discharges” that signal the end of an episode of hospital care.

[Figure: NHS last discharges, 1998-2011]

The time-series chart of last-discharges shows the same pattern as the admissions: as we would expect.

This output has two components – patients who leave alive and those who do not.

So what happened to the number of deaths per year over this period of time?

That data is also published annually in the Hospital Episode Statistics (HES) summaries.

This is what it shows ….

[Figure: NHS absolute hospital deaths, 1998-2011]

The absolute hospital mortality is reducing over time – but not steadily. It went up and down between 2000 and 2005 – and has continued on a downward trend since then.

And to put this into context – the UK annual mortality is about 600,000 per year. That means that only about 40% of deaths happen in hospitals. UK annual mortality is falling and births are rising so the population is growing bigger and older.  [My head is now starting to ache trying to juggle all these numbers and pictures in it].

This is not the whole story though – if the absolute hospital activity is going up and the absolute hospital mortality is going down then this raw mortality number may not be telling the whole picture. To correct for those effects we need the ratio – the Hospital Mortality Ratio (HMR).

[Figure: NHS hospital mortality ratio, 1998-2011]

This is the result of combining these two metrics – a 40% reduction in the hospital mortality ratio.

Does this mean that NHS hospitals are getting safer over time?

This observed behaviour can be caused by hospitals getting safer – it can also be caused by hospitals doing more low-risk work that creates a dilution effect. We would need to dig deeper to find out which. But that will distract us from telling the story.

Back to productivity.

The other part of the productivity equation is cost.

So what about NHS costs?  A bigger, older population, more activity, more staff, and better outcomes will all cost more taxpayer cash, surely! But how much more?  The activity and head count has gone up by 25% so has cost gone up by the same amount?

[Figure: NHS annual spend, adjusted to 2009 prices]

This is the time-series chart of the cost per year of the NHS and, because buying power changes over time, it has been adjusted using the Consumer Price Index with 2009 as the reference year – so the historical cost is roughly comparable with current prices.
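For anyone who wants to reproduce the chart, the inflation adjustment itself is a one-line calculation; the figures below are hypothetical and purely to show the method:

```python
def to_2009_prices(nominal_spend, cpi_that_year, cpi_2009=100.0):
    """Re-express a nominal annual spend in 2009 prices using the CPI."""
    return nominal_spend * cpi_2009 / cpi_that_year

# Hypothetical example: £40bn spent in a year when the CPI index stood at 65
print(f"£{to_2009_prices(40e9, 65.0) / 1e9:.1f}bn in 2009 prices")  # ~£61.5bn
```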

The cost has gone up by 100% in one decade!  That is a lot more than 25%.

The published financial data for 2006-2010 shows that the proportion of NHS spending that goes to hospitals is about 50% and this has been relatively stable over that period – so it is reasonable to say that the increase in cash flowing to hospitals has been about 100% too.

So if the cost of hospitals is going up faster than the output then productivity is falling – and in this case it works out as a 37% drop in productivity (25% increase in activity for 100% increase in cost = 37% fall in productivity).

So the available data – which anyone with a computer, an internet connection, and some curiosity can get, and with a bit of spreadsheet noggin can turn into pictures – shows that over the decade of growth that led up to the Mid Staffs crunch we had:

1. A slightly bigger population; and a
2. significantly older population; and a
3. 25% increase in NHS hospital activity; and a
4. 10% fall in NHS beds; and a
5. 25% increase in NHS staff; which gives a
6. 40% increase in staff-per-bed ratio; and an
7. 8% reduction in absolute hospital mortality; which gives a
8. 40% reduction in relative hospital mortality; and a
9. 100% increase in NHS hospital cost; which gives a
10. 37% drop in “hospital productivity” (a quick arithmetic check follows below).
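Items 6 and 10 are derived from the others and the arithmetic is easy to check – a quick sketch using the headline figures above:

```python
activity = 1.25   # item 3: activity up 25%
beds     = 0.90   # item 4: beds down 10%
staff    = 1.25   # item 5: staff up 25%
cost     = 2.00   # item 9: inflation-adjusted cost up 100%

print(f"staff-per-bed ratio: {staff / beds - 1:+.1%}")       # +38.9%, item 6
print(f"hospital productivity: {activity / cost - 1:+.1%}")  # -37.5%, item 10
```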

An experienced Improvement Scientist knows that a system that has been left to evolve by creep-crack-and-crunch can be re-designed to deliver higher quality and higher flow at lower total cost.

The safety creep at Mid-Staffs is now there for all to see. A crack has appeared in our confidence in the NHS – and raises a couple of crunch questions:

Where Has All The Extra Money Gone?

 How Will We Avoid The BIG CRUNCH?

The huge increase in NHS funding over the last decade was the recommendation of the Wanless Report but the impact of implementing the recommendations has never been fully explored. Healthcare is a service system that is designed to deliver two intangible products – health and care. So the major cost is staff-time – particularly the clinical staff.  A 25% increase in head count and a 100% increase in cost implies that the heads are getting more expensive.  Either a higher proportion of more expensive clinically trained and registered staff, or more pay for the existing staff, or both.  The evidence shows that about 50% of NHS staff are doctors and nurses, and over the last decade there has been a bigger increase in the number of doctors than nurses. Added to that, the Agenda for Change programme effectively increased the total wage bill and the new contracts for GPs and Consultants added more upward wage pressure.  This is cost creep and it adds up over time. The Kings Fund looked at the impact in 2006 and suggested that, in that year alone, 72% of the additional money was sucked up by bigger wage bills and other cost-pressures! The previous year they estimated 87% of the “new money” had disappeared the same way. The extra cash is gushing through the cracks in the bottom of the fiscal bucket that had been clumsily papered-over. And these are recurring revenue costs so they add up over time into a future financial crunch.  The biggest one may be yet to come – the generous final-salary pensions that public-sector employees enjoy!

So it is even more important that the increasingly expensive clinical staff are not being forced to spend their time doing work that has no direct or indirect benefit to patients.

Trying to do a good job in a poorly designed system is both frustrating and demotivating – and the outcome can be a cynical attitude of “I only work here to pay the bills“. But as public sector wages go up and private sector pensions evaporate the cynics are stuck in a miserable job that they cannot afford to give up. And their negative behaviour poisons the whole pool. That is the long term cumulative cultural and financial cost of poor NHS process design. That is the outcome of not investing earlier in developing an Improvement Science capability.

The good news is that the time-series charts illustrate that the NHS is behaving like any other complex, adaptive, human-engineered value system. This means that the theory, techniques and tools of Improvement Science and value system design can be applied to answer these questions. It means that the root causes of the excessive costs can be diagnosed and selectively removed without compromising safety and quality. It means that the savings can be wisely re-invested to improve the resilience of some parts and to provide capacity in other parts to absorb the expected increases in demand that are coming down the population pipe.

This is Improvement Science. It is a learnable skill.

18/03/2013: Update

The question “Where Has The Money Gone?” has now been asked at the Public Accounts Committee

 

Curing Chronic Carveoutosis

Last week the Ray Of Hope briefly illuminated a very common system design disease called carveoutosis.  This week the RoH will tarry a little longer to illuminate an example that reveals the value of diagnosing and treating this endemic process ailment.

Do you remember the days when we used to have to visit the Central Post Office in our lunch hour to access a quality-of-life-critical service that only a Central Post Office could provide – like getting a new road tax disc for our car?  On walking through the impressive Victorian entrances of these stalwart high street institutions our primary challenge was to decide which queue to join.

In front of each gleaming mahogany, brass and glass counter was a queue of waiting customers. Behind was the Post Office operative. We knew from experience that to be in-and-out before our lunch hour expired required deep understanding of the ways of people and processes – and a savvy selection.  Some queues were longer than others. Was that because there was a particularly slow operative behind that counter? Or was it because there was a particularly complex postal problem being processed? Or was it because the customers who had been waiting longer had identified that queue as fast-flowing and had defected to it from their more torpid streams? We know that size is not a reliable indicator of speed or quality.

The social pressure is now mounting … we must choose … dithering is a sign of weakness … and swapping queues later is another abhorrent behaviour. So we employ our most trusted heuristic – we join the end of the shortest queue. Sometimes it is a good choice, sometimes not so good!  But intuitively it feels like the best option.

Of course if we choose wisely and we succeed in leap-frogging our fellow customers then we can swagger (just a bit) on the way out. And if not we can scowl and mutter oaths at others who (by sheer luck) leap-frog us. The Post Office Game is fertile soil for the Ain’t It Awful game which we play when we arrive back at work.

But those days are past and now we are more likely to encounter a single-queue when we are forced by necessity to embark on a midday shopping sortie. As we enter we see the path of the snake thoughtfully marked out with rope barriers or with shelves hopefully stacked with just-what-we-need bargains to stock up on as we drift past.  We are processed FIFO (first-in-first-out) which is fairer-for-all and avoids the challenge of the dreaded choice-of-queue. But the single-queue snake brings a new challenge: when we reach the head of the snake we must identify which operative has become available first – and quickly!

Because if we falter then we will incur the shame of the finger-wagging or the flashing red neon arrow that is easily visible to the whole snake; and a painful jab in the ribs from the impatient snaker behind us; and a chorus of tuts from the tail of the snake. So as we frantically scan left and right along the line of bullet-proof glass cells looking for clues of imminent availability we run the risk of developing acute vertigo or a painful repetitive-strain neck injury!

So is the single-queue design better?  Do we actually wait less time, the same time or more time? Do we pay a fair price for the fair-for-all queue design? The answer is not intuitively obvious because when we are forced to join a lone and long queue it goes against our gut instinct. We feel the urge to push.

The short answer is “Yes”.  A single-queue feeding tasks to parallel-servers is actually a better design. And if we ask the Queue Theorists then they will dazzle us with complex equations that prove it is a better design – in theory.  But the scary-maths does not help us to understand how it is a better design. Most of us are not able to convert equations into experience; academic rhetoric into pragmatic reality. We need to see it with our own eyes to know it and understand it. Because we know that reality is messier than theory.    

And if it is a better design then just how much better is it?

To illustrate the potential advantage of a single-queue design we need to push the competing candidates to their performance limits and then measure the difference. We need a real example and some real data. We are Improvementologists!

First we need to map our Post Office process – and that reveals that we have a single-step process – just the counter. That is about as simple as a process gets. Our map also shows that we have a row of counters of which five are manned by fully trained Post Office service operatives.

stick_figure_run_clock_150_wht_7094Now we can measure our process, and when we do that we find that we get an average of 30 customers per hour walking in the entrance and an average of 30 customers an hour walking out. Flow-out equals flow-in. Activity equals demand. And the average flow is one every 2 minutes. So far so good. We then observe our five operatives and we find that the average time from starting to serve one customer to starting to serve the next is 10 minutes. We know from our IS training that this is the cycle time. Good.

So we do a quick napkin calculation to check that the numbers make sense: our system of five operatives working in parallel, each with an average cycle time of 10 minutes, can collectively process a customer on average every 2 minutes – that is 30 per hour on average. So it appears we have just enough capacity to keep up with the flow of work – we are at the limit of efficiency.  Good.
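For anyone who wants to check the napkin arithmetic, here is the same sum as a minimal Python sketch (the numbers are the measurements described above):

```python
# Napkin check: five parallel operatives, each averaging 10 minutes per
# customer, against a demand of 30 customers per hour.
operatives = 5
avg_cycle_time_min = 10
capacity_per_hour = operatives * 60 / avg_cycle_time_min  # = 30 customers/hour
demand_per_hour = 30
utilisation = demand_per_hour / capacity_per_hour         # = 1.0 -> no slack
print(capacity_per_hour, utilisation)
```

A utilisation of exactly 1.0 means there is no slack at all – a detail that will turn out to matter.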

CarveOut_00We also notice that there is variation in the cycle time from customer to customer – so we plot our individual measurements as a time-series chart. There does not seem to be an obvious pattern – it looks random – and BaseLine says that it is statistically stable. Our chart tells us that a range of 5 to 15 minutes is a reasonable expectation to set.
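For readers without the BaseLine charting tool, the ‘statistically stable’ verdict rests on natural process limits that can be estimated from the data itself. Here is a minimal sketch using the standard XmR-chart calculation (Wheeler’s 2.66 constant); the cycle times below are made up for illustration, not the real measurements:

```python
# XmR-chart natural process limits: mean +/- 2.66 x average moving range.
cycle_times = [9, 11, 8, 12, 10, 9, 13, 7, 11, 10]  # minutes, illustrative only
mean = sum(cycle_times) / len(cycle_times)
moving_ranges = [abs(a - b) for a, b in zip(cycle_times[1:], cycle_times)]
mr_bar = sum(moving_ranges) / len(moving_ranges)
lower, upper = mean - 2.66 * mr_bar, mean + 2.66 * mr_bar
print(f"natural process limits: {lower:.1f} to {upper:.1f} minutes")
```

Any point falling outside those limits would raise a flag that the variation is more than routine noise.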

We also observe that there is always a queue of waiting customers somewhere – and although the queues fluctuate in size and location they are always there.

 So there is always a wait for some customers. A variable wait; an unpredictable wait. And that is a concern for us because when the queues are too numerous and too long then we see customers get agitated, look at their watches, shrug their shoulders and leave – taking their custom and our income with them and no doubt telling all their friends of their poor experience. Long queues and long waits are bad for business.

And we do not want zero queues either because if there is no queue and our operatives run out of work then they become under-utilised and our system efficiency and productivity falls.  That means we are incurring a cost but not generating an income. No queues and idle resources are bad for business too.

And we do not want a mixture of quick queues and slow queues because that causes complaints and conflict.  A high-conflict customer complaint experience is bad for business too! 

What we want is a design that creates small and stable queues; ones that are just big enough to keep our operatives busy and our customers not waiting too long.

So which is the better design and how much better is it? Five-queues or a single-queue? Carve-out or no-carve-out?

To find the answer we decide to conduct a week-long series of experiments on our system and use real data to reveal the answer. We choose the time from a customer arriving to the same customer leaving as our measure of quality and performance – and we know that the best we can expect is somewhere between 5 and 15 minutes.  We know from our IS training that this is called the Lead Time.

time_moving_fast_150_wht_10108On day #1 we arrange our Post Office with five queues – clearly roped out – one for each manned counter.  We know from our mapping and measuring that customers do not arrive in a steady stream, and we fear that may confound our experiment, so we arrange to admit only one of our loyal and willing customers every 2 minutes. We also advise our loyal and willing customers which queue they must join before they enter, to avoid the choice-of-queue challenge.  We decide which queue using a random number generator – we roll a dice until we get a number between 1 and 5.  We record the time the customer enters on a slip of paper and ask the customer to give it to the operative; we instruct our service operatives to record the time they complete their work on the same slip and keep it for us to analyse later. We run the experiment for only 1 hour so that we have a sample of 30 slips; then we collect the slips, calculate the difference between the arrival and departure times, and plot them on a time-series chart in the order of arrival.

CarveOut_01This is what we found.  Given that the time at the counter is an average of 10 minutes then some of these lead times seem quite long. Some customers spend more time waiting than being served. And we sense that the performance is getting worse over time.

So for the next experiment we decide to open a sixth counter and to rope off a sixth queue. We expect that increasing capacity will reduce waiting time and we confidently expect the performance to improve.

On day #2 we run our experiment again, letting customers in one every 2 minutes as before and this time we use all the numbers on the dice to decide which queue to direct each customer to.  At the end of the hour we collect the slips, calculate the lead times and plot the data – on the same chart.

CarveOut_02This is what we see.

It does not look much better and that is a big surprise!

The wide variation from customer to customer looks about the same but with the Eye of Optimism we get a sense that the overall performance looks a bit more stable.

So we conclude that adding capacity (and cost) may make a small difference.

But then we remember that we still only served 30 customers – which means that our income stayed the same while our cost increased by 20%. That is definitely NOT good for business: it is not going to look good in a business case – “possibly marginally better quality, for a 20% increase in cost and therefore price!”

So on day #3 we change the layout. This time we go back to five counters but we re-arrange the ropes to create a single-queue so the customer at the front can be ‘pulled’ to the first available counter. Everything else stays the same – one customer arriving every 2 minutes, the dice, the slips of paper, everything.  At the end of the hour we collect the slips, do our sums and plot our chart.

CarveOut_03And this is what we get! The improvement is dramatic. Both the average and the variation have fallen – especially the variation. But surely this cannot be right. The improvement is too good to be true. We check our data again. Yes, our customers arrived and departed on average one every 2 minutes as before; and all our operatives did the work in an average of 10 minutes just as before. And we had exactly the same capacity as we had on day #1. And we finished on time. It is correct. We are gobsmacked. It is as if a magic wand has been waved over our process. We would never have predicted that just moving the ropes around could have such a big impact.  The Queue Theorists were correct after all!
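If you cannot commandeer a real Post Office, a minimal discrete-event sketch reproduces the effect. To be clear, this is not the original experiment: the Python below is illustrative, and the uniform shape of the 5–15 minute service times is an assumption (the story only gives the range and the 10-minute average). The one-arrival-every-2-minutes pattern and the five counters do come from the story.

```python
import random

# Sketch: one hour of the Post Office, comparing the day #1 design
# (dice-allocated separate queues) with the day #3 single-queue design.
def simulate(n_customers=30, arrival_gap=2.0, n_counters=5,
             single_queue=True, seed=1):
    rng = random.Random(seed)
    free_at = [0.0] * n_counters               # when each counter is next free
    lead_times = []
    for i in range(n_customers):
        arrive = i * arrival_gap               # one arrival every 2 minutes
        service = rng.uniform(5, 15)           # cycle time 5-15 min, mean 10
        if single_queue:
            k = min(range(n_counters), key=lambda j: free_at[j])  # first free
        else:
            k = rng.randrange(n_counters)      # the dice fixes the queue on entry
        start = max(arrive, free_at[k])        # wait behind that counter's queue
        free_at[k] = start + service
        lead_times.append(free_at[k] - arrive) # arrival to departure
    return lead_times

for label, design in (("five queues", False), ("single queue", True)):
    lt = simulate(single_queue=design)
    print(f"{label}: mean lead time {sum(lt) / len(lt):.1f} min, "
          f"worst {max(lt):.1f} min")
```

The single-queue run shows a markedly lower and less variable lead time from exactly the same counters and cycle times. And stretching the five-queue variant to a whole day (n_customers=240) shows the lead times drifting steadily upwards – which foreshadows what we discover later.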

But wait a minute! We are delivering a much better customer experience in terms of waiting time and at the same cost. So could we do even better with six counters open? What will happen if we keep the single-queue design and open the sixth desk?  Before it made little difference but now we doubt our ability to guess what will happen. Our intuition seems to keep tricking us. We are losing our confidence in predicting what the impact will be. We are in counter-intuitive land! We need to run the experiment for real.

So on day #4 we keep the single-queue and we open six desks. We await the data eagerly.

CarveOut_04And this is what happened. Increasing the capacity by 20% has made virtually no difference – again. So we now have two pieces of evidence that say adding extra capacity did not make a difference to waiting times. The variation looks a bit less, but it is marginal.

It was changing the Queue Design that made the difference! And that change cost nothing. Rien. Nada. Zippo!

That will look much better in our report but now we have to face the emotional discomfort of having to re-evaluate one of our deepest held assumptions.

Reality is telling us that we are delivering a better quality experience using exactly the same resources and it cost nothing to achieve. Higher quality did NOT cost more. In fact we can see that with a carve-out design when we added capacity we just increased the cost we did NOT improve quality. Wow!  That is a shock. Everything we have been led to believe seems to be flawed.

Our senior managers are not going to like this message at all! We will be challenging their dogma directly. And they do not like that. Oh dear!

Now we can see how much better a no-carve-out single-queue pull-design can work; and now we can explain why single-queue designs are used; and now we can show others our experiment and our data – and if they do not believe us they can repeat the experiment themselves.  And we can see that it does not need a real Post Office – a pad of Post It® Notes, a few stopwatches and some willing helpers is all we need.

And even though we have seen it with our own eyes we still struggle to explain how the single-queue design works better. What actually happens? And we still have that niggling feeling that the performance on day #1 was unstable.  We need to do some more exploring.

So we run the day #1 experiment again – the five queues – but this time we run it for a whole day, not just an hour.

CarveOut_06

Ah ha!   Our hunch was right.  It is an unstable design. Over time the variation gets bigger and bigger.

But how can that happen?

Then we remember. We told the customers that they could not choose the shortest queue or change queue after they had joined it.  In effect we said “do not look at the other queues“.

And that happens all the time in our systems when we jealously hide performance data from each other! If we are seen to have a smaller queue we get given extra work by the management or told to slow down by the union rep!

So what do we do now?  All we are doing is trying to improve the service and all we seem to be achieving is annoying more and more people.

What if we apply a maximum waiting time target, say of 1 hour, and allow customers to jump to the front of their queue if they are at risk of breaching the target? That will smooth out spikes and give everyone a fair chance. Customers will understand. It is intuitively obvious and common sense. But our intuition has tricked us before …

So we run the experiment again and this time we tell our customers that if they have waited 50 minutes then they can jump to the front of their queue. They appreciate this because they now have an upper limit on the time they will wait.

CarveOut_07And this is what we observe. It looks better than before, at least initially, and then it goes pear-shaped.

All we have done with our ‘carve-out-and-expedite-the-long-waiters’ design is to defer the inevitable – the crunch. We cannot keep our promise. By the end everyone is pushing to the front of the queue. It is a riot!

And there is more. Look at the lead time for the last few customers – two hours. Not only have they waited a long time, but we have had to stay open for two hours longer. That is a BIG cost pressure in overtime payments.

So, whatever way we look at it: a single-queue design is better.  And no one loses out! The customers have a short and predictable waiting time; the operatives are kept occupied and go home on time; and the executives bask in the reflected glory of the excellent customer feedback.  It is a Three Wins® design.

Seeing is believing – and we now know that it is worth diagnosing and treating carveoutosis.

And the only thing left to do is to explain how a single-queue design works better. It is not obvious, is it?

puzzle_lightbulb_build_PA_150_wht_4587And the best way to do that is to play the Post Office Game and see what actually happens. 

A big light-bulb moment awaits!


Update: My little Sylvanian friends have tried the Post Office Game and kindly sent me this video of the before (Sylvanian Post Office Before) and the after (Sylvanian Post Office After). They say they now know how the single-queue design works better.


A Ray Of Hope

stick_figure_shovel_snow_anim_150_wht_9579It does not seem to take much to bring a real system to a near-standstill.  Six inches of snow falling between 10 AM and 2 PM on a Friday in January seems to be enough!

It was not so much the amount of snow – it was the timing.  The decision to close many schools was not made until after the pupils had arrived – and it created a logistical nightmare for parents. 

Many people suddenly needed to get home earlier than they had expected, which created an early rush hour and gridlocked the road system.

The same number of people travelled the same distance in the same way as they would normally – it just took them a lot longer.  And the queues created more problems as people tried to find work-arounds to bypass the traffic jams.

How many thousands of hours of life-time were wasted sitting in near-stationary queues of cars? How many millions of pounds’ worth of productivity were lost? How much will the catch-up cost?

And yet while we grumble we shrug our shoulders and say “It is just one of those things. We cannot control the weather. We just have to grin and bear it.”  

Actually we do not have to. And we do not need a weather machine to control the weather. Mother Nature is what it is.

Exactly the same behaviour happens in many systems – and our conclusion is the same.  We assume the chaos and queues are inevitable.

They are not.

They are symptoms of the system design – and specifically they are the inevitable outcomes of the time-design.

But it is tricky to visualise the time-design of a system.  We can see the manifestations of the poor time-design, the queues and chaos, but we do not so easily perceive the causes. So the poor time-design persists. We are not completely useless though; there are lots of obvious things we can do. We can devise ingenious ways to manage the queues; we can build warehouses to hold the queues; we can track the jobs in the queues using sophisticated and expensive information technology; we can identify the hot spots; we can recruit and deploy expediters, problem-solvers and fire-fighters to facilitate the flow through the hottest of them; and we can pump capacity and money into defences, drains and dramatics. And our efforts seem to work so we congratulate ourselves and conclude that these actions are the only ones that work.  And we keep clamouring for more and more resources. More capacity, MORE capacity, MORE CAPACITY.

Until we run out of money!

And then we have to stop asking for more. And then we start rationing. And then we start cost-cutting. And then the chaos and queues get worse. 

And all the time we are not aware that our initial assumptions were wrong.

The chaos and queues are not inevitable. They are a sign of the time-design of our system. So we do have other options.  We can improve the time-design of our system. We do not need to change the safety-design; nor the quality-design; nor the money-design.  Just improving the time-design will be enough. For now.

So the $64,000,000 question is “How?”

Before we explore that we need to demonstrate what is possible. How big is the prize?

The class of system design problem that causes particular angst is called the mixed-priority mixed-complexity crossed-stream design.  We encounter dozens of them in our daily lives without being aware of it.  One of particular interest to many is called a hospital. The mixed-priority dimension is the need to manage some patients as emergencies, some as urgent and some as routine. The mixed-complexity dimension is that some patients are easy and some are complex. The crossed-stream dimension is the aggregation of specialised resources into departments – expensive equipment and specific expertise.  We then attempt to push patients with different priorities along different paths through these different departments. And it is a management nightmare!

BlueprintOur usual and “obvious” response to this challenge is called a carve-out design. And that means we chop up our available resource capacity into chunks.  And we do that in two ways: chunks of time and chunks of space.  We try to simplify the problem by dissecting it into bits that we can understand. We separate the emergency departments from the  planned-care facilities. We separate outpatients from inpatients. We separate medicine from surgery – and we then intellectually dissect our patients into organ systems: brains, lungs, hearts, guts, bones, skin, and so on – and we create separate departments for each one. Neurology, Respiratory, Cardiology, Gastroenterology, Orthopaedics, Dermatology to list just a few. And then we become locked into the carve-out design silos like prisoners in cages of our own making.

And so it is within the departments that are sub-systems of the bigger system. Simplification, dissection and separation. Ad absurdum.

The major drawback with our carve-up design strategy is that it actually makes the system more complicated.  The number of necessary links between the separate parts grows combinatorially – roughly with the square of the number of parts.  And each link can hold a small queue of waiting tasks – just as each side road can hold a queue of waiting cars. The collective complexity is incomprehensible. The cumulative queue is enormous. The opportunity for confusion and error grows with it. Safety and quality fall and cost rises. Carve-out is an inferior time-design.
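The arithmetic of that link-explosion is easy to sketch: carve a system into n separate parts and there are up to n(n−1)/2 pairwise connections, each one a potential queue. A minimal illustration:

```python
# How the number of possible links grows as a system is carved into parts.
for n in (2, 5, 10, 20, 40):
    print(f"{n} parts -> up to {n * (n - 1) // 2} links between them")
```

Forty parts gives 780 possible links – and 780 places for small queues to hide.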

But our goal is correct: we do need to simplify the system so that means simplifying the time-design.

To illustrate the potential of this ‘simplify the time-design’ approach we need a real example.

One way to do this is to create a real system with lots of carve-out time-design built into it and then we can observe how it behaves – in reality. A carefully designed Table Top Game is one way to do this – one where the players have defined Roles and by following the Rules they collectively create a real system that we can map, measure and modify. With our Table Top Team trained and ready to go we then pump realistic tasks into our realistic system and measure how long they take in reality to appear out of the other side. And we then use the real data to plot some real time-series charts. Not theoretical general ones – real specific ones. And then we use the actual charts to diagnose the actual causes of the actual queues and actual chaos.

TimeDesign_BeforeThis is the time-series chart of a real Time-Design Game that was designed using an actual hospital department and real observation data.  Which department it was is not important because it could have been one of many. Carve-out is everywhere.

During one run of the Game the Team processed 186 tasks and the chart shows how long each task took from arriving to leaving (the game was designed to do the work in seconds when in the real department it took minutes – and this was done so that one working day could be condensed from 8 hours into 8 minutes!)

There was a mix of priority: some tasks were more urgent than others. There was a mix of complexity: some tasks required more steps than others. The paths crossed at separate steps where different people did defined work using different skills and special equipment.  There were handoffs between all of the steps on all of the streams. There were lots of links. There were many queues. There were ample opportunities for confusion and errors.

But the design of the real process was such that the work was delivered to a high quality – there were very few output errors. The yield was very high. The design was effective. The resources required to achieve this quality were represented by the hours of people-time availability – the capacity. The cost. And the work was stressful, chaotic, pressured, and important – so it got done. Everyone was busy. Everyone pulled together. They helped each other out. They were not idle. They were a good team. The design was efficient.

The thin blue line on the time-series chart is the “time target” set by the Organisation.  But the effective and efficient system design only achieved it 77% of the time.  So the “obvious” solution was to clamour for more people and for more space and for more equipment so that the work can be done more quickly to deliver more jobs on-time.  Unfortunately the Rules of the Time-Design Game do not allow this more-money option. There is no more money.

To succeed at the Time-Design Game the team must find a way to improve their delivery time performance with the capacity they have, and to deliver the same quality.  But this is impossible! If it were possible then the solution would be obvious and they would be doing it already. No one can succeed at the Time-Design Game.

Wrong. It is possible.  And the assumption that the solution is obvious is incorrect. The solution is not obvious – at least to the untrained eye.

To the trained eye the time-series chart shows the characteristic signals of a carve-out time-design. The high task-to-task variation is highly suggestive, as is the pattern of some of the earlier arrivals having longer lead times. An experienced system designer can diagnose a carve-out time-design from a set of time-series charts of a process, just as a doctor can diagnose a disease from the vital signs chart of a patient.  And when the diagnosis is confirmed with a verification test then the Time-ReDesign phase can start.

TimeDesign_AfterPhase1This chart shows what happened after the time-design of the system was changed – after some of the carve-out design was modified. The Y-axis scale is the same as before – and the delivery time improvement is dramatic. The Time-ReDesigned system is now delivering 98% achievement of the “on time target”.

The important thing to be aware of is that exactly the same work was done, using exactly the same steps, and exactly the same resources. No one had to be retrained, released or recruited.  The quality was not impaired. And the cost was actually less because less overtime was needed to mop up the spillover of work at the end of the day.

And the Time-ReDesigned system feels better to work in. It is not chaotic; flow is much smoother; and it is busy yet relaxed and even fun.  The same activity is achieved by the same people doing the same work in the same sequence. Only the Time-Design has changed. A change that delivered a win for the workers!

What was the impact of this cost-saving improvement on the customers of this service? They can now be 98% confident that they will get their task completed correctly in less than 120 minutes.  Before the Time-Redesign the 98% confidence limit was 470 minutes! So this is a win for the customers too!
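Those ‘98% confidence’ figures are simply empirical percentiles of the measured lead times. A minimal sketch of the calculation – the data generated below is hypothetical, not the real game data:

```python
import random

# The 98th percentile of a set of measured lead times (hypothetical data).
random.seed(42)
lead_times = sorted(random.uniform(20, 130) for _ in range(186))
idx = max(0, int(round(0.98 * len(lead_times))) - 1)  # simple empirical percentile
print(f"98% of tasks completed within {lead_times[idx]:.0f} minutes")
```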

And the Time-ReDesigned system is less expensive so it is a win for whoever is paying.

Same safety and quality, quicker with less variation, and at lower cost. Win-Win-Win.

And the usual reaction to playing the Time-ReDesign Game is incredulous disbelief.  Some describe it as a “light bulb” moment when they see how the diagnosis of the carve-out time-design is made and how the Time-ReDesign is done. They say “If I had not seen it with my own eyes I would not have believed it.” And they say “The solutions are simple but not obvious!” And they say “I wish I had learned this years ago!”  And they apologise for being so sceptical before.

And there are those who are too complacent, too careful or too cynical to play the Time-ReDesign Game (which is about 80% of people actually) – and who deny themselves the opportunity of a win-win-win outcome. And that is their choice. They can continue to grin and bear it – for a while longer.     

And for the 20% who want to learn how to do Time ReDesign for real in their actual systems there is now a Ray Of Hope.

And the Ray of Hope is illuminating a signpost on which is written “This Way to Improvementology“. 

Quality First or Time First?

Before we explore this question we need to establish something. If the issue is Safety then that always goes First – and by safety we mean “a risk of harm that everyone agrees is unacceptable”.


figure_running_hamster_wheel_150_wht_4308Many Improvement Zealots state dogmatically that the only way to reach the Nirvana of “Right Thing – On Time – On Budget” is to focus on Quality First.

This is incorrect.  And what makes it incorrect is the word only.

Experience teaches us that it is impossible to divert people to focus on quality when everyone is too busy just keeping afloat. If they stop to do something else then they will drown. And they know it.

The critical word here is busy.

‘Busy’ means that everyone is spending all their time doing stuff – important stuff – the work, the checking, the correcting, the expediting, the problem solving, and the fire-fighting. They are all busy all of the time.

So when a Quality Zealot breezes in and proclaims ‘You should always focus on quality first … that will solve all the problems’ then the reaction they get is predictable. The weary workers listen with their arms crossed, roll their eyes, exchange knowing glances, sigh, shrug, shake their heads, grit their teeth, and trudge back to fire-fighting. Their scepticism and cynicism has been cut a notch deeper. And the weary workers get labelled as ‘Not Interested In Quality’ and ‘Resisting Change’ and ‘Laggards’ by the Quality Zealot who has spent more time studying and regurgitating rhetoric than investing time in observing and understanding reality.

The problem here is the seemingly innocuous word ‘always’. It is too absolute. Too black-and-white. Too dogmatic. Too simple.

Sometimes focussing on Quality First is a wise decision. And that situation is when there is low-quality and idle-time. There is some spare capacity to re-invest in understanding the root causes of the quality issues,  in designing them out of the process, and in implementing the design changes.

But when everyone is busy – when there is no idle-time – then focussing on quality first is not a wise decision because it can actually make the problem worse!

[The Quality Zealots will now be turning a strange red colour, steam will be erupting from their ears and sparks will be coming from their finger-tips as they reach for their keyboards to silence the heretical anti-quality lunatic. “Burn, burn, burn” they rant]. 

When everyone is busy then the first thing to focus on is Time.

And because everyone is busy then the person doing the Focus-on-Time stuff must be someone else. Someone like an Improvementologist.  The Quality Zealot is a liability at this stage – but they become an asset later when the chaos has calmed.

And what our Improvementologist is looking for are queues – also known as Work-in-Progress or WIP.

Why WIP?  Why not where the work is happening? Why not focus on resource utilisation? Isn’t that a time metric?

Yes, resource utilisation is a time-related metric, but because everyone is busy the utilisation will be high. So looking at utilisation will only confirm what we already know.  And everyone is busy doing important stuff – they are not stupid – they are doing their best given the constraints of their process design.

The queue is where an Improvementologist will direct attention first.  And the specific focus of their attention is the cause of the queue.

This is because there is only one cause of a queue: a mismatch-over-time between demand and activity.

So, the critical first step to diagnosing the cause of a queue is to make the flow visible – to plot the time-series charts of demand, activity and WIP.  Until that is done, no progress will be made in understanding what is happening and it will be impossible to decide what to do. We need a diagnosis before we can treat. And to get a diagnosis we need data from an examination of our process; and we need data on the history of how it has developed. And we need to know how to convert that data into information, and then into understanding, and then into design options, and then into a wise decision, and then into action, and then into improvement.

And we now know how to spot an experienced Improvementologist, because the first thing they will look for is the Queues, not the Quality.

But why bother with the flow and the queues at all? Customers are not interested in them! If time is the focus then surely it is turnaround times and waiting times that we need to measure! Then we can compare our performance with our ‘target’ and if it is out of range we can then apply the necessary ‘pressure’!

This is indeed what we observe. So let us explore the pros and cons of this approach with an example.

We are the manager of a support department that receives requests, processes them and delivers the output back to the sender. We could be one of many support departments in an organisation:  human resources, procurement, supplies, finance, IT, estates and so on. We are the Backroom Brigade. We are the unsung heroes and heroines.

The requests for our service come in different flavours – some are easy to deal with, others are more complex.  They also come with different priorities – urgent, soon and routine. And they arrive as a mixture of dribbles and deluges.  Our job is to deliver high quality work (i.e. no errors) within the delivery time expected by the originator of the request (i.e. on time). If  we do that then we do not get complaints (but we do not get compliments either).

From the outside things look mostly OK.  We deliver mostly on quality and mostly on time. But on the inside our department is in chaos! Every day brings a new fire to fight. Everyone is busy and the pressure and chaos are relentless. We are keeping our head above water – but only just.  We do not enjoy our work-life. It is not fun. Our people are miserable too. Some leave – others complain – others just come to work, do stuff, take the money and go home – like Zombies. They comply.

three_wins_agreementOnce in the past we were seduced by the sweet talk of a Quality Zealot. We were promised Nirvana. We were advised to look at the quality of the requests that we get. And this suggestion resonated with us because we were very aware that the requests were of variable quality. Our people had to spend time checking-and-correcting them before we could process them.  The extra checking had improved the quality of what we deliver – but it had increased our costs too.

So the Quality Zealot told us we should work more closely with our customers and ‘swim upstream’ to prevent the quality problems getting to us in the first place. So we sent some of our most experienced and most expensive Inspectors to paddle upstream. But our customers were also very busy and, much as they would have liked, they did not have time to focus on quality either. So our Inspectors started doing the checking-and-correcting for our customers. Our people are now working for our customers but we still pay their wages.

And we do not have enough Inspectors to check-and-correct all the requests at source, so we still need to keep a skeleton crew of Inspectors in the department. These stay-at-home Inspectors are stretched too thin; their job is too pressured and too stressful. So no one wants to do it. And given the choice they would all rather paddle out to the customers first thing in the morning, to give them as much time as possible to check-and-correct the requests so the day’s work can be completed on time.  It all sounds perfectly logical and rational – but it has not worked as promised. The stay-at-home Inspectors can only keep up with the more urgent work; delivery of the less urgent work suffers; and the chronic chaos and fire-fighting are now aggravated by a stream of interruptions from customers asking when their ‘non-urgent’ requests will be completed.

figure_talk_giant_phone_anim_150_wht_6767The Quality Zealot insisted we should always answer the phone to our customers – so we take the calls – we expedite the requests – we solve the problems – and we fight-the-fire.  Day, after day, after day.

We now know what Purgatory means. Retirement with a pension or voluntary redundancy with a package are looking more attractive – if only we can keep going long enough.

And the last thing we need is more external inspection, more targets, and more expensive Quality Zealots telling us what to do! 

And when we go and look we see a workplace that appears just as chaotic and stressful and angry as we feel. There are heaps of work in progress everywhere – the phone is always ringing – and our people are running around like headless chickens, expediting, fire-fighting and getting burned-out: physically and emotionally. And we feel powerless to stop it. So we hide.

Does this fictional fiasco feel familiar? It is called the Miserable Job Purgatory Vortex.

Now we know the characteristic pattern of symptoms and signs:  constant pressure of work, ever present threat of quality failure, everyone busy, just managing to cope, target-stick-and-carrot management, a miserable job, and demotivated people.

The issue here is that the queues are causing some of the low quality. It is not always low quality that causes all of the queues.

figure_juggling_time_150_wht_4437Queues create delays, which generate interruptions, which force investigation, which generates expediting, which takes time from doing the work, which consumes required capacity, which reduces activity, which increases the demand-activity mismatch, which increases the queue, which increases the delay – and so on. It is a vicious circle. And interruptions are a fertile source of internally generated errors, which generate even more checking and correcting, which uses up even more required capacity, which makes the queues grow even faster and longer. Round and round.  The cries of ‘we need more capacity’ get louder. It is all hands to the pump – but even then eventually there is a crisis. A big mistake happens. Then Senior Management get named-blamed-and-shamed, money magically appears and is thrown at the problem, capacity increases, the symptoms settle, the cries for more capacity go quiet – but productivity has dropped another notch. Eventually the financial crunch arrives.

One symptom of this ‘reactive fire-fight design’ is that people get used to working late to catch up at the end of the day, so that the next day they can start the whole rollercoaster ride again. And again. And again. At least that is a form of stability. We can expect tomorrow to be just as miserable as today and yesterday and the day before that. But TOIL (Time Off In Lieu) costs money.

The way out of the Miserable Job Purgatory Vortex is to diagnose what is causing the queue – and to treat that first.

And that means focussing on Time first – and that means Focussing on Flow first.  And by doing that we will improve delivery, improve quality and improve cost because chaotic systems generate errors which need checking and correcting which costs more. Time first is a win-win-win strategy too.

And we already have everything we need to start. We can easily count what comes in and when and what goes out and when.

The first step is to plot the inflow over time (the demand), the outflow over time (the activity), and from that we work out and plot the Work-in-Progress over time. With these three charts we can start the diagnostic process and by that path we can calm the chaos.
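Here is a minimal sketch of that first step, assuming all we can count are the daily arrivals and completions (the numbers below are invented for illustration):

```python
# Derive the work-in-progress chart from counts of what arrives (demand)
# and what leaves (activity) each day.
demand   = [12, 15, 11, 18, 14, 16, 13]   # jobs arriving each day
activity = [13, 13, 13, 13, 13, 13, 13]   # jobs completed each day
wip, wip_series = 5, []                   # assume 5 jobs already in progress
for d, a in zip(demand, activity):
    wip = max(0, wip + d - a)             # today's mismatch accumulates as queue
    wip_series.append(wip)
print(wip_series)  # plot alongside demand and activity as time-series charts
```

The WIP chart is just the running total of the daily demand-minus-activity mismatch – which is exactly why a queue can only have one cause.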

And then we can set to work on the Quality Improvement.  


13/01/2013: Newspapers report that 17 hospitals are “dangerously understaffed”.  Sound familiar?

Next week we will explore how to diagnose the root cause of a queue using Time charts.

For an example to explore please play the SystemFlow Game by clicking here


Shifting, Shaking and Shaping

Stop Press: For those who prefer cartoons to books please skip to the end to watch the Who Moved My Cheese video first.


ThomasKuhnIn 1962 – that is half a century ago – a controversial book was published. The title was “The Structure of Scientific Revolutions” and the author was Thomas S Kuhn (1922-1996), a physicist and historian at Harvard University.  The book ushered in the concept of a ‘paradigm shift’ and it upset a lot of people.

In particular it upset a lot of scientists because it suggested that the growth of knowledge and understanding is not smooth – it is jerky. And Kuhn showed that the scientists were causing the jerking.

Kuhn described the process of scientific progress as having three phases: pre-science, normal science and revolutionary science.  Most of the work scientists do is normal science, which means exploring, consolidating and applying the current paradigm – the current conceptual model of how things work.  Anyone who argues against the paradigm is regarded as ‘mistaken’ because the paradigm represents the ‘truth’.  Kuhn draws on the history of science for his evidence, quoting examples of how innovators such as Galileo, Copernicus, Newton, Einstein and Hawking radically changed the way that we now view the Universe. But their different models were not accepted immediately and enthusiastically because they challenged the status quo. Galileo spent the final years of his life under house arrest because his ‘heretical’ writings challenged the Church.

Each revolution in thinking was both disruptive and at the same time constructive because it opened a door to allow rapid expansion of knowledge and understanding. And that foundation of knowledge that has been built over the centuries is one that we all take for granted.  It is a fragile foundation though. It could be all lost and forgotten in one generation because none of us are born with this knowledge and understanding. It is not obvious. We all have to learn it.  Even scientists.

Kuhn’s book was controversial because it suggested that scientists spend most of their time blocking change. This is not necessarily a bad thing. Stability for a while is very useful and the output of normal science is mostly positive. For example the revolution in thinking introduced by Isaac Newton (1643-1727) led directly to the Industrial Revolution and to far-reaching advances in every sphere of human knowledge. Most of modern engineering is built on Newtonian mechanics and it is only at the scales of the very large, the very small and the very quick that it falls over. Relativistic and quantum physics are more recent and very profound shifts in thinking and they have given us the digital computer and the information revolution. This blog is a manifestation of the quantum paradigm.

Kuhn concluded that the progress of change is jerky because scientists create resistance to change – to create stability while doing normal science experiments.  But these same experiments produce evidence that suggests the current paradigm is flawed. Over time the pressure of conflicting evidence accumulates, disharmony builds, conflict is inevitable and intellectual battle lines are drawn.  The deeper and more fundamental the flaw, the more bitter the battle.

In contrast, newcomers seek harmony in the cacophony and propose new theories that explain both the old and the new. New paradigms. The stage is now set for a drama and the public watch bemused as the academic heavyweights slug it out. Eventually a tipping point is reached and one of the new paradigms becomes dominant. Often the transition is triggered by one crucial experiment.

There is a sudden release of the tension and a painful and disruptive conceptual lurch – a paradigm shift. Then the whole process starts over again. The creators of the new paradigm become the consolidators, and in time the defenders, and eventually the dogmatics!  And it can take decades and even generations for the transition to be completed.

It is said that Albert Einstein (1879-1955) never fully accepted quantum physics even though his work planted the seeds for it and experience showed that it explained the experimental observations better. [For more about Einstein click here].              

The message that some take from Kuhn’s book is that paradigm shifts are the only way that knowledge can advance.  With this assumption, getting change to happen requires creating a crisis – a burning platform. Unfortunately this is an error of logic – it is an unverified generalisation from an observed specific. The evidence is growing that this we-always-need-a-burning-platform assumption is incorrect.  It appears that the growth of knowledge and understanding can be smoother, less damaging and more effective without creating a crisis.

So what is the evidence that this is possible?

Well, what pattern would you look for to illustrate that it is possible to improve smoothly and continually? A smooth growth curve of some sort? Yes – but it is more than that.  It is a smooth curve that is steeper than anyone else’s and one that is growing steeper over time.  Evidence that someone is learning to improve faster than their peers – and learning painlessly and continuously without crises; not painfully and intermittently using crises.

Two examples are Toyota and Apple.

ToyotaLogoToyota is a Japanese car manufacturer that has out-performed other car manufacturers consistently for 40 years – despite the global economic boom-bust cycles. What is their secret formula for their success?

WorldOilPriceChartWe need a bit of history. In the 1980’s a crisis-of-confidence hit the US economy. It was suddenly threatened by higher-quality and lower-cost imported Japanese products – for example cars.

The switch to buying Japanese cars had been triggered by the Oil Crisis of 1973 when the cost of crude oil quadrupled almost overnight – triggering a rush for smaller, less fuel hungry vehicles.  This is exactly what Toyota was offering.

This crisis was also a rude awakening for the US to the existence of a significant economic threat from their former adversary.  It was even more shocking to learn that W Edwards Deming, an American statistician, had sown the seed of Japan’s success thirty years earlier and that Toyota had taken much of its inspiration from Henry Ford.  The knee-jerk reaction of the automotive industry academics was to copy how Toyota was doing it – the Toyota Production System (TPS) – and from that the school of Lean Tinkering was born.

This knowledge transplant has been both slow and painful, and although learning to use the Lean Toolbox has improved Western manufacturing productivity and given us all more reliable, cheaper-to-run cars, no other company has been able to match the continued success of Toyota.  And the reason is that the automotive industry academics did not copy the paradigm – the intangible, subjective, unspoken mental model that created the context for success.  They just copied the tangible manifestation of that paradigm.  The tools. That is just cynically copying information and knowledge to gain a competitive advantage – it is not respectfully growing understanding and wisdom to reach a collaborative vision.

AppleLogoApple is now one of the largest companies in the world and it has become so because Steve Jobs (1955-2011), its Californian, technophilic, Zen Buddhist, entrepreneurial co-founder, had a very clear vision: to design products for people.  And to do that they continually challenged their own and their customers’ paradigms. Design is a logical-rational exercise. It is the deliberate use of explicit knowledge to create something that delivers what is needed but in a different way. Higher quality and lower cost. It is normal science.

Continually challenging our current paradigm is not normal science. It is revolutionary science. It is deliberately disruptive innovation. But continually challenging the current paradigm is uncomfortable for many and, by all accounts, Steve Jobs was not an easy person to work for because he was future-looking and demanded perfection in the present. But the success of this paradigm is a matter of fact: 

“In its fiscal year ending in September 2011, Apple Inc. hit new heights financially with $108 billion in revenues (increased significantly from $65 billion in 2010) and nearly $82 billion in cash reserves. Apple achieved these results while losing market share in certain product categories. On August 20, 2012 Apple closed at a record share price of $665.15 with 936,596,000 outstanding shares it had a market capitalization of $622.98 billion. This is the highest nominal market capitalization ever reached by a publicly traded company and surpasses a record set by Microsoft in 1999.”

And remember – Apple almost went bust. Steve Jobs had been ousted from the company he co-founded in a boardroom coup in 1985.  After he left Apple floundered and Steve Jobs proved it was his paradigm that was the essential ingredient by setting up NeXT computers and then Pixar. Apple’s fortunes only recovered after 1998 when Steve Jobs was invited back. The rest is history so click to see and hear Steve Jobs describing the Apple paradigm.

So the evidence states that Toyota and Apple are doing something very different from the rest of the pack and it is not just very good product design. They are continually updating their knowledge and understanding – and they are doing this using a very different paradigm.  They are continually challenging themselves to learn. To illustrate how they do it – here is a list of the five principles that underpin Toyota’s approach:

  • Challenge
  • Improvement
  • Go and see
  • Teamwork
  • Respect

This is Win-Win-Win thinking. This is the Science of Improvement. This is Improvementology®.


So what is the reason that this proven paradigm seems so difficult to replicate? It sounds easy enough in theory! Why is it not so simple to put into practice?

The requirements are clearly listed: Respect for people (challenge). Respect for learning (improvement). Respect for reality (go and see). Respect for systems (teamwork).

In a word – Respect.

Respect is a big challenge for the individualist mindset, which is fundamentally disrespectful of others. The individualist mindset underpins the I-Win-You-Lose Paradigm; the Zero-Sum-Game Paradigm; the Either-Or Paradigm; the Linear-Thinking Paradigm; the Whole-Is-The-Sum-Of-The-Parts Paradigm; the Optimise-The-Parts-To-Optimise-The-Whole Paradigm.

Unfortunately these are the current management paradigms in much of the private and public worlds and the evidence is accumulating that this paradigm is failing. It may have been adequate when times were better, but it is inadequate for our current needs and inappropriate for our future needs. 


So how can we avoid having to set fire to the current failing management paradigm to force a leap into the cold and uninviting reality of impending global economic failure?  How can we harness our burning desire for survival, security and stability? How can we evolve our paradigm pro-actively and safely rather than re-actively and dangerously?

all_in_the_same_boat_150_wht_9404We need something tangible to hold on to that will keep us from drowning while the old I-am-OK-You-are-Not-OK Paradigm is dissolved and re-designed. Like the body of the caterpillar that is dissolved and re-assembled inside the pupa as the body of a completely different thing – a butterfly.

We need a robust  and resilient structure that will keep us safe in the transition from old to new and we also need something stable that we can steer to a secure haven on a distant shore.

We need a conceptual lifeboat. Not just some driftwood,  a bag of second-hand tools and no instructions! And we need that lifeboat now.

But why the urgency?

UK_PopulationThe answer is basic economics.

The UK population is growing and the proportion of people over 65 years old is growing faster.  Advances in healthcare mean that more of us survive age-related illnesses such as cancer and heart disease. We live longer and with better quality of life – which is great.

But this silver-lining hides a darker cloud.

The proportion of elderly and very elderly will increase over the next 20 years as the post-WWII baby-boom reaches retirement age. The number of people who are living on pensions is increasing and the demands on health and social services are increasing.  Pensions and public services are not paid out of past savings – they are paid out of current earnings.  So the country will need to earn more to pay the bills. The UK economy will need to grow.

UK_GDP_GrowthBut the UK economy is not growing.  Our Gross Domestic Product (GDP) is currently about £380 billion and flat as a pancake. This sounds like a lot of dosh – but when shared out across the population of 56 million it gives a more modest figure of just over £100 per person per week.  And the time-series chart for the last 20 years shows that the past growth of about 1% per quarter took a big dive in 2008 and went negative! That means serious recession. It recovered briefly but is now sagging towards zero.

So we are heading for a big economic crunch, and hiding our heads in the sand and hoping for the best is not a rational strategy. The only way to survive is to cut public services or for tax-funded services to become more productive. And more productive means increasing the volume of goods and services for the same cost. These are the services that we will need to support the growing population of dependents – but without increasing the cost to the country, which means the taxpayer.

The success of Toyota and Apple stemmed from learning how to do just that: how to design and deliver what is needed; and how to eliminate what is not; and how to wisely re-invest the released cash. The difference can translate into higher profit, or into growth, or into more productivity. It just depends on the context.  Toyota and Apple went for profit and growth. Tax-funded public services will need to opt for productivity. 

And the learning-productivity-improvement-by-design paradigm will be a critical-to-survival factor in tax-payer funded public services such as the NHS and Social Care.  We do not have a choice if we want to maintain what we take for granted now.  We have to proactively evolve our out-of-date public sector management paradigm. We have to evolve it into one that can support dramatic growth in productivity without sacrificing quality and safety.

We cannot use the burning platform approach. And we have to act with urgency.

We need a lifeboat!

Our current public sector management paradigm is sinking fast and is being defended and propped up by the old school managers who were brought up in it.  Unfortunately the evidence of 500 years of change says that the old school cannot unlearn. Their mental models go too deep.  The captains and their crews will go down with their ships.  [Remember the Titanic – the ‘unsinkable’ ship that sank in 1912 on her maiden voyage. That was a victory of reality over rhetoric.]

Those of us who want to survive are the ‘rats’. We know when it is time to leave the sinking ship.  We know we need lifeboats because it could be a long swim! We do not want to freeze and drown during the transition to the new paradigm.

So where are the lifeboats?

One possibility is an unfamiliar looking boat called “6M Design”. This boat looks odd when viewed through the lens of the conventional management paradigm because it combines three apparently contradictory things: the rational-logical elements of system design; the respect-for-people and learning-through-challenge principles embodied by Toyota and Apple; and the counter-intuitive technique of systems thinking.

Another reason it feels odd is because “6M Design” is not a solution; it is a meta-solution. 6M Design is a way of creating a good-enough-for-now solution by changing the current paradigm a bit at a time. It is a how-to-design framework; it is not a what-to-do solution. 6M Design is a paradigm shaper – not a paradigm shaker or a paradigm shifter.

And there is yet another reason why 6M Design does not float the current management boat.  It does not need to be controlled by self-appointed experts.  Business schools and management consultants, who have a vested interest in defending the current management paradigm, cannot make a quick buck from it because it makes them irrelevant. 6M Design is intended to be used by anyone and everyone as a common language for collectively engaging in respectful challenge and lifelong learning. Anyone can learn to use it. Anyone.

We do not need a crisis to change. But without changing we will get the crisis we do not want. If we choose to change then we can choose a safer and smoother path of change.

The choice seems clear.  Do you want to go down with the ship or stay afloat aboard an innovation boat?

And we will need something to help us navigate our boat.

If you are a reflective, conceptual learner then you might like to read a synopsis of Thomas Kuhn’s book.  You can download a copy here. [There is also a 50 year anniversary edition of the original that was published this year.]

And if you prefer learning from stories then there is an excellent one called “Who Moved My Cheese” that describes the same challenge of change. And with the power of the digital paradigm you can watch the video here.


The Six Dice Game

<Ring Ring><Ring Ring>

?Hello, you are through to the Improvement Science Helpline. How can we help?

This is Leslie, one of your FISH apprentices.  Could I speak to Bob – my ISP coach?

?Yes, Bob is free. I will connect you now.

<Ring Ring><Ring Ring>

?Hello Leslie, Bob here. How can I help?

Hi Bob, I have a problem that I do not feel my Foundation training has equipped me to solve. Can I talk it through with you?

?Of course. Can you outline the context for me?

Yes. The context is a department that is delivering an acceptable quality-of-service and is delivering on-time but is failing financially. As you know we are all being forced to adopt austerity measures and I am concerned that if their budget is cut then they will fail on delivery and may start cutting corners and then fail on quality too.  We need a win-win-win outcome and I do not know where to start with this one.

?OK – are you using the 6M Design method?

Yes – of course!

?OK – have you done The 4N Chart for the customer of their service?

Yes – it was their customers who asked me if I could help and that is what I used to get the context.

?OK – have you done The 4N Chart for the department?

Yes. And that is where my major concerns come from. They feel under extreme pressure; they feel they are working flat out just to maintain the current level of quality and on-time delivery; they feel undervalued and frustrated that their requests for more resources are refused; they feel demoralised, demotivated and scared that their service may be ‘outsourced’. On the positive side, they feel that they work well as a team and are willing to learn. I do not know what to do next.

?OK. Do not panic. This sounds like a very common and treatable system illness.  It is a stream design problem which may be the reason your Foundation training feels insufficient. Would you like to see how a Practitioner would approach this?

Yes please!

?OK. Have you mapped their internal process?

Yes. It is a six-step process for each job. Each step has different requirements and is done by different people with different skills. In the past they had a problem with poor service quality, so extra safety and quality checks were imposed by the Governance department.  Now the quality of each step is measured on a 1-6 scale and the quality of the whole process is the sum of the individual steps, so it is measured on a scale of 6 to 36. They have now been given a minimum quality target of 21 to achieve for every job. How they achieve that is not specified – it was left up to them.

?OK – do they record their quality measurement data?

Yes – I have their report.

?OK – how is the information presented?

As an average for the previous month which is reported up to the Quality Performance Committee.

?OK – what was the average for last month?

Their average was 24 – so they do not have an issue delivering the required quality. The problem is the costs they are incurring and they are being labelled by others as ‘inefficient’. Especially by the departments who are within budget and are annoyed that this department keeps getting ‘bailed out’.

?OK. One issue here is the quality reporting process is not alerting you to the real issue. It sounds from what you say that you have fallen into the Flaw of Averages trap.

I don’t understand. What is the Flaw of Averages trap?

?The answer to your question will become clear. The finance issue is a symptom – an effect – it is unlikely to be the cause. When did this finance issue appear?

Just after the Safety and Quality Review. They needed to employ more agency staff to do the extra work created by having to meet the new Minimum Quality target.

?OK. I need to ask you a personal question. Do you believe that improving quality always costs more?

I have to say that I am coming to that conclusion. Our Governance and Finance departments are always arguing about it. Governance state ‘a minimum standard of safety and quality is not optional’ and Finance say ‘but we are going out of business’. They are at loggerheads. The departments get caught in the cross-fire.

?OK. We will need to use reality to demonstrate that this belief is incorrect. Rhetoric alone does not work. If it did then we would not be having this conversation. Do you have the raw data from which the averages are calculated?

Yes. We have the data. The quality inspectors are very thorough!

?OK – can you plot the quality scores for the last fifty jobs as a BaseLine chart?

Yes – give me a second. The average is 24 as I said.

?OK – is the process stable?

Yes – there is only one flag for the fifty. I know from my FISH training that one flag in fifty is not a cause for alarm.

?OK – what is the process capability?

I am sorry – I don’t know what you mean by that?

?My apologies. I forgot that you have not completed the Practitioner training yet. The capability is the range between the red lines on the chart.

Um – the lower line is at 17 and the upper line is at 31.

?OK – how many points lie below the target of 21?

None of course. They are meeting their Minimum Quality target. The issue is not quality – it is money.

There was a pause.  Leslie knew from experience that when Bob paused there was a surprise coming.

?Can you email me your chart?

A cold shiver went down Leslie’s back. What was the problem here? Bob had never asked to see the data before.

Sure. I will send it now.  The recent fifty is on the right; the data on the left is from after the quality inspectors went in and before the Minimum Quality target was imposed. This is the chart that Governance has been using as evidence to justify their existence because they are claiming the credit for improving the quality.

?OK – thanks. I have got it – let me see.  Oh dear.

Leslie was shocked. She had never heard Bob use language like ‘Oh dear’.

There was another pause.

?Leslie, what is the context for this data? What does the X-axis represent?

Leslie looked at the chart again – more closely this time. Then she saw what Bob was getting at. There were fifty points in the first group, and about the same number in the second group. That was not the interesting part. In the first group the X-axis went up to 50 in regular steps of five; in the second group it went from 50 to just over 149 and was no longer regularly spaced. Eventually she replied.

Bob, that is a really good question. My guess is that it is the quality of the completed work.

?It is unwise to guess. It is better to go and see reality.

You are right. I knew that. It is drummed into us during the Foundation training! I will go and ask. Can I call you back?

?Of course. I will email you my direct number.




<Ring Ring><Ring Ring>

?Hello, Bob here.

Bob – it is Leslie. I am so excited! I have discovered something amazing.

?Hello Leslie. That is good to hear. Can you tell me what you have discovered?

I have discovered that better quality does not always cost more.

?That is a good discovery. Can you prove it with data?

Yes I can!  I am emailing you the chart now.

?OK – I am looking at your chart. Can you explain to me what you have discovered?

Yes. When I went to see for myself I saw that when a job failed the Minimum Quality check at the end then the whole job had to be re-done because there was no time to investigate and correct the causes of the failure.  The people doing the work said that they were helpless victims of errors that were made upstream of them – and they could not predict from one job to the next what the error would be. They said it felt like quality was a lottery and that they were just firefighting all the time. They knew that just repeating the work was not solving the problem but they had no other choice because they were under enormous pressure to deliver on-time as well. The only solution they could see was to get more resources but their requests were being refused by Finance on the grounds that there is no more money. They felt completely trapped.

?OK. Can you describe what you did?

Yes. I saw immediately that there were so many sources of errors that it would be impossible for me to tackle them all. So I used the tool that I had learned in the Foundation training: the Niggle-o-Gram. That focussed us and led to a surprisingly simple, quick, zero-cost process design change. We deliberately did not remove the Inspection-and-Correction policy because we needed to know what the impact of the change would be. Oh, and we did one other thing that challenged the current methods. We plotted both the successes and the failures on the BaseLine chart so we could see both the quality and the work done on one chart.  And we updated the chart every day and posted it on the notice board so everyone in the department could see the effect of the change that they had designed. It worked like magic! They have already slashed their agency staff costs, the whole department feels calmer and they are still delivering on-time. And best of all they now feel that they have the energy and time to start looking at the next niggle. Thank you so much! Now I see how the tools and techniques I learned in FISH school are so powerful and now I understand better the reason we learned them first.

?Well done Leslie. You have taken an important step to becoming a fully fledged Improvement Science Practitioner. There are many more but you have learned some critical lessons in this challenge.


This scenario is fictional but realistic.

And it has been designed so that it can be replicated easily using a simple game that requires only pencil, paper and some dice.

If you do not have some dice handy then you can use this little program that simulates rolling six dice.

The Six Digital Dice program (for PC only).

Instructions
1. Prepare a piece of A4 squared paper with the Y-axis marked from zero to 40 and the X-axis from 1 to 80.
2. Roll six dice and record the score on each (or one die six times) – then calculate the total.
3. Plot the total on your graph. Left-to-right in time order. Link the dots with lines.
4. After 25 dots look at the chart. It should resemble the leftmost data in the charts above.
5. Now draw a horizontal line at 21. This is the Minimum Quality Target.
6. Keep rolling the dice – six per cycle, adding the totals to the right of your previous data.

But this time if the total is less than 21 then repeat the cycle of six dice rolls until the score is 21 or more. Record on your chart the output of all the cycles – not just the acceptable ones.

7. Keep going until you have 25 acceptable outcomes. As long as it takes.

Now count how many cycles you needed to complete in order to get 25 acceptable outcomes.  You should find that it is about twice as many as before you “imposed” the Inspect-and-Correct QI policy.
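And if you would rather let a computer do the rolling and the counting, here is a minimal sketch in Python – an illustration of steps 6 and 7, not the Six Digital Dice program itself:

```python
import random

def one_cycle():
    """Roll six dice and return the total quality score (6 to 36)."""
    return sum(random.randint(1, 6) for _ in range(6))

def cycles_needed(acceptable=25, target=21):
    """Count the cycles needed to get `acceptable` jobs that meet the
    Minimum Quality target under the Inspect-and-Correct policy."""
    cycles = passed = 0
    while passed < acceptable:
        cycles += 1
        if one_cycle() >= target:   # the job passes the inspection
            passed += 1
    return cycles

# Average over many runs: roughly 46 cycles for 25 acceptable outcomes,
# i.e. nearly twice the 25 cycles needed before the target was imposed.
runs = [cycles_needed() for _ in range(1000)]
print(sum(runs) / len(runs))
```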

This illustrates the problem of an Inspection-and-Correction design for quality improvement.  It does improve the quality of the output – but at a higher cost.  We are treating the symptoms and ignoring the disease.

The internal design of the process is unchanged – and it is still generating mistakes.

How much quality improvement you get and how much it costs you is determined by the design of the underlying process – which has not changed. There is a Law of Diminishing Returns here – and a risk.

The risk is that if quality improves as the result of applying a quality target then it encourages the Governance thumbscrews to be tightened further and forces the people further into the cross-fire between Governance and Finance.

The other negative consequence of the Inspection-and-Correction approach is that it increases both the average and the variation in lead time, which fuels calls for more targets, more sticks and more resources – and pushes costs up even further.

The lesson from this simple reality check seems clear.

The better strategy for improving quality is to design the root causes of errors out of the processes, because then we will get improved quality and improved delivery and improved productivity – and we will discover that we have improved safety as well.

The Six Dice Game is a simpler version of the famous Red Bead Game that W Edwards Deming used to explain why the arbitrary-target-driven-stick-and-carrot style of management creates more problems than it solves.

The illusion of short-term gain but the reality of long-term pain.

And if you would like to see and hear Deming talking about the science of improvement there is a video of him speaking in 1984. He is at the bottom of the page.  Click here.


A Recipe for Improvement PIE

Most of us are realists. We have to solve problems in the real world so we prefer real examples and step-by-step how-to-do recipes.

A minority of us are theorists and are more comfortable with abstract models and solving rhetorical problems.

Many of these Improvement Science blog articles debate abstract concepts – because I am a strong iNtuitor by nature. Most realists are Sensors – so by popular request here is a “how-to-do” recipe for a Productivity Improvement Exercise (PIE).

Step 1 – Define Productivity.

There are many definitions we could choose from because productivity is the ratio of results delivered to resources used.  We could use any of the three currencies – quality, time or money – but the easiest is money. That is because it is easier to measure and we have a well-established department for doing it – Finance – the guardians of the money.  There are two other departments who may need to be involved – Governance (the guardians of the safety) and Operations (the guardians of the delivery).

So the definition we will use is productivity = revenue generated divided by cost incurred.
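As a sanity check, here is a minimal sketch of that ratio in Python – all the figures are invented for illustration:

```python
def productivity(revenue: float, cost: float) -> float:
    """Step 1 definition: revenue generated divided by cost incurred."""
    return revenue / cost

# Illustrative, made-up monthly figures from a Finance report:
months = {"Jan": (120_000, 100_000), "Feb": (118_000, 104_000)}
for month, (revenue, cost) in months.items():
    print(month, round(productivity(revenue, cost), 2))   # Jan 1.2, Feb 1.13
```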

Step 2 – Draw a map of the process we want to make more productive.

This means creating a picture of the parts and their relationships to each other – in particular what the steps in the process are; who does what, where and when; what is done in parallel and what is done in sequence; what feeds into what and what depends on what. The output of this step is a diagram with boxes and arrows and annotations – called a process map. It tells us at a glance how complex our process is – the number of boxes and the number of arrows.  The simpler the process the easier it is to demonstrate a productivity improvement quickly and unambiguously.

Step 3 – Decide the objective metrics that will tell us our productivity.

We have chosen a financial measure of productivity so we need to measure revenue and cost over time – and our Finance department do that already so we do not need to do anything new. We just ask them for the data. It will probably come as a monthly report because that is how Finance processes are designed – the calendar month accounting cycle is not negotiable.

We will also need some internal process metrics (IPMs) that will link to the end of month productivity report values because we need to be observing our process more often than monthly. Weekly, daily or even task-by-task may be necessary – and our monthly finance reports will not meet that time-granularity requirement.

These internal process metrics will be time metrics.

Start with objective metrics and avoid the subjective ones at this stage. They are necessary but they come later.

Step 4 – Measure the process.

There are three essential measures we usually need for each step in the process: A measure of quality, a measure of time and a measure of cost.  For the purposes of this example we will simplify by making three assumptions. Quality is 100% (no mistakes) and Predictability is 100% (no variation) and Necessity is 100% (no worthless steps). This means that we are considering a simplified and theoretical situation but we are novices and we need to start with the wood and not get lost in the trees.

The 100% Quality means that we do not need to worry about Governance for the purposes of this basic recipe.

The 100% Predictability means that we can use averages – so long as we are careful.

The 100% Necessity means that we must have all the steps in there or the process will not work.

The best way to measure the process is to observe it and record the events as they happen. There is no place for rhetoric here. Only reality is acceptable. And avoid computers getting in the way of the measurement. The place for computers is to assist the analysis – and only later may they be used to assist the maintenance – after the improvement has been achieved.

Many attempts at productivity improvement fail at this point – because there is a strong belief that the more computers we add the better. Experience shows the opposite is usually the case – adding computers adds complexity, cost and the opportunity for errors – so beware.

Step 5 – Identify the Constraint Step.

The meaning of the term constraint in this context is very specific – it means the step that controls the flow in the whole process.  The critical word here is flow. We need to identify the current flow constraint.

A tap or valve on a pipe is a good example of a flow constraint – we adjust the tap to control the flow in the whole pipe. It makes no difference how long or fat the pipe is or where the tap is – beginning, middle or end. (So long as the pipe is not too long or too narrow or the fluid too gloopy, because if they are then the pipe itself will become the flow constraint and we do not want that).

The way to identify the constraint in the system is to look at the time measurements. The step that shows the same flow as the output is the constraint step. (And remember we are using the simplified example of no errors and no variation – in real life there is a bit more to identifying the constraint step).
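Here is a toy sketch of that idea in Python – the step names, cycle times and the one-hour time base are all invented for illustration:

```python
# Hypothetical cycle times (minutes per task), measured by direct observation.
cycle_time = {"book": 6, "clerk": 8, "test": 15, "review": 10,
              "report": 9, "file": 5}

# In the simplified no-error, no-variation case each step's best-case flow
# is the reciprocal of its cycle time ...
flow = {step: 60 / t for step, t in cycle_time.items()}   # tasks per hour

# ... and the whole process can deliver no faster than its slowest step,
# so the step whose flow matches the output flow is the constraint.
constraint = min(flow, key=flow.get)
print("Current constraint:", constraint)    # 'test' with these numbers
```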

Step 6 – Identify the ideal place for the Constraint Step.

This is the critical-to-success step in the PIE recipe. Get this wrong and it will not work.

This step requires two pieces of measurement data for each step – the time data and the cost data. So the Operational team and the Finance team will need to collaborate here. Tricky I know but if we want improved productivity then there is no alternative.

Lots of productivity improvement initiatives fall at the Sixth Fence – so beware.  If our Finance and Operations departments are at war then we should not even consider starting the race. It will only make a bad situation even worse!

If they are able to maintain an adult and respectful face-to-face conversation then we can proceed.

The time measure for each step we need is called the cycle time – which is the time interval from starting one task to being ready to start the next one. Please note this is a precise definition and it should be used exactly as defined.

The money measure for each step we need is the fully absorbed cost of time of providing the resource.  Your Finance department will understand that – they are Masters of FACTs!

The magic number we need to identify the Ideal Constraint is the product of the Cycle Time and the FACT – the step with the highest magic number should be the constraint step. It should control the flow in the whole process. (In reality there is a bit more to it than this but I am trying hard to stay out of the trees).
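Continuing the same invented example, a minimal sketch of the magic-number calculation might look like this (the FACT values are assumptions, not real costs):

```python
# Hypothetical measurements: cycle time (minutes per task) and fully
# absorbed cost of time (FACT, pounds per minute) for each step.
steps = {
    # step      (cycle_time, fact)
    "book":     (6,  0.5),
    "clerk":    (8,  0.7),
    "test":     (15, 2.0),
    "review":   (10, 3.5),
    "report":   (9,  1.2),
    "file":     (5,  0.4),
}

# Magic number = cycle time x FACT; the step with the highest value is
# where the constraint should ideally sit.
magic = {name: ct * fact for name, (ct, fact) in steps.items()}
ideal_constraint = max(magic, key=magic.get)
print("Ideal constraint:", ideal_constraint)   # 'review' with these numbers
```

Notice that with these invented numbers the current constraint (‘test’) and the ideal constraint (‘review’) are different steps – which is exactly the situation Step 7 is designed to fix.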

Step 7 – Design the capacity so that the Ideal Constraint is the Actual Constraint.

We are using a precise definition of the term capacity here – the amount of resource-time available – not just the number of resources available. Again this is a precise definition and should be used as defined.

The capacity design sequence means adding and removing capacity to and from steps so that the constraint moves to where we want it.

The sequence is (sketched in code below):
7a) Set the capacity of the Ideal Constraint so it is capable of delivering the required activity and revenue.
7b) Increase the capacity of all the other steps so that the Ideal Constraint actually controls the flow.
7c) Reduce the capacity of each step in turn, a click at a time, until it becomes the constraint – then back off one click.
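A toy sketch of that sequence, continuing the invented example above (the required activity and the size of a “click” are assumptions):

```python
# A toy sketch of the 7a-7c sequence; 'review' is the Ideal Constraint
# from Step 6 and all numbers are invented.
required_activity = 20     # tasks per day needed to deliver the revenue plan
ideal = "review"

# 7a: set the Ideal Constraint's capacity to deliver the required activity.
capacity = {step: 0 for step in ("book", "clerk", "test", "review",
                                 "report", "file")}
capacity[ideal] = required_activity

# 7b: give every other step more capacity so the Ideal Constraint
# actually controls the flow.
for step in capacity:
    if step != ideal:
        capacity[step] = required_activity + 10

# 7c: trim each non-constraint step one "click" at a time until it is about
# to become the constraint, then stop one click above.
click = 1
for step in capacity:
    if step != ideal:
        while capacity[step] - click > capacity[ideal]:
            capacity[step] -= click

print(capacity)   # every other step ends one click above the constraint
```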

Step 8 – Model your whole design to predict the expected productivity improvement.

This is critical because we are not interested in suck-it-and-see incremental improvement. We need to be able to decide if the expected benefit is worth the effort before we authorise and action any changes.  And we will be asked for a business case. That necessity is not negotiable either.

Lots of productivity improvement projects try to dodge this particularly thorny fence behind a smoke screen of a plausible looking business case that is more fiction than fact. This happens when any of Steps 2 to 7 are omitted or done incorrectly.  What we need here is a model and if we are not prepared to learn how to build one then we should not start. It may only need a simple model – but it will need one. Intuition is too unreliable.

A model is defined as a simplified representation of reality used for making predictions.

All models are approximations of reality. That is OK.

The art of modelling is to define the questions the model needs to answer (and the precision and accuracy needed) and then design, build and test the model so that it is just simple enough and no simpler. Adding unnecessary complexity is difficult, time consuming, error prone and expensive. Using a computer model when a simple pen-and-paper model would suffice is a good example of over-complicating the recipe!
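For a recipe this simple the model really can be pen-and-paper scale. Here is a minimal sketch, with invented capacities, costs and price, that predicts the productivity of a current and a proposed design under the no-error, no-variation assumptions above:

```python
# A deliberately "just simple enough" model (all numbers invented): predict
# the productivity of a design from step capacities and daily step costs.
PRICE_PER_TASK = 50.0

def predict_productivity(capacity, daily_cost):
    """Steady-state, no-error, no-variation prediction: flow is set by
    the constraint step; cost is the sum of the step costs."""
    flow = min(capacity.values())          # tasks/day through the process
    revenue = flow * PRICE_PER_TASK
    return revenue / sum(daily_cost.values())

current  = predict_productivity({"A": 25, "B": 20, "C": 30},
                                {"A": 300, "B": 280, "C": 350})
proposed = predict_productivity({"A": 21, "B": 20, "C": 21},
                                {"A": 260, "B": 280, "C": 250})
print(round(current, 2), "->", round(proposed, 2))   # 1.08 -> 1.27
```

With these invented numbers the proposed design delivers the same flow at a lower cost, so the predicted productivity rises – which is the kind of quantified claim a business case needs.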

Many productivity improvement projects that get this far still fall at this fence.  There is a belief that modelling can only be done by Marvins with brains the size of planets. This is incorrect.  There is also a belief that just using a spreadsheet or modelling software is all that is needed. This is incorrect too. Competent modelling requires tools and training – and experience, because it is as much art as science.

Step 9 – Modify your system as per the tested design.

Once you have demonstrated how the proposed design will deliver a valuable increase in productivity then get on with it.

Not by imposing it as a fait accompli – but by sharing the story along with the rationale, real data, explanation and results. Ask for balanced, reasoned and respectful feedback. The question to ask is “Can you think of any reasons why this would not work?” Very often the reply is “It all looks OK in theory but I bet it won’t work in practice – but I can’t explain why”. This is an emotional reaction which may have some basis in fact. It may also just be habitual skepticism or cynicism. Further debate is usually worthless – the only way to know for sure is to do the experiment: a small-scale and time-limited pilot. Set the date and do it. Waiting and debating will add no value. The proof of the pie is in the eating.

Step 10 – Measure and maintain your system productivity.

Keep measuring the same metrics that you need to calculate productivity and in addition monitor the old constraint step and the new constraint steps like a hawk – capturing their time metrics for every task – and tracking what you see against what the model predicted you should see.

The correct tool to use here is a system behaviour chart for each constraint metric.  The before-the-change data is the baseline from which improvement is measured over time, with a dot plotted for each task in real time and made visible to all the stakeholders. This is the voice of the process (VoP).
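As an illustration, here is a minimal sketch of the numbers behind such a chart, using the common XmR convention of a centre line plus limits at 2.66 times the average moving range – the baseline data is invented:

```python
# A minimal sketch of the numbers behind a system behaviour chart.
baseline = [24, 27, 22, 25, 23, 26, 24, 21, 25, 24]   # before-the-change tasks

mean = sum(baseline) / len(baseline)
moving_ranges = [abs(a - b) for a, b in zip(baseline, baseline[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)
lower, upper = mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

print(f"centre={mean:.1f}  limits=({lower:.1f}, {upper:.1f})")
# Each new task is plotted as a dot against these limits: a point outside
# them is a signal worth investigating; everything inside is just noise.
```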

A review after three months with a retrospective financial analysis will not be enough. The feedback needs to be immediate. The voice of the process will dictate if and when to celebrate. (There is a bit more to this step too and the trees are clamouring for attention, but we must stay out of them a bit longer).

And after the charts-on-the-wall have revealed that the expected improvement has actually happened; and after the skeptics have deleted their ‘we told you so’ emails; and after the cynics have slunk off to sulk; and after the celebration party is over; and after the fame and glory has been snatched by the non-participants – after all of that expected change management stuff has happened … there is a bit more work to do.

And that is to establish the new higher productivity design as business-as-usual which means tearing up all the old policies and writing new ones: New Policies that capture the New Reality. Bin the out-of-date rubbish.

This is an essential step because culture changes slowly.  If this step is omitted then out-of-date beliefs, attitudes, habits and behaviours will start to diffuse back in, poison the pond, and undo all the good work.  The New Policies are the reference – but they alone will not ensure the improvement is maintained. What is also needed is a PFL – a performance feedback loop.

And we have already demonstrated what that needs to be – the tactical system behaviour charts for the Intended Constraint step.

The financial productivity metric is the strategic output and is reported monthly – as a system behaviour chart! Just comparing this month with last month is meaningless.  The tactical SBCs for the constraint step must be maintained continuously by the people who own the constraint step – because they control the productivity of the whole process.  They are the guardians of the productivity improvement and their SBCs are the Early Warning System (EWS).

If the tactical SBCs set off an alarm then investigate the root cause immediately – and address it. If they do not then leave it alone and do not meddle.

This is the simplified version of the recipe. The essential framework.

Reality is messier. More complicated. More fun!

Reality throws in lots of rusty spanners so we also need to understand how to manage the complexity; the unnecessary steps; the errors; the meddlers; and the inevitable variation.  It is possible (though not trivial) to design real systems to deliver much higher productivity by using the framework above and by mastering a number of other tools and techniques.  And for that to succeed the Governance, Operations and Finance functions need to collaborate closely with the People and the Process – initially with guidance from an experienced and competent Improvement Scientist. But only initially. This is a learnable skill. And it takes practice to master – so start with easy ones and work up.

If any of these bits are missing or are dysfunctional the recipe will not work. So that is the first nettle the Executive must grasp. Get everyone who is necessary on the same bus going in the same direction – and show the cynics the exit. Skeptics are OK – they will counter-balance the Optimists. Cynics add no value and are a liability.

What you may have noticed is that 8 of the 10 steps happen before any change is made. 80% of the effort is in the design – only 20% is in the doing.

If we get the design wrong then the doing will be an ineffective and inefficient waste of effort, time and money.


The best complement to real Improvement PIE is a FISH course.


The First Step Looks The Steepest

Getting started on improvement is not easy.

It feels like we have to push a lot to get anywhere and when we stop pushing everything just goes back to where it was before and all our effort was for nothing.

And it is easy to become despondent.  It is easy to start to believe that improvement is impossible. It is easy to give up. It is not easy to keep going.


One common reason for early failure is that we often start by trying to improve something that we have little control over. Which is natural, because many of the things that niggle us are not of our making.

But not all Niggles are like that; there are also many Niggles over which we have almost complete control.

It is these close-to-home Niggles that we need to start with – and that is surprisingly difficult too – because it requires a bit of time-investment.


The commonest reason for not investing time in improvement is: “I am too busy.”

Q: Too busy doing what – specifically?

This simple question is a good place to start, because just setting aside a few minutes each day to reflect on where we have been spending our time is a worthwhile task.

And the output of our self-reflection is usually surprising.

We waste lifetime every day doing worthless work.

Then we complain that we are too busy to do the worthwhile stuff.

Q: So what are we scared of? Facing up to the uncomfortable reality of knowing how much lifetime we have wasted already?

We cannot change the past. We can only influence the future. So we need to learn from the past to make wiser choices.


Lifetime is odd stuff.  It both is and is not like money.

We can waste lifetime and we can waste money. In that  respect they are the same. Money we do not use today we can save for tomorrow, but lifetime not used today is gone forever.

We know this, so we have learned to use up every last drop of lifetime – we have learned to keep ourselves busy.

And if we are always busy then any improvement will involve a trade-off: dis-investing and re-investing our lifetime. This implies the return on our lifetime re-investment must come quickly and predictably – or we give up.


One tried-and-tested strategy is to start small and then to re-invest our time dividend in the next cycle of improvement.  And if we make wise re-investment choices, the benefit will grow exponentially.

Successful entrepreneurs do not make it big overnight.

If we examine their life stories we will find a repeating cycle of bigger and bigger business improvement cycles.

The first thing successful entrepreneurs learn is how to make any investment lead to a return – consistently. It is not luck.  They practice with small stuff until they can do it reliably.

Successful entrepreneurs are disciplined and they only take calculated risks.

Unsuccessful entrepreneurs are more numerous and they have a different approach.

They are the get-rich-quick brigade. The undisciplined gamblers. And the Laws of Probability ensure that they all will fail eventually.

Sustained success is not by chance, it is by design.

The same is true for improvement.  The skill to learn is how to spot an opportunity to release some valuable time resource by nailing a time-sapping niggle; and then to reinvest that time in the next most promising cycle of improvement – consistently and reliably.  It requires discipline and learning to use some novel tools and techniques.

This is where Improvement Science helps – because the tools and techniques apply to any improvement. Safety. Flow. Quality. Productivity. Stability. Reliability.

In a nutshell … trustworthy.


The first step looks the steepest because the effort required feels high and the benefit gained looks small.  But it is climbing the first step that separates the successful from the unsuccessful. And successful people are self-disciplined people.

After a few invest-release-reinvest cycles the amount of time released exceeds the amount needed to reinvest. It is then we have time to spare – and we can do what we choose with that.

Ask any successful athlete or entrepreneur – they keep doing it long after they need to – just for the “rush” it gives them.


The tool I use, because it is quick, easy and effective, is called The 4N Chart®.  And it has a helpful assistant called a Niggle-o-Gram®.   Together they work like a focusing lens – they show where the most fertile opportunity for improvement is – the best return on an investment of time and effort.

And when we have proved to ourselves that the first step of improvement is not as steep as we believed – then we have released some time to re-invest in the next cycle of improvement – and in sharing what we have discovered.

That is where the big return comes from.

10/11/2012: Feedback from people who have used The 4N Chart and Niggle-o-Gram for personal development is overwhelmingly positive.

Structure Time to Fuel Improvement

The expected response to any suggestion of change is “Yes, but I am too busy – I do not have time.”

And the respondent is correct. They do not.

All their time is used just keeping their head above water or spinning the hamster wheel or whatever other metaphor they feel is appropriate.  We are at an impasse. A stalemate. We know change requires some investment of time and there is no spare time to invest so change cannot happen. Yes?  But that is not good enough – is it?

Well-intended experts proclaim that “I’m too busy” actually means “I have other things to do that are higher priority“. And by that we mean ” … that are a greater threat to my security and to what I care about“. So to get our engagement our well-intended expert pours emotional petrol on us and sets light to it. They show us dramatic video evidence of how our “can’t do” attitude and behaviour is part of the problem. We are the recalcitrant child who is standing in the way of change and we need to have our face rubbed in our own cynical poo.

Now our platform is really burning. Inflamed is exactly what we are feeling – angry in fact. “Thanks-a-lot. Now #!*@ off!”   And our well-intentioned expert retreats – it is always the same. The Dinosaurs and the Dead Wood are clogging the way ahead.

Perhaps a different perspective might be more constructive.


It is not just how much time we have that is most important – it is how our time is structured.


Humans hate unstructured time. We like to be mentally active for all of our waking moments. 

To test this hypothesis try this demonstration of our human need to fill idle time with activity. When you next talk to someone you know well – at some point after they have finished telling you something just say nothing;  keep looking at them; and keep listening – and say nothing. For up to twenty seconds if necessary. Both you and they will feel an overwhelming urge to say something, anything – to fill the silence. It is called the “pregnant pause effect” and most people find even a gap of a second or two feels uncomfortable. Ten seconds would be almost unbearable. Hold your nerve and stay quiet. They will fill the gap.

This technique is used by cognitive behavioural therapists, counsellors and coaches to help us reveal stuff about ourselves to ourselves – and it works incredibly well. It is also used by some for less altruistic purposes – so when you feel the pain of the pregnant pause just be aware of what might be going on and counter with a question.


If we have no imposed structure for our time then we will create one – because we feel better for it. We have a name for these time-structuring behaviours: habits, pastimes and rituals. And they are very important to us because they reduce anxiety.

There is another name for a pre-meditated time-structure: it is called a plan or a process design. Many people hate not having a plan – and to them any plan is better than none. So in the absence of an imposed alternative we habitually make do with time-wasting plans and poorly designed processes.  We feel busy because that is the purpose of our time-structuring behaviour – and we look busy too – which is also important. This has an important lesson for all improvement scientists: using a measure of “busi-ness” such as utilisation as a measure of efficiency and productivity is almost meaningless. Utilisation does not distinguish between useful busi-ness and useless busi-ness.

We also time-structure our non-working lives. Reading a newspaper, doing the crossword, listening to the radio,  watching television, and web-browsing are all time-structuring behaviours.


This insight into our need for structured time leads to a rational way to release time for change and improvement – and that is to better structure some of our busy time.

A useful metaphor for a time-structure is a tangible structure – such as a building. Buildings have two parts – a supporting, load-bearing structural framework and the functional fittings that are attached to it. Often the structural framework is invisible in the final building – invisible but essential. That is why we need structural engineers. The same is true for time-structuring: the supporting form should be there but it should not get in the way of the intended function. That is why we need process design engineers too. Good process design is invisible time-structuring.


One essential investment of time in all organisations is communication. Face-to-face talking, phone calls, SMS, emails, reports, meetings, presentations, webex and so on. We spend more time communicating with each other than doing anything else other than sleeping.  And more niggles are generated by poorly designed and delivered communication processes than everything else combined. By a long way.


As an example let us consider management meetings.

From a process design perspective many management meetings are both ineffective and inefficient. They are unproductive.  So why do we still have them?

One possible answer is that meetings have two other important purposes: first as a tool for social interaction, and second as a way to structure time.  It turns out that we dislike loneliness even more than idleness – and we can meet both needs at the same time by having a meeting. Productivity is not the primary purpose.


So when we do have to communicate effectively and efficiently in order to collectively resolve a real and urgent problem then we are ill prepared. And we know this. We know that as soon as Crisis Management Committees start to form then we are in really big trouble. What we want in a time of crisis is for someone to structure time for us. To tell us what to do.

And some believe that we unconsciously create crisis after crisis for just that purpose.


Recently I have been running an improvement experiment.  I have  been testing the assumption that we have to meet face-to-face to be effective. This has big implications for efficiency because I work in a multi-site organisation and to attend a meeting on another site implies travelling there and back. That travel takes one hour in each direction when all the separate parts are added together. It has two other costs. The financial cost of the fuel – which is a variable cost – if I do not travel then I do not incur the cost. And there is an emotional cost – I have to concentrate on driving and will use up some of my brain-fuel in doing so. There are three currencies – emotional, temporal and financial.

The experiment was a design change. I changed the design of the communication process from at-the-same-place-and-time to just at-the-same-time. I used an internet-based computer-to-computer link (rather like Skype or FaceTime but with some other useful tools like application sharing).

It worked much better than I expected.

There was the anticipated “we cannot do this because we do not have webcams and no budget for even pencils“. This was solved by buying webcams from the money saved by not burning petrol. The conversion rate was one webcam per four trips – and the webcam is a one-off capital cost, not a recurring revenue cost. This is accountant-speak for “the actual cash released will fund the change“. No extra budget is required. Combine the fuel savings for everyone, plus the parking charges, and the payback time is even shorter.

There were also the anticipated glitches as people got used to the unfamiliar technology (they did not practise of course because they were too busy) but the niggles went away within a few iterations.

So what were the other benefits?

Well one was the travel time saved – two hours per meeting – which was longer than the meeting! The released time cannot be stored and used later like the money can – it has to be reinvested immediately. I reinvested it in other improvement work. So the benefit was amplified.

Another was the brain-fuel saved from not having to drive – which I used to offset my cumulative brain-fuel deficit called chronic fatigue. The left-over was re-invested in the improvement work. 100% recycled. Nothing was wasted.


The unexpected benefit was the biggest one.

The different communication design of a virtual meeting required a different form of meeting structure and discipline. It took a few iterations to realise this – then click – both effectiveness and efficiency jumped up. The time became even better structured, more productive and released even more time to reinvest. Wow!

And the whole thing funded itself.

Productivity Improvement Science

Very often there is a requirement to improve the productivity of a process and operational managers are usually measured and rewarded for how well they do that. Their primary focus is neither safety nor quality – it is productivity – because that is their job.

For-profit organisations see improved productivity as a path to increased profit. Not-for-profit organisations see improved productivity as a path to being able to grow through re-investment of savings.  The goal may be different but the path is the same – productivity improvement.

First we need to define what we mean by productivity: it is the ratio of a system output to a system input. There are many input and output metrics to choose from and a convenient one to use is the ratio of revenue to expenses for a defined period of time.  Any change that increases this ratio represents an improvement in productivity on this purely financial dimension and we know that this financial data is measured. We just need to look at the bank statement.

There are two ways to approach productivity improvement: by considering the forces that help productivity and the forces that hinder it. This force-field metaphor was described by the psychologist Kurt Lewin (1890-1947) and has been developed and applied extensively and successfully in many organisations and many scenarios in the context of change management.

Improvement results from either strengthening helpers or weakening hinderers or both – and experience shows that it is often quicker and easier to focus attention on the hinderers because that leads to both more improvement and less stress in the system. Usually it is just a matter of alignment. Two strong forces in opposition result in high stress and low motion; in alignment they create low stress and high acceleration.

So what hinders productivity?

Well, anything that reduces or delays workflow will reduce or delay revenue and therefore hinder productivity. Anything that increases resource requirement will increase cost and therefore hinder productivity. So looking for something that causes both and either removing or realigning it will have a Win-Win impact on productivity!

A common factor that reduces and delays workflow is the design of the process – in particular a design that has a lot of sequential steps performed by different people in different departments. The handoffs between the steps are a rich source of time-traps and bottlenecks and these both delay and limit the flow.  A common factor that increases resource requirement is making mistakes because errors generate extra work – to detect and to correct.  And there is a link between fragmentation and errors: in a multi-step process there are more opportunities for errors – particularly at the handoffs between steps.

So the most useful way to improve the productivity of a process is to simplify it by combining several small, separate steps into single larger ones.

A good example of this can be found in healthcare – and specifically in the outpatient department.

Traditionally visits to outpatients are defined as “new” – which implies the first visit for a particular problem – and “review” which implies the second and subsequent visits.  The first phase is the diagnostic work and this often requires special tests or investigations to be performed (such as blood tests, imaging, etc) which are usually done by different departments using specialised equipment and skills. The design of departmental work schedules requires a patient to visit on a separate occasion to a different department for each test. Each of these separate visits incurs a delay and a risk of a number of errors – the commonest of which is a failure to attend for the test on the appointed day and time. Such did-not-attend or DNA rates are surprisingly high – and values of 10% are typical in the NHS.

The cumulative productivity-hindering effect of this multi-visit diagnostic process design is large.  Suppose there are three steps: New-Test-Review, and each step has a 10% DNA rate and a 4-week wait. The quickest that a patient could complete the process is 12 weeks and the chance of getting through right first time (the yield) is about 90% x 90% x 90% = 73%, which implies that 27% extra resource is needed to correct the failures.  Most attempts to improve productivity focus on forcing down the DNA rate – usually with limited success. A more effective approach is to redesign the process by combining the three New-Test-Review steps into one visit.  Exactly the same resources are needed to do the work as before but now the minimum time would be 4 weeks, the right-first-time yield would increase to 90%, and the extra resources required to manage the two handoffs, the two queues, and the two sources of DNAs would be unnecessary.  The result is a significant improvement in productivity at no cost.  It is also an improvement in the quality of the patient experience – but that is an unintended bonus.
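The arithmetic in that paragraph is easy to replay – a minimal sketch:

```python
# The arithmetic behind the three-visit versus one-stop comparison,
# using the DNA rate and waits given in the text.
dna_rate, wait_weeks = 0.10, 4

# Three separate visits (New -> Test -> Review):
yield_3 = (1 - dna_rate) ** 3        # ~0.73 right-first-time
min_time_3 = 3 * wait_weeks          # 12 weeks minimum
rework = 1 - yield_3                 # ~27% of jobs need corrective work

# One combined visit with the same resources:
yield_1 = 1 - dna_rate               # 0.90 right-first-time
min_time_1 = wait_weeks              # 4 weeks minimum

print(f"{yield_3:.0%} vs {yield_1:.0%} yield; "
      f"{min_time_3} vs {min_time_1} weeks; {rework:.0%} rework avoided")
```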

So if the solution is that obvious and that beneficial then why are we not doing this everywhere? The answer is that we do in some areas – in particular where quality and urgency are important, such as fast-track one-stop clinics for suspected cancer. However, we are not doing it as widely as we could, and one reason for that is a hidden hinderer: the way that productivity is estimated in the business case and measured in the day-to-day business.

Typically process productivity is estimated using the calculated unit price of the product or service. The unit price is arrived at by adding up the unit costs of the steps and adding an allocation of the overhead costs (how overhead is allocated is subject to a lot of heated debate by accountants!). The unit price is then multiplied by expected activity to get expected revenue and divided by the total cost (or budget) to get the productivity measure.  This approach is widely taught and used and is certainly better than guessing but it has a number of drawbacks. Firstly, it does not take into account the effects of the handoffs and the queues between the steps and secondly it drives step-optimisation behaviour. A departmental operational manager who is responsible and accountable for one step in the process will focus their attention on driving down costs and pushing up utilisation of their step because that is what they are performance managed on. This in itself is not wrong – but it can become counter-productive when it is done in isolation and independently of the other steps in the process.  Unfortunately our traditional management accounting methods do not prevent this unintentional productivity hindering behaviour – and very often they actually promote it – literally!

This insight is not new – it has been recognised by some for a long time – so we might ask ourselves why this is still the case. That is a very good question which opens another “can of worms” that, for the sake of brevity, will be deferred to a later conversation.

So, when applying Improvement Science in the domain of financial productivity improvement, the design of both the process and of the productivity modelling-and-monitoring method may need addressing at the same time.  Unfortunately this does not seem to be common knowledge, and this insight may explain why productivity improvements do not happen more often – especially in publicly funded not-for-profit service organisations such as the NHS.

All Aboard for the Ride of Our Lives!

In 1825 the world changed when the Age of Rail was born with the opening of the Stockton-to-Darlington line and the demonstration that a self-powered mobile steam engine could pull more trucks of coal than a team of horses.

This launched the industrial revolution into a new phase by improving the capability to transport heavy loads over long distances more conveniently, reliably, quickly, and cheaply than could canals or roads.

Within 25 years the country was criss-crossed by thousands of miles of railway track and thousands more miles were rapidly spreading across the world. We take it for granted now but this almost overnight success was the result of over 100 years of painful innovation and improvement. Iron rail tracks had been in use for a long time – particularly in quarries and ports. Newcomen’s atmospheric steam engine had been pumping water out of mines since 1712; James Watt and Matthew Boulton had patented their improved separate-condenser static steam engine in 1775; and Richard Trevithick had built a self-propelled high-pressure steam engine called “Puffing Devil” in 1801. So why did it take so long for the idea to take off? The answer is quite simple – it needed the lure of big profits to attract the entrepreneurs who had the necessary influence and cash to make it happen at scale and pace.  The replacement of windmills and watermills by static steam engines had already allowed factories to be built anywhere – rather than limiting them to the tops of windy hills and the sides of fast-flowing rivers. But it was not until the industrial revolution had achieved sufficient momentum that road and canal transport became a serious constraint to further growth of industry, wealth and the British Empire.

But not everyone was happy with the impact that mechanisation brought – the Luddites were the skilled craftsmen who opposed the use of mechanised looms that could be operated by lower-skilled and therefore cheaper labour.  They were crushed in 1812 by political forces more powerful than they were – and the term “luddite” is now used for anyone who blindly opposes change from a position of self-protection.

Only 140 years later it was all over for the birthplace of the Rail Age – the steam locomotive was relegated to the museums when Dr Richard Beeching, the efficiency-focussed Technical Director of ICI, published his reports that led to the cost-improvement-programme (CIP) that reorganised the railways and led to the loss of 70,000 jobs, hundreds of small “unprofitable” stations and thousands of miles of track.  And the reason for the collapse of the railways was that roads had leap-frogged both canals and railways because the “internal combustion engine” proved a smaller, lighter, more powerful, cheaper and more flexible alternative to steam or horses.

It is of historical interest that Henry Ford developed the production line to mass produce automobiles at a price that a factory worker could afford – and Toyoda invented a self-stopping mechanised loom that improved productivity dramatically by preventing damaged cloth being produced if a thread broke by accident. The historical links come together because Toyoda sold the patents to his self-stopping loom to fund the creation of the Toyota Motor Company which used Henry Ford’s production-line design and integrated the Toyoda self-monitoring, stopping and continuous improvement philosophy.

It was not until twenty years after British Rail was created that Japan emerged as an industrial superpower by demonstrating that it had learned how to improve both quality and reduce cost much more effectively than the “complacent” Europe and America. The tables were turned and this time it was the West that had to learn – and quickly.  Unfortunately not quickly enough. Other developing countries seized the opportunity that mass mechanisation, customisation and a large, low-expectation, low-cost workforce offered. They now produce manufactured goods at prices that European and American companies cannot compete with. Made in Britain has become Made in China.

The lesson of history has been repeated many times – innovations are like seeds that germinate but do not disseminate until the context is just right – then they grow, flower, seed and spread – and are themselves eventually relegated to museums by the innovations that they spawned.

Improvement Science has been in existence for a long time in various forms, and it is now finding more favourable soil to grow as traditional reactive and incremental improvement methods run out of steam when confronted with complex system problems. Wicked problems such as a world population that is growing larger and older at the same time as our reserves of non-renewable natural resources are dwindling.

The promise that Improvement Science offers is the ability to avoid the boom-to-bust economic roller-coaster that devastates communities twice – on the rise and again on the fall. Improvement Science offers an approach that allows sensible and sustainable changes to be planned, implemented and then progressively improved.

So what do we want to do? Watch from the sidelines and hope, or leap aboard and help?

And remember what happened to the Luddites!

Resistance to Change

Many people who are passionate about improvement become frustrated when they encounter resistance-to-change.

It does not matter what sort of improvement is desired – safety, delivery, quality, costs, revenue, productivity or all of them.

The natural and intuitive reaction to meeting resistance is to push harder – and our experience of the physical world has taught us that if we apply enough pressure at the right place then resistance will be overcome and we will move forward.

Unfortunately we sometimes discover that we are pushing against an immovable object and even our maximum effort is futile – so we give up and label it as “impossible”.

Much of Improvement Science appears counter-intuitive at first sight and the challenge of resistance is no different.  The counter-intuitive response to feeling resistance is to pull back, and that is exactly what works better. But why does it work better? Isn’t that just giving up and giving in? How can that be better?

To explain the rationale it is necessary to examine the nature of resistance more closely.

Resistance to change is an emotional reaction to an unconsciously perceived threat that is translated into a conscious decision, action and justification: the response. The range of verbal responses is large, and the range of non-verbal responses is just as large.  Attempting to deflect or defuse all of them is impractical, ineffective and leads to a feeling of frustration and futility.

This negative emotional reaction we call resistance is non-specific because that is how our emotions work – and it is triggered as much by the way the change is presented as by what the change is.

Many change “experts” recommend selling-versus-telling as the better method of “driving” change, and recommend learning psycho-manipulation techniques to achieve it – close-the-deal sales training for example. Unfortunately this strategy can create a psychological “arms race” which can escalate just as quickly and lead to the same outcome: an emotional battle and psychological casualties. This outcome is often given the generic label of “stress”.

An alternative approach is to regard resistance behaviour as multi-factorial, and one model separates the non-specific resistance response into four categories: Why Do – Don’t Do – Can’t Do – Won’t Do.

The Why Do response is valuable feedback because it says “we do not understand the purpose of the proposed change” – and it is not unusual for proposals to be purposeless. This is sometimes called “meddling”.  This is fear of the unknown.

The Don’t Do is valuable feedback that is saying “there is a risk with this proposed change – an unintended negative consequence that may be greater than the intended positive outcome“.  Often it is very hard to explain this NoNo reaction because it is the output of an unconscious thought process that operates out of awareness. It just doesn’t feel good. And some people are better at spotting the risks – they prefer to wear the Black Hat – they are called skeptics.  This is fear of failure.

The Can’t Do is also valuable feedback that is saying “we get the purpose and we can see the problem and the benefit of a change – we just cannot see the path that links the two because it is blocked by something.” This reaction is often triggered by an unconscious recognition that some form of collaborative working will be required but the cultural context is low on respect and trust. It can also just be a manifestation of a knowledge, skill or experience gap – the “I don’t know how to do” gap. Some people habitually adopt the Victim role – most are genuine and do not know how.

The Won’t Do response is also valuable feedback that is saying “we can see the purpose, the problem, the benefit, and the path but we won’t do it because we don’t trust you“. This reaction is common in a low-trust culture where manipulation, bullying and game playing is the observed and expected behaviour. The role being adopted here is the Persecutor role – and the psychological discount is caring for others. Persecutors lack empathy.

The common theme here is that all resistance-to-change responses represent valuable feedback, which explains why the better reaction to resistance is to stop talking and start listening – because to make progress we will need to use the feedback to diagnose which components of resistance are present. This is necessary because each category requires a different approach.

For example Why Do requires making both the problem and the purpose explicit; Don’t Do requires exploring the fear and bringing to awareness what is fuelling it; Can’t Do requires searching for the skill gaps and filling them; and Won’t Do requires identifying the trust-eroding beliefs, attitudes and behaviours and making it safe to talk about them.

Resistance-to-change is generalised as a threat when in reality it represents an opportunity to learn and to improve – which is what Improvement Science is all about.

The Bucket Brigade Fire Fighting Service

Fire-fighting is a behaviour that has a long history, and before Fireman Sam arrived on the scene we had the Bucket Brigade.  This was a people-intensive process designed to deliver water from the nearest pump, pond or river with as little risk, delay and effort as possible. The principle of a bucket-brigade is that a chain of people forms between the pump and the fire and they pass buckets in two directions – full ones from the pump to the fire and empty ones from the fire back to the pump.

A bucket brigade is a useful metaphor for many processes, and an Improvement Science Practitioner (ISP) can learn a lot from exploring its behaviour.

First of all the number of steps in the process or stream is fixed because it is determined by the distance between the pump and the fire. The time it takes for a Bucket Passer to pass a bucket to the next person is predictable too, and it is this cycle-time that determines the rate at which a bucket will move along the line. The fixed step-number and fixed cycle-time imply that the time it takes for a bucket to pass from one end of the line to the other is fixed too. It does not matter if the bucket is empty, half empty or full – the delivery time per bucket is consistent from bucket to bucket. The outflow however is not fixed – it is determined by how full each bucket is when it reaches the end of the line: empty buckets mean zero flow, full buckets mean maximum flow.

This implies that the process is behaving like a time-trap because the delivery time and the delivery volume (i.e. flow) are independent. Having bigger buckets or fuller buckets makes no difference to the time it takes to traverse the line but it does influence the outflow.

Most systems have many processes that are structured just like a bucket brigade: each step in the process contributes to completing the task before handing the part-completed task on to the next step.

The four dimensions of improvement are Safety, Flow, Quality and Productivity and we can see that, if we are not dropping buckets, then the safety, flow and quality are fixed by the design of the process. So what can we do to improve productivity?

Well, it is evident that the time it takes to do the hand-off adds to the cycle-time of each step. So along comes the Fire Service Finance Department, who see time-as-money and work out that the unit cost of each step of the process could be reduced by accumulating the jobs at each stage and then handing them off as a batch – because the cost of the hand-off can now be shared across several buckets. They conclude that the unit cost for the steps will come down and productivity will go up – simple maths and intuitively obvious in theory – but does it actually work in reality?

Q: Does it reduce the number of Bucket Passers? No. We need just as many as we did before. What we are doing is replacing the smaller buckets with bigger ones – and that will require capital investment. So when our Finance Department use the lower unit cost as justification then the bigger, more expensive buckets start to look like a good financial option – on paper. But looking at the wage bills we can see that they are the same as before, which raises a question: have the bigger buckets increased the flow or reduced the delivery time? We will need a tangible, positive and measurable improvement in productivity to justify our capital investment.

To summarise: we have the same number of Bucket Passers working at the same cycle time so there is no improvement in how long it takes for the water to reach the fire from the pump! The delivery time is unchanged. And using bigger buckets implies that the pump needs to work faster to fill them in one cycle of the process – but to minimise cost when we created the Fire Service we bought a pump with just enough average flow capacity, and it cannot be made to increase its flow. So, equipped with a bigger bucket, the first Bucket Passer has to wait longer for their bigger bucket to be filled before passing it on down the line. This implies a longer cycle-time for the first step, and therefore for every step in the chain. So the delivery-time will actually get longer and the flow will stay the same – on average. All we appear to have achieved is a higher cost and a longer delivery time – which is precisely the opposite of what we intended. Productivity has actually fallen!
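To make the arithmetic explicit, here is a minimal sketch (all the numbers are assumptions for illustration, not from the story): the pump rate pins the flow, so bigger buckets lengthen every cycle, and with it the delivery time, without delivering any more water per minute.

```python
# A pump-limited bucket brigade: bigger buckets take longer to fill, so the
# first passer's cycle time - and hence the pace of the whole line - grows.
def brigade(bucket_litres, n_steps=10, handoff_mins=0.5, pump_rate=10.0):
    fill_time = bucket_litres / pump_rate       # minutes to fill one bucket
    cycle_time = max(fill_time, handoff_mins)   # the slowest activity paces all
    delivery_time = n_steps * cycle_time        # pump-to-fire time per bucket
    flow = bucket_litres / cycle_time           # litres/min reaching the fire
    return delivery_time, flow

print(brigade(bucket_litres=5))    # (5.0, 10.0)  - small buckets
print(brigade(bucket_litres=20))   # (20.0, 10.0) - 4x the delivery time, same flow
```

The flow is pinned at the pump’s 10 litres per minute in both cases – all the bigger buckets buy us is a longer delivery time.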

In a state of  near-panic the Fire Service Finance Department decide to measure the utilisation of the Bucket Passers and discover that it has fallen which must mean that they have become lazy! So a Push Policy is imposed to make them work faster – the Service cannot afford financial inducements – and threats cost nothing. The result is that in their haste to avoid penalties the bigger, fuller, heavier buckets get fumbled and some of the precious water is lost – so less reaches the fire.  The yield of the process falls and now we have a more expensive, longer delivery time, lower flow process. Productivity has fallen even further and now the Bucket Passers and Accountants are at war. How much worse can it get?

Where did we go wrong?

We made an error of omission. We omitted to learn the basics of process design before attempting to improve the productivity of our time-trap dominated process!  Our error of omission led us to confuse the step, stage, stream and system and we incorrectly used stage metrics (unit cost and utilisation) in an attempt to improve system performance (productivity). The outcome was the exact opposite of what we intended; a line of unhappy Bucket Passers; a frustrated Finance Department and an angry Customer whose house burned down because our Fire Service did not deliver enough water on time. Lose-Lose-Lose.

Q1: Is it possible to improve the productivity of a time-trap design?

A1: Yes, it is.

Q2: How do we avoid making the same error?

A2: Follow the FISH.

Leading from the Middle

Cuthbert Simpson is reputed to be the first person to be “stretched” during the reign of Mary I – pulled in more than one direction at the same time while trying, in vain, to satisfy the simultaneous demands of his three interrogators.

Being a middle manager in a large organisation feels rather like this – pulled in many directions trying to satisfy the insatiable appetites for improvement of Governance (quality), Operations (delivery) and Finance (productivity).

The critical-to-survival skill for the over-stretched middle manager is the ability to influence others – or rather three complementary influencing styles.

One dimension is vertical and strategic-tactical: it requires using the organisational strategy to influence operational tactics, and using front-line feedback to influence future strategic decisions. This influencing dimension requires two complementary styles of behaviour: followership and leadership.

The other dimension is horizontal and operational and requires influencing peer-middle-managers in other departments. This requires a third style of behaviour: collaboration.

The successful middle manager is able to switch influencing style as effortlessly as changing gear when driving. Select the wrong style at the wrong time and there is an unpleasant grating of teeth and possibly a painful career-grinding-to-a-halt experience.

So what do these three styles have to do with Improvement Science?

Taking the last point first.  Middle managers are the linchpin on which whole system improvement depends.  Whole system improvement is impossible without their commitment – just as a car without a working gearbox is a heap of near-useless junk.  Whole system improvement needs middle managers who are skilled in all three styles of behaviour.

The most important style is collaboration – the ability to influence peers – because that is the key to the other two.  Let us consider a small socioeconomic system that we all have experience of – the family. How difficult is it to manage children when the parent-figures do not get on with each other and broadcast confusingly mixed messages? Almost impossible. The children learn quickly to play one off against the other and sit back and enjoy the spectacle.  And as a child, how difficult is it to manage the parent-figures when you are always fighting and arguing with your siblings and peers and competing with each other for attention? Almost impossible again. Children are much more effective in getting what they want when they learn how to work together.

The same is true in organisations. When influencing from-middle-to-strategic it is more effective to influence your peers and then work together to make the collective case; and when influencing from-middle-to-tactical it is more effective to influence your peers and then work together to set clear and unambiguous expectations.

The key survival skill is the ability to influence your peers effectively and that means respect for their opinion, their knowledge, their skill and their time – and setting the same expectation of them. Collaboration requires trust; and trust requires respect; and respect is earned by example.

PS. It also helps a lot to be able to answer the question “Can you show us how?”

Pushmepullyu

The pushmepullyu is a fictional animal immortalised in the 1960s film Dr Dolittle, featuring Rex Harrison who learned from a parrot how to talk to animals.  The pushmepullyu was a rare, mysterious animal that was never captured and displayed in zoos. It had a sharp-horned head at both ends and while one head slept the other stayed awake – so it was impossible to sneak up on and capture.

The spirit of the pushmepullyu lives on in Improvement Science as Push-Pull and remains equally mysterious and difficult to understand and explain. It is confusing terminology. So what does Push-Pull actually mean?

To decode the terminology we need to first understand a critical metric of any process – the constraint cycle time (CCT) – and to do that we need to define what the terms constraint and cycle time mean.

Consider a process that comprises a series of steps that must be completed in sequence.  If we put one task through the process we can measure how long each step takes to complete its contribution to the whole task.  This is the touch time of the step and if the resource is immediately available to start the next task this is also the cycle time of the step.

If we now start two tasks at the same time then we will observe that when an upstream step has a longer cycle time than the next step downstream it will shadow the downstream step. In contrast, if the upstream step has a shorter cycle time than the next step downstream then it will expose the downstream step. The differences in the cycle times of the steps will determine the behaviour of the process.

Confused? Probably.  The description above is correct BUT hard to understand because we learn better from reality than from rhetoric; and we find pictures work better than words.  Pragmatic comes before academic; reality before theory.  We need a realistic example to learn from.

Suppose we have a process that we are told has three steps in sequence, and when one task is put through it takes 30 mins to complete.  This is called the lead time and is an important process output metric. We now know it is possible to complete the work in 30 mins so we can set this as our lead time expectation.  

Suppose we plot a chart of lead times in the order that the tasks start and record the start time and lead time for each one – and we get a chart that looks like this. It is called a lead time run chart.  The first six tasks complete in 30 mins as expected – then it all goes pear-shaped. But why?  The run chart does not tell  us the reason – it just alerts us to dig deeper. 

The clue is in the run chart but we need to know what to look for.  We do not know how to do that yet so we need to ask for some more data.

We are given this run chart – which is a count of the number of tasks being worked on recorded at 5 minute intervals. It is the work in progress run chart.

We know that we have a three step process and three separate resources – one for each step. So we know that if there is a WIP of less than 3 we must have idle resources; and if there is a WIP of more than 3 we must have queues of tasks waiting.

We can see that the WIP run chart looks a bit like the lead time run chart.  But it still does not tell us what is causing the unstable behaviour.

In fact we do already have all the data we need to work it out but it is not intuitively obvious how to do it. We feel we need to dig deeper.

 We decide to go and see for ourselves and to observe exactly what happens to each of the twelve tasks and each of the three resources. We use these observations to draw a Gantt chart.

Now we can see what is happening.

We can see that the cycle time of Step 1 (green) is 10 mins; the cycle time for Step 2 (amber) is 15 mins; and the cycle time for Step 3 (blue) is 5 mins.


This explains why the minimum lead time was 30 mins: 10+15+5 = 30 mins. OK – that makes sense now.

Red means tasks waiting and we can see that a lead time longer than 30 mins is associated with waiting – which means one or more queues.  We can see that there are two queues – the first between Step 1 and Step 2 which starts to form at Task G and then grows; and the second before Step 1 which first appears for Task J  and then grows. So what changes at Task G and Task J?

Looking at the chart we can see that the slope of the left hand edge is changing – it is getting steeper – which means tasks are arriving faster and faster. We look at the interval between the start times and it confirms our suspicion. This data was the clue in the original lead time run chart. 

Looking more closely at the differences between the start times we can see that the first three arrive at one every 20 mins; the next three at one every 15 mins; the next three at one every 10 mins and the last three at one every 5 mins.

Ah ha!

Tasks are being pushed  into the process at an increasing rate that is independent of the rate at which the process can work.     

When we compare the rate of arrival with the cycle time of each step in a process we find that one step will be most exposed – it is called the constraint step and it is the step that controls the flow in the whole process. The constraint cycle time is therefore the critical metric that determines the maximum flow in the whole process – irrespective of how many steps it has or where the constraint step is situated.
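In code form this boils down to almost nothing – a trivial sketch using the figures from this example:

```python
# The constraint cycle time is the longest step cycle time, and it caps the
# flow of the whole process - however many steps there are.
cycle_times = [10, 15, 5]        # minutes per task at Steps 1 to 3
cct = max(cycle_times)           # constraint cycle time = 15 minutes (Step 2)
max_flow = 60 / cct              # at most 4 tasks per hour can flow through
print(cct, max_flow)
```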

If we push tasks into the process slower than the constraint cycle time then all the steps in the process will be able to keep up and no queues will form – but all the resources will be under-utilised (Tasks A to C).

If we push tasks into the process faster than the cycle time of any step then queues will grow upstream of each of these overloaded steps – and those queues will grow bigger, take up space and take up time, and will progressively clog up the resources upstream of the constraint while starving those downstream of work (Tasks G to L).

The optimum is when the work arrives at the same rate as the cycle time of the constraint – this is called pull and it means that the constraint acts as the pacemaker and is used to pull the work into the process (Tasks D to F).

With this new understanding we can see that the correct rate to load this process is one task every 15 mins – the cycle time of Step 2.

We can use a Gantt chart to predict what would happen.

The waiting is eliminated, the lead time is stable and meets our expectation, and when task B arrives the WIP is 2 and stays stable.

In this example we can see that there is now spare capacity at the end for another task – we could increase our productivity; and we can see that we need less space to store the queue which also improves our productivity.  Everyone wins. This is called pull scheduling.  Pull is a more productive design than push. 
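For the curious, here is a minimal simulation sketch (my own illustration, not part of the original exercise) of the same three-step process; the cycle times are the ones measured from the Gantt chart and the arrival times follow the accelerating push pattern described above.

```python
# A three-step sequential process with one resource per step and cycle times
# of 10, 15 and 5 minutes - the worked example above.
def simulate(arrivals, cycle_times=(10, 15, 5)):
    """Return the lead time of each task for the given arrival times."""
    free_at = [0.0] * len(cycle_times)   # when each step's resource is next free
    lead_times = []
    for arrival in arrivals:
        ready = arrival                          # ready for Step 1 on arrival
        for step, ct in enumerate(cycle_times):
            start = max(ready, free_at[step])    # wait for the resource if busy
            ready = start + ct                   # ready for the next step
            free_at[step] = ready                # resource busy until then
        lead_times.append(ready - arrival)
    return lead_times

# Push: arrival intervals shrink from 20 to 15 to 10 to 5 minutes (Tasks A-L).
push = [0, 20, 40, 55, 70, 85, 95, 105, 115, 120, 125, 130]
print(simulate(push))   # [30, 30, 30, 30, 30, 30, 35, 40, 45, 55, 65, 75]

# Pull: arrivals paced at the constraint cycle time - one task every 15 minutes.
pull = [15 * i for i in range(12)]
print(simulate(pull))   # [30, 30, 30, ...] - stable at the 30 minute minimum
```

The lead times stay at the 30 minute minimum until the arrival interval drops below the 15 minute constraint cycle time – exactly as the run charts showed.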

To improve process productivity it is necessary to measure the sequence and cycle time of every step in the process.  Without that information it is impossible to understand and rationally improve our process.     

BUT in reality we have to deal with variation – in everything – so imagine how hard it is to predict how a multi-step process will behave when work is being pumped into it at a variable rate and resources come and go! No wonder so many processes feel unpredictable, chaotic, unstable, out-of-control and impossible to both understand and predict!

This feeling is an illusion because by learning and using the tools and techniques of Improvement Science it is possible to design and predict-within-limits how these complex systems will behave.  Improvement Science can unravel this Gordian knot!  And it is not intuitively obvious. If it were we would be doing it.

Flap-Flop-Flip

The world seems to be getting itself into a real flap at the moment.

The global economy is showing signs of faltering – the perfect dream of eternal financial growth seems to be showing cracks and is increasingly looking tarnished.

The doom mongers are surprisingly quiet – perhaps because they do not have any new ideas either.


It feels like the system is heading for a big flop and that is not a great feeling.

Last week I posed the Argument-Free-Problem-Solving challenge – and some were curious enough to have a go. It seems that the challenge needs more explanation of how it works to create enough engagement to climb the skepticism barrier.

At the heart of the AFPS method is The 4N Chart® – a simple, effective and efficient way to get a balanced perspective of the emotional contours of the change terrain.  The improvement process boils down to recognising, celebrating, and maintaining the Nuggets, flipping the Niggles into NoNos and reinvesting the currencies that are released into converting NiceIfs into more Nuggets.

The trick is the flip.


To perform a flip we have to make our assumptions explicit – which means we have to use external reality to challenge our internal rhetoric.  We need real data – presented in an easily digestible format – as a picture – and in context which converts the data into information that we can then ingest and use to grow our knowledge and broaden our understanding.

To convert knowledge into understanding we must ask a question: “Is our assumption a generalisation from a specific experience?”

For example – it is generally assumed that high utilisation is associated with high productivity – and we want high productivity so we push for high utilisation.  And if we look at reality we can easily find evidence to support our assumption.  If I have under-utilised fixed-cost resources and I push more work into the process, I see an increase in the flow in the stream, an increase in utilisation, an increase in revenue, and no increase in cost – higher output: higher productivity.

But if we look more carefully we can also find examples that seem to disprove our assumption. I have under-utilised resources and I push more work into the process, and the flow increases initially then falls dramatically, the revenue falls, productivity falls – and when I look at all my resources they are still fully utilised.  The system has become gridlocked – and when I investigate I discover that the resource I need to unlock the flow is tied up somewhere else in the process with more urgent work. My system does not have an anti-deadlock design.

Our rhetoric of generalisation has been challenged by the reality of specifics – and it only takes one example.  One black swan will disprove the generalisation that “all swans are white”.

We now know we need to flip the “general assumption” into “specific evidence” – changing the words “all”, “always”, “none” and “never” into “some” and “sometimes”.

In our example we flip our assumption into “sometimes utilisation and productivity go up together, and sometimes they do not”. This flip reveals a new hidden door in the invisible wall that limits the breadth of our understanding and that unconsciously hinders our progress.

To open that door we must learn how to tell one specific from another and opening that door will lead to a path of discovery, more knowledge, broader understanding, deeper wisdom, better decisions, more effective actions and sustained improvement.

Flap-Flop-Flip.


This week has seen the loss of one of the greatest Improvement Scientists – Steve Jobs – creator of Apple – who put the essence of Improvement Science into words more eloquently than anyone in his 2005 address at Stanford University.

“Your time is limited, so don’t waste it living someone else’s life. Don’t be trapped by dogma – which is living with the results of other people’s thinking. Don’t let the noise of other’s opinions drown out your own inner voice. And most important, have the courage to follow your heart and intuition. They somehow already know what you truly want to become. Everything else is secondary.” Steve Jobs (1955-2011).

And with a lifetime of experience of leading an organisation that epitomises quality by design Steve Jobs had the most credibility of any person on the planet when it comes to management of improvement.

Argument-Free-Problem-Solving

I used to be puzzled when I reflected on the observation that we seem to be able to solve problems as individuals much more quickly and with greater certainty than we can as groups.

I used to believe that having many different perspectives of a problem would be an asset – but in reality it seems to be more of a liability.

Now when I receive an invitation to a meeting to discuss an issue of urgent importance my little heart sinks as I recall the endless hours of my limited life-time wasted in worthless, unproductive discussion.

But, not being one to wallow in despair, I have been busy applying the principles of Improvement Science to this ubiquitous and persistent niggle.  And I have discovered something called Argument Free Problem Solving (AFPS) – or rather that is my name for it because it does what it says on the tin – it solves problems without arguments.

The trick was to treat problem-solving as a process; to understand how we solve problems as individuals; what the worthwhile bits are; how we scupper the process when we add in more than one person; and then how to design-to-align the problem-solving workflow so that it … flows. So that it is effective and efficient.

The result is AFPS and I’ve been testing it out. Wow! Does it work or what!

I have also discovered that we do not need to create an artificial set of Rules or a Special Jargon – we can apply the recipe to any situation in a very natural and unobtrusive way.  Just this week I have seen it work like magic several times: once in defusing what was looking like a big bust-up looming; once to resolve a small niggle that had been magnified into a huge monster and a big battle – the smoke of which was obscuring the real win-win-win opportunity; and once in a collaborative process improvement exercise that demonstrated a 2000% improvement in system productivity – yes – two thousand percent!

So AFPS  has been added to the  Improvement Science treasure chest and (because I like to tease and have fun) I have hidden the key in cyberspace at coordinates  http://www.saasoft.com/moodle

Mwah ha ha ha – me hearties! 

Design-for-Productivity

One tangible output of a process or system design exercise is a blueprint.

This is the set of Policies that define how the design is built and how it is operated so that it delivers the specified performance.

These are just like the blueprints for an architectural design – except that the latter describe a tangible structure while the former describe an intangible function.

A computer system has the same two interdependent components that must be co-designed at the same time: the hardware and the software.


The functional design of a system is manifest as the Seven Flows and one of these is Cash Flow, because if the cash does not flow to the right place at the right time in the right amount then the whole system can fail to meet its design requirement. That is one reason why we need accountants – to manage the money flow – so a critical component of the system design is the Budget Policy.

We employ accountants to police the Cash Flow Policies because that is what they are trained to do and that is what they are good at doing – they are the Guardians of the Cash.

Providing flow-capacity requires providing resource-capacity, which requires providing resource-time; and because resource-time-costs-money then the flow-capacity design is intimately linked to the budget design.
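A toy calculation makes the chain concrete (all the numbers here are invented for illustration): demand determines the resource-time required, and resource-time determines the budget.

```python
# Flow-capacity -> resource-capacity -> resource-time -> money.
demand_per_week = 50        # jobs per week this step must carry (assumed)
touch_time_hours = 1.5      # resource-hours of work per job (assumed)
hourly_rate = 40.0          # cost per resource-hour (assumed)

resource_hours = demand_per_week * touch_time_hours   # 75 resource-hours/week
weekly_budget = resource_hours * hourly_rate          # 3,000 per week
print(resource_hours, weekly_budget)
```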

This raises some important questions:
Q: Who designs the budget policy?
Q: Is the budget design done as part of the system design?
Q: Are our accountants trained in system design?

The challenge for all organisations is to find ways to improve productivity, to provide more for the same in a not-for-profit organisation, or to deliver a healthy return on investment in the for-profit arena (and remember our pensions are dependent on our future collective productivity).

To achieve the maximum cash flow (i.e. revenue) at the minimum cash cost (i.e. expense) then both the flow scheduling policy and the resource capacity policy must be co-designed to deliver the maximum productivity performance.


If we have a single-step process it is relatively easy to estimate both the costs and the budget to generate the required activity and revenue; but how do we scale this up to the more realistic situation when the flow of work crosses many departments – each of which does different work and has different skills, resources and budgets?

Q: Does it matter that these departments and budgets are managed independently?
Q: If we optimise the performance of each department separately will we get the optimum overall system performance?

Our intuition suggests that to maximise the productivity of the whole system we need to maximise the productivity of the parts.  Yes – that is clearly necessary – but is it sufficient?


To answer this question we will consider a process where the stream flows through several separate steps – separate in the sense that they have separate budgets – but not separate in that they are linked by the same flow.

The separate budgets are allocated from the total revenue generated by the outflow of the process. For the purposes of this exercise we will assume the goal is zero profit and we just need to calculate the price that needs to be charged to the “customer” for us to break even.

The internal reports produced for each of our departments for each time period are:
1. Activity – the amount of work completed in the period.
2. Expenses – the cost of the resources made available in the period – the budget.
3. Utilisation – the ratio of the time spent using resources to the total time the resources were available.

We know that the theoretical maximum utilisation of resources is 100% and this can only be achieved when there is zero-variation. This is impossible in the real world but we will assume it is achievable for the purpose of this example.

There are three questions we need answers to:
Q1: What is the lowest price we can achieve and meet the required demand?
Q2: Will optimising each step independently give us this lowest price?
Q3: How do we design our budgets to deliver maximum productivity?


To explore these questions let us play with a real example.

Let us assume we have a single stream of work that crosses six separate departments labelled A-F in that sequence. The department budgets have been allocated based on historical activity and utilisation and our required activity of 50 jobs per time period. We have already worked hard to remove all the errors, variation and “waste” within each department and we have achieved 100% observed utilisation of all our resources. We are very proud of our high effectiveness and our high efficiency.

Our current not-for-profit price is £202,000/50 = £4,040 and because our observed utilisation of resources at each step is 100% we conclude this is the most efficient design and that this is the lowest possible price.

Unfortunately our celebration is short-lived because the market for our product is growing bigger and more competitive and our market research department reports that to retain our market share we need to deliver 20% more activity at 80% of the current price!

A quick calculation shows that our productivity must increase by 50% (New Activity/New Price = 120%/80% = 150%) but as we already have a utilisation of 100% then this challenge looks hopelessly impossible.  To increase activity by 20% will require increasing flow-capacity by 20% which will imply a 20% increase in costs so a 20% increase in budget – just to maintain the current price.  If we no longer have customers who want to pay our current price then we are in trouble.
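The arithmetic in one line, as a code sketch using the figures from the text:

```python
# Productivity is treated here as an activity-to-price index ratio.
activity_index = 1.20                 # 20% more activity required
price_index = 0.80                    # at 80% of the current price
print(activity_index / price_index)   # 1.5 -> a 50% increase in productivity
```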

Fortunately our conclusion is incorrect – and it is incorrect because we are not using the data available to co-design the system such that cash flow and work flow are aligned.  And we do not do that because we have not learned how to design-for-productivity.  We are not even aware that this is possible.  It is, and it is called Value Stream Accounting.

The blacked-out boxes in the table above hide the data that we need to do this – and we do not know what they are. Yet.

But if we apply the theory, techniques and tools of system design, and we use the data that is already available then we get this result …

We can see that the total budget is less, the budget allocations are different, the activity is 20% up and the zero-profit price is 34% less – which is an 83% increase in productivity!

More than enough to stay in business.

Yet the observed resource utilisation is still 100%  and that is counter-intuitive and is a very surprising discovery for many. It is however the reality.

And it is important to be reminded that the work itself has not changed – the ONLY change here is the budget policy design – in other words the resource capacity available at each stage.  A zero-cost policy change.

The example answers our first two questions:
A1. We now have a price that meets our customers’ needs, offers worthwhile work, and allows us to stay in business.
A2. We have disproved our assumption that 100% utilisation at each step implies maximum productivity.

Our third question “How to do it?” requires learning the tools, techniques and theory of System Engineering and Design.  It is not difficult and it is not intuitively obvious – if it were we would all be doing it.

Want to satisfy your curiosity?
Want to see how this was done?
Want to learn how to do it yourself?

You can do that here.



Intention-Decision-Action

Many of us use the terms “effective” and “efficient” and we assume that if we achieve both at the same time then we can call it “success”. They are certainly both necessary but are they sufficient? If they were then every process that was both effective (zero mistakes) and efficient (zero waste) would be hailed a success. This is our hypothesis and to disprove it we only need one example where it fails. Let us see if we can find one in our collective experience.

Threats focus our attention more than opportunities. When our safety is at risk it is a sensible strategy to give the threat our full attention – and our caveman wetware has a built-in personal threat management system: it is called the Fright, Flight, Fight response. The FFF is coordinated by the oldest, most unconscious bits of our wetware and we know it as the fast heart, dry mouth, cold sweat reaction – or adrenalin rush. When we perceive a threat we are hard-wired to generate the emotion called fear, and this tells us we need to make a decision between two actions: to stand our ground or to run away. The decision needs to be made quickly because the outcome of it may determine our survival – so we need a quick, effective and efficient way to do it. If we choose to “fight” then another emotion takes over – anger – and it hijacks our rationality: arguments, fights, battles and wars are all tangible manifestations of our collective reaction – and when the conditions are just right even a single word or action may be perceived as a threat and trigger an argument, then a fight, then a battle, then a war – a classic example of a positive feedback loop that can literally explode into an unstoppable orgy of death and destruction.

Can we measure the “success” of our hard-wired FFF system? Let us consider the outcome of a war – a winner and a loser; and let us also count the cost of a war – lots of valuable resources consumed and lots of dead people on both sides. Wars inflict high costs on both sides and the “loser” is the one who loses most – the winner loses too – just less. But is it all negative? If it were then no one would ever do it – so there must be some tangible benefit. When the sides are unequally matched the victor can survive the losses and can grow by “absorbing” what remains of the loser. This is the dog-eat-dog world of survival of the strongest and represents another positive feedback loop – he who has most takes more.

Threats focus our attention and if we are not at immediate risk then they can also stimulate our creativity – and what is learned in the process of managing a threat can be of lasting value after the threat has passed.  Many of the benefits we enjoy today were “stimulated” by the threats in WWII – for example: digital computers were invented to assist with ballistics calculations and for breaking enemy secret codes. Much of the theory, techniques and tools of Improvement Science were developed during WWII to increase the productivity of weapons-of-war creation – and they have been applied more constructively in peacetime.  Wars are created by people and the “great” warriors create the most effective and efficient lose-lose processes. Using threats to drive creativity is a low-productivity design – we can do much better than that – surely?

So, our experience suggests that effectiveness and efficiency are not enough – there seems to be a piece missing – and this piece is “intention”. Our Purpose.  This insight explains why asking the “What is our purpose?” question is so revealing: if you do not get a reply it is likely that your audience is seeing the challenge as a battle – and the First Rule of War is never to reveal your intention to your enemy – so their battle metaphor prevents them from answering honestly. If you do get an answer it is very often a “to do” answer rather than a “to get” one – unconsciously masking purpose with process and side-stepping the issue.  Their language gives it away though – processes are flagged by verbs, purposes are flagged by nouns – so if you listen to what they say then you can tell.  The other likely answer is a question: not a question for clarification but a question for deflection – the objective being more threat-assessment data and more thinking and preparation time.

If the answer to the Purpose Question is immediate, an outcome, and positive then the respondent is not using a war meta-program; they do not view the challenge as a threat and they do see a creative opportunity for improvement – they see it as a Race. Their intention is improvement for all on all dimensions: quality, delivery and money – and they recognise that healthy competition can be good for both. Do not be fooled – they are neither weak nor stupid – if they perceive a safety threat they will deploy all their creative resources to eliminate it.

One of the commonest errors of commission is to eliminate healthy competition, which is what happens when we have not learned how to challenge with respect: we have let things slip to the point that we are forced to fight or flee. We have not held ourselves to account and we have not learned to ask ourselves “What is my purpose?” People need a purpose to channel their effectiveness and efficiency – and processes also need a purpose because socio-economic systems are the combination of people and processes.

The purpose for any socioeconomic system is the generic phrase “right-thing, right-place, right-price, on-time, first-time, every-time” and is called the system goal.  The purpose of a specific process or person within that system will be aligned to the goal and there are two parts to this: the “right-” parts which are a matter of subjectivity and the “-time” parts which are a matter of objectivity. The process must be designed to deliver the objectives – and before we know what to do we must understand how to decide what to do; and before we know how to decide we must have the wisdom and courage to ask the question and to state our purpose. Intention – Decision – Action.

Low-Tech-Toc

Beware the Magicians who wave High Technology Wands and promise Miraculous Improvements if you buy their Black Magic Boxes!

If a Magician is not willing to open the box and show you the inner workings then run away – quickly.  Their story may be true, the Miracle may indeed be possible, but if they cannot or will not explain HOW the magic trick is done then you will be caught in their spell and will become their slave forever.

Not all Magicians have honourable intentions – those who have been seduced by the Dark Side will ensnare you and will bleed you dry like greedy leeches!

In the early 1980’s a brilliant innovator called Eli Goldratt created a Black Box called OPT that was the tangible manifestation of his intellectual brainchild called ToC – Theory of Constraints. OPT was a piece of complex computer software that was intended to rescue manufacturing from their ignorance and to miraculously deliver dramatic increases in profit. It didn’t.

Eli Goldratt was a physicist and his Black Box was built on strong foundations of Process Physics – it was not Snake Oil – it did work.  The problem was that it did not sell: not enough people believed his claims and those who did discovered that the Black Box was not as easy to use as the Magician suggested.  So Eli Goldratt wrote a book called The Goal in which he explained, in parable form, the Principles of ToC and the theoretical foundations on which his Black Box was built.  The book was a big success but his Black Box still did not sell: just an explanation of how it worked was enough for people to apply the Principles of ToC and get dramatic results. So, Eli abandoned his plan of making a fortune selling Black Boxes and set up the Goldratt Institute to disseminate the Principles of ToC – which he did with considerably more success. Eli Goldratt died in June 2011 after a short battle with cancer and the World has lost a great innovator and a founding father of Improvement Science. His legacy lives on in the books he wrote that chart his personal journey of discovery.

The Principles of ToC are central both to process improvement and to process design.  As Eli unintentionally demonstrated, it is more effective and much quicker to learn the Principles of ToC pragmatically and with low technology – such as a book – than with a complex, expensive, high technology Black Box.  As many people have discovered – adding complex technology to a complex problem does not create a simple solution! Many processes are relatively uncomplicated and do not require high technology solutions. An example is the challenge of designing a high productivity schedule when there is variation in both the content and the volume of the work.

If our required goal is to improve productivity (or profit) then we want to improve the throughput and/or to reduce the resources required. That is relatively easy when there is no variation in content and no variation in volume – such as when we are making just one product at a constant rate – like a Model-T Ford in Black! Add some content and volume variation and the challenge becomes a lot trickier! From the 1950’s the move from mass production to mass customisation in the automobile industry created this new challenge and spawned a series of  innovative approaches such as the Toyota Production System (Lean), Six Sigma and Theory of Constraints.  TPS focussed on small batches, fast changeovers and low technology (kanbans or cards) to keep inventory low and flow high; Six Sigma focussed on scientifically identifying and eliminating all sources of variation so that work flows smoothly and in “statistical control”; ToC focussed on identifying the “constraint steps” in the system and then on scheduling tasks so that the constraints never run out of work.

When applied to a complex system of interlinked and interdependent processes the ToC method requires a complicated Black Box to do the scheduling because we cannot do it in our heads. However, when applied to a simpler system, or to a part of a complex system, it can be done using a low technology method called “paper and pen”. The technique is called Template Scheduling and there is a real example in the “Three Wins” book where the template schedule design was tested using a computer simulation to measure the resilience of the design to natural variation – and the computer was not used to do the actual scheduling. There was no Black Box doing the scheduling. The outcome of the design was a piece of paper that defined the designed-and-tested template schedule: and the design testing predicted a 40% increase in throughput using the same resources. This dramatic jump in productivity might be regarded as “miraculous” or even “impossible” – but only by someone who was not aware of the template scheduling method. The reality is that the designed schedule worked just as predicted – there was no miracle, no magic, no Magician and no Black Box.

The Crime of Metric Abuse

We live in a world that is increasingly intolerant of errors – we want everything to be right all the time – and if it is not then someone must have erred with deliberate intent so they need to be named, blamed and shamed! We set safety standards and tough targets; we measure and check; and we expose and correct anyone who is non-conformant. We accept that is the price we must pay for a Perfect World … Yes? Unfortunately the answer is No. We are deluded. We are all habitual criminals. We are all guilty of committing a crime against humanity – the Crime of Metric Abuse. And we are blissfully ignorant of it so it comes as a big shock when we learn the reality of our unconscious complicity.

You might want to sit down for the next bit.

First we need to set the scene:
1. Sustained improvement requires actions that result in irreversible and beneficial changes to the structure and function of the system.
2. These actions require making wise decisions – effective decisions.
3. These actions require using resources well – efficient processes.
4. Making wise decisions requires that we use our system metrics correctly.
5. Understanding what correct use is means recognising incorrect use – abuse awareness.

When we commit the Crime of Metric Abuse, even unconsciously, we make poor decisions. If we act on those decisions we get an outcome that we do not intend and do not want – we make an error.  Unfortunately, more efficiency does not compensate for less effectiveness – in fact it makes it worse. Efficiency amplifies Effectiveness – “Doing the wrong thing right makes it wronger not righter” as Russell Ackoff succinctly puts it.  Paradoxically our inefficient and bureaucratic systems may be our only defence against our ineffective and potentially dangerous decision making – so before we strip out the bureaucracy and strive for efficiency we had better be sure we are making effective decisions, and that means exposing and treating our nasty habit of Metric Abuse.

Metric Abuse manifests in many forms – and there are two that when combined create a particularly virulent addiction – Abuse of Ratios and Abuse of Targets. First let us talk about the Abuse of Ratios.

A ratio is one number divided by another – which sounds innocent enough – and ratios are very useful, so what is the danger? The danger is that by combining two numbers to create one we throw away information. This is not a good idea when making the best possible decision means squeezing every last drop of understanding out of our information. To unconsciously throw away useful information amounts to incompetence; to consciously throw away useful information is negligence, because we could and should know better.

Here is a time-series chart of a process metric presented as a ratio. This is productivity – the ratio of an output to an input – and it shows that our productivity is stable over time.  We started OK and we finished OK and we congratulate ourselves on our good management – yes? Well, maybe and maybe not.  Suppose we are measuring the Quality of the output and the Cost of the input; then calculating our Value-For-Money productivity from the ratio; and then only sharing this derived metric. What if quality and cost are changing over time in the same direction and at the same rate? The productivity ratio will not change.


Suppose the raw data we used to calculate our ratio was as shown in the two charts of measured Output Quality and measured Input Cost – we can see immediately that, although our ratio is telling us everything is stable, our system is actually changing over time – it is unstable and therefore it is unpredictable. Systems that are unstable have a nasty habit of finding barriers to further change, and when they do they have a habit of crashing – suddenly, unpredictably and spectacularly. If you take your eyes off the white line when driving and drift off course you may suddenly discover a barrier – the crash barrier for example, or worse still an on-coming vehicle! The apparent stability indicated by a ratio is an illusion – or rather a delusion. We delude ourselves that we are OK – in reality we may be on a collision course with catastrophe.
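Here is a tiny sketch (with invented numbers) of the same delusion: the numerator and denominator both drift upwards at the same rate, and the ratio sits reassuringly still.

```python
# Quality and cost both double over ten periods, yet the ratio never moves.
quality = [50 + 5 * t for t in range(11)]    # measured output quality per period
cost = [100 + 10 * t for t in range(11)]     # measured input cost per period
ratio = [q / c for q, c in zip(quality, cost)]
print(ratio)   # [0.5, 0.5, 0.5, ...] - the "stable" productivity delusion
```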

But increasing quality is what we want, surely? Yes – it is what we want – but at what cost? If we use the strategy of quality-by-inspection and add extra checking to detect errors and extra capacity to fix the errors we find, then we will incur higher costs. This is the story that these Quality and Cost charts are showing.  To stay in business the extra cost must be passed on to our customers in the price we charge: and we have all been brainwashed from birth to expect to pay more for better quality. But what happens when the rising price hits our customers’ financial constraint?  We are no longer able to afford the better quality so we settle for the lower quality but affordable alternative.  What happens then to the company that has invested in quality-by-inspection? It loses customers, which means it loses revenue, which is bad for its financial health – and to survive it starts cutting prices, cutting corners, cutting costs, cutting staff and eventually – cutting its own throat! The delusional productivity ratio has hidden the real problem until a sudden and unpredictable drop in revenue and profit provides a reality check – by which time it is too late. Of course if all our competitors are committing the same crime of metric abuse and suffering from the same delusion we may survive a bit longer in the toxic mediocrity swamp – but if a new competitor arrives who is not deluded by ratios and who has learned how to provide consistently higher quality at a consistently lower price – then we are in big trouble: our customers leave and our end is swift and without mercy. Competition cannot bring controlled improvement while the Abuse of Ratios remains rife and unchallenged.

Now let us talk about the second Metric Abuse, the Abuse of Targets.

The blue line on the Productivity chart is the Target Productivity. As leaders and managers we have been brainwashed with the mantra that “you get what you measure” and with this belief we commit the crime of Target Abuse when we set an arbitrary target and use it to decide when to reward and when to punish. We compound our second crime when we connect our arbitrary target to our accounting clock and post periodic praise when we are above target and periodic pain when we are below. We magnify the crime if we have a quality-by-inspection strategy because we create an internal quality-cost trade-off that generates conflict between our governance goal and our finance goal: the result is a festering and acrimonious stalemate. Our quality-by-inspection strategy paradoxically prevents improvement in productivity and we learn to accept the inevitable oscillation between good and bad, and eventually may even convince ourselves that this is the best and the only way.  With this life-limiting-belief deeply embedded in our collective unconsciousness, the more enthusiastically this quality-by-inspection design is enforced the more fear, frustration and failures it generates – until trust is eroded to the point that when the system hits a problem – morale collapses, errors increase, checks are overwhelmed, rework capacity is swamped, quality slumps and costs escalate. Productivity nose-dives and both customers and staff jump into the lifeboats to avoid going down with the ship!

The use of delusional ratios and arbitrary targets (DRATs) is a dangerous and addictive behaviour and should be made a criminal offence punishable by Law, because it is both destructive and unnecessary.

With painful awareness of the problem a path to a solution starts to form:

1. Share the numerator, the denominator and the ratio data as time series charts.
2. Only put requirement specifications on the numerator and denominator charts.
3. Outlaw quality-by-inspection and replace with quality-by-design-and-improvement.  

Metric Abuse is a Crime. DRATs are a dangerous addiction. DRATs kill Motivation. DRATs Kill Organisations.

Charts created using BaseLine

July 5th 2018 – The old NHS is dead.

Today is the last day of the old NHS – ironically on the 70th anniversary of its birth. Its founding principles are no more – care is no longer free at the point of delivery and is no longer provided according to needs rather than means. SickCare®, as it is now called, is a commodity just like food, water, energy, communications, possessions, housing, transport, education and leisure – and the only things we get free-of-charge are air, sunlight, rain and gossip.  SickCare® is now only available from fiercely competitive service conglomerates – TescoHealth and VirginHealth being the two largest.  We now buy SickCare® like we buy groceries – online and in-store.

Gone forever is the public-central-tax-funded-commissioner-and-provider market. Gone forever are the foundation trusts, the clinical commissioning groups and the social enterprises. Gone is the dream of cradle-to-grave equitable health care  – and all in a terrifyingly short time!

The once proud and independent professionals are now paid employees of profit-seeking private providers. Gone is their job-for-life security and gone are their gold-plated, index-linked, final-salary pensions.  Everyone is now hired and fired on the basis of performance, productivity and profit. Step out of line or go outside the limits of acceptability and it is “Sorry but you have breached your contract and we have to let you go”.

So what happened? How did the NHS-gravy-train come off the taxpayer-funded-track so suddenly?

It is easy to see with hindsight when the cracks started to appear. No-one and every-one is to blame.

We did this to ourselves. And by the time we took notice it was too late.

The final straw was when the old NHS became unaffordable because we all took it for granted and we all abused it.  Analysts now agree that there were two core factors that combined to initiate the collapse and they are unflatteringly referred to as “The Arrogance of Clinicians” and “The Ignorance of Managers“.  The latter is easier to explain.

When the global financial crisis struck 10 years ago it destabilised the whole economy and drastic “austerity” measures had to be introduced by the new coalition government. This opened the innards of the NHS to scrutiny by commercial organisations with an eager eye on the £100bn annual budget. What they discovered was a massive black-hole of management ignorance!

Protected for decades from reality by their public sector status the NHS managers had not seen the need to develop their skills and experience in Improvement Science and, when the chips were down, they were simply unable to compete.

Thousands of them hit the growing queues of the unemployed or had to settle for painful cuts in their pay and conditions before they really knew what had hit them. They were ruthlessly replaced by a smaller number of more skilled and more experienced managers from successful commercial service companies – managers who understood how systems worked and how to design them to deliver quality, productivity and profit.

The medical profession also suffered.

With the drop in demand for unproven treatments, the availability of pre-prescribed evidence-based standard protocols for 80% of the long-term conditions, and radically redesigned community-based delivery processes – a large number of super-specialised doctors were rendered “surplus to requirement”. This skill-glut created the perfect buyers market for their specialist knowledge – and they were forced to trade autonomy for survival. No longer could a GP or a Consultant choose when and how they worked; no longer were they able to discount patient opinion or patient expectation; and no longer could they operate autonomous empires within the bloated and bureaucratic trusts that were powerless to performance manage them effectively. Many doctors tried to swim against the tide and were lost – choosing to jump ship and retire early. Many who left it too late to leap failed to be appointed to their previous jobs because of “lack of required team-working and human-factor skills”.

And the public have fared no better than the public-servants. The service conglomerates have exercised their considerable financial muscle to create low-cost insurance schemes that cover only the most expensive and urgent treatments because, even in our Brave New NHS, medical bankruptcy is not politically palatable.  State subsidised insurance payouts provide a safety net  – but they cover only basic care. The too-poor-to-pay are not left to expire on the street as in some countries – but once our immediate care needs are met we have to leave or start paying the going rate.  Our cashless society and our EzeeMonee cards now mean that we pay-as-we-go for everything. The cash is transferred out of our accounts before the buy-as-you-need drug has even started to work!

A small yet strident band of evangelical advocates of the Brave New NHS say it is long overdue and that, in the long term, the health of the nation will be better for it. No longer able to afford the luxury of self-abuse through chronic overindulgence in food, cigarettes and alcohol – and faced with the misery of the outcome of their own actions – many people are shepherded towards healthier lifestyles. Those who comply enjoy lower insurance premiums and attractive no-claims benefits.  Healthier in body perhaps – but what price have we paid for our complacency?


On July 15th 2012 the following headline appeared in one Sunday paper: “Nurses hired at £1,600 a day to cover shortages” and in another “Thousands of doctors face sack: NHS staff contracts could be terminated unless they agree to drastic changes to their pay and conditions“.  We were warned and it is not too late.


The Seven Flows

Improvement Science is the knowledge and experience required to improve … but to improve what?

Improve safety, delivery, quality, and productivity?

Yes – ultimately – but they are the outputs. What has to be improved to achieve these improved outputs? That is a much more interesting question.

The simple answer is “flow”. But flow of what? That is an even better question!

Let us consider a real example. Suppose we want to improve the safety, quality, delivery and productivity of our healthcare system – which we do – what “flows” do we need to consider?

The flow of patients is the obvious one – the observable, tangible flow of people with health issues who arrive and leave healthcare facilities such as GP practices, outpatient departments, wards, theatres, accident units, nursing homes, chemists, etc.

What other flows?

Healthcare is a service with an intangible product that is produced and consumed at the same time – and for those reasons it is very different from manufacturing. The interaction between the patients and the carers is where the value is added, and this implies that the “flow of carers” is critical too. Carers are people – no one has yet invented a machine that cares.

As soon as we have two flows that interact we have a new consideration – how do we ensure that they are coordinated so that they are able to interact at the same place, at the same time, in the right way and in the right amount?

The flows are linked – they are interdependent – we have a system of flows and we cannot just focus on one flow or ignore the inter-dependencies. OK, so far so good. What other flows do we need to consider?

Healthcare is a problem-solving process and it is reliant on data – so the flow of data is essential – some of this is clinical data and related to the practice of care, and some of it is operational data and related to the process of care. Data flow supports the patient and carer flows.

What else?

Solving problems has two stages – making decisions and taking actions – in healthcare the decision is called diagnosis and the action is called treatment. Both may involve the use of materials (e.g. consumables, paper, sheets, drugs, dressings, food, etc) and equipment (e.g. beds, CT scanners, instruments, waste bins etc). The provision of materials and equipment are flows that require data and people to support and coordinate as well.

So far we have flows of patients, people, data, materials and equipment and all the flows are interconnected. This is getting complicated!

Anything else?

The work has to be done in a suitable environment so the buildings and estate need to be provided. This may not seem like a flow but it is – it just has a longer time scale and is more jerky than the other flows – planning-building-using a new hospital has a time span of decades.

Are we finished yet? Is anything needed to support these flows?

Yes – the flow that links them all is money. Money flowing in is called revenue and investment; money flowing out is called costs and dividends; and so long as revenue equals or exceeds costs over the long term the system can function. Money is like energy – work only happens when it is flowing – and if the money doesn’t flow to the right part at the right time and in the right amount then the performance of the whole system can suffer – because all the parts and flows are interdependent.

So, we have Seven Flows – Patients, People, Data, Materials, Equipment, Estate and Money – and when considering any process or system improvement we must remain mindful of all Seven because they are interdependent.

And that is a challenge for us because our caveman brains are not designed to solve seven-dimensional time-dependent problems! We are OK with one dimension, struggle with two, really struggle with three and that is about it. We have to face the reality that we cannot do this in our heads – we need assistance – we need tools to help us handle the Seven Flows simultaneously.

Fortunately these tools exist – so we just need to learn how to use them – and that is what Improvement Science is all about.

JIT, WIP, LIP and PIP

It is a fantastic feeling when a piece of the jigsaw falls into place and suddenly an important part of the bigger picture emerges. Feelings of confusion, anxiety and threat dissipate and are replaced by a sense of insight, calm and opportunity.

Improvement Science is about 80% subjective and 20% objective: more cultural than technical – but the technical parts are necessary. Processes obey the Laws of Physics – and unlike the Laws of People these are not open to appeal or repeal. So when an essential piece of process physics is missing the picture is incomplete and confusion reigns.

One piece of the process physics jigsaw is JIT (Just-In-Time) and process improvement zealots rant on about JIT as if it were some sort of Holy Grail of Improvement Science.  JIT means what you need arrives just when you need it – which implies that there is no waiting of it-for-you or you-for-it.  JIT is an important output of an improved process; it is not an input!

The danger of confusing output with input is that we may then try to use delivery time as a management metric rather than a performance metric – and if we do that we get ourselves into a lot of trouble.  Delivery time targets are often set and enforced and to a large extent fail to achieve their intention because of this confusion.

To understand how to achieve JIT requires more pieces of the process physics jigsaw. The piece that goes next to JIT is labelled WIP (Work In Progress) which is the number of jobs that are somewhere between starting and finishing.  JIT is achieved when WIP is low enough to provide the process with just the right amount of resilience to absorb inevitable variation; and WIP is a more useful management metric than JIT for many reasons (which for brevity I will not explain here).  Monitoring WIP enables a process manager to become more proactive because changes in WIP can signal a future problem with JIT – giving enough warning to do something.

However, although JIT and WIP are necessary they are not sufficient – we need a third piece of the jigsaw to allow us to design our process to deliver the JIT performance we want.  This third piece is called LIP (Load-In-Progress) and is the parameter needed to plan and schedule the right capacity at the right place and the right time to achieve the required WIP and JIT.  Together these three pieces provide the stepping stones on the path to Productivity Improvement Planning (PIP) which is the combination of QI (Quality Improvement) and CI (Cost Improvement).

So if we want our PIP then we need to know our LIP and WIP to get the JIT.  Reddit? Geddit?         
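An aside for the technically curious: the standard piece of process physics that links WIP to delivery time is Little’s Law (average WIP = average throughput × average lead time). It is not named above, but it is consistent with the argument, and a few lines of Python (with made-up numbers) show why watching WIP gives early warning of a JIT problem:

```python
# Illustrative sketch using Little's Law: WIP = throughput x lead time,
# so expected lead time = WIP / throughput.  All numbers are hypothetical.

def expected_lead_time(wip, throughput_per_day):
    """Average lead time (days) implied by current WIP and flow rate."""
    return wip / throughput_per_day

throughput = 10.0        # jobs completed per day (assumed steady)
target_lead_time = 2.0   # days - our JIT promise to the customer

for wip in [15, 20, 25, 30]:
    lead = expected_lead_time(wip, throughput)
    flag = "OK" if lead <= target_lead_time else "WARNING: JIT at risk"
    print(f"WIP={wip:3d} -> expected lead time {lead:.1f} days  [{flag}]")
```

WIP creeping from 20 to 25 predicts the delivery time breach before any customer has actually waited too long – which is exactly why WIP is the more useful management metric.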

Inborn Errors of Management

There is a group of diseases called “inborn errors of metabolism” which are caused by a faulty or missing piece of DNA – the blueprint of life that we inherit from our parents. DNA is the chemical memory that stores the string of instructions for how to build every living organism – humans included. If just one DNA instruction becomes damaged or missing then we may lose the ability to make or to remove one specific chemical – and that can lead to a deficiency or an excess of other chemicals – which can then lead to dysfunction – which can then make us feel unwell – and can then limit both our quality and quantity of life.  We are a biological system of interdependent parts. If an inborn error of metabolism is lethal it will not be passed on to our offspring because we don’t live long enough – so the ones we see are the ones which are not lethal.  We treat the symptoms of an inborn error of metabolism by artificially replacing the missing chemical – but the way to treat the cause is to repair, replace or remove the faulty DNA.

The same metaphor can be applied to any social system. It too has a form of DNA which is called culture – the inherited set of knowledge, beliefs, attitudes and behaviours that the organisation uses to conduct itself in its day-to-day business of survival. These patterns of behaviour are called memes – the social equivalent to genes – and are passed on from generation to generation through language – body language and symbolic language; spoken words – stories, legends, myths, songs, poems and books – the cultural collective memory of the human bio-psycho-social system. All human organisations share a large number of common memes – just as we share a large number of common genes with other animals and plants and even bacteria. Despite this much larger common cultural heritage – it is the differences rather than the similarities that we notice – and it is these differences that spawn the cultural conflict that we observe at all levels of society.

If, by chance alone, an organisation inherits a depleted set of memes it will appear different to all the others and it will tend to defend that difference rather than to change it. If an organisation has a meme defect, a cultural mutation that affects a management process, then we have the organisational condition called an Inborn Error of Management – and so long as the mutation is not lethal to the organisation it will tend to persist and be passed largely unnoticed from one generation of managers to the next!

The NHS was born in 1948 without a professional management arm, and while it survived and grew initially, it became gradually apparent that the omission of the professional management limb was a problem; so in the 1980’s, following the Griffiths Report, a large dose of professional management was grafted on and a dose of new management memes was injected. These included finance, legal and human resource management memes but one important meme was accidentally omitted – process engineering – the ability to design a process to meet a specific quality, time and cost specification.  This omission was not noticed initially because the rapid development of new medical technologies and new treatments was delivering improvements that obscured the inborn error of management. The NHS became the envy of many other countries – high quality healthcare available to all and free at the point of delivery.  Population longevity improved, public expectation increased, demand for healthcare increased and inevitably the costs increased.  In the 1990’s the growing pains of the burgeoning NHS led to a call for more funding, quoting other countries as evidence, and at the turn of the New Millennium a ten year plan to pump billions of pounds per year into the NHS was hatched.  Unfortunately, the other healthcare services had inherited the same meme defect – so the NHS grew 40% bigger but no better – and the evidence is now accumulating that productivity (the ratio of output quality to input cost) has actually fallen by more than 10% – there are more people doing more work but less well.  The UK along with many other countries has hit an economic brick wall and the money being sucked into the NHS cannot increase any more – even though we have created a legacy of an increasing proportion of retired and elderly members of society to support.

The meme defect that the NHS inherited in 1948 and that was not corrected in the transplant operation of the 1980’s is now exerting its influence – the NHS has no capability for process engineering – the theory, techniques, tools and training required to design processes are not on the curriculum of either the NHS managers or the clinicians. The effect of this defect is that we can only treat the symptoms rather than the cause – and we only have blunt and ineffective instruments such as a budget restriction – the management equivalent of a straitjacket – and budget cuts – the management equivalent of a jar of leeches. To illustrate the scale of the effect of this inborn error of management we only need to look at other organisations that do not appear to suffer from the same condition – for example the electronics manufacturing industry. The almost unbelievable increase in the performance, quality and value for money of modern electronics over the last decade (mobile phones, digital cameras, portable music players, laptop computers, etc) is because these industries have invested in developing both their electrical and process engineering capabilities. The Law of the Jungle has weeded out the companies that did not – they have gone out of business or been absorbed – but publicly funded service organisations like the NHS do not have this survival pressure – they are protected from it – and trying to simulate competition with an artificial internal market and applying stick-and-carrot top-down target-driven management is not a like-for-like replacement.

The challenge for the NHS is clear – if we want to continue to enjoy high quality health care, free at the point of delivery, and that we can afford, then we will need to recognise and correct our inborn error of management. If we ignore the symptoms, deny the diagnosis and refuse to take the medicine then we will suffer a painful and lingering decline – not lethal and not enjoyable – and it has a name: purgatory.

The good news is that the treatment is neither expensive, nor unpleasant nor dangerous – process engineering is easy to learn, quick to apply, and delivers results almost immediately – and it can be incorporated into the organisational meme-pool quite quickly by using the see-do-teach vector. All we have to do is to own up to the symptoms, consider the evidence, accept the diagnosis, recognise the challenge and take our medicine. The sooner the better!


Does More Efficient equal More Productive?

It is often assumed that efficiency and productivity are the same thing – and this assumption leads to the conclusion that if we use our resources more efficiently then we will automatically be more productive. This is incorrect. The definition of productivity is the ratio of what we expect to get out divided by what we put in – and the important caveat to remember is that only the output which meets expectation is counted – only output that passes the required quality specification.

This caveat has two important implications:

1. Not all activity contributes to productivity. Failures do not.
2. To measure productivity we must define a quality specification.

Efficiency is how resources are used and is often presented as a metric called utilisation – the ratio of how much time a resource was used to how much time a resource was available.  So, utilisation includes time spent by resources detecting and correcting avoidable errors.

Increasing utilisation does not always imply increasing productivity: It is possible to become more efficient and less productive by making, checking, detecting and fixing more errors.

For example, if we make more mistakes we will have more output that fails to meet the expected quality, our customers complain and productivity has gone down. Our standard reaction to this situation is to put pressure on ourselves to do more checking and to correct the errors we find – which implies that our utilisation has gone up but our productivity has remained down: we are doing more work to achieve the same outcome.

However, if we remove the cause of the mistakes then more output will meet the quality specification and productivity will go up (better outcome with same resources); and we also have less re-work to do so utilisation goes down which means productivity goes up even further (remember: productivity = success out divided by effort in). Fixing the root cause of errors delivers a double-productivity-improvement.
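To see the arithmetic, here is a minimal sketch with made-up numbers (the job counts and hours are purely illustrative) that computes both metrics for the three situations described above:

```python
# Illustrative sketch: productivity counts only output that meets the
# quality specification; utilisation counts all the time the resource was busy.

def metrics(jobs_passing_spec, hours_worked, hours_available):
    productivity = jobs_passing_spec / hours_worked    # success out / effort in
    utilisation  = hours_worked / hours_available      # busy time / available time
    return productivity, utilisation

# Baseline: 90 of 100 jobs pass the spec, 80 of 100 available hours used.
print(metrics(90, 80, 100))   # productivity 1.125, utilisation 0.80

# "More checking and re-work": still only 90 good jobs, but 95 hours consumed.
# Utilisation is up (we look busier) while productivity has fallen.
print(metrics(90, 95, 100))   # productivity ~0.95, utilisation 0.95

# Root cause removed: 99 jobs pass first time and the re-work vanishes.
# Productivity rises twice over: more good output AND fewer hours consumed.
print(metrics(99, 70, 100))   # productivity ~1.41, utilisation 0.70
```

The middle case is the trap: the team is more “efficient” (busier) and less productive at the same time.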

In the UK we have become a victim of our own success – we have a population that is living longer (hurray) and that will present a greater demand for medical care in the future – however the resources that are available to provide healthcare cannot increase at the same pace (boo) – so we have a problem looming that is not going to go away just by ignoring it. Our healthcare system needs to become more productive. It needs to deliver more care with the same cash – and that implies three requirements:
1. We need to specify our expectation of required quality.
2. We need to measure productivity so that we can measure improvement over time.
3. We need to diagnose the root-causes of errors rather than just treat their effects.

Improved productivity requires improved quality and lower costs – which is good because we want both!