The Challenge of Wicked Problems

“Wicked problem” is a phrase used to describe a problem that is difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often not recognised.
The term ‘wicked’ is used, not in the sense of evil, but rather in the sense that it is resistant to resolution.
The complex inter-dependencies imply that an effort to solve one aspect of a wicked problem may reveal or create other problems.

System-level improvement is a very common example of a wicked problem, so an Improvement Scientist needs to be able to sort the wicked problems from the tame ones.

Tame problems can be solved using well known and understood methods and the solution is either right or wrong. For example – working out how much resource capacity is needed to deliver a defined demand is a tame problem.  Designing a booking schedule to avoid excessive waiting is a tame problem.  The fact that many people do not know how to solve these tame problems does not make them wicked ones.  Ignorance is not the same as intransigence.

Wicked problems do not have right or wrong solutions – they have better or worse outcomes.  Wicked problems cannot be precisely defined, dissected, analysed and solved. They are messy. They are more than complicated – they are complex.  A mechanical clock is a complicated mechanism but designing, building, operating and even repairing a clock is a tame problem not a wicked one.

So how can we tell a wicked problem from a tame one?

If a problem has been solved and there is a known and repeatable solution then it is, by definition, a tame problem.  If a problem has never been solved then it might be tame – and the only way to find out is to try solving it.
The barrier we then discover is that each of us gets stuck in the mud of our habitual, unconscious assumptions. Experience teaches us that just taking a different perspective can be enough to create the breakthrough insight – the “Ah ha!” moment. Seeking other perspectives and opinions is an effective strategy when stuck.

So, if two-heads-are-better-than-one then many heads must be even better! Do we need a committee to solve wicked problems?
Experience teaches us that when we try it, it often does not work!
The different perspectives also come with different needs, different assumptions, and different agendas and we end up with a different wicked problem. The committee is rendered ineffective and inefficient by rhetorical discussion and argument.

This is where a very useful Improvement Science technique comes in handy. It is called Argument Free Problem Solving (AFPS) and it was intentionally designed to facilitate groups working on complex problems.

The trick to AFPS is to understand what generates the arguments and to design these causes out of the problem solving process. There are several contributors.

First there is just good old fashioned disrespectful skepticism – otherwise known as cynicism.  The antidote to this poison is to respectfully challenge the disrespectful component of the cynical behaviour – the personal discounting bit.  And it is surprisingly effective!

Second there is the well known principle that different people approach life and problems in different ways.  Some call this temperament and others call it personality. Whatever the label, knowing our preferred style and how different styles can conflict is useful because it leads to mutual respect for our different gifts.  One tried and tested method is Jungian Typology which comes in various brands such as the MBTI® (Myers Briggs Type Indicator).

Third there is the deepening understanding of how the 1.3 kg of caveman wetware between our ears actually works.  The ongoing advances in neuroscience are revealing fascinating insights into how “irrational” we really are and how easy it is to fool the intuition. Stage magicians and hypnotists make a living out of this inherent “weakness”. One of the lessons from neuroscience is that we find it easier to communicate when we are all in the same mental state – even if we have different temperaments.  It is called cognitive  resonance.  Being on the same wavelength.  Arguments arise when different people are in conflicting mental states – cognitive dissonance.

So an effective problem solving team is more akin to a flock of birds or a shoal of fish – that can change direction quickly and as one – without a committee, without an argument, and without creating chaos.  For birds and fish it is an effective survival strategy because it confounds the predators. The ones that do not join in … get eaten!

When a group are able to change perspective together and still stay focused on the problem then the tame ones get resolved and the wicked ones start to be dissolved.
And that is all we can expect for wicked problems.

The AFPS method can be learned quickly – and experience shows that just one demonstration is usually enough to convince the participants when a team is hopelessly entangled in a wicked-looking problem!

Are-Eee-Ess-Pee-Eee-See-Tee

The phrase that sums up the attitude and behaviour of an effective Improvement Scientist is respectful challenge. The challenge part is the easier to appreciate because to improve we have to change something which implies that we have to challenge the current reality in some way. The respect part is a bit trickier.

One dictionary definition is: Respect gives a positive feeling of esteem for a person or entity. The opposite of respect is contempt.

This definition gets us started because it points to what happens inside our heads – feeling respected is a good feeling; feeling disrespected is a bad one. Improvement only happens and is sustained when it is strongly associated with good feelings. That is how the caveman wetware between our ears works. So respect is a fundamental component of improvement.

The animation illustrates several aspects of respect. One is the handshake. It is one of those rituals that on the surface seems illogical and superfluous but it has deep social and psychological importance. I once read that it comes from the time when men carried swords and the handshake signifies “I am not holding my sword”. The handshake is an expression of extending mutual trust using a clear visual signal – it is a mark of mutual respect.  The other aspect is signified by the neckties. Again an illogical and superfluous garment except that it too broadcasts a signal – the message “I have prepared for this meeting by taking care to be clean and tidy because it is important”. This too has great social significance – in the past the biggest killer was not swords but something much smaller and more dangerous. Germs. People knew that disease and dirt were associated and that meant a dirty person was a dangerous one. Cleaning up was much more difficult in the days before piped water, baths, showers, washing machines and soap – so to put effort into getting clean and tidy was a mark of great respect. It still is.

So if we want to challenge and influence improvement then we must establish respect first. And that means we have to behave in a respectful manner. And that means we have to think in a respectful way. And that means we have to consciously not behave in an unintended disrespectful manner. Our learned rituals, such as a smile, a handshake and a hello, help us to do that automatically. Unfortunately it is more often what we do not do that is the most disrespectful behaviour.  And we all fall into these traps.

Unintended outcomes that result from what we do not do are called Errors of Omission (EOO) – and they are tricky to spot because there is no tangible evidence of them. The evidence of the error is intangible – a bad feeling.

For example, not acknowledging someone is an EOO. This is very obvious in social situations and it presses one of our Three Fears buttons – the Fear of Rejection.  It is very easy to broadcast to a whole roomful of people that you do not respect someone just by obviously ignoring them.  And the higher up the social pecking order you are the greater the impact – for two reasons: first, because followers unconsciously copy the behaviour of the leader; and second, because it broadcasts the message that disrespectful behaviour is OK.

Contempt is toxic to a collaborative culture and blocks significant, sustained improvement.

In the modern world we have so many more ways that we can communicate and therefore many more opportunities for communication EOOs. The most fertile ground for EOOs is probably email.  It is so much easier to be disrespectful to a lot of people in a short period of time by email than just about any other medium. Just failing to acknowledge an email question or request is enough.  Failing to put in the email-equivalent of a handshake of Dear <yourname> …. message …. Regards <myname>  is similar.

Omitting to communicate last minute changes in a plan is an effective way to upset people too!

And perhaps the most effective is firing a grapeshot email in the hope that one will hit the intended target. These two examples highlight a different form of disrespect: discounting someone else’s time – or more specifically their lifetime.

When we waste our time we waste a bit of our life – and we deny ourselves the opportunity to invest that finite and precious lifetime doing something more enjoyable. Time is not money. Money can be saved for later – time cannot. When we waste an hour of our lives we waste it forever.  If we do that to ourselves we are showing lack of self-respect and that is our choice – when we do it to others we create a pervasive and toxic cultural swamp.

One of the first steps in the process of improvement is to engage and listen and one tool for this is The 4N Chart® – which is an emotional mapping technique. Niggles are the Negative Emotions in the Present together with their Be-Causes. The three commonest niggles that people consistently report are car parking, emails and meetings.  All three involve lifetime wasting activities. The cumulative effect is frustration and erosion of trust which drives further disrespectful behaviour. The end result is a vicious self-sustaining toxic cycle of habitual disrespect.

An effective tactic here is first to hold up the mirror and reflect back what is happening … that is respectful challenge.

The next step is to improve the processes that are linked to car parking, emails and meetings so that they are more effective and more efficient. And that means actively designing them to be more productive – by actively designing out the lifetime wasting parts.

Iconoclasts and Iconoblasts

The human body is an amazing self-repairing system. It does this by being able to detect damage and to repair just the damaged part while still continuing to function. One visible example of this is how it repairs a broken bone. The skeleton is the hard, jointed framework that protects and supports the soft bits. Some of the soft bits, the muscles, both stabilise and move this framework of bones. Together they form the musculoskeletal system that gives us the power to move ourselves.  So when, by accident, we break a bone how do we repair the damage?  The secret is in the microscopic structure of the bone. Bone is not like concrete, solid and inert, it is a living tissue. Two of the microscopic cells that live in the bone are the osteoclasts and the osteoblasts (osteo- is Greek for “bone”; -clast is Greek for “break” and -blast is Greek for “germ” in the sense of something that grows).  Osteoclasts dissolve the old bone and osteoblasts deposit new bone – so when they work together they can create bone, remodel bone, and repair bone. It is humbling when we consider that millions of microscopic cells are able to coordinate this continuous, dynamic, adaptive, reparative behaviour with no central command-and-control system, no decision makers, no designers, no blue-prints, no project managers. How is this biological miracle achieved? We are not sure – but we know that there must be a process.

Organisations are systems that face a similar challenge. They have relatively rigid operational and cultural structures of roles, responsibilities, lines of accountability, rules, regulations, values, beliefs, attitudes and behaviours.  These formal and informal structures are the conceptual “bones” of the organisation – the structure that enables the organisation to function.  Organisations also need to grow and to develop – which means that their virtual bones need to be remodelled continuously. Occasionally organisations have accidents – and their bones break – and sometimes the breaks are deliberate: it is called “re-structuring”.

There are people within organisations that have the same role as the osteoclast in the body. These people are called iconoclasts and what they do is dissolve dogma. They break up the rigid rules and regulations that create the corporate equivalent of concrete – but they are selective. Iconoclasts are sensitive to stress and to strain and they only dissolve the cultural concrete where it is getting in the way of improvement. That is where dogma is blocking innovation.  Iconoclasts question the status quo, and at the same time explain how it is causing a problem, offer alternatives, and predict the benefits of the innovation. Iconoclasts are not skeptics or cynics – they prepare the ground for change – they are facilitators.

There is a second group of people who we could call the iconoblasts. They are the ones who create the new rules, the new designs, the new recipes, the new processes, the new operating standards – and they work alongside the iconoclasts to ensure the structure remains strong and stable as it evolves. The iconoblasts are called Improvement Scientists.

Improvement Scientists are like builders – they use the raw materials of ideas, experience, knowledge, understanding, creativity and enthusiasm and assemble them into new organisational structures.  In doing so they fully accept that one day these structures will in turn be dismantled and rebuilt. That is the way of improvement.  The dogma is relative and temporary rather than absolute and permanent. And the faster the structures can be disassembled and reassembled the more agile the organisation becomes and the more able it is to survive change.

So how are the iconoclasts and iconoblasts coordinated? Can they also work effectively and efficiently without a command-and-control system? If millions of microscopic cells in our bones can achieve it then maybe the individuals within organisations can do it too. We just need to understand what makes an iconoclast and an iconoblast an effective partnership and an essential part of an organisation.

Productivity Improvement Science

Very often there is a requirement to improve the productivity of a process and operational managers are usually measured and rewarded for how well they do that. Their primary focus is neither safety nor quality – it is productivity – because that is their job.

For-profit organisations see improved productivity as a path to increased profit. Not-for-profit organisations see improved productivity as a path to being able to grow through re-investment of savings.  The goal may be different but the path is the same – productivity improvement.

First we need to define what we mean by productivity: it is the ratio of a system output to a system input. There are many input and output metrics to choose from and a convenient one to use is the ratio of revenue to expenses for a defined period of time.  Any change that increases this ratio represents an improvement in productivity on this purely financial dimension and we know that this financial data is measured. We just need to look at the bank statement.

There are two ways to approach productivity improvement: by considering the forces that help productivity and the forces that hinder it. This force-field metaphor was described by the psychologist Kurt Lewin (1890-1947) and has been developed and applied extensively and successfully in many organisations and many scenarios in the context of change management.

Improvement results from either strengthening helpers or weakening hinderers or both – and experience shows that it is often quicker and easier to focus attention on the hinderers because that leads to both more improvement and to less stress in the system. Usually it is just a matter of alignment. Two strong forces in opposition result in high stress and low motion; in alignment they create low stress and high acceleration.

So what hinders productivity?

Well, anything that reduces or delays workflow will reduce or delay revenue and therefore hinder productivity. Anything that increases resource requirement will increase cost and therefore hinder productivity. So looking for something that causes both and either removing or realigning it will have a Win-Win impact on productivity!

A common factor that reduces and delays workflow is the design of the process – in particular a design that has a lot of sequential steps performed by different people in different departments. The handoffs between the steps are a rich source of time-traps and bottlenecks and these both delay and limit the flow.  A common factor that increases resource requirement is making mistakes because errors generate extra work – to detect and to correct.  And there is a link between fragmentation and errors: in a multi-step process there are more opportunities for errors – particularly at the handoffs between steps.

So the most useful way to improve the productivity of a process is to simplify it by combining several, small, separate steps into single large ones.

A good example of this can be found in healthcare – and specifically in the outpatient department.

Traditionally visits to outpatients are defined as “new” – which implies the first visit for a particular problem – and “review” which implies the second and subsequent visits.  The first phase is the diagnostic work and this often requires special tests or investigations to be performed (such as blood tests, imaging, etc) which are usually done by different departments using specialised equipment and skills. The design of departmental work schedules requires a patient to visit on a separate occasion to a different department for each test. Each of these separate visits incurs a delay and a risk of a number of errors – the commonest of which is a failure to attend for the test on the appointed day and time. Such did-not-attend or DNA rates are surprisingly high – and values of 10% are typical in the NHS.

The cumulative productivity hindering effect of this multi-visit diagnostic process design is large.  Suppose there are three steps: New-Test-Review and each step has a 10% DNA rate and a 4-week wait. The quickest that a patient could complete the process is 12 weeks and the chance of getting through right first time (the yield) is about 90% x 90% x 90% ≈ 73% which implies that 27% extra resource is needed to correct the failures.  Most attempts to improve productivity focus on forcing down the DNA rate – usually with limited success. A more effective approach is to redesign the process by combining the three New-Test-Review steps into one visit.  Exactly the same resources are needed to do the work as before but now the minimum time would be 4 weeks, the right-first-time yield would increase to 90% and the extra resources required to manage the two handoffs, the two queues, and the two sources of DNAs would be unnecessary.  The result is a significant improvement in productivity at no cost.  It is also an improvement in the quality of the patient experience but that is an unintended bonus.
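
The arithmetic is easy to check. Here is a minimal sketch in Python using the illustrative figures above – a 10% DNA rate and a 4-week wait per separate visit – purely to make the comparison explicit; the numbers are examples, not measurements.

```python
# Compare the three-visit New-Test-Review pathway with a one-stop design,
# using the illustrative figures from the text: a 10% DNA rate and a
# 4-week wait for each separate visit.

def pathway(visits, dna_rate=0.10, wait_weeks=4):
    """Return (minimum delivery time in weeks, right-first-time yield)."""
    min_time = visits * wait_weeks           # one wait for each separate visit
    rft_yield = (1 - dna_rate) ** visits     # every visit attended first time
    return min_time, rft_yield

for visits, label in [(3, "three separate visits"), (1, "one-stop visit")]:
    weeks, y = pathway(visits)
    extra = 1 - y                            # rough extra resource to correct the failures
    print(f"{label}: minimum time {weeks} weeks, yield {y:.0%}, extra resource ~{extra:.0%}")
```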

So if the solution is that obvious and that beneficial then why are we not doing this everywhere? The answer is that we do in some areas – in particular where quality and urgency is important such as fast-track one-stop clinics for suspected cancer. However – we are not doing it as widely as we could and one reason for that is a hidden hinderer: the way that the productivity is estimated in the business case and measured in the day-to-day business.

Typically process productivity is estimated using the calculated unit price of the product or service. The unit price is arrived at by adding up the unit costs of the steps and adding an allocation of the overhead costs (how overhead is allocated is subject to a lot of heated debate by accountants!). The unit price is then multiplied by expected activity to get expected revenue and divided by the total cost (or budget) to get the productivity measure.  This approach is widely taught and used and is certainly better than guessing but it has a number of drawbacks. Firstly, it does not take into account the effects of the handoffs and the queues between the steps and secondly it drives step-optimisation behaviour. A departmental operational manager who is responsible and accountable for one step in the process will focus their attention on driving down costs and pushing up utilisation of their step because that is what they are performance managed on. This in itself is not wrong – but it can become counter-productive when it is done in isolation and independently of the other steps in the process.  Unfortunately our traditional management accounting methods do not prevent this unintentional productivity hindering behaviour – and very often they actually promote it – literally!
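
To see what this conventional calculation does and does not capture, here is a small hypothetical sketch; every number in it is made up for illustration. The point to notice is that nothing in the formula represents the handoffs, the queues or the DNAs between the steps.

```python
# The conventional unit-price productivity estimate (hypothetical numbers).
# Note what is missing: there is no term for the handoffs, the queues or
# the DNAs, so the costs and delays they generate are invisible here.

step_unit_costs = [30.0, 50.0, 40.0]   # assumed cost per item at each step
overhead_per_item = 20.0               # assumed overhead allocation per item
expected_activity = 1_000              # assumed items per year
total_budget = 150_000.0               # assumed total cost for the period

unit_price = sum(step_unit_costs) + overhead_per_item
expected_revenue = unit_price * expected_activity
estimated_productivity = expected_revenue / total_budget

print(f"unit price: {unit_price:.2f}")
print(f"expected revenue: {expected_revenue:.2f}")
print(f"estimated productivity: {estimated_productivity:.2f}")
```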

This insight is not new – it has been recognised by some for a long time – so we might ask ourselves why this is still the case. This is a very good question that opens another “can of worms” which for the sake of brevity will be deferred to a later conversation.

So, when applying Improvement Science in the domain of financial productivity improvement then the design of both the process and of the productivity modelling-and-monitoring method may need addressing at the same time.  Unfortunately this does not seem to be common knowledge and this insight may explain why productivity improvements do not happen more often – especially in publicly funded not-for-profit service organisations such as the NHS.

Pruning the Niggle Tree

Sometimes our daily existence feels like a perpetual struggle between two opposing forces: the positive force of innovation, learning, progress and success; and the opposing force of cynicism, complacency, stagnation and failure.  Often the balance-of-opposing-forces is so close that even small differences of opinion can derail us – especially if they are persistent. And we want to stay on course to improvement.

Niggles are the irritating things that happen every day. Day after day. Niggles are persistent. So when we are in our “yin-yang” equilibrium and “balanced on the edge” then just one extra niggle can push us off our emotional tight-rope. And we know it. The final straw!

So to keep ourselves on track to success we need to “nail” niggles.  But which ones? There seem to be so many! Where do we start?

If we recorded just one day and from that we listed all the positive things that happened on green PostIt® notes and all the negatives things on red ones – then we would be left with a random-looking pile of red and green notes. Good days would have more green, and bad days would have more red – and all days would have both. And that is just the way it is. Yes? But are they actually random? Is there a deeper connection?

Experience teaches us that when we Investigate-a-Niggle we find it is connected to other niggles. The “cannot find a parking place” niggle is because of the “car park is full” niggle which also causes the “someone arrived late for my important meeting” niggle. The red leaf is attached to a red twig which in turn sprouts other red leaves. The red leaves connect to other red leaves; not to green ones.

If we tug on a green leaf – a Nugget – we find that it too is connected to other nuggets. The “congratulations on a job well done” nugget is connected to the “feedback is important” nugget from which sprouts the “opportunities for learning” nugget. Our green leaf is attached, indirectly, to many other green leaves; not to red ones.

It seems that our red leaves (niggles) and our green leaves (nuggets) are connected – but not directly to each other. It is as if we have two separate but tightly intertwined plants competing with each other for space and light. So if we want a tree that is more green than red and if we want to progress steadily in the direction of sustained improvement – then we need to prune the niggle tree (red leaves) and leave the nugget tree (green leaves) unscathed.

The problem is that if we just cut off one or two red leaves new ones sprout quickly from the red twigs to replace them. We quickly learn that this approach is futile. We suspect that if we were able to cut all the red leaves off at once then the niggle tree might shrivel and die – but that looks impossible. We need to be creative and we need to search deeper. With the knowledge that the red leaves are part of one tree, we can remove multiple red leaves in one snip by working our way back from the leaves, up the red twigs and to the red branches. If we prune far enough back then we can expect a large number of interconnected red leaves to wither and fall off – leaving the healthy green leaves more space and more light to grow on that part of the tree.

Improvement Science is about pruning the Niggle tree to make space for the Nugget tree to grow. It is about creating an environment for the Green shoots of innovation to sprout.  Most resistance comes from those who feed on the Red leaves – the Cynics – and if we remove enough red branches then they will go hungry. And now the Cynics have a choice: learn to taste and appreciate the Green leaves or “find another tree”.

We want a Greener tree – with fewer poisonous Red leaves on it.

Negotiate, Negotiate, Negotiate.

One of the most important skills that an Improvement Scientist needs is the ability to negotiate.  We are all familiar with one form of negotiation which is called distributive negotiation which is where the parties carve up the pie in a low trust compromise. That is not the form we need – what we need is called integrative negotiation. The goal of integrative negotiation is to join several parts into a greater whole and it implies a higher level of trust and a greater degree of collaboration.

Organisations of more than about 90 people are usually split into departments – and for good reasons. The complex organisation requires specialist aptitudes, skills, and know-how and it is easier to group people together who share the specialist skills needed to deliver that service to the organisation – such as financial services in the accounts department.  The problem is that this division also creates barriers and as the organisation increases in size these barriers have a cumulative effect that can severely limit the capability of the organisation.  The mantra that is often associated with this problem is “communication, communication, communication” … which is too non-specific and therefore usually ineffective.

The products and services that an organisation is designed to deliver are rarely the output of one department – so the parts need to align and to integrate to create an effective and efficient delivery system. This requires more than just communication – it requires integrative negotiation – and it is not a natural skill or one that is easy to develop. It requires investment of effort and time.

To facilitate the process we need to provide three things: a common goal, a common language and a common ground.  The common goal is what all parts of the system are aligned to; the common language is how the dialog is communicated; and the common ground is our launch pad.

Integrative negotiation starts with finding the common ground – the areas of agreement. Very often these are taken for granted because we are psychologically tuned to notice differences rather than similarities. We have to make the “assumed” and “obvious” explicit before we turn our attention on our differences.

Integrative negotiation proceeds with defining the common niggles and nice-ifs that could be resolved by a single change; the win-win-win opportunities.

Integrative negotiation concludes with identifying changes that are wholly within the circle of influence of the parties involved – the changes that they have the power to make individually and collectively.

After negotiation comes decision and after decision comes action and that is when improvement happens.

The Nerve Curve

The Nerve Curve is the emotional roller-coaster ride that everyone who engages in Improvement needs to become confident to step onto.

Just like a theme park ride it has ups and downs, twists and turns, surprises and challenges, an element of danger and a splash of excitement.  If it did not have all of those components then it would not be fun and there would not be queues of people wanting to ride, again and again.  And the reason that theme parks are so successful is because their rides have been very carefully designed – to be challenging, exciting, fun and safe – all at the same time.

So, when we challenge others to step aboard our Improvement Nerve Curve then we need to ensure that our ride is safe – and to do that we need to understand where the emotional dangers lurk, to actively point them out and then avoid them.

A big danger hides right at the start.  To get aboard the Nerve Curve we have to ask questions that expose the Elephant-in-the-Room issues.  Everyone knows they are there – but no one wants to talk about them.   The biggest one is called Distrust – which is wrapped up in all sorts of different ways and inside the nut is the  Kernel of Cynicism.  The inexperienced improvement facilitator may blunder straight into this trap just by using one small word … the word “Why”?  Arrrrrgh!  Kaboom!  Splat!  Game Over.

The “Why” question is like throwing a match into a barrel of emotional gunpowder – because it is interpreted as “What is your purpose?” and in a low-trust climate no one will want to reveal what their real purpose or intention is.  They have learned from experience to keep their cards close to their chest – it is safer to keep agendas hidden.

A much safer question is “What?”  What are the facts?  What are the effects? What are the causes? What works well? What does not? What do we want? What don’t we want? What are the constraints? What are our change options? What would each deliver? What are everyone’s views?  What is our decision?  What is our first action? What is the deadline?

Sticking to the “What” question helps to avoid everyone diving for the Political Panic Button and pulling the Emotional Emergency Brake before we have even got started.

The first part of the ride is the “Awful Reality Slope” that swoops us down into “Painful Awareness Canyon” which is the emotional low-point of the ride.  This is where the elephants-in-the-room roam for all to see and where passengers realise that, once the issues are in plain view, there is no way back.

The next danger is at the far end of the Canyon and is called the Black Chasm of Ignorance and the roller-coaster track goes right to the edge of it.  Arrrgh – we are going over the edge of the cliff – quick grab the Wilful Blindness Goggles and Denial Bag from under the seat, apply the Blunder Onwards Blind Fold and the Hope-for-the-Best Smoke Hood.

So, before our carriage reaches the Black Chasm we need to switch on the headlights to reveal the Bridge of How:  The structure and sequence that spans the chasm and that is copiously illuminated with stories from those who have gone before.  The first part is steep though and the climb is hard work.  Our carriage clanks and groans and it seems to take forever but at the top we are rewarded by a New Perspective and the exhilarating ride down into the Plateau of Understanding where we stop to reflect and to celebrate our success.

Here we disembark and discover the Forest of Opportunity which conceals many more Nerve Curves going off in all directions – rides that we can board when we feel ready for a new challenge.  There is danger lurking here too though – hidden in the Forest is Complacency Swamp – which looks innocent except that the Bridge of How is hidden from view.   Here we can get lured by the pungent perfume of Power and the addictive aroma of Arrogance and we can become too comfortable in the Zone.   As we snooze in the Hammock of Calm we do not notice that the world around us is changing.  In reality we are slipping backwards into Blissful Ignorance and we do not notice – until we suddenly find ourselves in an unfamiliar Canyon of Painful Awareness.  Ouch!

Being forewarned is our best defence.  So, while we are encouraged to explore the Forest of Opportunity, we learn that we must also return regularly to the Plateau of Understanding to don the Habit of Humility.  We must regularly refresh ourselves from the Fountain of New Knowledge by showing others what we have learned and learning from them in return.  And when we start to crave more excitement we can board another Nerve Curve to a new Plateau of Understanding.

The Safety Harness of our Improvement journey is called See-Do-Teach and the most important part is Teach.  Our educators need to have more than just a knowledge of how-to-do; they also need to have enough understanding to be able to explore the why-to-do. The Quest for Purpose.

To convince others to get onboard the Nerve Curve we must be able to explain why the Issues still exist and why the current methods are not sufficient.  Those who have been on the ride are the only ones who are credible because they understand.  They have learned by doing.

And that understanding grows with practice and it grows more quickly when we take on the challenge of learning how to explore purpose and explain why.  This is Nerve Curve II.

All aboard for the greatest ride of all.

Knowledge and Understanding

Knowledge is not the same as Understanding.

We all know that the sun rises in the East and sets in the West; most of us know that the oceans have a twice-a-day tidal cycle and some of us know that these tides also have a monthly cycle that is associated with the phase of the moon. We know all of this just from taking notice; remembering what we see; and being able to recognise the patterns. We use this knowledge to make reliable predictions of the future times and heights of the tides; and we can do all of this without any understanding of how tides are caused.

Our lack of understanding means that we can only describe what has happened. We cannot explain how it happened. We cannot extract meaning – the why it happened.

People have observed and described the movements of the sun, sea, moon, and stars for millennia and a few could even predict them with surprising accuracy – but it was not until the 17th century that we began to understand what caused the tides. Isaac Newton developed enough of an understanding to explain how it worked and he did it using a new concept called gravity and a new tool called calculus.  He then used this understanding to explain a lot of other unexplained things and suddenly the Universe started to make a lot more sense to everyone. Nowadays we teach this knowledge at school and we take it for granted. We assume it is obvious and it is not. We are no smarter now than people in the 17th century – we just have a deeper understanding (of physics).

Understanding enables things that have not been observed or described to be predicted and explained. Understanding is necessary if we want to make rational and reliable decisions that will lead to changes for the better in a changing world.

So, how can we test if we only know what to do or if we actually understand what to do?

If we understand then we can demonstrate the application of our knowledge by solving old and new problems effectively and we can explain how we do it.  If we do not understand then we may still be able to apply our knowledge to old problems but we do not solve new problems effectively or efficiently and we are not able to explain why.

But we do not want to risk making a mistake in order to test if we have an understanding-gap, so how can we find out? What we look for is the tell-tale sign of an excess of knowledge and a dearth of understanding – and it has a name – it is called “bureaucracy”.

Suppose we have a system where the decisions-makers do not make effective decisions when faced with new challenges – which means that their decisions lead to unintended adverse outcomes. It does not take very long for the system to know that the decision process is ineffective – so to protect itself the system reacts by creating bureaucracy – a sort of organisational damage-limitation circle of sand-bags that limit the negative consequences of the poor decisions. A bureaucratic firewall so to speak.

Unfortunately, while bureaucracy is effective it is non-specific, it uses up resources and it slows everything down. Bureaucracy is inefficiency. What we get as a result is a system that costs more and appears to do less and that is resistant to any change – not just poor decisions – it slows down good ones too.

The bureaucratic barrier is important though; doing less bad stuff is actually a reasonable survival strategy – until the cost of the bureaucracy threatens the system’s viability. Then it becomes a liability.

So what happens when a last-saloon-in-town “efficiency” drive is started in desperation and the “bureaucratic red tape” is slashed? The poor decisions that the red tape was ensnaring are free to spread virally and when implemented they create a big-bang unintended adverse consequence! The safety and quality performance of the system drops sharply and that triggers the reflex “we-told-you-so” and rapid re-introduction of the red-tape, plus some extra to prevent it happening again.  The system learns from its experience and concludes that "higher quality always costs more" and "don’t trust our decision-makers" and "the only way to avoid a bad decision is not to make or implement any decisions" and "the safest way to maintain quality is to add extra checks and increase the price". The system then remembers this new knowledge for future reference; the bureaucratic concrete sets hard; and the whole cycle repeats itself. Ad infinitum.

So, with this clearer insight into the value of bureaucracy and its root cause we can now design an alternative system: to develop knowledge into understanding and by that route to improve our capability to make better decisions that lead to predictable, reliable, demonstrable and explainable benefits for everyone. When we do that the non-specific bureaucracy is seen to impede progress so it makes sense to dismantle the bits that block improvement – and keep the bits that block poor decisions and that maintain performance. We now get improved quality and lower costs at the same time, quickly, predictably and without taking big risks, and we can reinvest what we have saved in making further improvements and developing more knowledge, a deeper understanding and wiser decisions. Ad infinitum.

The primary focus of Improvement Science is to expand understanding – our ability to decide what to do, and what not to; where and where not to; and when and when not to – and to be able to explain and to demonstrate the “how” and to some extent the “why”.

One proven method is to See, then to Do, and then to Teach. And when we try that we discover to our surprise that the person whose understanding increases the most is the teacher!  Which is good because the deeper the teacher’s understanding the more flexible, adaptable and open to new learning they become.  Education and bureaucracy are poor partners.

Cause and Effect

“Breaking News: Scientists have discovered that people with yellow teeth are more likely to die of lung cancer. Patient-groups and dentists are now calling for tooth-whitening to be made freely available to everyone.”

Does anything about this statement strike you as illogical? Surely it is obvious. Having yellow teeth does not cause lung cancer – smoking causes both yellow teeth and lung cancer!  Providing a tax-funded tooth-whitening service will be futile – banning smoking is the way to reduce deaths from lung cancer!

What is wrong here? Do we have a problem with mad scientists, misuse of statistics or manipulative journalists? Or all three?

Unfortunately, while we may believe that smoking causes both yellow teeth and lung cancer it is surprisingly difficult to prove it – even when sane scientists use the correct statistics and their results are accurately reported by trustworthy journalists.  It is not easy to prove causality.  So we just assume it.

We all do this many times every day – we infer causality from our experience of interacting with the real world – and it is our innate ability to do that which allows us to say that the opening statement does not feel right.  And we do this effortlessly and unconsciously.

We then use our inferred-causality for three purposes. Firstly, we use it to explain how past actions led to the present situation. The chain of cause-and-effect. Secondly, we use it to create options in the present – our choices of actions. Thirdly, we use it to predict the outcome of our chosen action – we set our expectation and then compare the outcome with our prediction. If outcome is better than we expected then we feel good, if it is worse then we feel bad.

What we are doing naturally and effortlessly is called “causal modelling”. And it is an impressive skill. It is the skill needed to solve problems by designing ways around them.

Unfortunately – the ability to build and use a causal model does not guarantee that our model is a valid, complete or accurate representation of reality. Our model may be imperfect and we may not be aware of it.  This raises two questions: “How could two people end up with different causal models when they are experiencing the same reality?” and “How do we prove if either is correct and if so, which it is?”

The issue here is that no two people can perceive reality exactly the same way – we each have a unique perspective – and it is an inevitable source of variation.

We also tend to assume that what-we-perceive-is-the-truth so if someone expresses a different view of reality then we habitually jump to the conclusion that they are “wrong” and we are “right”.  This unconscious assumption of our own rightness extends to our causal models as well. If someone else believes a different explanation of how we got to where we are, what our choices are and what effect we might expect from a particular action then there is almost endless opportunity for disagreement!

Fortunately our different perceptions agree enough to create common ground which allows us to co-exist reasonably amicably.  But, then we take the common ground for granted, it slips from our awareness, and we then magnify the molehills of disagreement into mountains of discontent.  It is the way our caveman wetware works. It is part of the human condition.

So, if our goal is improvement, then we need to consider a more effective approach: which is to assume that all our causal models are approximate and that they are all works-in-progress. This implies that each of us has two challenges: first to develop a valid causal model by testing it against reality through experimentation; and second to assist the collective development of a common causal model by sharing our individual understanding through explanation and demonstration.

The problem we then encounter is that statistical analysis of historical data cannot answer questions of causality – it is necessary but it is not sufficient – and because it is insufficient it does not make common-sense.  For example, there may well be a statistically significant association between “yellow teeth” and “lung cancer” and “premature death” but knowing those facts is not enough to help us create a valid cause-and-effect model that we then use to make wiser choices of more effective actions that cause us to live longer.
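
A small simulation can make this tangible. The sketch below uses made-up probabilities in which smoking is the hidden common cause of both yellow teeth and lung cancer; the two end up statistically associated even though neither causes the other, which is why the statistics alone cannot tell us what action to take.

```python
# A sketch of the confounding problem in the opening story, with made-up
# probabilities. Smoking is the hidden common cause: it produces both
# yellow teeth and lung cancer, so the two are associated in the data
# even though neither causes the other.
import random

random.seed(1)
population = []
for _ in range(100_000):
    smoker = random.random() < 0.2
    yellow_teeth = random.random() < (0.7 if smoker else 0.1)
    lung_cancer = random.random() < (0.15 if smoker else 0.01)
    population.append((yellow_teeth, lung_cancer))

def cancer_rate(with_yellow_teeth):
    group = [cancer for teeth, cancer in population if teeth == with_yellow_teeth]
    return sum(group) / len(group)

print(f"lung cancer rate, yellow teeth: {cancer_rate(True):.1%}")
print(f"lung cancer rate, white teeth:  {cancer_rate(False):.1%}")
# The rates differ, yet whitening teeth would change nothing, because the
# association is driven entirely by the unmeasured common cause: smoking.
```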

Learning how to make wiser choices that lead to better outcomes is what Improvement Science is all about – and we need more than statistics – we need to learn how to collectively create, test and employ causal models.

And that has another name – it is called common sense.

Resistance to Change

Many people who are passionate about improvement become frustrated when they encounter resistance-to-change.

It does not matter what sort of improvement is desired – safety, delivery, quality, costs, revenue, productivity or all of them.

The natural and intuitive reaction to meeting resistance is to push harder – and our experience of the physical world has taught us that if we apply enough pressure at the right place then resistance will be overcome and we will move forward.

Unfortunately we sometimes discover that we are pushing against an immovable object and even our maximum effort is futile – so we give up and label it as “impossible”.

Much of Improvement Science appears counter-intuitive at first sight and the challenge of resistance is no different.  The counter-intuitive response to feeling resistance is to pull back, and that is exactly what works better. But why does it work better? Isn’t that just giving up and giving in? How can that be better?

To explain the rationale it is necessary to examine the nature of resistance more closely.

Resistance to change is an emotional reaction to an unconsciously perceived threat that is translated into a conscious decision, action and justification: the response. The range of verbal responses is large, as illustrated in the caption, and the range of non-verbal responses is just as large.  Attempting to deflect or defuse all of them is impractical, ineffective and leads to a feeling of frustration and futility.

This negative emotional reaction we call resistance is non-specific because that is how our emotions work – and it is triggered as much by the way the change is presented as by what the change is.

Many change “experts” claim that the better method of “driving” change is selling-versus-telling and recommend learning psycho-manipulation techniques to achieve it – close-the-deal sales training for example. Unfortunately this strategy can create a psychological “arms race” which can escalate just as quickly and lead to the same outcome: an emotional battle and psychological casualties. This outcome is often given the generic label of “stress”.

An alternative approach is to regard resistance behaviour as multi-factorial and one model separates the non-specific resistance response into separate categories: Why Do – Don’t Do – Can’t Do – Won’t Do.

The Why Do response is valuable feedback because it says "we do not understand the purpose of the proposed change" and it is not unusual for proposals to be purposeless. This is sometimes called “meddling”.  This is fear of the unknown.

The Don’t Do  is valuable feedback that is saying “there is a risk with this proposed change – an unintended negative consequence that may be greater than the intended positive outcome“.  Often it is very hard to explain this NoNo reaction because it is the output of an unconscious thought process that operates out of awareness. It just doesn’t feel good. And some people are better at spotting the risks – they prefer to wear the Black Hat – they are called skeptics.  This is fear of failure.

The Can’t Do is also valuable feedback that is saying “we get the purpose and we can see the problem and the benefit of a change – we just cannot see the path that links the two because it is blocked by something.” This reaction is often triggered by an unconscious recognition that some form of collaborative working will be required but the cultural context is low on respect and trust. It can also just be a manifestation of a knowledge, skill or experience gap – the “I don’t know how to do” gap. Some people habitually adopt the Victim role – most are genuine and do not know how.

The Won’t Do response is also valuable feedback that is saying “we can see the purpose, the problem, the benefit, and the path but we won’t do it because we don’t trust you”. This reaction is common in a low-trust culture where manipulation, bullying and game playing is the observed and expected behaviour. The role being adopted here is the Persecutor role – and what is being discounted is care for others. Persecutors lack empathy.

The common theme here is that all resistance-to-change responses represent valuable feedback, and this explains why the better reaction to resistance is to stop talking and start listening: making progress will require using the feedback to diagnose which components of resistance are present. This is necessary because each category requires a different approach.

For example Why Do requires making both the problem and the purpose explicit; Don’t Do requires exploring the fear and bringing to awareness what is fuelling it; Can’t Do requires searching for the skill gaps and filling them; and Won’t Do requires identifying the trust-eroding beliefs, attitudes and behaviours and making it safe to talk about them.

Resistance-to-change is generalised as a threat when in reality it represents an opportunity to learn and to improve – which is what Improvement Science is all about.

The Bucket Brigade Fire Fighting Service

Fire-fighting is a behaviour that has a long history, and before Fireman Sam arrived on the scene we had the Bucket Brigade.  This was a people-intensive process designed to deliver water from the nearest pump, pond or river with as little risk, delay and effort as possible. The principle of a bucket-brigade is that a chain of people forms between the pump and the fire and they pass buckets in two directions – full ones from the pump to the fire and empty ones from the fire back to the pump.

A bucket brigade is a useful metaphor for many processes and an Improvement Science Practitioner (ISP) can learn a lot from exploring its behaviour.

First of all the number of steps in the process or stream is fixed because it is determined by the distance between the pump and the fire. The time it takes for a Bucket Passer to pass a bucket to the next person is predictable  too and it is this cycle-time that determines the rate at which a bucket will move along the line. The fixed step-number and fixed cycle-time implies that the time it takes for a bucket to pass from one end of the line to the other is fixed too. It does not matter if the bucket is empty, half empty or full – the delivery time per bucket is consistent from bucket to bucket. The outflow however is not fixed – it is determined by how full each bucket is when it reaches the end of the line: empty buckets means zero flow, full buckets means maximum flow.

This implies that the process is behaving like a time-trap because the delivery time and the delivery volume (i.e. flow) are independent. Having bigger buckets or fuller buckets makes no difference to the time it takes to traverse the line but it does influence the outflow.
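
A toy model makes this time-trap behaviour explicit. The figures below (ten Bucket Passers, a 5-second pass time, 10-litre buckets) are illustrative assumptions only.

```python
# Toy bucket-brigade model: the delivery time depends only on the number
# of steps and the cycle time; the flow depends only on how full the
# buckets are. The numbers are illustrative assumptions.

def bucket_brigade(steps, cycle_time_s, bucket_litres, fill_fraction):
    delivery_time = steps * cycle_time_s                  # pump to fire, per bucket
    flow = bucket_litres * fill_fraction / cycle_time_s   # litres delivered per second
    return delivery_time, flow

for fill in (0.0, 0.5, 1.0):
    t, f = bucket_brigade(steps=10, cycle_time_s=5, bucket_litres=10, fill_fraction=fill)
    print(f"buckets {fill:.0%} full: delivery time {t} s, outflow {f:.1f} L/s")
# Delivery time is 50 s in every case - the time-trap - while the outflow
# varies from 0 to 2 L/s depending on how full the buckets are.
```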

Most systems have many processes that are structured just like a bucket brigade: each step in the process contributes to completing the task before handing the part-completed task on to the next step.

The four dimensions of improvement are Safety, Flow, Quality and Productivity and we can see that, if we are not dropping buckets, then the safety, flow and quality are fixed by the design of the process. So what can we do to improve productivity?

Well, it is evident that the time it takes to do the hand-off adds to the cycle-time of each step. So along comes the Fire Service Finance Department who sees time-as-money and they work out that the unit cost of each step of the process could be reduced by accumulating the jobs at each stage and then handing them off as a batch – because the time-is-money and the cost of the hand-off can now be shared across several buckets. They conclude that the unit cost for the steps will come down and productivity will go up – simple maths and intuitively obvious in theory – but does it actually work in reality?

Q1: Does it reduce the number of Bucket Passers? No. We need just as many as we did before. What we are doing is replacing the smaller buckets with bigger ones – and that will require capital investment.  So when our Finance Department use the lower unit cost as justification then the bigger, more expensive buckets start to look like a good financial option – on paper. But looking at the wage bills we can see that they are the same as before so this raises a question: have the bigger buckets increased the flow or reduced the delivery time? We will need a tangible, positive and measurable  improvement in productivity to justify our capital investment.

To summarise: we have the same number of Bucket Passers working at the same cycle time so there is no improvement in how long it takes for the water to reach the fire from the pump! The delivery time is unchanged. And using bigger buckets implies that the pump needs to be able to work faster to fill them in one cycle of the process – but to minimise cost when we created the Fire Service we bought a pump with just enough average flow capacity and it cannot be made to increase its flow. So, equipped with a bigger bucket the first Bucket Passer has to wait longer for their bigger bucket to be filled before passing it on down the line.  This implies a longer cycle-time for the first step, and therefore also for every step in the chain. So the delivery-time will actually get longer and the flow will stay the same – on average. All we appear to have achieved is a higher cost and longer delivery time – which is precisely the opposite of what we intended. Productivity has actually fallen!
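
We can extend the same toy model to test the bigger-bucket policy under the constraint described above – a pump whose flow rate is fixed. Again, the figures are illustrative assumptions.

```python
# Bigger buckets with a fixed-rate pump (illustrative numbers). The first
# Bucket Passer must wait for the pump to fill the bucket, so the cycle
# time of the whole line becomes max(pass time, fill time).

def brigade_with_pump(steps, pass_time_s, bucket_litres, pump_rate_lps):
    fill_time = bucket_litres / pump_rate_lps
    cycle_time = max(pass_time_s, fill_time)   # the slowest step paces the line
    delivery_time = steps * cycle_time
    flow = bucket_litres / cycle_time          # litres delivered per second
    return delivery_time, flow

for litres, label in [(10, "small buckets"), (20, "big buckets")]:
    t, f = brigade_with_pump(steps=10, pass_time_s=5, bucket_litres=litres, pump_rate_lps=2)
    print(f"{label}: delivery time {t:.0f} s, outflow {f:.1f} L/s")
# Small buckets: delivery 50 s, flow 2 L/s.  Big buckets: delivery 100 s,
# flow still 2 L/s - a longer delivery time for no extra flow.
```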

In a state of  near-panic the Fire Service Finance Department decide to measure the utilisation of the Bucket Passers and discover that it has fallen which must mean that they have become lazy! So a Push Policy is imposed to make them work faster – the Service cannot afford financial inducements – and threats cost nothing. The result is that in their haste to avoid penalties the bigger, fuller, heavier buckets get fumbled and some of the precious water is lost – so less reaches the fire.  The yield of the process falls and now we have a more expensive, longer delivery time, lower flow process. Productivity has fallen even further and now the Bucket Passers and Accountants are at war. How much worse can it get?

Where did we go wrong?

We made an error of omission. We omitted to learn the basics of process design before attempting to improve the productivity of our time-trap dominated process!  Our error of omission led us to confuse the step, stage, stream and system and we incorrectly used stage metrics (unit cost and utilisation) in an attempt to improve system performance (productivity). The outcome was the exact opposite of what we intended; a line of unhappy Bucket Passers; a frustrated Finance Department and an angry Customer whose house burned down because our Fire Service did not deliver enough water on time. Lose-Lose-Lose.

Q1: Is it possible to improve the productivity of a time-trap design?

A1: Yes, it is.

Q2: How do we avoid making the same error?

A2: Follow the FISH.

Targets, Tyrannies and Traps.

If we are required to place a sensitive part of our anatomy into a device that is designed to apply significant and sustained pressure, then the person controlling the handle would have our complete attention!

Our sole objective would be to avoid the crushing and relentless pain and this would most definitely bias our behaviour.

We might say or do things that ordinarily we would not – just to escape from the pain.

The requirement to meet well-intentioned but poorly-designed performance targets can create the organisational equivalent of a medieval thumbscrew; and the distorting effect on behaviour is the same.  Some people even seem to derive pleasure from turning the screw!

But what if we do not know how to achieve the performance target? We might then act to deflect the pain onto others – we might become tyrants too – and we might start to apply our own thumbscrews further along the chain of command.  Those unfortunate enough to be at the end of the pecking order have nowhere to hide – and that is a deeply distressing place to be – helpless and hopeless.

Fortunately there is a way out of the corporate torture chamber: It is to learn how to design systems to deliver the required performance specification – and learning how to do this is much easier than many believe.

For example, most assume without question that big queues and long waits are always caused by inefficient use of available capacity – because that is what their monitoring systems report. So out come the thumbscrews, heralded by the chanted mantra “increase utilisation, increase utilisation”.  Unfortunately, this belief is only partially correct: low utilisation of available capacity can and does lead to big queues and long waits, but there is a much more prevalent and insidious cause of long waits that has nothing to do with capacity or utilisation. These little beasties are called time-traps.

The essential feature of a time-trap is that it is independent of both flow and time – it adds the same amount of delay irrespective of whether the flow is low or high and irrespective of when the work arrives. In contrast, waits caused by insufficient capacity are flow and time dependent – the higher the flow the longer the wait – and the effect is cumulative over time.
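A small simulation sketch, using invented service and arrival times, illustrates the two different footprints: the time-trap adds a fixed delay whatever the flow, whereas the wait at an under-capacity step grows as the flow rises.

```python
import random

def capacity_queue_wait(arrival_interval, service_time=8.0, n_tasks=500):
    """Average wait at a single capacity-limited step for a given mean arrival interval."""
    random.seed(1)
    clock = free_at = 0.0
    waits = []
    for _ in range(n_tasks):
        clock += random.expovariate(1.0 / arrival_interval)   # next arrival (minutes)
        start = max(clock, free_at)                            # wait if the resource is busy
        waits.append(start - clock)
        free_at = start + service_time
    return sum(waits) / n_tasks

TIME_TRAP_DELAY = 30.0                                         # fixed delay, e.g. a batching step

for interval in (20, 12, 9):                                   # low, medium and high flow
    print(f"arrival every {interval} min: time-trap delay = {TIME_TRAP_DELAY:.0f} min, "
          f"capacity-queue wait = {capacity_queue_wait(interval):.1f} min")
```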

Many confuse the time-trap with its close relative, the batch – but they are not the same thing at all – and most confuse both of these with capacity-constraints, which are a completely different delay-generating beast altogether.

The distinction is critical because the treatments for time-traps, batches and capacity-constraints are different – and if we get the diagnosis wrong then we will make the wrong decision, choose the wrong action, and our system will get sicker, or at least no better. The corporate pain will continue and possibly get worse – leading to even more bad behaviour and more desperate and self-destructive strategies.

So when we want to reduce lead times by reducing waiting-in-queues, the first thing we need to do is to search for the time-traps, and to do that we need to be able to recognise their characteristic footprint on our time-series charts – the vital signs of our system.

We need to learn how to create and interpret the charts – and to do that quickly we need guidance from someone who can explain what to look for and how to interpret the picture.

If we lack insight and humility and choose not to learn then we are choosing to stay in the target-tyranny-trap and our pain will continue.

The Power of the Positive Deviants

It is neither reasonable nor sensible to expect anyone to be a font of all knowledge.

And gurus with their group-think are useful but potentially dangerous when they suppress competitive paradigms.

So where does an Improvement Scientist seek reliable and trustworthy inspiration?

Guessing is a poor guide; gut-instinct can seriously mislead; and mind-altering substances are illegal, unreliable or both!

So who are the sources of tested ideas and where do we find them?

They are called Positive Deviants and they are everywhere.


But, the phrase positive deviant does not feel quite right does it? The word “deviant” has a strong negative emotional association. We are socially programmed from birth to treat deviations from the norm with distrust and for good reason. Social animals view conformity and similarity as security – it is our herd instinct. Anyone who looks or behaves too far from the norm is perceived as odd and therefore a potential threat and discounted or shunned.

So why consider deviants at all? Well, because anyone who behaves significantly differently from the majority is a potential source of new insight – so long as we know how to separate the positive deviants from the negative ones.

Negative deviants display behaviours that we could all benefit from actively discouraging!  The NoNo or thou-shalt-not behaviours that are usually embodied in Law.  Killing, stealing, lying, speeding, dropping litter – that sort of thing. The anti-social, trust-eroding, conflict-generating behaviour that poisons the pond that we all swim in.

Positive deviants display behaviours that we could all benefit from actively encouraging! The NiceIf behaviours. But we are habitually focussed more on self-protection than self-development and we generalise from specifics. So we treat all deviants the same – we are wary of them. And by so doing we miss many valuable opportunities to learn and to improve.


How then do we identify the Positive Deviants?

The first step is to decide the dimension we want to improve and choose a suitable metric to measure it.

The second step is to measure the metric for everyone and do it over time – not just at a point in time. Single point-in-time measurements (snapshots) are almost useless – we can be tricked by the noise in the system into poor decisions.

The third step is to plot our measure-for-improvement as a time-series chart and look at it.  Are there points at the positive end of the scale that deviate significantly from the average? If so – where and who do they come from? Is there a pattern? Is there anything we might use as a predictor of positive deviance?

Now we separate the data into groups guided by our proposed predictors and compare the groups. Do the Positive Deviants now stick out like a sore thumb? Did our predictors separate the wheat from the chaff?
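As an illustration only, the first four steps might look something like this in code – the unit names, metric values, predictor column and the crude one-sigma threshold are all invented for the sketch.

```python
# Illustrative sketch of the four steps above; the data, column names and the
# one-sigma threshold are invented purely for the example.
import pandas as pd

# Steps 1 and 2: choose one improvement metric and measure it for everyone over time.
data = pd.DataFrame({
    "unit":      ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "week":      [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "metric":    [62, 65, 63, 61, 60, 64, 78, 81, 80],      # higher is better here
    "predictor": ["ward", "ward", "ward", "ward", "ward", "ward",
                  "clinic", "clinic", "clinic"],             # a candidate predictor of deviance
})

# Step 3: look at the time series and flag points at the positive end of the scale.
mean, sd = data["metric"].mean(), data["metric"].std()
data["positive_deviant"] = data["metric"] > mean + sd        # crude threshold, for illustration

# Step 4: separate the data by the proposed predictor and compare the groups.
print(data.groupby("predictor")["metric"].mean())            # do the deviants stick out?
print(data.groupby("unit")["positive_deviant"].any())
```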

If so, we next go and investigate.  We need to compare and contrast the Positive Deviants with the Norms. We need to compare and contrast both their context and their content. We need to know what is similar and what is different. There is something that is causing the sustained deviation and we need to search until we find it – and then we need to know how and why it is happening.

We need to separate associations from causations … we need to understand the chains of events that lead to the better outcomes.

Only then will a new Door to Opportunity magically appear in our Black Wall of Ignorance – a door that leads to a proven path of improvement. A path that has been trodden before by a Positive Deviant – or by a whole tribe of them.

And only we ourselves can choose to open the door and explore the path – we cannot be pushed through by someone else.

When our system is designed to identify and celebrate the Positive Deviants then the negative deviants will be identified too! And that helps too because they will light the path to more NoNos that we can all learn to avoid.

For more about positive deviance from Wikipedia click here

For a case study on positive deviance click here

NB: The terms NiceIfs  and NoNos are two of the N’s on The 4N Chart® – the other two are Nuggets and Niggles.

Seeing Is Believing or Is It?

Do we believe what we see or do we see what we believe?  It sounds like a chicken-and-egg question – so what is the answer? One, the other or both?

Before we explore further we need to be clear about what we mean by the concept “see”.  I objectively see with my real eyes but I subjectively see with my mind’s eye. So to use the word “see” for both is likely to result in confusion and conflict, and to side-step this we will use the word “perceive” for seeing-with-our-mind’s-eye.

When we are sure of our belief then we perceive what we believe. This may sound incorrect but psychologists know better – they have studied sensation and perception in great depth and they have proved that we are all susceptible to “perceptual bias”. What we believe we will see distorts what we actually perceive – and we do it unconsciously. Our expectation acts like a bit of ancient stained glass that obscures and distorts some things and paints in a false picture of the rest.  And that is just during the perception process: when we recall what we perceived we can add a whole extra layer of distortion and can actually modify our original memory! If we do that often enough we can become 100% sure we saw something that never actually happened. This is why eye-witness accounts are notoriously inaccurate!

But we do not do this all of the time.  Sometimes we are open-minded, we have no expectation of what we will see or we actually expect to be surprised by what we will see. We like the feeling of anticipation and excitement – of not knowing what will happen next.   That is the psychological basis of entertainment, of exploration, of discovery, of learning, and of improvement science.

An experienced improvement facilitator knows this – and knows how to create a context where deeply held beliefs can be explored with sensitivity and respect; how to celebrate what works and how and why it does; how to challenge what does not; and how to create novel experiences; foster creativity and release new ideas that enhance what is already known, understood and believed.

Through this exploration process our perception broadens, sharpens and becomes more attuned with reality. We achieve both greater clarity and deeper understanding – and it is these that enable us to make wiser decisions and commit to more effective action.

Sometimes we have an opportunity to see for real what we would like to believe is possible – and that can be the pivotal event that releases our passion and generates our commitment to act. It is called the Black Swan effect because seeing just one black swan dispels our belief that all swans are white.

A practical manifestation of this principle is in the rational design of effective team communication – and one of the most effective I have seen is the Communication Cell – a standardised layout of visual information that is easy-to-see and that creates an undistorted perception of reality.  I first saw it many years ago as a trainee pilot when we used it as the focus for briefings and debriefings; I saw it again a few years ago at Unipart where it is used for daily communication; and I have seen it again this week in the NHS where it is being used as part of a service improvement programme.

So if you do not believe then come and see for yourself.

March Madness

Whether we like it or not we are driven by a triumvirate of celestial clocks. Our daily cycle is the result of the rotation of the Earth; the ebb and flow of the tides is caused by the interaction of the orbiting Moon and the spinning Earth; and the annual sequence of seasons is the outcome of the tilted Earth circling the Sun.  The other planets, stars and galaxies appear not to have much physical influence – despite what astrologists would have us believe. 

Hares are said to behave oddly in the month of March – as popularised by Lewis Carroll in Alice’s Adventures in Wonderland – but there is another form of March Madness that affects people – one that is not celestial and seasonal in origin – its cause is fiscal and financial. The madness that accompanies the end of the tax year.

This fiscal cycle is man-made and arbitrary – it could just as well be any other month and does indeed differ from country to country – and the reason it is April 6th in the UK is that it is based on the ecclesiastical year, which starts on March 25th but was shifted to April 6th when 11 days were lost on the adoption of the Gregorian calendar in 1752.  The driver of the fiscal cycle is taxation and the embodiment in Law of the requirement to present standard annual financial statements for the purpose of personal taxation.

The problem is that this system was designed for a time when the bean-counting bureaucracy was people-pen-paper based and performing this onerous task more often than annually would have been counter-productive.  That is the upside. The downside is that an annual fiscal cycle shackled to a single date creates a feast-and-famine cash flow effect. The public coffers would have a shark-fin shaped wonga-in-progress chart!  And preparing for the end of the financial year creates multi-faceted March madness: annual cash hoarding leads to delayed investment decisions and underspent budgets being disposed of carelessly; short term tax minimisation strategies distort long term investment decisions; and financial targets take precedence over quality and delivery goals. Success or failure hinges on the financial equivalent of threading the eye of a long needle with a bargepole. The annual fiscal policy distorts the behaviour of the system and benefits nobody.

It would be a better design for everyone if fiscal feedback was continuous – especially as the pace of change is quickening to the point that an annual financial planning cycle is painfully long. The good news is that there are elements of fiscal load levelling already: companies can choose a date for their annual returns; sales tax is charged continuously and collected quarterly; income tax is collected monthly or weekly. But with the ubiquitous digital computer the cost of the bureaucracy is now so low that the annual fiscal fiasco is technically unnecessary and it has become more of a liability than an asset.

What would be the advantages of scrapping it? Individuals could change their tax review date and interval to one that better suits them, and this would spread the bureaucratic burden on the inland revenue over the year; the country would have a smoother tax revenue flow and less need to borrow to fund public expenses; and publicly funded organisations could budget on a trimester or even monthly basis and become more responsive to financial fluxes and changes in the system. It could be better for everyone – but it would require radical redesign. We are not yet equipped to do that – we would need to understand the principles of improvement science that relate to the elimination of variation.

And what about the other annual cycle that plagues the population – the Education Niggle? This is the one that forces everyone with children of school age to take family holidays at the same time: Easter, Summer and Christmas – creating another batch-and-queue, feast-and-famine cycle. This fiasco originated in the early 1800s, when educational reformers believed that continuous schooling was unhealthy, and it was institutionalised when the Forster Elementary Education Act of 1870 provided partially state-funded schools – especially for the poor – to supply a sufficient stream of educated workers for the burgeoning Industrial Revolution. Once the expectation of a long summer vacation was established it has proved difficult to change.  More recent evidence shows that the loss of learning momentum has a detrimental effect on children – not to mention the logistical problems created when both parents are working. Children are born all year round and have wide variation in their abilities and rates of learning, so to impose an arbitrary educational cycle is clearly more for the convenience of the schools and teachers than aligned to the needs of children, their families or society.  As our required skills become more generic and knowledge focussed the need for effective and efficient continuous education has never been greater. Digital communication technology is revolutionising this whole sector and individually-tailored, integrated, life-long learning and continuous assessment is now both feasible and more affordable.

And then there is healthcare!  Where do we start?

It is time to challenge and change our out-of-date, no-longer-fit-for-purpose bureaucratic establishment designs – so there will be no shortage of opportunities or work for every competent and capable Improvement Scientist!

Resetting Our Systems

 Our bodies are amazing self-monitoring and self-maintaining systems – and we take them completely for granted!

The fact that it is all automatic is good news for us because it frees us up to concentrate on other things – BUT – it has a sinister side too.  Our automatic monitor-and-maintain design does not imply that what is maintained is healthy – the system is just designed to keep itself stable.

Take our blood pressure as an example. We all have two monitor-and-maintain systems that work together – one that stabilises short-term changes in blood pressure (such as when you recline, stand, run, fight, and flee) and the other that stabilises long-term changes. The image above is a very simplified version of the long-term regulation system!

Around one quarter of all adults are classified as having high blood pressure – which means that it is consistently higher than is healthy – and billions of £ are spent every year on drugs to reduce blood pressure in millions of people.  Why is this an issue? How does it happen? What lessons are there for the student of Improvement Science?

High blood pressure (or hypertension) is dangerous – and the higher it is the more dangerous it is. It is called the silent killer: it is called silent because there are no symptoms, and it is called a killer because over time it causes irreversible damage to vital organs – the heart, the kidneys and the arteries in the brain.

The vast majority of hypertensives have what is called essential hypertension – which means that there is no obvious single cause.  It is believed that this is the result of their system gradually becoming reset so that it actively maintains the high blood pressure.  This is just like gradually increasing the setting on the thermostat in our house – say by just 0.01 degree per week – not much and not even measurable – but over time the cumulative effect would have a big impact on our heating bills!
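A one-line calculation, with purely illustrative numbers, shows why a drift that is too small to measure still matters.

```python
# Hypothetical creeping-thermostat arithmetic: a weekly drift far too small to
# notice still accumulates into a large shift over the years.
weekly_drift = 0.01                       # degrees per week - below measurement noise
for years in (1, 5, 10):
    print(f"after {years:2d} years the set-point has crept up by "
          f"{weekly_drift * 52 * years:.1f} degrees")
```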

So, what resets our long-term blood pressure regulation system? It is believed that the main culprit is stress because when we feel stressed our bodies react in the short-term by pushing our blood pressure up – it is called the fright-fight-flight response. If the stress is repeated time and time again our pressure-o-stat becomes gradually reset and the high blood pressure is then maintained, even when we do not feel stressed. And we do not notice – until something catastrophic happens! And that is too late.

The same effect happens in organisations except that the pressure is emotional and is created by the stress of continually fighting to meet performance targets. The result is a gradual resetting of our expectations and behaviours and the organisation develops emotional hypertension, which leads to irreversible damage to the organisation’s culture. This emotional creep goes largely unnoticed until a catastrophic event happens – and if severe enough the organisation will be crippled and may not survive. The Mid Staffs Hospital patient safety catastrophe is a real and recent example of cultural creep in a healthcare organisation driven by incessant target-driven behaviour. It is a stark lesson to us all.

So what is the solution?

The first step is to realise that we cannot just rely on hope, ignore the risk and wait for the early warning symptoms – by that time the damage may be irreversible; or the catastrophe may get us without warning. We have to actively look for the signs of the creeping cultural change – and we have to do that over a long period of time because it is gradual. So, if we have just been jolted out of denial by a too-close-for-comfort experience then we need to adopt a different strategy and use an external absolute reference – an emotionally and culturally healthy organisation.

The second step is to adopt a method that will tell us reliably if there is a significant shift in our emotional pressure – a method that is sensitive enough to alert us before it goes outside a safe range – because we want to intervene as early as possible and only when necessary. Masterly inactivity and cat-like observation, according to one wise medical mentor.

The third step is to actively remove as many of the stressors as possible – and for an organisation this means replacing DRATs (Delusional Ratios and Arbitrary Targets) with well-designed specification limits; and replacing reactive fire-fighting with proactive feedback. This is the role of the leaders.

The fourth step is to actively reduce the emotional pressure but to do it gradually because the whole system needs to adjust. Dropping the emotional pressure too quickly is as dangerous as discounting its importance.

The key to all of this is the appropriate use of data and time-series analysis, because the smaller long-term shifts are hidden in the large short-term variation. This is where many get stuck because they are not aware that there are two different sorts of statistics. The correct sort for monitoring systems is called time-series statistics and it is not the same as the statistics that we learn at school and university, which is called comparative statistics. This is a shame really because time-series statistics is much more applicable to everyday life problems such as managing our blood pressure, our weight, our finances, and the cultural health of our organisations.
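To make the distinction concrete, here is a minimal sketch of one common time-series tool – an individuals-and-moving-range (XmR) chart. The data values are invented, and the 2.66 constant is the standard XmR convention rather than anything prescribed in the text.

```python
# Minimal XmR (individuals and moving range) sketch - one common form of
# time-series statistics. The data values are invented for illustration.
values = [72, 75, 71, 74, 73, 76, 74, 78, 77, 79, 80, 81]     # e.g. a weekly measure

moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
mean_x = sum(values) / len(values)
mean_mr = sum(moving_ranges) / len(moving_ranges)

# Natural process limits derived from the average moving range (2.66 is the usual XmR constant).
upper = mean_x + 2.66 * mean_mr
lower = mean_x - 2.66 * mean_mr

for week, value in enumerate(values, start=1):
    flag = "  <-- signal: investigate" if not (lower <= value <= upper) else ""
    print(f"week {week:2d}: {value}{flag}")
print(f"centre = {mean_x:.1f}, natural process limits = ({lower:.1f}, {upper:.1f})")
```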

Fortunately time-series statistics is easier to learn and use than school statistics so to get started on resetting your personal and organisational emot-o-stat please help yourself to the complimentary guide by clicking here.

Homeostasis

Improvement Science is not just about removing the barriers that block improvement and building barriers to prevent deterioration – it is also about maintaining acceptable, stable and predictable performance.

In fact most of the time this is what we need our systems to do so that we can focus our attention on the areas for improvement rather than running around keeping all the plates spinning.  Improving the ability of a system to maintain itself is a worthwhile and necessary objective.

Long term stability cannot be achieved by assuming a stable context and creating a rigid solution because the World is always changing. Long term stability is achieved by creating resilient solutions that can adjust their behaviour, within limits, to their ever-changing context.

This self-adjusting behaviour of a system is called homeostasis.

The foundation for the concept of homeostasis was first proposed by Claude Bernard (1813-1878) who unlike most of his contemporaries, believed that all living creatures were bound by the same physical laws as inanimate matter.  In his words: “La fixité du milieu intérieur est la condition d’une vie libre et indépendante” (“The constancy of the internal environment is the condition for a free and independent life”).

The term homeostasis is attributed to Walter Bradford Cannon (1871 – 1945) who was a professor of physiology at Harvard medical school and who popularized his theories in a book called The Wisdom of the Body (1932). Cannon described four principles of homeostasis:

  1. Constancy in an open system requires mechanisms that act to maintain this constancy.
  2. Steady-state conditions require that any tendency toward change automatically meets with factors that resist change.
  3. The regulating system that determines the homeostatic state consists of a number of cooperating mechanisms acting simultaneously or successively.
  4. Homeostasis does not occur by chance, but is the result of organised self-government.

Homeostasis is therefore an emergent behaviour of a system and is the result of organised, cooperating, automatic mechanisms. We know this by another name – feedback control – which is passing data from one part of a system to guide the actions of another part. Any system that does not have homeostatic feedback loops as part of its design will be inherently unstable – especially in a changing environment.  And unstable means untrustworthy.

Take driving for example. Our vehicle and its trusting passengers want to get to their desired destination on time and in one piece. To achieve this we will need to keep our vehicle within the boundaries of the road – the white lines – in order to avoid “disappointment”.

As their trusted driver our feedback loop consists of a view of the road ahead via the front windscreen; our vision connected through a working nervous system to the muscles in our arms and legs; to the steering wheel, accelerator and brakes; then to the engine, transmission, wheels and tyres and finally to the road underneath the wheels. It is quite a complicated multi-step feedback system – but an effective one. The road can change direction and unpredictable things can happen and we can adapt, adjust and remain in control.  An inferior feedback design would be to use only the rear-view mirror and to steer by looking at the white lines emerging from behind us. This design is just as complicated but it is much less effective and much less safe because it is entirely reactive.  We get no early warning of what we are approaching.  So, any system that uses the output performance as the feedback loop to the input decision step is like driving with just a rear-view mirror.  Complex, expensive, unstable, ineffective and unsafe.

As the number of steps in a process increases, the design of the feedback stabilisation becomes more important – as does the number of ways we can get it wrong: the wrong feedback signal, or from the wrong place, or to the wrong place, or at the wrong time, or with the wrong interpretation – any of which result in the wrong decision, the wrong action and the wrong outcome. Getting it right means getting all of it right all of the time – not just some of it right some of the time. We can’t leave it to chance – we have to design it to work.

Let us consider a real example. The NHS 18-week performance requirement.

The stream map shows a simple system with two parallel streams, A and B, each with two steps, 1 and 2. A typical example would be generic referral of patients for investigation and treatment to one of a number of consultants who offer that service. The two streams do the same thing, so the first step of the system is to decide which way to direct new tasks – to Step A1 or to Step B1. The whole system is required to deliver completed tasks in less than 18 weeks (18/52) – irrespective of which stream we direct work into.  What feedback data do we use to decide where to direct the next referral?

The do nothing option is to just allocate work without using any feedback. We might do that randomly, alternately or by some other means that are independent of the system.  This is called a push design and is equivalent to driving with your eyes shut but relying on hope and luck for a favourable outcome. We will know when we have got it wrong – but it is too late then – we have crashed the system! 

A more plausible option is to use the waiting time for the first step as the feedback signal – streaming work to the first step with the shortest waiting time. This makes sense because the time waiting for the first step is part of the lead time for the whole stream so minimising this first wait feels reasonable – and it is – BUT only in one situation: when the first steps are the constraint steps in both streams [the constraint step is the one that defines the maximum stream flow].  If this condition is not met then we are heading for trouble, and the map above illustrates why. In this case Stream A is just failing the 18-week performance target but because the waiting time for Step A1 is the shorter we would continue to load more work onto the failing stream – and literally push it over the edge. In contrast Stream B is not failing, and because the waiting time for Step B1 is the longer it is not being overloaded – it may even be underloaded.  So this “plausible” feedback design can actually make the system less stable. Oops!

In our transport metaphor – this is like driving too fast at night or in fog – only being able to see what is immediately ahead – and then braking and swerving to get around corners when they “suddenly” appear and running off the road unintentionally! Dangerous and expensive.

With this new insight we might now reasonably suggest using the actual output performance to decide which way to direct new work – but this is back to driving by watching the rear-view mirror!  So what is the answer?

The solution is to design the system to use the most appropriate feedback signal to guide the streaming decision. That feedback signal needs to be forward looking, responsive, and to lead to stable and equitable performance of the whole system – and it may originate from inside the system. The diagram above holds the hint: the predicted waiting time for the second step would be a better choice.  Please note that I said the predicted waiting time – which is estimated when the task leaves Step 1 and joins the back of the queue between Step 1 and Step 2. It is not the actual time the most recent task came off the queue: that is rear-view mirror gazing again.
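A sketch of such a streaming rule might look like this – the stream names, queue sizes and cycle times are hypothetical – with the predicted Step 2 wait estimated from the work already queued ahead of Step 2 rather than from how long the last task actually waited.

```python
# Hypothetical streaming decision for a two-stream, two-step system: direct each
# new referral to the stream with the shortest PREDICTED wait for its second step.

streams = {
    "A": {"queued_before_step2": 40, "step2_cycle_time_days": 2.0},
    "B": {"queued_before_step2": 25, "step2_cycle_time_days": 2.5},
}

def predicted_step2_wait(stream):
    # tasks already waiting multiplied by how long Step 2 takes to clear each one
    return stream["queued_before_step2"] * stream["step2_cycle_time_days"]

for name, stream in streams.items():
    print(f"stream {name}: predicted Step 2 wait = {predicted_step2_wait(stream):.0f} days")

choice = min(streams, key=lambda name: predicted_step2_wait(streams[name]))
print(f"direct the next referral to stream {choice}")
```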

When driving we look as far ahead as we can, for what we are heading towards, and we combine that feedback with our present speed to predict how much time we have before we need to slow down, when to turn, in which direction, by how much, and for how long. With effective feedback we can behave proactively, avoid surprises, and eliminate sudden braking and swerving! Our passengers will have a more comfortable ride and are more likely to survive the journey! And the better we can do all that the faster we can travel in both comfort and safety – even on an unfamiliar road.  It may be less exciting but excitement is not our objective. On time delivery is our goal.

Excitement comes from anticipating improvement – maintaining what we have already improved is rewarding.  We need both to sustain us and to free us to focus on the improvement work! 

 

Pushmepullyu

The pushmepullyu is a fictional animal immortalised in the 1960’s film Dr Dolittle featuring Rex Harrison who learned from a parrot how to talk to animals.  The pushmepullyu was a rare, mysterious animal that was never captured and displayed in zoos. It had a sharp-horned head at both ends and while one head slept the other stayed awake so it was impossible to sneak up on and capture.

The spirit of the pushmepullyu lives on in Improvement Science as Push-Pull and remains equally mysterious and difficult to understand and explain. It is confusing terminology. So what does Push-Pull actually mean?

To decode the terminology we need to first understand a critical metric of any process – the constraint cycle time (CCT) – and to do that we need to define what the terms constraint and cycle time mean.

Consider a process that comprises a series of steps that must be completed in sequence.  If we put one task through the process we can measure how long each step takes to complete its contribution to the whole task.  This is the touch time of the step and if the resource is immediately available to start the next task this is also the cycle time of the step.

If we now start two tasks at the same time then we will observe what happens when an upstream step has a longer cycle time than the next step downstream: it will shadow the downstream step. In contrast, if the upstream step has a shorter cycle time than the next step downstream then it will expose the downstream step. The differences in the cycle times of the steps determine the behaviour of the process.

Confused? Probably.  The description above is correct BUT hard to understand because we learn better from reality than from rhetoric; and we find pictures work better than words.  Pragmatic comes before academic; reality before theory.  We need a realistic example to learn from.

Suppose we have a process that we are told has three steps in sequence, and when one task is put through it takes 30 mins to complete.  This is called the lead time and is an important process output metric. We now know it is possible to complete the work in 30 mins so we can set this as our lead time expectation.  

Suppose we plot a chart of lead times in the order that the tasks start and record the start time and lead time for each one – and we get a chart that looks like this. It is called a lead time run chart.  The first six tasks complete in 30 mins as expected – then it all goes pear-shaped. But why?  The run chart does not tell  us the reason – it just alerts us to dig deeper. 

The clue is in the run chart but we need to know what to look for.  We do not know how to do that yet so we need to ask for some more data.

We are given this run chart – which is a count of the number of tasks being worked on recorded at 5 minute intervals. It is the work in progress run chart.

We know that we have a three step process and three separate resources – one for each step. So we know that if there is a WIP of less than 3 we must have idle resources; and if there is a WIP of more than 3 we must have queues of tasks waiting.

We can see that the WIP run chart looks a bit like the lead time run chart.  But it still does not tell us what is causing the unstable behaviour.

In fact we do already have all the data we need to work it out but it is not intuitively obvious how to do it. We feel we need to dig deeper.

 We decide to go and see for ourselves and to observe exactly what happens to each of the twelve tasks and each of the three resources. We use these observations to draw a Gantt chart.

Now we can see what is happening.

We can see that the cycle time of Step 1 (green) is 10 mins; the cycle time for Step 2 (amber) is 15 mins; and the cycle time for Step 3 (blue) is 5 mins.

 

This explains why the minimum lead time was 30 mins: 10+15+5 = 30 mins. OK – that makes sense now.

Red means tasks waiting and we can see that a lead time longer than 30 mins is associated with waiting – which means one or more queues.  We can see that there are two queues – the first between Step 1 and Step 2 which starts to form at Task G and then grows; and the second before Step 1 which first appears for Task J  and then grows. So what changes at Task G and Task J?

Looking at the chart we can see that the slope of the left hand edge is changing – it is getting steeper – which means tasks are arriving faster and faster. We look at the interval between the start times and it confirms our suspicion. This data was the clue in the original lead time run chart. 

Looking more closely at the differences between the start times we can see that the first three arrive at one every 20 mins; the next three at one every 15 mins; the next three at one every 10 mins and the last three at one every 5 mins.

Ah ha!

Tasks are being pushed  into the process at an increasing rate that is independent of the rate at which the process can work.     

When we compare the rate of arrival with the cycle time of each step in a process we find that one step will be most exposed – it is called the constraint step and it is the step that controls the flow in the whole process. The constraint cycle time is therefore the critical metric that determines the maximum flow in the whole process – irrespective of how many steps it has or where the constraint step is situated.

If we push tasks into the process slower than the constraint cycle time then all the steps in the process will be able to keep up and no queues will form – but all the resources will be under-utilised (Tasks A to C).

If we push tasks into the process faster than the cycle time of any step then queues will grow upstream of these multiple constraint steps – and those queues will grow bigger, take up space and take up time, and will progressively clog up the resources upstream of the constraints while starving those downstream of work (Tasks G to L).

The optimum is when the work arrives at the same rate as the cycle time of the constraint – this is called pull and it means that the constraint acts as the pacemaker and is used to pull the work into the process (Tasks D to F).

With this new understanding we can see that the correct rate to load this process is one task every 15 mins – the cycle time of Step 2.

We can use a Gantt chart to predict what would happen.

The waiting is eliminated, the lead time is stable and meets our expectation, and when task B arrives the WIP is 2 and stays stable.

In this example we can see that there is now spare capacity at the end for another task – we could increase our productivity; and we can see that we need less space to store the queue which also improves our productivity.  Everyone wins. This is called pull scheduling.  Pull is a more productive design than push. 
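A minimal discrete-event sketch of this three-step example – with the cycle times of 10, 15 and 5 minutes from the Gantt chart, and a pushed arrival pattern reconstructed from the intervals described above – reproduces the behaviour: the first six lead times stay at 30 minutes and then grow, whereas pulling one task every 15 minutes keeps every lead time at 30 minutes.

```python
# Minimal sketch of the three-step example. Cycle times are taken from the text;
# the exact push start times are reconstructed from the described intervals
# (every 20, 15, 10 then 5 minutes) and are therefore only illustrative.

CYCLE_TIMES = [10, 15, 5]                      # Step 1, Step 2 (the constraint), Step 3

def lead_times(start_times):
    free_at = [0.0] * len(CYCLE_TIMES)         # when each step's resource next becomes free
    result = []
    for arrival in start_times:
        t = arrival
        for step, cycle_time in enumerate(CYCLE_TIMES):
            t = max(t, free_at[step]) + cycle_time   # wait if the step is busy, then work
            free_at[step] = t
        result.append(t - arrival)             # lead time = finish time - start time
    return result

push = [0, 20, 40, 55, 70, 85, 95, 105, 115, 120, 125, 130]   # accelerating arrivals
pull = [15 * i for i in range(12)]                            # one task per constraint cycle

print("push lead times:", lead_times(push))    # 30 mins for six tasks, then climbing
print("pull lead times:", lead_times(pull))    # 30 mins for every task
```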

To improve process productivity it is necessary to measure the sequence and cycle time of every step in the process.  Without that information it is impossible to understand and rationally improve our process.     

BUT in reality we have to deal with variation – in everything – so imagine how hard it is to predict how a multi-step process will behave when work is being pumped into it at a variable rate and resources come and go! No wonder so many processes feel unpredictable, chaotic, unstable, out-of-control and impossible to both understand and predict!

This feeling is an illusion because by learning and using the tools and techniques of Improvement Science it is possible to design and predict-within-limits how these complex systems will behave.  Improvement Science can unravel this Gordian knot!  And it is not intuitively obvious. If it were we would be doing it.

Single Sell System

In the pursuit of improvement it must be remembered that the system must remain viable: better but dead is not the intended outcome.  Viability of socioeconomic systems implies that money is flowing to where it is needed, when it is needed and in the amounts that are needed.

Money is like energy – it only does worthwhile work when it is moving: so the design of more effective money-streams is a critical part of socioeconomic system improvement.

But this is not easy or obvious because the devil is in the detail and complexity grows quickly and obscures the picture. This lack of a clear picture creates the temptation to clean, analyse, simplify and conceptualise, and very often leads to analysis-paralysis and then over-simplification.

There is a useful metaphor for this challenge.

Biological systems use energy rather than money and the process of improvement has a different name – it is called evolution. Each of us is an evolution experiment. The viability requirement is the same though – the success of the experiment is measured by our viability. Do our genes and memes survive after we have gone?

It is only in recent times that the mechanism of this biological system has become better understood. It was not until the 19th Century that we realised that complex organisms were made of reproducing cells; and later that there were rules that governed how inherited characteristics passed from generation to generation; and that the vehicle of transmission was a chemical code molecule called DNA that is present in every copy of every cell capable of reproduction.

We learned that our chemical blueprint is stored in the nucleus of every cell (the dark spots in the picture of cells) and this led to the concept that the nucleus worked like a “brain” that issues chemical orders to the cell in the form of a very similar molecule called RNA.  This cellular command-and-control model is unfortunately more a projection of the rhetoric of society than the reality of the situation. The nucleus is not a “brain” – it is a gonad. The “brain” of a cell is the surface membrane – the sensitive interface between outside and inside; where the “sensor” molecules in the outer cell membrane connect to “effector” molecules on the inside.  Cells think with their skin – and their behaviour is guided by their  internal content and external context. Nature and nurture working as a system.

Cells have evolved to collaborate. Rogue cells that become “mentally” unstable and that break away, start to divide, and spread in an uncollaborative and selfish fashion threaten the viability of the whole: they are called malignant. The threat of malignant behaviour to long term viability is so great that we have evolved sophisticated mechanisms to detect and correct malignant behaviour. The fact that cancer is still a problem is because our malignancy defense mechanisms are not 100% effective. 

This realisation of the importance of the cell has led medical research to focus on understanding how individual cells “sense”, “think”, “act” and “communicate”, and has led to great leaps in our understanding of how multi-celled systems called animals and plants work; how they can go awry; and what can be done to prevent and correct these cellular niggles.  We are even learning how to “fix” bits of the chemical blueprint to correct our chemical software glitches. We are nowhere near being able to design a cell from scratch though. We simply do not understand enough about how it works.

In comparison, the “single-sell” in an economic system could be considered to be a step in a process – the point where the stream and the silo meet – where expenses are converted to revenue, for example.  I will wantonly bend the rules of grammar and use the word “sell” to distinguish it visually from “cell”. So before trying to understand the complex emergent behaviour of a multi-selled economic system we first need to understand better how one sell works. How do work flow, time flow and money flow combine at the single sell?

When we do so we learn that the “economic mechanism” of a single sell can be described completely because it is a manifestation of the Laws of Physics – just as the mechanism of the weather can be described using a small number of equations that combine to describe the flow, pressure, density, temperature and so on of the atmospheric gases.  Our simplest single-selled economic system is described by a set of equations – there are about twenty of them in fact.

So, trying to work out in our heads how even a single sell in an economic system will behave amounts to mentally managing twenty simultaneous equations – which is a bit of a problem because we are not very good at that mental maths trick. The best we can do is to learn the patterns in the interdependent behaviour of the outputs of the equations; to recognise what they imply; and then how to use that understanding to craft wiser decisions.

No wonder the design of a viable socioeconomic multi-selled system seems to be eluding even the brightest economic minds at the moment!  It is a complicated system which exhibits complex behaviour.  Is there a better approach?  Our vastly more complex biological counterparts called “organisms” seem to have discovered one. So what can we learn from them?

One lesson might be that it is a good design to detect and correct malignant behaviour early: the unilateral, selfish, uncollaborative behaviour that multiplies, spreads, and becomes painful, incurable and then lethal.

First we need to raise awareness and recognition of it … only then can we challenge and contain its toxic legacy.   

Systemory

How do we remember the vast amount of information that we seem to be capable of?

Our brains are comprised of billions of cells most of which are actually inactive and just there to support the active brain cells – the neurons.

Suppose that the active brain cell part is 50% and our brain has a volume of about 1.2 litres or 1,200 cu.cm or 1,200,000 cu.mm. We know from looking down a microscope that each neuron is about 20/1,000 mm x 20/1,000 mm x 20/1,000 mm, which gives a volume of 8/1,000,000 cu.mm or 125,000 neurons for every cu.mm. The population of a medium sized town in a grain of salt!  This is a concept we can just about grasp. And with these two facts we estimate that there are in the order of 75,000,000,000 neurons in a human brain – 75 billion – about ten times the population of the whole World. Wow!
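A one-line check of that back-of-envelope estimate, using only the assumptions stated above:

```python
# Back-of-envelope neuron estimate using only the assumptions stated above.
brain_volume_mm3 = 1.2e6                        # 1.2 litres = 1,200,000 cu.mm
active_fraction = 0.5                           # assume half the volume is neurons
neuron_side_mm = 20 / 1000                      # 20 micrometres
neurons_per_mm3 = 1 / neuron_side_mm ** 3       # = 125,000 per cu.mm
neurons = brain_volume_mm3 * active_fraction * neurons_per_mm3
print(f"about {neurons:.1e} neurons")           # ~7.5e10, i.e. roughly 75 billion
```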

But even that huge number is less than the size of the memory on the hard disc of the computer I am writing this blog on – which has 200 gigabytes, which is 1,600 gigabits, which is 1,600 billion bits. More than twenty times as many memory cells as there are neurons in a human brain.

But our brains are not just for storing data – they do all the data processing too – it is an integrated processor-and-memory design completely unlike the separate processor-or-memory design of a digital computer.  Each of our brains is remarkable in its capability, adaptability, and agility – its ability to cope with change – its ability to learn and to change its behaviour while still working.  So how does our biological memory work?

Well, not like a digital computer where the zeros and ones, the binary digits (bits), are stored in a regular structure of memory cells – a static structural memory – a data prison.  Our biological memory works in a completely different way – it is a temporal memory – it is time dependent. Our memories are not “recalled” like getting a book out of an indexed slot on a numbered shelf in a massive library; our memories are replayed like a recording or rebuilt from a recipe. Time is the critical factor and this concept of temporal memory is a feature of all systems.

And that is not all – the temporal memory is not a library of video tapes – it is the simultaneous collective action of many parts of the system that creates the illusion of the temporal memory – we have a parallel-distributed-temporal-memory. More like a video hologram. And it means we cannot point to the “memory” part of our brains – it is distributed throughout the system – and this means that the connections between the parts are as critical a part of the design as the parts themselves. It is a tricky concept to grasp and none of the billions of digital computers that co-inhabit this planet operate this way. They are feeble and fragile in comparison. An inferior design.

The terms distributed-temporal or systemic-memory are a bit cumbersome though so we need a new label – let us call it a systemory.  The properties of a systemory are remarkable – for example it still works when a bit of the systemory is removed.  When a bit of your brain is removed you don’t “forget” a bit of your name or lose the left ear on the mental picture of your friend’s face – as would happen with a computer.  A systemory is resilient to damage, which is a necessary design-for-survival. It also implies that we can build our systemory with imperfect parts and incomplete connections. In a digital computer this would not work: the localised-static or silo-memory has to be perfect because if a single bit gets flipped or a single wire gets fractured it can render the whole computer inoperative – useless junk.

Another design-for-survival property of a systemory is that it still works even when it is being changed – it is continuously adaptable and updateable.  Not so a computer – to change the operating system the computer has to be stopped, the old program overwritten by the new one, then the new one started. In fact computers are designed to prevent programs modifying themselves – because it is a sure recipe for a critical system failure – the dreaded blue screen!

So if we map our systemory concept across from person to population and we replace neurons with people then we get an inkling of how a society can have a collective memory, a collective intelligence, a collective consciousness even – a social systemory. We might call that property the culture.  We can also see that the relationships that link the people are as critical as the people themselves and that both can be imperfect yet we get stable and reliable behaviour. We can also see that influencing the relationships between people has as much effect on the system behaviour as how the people themselves perform – because the properties of the systemory are emergent. Culture is an output not an input.

So in the World – the development of global communication systems means that all 7 billion people in the global social systemory can, in principle, connect to each other and can collectively learn and change faster and faster as the technology to connect more widely and more quickly develops. The rate of culture change is no longer governed by physical constraints such as geographic location, or temporal constraints such as how long a letter takes to be delivered.

Perhaps the most challenging implication is that a systemory does not have a “point of control” – there is no librarian who acts as a gatekeeper to the data bank, no guard on the data prison.  The concept of “control” in a systemory is different – it is global not local – and it is influence not control.  The rapid development of mobile communication technology and social networking gives ample evidence – we would now rather communicate with a familiar on the other side of the world than with a stranger standing next to us in the lunch queue. We have become tweeting and texting daemons.  Our emotional relationships are more important than our geographical ones. And if enough people can connect to each other they can act in a collective, coordinated, adaptive and agile way that no command-and-control system can either command or control. The recent events in the Middle East are ample evidence of the emergent effectiveness of a social systemory.

Our insight exposes a weakness of a social systemory – it is possible to adversely affect the whole by introducing a behavioural toxin that acts at the social connection level – on the relationships between people. The behavioural toxin needs only to have a weak and apparently harmless effect but, when disseminated globally, the cumulative effect creates cultural dysfunction.  It is rather like the effect of alcohol and other recreational chemical substances on the brain – it causes a temporary systemory dysfunction – but one that in an over-stressed psychological system paradoxically results in pleasure; or rather stress release. Hence the self-reinforcing nature of the addiction.

Effective leaders are intuitively aware that their behaviour alone can be a tonic or a toxin for the whole system: organisations are in the same emotional boat as their leader.

Effective leaders use their behaviour to steer the systemory of the organisation along a path of improvement and their behaviour is the output of their personal systemory.

Leaders have to be the change that they want their organisations to achieve.

The Three Faces of Improvement Science

There is always more than one way to look at something and each perspective is complementary to the others.

Improvement Science has three faces: the first is the Process face; the second is the People face; and the third is the System face – and it is represented in the logo with a different colour for each face.

The process face is the easiest to start with because it is logical, objective and absolute.  It describes the process; the what, where, when and how. It is the combination of the hardware and the software; the structure and the function – and it is constrained by the Laws of Physics.

The people face is emotional, subjective and relative.  It describes the people and their perceptions and their purposes. Each person interacts both with the process and with each other and their individual beliefs and behaviours drive the web of relationships. This is the world of psychology and politics.

The system face is neither logical nor emotional – it has characteristics that are easy to describe but difficult to define. Characteristics such as self-organisation, emergent behaviour, and complexity.  Our brains do not appear to be able to comprehend systems as easily and intuitively as we might like to believe. This is one reason why systems often feel counter-intuitive, unpredictable and mysterious. We discover that we are unable to make intuitive decisions that result in whole system improvement because our intuition tricks us.

Gaining confidence and capability in the practical application of Improvement Science requires starting from our zone of relative strength – our conscious, logical, rational, explainable, teachable, learnable, objective dependency on the physical world. From this solid foundation we can explore our zone of self-control – our internal unconscious, psychological and emotional world; and from there our zone of relative weakness – the systemic world of multiple interdependencies that, over time, determine our individual and collective fate.

The good news is that the knowledge and skills we need to handle the rational physical process face are easy and quick to learn.  It can be done with only a short period of focussed, learning-by-doing.  With that foundation in place we can then explore the more difficult areas of people and systems.


The Devil and the Detail

There are two directions from which we can approach an improvement challenge. From the bottom up – starting with the real details and distilling the principle later; and from the top down – starting with the conceptual principle and doing the detail later.  Neither is better than the other – both are needed.

As individuals we have an innate preference for real detail or conceptual principle – and our preference is manifest by the way we think, talk and behave – it is part of our personality.  It is useful to have insight into our own personality and to recognise that when other people approach a problem in a different way then we may experience a difference of opinion, a conflict of styles, and possibly arguments.  

One very well established model of personality type was proposed by Carl Gustav Jung, who was a psychologist and who approached the subject from the perspective of understanding psychological “illness”.  Jung’s “Psychological Types” was used as the foundation of the life-work of Isabel Briggs Myers, who was not a psychologist and who was looking from the direction of understanding psychological “normality”. In her book Gifts Differing – Understanding Personality Type (ISBN 978-0891-060741) she demonstrates, using empirical data, that there is not one normal or ideal type that we all deviate from – rather that there is a set of stable types, each of which represents a “different gift”. By this she means that different personality types are suited to different tasks: when the type resonates with the task it results in high performance and is seen as an asset or “strength”, and when it does not it results in low performance and is seen as a liability or “weakness”.

One of the multiple dimensions of the Jungian and Myers-Briggs personality type model is the Sensor–iNtuitor dimension – the S-N dimension. This dimension represents where we hold the reference model that provides us with data – data that we convert to information – and information that we use to derive decisions and actions.

A person who is naturally inclined to the Sensor end of the S-N dimension prefers to use Reality and Actuality as their reference – and they access it via their senses – sight, sound, touch, smell and taste. They are often detail and data focussed; they trust their senses and their conscious awareness; and they are more comfortable with routine and structure.  

A person who is naturally inclined to the iNtuitor end of the S-N dimension prefers to use Rhetoric and Possibility as their reference – their internal conceptual model that they access via their intuition. They are often principle and concept focussed and discount what their senses tell them in favour of their intuition. iNtuitors feel uncomfortable with routine and structure, which they see as barriers to improvement.

So when a Sensor and an iNtuitor are working together to solve a problem they are approaching it from two different directions, and even when they have a common purpose, common values and a common objective it is very likely that conflict will occur if they are unaware of their different gifts.

Gaining this awareness is a key to success because the synergy of the two approaches is greater than either working alone – the sum is greater than the parts – but only if there is awareness and mutual respect for the different gifts.  If there is no awareness and low mutual respect then the sum will be less than the parts and the problem will not be dissolvable.

In her research, Isabel Briggs Myers found that about 60% of high school students have a preference for S and 40% have a preference for N – but when the “academic high flyers”  were surveyed the ratio was S=17%  and N=83% – and there was no difference between males and females.  When she looked at the S-N distribution in different training courses she discovered that there were a higher proportion of S-types in Administrators (59%), Police (80%), and Finance (72%) and a higher proportion of N-types in Liberal Arts (59%), Engineering (65%), Science (83%), Fine Arts (91%), Occupational Therapy (66%), Art Education (87%), Counselor Education (85%), and Law (59%).  Her observation suggested that individuals select subjects based on their “different gifts” and this throws an interesting light on why traditional professions may come into conflict and perhaps why large organisations tend to form departments of “like-minded individuals”.  Departments with names like Finance, Operations and Governance  – or FOG.

This insight also offers an explanation for the conflict between “strategists” who tend to be N-types and who naturally gravitate to the “manager” part of an organisation and the “tacticians” who tend to be S-types and who naturally gravitate to the “worker” part of the same organisation.

It has also been shown that conventional “intelligence tests” favour the N-types over the S-types, which suggests why highly intelligent academics may perform very poorly when asked to apply their concepts and principles in the real world. Effective action requires pragmatists – but academics tend to congregate in academic institutions – often disrespectfully labelled by pragmatists as “Ivory Towers”.

Unfortunately this innate tendency to seek like-types is counter-productive because it reinforces the differences, exacerbates the communication barriers, and leads to “tribal” and “disrespectful” and “trust eroding” behaviour, and to the “organisational silos” that are often evident.

Complex real-world problems cannot be solved this way because they require the synergy of the gifts – each part playing to its strength when the time is right.

The first step to know-how is self-awareness.

If you would like to know your Jungian/MBTI® type you can do so by getting the app: HERE

Doing Our Way to New Thinking.

Most of our thinking happens out of awareness – it is unconscious. Most of the data that pours in through our senses never reaches awareness either – but that does not mean it does not have an impact on what we remember, how we feel and what we decide and do in the future. It does.

Improvement Science is the knowledge of how to achieve sustained change for the better; and doing that requires an ability to unlearn unconscious knowledge that blocks our path to improvement – and to unlearn selectively.

So how can we do that if it is unconscious? Well, there are  at least two ways:

1. Bring the unconscious knowledge to the surface so it can be examined, sorted, kept or discarded. This is done through the social process of debate and discussion. It does work though it can be a slow and difficult process.

2. Do the unlearning at the unconscious level – and we can do that by using reality rather than rhetoric. The easiest way to connect ourselves to reality is to go out there and try doing things.

When we deliberately do things  we are learning unconsciously because most of our sensory data never reaches awareness.  When we are just thinking the unconscious is relatively unaffected: talking and thinking are the same conscious process. Discussion and dialog operate at the conscious level but differ in style – discussion is more competitive; dialog is more collaborative. 

The door to the unconscious is controlled by emotions – and it appears that learning happens more effectively and more efficiently in certain emotional states. Some emotional states can impair learning, such as depression, frustration and anxiety. Strong emotional states associated with dramatic experiences can result in profound but unselective learning – the emotionally vivid memories that are often associated with unpleasant events.  Sometimes the conscious memory is so emotionally charged and unpleasant that it is suppressed – but the unconscious memory is not so easily erased – so it continues to influence but out of awareness. The same is true for pleasant emotional experiences – they can create profound learning experiences – and the conscious memory may be called an inspirational or “eureka” moment – a sudden emotional shift for the better. And it too is unselective and difficult to erase.

An emotionally safe environment for doing new things and having fun at the same time comes close to the ideal context for learning. In such an environment we learn without effort. It does not feel like work – yet we know we have done work because we feel tired afterwards.  And if we were to record the way that we behave and talk before the doing, and again afterwards, then we will measure a change even though we may not notice the change ourselves. Other people may notice before we do – particularly if the change is significant – or if they only interact with us occasionally.

It is for this reason that keeping a personal journal is an effective way to capture the change in ourselves over time.  

The Jungian model of personality types states that there are three dimensions to personality (Isabel Briggs Myers added a fourth later to create the MBTI®).

One dimension describes where we prefer to go for input data – sensors (S) use external reality as their reference – intuitors (N) use their internal rhetoric.

Another dimension is how we make decisions –  thinkers (T) prefer a conscious, logical, rational, sequential decision process while feelers (F) favour an unconscious, emotional, “irrational”, parallel approach.

The third dimension is where we direct the output of our decisions – extraverts (E) direct it outwards into the public outside world while introverts (I) direct it inwards to their private inner world.

Irrespective of our individual preferences, experience suggests that an effective learning sequence starts with our experience of reality (S) and depending how emotionally loaded it is (F) we may then internalise the message as a general intuitive concept (N) or a specific logical construct (T).

The implication of this is that to learn effectively and efficiently we need to be able to access all four modes of thinking and to do that we might design our teaching methods to resonate with this natural learning sequence, focussing on creating surprisingly positive reality-based emotional experiences first. And we must be mindful that if we skip steps or create too many emotionally negative experiences we may unintentionally impair the effectiveness of the learning process.

A carefully designed practical exercise that takes just a few minutes to complete can be a much more effective and efficient way to teach a profound principle than to read libraries of books or to listen to hours of rhetoric.  Indeed some of the most dramatic shifts in our understanding of the Universe have been facilitated by easily repeatable experiments.

Intuition and emotions can trick us – so Doing Our Way to New Thinking may be a better improvement strategy.

Reality trumps Rhetoric

One of the biggest challenges posed by Improvement is the requirement for beliefs to change – because static beliefs imply stagnated learning and arrested change.  We all display our beliefs for all to hear and see through our language – word and deed – our spoken language and our body language – and what we do not say and do not do is as important as what we do say and what we do do.  Let us call the whole language thing our Rhetoric – the external manifestation of our internal mental model.

Disappointingly, exercising our mental model does not seem to have much impact on Reality – at least not directly. We do not seem to be able to perform acts of telepathy or telekinesis. We are not like the Jedi knights in the Star Wars films who have learned to master the Force – for good or bad. We are not like the wizards in the Harry Potter stories who have mastered magical powers – again for good or bad. We are weak-minded muggles and Reality is spectacularly indifferent to our feeble powers. No matter what we might prefer to believe – Reality trumps Rhetoric.

Of course we can side step this uncomfortable feeling by resorting to the belief of One Truth which is often another way of saying My Opinion – and we then assume that if everyone else changed their belief to our belief then we would have full alignment, no conflict, and improvement would automatically flow.  What we actually achieve is a common Rhetoric about which Reality is still completely indifferent.  We know that if we disagree then one of us must be wrong or rather un-real-istic; but we forget that even if we agree then we can still both be wrong. Agreement is not a good test of the validity of our Rhetoric. The only test of validity is Reality itself – and facing the unfeeling Reality risks bruising our rather fragile egos – so we shy away from doing so.

So one way to facilitate improvement is to employ Reality as our final arbiter and to do this respectfully.  This is why teachers of improvement science must be masters of improvement science. They must be able to demonstrate their Improvement Science Rhetoric by using Reality and their apprentices need to see the IS Rhetoric applied to solving real problems. One way to do this is for the apprentices to do it themselves, for real, with the guidance of an IS master and in a safe context where they can make errors and not damage their egos. When this is done what happens is almost magical – the Rhetoric changes – the spoken language and the body language changes – what is said and what is done changes – and what is not said and not done changes too. And very often the change is not noticed, at least by those who change.  We only appear to have one mental model: only one view of Reality, so when it changes we change.

It is also interesting to observe that this evolution of Rhetoric does not happen immediately or in one blinding flash of complete insight. We take small steps rather than giant leaps. More often the initial emotional reaction is confusion because our experience of Reality clashes with the expectation of our Rhetoric.  And very often the changes happen when we are asleep – it is almost as if our minds work on dissolving the confusion when they are not distracted with the demands of awake-work; almost like we are re-organising our mental model structure when it is offline. It is very common to have a sleepless night after such a Reality Check and to wake with a feeling of greater clarity – our updated mental model declaring itself as our New Rhetoric. Experienced facilitators of Improvement Science understand this natural learning process and that it happens to everyone – including themselves. It is this feeling of increased clarity, deeper understanding, and released energy that is the buzz of Improvement Science – the addictive drug.  We learn that our memory plays tricks on us; and what was conflict yesterday becomes confusion today and clarity tomorrow. One behaviour that often emerges spontaneously is the desire to keep a journal – sometimes at the bedside – to capture the twists and turns of the story of our evolving Rhetoric.

This blog is just such a journal.

Design-for-Productivity

One tangible output of a process or system design exercise is a blueprint.

This is the set of Policies that define how the design is built and how it is operated so that it delivers the specified performance.

These are just like the blueprints for an architectural design, the latter being the tangible structure, the former being the intangible function.

A computer system has the same two interdependent components that must be co-designed at the same time: the hardware and the software.


The functional design of a system is manifest as the Seven Flows and one of these is Cash Flow, because if the cash does not flow to the right place at the right time in the right amount then the whole system can fail to meet its design requirement. That is one reason why we need accountants – to manage the money flow – so a critical component of the system design is the Budget Policy.

We employ accountants to police the Cash Flow Policies because that is what they are trained to do and that is what they are good at doing – they are the Guardians of the Cash.

Providing flow-capacity requires providing resource-capacity, which requires providing resource-time; and because resource-time-costs-money then the flow-capacity design is intimately linked to the budget design.
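To make this resource-time-costs-money link concrete, here is a minimal Python sketch of the arithmetic. The demand, task time and hourly cost figures are invented purely for illustration and are not taken from any example in this post.

# A minimal sketch of the flow-capacity -> resource-time -> budget chain.
# All of these figures are illustrative assumptions, not data from the text.

demand_per_week = 50            # jobs the step must complete each week
task_time_hours = 1.5           # resource-time consumed per job
cost_per_resource_hour = 40.0   # fully-loaded cost of one resource-hour (GBP)

# The flow-capacity needed is set by the demand, and the resource-time
# needed follows directly from it.
resource_hours_needed = demand_per_week * task_time_hours

# Because resource-time costs money, the budget is coupled to the flow design.
weekly_budget_needed = resource_hours_needed * cost_per_resource_hour

print(f"Resource-hours needed per week: {resource_hours_needed:.1f}")
print(f"Weekly budget implied by the flow design: £{weekly_budget_needed:,.2f}")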

This raises some important questions:
Q: Who designs the budget policy?
Q: Is the budget design done as part of the system design?
Q: Are our accountants trained in system design?

The challenge for all organisations is to find ways to improve productivity, to provide more for the same in a not-for-profit organisation, or to deliver a healthy return on investment in the for-profit arena (and remember our pensions are dependent on our future collective productivity).

To achieve the maximum cash flow (i.e. revenue) at the minimum cash cost (i.e. expense) then both the flow scheduling policy and the resource capacity policy must be co-designed to deliver the maximum productivity performance.


If we have a single-step process it is relatively easy to estimate both the costs and the budget to generate the required activity and revenue; but how do we scale this up to the more realistic situation when the flow of work crosses many departments – each of which does different work and has different skills, resources and budgets?

Q: Does it matter that these departments and budgets are managed independently?
Q: If we optimise the performance of each department separately will we get the optimum overall system performance?

Our intuition suggests that to maximise the productivity of the whole system we need to maximise the productivity of the parts.  Yes – that is clearly necessary – but is it sufficient?


To answer this question we will consider a process where the stream flows through several separate steps – separate in the sense that they have separate budgets – but not separate in that they are linked by the same flow.

The separate budgets are allocated from the total revenue generated by the outflow of the process. For the purposes of this exercise we will assume the goal is zero profit and we just need to calculate the price that needs to be charged to the “customer” for us to break even.

The internal reports produced for each of our departments for each time period are:
1. Activity – the amount of work completed in the period.
2. Expenses – the cost of the resources made available in the period – the budget.
3. Utilisation – the ratio of the time spent using resources to the total time the resources were available.

We know that the theoretical maximum utilisation of resources is 100% and this can only be achieved when there is zero-variation. This is impossible in the real world but we will assume it is achievable for the purpose of this example.

There are three questions we need answers to:
Q1: What is the lowest price we can achieve and meet the required demand?
Q2: Will optimising each step independently give us this lowest price?
Q3: How do we design our budgets to deliver maximum productivity?


To explore these questions let us play with a real example.

Let us assume we have a single stream of work that crosses six separate departments labelled A-F in that sequence. The department budgets have been allocated based on historical activity and utilisation and our required activity of 50 jobs per time period. We have already worked hard to remove all the errors, variation and “waste” within each department and we have achieved 100% observed utilisation of all our resources. We are very proud of our high effectiveness and our high efficiency.

Our current not-for-profit price is £202,000/50 = £4,040 and because our observed utilisation of resources at each step is 100% we conclude this is the most efficient design and that this is the lowest possible price.

Unfortunately our celebration is short-lived because the market for our product is growing bigger and more competitive and our market research department reports that to retain our market share we need to deliver 20% more activity at 80% of the current price!

A quick calculation shows that our productivity must increase by 50% (New Activity/New Price = 120%/80% = 150%) but as we already have a utilisation of 100% then this challenge looks hopelessly impossible.  To increase activity by 20% will require increasing flow-capacity by 20% which will imply a 20% increase in costs so a 20% increase in budget – just to maintain the current price.  If we no longer have customers who want to pay our current price then we are in trouble.
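For those who like to see the sums, here is the same pair of calculations written out as a small Python sketch using the figures already quoted above; it adds nothing new, it just makes the arithmetic explicit.

# The break-even price and the productivity challenge, using the figures above.

total_budget = 202_000.0   # sum of the six department budgets per period (GBP)
activity = 50              # jobs delivered per period

zero_profit_price = total_budget / activity
print(f"Current zero-profit price: £{zero_profit_price:,.0f}")   # £4,040

# Market requirement: 20% more activity at 80% of the current price.
required_productivity_increase = (1.20 / 0.80) - 1
print(f"Required productivity increase: {required_productivity_increase:.0%}")   # 50%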

Fortunately our conclusion is incorrect – and it is incorrect because we are not using the data available to co-design the system such that cash flow and work flow are aligned.  And we do not do that because we have not learned how to design-for-productivity.  We are not even aware that this is possible.  It is, and it is called Value Stream Accounting.

The blacked out boxes in the table above hide the data that we need to do this – and we do not know what they are. Yet.

But if we apply the theory, techniques and tools of system design, and we use the data that is already available then we get this result …

We can see that the total budget is less, the budget allocations are different, the activity is 20% up and the zero-profit price is 34% less – which is an 83% increase in productivity!

More than enough to stay in business.

Yet the observed resource utilisation is still 100%  and that is counter-intuitive and is a very surprising discovery for many. It is however the reality.

And it is important to be reminded that the work itself has not changed – the ONLY change here is the budget policy design – in other words the resource capacity available at each stage.  A zero-cost policy change.

The example answers our first two questions:
A1. We now have a price that meets our customers’ needs, offers worthwhile work, and we stay in business.
A2. We have disproved our assumption that 100% utilisation at each step implies maximum productivity.

Our third question “How to do it?” requires learning the tools, techniques and theory of System Engineering and Design.  It is not difficult but it is not intuitively obvious – if it were we would all be doing it.

Want to satisfy your curiosity?
Want to see how this was done?
Want to learn how to do it yourself?

You can do that here.



Harried to the Rescue!

We are social animals and we need social interaction with others of our kind – it is the way our caveman wetware works.

And we need it as much as we need air, water, food and sleep. Solitary confinement is an effective punishment – you don’t need to physically beat someone to psychologically hurt them – just actively excluding them or omitting to notice them is effective and has the advantage that it leaves no visible marks – and no trail of incriminating evidence.

This is the Dark Art of the Game Player and the act of social omission is called discounting – so once we know what to look for the signature of the Game Player is obvious – and we can choose to play along or not.

Some people have learned how to protect themselves from gamey behaviour – they have learned to maintain a healthy balance of confidence and humility. They ask for feedback, they know their strengths and their weaknesses, and they strive to maintain and develop their capability through teaching and learning. Sticks and stones may break their bones but names can never hurt them.

Other people have not learned how to spot the signs and to avoid being sucked into games – they react to the discounting by trying harder, working harder, taking on more and more – all to gain morsels of recognition. Their strategy works but it has an unfortunate consequence – it becomes an unconscious habit and they become players of the game called “Harried”.  The start is signalled by a big sigh as they are hooked into their preferred Rescuer role – always there to pick up the pieces – always offering to take on extra work – always on the lookout for an opportunity to take on more burden. “Good Ol’ Harried” they hear, “S/he works every hour God sends like a Trojan”. The unspoken ulterior motive of the instigator of the game is less admirable: “Delegate the job to Harried – or better still – just mess it up a bit, or do nothing – just wait – Harried will parachute in and save the day – and save me having to do it myself.” The conspirators in the game are adopting different roles – Victim and Persecutor – and it is in their interest to have Rescuers around who will willingly join the game. The Persecutors are not easy to see because their behaviour is passive – discounting is passive aggressive behaviour – they discount others’ need for a work-life balance. The Victims are easier to spot – they claim not to be able to solve their own problems by acting helpless and letting Harried take over. And the whole social construct is designed with one purpose – to create a rich opportunity for social interaction – because even though they are painful, games are better than solitary anonymity.

According to Eric Berne, founder of the school of Transactional Analysis, games are learned behaviour – and they spring from an injunction that we are all taught as children: that each of us is reliant on others for recognition – and those others are our parents. Sure, recognition from influential others is important BUT it is not our only source. We can give ourselves recognition. Each of us can learn to celebrate a job well done; a lesson learned; a challenge overcome – and through that route we can learn to recognise others genuinely, openly and without expectation of a return compliment. But to learn this we have to grasp the nettle and to unlearn our habit of playing the Persecutor-Rescuer-Victim games; and to do that we must first shine a light onto our blindspots.

Gamey behaviour is a potent yet invisible barrier to improvement. So if it is endemic in an organisation that wants to improve then it needs to be diagnosed and managed as an integral part of the improvement process. It is a critical human factor and in Improvement Science the human factors and the  process factors progress hand in hand.

Here is a paragraph from Games Nurses Play by Pamela Levin:

“Harried” is a game played when situations are complicated. The aim is to make the situation even more complicated so that a person feels justified in giving up. “Harried Midwife” is so named because I (P.L.) first observed the game on an obstetric floor, but it has its counterpart in other clinical settings. The game is aided by institutional needs, since it is a rare hospital unit that has the staff adequate in numbers these days. In the situation I observed, the harried nurse sent her only nurse’s aide to lunch when three deliveries were pending. Instead of using a methodical approach, she went running about checking a pulse here, a chart there, a dilatation here, and an I.V. there, so she never was caught up with the work. She lost her pen and couldn’t “chart” until she found it. She answered the telephone and lost the message. She was so busy setting up the delivery room, she forgot to notify the doctor of the impending delivery. The baby, which arrived in the labor room, was considered contaminated, and could not be discharged to the newborn nursery. After the chaos had died down, the nurse felt justified in doing almost no work for the rest of the day.

Click for the complete Games Nurses Play article here

Lub-Hub Lub-Hub Lub-Hub

If you put an ear to someone’s chest you can hear their heart “lub-dub lub-dub lub-dub”. The sound is caused by the valves in the heart closing, like softly slamming doors, as part of the wonderfully orchestrated process of pumping blood around the lungs and body. The heart is an impressive example of bioengineering but it was not designed – it evolved over time – its elegance and efficiency emerged over a long journey of emergent evolution.  The lub-dub is a comforting sound – it signals regularity, predictability, and stability; and was probably the first and most familiar sound each of us heard in the womb. Our hearts are sensitive to our emotional state – and it is no accident that the beat of music mirrors the beat of the heart: slow means relaxed and fast means aroused.

Systems and processes have a heart beat too – but it is not usually audible. It can be seen though if the measures of a process are plotted as time series charts. Only artificial systems show constant and unwavering behaviour – rigidity – natural systems have cycles.  The charts from natural systems show the “vital signs” of the system.  One chart tells us something of value – several charts considered together tell us much more.

We can measure and display the electrical activity of the heart over time – it is called an electrocardiogram (ECG) – literally “electric-heart-picture”; we can measure and display the movement of muscles, valves and blood by beaming ultrasound at the heart – an echocardiogram; we can visualise the pressure of the blood over time – a plethysmocardiogram; and we can visualise the sound the heart makes – a phonocardiogram. When we display the various cardiograms on the same time scale one above the other we get a much better understanding of how the heart is behaving as a system. And if we have learned what to expect to see in a normal heart we can look for deviations from healthy behaviour and use those to help us diagnose the cause.  With experience the task of diagnosis becomes a simple, effective and efficient pattern matching exercise.

The same is true of systems and processes – plotting the system metrics as time-series charts and searching for the tell-tale patterns of process disease can be a simple, quick and accurate technique: when you have learned what a “healthy” process looks like and which patterns are caused by which process “diseases”.  This skill is gained through Operations Management training and lots of practice with the guidance of an experienced practitioner. Without this investment in developing knowledge and understanding there is a high risk of making a wrong diagnosis and instituting an ineffective or even dangerous treatment.  Confidence is good – competence is even better.
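As a simple illustration of the stacked “vital signs” idea, the sketch below plots three made-up process metrics on a shared time axis, using Python and matplotlib purely for illustration; the metric names and numbers are invented and this is no substitute for the trained pattern-matching described above.

# A minimal sketch of viewing several process metrics on the same time axis,
# the process equivalent of stacking cardiograms. The metric names and the
# synthetic data are invented purely for this illustration.

import matplotlib.pyplot as plt

weeks = list(range(1, 27))
demand   = [50 + (w % 4) for w in weeks]          # weekly referrals (synthetic)
activity = [48 + (w % 5) for w in weeks]          # weekly completed jobs (synthetic)
waiting  = [sum(demand[:w]) - sum(activity[:w]) for w in weeks]  # jobs waiting

fig, axes = plt.subplots(3, 1, sharex=True, figsize=(8, 6))
for ax, series, label in zip(axes, [demand, activity, waiting],
                             ["Demand", "Activity", "Work in progress"]):
    ax.plot(weeks, series, marker="o")
    ax.set_ylabel(label)
axes[-1].set_xlabel("Week")
plt.tight_layout()
plt.show()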

The objective of process diagnostics is to identify where and when the LUBs and HUBs appear in the system: a LUB is a “low utilisation bottleneck” and a HUB is a “high utilisation bottleneck”.  Both restrict flow but they do it in different ways and therefore require different management. If we confuse a LUB for a HUB and choose the wrong treatment we can unintentionally make the process sicker – or even kill the system completely. The intention is OK but if we are not competent the implementation will not be OK.

Improvement Science rests on two foundation stones – Operations Management and Human Factors – and managers of any process or system need an understanding of both and to be able to apply their knowledge in practice with competence and confidence.  Just as a doctor needs to understand how the heart works and how to apply this knowledge in clinical practice. Both technical and emotional capability are needed – the Head and the Heart need each other.

Safety-By-Design

The picture is of Elisha Graves Otis demonstrating, in the mid 19th century, his safe elevator that automatically applies a brake if the lift cable breaks. It is a “simple” fail-safe mechanical design that effectively created the elevator industry and the opportunity of high-rise buildings.

“To err is human” and human factors research into how we err has revealed two parts – the Error of Intention (poor decision) and the Error of Execution (poor delivery) – often referred to as “mistakes” and “slips”.

Most of the time we act unconsciously using well practiced skills that work because most of our tasks are predictable; walking, driving a car etc.

The caveman wetware between our ears has evolved to delegate this uninteresting and predictable work to different parts of the sub-conscious brain and this design frees us to concentrate our conscious attention on other things.

So, if something happens that is unexpected we may not be aware of it and we may make a slip without noticing. This is one way that process variation can lead to low quality – and these are often the most insidious slips because they go unnoticed.

It is these unintended errors that we need to eliminate using safe process design.

There are two ways – by designing processes to reduce the opportunity for mistakes (i.e. improve our decision making); and then to avoid slips by designing the subsequent process to be predictable and therefore suitable for delegation.

Finally, we need to add a mechanism to automatically alert us of any slips and to protect us from their consequences by failing-safe.  The sign of good process design is that it becomes invisible – we are not aware of it because it works at the sub-conscious level.

As soon as we become aware of the design we have either made a slip – or the design is poor.
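Purely as an illustration of the failing-safe idea – not a prescription for any real clinical or engineering process – the small Python sketch below refuses to pass on a result it cannot trust. The function name and the limits are invented for this example.

# A minimal fail-safe sketch: the calculation checks its own result and stops
# rather than passing a possibly unsafe answer downstream. The function name
# and the limits are invented purely for this illustration.

def weight_based_dose_mg(weight_kg: float, mg_per_kg: float, max_dose_mg: float) -> float:
    """Return a calculated dose, or raise rather than silently exceed the limit."""
    if weight_kg <= 0 or mg_per_kg <= 0:
        raise ValueError("Inputs must be positive - refusing to guess.")
    dose = weight_kg * mg_per_kg
    if dose > max_dose_mg:
        raise ValueError(f"Calculated dose {dose:.0f} mg exceeds the {max_dose_mg:.0f} mg limit.")
    return dose

print(weight_based_dose_mg(70, 10, 1000))   # 700.0 - within the limit, so it is returned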


Suppose we walk up to a door and we are faced with a flat metal plate – this “says” to us that we need to “push” the door to open it – it is unambiguous design and we do not need to invoke consciousness to make a push-or-pull decision.  The technical term for this is an “affordance”.

In contrast a door handle is an ambiguous design – it may require a push or a pull – and we either need to look for other clues or conduct a suck-it-and-see experiment. Either way we need to switch our conscious attention to the task – which means we have to switch it away from something else. It is those conscious interruptions that cause us irritation and can spawn other, possibly much bigger, slips and mistakes.

Safe systems require safe processes – and safe processes mean fewer mistakes and fewer slips. We can reduce slips through good design and relentless improvement.

A simple and effective tool for this is The 4N Chart® – specifically the “niggle” quadrant.

Whenever we are interrupted by a poorly designed process we experience a niggle – and by recording what, where and when those niggles occur we can quickly focus our consciousness on the opportunity for improvement. One requirement to do this is the expectation and the discipline to record niggles – not necessarily to fix them immediately – but just to record them and to review them later.
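One minimal way to support that discipline is a simple niggle log. The Python sketch below just records what, where and when so the niggles can be reviewed later; it is an illustrative assumption about how such a log might be kept, not The 4N Chart® itself, and the file name and fields are made up.

# A minimal "niggle log" sketch: capture what, where and when, review later.
# The file name and fields are assumptions made for this illustration.

import csv
from datetime import datetime

LOG_FILE = "niggles.csv"

def record_niggle(what: str, where: str) -> None:
    """Append a niggle to the log without stopping to fix it there and then."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(timespec="minutes"), where, what])

record_niggle("Login screen needs three attempts", "Ward 7 desktop")
record_niggle("Referral form asks for the same data twice", "Outpatient clinic")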

In his book “Chasing the Rabbit” Steven Spear describes two examples of world class safety: the US Nuclear Submarine Programme and Alcoa, an aluminium producer.  Both are potentially dangerous activities and, in both examples, their world class safety record came from setting the expectation that all niggles are recorded and acted upon – using a simple, effective and efficient niggle-busting process.

In stark and worrying contrast, high-volume high-risk activities such as health care remain unsafe not because there is no incident reporting process – but because the design of the report-and-review process is both ineffective and inefficient and so is not used.

The risk of avoidable death in a modern hospital is quoted at around 1:300 – if our risk of dying in an elevator were that high we would take the stairs!  This worrying statistic is to be expected though – because if we lack the organisational capability to design a safe health care delivery process then we will lack the organisational capability to design a safe improvement process too.

Our skill gap is clear – we need to learn how to improve process safety-by-design.


Download Design for Patient Safety report written by the Design Council.

Other good examples are the WHO Safer Surgery Checklist, and the story behind this is told in Dr Atul Gawande’s Checklist Manifesto.

Passion-Process-Purpose

The wetware between our ears is both amazing and frustrating.

One of the amazing features is how we can condense a whole paradigm into a few words; and one of the frustrating features is how we condense a whole paradigm into a few words.  Take the three words – Passion, Process and Purpose – just three seven letter words beginning with P.  Together they capture the paradigm of Improvement Science – these are the three interdependent parts.

Passion provides the energy to change and the desire to do something. Purpose is the goal that is sought; the outcome that is desired. Process is the recipe, the plan, the map of the journey.  All three are necessary and only together they are sufficient.

The easier bit is Passion – we are all emotional beings – we are not rocks or clocks – we have some irrational components included in our design. Despite what we may think, most of our thinking is outside awareness, unconscious, and we are steered by feelings and signal with feelings. We are not aware of how we use emotions to filter data and to facilitate decisions and we are not aware how we broadcast our unconscious thinking in our body language.

The trickier bit is Process and Purpose – not because they are difficult concepts, but because we confuse the two.  There are two different questions that we can use to try to separate them: the How and the Why questions.  “How?” is the question that asks about the Process; “Why?” is the question that asks about the Purpose – and we very often give a How answer to a Why question. We seem to habitually dodge the Purpose question – and that is what makes it tricky.  Asking the question “What is my purpose for …?” is one that we find difficult to answer. It is difficult because our purpose is unconscious – it is a combination of many things combining in parallel – and such multi-part-interdependent-mental objects are systems; and systems are difficult to capture with a single concept and therefore difficult to bring to consciousness. We feel we have a purpose and we know when others share that purpose but we find it difficult to say what it is – so we say how it works instead.  And if we lose our feeling of purpose we become unhappy – we need Purpose.

This trickiness of  Process and Purpose is critical to the Science of Improvement because the design method starts with a Purpose – and then works backwards to define a Process; while improvement starts with a Passion and moves forward into deciding a Process. Our normal, intuitive mode of working is to use our irrationality to trigger a sequence of actions – we are instinctively reactive.

The contra-normal, counter-intuitive mode of working is to start with our purpose and use our rationality to assemble a sequence of actions.  We pause, consider, think and then act – with purpose.  This is why vision and mission are so important to collective improvement – the vision and mission provide a quick reminder of our collective purpose.  And that is why investing time in deeply exploring the Purpose question is such an important step – when you get to your purpose and you ask the right question there is a sort of mental “click” as the thinking and the feeling align – the two parts of our wetware working as one system.

Low-Tech-Toc

Beware the Magicians who wave High Technology Wands and promise Miraculous Improvements if you buy their Black Magic Boxes!

If a Magician is not willing to open the box and show you the inner workings then run away – quickly.  Their story may be true, the Miracle may indeed be possible, but if they cannot or will not explain HOW the magic trick is done then you will be caught in their spell and will become their slave forever.

Not all Magicians have honourable intentions – those who have been seduced by the Dark Side will ensnare you and will bleed you dry like greedy leeches!

In the early 1980’s a brilliant innovator called Eli Goldratt created a Black Box called OPT that was the tangible manifestation of his intellectual brainchild called ToC – Theory of Constraints. OPT was a piece of complex computer software that was intended to rescue manufacturing from their ignorance and to miraculously deliver dramatic increases in profit. It didn’t.

Eli Goldratt was a physicist and his Black Box was built on strong foundations of Process Physics – it was not Snake Oil – it did work.  The problem was that it did not sell: Not enough people believed his claims and those who did discovered that the Black Box was not as easy to use as the Magician suggested.  So Eli Goldratt wrote a book called The Goal in which he explained, in parable form, the Principles of ToC and the theoretical foundations on which his Black Box was built.  The book was a big success but his Black Box still did not sell; just an explanation of how his Black Box worked was enough for people to apply the Principles of ToC and to get dramatic results. So, Eli abandoned his plan of making a fortune selling Black Boxes and set up the Goldratt Institute to disseminate the Principles of ToC – which he did with considerably more success. Eli Goldratt died in June 2011 after a short battle with cancer and the World has lost a great innovator and a founding father of Improvement Science. His legacy lives on in the books he wrote that chart his personal journey of discovery.

The Principles of ToC are central both to process improvement and to process design.  As Eli unintentionally demonstrated, it is more effective and much quicker to learn the Principles of ToC pragmatically and with low technology – such as a book – than with a complex, expensive, high technology Black Box.  As many people have discovered – adding complex technology to a complex problem does not create a simple solution! Many processes are relatively uncomplicated and do not require high technology solutions. An example is the challenge of designing a high productivity schedule when there is variation in both the content and the volume of the work.

If our required goal is to improve productivity (or profit) then we want to improve the throughput and/or to reduce the resources required. That is relatively easy when there is no variation in content and no variation in volume – such as when we are making just one product at a constant rate – like a Model-T Ford in Black! Add some content and volume variation and the challenge becomes a lot trickier! From the 1950’s the move from mass production to mass customisation in the automobile industry created this new challenge and spawned a series of  innovative approaches such as the Toyota Production System (Lean), Six Sigma and Theory of Constraints.  TPS focussed on small batches, fast changeovers and low technology (kanbans or cards) to keep inventory low and flow high; Six Sigma focussed on scientifically identifying and eliminating all sources of variation so that work flows smoothly and in “statistical control”; ToC focussed on identifying the “constraint steps” in the system and then on scheduling tasks so that the constraints never run out of work.

When applied to a complex system of interlinked and interdependent processes the ToC method requires a complicated Black Box to do the scheduling because we cannot do it in our heads. However, when applied to a simpler system or to a part of a complex system it can be done using a low technology method called “paper and pen”. The technique is called Template Scheduling and there is a real example in the “Three Wins” book where the template schedule design was tested using a computer simulation to measure the resilience of the design to natural variation – and the computer was not used to do the actual scheduling. There was no Black Box doing the scheduling. The outcome of the design was a piece of paper that defined the designed-and-tested template schedule: and the design testing predicted a 40% increase in throughput using the same resources. This dramatic jump in productivity might be regarded as “miraculous” or even “impossible” but only by someone who was not aware of the template scheduling method. The reality is that the designed schedule worked just as predicted – there was no miracle, no magic, no Magician and no Black Box.

What Is The Cost Of Reality?

It is often assumed that “high quality costs more” and there is certainly ample evidence to support this assertion: dinner in a high quality restaurant commands a high price. The usual justifications for the assumption are (a) quality ingredients and quality skills cost more to provide; and (b) if people want a high quality product or service that is in relatively short supply then it commands a higher price – the Law of Supply and Demand.  Together this creates a self-regulating system – it costs more to produce and so long as enough customers are prepared to pay the higher price the system works.  So what is the problem? The problem is that the model is incorrect. The assumption is incorrect.  Higher quality does not always cost more – it usually costs less. Convinced?  No. Of course not. To be convinced we need hard, rational evidence that disproves our assumption. OK. Here is the evidence.

Suppose we have a simple process that has been designed to deliver the Perfect Service – 100% quality, on time, first time and every time – 100% dependable and 100% predictable. We choose a Service for our example because the product is intangible and we cannot store it in a warehouse – so it must be produced as it is consumed.

To measure the Cost of Quality we first need to work out the minimum price we would need to charge to stay in business – the sum of all our costs divided by the number we produce: our Minimum Viable Price. When we examine our Perfect Service we find that it has three parts – Part 1 is the administrative work: receiving customers; scheduling the work; arranging for the necessary resources to be available; collecting the payment; having meetings; writing reports and so on. The list of expenses seems endless. It is the necessary work of management – but it is not what adds value for the customer. Part 3 is the work that actually adds the value – it is the part the customer wants – the Service that they are prepared to pay for. So what is Part 2 work? This is where our customers wait for their value – the queue. Each of the three parts will consume resources either directly or indirectly – each has a cost – and we want Part 3 to represent most of the cost; Part 2 the least and Part 1 somewhere in between. That feels realistic and reasonable. And in our Perfect Service there is no delay between the arrival of a customer and starting the value work; so there is  no queue; so no work in progress waiting to start, so the cost of Part 2 is zero.  

The second step is to work out the cost of our Perfect Service – and we could use algebra and equations to do that but we won’t because the language of abstract mathematics excludes too many people from the conversation – let us just pick some realistic numbers to play with and see what we discover. Let us assume Part 1 requires a total of 30 mins of work that uses resources which cost £12 per hour; and let us assume Part 3 requires 30 mins of work that uses resources which cost £60 per hour; and let us assume Part 2 uses resources that cost £6 per hour (if we were to need them). We can now work out the Minimum Viable Price for our Perfect Service:

Part 1 work: 30 mins @ £12 per hour = £6
Part 2 work:  = £0
Part 3 work: 30 mins at £60 per hour = £30
Total: £36 per customer.

Our Perfect Service has been designed to deliver at the rate of demand which is one job every 30 mins and this means that the Part 1 and Part 3 resources are working continuously at 100% utilisation. There is no waste, no waiting, and no wobble. This is our Perfect Service and £36 per job is our Minimum Viable Price.         

The third step is to tarnish our Perfect Service to make it more realistic – and then to do whatever is necessary to counter the necessary imperfections so that we still produce 100% quality. To the outside world the quality of the service has not changed but it is no longer perfect – they need to wait a bit longer, and they may need to pay a bit more. Quality costs remember!  The question is – how much longer and how much more? If we can work that out and compare it with our Minimum Viable Price we will get a measure of the Cost of Reality.

We know that variation is always present in real systems – so let the first Dose of Reality be the variation in the time it takes to do the value work. What effect does this have?  This apparently simple question is surprisingly difficult to answer in our heads – and we have chosen not to use “scarymatics” so let us run an empirical experiment and see what happens. We could do that with the real system, or we could do it on a model of the system.  As our Perfect Service is so simple we can use a model. There are lots of ways to do this simulation and the technique used in this example is called discrete event simulation (DES)  and I used a process simulation tool called CPS (www.SAASoft.com).

Let us see what happens when we add some random variation to the time it takes to do the Part 3 value work – the flow will not change, the average time will not change, we will just add some random noise – but not too much – something realistic like 10% say.

The chart shows the time from start to finish for each customer and to see the impact of adding the variation the first 48 customers are served by our Perfect Service and then we switch to the Realistic Service. See what happens – the time in the process increases then sort of stabilises. This means we must have created a queue (i.e. Part 2 work) and that will require space to store and capacity to clear. When we get the costs in and work out our new minimum viable price it comes out, in this case, at £43.42 per task. That is an increase of over 20% and it gives us a measure of the Cost of the Variation. If we repeat the exercise many times we get a similar answer, not the same every time because the variation is random, but it is always an extra cost. It is never less than the perfect price and it does not average out to zero. This may sound counter-intuitive until we understand the reason: when we add variation we need a bit of a queue to ensure there is always work for Part 3 to do; and that queue will form spontaneously when customers take longer than average. If there is no queue and a customer requires less than average time then the Part 3 resource will be idle for some of the time. That idle time cannot be stored and used later: time is not money.  So what happens is that a queue forms spontaneously, so long as there is space for it, and it ensures there is always just enough work waiting to be done. It is a self-regulating system – the queue is called a buffer.
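For the curious, here is a much-simplified re-creation of that experiment as a generic single-step simulation in Python. It is not the CPS tool used for the charts above, and because the variation is random and the cost model is simplified its numbers will not reproduce the £43.42 figure exactly; it only illustrates how the queue, and therefore the price, grows once variation is added.

# A simplified, generic single-step simulation of the experiment described
# above (not the CPS tool). One customer arrives every 30 minutes; the value
# work takes 30 minutes on average with +/- 10% random variation.

import random

random.seed(1)

N = 1000                  # customers to simulate
ARRIVAL_GAP = 30.0        # minutes between arrivals
SERVICE_MEAN = 30.0       # Part 3 value work, minutes
VARIATION = 0.10          # +/- 10% random variation

PART1_COST = 6.0          # GBP per customer (30 min @ £12 per hour)
PART3_COST = 30.0         # GBP per customer (30 min @ £60 per hour)
WAIT_COST_PER_HOUR = 6.0  # GBP per hour of Part 2 (queue) resource

server_free_at = 0.0
total_wait = 0.0

for i in range(N):
    arrival = i * ARRIVAL_GAP
    start = max(arrival, server_free_at)              # wait if the resource is busy
    service = random.uniform(SERVICE_MEAN * (1 - VARIATION),
                             SERVICE_MEAN * (1 + VARIATION))
    server_free_at = start + service
    total_wait += start - arrival                     # time spent as Part 2 work

mean_wait_hours = (total_wait / N) / 60.0
price = PART1_COST + PART3_COST + mean_wait_hours * WAIT_COST_PER_HOUR
print(f"Average wait: {total_wait / N:.1f} minutes")
print(f"Indicative minimum viable price: £{price:.2f} (perfect service was £36.00)")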

Let us see what happens when we take our Perfect Process and add a different form of variation – random errors. To prevent the error leaving the system and affecting our output quality we will repeat the work. If the errors are random and rare then the chance of getting it wrong twice for the same customer will be small so the rework will be a rough measure of the internal process quality. For a fair comparison let us use the same degree of variation as before – 10% of the Part 3 tasks have an error and need to be reworked – which in our example means the work going to the back of the queue.

Again, to see the effect of the change, the first 48 tasks are from the Perfect System and after that we introduce a 10% chance of a task failing the quality standard and needing to be reworked: in this example 5 tasks failed, which is the expected rate. The effect on the start to finish time is very different from before – the times for the reworked tasks are clearly longer as we would expect, but the times for the other tasks get longer too. It implies that a Part 2 queue is building up and after each error we can see that the queue grows – and after a delay.  This is counter-intuitive. Why is this happening? It is because in our Perfect Service we had 100% utilisation – there was just enough capacity to do the work when it was done right-first-time, so if we make errors and we create extra demand and extra load, it will exceed our capacity; we have created a bottleneck and the queue will form and it will continue to grow as long as errors are made.  This queue needs space to store and capacity to clear. How much though? Well, in this example, when we add up all these extra costs we get a new minimum price of £62.81 – that is a massive 74% increase!  Wow! It looks like errors create a much bigger problem for us than variation. There is another important learning point – random cycle-time variation is self-regulating and inherently stable; random errors are not self-regulating and they create inherently unstable processes.
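A small variant of the same sketch shows why the error case behaves so differently: at 100% utilisation even a 10% rework rate adds load that exceeds capacity, so the queue grows without limit. For simplicity the failed work is redone immediately rather than rejoining the back of the queue, which changes the detail but not the instability.

# Variant of the sketch above: no cycle-time variation, but a 10% chance that
# a job fails and must be redone. For simplicity the rework is done straight
# away rather than rejoining the back of the queue - the unbounded queue
# growth is the same either way.

import random

random.seed(1)

N = 1000
ARRIVAL_GAP = 30.0
SERVICE_TIME = 30.0
ERROR_RATE = 0.10

server_free_at = 0.0
total_wait = 0.0

for i in range(N):
    arrival = i * ARRIVAL_GAP
    attempts = 1
    while random.random() < ERROR_RATE:   # each failed attempt has to be repeated
        attempts += 1
    start = max(arrival, server_free_at)
    server_free_at = start + attempts * SERVICE_TIME
    total_wait += start - arrival

print(f"Average wait over {N} jobs: {total_wait / N:.0f} minutes - and still rising")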

Our empirical experiment has demonstrated three principles of process design for minimising the Cost of Reality:

1. Eliminate sources of errors by designing error-proofed right-first-time processes that prevent errors happening.
2. Ensure there is enough spare capacity at every stage to allow recovery from the inevitable random errors.
3. Ensure that all the steps can flow uninterrupted by allowing enough buffer space for the critical steps.

With these Three Principles of cost-effective design in mind we can now predict what will happen if we combine a not-for-profit process, with a rising demand, with a rising expectation, with a falling budget, and with an inspect-and-rework process design: we predict everyone will be unhappy. We will all be miserable because the only way to stay in budget is to cut the lower priority value work and reinvest the savings in the rising cost of checking and rework for the higher priority jobs. But we have a problem – our activity will fall, so our revenue will fall, and despite the cost cutting the budget still doesn’t balance because of the increasing cost of inspection and rework – and we enter the death spiral of financial decline.

The only way to avoid this fatal financial tailspin is to replace the inspection-and-rework habit with a right-first-time design; before it is too late. And to do that we need to learn how to design and deliver right-first-time processes.

Charts created using BaseLine

The Crime of Metric Abuse

We live in a world that is increasingly intolerant of errors – we want everything to be right all the time – and if it is not then someone must have erred with deliberate intent so they need to be named, blamed and shamed! We set safety standards and tough targets; we measure and check; and we expose and correct anyone who is non-conformant. We accept that is the price we must pay for a Perfect World … Yes? Unfortunately the answer is No. We are deluded. We are all habitual criminals. We are all guilty of committing a crime against humanity – the Crime of Metric Abuse. And we are blissfully ignorant of it so it comes as a big shock when we learn the reality of our unconscious complicity.

You might want to sit down for the next bit.

First we need to set the scene:
1. Sustained improvement requires actions that result in irreversible and beneficial changes to the structure and function of the system.
2. These actions require making wise decisions – effective decisions.
3. These actions require using resources well – efficient processes.
4. Making wise decisions requires that we use our system metrics correctly.
5. Understanding what correct use is means recognising incorrect use – abuse awareness.

When we commit the Crime of Metric Abuse, even unconsciously, we make poor decisions. If we act on those decisions we get an outcome that we do not intend and do not want – we make an error.  Unfortunately, more efficiency does not compensate for less effectiveness – in fact it makes it worse. Efficiency amplifies Effectiveness – “Doing the wrong thing right makes it wronger not righter” as Russell Ackoff succinctly puts it.  Paradoxically our inefficient and bureaucratic systems may be our only defence against our ineffective and potentially dangerous decision making – so before we strip out the bureaucracy and strive for efficiency we had better be sure we are making effective decisions and that means exposing and treating our nasty habit of Metric Abuse.

Metric Abuse manifests in many forms – and there are two that when combined create a particularly virulent addiction – Abuse of Ratios and Abuse of Targets. First let us talk about the Abuse of Ratios.

A ratio is one number divided by another – which sounds innocent enough – and ratios are very useful, so what is the danger? The danger is that by combining two numbers to create one we throw away some information. This is not a good idea when making the best possible decision means squeezing every last drop of understanding out of our information. To unconsciously throw away useful information amounts to incompetence; to consciously throw away useful information is negligence because we could and should know better.

Here is a time-series chart of a process metric presented as a ratio. This is productivity – the ratio of an output to an input – and it shows that our productivity is stable over time.  We started OK and we finished OK and we congratulate ourselves for our good management – yes? Well, maybe and maybe not.  Suppose we are measuring the Quality of the output and the Cost of the input; then calculating our Value-For-Money productivity from the ratio; and then only share this derived metric. What if quality and cost are changing over time in the same direction and by the same rate? The productivity ratio will not change.

 

Suppose the raw data we used to calculate our ratio was as shown in the two charts of measured Output Quality and measured Input Cost – we can see immediately that, although our ratio is telling us everything is stable, our system is actually changing over time – it is unstable and therefore it is unpredictable. Systems that are unstable have a nasty habit of finding barriers to further change and when they do they have a habit of crashing, suddenly, unpredictably and spectacularly. If you take your eyes off the white line when driving and drift off course you may suddenly discover a barrier – the crash barrier for example, or worse still an on-coming vehicle! The apparent stability indicated by a ratio is an illusion or rather a delusion. We delude ourselves that we are OK – in reality we may be on a collision course with catastrophe.
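A tiny numerical illustration of the same point, with made-up numbers: if quality and cost both rise by the same proportion each period, the value-for-money ratio never moves.

# Made-up numbers showing how a ratio can hide a trend: quality and cost both
# rise by 5% per period, so the value-for-money ratio stays flat throughout.

periods = range(12)
quality = [50.0 * 1.05 ** t for t in periods]   # measured Output Quality, rising
cost    = [25.0 * 1.05 ** t for t in periods]   # measured Input Cost, also rising

for t in periods:
    ratio = quality[t] / cost[t]
    print(f"period {t:2d}: quality = {quality[t]:6.1f}  cost = {cost[t]:6.1f}  ratio = {ratio:.2f}")

# The ratio column reads 2.00 in every period while both raw series climb steadily.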

But increasing quality is what we want surely? Yes – it is what we want – but at what cost? If we use the strategy of quality-by-inspection and add extra checking to detect errors and extra capacity to fix the errors we find then we will incur higher costs. This is the story that these Quality and Cost charts are showing.  To stay in business the extra cost must be passed on to our customers in the price we charge: and we have all been brainwashed from birth to expect to pay more for better quality. But what happens when the rising price hits our customers’ financial constraint?  We are no longer able to afford the better quality so we settle for the lower quality but affordable alternative.  What happens then to the company that has invested in quality by inspection? It loses customers which means it loses revenue which is bad for its financial health – and to survive it starts cutting prices, cutting corners, cutting costs, cutting staff and eventually – cutting its own throat! The delusional productivity ratio has hidden the real problem until a sudden and unpredictable drop in revenue and profit provides a reality check – by which time it is too late. Of course if all our competitors are committing the same crime of metric abuse and suffering from the same delusion we may survive a bit longer in the toxic mediocrity swamp – but if a new competitor appears who is not deluded by ratios and who has learned how to provide consistently higher quality at a consistently lower price – then we are in big trouble: our customers leave and our end is swift and without mercy. Competition cannot bring controlled improvement while the Abuse of Ratios remains rife and unchallenged.

Now let us talk about the second Metric Abuse, the Abuse of Targets.

The blue line on the Productivity chart is the Target Productivity. As leaders and managers we have been brainwashed with the mantra that “you get what you measure” and with this belief we commit the crime of Target Abuse when we set an arbitrary target and use it to decide when to reward and when to punish. We compound our second crime when we connect our arbitrary target to our accounting clock and post periodic praise when we are above target and periodic pain when we are below. We magnify the crime if we have a quality-by-inspection strategy because we create an internal quality-cost tradeoff that generates conflict between our governance goal and our finance goal: the result is a festering and acrimonious stalemate. Our quality-by-inspection strategy paradoxically prevents improvement in productivity and we learn to accept the inevitable oscillation between good and bad and eventually may even convince ourselves that this is the best and the only way.  With this life-limiting-belief deeply embedded in our collective unconsciousness, the more enthusiastically this quality-by-inspection design is enforced the more fear, frustration and failures it generates – until trust is eroded to the point that when the system hits a problem – morale collapses, errors increase, checks are overwhelmed, rework capacity is swamped, quality slumps and costs escalate. Productivity nose-dives and both customers and staff jump into the lifeboats to avoid going down with the ship!

The use of delusional ratios and arbitrary targets (DRATs) is a dangerous and addictive behaviour and should be made a criminal offence punishable by law, because it is both destructive and unnecessary.

With painful awareness of the problem a path to a solution starts to form:

1. Share the numerator, the denominator and the ratio data as time series charts (a minimal sketch follows this list).
2. Only put requirement specifications on the numerator and denominator charts.
3. Outlaw quality-by-inspection and replace with quality-by-design-and-improvement.  
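
To illustrate the first point, here is a minimal sketch with invented data: when the numerator and denominator are shown as separate time series the drift is obvious, even though the ratio itself stays reassuringly flat.

```python
# Point 1 as a sketch: plot the numerator, the denominator and the ratio as
# separate time series. The data are invented; both the numerator (output
# quality) and the denominator (input cost) drift upward in step, so the
# ratio alone looks "stable" while the system underneath is changing.
import matplotlib.pyplot as plt

weeks = list(range(1, 21))
quality = [80 + 1.5 * w for w in weeks]            # numerator: rising
cost    = [100 + 1.875 * w for w in weeks]         # denominator: rising in step
ratio   = [q / c for q, c in zip(quality, cost)]   # the "productivity" ratio: flat

fig, axes = plt.subplots(3, 1, sharex=True, figsize=(6, 8))
for ax, series, label in zip(axes, (quality, cost, ratio),
                             ("Output quality", "Input cost", "Ratio")):
    ax.plot(weeks, series, marker="o")
    ax.set_ylabel(label)
axes[-1].set_xlabel("Week")
plt.tight_layout()
plt.show()
```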

Metric Abuse is a Crime. DRATs are a dangerous addiction. DRATs kill Motivation. DRATs Kill Organisations.

Charts created using BaseLine

The One-Eyed Man in the Land of the Blind.

“There are known knowns; there are things we know we know.
We also know there are known unknowns; that is to say we know there are some things we do not know.
But there are also unknown unknowns – the ones we don’t know we don’t know.” Donald Rumsfeld 2002

This infamous quotation is a humorously clumsy way of expressing a profound concept. This statement is about our collective ignorance – and it hides a beguiling assumption which is that we are all so similar that we just have to accept the things that we all do not know. It is OK to be collectively and blissfully ignorant. But is this OK? Is this not the self-justifying mantra of those who live in the Land of the Blind?

Our collective blissful ignorance holds the promise of great unknown gains; and harbours the potential of great untold pain.

Our collective knowledge is vast and is growing because we have dissolved many Unknowns.  For each there must have been a point in time when the first person became painfully aware of their ignorance and, by some means, discovered some new knowledge. When that happened they had a number of options – to keep it to themselves, to share it with those they knew, or to share it with strangers. The innovator's dilemma is that when they share new knowledge they know they will cause emotional pain; because to share knowledge with the blissfully ignorant implies pushing them into the state of painful awareness.

We are social animals and we demonstrate empathy and respect for others, so we do not want to deliberately cause them emotional pain – even the short term pain of awareness that must precede the long term gain of knowledge, understanding and wisdom. It is the constant challenge that every parent, every teacher, every coach, every mentor, every leader and every healer has to learn to master.

So, how do we deal with the situation when we are painfully aware that others are in the state of blissful ignorance – of not knowing what they do not know – and we know that making them aware will be emotionally painful for them – just as it was for us? We know from experience that an insensitive, clumsy, blunt, brutal, just-tell-it-as-it-is approach can cause pain-but-no-gain; we have all had experience of others who seem to gain a perverse pleasure from the emotional impact they generate by triggering painful awareness. The disrespectful “ends-justify-the-means” and “cruel-to-be-kind” mindset is the mantra of those who do not walk their own talk – those who do not challenge their own blissful ignorance – those who do not seek to gain an understanding of how to foster effective learning without inflicting emotional pain.

The no-pain-no-gain life limiting belief is an excuse – not a barrier. It is possible to learn without pain – we have all been doing it for our whole lives; each of us can think of people who inspired us to learn and to have fun doing so – rare and memorable role models, bright stars in the darkness of disappointment. Our challenge is to learn how to inspire ourselves.

The first step is to create an emotionally Safe Environment for Learning and Fun (SELF). For the leader/teacher/healer this requires developing an ability to build a culture of trust by actively unlearning their own trust-corroding-behaviours.  

The second step is to know what we know – to be sure of our facts and confident that we can explain and support what we know with evidence and insight. To deliberately push someone into painful awareness with no means to guide them out of that dark place is disrespectful and untrustworthy behaviour. Learning how to teach what we know is the most effective means to discover our own depth of understanding and it is an energising exercise in humility development! 

The third step is for us to have the courage to raise awareness in a sensitive and respectful way – sometimes this is done by demonstrating the knowledge; sometimes this is done by asking carefully framed questions; and sometimes it is done as a respectful challenge.  The three approaches are not mutually exclusive: leading-by-example is effective but leaders need to be teachers and healers too.  

At all stages the challenge for the leader/teacher/healer is to ensure they maintain an OK-OK mental model of those they influence. This is the most difficult skill to attain and the most important. The “Leadership and Self-Deception” book that is in the Library of Improvement Science is a parable that describes this challenge.

So, how do we dissolve the One-Eyed Man in the Land of the Blind problem? How do we raise awareness of a collective blissful ignorance? How do we share something that is going to cause untold pain and misery in the future – a storm that is building over the horizon of awareness?

Ignaz Semmelweis (1818-1865) was the young Hungarian doctor who in 1847 discovered the dramatic life-saving benefit of the doctors cleaning their hands before entering the obstetric ward of the Vienna Hospital. This was before “germs” had been discovered and Semmelweis could not explain how his discovery worked – all he could do was to exhort others to do as he did. He did not learn how the method worked, he did not publish his data, and he demonstrated trust-eroding behaviour when he accused others of “murder” when they did not do as he told them.  The fact that he was correct did not justify the means by which he challenged their collective blissful ignorance (see http://www.valuesystemdesign.com for a fuller account).  The book that he eventually published in 1861 includes the data that supports our modern understanding of the importance of hand hygiene – but it also includes a passionate diatribe about how he had been wronged by others – a dramatic example of the “I’m OK and The Rest of the World is Not OK” worldview. Semmelweis was committed to a lunatic asylum and died there in 1865.

W Edwards Deming (1900-1993) was the American engineer, mathematician, mathematical physicist, statistician and student of Walter A. Shewhart who learned the importance of quality in design. After WWII he was part of the team that helped to rebuild the Japanese economy, and he taught the Japanese what he had learned and practised during the war – how to create a high-quality, high-speed, high-efficiency process, which, ironically, had been building ships for the war effort. Later Deming attempted, and failed, to influence the post-war generation of managers that was being churned out by the new business schools to serve the growing global demand for American mass-produced consumer goods. Deming remained in relative obscurity in the USA until 1980, when his teachings were rediscovered as Japan started to challenge the USA economically by producing higher-quality-and-lower-cost consumer products such as cars and electronics (http://en.wikipedia.org/wiki/W._Edwards_Deming). Before he died in 1993 Deming wrote two books – Out of The Crisis and The New Economics – in which he outlines his learning and his philosophy, and in which he unreservedly and passionately blames the managers and the business schools that trained them for their arrogant attitude and disrespectful behaviour. Like Semmelweis, the fact that his books contain a deep well of wisdom does not justify the means by which he disseminated his criticism of people – in particular of senior management. By doing so he probably created resistance and delayed the spread of knowledge.

History is repeating itself: the same story is being played out in the global healthcare system. Neither senior doctors nor senior managers are aware of the opportunity that the learning of Semmelweis and Deming represents – the opportunity of Improvement Science and of the theory, techniques and tools of Operations Management. The global healthcare system is in a state of collective blissful ignorance.  Our descendants will be the recipients of our decisions and the judges of our behaviour – and time is running out – we do not have the luxury of learning by making the same mistake.

Fortunately, there is a growing group of people who are painfully aware of the problem and are voicing their concerns – such as the Institute for Healthcare Improvement in America. There is a smaller and less well organised network of people who have acquired and applied some of the knowledge and are able to demonstrate how it works – the Know Hows. There appears to be an even smaller group who understand and use the principles but do it intuitively and unconsciously – they demonstrate what is possible but find it difficult to teach others how to do what they do. It is the Know How group that is the key to dissolving the problem.

The first collective challenge is to sign-post some safe paths from Collective Blissful Ignorance to Individual Know How. The second collective challenge is to learn an effective and respectful way to raise awareness of the problem – a way to outline the current reality and the future opportunity – and a way that illuminates the paths that link the two.

In the land of the blind the one-eyed man is the person who discovers that everyone is wearing a head-torch by accidentally finding his own and switching it on!


Where is the Rotten Egg?

Have you ever had the experience of arriving home from a holiday – opening the front door and being hit with the rancid smell of something that has gone rotten while you were away?

Phwooorrrarghhh!

When that happens we open the windows to let the fresh-air blow the smelly pong out and we go in search of the offending source of the horrible whiff. Somewhere we know we will find the “rotten egg” and we know we need to remove it because it is now beyond repair.

What happened here is that our usual, regular habit of keeping our house clean was interrupted and that allowed time for something to go rotten and to create a nasty stink. It may also have caused other things to go rotten too – decay spreads. Usually we maintain an olfactory vigilance to pick up the first whiff of a problem and we act before the rot sets in – but this only works if we know what fresh air smells like, if we remove the peg from our nose, and if we have the courage to remove all of the rot. Perfuming the pig is not an effective long term strategy.

The rotten egg metaphor applies to organisations. The smell we are on the alert for is the rancid odour of a sour relationship, the signal we sense is the dissonance of misery, and the behaviours we look for are those that erode trust. These behaviours have a name – they are called discounts – and they come in two types.

Type 1 discounts are our deliberate actions that lead to erosion of trust – actions like interrupting, gossiping, blaming, manipulation, disrespect, intimidation, and bullying.

Type 2 discounts are the actions that we deliberately omit to do that also cause erosion of trust – like not asking for and not offering feedback, like not sharing data, information and knowledge, like not asking for help, like not saying thank you, like not challenging assumptions, like not speaking out when we feel things are not right, like not getting the elephant in the room out into the open. These two types of discounts are endemic in all organisations and the Type 2 discounts are the more difficult to see because it was what we didn’t do that led to the rot. We must all maintain constant vigilance to sniff out the first whiff of misery and to act immediately and effectively to sustain a pong-free organisational atmosphere.

Anyone for more Boiled Frog?

There is a famous metaphor for the dangers of denial and complacency called the boiled frog syndrome.

Apparently if you drop a frog into hot water it will notice and jump out  but if you put a frog in water at a comfortable temperature and then slowly heat it up it will not jump out – it does not notice the slowly rising temperature until it is too late – and it boils.

The metaphor is used to highlight the dangers of not being aware enough of our surroundings to notice when things are getting “hot” – which means we do not act in time to prevent a catastrophe.

There is another side to the boiled frog syndrome – and this is when improvements are made incrementally by someone else and we do not notice those either. This is the same error of complacency, and because there is no positive feedback the improvement investment fizzles out – without us noticing that either.  This is a disadvantage of incremental improvement – we only notice the effect if we deliberately measure at intervals and compare present with past. Not many of us appear to have the foresight or fortitude to do that. We are the engineers of our own mediocrity.

There is an alternative though – it is called improvement-by-design. The difference from improvement-by-increments is that with design you deliberately plan to make a big beneficial change happen quickly – and you can do this by testing the design before implementing it so that you know it is feasible.  When the change is made the big beneficial difference is noticed – WOW! – and everyone notices: supporters and cynics alike.  Their responses are different though – the advocates are jubilant and the cynics are shocked. The cynics' worldview is suddenly challenged – and the feeling is one of positive confusion. They say “Wow! That’s a miracle – how did you do that?”.

So when we understand enough to design a change then we should use improvement-by-design; and when we don’t understand enough we have no choice but to use improvement-by-discovery.

The Seven Flows

Improvement Science is the knowledge and experience required to improve … but to improve what?

Improve safety, delivery, quality, and productivity?

Yes – ultimately – but they are the outputs. What has to be improved to achieve these improved outputs? That is a much more interesting question.

The simple answer is “flow”. But flow of what? That is an even better question!

Let us consider a real example. Suppose we want to improve the safety, quality, delivery and productivity of our healthcare system – which we do – what “flows” do we need to consider?

The flow of patients is the obvious one – the observable, tangible flow of people with health issues who arrive and leave healthcare facilities such as GP practices, outpatient departments, wards, theatres, accident units, nursing homes, chemists, etc.

What other flows?

Healthcare is a service with an intangible product that is produced and consumed at the same time – and for those reasons it is very different from manufacturing. The interaction between the patients and the carers is where the value is added and this implies that “flow of carers” is critical too. Carers are people – no one has yet invented a machine that cares.

As soon as we have two flows that interact we have a new consideration – how do we ensure that they are coordinated so that they are able to interact at the same place, at the same time, in the right way and in the right amount?

The flows are linked – they are interdependent – we have a system of flows and we cannot just focus on one flow or ignore the inter-dependencies. OK, so far so good. What other flows do we need to consider?

Healthcare is a problem-solving process and it is reliant on data – so the flow of data is essential – some of this is clinical data and related to the practice of care, and some of it is operational data and related to the process of care. Data flow supports the patient and carer flows.

What else?

Solving problems has two stages – making decisions and taking actions – in healthcare the decision is called diagnosis and the action is called treatment. Both may involve the use of materials (e.g. consumables, paper, sheets, drugs, dressings, food, etc) and equipment (e.g. beds, CT scanners, instruments, waste bins etc). The provision of materials and equipment are flows that require data and people to support and coordinate as well.

So far we have flows of patients, people, data, materials and equipment and all the flows are interconnected. This is getting complicated!

Anything else?

The work has to be done in a suitable environment so the buildings and estate need to be provided. This may not seem like a flow but it is – it just has a longer time scale and is more jerky than the other flows – planning-building-using a new hospital has a time span of decades.

Are we finished yet? Is anything needed to support these flows?

Yes – the flow that links them all is money. Money flowing in is called revenue and investment and money flowing out is called costs and dividends and so long as revenue equals or exceeds costs over the long term the system can function. Money is like energy – work only happens when it is flowing – and if the money doesn’t flow to the right part at the right time and in the right amount then the performance of the whole system can suffer – because all the parts and flows are interdependent.

So, we have Seven Flows – Patients, People, Data, Materials, Equipment, Estate and Money – and when considering any process or system improvement we must remain mindful of all Seven because they are interdependent.

And that is a challenge for us because our caveman brains are not designed to solve seven-dimensional time-dependent problems! We are OK with one dimension, struggle with two, really struggle with three and that is about it. We have to face the reality that we cannot do this in our heads – we need assistance – we need tools to help us handle the Seven Flows simultaneously.

Fortunately these tools exist – so we just need to learn how to use them – and that is what Improvement Science is all about.

Systemic Sickness

Sickness, illness, ill health, unhealthy, disease, disorder, distress are all words that we use when how we feel falls short of how we expect to feel. The words imply an illness continuum and each of us appears to use different thresholds as action alerts.

The first threshold is crossed when we become aware that all is not right and our response is to enter a self-diagnosis and self-treatment mindset. This threshold is context-dependent; we use external references to detect when we have strayed too far from the norm – we compare ourselves with others. This early warning system works most of the time – after all, chemists make their main business from over-the-counter (OTC) remedies!

If the first stage does not work we cross the second threshold when we accept that we need expert assistance and we switch into a different mode of thinking – the “sick role”.  Crossing the second threshold is a big psychological step that implies a perceived loss of control and power – and explains why many people put off seeking help. They enter a phase of denial, self-deception and self-justification which can be very resistant to change.

The same is true of organisations – when they become aware that they are performing below expectation then a “self-diagnosis” and “self-treatment” is instigated, except that it is called something different such as an “investigation” or “root cause analysis”, followed by “recommendations” and an “action plan”.  The requirements for this to happen are an ability to become aware of a problem and a capability to understand and address the root cause both effectively and efficiently.  This is called dynamic stability or “homeostasis” and it is a feature of many systems.  The image of a centrifugal governor is a good example – it was one of the critical innovations that allowed the power of steam to be harnessed safely and was a foundation stone of the industrial revolution. The design is called a negative feedback stabiliser and it has a drawback – there may be little or no external sign of the effort required to maintain the stability.
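
A minimal sketch of that drawback, using a toy negative-feedback stabiliser with invented numbers (not a model of any real organisation): the visible output barely moves, while the hidden effort needed to hold it there keeps climbing.

```python
# A toy negative-feedback stabiliser. An external pressure grows steadily;
# the corrective effort is adjusted each week to close the gap. The reported
# output stays close to target - so from the outside nothing appears to be
# happening - while the effort required to hold it there climbs week by week.

target = 100.0
gain = 0.8                 # how quickly the corrective effort is adjusted

disturbance = 0.0          # external pressure pushing the output down
effort = 0.0               # the compensating effort being applied
history = []

for week in range(1, 31):
    disturbance += 0.5                       # the pressure grows every week
    output = target - disturbance + effort   # what the performance report shows
    effort += gain * (target - output)       # quietly work harder to close the gap
    history.append((week, output, effort))

for week, output, effort in history[::5]:
    print(f"week {week:2d}: output {output:6.1f}   effort {effort:5.1f}")
```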

Problems arise when parts of this expectation-awareness-feedback-adjustment process are missing, do not work, or become disconnected. If there is an unclear expectation then it is impossible to know when and how to react. Not being clear what “healthy” means leads to confusion. It is too easy to create a distorted sense of normality by choosing a context where everyone is the same as you – “birds of a feather flock together”.

Another danger is to over-simplify the measure of health and to focus on one objective dimension – money – with the assumption that if the money is OK then the system must be OK.  This is an error of logic because although a healthy system implies healthy finances, the reverse is not the case – a business can be both making money and heading for disaster.

Failure can also happen if the most useful health metrics are not measured, are measured badly, or are not communicated in a meaningful way.  Very often metrics are not interpreted in context, not tracked over time, and not compared with the agreed expectation of health.  These multiple errors of omission lead to counterproductive behaviour such as the use of delusional ratios and arbitrary targets (DRATs), short-termism and “chasing the numbers” – all of which can further erode the underlying health of the system – like termites silently eating the foundations of your house. By the time you notice it is too late – the foundations have crumbled into dust!

To achieve and maintain systemic health it is necessary to include the homeostatic mechanisms at the design stage. Trying to add or impose the feedback functions afterwards is less effective and less efficient.  A healthy system is designed with sensitive feedback loops that indicate the effort required to maintain dynamic stability – and if that effort is increasing then that alone is cause for further investigation – often long before the output goes out of specification.  Healthy systems are economic and are designed to require a minimum of effort to maintain stability and sustain performance – good design feels effortless compared with poor design. A system that only detects and reacts to deviations in outputs is an inferior design – it is like driving by looking in the rear-view mirror!

Healthy systems were either designed to be healthy from the start or have evolved from unhealthy ones – the books by Jim Collins describe this: “Built to Last” describes organisations that have endured because they were destined to be great from the start; “Good to Great” describes organisations that have evolved from unremarkable performers into great performers. There is a common theme to great companies irrespective of their genesis – data, information, knowledge, understanding and, most important of all, a wise leader.

The Ten Billion Barrier

I love history – not the dry boring history of learning lists of dates – the inspiring history of how leaps in understanding happen after decades of apparently fruitless search.  One of the patterns that stands out for me in recent history is how the growth of the human population has mirrored the changes in our understanding of the Universe.  This pattern struck me as curious – given that this has happened only in the last 10,000 years – and it cannot be genetic evolution because the timescale is too short. So what has fuelled this population growth? On further investigation I discovered that the population growth is exponential rather than linear – and very recent – within the last 1000 years.  Exponential growth is a characteristic feature of a system that has a positive feedback loop in it that is not balanced by an equal and opposite negative feedback loop. So, what is being fed back into the system that is creating this unbalanced behaviour? My conclusion so far is “collective improvement in understanding”.

However, exponential growth has a dark side – it is not sustainable. At some point a negative feedback loop will exert itself – and there are two extremes to how fast this can happen: gradual or sudden. Sudden negative feedback is a shock and is the one to avoid, because it is usually followed by a dramatic reversal of growth which, if catastrophic enough, is fatal to the system.  When it is less sudden and less severe it can lead into repeating cycles of growth and decline – boom and bust – which is just a more painful path to the same end.  This somewhat disquieting conclusion led me to conduct the thought experiment that is illustrated by the diagram: if our growth is fuelled by our ability to learn, to use and to maintain our collective knowledge, what changes in how we do this must have happened over the last 1000 years?  Biologically we are social animals and using our genetic inheritance we seem only able to maintain about 100 active relationships – which explains the natural size of family groups where face-to-face communication is paramount.  To support a stable group that is larger than 100 we must have developed learned behaviours and social structures. History tells us that we created communities by differentiating into specialised functions, and to be stable these were cooperative rather than competitive; the natural multiplier seems to be about 100.  A community with more than 10,000 people is difficult to sustain with an ad hoc power structure and a powerful leader, so we develop collective “rules” and a more democratic design – which fuels another 100-fold expansion to 1 million – the order of magnitude of a city. Multiply by 100 again and we get the size that is typical of a country, and the social structures required to achieve stability on this scale are different again – we needed to develop a way of actively seeking new knowledge, continuously re-writing the rule books, and industrialising our knowledge. This has only happened over the last 300 years.  The next multiplier takes us to Ten Billion – the order of magnitude of the current global population – and it is at this stage that our current systems seem to be struggling again.

From this geometric perspective we appear to be approaching a natural human system barrier that our current knowledge management methods seem inadequate to dismantle – and if we press on in denial then we face the prospect of a sudden and catastrophic change – for the worse. Regression to a bygone age would have the same effect because those systems are not designed to support the global economy.

So, what would have to change in the way we manage our collective knowledge that would avoid a Big Crunch and would steer us to a stable and sustainable future?

Disruptive Innovation

Africa is a fascinating place.  According to a documentary that I saw last year we are ALL descended from a small tribe who escaped from North East Africa about 90,000 years ago. Our DNA carries clues to the story of our journey and it shows that modern man (Africans, Europeans, Asians, Chinese, Japanese, Australians, Americans, Russians etc) – all come from a common stock. It is salutary to reflect how short this time scale is, how successful this tribe has been in replacing all the other branches of the human evolutionary tree, and how the genetic differences between colours and creeds are almost insignificant.  All the evolution that has happened in the last 90,000 years that has transformed the world and the way we live is learned behaviour. This means that, unlike our genes, it is possible to turn the clock backwards 90,000 years in just one generation. To avoid this we need to observe how the descendants of the original tribe learned to do many new things – forced by their new surroundings to adapt or perish.  This is the essence of Improvement Science – changing context continuously creates new challenges – from which we can learn, adapt and flourish.

To someone born in rural England a mobile phone appears to be a relatively small step on a relentless technological evolution – to someone born in rural Africa it is a radical and world-changing paradigm shift – one that has already changed their lives.  In some parts of Africa money is now managed using mobile phones and this holds the promise of bypassing the endemic bureaucratic and corrupt practices that so often strangle the green shoots of innovation and improvement. Information and communication is the lifeblood of improvement, and to introduce a communication technology that is reliable, effective, and affordable into a vast potential for cultural innovation is rather like introducing a match to the touchpaper of a firework. Once the fuse has started to fizz there is no going back. The name given to this destabilising phenomenon is “disruptive innovation” and fortunately it can work for the good of all – so long as we steer it in a win-win-win direction. And that is a big challenge because our history suggests that we find exploitation easier than evolution, and exploitation always leads to lose-lose-lose outcomes.

So while our global tribe may have learned enough to create a global phone system we still have much to learn about how to create a global social system.

Small Step or Giant Leap?

This iconic image of Earthrise over the Moonscape reveals the dynamic complexity of the living Earth contrasting starkly with the static simplicity of the dead Moon. The feeling of fragility that this picture evokes sounds a warning bell for us – “Death is Irreversible and Life is not Inevitable”. In reality this image was a small technical step that created a giant cultural leap.

And so it is with much of Improvement Science – the perception of the size of the challenge changes once the challenge is overcome. With the benefit of hindsight it was easy, even obvious – but with only the limit of foresight it looked difficult and obscure.  Our ability to challenge, learn and adopt a new perspective is the source of much gain and much pain. We gain the excitement of new understanding and we feel the pain of being forced to face our old ignorance.  Many of us deny ourselves the gain because we cannot face the pain – but it does not have to be that way. We have a tendency to store the pain up until we are forced to face it – and by this means we create what feel like insurmountable barriers to improvement.  There is an alternative – bite sized improvement – taking small steps towards a realistic goal that is on a path to our distant objective.  The small-step method has many advantages – we can do things that matter to us and are within our circle of influence; we can learn and practice the skills in safety; and we can start immediately.

In prospect it will feel like a giant leap and in retrospect it will look like a small step – that is the way of Improvement Science – and as our confidence and curiosity grow we take bigger steps and make smaller leaps.  

Passion, Persistence and Patience.

One goal of Improvement Science is self-sustaining improvement. This does not mean fixing the same problem day-after-day: it means solving new challenges first-time and for-ever. Patching the same problem over-and-over is called fire-fighting and is an emotionally and financially expensive strategy. We all get a buzz out of solving problems; and that is a good thing because when we free ourselves from the miserable world of the “can’t/won’t do mindset” we gain the confidence to take action, to solve problems and to gain access to an endless supply of feel-good-fuel.

Be warned though: there is a danger lurking here in the form of the unconscious assumption that if we solve all the problems then we will run out of things to do and our supply of feel-good-fuel will dry up too.  This misconception and our unconscious fear of ego-starvation conspires to undermine our efforts and we can unintentionally drift into reactive fire-fighting behaviour – which sustains our egos but maintains the mediocre status quo. We may also unconsciously collude with others who supply their egos with feel-good-fuel from the same source – and by doing that condemn us all to perpetual mediocrity.

The root cause of our behaviour is our natural tendency to see challenges as problems – the negative stuff – the niggles – what we see that is getting in the way and must be removed. We are not as good at seeing challenges as opportunities – the positive stuff – the nice-ifs – because we do not see what is not there.  The reason for our distorted perception is that the “caveman wetware between our ears” hasn’t evolved to give us a balanced perspective.  Fortunately, we have evolved the ability to see with our mind’s eye: to dream, to imagine and to conduct “thought experiments”. When we apply that capability we start to ask “What if?” questions.

What if …  I were to see challenges as either niggles (to be lost) or nice-ifs (to be gained)? 
What if … there is a limited or manageable number of niggles to be removed?
What if … I believe there is an unlimited supply of nice-ifs?
What if … I do not get the nice-ifs because I spend all my life fighting the same old niggles?
What if … I nailed some niggles once and for all?
What if … I had time and energy to focus on some nice-ifs?         

None of us enjoy disappointment. We do not like the feeling that follows from reality failing to meet our expectation – we see it as failure and we often take it personally or accuse others.  As children we can dream freely because we have not yet been disappointed enough not to; as adults we appear to lower our expectations to avoid the feeling of disappointment. We learn to settle for smaller dreams or no dreams at all.  I believe the reason we do this is simply because we are not taught any other way – we are not taught how to deliberately and actively access the inexhaustible supply of feel-good-fuel that is locked up in our dreams – our nice-ifs. We are not taught how to nail niggles once and forever and how to re-invest our lifetime into making some of our dreams a reality.  To learn those skills we need passion, persistence and patience – and a process. That process is called Improvement Science.

JIT, WIP, LIP and PIP

It is a fantastic feeling when a piece of the jigsaw falls into place and suddenly an important part of the bigger picture emerges. Feelings of confusion, anxiety and threat dissipate and are replaced by a sense of insight, calm and opportunity.

Improvement Science is about 80% subjective and 20% objective: more cultural than technical – but the technical parts are necessary. Processes obey the Laws of Physics – and unlike the Laws of People these are not open to appeal or repeal. So when an essential piece of process physics is missing the picture is incomplete and confusion reigns.

One piece of the process physics jigsaw is JIT (Just-In-Time) and process improvement zealots rant on about JIT as if it were some sort of Holy Grail of Improvement Science.  JIT means what you need arrives just when you need it – which implies that there is no waiting of it-for-you or you-for-it.  JIT is an important output of an improved process; it is not an input!  The danger of confusing output with input is that we may then try to use delivery time as a management metric rather than a performance metric – and if we do that we get ourselves into a lot of trouble. Delivery time targets are often set and enforced, and to a large extent they fail to achieve their intention because of this confusion.  To understand how to achieve JIT requires more pieces of the process physics jigsaw. The piece that goes next to JIT is labelled WIP (Work In Progress) which is the number of jobs that are somewhere between starting and finishing.  JIT is achieved when WIP is low enough to provide the process with just the right amount of resilience to absorb inevitable variation; and WIP is a more useful management metric than JIT for many reasons (which for brevity I will not explain here). Monitoring WIP enables a process manager to become more proactive because changes in WIP can signal a future problem with JIT – giving enough warning to do something.  However, although JIT and WIP are necessary they are not sufficient – we need a third piece of the jigsaw to allow us to design our process to deliver the JIT performance we want.  This third piece is called LIP (Load-In-Progress) and is the parameter needed to plan and schedule the right capacity at the right place and the right time to achieve the required WIP and JIT.  Together these three pieces provide the stepping stones on the path to Productivity Improvement Planning (PIP) that is the combination of QI (Quality Improvement) and CI (Cost Improvement).

So if we want our PIP then we need to know our LIP and WIP to get the JIT.  Reddit? Geddit?         
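
WIP, flow rate and delivery time are linked by the standard queueing relationship known as Little's Law (average WIP = throughput × lead time). The sketch below uses that law with invented numbers to show why a creeping WIP gives early warning of a later JIT failure; LIP and PIP are the author's own terms and are not modelled here.

```python
# Little's Law rearranged: expected lead time = WIP / throughput.
# Invented numbers - this only illustrates why watching WIP flags a
# delivery-time problem before the late deliveries actually appear.

throughput = 20.0         # jobs completed per day (assumed constant)
delivery_target = 3.0     # days - the just-in-time expectation

for wip in (40, 50, 60, 70, 80):                  # slowly creeping work-in-progress
    expected_lead_time = wip / throughput
    status = "OK" if expected_lead_time <= delivery_target else "late deliveries ahead"
    print(f"WIP {wip:3d} -> expected lead time {expected_lead_time:.1f} days ({status})")
```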

“Wars Not Make One Great.”

There appear to be two kinds of conflict: the one initiated by an individual and the one initiated by a group.  There also appears to be a natural cycle to conflict – the individual acting on behalf of a group gains power and can become so disconnected from reality that they are later removed from power by the evolving group. So, both autocracy and democracy appear to have a light-side and a dark-side: with the benefit leading the risk. The problem is that this system design creates the necessary and sufficient conditions for oscillating behaviour: boom-to-bust; centralise-to-decentralise; expand-to-contract. It is not a true cycle though, because time cannot be reversed and we can never go back to a previous time – so what we see as oscillating is more like a driver swerving from one side of the road to the other when the road ahead is not straight and the forward view is limited.

To progress quickly along a winding road at night we need early warning of the next bend, good lights, quick reflexes, and a responsive engine, brakes and steering. We need quick and accurate feedback and the confidence to decide and act.  The less feedback we get the more bumps we have, the lower our confidence falls, and the slower our progress becomes until we are paralysed with anxiety and fear.  Asking for feedback is relatively easy – giving feedback is much more difficult because to be effective it must be tailored to the recipient. General and anonymous feedback is ineffective. This implies that the person who asks for feedback must also specify why they want it and how they want it – they need to set out the terms of the psychological contract.  Without that clarity we descend into confusion.

Conflict is often seen as unhealthy and destructive, and when conflict is manifest as a battle the out-of-date paradigm that is blocking progress is destroyed – but the collateral damage is the price that is paid.  Innocent bystanders get caught in the crossfire. It is this fear of collateral damage that often paralyses action and hands power to the autocrat. The good news is that conflict can be healthy and constructive – when it is manifest as a race for understanding, for meaning and for a common purpose.  As a race and a challenge, and with vision, agility and energy, the unknown winding road ahead can be transformed into a safe and exhilarating ride!

Do You Have A Miserable Job?

If you feel miserable at work and do not know what to do then take heart, because you could be suffering from a treatable organisational disease called CRAP (cynically resistant arrogant pessimism).

To achieve a healthier work-life then it is useful to understand the root cause of CRAP and the rationale of how to diagnose and treat it.

Organisations have three interdependent dimensions of performance: value, time and money.  All organisations require both the people and the processes to be working in synergy to reliably deliver value-for-money over time.  To create a productive system it is necessary to understand the relationships between  value, money and time. Money is easier because it is tangible and durable; value is harder because it is intangible and transient. This means that the focus of attention is usually on the money – and it is often assumed that if the money is OK then the value must be OK too.  This assumption is incorrect.

Value and money are interdependent but have different “rates of change”  and can operate in different “directions”.  A common example is when a dip in financial performance triggers an urgent “drive” to improve the “bottom line”.  Reactive revenue generation and cost cutting results in a small, quick, and tangible improvement on the money dimension but at the same time sets off a large, slow, and intangible deterioration on the value dimension.  Money, time and  value are interdependent and the inevitable outcome is a later and larger deterioration in the money – as illustrated in the doodle. If only money is measured the deteriorating value is not detected, and by the time the money starts to falter the momentum of the falling value is so great that even heroic efforts to recover are futile. As the money starts to fall the value falls even further and even faster – the lose-lose-lose spiral of organisational failure is now underway.
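
The doodle is not reproduced here, but a toy model with invented numbers can illustrate the mechanism being described: the cost-cutting drive gives the money line a small, quick lift while starting a slower and larger erosion of value, and the falling value eventually drags the money down further than the original dip.

```python
# A toy illustration (invented numbers, not a validated model) of the two
# different rates of change described above. From month 6 a reactive
# cost-cutting drive adds a small, quick, tangible gain to the money line
# while eroding value; the lost value feeds back, with a lag, into the money.

value, money = 100.0, 100.0
history = []

for month in range(1, 25):
    if month >= 6:                    # the reactive "drive to improve the bottom line"
        money += 2.0                  # small, quick, tangible improvement
        value -= 4.0                  # larger, initially invisible, loss of value
    money += 0.2 * (value - 100.0)    # the lagging effect of lost value on revenue
    history.append((month, value, money))

for month, v, m in history[::4]:
    print(f"month {month:2d}: value {v:6.1f}   money {m:6.1f}")
```

In this invented run the money column looks like a success story for the first half; by the time it turns down the value has already fallen a long way – the tail spin described above.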

People who demonstrate in their attitude and behaviour that they are miserable at work provide the cardinal sign of falling system value. A miserable, sceptical and cynical employee poisons the emotional atmosphere for everyone around them. Misery is both defective and infective.  The primary cause of a miserable job is the behaviour exhibited by people in positions of authority – and the more the focus is only on money the more misery their behaviour generates.

Fortunately there is an antidote; a way to break out of the vicious tail spin – measure both value and money, focus on improving value and observe the positive effect on the money.  The critical behaviour is to actively test the emotional temperature and to take action to keep it moving in a positive direction.  “The Three Signs of a Miserable Job” by Patrick Lencioni tells a story of how an experienced executive learns that the three things a successful managerial leader must do to achieve system health are:
1) ensure employees know their unique place, role and value in the whole system;
2) ensure employees can consciously connect their work with a worthwhile system goal; and
3) ensure employees can objectively measure how they are doing.

Miserable jobs are those where the people feel anonymous, where people feel their work is valueless, and where people feel that they get no feedback from their seniors, peers or juniors. And it does not matter if it is the cleaner or the chief executive – everyone needs a role, a goal and to know all their interdependencies.

We do not have to endure a Miserable Job – we all have the power to transform it into Worthwhile Work.

In Whom and in What do We Trust?

The issue of trust has been a recurring theme again this week – and it has appeared in many guises.  In one situation it was a case of distrust – I observed an overt display of suspicious, sceptical, and cynical behaviour. In another situation it was a case of mistrust – a misplaced confidence in my own intuition. My illogical and irrational heart said one thing but when my mind worked through the problem logically and rationally my intuition was proved incorrect. In another it was a case of rewarded-trust: positive feedback that showed a respectful challenge had resulted in a win-win-win outcome. And in yet another a case of extended-trust: an expression of delighted surprise from someone whose default position was to distrust.

Improvement Science rests on two foundation stones: Trust and Capability. First, to trust oneself – to have the confidence and humility to challenge, to learn, to change, to improve, to celebrate and to share; second, to extend trust to others with a clear explanation of the consequences of betraying that trust; and third, to build collective trust by having the courage to challenge trust-eroding behaviour.

At heart we are all curious, friendly, social animals – our natural desire is to want to trust. Distrust is a learned behaviour that, ironically, is the result of the instinctive trust and respect that, as children, we have for our parents.  We are taught to distrust by observing and copying distrustful and disrespectful behaviour by our role models. So with this insight we gain access to an antidote to the emotional poison of distrust: our innate child-like curiosity, desire to explore, appetite for fun, and thirst for knowledge and meaning. To dissolve distrust we only need to reconnect with our own inner child: one half of the foundation of Improvement Science.

Deming’s “System of Profound Knowledge”

W. Edwards Deming (1900-1993) is sometimes referred to as the Father of Quality. He made such a significant contribution to Japan’s burgeoning post-war reputation for innovative high-quality products, and the rapid development of their economic power, that he is regarded as having made more of a difference than any other individual not of Japanese heritage.

Though best known as a statistician and economist, he was initially educated as an electrical engineer and mathematical physicist. To me however he was more of a social scientist – interested in the science of improvement and the creation of value for customers. A lifelong learner, in his later years (1) he became fascinated by epistemology – the processes by which knowledge is created – and this led him into wanting to know more about the psychology of human behaviour and its underlying motivations.

In his nineties he put his whole life of learning into one model – his System of Profound Knowledge (SoPK). What follows is my brief take on each of the four elements of the SoPK and how they fit together.

THE PSYCHOLOGY OF HUMAN BEHAVIOUR
Everyone is different, and we all SEE things differently. We then DO things based on how we see things – and we GET results – of some kind. Over time we shore up our own particular view of the world – some call this a “paradigm” – and multiple loops of DO-GET-SEE (2) are self-reinforcing, so that as our sense-making becomes increasingly fixed we BEHAVE – BECOME – BELIEVE. The trouble is we each to some extent get divorced from reality, or at least from how most others see it – in extreme cases we might even get classified by some people as “insane” – indeed doing the same things whilst expecting different results is an oft-quoted definition of insanity.

THE ACQUISITION OF KNOWLEDGE
So when we DO things it would be helpful if we could do them as little experiments that test our sense of what works and what is real. Even better, we might get others to help us interpret the results from the benefit of their particular world view / paradigm. Did you study science at school? If so you might recognize that learning in this way by experimentation is the “scientific method” in action. Through these cycles of learning, knowledge gets continually refined and builds. It is also where improvement comes from and how reality evolves. Deming referred to this as the PLAN-DO-STUDY-ACT Cycle (1) – personally I prefer the words in this adjacent diagram. For me the cycle is as much about good mental health as acquiring knowledge, because effective learning (3) keeps individuals and organizations connected to reality and in control of their lives.

UNDERSTANDING VARIATION
The origins of PDSA lie with Walter Shewhart (4) who in 1925 invented it to help people in organizations methodically and continually inquire into what is happening. He observed that when workers or managers make changes in their working practices so that their processes run better, the results vary, and that this variation often fools them. So he invented a tool for collecting numbers in real time so that each process can be listened in to as a “system” – much like a doctor uses a stethoscope to collect data and interpret how their patient’s system is behaving, by asking what might be contributing to – actually causing – the system’s outcomes. Shewhart named the tool Statistical Process Control – three words, each of which for many people is an instant turn-off. This means they miss his critical insight that there are two distinct types of variation – noise and signal – and that whilst all systems contain noise, only some contain signals, which if present can be taken to be assignable causes of systemic behaviour. Indeed, to make it more palatable the tool might better be referred to as a “system behaviour chart”. It is meant to be interpreted like a doctor or nurse interprets the vital signs chart at the end of a patient’s bed i.e. to decide what action if any to take and when. Here is an example that has been created in BaseLine© which is specifically designed to offer the agnostic direct access to the power of Shewhart’s thinking (5).
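
A minimal sketch of the noise-versus-signal distinction, using the common XmR (individuals and moving range) calculation rather than anything specific to BaseLine©; the data are invented.

```python
# A bare-bones "system behaviour chart" calculation using the standard XmR
# constants. Points outside the natural process limits are treated as signals
# (possible assignable causes); everything inside the limits is noise.

data = [52, 49, 51, 50, 53, 48, 50, 52, 49, 51, 50, 61, 50, 49, 52]

mean = sum(data) / len(data)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

upper = mean + 2.66 * avg_mr    # natural process limits (2.66 is the XmR constant)
lower = mean - 2.66 * avg_mr

print(f"mean {mean:.1f}, natural process limits {lower:.1f} to {upper:.1f}")
for i, x in enumerate(data, start=1):
    if x > upper or x < lower:
        print(f"point {i} ({x}) is a signal - look for an assignable cause")
```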

THINKING SYSTEMICALLY
What is meant by the word “system”? It means all the parts connected and interrelated as a whole (3). It is often helpful to get representatives of the various stakeholder groups to map the system – with its parts, the flows and the connections – so they can see how different people make sense of, say, their family system, their work system, a particular process of interest – indeed any system of any kind that feels important to them. The map shown here is one that might be used generically by manufacturers to help them investigate the separate causal sources of systemic variation – from the Suppliers of Inputs received, to the Processes that convert those inputs into Outputs, which can then be received by Customers – all made possible by vital support processes. This map (1) was taught by Deming in 1950 to Japan’s leaders. When making sense of their own particular systemic context others may prefer a different kind of map, but why? How come others prefer to make sense of things in their own way? To answer this Peter Senge (3), in his own equivalent to the SoPK, says you need 5 distinct disciplines: the ability to think systemically, to learn as a team, to create a shared vision, to understand how our mental models get ingrained, and lastly “personal mastery” … which takes me back to where I started.

Aware that he was at the end of his life of learning, Deming bequeathed his System of Profound Knowledge to us so that we might continue his work. Personally, I love the SoPK because it is so complete. It is hard however to keep such a model, complete and as a whole, continually in the front of our minds – such that everything we think and do can be viewed as a fractal of that elegant whole. Indeed as a system, the system of profound knowledge is seriously – even fatally – undermined if any single part is missing ..

• Without understanding the causes of human behaviour we have no empathy for other people’s worldviews, other value systems. Without empathy our ability to manage change is fundamentally impaired.

• Without being good at experimentation and turning our experience into Knowledge – the very essence of science – we threaten our very mental health.

• Without understanding variation we are all too easily deluded – ask any magician (6). We spin our own reality. In ignoring or falsely interpreting data we are even “wilfully blind” (7). BaseLine© for example is designed to help people make more of their time-series data – a window onto the system that their data is representing – using its inherent variation to gain an enhanced sense of what has actually happened, of what is really happening now, and of what, if things stay the same, is most likely to happen next.

• Without being able to see how things are connected – as a whole system – and seeing the uniqueness of our own particular context, moment to moment, we miss the importance of our maps – and those of others – for good sense-making. We therefore miss the sharing of our individual realities, and with it the potential to spot what really causes outcomes – which neatly takes us back to the need for empathy and for understanding the psychology of human behaviour.

For me the challenge is to be continually striving for that sense of the SoPK – as a complete whole – and by doing this to see how I might grow my influence in the world.

Julian Simcox

References

1. Deming W.E – The New Economics – 1993
2. Covey S.R. – The 7 habits of Highly Effective People – 1989
3. Senge P. M. – The Fifth Discipline: the art and practice of the learning organization – 1990
4. Wheeler D.J. & Poling S.R.– Building Continual Improvement – 1998
5. BaseLine© is available via www.threewinsacademy.co.uk.
6. Macknik S, et al – Sleights of Mind – What the neuroscience of magic reveals about our brains – 2011.
7. Heffernan M. – Wilfully Blind – 2011

Politics, Policy and Police.

I love words – they are a window into the workings of our caveman wetware. Spoken and written language is the remarkably recent innovation that opened the door to the development of civilisations because it allowed individual knowledge to accumulate, to be shared, to become collective and to span generations (the picture is 4000 year old Minoan script) .

We are social animals and we have discovered that our lives are more comfortable and more predictable if we arrange ourselves into collaborative groups – families, tribes and communities; and through our collaboration we have learned to tame our environment enough to allow us to settle in one place and to concentrate more time and effort on new learning.  The benefits of this strategy come at a price – because as the size of our communities grows we are forced to find new ways to make decisions that are in the best interests of everyone.  And we need to find new ways to help ourselves abide by those decisions as individuals without incurring the cost of enforcement.  The word “civis” means a person who shares the privileges and the duties of the community in which they live.  And size matters – hamlets, villages and towns developed along with our ability to behave in a “civilised” way. Eventually cities appeared around 6000 years ago – and the Greek word for a city is “polis”.  The bigger the city the greater the capacity to support learning and the specialisation of individual knowledge, skills and experience. This in turn fuels the growth of the group and the development of specialised groups – tribes within tribes. A positive feedback loop is created that drives bigger-and-bigger settlements and more and more knowledge. Until … we forget what it is that underpins the whole design – civilised behaviour.  While our knowledge has evolved at an accelerating pace our caveman brains have not kept up – and this is where the three “Poli” words come in – they all derive from the same root “polis” and they describe a process:

1. Politics is the method by which the collective decisions are generated.
2. Policy is the method by which the Political decisions are communicated.
3. Police is the method by which the System of Policies is implemented.

The problem arises when the growth of knowledge, and the inevitable changes that result, starts to challenge the current Politics+Policy+Police Paradigm that created the context for the change to happen.  The Policies are continually evolving – as evidenced by the continuous process of legislation. The Paradigm can usually absorb a lot of change but there usually comes a point when it becomes increasingly apparent to the society that the Paradigm has to change radically to support further growth. The more rigid the Policy and the greater the power to enforce it, the greater the social pressure that builds before the paradigm fractures – and the greater the disruption that will ensue as the social pressure is released.  History is a long catalogue of political paradigm shifts of every size – from minor tremors to major quakes – shifts that are driven by our insatiable hunger for knowledge, understanding and meaning.

Improvement Science operates at the Policy stage and therefore forms the critical link between Politics and Police.  The purpose of Improvement Science is to design, test and implement Policies that deliver the collective Win-Win-Win outcomes.  Improvement Science is an embodiment of civilised behaviour and it embraces both the constraints that are decided by the People and the constraints that are defined by the Physics.

Do Bosses need Hugs too?

The foundation on which Improvement Science is built is invisible – or rather intangible – and without this foundation the whole construction is unstable and unsustainable.  Rather like an iceberg – mostly under the surface with only a small part that is visible and measurable – and that small visible part is called Performance.

What is underneath?  To push our Performance through the surface so that it gets noticed we know we must synergise the People with the Processes, but there is more to it than just that. The deepest part of the foundation, the part that provides the core strength and stability, is our Paradigm – our set of unconscious beliefs, values, attitudes and habits that comprises our psycho-gyroscope: our stabiliser.

Our Paradigm creates inertia: the tendency to keep going in the same direction even when the winds of change have shifted permanently and are blowing us off course.  Paradigms resist change – and for good reason – inertia is a useful thing when there are minor bumps on the journey and we need to avoid stalling at each one. Inertia becomes a less useful thing when we meet an immovable object such as a Law of Physics – because if we hit one of these then Reality will provide us with some painful feedback. Inertia is also less useful when we have stopped and have no momentum, because it takes a bigger push for a longer time to get us moving again.

An elephant has a lot of inertia because it is big – and perhaps this is the reason why we refer to attitudes and beliefs that represent resistance to change as Elephants in the Room.  The ringleader of a herd of organisational elephants is an elephant called Distrust, which is the offspring of an elephant called Discounting, who in turn was born of an elephant called Disrespect.  We see this in organisations when we display and cultivate disrespectful attitudes towards our peers, reports, workers and our seniors. The old time-worn and cracked “us-versus-them” record.

So let us break into the cycle and push the Elephant called Distrust into the spotlight – what is our alternative? Respect -> Acknowledgement -> Trust.  It doesn’t make any difference who you are: the most valuable form of respect is feedback: Honest, Unbiased and Genuine (HUG).  So if we regularly experience the Elephant called Distrust making a Toxic Swamp in our organisations, and we feel discounted and disrespected, then part of the reason may be that we are not giving ourselves enough HUGs. And that means the bosses too.

Sentenced to Death-by-Meeting!

Do you ever feel a sense of dread when you are summoned to an urgent meeting; or when you get the minutes and agenda the day before your monthly team meeting; or when you see your diary full of meetings for weeks in advance – like a slow and painful punishment?

If so then you may have unwittingly sentenced yourself to Death by Meeting.  What?  We do it to ourselves? No way! That would be madness!

But think about it. We consciously and deliberately ingest all sorts of other toxins: chemicals like caffeine, alcohol and cigarette smoke – so what is so different about immersing ourselves in the emotional toxic waste that many meetings seem to generate?

Perhaps we have learned to believe that there is no other way because we have never experienced focussed, fun and effective meetings where problems are surfaced, shared and solved quickly – problems that thwart us as individuals; meetings where the problem-solving sum is greater than the problem-accumulating parts.

A meeting is a system that is designed to solve problems.  We can improve our system incrementally but it is a slow process; to achieve a breakthrough we need to radically redesign the system.  There are three steps to doing this:

1. First decide what sort of problems the meeting is required to solve: strategic, operational or tactical;
2. Second design, test and practice a problem solving process for each category of problem; and
3. Third, select the appropriate tool for the task.

In his illuminating book Death by Meeting, Patrick Lencioni describes three meeting designs and illustrates with a story why meetings don’t work when the wrong tool is used for the task. It is a sobering story.

There is another dimension to the design of meetings; that is how we solve problems as groups – and how, as a group, we seem to waste a lot of effort and time in unproductive discussion.  In his book Six Thinking Hats Edward De Bono provides an explanation for our habitual behaviour and a design for a radically different group problem solving process – one that a group would not arrive at by evolution – but one that has been proven to work.

If we feel sentenced to death-by-meetings then we could buy and read these two small books – a zero-risk, one-off investment of effort, time and money for a guaranteed regular reward of fun, free time and success!

So if I complain to myself and others about pointless meetings and I have not bothered to do something about it myself then I now know that it is I who sentenced myself to Death-by-Meeting. Unintentionally and unconsciously perhaps – but me nevertheless.

Is a Queue an Asset or a Liability?

Many believe that a queue is a good thing.

To a supplier a queue is tangible evidence that there is demand for their product or service and reassurance that their resources will not sit idle, waiting for work and consuming profit rather than creating it.  To a customer a queue is tangible evidence that the product or service is in demand and therefore must be worth having. They may have to wait but the wait will be worth it.  Both suppliers and customers unconsciously collude in the Great Deception and even give it a name – “The Law of Supply and Demand”. By doing so they unwittingly open the door for charlatans and tricksters who deliberately create and maintain queues to make themselves appear more worthy or efficient than they really are.

Even though we all know this intuitively we seem unable to do anything about it. “That is just the way it is” we say with a shrug of resignation. But it does not have to be so – there is a path out of this dead end.

Let us look at this problem from a different perspective. Is a product actually any better because we have waited to get it? No. A longer wait does not increase the quality of the product or service and may indeed impair it.  So, if a queue does not increase quality does it reduce the cost?  The answer again is “No”. A queue always increases the cost, and often in many ways.  Exactly how much the cost increases depends on what is on the queue, where the queue is, and how long it is. This may sound counter-intuitive and didactic so I need to explain in a bit more detail why this statement is an inevitable consequence of the Laws of Physics.

Suppose the queue comprises perishable goods; goods that require constant maintenance; goods that command a fixed price when they leave the queue; goods that are required to be held in a container of limited capacity with fixed overhead costs (i.e. costs that are fixed irrespective of how full the container is).  Patients in a hospital or passengers on an aeroplane are typical examples because the patient/passenger is deprived of their ability to look after themselves; they are totally dependent on others for supplying all their basic needs; and they are perishable in the sense that a patient cannot wait forever for treatment and an aeroplane cannot fly around forever waiting to land. A queue of patients waiting to leave hospital or an aeroplane full of passengers circling to land at an airport represents an expensive queue – the queue has a cost – and the bigger the queue is and the longer it persists the greater the cost.

So how does a queue form in the first place? The answer is: when the flow in exceeds the flow out. The instant that happens the queue starts to grow.  When the flow in is less than the flow out the queue gets smaller – but we cannot have a negative queue – so when the flow out exceeds the flow in AND the size of the queue reaches zero the system suddenly changes behaviour: the work dries up and the resources become idle.  This creates a different cost – the cost of idle resources consuming money but not producing revenue. So a queue of work creates a cost, and no queue with no work creates a cost too.  The least cost situation is when the work arrives at exactly the same rate that it can be done: there is no waiting by anyone – no queue and no idle resources.  Note however that this does not imply that the work has to arrive at a constant rate – only that the rate at which the work arrives matches the rate at which it is done – it is the difference between the two that should be zero at all times. And where we have several steps, the flow must be the same through all steps of the stream at all times.  Remember the second condition for minimum cost – the size of the queue must be zero as well – this is the zero inventory goal of the “perfect process”.
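
To make this flow-balance arithmetic concrete, here is a minimal sketch in Python. All of the figures (inflow rates, capacity, the holding cost per waiting item and the cost of an idle unit of capacity) are invented purely for illustration, and the function name simulate_queue is just a convenient label – this is not a prescribed method, only a toy model of the behaviour described above.

```python
# A minimal single-step queue model: queue(t+1) = max(0, queue(t) + inflow - outflow).
# Holding and idle costs are illustrative figures, not real accounting data.

def simulate_queue(inflows, capacity, holding_cost=1.0, idle_cost=5.0):
    """Step a queue through time and tally the two kinds of cost."""
    queue = 0
    total_holding = 0.0
    total_idle = 0.0
    for arriving in inflows:
        available = queue + arriving           # work that could be done this period
        done = min(capacity, available)        # we can never do more than our capacity
        queue = available - done               # whatever is left over waits (never negative)
        total_holding += queue * holding_cost  # cost of work sitting in the queue
        total_idle += (capacity - done) * idle_cost  # cost of resources with nothing to do
    return queue, total_holding, total_idle

# Perfectly balanced flow: no queue and no idle time, so no cost of either kind.
print(simulate_queue(inflows=[4] * 20, capacity=4))

# Inflow persistently above capacity: the queue (and its holding cost) grows without limit.
print(simulate_queue(inflows=[5] * 20, capacity=4))

# Inflow persistently below capacity: no queue, but the idle-resource cost mounts instead.
print(simulate_queue(inflows=[3] * 20, capacity=4))
```

The balanced run produces no queue and no idle time; the other two show that any persistent imbalance generates a cost on one side or the other.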

So, if any deviation from this perfect balance of flow creates some form of cost, why do we ever tolerate queues? The reason is that the perfect world above implies that it is possible to predict the flow in and the flow out with complete accuracy and reliability.  We all know from experience that this is impossible: there is always some degree of natural variation which is unpredictable and which we often call “noise” or “chaos”. For that single reason the lowest cost (not zero cost) situation is when there is just enough breathing space for a queue to wax and wane – smoothing out the unpredictable variation between inflow and outflow. This healthy queue is called a buffer.

The less “noise” there is, the less breathing space is needed and the closer we can get to zero queue cost.
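
To illustrate why this “noise” forces us to carry a buffer, the same sort of toy model can be fed a random inflow whose average sits just below the capacity. Again, every number here is invented and the helper average_queue is hypothetical – it is only a sketch – but it shows the queue waxing and waning, and its average size growing quickly as the unpredictability increases.

```python
import random

def average_queue(noise, mean_inflow=4.5, capacity=5.0, periods=100_000, seed=1):
    """Average work-in-queue when inflow varies randomly around a mean below capacity."""
    random.seed(seed)
    queue = 0.0
    total = 0.0
    for _ in range(periods):
        arriving = max(0.0, random.gauss(mean_inflow, noise))  # noisy, non-negative inflow
        queue = max(0.0, queue + arriving - capacity)          # the queue can never be negative
        total += queue
    return total / periods

# Same average inflow and same capacity in every run; only the unpredictability changes.
for noise in (0.0, 0.5, 1.0, 2.0):
    print(f"noise = {noise}: average queue ≈ {average_queue(noise):.2f}")
```

In this toy model the average inflow and the capacity are identical in every run; only the noise differs, and the buffer needed to absorb it grows rapidly as the noise grows.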

So, given this logical explanation, it might surprise you to learn that most of the flow variation we observe in real processes is neither natural nor unpredictable – we deliberately and persistently inject predictable flow variation into our processes.  This unnatural variation is created by our own policies – for example, accumulating DIY jobs until there are enough to justify doing them.  The reason we do this is that we have been bamboozled into believing it is a good thing for the financial health of our system. We have been beguiled by the accountants – the Money Magicians.  Actually that is not precise enough – the accountants themselves are the innocent messengers – the deception comes from the Accounting Policies.

The major niggle is one convention that has become ossified into Accounting Practice – the convention that a queue of work waiting to be finished or sold represents an asset – a sort of frozen-for-now cash that can be thawed out or “liquidated” when the product is sold.  This convention is not incorrect, it is just incomplete because, as we have demonstrated, every queue incurs a cost.  In accountant-speak a cost is called a liability, and unfortunately this queue-cost-liability is never included in the accounts – and this makes a very, very big difference to the outcome. To assess the financial health of an organisation at a point in time an accountant will use a balance sheet to subtract the liabilities from the assets and come up with a number that is called equity. If that number is zero or negative then the business is financially dead – the technical name is bankruptcy and no accountant likes to utter the B word.

Denial is not a reliable long term business strategy, and if our Accounting Policies do not include the cost of the queue as a liability on the balance sheet then our financial reports will be a distortion of reality and will present the business as healthier than it really is.  This is an Error of Omission and it has grave negative consequences, one of which is that it can create a sense of complacency, a blindness to the early warning signs of financial illness and reactive rather than proactive behaviour. The problem is compounded when a large and complex organisation is split into smaller, simpler mini-businesses that all suffer from the same financial blind spot. It becomes even more difficult to see the problem when everyone is making the same error of omission and when it is easier to blame someone else for the inevitable problems that ensue.
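
To see in figures why this Error of Omission flatters the accounts, here is a tiny worked example. All of the numbers are invented: the first calculation follows the conventional treatment of the work-in-progress queue purely as an asset, while the second also counts the cost the queue has accrued while waiting as a liability.

```python
# Invented figures for a toy balance sheet; only the treatment of the queue differs.
cash              = 50_000
work_in_progress  = 120_000   # the queue, valued at what it will fetch when sold
other_liabilities = 150_000

# Accrued cost of holding the queue: items x periods waited x holding cost per item-period.
queue_items  = 400
average_wait = 30             # periods
holding_cost = 3              # currency units per item per period
queue_cost   = queue_items * average_wait * holding_cost        # = 36,000

assets = cash + work_in_progress

equity_conventional = assets - other_liabilities                 # queue cost omitted
equity_complete     = assets - (other_liabilities + queue_cost)  # queue cost included

print(f"Equity with the queue cost omitted : {equity_conventional:>8,}")  # 20,000
print(f"Equity with the queue cost included: {equity_complete:>8,}")      # -16,000
```

With the queue cost omitted the business appears comfortably solvent; with it included the equity is negative – the same physical situation, reported two different ways.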

We all know from experience that prevention is better than cure and we also know that the future is not predictable with certainty: so in addition to prevention we need vigilance, and prompt, decisive and appropriate action at the earliest detectable sign of a significant deterioration. Complacency is not a reliable long term survival strategy.

So what is the way forward? Dispense with the accountants? NO! You need them – they are very good at what they do – it is just that what they are doing is not exactly what we all need them to be doing – and that is because the Accounting Policies that they diligently enforce are incomplete.  A safer strategy would be for us to set our accountants the task of learning how to count the cost of a queue and to include that in our internal financial reporting. The quality of business decisions based on financial data will improve and that is good for everyone – the business, the customers and the reputation of the Accounting Profession. Win-win-win.

The question was “Is a queue an asset or a liability?” The answer is “Both”.