The Nerve Curve

The Nerve Curve is the emotional roller-coaster ride that everyone who engages in Improvement needs the confidence to step onto.

Just like a theme park ride it has ups and downs, twists and turns, surprises and challenges, an element of danger and a splash of excitement.  If it did not have all of those components then it would not be fun and there would not be queues of people wanting to ride, again and again.  And the reason that theme parks are so successful is because their rides have been very carefully designed – to be challenging, exciting, fun and safe – all at the same time.

So, when we challenge others to step aboard our Improvement Nerve Curve then we need to ensure that our ride is safe – and to do that we need to understand where the emotional dangers lurk, to actively point them out and then avoid them.

A big danger hides right at the start.  To get aboard the Nerve Curve we have to ask questions that expose the Elephant-in-the-Room issues.  Everyone knows they are there – but no one wants to talk about them.  The biggest one is called Distrust – which is wrapped up in all sorts of different ways, and inside the nut is the Kernel of Cynicism.  The inexperienced improvement facilitator may blunder straight into this trap just by using one small word … the word “Why?”  Arrrrrgh!  Kaboom!  Splat!  Game Over.

The “Why” question is like throwing a match into a barrel of emotional gunpowder – because it is interpreted as “What is your purpose?” and in a low-trust climate no one will want to reveal what their real purpose or intention is.  They have learned from experience to keep their cards close to their chest – it is safer to keep agendas hidden.

A much safer question is “What?”  What are the facts?  What are the effects? What are the causes? What works well? What does not? What do we want? What don’t we want? What are the constraints? What are our change options? What would each deliver? What are everyone’s views?  What is our decision?  What is our first action? What is the deadline?

Sticking to the “What” question helps to avoid everyone diving for the Political Panic Button and pulling the Emotional Emergency Brake before we have even got started.

The first part of the ride is the “Awful Reality Slope” that swoops us down into “Painful Awareness Canyon” which is the emotional low-point of the ride.  This is where the elephants-in-the-room roam for all to see and where passengers realise that, once the issues are in plain view, there is no way back.

The next danger is at the far end of the Canyon and is called the Black Chasm of Ignorance – and the roller-coaster track goes right to the edge of it.  Arrrgh – we are going over the edge of the cliff – quick, grab the Wilful Blindness Goggles and the Denial Bag from under the seat, apply the Blunder Onwards Blindfold and the Hope-for-the-Best Smoke Hood.

So, before our carriage reaches the Black Chasm we need to switch on the headlights to reveal the Bridge of How:  The structure and sequence that spans the chasm and that is copiously illuminated with stories from those who have gone before.  The first part is steep though and the climb is hard work.  Our carriage clanks and groans and it seems to take forever but at the top we are rewarded by a New Perspective and the exhilarating ride down into the Plateau of Understanding where we stop to reflect and to celebrate our success.

Here we disembark and discover the Forest of Opportunity, which conceals many more Nerve Curves going off in all directions – rides that we can board when we feel ready for a new challenge.  There is danger lurking here too though – hidden in the Forest is Complacency Swamp – which looks innocent except that the Bridge of How is hidden from view.  Here we can get lured by the pungent perfume of Power and the addictive aroma of Arrogance, and we can become too comfortable in the Zone.  As we snooze in the Hammock of Calm we do not notice that the world around us is changing.  In reality we are slipping backwards into Blissful Ignorance – and we do not notice until we suddenly find ourselves in an unfamiliar Canyon of Painful Awareness.  Ouch!

Being forewarned is our best defence.  So, while we are encouraged to explore the Forest of Opportunity, we learn that we must also return regularly to the Plateau of Understanding to don the Habit of Humility.  We must refresh ourselves from the Fountain of New Knowledge by showing others what we have learned and learning from them in return.  And when we start to crave more excitement we can board another Nerve Curve to a new Plateau of Understanding.

The Safety Harness of our Improvement journey is called See-Do-Teach and the most important part is Teach.  Our educators need more than just a knowledge of how-to-do; they also need enough understanding to be able to explore the why-to-do.  The Quest for Purpose.

To convince others to get onboard the Nerve Curve we must be able to explain why the Issues still exist and why the current methods are not sufficient.  Those who have been on the ride are the only ones who are credible because they understand.  They have learned by doing.

And that understanding grows with practice and it grows more quickly when we take on the challenge of learning how to explore purpose and explain why.  This is Nerve Curve II.

All aboard for the greatest ride of all.

Knowledge and Understanding

Knowledge is not the same as Understanding.

We all know that the sun rises in the East and sets in the West; most of us know that the oceans have a twice-a-day tidal cycle and some of us know that these tides also have a monthly cycle that is associated with the phase of the moon. We know all of this just from taking notice; remembering what we see; and being able to recognise the patterns. We use this knowledge to make reliable predictions of the future times and heights of the tides; and we can do all of this without any understanding of how tides are caused.

Our lack of understanding means that we can only describe what has happened. We cannot explain how it happened. We cannot extract meaning – why it happened.

People have observed and described the movements of the sun, sea, moon, and stars for millennia and a few could even predict them with surprising accuracy – but it was not until the 17th century that we began to understand what caused the tides. Isaac Newton developed enough of an understanding to explain how it worked, and he did it using a new concept called gravity and a new tool called calculus.  He then used this understanding to explain a lot of other unexplained things and suddenly the Universe started to make a lot more sense to everyone. Nowadays we teach this knowledge at school and we take it for granted. We assume it is obvious – and it is not. We are no smarter now than people in the 17th century – we just have a deeper understanding (of physics).

Understanding enables things that have not been observed or described to be predicted and explained. Understanding is necessary if we want to make rational and reliable decisions that will lead to changes for the better in a changing world.

So, how can we test if we only know what to do or if we actually understand what to do?

If we understand then we can demonstrate the application of our knowledge by solving old and new problems effectively and we can explain how we do it.  If we do not understand then we may still be able to apply our knowledge to old problems but we do not solve new problems effectively or efficiently and we are not able to explain why.

But we do not want to risk making a mistake just to test whether we have an understanding-gap – so how can we find out? What we look for is the tell-tale sign of an excess of knowledge and a dearth of understanding – and it has a name – it is called “bureaucracy”.

Suppose we have a system where the decision-makers do not make effective decisions when faced with new challenges – which means that their decisions lead to unintended adverse outcomes. It does not take very long for the system to discover that the decision process is ineffective – so to protect itself the system reacts by creating bureaucracy – a sort of organisational damage-limitation circle of sand-bags that limits the negative consequences of the poor decisions. A bureaucratic firewall, so to speak.

Unfortunately, while bureaucracy is effective it is non-specific, it uses up resources and it slows everything down. Bureaucracy is inefficiency. What we get as a result is a system that costs more and appears to do less and that is resistant to any change – not just poor decisions – it slows down good ones too.

The bureaucratic barrier is important though; doing less bad stuff is actually a reasonable survival strategy – until the cost of the bureaucracy threatens the system’s viability. Then it becomes a liability.

So what happens when a last-saloon-in-town “efficiency” drive is started in desperation and the “bureaucratic red tape” is slashed? The poor decisions that the red tape was ensnaring are free to spread virally, and when implemented they create a big-bang unintended adverse consequence! The safety and quality performance of the system drops sharply, which triggers the reflex “we-told-you-so” and the rapid re-introduction of the red tape – plus some extra to prevent it happening again.  The system learns from its experience and concludes that “higher quality always costs more”, “don’t trust our decision-makers”, “the only way to avoid a bad decision is not to make or implement any decisions” and “the safest way to maintain quality is to add extra checks and increase the price”. The system then remembers this new knowledge for future reference; the bureaucratic concrete sets hard; and the whole cycle repeats itself. Ad infinitum.

So, with this clearer insight into the value of bureaucracy and its root cause we can now design an alternative system: to develop knowledge into understanding and, by that route, to improve our capability to make better decisions that lead to predictable, reliable, demonstrable and explainable benefits for everyone. When we do that, the non-specific bureaucracy is seen to impede progress, so it makes sense to dismantle the bits that block improvement – and keep the bits that block poor decisions and that maintain performance. We then get improved quality and lower costs at the same time, quickly, predictably and without taking big risks, and we can reinvest what we have saved in making further improvements and developing more knowledge, a deeper understanding and wiser decisions. Ad infinitum.

The primary focus of Improvement Science is to expand understanding – our ability to decide what to do, and what not to; where and where not to; and when and when not to – and to be able to explain and to demonstrate the “how” and to some extent the “why”.

One proven method is to See, then to Do, and then to Teach. And when we try that we discover, to our surprise, that the person whose understanding increases the most is the teacher!  Which is good, because the deeper the teacher’s understanding, the more flexible, adaptable and open to new learning they become.  Education and bureaucracy are poor partners.

Cause and Effect

Breaking News: “Scientists have discovered that people with yellow teeth are more likely to die of lung cancer. Patient-groups and dentists are now calling for tooth-whitening to be made freely available to everyone.”

Does anything about this statement strike you as illogical? Surely it is obvious. Having yellow teeth does not cause lung cancer – smoking causes both yellow teeth and lung cancer!  Providing a tax-funded tooth-whitening service will be futile – banning smoking is the way to reduce deaths from lung cancer!

What is wrong here? Do we have a problem with mad scientists, misuse of statistics or manipulative journalists? Or all three?

Unfortunately, while we may believe that smoking causes both yellow teeth and lung cancer it is surprisingly difficult to prove it – even when sane scientists use the correct statistics and their results are accurately reported by trustworthy journalists.  It is not easy to prove causality.  So we just assume it.

We all do this many times every day – we infer causality from our experience of interacting with the real world – and it is our innate ability to do that which allows us to say that the opening statement does not feel right.  And we do this effortlessly and unconsciously.

We then use our inferred-causality for three purposes. Firstly, we use it to explain how past actions led to the present situation – the chain of cause-and-effect. Secondly, we use it to create options in the present – our choices of actions. Thirdly, we use it to predict the outcome of our chosen action – we set our expectation and then compare the outcome with our prediction. If the outcome is better than we expected then we feel good; if it is worse then we feel bad.

What we are doing naturally and effortlessly is called “causal modelling”. And it is an impressive skill. It is the skill needed to solve problems by designing ways around them.

Unfortunately, the ability to build and use a causal model does not guarantee that our model is a valid, complete or accurate representation of reality. Our model may be imperfect and we may not be aware of it.  This raises two questions: “How could two people end up with different causal models when they are experiencing the same reality?” and “How do we prove whether either is correct – and if so, which one?”

The issue here is that no two people can perceive reality in exactly the same way – we each have a unique perspective – and that is an inevitable source of variation.

We also tend to assume that what-we-perceive-is-the-truth so if someone expresses a different view of reality then we habitually jump to the conclusion that they are “wrong” and we are “right”.  This unconscious assumption of our own rightness extends to our causal models as well. If someone else believes a different explanation of how we got to where we are, what our choices are and what effect we might expect from a particular action then there is almost endless opportunity for disagreement!

Fortunately our different perceptions agree enough to create common ground which allows us to co-exist reasonably amicably.  But, then we take the common ground for granted, it slips from our awareness, and we then magnify the molehills of disagreement into mountains of discontent.  It is the way our caveman wetware works. It is part of the human condition.

So, if our goal is improvement, then we need to consider a more effective approach: which is to assume that all our causal models are approximate and that they are all works-in-progress. This implies that each of us has two challenges: first to develop a valid causal model by testing it against reality through experimentation; and second to assist the collective development of a common causal model by sharing our individual understanding through explanation and demonstration.

The problem we then encounter is that statistical analysis of historical data cannot answer questions of causality – it is necessary but it is not sufficient – and because it is insufficient it cannot, on its own, create common sense.  For example, there may well be a statistically significant association between “yellow teeth”, “lung cancer” and “premature death”, but knowing those facts is not enough to help us create a valid cause-and-effect model that we can then use to make wiser choices of more effective actions that cause us to live longer.
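The yellow-teeth example can be made concrete with a toy simulation. This is an illustrative sketch only – the population size, smoking rates and risk figures below are invented – but it shows how a common cause (smoking) manufactures a strong statistical association between two things that have no causal link at all, and how stratifying by the true cause makes the association vanish.

```python
import random

random.seed(42)

# Invented model: smoking causes both yellow teeth and lung cancer.
# Yellow teeth do NOT cause lung cancer.
N = 100_000
population = []
for _ in range(N):
    smoker = random.random() < 0.3
    yellow_teeth = random.random() < (0.8 if smoker else 0.1)
    lung_cancer = random.random() < (0.15 if smoker else 0.01)
    population.append((smoker, yellow_teeth, lung_cancer))

def cancer_rate(subset):
    """Fraction of the subset that develops lung cancer."""
    return sum(c for _, _, c in subset) / len(subset)

yellow = [p for p in population if p[1]]
white = [p for p in population if not p[1]]

# A strong association appears: yellow teeth "predict" lung cancer ...
print(f"cancer rate with yellow teeth: {cancer_rate(yellow):.3f}")
print(f"cancer rate with white teeth:  {cancer_rate(white):.3f}")

# ... but holding the true cause (smoking) constant dissolves it.
for s in (True, False):
    ys = cancer_rate([p for p in population if p[0] == s and p[1]])
    ws = cancer_rate([p for p in population if p[0] == s and not p[1]])
    print(f"smoker={s}: yellow {ys:.3f} vs white {ws:.3f}")
```

Whitening the teeth of this simulated population would change nothing, because the model that generates the data never uses tooth colour to decide who gets cancer – which is exactly the distinction that the raw association cannot reveal.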

Learning how to make wiser choices that lead to better outcomes is what Improvement Science is all about – and we need more than statistics – we need to learn how to collectively create, test and employ causal models.

And that has another name – it is called common sense.

Resistance to Change

Many people who are passionate about improvement become frustrated when they encounter resistance-to-change.

It does not matter what sort of improvement is desired – safety, delivery, quality, costs, revenue, productivity or all of them.

The natural and intuitive reaction to meeting resistance is to push harder – and our experience of the physical world has taught us that if we apply enough pressure at the right place then resistance will be overcome and we will move forward.

Unfortunately we sometimes discover that we are pushing against an immovable object and even our maximum effort is futile – so we give up and label it as “impossible”.

Much of Improvement Science appears counter-intuitive at first sight and the challenge of resistance is no different.  The counter-intuitive response to feeling resistance is to pull back, and that is exactly what works better. But why does it work better? Isn’t that just giving up and giving in? How can that be better?

To explain the rationale it is necessary to examine the nature of resistance more closely.

Resistance to change is an emotional reaction to an unconsciously perceived threat that is translated into a conscious decision, action and justification: the response. The range of verbal responses is large, as illustrated in the caption, and the range of non-verbal responses is just as large.  Attempting to deflect or defuse all of them is impractical, ineffective and leads to a feeling of frustration and futility.

This negative emotional reaction we call resistance is non-specific because that is how our emotions work – and it is triggered as much by the way the change is presented as by what the change is.

Many change “experts” recommend selling-rather-than-telling as the better method of “driving” change, and recommend learning psycho-manipulation techniques to achieve it – close-the-deal sales training, for example. Unfortunately this strategy can create a psychological “arms race” which can escalate just as quickly and lead to the same outcome: an emotional battle and psychological casualties. This outcome is often given the generic label of “stress”.

An alternative approach is to regard resistance behaviour as multi-factorial; one model separates the non-specific resistance response into four categories: Why Do – Don’t Do – Can’t Do – Won’t Do.

The Why Do response is valuable feedback because it says “we do not understand the purpose of the proposed change” – and it is not unusual for proposals to be purposeless. This is sometimes called “meddling”.  This is fear of the unknown.

The Don’t Do response is valuable feedback that is saying “there is a risk with this proposed change – an unintended negative consequence that may be greater than the intended positive outcome”.  Often it is very hard to explain this reaction because it is the output of an unconscious thought process that operates out of awareness. It just doesn’t feel good. And some people are better at spotting the risks – they prefer to wear the Black Hat – they are called skeptics.  This is fear of failure.

The Can’t Do is also valuable feedback that is saying “we get the purpose and we can see the problem and the benefit of a change – we just cannot see the path that links the two because it is blocked by something.” This reaction is often triggered by an unconscious recognition that some form of collaborative working will be required but the cultural context is low on respect and trust. It can also just be a manifestation of a knowledge, skill or experience gap – the “I don’t know how to do” gap. Some people habitually adopt the Victim role – most are genuine and do not know how.

The Won’t Do response is also valuable feedback that is saying “we can see the purpose, the problem, the benefit, and the path – but we won’t do it because we don’t trust you”. This reaction is common in a low-trust culture where manipulation, bullying and game-playing are the observed and expected behaviours. The role being adopted here is the Persecutor role – and what is psychologically discounted is caring for others. Persecutors lack empathy.

The common theme here is that all resistance-to-change responses represent valuable feedback – which explains why the better reaction to resistance is to stop talking and start listening, because making progress requires using the feedback to diagnose which components of resistance are present. This is necessary because each category requires a different approach.

For example, Why Do requires making both the problem and the purpose explicit; Don’t Do requires exploring the fear and bringing to awareness what is fuelling it; Can’t Do requires searching for the skill gaps and filling them; and Won’t Do requires identifying the trust-eroding beliefs, attitudes and behaviours and making it safe to talk about them.

Resistance-to-change is usually perceived as a threat when in reality it represents an opportunity to learn and to improve – which is what Improvement Science is all about.

Building a Big Picture from the Small Bits

We are all a small piece of a complex system that extends well beyond the boundaries of our individual experience.

We all know this.

We also know that seeing the big picture is very helpful because it gives us context and meaning, and leads to better decisions and more effective actions.

We feel better when we know where we fit into the Big Picture – and we feel miserable when we do not.

And when our system is not working as well as we would like then we need to improve it; and to do that we need to understand how it works so that we only change what we need to.

To do that we need to see the Big Picture and to understand it.


So how do we build the Big Picture from the Small Bits?

Solving a jigsaw puzzle is a good metaphor for the collective challenge we face. Each of us holds a piece which we know very well because it is what we see, hear, touch, smell and taste every day. But how do we assemble the pieces so that we can all clearly see and appreciate the whole rather than dimly perceive a dysfunctional heap of bits?

One strategy is to look for tell-tale features that indicate where a piece might fit – irrespective of the unique picture on it. Such as the four corners.

We also use this method to group pieces that belong on the sides – but this is not enough  to tell us which side and where on which side each piece fits.

So far all we have are some groups of bits – rough parts of the whole – but no clear view of the picture. To see that we need to look at the detail – the uniqueness of each piece.


Our next strategy is to look at the shapes of the edges to find the pieces that are complementary – that leave no gaps when fitted together. These are our potential neighbours. Sometimes there is only one bit that fits, sometimes there are many that fit well enough.


Our third strategy is to look at the patterns on the potential neighbours and to check for continuity because the picture should flow across the boundary – and a mismatch means we have made an error.

What we have now is the edges of the picture and a heap of bits that go somewhere in the middle.

By connecting the edge-pieces we can see that there are gaps and this is an important insight.

It is not until we have a framework that spans the whole picture that the gaps become obvious.

But we do not know yet if our missing pieces are in the heap or not – we will not know that until we have solved the jigsaw puzzle.


Throughout the problem-dissolving process we are using three levels of content:
Data, which we gain through our senses – in this case our visual system;
Information, which is the result of using context to classify the data – shape and colour for example; and
Knowledge, which we derive from past experience to help us make decisions – “That is a top-left corner so it goes there; that is an edge so it goes in that group; that edge matches that one so they might be neighbours and I will try fitting them together; the picture does not flow so they cannot be neighbours and I must separate them”.

The important point is that we do not need to Understand the picture to do this – we can just use “dumb” pattern-matching techniques, simple logic and brute force to decide which bits go together and which do not. A computer could do it – and we or the computer can solve the puzzle and still not recognise what we are looking at, understand what it means, or be able to make a wise decision.
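The "dumb" pattern-matching described above can be sketched in a few lines of code. This is a minimal illustration – the piece names, edge codes and 2x2 layout are invented – in which pieces are just tuples of edge codes, two edges fit when their codes are complementary, and a brute-force search over all pairs finds the neighbours without ever "seeing" the picture.

```python
# Each piece is a tuple of edge codes (top, right, bottom, left).
# Code +n mates with code -n; code 0 is a flat border edge.
# These four invented pieces happen to form a 2x2 picture:
#   A B
#   C D
pieces = {
    "A": (0, 1, 2, 0),    # top-left corner: two flat edges
    "B": (0, 0, 3, -1),   # top-right corner
    "C": (-2, 4, 0, 0),   # bottom-left corner
    "D": (-3, 0, 0, -4),  # bottom-right corner
}

def fits_right_of(left, right):
    """True if `right` can sit to the right of `left`: the left piece's
    right edge must be complementary to the right piece's left edge."""
    return left[1] != 0 and left[1] == -right[3]

def fits_below(top, bottom):
    """True if `bottom` can sit below `top`."""
    return top[2] != 0 and top[2] == -bottom[0]

# Brute force: test every ordered pair and report the neighbours found.
for a in pieces:
    for b in pieces:
        if a != b and fits_right_of(pieces[a], pieces[b]):
            print(f"{b} goes to the right of {a}")
        if a != b and fits_below(pieces[a], pieces[b]):
            print(f"{b} goes below {a}")
```

The search assembles the whole layout correctly, yet nothing in the program knows what the picture shows – which is exactly the gap between knowledge and understanding.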


To do that we need to search for meaning – and that usually means looking for and recognising symbols that are labels for concepts and using the picture to reveal how they relate to each other.

As we fit the neighbours together we see words and phrases that we may recognise – “Legend” and “cycle” for example – and we can use these labels to start to build a conceptual framework, and from that we create an expectation. Just as we did with the corners and edges.

The word “cycle” implies a circle, which is often drawn as a curved line, so we can use this expectation to look for pieces of a circle and lay them out – just as we did with the edges.

We may not recognise all the symbols – “citric acid” for example – and that finding means that there is new knowledge hidden in the picture. By the end we may understand what those new symbols mean from the context that the Big Picture creates.

By searching for meaning we are doing more than mechanically completing a task – we are learning, expanding our knowledge and deepening our understanding.

But to do this we need to separate the heap of bits so they do not obscure each other and so we can see each clearly. When it is a mess the new learning and deeper understanding will elude us.

We have now found some pieces with lines on that look like parts of a circle, so we can arrange them into an approximate sequence – and when we do that we are delighted to find that the pieces fit together, the pictures flow from one to the other, and there is a sense of order and structure starting to emerge from within the picture itself.

Until now the only structure we saw was the artificial and meaningless boundary.  We now see a new and unfamiliar phrase “citric acid cycle” – what is that? Our curiosity is building.

As we progress we find repeated symbols that we now recognise but do not understand – red and gray circles linked together. In the top right under the word “Legend” we see the same symbols together with some we do recognise – “hydrogen, carbon and oxygen”.

Ah ha! Now we can translate the unfamiliar symbols into familiar concepts, and now we suspect that this is something to do with chemistry. But what?

We are nearly there.  Almost all the pieces are in place and we have identified where the last few fit.

Now we can see that all the pieces are from the same jigsaw, there are none missing and there are no damaged, distorted, or duplicated pieces. The Big Picture looks complete.

We can see that the lines between the pieces are not part of the picture – they are artificial boundaries created when the picture was broken into parts – and useful only for helping us to re-assemble the big picture.

Now they are getting in the way – they are distracting us from seeing the picture as clearly as we could – so we can dispense with them – they have served their purpose.

We can also see that the pieces appear to be arranged in columns and rows – and we could view our picture as a set of interlocked vertical strips or as a set of interlocked horizontal strips – but this is an artificial structure created by our artificial boundaries. The picture we are seeing transcends our artificial linear decomposition.

We erase all the artificial boundaries and the full picture emerges.

Now we can see that we have a chemical system where a series of reactions are linked in a cycle – and we can see something called pyruvate coming in top left and we recognise the symbols water and CO2 and we conclude that this might be part of the complex biochemical system that is called cellular respiration – the process by which the food that we eat and the oxygen we breathe is converted into energy and the CO2 that we breathe out.

Wow!

And we can see that this is just part of a bigger map – the edges were also artificial and arbitrary! But where does the oxygen fit? And which bit is the energy? And what is the link between the carbohydrate that we eat and this new thing called pyruvate?

Our bigger picture and deeper understanding has generated a lot of new questions, there is so much more to explore, to learn and to understand!!


Let us stop and reflect. What have we learned?

We have learned that our piece was not just one of a random heap of unconnected jigsaw bits; we have learned where our piece fits into a Bigger Picture; we have learned how our piece is an essential part of that picture; we have learned that there is a design in the picture and we have learned how we are part of that design.

And when we all know and we all understand the whole design and how it works then we all have a much better chance of being able to improve it in a rational, sensible, explainable and actionable way.

Building the System Picture from the disorganised heap of Step Parts is one of the key skills of an Improvement Science Practitioner.

And the more practice we get, the quicker we recognise what we are looking at – because there are a relatively few effective system designs.

This insight is important because most of the unsolved problems are system problems – and the sooner we can diagnose the system design flaws that are the root causes of those problems, the sooner we can propose, test and implement solutions and experience the expected improvements.

That is a Win-Win-Win strategy.

That is systems engineering in a nutshell.

Targets, Tyrannies and Traps

If we are required to place a sensitive part of our anatomy into a device that is designed to apply significant and sustained pressure, then the person controlling the handle would have our complete attention!

Our sole objective would be to avoid the crushing and relentless pain and this would most definitely bias our behaviour.

We might say or do things that ordinarily we would not – just to escape from the pain.

The requirement to meet well-intentioned but poorly-designed performance targets can create the organisational equivalent of a medieval thumbscrew; and the distorting effect on behaviour is the same.  Some people even seem to derive pleasure from turning the screw!

But what if we do not know how to achieve the performance target? We might then act to deflect the pain onto others – we might become tyrants too – and we might start to apply our own thumbscrews further along the chain of command.  Those unfortunate enough to be at the end of the pecking order have nowhere to hide – and that is a deeply distressing place to be – helpless and hopeless.

Fortunately there is a way out of the corporate torture chamber: It is to learn how to design systems to deliver the required performance specification – and learning how to do this is much easier than many believe.

For example, most assume without question that big queues and long waits are always caused by inefficient use of available capacity – because that is what their monitoring systems report. So out come the thumbscrews, heralded by the chanted mantra “increase utilisation, increase utilisation”.  Unfortunately, this belief is only partially correct: low utilisation of available capacity can and does lead to big queues and long waits, but there is a much more prevalent and insidious cause of long waits that has nothing to do with capacity or utilisation. These little beasties are called time-traps.

The essential feature of a time trap is that it is independent of both flow and time – it adds the same amount of delay irrespective of whether the flow is low or high and irrespective of when the work arrives. In contrast waits caused by insufficient capacity are flow and time dependent – the higher the flow the longer the wait – and the effect is cumulative over time.
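The two footprints can be demonstrated with a toy simulation. The numbers here are invented for illustration (a flat 60-minute trap delay, a 10-minute job time, and three arrival rates): the time-trap adds the same delay however busy the system is, while the wait at the capacity constraint grows rapidly as the flow approaches the capacity.

```python
import random

random.seed(1)

def waits(mean_interarrival, n=20_000):
    """Simulate n jobs arriving at random intervals and return the
    average delay from (a) a time-trap and (b) a capacity constraint."""
    t, server_free = 0.0, 0.0
    trap_delays, queue_delays = [], []
    for _ in range(n):
        t += random.expovariate(1.0 / mean_interarrival)  # next arrival
        trap_delays.append(60.0)          # time-trap: flat 60 min delay,
                                          # whatever the flow is
        start = max(t, server_free)       # capacity: wait for the one server
        server_free = start + 10.0        # 10 min of work per job
        queue_delays.append(start - t)
    return sum(trap_delays) / n, sum(queue_delays) / n

# Higher flow = smaller gap between arrivals.
for gap in (20.0, 12.0, 11.0):
    trap_w, queue_w = waits(gap)
    print(f"arrival gap {gap:>4} min -> time-trap wait {trap_w:.0f} min, "
          f"capacity wait {queue_w:.1f} min")
```

The time-trap column is flat; the capacity column explodes as the 10-minute service time approaches the arrival gap. That is why the two need different treatments: adding capacity cannot shrink a time-trap, and removing a time-trap cannot rescue an overloaded server.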

Many confuse the time-trap with its close relative the batch – but they are not the same thing at all – and most confuse both of these with capacity-constraints which are a completely different delay-generating beast altogether.

The distinction is critical because the treatments for time-traps, batches and capacity-constraints are different – and if we get the diagnosis wrong then we will make the wrong decision, choose the wrong action, and our system will get sicker, or at least no better. The corporate pain will continue and possibly get worse – leading to even more bad behaviour and more desperate and self-destructive strategies.

So when we want to reduce lead times by reducing waiting-in-queues then the first thing we need to do is to search for the time-traps, and to do that we need to be able to recognise their characteristic footprint on our time-series charts; the vital signs of our system.

We need to learn how to create and interpret the charts – and to do that quickly we need guidance from someone who can explain what to look for and how to interpret the picture.

If we lack insight and humility and choose not to learn then we are choosing to stay in the target-tyranny-trap and our pain will continue.

The Power of the Positive Deviants

It is neither reasonable nor sensible to expect anyone to be a font of all knowledge.

And gurus with their group-think are useful but potentially dangerous when they suppress competitive paradigms.

So where does an Improvement Scientist seek reliable and trustworthy inspiration?

Guessing is a poor guide; gut-instinct can seriously mislead; and mind-altering substances are illegal, unreliable or both!

So who are the sources of tested ideas and where do we find them?

They are called Positive Deviants and they are everywhere.


But, the phrase positive deviant does not feel quite right, does it? The word “deviant” has a strong negative emotional association. We are socially programmed from birth to treat deviations from the norm with distrust – and for good reason. Social animals view conformity and similarity as security – it is our herd instinct. Anyone who looks or behaves too far from the norm is perceived as odd, and therefore as a potential threat, and is discounted or shunned.

So why consider deviants at all? Well, because anyone who behaves significantly differently from the majority is a potential source of new insight – so long as we know how to separate the positive deviants from the negative ones.

Negative deviants display behaviours that we could all benefit from actively discouraging!  The NoNo or thou-shalt-not behaviours that are usually embodied in Law.  Killing, stealing, lying, speeding, dropping litter – that sort of thing. The anti-social trust-eroding conflict-generating behaviour that poisons the pond that we all swim in.

Positive deviants display behaviours that we could all benefit from actively encouraging! The NiceIf behaviours. But we are habitually focussed more on self-protection than self-development and we generalise from specifics. So we treat all deviants the same – we are wary of them. And by so doing we miss many valuable opportunities to learn and to improve.


How then do we identify the Positive Deviants?

The first step is to decide the dimension we want to improve and choose a suitable metric to measure it.

The second step is to measure the metric for everyone and do it over time – not just at a point in time. Single point-in-time measurements (snapshots) are almost useless – we can be tricked by the noise in the system into poor decisions.

The third step is to plot our measure-for-improvement as a time-series chart and look at it.  Are there points at the positive end of the scale that deviate significantly from the average? If so – where and who do they come from? Is there a pattern? Is there anything we might use as a predictor of positive deviance?

Now we separate the data into groups guided by our proposed predictors and compare the groups. Do the Positive Deviants now stick out like a sore thumb? Did our predictors separate the wheat from the chaff?
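These steps can be sketched in code. Everything below is hypothetical – the team names, the weekly numbers and the two-standard-error threshold are illustrative choices, not a prescribed method – but it shows the shape of the analysis: measure the metric over time for every unit, compare each unit against the overall picture, and flag the ones that sit significantly above it.

```python
from statistics import mean, stdev

# Steps 1-2: one metric, measured weekly, for every unit (illustrative data)
series = {
    "Team A": [52, 55, 50, 53, 54, 51],
    "Team B": [49, 48, 52, 50, 47, 51],
    "Team C": [68, 71, 69, 70, 72, 67],   # the positive deviant
}

# Step 3: how does each unit's time-series sit against the overall average?
all_points = [x for s in series.values() for x in s]
overall_avg, overall_sd = mean(all_points), stdev(all_points)

# Step 4: flag units whose mean is well above the overall average
# (here: more than two standard errors - an illustrative threshold)
deviants = [name for name, s in series.items()
            if mean(s) > overall_avg + 2 * overall_sd / len(s) ** 0.5]
print(deviants)   # -> ['Team C']
```

Note that the whole time-series is used, not a single snapshot – a one-off high point could just be noise.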

If so we next go and investigate.  We need to compare and contrast the Positive Deviants with the Norms. We need to compare and contrast both their context and their content. We need to know what is similar and what is different. There is something that is causing the sustained deviation and we need to search until we find it – and then we need to know how and why it is happening.

We need to separate associations from causations … we need to understand the chains of events that lead to the better outcomes.

Only then will a new Door to Opportunity magically appear in our Black Wall of Ignorance – a door that leads to a proven path of improvement. A path that has been trodden before by a Positive Deviant – or by a whole tribe of them.

And only we ourselves can choose to open the door and explore the path – we cannot be pushed through by someone else.

When our system is designed to identify and celebrate the Positive Deviants then the negative deviants will be identified too! And that helps too because they will light the path to more NoNos that we can all learn to avoid.

For more about positive deviance, see the Wikipedia article on the subject and the published case studies.

NB: The terms NiceIfs and NoNos are two of the N’s on The 4N Chart® – the other two are Nuggets and Niggles.

Seeing Is Believing or Is It?

Do we believe what we see or do we see what we believe?  It sounds like a chicken-and-egg question – so what is the answer? One, the other or both?

Before we explore further we need to be clear about what we mean by the concept “see”.  I objectively see with my real eyes but I subjectively see with my mind’s eye. So to use the word see for both is likely to result in confusion and conflict and to side-step this we will use the word perceive for seeing-with-our-mind’s-eye.

When we are sure of our belief then we perceive what we believe. This may sound incorrect but psychologists know better – they have studied sensation and perception in great depth and they have proved that we are all susceptible to “perceptual bias”. What we believe we will see distorts what we actually perceive – and we do it unconsciously. Our expectation acts like a bit of ancient stained glass that obscures and distorts some things and paints in a false picture of the rest.  And that is just during the perception process: when we recall what we perceived we can add a whole extra layer of distortion and can actually modify our original memory! If we do that often enough we can become 100% sure we saw something that never actually happened. This is why eye-witness accounts are notoriously inaccurate!

But we do not do this all of the time.  Sometimes we are open-minded, we have no expectation of what we will see or we actually expect to be surprised by what we will see. We like the feeling of anticipation and excitement – of not knowing what will happen next.   That is the psychological basis of entertainment, of exploration, of discovery, of learning, and of improvement science.

An experienced improvement facilitator knows this – and knows how to create a context where deeply held beliefs can be explored with sensitivity and respect; how to celebrate what works and how and why it does; how to challenge what does not; and how to create novel experiences; foster creativity and release new ideas that enhance what is already known, understood and believed.

Through this exploration process our perception broadens, sharpens and becomes more attuned with reality. We achieve both greater clarity and deeper understanding – and it is these that enable us to make wiser decisions and commit to more effective action.

Sometimes we have an opportunity to see for real what we would like to believe is possible – and that can be the pivotal event that releases our passion and generates our commitment to act. It is called the Black Swan effect because seeing just one black swan dispels our belief that all swans are white.

A practical manifestation of this principle is in the rational design of effective team communication – and one of the most effective I have seen is the Communication Cell – a standardised layout of visual information that is easy-to-see and that creates an undistorted perception of reality.  I first saw it many years ago as a trainee pilot when we used it as the focus for briefings and debriefings; I saw it again a few years ago at Unipart where it is used for daily communication; and I have seen it again this week in the NHS where it is being used as part of a service improvement programme.

So if you do not believe then come and see for yourself.

Never Events and Nailing Niggles

Some events should NEVER happen – such as removing the wrong kidney; or injecting an anti-cancer drug designed for a vein into the spine; or sailing a cruise ship over a charted underwater reef; or driving a bus full of sleeping school children into a concrete wall.

But these catastrophic irreversible and tragic Never Events do keep happening – rarely perhaps – but persistently. At the Never-Event investigation the Finger-of-Blame goes looking for the incompetent culprit while the innocent victims call for compensation.

And after the smoke has cleared and the pain of loss has dimmed another Never-Again-Event happens – and then another, and then another. Rarely perhaps – but not never.

Never Events are so awful and emotionally charged that we remember them and we come to believe that they are not rare, and from that misperception we develop a constant nagging feeling of fear for the future. It is our fear that erodes our trust which leads to the paralysis that prevents us from acting.  In the globally tragic event of 9/11 several thousand innocent victims died while the world watched in horror.  More innocent victims than that die needlessly every day in high-tech hospitals from avoidable errors – but that statistic is never shared.

The metaphor that is often used is the Swiss Cheese – the sort on cartoons with lots of holes in it. The cheese represents a quality check – a barrier that catches and corrects mistakes before they cause irreversible damage. But the cheesy check-list is not perfect; it has holes in it.  Mistakes slip through.

So multiple layers of cheesy checks are added in the hope that the holes in the earlier slices will be covered by the cheese in the later ones – and our experience shows that this multi-check design does reduce the number of mistakes that get through. But not completely. And when, by rare chance, holes in each slice line up then the error penetrates all the way through and a Never Event becomes an Actual Catastrophe.  So, the typical recommendation from the after-the-never-event investigation is to add another layer of cheese to the stack – another check on the list on top of all the others.
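The benefit – and the limit – of stacking slices is easy to quantify. The sketch below assumes, purely for illustration, that each check independently misses one error in ten: every extra slice cuts the leakage tenfold, but the chance of penetration never reaches zero – and real checks are neither independent nor that reliable.

```python
def penetration(miss_rate, n_checks):
    """Fraction of errors that slip through a stack of independent checks."""
    return miss_rate ** n_checks

# Illustrative miss-rate of 1-in-10 per slice of cheese
for n in (1, 2, 3, 4):
    print(n, penetration(0.1, n))
# Each extra slice helps, but the chance never reaches zero -
# and this assumes the holes never line up systematically.
```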

But the cheese is not durable: it deteriorates over time with the incessant barrage of work and the pressure of increasing demand. The holes get bigger, the cheese gets thinner, and new holes appear. The inevitable outcome is the opening up of unpredictable, new paths through the cheese to a Never Event; more Never Events; more after-the-never-event investigation; and more slices of increasingly expensive and complex cheese added to the tottering, rotting heap.

A drawback of the Swiss Cheese metaphor is that it gives the impression that the slices are static and each cheesy check has a consistent position and persistent set of flaws in it. In reality this is not the case – the system behaves as if the slices and the holes are moving about: variation is jiggling, jostling and wobbling the whole cheesy edifice.

This wobble does not increase the risk of a Never Event but it prevents the subsequent after-the-event investigation from discovering the specific conjunction of holes that caused it. The Finger of Blame cannot find a culprit and the cause is labelled a “system failure” or an unlucky individual is implicated and named-shamed-blamed and sacrificed to the Gods of Chance on the Altar of Hope! More often new slices of KneeJerk Cheese are added in the desperate hope of improvement – creating an even greater burden of back-covering bureaucracy than before – and paradoxically increasing the number of holes!

Improvement Science offers a more rational, logical, effective and efficient approach to dissolving this messy, inefficient and ineffective safety design.

First it recognises that to prevent a Never Event then no errors should reach the last layer of cheese checking – the last opportunity to block the error trajectory. An error that penetrates that far is a Near Miss and these will happen more often than Never Events so they are the key to understanding and dissolving the problem.

Every Near Miss that is detected should be reported and investigated immediately – because that is the best time to identify the hole in the previous slice – before it wobbles out of sight. The goal of the investigation is understanding not accountability. Failure to report a near miss; failure to investigate it; failure to learn from it; failure to act on it; and failure to monitor the effect of the action are all errors of omission (EOOs) and they are the worst of management crimes.

The question to ask is “What error happened immediately before the Near Miss?”  This event is called a Not Again. Focussing attention on this Not Again and understanding what, where, when, who and how it happened is the path to preventing the Near Miss and the Never Event.  “Why” is not the question to ask – especially when trust is low and cynicism and fear are high – the question to ask is “how”.

The first action after Naming the Not Again is to design a counter-measure for it – to plug the hole – NOT to add another slice of Check-and-Correct cheese! The second necessary action is to treat that Not Again as a Near-Miss and to monitor it so when it happens again the cause can be identified. These common, everyday, repeating causes of Not Agains are called Niggles; the hundreds of minor irritations that we just accept as inevitable. This is where the real work happens – identifying the most common Niggle and focussing all attention on nailing it! Forever.  Niggle naming and nailing is everyone’s responsibility – it is part of business-as-usual – and if leaders do not demonstrate the behaviour and set the expectation then followers will not do it.

So what effect would we expect?

To answer that question we need a better metaphor than our static stack of Swiss cheese slices: we need something more dynamic – something like a motorway!

Suppose you were to set out walking across a busy motorway with your eyes shut and your fingers in your ears – hoping to get to the other side without being run over. What is the chance that you will make it across safely?  It depends on how busy the traffic is and how fast you walk – but say you have a 50:50 chance of getting across one lane safely (which is the same chance as tossing a fair coin and getting a head) – what is the chance that you will get across all six lanes safely? The answer is the same chance as tossing six heads in a row: a 1-in-2 chance of surviving the first lane (50%), a 1 in 4 chance of getting across two lanes (25%), a 1 in 8 chance of making it across three (12.5%) …. to a 1 in 64 chance of getting across all six (1.6%). Said another way that is a 63 out of 64 chance of being run over somewhere which is a 98.4% chance of failure – near certain death! Hardly a Never Event.

What happens to our risk of being run over if the traffic in just one lane is stopped and that lane is now 100% safe to cross? Well you might think that it depends on which lane it is but it doesn’t – the risk of failure is now 31/32 or 96.9% irrespective of which lane it is – so not much improvement apparently!  We have doubled the chance of success though!

Is there a better improvement strategy?

What if we work collectively to just reduce the flow of Niggles in all the lanes at the same time – and suppose we are all able to reduce the risk of a Niggle in our lane-of-influence from 1-in-2 to 1-in-6. How we do it is up to us. To illustrate the benefit we replace our coin with a six-sided die (no pun intended) and we only “die” if we throw a 1.  What happens to our pedestrian’s probability of survival? The chance of surviving the first lane is now 5/6 (83.3%), and both the first and second 5/6 x 5/6 = 25/36 (69.4%), and so on to all six lanes which is 5/6 x 5/6 x 5/6 x 5/6 x 5/6 x 5/6 = 15625/46656 = 33.5% – a lot better than our previous 1.6%!  And what if we keep plugging the holes in our bits of the cheese and we increase our individual lane success rate to 95% – our pedestrian’s probability of survival is now 73.5%. The chance of a catastrophic event becomes less and less.
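For the sceptical, a few lines of Python confirm the figures:

```python
lanes = 6

# Coin-toss lanes: 1-in-2 chance of surviving each lane
print(round(0.5 ** lanes * 100, 1))   # -> 1.6 (% chance of crossing all six)

# Stop one lane completely: five risky lanes remain
print(round(0.5 ** 5 * 100, 1))       # -> 3.1 (%) - doubled, but still poor

# Reduce the Niggle-rate in EVERY lane to 1-in-6 (the die)
print(round((5 / 6) ** lanes * 100, 1))   # -> 33.5 (%)

# Plug more holes: 95% safe per lane
print(round(0.95 ** lanes * 100, 1))  # -> 73.5 (%)
```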

The arithmetic may be a bit scary but the message is clear: to prevent the Never Events we must reduce the Near Misses and to do that we investigate every Near Miss and expose the Not Agains and then use them to Name and Nail all the Niggles.  And we have complete control over the causes of our commonest Niggles because we create them.

This strategy will improve the safety of our system. It has another positive benefit – it will free up our Near Miss investigation team to do something else: it frees them to assist in the re-design of the system so that Not Agains cannot happen at all – they become Never Events too – and the earlier in the path that safety-design happens the better – because it renders the other layers of check-and-correct cheesocracy irrelevant.

Just imagine what would happen in a real system if we did that …

And now try to justify not doing it …

And now consider what an individual, team and organisation would need to learn to do this …

It is called Improvement Science.

And learning the Foundations of Improvement Science in Healthcare (FISH) is one place to start.


Homeostasis

Improvement Science is not just about removing the barriers that block improvement and building barriers to prevent deterioration – it is also about maintaining acceptable, stable and predictable performance.

In fact most of the time this is what we need our systems to do so that we can focus our attention on the areas for improvement rather than running around keeping all the plates spinning.  Improving the ability of a system to maintain itself is a worthwhile and necessary objective.

Long term stability cannot be achieved by assuming a stable context and creating a rigid solution because the World is always changing. Long term stability is achieved by creating resilient solutions that can adjust their behaviour, within limits, to their ever-changing context.

This self-adjusting behaviour of a system is called homeostasis.

The foundation for the concept of homeostasis was first proposed by Claude Bernard (1813-1878) who, unlike most of his contemporaries, believed that all living creatures were bound by the same physical laws as inanimate matter.  In his words: “La fixité du milieu intérieur est la condition d’une vie libre et indépendante” (“The constancy of the internal environment is the condition for a free and independent life”).

The term homeostasis is attributed to Walter Bradford Cannon (1871 – 1945) who was a professor of physiology at Harvard medical school and who popularized his theories in a book called The Wisdom of the Body (1932). Cannon described four principles of homeostasis:

  1. Constancy in an open system requires mechanisms that act to maintain this constancy.
  2. Steady-state conditions require that any tendency toward change automatically meets with factors that resist change.
  3. The regulating system that determines the homeostatic state consists of a number of cooperating mechanisms acting simultaneously or successively.
  4. Homeostasis does not occur by chance, but is the result of organised self-government.

Homeostasis is therefore an emergent behaviour of a system and is the result of organised, cooperating, automatic mechanisms. We know this by another name – feedback control – which is passing data from one part of a system to guide the actions of another part. Any system that does not have homeostatic feedback loops as part of its design will be inherently unstable – especially in a changing environment.  And unstable means untrustworthy.
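Cannon’s principles can be illustrated with a toy feedback loop. The sketch below is purely illustrative – the set-point, gain and drift numbers are all invented – but it shows the essence of homeostasis: one part of the system measures the gap from the set-point and another part acts, proportionally, to close it, even while an outside disturbance keeps pushing the value away.

```python
def homeostat(setpoint=37.0, start=30.0, gain=0.5, drift=-0.3, steps=30):
    """Proportional feedback: correction = gain * (setpoint - measured)."""
    value = start
    for _ in range(steps):
        error = setpoint - value        # the feedback signal
        value += gain * error + drift   # corrective action + constant disturbance
    return value

print(round(homeostat(), 2))   # -> 36.4: stable, close to (not exactly at) 37.0
```

Note that the loop settles at a steady state slightly offset from the set-point – the drift is resisted but not abolished, which is exactly Cannon’s second principle in action. Remove the feedback (set the gain to zero) and the value just drifts away.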

Take driving for example. Our vehicle and its trusting passengers want to get to their desired destination on time and in one piece. To achieve this we will need to keep our vehicle within the boundaries of the road – the white lines – in order to avoid “disappointment”.

As their trusted driver our feedback loop consists of a view of the road ahead via the front windscreen; our vision connected through a working nervous system to the muscles in our arms and legs; to the steering wheel, accelerator and brakes; then to the engine, transmission, wheels and tyres and finally to the road underneath the wheels. It is quite a complicated multi-step feedback system – but an effective one. The road can change direction and unpredictable things can happen and we can adapt, adjust and remain in control.  An inferior feedback design would be to use only the rear-view mirror and to steer by looking at the white lines emerging from behind us. This design is just as complicated but it is much less effective and much less safe because it is entirely reactive.  We get no early warning of what we are approaching.  So, any system that uses the output performance as the feedback loop to the input decision step is like driving with just a rear-view mirror.  Complex, expensive, unstable, ineffective and unsafe.

As the number of steps in a process increases the more important the design of the feedback stabilisation becomes – as does the number of ways we can get it wrong:  Wrong feedback signal, or from the wrong place, or to the wrong place, or at the wrong time, or with the wrong interpretation – any of which result in the wrong decision, the wrong action and the wrong outcome. Getting it right means getting all of it right all of the time – not just some of it right some of the time. We can’t leave it to chance – we have to design it to work.

Let us consider a real example. The NHS 18-week performance requirement.

The stream map shows a simple system with two parallel streams: A and B that each has two steps 1 and 2. A typical example would be generic referral of patients for investigations and treatment to one of a number of consultants who offer that service. The two streams do the same thing so the first step of the system is to decide which way to direct new tasks – to Step A1 or to Step B1. The whole system is required to deliver completed tasks in less than 18 weeks (18/52) – irrespective of which stream we direct work into.   What feedback data do we use to decide where to direct the next referral?

The do nothing option is to just allocate work without using any feedback. We might do that randomly, alternately or by some other means that are independent of the system.  This is called a push design and is equivalent to driving with your eyes shut but relying on hope and luck for a favourable outcome. We will know when we have got it wrong – but it is too late then – we have crashed the system! 

A more plausible option is to use the waiting time for the first step as the feedback signal – streaming work to the first step with the shortest waiting time. This makes sense because the time waiting for the first step is part of the lead time for the whole stream so minimising this first wait feels reasonable – and it is – BUT only in one situation: when the first steps are the constraint steps in both streams [the constraint step is the one that defines the maximum stream flow].  If this condition is not met then we are heading for trouble and the map above illustrates why. In this case Stream A is just failing the 18-week performance target but because the waiting time for Step A1 is the shorter we would continue to load more work onto the failing stream – and literally push it over the edge. In contrast Stream B is not failing and because the waiting time for Step B1 is the longer it is not being overloaded – it may even be underloaded.  So this “plausible” feedback design can actually make the system less stable. Oops!

In our transport metaphor – this is like driving too fast at night or in fog – only being able to see what is immediately ahead – and then braking and swerving to get around corners when they “suddenly” appear and running off the road unintentionally! Dangerous and expensive.

With this new insight we might now reasonably suggest using the actual output performance to decide which way to direct new work – but this is back to driving by watching the rear-view mirror!  So what is the answer?

The solution is to design the system to use the most appropriate feedback signal to guide the streaming decision. That feedback signal needs to be forward looking, responsive and to lead to stable and equitable performance of the whole system – and it may originate from inside the system. The diagram above holds the hint: the predicted waiting time for the second step would be a better choice.  Please note that I said the predicted waiting time – which is estimated when the task leaves Step 1 and joins the back of the queue between Step 1 and Step 2. It is not the actual time the most recent task came off the queue: that is rear-view mirror gazing again.
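The streaming decision can be sketched as follows. The queue sizes, completion rates and stream names below are all hypothetical, and the wait prediction is the crudest possible one (work ahead of the new task divided by the rate it is cleared) – but it captures the design principle: route by the predicted wait at the downstream step, not by the wait at Step 1 and not by historical lead time.

```python
def predicted_wait(queue_length, completion_rate):
    """Forward-looking estimate: work ahead of us / rate it is cleared."""
    return queue_length / completion_rate

# Hypothetical state of the two streams (tasks, tasks per week)
streams = {
    "A": {"queue_before_step2": 40, "step2_rate": 3.0},
    "B": {"queue_before_step2": 18, "step2_rate": 2.0},
}

def route(streams):
    """Send the next referral to the stream with the shorter predicted wait."""
    return min(streams, key=lambda s: predicted_wait(
        streams[s]["queue_before_step2"], streams[s]["step2_rate"]))

print(route(streams))   # -> B (9 weeks predicted vs 13.3 for A)
```

Each routed task lengthens its stream’s queue, so the predictions update and the loads on the two streams stay balanced – the forward-looking feedback stabilises the whole system.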

When driving we look as far ahead as we can, for what we are heading towards, and we combine that feedback with our present speed to predict how much time we have before we need to slow down, when to turn, in which direction, by how much, and for how long. With effective feedback we can behave proactively, avoid surprises, and eliminate sudden braking and swerving! Our passengers will have a more comfortable ride and are more likely to survive the journey! And the better we can do all that the faster we can travel in both comfort and safety – even on an unfamiliar road.  It may be less exciting but excitement is not our objective. On time delivery is our goal.

Excitement comes from anticipating improvement – maintaining what we have already improved is rewarding.  We need both to sustain us and to free us to focus on the improvement work! 

 

Eureka!

This exclamation is most famously attributed to the ancient Greek scholar Archimedes who reportedly proclaimed “Eureka!” when he stepped into a bath and noticed that the water level rose.

Archimedes realised that the volume of water displaced must be equal to the volume of the part of his body he had submerged but this was not why he was allegedly so delighted: he had been trying to solve a problem posed by Hiero of Syracuse who needed to know the purity of gold in an irregular shaped votive crown.

Hiero suspected that his goldsmith was diluting the pure gold with silver and Archimedes knew that the density of pure gold was different from a gold-silver alloy. His bathtime revelation told him that he could now measure the volume of the crown and with the weight he could calculate the density – without damaging the crown.

The story may or may not be true, but the message is important – new understanding often appears in a “flash of insight” when a conscious experience unblocks an unconscious conflict. Reality provides the nudge.

Improvement means change, change means learning, and learning means new understanding.  So facilitating improvement boils down to a series of reality nudges that change our understanding step-by-step.

The problem is that reality is messy and complicated and noisy. There are reality nudges coming at us from all directions and all the time – and to avoid being overwhelmed we filter most of them out – the ones we do not understand.  This unconscious habit of discounting the unknown creates the state of blissful ignorance but has the downside of preventing us from learning and therefore preventing us from improving.

Occasionally a REALLY BIG REALITY NUDGE comes along and we are forced to take notice – this is called a smack – and it is painful and has the downside of creating an angry backlash.

The famous scientist Louis Pasteur is reported to have said “Chance favours the prepared mind” which means that when conditions are right (the prepared mind) a small, random nudge (chance) can trigger a Eureka effect.  What he is saying is that to rely on chance to improve we must prepare the context first.

The way of doing this is called structured reality – deliberately creating a context so the reality nudge has maximum effect.  So to learn and improve and at the same time avoid painful smacks we need to structure the reality so that small nudges are effective – and that is done using carefully designed reality immersion experiences.

The effect is remarkable – it is called the Eureka effect – and it is a repeatable and predictable phenomenon.

This is how the skills of Improvement Science are spread. Facilitators do not do it by delivering a lecture; or by distributing the theory in papers and books; or by demonstrating their results as case studies; or by dictating the actions of others.  Instead they create the context for learning and, if reality does not oblige, at just the right time and place they apply the nudge and …. Eureka!

The critical-to-success factor is creating the context – and that requires an effective design – it cannot be left to chance. 

Leading from the Middle

Cuthbert Simpson is reputed to be the first person to be “stretched” during the reign of Mary I – pulled in more than one direction at the same time while trying, in vain, to satisfy the simultaneous demands of his three interrogators.

Being a middle manager in a large organisation feels rather like this – pulled in many directions trying to satisfy the insatiable appetites for improvement of Governance (quality), Operations (delivery) and Finance (productivity).

The critical-to-survival skill for the over-stretched middle manager is the ability to influence others – or rather three complementary influencing styles.

One dimension is vertical and strategic-tactical and requires using the organisational strategy to influence operational tactics; and to use front line feedback to influence future strategic decisions. This influencing dimension requires two complementary styles of behaviour: followership and leadership.  

One dimension is horizontal and operational and requires influencing peer-middle-managers in other departments. This requires yet a different style of leadership: collaboration.

The successful middle manager is able to switch influencing style as effortlessly as changing gear when driving. Select the wrong style at the wrong time and there is an unpleasant grating of teeth and possibly a painful career-grinding-to-a-halt experience.

So what do these three styles have to do with Improvement Science?

Taking the last point first.  Middle managers are the lynch-pin on which whole system improvement depends.  Whole system improvement is impossible without their commitment – just as a car without a working gearbox is just a heap of near useless junk.  Whole system improvement needs middle managers who are skilled in the three styles of behaviour.

The most important style is collaboration – the ability to influence peers – because that is the key to the other two.  Let us consider a small socioeconomic system that we all have experience of – the family. How difficult is it to manage children when the parent-figures do not get on with each other and broadcast confusingly mixed messages? Almost impossible. The children learn quickly to play one off against the other and sit back and enjoy the spectacle.  And as a child, how difficult is it to manage the parent-figures when you are always fighting and arguing with your siblings and peers and competing with each other for attention? Almost impossible again. Children are much more effective in getting what they want when they learn how to work together.

The same is true in organisations. When influencing from-middle-to-strategic it is more effective to influence your peers first and then work together to make the collective case; and when influencing from-middle-to-tactical it is more effective to influence your peers first and then work together to set clear and unambiguous expectations.

The key survival skill is the ability to influence your peers effectively and that means respect for their opinion, their knowledge, their skill and their time – and setting the same expectation of them. Collaboration requires trust; and trust requires respect; and respect is earned by example.

PS. It also helps a lot to be able to answer the question “Can you show us how?”

NIGYYSOB

This is the image of an infamous headline printed on May 4th 1982 in a well known UK newspaper.  It refers to the sinking of the General Belgrano in the Falklands war.

It is the clarion call of revenge – the payback for past grievances.

The full title is NIGYYSOB which stands for “Now I’ve Got You, You Son Of a B****” and is the name of one of the games in Eric Berne’s Games People Play.  In this case it is a Level 4 Game – played out on the global stage by the armed forces of the protagonists and resulting in both destruction and death.


The NIGYYSOB game is played out much more frequently at Level 1 – in the everyday interactions between people – people who believe that revenge has a sweet taste.

The reason this is important to the world of Improvement Science is because sometimes a well-intentioned improvement can get unintentionally entangled in a game of NIGYYSOB.

Here is how the drama unfolds.

Someone complains frequently about something that is not working – a Niggle – that they believe they are powerless to solve. Their complaints are ignored, discounted or not acted upon because the person with the assumed authority to resolve it cannot do so: they do not know how and will not admit it.  This stalemate can fester for a long time and build up a Reservoir of Resentment. The Niggle persists and keeps irritating the emotional wound, which remains an open cultural sore.  It is not unusual for a well-intentioned third party to intervene to resolve the standoff, but they too are unable to resolve the underlying problem – and all that results is either meddling or diktat, which can actually make the problem worse.

The outcome is a festering three-way stalemate with a history of failed expectations and a deepening Well of Cynicism.

Then someone with an understanding of Improvement Science appears on the scene – and the stage is set for a new chapter of the drama because they risk being “hooked” into The Game.  The newcomer knows how to resolve the problem and, with the grudging consent of the three protagonists, as if by magic, the Niggle is dissolved.  Wow!  The walls of the Well of Cynicism are breached by the new reality and the three protagonists suddenly realise that they may need to radically re-evaluate their worldviews.  That was not expected!

What can happen next is an emotional backlash – rather like a tight elastic band being released at one end. Twang! Snap! Ouch!


We all have the same psychological reaction to a sudden and surprising change in our reality – be it for the better or for the worse. It takes time to adjust to a new worldview and that transition phase is both fragile and unstable; so there is a risk of going off course.

Experience teaches us that it does not take much to knock the tentative improvement over.


The application of Improvement Science will generate transitions that need to be anticipated and proactively managed because if this is not done then there is a risk that the emotional backlash will upset the whole improvement apple-cart.

What appears to occur is: after reality shows that the improvement has worked then the realisation dawns that the festering problem was always solvable, and the chronic emotional pain was avoidable. This comes as a psychological shock that can trigger a reflex emotional response called anger: the emotion that signals the unconscious perception of sudden loss of the old, familiar, worldview. The anger is often directed externally and at the perceived obstruction that blocked the improvement; the person who “should” have known what to do; often the “boss”.  This backlash, the emotional payoff, carries the implied message of “You are not OK because you hold the power, and you could not solve this, and you were too arrogant to ask for help and now I have proved you wrong and that I was right all the time!”  Sweet-tasting revenge?

Unfortunately not. The problem is that this emotional backlash damages the fragile, emerging, respectful relationship and can effectively scupper any future tentative inclinations to improve. The chronic emotional pain returns even worse than before; the Well of Cynicism deepens; and the walls are strengthened and become less porous.

The improvement is not maintained and it dies of neglect.


The reality of the situation was that none of the three protagonists actually knew what to do – hence the stalemate – and the only way out of that situation is for them all to recognise and accept the reality of their collective ignorance – and then to learn together.

Managing the improvement transition is something that an experienced facilitator needs to understand. If there is a them-and-us cultural context; a frustrated standoff; a high-pressure store of accumulated bad feeling; and a deep well of cynicism then that emotional abscess needs to be diagnosed, incised and drained before any attempt at sustained improvement can be made.

If we apply direct pressure to an emotional abscess then it is likely to rupture and squirt us with cynicide; or worse still force the emotional toxin back into the organisation and poison the whole system. (Email is a common path-of-low-resistance for emotional toxic waste!)

One solution is to appreciate that the toxic emotional pressure needs to be released in a safe and controlled way before the healing process can start.  Most of the pain goes away as soon as the abscess is lanced – the rest dissipates as the healing process engages.

One model that is helpful in proactively managing this dynamic is the Elisabeth Kübler-Ross model of grief which describes five stages: denial, anger, bargaining, depression, and acceptance.  Grief is the normal emotional reaction to a sudden change in reality – such as the loss of a loved one – and the same psychological process operates for all emotionally significant changes.  The facilitator just needs to provide a game-free and constructive way to manage the anger by reinvesting the passion into the next cycle of improvement.  A more recent framework for this is the Lewis-Parker model which has seven stages:

  1. Immobilisation – Shock. Overwhelmed mismatch: expectations vs reality.
  2. Denial of Change – Temporary retreat. False competence.
  3. Incompetence – Awareness and frustration.
  4. Acceptance of Reality – ‘Letting go’.
  5. Testing – New ways to deal with new reality.
  6. Search for Meaning – Internalisation and seeking to understand.
  7. Integration – Incorporation of meanings within behaviours.

An effective tool for getting the emotional rollercoaster moving is The 4N Chart® – it allows the emotional pressure and pain to be released in a safe way. The complementary tool for diagnosing and treating the cultural abscess is called AFPS (Argument Free Problem Solving) which is a version of Edward De Bono’s Six Thinking Hats®.

The two are part of the improvement-by-design framework called 6M Design® which in turn is a rational, learnable, applicable and teachable manifestation of Improvement Science.

 

Pushmepullyu

The pushmepullyu is a fictional animal immortalised in the 1967 film Doctor Dolittle, in which Rex Harrison played the doctor who learned from a parrot how to talk to animals.  The pushmepullyu was a rare, mysterious animal that was never captured and displayed in zoos. It had a sharp-horned head at both ends and while one head slept the other stayed awake so it was impossible to sneak up on and capture.

The spirit of the pushmepullyu lives on in Improvement Science as Push-Pull and remains equally mysterious and difficult to understand and explain. It is confusing terminology. So what does Push-Pull actually mean?

To decode the terminology we need to first understand a critical metric of any process – the constraint cycle time (CCT) – and to do that we need to define what the terms constraint and cycle time mean.

Consider a process that comprises a series of steps that must be completed in sequence.  If we put one task through the process we can measure how long each step takes to complete its contribution to the whole task.  This is the touch time of the step and if the resource is immediately available to start the next task this is also the cycle time of the step.

If we now start two tasks at the same time then we will observe that when an upstream step has a longer cycle time than the next step downstream it will shadow the downstream step. In contrast, if the upstream step has a shorter cycle time than the next step downstream then it will expose the downstream step. The differences in the cycle times of the steps will determine the behaviour of the process.

Confused? Probably.  The description above is correct BUT hard to understand because we learn better from reality than from rhetoric; and we find pictures work better than words.  Pragmatic comes before academic; reality before theory.  We need a realistic example to learn from.

Suppose we have a process that we are told has three steps in sequence, and when one task is put through it takes 30 mins to complete.  This is called the lead time and is an important process output metric. We now know it is possible to complete the work in 30 mins so we can set this as our lead time expectation.  

Suppose we plot a chart of lead times in the order that the tasks start and record the start time and lead time for each one – and we get a chart that looks like this. It is called a lead time run chart.  The first six tasks complete in 30 mins as expected – then it all goes pear-shaped. But why?  The run chart does not tell  us the reason – it just alerts us to dig deeper. 

The clue is in the run chart but we need to know what to look for.  We do not know how to do that yet so we need to ask for some more data.

We are given this run chart – which is a count of the number of tasks being worked on recorded at 5 minute intervals. It is the work in progress run chart.

We know that we have a three-step process and three separate resources – one for each step. So we know that if there is a WIP of less than 3 we must have idle resources; and if there is a WIP of more than 3 we must have queues of tasks waiting.

We can see that the WIP run chart looks a bit like the lead time run chart.  But it still does not tell us what is causing the unstable behaviour.

In fact we do already have all the data we need to work it out but it is not intuitively obvious how to do it. We feel we need to dig deeper.

 We decide to go and see for ourselves and to observe exactly what happens to each of the twelve tasks and each of the three resources. We use these observations to draw a Gantt chart.

Now we can see what is happening.

We can see that the cycle time of Step 1 (green) is 10 mins; the cycle time for Step 2 (amber) is 15 mins; and the cycle time for Step 3 (blue) is 5 mins.

 

This explains why the minimum lead time was 30 mins: 10+15+5 = 30 mins. OK – that makes sense now.

Red means tasks waiting and we can see that a lead time longer than 30 mins is associated with waiting – which means one or more queues.  We can see that there are two queues – the first between Step 1 and Step 2 which starts to form at Task G and then grows; and the second before Step 1 which first appears for Task J  and then grows. So what changes at Task G and Task J?

Looking at the chart we can see that the slope of the left hand edge is changing – it is getting steeper – which means tasks are arriving faster and faster. We look at the interval between the start times and it confirms our suspicion. This data was the clue in the original lead time run chart. 

Looking more closely at the differences between the start times we can see that the first three tasks arrive one every 20 mins; the next three one every 15 mins; the next three one every 10 mins; and the last three one every 5 mins.

Ah ha!

Tasks are being pushed  into the process at an increasing rate that is independent of the rate at which the process can work.     
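A short simulation sketch can confirm this diagnosis. The code below is illustrative (it assumes, as in the example, that each step is a single resource with a fixed cycle time) and reproduces the pattern in the lead time run chart: the first six tasks take 30 mins and then the lead time climbs as the queues grow.

```python
# Illustrative sketch (an assumption, not the author's data): model each step
# as a single resource with a fixed cycle time and push tasks A..L in at the
# accelerating rate observed above.
cycle_times = [10, 15, 5]                          # Step 1, 2, 3 (mins)
gaps = [20, 20, 15, 15, 15, 10, 10, 10, 5, 5, 5]   # intervals between starts

arrivals = [0]
for gap in gaps:
    arrivals.append(arrivals[-1] + gap)            # start times of tasks A..L

free = [0, 0, 0]       # time at which each step's resource next becomes free
lead_times = []
for arr in arrivals:
    t = arr
    for i, ct in enumerate(cycle_times):
        start = max(t, free[i])                    # queue if the step is busy
        free[i] = start + ct
        t = free[i]                                # move on when step is done
    lead_times.append(t - arr)

print(lead_times)    # → [30, 30, 30, 30, 30, 30, 35, 40, 45, 55, 65, 75]
```

Changing the gaps list shows just how sensitive the lead time is to the arrival rate.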

When we compare the rate of arrival with the cycle time of each step in a process we find that one step will be most exposed – it is called the constraint step and it is the step that controls the flow in the whole process. The constraint cycle time is therefore the critical metric that determines the maximum flow in the whole process – irrespective of how many steps it has or where the constraint step is situated.

If we push tasks into the process slower than the constraint cycle time then all the steps in the process will be able to keep up and no queues will form – but all the resources will be under-utilised. Tasks A to C.

If we push tasks into the process faster than the cycle time of any step then queues will grow upstream of these multiple constraint steps – and those queues will grow bigger, take up space and take up time, and will progressively clog up the resources upstream of the constraints while starving those downstream of work. Tasks G to L.

The optimum is when the work arrives at the same rate as the cycle time of the constraint – this is called pull and it means that the constraint acts as the pacemaker and is used to pull the work into the process. Tasks D to F.

With this new understanding we can see that the correct rate to load this process is one task every 15 mins – the cycle time of Step 2.

We can use a Gantt chart to predict what would happen.

The waiting is eliminated, the lead time is stable and meets our expectation, and when task B arrives the WIP is 2 and stays stable.

In this example we can see that there is now spare capacity at the end for another task – we could increase our productivity; and we can see that we need less space to store the queue which also improves our productivity.  Everyone wins. This is called pull scheduling.  Pull is a more productive design than push. 
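A simulation sketch (illustrative code, assuming three single-resource steps with cycle times of 10, 15 and 5 mins, as in the example) predicts the same result as the Gantt chart: with one task arriving every 15 mins the lead time stays stable at 30 mins.

```python
# Illustrative sketch: the assumed three-step line, now loaded at the
# constraint cycle time - one new task every 15 mins (pull scheduling).
cycle_times = [10, 15, 5]
arrivals = [15 * k for k in range(12)]    # one task per constraint cycle

free = [0, 0, 0]
lead_times = []
for arr in arrivals:
    t = arr
    for i, ct in enumerate(cycle_times):
        start = max(t, free[i])           # with pull, the wait is always zero
        free[i] = start + ct
        t = free[i]
    lead_times.append(t - arr)

print(lead_times)    # → [30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30]
```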

To improve process productivity it is necessary to measure the sequence and cycle time of every step in the process.  Without that information it is impossible to understand and rationally improve our process.     

BUT in reality we have to deal with variation – in everything – so imagine how hard it is to predict how a multi-step process will behave when work is being pumped into it at a variable rate and resources come and go! No wonder so many processes feel unpredictable, chaotic, unstable, out-of-control and impossible to both understand and predict!

This feeling is an illusion because by learning and using the tools and techniques of Improvement Science it is possible to design these complex systems and to predict-within-limits how they will behave.  Improvement Science can unravel this Gordian knot!  And it is not intuitively obvious. If it were, we would already be doing it.

FISH

Several years ago I read an inspirational book called Fish! which recounts the tale of a manager who is given the task of “sorting out” the worst department in her organisation – a department that everyone hated to deal with and that everyone hated to work in. The nickname was The Toxic Energy Dump.

The story retells how, by chance, she stumbled across help in the unlikeliest of places – the Pike Place fish market in Seattle.  There she learned four principles that transformed her department and her worklife:

1. Work Made Fun Gets Done
2. Make Someone’s Day
3. Be Fully Present
4. Choose Your Attitude

 The take home lesson from Fish! is that we make our work miserable by the way we behave towards each other.   So if we are unhappy at work and we do nothing about our behaviour then our misery will continue.

This means we can choose to make work enjoyable – and it is the responsibility of leaders at all levels to create the context for this to happen.  Miserable staff = poor leadership.  And leadership starts with the leader.  

  • Effective leadership is inspiring others to achieve through example.
  • Leadership does not work without trust. 
  • Play is more than an activity – it is creative energy – and requires a culture of trust not a culture of fear. 
  • To make someone’s day all you need to do is show them how much you appreciate them. 
  • The attitude and behaviour of a leader has a powerful effect on those that they lead.
  • Effective leaders know what they stand for and ask others to hold them to account.

FISH has another meaning – it stands for Foundations of Improvement Science for Health – and it is the core set of skills needed to create a SELF – a Safe Environment for Learning and Fun.  The necessary context for culture change. It is more than that though – FISH also includes the skills to design more productive processes – releasing valuable lifetime and energy to invest in creative fun.  

Fish are immersed in their environment – and so are people. We learn by immersion in reality. Rhetoric – be it thinking, talking or writing – is a much less effective teacher.

So all we have to do is co-create a context for improvement and then immerse ourselves in it. The improvement that results is an inevitable consequence of the design. We design our system for improvement and it improves itself.

To learn more about Foundations of Improvement Science for Health (FISH)  click: here 

The Three Faces of Improvement Science

There is always more than one way to look at something and each perspective is complementary to the others.

Improvement Science has three faces: the first is the Process face; the second is the People face; and the third is the System face – each represented in the logo with a different colour.

The process face is the easiest to start with because it is logical, objective and absolute.  It describes the process; the what, where, when and how. It is the combination of the hardware and the software; the structure and the function – and it is constrained by the Laws of Physics.

The people face is emotional, subjective and relative.  It describes the people and their perceptions and their purposes. Each person interacts both with the process and with each other and their individual beliefs and behaviours drive the web of relationships. This is the world of psychology and politics.

The system face is neither logical nor emotional – it has characteristics that are easy to describe but difficult to define. Characteristics such as self-organisation; emergent behaviour; and complexity.  Our brains do not appear to be able to comprehend systems as easily and intuitively as we might like to believe. This is one reason why systems often feel counter-intuitive, unpredictable and mysterious. We discover that we are unable to make intuitive decisions that result in whole system improvement because our intuition tricks us.

Gaining confidence and capability in the practical application of Improvement Science requires starting from our zone of relative strength – our conscious, logical, rational, explainable, teachable, learnable, objective dependency on the physical world. From this solid foundation we can explore our zone of self-control – our internal unconscious, psychological and emotional world; and from there to our zone of relative weakness – the systemic world of multiple interdependencies that, over time, determine our individual and collective fate.

The good news is that the knowledge and skills we need to handle the rational physical process face are easy and quick to learn.  It can be done with only a short period of focussed, learning-by-doing.  With that foundation in place we can then explore the more difficult areas of people and systems.

 

 

The Devil and the Detail

There are two directions from which we can approach an improvement challenge. From the bottom up – starting with the real details and distilling the principle later; and from the top down – starting with the conceptual principle and doing the detail later.  Neither is better than the other – both are needed.

As individuals we have an innate preference for real detail or conceptual principle – and our preference is manifest by the way we think, talk and behave – it is part of our personality.  It is useful to have insight into our own personality and to recognise that when other people approach a problem in a different way then we may experience a difference of opinion, a conflict of styles, and possibly arguments.  

One very well established model of personality type was proposed by Carl Gustav Jung, a psychologist who approached the subject from the perspective of understanding psychological “illness”.  Jung’s “Psychological Types” was used as the foundation of the life-work of Isabel Briggs Myers, who was not a psychologist and who was looking from the direction of understanding psychological “normality”. In her book Gifts Differing – Understanding Personality Type (ISBN 978-0891-060741) she demonstrates with empirical data that there is not one normal or ideal type that we all deviate from – rather there is a set of stable types, each representing a “different gift”. By this she means that different personality types are suited to different tasks: when the type resonates with the task it results in high performance and is seen as an asset or “strength”, and when it does not it results in low performance and is seen as a liability or “weakness”.

One of the multiple dimensions of the Jungian and Myers-Briggs personality type model is the Sensor–iNtuitor (S-N) dimension. This dimension represents where we hold the reference model that provides us with data – data that we convert to information – and information that we use to derive decisions and actions.

A person who is naturally inclined to the Sensor end of the S-N dimension prefers to use Reality and Actuality as their reference – and they access it via their senses – sight, sound, touch, smell and taste. They are often detail and data focussed; they trust their senses and their conscious awareness; and they are more comfortable with routine and structure.  

A person who is naturally inclined to the iNtuitor end of the S-N dimension prefers to use Rhetoric and Possibility as their reference – an internal conceptual model that they access via their intuition. They are often principle and concept focussed and discount what their senses tell them in favour of their intuition. Intuitors feel uncomfortable with routine and structure, which they see as barriers to improvement.

So when a Sensor and an iNtuitor are working together to solve a problem they are approaching it from two different directions, and even when they have a common purpose, common values and a common objective it is very likely that conflict will occur if they are unaware of their different gifts.

Gaining this awareness is a key to success because the synergy of the two approaches is greater than either working alone – the sum is greater than the parts – but only if there is awareness and mutual respect for the different gifts.  If there is no awareness and low mutual respect then the sum will be less than the parts and the problem will not be dissolvable.

In her research, Isabel Briggs Myers found that about 60% of high school students have a preference for S and 40% have a preference for N – but when the “academic high flyers” were surveyed the ratio was S=17% and N=83% – and there was no difference between males and females.  When she looked at the S-N distribution in different training courses she discovered that there was a higher proportion of S-types in Administrators (59%), Police (80%), and Finance (72%) and a higher proportion of N-types in Liberal Arts (59%), Engineering (65%), Science (83%), Fine Arts (91%), Occupational Therapy (66%), Art Education (87%), Counselor Education (85%), and Law (59%).  Her observations suggested that individuals select subjects based on their “different gifts”, and this throws an interesting light on why traditional professions may come into conflict and perhaps why large organisations tend to form departments of “like-minded individuals” – departments with names like Finance, Operations and Governance, or FOG.

This insight also offers an explanation for the conflict between “strategists” who tend to be N-types and who naturally gravitate to the “manager” part of an organisation and the “tacticians” who tend to be S-types and who naturally gravitate to the “worker” part of the same organisation.

It has also been shown that conventional “intelligence tests” favour the N-types over the S-types, which suggests why highly intelligent academics may perform very poorly when asked to apply their concepts and principles in the real world. Effective action requires pragmatists – but academics tend to congregate in academic institutions – often disrespectfully labelled by pragmatists as “Ivory Towers”.

Unfortunately this innate tendency to seek like-types is counter-productive because it reinforces the differences, exacerbates the communication barriers, and leads to the “tribal”, “disrespectful” and “trust-eroding” behaviour, and the “organisational silos”, that are often evident.

Complex real-world problems cannot be solved this way because they require the synergy of the gifts – each part playing to its strength when the time is right.

The first step to know-how is self-awareness.

If you would like to know your Jungian/MBTI® type you can do so by getting the app: HERE

Flap-Flop-Flip

The world seems to be getting itself into a real flap at the moment.

The global economy is showing signs of faltering – the perfect dream of eternal financial growth is showing cracks and looking increasingly tarnished.

The doom mongers are surprisingly quiet – perhaps because they do not have any new ideas either.


It feels like the system is heading for a big flop and that is not a great feeling.

Last week I posed the Argument-Free-Problem-Solving challenge – and some were curious enough to have a go. It seems that the challenge needs more explanation of how it works to create enough engagement to climb the scepticism barrier.

At the heart of the AFPS method is The 4N Chart® – a simple, effective and efficient way to get a balanced perspective of the emotional contours of the change terrain.  The improvement process boils down to recognising, celebrating, and maintaining the Nuggets, flipping the Niggles into NoNos and reinvesting the currencies that are released into converting NiceIfs into more Nuggets.

The trick is the flip.


To perform a flip we have to make our assumptions explicit – which means we have to use external reality to challenge our internal rhetoric.  We need real data – presented in an easily digestible format – as a picture – and in context which converts the data into information that we can then ingest and use to grow our knowledge and broaden our understanding.

To convert knowledge into understanding we must ask a question: “Is our assumption a generalisation from a specific experience?”

For example – it is generally assumed that high utilisation is associated with high productivity – and we want high productivity so we push for high utilisation.  And if we look at reality we can easily find evidence to support our assumption.  If I have under-utilised fixed-cost resources and I push more work into the process, I see an increase in the flow in the stream, an increase in utilisation, an increase in revenue, and no increase in cost – higher output: higher productivity.

But if we look more carefully we can also find examples that seem to disprove our assumption. I have under-utilised resources and I push more work into the process, and the flow increases initially then falls dramatically, the revenue falls, productivity falls – and when I look at all my resources they are still fully utilised.  The system has become gridlocked – and when I investigate I discover that the resource I need to unlock the flow is tied up somewhere else in the process with more urgent work. My system does not have an anti-deadlock design.
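The simpler, non-gridlocked half of this effect can be sketched with a toy simulation (illustrative code with hypothetical numbers; it models the queueing effect, not the deadlock itself): pushing work in faster than the constraint keeps the constraint fully busy, yet the output rate stays capped and the lead time grows without limit.

```python
# Illustrative toy model (hypothetical numbers): a two-step line with cycle
# times of 10 and 15 mins, fed one task every 10 mins - faster than its
# 15-min constraint can process them.
cycle_times = [10, 15]                    # step 2 is the constraint
arrivals = [10 * k for k in range(12)]    # pushed in every 10 mins

free = [0, 0]
finish, lead_times = [], []
for arr in arrivals:
    t = arr
    for i, ct in enumerate(cycle_times):
        start = max(t, free[i])           # queue in front of a busy step
        free[i] = start + ct
        t = free[i]
    finish.append(t)
    lead_times.append(t - arr)

print(lead_times[:4])             # → [25, 30, 35, 40]  (growing by 5/task)
print(finish[-1] - finish[-2])    # → 15  (output still one task per 15 mins)
```

Utilisation has gone up, but productivity has not – one specific counter-example is enough to flip the generalisation.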

Our rhetoric of generalisation has been challenged by the reality of specifics – and it only takes one example.  One black swan will disprove the generalisation that “all swans are white”.

We now know we need to flip the “general assumption” into “specific evidence” – changing the words “all”, “always”, “none” and “never” into “some” and “sometimes”.

In our example we flip our assumption into “sometimes utilisation and productivity go up together, and sometimes they do not”. This flip reveals a new hidden door in the invisible wall that limits the breadth of our understanding and that unconsciously hinders our progress.

To open that door we must learn how to tell one specific from another and opening that door will lead to a path of discovery, more knowledge, broader understanding, deeper wisdom, better decisions, more effective actions and sustained improvement.

Flap-Flop-Flip.


This week has seen the loss of one of the greatest Improvement Scientists – Steve Jobs – co-founder of Apple – who put the essence of Improvement Science into words more eloquently than anyone in his 2005 commencement address at Stanford University.

“Your time is limited, so don’t waste it living someone else’s life. Don’t be trapped by dogma – which is living with the results of other people’s thinking. Don’t let the noise of other’s opinions drown out your own inner voice. And most important, have the courage to follow your heart and intuition. They somehow already know what you truly want to become. Everything else is secondary.” Steve Jobs (1955-2011).

And with a lifetime of experience of leading an organisation that epitomises quality by design Steve Jobs had the most credibility of any person on the planet when it comes to management of improvement.

Argument-Free-Problem-Solving

I used to be puzzled when I reflected on the observation that we seem to be able to solve problems as individuals much more quickly and with greater certainty than we could as groups.

I used to believe that having many different perspectives of a problem would be an asset – but in reality it seems to be more of a liability.

Now when I receive an invitation to a meeting to discuss an issue of urgent importance my little heart sinks as I recall the endless hours of my limited life-time wasted in worthless, unproductive discussion.

But, not being one to wallow in despair, I have been busy applying the principles of Improvement Science to this ubiquitous and persistent niggle.  And I have discovered something called Argument Free Problem Solving (AFPS) – or rather, that is my name for it, because it does what it says on the tin – it solves problems without arguments.

The trick was to treat problem-solving as a process; to understand how we solve problems as individuals; what the worthwhile bits are; how we scupper the process when we add in more than one person; and then how to design-to-align the problem-solving workflow so that it … flows. So that it is effective and efficient.

The result is AFPS and I’ve been testing it out. Wow! Does it work or what!

I have also discovered that we do not need to create an artificial set of Rules or a Special Jargon – we can apply the recipe to any situation in a very natural and unobtrusive way.  Just this week I have seen it work like magic several times: once in defusing what was looking like a big bust-up looming; once to resolve a small niggle that had been magnified into a huge monster and a big battle – the smoke of which was obscuring the real win-win-win opportunity; and once in a collaborative process improvement exercise that demonstrated a 2000% improvement in system productivity – yes – two thousand percent!

So AFPS has been added to the Improvement Science treasure chest and (because I like to tease and have fun) I have hidden the key in cyberspace at coordinates http://www.saasoft.com/moodle

Mwah ha ha ha – me hearties! 

The Cost of Distrust

Previously we have explored “costs” associated with processes and systems – costs that could be avoided through the effective application of Improvement Science. The Cost of Errors. The Cost of Queues. The Cost of Variation.

These costs are large, additive and cumulative and yet they pale into insignificance when compared with the most potent source of cost. The Cost of Distrust.

The picture is of Sue Sheridan and the link below is to a video of Sue telling her story of betrayed trust in a health care system.  She describes the tragic consequences of trust-eroding health care system behaviour.  Sue is not bitter though – she remains hopeful that her story will bring everyone to the table of Safety Improvement.

View the Video

The symptoms of distrust are easy to find. They are written on the faces of the people; broadcast in the way they behave with each other; heard in what they say; and felt in how they say it. The clues are also in what they do not do and what they do not say. What is missing is as important as what is present.

There are tangible signs of distrust too – checklists, application-for-permission forms, authorisation protocols, exception logs, risk registers, investigation reports, guidelines, policies, directives, contracts and all the other machinery of the Bureaucracy of Distrust.

The intangible symptoms of distrust and the tangible signs of distrust both have an impact on the flow of work. The untrustworthy behaviour creates dissatisfaction, demotivation and conflict; the bureaucracy creates handoffs, delays and queues.  All are potent sources of more errors, delays and waste.

The Cost of Distrust is counted on all three dimensions – emotional, temporal and financial.

It may appear impossible to assign a financial cost to distrust because of the complex interactions between the three dimensions in a real system; so one way to approach it is to estimate the cost of a high-trust system.  A system in which trustworthy behaviour is explicit and trust-eroding behaviour is promptly and respectfully challenged.

Picture such a system and consider these questions:

  • How would it feel to work in a high-trust system where you know that trust-eroding behaviour will be challenged with respect?
  • How would it feel to be the customer of a high-trust system?
  • What would be the cost of a system that did not need the Bureaucracy of Distrust to deliver safety and quality?

Trust-eroding behaviours are not reduced by decree, threat, exhortation, name-shame-blame, or pleading because all of these are based on the assumption of distrust and say “I do not trust you to do this without my external motivation”. These attitudes and behaviours give away the “I am OK but You are Not OK” belief.

Trust-eroding behaviours are most effectively reduced by a collective charter: a group of people state which behaviours they will not accept and individually commit to avoiding and challenging them. The charter is the tangible sign of the peer support that empowers everyone to challenge with respect, because they have the collective authority to do so. Authority that is made explicit through the collective charter: “We the undersigned commit to respectfully challenge the following trust-eroding behaviours …”.

It requires confidence and competence to open a conversation about distrust with someone else, and that confidence comes from insight, instruction and practice. The easiest person to practice with is ourselves – it takes courage and it is worth the investment – which means asking and answering two questions:

Q1: What behaviours would erode my trust in someone else?

Make a list and rank it in order with the most trust-eroding at the top.

Q2: Do I ever exhibit any of the behaviours I have just listed?

Choose just one from your list that you feel you can commit to – and make a promise to yourself – every time you demonstrate the behaviour make a mental note of:

  • When did it happen?
  • Where did it happen?
  • Who was present?
  • What had just happened?
  • How did you feel?

You do not need to actively challenge your motives, or to actively change your behaviour – you just need to connect up your own emotional feedback loop.  The change will happen as if by magic!

Doing Our Way to New Thinking

Most of our thinking happens out of awareness – it is unconscious. Most of the data that pours in through our senses never reaches awareness either – but that does not mean it does not have an impact on what we remember, how we feel and what we decide and do in the future. It does.

Improvement Science is the knowledge of how to achieve sustained change for the better; and doing that requires an ability to unlearn unconscious knowledge that blocks our path to improvement – and to unlearn selectively.

So how can we do that if it is unconscious? Well, there are at least two ways:

1. Bring the unconscious knowledge to the surface so it can be examined, sorted, kept or discarded. This is done through the social process of debate and discussion. It does work though it can be a slow and difficult process.

2. Do the unlearning at the unconscious level – and we can do that by using reality rather than rhetoric. The easiest way to connect ourselves to reality is to go out there and try doing things.

When we deliberately do things we are learning unconsciously, because most of our sensory data never reaches awareness.  When we are just thinking, the unconscious is relatively unaffected: talking and thinking are the same conscious process. Discussion and dialogue operate at the conscious level but differ in style – discussion is more competitive; dialogue is more collaborative.

The door to the unconscious is controlled by emotions – and it appears that learning happens more effectively and more efficiently in certain emotional states. Some emotional states can impair learning; such as depression, frustration and anxiety. Strong emotional states associated with dramatic experiences can result in profound but unselective learning – the emotionally vivid memories that are often associated with unpleasant events.  Sometimes the conscious memory is so emotionally charged and unpleasant that it is suppressed – but the unconscious memory is not so easily erased – so it continues to influence but out of awareness. The same is true for pleasant emotional experiences – they can create profound learning experiences – and the conscious memory may be called an inspirational or “eureka” moment – a sudden emotional shift for the better. And it too is unselective and difficult to erase.

An emotionally safe environment for doing new things and having fun at the same time comes close to the ideal context for learning. In such an environment we learn without effort. It does not feel like work – yet we know we have done work because we feel tired afterwards.  And if we were to record the way that we behave and talk before the doing, and again afterwards, then we will measure a change even though we may not notice the change ourselves. Other people may notice before we do – particularly if the change is significant – or if they only interact with us occasionally.

It is for this reason that keeping a personal journal is an effective way to capture the change in ourselves over time.  

The Jungian model of personality types states that there are three dimensions to personality (Isabel Briggs Myers added a fourth later to create the MBTI®).

One dimension describes where we prefer to go for input data – sensors (S) use external reality as their reference – intuitors (N) use their internal rhetoric.

Another dimension is how we make decisions –  thinkers (T) prefer a conscious, logical, rational, sequential decision process while feelers (F) favour an unconscious, emotional, “irrational”, parallel approach.

The third dimension is where we direct the output of our decisions – extraverts (E) direct it outwards into the public outside world while introverts (I) direct it inwards to their private inner world.

Irrespective of our individual preferences, experience suggests that an effective learning sequence starts with our experience of reality (S) and depending how emotionally loaded it is (F) we may then internalise the message as a general intuitive concept (N) or a specific logical construct (T).

The implication of this is that to learn effectively and efficiently we need to be able to access all four modes of thinking and to do that we might design our teaching methods to resonate with this natural learning sequence, focussing on creating surprisingly positive reality-based emotional experiences first. And we must be mindful that if we skip steps or create too many emotionally negative experiences we may unintentionally impair the effectiveness of the learning process.

A carefully designed practical exercise that takes just a few minutes to complete can be a much more effective and efficient way to teach a profound principle than to read libraries of books or to listen to hours of rhetoric.  Indeed some of the most dramatic shifts in our understanding of the Universe have been facilitated by easily repeatable experiments.

Intuition and emotions can trick us – so Doing Our Way to New Thinking may be a better improvement strategy.

Reality trumps Rhetoric

One of the biggest challenges posed by Improvement is the requirement for beliefs to change – because static beliefs imply stagnated learning and arrested change.  We all display our beliefs for all to hear and see through our language – word and deed – our spoken language and our body language – and what we do not say and do not do is as important as what we do say and what we do do.  Let us call the whole language thing our Rhetoric – the external manifestation of our internal mental model.

Disappointingly, exercising our mental model does not seem to have much impact on Reality – at least not directly. We do not seem to be able to perform acts of telepathy or telekinesis. We are not like the Jedi knights in the Star Wars films who have learned to master the Force – for good or bad. We are not like the wizards in the Harry Potter stories who have mastered magical powers – again for good or bad. We are weak-minded muggles and Reality is spectacularly indifferent to our feeble powers. No matter what we might prefer to believe – Reality trumps Rhetoric.

Of course we can sidestep this uncomfortable feeling by resorting to the belief of One Truth which is often another way of saying My Opinion – and we then assume that if everyone else changed their belief to our belief then we would have full alignment, no conflict, and improvement would automatically flow.  What we actually achieve is a common Rhetoric about which Reality is still completely indifferent.  We know that if we disagree then one of us must be wrong or rather un-real-istic; but we forget that even if we agree then we can still both be wrong. Agreement is not a good test of the validity of our Rhetoric. The only test of validity is Reality itself – and facing the unfeeling Reality risks bruising our rather fragile egos – so we shy away from doing so.

So one way to facilitate improvement is to employ Reality as our final arbiter and to do this respectfully.  This is why teachers of improvement science must be masters of improvement science. They must be able to demonstrate their Improvement Science Rhetoric by using Reality, and their apprentices need to see the IS Rhetoric applied to solving real problems. One way to do this is for the apprentices to do it themselves, for real, with the guidance of an IS master and in a safe context where they can make errors and not damage their egos. When this is done what happens is almost magical – the Rhetoric changes – the spoken language and the body language change – what is said and what is done changes – and what is not said and not done changes too. And very often the change is not noticed, at least by those who change.  We only appear to have one mental model: only one view of Reality – so when it changes, we change.

It is also interesting to observe that this evolution of Rhetoric does not happen immediately or in one blinding flash of complete insight. We take small steps rather than giant leaps. More often the initial emotional reaction is confusion because our experience of the Reality clashes with the expectation of our Rhetoric.  And very often the changes happen when we are asleep – it is almost as if our minds work on dissolving the confusion when they are not distracted with the demands of awake-work; almost like we are re-organising our mental model structure when it is offline. It is very common to have a sleepless night after such a Reality Check and to wake with a feeling of greater clarity – our updated mental model declaring itself as our New Rhetoric. Experienced facilitators of Improvement Science understand this natural learning process and that it happens to everyone – including themselves. It is this feeling of increased clarity, deeper understanding, and released energy that is the buzz of Improvement Science – the addictive drug.  We learn that our memory plays tricks on us; and what was conflict yesterday becomes confusion today and clarity tomorrow. One behaviour that often emerges spontaneously is the desire to keep a journal – sometimes at the bedside – to capture the twists and turns of the story of our evolving Rhetoric.

This blog is just such a journal.

Design-for-Productivity

One tangible output of a process or system design exercise is a blueprint.

This is the set of Policies that define how the design is built and how it is operated so that it delivers the specified performance.

These are just like the blueprints for an architectural design, the latter being the tangible structure, the former being the intangible function.

A computer system has the same two interdependent components that must be co-designed at the same time: the hardware and the software.


The functional design of a system is manifest as the Seven Flows and one of these is Cash Flow, because if the cash does not flow to the right place at the right time in the right amount then the whole system can fail to meet its design requirement. That is one reason why we need accountants – to manage the money flow – so a critical component of the system design is the Budget Policy.

We employ accountants to police the Cash Flow Policies because that is what they are trained to do and that is what they are good at doing – they are the Guardians of the Cash.

Providing flow-capacity requires providing resource-capacity, which requires providing resource-time; and because resource-time-costs-money then the flow-capacity design is intimately linked to the budget design.

This raises some important questions:
Q: Who designs the budget policy?
Q: Is the budget design done as part of the system design?
Q: Are our accountants trained in system design?

The challenge for all organisations is to find ways to improve productivity, to provide more for the same in a not-for-profit organisation, or to deliver a healthy return on investment in the for-profit arena (and remember our pensions are dependent on our future collective productivity).

To achieve the maximum cash flow (i.e. revenue) at the minimum cash cost (i.e. expense) then both the flow scheduling policy and the resource capacity policy must be co-designed to deliver the maximum productivity performance.


If we have a single-step process it is relatively easy to estimate both the costs and the budget to generate the required activity and revenue; but how do we scale this up to the more realistic situation when the flow of work crosses many departments – each of which does different work and has different skills, resources and budgets?

Q: Does it matter that these departments and budgets are managed independently?
Q: If we optimise the performance of each department separately will we get the optimum overall system performance?

Our intuition suggests that to maximise the productivity of the whole system we need to maximise the productivity of the parts.  Yes – that is clearly necessary – but is it sufficient?


To answer this question we will consider a process where the stream flows through several separate steps – separate in the sense that they have separate budgets – but not separate in that they are linked by the same flow.

The separate budgets are allocated from the total revenue generated by the outflow of the process. For the purposes of this exercise we will assume the goal is zero profit and we just need to calculate the price that needs to be charged to the “customer” for us to break even.

The internal reports produced for each of our departments for each time period are:
1. Activity – the amount of work completed in the period.
2. Expenses – the cost of the resources made available in the period – the budget.
3. Utilisation – the ratio of the time spent using resources to the total time the resources were available.

We know that the theoretical maximum utilisation of resources is 100% and this can only be achieved when there is zero-variation. This is impossible in the real world but we will assume it is achievable for the purpose of this example.
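The three per-period reports above can be captured in a few lines of code. This is an illustrative sketch – the department's numbers here are invented for demonstration, only the formulas come from the text:

```python
# Illustrative sketch of the three per-period reports for one department.
# The numbers are invented for demonstration; only the formulas matter.

def utilisation(busy_time: float, available_time: float) -> float:
    """Ratio of the time resources were in use to the time they were available."""
    return busy_time / available_time

activity = 50        # 1. Activity: jobs completed in the period
expenses = 30_000    # 2. Expenses: cost of resources made available (GBP)
util = utilisation(busy_time=400, available_time=400)   # 3. Utilisation: 1.0 = 100%

cost_per_job = expenses / activity
print(f"utilisation = {util:.0%}, cost per job = GBP {cost_per_job:,.0f}")
```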

There are three questions we need answers to:
Q1: What is the lowest price we can achieve and meet the required demand?
Q2: Will optimising each step independently give us this lowest price?
Q3: How do we design our budgets to deliver maximum productivity?


To explore these questions let us play with a real example.

Let us assume we have a single stream of work that crosses six separate departments labelled A-F in that sequence. The department budgets have been allocated based on historical activity and utilisation and our required activity of 50 jobs per time period. We have already worked hard to remove all the errors, variation and “waste” within each department and we have achieved 100% observed utilisation of all our resources. We are very proud of our high effectiveness and our high efficiency.

Our current not-for-profit price is £202,000/50 = £4,040 and because our observed utilisation of resources at each step is 100% we conclude this is the most efficient design and that this is the lowest possible price.

Unfortunately our celebration is short-lived because the market for our product is growing bigger and more competitive and our market research department reports that to retain our market share we need to deliver 20% more activity at 80% of the current price!

A quick calculation shows that our productivity must increase by 50% (New Activity/New Price = 120%/80% = 150%) but as we already have a utilisation of 100% then this challenge looks hopelessly impossible.  To increase activity by 20% will require increasing flow-capacity by 20% which will imply a 20% increase in costs so a 20% increase in budget – just to maintain the current price.  If we no longer have customers who want to pay our current price then we are in trouble.
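Both calculations can be checked directly, using the figures quoted above:

```python
# Check of the two calculations in the text: the zero-profit price and
# the productivity increase that the market now demands.

total_budget = 202_000       # GBP per period, summed over departments A-F
activity = 50                # jobs per period

price = total_budget / activity          # zero-profit (break-even) price
print(f"break-even price = GBP {price:,.0f}")

# The market demands 20% more activity at 80% of the current price.
# Productivity here is activity delivered per unit of price charged,
# so the required ratio is new-activity% over new-price%.
required = 120 / 80
print(f"required productivity = {required:.0%} of current")
```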

Fortunately our conclusion is incorrect – and it is incorrect because we are not using the data available to co-design the system such that cash flow and work flow are aligned.  And we do not do that because we have not learned how to design-for-productivity.  We are not even aware that this is possible.  It is, and it is called Value Stream Accounting.

The blacked-out boxes in the table above hide the data that we need to do this – and we do not know what they are. Yet.

But if we apply the theory, techniques and tools of system design, and we use the data that is already available then we get this result …

We can see that the total budget is less, the budget allocations are different, the activity is 20% up and the zero-profit price is 34% less – which is an 83% increase in productivity!
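The gain can be checked from the ratios alone. Using the rounded figures quoted (activity up 20%, price down 34%) it comes out at roughly 82%; the 83% presumably follows from the unrounded price:

```python
# Check of the claimed productivity gain from the redesigned budgets,
# using the rounded figures quoted in the text.

old_activity, old_price = 50, 4040.0
new_activity = old_activity * 1.20       # activity 20% up -> 60 jobs
new_price = old_price * (1 - 0.34)       # price 34% down (rounded figure)

# Productivity ~ activity delivered per unit of price charged.
gain = (new_activity / new_price) / (old_activity / old_price) - 1
print(f"productivity gain = {gain:.0%}")  # ~82% with the rounded inputs
```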

More than enough to stay in business.

Yet the observed resource utilisation is still 100% – and that is counter-intuitive and a very surprising discovery for many. It is, however, the reality.

And it is important to be reminded that the work itself has not changed – the ONLY change here is the budget policy design – in other words the resource capacity available at each stage.  A zero-cost policy change.

The example answers our first two questions:
A1. We now have a price that meets our customers’ needs, offers worthwhile work, and keeps us in business.
A2. We have disproved our assumption that 100% utilisation at each step implies maximum productivity.

Our third question “How to do it?” requires learning the tools, techniques and theory of System Engineering and Design.  It is not difficult and it is not intuitively obvious – if it were we would all be doing it.

Want to satisfy your curiosity?
Want to see how this was done?
Want to learn how to do it yourself?

You can do that here.



Lub-Hub Lub-Hub Lub-Hub

If you put an ear to someone’s chest you can hear their heart “lub-dub lub-dub lub-dub”. The sound is caused by the valves in the heart closing, like softly slamming doors, as part of the wonderfully orchestrated process of pumping blood around the lungs and body. The heart is an impressive example of bioengineering but it was not designed – it evolved over time – its elegance and efficiency emerged over a long journey of emergent evolution.  The lub-dub is a comforting sound – it signals regularity, predictability, and stability; and was probably the first and most familiar sound each of us heard in the womb. Our hearts are sensitive to our emotional state – and it is no accident that the beat of music mirrors the beat of the heart: slow means relaxed and fast means aroused.

Systems and processes have a heart beat too – but it is not usually audible. It can be seen though, if the measures of a process are plotted as time-series charts. Only artificial systems show constant and unwavering behaviour – rigidity – natural systems have cycles.  The charts from natural systems show the “vital signs” of the system.  One chart tells us something of value – several charts considered together tell us much more.

We can measure and display the electrical activity of the heart over time – it is called an electrocardiogram (ECG) – literally “electric-heart-picture”; we can measure and display the movement of muscles, valves and blood by beaming ultrasound at the heart – an echocardiogram; we can visualise the pressure of the blood over time – a plethysmocardiogram; and we can visualise the sound the heart makes – a phonocardiogram. When we display the various cardiograms on the same time scale, one above the other, we get a much better understanding of how the heart is behaving as a system. And if we have learned what to expect to see in a normal heart we can look for deviations from healthy behaviour and use those to help us diagnose the cause.  With experience the task of diagnosis becomes a simple, effective and efficient pattern matching exercise.

The same is true of systems and processes – plotting the system metrics as time-series charts and searching for the tell-tale patterns of process disease can be a simple, quick and accurate technique: when you have learned what a “healthy” process looks like and which patterns are caused by which process “diseases”.  This skill is gained through Operations Management training and lots of practice with the guidance of an experienced practitioner. Without this investment in developing knowledge and understanding there is a high risk of making a wrong diagnosis and instituting an ineffective or even dangerous treatment.  Confidence is good – competence is even better.

The objective of process diagnostics is to identify where and when the LUBs and HUBs appear in the system: a LUB is a “low utilisation bottleneck” and a HUB is a “high utilisation bottleneck”.  Both restrict flow but they do it in different ways and therefore require different management. If we confuse a LUB for a HUB and choose the wrong treatment we can unintentionally make the process sicker – or even kill the system completely. The intention is OK but if we are not competent the implementation will not be OK.
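The distinction can be sketched in code. This is only an illustration – the 85% threshold and the queue-growth rule below are my assumptions for demonstration, not a validated diagnostic:

```python
# Illustrative sketch: separating a HUB (high utilisation bottleneck)
# from a LUB (low utilisation bottleneck). The 85% threshold and the
# queue-growth rule are assumptions, not a validated diagnostic.

def classify_step(utilisation: float, queue_growing: bool) -> str:
    if not queue_growing:
        return "not restricting flow"
    # Work is piling up, so this step is restricting flow somehow.
    return "HUB" if utilisation >= 0.85 else "LUB"

# A HUB is short of flow-capacity: busy nearly all the time, queue grows.
print(classify_step(utilisation=0.98, queue_growing=True))
# A LUB restricts flow despite spare capacity - for example a resource
# that is only available in narrow time windows, so work still queues.
print(classify_step(utilisation=0.40, queue_growing=True))
```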

Improvement Science rests on two foundation stones – Operations Management and Human Factors – and managers of any process or system need an understanding of both and the ability to apply their knowledge in practice with competence and confidence.  Just as a doctor needs to understand how the heart works and how to apply this knowledge in clinical practice. Both technical and emotional capability are needed – the Head and the Heart need each other.

Safety-By-Design

The picture is of Elisha Graves Otis demonstrating, in the mid 19th century, his safe elevator that automatically applies a brake if the lift cable breaks. It is a “simple” fail-safe mechanical design that effectively created the elevator industry and the opportunity of high-rise buildings.

“To err is human” and human factors research into how we err has revealed two parts – the Error of Intention (poor decision) and the Error of Execution (poor delivery) – often referred to as “mistakes” and “slips”.

Most of the time we act unconsciously using well practiced skills that work because most of our tasks are predictable; walking, driving a car etc.

The caveman wetware between our ears has evolved to delegate this uninteresting and predictable work to different parts of the sub-conscious brain and this design frees us to concentrate our conscious attention on other things.

So, if something happens that is unexpected we may not be aware of it and we may make a slip without noticing. This is one way that process variation can lead to low quality – and these are often the most insidious slips because they go unnoticed.

It is these unintended errors that we need to eliminate using safe process design.

There are two ways – first by designing processes to reduce the opportunity for mistakes (i.e. to improve our decision making); and then by designing the subsequent process to be predictable, and therefore suitable for delegation, so that we avoid slips.

Finally, we need to add a mechanism to automatically alert us of any slips and to protect us from their consequences by failing-safe.  The sign of good process design is that it becomes invisible – we are not aware of it because it works at the sub-conscious level.

As soon as we become aware of the design we have either made a slip – or the design is poor.


Suppose we walk up to a door and we are faced with a flat metal plate – this “says” to us that we need to “push” the door to open it – it is unambiguous design and we do not need to invoke consciousness to make a push-or-pull decision.  The technical term for this is an “affordance”.

In contrast a door handle is an ambiguous design – it may require a push or a pull – and we either need to look for other clues or conduct a suck-it-and-see experiment. Either way we need to switch our conscious attention to the task – which means we have to switch it away from something else. It is those conscious interruptions that cause us irritation and can spawn other, possibly much bigger, slips and mistakes.

Safe systems require safe processes – and safe processes mean fewer mistakes and fewer slips. We can reduce slips through good design and relentless improvement.

A simple and effective tool for this is The 4N Chart® – specifically the “niggle” quadrant.

Whenever we are interrupted by a poorly designed process we experience a niggle – and by recording what, where and when those niggles occur we can quickly focus our consciousness on the opportunity for improvement. One requirement to do this is the expectation and the discipline to record niggles – not necessarily to fix them immediately – but just to record them and to review them later.
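A niggle log needs nothing more sophisticated than a disciplined what-where-when record. Here is a minimal sketch – the field names and example entries are mine, not part of The 4N Chart®:

```python
# Minimal niggle-log sketch: record each interruption as it happens and
# review the accumulated list later. Field names and example entries
# are illustrative, not part of The 4N Chart(R).
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Niggle:
    what: str       # what interrupted you
    where: str      # where it happened
    when: datetime = field(default_factory=datetime.now)

log: list[Niggle] = []

def record(what: str, where: str) -> None:
    """Record now, review later - the discipline is in the recording."""
    log.append(Niggle(what, where))

record("pushed a pull door - ambiguous handle", "ward entrance")
record("login form wiped by timeout", "incident reporting system")

# Review step: the commonest niggles point at the best improvement targets.
for n in log:
    print(n.when.date(), "|", n.where, "|", n.what)
```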

In his book “Chasing the Rabbit” Steven Spear describes two examples of world class safety: the US Nuclear Submarine Programme and Alcoa, an aluminium producer.  Both are potentially dangerous activities and, in both examples, their world class safety record came from setting the expectation that all niggles are recorded and acted upon – using a simple, effective and efficient niggle-busting process.

In stark and worrying contrast, high-volume high-risk activities such as health care remain unsafe not because there is no incident reporting process – but because the design of the report-and-review process is both ineffective and inefficient and so is not used.

The risk of avoidable death in a modern hospital is quoted at around 1:300 – if our risk of dying in an elevator were that high we would take the stairs!  This worrying statistic is to be expected though – because if we lack the organisational capability to design a safe health care delivery process then we will lack the organisational capability to design a safe improvement process too.

Our skill gap is clear – we need to learn how to improve process safety-by-design.


Download Design for Patient Safety report written by the Design Council.

Other good examples are the WHO Safer Surgery Checklist, and the story behind this is told in Dr Atul Gawande’s Checklist Manifesto.

Low-Tech-Toc

Beware the Magicians who wave High Technology Wands and promise Miraculous Improvements if you buy their Black Magic Boxes!

If a Magician is not willing to open the box and show you the inner workings then run away – quickly.  Their story may be true, the Miracle may indeed be possible, but if they cannot or will not explain HOW the magic trick is done then you will be caught in their spell and will become their slave forever.

Not all Magicians have honourable intentions – those who have been seduced by the Dark Side will ensnare you and will bleed you dry like greedy leeches!

In the early 1980’s a brilliant innovator called Eli Goldratt created a Black Box called OPT that was the tangible manifestation of his intellectual brainchild called ToC – Theory of Constraints. OPT was a piece of complex computer software that was intended to rescue manufacturing from their ignorance and to miraculously deliver dramatic increases in profit. It didn’t.

Eli Goldratt was a physicist and his Black Box was built on strong foundations of Process Physics – it was not Snake Oil – it did work.  The problem was that it did not sell: not enough people believed his claims and those who did discovered that the Black Box was not as easy to use as the Magician suggested.  So Eli Goldratt wrote a book called The Goal in which he explained, in parable form, the Principles of ToC and the theoretical foundations on which his Black Box was built.  The book was a big success but his Black Box still did not sell – because just an explanation of how his Black Box worked was enough for people to apply the Principles of ToC and to get dramatic results. So, Eli abandoned his plan of making a fortune selling Black Boxes and set up the Goldratt Institute to disseminate the Principles of ToC – which he did with considerably more success. Eli Goldratt died in June 2011 after a short battle with cancer and the World has lost a great innovator and a founding father of Improvement Science. His legacy lives on in the books he wrote that chart his personal journey of discovery.

The Principles of ToC are central both to process improvement and to process design.  As Eli unintentionally demonstrated, it is more effective and much quicker to learn the Principles of ToC pragmatically and with low technology – such as a book – than with a complex, expensive, high technology Black Box.  As many people have discovered – adding complex technology to a complex problem does not create a simple solution! Many processes are relatively uncomplicated and do not require high technology solutions. An example is the challenge of designing a high productivity schedule when there is variation in both the content and the volume of the work.

If our required goal is to improve productivity (or profit) then we want to improve the throughput and/or to reduce the resources required. That is relatively easy when there is no variation in content and no variation in volume – such as when we are making just one product at a constant rate – like a Model-T Ford in Black! Add some content and volume variation and the challenge becomes a lot trickier! From the 1950s the move from mass production to mass customisation in the automobile industry created this new challenge and spawned a series of innovative approaches such as the Toyota Production System (Lean), Six Sigma and Theory of Constraints.  TPS focussed on small batches, fast changeovers and low technology (kanbans or cards) to keep inventory low and flow high; Six Sigma focussed on scientifically identifying and eliminating all sources of variation so that work flows smoothly and in “statistical control”; ToC focussed on identifying the “constraint steps” in the system and then on scheduling tasks so that the constraints never run out of work.
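The core ToC idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not Goldratt's OPT software: the function names and the jobs-per-hour representation are my own. The point it shows is that a chain of dependent steps can go no faster than its constraint step.

```python
def system_throughput(step_capacities):
    """A flow of dependent steps can go no faster than its slowest step.
    step_capacities: jobs per hour each step could process in isolation."""
    return min(step_capacities)

def constraint_step(step_capacities):
    """Index of the constraint - the step ToC says we must never starve."""
    return step_capacities.index(min(step_capacities))

# Three steps that could do 12, 8 and 10 jobs per hour in isolation:
system_throughput([12, 8, 10])  # -> 8 jobs per hour
constraint_step([12, 8, 10])    # -> 1 (the middle step is the constraint)
```

Improving any step other than step 1 in this example changes nothing at the system level, which is why ToC scheduling concentrates on keeping the constraint fed with work.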

When applied to a complex system of interlinked and interdependent processes the ToC method requires a complicated Black Box to do the scheduling because we cannot do it in our heads. However, when applied to a simpler system, or to a part of a complex system, it can be done using a low technology method called “paper and pen”. The technique is called Template Scheduling and there is a real example in the “Three Wins” book where the template schedule design was tested using a computer simulation to measure the resilience of the design to natural variation – the computer was not used to do the actual scheduling. There was no Black Box doing the scheduling. The outcome of the design was a piece of paper that defined the designed-and-tested template schedule: and the design testing predicted a 40% increase in throughput using the same resources. This dramatic jump in productivity might be regarded as “miraculous” or even “impossible” – but only by someone who was not aware of the template scheduling method. The reality is that the designed schedule worked just as predicted – there was no miracle, no magic, no Magician and no Black Box.

What Is The Cost Of Reality?

It is often assumed that “high quality costs more” and there is certainly ample evidence to support this assertion: dinner in a high quality restaurant commands a high price. The usual justifications for the assumption are (a) quality ingredients and quality skills cost more to provide; and (b) if people want a high quality product or service that is in relatively short supply then it commands a higher price – the Law of Supply and Demand.  Together these create a self-regulating system – it costs more to produce, and so long as enough customers are prepared to pay the higher price the system works.  So what is the problem? The problem is that the model is incorrect. The assumption is incorrect.  Higher quality does not always cost more – it usually costs less. Convinced? No. Of course not. To be convinced we need hard, rational evidence that disproves our assumption. OK. Here is the evidence.

Suppose we have a simple process that has been designed to deliver the Perfect Service – 100% quality, on time, first time and every time – 100% dependable and 100% predictable. We choose a Service for our example because the product is intangible and we cannot store it in a warehouse – so it must be produced as it is consumed.

To measure the Cost of Quality we first need to work out the minimum price we would need to charge to stay in business – the sum of all our costs divided by the number we produce: our Minimum Viable Price. When we examine our Perfect Service we find that it has three parts. Part 1 is the administrative work: receiving customers; scheduling the work; arranging for the necessary resources to be available; collecting the payment; having meetings; writing reports and so on. The list of expenses seems endless. It is the necessary work of management – but it is not what adds value for the customer. Part 3 is the work that actually adds the value – it is the part the customer wants – the Service that they are prepared to pay for. So what is Part 2 work? This is where our customers wait for their value – the queue. Each of the three parts will consume resources either directly or indirectly – each has a cost – and we want Part 3 to represent most of the cost, Part 2 the least, and Part 1 somewhere in between. That feels realistic and reasonable. And in our Perfect Service there is no delay between the arrival of a customer and starting the value work; so there is no queue, no work in progress waiting to start, and the cost of Part 2 is zero.

The second step is to work out the cost of our Perfect Service – and we could use algebra and equations to do that but we won’t because the language of abstract mathematics excludes too many people from the conversation – let us just pick some realistic numbers to play with and see what we discover. Let us assume Part 1 requires a total of 30 mins of work that uses resources which cost £12 per hour; and let us assume Part 3 requires 30 mins of work that uses resources which cost £60 per hour; and let us assume Part 2 uses resources that cost £6 per hour (if we were to need them). We can now work out the Minimum Viable Price for our Perfect Service:

Part 1 work: 30 mins @ £12 per hour = £6
Part 2 work: 0 mins @ £6 per hour = £0
Part 3 work: 30 mins @ £60 per hour = £30
Total: £36 per customer.
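The arithmetic above can be wrapped in a tiny function. This is just a sketch using the example's own numbers; the function name and the (minutes, rate) representation are my own.

```python
def minimum_viable_price(parts):
    """Sum of (minutes of work x hourly rate) across all parts of the service.
    parts: list of (minutes, pounds_per_hour) pairs."""
    return sum(minutes / 60.0 * rate for minutes, rate in parts)

# Perfect Service: Part 1 admin, Part 2 queue (empty, so zero), Part 3 value work
minimum_viable_price([(30, 12), (0, 6), (30, 60)])  # -> 36.0 pounds per customer
```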

Our Perfect Service has been designed to deliver at the rate of demand which is one job every 30 mins and this means that the Part 1 and Part 3 resources are working continuously at 100% utilisation. There is no waste, no waiting, and no wobble. This is our Perfect Service and £36 per job is our Minimum Viable Price.         

The third step is to tarnish our Perfect Service to make it more realistic – and then to do whatever is necessary to counter the necessary imperfections so that we still produce 100% quality. To the outside world the quality of the service has not changed but it is no longer perfect – they need to wait a bit longer, and they may need to pay a bit more. Quality costs, remember!  The question is – how much longer and how much more? If we can work that out and compare it with our Minimum Viable Price we will get a measure of the Cost of Reality.

We know that variation is always present in real systems – so let the first Dose of Reality be the variation in the time it takes to do the value work. What effect does this have?  This apparently simple question is surprisingly difficult to answer in our heads – and we have chosen not to use “scarymatics” – so let us run an empirical experiment and see what happens. We could do that with the real system, or we could do it on a model of the system.  As our Perfect Service is so simple we can use a model. There are lots of ways to do this; the technique used in this example is called discrete event simulation (DES), and I used a process simulation tool called CPS (www.SAASoft.com).

Let us see what happens when we add some random variation to the time it takes to do the Part 3 value work – the flow will not change, the average time will not change, we will just add some random noise – but not too much – something realistic like 10% say.

The chart shows the time from start to finish for each customer. To see the impact of adding the variation, the first 48 customers are served by our Perfect Service and then we switch to the Realistic Service. See what happens – the time in the process increases and then sort of stabilises. This means we must have created a queue (i.e. Part 2 work) and that will require space to store and capacity to clear. When we include these costs and work out our new minimum viable price it comes out, in this case, at £43.42 per task. That is an increase of over 20% and it gives us a measure of the Cost of the Variation. If we repeat the exercise many times we get a similar answer – not the same every time, because the variation is random – but it is always an extra cost. It is never less than the perfect price and it does not average out to zero. This may sound counter-intuitive until we understand the reason: when we add variation we need a bit of a queue to ensure there is always work for Part 3 to do; and that queue will form spontaneously when customers take longer than average. If there is no queue and a customer requires less than average time then the Part 3 resource will be idle for some of the time. That idle time cannot be stored and used later: time is not money.  So what happens is that a queue forms spontaneously, so long as there is space for it, and it ensures there is always just enough work waiting to be done. It is a self-regulating system – the queue is called a buffer.
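The experiment can be reproduced in a few lines of plain Python rather than a dedicated DES tool. This is a simplified sketch, not the CPS model that produced the charts: customers arrive on a fixed 30-minute drumbeat and the value work takes 30 minutes plus some random noise.

```python
import random

def mean_time_in_process(n_jobs, interarrival=30.0, mean_service=30.0,
                         noise=0.10, seed=1):
    """Single-server process. Jobs arrive every `interarrival` minutes; the
    value work takes mean_service minutes with random variation of `noise`
    (as a fraction of the mean). Returns the average start-to-finish time
    per job, i.e. waiting time plus service time, in minutes."""
    random.seed(seed)
    server_free_at = 0.0
    total = 0.0
    for i in range(n_jobs):
        arrival = i * interarrival
        start = max(arrival, server_free_at)  # queue if the server is busy
        service = max(0.0, random.gauss(mean_service, noise * mean_service))
        server_free_at = start + service
        total += server_free_at - arrival
    return total / n_jobs

mean_time_in_process(5000, noise=0.0)   # -> 30.0 (the Perfect Service: no waiting)
mean_time_in_process(5000, noise=0.10)  # -> noticeably more than 30: a queue forms
```

With zero noise every job takes exactly 30 minutes; with only 10% noise the average time rises, because idle time cannot be banked but late finishes push every subsequent job back.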

Let us see what happens when we take our Perfect Process and add a different form of variation – random errors. To prevent errors leaving the system and affecting our output quality we will repeat the work. If the errors are random and rare then the chance of getting it wrong twice for the same customer will be small, so the rework rate is a rough measure of the internal process quality. For a fair comparison let us use the same degree of variation as before – 10% of the Part 3 tasks have an error and need to be reworked – which in our example means going to the back of the queue.

Again, to see the effect of the change, the first 48 tasks are from the Perfect System and after that we introduce a 10% chance of a task failing the quality standard and needing to be reworked: in this example 5 tasks failed, which is the expected rate. The effect on the start-to-finish time is very different from before – the times for the reworked tasks are clearly longer, as we would expect, but the times for the other tasks get longer too. This implies that a Part 2 queue is building up, and after each error we can see that the queue grows – and after a delay.  This is counter-intuitive. Why is this happening? It is because in our Perfect Service we had 100% utilisation – there was just enough capacity to do the work when it was done right-first-time – so if we make errors we create extra demand and extra load, and it will exceed our capacity; we have created a bottleneck and the queue will form and it will continue to grow as long as errors are made.  This queue needs space to store and capacity to clear. How much though? Well, in this example, when we add up all these extra costs we get a new minimum price of £62.81 – that is a massive 74% increase!  Wow! It looks like errors create a much bigger problem for us than variation. There is another important learning point – random cycle-time variation is self-regulating and inherently stable; random errors are not self-regulating and they create inherently unstable processes.
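The instability can also be seen from the load arithmetic alone, without any simulation. This is a sketch of the reasoning (the function is my own, not part of the original experiment): each error sends a task back through the value step, and a reworked task can itself fail again, so the expected number of passes per task is the geometric series 1 + p + p² + … = 1/(1 − p).

```python
def effective_load(utilisation, p_error):
    """Load on the value step once rework is included.
    Each task needs on average 1/(1 - p_error) passes through the step,
    because a reworked task can itself fail and be reworked again."""
    return utilisation / (1.0 - p_error)

effective_load(1.00, 0.10)  # -> about 1.11: demand now exceeds capacity,
                            #    so the queue grows for as long as errors occur
effective_load(0.85, 0.10)  # -> about 0.94: spare capacity absorbs the rework
```

At 100% utilisation any error rate at all tips the load over capacity, which is exactly why the queue in the experiment keeps growing instead of stabilising.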

Our empirical experiment has demonstrated three principles of process design for minimising the Cost of Reality:

1. Eliminate sources of errors by designing error-proofed right-first-time processes that prevent errors happening.
2. Ensure there is enough spare capacity at every stage to allow recovery from the inevitable random errors.
3. Ensure that all the steps can flow uninterrupted by allowing enough buffer space for the critical steps.

With these Three Principles of cost-effective design in mind we can now predict what will happen if we combine a not-for-profit process with a rising demand, a rising expectation, a falling budget, and an inspect-and-rework process design: we predict everyone will be unhappy. We will all be miserable because the only way to stay in budget is to cut the lower priority value work and reinvest the savings in the rising cost of checking and rework for the higher priority jobs. But we have a problem – our activity will fall, so our revenue will fall, and despite the cost cutting the budget still doesn’t balance because of the increasing cost of inspection and rework – and we enter the death spiral of financial decline.

The only way to avoid this fatal financial tailspin is to replace the inspection-and-rework habit with a right-first-time design; before it is too late. And to do that we need to learn how to design and deliver right-first-time processes.

Charts created using BaseLine

The Crime of Metric Abuse

We live in a world that is increasingly intolerant of errors – we want everything to be right all the time – and if it is not then someone must have erred with deliberate intent so they need to be named, blamed and shamed! We set safety standards and tough targets; we measure and check; and we expose and correct anyone who is non-conformant. We accept that is the price we must pay for a Perfect World … Yes? Unfortunately the answer is No. We are deluded. We are all habitual criminals. We are all guilty of committing a crime against humanity – the Crime of Metric Abuse. And we are blissfully ignorant of it so it comes as a big shock when we learn the reality of our unconscious complicity.

You might want to sit down for the next bit.

First we need to set the scene:
1. Sustained improvement requires actions that result in irreversible and beneficial changes to the structure and function of the system.
2. These actions require making wise decisions – effective decisions.
3. These actions require using resources well – efficient processes.
4. Making wise decisions requires that we use our system metrics correctly.
5. Understanding what correct use is means recognising incorrect use – abuse awareness.

When we commit the Crime of Metric Abuse, even unconsciously, we make poor decisions. If we act on those decisions we get an outcome that we do not intend and do not want – we make an error.  Unfortunately, more efficiency does not compensate for less effectiveness – in fact it makes it worse. Efficiency amplifies Effectiveness – “Doing the wrong thing right makes it wronger not righter” as Russell Ackoff succinctly puts it.  Paradoxically, our inefficient and bureaucratic systems may be our only defence against our ineffective and potentially dangerous decision making – so before we strip out the bureaucracy and strive for efficiency we had better be sure we are making effective decisions, and that means exposing and treating our nasty habit of Metric Abuse.

Metric Abuse manifests in many forms – and there are two that when combined create a particularly virulent addiction – Abuse of Ratios and Abuse of Targets. First let us talk about the Abuse of Ratios.

A ratio is one number divided by another – which sounds innocent enough – and ratios are very useful, so what is the danger? The danger is that by combining two numbers to create one we throw away some information. This is not a good idea when making the best possible decision means squeezing every last drop of understanding out of our information. To unconsciously throw away useful information amounts to incompetence; to consciously throw away useful information is negligence, because we could and should know better.

Here is a time-series chart of a process metric presented as a ratio. This is productivity – the ratio of an output to an input – and it shows that our productivity is stable over time.  We started OK and we finished OK and we congratulate ourselves for our good management – yes? Well, maybe and maybe not.  Suppose we are measuring the Quality of the output and the Cost of the input; then calculating our Value-For-Money productivity from the ratio; and then only share this derived metric. What if quality and cost are changing over time in the same direction and by the same rate? The productivity ratio will not change.
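A hedged numerical sketch (made-up numbers, not the data behind the chart) makes the trap concrete: let both quality and cost climb by 5% per period and watch what the ratio reports.

```python
periods = range(12)
quality = [100 * 1.05 ** t for t in periods]  # output measure, rising 5%/period
cost    = [ 50 * 1.05 ** t for t in periods]  # input measure, rising at same rate
ratio   = [q / c for q, c in zip(quality, cost)]

# quality has risen by roughly 71% over the 12 periods, and cost with it,
# yet every entry of `ratio` is 2.0: the derived metric is perfectly flat
# while the system underneath it is drifting.
```

Plotted as time series, the quality and cost charts would both slope steeply upward while the ratio chart is a flat line: the ratio has thrown the trend away.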

 

Suppose the raw data we used to calculate our ratio was as shown in the two charts of measured Output Quality and measured Input Cost – we can see immediately that, although our ratio is telling us everything is stable, our system is actually changing over time – it is unstable and therefore it is unpredictable. Systems that are unstable have a nasty habit of finding barriers to further change, and when they do they have a habit of crashing – suddenly, unpredictably and spectacularly. If you take your eyes off the white line when driving and drift off course you may suddenly discover a barrier – the crash barrier for example, or worse still an on-coming vehicle! The apparent stability indicated by a ratio is an illusion, or rather a delusion. We delude ourselves that we are OK – in reality we may be on a collision course with catastrophe.

But increasing quality is what we want, surely? Yes – it is what we want – but at what cost? If we use the strategy of quality-by-inspection, and add extra checking to detect errors and extra capacity to fix the errors we find, then we will incur higher costs. This is the story that these Quality and Cost charts are showing.  To stay in business the extra cost must be passed on to our customers in the price we charge: and we have all been brainwashed from birth to expect to pay more for better quality. But what happens when the rising price hits our customers’ financial constraint?  We are no longer able to afford the better quality so we settle for the lower quality but affordable alternative.  What happens then to the company that has invested in quality by inspection? It loses customers, which means it loses revenue, which is bad for its financial health – and to survive it starts cutting prices, cutting corners, cutting costs, cutting staff and eventually – cutting its own throat! The delusional productivity ratio has hidden the real problem until a sudden and unpredictable drop in revenue and profit provides a reality check – by which time it is too late. Of course, if all our competitors are committing the same crime of metric abuse and suffering from the same delusion we may survive a bit longer in the toxic mediocrity swamp – but if a new competitor arrives who is not deluded by ratios and who learns how to provide consistently higher quality at a consistently lower price – then we are in big trouble: our customers leave and our end is swift and without mercy. Competition cannot bring controlled improvement while the Abuse of Ratios remains rife and unchallenged.

Now let us talk about the second Metric Abuse, the Abuse of Targets.

The blue line on the Productivity chart is the Target Productivity. As leaders and managers we have been brainwashed with the mantra that “you get what you measure” and with this belief we commit the crime of Target Abuse when we set an arbitrary target and use it to decide when to reward and when to punish. We compound our second crime when we connect our arbitrary target to our accounting clock and post periodic praise when we are above target and periodic pain when we are below. We magnify the crime if we have a quality-by-inspection strategy, because we create an internal quality-cost trade-off that generates conflict between our governance goal and our finance goal: the result is a festering and acrimonious stalemate. Our quality-by-inspection strategy paradoxically prevents improvement in productivity and we learn to accept the inevitable oscillation between good and bad – and eventually may even convince ourselves that this is the best and the only way.  With this life-limiting belief deeply embedded in our collective unconsciousness, the more enthusiastically this quality-by-inspection design is enforced the more fear, frustration and failures it generates – until trust is eroded to the point that when the system hits a problem, morale collapses, errors increase, checks are overwhelmed, rework capacity is swamped, quality slumps and costs escalate. Productivity nose-dives and both customers and staff jump into the lifeboats to avoid going down with the ship!

The use of delusional ratios and arbitrary targets (DRATs) is a dangerous and addictive behaviour and should be made a criminal offence punishable by Law, because it is both destructive and unnecessary.

With painful awareness of the problem a path to a solution starts to form:

1. Share the numerator, the denominator and the ratio data as time series charts.
2. Only put requirement specifications on the numerator and denominator charts.
3. Outlaw quality-by-inspection and replace with quality-by-design-and-improvement.  

Metric Abuse is a Crime. DRATs are a dangerous addiction. DRATs kill Motivation. DRATs Kill Organisations.

Charts created using BaseLine

Where is the Rotten Egg?

Have you ever had the experience of arriving home from a holiday – opening the front door and being hit with the rancid smell of something that went rotten while you were away?

Phwooorrrarghhh!

When that happens we open the windows to let the fresh-air blow the smelly pong out and we go in search of the offending source of the horrible whiff. Somewhere we know we will find the “rotten egg” and we know we need to remove it because it is now beyond repair.

What happened here is that our usual, regular habit of keeping our house clean was interrupted, and that allowed time for something to go rotten and to create a nasty stink. It may also have caused other things to go rotten too – decay spreads. Usually we maintain an olfactory vigilance to pick up the first whiff of a problem and we act before the rot sets in – but this only works if we know what fresh air smells like, if we remove the peg from our nose, and if we have the courage to remove all of the rot. Perfuming the pig is not an effective long term strategy.

The rotten egg metaphor applies to organisations. The smell we are on the alert for is the rancid odour of a sour relationship, the signal we sense is the dissonance of misery, and the behaviours we look for are those that erode trust. These behaviours have a name – they are called discounts – and they come in two types.

Type 1 discounts are our deliberate actions that lead to erosion of trust – actions like interrupting, gossiping, blaming, manipulation, disrespect, intimidation, and bullying.

Type 2 discounts are the actions that we deliberately omit to do that also cause erosion of trust – like not asking for and not offering feedback, like not sharing data, information and knowledge, like not asking for help, like not saying thank you, like not challenging assumptions, like not speaking out when we feel things are not right, like not naming the elephant in the room. These two types of discounts are endemic in all organisations, and the Type 2 discounts are the more difficult to see because it is what we didn’t do that led to the rot. We must all maintain constant vigilance to sniff out the first whiff of misery and to act immediately and effectively to sustain a pong-free organisational atmosphere.

The Seven Flows

Improvement Science is the knowledge and experience required to improve … but to improve what?

Improve safety, delivery, quality, and productivity?

Yes – ultimately – but they are the outputs. What has to be improved to achieve these improved outputs? That is a much more interesting question.

The simple answer is “flow”. But flow of what? That is an even better question!

Let us consider a real example. Suppose we want to improve the safety, quality, delivery and productivity of our healthcare system – which we do – what “flows” do we need to consider?

The flow of patients is the obvious one – the observable, tangible flow of people with health issues who arrive and leave healthcare facilities such as GP practices, outpatient departments, wards, theatres, accident units, nursing homes, chemists, etc.

What other flows?

Healthcare is a service with an intangible product that is produced and consumed at the same time – and for those reasons it is very different from manufacturing. The interaction between the patients and the carers is where the value is added, and this implies that the “flow of carers” is critical too. Carers are people – no one has yet invented a machine that cares.

As soon as we have two flows that interact we have a new consideration – how do we ensure that they are coordinated so that they are able to interact at the same place, at the same time, in the right way and in the right amount?

The flows are linked – they are interdependent – we have a system of flows and we cannot just focus on one flow or ignore the inter-dependencies. OK, so far so good. What other flows do we need to consider?

Healthcare is a problem-solving process and it is reliant on data – so the flow of data is essential – some of this is clinical data and related to the practice of care, and some of it is operational data and related to the process of care. Data flow supports the patient and carer flows.

What else?

Solving problems has two stages – making decisions and taking actions – in healthcare the decision is called diagnosis and the action is called treatment. Both may involve the use of materials (e.g. consumables, paper, sheets, drugs, dressings, food, etc) and equipment (e.g. beds, CT scanners, instruments, waste bins etc). The provision of materials and equipment are flows that require data and people to support and coordinate as well.

So far we have flows of patients, people, data, materials and equipment and all the flows are interconnected. This is getting complicated!

Anything else?

The work has to be done in a suitable environment so the buildings and estate need to be provided. This may not seem like a flow but it is – it just has a longer time scale and is more jerky than the other flows – planning-building-using a new hospital has a time span of decades.

Are we finished yet? Is anything needed to support these flows?

Yes – the flow that links them all is money. Money flowing in is called revenue and investment and money flowing out is called costs and dividends and so long as revenue equals or exceeds costs over the long term the system can function. Money is like energy – work only happens when it is flowing – and if the money doesn’t flow to the right part at the right time and in the right amount then the performance of the whole system can suffer – because all the parts and flows are interdependent.

So, we have Seven Flows – Patients, People, Data, Materials, Equipment, Estate and Money – and when considering any process or system improvement we must remain mindful of all Seven because they are interdependent.

And that is a challenge for us because our caveman brains are not designed to solve seven-dimensional time-dependent problems! We are OK with one dimension, struggle with two, really struggle with three and that is about it. We have to face the reality that we cannot do this in our heads – we need assistance – we need tools to help us handle the Seven Flows simultaneously.

Fortunately these tools exist – so we just need to learn how to use them – and that is what Improvement Science is all about.

Ignorance Mining

Ignorance means “not knowing” and as the saying goes “Ignorance is bliss” because we do not worry about what we do not know about.  Or do we?

We are not totally ignorant – because we know that there are “unknowns” that would be of value to us. This knowledge creates an anxiety that we are very good at pushing out of awareness; despite the denial the unconscious feeling remains, and it is emotionally corrosive. Repressed anxiety leads to the counter-productive behaviour of self-deception and then to self-justification – both of which are potent impediments to improvement.

We habitually, continuously and unconsciously discount the importance of what we do not know, and in so doing we create internal emotional dissonance.  Our inner conflict drives external discounting behaviour and the inevitable toxic cultural consequence – Erosion of Trust.  Our inner conflict also drives internal discounting behaviour and the inevitable toxic emotional consequence – Erosion of Confidence. This is the toxic emotional waste swamp that we create for ourselves, and it is the slippery slope that leads down to frustration, depression, cynicism and apathy. Ignorance leads to anxiety and fear – and because we have conditioned ourselves to back away from fear we reflexively back away from ignorance, and we end up trading fear for frustration. We do it to ourselves first and then we do it to others.

The antidote is counter-intuitive: it is to actively acknowledge and embrace our ignorance – and to do that we have to deliberately expose our own ignorance, because we are very, very good at burying it from conscious view under a mountain of self-deception and self-justification.  We need to become Ignorance Miners.

The opposite of ignorance is knowledge, and the good news is that we only need to scratch the surface to find knowledge nuggets – not huge ones perhaps – but plentiful. A bag of small knowledge nuggets is as valuable as an ingot of insight!

Knowledge nuggets are durable because they withstand cultural erosion, but they can get washed away in the flood of toxic emotional waste and they can get buried under layers of cynical-resentful-arrogant-pessimism (CRAP).  These knowledge nuggets need to be re-gathered, re-freshed and re-cycled – and it is an endlessly exciting and energising experience.

So, when we are feeling frustrated, demotivated and depressed we just need to give ourselves a break and indulge in a bit of gentle ignorance mining – and when we do we will start to feel better immediately.

JIT, WIP, LIP and PIP

It is a fantastic feeling when a piece of the jigsaw falls into place and suddenly an important part of the bigger picture emerges. Feelings of confusion, anxiety and threat dissipate and are replaced by a sense of insight, calm and opportunity.

Improvement Science is about 80% subjective and 20% objective: more cultural than technical – but the technical parts are necessary. Processes obey the Laws of Physics – and unlike the Laws of People these are not open to appeal or repeal. So when an essential piece of process physics is missing the picture is incomplete and confusion reigns.

One piece of the process physics jigsaw is JIT (Just-In-Time), and process improvement zealots rant on about JIT as if it were some sort of Holy Grail of Improvement Science.  JIT means what you need arrives just when you need it – which implies that there is no waiting of it-for-you or you-for-it.  JIT is an important output of an improved process; it is not an input!  The danger of confusing output with input is that we may then try to use delivery time as a management metric rather than a performance metric – and if we do that we get ourselves into a lot of trouble. Delivery time targets are often set and enforced, and to a large extent they fail to achieve their intention because of this confusion.  To understand how to achieve JIT requires more pieces of the process physics jigsaw. The piece that goes next to JIT is labelled WIP (Work In Progress) which is the number of jobs that are somewhere between starting and finishing.  JIT is achieved when WIP is low enough to provide the process with just the right amount of resilience to absorb inevitable variation; and WIP is a more useful management metric than JIT for many reasons (which for brevity I will not explain here). Monitoring WIP enables a process manager to become more proactive because changes in WIP can signal a future problem with JIT – giving enough warning to do something.  However, although JIT and WIP are necessary they are not sufficient – we need a third piece of the jigsaw to allow us to design our process to deliver the JIT performance we want.  This third piece is called LIP (Load-In-Progress) and is the parameter needed to plan and schedule the right capacity at the right place and the right time to achieve the required WIP and JIT.  Together these three pieces provide the stepping stones on the path to Productivity Improvement Planning (PIP) which is the combination of QI (Quality Improvement) and CI (Cost Improvement).
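The link between WIP and delivery time that makes WIP such a useful management metric is a standard piece of process physics usually known as Little's Law (the label is mine, not the text's): average time in process equals WIP divided by throughput.

```python
def lead_time(wip, throughput):
    """Little's Law: average start-to-finish time = work-in-progress / flow rate.
    wip: average number of jobs between starting and finishing.
    throughput: completed jobs per unit time."""
    return wip / throughput

lead_time(6, 2.0)  # 6 jobs in progress at 2 jobs/hour -> 3.0 hours each
lead_time(2, 2.0)  # halve the WIP (throughput unchanged) -> 1.0 hour each
```

This is why monitoring and managing WIP delivers the JIT performance: for a given flow rate, lower WIP means shorter delivery times, and a rise in WIP is an early warning that delivery times are about to lengthen.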

So if we want our PIP then we need to know our LIP and WIP to get the JIT.  Reddit? Geddit?         

Do You Have A Miserable Job?

If you feel miserable at work and do not know what to do then take heart – because you could be suffering from a treatable organisational disease called CRAP (cynically resistant arrogant pessimism).

To achieve a healthier work-life it is useful to understand the root cause of CRAP and the rationale of how to diagnose and treat it.

Organisations have three interdependent dimensions of performance: value, time and money.  All organisations require both the people and the processes to be working in synergy to reliably deliver value-for-money over time.  To create a productive system it is necessary to understand the relationships between  value, money and time. Money is easier because it is tangible and durable; value is harder because it is intangible and transient. This means that the focus of attention is usually on the money – and it is often assumed that if the money is OK then the value must be OK too.  This assumption is incorrect.

Value and money are interdependent but have different “rates of change”  and can operate in different “directions”.  A common example is when a dip in financial performance triggers an urgent “drive” to improve the “bottom line”.  Reactive revenue generation and cost cutting results in a small, quick, and tangible improvement on the money dimension but at the same time sets off a large, slow, and intangible deterioration on the value dimension.  Money, time and  value are interdependent and the inevitable outcome is a later and larger deterioration in the money – as illustrated in the doodle. If only money is measured the deteriorating value is not detected, and by the time the money starts to falter the momentum of the falling value is so great that even heroic efforts to recover are futile. As the money starts to fall the value falls even further and even faster – the lose-lose-lose spiral of organisational failure is now underway.

People who demonstrate in their attitude and behaviour that they are miserable at work provide the cardinal sign of falling system value. A miserable, sceptical and cynical employee poisons the emotional atmosphere for everyone around them. Misery is both defective and infective.  The primary cause of a miserable job is the behaviour exhibited by people in positions of authority – and the more the focus is only on money the more misery their behaviour generates.

Fortunately there is an antidote; a way to break out of the vicious tail spin – measure both value and money, focus on improving value and observe the positive effect on the money.  The critical behaviour is to actively test the emotional temperature and to take action to keep it moving in a positive direction.  “The Three Signs of a Miserable Job” by Patrick Lencioni tells a story of how an experienced executive learns that the three things a successful managerial leader must do to achieve system health are:
1) ensure employees know their unique place, role and value in the whole system;
2) ensure employees can consciously connect their work with a worthwhile system goal; and
3) ensure employees can objectively measure how they are doing.

Miserable jobs are those where the people feel anonymous, where people feel their work is valueless, and where people feel that they get no feedback from their seniors, peers or juniors. And it does not matter if it is the cleaner or the chief executive – everyone needs a role, a goal and to know all their interdependencies.

We do not have to endure a Miserable Job – we all have the power to transform it into Worthwhile Work.

Politics, Policy and Police.

I love words – they are a window into the workings of our caveman wetware. Spoken and written language is the remarkably recent innovation that opened the door to the development of civilisations because it allowed individual knowledge to accumulate, to be shared, to become collective and to span generations (the picture is 4000-year-old Minoan script).

We are social animals and we have discovered that our lives are more comfortable and more predictable if we arrange ourselves into collaborative groups – families, tribes and communities; and through our collaboration we have learned to tame our environment enough to allow us to settle in one place and to concentrate more time and effort on new learning.  The benefits of this strategy come at a price – because as the size of our communities grows we are forced to find new ways to make decisions that are in the best interests of everyone.  And we need to find new ways to help ourselves abide by those decisions as individuals without incurring the cost of enforcement.  The word “civis” means a person who shares the privileges and the duties of the community in which they live.  And size matters – hamlets, villages and towns developed along with our ability to behave in a “civilised” way. Eventually cities appeared around 6000 years ago – and the Greek word for a city is “polis”.  The bigger the city the greater the capacity to support learning and the specialisation of individual knowledge, skills and experience. This in turn fuels the growth of the group and the development of specialised groups – tribes within tribes. A positive feedback loop is created that drives bigger and bigger settlements and more and more knowledge. Until … we forget what it is that underpins the whole design – civilised behaviour.  While our knowledge has evolved at an accelerating pace our caveman brains have not kept up – and this is where the three “Poli” words come in – they all derive from the same root “polis” and they describe a process:

1. Politic is the method by which the collective decisions are generated.
2. Policy is the method by which the Political decisions are communicated.
3. Police is the method by which the System of Policies is implemented.

The problem arises when the growth of knowledge, and the inevitable changes that result, starts to challenge the current Politic+Policy+Police Paradigm that created the context for the change to happen.  The Policies are continually evolving – as evidenced by the continuous process of legislation. The Paradigm can usually absorb a lot of change but there usually comes a point when it becomes increasingly apparent to society that the Paradigm has to change radically to support further growth. The more rigid the Policy, and the more power there is to enforce it, the greater the social pressure that builds before the paradigm fractures – and the greater the disruption that will ensue as the social pressure is released.  History is a long catalogue of political paradigm shifts of every size – from minor tremors to major quakes – shifts that are driven by our insatiable hunger for knowledge, understanding and meaning.

Improvement Science operates at the Policy stage and therefore forms the critical link between Politics and Police.  The purpose of Improvement Science is to design, test and implement Policies that deliver the collective Win-Win-Win outcomes.  Improvement Science is an embodiment of civilised behaviour and it embraces both the constraints that are decided by the People and the constraints that are defined by the Physics.

Sentenced to Death-by-Meeting!

Do you ever feel a sense of dread when you are summoned to an urgent meeting; or when you get the minutes and agenda the day before your monthly team meeting; or when you see your diary full of meetings for weeks in advance – like a slow and painful punishment?

If so then you may have unwittingly sentenced yourself to Death by Meeting.  What?  We do it to ourselves? No way! That would be madness!

But think about it. We consciously and deliberately ingest all sorts of other toxins: chemicals like caffeine, alcohol and cigarette smoke – so what is so different about immersing ourselves in the emotional toxic waste that many meetings seem to generate?

Perhaps we have learned to believe that there is no other way because we have never experienced focussed, fun, and effective meetings where problems are surfaced, shared and solved quickly – problems that thwart us as individuals. Meetings where the problem-solving sum is greater than the problem-accumulating parts.

A meeting is a system that is designed to solve problems.  We can improve our system incrementally but it is a slow process; to achieve a breakthrough we need to radically redesign the system.  There are three steps to doing this:

1. First decide what sort of problems the meeting is required to solve: strategic, operational or tactical;
2. Second design, test and practice a problem solving process for each category of problem; and
3. Third, select the appropriate tool for the task.

In his illuminating book Death by Meeting, Patrick Lencioni describes three meeting designs and illustrates with a story why meetings don’t work if the wrong tool is used for the wrong task. It is a sobering story.

There is another dimension to the design of meetings; that is how we solve problems as groups – and how, as a group, we seem to waste a lot of effort and time in unproductive discussion.  In his book Six Thinking Hats Edward De Bono provides an explanation for our habitual behaviour and a design for a radically different group problem solving process – one that a group would not arrive at by evolution – but one that has been proven to work.

If we feel sentenced to death-by-meetings then we could buy and read these two small books – a zero-risk, one-off investment of effort, time and money for a guaranteed regular reward of fun, free time and success!

So if I complain to myself and others about pointless meetings and I have not bothered to do something about it myself then I now know that it is I who sentenced myself to Death-by-Meeting. Unintentionally and unconsciously perhaps – but me nevertheless.

Is a Queue an Asset or a Liability?

Many believe that a queue is a good thing.

To a supplier a queue is tangible evidence that there is demand for their product or service and reassurance that their resources will not sit idle, waiting for work and consuming profit rather than creating it.  To a customer a queue is tangible evidence that the product or service is in demand and therefore must be worth having. They may have to wait but the wait will be worth it.  Both suppliers and customers unconsciously collude in the Great Deception and even give it a name – “The Law of Supply and Demand”. By doing so they unwittingly open the door for charlatans and tricksters who deliberately create and maintain queues to make themselves appear more worthy or efficient than they really are.

Even though we all know this intuitively we seem unable to do anything about it. “That is just the way it is” we say with a shrug of resignation. But it does not have to be so – there is a path out of this dead end.

Let us look at this problem from a different perspective. Is a product actually any better because we have waited to get it? No. A longer wait does not increase the quality of the product or service and may indeed impair it.  So, if a queue does not increase quality does it reduce the cost?  The answer again is “No”. A queue always increases the cost and often in many ways.  Exactly how much the cost increases depends on what is in the queue, where the queue is, and how long it is. This may sound counter-intuitive and didactic so I need to explain in a bit more detail why this statement is an inevitable consequence of the Laws of Physics.

Suppose the queue comprises perishable goods; goods that require constant maintenance; goods that command a fixed price when they leave the queue; goods that are required to be held in a container of limited capacity with fixed overhead costs (i.e. costs that are fixed irrespective of how full the container is).  Patients in a hospital or passengers on an aeroplane are typical examples because the patient/passenger is deprived of their ability to look after themselves; they are totally dependent on others for supplying all their basic needs; and they are perishable in the sense that a patient cannot wait forever for treatment and an aeroplane cannot fly around forever waiting to land. A queue of patients waiting to leave hospital or an aeroplane full of passengers circling to land at an airport represents an expensive queue – the queue has a cost – and the bigger the queue is and the longer it persists the greater the cost.

So how does a queue form in the first place? The answer is: when the flow in exceeds the flow out. The instant that happens the queue starts to grow bigger.  When flow in is less than flow out the queue is getting smaller – but we cannot have a negative queue – so when the flow out exceeds the flow in AND the size of the queue reaches zero the system suddenly changes behaviour – the work dries up and the resources become idle.  This creates a different cost – the cost of idle resources consuming money but not producing revenue. So a queue/work costs and no queue/no work costs too.  The least cost situation is when the work arrives at exactly the same rate that it can be done: there is no waiting by anyone – no queue and no idle resources.  Note however that this does not imply that the work has to arrive at a constant rate – only that the rate at which the work arrives matches the rate at which it is done – it is the difference between the two that should be zero at all times. And where we have several steps – the flow must be the same through all steps of the stream at all times.  Remember the second condition for minimum cost – the size of the queue must be zero as well – this is the zero inventory goal of the “perfect process”.
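This queue arithmetic can be sketched in a few lines of code – an illustrative toy model with assumed numbers, not a full simulation:

```python
# Toy model: a queue grows the instant cumulative inflow exceeds cumulative
# outflow, and a queue can never be negative.

def simulate_queue(inflows, capacity_per_step):
    queue = 0
    history = []
    for arriving in inflows:
        queue += arriving
        done = min(queue, capacity_per_step)  # cannot do work that is not there
        queue -= done                         # resources sit idle if the queue runs dry
        history.append(queue)
    return history

# Inflow of 12 per step against an outflow capacity of 10 per step:
# the queue grows by 2 at every step.
print(simulate_queue([12, 12, 12, 12], 10))  # [2, 4, 6, 8]
```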

So, if any deviation from this perfect balance of flow creates some form of cost, why do we ever tolerate queues? The reason is that the perfect world above implies that it is possible to predict the flow in and the flow out with complete accuracy and reliability.  We all know from experience that this is impossible: there is always some degree of natural variation which is unpredictable and which we often call “noise” or “chaos”. For that single reason the lowest cost (not zero cost) situation is when there is just enough breathing space for a queue to wax and wane – smoothing out the unpredictable variation between inflow and outflow. This healthy queue is called a buffer.

The less “noise” the less breathing space is needed and the closer you can get to zero queue cost.
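A toy simulation makes the point – the capacity of 10 per step and the symmetric random variation in arrivals are illustrative assumptions. With no noise there is neither queue nor idle time; add noise and both a wandering buffer queue and wasted capacity appear:

```python
# Sketch of why variation forces a buffer: arrivals match capacity *on average*,
# but random variation means a zero queue starves the resource.
import random

def run(arrival_noise, capacity=10, steps=10_000, seed=1):
    random.seed(seed)
    queue, idle = 0, 0
    for _ in range(steps):
        queue += capacity + random.randint(-arrival_noise, arrival_noise)
        done = min(queue, capacity)
        idle += capacity - done   # capacity wasted whenever the queue runs dry
        queue -= done
    return queue, idle

print(run(arrival_noise=0))  # (0, 0) - perfect balance: no queue, no idle time
print(run(arrival_noise=5))  # noise creates both a buffer queue and idle time
```

The less noise, the smaller the buffer that is needed – exactly the point above.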

So, given this logical explanation, it might surprise you to learn that most of the flow variation we observe in real processes is neither natural nor unpredictable – we deliberately and persistently inject predictable flow variation into our processes.  This unnatural variation is created by our own policies – for example, accumulating DIY jobs until there are enough to justify doing them.  The reason we do this is because we have been bamboozled into believing it is a good thing for the financial health of our system. We have been beguiled by the accountants – the Money Magicians.  Actually that is not precise enough – the accountants themselves are the innocent messengers – the deception comes from the Accounting Policies.

The major niggle is one convention that has become ossified into Accounting Practice – the convention that a queue of work waiting to be finished or sold represents an asset – sort of frozen-for-now-cash that can be thawed out or “liquidated” when the product is sold.  This convention is not incorrect, it is just incomplete because, as we have demonstrated, every queue incurs a cost.  In accountant-speak a cost is called a liability, and unfortunately this queue-cost-liability is never included in the accounts – and this makes a very, very big difference to the outcome.

To assess the financial health of an organisation at a point in time an accountant will use a balance sheet to subtract the liabilities from the assets and come up with a number that is called equity. If that number is zero or negative then the business is financially dead – the technical name is bankruptcy and no accountant likes to utter the B word.  Denial is not a reliable long term business strategy, and if our Accounting Policies do not include the cost of the queue as a liability on the balance sheet then our financial reports will be a distortion of reality and will present the business as healthier than it really is.
This is an Error of Omission and it has grave consequences.  One of these is that it can create a sense of complacency – a blindness to the early warning signs of financial illness – and reactive rather than proactive behaviour. The problem is compounded when a large and complex organisation is split into smaller, simpler mini-businesses that all suffer from the same financial blind spot. It becomes even more difficult to see the problem when everyone is making the same error of omission and when it is easier to blame someone else for the inevitable problems that ensue.
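The balance sheet distortion can be shown with simple arithmetic – the figures below are entirely hypothetical:

```python
# Hypothetical numbers: the same balance sheet looks healthier when the
# cost of the queue is left off the liability side.

def equity(assets, liabilities):
    return assets - liabilities

stock_value  = 500_000   # queue of unfinished work, booked as an asset
other_assets = 200_000
liabilities  = 600_000
queue_cost   = 150_000   # storage, maintenance, decay - usually never booked

print(equity(stock_value + other_assets, liabilities))               # 100000 -> looks solvent
print(equity(stock_value + other_assets, liabilities + queue_cost))  # -50000 -> actually insolvent
```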

We all know from experience that prevention is better than cure and we also know that the future is not predictable with certainty: so in addition to prevention we need vigilance, prompt action, decisive action and appropriate action at the earliest detectable sign of a significant deterioration. Complacency is not a reliable long term survival strategy.

So what is the way forward? Dispense with the accountants? NO! You need them – they are very good at what they do – it is just that what they are doing is not exactly what we all need them to be doing – and that is because the Accounting Policies that they diligently enforce are incomplete.  A safer strategy would be for us to set our accountants the task of learning how to count the cost of a queue and to include that in our internal financial reporting. The quality of business decisions based on financial data will improve and that is good for everyone – the business, the customers and the reputation of the Accounting Profession. Win-win-win.

The question was “Is a queue an asset or a liability?” The answer is “Both”.

The Rubik Cube Problem

Look what popped out of Santa’s sack!

I have not seen one of these for years and it brought back memories of hours of frustration and time wasted in attempting to solve it myself; a sense of failure when I could not; a feeling of envy for those who knew how to; and a sense of indignation when they jealously guarded the secret of their “magical” power.

The Rubik Cube got me thinking – what sort of problem is this?

At first it is easy enough but it quickly becomes apparent that the puzzle gets harder the closer we get to the final solution – because our attempts to reach perfection undo our previous good work.  It is very difficult to maintain our initial improvement while exploring new options.

This insight struck me as very similar to many of the problems we face in life, where the sense of futility creates a powerful force that resists further attempts at change.  Fortunately, we know that it is possible to solve the Rubik Cube – so the question this raises is “Is there a way to solve it in a rational, reliable and economical way from any starting point?”

One approach is to try every possible combination of moves until we find the solution. That is the way a computer might be programmed to solve it – the zero intelligence or brute force approach.

The problem here is that it works in theory but fails in practice because of the number of possible combinations of moves. At each step you can move one of the six faces in one of two directions – that is 12 possible options; and for each of these there are 12 second moves or 12 x 12 possible two-move paths; 12 x 12 x 12 = 1728 possible three-move paths; about 3 million six-move paths; and nearly half a billion eight-move paths!

You get the idea – solving it this way is not feasible unless you are already very close to the solution.
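The arithmetic of this explosion is easy to check – a short loop reproduces the counts quoted above:

```python
# The combinatorial explosion: 6 faces x 2 directions = 12 possible moves
# per step, so the number of n-move paths is 12 to the power n.
moves_per_step = 6 * 2

for depth in (2, 3, 6, 8):
    print(depth, moves_per_step ** depth)
# 2 144
# 3 1728
# 6 2985984
# 8 429981696
```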

So how do we actually solve the Rubik Cube?  Well, the instructions that come with a new one tell you – a combination of two well-known ingredients: strategy and tactics. The strategy is called goal-directed and in my instructions the recommended strategy is to solve each layer in sequence. The tactics are called heuristics: tried-tested-and-learned sequences of actions that are triggered by specific patterns.

At each step we look for a small set of patterns and when we find one we follow the pre-designed heuristic and that moves us forward along the path towards the next goal. Of the billions of possible heuristics we only learn, remember, use and teach the small number that preserve the progress we have already made – these are our magic spells.
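In code, a pattern-triggered tactic is essentially a lookup table – the pattern names and move strings below are hypothetical placeholders for illustration, not a real solving guide:

```python
# Sketch of goal-directed heuristics: recognise a known pattern, then replay
# the pre-learned move sequence for it. Patterns and sequences are made up.

HEURISTICS = {
    "white-cross-edge-flipped": "F U' R U",
    "corner-in-slot-twisted":   "R U R' U'",
}

def next_moves(observed_pattern):
    # Of the billions of possible sequences we keep only the few that
    # preserve the layers already solved - our "magic spells".
    return HEURISTICS.get(observed_pattern, "search, or learn from someone who knows")

print(next_moves("corner-in-slot-twisted"))  # R U R' U'
```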

So where do these heuristics come from?

Well, we can search for them ourselves or we can learn them from someone else.  The first option holds the opportunity for new insights and possible breakthroughs – the second option is quicker!  Someone who designs or discovers a better heuristic is assured a place in history – most of us only ever learn ones that have been discovered or taught by others – it is a much quicker way to solve problems.  

So, for a bit of fun I compared the two approaches using a computer: the competitive-zero-intelligence-brute-force versus the collaborative-goal-directed-learned-and-shared-heuristics.  The heuristic method won easily every time!

The Rubik Cube is an example of a mechanical system: each of the twenty-six parts are interdependent, we cannot move one facet independently of the others, we can only move groups of nine at a time. Every action we make has nine consequences – not just one.  To solve the whole Rubik Cube system problem we must be mindful of the interdependencies and adopt methods that preserve what works while improving what does not.

The human body is a complex biological system. In medicine we have a phrase for this concept of preserving what works while improving what does not: “primum non nocere” which means “first of all do no harm”.  Doctors are masters of goal-directed heuristics; the medical model of diagnosis before prognosis before treatment is a goal-directed strategy and the common tactic is to quickly and accurately pattern-match from a small set of carefully selected data. 

In reality we all employ goal-directed-heuristics all of the time – it is the way our caveman brains have evolved.  Relative success comes from having a more useful set of heuristics – and these can be learned.  Just as with the Rubik Cube – it is quicker to learn what works from someone who can demonstrate that it works and can explain how it works – than to always laboriously work it out for ourselves.

An organisation is a bio-psycho-socio-economic system: a set of interdependent parts called people connected together by relationships and communication processes we call culture.  Improvement Science is a set of heuristics that have been discovered or designed to guide us safely and reliably towards any goal we choose to select – preserving what has been shown to work and challenging what does not.  Improvement Science does not define the path it only helps us avoid getting stuck, or going around in circles, or getting hopelessly lost while we are on the life-journey to our chosen goal.

And Improvement Science is learnable.

Inborn Errors of Management

There is a group of diseases called “inborn errors of metabolism” which are caused by a faulty or missing piece of DNA – the blueprint of life that we inherit from our parents. DNA is the chemical memory that stores the string of instructions for how to build every living organism – humans included. If just one DNA instruction becomes damaged or missing then we may lose the ability to make or to remove one specific chemical – and that can lead to a deficiency or an excess of other chemicals – which can then lead to dysfunction – which can then make us feel unwell – and can then limit both our quality and quantity of life.  We are a biological system of interdependent parts. If an inborn error of metabolism is lethal it will not be passed on to our offspring because we don’t live long enough – so the ones we see are the ones which are not lethal.  We treat the symptoms of an inborn error of metabolism by artificially replacing the missing chemical – but the way to treat the cause is to repair, replace or remove the faulty DNA.

The same metaphor can be applied to any social system. It too has a form of DNA which is called culture – the inherited set of knowledge, beliefs, attitudes and behaviours that the organisation uses to conduct itself in its day-to-day business of survival. These patterns of behaviour are called memes – the social equivalent to genes – and are passed on from generation to generation through language – body language and symbolic language; spoken words – stories, legends, myths, songs, poems and books – the cultural collective memory of the human bio-psycho-social system. All human organisations share a large number of common memes – just as we share a large number of common genes with other animals and plants and even bacteria. Despite this much larger common cultural heritage – it is the differences rather than the similarities that we notice – and it is these differences that spawn the cultural conflict that we observe at all levels of society.

If, by chance alone, an organisation inherits a depleted set of memes it will appear different to all the others and it will tend to defend that difference rather than to change it. If an organisation has a meme defect, a cultural mutation that affects a management process, then we have the organisational condition called an Inborn Error of Management – and so long as the mutation is not lethal to the organisation it will tend to persist and be passed largely unnoticed from one generation of managers to the next!

The NHS was born in 1948 without a professional management arm, and while it survived and grew initially, it became gradually apparent that the omission of the professional management limb was a problem; so in the 1980s, following the Griffiths Report, a large dose of professional management was grafted on and a dose of new management memes was injected. These included finance, legal and human resource management memes but one important meme was accidentally omitted – process engineering – the ability to design a process to meet a specific quality, time and cost specification.  This omission was not noticed initially because the rapid development of new medical technologies and new treatments was delivering improvements that obscured the inborn error of management. The NHS became the envy of many other countries – high quality healthcare available to all and free at the point of delivery.  Population longevity improved, public expectation increased, demand for healthcare increased and inevitably the costs increased.  In the 1990s the growing pains of the burgeoning NHS led to a call for more funding, quoting other countries as evidence, and at the turn of the New Millennium a ten year plan to pump billions of pounds per year into the NHS was hatched.  Unfortunately, the other healthcare services had inherited the same meme defect – so the NHS grew 40% bigger but no better – and the evidence is now accumulating that productivity (the ratio of output quality to input cost) has actually fallen by more than 10% – there are more people doing more work but less well.  The UK, along with many other countries, has hit an economic brick wall and the money being sucked into the NHS cannot increase any more – even though we have created a legacy of an increasing proportion of retired and elderly members of society to support.

The meme defect that the NHS inherited in 1948, and that was not corrected in the transplant operation of the 1980s, is now exerting its influence – the NHS has no capability for process engineering – the theory, techniques, tools and training required to design processes are not on the curriculum of either the NHS managers or the clinicians. The effect of this defect is that we can only treat the symptoms rather than the cause – and we only have blunt and ineffective instruments such as budget restriction – the management equivalent of a straitjacket – and budget cuts – the management equivalent of a jar of leeches. To illustrate the scale of the effect of this inborn error of management we only need to look at other organisations that do not appear to suffer from the same condition – for example the electronics manufacturing industry. The almost unbelievable increase in the performance, quality and value for money of modern electronics over the last decade (mobile phones, digital cameras, portable music players, laptop computers, etc.) is because these industries have invested in developing both their electrical and process engineering capabilities. The Law of the Jungle has weeded out the companies who did not – they have gone out of business or been absorbed – but publicly funded service organisations like the NHS do not have this survival pressure – they are protected from it – and trying to simulate competition with an artificial internal market and applying stick-and-carrot top-down target-driven management is not a like-for-like replacement.

The challenge for the NHS is clear – if we want to continue to enjoy high quality health care, free at the point of delivery, that we can afford, then we will need to recognise and correct our inborn error of management. If we ignore the symptoms, deny the diagnosis and refuse to take the medicine then we will suffer a painful and lingering decline – not lethal and not enjoyable – and it has a name: purgatory.

The good news is that the treatment is neither expensive, nor unpleasant nor dangerous – process engineering is easy to learn, quick to apply, and delivers results almost immediately – and it can be incorporated into the organisational meme-pool quite quickly by using the see-do-teach vector. All we have to do is to own up to the symptoms, consider the evidence, accept the diagnosis, recognise the challenge and take our medicine. The sooner the better!

 

The Drama Triangle

Have you ever had the experience of trying to help someone with a problem, not succeeding, and being left with a sense of irritation, disappointment, frustration and even anger?

Was the dialogue that led up to this unhappy outcome something along the lines of:

A: I have a problem with …
B: What about trying …
A: Yes, but ….
B: What about trying ….
A: Yes, but …

… and so on until you run out of ideas, patience or both.

If this sounds familiar then it is likely that you have been unwittingly sucked into a Drama Triangle – an unconscious, habitual pattern of behaviour that we all use to some degree.

This endemic behaviour has a hidden purpose: to feed our belonging need for social interaction.

The theory goes something like this – we are social animals and we need social interaction just as much as we need oxygen, water and food.  Without it we become psychologically malnourished and this insight explains why prolonged solitary confinement is such an effective punishment – it is the psychological equivalent to starvation.

The emotional sustenance we want most is unconditional love (UCL) – the sort we usually get from our parents, family and close friends.  Repeated affirmation that we are ‘OK’ with no strings attached.

The downside of our unconscious desire for UCL is that it offers a way for others to control our behaviour and those who choose to abuse that power are termed ‘manipulative’.  This control is done by adding conditions: “I will give you the affirmation you crave IF you do what I want“.  This is conditional love (CL).

When we are born we are completely powerless, and completely dependent on our parents – in particular our mother.  As we get older and start to exert our free will we learn that our society has rules – we cannot just follow every selfish desire.

Our parents unconsciously employ CL as a form of behavioural control and it is surprisingly effective: “If you are a good boy/girl then …“.  So, as children, we learn the technique from our parents.

This in itself  is not a problem; but it can become a problem when CL is the only sort available and when the intention is to further only the interests of the giver.  When this happens it becomes … manipulation.

The apparently harmless playground threat of “If you don’t do what I want then I won’t be your friend anymore” is the practice script of a future manipulator – and it feeds on a limiting-belief in the unconscious mind of the child – the belief that there is a limited supply of UCL and that someone else controls it.

And because we make this assumption at the pre-verbal stage of child development, it becomes unconscious, habitual, unspoken and second nature.


Our invalid childhood belief has a knock-on effect; we learn to survive on CL because “No Love” is the worst of all options; it is the psychological equivalent of starvation.

And we learn to put up with second best, and because CL offers inferior emotional nourishment we need a way of generating as much as we want, on-demand.

So we employ the behaviour we were unwittingly taught by our parents – and the Drama Triangle becomes our on-demand-generator-of-second-rate-emotional-sustenance.

The tangible evidence of this “programming” is an observable behaviour that is called “game playing” and was first described by Eric Berne in the famous book “Games People Play”.

Berne described many different Games and they all have a common pattern and a common objective – to generate second-rate emotional food (or ‘transactions’ to use Berne’s language).  But our harvest comes at a price – the transactions are unhealthy – not enough to harm us immediately – but enough to leave us feeling dissatisfied and unhappy.

But what choice do we believe we have?

If we were given the options of breathing stale air or suffocating what would we do?

If we assume our options are to die of thirst or drink stagnant pond-water what would we do?

If we believe our only options are to starve or eat rubbish what would we do?

Our survival instinct is much stronger than our belonging need, so we choose unhealthy over deadly and eventually we become so habituated to game-playing that we do not notice it any more.

Excessive and prolonged exposure to the Drama Triangle is the psychological equivalent of alcoholic liver cirrhosis.  Permanent and irreversible psychological scarring called cynicism.


It is important to remember that this is learned behaviour – and therefore it can be unlearned – or rather overwritten with a healthier habit.

Just by becoming aware of the problem, and understanding the root cause of the Drama Triangle, an alternative pathway appears.

We can challenge our untested assumption that UCL is limited and that someone else controls the supply.  We can consider the alternative hypothesis: that the supply of UCL is unlimited and that we control the supply.

Q: How easy is it for us to offer someone else UCL?

Easy – we see it all the time. How do you feel when someone gives a genuine “Thank You”, cheers you on, celebrates your success, seeks your opinion, and recommends you to others – with no strings attached.  These are all forms of UCL that anyone can practice; by making a conscious choice to give with no expectation of a return.

For many people it feels uncomfortable at first because the game-playing behaviour is so deeply ingrained – and game-playing is particularly prevalent in the corridors of power where it is called “politics”.

Game-free behaviour gets easier with practice because UCL benefits both the giver and the receiver – it feels healthier – there is no need for a payback, there is no score to be kept, no emotional account to balance.  It feels like a breath of fresh air.


So, next time you feel that brief flash of irritation at the start of a conversation or are left with a negative feeling after a conversation just stop and ask yourself  “Was I just sucked into a Drama Triangle?”

Anyone who is able to “press your button” is hooking you into a game, and it takes two to play.

Now consider the question “And to what extent was I unconsciously colluding?”


The tactic to avoid the Drama Triangle is to learn to sense the emotional “hook” that signals the invitation to play the Game; and to consciously deflect it before it embeds into your unconscious mind and triggers an unconscious, habitual, reflex, emotional reaction.

One of the most potent barriers to change is when we unconsciously compute that our previously reliable sources of CL are threatened by the change.  We have no choice but to oppose the change – and that choice is made unconsciously. So, we unwittingly undermine the plan.

The symptoms of this unconscious behaviour are obvious when you know what to look for … and the commonest reaction is:

“Yes … but …”

and the more intelligent and invested the person the more cogent and rational the argument will sound.

The most effective response is to provide evidence that disproves the defensive assertion – not the person’s opinion – and before taking on this challenge we need to prepare the evidence.

By demonstrating that their game-playing behaviour no longer leads to the expected payoff, and at the same time demonstrating that game-free behaviour is both possible and better – we demonstrate that the underlying, unconscious, limiting belief is invalid.

And by that route we develop our capability for game-free social interactions.

Simple enough in theory, and it does work in practice, though it can be difficult to learn because game-playing is such an ingrained behaviour. It does get easier with practice and the ultimate reward is worth the investment – a healthier emotional environment. And that is transformational.

More than the Sum or Less?

It is often assumed that if you combine world-class individuals into a team you will get a world-class team.

Meredith Belbin showed 30 years ago that you do not and it was a big shock at the time!

So, if world class individuals are not enough, what are the necessary and sufficient conditions for a world-class team?

The late Russell Ackoff described it perfectly – he said that if you take the best parts of all the available cars and put them together you do not get the best car – you do not even get a car. The parts are necessary but they are not sufficient – how the parts connect to each other and how they influence each other is more important.  These interdependencies are part of the system – and to understand a system requires understanding both the parts and their relationships.

A car is a mechanical system; the human body is a biological system; and a team is a social system. So to create a high performance, healthy, world class team requires that both the individuals and their relationships with each other are aligned and resonant.

When the parts are aligned we get more than the sum of the parts; and when they are not we get less.

If we were to define intelligence quotient as “an ability to understand and solve novel problems” then the capability of a team to solve novel problems is the collective intelligence.  Experience suggests that a group can appear to be less intelligent than any of the individual members.  The problem here is with the relationships between the parts – and the term that is often applied is “dysfunctional”.

The root cause is almost always distrustful attitudes, which stem from disrespectful prejudices and lead to discounting behaviour.  We learn these prejudices, attitudes and behaviours from each other and we reinforce them with years of practice.  But if they are learned then they can be un-learned. It is simple in theory, and it is possible in practice, but it is not easy.

So if we want to (dis)solve complex, novel problems then we need world-class problem-solving teams; and to transform our 3rd class dysfunctional teams we must first learn to respectfully challenge our disrespectful behaviour.

The elephant is in the room!

How Do We Measure the Cost of Waste?

There is a saying in Yorkshire “Where there’s muck there’s brass” which means that muck or waste is expensive to create and to clean up. 

Improvement science provides the theory, techniques and tools to reduce the cost of waste and to re-invest the savings in further improvement.  But how much does waste cost us? How much can we expect to release to re-invest?  The answer is deceptively simple to work out and decidedly alarming when we do.

We start with the conventional measurement of cost – the expenses – be they materials, direct labour, indirect labour, whatever. We just add up all the costs for a period of time to give the total spend – let us call that the stage cost.

The next step requires some new thinking – it requires looking from the perspective of the job or customer – and following the path backwards from the intended outcome, recording what was done, how much resource-time and material it required and how much that required work actually cost.  This is what one satisfied customer is prepared to pay for; so let us call this the required stream cost. We now just multiply the output or activity for the period of time by the required stream cost and we will call that the total stream cost.

We now just compare the stage cost and the stream cost – the difference is the cost of waste – the cost of all the resources consumed that did not contribute to the intended outcome. The difference is usually large; the stream cost is typically only 20%-50% of the stage cost!
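The arithmetic above can be sketched in a few lines. Every figure here is an invented assumption, purely for illustration:

```python
# Sketch of the stage-cost vs stream-cost comparison described above.
# All figures are invented assumptions, not data from any real process.

# Stage cost: simply add up all the expenses for the period.
stage_cost = 100_000          # e.g. one month of salaries plus materials

# Required stream cost: trace ONE job backwards from its intended outcome
# and add up only the resource-time and materials that outcome required.
required_stream_cost = 250    # assumed cost of one completed job

jobs_completed = 120          # output for the same period

total_stream_cost = required_stream_cost * jobs_completed
cost_of_waste = stage_cost - total_stream_cost

print(f"total stream cost: {total_stream_cost}")
print(f"cost of waste:     {cost_of_waste}")
print(f"stream as share of stage: {total_stream_cost / stage_cost:.0%}")
```

With these assumed numbers the stream cost is only 30% of the stage cost – squarely inside the 20%-50% range quoted above – and the other 70% is the cost of waste.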

This may sound unbelievable but it is true – and the only way to prove it is to go and observe the process and do the calculation – just looking at our conventional financial reports will not give us the answer.  Once we do this simple experiment we will see the opportunity that Improvement Science offers – to reduce the cost of waste in a planned and predictable manner.

But if we are not prepared to challenge our assumptions by testing them against reality then we will deny ourselves that opportunity. The choice is ours.

One of the commonest assumptions we make is called the Flaw of Averages: the assumption that it is always valid to use averages when developing business cases. This assumption is incorrect.  But it is not immediately obvious why it is incorrect and the explanation sounds counter-intuitive. So, one way to illustrate it is with an example created using a process simulation tool – a form of virtual reality.
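A minimal version of such a simulation can be sketched in a few lines. It models a single resource where, on average, capacity exceeds demand (the resource is 80% loaded), so a business case built on the averages predicts no queue at all; feeding in realistic variation around those same averages produces substantial waiting. All the numbers are illustrative assumptions:

```python
import random

random.seed(42)

# Flaw of Averages sketch: a single resource whose average capacity exceeds
# average demand still develops queues once variation is present.
mean_interarrival = 10.0  # minutes between jobs, on average
mean_service = 8.0        # minutes of work per job, on average (80% load)

def average_wait(n_jobs, vary):
    t_arrive = 0.0   # arrival time of the current job
    t_free = 0.0     # time at which the resource next becomes free
    total_wait = 0.0
    for _ in range(n_jobs):
        if vary:
            # Random variation around the same averages.
            ia = random.expovariate(1 / mean_interarrival)
            sv = random.expovariate(1 / mean_service)
        else:
            # The "business case" world: every job exactly average.
            ia, sv = mean_interarrival, mean_service
        t_arrive += ia
        start = max(t_arrive, t_free)
        total_wait += start - t_arrive
        t_free = start + sv
    return total_wait / n_jobs

print(f"average wait using averages only: {average_wait(10_000, vary=False):.1f} min")
print(f"average wait with variation:      {average_wait(10_000, vary=True):.1f} min")
```

The averages-only plan predicts zero waiting; the variation that real processes always contain produces waits many times longer than a single job's work content.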

When Is Seeing Believing?

One of the problems with our caveman brains is that they are a bit slow. It may not feel that way but they are – and if you don’t believe me try this experiment. Stand up, get a book, hold it open in your left hand at any page, and hold a coin in your right hand between finger and thumb so that it will land on the floor when you drop it. Then close your eyes and count to three. Open your eyes, drop the coin, and immediately start reading the book. How long is it before you are consciously aware of the meaning of the words?

My guess is that the coin hits the floor at about the same time that you start making sense of what is on the page. That means it takes about half a second to start perceiving what you are seeing. That long delay is a problem because the world around us is often changing much faster than that and, to survive, we need to keep up. So what we do is fill in the gaps – what we perceive is a combination of what we actually see and what we expect to see – and the process is seamless, automatic and unconscious.

That is OK so long as expectation and reality stay in tune – but what happens when they don’t? We experience the “Eh?” effect which signals that we are temporarily confused – an uncomfortable and scary feeling which resolves when we re-align our perception with reality. Over time we all learn to avoid that uncomfortable confusion with a simple mind trick – we just filter out the things we see that do not fit our expectation. Psychologists call this “perceptual distortion” and the effect is even greater when we look with our mind’s eye rather than our real eyes – then we perceive only what we expect to see and we avoid the uncomfortable “Eh?” effect completely.

This unconscious behaviour we all demonstrate is called self-delusion and it is a powerful barrier to improvement – because to improve we have to first accept that what we have is not good enough and that reality does not match our expectation.

To become a master of improvement it is necessary to learn to be comfortable with the “eh?” feeling – to disconnect it from the negative emotion of fear that drives the denial reaction and self-justifying behaviour – and instead to reconnect it to the positive emotion of excitement that drives the curiosity action and exploratory behaviour.

One easy way to generate the “eh?” effect is to perform reality checks – to consciously compare what we actually see with what we expect to see.  That is not easy because our perception is very slippery – we are all very, very good at perceptual distortion. A way around this is to present ourselves with a picture of reality over time: using the past as a baseline, and our understanding of the system, we can predict what we believe will happen in the near future. We then compare what actually happens with our expectation.  Any significant deviations are “eh?” effects that we can use to focus our curiosity – for there hide the nuggets of new knowledge.

But how do we know what is a “significant” deviation? To answer that we must avoid using our slippery, self-delusional perception system – we need a tool that is designed to do this interpretation safely, easily and quickly. A process behaviour chart is an example of such a tool.

Must We Unlearn First?

In the famous “Star Wars” films, when Luke Skywalker is learning to master the Force, his trainer, Jedi Master Yoda, says the famous line:

“You must unlearn what you have learned”.

These seven words capture a fundamental principle of Improvement Science – that very often we have to unlearn before we can improve.

Unlearning is not the same as forgetting – because much of what we have learned is unconscious – so to unlearn we first have to make our assumptions conscious.

Unlearning is not just erasing a memory, it is preparing the mental ground to replace the learning with something else.

And we do not want to unlearn everything – we want to keep the nexus of knowledge nuggets that form the solid foundation of new learning.  We only want to unlearn what is preventing us adding new understanding, concepts and skills – the invisible layer of psychological grease that smears our vision and leaves our minds slippery and unable to grasp new concepts.

We need to apply some cognitive detergent and add some heated debate to strip off the psycho-slime.  The best detergent I have found is called Reality and the good news is that Reality is widely available, completely free and supplies will never run out.


Are there Three Languages?

When we are in “heated agreement” with each other it feels like we are talking different languages, and this is a sign that we need to explore further and deeper. With patience and persistence we realise they are just dialects of the same language. Our challenge then is to learn to speak clearly in one language at a time, and in the same language as the person(s) we are communicating with. Improvement Science has three primary languages – the language of quality (100% qualitative), the language of money (100% quantitative) and the language of time (qualitative or quantitative depending on our perspective).  We need to learn to speak all three fluently – dreams are painted in the language of quality, processes are described in the language of time, and survival is a story told in the language of money, the universal currency that we exchange for our physical needs (water, food, warmth, shelter, security, etc).

The engagement is emotional – through the subjective language of quality – and once engaged we have to master the flow of time in order to influence the flow of money. Our higher purpose is necessary but it is not sufficient – it is our actions that convert our passion into reality – and uncoordinated or badly designed action just dissipates passion and leads to exhaustion, disappointment and cynicism.

Is it OK to Fail First Time?

Improvement Science is about learning from when what actually happens differs from what we expected to happen.  Is this surprise a failure or is it a success? It depends on our perspective. If we always get what we expect then we could conclude that we have succeeded – yet we have neither learned anything nor improved. So have we failed to learn? In contrast, if we never get what we expected then we could conclude that we always fail – yet we cannot report what we have learned and improved.  Our expectation might be too high! So comparing outcome with expectation seems a poor way to measure our progress with learning and improvement.

When we try something new we should expect to be surprised – otherwise it would not be new.  It is what we learn from that expected surprise that is of most value. Sometimes life turns out better than we expected – what can we learn from those experiences and how can we ensure that outcome happens again, predictably? Sometimes life turns out worse than we expected – what can we learn from those experiences and how can we ensure that outcome does not happen again, predictably?  So, yes, it is OK for us to fail and to not get what we expected – first time.  What is not OK is for us to fail to learn the lesson and to make an avoidable mistake more than once, or to miss an opportunity for improvement more than once.

The Plague of Niggles

Historians tell us that in the Middle Ages about 25 million people, one third of the population of Europe, were wiped out by a series of Plagues! We now know that the cause was probably a bacterium called Yersinia pestis that was spread by fleas when they bit their human hosts to get a meal of blood. The fleas were carried by rats and ships carried the rats from one country to another.  The unsanitary living conditions of the ports and towns at the time provided the ideal conditions for rats and fleas and, with a superstitious belief that cats were evil, without their natural predator the population of rats increased, so the population of fleas increased, so the likelihood of transmission of the lethal bacterium increased, and the number of people decreased. A classic example of a chance combination of factors that together created an unstable and deadly system.

The Black Death was not eliminated by modern hi-tech medicine; it just went away when some of the factors that fuelled the instability were reduced. A tangible one being the enforced rebuilding of London after the Great Fire in Sept 1666 which gutted the medieval city and which followed the year after the last Great Plague in 1665 that killed 20% of the population. 

The story is an ideal illustration of how apparently trivial, albeit annoying, repeated occurrences can ultimately combine and lead to a catastrophic outcome.  I have a name for these apparently trivial, annoying and repeated occurrences – I call them Niggles – and we are plagued by them. Every day we are plagued by junk mail, unpredictable deliveries, peak-time traffic jams, car parking, email storms, surly staff, always-engaged call centres, bad news, bureaucracy, queues, confusion, stress, disappointment, depression. Need I go on?  The Plague of Niggles saps our spirit just as the Plague of Fleas sucked our ancestors’ blood.  And the Plague of Niggles infects us with a life-limiting disease – not a rapidly fatal one like the Black Death – instead we are infected with a slow, progressive, wasting disease that affects our attitude and behaviour and which manifests itself as criticism, apathy and cynicism.  A disease that seems as terrifying, mysterious and incurable to us today as the Plague was to our ancestors.

History repeats itself and we now know that complex systems behave in characteristic ways – so our best strategy may be the same – prevention. If we use the lesson of history as our guide we should be proactive and focus our attention on the Niggles. We should actively seek them out; see them for what they really are; exercise our amazing ability to understand and solve them; and then share the nuggets of new knowledge that we generate.

Seek-See-Solve-Share.

How to Kill an Organisation with a Budget.

The primary goal of an organisation is to survive – and to do that it must be financially viable. The income must meet or exceed the expenses; the bottom line must be zero or greater; the financial assets must equal or exceed the financial liabilities.  So, organisations have to make financial plans to ensure financial survival, and as large organisations are usually sub-divided into smaller functional parts, the common financial planning tool is the departmental budget. We all know from experience that the future is not precisely predictable and that costs tend to creep up, so the budget is also commonly used as an expense containment tool.  A perfectly reasonable strategy to help ensure survival.  But by combining the two reasonable requirements into one tool have we unintentionally created a potentially lethal combination? The answer is “yes” – and this is why …

The usual policy for a budget is to set the future budget based on the past performance.  Perfectly reasonable. And to contain costs we say “if our expenses were less than our budget then we didn’t need the extra money and we can remove it from our budget for next year”. Very plausible.  And we also say “if our expenses were more than our budget then we are suffering from cost-creep, so the deficit is carried over to next year and our budget is not increased”.

What do we observe?  We observe pain!  The first behaviour is that departments on track to underspend will try to spend the remainder of the budget by the end of the period to ensure the next budget is not reduced … they spend their reserves.  The departments on track to overspend cut all the soft costs they can – such as not recruiting when people leave, buying cheap low-quality supplies, cancelling training and so on.

The result is that the departments that impose internal cuts will perform less well – because they do not have the capacity to do their work – and that has a knock-on effect on other departments, because the revenue-generating work usually crosses several departments.  A constraint in just one will affect the flow through all of them.  The combined result is a fall in throughput, a fall in revenue, more severe budget restrictions, and a self-reinforcing spiral of decline to organisational death! Precisely the opposite of the intention of the budget design.
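The self-reinforcing spiral can be illustrated with a toy model: a hypothetical department where overspending triggers soft-cost cuts that erode the capability which earns the revenue, so the next budget is squeezed further. The 5% squeeze and 2% cost creep are assumptions chosen only to show the direction of travel:

```python
# Toy model of the budget feedback loop described above.
# The department, the 5% squeeze and the 2% cost creep are all assumptions.

budget = 100.0          # budget set for the department
cost_required = 100.0   # what doing the work well actually costs

for period in range(1, 6):
    if cost_required > budget:
        # Overspend forces soft-cost cuts; capacity falls, revenue falls,
        # so the next budget is squeezed (assumed 5% per period).
        budget *= 0.95
    cost_required *= 1.02   # assumed 2% cost creep per period
    print(f"period {period}: budget {budget:6.1f} vs cost required {cost_required:6.1f}")
```

The gap between what the work costs and what the budget allows widens every period – the spiral of decline in miniature.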

If that is the disease then what is the root cause? What is the pathology?

The problem here is the mismatch between the financial specification (budget available) and the financial capability (cost required).  The solution is to recognise the importance of the difference. The first step is to set the budget specification to match the cost capability at each step along the process in order to stabilise the flow; the second step is to redesign the process to improve the cost capability, and only to reduce the budget when the process has been shown to be capable of working at a lower cost.  This requires two skills: first, to be able to work out the cost capability of every step in the process; and second, to design-for-cost. Budgets do neither of these, and without these skills a budget transforms from a useful management asset into a lethal organisational liability!

Do We have a Wealth of Data and a Dearth of Information?

Sustained improvement only follows from effective actions; which follow from well-informed decisions – not from blind guessing.  A well-informed decision implies good information – and good information is not just good data. Good information implies that good data is presented in a format that is both undistorted and meaningful to the recipient.  How we present data is, in my experience, one of the weakest links in the improvement process.  We rarely see data presented in a clear, undistorted and informative way; more commonly we see it presented in a way that obscures or distorts our perception of reality. We are presented with partial facts quoted without context – so we unconsciously fill in the gaps with our own assumptions and prejudices, and in so doing distort our perception further.  And the more emotive the subject the more durable the memory that we create – which means it continues to distort our future perception even more.

The primary purpose of the news media is survival – by selling news – so the more emotive and memorable the news the better it sells.  Accuracy and completeness can render news less attractive: by generating the “that’s obvious, it is not news” response.  Catchy headlines sell news and to do that they need to generate a specific emotional reaction quickly – and that emotion is curiosity! Once alerted, they must hold the reader’s attention by quickly creating a sense of drama and suspense – like a good joke – by being just ambiguous enough to resonate with many different people – playing on their prejudices to build the emotional intensity.

The purpose of politicians is survival – to stay in power long enough to achieve their goals – so the less negative press they attract the better – but Politicians and the Press need each other because their purpose is the same – to survive by selling an idea to the masses – and to do that they must distort reality and create ambiguity.  This has the unfortunate side effect of also generating less-than-wise decisions.

So, if our goal is to cut through the emotive fog and get to a good decision quickly so that we can act effectively, we need just the right data presented in context and in an unambiguous format that we, the decision-makers, can interpret quickly. The most accessible format is a picture that tells a story – the past, the present and the likely future – a future that is shaped by the actions that come from the decisions we make in the present, using information from the past.  The skill is to convert data into a story … and one simple and effective tool for doing that is a process behaviour chart.
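As a sketch of how such a chart separates signal from noise, here is the standard XmR calculation – natural process limits set at the mean plus or minus 2.66 times the average moving range – applied to an invented series containing one genuinely unusual point:

```python
# Minimal sketch of the process behaviour (XmR) chart calculation.
# The twelve data points are invented; imagine them as weekly counts.

data = [12, 14, 11, 13, 12, 15, 13, 12, 14, 26, 13, 12]

mean = sum(data) / len(data)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Natural process limits: mean +/- 2.66 * average moving range.
upper = mean + 2.66 * avg_mr
lower = mean - 2.66 * avg_mr

# A point outside the limits is a significant deviation worth investigating;
# everything inside is routine variation to be left alone.
signals = [(i, x) for i, x in enumerate(data) if x > upper or x < lower]

print(f"limits: {lower:.1f} to {upper:.1f}")
print(f"signals: {signals}")
```

Only the value of 26 falls outside the limits; the routine ups and downs are correctly ignored, so our slippery perception does not have to judge “significance” for itself. (A fuller treatment would recompute the limits after excluding the signals; this sketch shows only the core rule.)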

Now is When Infinity Becomes One

Time is an intangible – we can’t touch it, taste it, smell it, hear it or see it – yet we do sense it – and we know it is valuable. A precious commodity we call lifetime. We often treat lifetime as if it were tangible – something that we can see, hear, smell, taste and touch – something like money. We often hear the phrase “time is money” and we say things like “spending time” and “wasting time” – as if it were money. But time is not money; we cannot save time, we cannot buy time, and we all get the same amount of time per day to use.

Another odd thing about time is that we sense that it moves in one direction – from past to future with now as the transition. This creates an interesting discontinuity: if we look forward from now into the future we perceive an infinite number of possibilities; yet if we look backwards from now into the past we see only one actuality. That is really odd – Now is when Infinity becomes One.

So, how does that insight help us make a choice?  Well, suppose we have decided what we want in the future and are now trying to make a choice of what to do next; to plan our route to our future desired goal.  Looking from now forwards presents us with a very large number of paths to choose from, none of which we can be sure will lead us safely to where we want to get to.  So what happens? We may become paralysed by indecision; we may debate and argue about which path to take; we may boldly step out on a plausible path with hope and courage; or we may just guess and stumble on with blind faith.  Which we choose seems more a reflection of our personality than a rational strategy. So let us try something else – let us project ourselves into the future to the place where we want to be; and then let us look backwards in time from the future to the present. Now we see a single path that led to where we are; and by unpicking that path we can see that each step of it had a set of necessary and sufficient pre-conditions which, with the addition of time, moved us forward along the path.  Hindsight is much clearer than foresight and each of us has a lifetime’s worth of hindsight to reflect on; and the cumulative hindsight of history to draw on.  This is not an exercise in fantasy; we already have what we need.

To make our choice we start with the outcome we want and ask the question “What are the immediately preceding necessary and sufficient conditions?”   Then for each condition we ask the question “Does that condition already exist?” If so then we stop – we need go no further on this side branch; and if not then we repeat the Two Questions and we keep going until we have linked our goal back to pre-conditions that exist.  All the pre-conditions in the map we have drawn are necessary but we do not yet have all of them. Some are only dependent on pre-conditions that exist – these are the important ones because they tell us exactly what to focus on doing next. Our choice is now obvious and simple – though the action may not be easy. No one said the journey would be easy!
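The Two Questions can be expressed as a simple backward walk over a map of pre-conditions. The goal, the map and the conditions that already exist below are invented examples:

```python
# Sketch of the Two Questions as a backward walk over pre-conditions.
# The goal, the map and the existing conditions are invented examples.

# Each outcome maps to its immediately preceding necessary-and-sufficient
# pre-conditions.
preconditions = {
    "trained team":       ["training material", "protected time"],
    "training material":  ["documented process"],
    "protected time":     [],
    "documented process": [],
}

already_exists = {"documented process"}

def next_actions(goal):
    """Return the pre-conditions that depend only on things that already
    exist - i.e. exactly what we can focus on doing next."""
    if goal in already_exists:
        return []   # this condition exists: stop on this side branch
    missing = [d for d in preconditions.get(goal, [])
               if d not in already_exists]
    if not missing:
        return [goal]   # everything this step needs exists: act on it now
    actions = []
    for condition in missing:
        actions.extend(next_actions(condition))
    return actions

print(next_actions("trained team"))
```

With this map the things to act on now are “training material” and “protected time” – each depends only on conditions that already exist, so the next step is obvious even though the final goal is two steps away.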

Which Checkout do We Choose?

When we are approaching the checkout in the supermarket how do we decide which queue to join?  Is it the shortest? Is it the one with the fewest full trolleys? Is it the one that is staffed by the most competent-looking operative? Or is it the new-fangled computerised one that technophobes like me avoid like the plague? If our goal is to get out of the shop as quickly as possible then this is an important yet tricky decision. Once we have committed to a specific queue then we are bound by social norms to stick it out.

Technically speaking, the queue to join is not the shortest one, or the one with the smallest number of individual items to be scanned, or the one with the fastest operative – it is the queue with the smallest load: the cumulative product of the number of items and the cycle time of the operative. Hence our quick mental calculation of length of queue × average size of trolley × cycle time of operative.  Even then it can go wrong if someone throws a spanner in the works – such as picking up the only item on the shelf with a missing barcode – triggering the need to call a “supervisor”!
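That mental calculation can be written down explicitly. The queue contents and operative speeds here are invented for illustration:

```python
# The checkout queue-load calculation made explicit.
# Queue contents and operative speeds are invented for illustration.

# Each queue: (items in each queued trolley, operative seconds per item).
queues = {
    "A": ([40, 15, 25], 3.0),    # fewer, fuller trolleys
    "B": ([10, 8, 12, 9], 2.5),  # the longest queue, but small trolleys
    "C": ([60], 4.0),            # one huge trolley, slow operative
}

def load_seconds(trolleys, sec_per_item):
    # Load = total items still to scan * operative cycle time per item.
    return sum(trolleys) * sec_per_item

for name, (trolleys, rate) in queues.items():
    print(f"queue {name}: {load_seconds(trolleys, rate):.0f} seconds of scanning")

best = min(queues, key=lambda name: load_seconds(*queues[name]))
print(f"join queue {best}")
```

Queue B has four trolleys – the longest queue by headcount – yet its load of under 100 seconds of scanning beats both of the shorter queues, which is exactly why queue length alone is a poor guide.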

Are we completely powerless in this process? Not at all – we each can ensure all our purchases have barcodes and we can also influence the cycle time of the operative. Observe what they are doing – picking up each item in turn, finding the bar code, and turning the item so that the bar code can be easily scanned by the computer.  To shorten the cycle time all we have to do is make the work for the operative as easy as possible by placing each item on the moving belt in the correct orientation and spaced so that the speed of the belt delivers the items at the same rate that the operative can scan.  This sounds counter-intuitive but it works!  It is just like the variable speed limits on some motorways – by slowing down you get there faster because the flow is smoother – there is less “turbulence” created.

There are two potential flaws in this counter-intuitive strategy though – the people in the queue behind you may start “tutting” because they believe you are playing childish games and slowing the process down (which is incorrect but we are social animals and we copy other people’s behaviour and react to “social deviants”).  The other flaw is that, if I am shopping alone I cannot both stream my purchases for optimal scanning and also pack my scanned purchases into my reusable shopping bag!  So, I may only be able to use this strategy when accompanied by a trained assistant and have access to my fast getaway car!  Of course I might get even more radical – and offer to stream the shopping for the person in front of me while they pack their scanned items. But that would mean that we work together to achieve a common goal – to reduce the (life)time we all spend waiting in the shopping queue. This way we do not need an assistant or a getaway car and shopping might even become more sociable.  Everyone wins. What everyone? How is that possible?