The Nerve Curve

The Nerve Curve is the emotional rollercoaster ride that everyone who engages in Improvement Science needs the confidence to step onto.

Just like a theme park ride it has ups and downs, twists and turns, surprises and challenges, an element of danger and a splash of excitement. If it did not have all of those components then it would not be fun and there would be no queues of people wanting to ride, again and again. And theme parks are so successful because their rides have been very carefully designed – to be challenging, exciting, fun and safe – all at the same time.

So when we challenge others to step aboard our Improvement Science Nerve Curve then we need to ensure that our ride is safe – and to do that we need to understand where the dangers lurk, to actively point them out and then to avoid them.

A big danger hides right at the start. To get aboard the Nerve Curve we have to ask questions that expose the elephant-in-the-room issues. Everyone knows they are there – but no one wants to talk about them – and the biggest one is called Distrust, which is wrapped up in all sorts of different ways but inside is the Kernel of Cynicism. The inexperienced improvement facilitator may blunder straight into this trap just by using one small word … the word “Why?”  Arrrrrgh. Splat! Game Over. Next.

The “Why” question is like throwing a match into a barrel of emotional gunpowder – because it is interpreted as “What is your purpose?” and in a low-trust climate no one will want to reveal what their real purpose or intention is – they have learned from experience to hold their cards close to their chest – it is safer to keep their agendas hidden.

A much safer question is “What?” What are the facts? What are the feelings? What are the effects? What are the causes? What works well? What does not? What do we want? What don’t we want? What are our options? What would each deliver? What are everyone’s views? What is our decision? What is our action? Sticking to the “What” question helps to avoid everyone diving for the Political Panic Button and pulling the Emotional Emergency Brake before we have even got started.

The first part of the ride is the “Awful Reality Slope” that swoops us down into “Painful Awareness Canyon”, the emotional low-point of the ride – where the elephants-in-the-room roam for all to see and where people realise that once they are in view there is no way back. The next danger is at the far end of the Canyon and is called the Black Chasm of Ignorance, and the rollercoaster track goes right to the edge. Arrrgh – there is no way back now – we are going over the edge – quick, grab the Denial Bag from under the seat, apply the Blunder Onwards Blindfold or the Hope-for-the-best Smoke Hood.

So before our carriage reaches the Black Chasm we need to switch on the headlights to reveal the Bridge of How: the structure and the path to the other side, copiously illuminated with stories from those who have gone before. The track is steep though and the climb is hard work. Our carriage clanks and groans and it seems to take forever, but at the top we are rewarded by a New Perspective and the exhilarating ride down into the Plateau of Understanding where we stop to celebrate our success. Here we disembark and discover the Forest of Opportunity, which conceals many more Nerve Curves going off in all directions – rides that we can board when we feel ready for a new challenge. There is danger lurking here too though – through the Forest lies Complacency Swamp – which looks innocent except that the Bridge of How is hidden from view. Here we can get lured by the sweet smell of Power and the addictive aroma of Arrogance and we can become too comfortable in the Hammock of Blissful Ignorance where we do not notice that the world around us is changing. In reality we are slipping backwards but we do not notice – until we suddenly find ourselves in an unfamiliar Canyon of Painful Awareness. Ouch!

Being forewarned is our best defence and while we are encouraged to explore the Forest of Opportunity – we learn that we must return regularly to the Plateau to don the Role of Educator and to refresh ourselves from the Fountain of New Knowledge by showing others what we have learned and learning from them in return.  And when we start to crave more excitement we can board another Nerve Curve to a higher Plateau of Understanding.

The Safety Harness of our Improvement journey is called See-Do-Teach and the most important part is Teach.  Our educators need to have more than just a knowledge of how to do, they also need to have enough understanding to be able to explain why to do. To convince others to get onboard the Nerve Curve they must be able to explain why the issues still exist and why the current methods are not sufficient.  Those who have been through the ride are the only ones who are credible because they understand.

And that understanding grows with practice and it grows quickly when we take on the challenge of learning how to explain the why.  This is Nerve Curve II.

All aboard for the greatest ride of all.

Knowledge and Understanding

Knowledge is not the same as Understanding.

We all know that the sun rises in the East and sets in the West; most of us know that the oceans have a twice-a-day tidal cycle and some of us know that these tides also have a monthly cycle that is associated with the phase of the moon. We know all of this just from taking notice; remembering what we see; and being able to recognise the patterns. We use this knowledge to make reliable predictions of the future times and heights of the tides; and we can do all of this without any understanding of how tides are caused.

Our lack of understanding means that we can only describe what has happened. We cannot explain how it happened. We cannot extract meaning – the why of what happened.

People have observed and described the movements of the sun, sea, moon, and stars for millennia and a few could even predict them with surprising accuracy – but it was not until the 17th century that we began to understand what caused the tides. Isaac Newton developed enough of an understanding to explain how it worked and he did it using a new concept called gravity and a new tool called calculus. He then used this understanding to explain a lot of other unexplained things and suddenly the Universe started to make a lot more sense to everyone. Nowadays we teach this knowledge at school and we take it for granted. We assume it is obvious, but it is not. We are no smarter now than people in the 17th Century – we just have a deeper understanding (of physics).

Understanding enables things that have not been observed or described to be predicted and explained. Understanding is necessary if we want to make rational and reliable decisions that will lead to changes for the better in a changing world.

So, how can we test if we only know what to do or if we actually understand what to do?

If we understand then we can demonstrate the application of our knowledge by solving old and new problems effectively and we can explain how we do it.  If we do not understand then we may still be able to apply our knowledge to old problems but we do not solve new problems effectively or efficiently and we are not able to explain why.

But we do not want to risk making a mistake just to test whether we have an understanding-gap, so how can we find out? What we look for is the tell-tale sign of an excess of knowledge and a dearth of understanding – and it has a name – it is called “bureaucracy”.

Suppose we have a system where the decision-makers do not make effective decisions when faced with new challenges – which means that their decisions lead to unintended adverse outcomes. It does not take very long for the system to know that the decision process is ineffective – so to protect itself the system reacts by creating bureaucracy – a sort of organisational damage-limitation circle of sand-bags that limits the negative consequences of the poor decisions. A bureaucratic firewall, so to speak.

Unfortunately, while bureaucracy is effective it is non-specific, it uses up resources and it slows everything down. Bureaucracy is inefficiency. What we get as a result is a system that costs more and appears to do less and that is resistant to any change – not just poor decisions – it slows down good ones too.

The bureaucratic barrier is important though; doing less bad stuff is actually a reasonable survival strategy – until the cost of the bureaucracy threatens the system’s viability. Then it becomes a liability.

So what happens when a last-saloon-in-town “efficiency” drive is started in desperation and the “bureaucratic red tape” is slashed? The poor decisions that the red tape was ensnaring are free to spread virally and when implemented they create a big-bang unintended adverse consequence! The safety and quality performance of the system drops sharply and that triggers the reflex “we-told-you-so” and rapid re-introduction of the red-tape, plus some extra to prevent it happening again. The system learns from its experience and concludes that “higher quality always costs more”, “don’t trust our decision-makers”, “the only way to avoid a bad decision is not to make or implement any decisions” and “the safest way to maintain quality is to add extra checks and increase the price”. The system then remembers this new knowledge for future reference; the bureaucratic concrete sets hard; and the whole cycle repeats itself. Ad infinitum.

So, with this clearer insight into the value of bureaucracy and its root cause we can now design an alternative system: to develop knowledge into understanding and by that route to improve our capability to make better decisions that lead to predictable, reliable, demonstrable and explainable benefits for everyone. When we do that, the non-specific bureaucracy is seen to impede progress so it makes sense to dismantle the bits that block improvement – and keep the bits that block poor decisions and that maintain performance. We now get improved quality and lower costs at the same time, quickly, predictably and without taking big risks, and we can reinvest what we have saved in making further improvements and developing more knowledge, a deeper understanding and wiser decisions. Ad infinitum.

The primary focus of Improvement Science is to expand understanding – our ability to decide what to do, and what not to; where and where not to; and when and when not to – and to be able to explain and to demonstrate the “how” and to some extent the “why”.

One proven method is to See, then to Do, and then to Teach. And when we try that we discover to our surprise that the person whose understanding increases the most is the teacher!  Which is good because the deeper the teacher’s understanding, the more flexible, adaptable and open to new learning they become. Education and bureaucracy are poor partners.

Cause and Effect

“Breaking News: Scientists have discovered that people with yellow teeth are more likely to die of lung cancer. Patient-groups and dentists are now calling for tooth-whitening to be made freely available to everyone.”

Does anything about this statement strike you as illogical? Surely it is obvious. Having yellow teeth does not cause lung cancer – smoking causes both yellow teeth and lung cancer!  Providing a tax-funded tooth-whitening service will be futile – banning smoking is the way to reduce deaths from lung cancer!

What is wrong here? Do we have a problem with mad scientists, misuse of statistics or manipulative journalists? Or all three?

Unfortunately, while we may believe that smoking causes both yellow teeth and lung cancer it is surprisingly difficult to prove it – even when sane scientists use the correct statistics and their results are accurately reported by trustworthy journalists.  It is not easy to prove causality.  So we just assume it.

We all do this many times every day – we infer causality from our experience of interacting with the real world – and it is our innate ability to do that which allows us to say that the opening statement does not feel right.  And we do this effortlessly and unconsciously.

We then use our inferred-causality for three purposes. Firstly, we use it to explain how past actions led to the present situation. The chain of cause-and-effect. Secondly, we use it to create options in the present – our choices of actions. Thirdly, we use it to predict the outcome of our chosen action – we set our expectation and then compare the outcome with our prediction. If the outcome is better than we expected then we feel good; if it is worse then we feel bad.

What we are doing naturally and effortlessly is called “causal modelling”. And it is an impressive skill. It is the skill needed to solve problems by designing ways around them.

Unfortunately, the ability to build and use a causal model does not guarantee that our model is a valid, complete or accurate representation of reality. Our model may be imperfect and we may not be aware of it. This raises two questions: “How could two people end up with different causal models when they are experiencing the same reality?” and “How do we prove which, if either, is correct?”

The issue here is that no two people can perceive reality exactly the same way – we each have a unique perspective – and it is an inevitable source of variation.

We also tend to assume that what-we-perceive-is-the-truth so if someone expresses a different view of reality then we habitually jump to the conclusion that they are “wrong” and we are “right”.  This unconscious assumption of our own rightness extends to our causal models as well. If someone else believes a different explanation of how we got to where we are, what our choices are and what effect we might expect from a particular action then there is almost endless opportunity for disagreement!

Fortunately our different perceptions agree enough to create common ground which allows us to co-exist reasonably amicably.  But, then we take the common ground for granted, it slips from our awareness, and we then magnify the molehills of disagreement into mountains of discontent.  It is the way our caveman wetware works. It is part of the human condition.

So, if our goal is improvement, then we need to consider a more effective approach: which is to assume that all our causal models are approximate and that they are all works-in-progress. This implies that each of us has two challenges: first to develop a valid causal model by testing it against reality through experimentation; and second to assist the collective development of a common causal model by sharing our individual understanding through explanation and demonstration.

The problem we then encounter is that statistical analysis of historical data cannot answer questions of causality – it is necessary but it is not sufficient – and being insufficient it cannot, by itself, build common sense. For example, there may well be a statistically significant association between “yellow teeth” and “lung cancer” and “premature death”, but knowing those facts is not enough to help us create a valid cause-and-effect model that we can then use to make wiser choices of more effective actions that cause us to live longer.
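The confounding argument can be sketched in a short simulation: a hidden common cause (smoking) drives both tooth colour and cancer, so the two are strongly associated in the data even though neither causes the other – and conditioning on the true cause makes the association vanish. The population size and all the probabilities below are purely illustrative assumptions, not real epidemiology.

```python
import random

random.seed(42)

# Illustrative simulation: smoking causes both yellow teeth and
# lung cancer; yellow teeth does NOT cause cancer.
N = 100_000
people = []
for _ in range(N):
    smokes = random.random() < 0.3
    yellow = random.random() < (0.8 if smokes else 0.1)   # teeth colour
    cancer = random.random() < (0.15 if smokes else 0.01)  # cancer risk
    people.append((smokes, yellow, cancer))

def cancer_rate(group):
    return sum(c for _, _, c in group) / len(group)

# Naive association: yellow teeth "predicts" cancer...
yellow_group = [p for p in people if p[1]]
white_group = [p for p in people if not p[1]]
print(f"cancer rate, yellow teeth: {cancer_rate(yellow_group):.3f}")
print(f"cancer rate, white teeth:  {cancer_rate(white_group):.3f}")

# ...but stratify by the true cause and the association disappears:
# within smokers, and within non-smokers, tooth colour tells us nothing.
for smokes in (True, False):
    stratum = [p for p in people if p[0] == smokes]
    y = cancer_rate([p for p in stratum if p[1]])
    w = cancer_rate([p for p in stratum if not p[1]])
    print(f"smoker={smokes}: yellow {y:.3f} vs white {w:.3f}")
```

Run as-is, the naive comparison shows a several-fold difference in cancer rate by tooth colour, while within each smoking stratum the rates are essentially equal – which is why tooth-whitening would be a futile intervention.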

Learning how to make wiser choices that lead to better outcomes is what Improvement Science is all about – and we need more than statistics – we need to learn how to collectively create, test and employ causal models.

And that has another name – it is called common sense.

Resistance to Change

Many people who are passionate about improvement become frustrated when they encounter resistance-to-change.

It does not matter what sort of improvement is desired – safety, delivery, quality, costs, revenue, productivity or all of them.

The natural and intuitive reaction to meeting resistance is to push harder – and our experience of the physical world has taught us that if we apply enough pressure at the right place then resistance will be overcome and we will move forward.

Unfortunately we sometimes discover that we are pushing against an immovable object and even our maximum effort is futile – so we give up and label it as “impossible”.

Much of Improvement Science appears counter-intuitive at first sight and the challenge of resistance is no different.  The counter-intuitive response to feeling resistance is to pull back, and that is exactly what works better. But why does it work better? Isn’t that just giving up and giving in? How can that be better?

To explain the rationale it is necessary to examine the nature of resistance more closely.

Resistance to change is an emotional reaction to an unconsciously perceived threat that is translated into a conscious decision, action and justification: the response. The range of verbal responses is large, as illustrated in the caption, and the range of non-verbal responses is just as large.  Attempting to deflect or defuse all of them is impractical, ineffective and leads to a feeling of frustration and futility.

This negative emotional reaction we call resistance is non-specific because that is how our emotions work – and it is triggered as much by the way the change is presented as by what the change is.

Many change “experts” recommend that the better method of “driving” change is selling-versus-telling, and recommend learning psycho-manipulation techniques to achieve it – close-the-deal sales training for example. Unfortunately this strategy can create a psychological “arms race” which can escalate just as quickly and lead to the same outcome: an emotional battle and psychological casualties. This outcome is often given the generic label of “stress”.

An alternative approach is to regard resistance behaviour as multi-factorial and one model separates the non-specific resistance response into separate categories: Why Do – Don’t Do – Can’t Do – Won’t Do.

The Why Do response is valuable feedback because it says “we do not understand the purpose of the proposed change” – and it is not unusual for proposals to be purposeless. This is sometimes called “meddling”. This is fear of the unknown.

The Don’t Do response is valuable feedback that is saying “there is a risk with this proposed change – an unintended negative consequence that may be greater than the intended positive outcome“. Often it is very hard to explain this NoNo reaction because it is the output of an unconscious thought process that operates out of awareness. It just doesn’t feel good. And some people are better at spotting the risks – they prefer to wear the Black Hat – they are called skeptics. This is fear of failure.

The Can’t Do response is also valuable feedback that is saying “we get the purpose and we can see the problem and the benefit of a change – we just cannot see the path that links the two because it is blocked by something.” This reaction is often triggered by an unconscious recognition that some form of collaborative working will be required but the cultural context is low on respect and trust. It can also just be a manifestation of a knowledge, skill or experience gap – the “I don’t know how to do” gap. Some people habitually adopt the Victim role – most are genuine and do not know how.

The Won’t Do response is also valuable feedback that is saying “we can see the purpose, the problem, the benefit, and the path but we won’t do it because we don’t trust you“. This reaction is common in a low-trust culture where manipulation, bullying and game playing is the observed and expected behaviour. The role being adopted here is the Persecutor role – and the psychological discount is caring for others. Persecutors lack empathy.

The common theme here is that all resistance-to-change responses represent valuable feedback, which explains why the better reaction to resistance is to stop talking and start listening: to make progress we must use the feedback to diagnose which components of resistance are present. This is necessary because each category requires a different approach.

For example, Why Do requires making both the problem and the purpose explicit; Don’t Do requires exploring the fear and bringing to awareness what is fuelling it; Can’t Do requires searching for the skill gaps and filling them; and Won’t Do requires identifying the trust-eroding beliefs, attitudes and behaviours and making it safe to talk about them.

Resistance-to-change is generalised as a threat when in reality it represents an opportunity to learn and to improve – which is what Improvement Science is all about.