The First Step Looks The Steepest

Getting started on improvement is not easy.

It feels as though we have to push hard to get anywhere, and when we stop pushing everything slides back to where it was before, as if all our effort was for nothing.

And it is easy to become despondent.  It is easy to start to believe that improvement is impossible. It is easy to give up. It is not easy to keep going.


One common reason for early failure is that we often start by trying to improve something over which we have little control. That is natural, because many of the things that niggle us are not of our making.

But not all Niggles are like that; there are also many Niggles over which we have almost complete control.

It is these close-to-home Niggles that we need to start with – and that is surprisingly difficult too – because it requires a bit of time-investment.


The commonest reason for not investing time in improvement is: “I am too busy.”

Q: Too busy doing what – specifically?

This simple question is a good place to start because just setting aside a few minutes each day to reflect on where we have been spending our time is a worthwhile task.

And the output of our self-reflection is usually surprising.

We waste lifetime every day doing worthless work.

Then we complain that we are too busy to do the worthwhile stuff.

Q: So what are we scared of? Facing up to the uncomfortable reality of knowing how much lifetime we have wasted already?

We cannot change the past. We can only influence the future. So we need to learn from the past to make wiser choices.


Lifetime is odd stuff.  It both is and is not like money.

We can waste lifetime and we can waste money. In that  respect they are the same. Money we do not use today we can save for tomorrow, but lifetime not used today is gone forever.

We know this, so we have learned to use up every last drop of lifetime – we have learned to keep ourselves busy.

And if we are always busy then any improvement will involve a trade-off: dis-investing and re-investing our lifetime. This implies the return on our lifetime re-investment must come quickly and predictably – or we give up.


One tried-and-tested strategy is to start small and then to re-invest our time dividend in the next cycle of improvement.  And if we make wise re-investment choices, the benefit will grow exponentially.
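The compounding effect of re-investing the time dividend can be sketched with some illustrative numbers (the 20% return per cycle is a hypothetical figure, not one from the text):

```python
# Sketch: compounding a reinvested time dividend (illustrative numbers only).
# Assume each improvement cycle releases 20% more time than we invested in it.

def time_dividend(initial_investment_hours, return_rate, cycles):
    """Return the hours released by each cycle if every dividend is reinvested."""
    invested = initial_investment_hours
    released_per_cycle = []
    for _ in range(cycles):
        released = invested * (1 + return_rate)  # time released by this cycle
        released_per_cycle.append(released)
        invested = released                      # reinvest the whole dividend
    return released_per_cycle

# One hour invested, 20% return per cycle, ten cycles:
gains = time_dividend(1.0, 0.2, 10)
```

Ten modest cycles turn one invested hour into roughly six released hours; the exponential growth comes from the re-investment, not from any single big win.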

Successful entrepreneurs do not make it big overnight.

If we examine their life stories we will find a repeating pattern of bigger and bigger improvement cycles.

The first thing successful entrepreneurs learn is how to make any investment lead to a return – consistently. It is not luck.  They practice with small stuff until they can do it reliably.

Successful entrepreneurs are disciplined and they only take calculated risks.

Unsuccessful entrepreneurs are more numerous and they have a different approach.

They are the get-rich-quick brigade. The undisciplined gamblers. And the Laws of Probability ensure that they all will fail eventually.

Sustained success is not by chance, it is by design.

The same is true for improvement.  The skill to learn is how to spot an opportunity to release some valuable time resource by nailing a time-sapping-niggle; and then to reinvest that time in the next most promising cycle of improvement  – consistently and reliably.  It requires discipline and learning to use some novel tools and techniques.

This is where Improvement Science helps – because the tools and techniques apply to any improvement. Safety. Flow. Quality. Productivity. Stability. Reliability.

In a nutshell … trustworthy.


The first step looks the steepest because the effort required feels high and the benefit gained looks small.  But it is climbing the first step that separates the successful from the unsuccessful. And successful people are self-disciplined people.

After a few invest-release-reinvest cycles the amount of time released exceeds the amount needed to reinvest. It is then we have time to spare – and we can do what we choose with that.

Ask any successful athlete or entrepreneur – they keep doing it long after they need to – just for the “rush” it gives them.


The tool I use, because it is quick, easy and effective, is called The 4N Chart®.  And it has a helpful assistant called a Niggle-o-Gram®.   Together they work like a focusing lens – they show where the most fertile opportunity for improvement is – the best return on an investment of time and effort.

And when we have proved to ourselves that the first step of improvement is not as steep as we believed – then we have released some time to re-invest in the next cycle of improvement – and in sharing what we have discovered.

That is where the big return comes from.

10/11/2012: Feedback from people who have used The 4N Chart and Niggle-o-Gram for personal development is overwhelmingly positive.

Look Out For The Time Trap!

There is a common system ailment which every Improvement Scientist needs to know how to manage.

In fact, it is probably the commonest.

The Symptoms: Disappointingly long waiting times and all resources running flat out.

The Diagnosis?  90%+ of managers say “It is obvious – lack of capacity!”.

The Treatment? 90%+ of managers say “It is obvious – more capacity!!”

Intuitively obvious maybe – but unfortunately these are incorrect answers. Which implies that 90%+ of managers do not understand how their systems work. That is a bit of a worry.  Lament not though – misunderstanding is a treatable symptom of an endemic system disease called agnosia (=not knowing).

The correct answer is “I do not yet have enough information to make a diagnosis”.

This answer is more helpful than it looks because it prompts four other questions:

Q1. “What other possible system diagnoses are there that could cause this pattern of symptoms?”
Q2. “What do I need to know to distinguish these system diagnoses?”
Q3. “How would I treat the different ones?”
Q4. “What is the risk of making the wrong system diagnosis and applying the wrong treatment?”


Before we start on this list we need to set out a few ground rules that will protect us from more intuitive errors (see last week).

The first Rule is this:

Rule #1: Data without context is meaningless.

For example 130 is a number – it is data. 130 what? 130 mmHg. Ah ha! The “mmHg” is the units – it means millimetres of mercury and it tells us this data is a pressure. But what, where, when, who, how and why? We need more context.

“The systolic blood pressure measured in the left arm of Joe Bloggs, a 52 year old male, using an Omron M2 oscillometric manometer on Saturday 20th October 2012 at 09:00 is 130 mmHg”.

The extra context makes the data much more informative. The data has become information.

To understand what the information actually means requires some prior knowledge. We need to know what “systolic” means and what an “oscillometric manometer” is and the relevance of the “52 year old male”.  This ability to extract meaning from information has two parts – the ability to recognise the language – the syntax; and the ability to understand the concepts that the words are just labels for; the semantics.

To use this deeper understanding to make a wise decision to do something (or not) requires something else. Exploring that would  distract us from our current purpose. The point is made.

Rule #1: Data without context is meaningless.

In fact it is worse than meaningless – it is dangerous. And it is dangerous because when the context is missing we rarely stop and ask for it – we rush ahead and fill the context gaps with assumptions. We fill the context gaps with beliefs, prejudices, gossip, intuitive leaps, and sometimes even plain guesses.

This is dangerous – because the same data in a different context may have a completely different meaning.

To illustrate.  If we change one word in the context – if we change “systolic” to “diastolic” then the whole meaning changes from one of likely normality that probably needs no action; to one of serious abnormality that definitely does.  If we missed that critical word out then we are in danger of assuming that the data is systolic blood pressure – because that is the most likely given the number.  And we run the risk of missing a common, potentially fatal and completely treatable disease called Stage 2 hypertension.

There is a second rule that we must always apply when using data from systems. It is this:

Rule #2: Plot time-series data as a chart – a system behaviour chart (SBC).

The reason for the second rule is because the first question we always ask about any system must be “Is our system stable?”

Q: What do we mean by the word “stable”? What is the concept that this word is a label for?

A: Stable means predictable-within-limits.

Q: What limits?

A: The limits of natural variation over time.

Q: What does that mean?

A: Let me show you.

Joe Bloggs is disciplined. He measures his blood pressure almost every day and he plots the data on a chart together with some context.  The chart shows that his systolic blood pressure is stable. That does not mean that it is constant – it does vary from day to day. But over time a pattern emerges from which Joe Bloggs can see that, based on past behaviour, there is a range within which future behaviour is predicted to fall.  And Joe Bloggs has drawn these limits on his chart as two red lines and he has called them expectation lines. These are the limits of natural variation over time of his systolic blood pressure.

If one day he measured his blood pressure and it fell outside that expectation range then he would say “I didn’t expect that!” and he could investigate further. Perhaps he made an error in the measurement? Perhaps something else has changed that could explain the unexpected result? Perhaps it is higher than expected because he is under a lot of emotional stress at work? Perhaps it is lower than expected because he is relaxing on holiday?

His chart does not tell him the cause – it just flags when to ask more “What might have caused that?” questions.
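The post does not say how Joe Bloggs calculated his expectation lines, but one common convention for system behaviour charts is the XmR method, which derives the limits of natural variation from the average moving range. A minimal sketch, using hypothetical readings:

```python
# Sketch of an XmR-style system behaviour chart calculation (one common way to
# derive "expectation lines"; the text does not specify the exact method used).

def xmr_limits(values):
    """Return (lower, upper) natural process limits from an XmR chart."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR constant that converts the average moving
    # range into three-sigma-equivalent limits.
    return mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

# Joe Bloggs' recent systolic readings (hypothetical data, mmHg):
systolic = [128, 131, 127, 133, 130, 129, 132, 128, 131, 130]
lo, hi = xmr_limits(systolic)
unexpected = [x for x in systolic if not lo <= x <= hi]  # points to investigate
```

Any reading that falls outside `lo` to `hi` is an “I didn’t expect that!” signal worth a closer look; the chart flags it but does not explain it.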

If you arrive at a hospital in an ambulance as an emergency then the first two questions the emergency care team will need to know the answer to are “How sick are you?” and “How stable are you?”. If you are sick and getting sicker then the first task is to stabilise you, and that process is called resuscitation.  There is no time to waste.


So how is all this relevant to the common pattern of symptoms from our sick system: disappointingly long waiting times and resources running flat out?

Using Rule#1 and Rule#2:  To start to establish the diagnosis we need to add the context to the data and then plot our waiting time information as a time series chart and ask the “Is our system stable?” question.

Suppose we do that and this is what we see. The context is that we are measuring the Referral-to-Treatment Time (RTT) for consecutive patients referred to a single service called X. We only know the actual RTT when the treatment happens and we want to be able to set the expectation for new patients when they are referred  – because we know that if patients know what to expect then they are less likely to be disappointed – so we plot our retrospective RTT information in the order of referral.  With the Mark I Eyeball Test (i.e. look at the chart) we form the subjective impression that our system is stable. It is delivering a predictable-within-limits RTT with an average of about 15 weeks and an expected range of about 10 to 20 weeks.

So far so good.

Unfortunately, the purchaser of our service has set a maximum limit for RTT of 18 weeks – a key performance indicator (KPI) target – and they have decided to “motivate” us by withholding payment for every patient that we do not deliver on time. We can now see from our chart that failures to meet the RTT target are expected, so to avoid the inevitable loss of income we have to come up with an improvement plan. Our jobs will depend on it!

Now we have a problem – because when we look at the resources that are delivering the service they are running flat out – 100% utilisation. They have no spare flow-capacity to do the extra work needed to reduce the waiting list. Efficiency drives and exhortation have got us this far but cannot take us any further. We conclude that our only option is “more capacity”. But we cannot afford it because we are operating very close to the edge. We are a not-for-profit organisation. The budgets are tight as a tick. Every penny is being spent. So spending more here will mean spending less somewhere else. And that will cause a big argument.

So the only obvious option left to us is to change the system – and the easiest thing to do is to monitor the waiting time closely on a patient-by-patient basis and if any patient starts to get close to the RTT Target then we bump them up the list so that they get priority. Obvious!

WARNING: We are now treating the symptoms before we have diagnosed the underlying disease!

In medicine that is a dangerous strategy.  Symptoms are often non-specific.  Different diseases can cause the same symptoms.  An early morning headache can be caused by a hangover after a long night on the town – it can also (much less commonly) be caused by a brain tumour. The risks are different and the treatment is different. Get the diagnosis wrong and disappointment will follow.  Do I need a hole in the head or will a paracetamol be enough?


Back to our list of questions.

What else can cause the same pattern of symptoms of a stable and disappointingly long waiting time and resources running at 100% utilisation?

There are several other process diseases that cause this symptom pattern and none of them are caused by lack of capacity.

Which is annoying because it challenges our assumption that this pattern is always caused by lack of capacity. Yes – that can sometimes be the cause – but not always.

But before we explore what these other system diseases are we need to understand why our current belief is so entrenched.

One reason is because we have learned, from experience, that if we throw flow-capacity at the problem then the waiting time will come down. When we do “waiting list initiatives” for example.  So if adding flow-capacity reduces the waiting time then the cause must be lack of capacity? Intuitively obvious.

Intuitively obvious it may be – but incorrect too.  We have been tricked again. This is flawed causal logic. It is called the illusion of causality.

To illustrate. If a patient complains of a headache and we give them paracetamol then the headache will usually get better.  That does not mean that the cause of headaches is a paracetamol deficiency.  The headache could be caused by lots of things and the response to treatment does not reliably tell us which possible cause is the actual cause. And by suppressing the symptoms we run the risk of missing the actual diagnosis while at the same time deluding ourselves that we are doing a good job.

If a system complains of  long waiting times and we add flow-capacity then the long waiting time will usually get better. That does not mean that the cause of long waiting time is lack of flow-capacity.  The long waiting time could be caused by lots of things. The response to treatment does not reliably tell us which possible cause is the actual cause – so by suppressing the symptoms we run the risk of missing the diagnosis while at the same time deluding ourselves that we are doing a good job.

The similarity is not a co-incidence. All systems behave in similar ways. Similar counter-intuitive ways.


So what other system diseases can cause a stable and disappointingly long waiting time and high resource utilisation?

The commonest system disease associated with these symptoms is a time trap – and time traps have nothing to do with capacity or flow.

They are part of the operational policy design of the system. And we actually design time traps into our systems deliberately! Oops!

We create a time trap when we deliberately delay doing something that we could do immediately – perhaps to give the impression that we are very busy or even overworked!  We create a time trap whenever we defer until later something we could do today.

If the task does not seem important or urgent for us then it is a candidate for delaying with a time trap.

Unfortunately it may be very important and urgent for someone else – and a delay could be expensive for them.

Creating time traps gives us a sense of power – and it is for that reason they are much loved by bureaucrats.

To illustrate how time traps cause these symptoms consider the following scenario:

Suppose I have just enough resource-capacity to keep up with demand and flow is smooth and fault-free.  My resources are 100% utilised;  the flow-in equals the flow-out; and my waiting time is stable.  If I then add a time trap to my design then the waiting time will increase but over the long term nothing else will change: the flow-in,  the flow-out,  the resource-capacity, the cost and the utilisation of the resources will all remain stable.  I have increased waiting time without adding or removing capacity. So lack of resource-capacity is not always the cause of a longer waiting time.
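This scenario can be checked with a toy simulation: a deterministic process where one job arrives and one job is served per time step, with an optional fixed policy delay (the time trap) in front of the server. The numbers are illustrative, not from the text:

```python
# Minimal sketch of the time-trap effect: a deterministic single-server process
# where one job arrives and one is completed per time step (100% utilisation).
# Adding a fixed policy delay before service lengthens the lead time but leaves
# flow-in, flow-out and utilisation untouched.

def simulate(n_jobs, trap_delay):
    """Each job arrives at t, is held for trap_delay, then takes 1 step of service."""
    lead_times = []
    server_free_at = 0
    for t in range(n_jobs):              # flow-in: one job per step
        ready = t + trap_delay           # the time trap holds the job
        start = max(ready, server_free_at)
        finish = start + 1               # service time of 1 step
        server_free_at = finish
        lead_times.append(finish - t)    # start-to-finish (lead) time
    return lead_times

no_trap = simulate(100, trap_delay=0)
with_trap = simulate(100, trap_delay=5)
```

With a five-step trap every job takes six steps instead of one from start to finish, yet exactly the same number of jobs flow in and out and the server is just as busy.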

This new insight creates a new problem; a BIG problem.

Suppose we are measuring flow-in (demand) and flow-out (activity) and time from-start-to-finish (lead time) and the resource usage (utilisation) and we are obeying Rule#1 and Rule#2 and plotting our data with its context as system behaviour charts.  If we have a time trap in our system then none of these charts will tell us that a time-trap is the cause of a longer-than-necessary lead time.

Aw Shucks!

And that is the primary reason why most systems are infested with time traps. The commonly reported performance metrics we use do not tell us that they are there.  We cannot improve what we cannot see.

Well actually the system behaviour charts do hold the clues we need – but we need to understand how systems work in order to know how to use the charts to make the time trap diagnosis.

Q: Why bother though?

A: Simple. It costs nothing to remove a time trap.  We just design it out of the process. Our flow-in will stay the same; our flow-out will stay the same; the capacity we need will stay the same; the cost will stay the same; the revenue will stay the same but the lead-time will fall.

Q: So how does that help me reduce my costs? That is what I’m being nailed to the floor with as well!

A: If a second process requires the output of the process that has a hidden time trap then the cost of the queue in the second process is the indirect cost of the time trap.  This is why time traps are such a fertile cause of excess cost – because they are hidden and because their impact is felt in a different part of the system – and usually in a different budget.

To illustrate. Suppose that 60 patients per day are discharged from our hospital, and that each one requires a prescription of to-take-out (TTO) medications to be completed before they can leave.  Suppose that there is a time trap in this drug dispensing and delivery process: a policy whereby a porter is scheduled to collect and distribute all the prescriptions at 5 pm. The porter is busy for the whole day, and this policy ensures that all the prescriptions for the day are ready before the porter arrives at 5 pm.

Suppose we get the event data from our electronic prescribing system (EPS), plot it as a system behaviour chart, and it shows that most of the sixty prescriptions are generated over a four-hour period between 11 am and 3 pm. These prescriptions are delivered on paper (by our busy porter) and the pharmacy guarantees to complete each one within two hours of receipt, although most take less than 30 minutes.

What is the cost of this one-delivery-per-day porter-policy time trap? Suppose our hospital has 500 beds and the total annual expense is £182 million – that is £0.5 million per day.  Sixty patients are each waiting between 2 and 5 hours longer than necessary because of the time trap, and this adds up to about 5 bed-days per day – the cost of 5 beds – 1% of the total cost.  So the time trap is, indirectly, costing us the equivalent of about £1.8 million per annum.

It would be much more cost-effective for the system to have a dedicated porter working from midday to 5 pm doing nothing else but delivering dispensed TTOs as soon as they are ready!  And that assumes there are no other time traps in the decision-to-discharge process, such as the one created by batching all the TTO prescriptions to the end of the morning ward round, or the one created by the batch of delivered TTOs waiting for the nurses to distribute them to the queue of waiting patients!
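The arithmetic in the porter example can be laid out explicitly, using the figures from the text and taking the conservative 2-hour end of the 2-to-5-hour range as the average extra wait:

```python
# Reproducing the porter-policy cost arithmetic (figures from the text above;
# the 2-hour average wait is the conservative end of the 2-to-5-hour range).

patients_per_day = 60
avg_extra_wait_hours = 2
beds = 500
annual_expense = 182_000_000        # £ per year, i.e. about £0.5 million per day

extra_bed_days_per_day = patients_per_day * avg_extra_wait_hours / 24  # 5 bed-days
cost_per_bed_per_year = annual_expense / beds                          # £364,000
indirect_cost_per_year = extra_bed_days_per_day * cost_per_bed_per_year
# about £1.8 million per annum, roughly 1% of the total expense
```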


Q: So how do we nail the diagnosis of a time trap and how do we differentiate it from a Batch or a Bottleneck or Carveout?

A: To learn how to do that will require a bit more explanation of the physics of processes.

And anyway if I just told you the answer you would know how but might not understand why it is the answer. Knowledge and understanding are not the same thing. Wise decisions do not follow from just knowledge – they require understanding. Especially when trying to make wise decisions in unfamiliar scenarios.

It is said that if we are shown we will understand 10%; if we can do we will understand 50%; and if we are able to teach then we will understand 90%.

So instead of showing how, I will offer a hint. The first step of the path to knowing how and understanding why is in the following essay:

A Study of the Relative Value of Different Time-series Charts for Proactive Process Monitoring. JOIS 2012;3:1-18

Click here to visit JOIS

Safety by Despair, Desire or Design?

Imagine the health and safety implications of landing a helicopter carrying a critically ill patient on the roof of a hospital.

Consider the possible number of ways that this scenario could go horribly wrong. But in reality it does not because this is a very visible hazard and the associated risks are actively mitigated.

It is much more dangerous for a slightly ill patient to enter the doors of the hospital on their own two legs.  Surely not!  How can that be?

First the reality – the evidence.

Repeated studies have shown that about 1 in 300  emergency admissions to hospitals do not leave alive and their death is avoidable. And it is not just weekends that are risky. That means about 1 person per week for each large acute hospital in England. That is about a jumbo-jet full of people every week in England. If you want to see the evidence click here to get a copy of a recent study.

How long would an airline stay in business if it crashed one plane full of passengers every week?

And how do we know that these are the risks? Well by looking at hospitals who have recognised the hazards and the risks and have actively done something about it. The ones that have used Improvement Science – and improved.


In one hospital the death rate from a common, high-risk emergency was significantly reduced overnight simply by designing and implementing a protocol that ensured these high-risk patients were admitted to the same ward. It cost nothing to do. No extra staff or extra beds. The effect was a consistently better level of care through proactive medical management. Preventing risk rather than correcting harm. The outcome was not just fewer deaths – the survivors did better too. More of them returned to independent living – which had a huge financial implication for the cost of long term care. It was cheaper for the healthcare system. But that benefit was felt in a different budget, so there was no direct financial reward to the hospital for improving the outcome.  So the improvement was not celebrated or sustained. Finance trumped Governance. Desire to improve safety is not enough.


Eventually and inevitably the safety issue will resurface and bite back.  The Mid Staffordshire Hospital debacle is a timely reminder. Eventually despair will drive change – but it will come at a high price.  The emotional knee-jerk reaction driven by public outrage will be to add yet more layers of bureaucracy and cost: more inspectors, inspections and delays.  The knee-jerk is not designed to understand and correct the root cause – that toxic combination of ignorance and confidence that goes by the name of arrogance.


The reason the helicopter-on-the-hospital-roof scenario is safer is because it is designed to be – and one of the tools used in safe process design is called Failure Modes and Effects Analysis, or FMEA.

So if anyone reading this is in a senior clinical or senior managerial role in a hospital that has any safety issues – and has not heard of FMEA – then they have a golden opportunity to learn a skill that will lead to a safer-by-design hospital.

Safer-by-design hospitals are less frightening to walk into, less demotivating to work in and cheaper to run.  Everyone wins.

If you want to learn more now then click here for a short summary of FMEA from the Institute of Healthcare Improvement.

It was written in 2004. That is eight years ago.

Intuitive Counter

If it takes five machines five minutes to make five widgets how long does it take ten machines to make ten widgets?

If the answer “ten minutes” just popped into your head then your intuition is playing tricks on you. The correct answer is “five minutes”.

Let us try another.

If the lily leaves on the surface of a lake double in area every day and if it takes 48 days to cover the whole lake then how long did it take to cover half the lake?  Twenty four days? Nope. The correct answer is 47 days and once again our intuition has tricked us. It is obvious in hindsight though – just not so obvious before.
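Both puzzles can be checked by simple calculation rather than intuition:

```python
# Checking both puzzles by calculation rather than intuition.

# Widgets: 5 machines make 5 widgets in 5 minutes, so one machine makes one
# widget in 5 minutes. Ten machines working in parallel still take 5 minutes.
minutes_per_widget_per_machine = 5
machines, widgets = 10, 10
time_needed = minutes_per_widget_per_machine * widgets / machines  # 5.0 minutes

# Lily pond: the covered area doubles daily and covers the lake on day 48.
coverage = 1.0   # fraction of the lake covered on day 48
day = 48
while coverage > 0.5:  # walk backwards until only half the lake is covered
    coverage /= 2
    day -= 1
# day is now 47: the lake was half covered just one day before it was full.
```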

We all make thousands of unconscious, intuitive decisions every day so if we make unintended errors like this then they must be happening all the time and we do not realise. 

OK one more and really concentrate this time.

If we have a three-step sequential process and the chance of a significant safety error at each step is 10%, 30% and 20% respectively then what is the overall error rate for the process?  A: (10%+30%+20%) /3 = 60%/3 = 20%? Nope. Um 30%? Nope. What about 60%?  Nope. The answer is 49.6%. And it is not intuitively obvious how that is the correct answer.
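The 49.6% comes from multiplying the chances of each step being error-free, not from averaging the error rates:

```python
# The three-step error rate worked out explicitly: each step is error-free with
# probability (1 - p), and the whole process only when every step is error-free.

step_error_rates = [0.10, 0.30, 0.20]

p_all_ok = 1.0
for p in step_error_rates:
    p_all_ok *= (1 - p)            # 0.9 * 0.7 * 0.8 = 0.504

overall_error_rate = 1 - p_all_ok  # = 0.496, i.e. 49.6%
```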


When it comes to numbers, counting, and anything to do with chance and probability then our intuition is not a safe and reliable tool. But we rely on it all the time and we are not aware of the errors we are making. And it is not just numbers that our intuition trips us up over!


A lot of us are intuitive thinkers … about 40% in fact. The majority of leaders and executives are categorised as iNtuitors when measured using a standard psychological assessment tool. And remember – they are the ones making the Big Decisions that affect us all.  So if their intuition is tripping them up then their decisions are likely to be a bit suspect.

Fortunately there is a group of people who do not fall into these hidden cognitive counting traps so easily. They have Books of Rules of how to do numbers correctly – and they are called Accountants. When they take the same standard assessment a lot of them pop up at the other end of the iNtuitor dimension. They are called Sensors.  Not because they are sensitive (which of course they are) but because they rank reality more trustworthy than rhetoric. They trust what they see – the facts – the numbers.  And money is a number. And numbers add up exactly so that everything is neat, tidy, and auditable down to the last penny. Ahhhh – Bliss is Balanced Books and Budgets.


This is why the World is run by Accountants.  They nail our soft and fuzzy intuitive rhetoric onto the hard and precise fiscal reality.  And in so doing a big and important piece of the picture is lost. The fuzzy bit.


Intuitors have a very important role. They are able to think outside the Rule Book Box. They are comfortable working with fuzzy concepts and in abstract terms, and their favourite sport is intuitive leaping. It is a high risk sport though, because sometimes Reality reminds them that the Laws of Physics are not optional or subject to negotiation and innovation. Ouch!  But the iNtuitors’ ability to leap about conceptually comes in very handy when the World is changing unpredictably – because it allows the Books of Rules to be challenged and re-written as new discoveries are made. The first Rule is usually “Do not question the Rules” so those who follow Rules are not good at creating new ones. And those who write the rules are not good at sticking to them.

So, after enough painful encounters with Reality the iNtuitors find their comfort zones in board rooms, academia and politics – where they can avoid hard Reality and concentrate on soft Rhetoric. Here they can each have a different conceptual abstract mental model and can happily discuss, debate and argue with each other for eternity. Of course the rest of the Universe is spectacularly indifferent to board room, academic and political rhetoric – but the risk to the disinterested comes when the influential iNtuitors impose their self-generated semi-delusional group-think on the Real World without doing a Reality Check first.  The outcome is entirely predictable …

And as the hot rhetoric meets cold reality the fog of disillusionment forms. 


So if we wish to embark on a Quest for Improvement then it is really helpful to know where on the iNtuitor-Sensor dimension each of us prefers to sit. Intuitors need Sensors to provide a reality check and Sensors need Intuitors to challenge the status quo.  We are not nailed to our psychological perches – we can shuffle up and down if need be – we do have a favourite spot though; our comfort zone.

To help answer the “Where am I on the NS dimension?” question here is a  Temperament Self-Assessment Tool that you can use. It is based on the Jungian, Myers-Briggs and Keirsey models. Just run the programme, answer the 72 questions and you will get your full 4-dimensional profile and your “centre” on each. Then jot down the results on a scrap of paper. 

There is a whole industry that has sprung up out of these (and other) psychological assessment tools. They feed our fascination with knowing what makes us tick, and the role of the psychoexpert is to de-mystify the assessments for us and to explain the patterns in the tea leaves (for a fee of course, because it takes years of training to become a Demystifier). Disappointingly, my experience is that almost every person I have asked if they know their Myers-Briggs profile says “Oh yes, I did that years ago, it is SPQR or something like that but I have no idea what it means”.  Maybe they should ask for their Demystification Fee to be returned?

Anyway – here is the foundation level demystification guide to help you derive meaning from what is jotted on the scrap of paper.

First look at the N-S (iNtuitor-Sensor) dimension.  If you come out as N then look at the T-F (Thinking-Feeling) dimension – and together they will give an xNTx preference or an xNFx preference. People with these preferences are called Rationals and Idealists respectively.  If you prefer the S end of the N-S dimension then look at the J-P (Judging-Perceiving) result and this will give an xSxJ or xSxP preference. These are the Guardians and the Artisans.  Those are the Four Temperaments described by David Keirsey in “Please Understand Me II“. If you are near the middle of any of the dimensions then you will show a blend of temperaments. And please note – it is not an either-or category – it is a continuous spectrum.
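The decision rule described above can be written as a small lookup – a sketch assuming a standard four-letter Myers-Briggs code as input:

```python
# Sketch of the Keirsey temperament lookup described above, keyed only on the
# letters the text says matter: N/S first, then T/F for Ns or J/P for Ss.

def temperament(mbti):
    """Map a four-letter Myers-Briggs type (e.g. 'ENTP') to its Keirsey temperament."""
    ns, tf, jp = mbti[1], mbti[2], mbti[3]
    if ns == "N":
        return "Rational" if tf == "T" else "Idealist"   # xNTx / xNFx
    return "Guardian" if jp == "J" else "Artisan"        # xSxJ / xSxP

# Examples: temperament("ENTJ"), temperament("INFP"),
#           temperament("ISTJ"), temperament("ESFP")
```

Note that this hard categorisation is exactly what the text warns against: the real dimensions are continuous spectra, and someone near the middle shows a blend.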

How we actually manifest our innate personality preferences depends on our education, experiences and the exact context. This makes it tricky to interpret the specific results for an individual – hence the Tribe of Demystificationists. And remember – these are not intelligence tests, and there are no good/bad or right/wrong answers. They are gifts – or rather, gifts differing.


So how does all this psychobabble help us as Improvement Scientists?

Much of Improvement Science is just about improving awareness and insight – so insight into ourselves is of value.  

Rationals (xNTx) are attracted to occupations that involve strategic thinking and making rational, evidence-based decisions: such as engineers and executives. The Idealists (xNFx) are rarer, more sensitive, and attracted to occupations such as teaching, counselling, healing and being champions of good causes.  The Guardians (xSxJ) are particularly numerous and are attracted to occupations that form the stable bedrock of society – administrators, inspectors, supervisors, providers and protectors. They value the call-of-duty and sticking-to-the-rules for the good-of-all. Artisans (xSxP) are the risk-takers and fun-makers; the promoters, the entertainers, the explorers, the dealers, the artists, the marketeers and the salespeople.

These are the Four Temperaments that form the basic framework of the sixteen Myers-Briggs polarities.  And this is not a new idea – it has been around for millennia – just re-emerging with different names in different paradigms. In the Renaissance the Galenic Paradigm held sway and they were called the Phlegmatics (NT), the Cholerics (NF), the Melancholics (SJ) and the Sanguines (SP) – depending on which of the four body fluids was believed to be out of balance (phlegm, yellow bile, black bile or blood). So while the paradigms have changed, the empirical reality appears to have endured the ages.

The message for the Improvement Scientist is two-fold:

1. Know your own temperament and recognise the strengths and limitations of it. They all have a light and dark side.
2. Understand that the temperaments of groups of people can be both synergistic and antagonistic.

It is said that birds of a feather flock together and the collective behaviour of departments in large organisations tend to form around the temperament that suits that organisational function.  The character of the Finance department is usually very different to that of Operations, or Human Resources – and sparks can (and do) fly when they engage each other. No wonder chief executives have a short half-life and an effective one is worth its weight in gold! 

The interdepartmental discord that is commonly observed in large organisations follows from ignorance (unawareness of the reality of a spectrum of innate temperaments) and arrogance (expecting everyone to think the same way as we do). Antagonism is not an inevitable consequence though – it is just the default outcome in the absence of awareness and effective leadership.

This knowledge highlights two skills that an effective Improvement Scientist needs to master:

1. Respectful Educator (drawing back the black curtain of ignorance) and
2. Respectful Challenger (using reality to illuminate holes in the rhetoric).

Intuitive counter or counter intuitive?