Systemic Sickness

Sickness, illness, ill health, unhealthy, disease, disorder, distress are all words that we use when how we feel falls short of how we expect to feel. The words imply an illness continuum, and each of us appears to use different thresholds as action alerts.

The first is crossed when we become aware that all is not right, and our response is to enter a self-diagnosis and self-treatment mindset. This threshold is context-dependent; we use external references to detect when we have strayed too far from the norm – we compare ourselves with others. This early warning system works most of the time – after all, chemists make much of their business from over-the-counter (OTC) remedies!

If the first stage does not work, we cross the second threshold when we accept that we need expert assistance, and we switch into a different mode of thinking – the “sick role”.  Crossing the second threshold is a big psychological step that implies a perceived loss of control and power – and explains why many people put off seeking help. They enter a phase of denial, self-deception and self-justification which can be very resistant to change.

The same is true of organisations – when they become aware that they are performing below expectation, a “self-diagnosis” and “self-treatment” is instigated, except that it is called something different, such as an “investigation” or “root cause analysis”, and is followed by “recommendations” and an “action plan”.  The requirements for this to happen are an ability to become aware of a problem and a capability to understand and address the root cause both effectively and efficiently.  This is called dynamic stability or “homeostasis” and is a feature of many systems.  The centrifugal governor is a good example – it was one of the critical innovations that allowed the power of steam to be harnessed safely, and it was a foundation stone of the industrial revolution. The design is called a negative feedback stabiliser and it has a drawback – there may be little or no external sign of the effort required to maintain the stability.
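The behaviour of a negative feedback stabiliser can be sketched in a few lines of code. This is a minimal illustration, not a model of a real governor: the set point, disturbance and gain values are invented for the example, and the point is that the output stays close to the set point while the corrective effort quietly accumulates out of sight.

```python
# Minimal sketch of a negative feedback stabiliser (illustrative values only).
# A constant disturbance pushes the output away from the set point; a
# proportional correction pushes it back. The output barely moves, but the
# accumulated corrective effort keeps growing - invisible from the outside.
def stabilise(setpoint, disturbance, gain=0.5, steps=50):
    output = setpoint
    effort = 0.0  # running total of corrective work
    for _ in range(steps):
        output += disturbance                    # external pressure
        correction = gain * (setpoint - output)  # negative feedback
        output += correction
        effort += abs(correction)
    return output, effort

out, effort = stabilise(setpoint=100.0, disturbance=2.0)
print(f"output: {out:.1f}, hidden effort: {effort:.1f}")
```

With these numbers the output settles within a couple of per cent of the set point while the total effort climbs steadily – measuring only the output would give no hint of the work being done to hold it there.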

Problems arise when parts of this expectation-awareness-feedback-adjustment process are missing, do not work, or become disconnected. If there is an unclear expectation then it is impossible to know when and how to react. Not being clear what “healthy” means leads to confusion. It is too easy to create a distorted sense of normality by choosing a context where everyone is the same as you – “birds of a feather flock together”.

Another danger is to over-simplify the measure of health and to focus on one objective dimension – money – with the assumption that if the money is OK then the system must be OK.  This is an error of logic because although a healthy system implies healthy finances, the reverse is not the case – a business can be both making money and heading for disaster.

Failure can also happen if the most useful health metrics are not measured, are measured badly, or are not communicated in a meaningful way.  Very often metrics are not interpreted in context, not tracked over time, and not compared with the agreed expectation of health.  These multiple errors of omission lead to counterproductive behaviour such as the use of delusional ratios and arbitrary targets (DRATs), short-termism and “chasing the numbers” – all of which can further erode the underlying health of the system – like termites silently eating the foundations of your house. By the time you notice it is too late – the foundations have crumbled into dust!

To achieve and maintain systemic health it is necessary to include the homeostatic mechanisms at the design stage. Trying to add or impose the feedback functions afterwards is less effective and less efficient.  A healthy system is designed with sensitive feedback loops that indicate the effort required to maintain dynamic stability – and if that effort is increasing then that alone is cause for further investigation – often long before the output goes out of specification.  Healthy systems are economic and are designed to require a minimum of effort to maintain stability and sustain performance – good design feels effortless compared with poor design. A system that only detects and reacts to deviations in outputs is an inferior design – it is like driving by looking in the rear-view mirror!

Healthy systems were designed to be healthy from the start or have evolved from unhealthy ones – the books by Jim Collins describe this: “Built to Last” describes organisations that have endured because they were destined to be great from the start; “Good to Great” describes organisations that have evolved from unremarkable performers into great performers. There is a common theme to great companies irrespective of their genesis – data, information, knowledge, understanding and, most important of all, a wise leader.

The Rubik Cube Problem

Look what popped out of Santa’s sack!

I have not seen one of these for years and it brought back memories of hours of frustration and time wasted in attempting to solve it myself; a sense of failure when I could not; a feeling of envy for those who knew how to; and a sense of indignation when they jealously guarded the secret of their “magical” power.

The Rubik Cube got me thinking – what sort of problem is this?

At first it is easy enough, but it quickly becomes apparent that the puzzle gets more difficult the closer we get to the final solution – because our attempts to reach perfection undo our previous good work.  It is very difficult to maintain our initial improvement while exploring new options.

This insight struck me as very similar to many of the problems we face in life, and that sense of futility creates a powerful force that resists further attempts at change.  Fortunately, we know that it is possible to solve the Rubik Cube – so the question this raises is: “Is there a way to solve it in a rational, reliable and economical way from any starting point?”

One approach is to try every possible combination of moves until we find the solution. That is the way a computer might be programmed to solve it – the zero intelligence or brute force approach.

The problem here is that it works in theory but fails in practice because of the number of possible combinations of moves. At each step you can move one of the six faces in one of two directions – that is 12 possible options; and for each of these there are 12 second moves or 12 x 12 possible two-move paths; 12 x 12 x 12 = 1728 possible three-move paths; about 3 million six-move paths; and nearly half a billion eight-move paths!
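The arithmetic above is easy to check. This sketch just counts the paths – it is not a cube solver – and the move count of 12 per step follows the simple counting used in the text (6 faces × 2 directions, ignoring refinements such as excluding moves that undo the previous one).

```python
# Size of the brute-force search space: 6 faces x 2 directions = 12 options
# at every step, so an n-move path is one of 12**n possibilities.
MOVES_PER_STEP = 6 * 2

paths = {depth: MOVES_PER_STEP ** depth for depth in (2, 3, 6, 8)}
for depth, count in sorted(paths.items()):
    print(f"{depth}-move paths: {count:,}")
# 2-move paths: 144
# 3-move paths: 1,728
# 6-move paths: 2,985,984
# 8-move paths: 429,981,696
```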

You get the idea – solving it this way is not feasible unless you are already very close to the solution.

So how do we actually solve the Rubik Cube?  Well, the instructions that come with a new one tell you – a combination of two well-known ingredients: strategy and tactics. The strategy is called goal-directed, and in my instructions the recommended strategy is to solve each layer in sequence. The tactics are called heuristics: tried-tested-and-learned sequences of actions that are triggered by specific patterns.

At each step we look for a small set of patterns and when we find one we follow the pre-designed heuristic and that moves us forward along the path towards the next goal. Of the billions of possible heuristics we only learn, remember, use and teach the small number that preserve the progress we have already made – these are our magic spells.
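The pattern-then-heuristic loop can be sketched as a simple lookup table. The pattern names and move sequences below are invented for illustration – they are not real cube notation or genuine solving algorithms – but the structure is the point: recognise one of a small set of patterns, then replay the pre-learned sequence attached to it.

```python
# Toy goal-directed heuristics: a small set of recognised patterns, each
# mapped to a pre-learned move sequence that preserves progress already made.
# Pattern names and moves are made up purely for illustration.
HEURISTICS = {
    "missing_cross_edge": ["F", "U", "R"],
    "twisted_corner":     ["R", "D", "R'", "D'"],
    "layer_complete":     [],  # nothing to do - advance to the next goal
}

def next_moves(observed_pattern):
    """Return the learned move sequence for a recognised pattern, or None."""
    return HEURISTICS.get(observed_pattern)

moves = next_moves("twisted_corner")
```

The design choice worth noticing is that the solver never searches: it only ever pattern-matches against a deliberately small vocabulary, which is what makes the method fast, teachable and shareable.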

So where do these heuristics come from?

Well, we can search for them ourselves or we can learn them from someone else.  The first option holds the opportunity for new insights and possible breakthroughs – the second option is quicker!  Someone who designs or discovers a better heuristic is assured a place in history – most of us only ever learn ones that have been discovered and taught by others, because it is a much quicker way to solve problems.

So, for a bit of fun I compared the two approaches using a computer: the competitive-zero-intelligence-brute-force versus the collaborative-goal-directed-learned-and-shared-heuristics.  The heuristic method won easily every time!

The Rubik Cube is an example of a mechanical system: each of the twenty-six parts is interdependent; we cannot move one facet independently of the others – we can only move groups of nine at a time. Every action we make has nine consequences – not just one.  To solve the whole Rubik Cube system problem we must be mindful of the interdependencies and adopt methods that preserve what works while improving what does not.

The human body is a complex biological system. In medicine we have a phrase for this concept of preserving what works while improving what does not: “primum non nocere” which means “first of all do no harm”.  Doctors are masters of goal-directed heuristics; the medical model of diagnosis before prognosis before treatment is a goal-directed strategy and the common tactic is to quickly and accurately pattern-match from a small set of carefully selected data. 

In reality we all employ goal-directed-heuristics all of the time – it is the way our caveman brains have evolved.  Relative success comes from having a more useful set of heuristics – and these can be learned.  Just as with the Rubik Cube – it is quicker to learn what works from someone who can demonstrate that it works and can explain how it works – than to always laboriously work it out for ourselves.

An organisation is a bio-psycho-socio-economic system: a set of interdependent parts called people connected together by relationships and communication processes we call culture.  Improvement Science is a set of heuristics that have been discovered or designed to guide us safely and reliably towards any goal we choose to select – preserving what has been shown to work and challenging what does not.  Improvement Science does not define the path it only helps us avoid getting stuck, or going around in circles, or getting hopelessly lost while we are on the life-journey to our chosen goal.

And Improvement Science is learnable.