Resilience

The rise in the use of the term “resilience” seems to mirror the sense of an accelerating pace of change. So, what does it mean? And is the meaning evolving over time?

One sense of the meaning implies a physical ability to handle stresses and shocks without breaking or failing. Flexible, robust and strong are synonyms; and opposites are rigid, fragile, and weak.

So, digging a bit deeper we know that strong implies an ability to withstand extreme stress, while resilient implies an ability to withstand variable stress. And the opposite of resilient is brittle, not weak, because something can be both strong and brittle.

This is called passive resilience because it is an inherent property and cannot easily be changed. A ball is designed to be resilient – it will bounce back – and this is inherent in the material and the structure. The implication is that to improve passive resilience we would need to remove the existing material or structure and replace it with something better suited to the range of expected variation.

The concept of passive resilience applies to processes as well, and a common manifestation of a brittle process is one that has been designed using averages.

Processes imply flows. The flow into a process is called demand, while the flow out of the process is called activity. What goes in must come out, so if the demand exceeds the activity then a backlog will be growing inside the process. This growing queue creates a number of undesirable effects – first it takes up space, and second it increases the time for demand to be converted into activity. This conversion time is called the lead-time.

So, to avoid a growing queue and a growing wait, there must be sufficient flow-capacity at each and every step along the process. The obvious solution is to set the average flow-capacity equal to the average demand; and we do this because we know that more flow-capacity implies more cost – and to stay in business we must keep a lid on costs!

This sounds obvious and easy but does it actually work in practice?

The surprising answer is “No”. It doesn’t.

What happens in practice is that the measured average activity is always less than the funded flow-capacity, and so less than the demand. The backlogs will continue to grow; the lead-time will continue to grow; the waits will continue to grow; the internal congestion will continue to grow – until we run out of space. At that point everything can grind to a catastrophic halt. That is what we mean by a brittle process.

This fundamental and unexpected result can easily and quickly be demonstrated in a concrete way on a table top using ordinary dice and tokens. A credible game along these lines was described almost 40 years ago in The Goal by Eli Goldratt, originator of the school of improvement called Theory of Constraints. The emotional impact of gaining this insight can be profound and positive because it opens the door to a way forward which avoids the Flaw of Averages trap. There are countless success stories of using this understanding.
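The core of that table-top demonstration can be sketched in a few lines of Python. This is an illustrative sketch, not Goldratt's exact game: a die stands in for variable demand and another for variable flow-capacity, so both average exactly 3.5 – yet the backlog still drifts upward, because on the steps when capacity exceeds demand the surplus is wasted, while on the steps when demand exceeds capacity the shortfall joins the queue.

```python
import random

random.seed(1)

# Demand and flow-capacity are both die rolls, so both average 3.5.
# The Flaw of Averages says the queue should stay level. Watch what happens.
queue = 0
max_queue = 0
served_total = 0
steps = 20000
for _ in range(steps):
    demand = random.randint(1, 6)      # work arriving this step
    capacity = random.randint(1, 6)    # work we could do this step
    queue += demand
    served = min(queue, capacity)      # we can only serve what is waiting
    queue -= served
    served_total += served
    max_queue = max(max_queue, queue)

print("final queue:", queue)
print("worst queue:", max_queue)
print("average activity per step:", round(served_total / steps, 3))
```

The measured average activity comes out below the funded capacity of 3.5, and the queue wanders upward rather than hovering near zero – the brittle-process behaviour described above, produced by nothing more than unbalanced variation.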


So, when we need to cope with variation and we choose a passive resilience approach then we have to plan to the extremes of the range of variation. Sometimes that is not possible and we are forced to accept the likelihood of failure. Or we can consider a different approach.

Reactive resilience is an approach that living systems have evolved to use extensively, and it is illustrated by the simple reflex loop shown in the diagram.

A reactive system has three components linked together – a sensor (e.g. the temperature-sensitive nerve endings in the skin), a processor (e.g. the grey matter of the spinal cord) and an effector (e.g. the muscles, ligaments and bones). So, when a pre-defined limit of variation is reached (e.g. the flame) the protective reaction withdraws the finger before it becomes damaged. The advantage of this type of reactive resilience is that it is relatively simple and relatively fast. The disadvantage is that it does not address the cause of the problem.

This type of resilience is called reactive, automatic and agnostic.
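Such a reflex loop can be sketched in a handful of lines. The threshold value, units and function names here are hypothetical – the point is the fixed, agnostic wiring: sensor feeds processor feeds effector, and the loop reacts to the symptom without ever addressing its cause.

```python
# A minimal reactive loop: sensor -> processor -> effector.
# The threshold is fixed ("agnostic"): the loop reacts when the
# pre-defined limit of variation is crossed, and does nothing else.

PAIN_THRESHOLD = 45.0  # pre-defined limit (hypothetical units, degrees C)

def sensor(temperature_c: float) -> float:
    """Temperature-sensitive nerve ending: just reports what it feels."""
    return temperature_c

def processor(reading: float) -> bool:
    """Spinal-cord grey matter: compare the reading with the fixed limit."""
    return reading > PAIN_THRESHOLD

def effector(withdraw: bool, position: str) -> str:
    """Muscle: move the finger only when told to."""
    return "withdrawn" if withdraw else position

position = "near the flame"
for temperature in [20.0, 30.0, 44.0, 50.0]:   # the flame gets closer
    position = effector(processor(sensor(temperature)), position)
    print(f"{temperature:5.1f} C -> finger {position}")
```

Nothing happens until the limit is actually breached – which is exactly the weakness the next section addresses.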

The automatic self-regulating systems that we see in biology, and that we have emulated in our machines, are evidence of the effectiveness of a combination of passive and reactive resilience. It is good enough for most scenarios – so long as the context remains stable. The problem comes when the context is evolving, and in that case the automatic/reflex/blind/agnostic approach will fail – at some point.


Survival in an evolving context requires more – it requires proactive resilience.

What that means is that the processor component of the feedback loop gains an extra feature – a memory. The advantage this brings is that past experience can be recalled, reflected upon, and used to guide future expectation and future behaviour. We can listen and learn and become proactive. We can look ahead and keep up with our evolving context. One might call this reactive adaptation, or co-evolution, and it is a widely observed phenomenon in nature.
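A hedged sketch makes the difference concrete: give the processor a memory of recent readings, project the trend one step ahead, and act on the forecast rather than the raw reading. All names and numbers here are hypothetical illustrations.

```python
# The processor now has a memory: it remembers recent readings,
# projects the trend forward, and withdraws *before* the limit is hit.

PAIN_THRESHOLD = 45.0  # same hypothetical limit as a plain reflex would use

history = []           # the new ingredient: a memory of past readings
position = "near the flame"
for temperature in [20.0, 28.0, 36.0, 44.0]:
    history.append(temperature)
    if len(history) >= 2:
        trend = history[-1] - history[-2]   # learn from the past
        predicted = history[-1] + trend     # look one step ahead
    else:
        predicted = history[-1]
    if predicted > PAIN_THRESHOLD:          # act on the forecast
        position = "withdrawn"
    print(f"now {temperature:4.1f} C, predicted {predicted:4.1f} C -> {position}")
```

The finger withdraws while the actual temperature is still below the limit – the loop has anticipated the damage instead of merely reacting to it.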

The usual manifestation of this is called competition.

Those who can reactively adapt faster and more effectively than others have a better chance of not failing – i.e. a better chance of survival. The traditional term for this is survival of the fittest but the trendier term for proactive resilience is agile.

And that is what successful organisations are learning to do. They are adding a layer of proactive resilience on top of their reactive resilience and their passive resilience.

All three layers of resilience are required to survive in an evolving context.

One manifestation of this is the concept of design, which is where we create things with the required resilience before they are needed. This is illustrated by the design squiggle, which has time running left to right and shows the design evolving adaptively until there is sufficient clarity to implement and possibly automate.

And one interesting thing about design is that it can be done without an understanding of how something works – just knowing what works is enough. The elegant and durable medieval cathedrals were designed and built by Master builders who had no formal education. They learned the heuristics as apprentices and through experience.


And if we project the word game forwards we might anticipate a form of resilience called proactive adaptation. However, we sense that this is a novel thing, because there is no word “proadaptive” in the dictionary.

PS. We might also use the term Anti-Fragile, which is the name of a thought-provoking book that explores this very topic.

Systemory

How do we remember the vast amount of information that we seem to be capable of?

Our brains are composed of billions of cells, most of which are actually inactive and just there to support the active brain cells – the neurons.

Suppose that the active brain cell part is 50% and our brain has a volume of about 1.2 litres, or 1,200 cu.cm, or 1,200,000 cu.mm. We know from looking down a microscope that each neuron is about 20/1,000 mm x 20/1,000 mm x 20/1,000 mm, which gives a volume of 8/1,000,000 cu.mm – or 125,000 neurons for every cu.mm. The population of a medium-sized town in a grain of salt! This is a concept we can just about grasp. And with these two facts we estimate that there are in the order of 75,000,000,000 neurons in a human brain – 75 billion – about ten times the population of the whole World. Wow!

But even that huge number is less than the size of the memory on the hard disc of the computer I am writing this blog on – which has 200 gigabytes, which is 1,600 gigabits, which is 1,600 billion bits. About twenty times as many memory cells as there are neurons in a human brain.
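A quick back-of-envelope check of these figures, using exactly the stated assumptions (half the volume active, 20-micrometre cubic neurons), takes only a few lines:

```python
# Sanity-check the estimates above from the stated assumptions.
brain_volume_mm3 = 1.2e6          # 1.2 litres = 1,200,000 cu.mm
active_fraction = 0.5             # assume half the volume is neurons
neuron_side_mm = 20 / 1000        # 20 micrometres
neuron_volume_mm3 = neuron_side_mm ** 3      # 8/1,000,000 cu.mm
neurons_per_mm3 = 1 / neuron_volume_mm3      # 125,000 per cu.mm

neurons = brain_volume_mm3 * active_fraction * neurons_per_mm3
print(f"{neurons:.0f} neurons")              # on these assumptions, ~75 billion

disk_bits = 200e9 * 8                        # 200 gigabytes -> 1,600 gigabits
print(f"disk bits per neuron: {disk_bits / neurons:.0f}")
```

On these assumptions the product comes out at around 75 billion neurons, and the 1,600 billion disk bits outnumber them by a factor of roughly twenty.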

But our brains are not just for storing data – they do all the data processing too – an integrated processor-and-memory design completely unlike the separate processor-and-memory design of a digital computer. Each of our brains is remarkable in its capability, adaptability, and agility – its ability to cope with change – its ability to learn and to change its behaviour while still working. So how does our biological memory work?

Well, not like a digital computer, where the zeros and ones – the binary digits (bits) – are stored in a regular structure of memory cells: a static structural memory – a data prison. Our biological memory works in a completely different way – it is a temporal memory – it is time dependent. Our memories are not “recalled” like getting a book out of an indexed slot on a numbered shelf in a massive library; our memories are replayed like a recording, or rebuilt from a recipe. Time is the critical factor, and this concept of temporal memory is a feature of all systems.

And that is not all – the temporal memory is not a library of video tapes – it is the simultaneous collective action of many parts of the system that creates the illusion of the temporal memory – we have a parallel-distributed-temporal-memory. More like a video hologram. And it means we cannot point to the “memory” part of our brains – it is distributed throughout the system – which means that the connections between the parts are as critical a part of the design as the parts themselves. It is a tricky concept to grasp, and none of the billions of digital computers that co-inhabit this planet operate this way. They are feeble and fragile in comparison. An inferior design.

The terms distributed-temporal memory or systemic-memory are a bit cumbersome though, so we need a new label – let us call it a systemory. The properties of a systemory are remarkable – for example, it still works when a bit of the systemory is removed. When a bit of your brain is removed you don’t “forget” a bit of your name or lose the left ear on the mental picture of your friend’s face – as would happen with a computer. A systemory is resilient to damage, which is a necessary design-for-survival. It also implies that we can build our systemory with imperfect parts and incomplete connections. In a digital computer this would not work: the localised-static or silo-memory has to be perfect, because if a single bit gets flipped or a single wire gets fractured it can render the whole computer inoperative.
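One classical way to illustrate a distributed, damage-tolerant memory in code is a Hopfield-style associative network. This is a toy, not a model of how the brain actually works, but it demonstrates the property described above: the stored pattern lives in the connections, not in any single cell, so we can delete a large fraction of the connections and still replay the memory from a corrupted cue.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 64
pattern = rng.choice([-1, 1], size=n)     # one stored pattern of +/-1 bits

# Hebbian outer-product weights: the "memory" is spread over all n*n links
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Damage: destroy 30% of the connections at random
W[rng.random(W.shape) < 0.3] = 0.0

# Cue the memory with a corrupted version: flip 10 of the 64 bits
cue = pattern.copy()
cue[rng.choice(n, size=10, replace=False)] *= -1

# Replay: repeatedly update every unit from the signals it receives
state = cue.copy()
for _ in range(10):
    new = np.where(W @ state >= 0, 1, -1)
    if np.array_equal(new, state):
        break
    state = new

print(int((state == pattern).sum()), "of", n, "bits recovered")
```

Despite losing nearly a third of its "wiring" and being given a damaged cue, the network reconstructs essentially the whole pattern – whereas flipping a single bit in a conventional memory address corrupts it permanently.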

Another design-for-survival property of a systemory is that it still works even while it is being changed – it is continuously adaptable and updateable. Not so a computer – to change the operating system the computer has to be stopped, the old program overwritten by the new one, and the new one started. In fact, computers are designed to prevent programs modifying themselves – because it is a sure recipe for a critical system failure – the dreaded blue screen!

So if we map our systemory concept across from person to population and we replace neurons with people then we get an inkling of how a society can have a collective memory, a collective intelligence, a collective consciousness even – a social systemory. We might call that property the culture.  We can also see that the relationships that link the people are as critical as the people themselves and that both can be imperfect yet we get stable and reliable behaviour. We can also see that influencing the relationships between people has as much effect on the system behaviour as how the people themselves perform – because the properties of the systemory are emergent. Culture is an output not an input.

So in the World – the development of global communication systems means that all 7 billion people in the global social systemory can, in principle, connect to each other and can collectively learn and change faster and faster as the technology to connect more widely and more quickly develops. The rate of culture change is no longer governed by physical constraints, such as geographic location, or temporal constraints, such as how long a letter takes to be delivered.

Perhaps the most challenging implication is that a systemory does not have a “point of control” – there is no librarian who acts as a gatekeeper to the data bank, no guard on the data prison. The concept of “control” in a systemory is different – it is global not local – and it is influence not control. The rapid development of mobile communication technology and social networking gives ample evidence – we would now rather communicate with a familiar on the other side of the world than with a stranger standing next to us in the lunch queue. We have become tweeting and texting daemons. Our emotional relationships are more important than our geographical ones. And if enough people can connect to each other they can act in a collective, coordinated, adaptive and agile way that no command-and-control system can either command or control. The recent events in the Middle East are ample evidence of the emergent effectiveness of a social systemory.

Our insight exposes a weakness of a social systemory – it is possible to adversely affect the whole by introducing a behavioural toxin that acts at the social connection level – on the relationships between people. The behavioural toxin needs only to have a weak and apparently harmless effect, but when disseminated globally the cumulative effect creates cultural dysfunction. It is rather like the effect of alcohol and other recreational chemical substances on the brain – it causes a temporary systemory dysfunction – but one that, in an over-stressed psychological system, paradoxically results in pleasure, or rather stress release. Hence the self-reinforcing nature of the addiction.

Effective leaders are intuitively aware that their behaviour alone can be a tonic or a toxin for the whole system: organisations are in the same emotional boat as their leader.

Effective leaders use their behaviour to steer the systemory of the organisation along a path of improvement and their behaviour is the output of their personal systemory.

Leaders have to be the change that they want their organisations to achieve.

Doing Our Way to New Thinking

Most of our thinking happens out of awareness – it is unconscious. Most of the data that pours in through our senses never reaches awareness either – but that does not mean it does not have an impact on what we remember, how we feel and what we decide and do in the future. It does.

Improvement Science is the knowledge of how to achieve sustained change for the better; and doing that requires an ability to unlearn unconscious knowledge that blocks our path to improvement – and to unlearn selectively.

So how can we do that if it is unconscious? Well, there are at least two ways:

1. Bring the unconscious knowledge to the surface so it can be examined, sorted, kept or discarded. This is done through the social process of debate and discussion. It does work though it can be a slow and difficult process.

2. Do the unlearning at the unconscious level – and we can do that by using reality rather than rhetoric. The easiest way to connect ourselves to reality is to go out there and try doing things.

When we deliberately do things we are learning unconsciously, because most of our sensory data never reaches awareness. When we are just thinking, the unconscious is relatively unaffected: talking and thinking are the same conscious process. Discussion and dialogue operate at the conscious level but differ in style – discussion is more competitive; dialogue is more collaborative.

The door to the unconscious is controlled by emotions – and it appears that learning happens more effectively and more efficiently in certain emotional states. Some emotional states, such as depression, frustration and anxiety, can impair learning. Strong emotional states associated with dramatic experiences can result in profound but unselective learning – the emotionally vivid memories that are often associated with unpleasant events. Sometimes the conscious memory is so emotionally charged and unpleasant that it is suppressed – but the unconscious memory is not so easily erased – so it continues to influence, but out of awareness. The same is true for pleasant emotional experiences – they can create profound learning experiences – and the conscious memory may be called an inspirational or “eureka” moment – a sudden emotional shift for the better. And it too is unselective and difficult to erase.

An emotionally safe environment for doing new things and having fun at the same time comes close to the ideal context for learning. In such an environment we learn without effort. It does not feel like work – yet we know we have done work because we feel tired afterwards. And if we were to record the way that we behave and talk before the doing, and again afterwards, then we will measure a change even though we may not notice the change ourselves. Other people may notice before we do – particularly if the change is significant – or if they only interact with us occasionally.

It is for this reason that keeping a personal journal is an effective way to capture the change in ourselves over time.  

The Jungian model of personality types states that there are three dimensions to personality (Isabel Briggs Myers added a fourth later to create the MBTI®).

One dimension describes where we prefer to go for input data – sensors (S) use external reality as their reference – intuitors (N) use their internal rhetoric.

Another dimension is how we make decisions –  thinkers (T) prefer a conscious, logical, rational, sequential decision process while feelers (F) favour an unconscious, emotional, “irrational”, parallel approach.

The third dimension is where we direct the output of our decisions – extraverts (E) direct it outwards into the public outside world while introverts (I) direct it inwards to their private inner world.

Irrespective of our individual preferences, experience suggests that an effective learning sequence starts with our experience of reality (S) and depending how emotionally loaded it is (F) we may then internalise the message as a general intuitive concept (N) or a specific logical construct (T).

The implication of this is that to learn effectively and efficiently we need to be able to access all four modes of thinking, and to do that we might design our teaching methods to resonate with this natural learning sequence, focussing on creating surprisingly positive, reality-based emotional experiences first. And we must be mindful that if we skip steps or create too many emotionally negative experiences we may unintentionally impair the effectiveness of the learning process.

A carefully designed practical exercise that takes just a few minutes to complete can be a much more effective and efficient way to teach a profound principle than to read libraries of books or to listen to hours of rhetoric.  Indeed some of the most dramatic shifts in our understanding of the Universe have been facilitated by easily repeatable experiments.

Intuition and emotions can trick us – so Doing Our Way to New Thinking may be a better improvement strategy.

Focus-on-the-Flow

One of the foundations of Improvement Science is visualisation – presenting data in a visual format that we find easy to assimilate quickly – as pictures.

We derive deeper understanding from observing how things are changing over time – that is the reality of our everyday experience.

And we gain even deeper understanding of how the world behaves by acting on it and observing the effect of our actions. This is how we all learned-by-doing from day-one. Most of what we know about people, processes and systems we learned long before we went to school.


When I was at school the educational diet was dominated by rote learning of historical facts and tried-and-tested recipes for solving tame problems. It was all OK – but it did not teach me anything about how to improve – that was left to me.

More significantly it taught me more about how not to improve – it taught me that the delivered dogma was not to be questioned. Questions that challenged my older-and-better teachers’ understanding of the world were definitely not welcome.

Young children ask “why?” a lot – but as we get older we stop asking that question – not because we have had our questions answered but because we get the unhelpful answer “just because.”

When we stop asking ourselves “why?” then we stop learning, we close the door to improvement of our understanding, and we close the door to new wisdom.


So to open the door again let us leverage our inborn ability to gain understanding from interacting with the world and observing the effect using moving pictures.

Unfortunately our biology limits us to our immediate space-and-time, so to broaden our scope we need to have a way of projecting a bigger space-scale and longer time-scale into the constraints imposed by the caveman wetware between our ears.

Something like a video game that is realistic enough to teach us something about the real world.

If we want to understand better how a health care system behaves so that we can make wiser decisions of what to do (and what not to do) to improve it then a real-time, interactive, healthcare system video game might be a useful tool.

So, with this design specification I have created one.

The goal of the game is to defeat the enemy – and the enemy is intangible – it is the dark cloak of ignorance – literally “not knowing”.

Not knowing how to improve; not knowing how to ask the “why?” question in a respectful way.  A way that consolidates what we understand and challenges what we do not.

And there is an example of the Health Care System Flow Game being played here.

Reality trumps Rhetoric

One of the biggest challenges posed by Improvement is the requirement for beliefs to change – because static beliefs imply stagnated learning and arrested change.  We all display our beliefs for all to hear and see through our language – word and deed – our spoken language and our body language – and what we do not say and do not do is as important as what we do say and what we do do.  Let us call the whole language thing our Rhetoric – the external manifestation of our internal mental model.

Disappointingly, exercising our mental model does not seem to have much impact on Reality – at least not directly. We do not seem to be able to perform acts of telepathy or telekinesis. We are not like the Jedi knights in the Star Wars films who have learned to master the Force – for good or bad. We are not like the wizards in the Harry Potter stories who have mastered magical powers – again for good or bad. We are weak-minded muggles and Reality is spectacularly indifferent to our feeble powers. No matter what we might prefer to believe – Reality trumps Rhetoric.

Of course we can side step this uncomfortable feeling by resorting to the belief of One Truth which is often another way of saying My Opinion – and we then assume that if everyone else changed their belief to our belief then we would have full alignment, no conflict, and improvement would automatically flow.  What we actually achieve is a common Rhetoric about which Reality is still completely indifferent.  We know that if we disagree then one of us must be wrong or rather un-real-istic; but we forget that even if we agree then we can still both be wrong. Agreement is not a good test of the validity of our Rhetoric. The only test of validity is Reality itself – and facing the unfeeling Reality risks bruising our rather fragile egos – so we shy away from doing so.

So one way to facilitate improvement is to employ Reality as our final arbiter, and to do this respectfully. This is why teachers of improvement science must be masters of improvement science. They must be able to demonstrate their Improvement Science Rhetoric by using Reality, and their apprentices need to see the IS Rhetoric applied to solving real problems. One way to do this is for the apprentices to do it themselves, for real, with the guidance of an IS master and in a safe context where they can make errors and not damage their egos. When this is done, what happens is almost magical – the Rhetoric changes – the spoken language and the body language change – what is said and what is done changes – and what is not said and not done changes too. And very often the change is not noticed, at least by those who change. We only appear to have one mental model – only one view of Reality – so when it changes, we change.

It is also interesting to observe that this evolution of Rhetoric does not happen immediately or in one blinding flash of complete insight. We take small steps rather than giant leaps. More often the initial emotional reaction is confusion, because our experience of Reality clashes with the expectation of our Rhetoric. And very often the changes happen when we are asleep – it is almost as if our minds work on dissolving the confusion when they are not distracted by the demands of awake-work; almost as if we are re-organising our mental model structure when it is offline. It is very common to have a sleepless night after such a Reality Check and to wake with a feeling of greater clarity – our updated mental model declaring itself as our New Rhetoric. Experienced facilitators of Improvement Science understand this natural learning process and know that it happens to everyone – including themselves. It is this feeling of increased clarity, deeper understanding and released energy that is the buzz of Improvement Science – the addictive drug. We learn that our memory plays tricks on us, and what was conflict yesterday becomes confusion today and clarity tomorrow. One behaviour that often emerges spontaneously is the desire to keep a journal – sometimes at the bedside – to capture the twists and turns of the story of our evolving Rhetoric.

This blog is just such a journal.

The Ten Billion Barrier

I love history – not the dry, boring history of learning lists of dates – the inspiring history of how leaps in understanding happen after decades of apparently fruitless search. One of the patterns that stands out for me in recent history is how the growth of the human population has mirrored the changes in our understanding of the Universe. This pattern struck me as curious – given that this has happened only in the last 10,000 years – and it cannot be genetic evolution because the timescale is too short. So what has fuelled this population growth? On further investigation I discovered that the population growth is exponential rather than linear – and very recent – within the last 1,000 years. Exponential growth is a characteristic feature of a system that has a positive feedback loop that is not balanced by an equal and opposite negative feedback loop. So, what is being fed back into the system that is creating this unbalanced behaviour? My conclusion so far is “collective improvement in understanding”.

However, exponential growth has a dark side – it is not sustainable. At some point a negative feedback loop will exert itself – and there are two extremes to how fast this can happen: gradual or sudden. Sudden negative feedback is a shock, and it is the one to avoid, because it is usually followed by a dramatic reversal of growth which, if catastrophic enough, is fatal to the system. When it is less sudden and less severe it can lead into repeating cycles of growth and decline – boom and bust – which is just a more painful path to the same end.

This somewhat disquieting conclusion led me to conduct the thought experiment that is illustrated by the diagram: if our growth is fuelled by our ability to learn, to use and to maintain our collective knowledge, what changes in how we do this must have happened over the last 1,000 years?

Biologically we are social animals, and using our genetic inheritance we seem able to maintain only about 100 active relationships – which explains the natural size of family groups, where face-to-face communication is paramount. To support a stable group larger than 100 we must have developed learned behaviours and social structures. History tells us that we created communities by differentiating into specialised functions, and to be stable these were cooperative rather than competitive – and the natural multiplier seems to be about 100. A community of more than 10,000 people is difficult to sustain with an ad hoc power structure and a powerful leader, so we developed collective “rules” and a more democratic design – which fuelled another 100-fold expansion to 1 million – the order of magnitude of a city. Multiply by 100 again and we get 100 million – the size that is typical of a large country – and the social structures required to achieve stability on this scale are different again: we needed to develop a way of actively seeking new knowledge, continuously re-writing the rule books, and industrialising our knowledge. This has only happened over the last 300 years. The next multiplier takes us to ten billion – the order of magnitude of the current global population – and it is at this stage that our current systems seem to be struggling again.
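The progression described above is a simple geometric one, which a few lines make concrete. The layer labels are the approximate ones used in the argument, not precise demographic categories:

```python
# Each stable social structure seems to support roughly a
# 100-fold multiplication of the one below it.
layer = 1
labels = ["individual", "family group", "community", "city",
          "country", "global population"]
for label in labels:
    print(f"{layer:>14,}  {label}")
    layer *= 100
```

Five multiplications by 100 take us from one individual to ten billion people – which is why the current global population sits right at the edge of the last layer for which we have evolved (or designed) a stabilising social structure.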

From this geometric perspective we appear to be approaching a natural human system barrier that our current knowledge management methods seem inadequate to dismantle – and if we press on in denial then we face the prospect of a sudden and catastrophic change – for the worse. Regression to a bygone age would have the same effect, because those systems are not designed to support the global economy.

So, what would have to change in the way we manage our collective knowledge that would avoid a Big Crunch and would steer us to a stable and sustainable future?

When Is Seeing Believing?

One of the problems with our caveman brains is that they are a bit slow. It may not feel that way, but they are – and if you don’t believe me, try this experiment: stand up; hold a book in your left hand, open at any page; hold a coin in your right hand between finger and thumb so that it will land on the floor when you drop it. Then close your eyes and count to three. Open your eyes, drop the coin, and immediately start reading the book. How long is it before you are consciously aware of the meaning of the words? My guess is that the coin hits the floor at about the same time that you start making sense of what is on the page. That means it takes about half a second to start perceiving what you are seeing.

That long delay is a problem, because the world around us is often changing much faster than that and, to survive, we need to keep up. So what we do is fill in the gaps – what we perceive is a combination of what we actually see and what we expect to see – and the process is seamless, automatic and unconscious. That is OK so long as expectation and reality stay in tune – but what happens when they don’t? We experience the “Eh?” effect, which signals that we are temporarily confused – an uncomfortable and scary feeling which resolves when we re-align our perception with reality.

Over time we all learn to avoid that uncomfortable confusion with a simple mind trick – we just filter out the things we see that do not fit our expectation. Psychologists call this “perceptual distortion”, and the effect is even greater when we look with our mind’s-eye rather than our real eyes – then we perceive only what we expect to see and we avoid the uncomfortable “Eh?” effect completely. This unconscious behaviour we all demonstrate is called self-delusion, and it is a powerful barrier to improvement – because to improve we have to first accept that what we have is not good enough and that reality does not match our expectation.

To become a master of improvement it is necessary to learn to be comfortable with the “eh?” feeling – to disconnect it from the negative emotion of fear that drives the denial reaction and self-justifying behaviour – and instead to reconnect it to the positive emotion of excitement that drives the curiosity action and exploratory behaviour.

One easy way to generate the “eh?” effect is to perform reality checks – to consciously compare what we actually see with what we expect to see. That is not easy, because our perception is very slippery – we are all very, very good at perceptual distortion. A way around this is to present ourselves with a picture of reality over time: using the past as a baseline and our understanding of the system, we can predict what we believe will happen in the near future. We then compare what actually happens with our expectation. Any significant deviations are “eh?” effects that we can use to focus our curiosity – for there hide the nuggets of new knowledge.

But how do we know what is a “significant” deviation? To answer that we must avoid using our slippery, self-delusional perception system – we need a tool that is designed to do this interpretation safely, easily and quickly. Click here for an example of such a tool.
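One well-known tool of this kind is the XmR (individuals) process behaviour chart, which uses the past as its baseline: the natural process limits are computed as the mean plus or minus 2.66 times the average point-to-point moving range, and only points outside those limits count as signals. A minimal sketch, with made-up data for illustration:

```python
# A minimal XmR (individuals) chart: separating signal from noise in a
# time series without relying on our slippery perception.
# The data values here are invented for illustration.

data = [52, 48, 50, 53, 47, 49, 51, 50, 48, 52, 49, 51, 63, 50, 49]

mean = sum(data) / len(data)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Natural process limits: mean +/- 2.66 * average moving range
upper = mean + 2.66 * avg_mr
lower = mean - 2.66 * avg_mr

# Points outside the limits are significant deviations - "eh?" effects
signals = [(i, x) for i, x in enumerate(data) if x > upper or x < lower]
print(f"limits: {lower:.1f} to {upper:.1f}")
print("signals:", signals)   # -> the single outlying value, 63
```

The chart flags the one genuinely unusual point and, just as importantly, tells us to leave the routine wobble alone – which is exactly the interpretation our unaided eyes cannot be trusted to make.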

Can an Old Dog learn New Tricks?

I learned a new trick this week and I am very pleased with myself, for two reasons. Firstly because I had the fortune to have been recommended this trick; and secondly because I had the foresight to persevere when the first attempt didn’t work very well. The trick I learned was using a webinar to provide interactive training.

“Oh, that’s old hat!” I hear some of you saying. Yes, teleconferencing and webinars have been around for a while – and when I tried it a few years ago I was disappointed, and that early experience probably raised my unconscious resistance. The world has moved on – and I hadn’t. High-speed wireless broadband is now widely available and the webinar software is much improved. It was a breeze to set up (though getting one’s microphone and speakers to work seems a perennial problem!).

The training I was offering was for the BaseLine process behaviour chart software – and by being able to share the dynamic image of the application on my computer with all the invitees, I was able to talk through what I was doing, how I was doing it and the reasons why I was doing it. The immediate feedback from the invitees allowed me to pace the demonstration, repeat aspects that were unclear, answer novel queries and demonstrate features that I had not intended to in my script.

The tried-and-tested see-do-teach method has been reborn in the Information Age, and this old dog is definitely wagging his tail and looking forward to his walk in the park (and maybe a tasty treat, huh?)