Structure Time to Fuel Improvement

The expected response to any suggestion of change is “Yes, but I am too busy – I do not have time.”

And the respondent is correct. They do not.

All their time is used just keeping their head above water or spinning the hamster wheel or whatever other metaphor they feel is appropriate.  We are at an impasse. A stalemate. We know change requires some investment of time and there is no spare time to invest so change cannot happen. Yes?  But that is not good enough – is it?

Well-intended experts proclaim that “I’m too busy” actually means “I have other things to do that are higher priority” – and by “higher priority” they mean “a greater threat to my security and to what I care about”. So to get our engagement our well-intended expert pours emotional petrol on us and sets light to it. They show us dramatic video evidence of how our “can’t do” attitude and behaviour is part of the problem. We are the recalcitrant child who is standing in the way of change and we need to have our face rubbed in our own cynical poo.

Now our platform is really burning. Inflamed is exactly what we are feeling – angry in fact. “Thanks-a-lot. Now #!*@ off!”   And our well-intentioned expert retreats – it is always the same. The Dinosaurs and the Dead Wood are clogging the way ahead.

Perhaps a different perspective might be more constructive.


It is not just how much time we have that is most important – it is how our time is structured.


Humans hate unstructured time. We like to be mentally active for all of our waking moments. 

To test this hypothesis try this demonstration of our human need to fill idle time with activity. When you next talk to someone you know well – at some point after they have finished telling you something just say nothing;  keep looking at them; and keep listening – and say nothing. For up to twenty seconds if necessary. Both you and they will feel an overwhelming urge to say something, anything – to fill the silence. It is called the “pregnant pause effect” and most people find even a gap of a second or two feels uncomfortable. Ten seconds would be almost unbearable. Hold your nerve and stay quiet. They will fill the gap.

This technique is used by cognitive behavioural therapists, counsellors and coaches to help us reveal stuff about ourselves to ourselves – and it works incredibly well. It is also used for less altruistic purposes by some – so when you feel the pain of the pregnant pause just be aware of what might be going on and counter with a question.


If we have no imposed structure for our time then we will create one – because we feel better for it. We have a name for these time-structuring behaviours: habits, pastimes and rituals. And they are very important to us because they reduce anxiety.

There is another name for a premeditated time-structure: it is called a plan or a process design. Many people hate not having a plan – and to them any plan is better than none. So in the absence of an imposed alternative we habitually make do with time-wasting plans and poorly designed processes. We feel busy because that is the purpose of our time-structuring behaviour – and we look busy too – which is also important. This has an important lesson for all improvement scientists: using a measure of “busy-ness” such as utilisation as a measure of efficiency and productivity is almost meaningless. Utilisation does not distinguish between useful busi-ness and useless busi-ness.

We also time-structure our non-working lives. Reading a newspaper, doing the crossword, listening to the radio,  watching television, and web-browsing are all time-structuring behaviours.


This insight into our need for structured time leads to a rational way to release time for change and improvement – and that is to better structure some of our busy time.

A useful metaphor for a time-structure is a tangible structure – such as a building. Buildings have two parts – a supporting, load-bearing structural framework and the functional fittings that are attached to it. Often the structural framework is invisible in the final building – invisible but essential. That is why we need structural engineers. The same is true for time-structuring: the supporting form should be there but it should not get in the way of the intended function. That is why we need process design engineers too. Good process design is invisible time-structuring.


One essential investment of time in all organisations is communication. Face-to-face talking, phone calls, SMS, emails, reports, meetings, presentations, webex and so on. We spend more time communicating with each other than doing anything else other than sleeping.  And more niggles are generated by poorly designed and delivered communication processes than everything else combined. By a long way.


As an example let us consider management meetings.

From a process design perspective many management meetings are both ineffective and inefficient. They are unproductive. So why do we still have them?

One possible answer is because meetings have two other important purposes: first as a tool for social interaction, and second as a way to structure time. It turns out that we dislike loneliness even more than idleness – and we can meet both needs at the same time by having a meeting. Productivity is not the primary purpose.


So when we do have to communicate effectively and efficiently in order to collectively resolve a real and urgent problem then we are ill prepared. And we know this. We know that as soon as Crisis Management Committees start to form then we are in really big trouble. What we want in a time of crisis is for someone to structure time for us. To tell us what to do.

And some believe that we unconsciously create crisis after crisis for just that purpose.


Recently I have been running an improvement experiment.  I have  been testing the assumption that we have to meet face-to-face to be effective. This has big implications for efficiency because I work in a multi-site organisation and to attend a meeting on another site implies travelling there and back. That travel takes one hour in each direction when all the separate parts are added together. It has two other costs. The financial cost of the fuel – which is a variable cost – if I do not travel then I do not incur the cost. And there is an emotional cost – I have to concentrate on driving and will use up some of my brain-fuel in doing so. There are three currencies – emotional, temporal and financial.

The experiment was a design change. I changed the design of the communication process from at-the-same-place-and-time to just at-the-same-time. I used an internet-based computer-to-computer link (rather like Skype or FaceTime but with some other useful tools like application sharing).

It worked much better than I expected.

There was the anticipated “we cannot do this because we do not have webcams and no budget for even pencils”. This was solved by buying webcams from the money saved by not burning petrol. The conversion rate was one webcam per four trips – and the webcam is a one-off capital cost, not a recurring revenue cost. This is accountant-speak for “the actual cash released will fund the change”. No extra budget is required. Combine the fuel savings for everyone with the parking charges avoided and the payback time is even shorter.
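The payback arithmetic can be sketched in a couple of lines. The prices below are illustrative assumptions – the text only states the resulting conversion rate of one webcam per four trips – chosen so the numbers reproduce that rate:

```python
# Back-of-envelope payback sketch. The prices are assumptions for
# illustration only; the text gives just the resulting ratio.
webcam_price = 20.00              # one-off capital cost per webcam (assumed)
fuel_cost_per_round_trip = 5.00   # variable cost avoided per virtual meeting (assumed)

# How many avoided trips fund one webcam?
trips_to_fund_webcam = webcam_price / fuel_cost_per_round_trip
print(trips_to_fund_webcam)  # 4.0 – matches "one webcam per four trips"
```

With real local figures substituted in, the same two lines show whether the released cash covers the capital cost, and how quickly.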

There were also the anticipated glitches as people got used to the unfamiliar technology (they did not practise beforehand, of course, because they were too busy) but the niggles went away within a few iterations.

So what were the other benefits?

Well one was the travel time saved – two hours per meeting – which was longer than the meeting! The released time cannot be stored and used later like the money can – it has to be reinvested immediately. I reinvested it in other improvement work. So the benefit was amplified.

Another was the brain-fuel saved from not having to drive – which I used to offset my cumulative brain-fuel deficit called chronic fatigue. What was left over was re-invested in the improvement work. 100% recycled. Nothing was wasted.


The unexpected benefit was the biggest one.

The different communication design of a virtual meeting required a different form of meeting structure and discipline. It took a few iterations to realise this – then click – both effectiveness and efficiency jumped up. The time became even better structured, more productive and released even more time to reinvest. Wow!

And the whole thing funded itself.

The Frightening Cost Of Fear

The recurring theme this week has been safety and risk.

Specifically in a healthcare context. Most people are not aware just how risky our current healthcare systems are. Those who work in healthcare are much more aware of the dangers but they seem powerless to do much to make their systems safer for patients.


The shroud-waving zealots who rant on about safety often use a very unhelpful quotation. They say “Every system is perfectly designed to deliver the performance it does”. The implication is that when the evidence shows that our healthcare systems are dangerous … then … we designed them to be dangerous. The reaction from the audience is emotional and predictable: “We did not intend this so do not try to pin the blame on us!” The well-intentioned shroud-waving safety zealot loses whatever credibility they had and the collective swamp of cynicism and despair gets a bit deeper.


The warning-word here is design – because it has many meanings.  The design of a system can mean “what the system is” in the sense of a blueprint. The design of a system can also mean “how the blueprint was created”.  This process sense is the trap – because it implies intention.  Design needs a purpose – the intended outcome – so to say an unsafe system has been designed is to imply that it was intended to be unsafe. This is incorrect.

The message in the emotional backlash that our well-intended zealot provoked is “You said we intended bad things to happen which is not correct so if you are wrong on that fundamental belief then how can I trust anything else you say?“. This is the reason zealots lose credibility and actually make improvement less likely to happen.


The reality is not that the system was designed to be unsafe – it is that it was not designed not to be. The double negatives are intentional. The two statements are not the same.


The default way of the Universe is evolutionary (which is unintentional and reactive) and chaotic (which is unstable and unsafe). To design a system to be not-unsafe we need to understand Two Sciences – Design Science and Safety Science. Only then can we proactively and intentionally design safe, stable, and trustable systems.    If we do nothing and do not invest in mastering the Two Sciences then we will get the default outcome: unintended unsafety.  This is what the uncomfortable  evidence says we have.


So where does the Frightening Cost of Fear come in?

If our system is unintentionally and unpredictably unsafe then of course we will try to protect ourselves from the blame which inevitably will follow from disappointed customers.  We fear the blame partly because we know it is justified and partly because we feel powerless to avoid it. So we cover our backs. We invent and implement complex check-and-correct systems and we document everything we do so that we have the evidence in the inevitable event of a bad outcome and the backlash it unleashes. The evidence that proves we did our best; it shows we did what the safety zealots told us to do; it shows that we cannot be held responsible for the bad outcome.

Unfortunately this strategy does little to prevent bad outcomes. In fact it can have exactly the opposite effect of what is intended. The added complexity and cost of our cover-my-back bureaucracy actually increases the stress and chaos and makes bad outcomes more likely to happen. It makes the system even less safe. It does not deflect the blame. It just demonstrates that we do not understand how to design a not-unsafe system.


And the financial cost of our fear is frighteningly high.

Studies have shown that over 60% of nursing time is spent on documentation – and about 70% of healthcare cost is on hospital nurse salaries. The maths is easy – at least 42% of total healthcare cost is spent on back-covering-blame-deflection-bureaucracy.
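The arithmetic behind that 42% figure is a straight multiplication of the two quoted shares:

```python
# The two figures quoted in the text above
documentation_share_of_nursing_time = 0.60  # share of nursing time spent on documentation
nurse_salary_share_of_total_cost = 0.70     # share of total healthcare cost that is nurse salaries

# Share of total healthcare cost consumed by nursing documentation
bureaucracy_share = documentation_share_of_nursing_time * nurse_salary_share_of_total_cost
print(f"{bureaucracy_share:.0%}")  # 42%
```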

It gets worse though.

Those legal documents called clinical records need to be moved around and stored for a minimum of seven years. That is expensive. Converting them into an electronic format misses the point entirely. Finding the few shreds of valuable clinical information amidst the morass of back-covering-bureaucracy uses up valuable specialist time and has a high risk of failure. Inevitably the risk of decision errors increases – but this risk is unmeasured and is possibly unmeasurable. The frustration and fear it creates is very obvious though: to anyone willing to look.

The cost of correcting the Niggles that have been detected before they escalate to Not Agains, Near Misses and Never Events can itself account for half the workload. And the cost of clearing up the mess after the uncommon but inevitable disaster becomes built into the system too – as insurance premiums to pay for future litigation and compensation. It is no great surprise that we have unintentionally created a compensation culture! Patient expectation is rising.

Add all those costs up and it becomes plausible to suggest that the Cost of Fear could be a terrifying 80% of the total cost!


Of course we cannot just flick a switch and say “Right – let us train everyone in safe system design science“.  What would all the people who make a living from feeding on the present dung-heap do? What would the checkers and auditors and litigators and insurers do to earn a crust? Join the already swollen ranks of the unemployed?


If we step back and ask “Does the Cost of Fear principle apply to everything?” then we are faced with the uncomfortable conclusion that it most likely does. So the cost of everything we buy will have a Cost of Fear component in it. We will not see it written down like that but it will be in there – it must be.

This leads us to a profound idea.  If we collectively invested in learning how to design not-unsafe systems then the cost of everything could fall. This means we would not need to work as many hours to earn enough to pay for what we need to live. We could all have less fear and stress. We could all have more time to do what we enjoy. We could all have both of these and be no worse off in terms of financial security.

This Win-Win-Win outcome feels counter-intuitive enough to deserve serious consideration.


So here are some other blog topics on the theme of Safety and Design:

Never Events, Near Misses, Not Agains and Nailing Niggles

The Safety Line in the Quality Sand

Safety By Design

Standard Ambiguity

One of the words that causes the most debate and confusion in the world of Improvement is the word standard – because it has so many different yet inter-related meanings. It is an ambiguous word and a multi-faceted concept.

For example standard method can be the normal way of doing something (as in a standard operating procedure  or SOP); standard can be the expected outcome of doing something; standard can mean the minimum acceptable quality of the output (as in a safety standard); standard can mean an aspirational performance target; standard can mean an absolute reference or yardstick (as in the standard kilogram); standard can mean average; and so on.  It is an ambiguous word.

So it is no surprise that we get confused. And when we are confused we get scared, and we try to relieve our fear by asking questions – which does not help because we do not get clear answers – so we start to discuss, debate and argue, and all this takes effort, time and, inevitably, money. But the fog of confusion does not lift. If anything it gets denser. And the reason? Standard Ambiguity.


One cause of this is the perennial confusion between purpose and process. Purpose is the Why. Process is the How.  The concept of standard applied to the Purpose will include the outcomes: the minimum acceptable (safety standard), the expected (the specification standard) and the actual (the de facto standard).  The concept of standard applied to the process would include the standard operating procedures and the reference standards for accurate process measurement (e.g. a gold standard).


To illustrate the problems that result from confusing purpose standards with process standards we need look no further than education.  What is the purpose of a school? To deliver pupils who have achieved their highest educational potential perhaps. What is the purpose of an exam board? To have a common educational reference standard and to have a reliable method for comparing individual pupils against that reference standard perhaps.  So where does the idea of “Being the school that achieved the highest percentage of top grades?” fit with these two purpose standards?  Where does the league table concept fit? It is hard to see immediately. But we do want to improve the educational capability of our population because that is a national and global asset in an increasingly complex, rapidly changing, high technology world. So a league table will drive up the quality of education surely? But it doesn’t seem to be turning out that way. So what is getting in the way?


What is getting in the way is how we confuse collaboration and competition. It seems that many believe we have either collaboration or competition. Either-Or thinking is a trap for the unwary and whenever these words are uttered a small alarm bell should ring. Are collaboration and competition mutually exclusive? Or are we just making this assumption to simplify the problem? We do that a lot.


Suppose the exam boards were both competing and collaborating with each other. Suppose they collaborated to set and to maintain a stable and trusted reference standard; and suppose that they competed to provide the highest quality service to the schools – in terms of setting and marking exams. What would happen? An exam board that stepped out of line in terms of the standard would lose its authority to set and mark exams – it would cut its own commercial throat. And the quality of the examination process would go up, because those who invest in it will attract more of the market. What about the schools – what if they collaborated and competed too? What if they collaborated to set and maintain a stable and trusted reference standard of conduct and competency for their teachers – and what if they competed to improve the quality of their educational process? They would attract the most pupils. What could happen if we combine competition and collaboration so the sum becomes greater than the parts?


A similar situation exists in healthcare. Some hospitals are talking about competing to be the safest hospitals and collaborating to improve quality. It sounds plausible but is it rational?

Safety is an absolute standard – it is the common minimum acceptable quality. No hospital should fail on safety so this is not a suitable subject for competition. All hospitals should collaborate to set and to maintain safety – helping each other by sharing data, information, knowledge, and understanding. And with that Foundation of Trust they can then compete on quality – using the competitive spirit to pull them ever higher. Better quality of service, better quality of delivery and better quality of performance – including financial. Win-win-win. So when the quality of everyone improves through competitive upwards pull then the level of minimum acceptable quality increases – so the Safety Standard improves too.


A win-win-win outcome is the purpose of the application of the process of Improvement Science.

Predictable and Explainable – or Not

It is a common and intuitively reasonable assumption to believe that if something is explainable then it is predictable; and if it is not explainable then it is not predictable. Unfortunately this beguiling assumption is incorrect.  Some things are explainable but not predictable; and some others are predictable but not explainable.  Believe me? Of course not. We are all skeptics when our intuitively obvious assumptions and conclusions are challenged! We want real and rational evidence not rhetorical exhortation.

OK.  Explainable means that the principles that guide the process are conceptually simple. We can explain the parts in detail and we can explain how they are connected together in detail. Predictable implies that if we know the starting point in detail, and the intervention in detail, then we can predict what the outcome will be – in detail.


Let us consider an example. Say we know how much we have in our bank account, and we know how much we intend to spend on that new whizzo computer – then we can predict what will be left in our bank account when the payment has been processed. Yes. This is an explainable and predictable system. It is called a linear system.


Let us consider another example. Say we know we have six dice each with numbers 1 to 6 printed on them and we throw them at the same time. Can we predict where they will land and what the final sum will be? No. We can say that it will be between 6 and 36 but that is all. And after we have thrown the dice we will not be able to explain, in detail, how they came to rest exactly where they did.  This is an unpredictable and unexplainable system. It is called a random system.
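A short simulation makes the point concrete: the aggregate behaviour is bounded and its long-run average is stable, yet no individual throw can be predicted. This sketch uses Python's standard random module:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def throw_six_dice():
    """Throw six ordinary dice and return the total."""
    return sum(random.randint(1, 6) for _ in range(6))

sums = [throw_six_dice() for _ in range(10_000)]
mean = sum(sums) / len(sums)

# The bounds are certain: every total lies between 6 and 36 ...
assert min(sums) >= 6 and max(sums) <= 36
# ... and the long-run average settles near 21, even though no
# single throw can be predicted in advance.
assert 20.5 < mean < 21.5
```

So a random system can still have predictable statistical behaviour (bounds, average) even though each individual outcome is neither predictable nor explainable in detail.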


This is a picture of a conceptually simple system. It is a novelty toy and it comprises two thin sheets of glass held a few millimetres apart by some curved plastic spacers. The narrow space is filled with green coloured oil, some coarse black volcanic sand, and some fine white coral sand. That is all. It is a conceptually simple toy. I have (by some magical means) layered the sand so that the coarse black sand is at the bottom and the fine white sand is on top. It is stable arrangement – and explainable. I then tipped the toy on its side – I rotated it through 90 degrees. It is a simple intervention – and explainable.

My intervention has converted a stable system to an unstable one and I confidently predict that the sand and oil will flow under the influence of gravity. There is no randomness here – I do not jiggle the toy – so the outcome should be predictable because I can explain all the parts in detail before we start;  and I can explain the process in detail; and I can explain precisely what my intervention will be. So I should be able to predict the final configuration of the sand when this simple and explainable system finally settles into a new stable state again. Yes?

Well, I cannot. I can make some educated guesses – some plausible projections. But the only way to find out precisely what will happen is by doing the experiment and observing what actually happens.

This is what happened.

The final, stable configuration of the coarse black and fine white sand has a strange beauty in the way the layers are re-arranged. The result is not random – it has structure. And with the benefit of hindsight I feel I can work backwards and understand how it might have come about. It is explainable in retrospect but I could not predict it in prospect – even with a detailed knowledge of the starting point and the process.

This is called a non-linear system. Explainable in concept but difficult to predict in practice. The weather is another example of a non-linear system – explainable in terms of the physics but not precisely predictable. How reliable are our long range weather forecasts – or the short range ones for that matter?

Non-linear systems exhibit complex and unpredictable  behaviour – even though they may be simple in concept and uncomplicated in construction.  Randomness is usually present in real systems but it is not the cause of the complex behaviour, and making our systems more complicated seems likely to result in more unpredictable behaviour – not less.
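The logistic map is not mentioned in the text, but it is a classic minimal illustration of exactly this claim: a one-line, fully explainable rule whose behaviour cannot be predicted far ahead in practice, because tiny differences in the starting point grow rapidly:

```python
def logistic(x, r=4.0):
    """One step of the logistic map: a simple, fully explainable rule."""
    return r * x * (1 - x)

# Two starting points that differ by one part in a billion
x, y = 0.2, 0.2 + 1e-9

first_gap = None
max_gap = 0.0
for step in range(1, 81):
    x, y = logistic(x), logistic(y)
    if step == 1:
        first_gap = abs(x - y)          # still tiny after one step
    max_gap = max(max_gap, abs(x - y))  # but the gap grows exponentially

# After one step the trajectories are still indistinguishable ...
assert first_gap < 1e-6
# ... yet within 80 steps they have diverged completely.
assert max_gap > 0.1
```

Explainable in concept, uncomplicated in construction, and with no randomness anywhere – yet unpredictable in practice beyond a short horizon.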

If we want the behaviour of our system to be predictable and our system has non-linear parts and relationships in it – then we are forced to accept two Universal Truths.

1. That our system behaviour will only be predictable within limits (even if there is little or no randomness in it).

2. That to keep the behaviour within acceptable limits then we need to be careful how we arrange the parts and how they relate to each other.

This challenge of creating a predictable-within-acceptable-limits system from non-linear parts is called resilient design.


We have a fourth option to consider: a system that has a predictable outcome but an unexplainable reason.

We make predictions two ways – by working out what will happen or by remembering what has happened before. The second method is much easier so it is the one we use most of the time: it is called re-cognition. We call it knowledge.

If we have a black box with inputs on one side and outputs on the other, and we observe that when we set the inputs to a specific configuration we always get the same output – then we have a predictable system. We cannot explain how the inputs result in the output because the inner workings are hidden. It could be very simple – or it could be fiendishly complicated – we do not know.

In this situation we have no choice but to accept the status quo – and we have to accept that to get a predictable outcome we have to follow the rules and just do what we have always done before. It is the creed of blind acceptance – the If you always do what you have always done you will always get what you always got. It is knowledge but it is not understanding. New knowledge can only be found by trial and error. It is not wisdom, it is not design, it is not curiosity and it is not Improvement Science.


If our systems are non-linear (which they are) and we want predictable and acceptable performance (which we do) then we must strive to understand them and then to design them to be as simple as possible (which is difficult) so that we have the greatest opportunity to improve their performance by design (which is called Improvement Science).


This is a snapshot of the evolving oil-and-sand system. Look at that weird wine-glass shaped hole in the top section caused by the black sand being pulled down through the gap in the spacer then running down the slope of the middle section to fill a white sand funnel and then slip through the next hole onto the top of the white sand pyramid created by the white sand in the middle section that slipped through earlier onto the top of the sliding sand in the lowest section. Did you predict that? I suspect not. Me neither. But I can explain it – with the benefit of hindsight.

So what is it that is causing this complex behaviour? It is the spacers – the physical constraints to the flow of the sand and oil. And the same is true of systems – when the process hits a constraint the behaviour suddenly changes and complex behaviour emerges. And there is more to it than even this. It is the gaps between the spacers that are creating the complex behaviour. The flow from one compartment leaks into the next and influences its behaviour, and then into the next. This is what happens in all systems – the more constraints that are added to force the behaviour into predictable channels, and the more gaps that exist in the system of constraints, the more complex and unpredictable the system behaviour becomes. Which is exactly the opposite of the intended outcome.


The lesson that this simple toy can teach us is that if we want stable and predictable (i.e. non-complex) behaviour from our complicated systems then we must design them to operate inside the constraints so that they just never quite touch them. That requires data, information, knowledge, understanding and wise design. That is called Improvement Science.


But if, in an act of desperation, we force constraints onto the system we will make the system less stable, less predictable, less safe, less productive, less enjoyable and less affordable. That is called tampering.

Little and Often

There seem to be two extremes to building the momentum for improvement – One Big Whack or Many Small Nudges.


The One Big Whack can come at the start and is a shock tactic designed to generate an emotional flip – a Road to Damascus moment – one that people remember very clearly. This is the stuff that newspapers fall over themselves to find – the Big Front Page Story – because it is emotive so it sells newspapers.  The One Big Whack can also come later – as an act of desperation by those in power who originally broadcast The Big Idea and who are disappointed and frustrated by lack of measurable improvement as the time ticks by and the money is consumed.


Many Small Nudges do not generate a big emotional impact; they are unthreatening; they go almost unnoticed; they do not sell newspapers, and they accumulate over time.  The surprise comes when those in power are delighted to discover that significant improvement has been achieved at almost no cost and with no cajoling.

So how is the Many Small Nudge method implemented?

The essential element is The Purpose – and this must not be confused with A Process.  The Purpose is what is intended; A Process is how it is achieved.  And answering the “What is my/our purpose?” question is surprisingly difficult to do.

For example I often ask doctors “What is our purpose?”  The first reaction is usually “What a dumb question – it is obvious”.  “OK – so if it is obvious can you describe it?”  The reply is usually “Well, err, um, I suppose, um – ah yes – our purpose is to heal the sick!”  “OK – so if that is our purpose how well are we doing?”  Embarrassed silence. We do not know because we do not all measure our outcomes as a matter of course. We measure activity and utilisation – which are measures of our process not of our purpose – and we justify not measuring outcome by being too busy – measuring activity and utilisation.

Sometimes I ask the purpose question a different way. There is a Latin phrase that is often used in medicine: primum non nocere which means “First do no harm”.  So I ask – “Is that our purpose?”.  The reply is usually something like “No but safety is more important than efficiency!”  “OK – safety and efficiency are both important but are they our purpose?”.  It is not an easy question to answer.

A Process can be designed – because it has to obey the Laws of Physics. The Purpose relates to People, not to Physics – so we cannot design The Purpose, we can only design a process to achieve The Purpose. We can define The Purpose though – and in so doing we achieve clarity of purpose. For a healthcare organisation a possible Clear Statement of Purpose might be “We want a system that protects, improves and restores health”.

Purpose statements state what we want to have. They do not state what we want to do, to not do or to not have. This may seem like splitting hairs but it is important because the Statement of Purpose is key to the Many Small Nudges approach.

Whenever we have a decision to make we can ask “How will this decision contribute to The Purpose?”. If an option would move us in the direction of The Purpose then it gets a higher ranking than a choice that would steer us away from The Purpose. There is only one On Purpose direction and many Off Purpose ones – and this insight explains why avoiding what we do not want (i.e. harm) is not the same as achieving what we do want. We can avoid doing harm and yet not achieve health and be very busy all at the same time.


Leaders often assume that it is their job to define The Purpose for their Organisation – to create the Vision Statement, or the Mission Statement. Experience suggests that clarifying the existing but unspoken purpose is all that is needed – just by asking one little question – “What is our purpose?” – and asking it often and of everyone – and not being satisfied with a “process” answer.