More than the Sum or Less?

It is often assumed that if you combine world-class individuals into a team you will get a world-class team.

Meredith Belbin showed 30 years ago that you do not – and it came as a big shock at the time!

So, if world class individuals are not enough, what are the necessary and sufficient conditions for a world-class team?

The late Russell Ackoff described it perfectly – he said that if you take the best parts of all the available cars and put them together you do not get the best car – you do not even get a car. The parts are necessary but they are not sufficient – how the parts connect to each other and how they influence each other is more important.  These interdependencies are part of the system – and to understand a system requires understanding both the parts and their relationships.

A car is a mechanical system; the human body is a biological system; and a team is a social system. So creating a high-performance, healthy, world-class team requires that both the individuals and their relationships with each other are aligned and resonant.

When the parts are aligned we get more than the sum of the parts; and when they are not we get less.

If we were to define intelligence quotient as “an ability to understand and solve novel problems”, then the capability of a team to solve novel problems is its collective intelligence. Experience suggests that a group can appear to be less intelligent than any of its individual members. The problem here is with the relationships between the parts – and the term that is often applied is “dysfunctional”.

The root cause is almost always distrustful attitudes, which stem from disrespectful prejudices and which lead to discounting behaviour. We learn these prejudices, attitudes and behaviours from each other and we reinforce them with years of practice. But if they are learned then they can be un-learned. It is simple in theory, and it is possible in practice, but it is not easy.

So if we want to (dis)solve complex, novel problems then we need world-class problem-solving teams; and to transform our 3rd class dysfunctional teams we must first learn to challenge, respectfully, our disrespectful behaviour.

The elephant is in the room!

Does More Efficient equal More Productive?

It is often assumed that efficiency and productivity are the same thing – and this assumption leads to the conclusion that if we use our resources more efficiently then we will automatically be more productive. This is incorrect. Productivity is the ratio of what we get out to what we put in – and the important caveat to remember is that only the output which meets expectation is counted: only output that passes the required quality specification.

This caveat has two important implications:

1. Not all activity contributes to productivity. Failures do not.
2. To measure productivity we must define a quality specification.

Efficiency is about how resources are used and is often presented as a metric called utilisation – the ratio of the time a resource was used to the time that resource was available. Note that utilisation therefore includes time spent by resources detecting and correcting avoidable errors.

Increasing utilisation does not always imply increasing productivity: It is possible to become more efficient and less productive by making, checking, detecting and fixing more errors.

For example, if we make more mistakes we will have more output that fails to meet the expected quality; our customers complain and productivity goes down. Our standard reaction to this situation is to put pressure on ourselves to do more checking and to correct the errors we find – which means that our utilisation has gone up but our productivity has stayed down: we are doing more work to achieve the same outcome.

However, if we remove the cause of the mistakes then more output will meet the quality specification and productivity will go up (a better outcome with the same resources); and we also have less re-work to do, so utilisation goes down, which means productivity goes up even further (remember: productivity = success out divided by effort in). Fixing the root cause of errors delivers a double productivity improvement.
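The double improvement can be made concrete with a small worked sketch. All the numbers below are invented for illustration only:

```python
# Productivity = output that meets the quality spec / resource-time put in.
# Utilisation  = resource-time used / resource-time available.

def productivity(good_output, effort_in):
    return good_output / effort_in

available_hours = 120  # resource-time available in the period

# Before: 100 units attempted at 1 hour each, 20% fail the quality spec,
# and we spend 20 extra hours checking and re-working.
good = 80
effort = 100 + 20                 # making + checking/fixing
print(productivity(good, effort))  # 80/120, about 0.67
print(effort / available_hours)    # utilisation = 1.0 (fully busy)

# After removing the cause of the mistakes: all 100 units pass, no re-work.
good = 100
effort = 100
print(productivity(good, effort))  # 100/100 = 1.0 -> productivity up
print(effort / available_hours)    # about 0.83 -> utilisation down
```

Both effects appear at once: the numerator (good output) rises and the denominator (effort) falls, while utilisation actually drops.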

In the UK we have become a victim of our own success. We have a population that is living longer (hurray), which will present a greater demand for medical care in the future; however, the resources available to provide healthcare cannot increase at the same pace (boo). So we have a problem looming that is not going to go away just by ignoring it. Our healthcare system needs to become more productive. It needs to deliver more care with the same cash – and that implies three requirements:
1. We need to specify our expectation of required quality.
2. We need to measure productivity so that we can measure improvement over time.
3. We need to diagnose the root-causes of errors rather than just treat their effects.

Improved productivity requires improved quality and lower costs – which is good because we want both!

How Do We Measure the Cost of Waste?

There is a saying in Yorkshire “Where there’s muck there’s brass” which means that muck or waste is expensive to create and to clean up. 

Improvement science provides the theory, techniques and tools to reduce the cost of waste and to re-invest the savings in further improvement.  But how much does waste cost us? How much can we expect to release to re-invest?  The answer is deceptively simple to work out and decidedly alarming when we do.

We start with the conventional measurement of cost – the expenses – be they materials, direct labour, indirect labour, whatever. We just add up all the costs for a period of time to give the total spend – let us call that the stage cost.

The next step requires some new thinking: it requires looking from the perspective of the job or customer, following the path backwards from the intended outcome and recording what was done, how much resource-time and material it required, and how much that required work actually cost. This is what one satisfied customer is prepared to pay for, so let us call it the required stream cost. We then multiply the output (the activity for the period) by the required stream cost and call that the total stream cost.

Finally we compare the stage cost and the total stream cost – the difference is the cost of waste: the cost of all the resources consumed that did not contribute to the intended outcome. The difference is usually large; the stream cost is typically only 20%-50% of the stage cost!
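The calculation itself fits in a few lines. The figures below are invented purely to illustrate the arithmetic:

```python
# Invented figures for one accounting period.
stage_cost = 500_000         # total spend: materials, direct and indirect labour
required_stream_cost = 150   # cost of the work one satisfied customer actually requires
output = 1_200               # jobs completed to specification in the period

total_stream_cost = output * required_stream_cost
cost_of_waste = stage_cost - total_stream_cost

print(total_stream_cost)               # 180000
print(cost_of_waste)                   # 320000
print(total_stream_cost / stage_cost)  # 0.36 -> within the 20%-50% range quoted
```

The required stream cost has to come from direct observation of the process; it cannot be read off a conventional financial report.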

This may sound unbelievable but it is true – and the only way to prove it is to go and observe the process and do the calculation; just looking at our conventional financial reports will not give us the answer. Once we do this simple experiment we will see the opportunity that Improvement Science offers – to reduce the cost of waste in a planned and predictable manner.

But if we are not prepared to challenge our assumptions by testing them against reality then we will deny ourselves that opportunity. The choice is ours.

One of the commonest assumptions we make is called the Flaw of Averages: the assumption that it is always valid to use averages when developing business cases. This assumption is incorrect. But it is not immediately obvious why it is incorrect, and the explanation sounds counter-intuitive. So, one way to illustrate it is with a real example, and here is one that has been created using a process simulation tool – virtual reality:
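A minimal sketch of the same effect, with invented parameters: imagine a clinic where patients arrive every 10 minutes on average and consultations take 9 minutes on average. A business case built only on those averages predicts that no one ever waits; add realistic variation and a long queue appears.

```python
import random

random.seed(1)  # reproducible illustration

def simulate(mean_gap, mean_service, n=10_000, variable=True):
    """Single-server queue; returns the average wait per patient (minutes)."""
    clock = server_free = total_wait = 0.0
    for _ in range(n):
        # Either fixed "average" times, or exponentially distributed ones.
        gap = random.expovariate(1 / mean_gap) if variable else mean_gap
        service = random.expovariate(1 / mean_service) if variable else mean_service
        clock += gap                        # this patient's arrival time
        start = max(clock, server_free)     # wait if the server is busy
        total_wait += start - clock
        server_free = start + service
    return total_wait / n

print(simulate(10, 9, variable=False))  # averages-only world: 0.0 minutes waiting
print(simulate(10, 9, variable=True))   # with variation: a long average wait
```

The averages are identical in both runs; only the variation differs, and it is the variation that creates the queue.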

When Is Seeing Believing?

One of the problems with our caveman brains is that they are a bit slow. It may not feel that way, but they are – and if you don’t believe me, try this experiment. Stand up, get a book, hold it open in your left hand at any page, and hold a coin in your right hand between finger and thumb so that it will land on the floor when you drop it. Close your eyes and count to three. Then open your eyes, drop the coin, and immediately start reading the book. How long is it before you are consciously aware of the meaning of the words? My guess is that the coin hits the floor at about the same time that you start making sense of what is on the page. That means it takes about half a second to start perceiving what you are seeing.

That long delay is a problem because the world around us is often changing much faster than that and, to survive, we need to keep up. So what we do is fill in the gaps – what we perceive is a combination of what we actually see and what we expect to see – and the process is seamless, automatic and unconscious. That is OK so long as expectation and reality stay in tune – but what happens when they don’t? We experience the “Eh?” effect, which signals that we are temporarily confused – an uncomfortable and scary feeling which resolves when we re-align our perception with reality.

Over time we all learn to avoid that uncomfortable confusion feeling with a simple mind trick – we just filter out the things we see that do not fit our expectation. Psychologists call this “perceptual distortion”, and the effect is even greater when we look with our mind’s eye rather than our real eyes – then we perceive only what we expect to see and we avoid the uncomfortable “Eh?” effect completely. This unconscious behaviour we all demonstrate is called self-delusion and it is a powerful barrier to improvement – because to improve we have to first accept that what we have is not good enough and that reality does not match our expectation.

To become a master of improvement it is necessary to learn to be comfortable with the “eh?” feeling – to disconnect it from the negative emotion of fear that drives the denial reaction and self-justifying behaviour, and instead to reconnect it to the positive emotion of excitement that drives curiosity and exploratory behaviour.

One easy way to generate the “eh?” effect is to perform reality checks – to consciously compare what we actually see with what we expect to see. That is not easy, because our perception is very slippery – we are all very, very good at perceptual distortion. A way around this is to present ourselves with a picture of reality over time: using the past as a baseline and our understanding of the system, we can predict what we believe will happen in the near future. We then compare what actually happens with our expectation. Any significant deviations are “eh?” effects that we can use to focus our curiosity – for there hide the nuggets of new knowledge.

But how do we know what is a “significant” deviation? To answer that we must avoid using our slippery, self-delusional perception system – we need a tool that is designed to do this interpretation safely, easily and quickly. Click here for an example of such a tool.
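One well-known tool of this kind is a control chart (it may or may not be the tool the author links to). The sketch below, with invented data, computes XmR natural process limits from a baseline period and flags only the points that fall outside them:

```python
def xmr_limits(baseline):
    """Natural process limits for an XmR (individuals) chart."""
    mean = sum(baseline) / len(baseline)
    # Average moving range between consecutive baseline points.
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    spread = 2.66 * avg_mr  # conventional XmR chart constant
    return mean - spread, mean + spread

# Invented baseline: the picture of reality over time.
baseline = [42, 45, 41, 44, 43, 46, 42, 44, 45, 43]
lo, hi = xmr_limits(baseline)

# New observations: compare what actually happens with the baseline limits.
new_points = [44, 43, 52, 45]
flags = [not (lo <= x <= hi) for x in new_points]
print(flags)  # only the 52 falls outside the natural limits
```

Points inside the limits are routine variation and deserve no reaction; a point outside them is a genuine “eh?” worth investigating.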