Measuring Chaos

One of the big hurdles in healthcare improvement is that most of the low-hanging fruit has been harvested.

These are the small improvement projects that can be done quickly because, as soon as the issue is made visible to the stakeholders, the cause is obvious and so is the solution.

This is where kaizen works well.

The problem is that many healthcare issues are rather more difficult because the process that needs improving is complicated (i.e. it has lots of interacting parts) and usually exhibits rather complex behaviour (e.g. chaotic).

One good example of this is a one-stop multidisciplinary clinic.

These are widely used in healthcare, and for good reason.  It is better for a patient with a complex illness, such as diabetes, to be able to access whatever specialist assessment and advice they need, when they need it … i.e. in one visit to an outpatient clinic.

The multidisciplinary team (MDT) is more effective and efficient when it can problem-solve collaboratively.

The problem is that the scheduling design of a one-stop clinic is rather trickier than a traditional simple-but-slow-and-sequential new-review-refer design.

A one-stop clinic that has not been well designed feels chaotic and stressful for both staff and patients, and usually exhibits the paradoxical behaviour of waiting patients and waiting staff.
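To see how both can happen at the same time, here is a minimal sketch (in Python) of a deliberately over-simplified two-step clinic in which every patient is block-booked to arrive at the start; the patient count, step names and task durations are illustrative assumptions, not data from a real clinic.

```python
import random

random.seed(1)

N_PATIENTS = 10                    # all block-booked to arrive when the clinic opens
STEPS = ["assessment", "review"]   # two sequential steps, one clinician per step

# Illustrative task durations in minutes (assumed, not measured)
duration = {"assessment": lambda: random.randint(10, 20),
            "review":     lambda: random.randint(15, 30)}

free_at = {step: 0 for step in STEPS}      # when each clinician next becomes free
staff_idle = {step: 0 for step in STEPS}   # minutes each clinician spends waiting
patient_wait = 0                           # minutes patients spend queueing

for _ in range(N_PATIENTS):
    ready = 0                              # every patient arrives at clinic start
    for step in STEPS:
        start = max(ready, free_at[step])
        patient_wait += start - ready                        # patient queues for the clinician
        staff_idle[step] += max(0, ready - free_at[step])    # clinician queues for the patient
        ready = start + duration[step]()
        free_at[step] = ready

print(f"Total patient waiting time: {patient_wait} min")
for step, idle in staff_idle.items():
    print(f"{step} clinician idle time: {idle} min")
```

Even in this toy model the downstream clinician starts the day idle while a queue builds upstream, so patients and staff are both kept waiting; a real one-stop clinic, with more steps and more interactions, behaves far less predictably.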


So what do we need to do?

We need to map and measure the process and diagnose the root cause of the chaos, and then treat it.  A quick kaizen exercise should do the trick. Yes?

But how do we map and measure the chaotic behaviour of lots of specialists buzzing around like blue-***** flies, trying to fix the emergent clinical and operational problems on the hoof?  This is not the linear, deterministic, predictable, standardised, machine-dominated production-line environment where kaizen evolved.

One approach might be to get the staff to audit what they are doing as they do it. But that adds extra work, usually makes the chaos worse, fuels frustration and results in a very patchy set of data.

Another approach is to employ a small army of observers who record what happens, as it happens.  This is possible and it works, but doing it well requires a lot of experience of the process being observed.  And even if that is achieved, the next barrier is the onerous task of transcribing and analysing the ocean of harvested data.  And then there is the challenge of feeding back the results much later … i.e. when the sands have shifted.


So we need a different approach … one that can capture the fine detail of a complex process in real-time, with minimal impact on the process itself, and that can process and present the wealth of data in a visual, easy-to-assess format … and in real-time too.
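As a rough illustration of the kind of output such an approach needs to produce, here is a minimal sketch (in Python) that turns a timestamped event log into a per-patient summary of time-in-clinic versus time-being-seen; the log format, task names and times are invented for illustration and are not the method used in the case studies below.

```python
# Hypothetical event log, one row per completed task:
# (patient_id, task, start "HH:MM", finish "HH:MM") -- illustrative values only
log = [
    ("P1", "check-in",   "09:00", "09:05"),
    ("P1", "nurse",      "09:20", "09:35"),
    ("P1", "consultant", "10:05", "10:25"),
    ("P2", "check-in",   "09:02", "09:06"),
    ("P2", "nurse",      "09:35", "09:50"),
    ("P2", "consultant", "10:25", "10:40"),
]

def minutes(hhmm: str) -> int:
    """Convert an HH:MM clock time into minutes past midnight."""
    h, m = map(int, hhmm.split(":"))
    return h * 60 + m

# Summarise each patient's visit: total time in clinic vs time actually being seen.
for pid in sorted({row[0] for row in log}):
    tasks = sorted((r for r in log if r[0] == pid), key=lambda r: r[2])
    time_in_clinic = minutes(tasks[-1][3]) - minutes(tasks[0][2])
    time_being_seen = sum(minutes(t[3]) - minutes(t[2]) for t in tasks)
    print(f"{pid}: {time_in_clinic} min in clinic, "
          f"{time_being_seen} min being seen, "
          f"{time_in_clinic - time_being_seen} min waiting")
```

Turning the same event stream into a Gantt-style picture of who was where, and when, is what makes the data easy to assess at a glance.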

This is a really tough design challenge …
… and it has just been solved.

Here are two recent case studies that describe how it was done using a robust systems engineering method.
