Over the last couple of years, Mark has been interested in how management can use process behavior charts to separate signal from noise. This post is a recap of Mark's presentation. However, the webinar contains many more valuable examples, so we highly recommend watching it.
Presented by Mark Graban, author of "Measures of Success: React Less, Lead Better, Improve More"
In previous webinars, Mark Graban has introduced the potential rewards of using Process Behavior Charts, including why they are a better alternative to "bowling charts" and how they help us avoid reacting to every uptick or downturn in a metric. However, viewers wanted more information, and here we deliver!
In this webinar, you will:
The example chart shows data that is simply fluctuating around an average. We could say that all of these data points represent noise, which tells us that a consistent process is generating predictable results. One might not like the level of performance, but the good news is that it is likely to remain predictable: future performance will fall somewhere between the upper and lower limits. Because we only have noise, we don't need to figure out why one data point is lower or higher than the next. If we don't like the level of performance, we need to focus on figuring out how to improve the system.
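As a rough sketch (not from the webinar), here is how the limits of an XmR chart, the type of process behavior chart Mark recommends, can be calculated from the data itself: the average, plus and minus 2.66 times the average moving range. The monthly values below are invented for illustration.

```python
def xmr_limits(values):
    """Center line and natural process limits for an XmR chart:
    the average +/- 2.66 times the average moving range."""
    average = sum(values) / len(values)
    # Moving ranges are the absolute differences between consecutive points
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_moving_range = sum(moving_ranges) / len(moving_ranges)
    return (average,
            average - 2.66 * avg_moving_range,   # lower natural process limit
            average + 2.66 * avg_moving_range)   # upper natural process limit


# Invented monthly metric that is "just noise" fluctuating around an average
data = [52, 48, 55, 50, 47, 53, 49, 51, 54, 46]
average, lower, upper = xmr_limits(data)
print(f"average={average:.1f}, limits=({lower:.1f}, {upper:.1f})")
```

Any future point that lands between those limits is consistent with noise; we would expect the process to keep producing results in that range until the system itself changes.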
Once we have the chart plotted and have identified signals, we can ask, "Did something really change in the system, or not?" As an example, we took a look at registrations for this webinar. The registration statistics look like this:
All of these statements are true, but they don't give us much insight into the performance of our system. There's nothing in those numbers to help us determine whether these results indicate a signal or just noise. A process behavior chart can do that.
If we plot webinar registrations going back to 2014 (Figure 3), you can see that most registrations hovered around the average. Still, we did have one that exceeded the upper limit: a webinar by Jess Orr on how to use A3 thinking in everyday life, which had over 700 registrations. That data point above the upper limit was a signal. It did not occur randomly, so we need to ask why the results of our webinar promotion were different in this particular case. Did we send more reminders or use different wording? Perhaps Jess Orr is a top-rated speaker.
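Building on the xmr_limits sketch above, the simplest signal test, a point outside the limits, takes only a few lines; the registration counts here are invented, with 720 standing in for the Jess Orr webinar:

```python
def points_outside_limits(values):
    """Indices of points beyond the natural process limits (the most basic signal)."""
    _, lower, upper = xmr_limits(values)
    return [i for i, v in enumerate(values) if v < lower or v > upper]


# Invented registration counts; index 5 plays the role of the standout webinar
registrations = [310, 295, 330, 305, 290, 720, 315, 300]
print(points_outside_limits(registrations))  # -> [5]
```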
Another example: headlines declared that voter turnout in the 2018 midterm elections was record-setting. To give that claim some context, we created a run chart going back to 1790 (Figure 4). Was 2018 really record-setting? Maybe for the last 50 years, but the longer chart tells a different story. This analysis reminds us that charts showing only this year's or last year's data are not necessarily useful in the workplace. We want to be careful that we are not showing such a limited time frame that it changes the conclusions one might draw.
In this case, if we believe that voting is a desirable behavior, we might take steps to amplify the change and encourage more people to vote. Perhaps the increase came from more mail-in voting or extended voting hours; a root cause analysis would be indicated.
Many other headlines make it sound as though something significant has happened, only for it to turn out to be noise in a system fluctuating around an average. Examples include emergency room wait times in Canada, automobile accident fatalities in the US, and pedestrian fatalities.
When discussing the decline in fatal accidents between 2016 and 2017, the NHTSA said they were encouraged by the fall but commented that "There is no single reason for the overall decline." This is a wise statement, given that the decline in deaths is an indicator of noise in the system rather than a signal, which likely would have an identifiable cause.
In life and in the workplace, we sometimes implement changes to try to improve the system and encourage desirable behavior. One example is Washington, DC, where officials are debating eliminating right turns on red at some intersections to reduce accidents and save lives, under the theory that the practice is dangerous for pedestrians. However, experts disagree about whether the move would improve safety or increase risk.
Should they implement this countermeasure and then see a drop in pedestrian fatalities that goes below the lower limit, could we conclude that the ban on right turns on red caused the change? Not really. One data point outside the limits is a signal, but it could just be a blip, or other factors could have caused the change. To determine that a countermeasure has been effective, you need to see a sustained shift in the data.
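A common test for a sustained shift in the process behavior chart literature is eight consecutive points on the same side of the average. Here is a minimal sketch; the fatality counts are invented, and the center line is frozen from a pre-change baseline, which is how you would judge whether a countermeasure actually moved the system:

```python
def shift_start(values, center, run_length=8):
    """Index where a run of `run_length` consecutive points on the same
    side of `center` begins (a sustained-shift signal), or None."""
    run, side = 0, 0
    for i, v in enumerate(values):
        current = (v > center) - (v < center)  # +1 above, -1 below, 0 on the line
        if current != 0 and current == side:
            run += 1
        else:
            side, run = current, (1 if current else 0)
        if run >= run_length:
            return i - run_length + 1
    return None


# Invented data: center line frozen from the pre-countermeasure baseline
baseline = [42, 45, 40, 44, 41, 46, 43, 39, 44, 42]
center = sum(baseline) / len(baseline)  # 42.6
after = [44, 37, 36, 38, 35, 37, 36, 38, 34]
print(shift_start(after, center))  # -> 1: eight straight points below the average
```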
If you are trying to chart rare events, such as central line infections in a hospital, that don't happen very often and hover around an average of one, your process behavior chart may make everything look like noise when it really isn't. An alternative is to chart the days between events. When a spike in the number of days between events exceeds the upper limit, you may have found a signal indicating that something in the system changed, and it's worth finding out what.
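As a sketch of that transformation (the infection dates are invented), the dates of rare events become gaps measured in days, which can then be fed into an XmR chart like the one above; an unusually long gap shows up as a point above the upper limit:

```python
from datetime import date


def days_between_events(event_dates):
    """Convert a sorted list of event dates into the gaps between them, in days."""
    return [(b - a).days for a, b in zip(event_dates, event_dates[1:])]


# Invented central line infection dates; the 87-day gap is the kind of
# "good" signal worth investigating
infections = [date(2019, 1, 5), date(2019, 1, 19), date(2019, 2, 2),
              date(2019, 4, 30), date(2019, 5, 12)]
print(days_between_events(infections))  # -> [14, 14, 87, 12]
```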
Using process behavior charts can help you react less by seeing through the noise and only worrying about data that indicates a valid signal. They can also help you lead better by building a deeper understanding of cause-and-effect relationships. When you react less and lead better, you can ultimately improve more, which is the goal.