If the forecast says there is a 10 percent chance of rain and you don’t bring an umbrella, did you make a bad decision? If you look at live traffic on your map app and pick the fastest route, and then an accident causes a delay, did you make a bad decision?
The common assumption that these are bad choices is an example of outcome bias, or “resulting,” where we evaluate our decision or our process based on the outcome alone.
Why is this a problem? Judging by outcomes alone gives us no clear path to improvement. We become reactive, and we end up divorcing process from outcome. This leads to bad processes and bad decisions, because we fail to understand cause and effect and instead simply react to the effect.
The classic example is drinking and driving. We all know someone who has driven while intoxicated without crashing and drawn the conclusion that they are "good at drinking and driving." This defies everything we know about cause and effect, yet the wrong conclusion is drawn, quite consciously, every day by very smart people.
How do we use lean thinking to overcome outcome bias?
Start with standards. Where there is no standard, there can be no improvement. To use my first example, exactly what percentage chance of rain should cause me to bring an umbrella? If I sometimes bring one when it’s a 10 percent chance, and sometimes don’t when it’s a 50 percent chance, then how do I improve? I have no standard, and nothing to return to for evaluation and improvement.
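One way to see what a standard buys you is to make the decision rule explicit. The sketch below is purely illustrative: the 30 percent threshold is a hypothetical value, not one the article prescribes. The point is that once the rule is written down, you can follow it consistently and then evaluate the rule itself, rather than second-guessing each outcome.

```python
# A decision standard made explicit: a fixed, documented threshold
# instead of an ad hoc judgment each morning.
# The 30% threshold is a hypothetical value chosen for illustration.

UMBRELLA_THRESHOLD = 0.30  # bring an umbrella at or above a 30% chance of rain


def should_bring_umbrella(chance_of_rain: float) -> bool:
    """Apply the standard consistently so the rule itself can be evaluated."""
    return chance_of_rain >= UMBRELLA_THRESHOLD


print(should_bring_umbrella(0.10))  # False
print(should_bring_umbrella(0.50))  # True
```

With the rule explicit, a rainy day with no umbrella is no longer automatically a "bad decision"; it is either a failure to follow the standard or a signal to revisit the threshold.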
To use standards to overcome outcome bias, when you get an outcome you don’t want from a decision or process, you must ask three questions: First, do we have a standard? Second, did we follow the standard? Third, can we improve the standard?
Every standard must also include an owner. This is important because many people might react to an outcome and change something. A clear owner can provide governance over changes to the standard, ensuring any change is made for a good reason.
Build in reflection. Reflection is a cycle of reviewing our decisions, actions, or processes for improvement. It can be done individually or collectively. A simple, effective method is the After Action Review. This method has many forms, but the simple and effective version includes just four questions: First, what was supposed to happen? (Or, what was the standard?) Second, what did happen, and why? Third, what can we learn? And last, what will we do differently (if anything)?
The most common mistake made in reflection is only doing it when there is a failure. If this is your approach, it will likely increase your outcome bias. If your process worked for 364 days before it failed, and you only reflect after that event, there will be a strong pull towards changing something even if the process has proven itself effective.
Understand your data. The deliberate use of data and trends within your process will help keep you centered on what is most effective, and keep you from overreacting to anomalies. Whether it is market forecast data or machine performance data, what is the data telling you about cause and effect? It matters that you capture the data in a way that tells you the right story over the right time period. Knowing how many hamburgers you sold across a 24-hour period is not the same as knowing the rate of sales from 11 a.m. to 2 p.m.
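The hamburger point can be made concrete with a small sketch. All the numbers here are made up for illustration: a flat baseline of sales with a lunch rush, where the daily total obscures a peak rate that is several times higher.

```python
# Illustrative sketch with invented numbers: the same sales data tells
# a different story depending on the time window you examine.

hourly_sales = {hour: 5 for hour in range(24)}  # baseline: 5 burgers/hour
for hour in (11, 12, 13):                       # lunch rush, 11:00-14:00
    hourly_sales[hour] = 40

daily_total = sum(hourly_sales.values())
daily_rate = daily_total / 24                   # average burgers per hour

lunch_total = sum(hourly_sales[h] for h in range(11, 14))
lunch_rate = lunch_total / 3                    # burgers per hour at lunch

print(daily_total)  # 225
print(lunch_rate)   # 40.0
```

The daily average works out to roughly 9 burgers per hour, while the lunch-hour rate is 40: a process decision (staffing, grill capacity) based on the daily figure would badly misjudge the condition that actually matters.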
What’s just as important is recognizing when data about the past is no longer valid. This is where outcome bias can be turned to your advantage, as long as you use the signal of an undesired outcome to ask the right questions: What are the assumptions behind the data you have been using? Are those assumptions still valid? When the conditions under which your data was an effective prediction tool cease to exist, your method for making decisions must change accordingly.
Jamie Flinchbaugh is an author, speaker, and consultant on lean principles. He spoke at the 2019 Supply Summit. Copyright 2019. Informa.