Humans are prone to predictable decision-making errors. Learn how Daniel Kahneman’s work in behavioral economics can provide solutions for leaders and their organizations.
Nobel laureate Daniel Kahneman’s 2011 book, Thinking, Fast and Slow, is a best-selling examination of decision making and cognitive biases. The book is a popular treatment of Kahneman’s academic work in the field of behavioral economics, which studies the reasons for irrational choices. Thinking, Fast and Slow has found applications in business, psychology, organizational behavior, and other fields.
This article will examine Kahneman’s ideas and how to apply the book’s lessons to decision making by leaders and organizations.
A central premise of Thinking, Fast and Slow is that the brain uses two distinct systems to process information:
System 1. System 1 thinking is fast and intuitive. It relies on heuristics – or shortcuts – to cut through large quantities of information and arrive at a “good enough” conclusion.
System 2. System 2 is slower and requires more effort. It is said to be rational, analytical, and deliberate. It is useful when we encounter novel situations or need to engage in close reasoning.
Biases, logical fallacies, and Prospect Theory. Kahneman explores a number of biases and logical fallacies that can cloud our judgment, particularly when we use System 1 heuristic thinking. Most relevant to this discussion are overconfidence, discussed below, and Prospect Theory.
Originally developed by Kahneman and Amos Tversky in the 1970s, Prospect Theory seeks to account for behavior that appears irrational under conventional economic models. Among the common traits described by Prospect Theory are:
Loss aversion. People tend to feel losses more keenly than equivalent gains. Paradoxically, their behavior often ends up maximizing losses while minimizing gains. Examples include selling a winning stock too quickly to lock in a gain before it can evaporate, or doubling a losing bet to avoid realizing an immediate loss (see the sketch after this list).
Framing effect. Equivalent options will receive different responses depending on whether they are framed as losses or gains. For example, an ad promising that you will “gain 50% better click-through rates” will be less effective than one warning “don’t lose 50% of your potential click-throughs.”
Probability weighting. People are poor at weighing probabilities. They tend to overweight attractive but rare events (e.g., winning the lottery) while underweighting common ones (e.g., auto accident rates).
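One way to make loss aversion concrete is the value function from Kahneman and Tversky’s later (1992) formulation of Prospect Theory. The short sketch below is illustrative only: the parameter estimates (alpha ≈ 0.88, lambda ≈ 2.25) come from their published work, but the function and variable names are our own.

```python
# A minimal sketch of the prospect-theory value function, using the
# parameter estimates reported by Tversky & Kahneman (1992).
# Names here are illustrative, not from the book.

ALPHA = 0.88   # diminishing sensitivity to both gains and losses
LAMBDA = 2.25  # losses loom roughly 2.25x larger than equal gains

def subjective_value(outcome: float) -> float:
    """Perceived value of a gain (positive) or loss (negative) in dollars."""
    if outcome >= 0:
        return outcome ** ALPHA
    return -LAMBDA * ((-outcome) ** ALPHA)

# A 50/50 coin flip for +$100 / -$100 has an expected dollar value of
# zero, but its expected *subjective* value is sharply negative --
# which is why most people refuse the bet.
expected = 0.5 * subjective_value(100) + 0.5 * subjective_value(-100)
print(f"Expected subjective value: {expected:.1f}")  # roughly -36.0
```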
According to a McKinsey article, these errors aren’t confined to the general public. They affect the ability of even the most sophisticated organizations to maximize long-term value.
The problem of overconfidence. Overconfidence is one of the primary biases tackled in Thinking, Fast and Slow.
Kahneman has said, “We’re fundamentally overconfident in the sense that we jump to conclusions — and to complete, coherent stories — to create interpretations. So we misunderstand situations, spontaneously and automatically. And that’s very difficult to control.”
While the problem is general, the effects of overconfidence are magnified among leaders, according to the Harvard Business Review (HBR). Being more experienced may cause leaders to underestimate the complexity of a problem and to overestimate their ability to solve it. Overconfidence may also cause leaders to discount the opinions and concerns of those around them.
Self-awareness and the ladder of inference. The solution to overconfidence is greater self-awareness, according to the HBR author. Becoming more self-aware can help leaders identify their own blind spots and areas for improvement.
One practical exercise to build greater self-awareness is the “ladder of inference.” It is designed to fit within Kahneman’s System 2 deliberate mode of thinking.
When faced with a problem, imagine a ladder leaning against a wall with its base in a pool of water. Starting at the bottom, we take the following steps:
Observing the data. The pool at the base of the ladder represents all of the facts and data pertaining to the problem. It may include documents and reports, institutional knowledge, and personal experience. As we observe the pool, we begin to look for familiar patterns.
Selecting the data. On the first rung of the ladder, we select the data we will pay attention to. There may be too much data to process, and some of the data may appear irrelevant or trivial. The selection process is subjective and varies among individuals.
Interpreting the data. On the second rung of the ladder, we interpret the selected data through the filters of our experiences, assumptions, and biases. We fill in gaps and create a meaningful story.
Drawing conclusions. On the top rung of the ladder, we draw conclusions about what the selected facts mean and how to respond. Our conclusions may strike us as logical and obvious, but they reflect and amplify each of the choices made at earlier steps in the process.
By slowing down and considering each of the steps in turn, we can test our thinking process and look for flaws. “The ladder supports flexibility by interrupting habits,” the HBR author writes.
Leaders can ask questions like: What data did I select and what did I ignore? What assumptions am I making? How is past experience influencing my interpretation? What else may be true in this situation?
Using the ladder of inference can help leaders become more aware of their mental habits and learn to break free of them.
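For leaders who keep a decision journal, the ladder translates naturally into a short structured checklist. The sketch below is a hypothetical illustration, not a tool prescribed by the book or the HBR article; the rung names and prompts simply restate the steps and questions above.

```python
# A hypothetical "decision journal" walk up the ladder of inference.
# The rungs and prompts restate the steps described above; the tooling
# itself is our own illustration.

LADDER_PROMPTS = [
    ("Observing the data", "What facts, reports, and experience are in the pool?"),
    ("Selecting the data", "What data did I select, and what did I ignore?"),
    ("Interpreting the data", "What assumptions am I making? How is past experience shaping my reading?"),
    ("Drawing conclusions", "What else may be true in this situation?"),
]

def climb_ladder() -> dict[str, str]:
    """Prompt for a written answer at each rung, bottom to top."""
    journal = {}
    for rung, prompt in LADDER_PROMPTS:
        journal[rung] = input(f"{rung} -- {prompt}\n> ")
    return journal

if __name__ == "__main__":
    for rung, answer in climb_ladder().items():
        print(f"{rung}: {answer}")
```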
While Thinking, Fast and Slow is mainly concerned with individual thought processes, its principles can be applied to organizations as well.
When he was asked about the prospects for improving organizational decision making, Kahneman replied, “I’m much more optimistic about organizations than individuals. Organizations can put systems in place to help them.”
Kahneman advises organizations to make human performance more consistent with the help of algorithms. In this context, he means a system of review and rating to be followed before making a decision. He explained, “Algorithms are noise free. People are not. When you put some data in front of an algorithm, you will always get the same response at the other end.”
Each algorithm will require only a modest amount of data. For a given topic or decision type, Kahneman recommends gathering a group of knowledgeable people to “make a list of five or six dimensions” to be measured before making a decision – this simple algorithm is enough for most decision making.
The decision makers then rate each “dimension” independently of the others on a simple scale (e.g., price, from 1 = low to 5 = high), repeat the same process for all remaining dimensions, and add up the scores at the end. “If you create good ranking scales on those dimensions, and give them equal weight, you will typically do just as well as with a very sophisticated statistical algorithm.”
One useful refinement is to include a final “global rating” in which individuals can use their intuitive skills to make an overall judgment on the merits of the decision.
The key is to make the global rating only after rating all of the other dimensions. “Global rating is very good – and intuition is very good – provided that you have [first] gone through the process of systematically and independently evaluating the constituents of the problem,” Kahneman explained, referring to the five or six “dimensions” above.
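As a concrete illustration, the whole procedure fits in a few lines of code. The sketch below is our own rendering of Kahneman’s description, not code from any of his work; the dimension names and the 1-to-5 scale are hypothetical placeholders for, say, a vendor-selection decision.

```python
# A minimal sketch of the equal-weight "simple algorithm" described
# above. Dimension names and ratings are hypothetical placeholders.

DIMENSIONS = ["price", "reliability", "support", "security", "roadmap"]

def score_option(ratings: dict[str, int]) -> int:
    """Sum equal-weight 1-5 ratings, one per dimension, rated independently."""
    for dim in DIMENSIONS:
        if not 1 <= ratings[dim] <= 5:
            raise ValueError(f"{dim} must be rated 1-5")
    return sum(ratings[dim] for dim in DIMENSIONS)

# Each rater scores every dimension independently of the others...
ratings = {"price": 4, "reliability": 3, "support": 5, "security": 2, "roadmap": 4}
print(f"Total score: {score_option(ratings)} / {5 * len(DIMENSIONS)}")

# ...and only after the systematic scores are recorded does the rater
# add an intuitive "global rating", per Kahneman's refinement above.
global_rating = 4  # recorded last, never first
```

Because every rater works from the same dimensions and scales, the procedure is “noise free” in Kahneman’s sense: the same inputs always produce the same total.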
By adopting systematic ways of approaching problems, leaders and organizations can avoid the pitfalls of illogical or biased decision making. If you would like to learn more about how to improve decision making in your organization in the data-driven AI age, please contact us.
Copyright © 2025 by Stephen Wullschleger. All rights reserved.