Decision-making under Uncertainty: Heuristics and Biases

Happy readers,


Today I would like to discuss with you a paper by Amos Tversky and Daniel Kahneman [*] about the heuristics people use when making judgments under uncertain conditions. In their work, the authors review the influence of representativeness, availability of information, and adjustment and anchoring on subjective decision-making processes – all concepts that are important to understand when making high-stakes decisions.


Often in clinical problems we encounter questions such as “What is the probability that patient A can be classified into class B?” – a typical classification problem. To answer such questions, people often employ the representativeness heuristic and check which symptoms of patient A are representative of patients in class B. If patient A is coughing, feels weak, and has breathing problems, one assumption could be that patient A has covid-19: the patient’s symptoms are representative of, or similar to, those of covid-19. Individuals tend to assign probabilities according to information that appears representative of a given problem even when it has little predictive validity – the unwarranted confidence this produces is what Tversky and Kahneman call the illusion of validity. This introduces several pitfalls. We tend to ignore the prior probability (base rate) of outcomes as soon as we receive information that we assume to be representative. Sample sizes also tend to be ignored: small samples deviate from the population average far more often than large ones, yet we judge them as equally reliable. Misconceptions of chance appear especially when individuals assess the probability of a sequence of events; we expect even a short sequence to carry the characteristics of the underlying random process. Regression to the mean is another factor that is often ignored: extreme results on a first assessment tend to be followed by more moderate results on repeated assessment, simply because part of the extremeness was chance. If you flip a coin 10 times and obtain heads 7 times, can you conclude that the coin is not fair? Or should you flip it another 50 times?
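Two of these pitfalls can be made concrete with a few lines of Python. The numbers below are made up purely for illustration (they are not real covid-19 statistics): a Bayes' rule calculation showing why the base rate matters, and the exact chance of seeing 7 or more heads in 10 flips of a fair coin.

```python
from math import comb

# --- Base-rate neglect (illustrative numbers, not real disease statistics) ---
# Suppose only 1% of patients actually have the disease (the base rate),
# 90% of sick patients show the symptoms, and 10% of healthy patients
# show the same symptoms for other reasons.
prior = 0.01          # P(disease)
p_sym_given_d = 0.90  # P(symptoms | disease)
p_sym_given_h = 0.10  # P(symptoms | no disease)

# Bayes' rule: the posterior is far smaller than the "representativeness"
# of the symptoms suggests, because the base rate is so low.
posterior = (p_sym_given_d * prior) / (
    p_sym_given_d * prior + p_sym_given_h * (1 - prior)
)
print(f"P(disease | symptoms) = {posterior:.3f}")  # ≈ 0.083, not 0.9

# --- The coin question: 7 or more heads in 10 flips of a FAIR coin ---
p_extreme = sum(comb(10, k) for k in range(7, 11)) / 2**10
print(f"P(>=7 heads in 10 fair flips) = {p_extreme:.3f}")  # ≈ 0.172
```

A fair coin produces 7 or more heads in 10 flips about 17% of the time, so this small sample is weak evidence of an unfair coin – flipping another 50 times is the better plan.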


Availability of information is another heuristic, in which we are influenced by the ease with which certain events come to mind. We tend to assume that events are more probable if we can easily remember instances of them. This can be driven either by the number of times we have experienced the event, or by how striking the event was (the more impressive the event, the easier it is for us to remember it). Assessments may also be biased by imaginability: if we can vividly imagine an event, we tend to judge it as more likely, even though ease of imagination does not reflect the actual probability of the event occurring. Furthermore, we may fall for an illusory correlation when assessing whether two things go together after having learned from a small sample (both artificial intelligence algorithms and people can get into trouble when learning from small datasets).
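The small-sample trap behind illusory correlation is easy to demonstrate. The sketch below (plain Python, synthetic data, all parameters chosen just for illustration) draws two completely independent variables and counts how often they nonetheless look correlated by chance:

```python
import random
import statistics

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def spurious_fraction(n_samples, n_trials, threshold=0.5, seed=0):
    """Fraction of trials in which two INDEPENDENT variables appear
    correlated (|r| > threshold) purely by chance."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        xs = [rng.gauss(0, 1) for _ in range(n_samples)]
        ys = [rng.gauss(0, 1) for _ in range(n_samples)]
        if abs(pearson_r(xs, ys)) > threshold:
            hits += 1
    return hits / n_trials

# With 5 observations, strong-looking correlations are common;
# with 1000 observations, they essentially never occur.
print(spurious_fraction(n_samples=5, n_trials=1000))
print(spurious_fraction(n_samples=1000, n_trials=200))
```

With only five data points per trial, a large fraction of the runs show |r| above 0.5 between variables that are, by construction, unrelated – exactly the situation in which both humans and models "learn" a pattern that is not there.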


Adjustment and anchoring describe how individuals make numerical estimates by starting from an initial value and adjusting it during the decision-making process. Research shows that the adjustments made from the starting point are usually insufficient, so the final estimate remains biased towards the initial anchor. This is especially true when individuals face conjunctive statistical problems, where the overall probability is the product of many individual probabilities.
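A short numerical sketch of the conjunctive case (the per-stage probability and stage count are assumptions chosen for illustration): a project succeeds only if every one of its stages succeeds, yet people tend to anchor on the high per-stage probability and adjust too little.

```python
# Conjunctive event: ALL stages must succeed for the project to succeed.
p_stage = 0.9   # probability that any single stage succeeds (assumed)
n_stages = 8    # number of independent stages (assumed)

# Intuition anchors near 0.9; the true joint probability is far lower.
p_project = p_stage ** n_stages
print(f"P(all {n_stages} stages succeed) = {p_project:.2f}")  # ≈ 0.43
```

Even though each stage looks like a near-certainty, the chained probability drops below one half – a downward adjustment from the 0.9 anchor that is much larger than most people make intuitively.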


The heuristics and biases discussed in the paper are observed not only in laymen, but also in experienced researchers. This can be attributed to humans’ limited ability to process information statistically, and to the lack of an appropriate internal code for the relevant statistical concepts. Being aware of the presented biases and heuristics may, however, alleviate their negative effects on decision-making processes. Note that while some of the presented problems may be alleviated by outsourcing analysis to an algorithm, others may very well find their way into your algorithms if you do not watch out. For a more detailed explanation, I recommend reading the original publication cited below. 🙂




[*] Tversky, Amos, and Daniel Kahneman. “Judgment under Uncertainty: Heuristics and Biases.” Science, vol. 185, no. 4157, 1974, pp. 1124–1131.