Intern Emerg Med (2015) 10:195–203 DOI 10.1007/s11739-014-1143-y

CE - COCHRANE’S CORNER

Heuristics: foundations for a novel approach to medical decision making

Nicolai Bodemer • Yaniv Hanoch • Konstantinos V. Katsikopoulos



Received: 24 July 2014 / Accepted: 11 October 2014 / Published online: 28 October 2014
© SIMI 2014

Abstract Medical decision-making is a complex process that often takes place under uncertainty, that is, when knowledge, time, and resources are limited. How can we ensure good decisions? We present research on heuristics—simple rules of thumb—and discuss how medical decision-making can benefit from these tools. We challenge the common view that heuristics are only second-best solutions by showing that they can be more accurate, faster, and easier to apply than more complex strategies. Using the example of fast-and-frugal decision trees, we illustrate how heuristics can be studied and implemented in the medical context. Finally, we suggest how a heuristic-friendly culture supports the study and application of heuristics as complementary strategies to existing decision rules.

Keywords Medical decision making · Heuristics · Rules of thumb · Decision tools · Bounded rationality

Introduction

The American College of Emergency Physicians defines emergency medicine as ''dedicated to the diagnosis and treatment of unforeseen illness or injury…which includes the initial evaluation, diagnosis, treatment, and disposition of any patient requiring expeditious medical, surgical, or psychiatric care'' [1]. By definition, emergency medicine requires decision-making under uncertainty, that is, when knowledge, time, and, often, resources are limited. Given these constraints, how can we assure ''good'' decisions? How should decision tools be designed and implemented to guide the decision maker? Heuristics make up an important category of decision tools. A heuristic is defined as ''a strategy that ignores part of the information, with the goal of making decisions more quickly, frugally, or accurately than more complex methods'' [2, p. 454]. The paper is divided into two main sections. The first section provides a general overview of what heuristics are, their underlying rationales, and why they are essential decision tools. The second section focuses on heuristics in medicine, the implications for their use, and avenues for implementation.

N. Bodemer (✉) Harding Center for Risk Literacy, Berlin, Germany e-mail: [email protected]

N. Bodemer · K. V. Katsikopoulos Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany

Y. Hanoch School of Psychology, Plymouth University, Plymouth, UK

Theoretical and methodological basis

The heuristics and biases research program

Tversky and Kahneman's [3] heuristics and biases research program highlights the chasm between rational agent models as used in neoclassical economics and actual human behavior as studied in psychology. Kahneman [4, p. 1,449] summarizes their work as ''exploring the systematic biases that separate the beliefs that people have, and the choices they make from the optimal beliefs and choices assumed in rational-agent models''. In numerous studies, they tested human decision-making against the decision-making of fully rational agents. The results document that humans often deviate from normatively optimal solutions, and fall prey to biases and cognitive errors, resulting in inferior decisions. According to Tversky and Kahneman, the reason for these biases lies in the use of heuristics, simple rules of thumb or mental shortcuts, which we use as a consequence of cognitive limitations. Heuristics have thus often been perceived as inferior to more complex optimization models that always outperform them with respect to accuracy. According to the theory of the accuracy–effort trade-off, heuristics are employed to reduce effort at the expense of accuracy [5].

The heuristics and biases program was extended into the medical realm to better understand whether health professionals' decisions are also swayed by various biases. For instance, several researchers have catalogued heuristics that doctors use that could result in wrong diagnoses and faulty treatment choices [6–9]. Many suggest the implementation of medical education programs to de-bias health professionals, by raising awareness of these biases and teaching health professionals to use alternative tools, such as statistical prediction rules that incorporate the ''most relevant information and … assign that information its proper weight—even when the weightings come from the doctors themselves'' [6, p. 104]. In sum, the study of heuristics has often highlighted their role in biased decisions. Simple, intuitive decisions cannot be, almost by definition, ''rational'', and therefore have to be substituted with more complex, statistical, sophisticated tools—or such is the claim.

An alternative research program on heuristics: ecological rationality

An alternative research program on heuristics can be found in the work of Gigerenzer and colleagues [10, 11].
Following Herbert Simon’s ideas [12], these authors argue that ‘‘models of bounded rationality describe how a judgment or decision is reached (that is, the heuristic processes or proximal mechanisms), rather than merely the outcome of the decision, and they describe the class of environments in which these heuristics will succeed or fail’’ [11, p. 4]. This view offers a more positive perspective on the utility and nature of heuristics such as their ability to make good decisions, instead of highlighting and focusing on the possible negative aspects. The program further reveals several shortcomings of the original heuristics and biases program. For example, the heuristics and biases program lacks precise definitions [13], relies on post hoc explanations as to why heuristics fail, focuses on outcome rather than the underlying cognitive process, assumes fixed norms (e.g., logic and probability theory) [14], and, finally, but very importantly, tends to ignore the role of the environment.


To overcome some of these challenges, researchers approached the study of decision strategies from the perspective of ecological rationality, which considers the adaptation of the decision maker to the environment. A decision strategy is not good or bad per se. It may perform well in some environments but poorly in others; the key question is to find out when and why a strategy is successful. Hence, before condemning heuristics as inferior decision strategies, and banishing them from the physician's arsenal, we need a more fine-grained analysis of both the underlying mechanisms of a decision strategy and the features of the environment. Let us first look at the underlying mechanisms.

Fast-and-frugal heuristics

The fast-and-frugal heuristics research program tackles the shortcomings of the heuristics and biases program, and offers an alternative vision [10, 15]. Indeed, it revises the perspective on heuristics, and argues that their possible shortcomings are in fact their strength. First, heuristics are fast: they require little time to reach a decision. Second, they are frugal: they ignore parts of the available information. Thus, instead of being bogged down by a complex and lengthy decision process, heuristics allow us to reach a fast and frugal decision. Gigerenzer and colleagues [10] identify three building blocks of heuristics: first, a search rule that defines what information is looked up; second, a stopping rule that defines when sufficient information has been acquired; and, finally, a decision rule that determines how the information is integrated to make a decision. To gain a better idea of how this mechanism operates, let us first focus on a very simple strategy: the recognition heuristic. When comparing two alternatives, this strategy relies on mere recognition.
More precisely, it suggests that if one of two alternatives is recognized, and the other is not, then the decision maker should infer that the recognized alternative has the higher value with respect to the criterion [16]. For instance, when students in the United States and Germany were asked to estimate which of two cities, Detroit or Milwaukee, had more inhabitants, only 60 % of the American students chose the correct answer, whereas 90 % of the German students did. This result can be explained with the recognition heuristic: Most German students have heard of Detroit, but not Milwaukee. Hence, they (correctly) inferred that Detroit had more inhabitants. In contrast, American students knew both cities, and could not apply the recognition heuristic, so they had to use different knowledge to make the inference. Recognition is vital for practitioners. For instance, the diagnosis of psoriasis is primarily based on recognition of skin lesions and psoriatic plaque without any further tests being required.
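As a sketch, the recognition heuristic can be written in a few lines of Python. The ''recognized'' sets below are hypothetical stand-ins for the students' knowledge in the Detroit/Milwaukee example; they are illustrative assumptions, not data.

```python
def recognition_heuristic(a, b, recognized):
    """If exactly one of two alternatives is recognized, infer that the
    recognized one has the higher criterion value; otherwise the
    heuristic does not apply and the decision maker must guess or use
    other knowledge."""
    if (a in recognized) != (b in recognized):
        return a if a in recognized else b
    return None  # both or neither recognized: heuristic not applicable

# Hypothetical recognition sets for the city-size example:
german_student = {"Detroit"}                      # has not heard of Milwaukee
american_student = {"Detroit", "Milwaukee"}       # recognizes both cities

print(recognition_heuristic("Detroit", "Milwaukee", german_student))    # Detroit
print(recognition_heuristic("Detroit", "Milwaukee", american_student))  # None
```

Note that the heuristic only yields an inference when recognition discriminates; the American students, recognizing both cities, fall outside its domain, which matches the pattern in the study.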


Another example is the tallying heuristic. Imagine a binary choice situation (e.g., whether a patient has a particular disease). Based on a set of cues (e.g., the results of medical tests), one can perform a linear regression, that is, look up all cues, weight and add them, and decide. Tallying also considers all available cues, but it ignores the weights: all cues are weighted equally. This simplification, also known as Dawes's rule, still leads to accurate decisions when compared with linear regression [17, 18]. Another example of a fast-and-frugal heuristic is take-the-best, a one-reason decision strategy. In a binary choice task with cues, this heuristic suggests ordering the cues according to their correlation with the variable of interest, and comparing the two objects on the most valid cue only. If the cue discriminates (e.g., the cue is present for one object, but not the other), search is stopped, and the object with the higher criterion value is selected. However, if the cue does not discriminate, the second most valid cue is looked up, and the procedure is repeated. The heuristic has been found to outperform tallying and linear regression in specific environments [19, 20]. An example of one-reason decision-making in a medical context is the diagnosis of herpes zoster. A diagnosis is sometimes based on a visual examination only, or, to rule out another infection (e.g., herpes simplex virus), an additional blood test to detect IgM antibodies is obtained. The heuristics presented here have in common that they all ignore cues, weights, or both, yet this does not make them poor strategies (for further examples see [2, 15, 19]). In fact, the research program on ecological rationality finds the so-called ''less-is-more effect'': heuristics can indeed be more accurate than complex tools. Complexity and accuracy follow an inverted U-shaped rather than a monotonically increasing relationship [2].
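The two heuristics just described can be sketched in Python under simple assumptions: binary cue profiles (1 = cue present, 0 = absent) that are already ordered from most to least valid, with ties resolved by guessing. The cue values are hypothetical.

```python
def tallying(profile_a, profile_b):
    """Dawes's rule: count positive cues with equal weights and pick
    the object with the higher tally; ties are left to a guess."""
    ta, tb = sum(profile_a), sum(profile_b)
    if ta == tb:
        return None  # tie: the heuristic does not discriminate
    return "A" if ta > tb else "B"

def take_the_best(profile_a, profile_b):
    """One-reason decision making: go through the cues in order of
    validity and decide on the first cue that discriminates,
    ignoring all remaining cues."""
    for ca, cb in zip(profile_a, profile_b):
        if ca != cb:
            return "A" if ca > cb else "B"
    return None  # no cue discriminates: guess

# Hypothetical cue profiles (e.g., results of three medical tests):
a = [1, 0, 1]
b = [0, 1, 1]
print(tallying(a, b))       # None (2 positive cues each)
print(take_the_best(a, b))  # A (decided on the first, most valid cue)
```

The contrast is visible in the example: tallying integrates all cues and ends in a tie, whereas take-the-best stops at the first discriminating cue and ignores the rest.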
To put it differently, whereas too simple models may fail because they consider too little information, too complex models may fail as well, as they consider too much, and eventually use irrelevant or harmful information. When and why a given heuristic performs well is a question of the environment. The fast-and-frugal heuristics program, like other programs, has not remained without challenges. For instance, the recognition heuristic as originally formulated by Gigerenzer and Goldstein (see [16]) has since led to controversies (for further information see [21]). Moreover, studies investigating the take-the-best heuristic suggest that many participants do not actually apply this strategy (e.g., [22]). Nevertheless, such results do not invalidate the program, as its claim never was that all people always use fast-and-frugal heuristics; rather, as illustrated by the metaphor of the adaptive toolbox, different tools are expected to be used in different (or even the same) environments. Multiple factors such as expertise, experience, emotions, and further situational influences may result in the application of very different strategies. The interesting question is to find out the conditions under which people use simple heuristics, and the conditions under which they use more complex strategies.

Prediction versus fitting

When evaluating decision tools, the first test usually refers to how well the tool fits the given data, that is, its explanatory power for existing observations. For instance, different models (e.g., linear regression, tallying, take-the-best) can be fitted to the data of patients with heart disease. However, a second test should go beyond fitting, to evaluate how well the strategy predicts new data. A strategy allowing doctors to classify former patients is neat, but their major concern and task is to classify future patients. A study across 20 different binary environments compares how well different strategies—among them take-the-best, tallying, and multiple regression—fit and predict data [18]. Not surprisingly, when fitting the data, multiple regression, the model with more degrees of freedom, outperforms the simple heuristics. Also not surprisingly, all models perform better in fitting than in prediction. Surprisingly, however, take-the-best and tallying outperform multiple regression in prediction. This finding was particularly strong when the number of observations available to calibrate the model was small. Another example comes from the financial domain. Nobel Prize winner Harry Markowitz's mean–variance portfolio model suggests how to invest money among different stocks. A comparison of this rather complex strategy with a simple heuristic, the 1/N heuristic, which suggests that one should distribute the resources equally over all available options, reveals an interesting result.
Not only does the simple heuristic outperform the Markowitz model, but it would take up to 500 years before sufficient data would be available for the Markowitz model to outperform the heuristic [23]. For a review of applications of heuristics across different disciplines, see [19]. Hence, a strategy that fits given data well is not automatically the model of choice, as it may lack accuracy in predicting new data. In particular, when the training set is small, that is, when only a few observations are available, complex models run the risk of overfitting, making them poor prediction instruments, as illustrated in the following.

The bias–variance dilemma

Gigerenzer and Brighton [19] used the bias–variance dilemma to explain the less-is-more effect. They decompose prediction error into three components:


Prediction error = (bias)² + variance + noise

Imagine that a true function is known, and different agents, each with a limited set of observations, try to learn this function. The deviation of the average of the functions estimated from the different samples from the underlying true function is the bias. The variance refers to the average squared difference between the individual functions and their average. Finally, noise is random and uncontrollable. If one is concerned only with reducing bias, then one is satisfied when the difference between the average function and the true underlying function is zero. However, variance can still be high, as the individual functions may differ heavily from the mean function. To illustrate the dilemma, consider a disease that affects a total of 3,000 patients in a given country. Assume 100 doctors, each with 30 patients with this disease. To predict their patients' life expectancy, they use the same linear regression model with a fixed set of cues, and calibrate its parameters on their given sample. Alternatively, assume another 100 doctors, each with 30 patients with the same disease, who use tallying instead of multiple regression (weights do not need to be calibrated, as all weights are equal). The first group may have low bias; that is, if one averages all doctors' regression models, the average may capture the underlying true function well. However, variance may be high, as each doctor calibrates a model on only 30 patients, thereby modeling random patterns of each sample. That is, if one doctor uses his or her (regression) model for another set of 30 patients, that doctor performs rather poorly. In contrast, the 100 doctors who used tallying do not calibrate their model. Although bias may be high, as they do not capture the true underlying function that well, variance is low. The challenge is to find a good balance between bias and variance, as the decrease of one comes with an increase of the other.
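The doctors example can be sketched as a small simulation in pure Python. The true function, noise level, and sample sizes below are illustrative assumptions, not values from the paper; the point of the sketch is that, over the 100 fitted models, the mean squared error of the predictions around the true value splits exactly into squared bias plus variance (observation noise enters the decomposition as a separate, irreducible term).

```python
import random

random.seed(1)
TRUE_A, TRUE_B = 3.0, 2.0           # assumed true function: y = 3 + 2x
x0 = 5.0                            # fixed cue value at which we predict
true_y0 = TRUE_A + TRUE_B * x0

def fit_ols(xs, ys):
    """Closed-form simple linear regression (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

# 100 "doctors", each calibrating the model on their own 30 noisy patients:
preds = []
for _ in range(100):
    xs = [random.uniform(0, 10) for _ in range(30)]
    ys = [TRUE_A + TRUE_B * x + random.gauss(0, 4) for x in xs]
    a, b = fit_ols(xs, ys)
    preds.append(a + b * x0)

mean_pred = sum(preds) / len(preds)
bias_sq = (mean_pred - true_y0) ** 2
variance = sum((p - mean_pred) ** 2 for p in preds) / len(preds)
mse = sum((p - true_y0) ** 2 for p in preds) / len(preds)

# Algebraic identity: mse = bias^2 + variance (up to float rounding).
print(abs(mse - (bias_sq + variance)) < 1e-9)  # True
```

Repeating the exercise with tallying-style fixed weights would drive the variance term toward zero at the cost of a larger bias term, which is the trade-off the dilemma describes.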
Hence, we need to find a compromise between too simple models (high bias) and too complex models (high variance)—in other words, heuristics. Note that one way to reduce variance is to increase the sample size, but, as already mentioned, in medicine observations are often limited.

Risk versus uncertainty

To further understand the potential advantage of heuristics, and to illustrate the less-is-more effect, it is important to keep an important distinction in mind: the difference between risk and uncertainty. Generally, risk is defined as the probability and the magnitude of harm, where harm refers to threats to humans and the things they value [24]. In that sense, risks are generally measurable; that is, outcomes and their probabilities are known (see also [25, 26]). In contrast to risk, uncertainty is not measurable: the probabilities of the outcomes, or the outcomes themselves, are unknown [26]. A more relaxed definition allows for a quantification of uncertainty in the form of confidence intervals, ranges, or expert confidence ratings [27]. This distinction is important for at least two reasons. First, we can, in theory, compute optimal solutions under risk, but in general we are not capable of computing optimal solutions under uncertainty, due to the lack of knowledge. Second, a strategy that works well under risk may perform poorly under uncertainty, and vice versa. Despite increasing efforts in evidence-based medicine to estimate probabilities and derive gold standards, many medical decisions involve uncertainty on different levels. For instance, Feufel and Bodemer [28] differentiate between uncertainty with respect to evidence and uncertainty with respect to goals. Uncertain evidence comprises data-centered uncertainty (i.e., only limited scientific evidence is available to correctly diagnose patients or select appropriate treatments) and provider-centered uncertainty (i.e., health professionals may not be familiar with the given evidence due to lack of time, resources, or interest). Uncertain goals comprise system-centered uncertainty (i.e., the sometimes conflicting interests of different stakeholders, such as patients, providers, private/public care organizations, insurance companies, and the pharmaceutical industry) and patient-centered uncertainty (i.e., if possible, diagnoses and treatments need to be carefully discussed with patients). Given these dimensions of uncertainty, complemented by the fact that most decisions have to be made under time scarcity, making good decisions seems a tough endeavor. The health care provider's toolbox needs to contain strategies that are able to deal with uncertainty. Heuristics represent such strategies.
For instance, early detection of pulmonary embolism is a major challenge in primary care [29]. The diagnosis is difficult despite the presence of decision rules such as the Wells criteria or the Geneva score, applied in combination with a D-dimer test to categorize patients. The application of these rules is time- and cost-intensive, and hence not always possible. Moreover, doctors may not be aware of the available tools. Finally, symptoms vary across patients, and are often subtle. This high burden of uncertainty makes diagnosis difficult. One solution would be to establish simple rules to guide the decision maker.

Heuristics in medicine

The first section provided a theoretical and methodological framework with which to understand and study heuristics. In the second section we describe how to apply and implement heuristics in the medical domain, and discuss


future avenues for bringing heuristics to the attention of researchers and practitioners.

A research program for studying heuristics in (emergency) medicine

Heuristics have long been studied as sources of bias, but research grounded in the concepts of ecological rationality suggests that the study of heuristics has great potential for medical decision makers. The inherent uncertainty in medical decision-making, the major challenge of predicting and classifying new patients, and the less-is-more effect illustrated by the bias–variance dilemma suggest that we need a better understanding of what heuristics exist in the medical domain, how they can be exploited to improve decision-making, and how decision makers can be taught to use them. It is important to distinguish between two approaches. First, the study of heuristics involves a descriptive aspect, which centers on the question of which heuristics are actually (implicitly or explicitly) used in a particular domain. McDonald [30] criticizes the dearth of research on heuristics in the medical domain, where they ''are poorly understood and rarely discussed'' (p. 56). Although physicians' reasoning is not errorless, we can learn by studying their adaptive toolbox, that is, the collection of cognitive strategies they actually use in their everyday routines. Understanding when and how well their strategies perform will enable us to formalize existing heuristics as a basis for deriving decision rules. In a similar vein, Groopman [31] stresses the importance of heuristics for practitioners, which he considers ''the foundation of all mature medical thinking'' (p. 27). This leads to the second dimension, the prescriptive aspect. Based on the formalization of heuristics, it is possible to thoroughly investigate them, to define the niches in which they perform well, and eventually to teach them to decision makers. We can identify heuristics that can guide health care professionals in making quick and accurate decisions.
The prescriptive part does not necessarily rely on the descriptive part: although the descriptive study of existing heuristics can inspire the design of prescriptive heuristics, the latter can also originate from data only. However, to ensure that a heuristic is intuitively designed, that is, that it builds on the cognitive capacities of the decision maker, the two parts should go hand in hand.

Evaluating heuristics

Before implementing a particular heuristic—just as with any other decision tool—a thorough evaluation is advisable. The following three criteria seem most crucial in any test of heuristics in the medical domain. Accuracy is probably the most important criterion by which to evaluate a heuristic. Particularly in medicine, where errors have far-reaching consequences, any risk of inadequate diagnoses and treatments has to be minimized. Imagine a patient who rushes to the hospital with chest pain, a possible sign of acute ischemic heart disease. Should the patient be admitted to the coronary care unit (CCU) or to a regular nursing bed? In a hospital in Michigan, about 90 % of patients with this problem were sent to the CCU, but only 25 % of all patients actually had a myocardial infarction [32]. One could argue that the doctors decided defensively, avoiding false-negative allocations (i.e., sending patients with myocardial infarction to a regular nursing bed). However, considering that resources are scarce, and that patients incorrectly assigned to the CCU occupy limited resources and burden the health care system, a good decision tool should have both high sensitivity—that is, the correct allocation of patients with myocardial infarction to the CCU—and a low false-positive rate—that is, few unnecessary allocations of patients without myocardial infarction to the CCU. Evaluating accuracy requires competitive testing: the quality of a decision tool can only be assessed and judged with respect to how well it performs in relation to other tools. As illustrated above, this competition has to include prediction as an outcome criterion, as fitting alone is not sufficient to test the performance of a given tool. A second criterion on which to evaluate decision tools is speed. A patient who suffers a myocardial infarction requires immediate treatment, and any delay puts the patient at further risk. A highly accurate tool that demands too much time fails in most emergency decisions. Moreover, as illustrated above, speed is not necessarily accompanied by less accuracy; rather, faster tools can even be more accurate. Third, applicability matters. The evaluation of decision tools must include tests on how intuitive, transparent, and user-friendly they are.
Complex tools that keep the decision maker in the dark about the underlying rationale of the decision tool are unattractive: they are difficult to memorize, error-prone in their application, and will therefore most likely be rejected. To ensure that a tool can be easily taught and adopted, its rationale should be transparent, and it should guide the decision maker through the process. Given that medical decision making often occurs under cognitive load, multi-tasking, time constraints, noise, stress, and fatigue, health professionals might greatly benefit from more intuitive tools. In the next section, we illustrate these principles with a specific family of fast-and-frugal heuristics.

Fast-and-frugal decision trees

One solution to the above-mentioned problem in the Michigan hospital was proposed by Green and Mehr [32]. As a consequence of the finding that 90 % of patients rushing to the hospital with chest pain had been sent to the CCU, they developed and compared two tools to reduce false allocations. First, they designed and tested the Heart Disease Predictive Instrument (HDPI; Fig. 1). Equipped with a chart and a pocket calculator, doctors could enter the relevant parameters of seven symptoms and compute the probabilities. Based on a predefined threshold, a decision could be made. The authors also developed and tested a second tool, a fast-and-frugal decision tree (FFT; Fig. 2, left). This tree consists of three yes/no questions (i.e., diagnostic cues) that guide the decision-making process. Each question either has an exit (i.e., ''assign patient to coronary care unit'' or ''assign patient to regular nursing bed'') or leads to the subsequent question. Only the last question has two exits with the

Fig. 1 The heart disease predictive instrument [30]. Doctors were equipped with a chart and a pocket calculator and had to enter the relevant parameters of seven symptoms to compute the probabilities, depending on whether the symptoms were present or not. Probabilities were derived from logistic regression analysis. Based on a predefined threshold, a decision was made. The tool is rather unintuitive and was adopted by few doctors due to its lack of transparency and seemingly complex implementation

Fig. 2 Examples of fast-and-frugal decision trees (FFTs). Both trees are easy to memorize, transparent, and fast to apply. Left An FFT for allocating patients with chest pain in the emergency department to either the coronary care unit or a regular nursing bed [29]. The tree is highly intuitive and relies on only three simple questions to allocate patients. A decision is reached after a maximum of these three questions, or even already after the first or second question. Right Simple Triage and Rapid Treatment (START) tree for the categorization of patients to receive either immediate or delayed care (taken from [31]). Similar to the FFT, no more than five questions are required to decide whether immediate treatment is required


Fig. 3 Comparison of the performance of the Heart Disease Predictive Instrument (HDPI), a fast-and-frugal decision tree (FFT), and physicians' intuitive strategy for coronary care unit allocation. The x-axis represents the proportion of patients without myocardial infarction who were (incorrectly) assigned to the CCU (that is, the false-positive rate, or 1 − specificity); the y-axis represents the proportion of patients with myocardial infarction who were (correctly) assigned to the CCU (that is, the true-positive rate, or sensitivity). The reason why the HDPI has several data points is that different probability thresholds can be applied, influencing the sensitivity and false-positive rate. The graph illustrates that the FFT provides the best trade-off between the proportion of patients correctly assigned (y-axis) and the proportion of patients incorrectly assigned (x-axis) to the coronary care unit. Hence, the number of false allocations is relatively low. The FFT is not only simpler, but also more efficient and effective in allocating patients (adapted from [41])

two potential outcomes. Hence, the doctor has to ask a maximum of three questions; a decision would also be possible after one or two questions only. Despite—or because of—its simplicity, the FFT proves to be a highly efficient and effective tool. Compared with the HDPI, it is more accurate, faster, and more intuitive (Fig. 3). Whereas doctors quickly adopt the FFT, they struggle with the applicability of the less intuitive HDPI. Right after the September 11, 2001 terrorist attack, doctors faced many patients, and had to decide whether a patient required immediate treatment, or whether treatment could be delayed. Given the limited time, knowledge, and resources to make a diagnosis, how did doctors make such decisions? Again, an FFT proved to be a solution. The Simple Triage and Rapid Treatment (START [33]) is a tree that has two possible outcomes, delayed versus immediate treatment (Fig. 2, right). It contains a total of five diagnostic cues that are ordered sequentially. The decision maker has to ask the first question (i.e., can the patient walk?) and if the answer is yes, treatment is delayed; if no, the subsequent question is asked. Again, it is not necessary to ask all the questions; a decision can be made after each question.
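The sequential logic of START can be sketched in Python. Since the figure is not reproduced in the text here, the cue names below follow the standard START protocol and should be read as assumptions; like the version described above, the sketch collapses the outcome to delayed versus immediate treatment.

```python
def start_triage(patient):
    """A START-like fast-and-frugal triage tree: five sequential yes/no
    cues, each offering an immediate exit. Cue names are assumptions
    based on the standard START protocol, not the paper's figure."""
    if patient["can_walk"]:
        return "delayed"            # walking wounded: treatment can wait
    if not patient["breathing"]:
        return "immediate"
    if patient["respiratory_rate"] > 30:
        return "immediate"
    if patient["capillary_refill_s"] > 2:
        return "immediate"          # poor perfusion
    if not patient["obeys_commands"]:
        return "immediate"          # altered mental status
    return "delayed"

# Hypothetical patients:
walking_wounded = {"can_walk": True, "breathing": True,
                   "respiratory_rate": 18, "capillary_refill_s": 1,
                   "obeys_commands": True}
shocked = {"can_walk": False, "breathing": True,
           "respiratory_rate": 22, "capillary_refill_s": 3,
           "obeys_commands": True}

print(start_triage(walking_wounded))  # delayed
print(start_triage(shocked))          # immediate
```

As in the description above, a decision can be reached after any question: the first patient exits at the very first cue, the second at the fourth.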

Generally, an FFT is defined as ''a decision tree that has m + 1 exits, with one exit for each of the first m − 1 cues and two exits for the last cue'' [34, p. 320]. These classification trees have been studied in different domains, such as medicine, law, engineering, and the military [20]. More complex models such as multiple regression outperform FFTs in data fitting, but when the available data are scarce, FFTs perform better in prediction across 30 environments [35]. FFTs represent one family of heuristics that are promising in the medical context, as they have the advantages mentioned above: they are very accurate in prediction, in particular when available data are limited; they are fast, because they require only a few diagnostic cues; and they are intuitive, as they can be memorized easily and applied with little effort. However, despite their simplicity, it is important to note that the construction of FFTs may not be trivial. FFTs can originate from at least two methodologies. A descriptive analysis of physicians' actual decision making can reveal the relevant diagnostic cues, their order, and the respective exits. FFTs can also be designed based on data. For instance, Luan, Schooler, and Gigerenzer use signal-detection theory to design FFTs, test different trees based on subjective decision criteria, and investigate how the order and properties of cues influence performance [34].
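The m + 1-exit structure lends itself to a small generic implementation. In the sketch below, each cue is a question paired with an optional exit for each answer (None means ''continue to the next cue''); the three example cues paraphrase the coronary care tree discussed above, and their exact wording is an assumption, not a quotation from Green and Mehr.

```python
def fft_classify(cues, patient):
    """Walk a fast-and-frugal tree: m cues, m + 1 exits. Each of the
    first m - 1 cues has one exit; the last cue must exit either way."""
    for question, exit_if_yes, exit_if_no in cues:
        exit_ = exit_if_yes if question(patient) else exit_if_no
        if exit_ is not None:
            return exit_
    raise ValueError("the last cue must have two exits")

# Paraphrase of the three-cue coronary care tree (cue wording assumed):
ccu_tree = [
    (lambda p: p["st_segment_change"], "CCU", None),
    (lambda p: p["chest_pain_chief_complaint"], None, "nursing bed"),
    (lambda p: p["any_other_factor"], "CCU", "nursing bed"),
]

patient = {"st_segment_change": False,
           "chest_pain_chief_complaint": True,
           "any_other_factor": False}
print(fft_classify(ccu_tree, patient))  # nursing bed
```

The data structure makes the frugality explicit: search stops at the first cue whose answer has an exit attached, so later cues, and their weights, are simply never consulted.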

A heuristic-friendly culture

Although hardly any doctor would deny the existence of heuristics, admitting their use is still a taboo. Why is this? Heuristics are still perceived as simplified, intuitive, and, hence, nonscientific strategies. Competent decision makers—so the neoclassical economic model assumes—rely on sophisticated models, demonstrating their ability to use complex tools to solve complex problems. A possible error following the use of a heuristic could reveal incompetence, and leave the decision maker open to losing face. This also implies possible legal issues, as an error following a heuristic might be perceived as more severe and avoidable than an error resulting from an ''optimization'' model. Moreover, despite the uncertainty involved in most (medical) decisions, a common way to cope with uncertainty is to neglect it [36]. Maintaining an illusion of certainty, a world in which solutions are perfectly tractable with complex algorithms, makes the use of heuristics appear unneeded or unwanted.

A heuristic-friendly culture understands that heuristics can serve as powerful decision tools. Based on the above-mentioned evaluation criteria—accuracy, speed, and applicability—heuristics should be seriously considered as candidates for inclusion in the physician's decision toolbox. As McDonald phrases it, there is no shame in admitting the use of heuristics [30]. In fact, one characteristic of expertise is the ability to have good intuition. Experts are able to extract relevant information and ignore irrelevant information, a process that often occurs automatically. Hence, an overt discussion of heuristics would open new avenues for studying and improving medical decision making. Many theories and models of decision making assume that complexity must be solved with complexity (for a challenge to this view see [15, 37]). Research along these lines seeks to identify the limitations of heuristics and to substitute more complex decision rules in their stead.
In contrast, we propose the descriptive study of heuristics: examining their pros and cons, and formalizing intuitive decision tools in order to equip health professionals with smart heuristics that allow fast and accurate decisions.

Heuristics can also be used to educate medical students and laypeople. For example, medical education should not merely try to de-bias medical students, but should teach them the merits and advantages, along with the disadvantages, of using heuristics. Heuristics are easy to acquire and apply. Particularly in highly demanding
situations, when experience is still limited, medical students can greatly benefit from incorporating them in their intuitive judgments. For instance, researchers have designed and evaluated an FFT to screen patients for depression [38].

Heuristics come with another advantage. A study in nine European countries demonstrates that few laypeople are able to identify the core symptoms of heart attack and stroke, or know what action should be taken (i.e., calling an ambulance) [39]. Simple heuristics based on one-reason decision-making or FFTs can easily be taught to laypeople, preparing them to respond adequately in emergency situations. Patients, too, might benefit from heuristics: they often have difficulty complying with treatment recommendations, and simple, ecologically rational tools tailored to specific patient groups could overcome this shortcoming and facilitate patient involvement in their own health decisions.

Last but not least, medical decision-making would benefit from a more transparent and open error culture. No decision rule is perfect, and given the high uncertainty in medical decisions, errors are unavoidable. Yet, to reduce errors in the future, we need to learn from past ones. This, in turn, requires not keeping quiet but discussing and studying the sources of such errors so as to identify them in the future. In fact, tolerance of, and even invitation to, error has been shown to lead to better performance in a number of systems [40]. Rather than reducing trust in doctors, disclosing errors could have the opposite effect, as it acknowledges the problems and limitations inherent in medical decision-making. This could form the basis for a formal study of the decision rules being used and possible improvements for the future.

A heuristic-friendly culture aims at overtly discussing, promoting, and funding research on heuristics and their application in the medical context.
This includes a common understanding and conceptualization of what heuristics are, how they operate, and which methodological tools we need to develop in order to study and critically evaluate them. Formal rules, often neglected by practitioners because of their lack of transparency, can be translated into clear and handy decision tools that not only facilitate applicability but also convey the underlying rationale of the tool and help practitioners reflect on the tools they use. Rather than dichotomizing between "rational" and "irrational" tools, an ecological perspective allows one to adapt sensitively and adequately to the challenges posed by the respective decision context.
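To make the structure of such decision tools concrete, a fast-and-frugal tree can be written down as an ordered list of yes/no cues, each offering one immediate exit decision; only if no exit fires does the next cue get asked. The following Python sketch is purely illustrative: the cue names, their ordering, and the decisions are hypothetical stand-ins, not the validated trees from the studies cited in this paper.

```python
# Minimal sketch of a fast-and-frugal tree (FFT).
# An FFT is an ordered list of binary cues; each cue has exactly one
# "exit" (an immediate decision) and one branch that defers to the
# next cue. The last cue effectively has two exits via the default.

def fft_classify(patient, tree, default):
    """Walk the tree; exit at the first cue whose answer matches its exit side."""
    for cue, exit_answer, decision in tree:
        if patient[cue] == exit_answer:
            return decision  # one-reason decision: stop at the first matching cue
    return default

# Hypothetical triage-style tree: (cue, answer-that-exits, decision).
TREE = [
    ("st_segment_change", True,  "coronary care unit"),
    ("chest_pain",        False, "regular nursing bed"),
    ("other_risk_factor", True,  "coronary care unit"),
]

patient = {"st_segment_change": False, "chest_pain": True, "other_risk_factor": False}
print(fft_classify(patient, TREE, default="regular nursing bed"))
# -> regular nursing bed
```

The frugality is visible in the control flow: classification can stop after the very first question, which is what makes such trees fast to apply and easy to teach.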

Conclusion

In this paper, we discussed the role of heuristics in medical decision-making and outlined a research program that considers heuristics to be accurate, fast, and easy-to-apply tools.
This holds particularly true for uncertain situations, such as emergency medicine, where knowledge, time, and resources are limited. It is important to note, however, that arguing for heuristics is not an argument against more complex, statistical tools. As the concept of ecological rationality underlines, the quality of any strategy depends on the environment. Hence, heuristics do not necessarily substitute for more complex tools, but complement them. Ideally, the toolbox contains as many tools as necessary to successfully master specific situations and challenges.

Acknowledgements We thank Anita Todd for editing the paper. We thank Giovanni Casazza and Giorgio Costantino for providing valuable feedback throughout the writing of this manuscript.

Conflict of interest

None.

References

1. American College of Emergency Physicians (2014) Definition of emergency medicine. http://www.acep.org/content.aspx?id=29164
2. Gigerenzer G, Gaissmaier W (2011) Heuristic decision making. Annu Rev Psychol 62:451–482. doi:10.1146/annurev-psych-120709-145346
3. Tversky A, Kahneman D (1974) Judgment under uncertainty: heuristics and biases. Science 185:1124–1131. doi:10.1126/science.185.4157.1124
4. Kahneman D (2003) Maps of bounded rationality: psychology for behavioral economics. Am Econ Rev 93:1449–1475. doi:10.1257/000282803322655392
5. Payne JW, Bettman JR, Johnson EJ (1993) The adaptive decision maker. Cambridge University Press, New York
6. Bornstein BH, Emler AC (2001) Rationality in medical decision making: a review of the literature on doctors' decision-making biases. J Eval Clin Pract 7:97–107
7. Elstein AS (1999) Heuristics and biases: selected errors in clinical reasoning. Acad Med 74:791–794
8. Hall KH (2002) Reviewing intuitive decision-making and uncertainty: the implications for medical education. Med Educ 36:216–224
9. Klein JG (2005) Five pitfalls in decisions about diagnosis and prescribing. BMJ 330:781–784
10. Gigerenzer G, Todd PM, ABC Research Group (1999) Simple heuristics that make us smart. Oxford University Press, New York
11. Gigerenzer G, Selten R (2001) Bounded rationality: the adaptive toolbox. MIT Press, Cambridge
12. Simon HA (1956) Rational choice and the structure of the environment. Psychol Rev 63:129–138. doi:10.1037/h0042769
13. Hertwig R, Pachur T, Kurzenhäuser S (2005) Judgments of risk frequencies: tests of possible cognitive mechanisms. J Exp Psychol Learn Mem Cogn 31:621–642
14. Gigerenzer G, Hug K (1992) Domain-specific reasoning: social contracts, cheating, and perspective change. Cognition 43:127–171
15. Gigerenzer G, Hertwig R, Pachur T (eds) (2011) Heuristics: the foundations of adaptive behavior. Oxford University Press, New York
16. Goldstein DG, Gigerenzer G (2002) Models of ecological rationality: the recognition heuristic. Psychol Rev 109:75–90
17. Dawes RM (1979) The robust beauty of improper linear models in decision making. Am Psychol 34:571–582
18. Czerlinski J, Gigerenzer G, Goldstein DG (1999) How good are simple heuristics? In: Gigerenzer G, Todd PM, ABC Research Group (eds) Simple heuristics that make us smart. Oxford University Press, New York, pp 97–118
19. Gigerenzer G, Brighton H (2009) Homo heuristicus: why biased minds make better inferences. Top Cogn Sci 1:107–143
20. Katsikopoulos KV (2011) Psychological heuristics for making inferences: definition, performance, and the emerging theory and practice. Decis Anal 8:10–29
21. Marewski J, Pohl R, Vitouch O (2010) Recognition-based judgments and decisions: introduction to the special issue (vol 1). Judgm Decis Mak 5:207–215
22. Newell BR, Shanks DR (2004) On the role of recognition in decision making. J Exp Psychol 30:923–935
23. DeMiguel V, Garlappi L, Uppal R (2011) Optimal versus naive diversification: how inefficient is the 1/N portfolio strategy? In: Gigerenzer G, Hertwig R, Pachur T (eds) Heuristics: the foundations of adaptive behavior. Oxford University Press, New York, pp 642–668
24. Hohenemser C, Kates RW, Slovic P (1985) A causal taxonomy. In: Kates RW, Hohenemser C, Kasperson JX (eds) Perilous progress: managing the hazards of technology. Westview Press, Boulder, pp 67–89
25. Knight F (1921) Risk, uncertainty, and profit. Houghton Mifflin, Boston
26. Meder B, Le Lec F, Osman M (2013) Decision making in uncertain times: what can cognitive and decision sciences say about or learn from economic crises? Trends Cogn Sci 17:257–260
27. Politi MC, Han PKJ, Col NF (2007) Communicating the uncertainty of harms and benefits of medical interventions. Med Decis Mak 27:681–695. doi:10.1177/0272989X07307270
28. Feufel MA, Bodemer N (2014) Nudging, social marketing, empowerment: when to use which to improve health decisions? Manuscript under review
29. Lawrence L (2013) Decision-making rules for diagnosing PE may save lives. Retrieved July 20, 2014. http://www.acpinternist.org/archives/2013/03/pulmonology.htm
30. McDonald CJ (1996) Medical heuristics: the silent adjudicators of clinical practice. Ann Intern Med 124:56–62
31. Groopman J (2007) How doctors think. Houghton Mifflin, Boston
32. Green L, Mehr DR (1997) What alters physicians' decisions to admit to the coronary care unit? J Fam Pract 45:219–226
33. Super G (1984) START: a triage training module. Hoag Memorial Hospital Presbyterian, Newport Beach
34. Luan S, Schooler L, Gigerenzer G (2011) A signal detection analysis of fast-and-frugal trees. Psychol Rev 118:316–338
35. Martignon L, Katsikopoulos KV, Woike J (2008) Categorization with limited resources: a family of simple heuristics. J Math Psychol 52:352–361
36. Gigerenzer G (2014) Risk savvy: how to make good decisions. Viking, New York
37. Hertwig R, Todd PM (2003) More is not always better: the benefits of cognitive limits. In: Hardman D, Macchi L (eds) Thinking: psychological perspectives on reasoning, judgment and decision making. Wiley, Chichester, pp 213–232
38. Jenny MA, Pachur T, Williams SL, Becker E, Margraf J (2013) Simple rules for detecting depression. J Appl Res Mem Cogn 2:149–157
39. Mata J, Frank R, Gigerenzer G (2014) Symptom recognition of heart attack and stroke in nine European countries: a representative study. Health Expect 17:376–387
40. Clausing DP, Katsikopoulos KV (2008) Rationality in systems engineering: beyond calculation or political action. Syst Eng 11:309–328
41. Marewski J, Gigerenzer G (2012) Heuristic decision making in medicine. Dialogues Clin Neurosci 14:77–89
