Representation of Preferences in Decision-Support Systems
Brad R. Farr and Ross D. Shachter*

Section on Medical Informatics, Stanford University, and *Department of Engineering-Economic Systems, Stanford University

Abstract

The recommendations of computer-based decision-support systems depend on the preferences of an expert on which the model is based. Often, these preferences are represented only implicitly, rather than explicitly, in the system. Decision-theoretic preference models that explicitly represent the preferences of the decision maker provide numerous advantages for decision-support systems. In this paper, we describe these advantages. The creation and refinement of decision-theoretic preference models, however, remains a difficult task. We describe an accurate and efficient method for determining the preferences of domain experts and for refining the model that captures those preferences. In this preference-assessment method, we simulate decisions common in the expert's area. We then infer the preferences of the expert from the choices that she makes on the simulated decisions, and use the preference information to refine the model automatically.

Introduction

All decisions depend on the values, or preferences, of the decision maker. The recommendations of computer-based decision-support systems also depend on preferences, even when the preferences are not represented explicitly in the system. Researchers in decision theory have developed methods, such as multiattribute value models, for representing and manipulating preferences. Using decision-theoretic techniques to represent preference information in computer-based systems provides a number of advantages. Nonetheless, the determination of the decision maker's preferences, and the creation and refinement of the preference model, are difficult tasks. We have developed a method for assessing preferences and refining preference models that is accurate and efficient. We use a computer-based assessment tool that simulates decision problems in the domain of interest, and infer the preference information from the choices of the decision maker.

Background

A computer-based system that makes recommendations for action must have a method for comparing the available alternatives. Such comparison involves predicting the outcomes, or results, of the option under consideration, and assessing the desirability, or value, of the outcome. Although the values associated with the outcomes are pivotal in determining the course of action, these values are often represented only implicitly in computer-based decision-support systems.


Researchers have identified several problems caused by the lack of an explicit representation of the value of decision outcomes [10; 12; 13]. For example, the outcomes and their associated values are often context dependent, and may vary substantially. Because the optimal decision depends on the possible outcomes, and on the values associated with those outcomes, this information is indispensable to the decision-making process. Researchers have proposed decision-support systems with decision-theoretic preference models as a solution to the problems caused by the lack of explicit information about values [5; 11].

In this paper, we describe a preference-assessment method suitable for refining the preference model of a decision-support system. The method is broadly applicable to many decision problems, although we developed it initially in the domain of ventilator therapy. We use the ventilator-therapy example to describe the development and use of the assessment method.

VentPlan is a computer-based decision-support system that makes recommendations for settings of a mechanical ventilator [17; 19]. We chose a multiattribute value model to represent treatment preferences in the system [2]. The value model is based on the primary goals of ventilator therapy, and contains objectives for these basic goals. The model includes objectives for oxygenation, oxygen toxicity, barotrauma, and ventilation. Each objective has one or more attributes that provide a measure of achievement for the objective. Each attribute has an associated individual attribute value function.


Figure 1. Value functions for arterial pO2 and FiO2. Value increases with increasing pO2, but as pO2 increases, the value increase per unit of pO2 decreases. Value decreases as FiO2 increases, and the relationship between value and FiO2 is more linear than is the relationship between value and pO2. The optimal ventilator setting represents a tradeoff between the attributes, and depends on their relative importance.

For example, the value of the oxygen-toxicity objective is a function of the fraction of inspired oxygen (FiO2). The values of the individual objectives are weighted according to their relative importance, then are combined to yield the overall value of the ventilator settings and the predicted blood-gas measurements.

Physicians generally agree on the basic principles of ventilator management: provision of adequate oxygenation is important, and iatrogenic side effects should be avoided. Because the value model is based on the fundamental principles of ventilator management, we understand what the basic behavior of the attribute value functions should be. For example, higher arterial partial pressures of oxygen (pO2) are preferred to lower pO2 levels, but as the pO2 level increases, the relative increase in value per unit of pO2 begins to decrease (in economic terms, there is decreasing marginal return). This behavior indicates that the value function for pO2 level can be approximated by an exponential curve. Figure 1 shows the relationship between value and pO2 graphically. Although the general behavior of a value model that captures these basic characteristics is qualitatively correct, if the system is to make clinically useful recommendations, the value model must include more detailed preference information.
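To make this structure concrete, the sketch below (in Python) implements a weighted additive value model with an exponential single-attribute value function for pO2 and a roughly linear penalty for FiO2. The weights, the curvature parameter rho, and the function names are illustrative assumptions, not the parameters of the VentPlan value model.

```python
import math

def value_po2(po2, rho=30.0):
    """Single-attribute value for arterial pO2 (mmHg): increasing, with
    decreasing marginal return, approximated by an exponential curve.
    The curvature parameter rho is a placeholder value."""
    return 1.0 - math.exp(-po2 / rho)

def value_fio2(fio2):
    """Single-attribute value for FiO2 (fraction of inspired oxygen, 0.21-1.0):
    decreasing, roughly linear, reflecting the oxygen-toxicity objective."""
    return 1.0 - fio2

def overall_value(po2, fio2, w_oxygenation=0.7, w_toxicity=0.3):
    """Weighted additive combination of the individual attribute values.
    The weights are illustrative; assessing them is the difficult step."""
    return w_oxygenation * value_po2(po2) + w_toxicity * value_fio2(fio2)

# A setting that raises pO2 but requires a higher FiO2 trades one objective
# against the other; the weights determine which setting scores higher.
print(overall_value(po2=65.0, fio2=0.30))
print(overall_value(po2=90.0, fio2=0.60))
```

If the independence assumptions behind the additive form do not hold, a multiplicative or other combination form can be substituted without changing the rest of the sketch.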

Obtaining the information necessary to refine the value model through direct assessment is difficult, if not impossible. Direct assessment methods, such as lottery-type questions with levels of pO2 as outcomes, are not questions that most physicians can answer easily. Moreover, we found that the information obtained from physicians' answers to such questions often is contradictory and is not an accurate representation of the physicians' preferences. Direct assessment techniques are problematic not only for determining the shapes of the individual attribute value functions, but also for determining the weights of the attributes. Although physicians agree that maintaining adequate oxygenation is more important than is avoiding oxygen toxicity, they are not comfortable assigning the numerical weights that specify how much more important the first goal is. If pressed to make an estimate, they can provide a number, but the accuracy, and thus the usefulness, of such an estimate is relatively low. Direct assessment questions can be constructed such that the responses to the questions specify the weights of the attributes. In general, however, this type of question is difficult for a clinician to answer, because the cases are not necessarily consistent physiologically, and may represent situations that would never occur in clinical practice. The realism of the assessment questions is important for the success of the assessment procedure [24, page 311; 6, page 196]. To be realistic to a clinician, the question must be consistent physiologically and plausible clinically.

Preference-assessment difficulties are not unique to ventilator management, or to medical problems in general. The difficulty of determining a decision maker's preferences is a significant limitation in the use of formal decision-analysis methods [7; 9; 23].

The task of assessing preferences from human decision makers is complicated by the fact that the type of questions asked can have a marked effect on the information received. Researchers have identified systematic biases that arise as a result of the heuristics individuals use when making intuitive judgments involving uncertainty [20]. The shortcomings and inconsistencies in human judgment have been well documented [3].

Systematic biases affect human judgment concerning not only the likelihood of uncertain events, but also the desirability of events and outcomes [22]. One significant bias occurs in the framing of the question: for example, describing the question in terms of losses or gains.

1019

Questions that are objectively equivalent will elicit different responses depending on how the outcomes are framed. A medical example of the framing effect involves the preferences of patients for cancer treatments. Patients were asked to choose between radiation and surgical therapy, given statistical information about the outcomes of the two procedures. The preferences for the two treatments changed depending on whether the treatments were described in terms of percent survival or percent mortality [16].

The shift in preference brought about by problem formulation is not limited to choices involving uncertainty [4]. When evaluating compound or sequential outcomes, individuals may view the parts of the outcome from the same or different reference points, depending on the framing of the outcome. The reference point for comparison determines whether an outcome component will be viewed as a gain, as a loss, or as neutral. Because individuals generally weigh losses more heavily than they do gains, the overall evaluation of a compound outcome will depend on the frame of reference for the components [21].

Because preference assessment is a difficult, time-consuming process, investigators have attempted to automate the process with computer-based tools. Such tools may fit standard utility models using the results of lotteries whose outcomes are continuous scalar quantities, such as money [18]; determine the parameters of typical multiattribute utility models, based on indifference comparisons between multiattributed outcomes [6]; detect automatically and point out inconsistencies in user responses [23]; or interactively refine additive value models [8]. Although these computer-based tools can simplify and automate aspects of the preference-assessment process, they still ask hypothetical, abstract questions that may be unrealistic and difficult to answer.

Design Considerations

We shall now describe the simulated-decision preference-assessment method in general terms. The assessment method comprises three phases: construction of the basic value model, simulation of cases, and adjustment of parameters. In the initial phase, the analyst must construct a preference model that includes the basic elements of the finished model, and that captures the basic qualitative characteristics of the decision problem. The analyst must identify the objectives of the decision problem, select attributes that provide a measure of achievement with respect to the objectives, and specify the ranges of the attributes.

Construction of this initial preference-model framework is required in traditional preference-assessment methods as well, and the techniques used for simplifying that process are applicable here. For example, if the analyst can identify independence relationships among the attributes, then she can use simpler models for combining the attributes. The analyst must capture the basic behavior of the preference model, and therefore should determine, for example, whether there exists a monotonic relationship between an attribute and its value, and whether the attribute-value relationship is linear, exponential, or of some other form. The goal of the initial phase is to create a parameterized preference model that can capture the preferences qualitatively.

In the second phase, the analyst creates a set of simulated decision problems in the domain of interest, and determines the response of the expert to each of the simulated cases. The decision-making behavior of the expert on simulated cases provides the more detailed information necessary to refine the model.

The analyst must select cases that represent adequately the range of possibilities that the system is intended to handle. The simulated cases should cover the range of outcomes of interest, as well as the range of options. In other words, the cases should be selected such that the decision options that the expert chooses as optimal cover the intended range of the decision-support system. If there are indices of the decision problem that characterize this range, these indices can be used to generate a matrix of cases.

The analyst constructs a computer-based assessment tool that simulates the decision task and determines the outcomes of the available decision alternatives. The analyst uses this assessment tool to present the representative cases to the expert. The assessment tool shows the expert not only the description of the case, but also the outcome that will result from any of the decision options he examines. In this manner, the assessment tool allows the expert to explore the space of possible options and outcomes for each case. The expert selects the optimal decision, based on a comparison of the possible outcomes in the scenario. The assessment tool presents each case in turn to the expert, and records the decision option that he selects as optimal.

In the third phase, based on the simulated decisions of the expert, the analyst refines the model by adjusting the parameters of the model. The goal of the parameter adjustment is to minimize the difference between the recommendations made by the model and the recommendations made by the expert. The analyst must select a meaningful metric for measuring the difference between these two sets of recommendations. The parameter-adjustment process generally is too complicated for the analyst to perform manually. The analyst can use nonlinear-programming techniques to perform an optimization on the model parameters [15].
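The sketch below illustrates this third phase under simplifying assumptions: a small, invented library of simulated cases, the two-parameter value model sketched earlier, and a regret-style metric, with SciPy's derivative-free Nelder-Mead search standing in for the nonlinear-programming techniques cited above. The case data, parameter names, and metric are illustrative, not those used for VentPlan.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical library of simulated cases. Each case lists the predicted
# outcomes (pO2, FiO2) of the candidate ventilator settings the expert
# examined, and the index of the setting the expert selected as optimal.
cases = [
    {"outcomes": np.array([[65.0, 0.30], [90.0, 0.60], [55.0, 0.21]]), "choice": 0},
    {"outcomes": np.array([[70.0, 0.35], [95.0, 0.70]]), "choice": 0},
    {"outcomes": np.array([[60.0, 0.40], [85.0, 0.50]]), "choice": 1},
]

def model_value(outcome, params):
    """Two-parameter value model: weight on oxygenation and curvature of the
    pO2 value function (both placeholders)."""
    w, rho = params
    po2, fio2 = outcome
    return w * (1.0 - np.exp(-po2 / rho)) + (1.0 - w) * (1.0 - fio2)

def regret(params):
    """Metric to minimize: across cases, how far (in value) the expert's
    chosen option falls below the model's best-scoring option. Zero means
    the model would recommend exactly what the expert chose in every case."""
    total = 0.0
    for case in cases:
        scores = np.array([model_value(o, params) for o in case["outcomes"]])
        total += (scores.max() - scores[case["choice"]]) ** 2
    return total

# Derivative-free search over the parameters; any nonlinear-programming
# method could be substituted here.
result = minimize(regret, x0=[0.5, 30.0], method="Nelder-Mead")
print("fitted parameters:", result.x, "residual regret:", regret(result.x))
```

The metric and the optimization technique are choices the analyst makes; a count of disagreements or a distance between recommended settings could replace the regret term.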


The product of this process is a parameterized preference model, which has components that correspond to the important aspects of the problem, and which reproduces the preferences of the expert.

System Description

We shall now describe a preference-assessment tool that uses the simulated-decision assessment method. We used the tool to develop the preference model for the VentPlan system. The tool simulates a decision problem in the domain of ventilator therapy, and the results of the available therapies. The physician then specifies the optimal ventilator treatment, based on the outcomes predicted by the simulation. This process is repeated, with patient scenarios that cover the range of interest. The simulated decisions then provide the basis for value-model adjustment.

The main screen of the assessment tool is shown in Figure 2. The assessment tool shows the ventilator settings and physiological measurements for a simulated patient. By using the simulated control buttons, the physician can change any or all of the four basic ventilator settings: FiO2, positive end-expiratory pressure (PEEP), tidal volume (TV), and respiratory rate (RR). When the physician makes any changes to the proposed settings, the tool shows her the results of those changes.

Figure 2. The preference-assessment tool. The box in the upper left shows the initial ventilator settings (labeled "Current Settings") and five indicators of the physiology (labeled "Current Patient State") of the simulated patient. In the lower left is a brief text description of the patient. In the center is a set of buttons that change the simulated ventilator settings. The box in the upper right shows changes in the ventilator settings (labeled "Proposed Settings") and changes in the physiology (labeled "Predicted Patient State"). By selecting the "Increase FiO2" button in the center, the user has increased the FiO2 in the simulation model from 25 to 35 percent. The pO2 level displayed under "Predicted Patient State" has changed from 66 to 83. The button in the lower right (labeled "Proposed Settings are optimal") advances the simulation to the next patient case.

A mathematical model of the simulated patient's physiology predicts the steady-state result of ventilator changes; these changes are shown on the screen instantly as the predicted patient state. The physician can make any changes to the proposed settings; for example, she can examine the improvement in pO2 levels for various levels of FiO2. This instant feedback allows the physician to explore the potential treatment outcomes, and to determine the simulated patient's response to therapy. The physician adjusts the settings until she believes that the best possible settings for the simulated patient have been reached. She then indicates that the ventilator settings are optimal by selecting the "Proposed Settings are optimal" button. The physician is then presented with another simulated patient, with different pathophysiology, for whom the optimal settings may be different.

This assessment process is repeated, and simulated cases that cover a range of possible pathophysiology are presented. To create this range of pathophysiology, the system changes the parameters of the physiological model that characterize the simulated patient's physiology and response to therapy.

We have also created a preference-assessment tool to determine preferences for a simplified aminoglycoside dosing problem. Figure 3 shows a sample screen from the assessment tool. The assessment tool numerically displays the drug dose and the dose interval, and the predicted peak and trough concentrations. The drug concentrations are also shown graphically for a 48-hour period. When the expert selects the appropriate button on the assessment tool to adjust the dose or the dose interval, the pharmacokinetic model calculates the new drug levels that correspond to the dose and dose interval selected. The assessment tool immediately displays the new peak and trough levels, and updates the graph to reflect the new drug levels.
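The paper does not specify the form of the pharmacokinetic model. The sketch below assumes a one-compartment model with repeated intravenous bolus dosing at steady state, which is enough to show how a candidate dose and interval map to the peak and trough levels that the tool displays; the patient parameters are invented.

```python
import math

def steady_state_levels(dose_mg, interval_h, volume_l, clearance_l_per_h):
    """Steady-state peak and trough concentrations (mg/L) for repeated IV
    bolus dosing in a one-compartment model; a simplified stand-in for the
    tool's pharmacokinetic model."""
    k = clearance_l_per_h / volume_l                 # elimination rate constant (1/h)
    peak = (dose_mg / volume_l) / (1.0 - math.exp(-k * interval_h))
    trough = peak * math.exp(-k * interval_h)
    return peak, trough

# Illustrative simulated patient: 20 L volume of distribution, 5 L/h clearance.
peak, trough = steady_state_levels(dose_mg=170.0, interval_h=12.0,
                                   volume_l=20.0, clearance_l_per_h=5.0)
print(f"peak {peak:.1f} mg/L, trough {trough:.2f} mg/L")

# Changing the simulated patient's clearance or volume of distribution changes
# the levels produced by the same regimen, which is how the tool generates
# physiologically distinct cases.
print(steady_state_levels(dose_mg=170.0, interval_h=12.0,
                          volume_l=25.0, clearance_l_per_h=3.0))
```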

To collect a library of representative drug-dosing regimens, the assessment tool first presents the patient description, the initial dosing regimen, and the predicted drug levels to the expert. The expert adjusts the dose and dose interval until he believes that the best possible dosing regimen has been reached; he then selects the button to indicate that the dose and dose interval are optimal. The assessment tool then presents a new patient case to the expert. In the new case, the volume of distribution or the renal clearance, or both, of the simulated patient are different. Changing the volume of distribution or clearance will change the drug levels that result from a particular dosing regimen. The new patient, even if the clinical scenario remains unchanged, will therefore require a different dosing regimen. The assessment tool repeats this process to create a library of physiologically distinct cases with their associated optimal dosing regimens. This library provides the basis for the optimization of the parameters of the utility model.

Figure 3. Sample screen from the aminoglycoside preference-assessment tool. On the left of the upper panel are the dose and dose interval, and the resulting peak and trough levels. On the right are buttons to adjust the dose and interval. At the bottom of the upper panel are buttons to show the patient description, and to indicate that the dosing regimen is optimal. The lower panel displays graphically the drug concentration over a 48-hour period.

Discussion

The simulated-decision preference-assessment method is appropriate for determining preferences for decision-support systems that have (at least) the following two components: an environmental (or process) model that predicts the outcomes of the decision alternatives, and a preference model that calculates the relative desirability of the outcomes. In the VentPlan example, the process model is a differential-equation model of respiratory physiology, and the preference model is a multiattribute value model. We shall now describe the advantages of a preference model that is separate from the process model, and the advantages of using the simulated-decision method to refine the preference model.

The recommendations of the system are sensitive to, and respond appropriately to, environmental changes. The process model predicts the outcomes of the decision alternatives; as the inputs and parameters of the process model change, the predicted outcomes will change accordingly. Because the preference model uses the output of the process model, the system recommendations change automatically with changes in the environment. These environmental changes do not require changes in the preference model. For example, in a system that recommends aminoglycoside dosages, a change in the clearance of the drug will cause a change in the predicted blood levels of the drug, and a corresponding change in the recommended dose. Note that the new recommended dose may result in blood levels that are different from the previous levels. The system does not simply calculate the new dose to maintain the same blood levels of the drug (the pharmacokinetic model alone could perform this calculation); instead, it calculates the optimal blood levels (and corresponding doses) for the new situation.

Separate preference and process models can also facilitate system development, because system developers can create and test the process model independently of the preference model. If there is significant variability in the preference model, system developers can proceed with the creation of the system without committing the system to one particular set of preferences. For example, medical experts may disagree on the most appropriate ventilator settings for a given situation, but may agree completely on the underlying physiological principles. If the physiological model is separate from the preference model, system developers can develop and objectively test the physiological model without committing to the treatment opinions of a particular expert. The developers can then use the physiological model with any of the potential preference models in the finished system.
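As a sketch of this separation, the snippet below composes a hypothetical process model with a hypothetical preference model and recommends the candidate regimen whose predicted outcome is valued most; when an environmental parameter such as clearance changes, the recommendation changes with no change to the preference model. All functions, targets, and parameter values are illustrative.

```python
import math

def process_model(dose_mg, interval_h, volume_l, clearance_l_per_h):
    """Hypothetical process model: predicts steady-state peak and trough for a
    candidate dosing regimen (one-compartment IV bolus assumption)."""
    k = clearance_l_per_h / volume_l
    peak = (dose_mg / volume_l) / (1.0 - math.exp(-k * interval_h))
    return peak, peak * math.exp(-k * interval_h)

def preference_model(peak, trough, target_peak=8.0, max_trough=2.0):
    """Hypothetical preference model: values outcomes, not settings, so it
    needs no knowledge of the patient's clearance or volume."""
    return -abs(peak - target_peak) - max(0.0, trough - max_trough) * 10.0

def recommend(volume_l, clearance_l_per_h, candidates):
    """Pick the regimen whose predicted outcome the preference model values most."""
    return max(candidates,
               key=lambda c: preference_model(*process_model(c[0], c[1],
                                                             volume_l,
                                                             clearance_l_per_h)))

candidates = [(dose, interval) for dose in (80, 120, 170, 240) for interval in (8, 12, 24)]
print(recommend(volume_l=20.0, clearance_l_per_h=5.0, candidates=candidates))
print(recommend(volume_l=20.0, clearance_l_per_h=2.5, candidates=candidates))  # reduced clearance
```

Only the process model sees the changed clearance; the preference model, which values outcomes rather than settings, is left untouched.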

In the same manner that separating preferences simplifies the environmental model, separating the environmental factors simplifies the preference model. The decision problems addressed by the computer system involve environmental factors that modify the outcomes and therefore influence the selection of the optimal alternative. If the important effects of these environmental factors are included in the process model, then the factors do not need to be included in the preference model. A single preference model can make recommendations for cases throughout the range of the process model. As an example from ventilator therapy, the patient's fluid volume can have a significant effect on the selection of ventilator settings. If measures of the patient's fluid volume are included in the physiological model, the effects of fluid volume on the patient's condition are incorporated in the predictions of the model. Because the preference model uses the output of the physiological model, the system makes recommendations appropriate to the patient's fluid status, without the need to include fluid status in the model of treatment preferences.


Another advantage of the techniques described here is the ability to tailor the advice of decision-support systems to individual physician preferences. Traditional expert systems incorporate the knowledge, and the preferences (either implicit or explicit), of one or more domain experts. Potential users of the system generally have few options for changing the system to correspond to their own preferences. A parameterized preference model makes tailoring the system to individual preferences possible; a rapid, automated preference-assessment method makes such tailoring practical. The simulated-decision preference-assessment method requires comparatively little time from the expert providing the preference information; in addition, the method is suitable for automation. To simplify individual tailoring, system developers must identify a subset of the preference-model parameters that accounts for the variation among individuals. Because only this subset of parameters is refined, the amount of assessment required to adapt the system to an individual may be significantly less than the amount required during system development.

One potential criticism of conventional computer-based decision systems is that the systems are suitable only for the expert who was involved in the development. The acceptance of computer-based systems may improve if the potential users are able to modify the systems to conform to local standards of practice, or even to individual standards. Developers of computer-based systems for aminoglycoside dosing have realized the importance of recognizing and incorporating the treatment preferences of the users of the system [14].

We have described how to tailor the recommendations of a decision-support system to match the preferences of the physician users. Many medical decisions depend on the preferences of the patient. A computer-based system with an explicit value model can represent the preferences of patients as well as of physicians. Because the medical decisions and outcomes may be unfamiliar to patients, different methods of preference assessment may be required for patients.

In the simulated-decision preference-assessment method described in this paper, we infer preferences from observations of the decision maker's behavior. The preferences inferred in this manner, sometimes called revealed preferences, have been criticized as inappropriate for optimal decision making, because the behavior that we observe may not be optimal [6, page 18]. However, if our goal is to represent and apply consistently the expertise of a medical expert, then the method is appropriate. A computer-based system that consistently replicates, without improving, the decisions of a medical expert is both difficult to create and potentially useful.

Merely replicating the decisions of a medical expert, however, does not guarantee the quality of medical care.

Determining the benefits of any medical intervention, with or without computer assistance, requires a carefully controlled study of the health outcomes that result from the intervention. In the absence of appropriate studies to determine the outcomes of treatment, the actual results of an expert's recommendations are unknown, and in fact the recommendations may cause more harm than benefit.

A fundamental prerequisite for a formal comparison of the outcomes of medical therapies is that the therapies under study be applied consistently. For some therapy comparisons, such as the comparison of one medication with an alternative medication, it is easy to ensure that the therapy is applied consistently to the study groups. Other comparisons are more problematic. For example, is conservative support of arterial pO2 more beneficial than aggressive support of arterial pO2? The comparison requires that conservative and aggressive therapies be applied consistently to the study population. Basic treatment guidelines, such as "Maintain pO2 above 60 mmHg," can provide consistency, but are too simplistic to provide optimal recommendations for all patients. The development of a protocol with sufficient guidelines to make optimal recommendations according to just one treatment strategy is a massive task, even with computer support [1]. A computer-based system with an explicit preference model can make optimal recommendations according to a particular treatment strategy. The simulated-decision preference-assessment method makes feasible the creation of multiple preference models, each corresponding to a particular treatment strategy. By using the different preference models in the same decision-support system, researchers can apply the treatment strategies consistently and reliably, and can compare the resulting health outcomes.

A preference model that corresponds to an individual's preferences may provide benefits other than individually tailored recommendations. Experts can gain insight into their own treatment strategies by comparing their preference models with those of other experts. Individually tailored preference models provide a consistent metric for examining interexpert variations in treatment practices.

Conclusion

Difficult decisions generally require tradeoffs among the objectives of the decision; the optimal decision depends on the preferences, or values, of the decision maker. The recommendations of computer-based decision-support systems also depend on values, even when values are not represented explicitly in the system. We have described advantages of the use of decision-theoretic value models in decision-support systems.


Although the use of decision-theoretic value models in decision-support systems provides many advantages, determining the preferences of the decision maker and creating the value model to represent those preferences are difficult problems. We have described an accurate and efficient method for preference assessment and value-model refinement, in which decisions are simulated in the expert's domain of expertise, and the preferences are inferred from the expert's responses. We have described the use of these techniques in the domains of ventilator therapy and drug dosing, but the techniques can be generalized to other applications.

Acknowledgments

We gratefully acknowledge the contributions of all those involved in the VentPlan project. We thank Leslie Lenert for sharing his pharmacological expertise. This research was supported in part by Grants LM-07033 and LM-04136 from the National Library of Medicine.

References

[1] East, T. D., Morris, A. H., Clemmer, T., Orme, J. F., Wallace, C. J., Henderson, S., Sittig, D. F. and Gardner, R. M. "Development of computerized critical care protocols - a strategy that works!" Symposium on Computer Applications in Medical Care. Washington, D.C. R. A. Miller (Ed.) IEEE Computer Society. 1990.
[2] Farr, B. R. and Fagan, L. M. "Decision-theoretic evaluation of therapy plans." Symposium on Computer Applications in Medical Care. Washington, D.C. L. C. Kingsland (Ed.) IEEE Computer Society. 1989.
[3] Kahneman, D., Slovic, P. and Tversky, A. Judgment under uncertainty: Heuristics and biases. Cambridge University Press, Cambridge. 1982.
[4] Kahneman, D. and Tversky, A. "Choices, values, and frames." American Psychologist. 39(4): 341-350. 1984.
[5] Keeney, R. L. "Value-driven expert systems for decision support." Decision Support Systems. 4: 405-412. 1988.
[6] Keeney, R. L. and Raiffa, H. Decisions with multiple objectives: Preferences and value tradeoffs. John Wiley and Sons, New York. 1976.
[7] Keeney, R. L. and Sicherman, A. "Assessing and analyzing preferences concerning multiple objectives: An interactive computer program." Behavioral Science. 21: 173-182. 1976.
[8] Klein, D. A. Interpretive value analysis. PhD thesis. University of Pennsylvania. 1989.
[9] Klein, G., Moskowitz, H., Mahesh, S. and Ravindran, A. "Assessment of multiattributed measurable value and utility functions via mathematical programming." Decision Sciences. 16: 309-324. 1985.
[10] Langlotz, C. P. "The feasibility of axiomatically-based expert systems." Computer Methods and Programs in Biomedicine. 30: 85-95. 1989.
[11] Langlotz, C. P., Fagan, L. M., Tu, S. W., Sikic, B. I. and Shortliffe, E. H. "A therapy planning architecture that combines decision theory and artificial intelligence techniques." Computers and Biomedical Research. 20: 279-303. 1987.
[12] Langlotz, C. P. and Shortliffe, E. H. "Logical and decision-theoretic methods for planning under uncertainty." AI Magazine. 10(1): 39-47. 1989.
[13] Langlotz, C. P., Shortliffe, E. H. and Fagan, L. M. "Using decision theory to justify heuristics." Fifth National Conference on Artificial Intelligence, AAAI-86. Philadelphia, PA. American Association for Artificial Intelligence. 215-219. 1986.
[14] Lenert, L. Personal communication. Department of Clinical Pharmacology, Stanford University. 1991.
[15] Luenberger, D. G. Linear and nonlinear programming. Addison-Wesley, Reading, MA. 1984.
[16] McNeil, B. J., Pauker, S. G., Sox, H. C. and Tversky, A. "On the elicitation of preferences for alternative therapies." New England Journal of Medicine. 306: 1259-1262. 1982.
[17] Rutledge, G., Thomsen, G., Beinlich, I., Farr, B., Sheiner, L. and Fagan, L. "Combining qualitative and quantitative computation in a ventilator-therapy planner." Symposium on Computer Applications in Medical Care. Washington, D.C. L. C. Kingsland (Ed.) IEEE Computer Society. 1989.
[18] Schlaifer, R. Computer programs for elementary decision analysis. Harvard University, Boston. 1971.
[19] Thomsen, G. and Sheiner, L. "SIMV: An application of mathematical modeling in ventilator management." Symposium on Computer Applications in Medical Care. Washington, D.C. L. C. Kingsland (Ed.) IEEE Computer Society. 1989.
[20] Tversky, A. and Kahneman, D. "Judgment under uncertainty: Heuristics and biases." Science. 185: 1124-1131. 1974.
[21] Tversky, A. and Kahneman, D. "The framing of decisions and the psychology of choice." Science. 211: 453-458. 1981.
[22] Tversky, A. and Kahneman, D. "Rational choice and the framing of decisions." Journal of Business. 59(4): 250-278. 1986.
[23] von Nitzsch, R. and Weber, M. "Utility function assessment on a micro-computer: An interactive approach." Annals of Operations Research. 16: 149-160. 1988.
[24] von Winterfeldt, D. and Edwards, W. Decision analysis and behavioral research. Cambridge University Press, Cambridge. 1986.
