Research

Original Investigation

Clinical Questions Raised by Clinicians at the Point of Care: A Systematic Review

Guilherme Del Fiol, MD, PhD; T. Elizabeth Workman, PhD, MLIS; Paul N. Gorman, MD

IMPORTANCE In making decisions about patient care, clinicians raise questions and are unable to pursue or find answers to most of them. Unanswered questions may lead to suboptimal patient care decisions.


OBJECTIVE To systematically review studies that examined the questions clinicians raise in the context of patient care decision making.

DATA SOURCES MEDLINE (from 1966), CINAHL (from 1982), and Scopus (from 1947), all through May 26, 2011.

STUDY SELECTION Studies that examined questions raised and observed by clinicians (physicians, medical residents, physician assistants, nurse practitioners, nurses, dentists, and care managers) in the context of patient care were independently screened and abstracted by 2 investigators. Of 21 710 citations, 72 met the selection criteria.

DATA EXTRACTION AND SYNTHESIS Question frequency was estimated by pooling data from studies with similar methods.

MAIN OUTCOMES AND MEASURES Frequency of questions raised, pursued, and answered, and questions by type according to a taxonomy of clinical questions. Thematic analysis of barriers to information seeking and the effects of information seeking on decision making.

RESULTS In 11 studies, questions were elicited through short interviews with clinicians after each patient visit (7012 patient visits in all). The mean frequency of questions raised was 0.57 (95% CI, 0.38-0.77) per patient seen, and clinicians pursued 51% (36%-66%) of questions and found answers to 78% (67%-88%) of those they pursued. Overall, 34% of questions concerned drug treatment, and 24% concerned potential causes of a symptom, physical finding, or diagnostic test finding. Clinicians' lack of time and doubt that a useful answer exists were the main barriers to information seeking.

CONCLUSIONS AND RELEVANCE Clinicians frequently raise questions about patient care in their practice. Although they are effective at finding answers to the questions they pursue, roughly half of their questions are never pursued. This picture has been fairly stable over time despite the broad availability of online evidence resources that can answer these questions. Technology-based solutions should enable clinicians to track their questions and provide just-in-time access to high-quality evidence in the context of patient care decision making. Opportunities for improvement include the recent adoption of electronic health record systems and maintenance of certification requirements.

JAMA Intern Med. 2014;174(5):710-718. doi:10.1001/jamainternmed.2014.368. Published online March 24, 2014.

Author Affiliations: Department of Biomedical Informatics, University of Utah, Salt Lake City (Del Fiol); Lister Hill Center, National Library of Medicine, Bethesda, Maryland (Workman); Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland (Gorman).

Corresponding Author: Guilherme Del Fiol, MD, PhD, Department of Biomedical Informatics, University of Utah, 421 Wakara Way, Ste 140, Salt Lake City, UT 84108 ([email protected]).


A seminal 1985 study by Covell et al1 reported that internal medicine physicians raise 2 questions for every 3 patients they see in office practice. Since then, numerous studies have examined the questions clinicians raise during patient care. In general, these studies have confirmed that questions arise frequently and often go unanswered, but no systematic review of this literature exists to date. Unanswered questions are seen as an important opportunity to improve patient outcomes by filling gaps in medical knowledge in the context of clinical decisions.2-4 In addition, providing just-in-time answers to clinical questions offers an opportunity for effective adult learning.5 The challenge of maintaining current knowledge and practices is likely to be aggravated by the expansion of medical knowledge, the increasing complexity of health care delivery, and the aging of the population.6-8 Understanding clinicians' questions is essential to guide the design of interventions aimed at providing the right information at the right time to improve care. To increase current understanding, we conducted a systematic review of the literature on clinicians' questions. We focused on the need for general medical knowledge that might be obtained from books, journals, specialists, and online knowledge resources. The systematic review was guided by 4 primary questions: (1) how often do clinicians raise clinical questions; (2) how often do clinicians pursue questions they raise; (3) how often do clinicians succeed at answering the questions that they pursue; and (4) what types of questions are asked? We also conducted a thematic analysis of the barriers to clinicians' information seeking and the potential effects of information seeking on clinicians' decision making.

Methods

The methodology was based on the Standards for Systematic Reviews set by the Institute of Medicine.9 Study procedures were conducted based on formally defined processes and instruments that were drafted and piloted by one of us (G.D.F.) and refined with input from an expert review panel.

Data Sources and Searches

We searched MEDLINE (1966 through May 26, 2011), CINAHL (1982 through May 26, 2011), and Scopus (1947 through May 26, 2011); inspected the citations of included articles and previous relevant reviews; and requested citations from experts on this topic. Search strategies (eAppendix in the Supplement) were developed with the assistance of 2 medical librarians.

Study Selection

We searched for original studies that examined clinicians' questions as defined by Ely et al,10 that is, "questions about medical knowledge that could potentially be answered by general sources such as textbooks and journals, not questions about patient data that would be answered by the medical record." We used a broad definition for clinicians that included physicians, medical residents, physician assistants, nurse practitioners, nurses, dentists, and care managers. We included only studies that collected questions that arose in the care of real patients.


We excluded studies that met any of the following criteria: (1) data collection outside the context of patient care, such as surveys and focus groups; (2) focus on the use, awareness, satisfaction, impact, or quality of information resources without providing data on the frequency of information seeking or the nature of the questions asked; (3) questions of individuals not defined as clinicians in our study, such as patients, medical students, and administrators; (4) needs for specific patient data (eg, laboratory test results) that can be found in the patient’s medical record; (5) no data on at least 1 of the systematic review primary questions; and (6) articles not written in English.

Abstract Screening

One of us (G.D.F.) independently reviewed the title and abstract of all retrieved citations. Two others (T.E.W. and P.N.G.) independently reviewed 2 random samples of 100 citations. In this phase, articles were labeled as "not relevant" or "potentially relevant."

Article Selection

Two of us (G.D.F. and T.E.W.) independently reviewed the full text of all citations labeled as potentially relevant. Included articles were classified into 1 of 5 categories based on the method used to collect clinical questions: (1) interviews with clinicians after each patient visit or at the end of a clinic session (after-visit interviews); (2) clinicians' keeping records of questions as they are raised in the care of their patients (self-report); (3) direct observation of clinicians by a researcher who records questions clinicians raise during routine patient care activities (direct observation); (4) analysis of inquiries submitted to information services, such as drug information services (information services); and (5) analysis of online information resource use logs (search logs). Disagreements between the 2 reviewers were reconciled through consensus with a third (P.N.G.).

Data Extraction

Two of us (G.D.F. and T.E.W.) independently reviewed the included articles to extract the data into a data abstraction spreadsheet and verified quantitative data for accuracy. Disagreements were reconciled with the assistance of a third reviewer (P.N.G.).

Data Synthesis and Analysis

For quantitative measures, we aggregated data from published studies to determine descriptive statistics across these studies. Owing to large variation in study methods and measurements, a meta-analysis of methodologic features and contextual factors associated with the frequency of questions was not possible.
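The exact pooling formula behind the summary estimates in Table 2 is not spelled out in the text. A minimal sketch of one plausible reading, assuming each study's questions-per-patient value is treated as a single, equally weighted observation summarized with a t-based 95% CI (Python; an illustration, not the authors' code):

```python
# Minimal sketch, assuming equal weighting of studies and a t-based CI;
# inputs are the per-study frequencies reported in Table 1.
import statistics as st

# Questions per patient seen in the 11 after-visit interview studies (Table 1)
rates = [0.42, 0.57, 0.66, 0.33, 0.43, 0.32, 0.57, 0.83, 0.70, 0.22, 1.27]

mean = st.mean(rates)
se = st.stdev(rates) / len(rates) ** 0.5
t_crit = 2.228                                  # t(0.975) with 10 df
lo, hi = mean - t_crit * se, mean + t_crit * se

print(f"{mean:.2f} questions per patient (95% CI, {lo:.2f}-{hi:.2f})")
# -> 0.57 questions per patient (95% CI, 0.38-0.77)
```

Under these assumptions the sketch reproduces the 0.57 (95% CI, 0.38-0.77) estimate reported for the after-visit interview studies.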

Results

Description of Studies

Of 21 710 unique citations retrieved, 811 were selected for full-text screening; 72 articles met the study criteria (Figure).


Figure. Process for Selecting Studies of Clinical Questions Raised by Clinicians at the Point of Care

22 124 citations identified by literature search (9899 MEDLINE; 7696 CINAHL; 3849 Scopus; 680 manual search)
21 710 citations after duplicates removed; 20 899 citations excluded at abstract screening
811 citations after abstract screening; 739 citations excluded at full-text review:
- 302 data collected outside patient care context (eg, survey)
- 141 information resource evaluation
- 68 not original peer-reviewed data
- 15 information needs of nonclinicians (eg, consumers)
- 12 information needs on simulated cases
- 18 needs for patient record data/functionality
- 151 does not address 1 of the study questions
- 29 not available in English
- 3 same analysis of a data subset from a more recent study
72 studies included in data abstraction

Of 21 710 citations retrieved, 811 were selected for full-text screening, and 72 met the study inclusion criteria.

Clinical questions were collected in after-visit interviews in 19 studies, through clinician self-report in 11, by direct observation of patient care activities in 11, by analysis of questions submitted to an information service in 26, and by analysis of online information resource search logs in 8. Three studies used more than 1 method. Characteristics of the included studies are listed in eTable 1 in the Supplement. The search also identified a systematic review of clinicians' information-seeking behavior11 and several informal literature reviews on related topics.6,12-17 No systematic review was found of the questions clinicians raise at the point of care. Agreement on abstract and full-text screening was 99% (κ = 0.88) and 95% (κ = 0.74), respectively.
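The κ values above are standard two-rater Cohen κ statistics on the include/exclude screening decisions. The calculation is not shown in the paper; a generic sketch (Python, with hypothetical label vectors rather than study data) is:

```python
# Hedged sketch: Cohen kappa for two raters' include/exclude screening labels.
from collections import Counter

def cohen_kappa(a, b):
    """Chance-corrected agreement between two raters' label lists."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n                     # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_exp = sum(ca[k] * cb[k] for k in ca.keys() | cb.keys()) / n**2  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical screening decisions for 100 citations (not data from the review)
rater1 = ["exclude"] * 90 + ["include"] * 10
rater2 = ["exclude"] * 89 + ["include"] * 11   # the raters disagree on 1 citation

print(round(cohen_kappa(rater1, rater2), 2))   # 0.95
```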

Frequency of Clinical Questions Raised, Pursued, and Answered

Table 1 lists the number of questions raised by clinicians, the proportion pursued, and the proportion of pursued questions that were successfully answered. In 20 studies that provided sufficient data, the frequency of questions ranged from 0.16 to 1.85 per patient seen. The frequency varied according to study methods, with intermediate frequencies in 11 after-visit interview studies (median, 0.57; range, 0.22-1.27), lower frequencies in 4 self-report studies (median, 0.20; range, 0.16-0.23), and higher frequencies in 5 direct observation studies (median, 0.85; range, 0.24-1.85) (Table 2). The proportion of questions that were pursued was available in 16 studies, with medians of 81% (range, 23%-82%) in 3 self-report studies, 47% (range, 28%-85%) in 11 after-visit interview studies, and 47% (range, 22%-71%) in 2 direct observation studies. Finally, the reported rates of successfully answered questions were the most consistent: when clinicians decided to pursue a clinical question, they were successful approximately 80% of the time across all study types (Table 2).
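Taken together, the pooled after-visit estimates imply the following rough per-100-patients arithmetic (an illustration built from the Table 2 means; the per-100-patients framing is ours, not a result reported by the included studies):

```python
# Rough illustration using the pooled after-visit interview estimates.
patients = 100
raised = 0.57 * patients        # 0.57 questions raised per patient seen
pursued = 0.51 * raised         # 51% of raised questions pursued
answered = 0.78 * pursued       # 78% of pursued questions answered
unanswered = raised - answered

print(f"{raised:.0f} raised, {pursued:.0f} pursued, "
      f"{answered:.0f} answered, {unanswered:.0f} unanswered")
# -> 57 raised, 29 pursued, 23 answered, 34 unanswered (about 60% unanswered)
```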

Types of Questions Asked

Sixty-four studies classified questions using various methods and classification systems. Of these studies, 48 (75%) used ad hoc and informal classification approaches with general categories such as diagnosis, therapy, etiology, and prognosis. Although these categories had similar names, the definitions and methods used were poorly defined and varied substantially among studies, precluding meaningful comparison or aggregation. For simplicity, we have collapsed data from these studies into approximate categories (eTable 2 in the Supplement). Five studies classified questions according to a formal taxonomy of 64 question types developed by Ely et al.56 The question types followed a Pareto distribution, with roughly 30% of the question types accounting for 80% of the questions clinicians asked. Table 3 lists the 13 most frequent question types across these 5 studies. Overall, 34% of the questions asked were about drug treatment, and 24% were related to the potential causes of a symptom, physical finding, or diagnostic test finding. Studies that focused on drug-related questions classified questions according to various categories, such as dose and administration, contraindications, and adverse reactions. The most frequent categories were dose and administration, indication, and adverse reactions (eTable 3 in the Supplement).

Other Substantial Findings

The Box summarizes other substantial findings that were recurrent across studies. Clinicians cited several barriers to pursuing their questions, such as their lack of time (cited in 11 studies) and their perception that the question was not urgent (5 studies) or important (5 studies) for the patient's care. Eleven studies reported that the information found by clinicians had some positive effect on clinical decision making. According to 4 studies, clinicians spent a mean of less than 2 to 3 minutes seeking an answer to a specific question. Two studies demonstrated that the perceived frequency of questions reported by clinicians in surveys was much lower than that obtained through patient care observations.

Discussion

To our knowledge, this is the first systematic review of clinicians' patient care questions. In the nearly 3 decades since the study by Covell et al,1 more than 20 additional studies have addressed these issues, using differing methods in a variety of settings. What has emerged from these efforts is a fairly stable picture: clinicians have many questions in practice (at least 1 for every 2 patients they see), and although they find answers to most (78% to 87%) of the questions they pursue, more than half of their questions are never pursued and thus remain unanswered.


Table 1. Frequency of Clinical Questions per Patient Seen, Percentage of Questions Pursued, and Percentage Answered

| Source | Questions, No. | Patients Seen, No. | Questions per Patient, No. | Pursued, % | Pursued and Answered, % | Comments |
|---|---|---|---|---|---|---|
| After-visit interview | | | | | | |
| Chambliss and Conley,18 1996 | 84 | NR | NR | NA | 54 | |
| Cogdill et al,19 2000 | 62 | 148 | 0.42 | 32 | NR | |
| Cogdill,20 2003 | 75 | 153 | 0.57 | 85 | NR | |
| Covell et al,1 1985 | 269 | 409 | 0.66 | NR | NR | |
| Dee and Blazek,21 1993 | 48 | 144 | 0.33 | NR | NR | |
| Ebell and White,22 2003a | 415 | 966 | 0.43 | 68 | 74 | |
| Ely et al,10 1999 | 778 | 2467 | 0.32 | 36 | 80 | |
| Ely et al,23 2005 | 1062 | NR | NR | 55 | 72 | Record review at end of half-day … |
| Gorman and Helfand,24 1995 | 295 | 514 | 0.57 | 30 | 80 | |
| Gorman et al,25 2004 | 585 | 705 | 0.83 | 47 | 77 | |
| Graber et al,26 2007 | 271 | NR | NR | 81 | 87 | |
| Green et al,5 2000 | 280 | 401 | 0.70 | 29 | NR | |
| Norlin et al,27 2007 | 193 | 890 | 0.22 | 28 | 98 | |
| Ramos et al,28 2003 | 274 | 215 | 1.27 | 69 | NR | … Only unanswered questions in visit … |
| Self-report | | | | | | |
| Barrie and Ward,29 1997 | 85 | 376 | 0.23 | 82 | 96 | |
| Crowley et al,30 2003 | 581 | NR | NR | NA | 82 | |
| Ebell and White,22 2003a | 402 | 2496 | 0.16 | 81 | 55 | |
| González-Gonzáles et al,31 2007 | 635 | 3511 | 0.18 | 22 | 86 | |
| Jennett et al,32 1989 | 592 | 2767 | 0.21 | NR | NR | |
| Patel et al,33 2006 | 253 | NR | NR | NA | 87 | Self-recorded searches |
| Schilling et al,34 2005 | 158 | NR | NR | NA | 89 | Residents were assigned a patient-specific question to pursue |
| Schwartz et al,35 2003 | 92 | NR | NR | NA | 54 | Clinicians video-recorded their questions |
| Van Duppen et al,36 2007 | 365 | NR | NR | NA | 87 | Self-recorded searches |
| Direct observation | | | | | | |
| Davies,37 2009 | 286 | 1210 | 0.24 | 22 | NR | Recorded by volunteer clinical librarians |
| Dorr et al,38 2006 | 128 | NR | NR | 71 | 80 | |
| Ely et al,39 1992 | 36 | NR | NR | NA | 92 | Observed information-seeking episodes |
| Hauser et al,40 2007 | 363 | 228 | 1.59 | NA | 68 | Residents followed rounds, collecting and pursuing questions |
| Osheroff et al,41 1991 | 77 | 91 | 0.85 | NR | NR | Ethnographic method |
| Sackett and Straus,42 1998 | 98 | 196 | 0.50 | NA | NR | Clinical team recorded questions they pursued using "evidence cart" |
| Timpka and Arborelius,43 1990 | 85 | 46 | 1.85 | NR | NR | Video-recorded visits reviewed with physicians … |
| Search logs | | | | | | |
| Del Fiol et al,44 2008 | 115 | NR | NR | NA | 87 | |
| Hoogendam et al,45 2008 | 1125 | NR | NR | NA | 53 | |
| Magrabi et al,46 2005 | 63 | NR | NR | NA | 73 | |
| Maviglia et al,47 2006 | 289 | NR | NR | NA | 84 | |
| Xu et al,48 2005 | 54 | NR | NR | NA | 57 | |
| Information service | | | | | | |
| Barley et al,49 2009 | 84 | NR | NR | NA | 83 | |
| Bergus and Emerson,50 2005 | 1618 | NR | NR | NA | 91 | |
| Del Mar et al,51 2001 | 84 | NR | NR | NA | 82 | |
| Fozi et al,52 2000 | 78 | NR | NR | NA | 40 | |
| Swain et al,53 1983 | 158 | NR | NR | NA | 96 | |
| Swinglehurst et al,54 2001 | 60 | NR | NR | NA | 95 | |
| Verhoeven and Schuling,55 2004 | 61 | NR | NR | NA | 92 | |

Abbreviations: NA, not applicable (the study collected and analyzed only the questions that clinicians pursued); NR, not reported.

a Ebell and White22 used both the after-visit interview and self-report methods.


Table 2. Frequency of Clinical Questions

| Data Collection Method | Questions per Patient Seen, No.a | Questions Pursued, %b | Questions Pursued and Answered, %b,c |
|---|---|---|---|
| After-visit interview (3274 questions; 7012 patients seen) | | | |
| Studies, No. | 11 | 11 | 8 |
| Mean (95% CI) | 0.57 (0.38-0.77) | 51 (36-66) | 78 (67-88) |
| Median | 0.57 | 47 | 78 |
| Self-report (1714 questions; 9150 patients seen) | | | |
| Studies, No. | 4 | 3 | 8 |
| Mean (95% CI) | 0.20 (0.15-0.24) | 62 (NP) | 80 (66-93) |
| Median | 0.20 | 81 | 87 |
| Direct observation (909 questions; 1771 patients seen) | | | |
| Studies, No. | 5 | 2 | 3 |
| Mean (95% CI) | 1.01 (0.15-1.87) | 47 (NP) | 80 (NP) |
| Median | 0.85 | 47 | 80 |
| Information service (2106 questions submitted) | | | |
| Studies, No. | 0 | 0 | 6 |
| Mean (95% CI) | | | 74 (51-98) |
| Median | | | 83 |
| Search log (1767 search sessions) | | | |
| Studies, No. | 0 | 0 | 7 |
| Mean (95% CI) | | | 77 (62-93) |
| Median | | | 84 |

Abbreviation: NP, not possible to estimate owing to small number of studies.

a Data represent number of questions per patient except where otherwise indicated.

b Data represent percentage of questions except where otherwise indicated.

c Number of questions answered divided by number of questions pursued.

Table 3. Clinical Questions Classified According to the Taxonomy of Ely et ala (values are percentages of questions)

| Question Type | Taxonomy Code | Gorman and Helfand,24 1995 | Ely et al,10 1999 | González-González et al,31 2007 | Graber et al,26 2007 | Ebell et al,57 2011 | Overall |
|---|---|---|---|---|---|---|---|
| What is the drug of choice for condition X? | 2.1.2.1 | 13 | 10 | 7 | 10 | 13 | 10 |
| What is the cause of symptom X? | 1.1.1.1 | 3 | 10 | 20 | 3 | 6 | 10 |
| How should I treat condition X (not limited to drug treatment)? | 2.2.1.1 | 10 | 6 | 2 | 5 | 15 | 7 |
| What is the cause of physical finding X? | 1.1.2.1 | 2 | 6 | 15 | 3 | 3 | 7 |
| What test is indicated in situation X? | 1.3.1.1 | 9 | 8 | 3 | 8 | 6 | 6 |
| What is the dose of drug X? | 2.1.1.2 | 3 | 8 | 3 | 13 | 2 | 6 |
| Can drug X cause (adverse) finding Y? | 2.1.3.1 | 6 | 4 | 1 | 7 | 8 | 5 |
| What is the cause of test finding X? | 1.1.3.1 | 4 | 5 | 3 | 2 | 5 | 4 |
| Could this patient have condition X? | 1.1.4.1 | 1 | 4 | 6 | 1 | 2 | 4 |
| How should I manage condition X (not specifying diagnostic or therapeutic)? | 3.1.1.1 | 2 | 5 | 4 | 0.4 | 1 | 4 |
| What is the prognosis of condition X? | 4.3.1.1 | NA | NA | 0.2 | 4 | 6 | 2 |
| What are the manifestations of condition X? | 1.2.1.1 | NA | NA | 1 | 8 | 2 | 2 |
| What conditions or risk factors are associated with condition Y? | 4.2.1.1 | NA | NA | 1 | 6 | 1 | 2 |

Abbreviation: NA, not available.

a Data include the 13 most frequent question types across studies, accounting for 80% of the questions asked and classified according to the taxonomy of Ely et al.56

These unanswered questions continue to represent a significant opportunity to improve patient care and to support self-directed learning by providing needed information to clinicians in the context of care.
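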

Study Methods

The methods of the included studies varied substantially regarding the definition of clinical questions, data collection and analysis, care setting, and clinician background. We found important differences in the results that can be explained in part by these differences.


Direct observation studies can provide more information about the underlying context that motivates a clinical question and may identify questions that clinicians fail to articulate. However, the presence of an observer might artificially stimulate29 or inhibit62 the articulation of clinical questions. Furthermore, there may be greater variation in how direct observation is performed. After-visit interviews may be less likely to artificially stimulate questions, but these studies might miss questions that clinicians fail to articulate. On the other hand, the after-visit method may be applied more consistently, resulting in more stable estimates of the frequency of questions. The self-report method is the least expensive and least intrusive but the most susceptible to memory or saliency bias. Still, given the difficult logistics and expense of direct observation and after-visit interviews, the self-report method may be a useful alternative when the goal is to collect a large number of clinical questions from a variety of settings.

Frequency of Clinical Questions Raised, Pursued, and Answered

Considering methodologic differences, we found fairly stable reports of the frequency of questions, the pursuit of information, and clinician success in finding answers to the questions they elected to pursue. Most after-visit interview studies were conducted in community clinics and used similar methods. Except for 2 outliers, these studies reported similar results. Therefore, 0.57 (95% CI, 0.38-0.77) seems to be a reasonable estimate of the mean frequency of recognized clinician questions in outpatient community settings. The 2 outliers can be explained by methodologic differences. At the lower extreme (a per-patient question frequency of 0.22), Norlin et al27 excluded simple drug reference questions and questions answered during the patient visit. At the other extreme (a per-patient frequency of 1.27), 25 residents asked twice as many questions as 11 faculty members, increasing the overall question frequency.28 The direct observation studies were the least uniform regarding data collection methods and observation setting, which may explain the wide 95% CIs for the frequency of questions. The per-patient question frequency for self-report studies ranged from 0.16 to 0.23, but this method is likely to underestimate the question frequency owing to recall bias. According to 13 after-visit interview and direct observation studies, clinicians pursued roughly half of their questions. The percentage of questions they answered was similar across studies, with the median per study type ranging between 78% and 87%. This relatively high success rate may be explained by clinicians' ability to selectively pursue questions that can be answered quickly.63 According to information foraging theory, humans constantly weigh the expected benefits vs the estimated cost of engaging in certain information-seeking activities.64 This process is more notably observed among experts and in time-sensitive environments.


Box. Other Substantial and Recurring Findings

Barriers to pursuing clinical questions/reasons not to pursue questions:
- Lack of time*
- Question not urgent5,10,20,24,28
- Question not important23,27,28,31,58
- Doubt that a useful answer exists10,20,23,24,26,27,58
- Forgetting the question5,31
- Referral23,28,31

- Information found affected clinicians' decision making, confirming or changing decisions†
- Most questions are pursued when the patient is still in the practice20,24,25
- Most questions are highly patient specific and nongeneralizable1,20,24
- Clinicians used human and paper resources more often than computer resources1,10,19,20,23-25
- Clinicians spend a mean of less than 2 to 3 minutes seeking an answer to a question
