EDUCATIONAL ADVANCE

Critical Appraisal of Emergency Medicine Education Research: The Best Publications of 2013

Susan E. Farrell, MD, EdM, Gloria J. Kuhn, DO, Wendy C. Coates, MD, Phillip H. Shayne, MD, Jonathan Fisher, MD, MPH, Lauren A. Maggio, MSL, PhD, and Michelle Lin, MD

Abstract

Objectives: The objective was to critically appraise and highlight methodologically superior medical education research articles published in 2013 whose outcomes are pertinent to teaching and education in emergency medicine (EM).

Methods: A search of the English-language literature in 2013 querying Education Resources Information Center (ERIC), PsycINFO, PubMed, and Scopus identified 251 EM-related studies using hypothesis-testing or observational investigations of educational interventions. Two reviewers independently screened all of the publications and removed articles using established exclusion criteria. Six reviewers then independently scored the remaining 43 publications using either a qualitative or a quantitative scoring system, based on the research methodology of each article. Each scoring system consisted of nine criteria. Selected criteria were based on accepted educational review literature and chosen a priori. Both scoring systems used parallel scoring metrics and have been used previously within this annual review.

Results: Forty-three medical education research papers (37 quantitative and six qualitative studies) met the a priori criteria for inclusion and were reviewed. Six quantitative studies and one qualitative study were scored and ranked most highly by the reviewers as exemplary and are summarized in this article.

Conclusions: This annual critical appraisal article aims to promote superior research in EM-related education by reviewing and highlighting seven of 43 major education research studies that met a priori criteria and were published in 2013. Common methodologic pitfalls in the 2013 papers are noted, and current trends in medical education research in EM are discussed.

ACADEMIC EMERGENCY MEDICINE 2014;21:1274–1283 © 2014 by the Society for Academic Emergency Medicine

Quality, hypothesis-driven education research is necessary to promote evidence-based decisions about effective ways to teach the physicians of tomorrow. Education research has gained increasing support and prominence in emergency medicine (EM) academia, with available grant opportunities from the Society for Academic Emergency Medicine and the Council of Emergency Medicine Residency Directors. Furthermore, the 2012 Academic Emergency Medicine consensus conference focused on the theme “Education Research in Emergency Medicine: Opportunities, Challenges, and Strategies for Success” to promote a national initiative to advance the field of education research.1 In this sixth installment of the annual critical appraisal series, the same six reviewers used previously published criteria2 to critically analyze and rank the EM education research from 2013. The focus of this article is to review and highlight the methodologically superior studies that are pertinent to teaching and education in EM. Trends in EM education research over the past 6 years, as they can be inferred from this review, are summarized. It is hoped that this article will serve as a valuable resource for EM educators and researchers invested in the scholarship of teaching.3

From the Partners Healthcare International (SEF), Beth Israel Deaconess Medical Center (JF), Harvard Medical School, Boston, MA; the Wayne State University School of Medicine (GJK), Detroit, MI; Harbor–UCLA Medical Center, University of California at Los Angeles (WCC), Los Angeles, CA; the Emory University School of Medicine (PHS), Atlanta, GA; the Stanford University School of Medicine (LAM), Stanford, CA; and San Francisco General Hospital (ML), San Francisco, CA. Received July 18, 2014; accepted July 27, 2014. The authors have no relevant financial information or potential conflicts to disclose. Supervising Editor: John Burton, MD. Address for correspondence and reprints: Susan E. Farrell, MD, EdM; e-mail: [email protected].



doi: 10.1111/acem.12507


METHODS

Article Identification

The previously described methodology to search for relevant publications was used.2 Publications were limited to English-language papers published in 2013. Searches were performed in December 2013 and April 2014.

Inclusion and Exclusion Criteria

Publications relevant to the EM education of medical students, residents, academic and nonacademic attending physicians, and other emergency providers were included. Medical education studies were defined as hypothesis-testing investigations and measurements of educational interventions using either quantitative or qualitative methods. Publications were excluded if they were not considered to be research. In other words, opinion pieces, commentaries, literature reviews, pure descriptions of activities, single-site attitudinal surveys, reports on education of prehospital personnel, and studies that could not be generalized to EM training outside of the country in which they were performed were excluded.

Data Collection

Two authors (SEF, ML) independently screened 251 abstracts from all retrieved publications and applied the exclusion criteria. All differences in opinion were resolved by discussion. Retrieved publications were maintained in an ENDNOTE X6 (Thomson Reuters) database. Forty-three publications were made electronically available for all six reviewers to score independently.

Scoring

The publications were first assigned to a scoring system based on whether they were primarily quantitative or qualitative studies. The quantitative studies used scoring criteria that were developed in 2009 and have been continually optimized and iteratively modified since then.3–7 Quantitative studies were scored in nine domains for a maximum total score of 25 points. The domains included the following: introduction (0–3 points), measurement (0–4 points), data collection (0–4 points), data analysis (0–3 points), discussion (0–3 points), limitations (0–2 points), innovation (0–2 points), generalizability (0–2 points), and clarity of writing (0–2 points). Each of the domains was scored based on predefined criteria to make scoring as objective as possible. Using accepted recommendations and hierarchical formulations,7–9 qualitative studies were assessed and scored in nine domains, parallel to those applied to the quantitative studies, for a maximum total score of 25 points. These also included the domains of measurement, data collection, and data analysis criteria, as defined specifically for high-quality qualitative research. The scoring criteria for both quantitative and qualitative research studies have been previously published in this review series and are represented in Tables 1 and 2.2
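To make the rubric’s arithmetic concrete, the following minimal sketch (not part of the authors’ methodology) tallies a hypothetical article’s total score from the nine quantitative domains and their point caps listed above. The domain names and caps follow Table 1; the function and the example ratings are invented for illustration.

```python
# Illustrative only: summing the nine quantitative scoring domains of Table 1.
# The per-domain caps come from the published rubric; the example ratings
# below are hypothetical, not scores from any reviewed article.

DOMAIN_CAPS = {
    "introduction": 3, "measurement": 4, "data_collection": 4,
    "data_analysis": 3, "discussion": 3, "limitations": 2,
    "innovation": 2, "relevance": 2, "clarity_of_writing": 2,
}  # caps sum to the 25-point maximum

def total_score(ratings: dict[str, int]) -> int:
    """Sum per-domain ratings, clamping each rating to its domain cap."""
    return sum(min(ratings.get(domain, 0), cap)
               for domain, cap in DOMAIN_CAPS.items())

example_ratings = {
    "introduction": 3, "measurement": 2, "data_collection": 3,
    "data_analysis": 2, "discussion": 3, "limitations": 1,
    "innovation": 1, "relevance": 2, "clarity_of_writing": 2,
}
print(total_score(example_ratings))  # -> 19 (of a possible 25)
```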


Data Analysis

Reviewers were excluded from scoring publications in which there was deemed to be a significant conflict of interest (their own publication, their own institution, or a vested interest in the authors or work). Publications were listed alphabetically by first author surname, and each reviewer started the review with the article whose author surname was alphabetically closest to the reviewer’s own. This process has been used throughout this review series in an attempt to prevent bias resulting from reviewer fatigue. Each reviewer independently reviewed and scored each publication. A total rating score was calculated for each article and entered into a spreadsheet using Microsoft Excel 2010. Using each reviewer’s total rating score for each article, a rank list of quantitative studies and a rank list of qualitative studies were created for each reviewer. The rankings were then averaged among all six reviewers to prevent overvaluing any one reviewer’s scoring. The a priori criteria for quantitative studies to be featured as exemplary were: 1) the average of all reviewers’ rankings of an article placed the article’s rank in the top 10 and 2) at least (n – 1) reviewers ranked the article in their individual top 10 rankings, where n was the number of eligible reviewers. Because of the historical paucity of published qualitative studies, the single highest-scoring qualitative study was highlighted.
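The selection logic described above can be read as a small algorithm. The sketch below is a hypothetical rendering (the data structures and function name are invented, and the authors worked in a spreadsheet rather than code): each reviewer’s total scores yield a rank list, ranks are averaged across reviewers, and the two a priori criteria are applied.

```python
# Hypothetical sketch of the rank-aggregation and a priori selection step.
from statistics import mean

def select_featured(rank_lists: list[dict[str, int]]) -> list[str]:
    """rank_lists holds one dict per eligible reviewer, mapping an article
    identifier to that reviewer's rank (1 = best). Returns the articles that
    satisfy both a priori criteria, best average rank first."""
    n = len(rank_lists)
    avg_rank = {a: mean(rl[a] for rl in rank_lists) for a in rank_lists[0]}
    # Criterion 1: the averaged rankings place the article in the overall top 10.
    overall_top10 = sorted(avg_rank, key=avg_rank.get)[:10]
    featured = []
    for article in overall_top10:
        # Criterion 2: at least (n - 1) reviewers ranked it in their own top 10.
        votes = sum(rl[article] <= 10 for rl in rank_lists)
        if votes >= n - 1:
            featured.append(article)
    return featured
```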



Table 1
EM Education Research Scoring Metrics: Quantitative Research2

Introduction: Give 1 point for each criterion met (maximum 3 points)
  Appropriate description of background literature (1)
  Clearly frame the problem (1)
  Clear objective/hypothesis (1)
Measurement: Give 0 or 1 point for each criterion met (maximum 4 points)
  1. Methodology
     Has no pretest or posttest (0)
     Has a posttest only (1)
     Has a pretest and posttest (1)
  2. Groups
     Both experimental and control group (1)
     Random assignment to groups (1)
Data collection: Give 0 or 1 point for each criterion met (maximum 4 points)
  1. Institutions
     Single institution (0)
     At least two institutions (1)
     More than two institutions (1)
  2. Response rate
     Response rate < 50% or not reported (0)
     Response rate ≥ 50% (1)
     Response rate ≥ 75% (1)
Data analysis: Give 0 or 1 point for each criterion met (maximum 3 points)
  1. Appropriateness
     Data analysis inappropriate for study design/type of data (0)
     Data analysis appropriate for study design and type of data (1)
  2. Sophistication
     Descriptive analysis only (0)
     Beyond descriptive analysis (1)
     Includes power analysis (1)
Discussion: Give 1 point for each criterion met (maximum 3 points)
  Data support conclusion (1)
  Conclusion clearly addresses hypothesis/objective (1)
  Conclusions placed in context of literature (1)
Limitations: Assign a single best score (maximum 2 points)
  Limitations not identified accurately (0)
  Some limitations identified (1)
  Limitations well addressed (2)
Innovation of project: Assign a single best score (maximum 2 points)
  Previously described methods (0)
  New use for known assessment (1)
  New assessment methodology (2)
Relevance of project: Assign a single best score (maximum 2 points)
  Impractical to most programs (0)
  Relevant to some (1)
  Highly generalizable (2)
Clarity of writing: Assign a single best score (maximum 2 points)
  Unsatisfactory (0)
  Fair (1)
  Excellent (2)
Total: 25 points

RESULTS

A total of 251 publications satisfied the search criteria, and 43 papers met the inclusion criteria.10–52 The 43 articles (37 quantitative and six qualitative studies) were critically appraised by each of six reviewers. Six quantitative studies met a priori criteria as methodologically superior publications in education research, with mean scores ranging from 16.5 to 21.2 (maximum 25 points).12,20,21,30,34,36 One qualitative study received the highest score of 16.0 (maximum 25 points).23 The six highest-ranking quantitative studies are presented in alphabetical order by the surname of the first author, followed by the one highest-ranked qualitative study.

Akhtar, S, Hwang, U, Dickman, E, Nelson, BP, Morrison, RS, Todd, KH. A brief educational intervention is effective in teaching the femoral nerve block procedure to first-year emergency medicine residents. J Emerg Med 2013;45:726–30.

Background: The use of femoral nerve block as an alternative to systemic opioids for the treatment of hip fracture pain in elderly patients is increasing. This study examined residents’ knowledge and skills retention after ultrasound and nerve stimulator–guided training to perform femoral nerve blocks.

Methods: This was a pre–post observational assessment of the medical knowledge and clinical skills of first-year residents at three EM programs who completed didactic and hands-on education in ultrasound and nerve stimulator–guided femoral nerve block. Knowledge assessment included indications, anatomic landmarks, drug information, and the use of the nerve stimulator.

Procedural skills were taught on a simulated vascular gel mold. Knowledge was retested at 1 and 3 months, and procedural skills were retested at 3 months postintervention.

Results: Thirty of 38 (79%) residents completed all pre- and post-assessments. Mean posttest knowledge scores increased by 26%, while retention of knowledge was maintained at 1 and 3 months (mean score decrement from immediate posttest of 18%). At 3 months, 83% of residents retained 85% of the predetermined critical procedural skills.


Strengths of the Study: This pre–post pilot study was performed across three cohorts of EM residents, enhancing the generalizability of the results that knowledge and procedural skills in performing femoral nerve block could be maintained at 3 months after a brief didactic and hands-on educational intervention. The authors were able to maintain postintervention assessments with 79% of their subjects.

Relevance for Future Educational Advances: This multisite study was a simple pre–post clinical skills intervention that demonstrates longitudinal postintervention methodology.

Bounds, R, Bush, C, Aghera, A, et al. Emergency medicine residents’ self-assessments play a critical role when receiving feedback. Acad Emerg Med 2013;20:1055–61.



Table 2
EM Education Research Scoring Metrics: Qualitative Research2

Introduction: Give 1 point for each criterion met (maximum 3 points)
  Appropriate description of background literature (1)
  Clearly frame the problem (1)
  Clear objective/hypothesis (1)
Measurement: Give 1 point for each criterion met (maximum 3 points)
  1. Methodology
     Appropriate for study question (1)
  2. Sampling of participants
     Appropriate study population (1)
     Enrolled full range of cases/settings beyond convenience (1)
Data collection: Give 0–1 point for each criterion met (maximum 3 points)
  1. Institutions
     Single institution (0)
     At least two institutions (1)
     More than two institutions (1)
  2. Sample size determination
     Appropriate sample size determination (1)
Data analysis: Give 1 point for each criterion met (maximum 5 points)
  Clear, reproducible “audit trail” documenting systematic procedure for analysis (1)
  Data saturation through a systematic iterative process of analysis (1)
  Addressed contradictory responses (1)
  Incorporated validation strategies (e.g., member checking, triangulation) (1)
  Addressed reflexivity (impact of researcher’s background, position, biases on study) (1)
Discussion: Give 1 point for each criterion met (maximum 3 points)
  Data support conclusion (1)
  Conclusion clearly addresses hypothesis/objective (1)
  Conclusions placed in context of literature (1)
Limitations: Assign a score (maximum 2 points)
  Limitations not identified accurately (0)
  Some limitations identified (1)
  Limitations well addressed (2)
Innovation of project: Assign a score (maximum 2 points)
  Previously described methods (0)
  New use for known assessment (1)
  New assessment methodology (2)
Relevance of project: Assign a score (maximum 2 points)
  Impractical to most programs (0)
  Relevant to some (1)
  Highly generalizable (2)
Clarity of writing: Assign a score (maximum 2 points)
  Unsatisfactory (0)
  Fair (1)
  Excellent (2)
Total: 25 points

Background: The most effective method of delivering feedback, and whether feedback is used to develop learning goals, has not been established. Faculty members often feel uncomfortable delivering negative feedback, and learners may consciously or unconsciously reject it. The purpose of this study was to determine the acceptance or rejection of feedback and the interaction of self-assessment and feedback in the formation and execution of learning goals.

Methods: This was a multicenter, observational, cross-sectional study. Seventy-two senior EM residents participated in a standardized oral board examination requiring competency in resuscitation, advanced cardiac life support, and communication. They were blinded to the critical actions required in the case and to the scoring instrument. The residents then completed a self-assessment form listing strengths and weaknesses of their performance. Faculty used both a structured, validated quantitative scoring system and a qualitative feedback checklist to give both positive and negative feedback.


After self-assessment and delivery of feedback, residents were asked to form learning goals based on self-assessment, feedback, or both. Within 4 weeks, residents were asked to recall and describe actions taken as a result of the generated learning goals.

Results: There were initially 226 learning goals generated (mean = 3.1, standard deviation [SD] = 1.3, per resident). Forty-seven percent were based on the residents’ self-assessments only, compared to 27% generated by feedback alone. Residents performing poorly were more apt to incorporate feedback when generating learning goals compared to high performers. Follow-up revealed that 62 residents recalled 89 of the 226 learning goals, of which 52 were acted upon. Although the numbers of recalled learning goals from self-assessment and from feedback were equal, the greatest proportion (40%) of reportedly executed learning goals came from self-assessments and feedback in agreement.


Conclusions: Residents (particularly high performers) completing a standardized case generated the majority of learning goals based upon self-assessment rather than feedback. On follow-up, the numbers of learning goals recalled from feedback and from self-assessment were equal; however, the majority of learning goals acted upon stemmed from feedback and self-assessment in agreement. This study suggests that for feedback to have the most value in changing behavior, educators need to incorporate learners’ self-assessments into feedback.

Strengths of the Study: This study used a case-based assessment to compare quantitative measurements of performance based on faculty members’ observations and ratings, faculty members’ feedback, and learners’ self-assessment; it then sought to determine the basis on which learners set their own goals, comparing the effects of faculty members’ feedback and learners’ self-assessment.

Relevance for Future Educational Advances: Future research may draw on these results to determine the relationship between faculty members’ feedback and learners’ self-assessments in terms of developing learning goals. If feedback is perceived as important in contributing to meaningful learning goals, more research is warranted to determine the most effective methods for improving feedback conversations with learners and to understand the relationship between learners’ self-assessments and their incorporation of external feedback.

Boutis, K, Grootendorst, P, Willan, A, et al. Effect of the Low Risk Ankle Rule on the frequency of radiography in children with ankle injuries. CMAJ 2013;185:E731–8.

Background: The Low Risk Ankle Rule is a validated decision tool, distinct from the Ottawa Ankle Rule, for determining whether an ankle radiograph is indicated in children with acute ankle injuries. This quality improvement research study evaluated the effectiveness of phased educational interventions in actually reducing the frequency of radiographs obtained by clinicians, the target learner population.

Methods: This was a prospective, multicenter, interrupted time series, pair-matched controlled study conducted at six Canadian emergency departments (EDs) over an 18-month period. During Phase 1, there were no educational interventions. During Phase 2, emergency physicians (EPs) attended teaching sessions, received pocket cards and a computer decision support system, and could review the Low Risk Ankle Rule on wall posters. During Phase 3, only the decision support system was used as the educational adjunct. The primary outcome measure was the proportion of pediatric patients receiving ankle radiographs.

Results: In the 2,151 enrolled patients, the incidence of radiographs decreased from 96.5% to 73.5% to 71.3% across the three phases in the intervention sites, compared to a stable pattern of 90.2%, 88.3%, and 87.8% in the control sites. The sensitivity and specificity of the ankle rule were 100% and 53.1%, respectively, during the study.

Strengths of the Study: This education/quality improvement study not only incorporated the use of multiple sites to improve generalizability, but also matched them to control sites. A power calculation was performed, determining that a sample size of 2,100 injuries over 78 weeks at six sites was needed to detect a clinically important effect, whereby there would be a 20% reduction in radiography rates.
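For readers unfamiliar with such calculations, the sketch below shows a generic two-proportion power analysis of the kind alluded to here. It is an assumption-laden approximation (simple two-group comparison, alpha = 0.05, power = 0.80, baseline and target rates taken loosely from the reported data), not the authors’ actual pair-matched, interrupted time-series calculation.

```python
# Generic illustration of a sample-size calculation for detecting a roughly
# 20-percentage-point drop in radiography rates. All parameters here are
# assumptions for illustration, not the values used by Boutis et al.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.95, 0.75)  # assumed baseline vs. target rate
n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided")
print(round(n_per_group))  # patients per group under these simplified assumptions
```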


Relevance for Future Educational Advances: This study should be applauded because it is one of only a few educational research publications that focused on clinically oriented outcome measures rather than responses on a survey or a written examination.

Fernandez, R, Pearce, M, Grand, JA, et al. Evaluation of a computer-based educational intervention to improve medical teamwork and performance during simulated patient resuscitations. Crit Care Med 2013;41:2551–62.

Objectives: The goal of this study was to evaluate the efficacy of a computer-based teamwork process training (cTPT) intervention on teamwork and patient care performance during simulated patient resuscitations.

Methods: This was a single-institution, prospective, blinded, randomized case–control study of teamwork and patient care performance in group simulations with the cTPT intervention versus “placebo” training. Medical students and EM residents were placed in teams, which were randomly exposed to a narrated PowerPoint presentation on principles and examples of teamwork (intervention) or to a more generic presentation on teams in health care (placebo control). Recordings of the simulations were rigorously scored for expected elements of teamwork and patient care by trained observers blinded to the team randomization.

Results: A total of 230 subjects participated in 45 teams that were scored for teamwork and patient care. The scores were adjusted for the level of experience of the team members and compared. Teams assigned to the cTPT intervention showed a statistically significant 10% improvement in both teamwork and patient care metrics compared to the placebo group.

Strengths of the Study: This methodologically rigorous study demonstrated that a low-cost, easily disseminated computer-based training module could positively affect teamwork performance and patient care in a simulation model. The study is highly relevant to any EM training program, and the intervention is easy and practical to implement. While it can only be inferred that the effect would translate to improved clinical care, and the duration of the effect was not studied, the intervention can be implemented with minimal cost and no harm, making almost any benefit worthwhile.

Relevance for Future Educational Advances: This study demonstrates a method for testing the effectiveness of an educational intervention on teamwork and patient care in a simulation laboratory. It also provides a proven, low-cost, easily implemented training module in these domains for EM residency programs.

Ilgen, JS, Bowen, JL, McIntyre, LA, et al. Comparing diagnostic performance and the utility of clinical vignette–based assessment under testing conditions designed to encourage either automatic or analytic thought. Acad Med 2013;88:1545–51.

Objectives: The objective was to compare emergency practitioners’ diagnostic performance at different levels of experience when responding to clinical vignettes using their first impression versus a more deliberate thought process.


Methods: This was a multicenter, computer-based, randomized study built on the results of several pilot projects. Clinicians (medical students, residents, faculty) were randomized to diagnose a series of simple and complex clinical vignettes under one of two instructional conditions: entering either just a first impression or a directed search, in which they were instructed to summarize the case, list probable diagnoses in order of likelihood, and list less probable but important diagnoses to consider. The authors analyzed diagnostic accuracy by instructional condition (first impression vs. directed search), vignette complexity, and clinician experience, and compared clinician results to their USMLE board scores.

Results: A total of 393 participants from EM and internal medicine, representing approximately 10% of the eligible clinicians from eight medical centers, completed the Web-based assessment. Diagnostic accuracy correlated positively with experience and inversely with case complexity, but did not vary significantly between instructional conditions (first impression vs. directed search). The clinicians directed to give first impressions completed the assessment significantly more quickly than the directed search group, with no difference in accuracy. The authors note that “this investigation suggests that instructions to trust one’s first impressions result in similar diagnostic accuracy when compared with instructions to consider clinical information in a structured and systematic fashion. These results run counter to the assertion that encouraging clinicians to focus their thinking toward analytic processes is likely to reduce diagnostic errors.”

Strengths of the Study: This was a methodologically rigorous, large-scale, multicenter study, built on several prior studies. It is highly relevant to medical education and especially to EM, where diagnostic accuracy and efficiency are both crucial. The results were thoroughly analyzed and explained.

Relevance for Future Educational Advances: This study presents an innovative and effective way to study the diagnostic process to better understand how clinicians think, and it can be helpful when teaching diagnostic reasoning.

Jordan, J, Jalali, A, Clarke, S, Dyne, P, Spector, T, Coates, W. Asynchronous vs. didactic education: it’s too early to throw in the towel on tradition. BMC Med Educ 2013;13:105.

Objectives: The objective of this study was to compare live didactic lectures to asynchronous podcast lectures in terms of fourth-year medical students’ performance on a medical knowledge examination immediately postcourse and 10 weeks later.

Methods: This single-center, prospective, observational, quasi-experimental study of senior medical students compared pretest, immediate posttest, and delayed posttest knowledge on acute care topics. The tests were validated for comparable difficulty using EM interns. Content was delivered using either a traditional lecture format (shock, acute abdomen, dyspnea, field trauma) or asynchronous podcast lectures (chest pain, electrocardiography interpretation, pain management, trauma).


Results: With a 92% participation rate (44 of 48), information delivered by traditional didactic lectures resulted in higher mean gain scores than asynchronous lectures (28% [SD 18%] vs. 10% [SD 23%]), a mean difference of 18% (95% CI = 10.4% to 26.5%). Ten weeks after the course, retention testing showed similar mean score attrition for didactic versus asynchronous lectures (–15% vs. –18%).

Strengths of the Study: The two strengths of this educational study address often-criticized elements of studies assessing medical knowledge. The first was the careful validation and blueprinting of the pre- and posttests to ensure content equivalency, using separate reference learners (EM interns). The second was the assessment of not only immediate knowledge retention, but also delayed retention 10 weeks later. Appropriate limitations were identified, including consideration of a ceiling effect, whereby the students scored 68% and 70% for didactic versus asynchronous content. The reported greater score gains for didactic content may have occurred because the pretest scores for didactic versus asynchronous content were discrepant, at 40% versus 62%, respectively.

Relevance for Future Educational Advances: This publication cautions against the rapid adoption and promotion of asynchronous material in medical education, despite millennial learner preference, especially for novice learners such as medical students. These learners may need more guided facilitation and a more structured framework for knowledge acquisition.

Chan, T, Orlich, D, Kulasegaram, K, Sherbino, J. Understanding communication between emergency and consulting physicians: a qualitative study that describes and defines the essential elements of the emergency department consultation-referral process for junior learners. CJEM 2013;15:42–51.

Background: In North America, the consultation rate for EPs ranges from 38% to 40% for both inpatient and outpatient referrals. Despite the frequency of consultations, the preponderance of the literature is opinion-based, and only one study has attempted to clarify the specifics of the interaction. No study has incorporated the opinions of residents. The objective of this qualitative study was to define the important elements of ED consultation requests and to develop a model for the process.

Methods: A convenience sample of attending physicians and residents from EM, internal medicine, and general surgery were the subjects of a mixed-methods study using semistructured interviews of attending physicians and semistructured, moderated focus group discussions with residents. The discussions centered on the necessary components of a consultation and how junior learners should be taught consultation. Two investigators independently reviewed transcripts of the interviews and focus groups and used grounded theory to generate an index of themes until saturation was reached. Disagreements were resolved by consensus, and 30% of transcripts were coded in duplicate to determine agreement.

Results: Sixty-one participants (30 attending staff and 31 residents) were recruited. There were 46 important themes, which were categorized into four key components (themes around content or elements adding to the structure of consultation requests) and two modifiers (subthemes that altered the behavior of either the emergency or consulting physician).


While there were differences between specialties and between residents and attending physicians, there was overall agreement as to the components a consultation should include and how feedback should be provided to residents by attending physicians or senior residents during the consultation. These components were organized into a simple framework (PIQUED): (P)reparation before the encounter, such as the initial workup, management, and review of key findings; (I)dentification of involved parties, such as the EM and consulting physicians, the patient, and patient case specifics; (Q)uestions about the clinical case and answering consultant questions; (U)rgency of the request, with explanation of reasons; (E)ducational modifications if the consultant is more senior; and (D)ebrief and Discuss, providing feedback on the case. The authors noted that this model was developed in an academic setting with the objective of teaching junior residents to interact with consultants and may not generalize to other settings.

Strengths of the Study: The authors developed an evidence-based model that can be used to structure a consultation encounter and teach junior residents an effective format for consultations. Using this model may alleviate much of the uncertainty felt by referring and consulting physicians, avoid misunderstandings and frustration, and lead to better communication and thus patient safety.

Relevance for Future Educational Advances: This study demonstrates the use of qualitative analysis to understand the perceptions and behaviors of two cohorts of study subjects. The adoption of the resulting educational materials to enhance resident consultations can be further studied to assess their effects on resident communication in the ED and the resulting patient care processes.

DISCUSSION

Trends in Medical Education Research in 2013

As in previous years, we sought to identify trends in research topics and characteristics of articles that exemplified outstanding education research methodology. We reviewed 43 articles, of which seven (16%) met the established criteria to be designated as superior. Fourteen (32.5%) studies were funded.10,12,15,19,21–23,30,33,34,36–38,44 Funding sources included federal grants,12,21 institutional or specialty organization support,10,22,23,33,34,36,44 foundations,15,19,30 and industry.37,38 Funded studies are prominent among the articles highlighted for superior methodology (86%).12,21,23,30,34,36 This is consistent with findings by Reed et al.,53 who noted that funded studies were more likely to be of higher quality as assessed on a validated scale.

Twenty-three of the studies that were fully reviewed (53%) appeared in EM journals; eight (19%) were published in medical education journals, and 11 (26%) were featured in journals that focus on the primary topic of research (simulation, critical care, radiology, patient safety, etc.). EM authors predominated in 23 studies (53%) and collaborated with specialists in medical education, statistics, or other clinical disciplines in 16 (37%), for a total of 39 of the 43 (91%) articles having identified EM authors.


Research study design varied this year, with an increased representation (seven studies) of qualitative or mixed-methods methodology.10,16,22,23,25,37,44 Fourteen studies (33%) employed rigorous experimental or quasi-experimental methodologies,10,12,15,18,21,24,30,31,34,36,38,41,47,48 including five (71%) of the featured articles.10,21,30,34,36

Outcome measures in the studies reviewed ranged from learner satisfaction (Kirkpatrick level 1) to improved performance (Kirkpatrick level 3).54 Although outcome measures at the highest Kirkpatrick level (level 4) are difficult to study and focus on tangible patient outcomes after training, they are what education research studies should build toward. For example, after an educational intervention on hand washing in the ED, a level 1 outcome would endorse that learners felt they understood the importance of hand washing and/or enjoyed the session. Level 2 outcomes would show an increase in knowledge of why hand washing is important and/or the ability to demonstrate how to wash one’s hands properly. A level 3 outcome would observe the subjects actually washing their hands in the ED, and a level 4 outcome would demonstrate a decrease in infection rates in ED patients over time as a result of the intervention.55

The subjects of the research were primarily resident learners (74%). Student education was featured in four articles,16,36,40,45 while students were included with other learning groups in five additional studies.28,30,33,34,39 Of all topic categories, technology predominated (20 articles), with half employing simulation11,18,30,37,41,46–49,51 as the learning modality. The primary objective was to evaluate a new curriculum in 16 (37%) articles.10,15,16,21,24,29,30,33,36,38,39,42,46,47,49,50 Learner competence continued to be a prominent research topic, studied in 14 (33%) articles.13,14,17,18,22,23,25,27,32,35,43–45,50 Pediatric topics continued to be studied at about the same rate as in 2012, with seven (16%) articles.18,21,25,33,37,40,48 Eleven articles (26%) evaluated learners’ procedural competency.11,12,24,27,32,35,37,39,46,48,49 Communication strategies were studied in three articles,22,23,25 each of which used a qualitative methodology. A new topic emerged as a trend this year in six (14%) articles,11,19,34,37,44,52 in which clinical reasoning strategies were taught or evaluated.

In our review of the 2012 articles, we included a 5-year evaluation of trends.56 The articles published in 2013 followed similar patterns, with technology and simulation remaining a predominant theme and a slight increase in the presence of qualitative research. A summary of the trends of articles published in 2013 is provided in Table 3.

Although underrepresented in numbers compared to quantitative studies, qualitative studies are necessary in medical education to serve as a foundation for identifying themes that can later generate hypotheses for standard quantitative research studies with experimental or quasi-experimental designs. Qualitative studies were most often published in medical education journals, Canadian EM journals, or topical journals. The next logical step in creating high-quality educational programs is to test a hypothesis in a single setting, which was the predominant method in the studies of 2013. Ideas for this pilot testing proceed logically from results garnered in the qualitative studies.


Table 3
Trends for the Reviewed Education Research Articles of 2013

Variable                    All Publications (n = 42)   Highlighted (n = 7)
Funding                                14                       6
Learner groups*
  Medical students                      9                       3
  Residents                            31                       5
  Other                                13                       2
Study methodology*
  Observational                        23                       1
  Experimental                         14                       5
  Qualitative                           7                       1
Topics of study*
  Technology                           20                       4
  Curriculum evaluation                16                       3
  Assessment/competence                14                       1
  EM procedures                        11                       1
  Simulation                           10                       1
  Pediatric                             7                       1
  Clinical reasoning                    6                       1

*It is possible to exceed 100% in these categories due to multiple learner categories or study topics.

EM educators are encouraged to read journals that routinely publish qualitative studies to get timely ideas for meaningful quantitative work. In 2013, eight studies were conducted in more than one setting.12,15,17,19–21,31,45 Extending a hypothesis-testing study to more than one institution allows evaluation of an educational program beyond the control of a single institution and enhances generalizability, demonstrating its effectiveness with a wide range of educators, learners, and practice conditions.

LIMITATIONS

Despite extensive searches of the English-language literature, repeated over several months, for all publications that met review criteria, these searches may have erroneously omitted high-quality studies. The exclusion criteria used to cull articles may be considered overly rigorous. For example, per the search criteria, single-site, nonvalidated surveys were not included in the review process. These criteria may have inadvertently excluded survey-based studies; however, no single-site, survey-based study published from 2008 to 2011 was ranked sufficiently high to be highlighted as a superior education study in this series over the past 4 years.

The rating metrics used in this review are adapted from accepted literature on education research review and assessment and include broadly accepted metrics for experimental studies, as well as the unique metrics associated with qualitative research design. However, the scoring methods have not been validated externally. This fact may contribute to a ranking cut-point for quality articles that is too stringent. However, given that the goal of this continuing series is to review high-quality education research pertinent to the teaching of EM, the sequential adjustments to the serial article selection process and rating methods have set the bar higher, continuing to promote the best examples of medical education research in the field of EM.


CONCLUSIONS

This critical appraisal of the emergency medicine education research literature highlights quality publications and recent trends in the field. The six quantitative studies and one qualitative study featured represent methodologically superior research published in 2013. Each contributes to the expanding field of education research while addressing methods to control, justify, or minimize the limitations that are inherent to this focus. These highlighted studies can serve as exemplary models for emergency medicine educators interested in conducting high-quality, methodologically sound education research.

References

1. LaMantia J, Deiorio NM, Yarris LM. Executive summary: education research in emergency medicine-opportunities, challenges, and strategies for success. Acad Emerg Med 2012;19:1319–22.
2. Fisher J, Lin M, Coates WC, et al. Critical appraisal of emergency medicine educational research: the best publications of 2011. Acad Emerg Med 2013;20:200–8.
3. Boyer E. Scholarship Reconsidered: Priorities of the Professoriate. Princeton, NJ: The Carnegie Foundation for the Advancement of Teaching, 1990.
4. Farrell SE, Coates WC, Kuhn GJ, Fisher J, Shayne P, Lin M. Highlights in emergency medicine medical education research: 2008. Acad Emerg Med 2009;16:1318–24.
5. Kuhn GJ, Shayne P, Coates WC, et al. Critical appraisal of emergency medicine educational research: the best publications of 2009. Acad Emerg Med 2010;17:S16–25.
6. Shayne P, Coates WC, Farrell SE, et al. Critical appraisal of emergency medicine educational research: the best publications of 2010. Acad Emerg Med 2011;18:1081–9.
7. Côté L, Turgeon J. Appraising qualitative research articles in medicine and medical education. Med Teach 2005;27:71–5.
8. Daley J, Willis K, Small R, et al. A hierarchy of evidence for assessing qualitative health research. J Clin Epidemiol 2007;60:43–9.
9. O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med 2014; epub ahead of print.
10. Abu-Laban RS, Jarvis-Selinger S, Newton L, Chung B. Implementation and evaluation of a novel research education rotation for Royal College of Physicians and Surgeons Emergency Medicine residents. CJEM 2013;15:233–6.
11. Ahn J, Kharash M, Aronwald R, et al. Assessing the Accreditation Council for Graduate Medical Education requirement for temporary cardiac pacing procedural competency through simulation. Simul Healthc 2013;8:78–83.


12. Akhtar S, Hwang U, Dickman E, Nelson BP, Morrison RS, Todd KH. A brief educational intervention is effective in teaching the femoral nerve block procedure to first-year emergency medicine residents. J Emerg Med 2013;45:726–30.
13. Aldeen AZ, Salzman DH, Gisondi MA, Courtney DM. Faculty prediction of in-training examination scores of emergency medicine residents. J Emerg Med 2013;46:390–5.
14. Aram N, Brazil V, Davin L, Greenslade J. Intern underperformance is detected more frequently in emergency medicine rotations. Emerg Med Australas 2013;25:68–74.
15. Bellows JW, Douglass K, Atilla R, Smith J, Kapur GB. Creation and implementation of an emergency medicine education and training program in Turkey: an effective educational intervention to address the practitioner gap. Int J Emerg Med 2013;6:29.
16. Bernard AW, Baladis A, Kman NE, Caterino JM, Khandelwal S. Medical student self-assessment narratives: perceived educational needs during fourth-year emergency medicine clerkship. Teach Learn Med 2013;25:24–30.
17. Beskind D, Hiller KM, Stolz U, et al. Does the experience of the writer affect the evaluative components on the standardized letter of recommendation in emergency medicine? J Emerg Med 2013;46:544–50.
18. Bloch SA, Bloch AJ. Simulation training based on observation with minimal participation improves paediatric emergency medicine knowledge, skills and confidence. Emerg Med J 2013. doi:10.1136/emermed-2013-202995
19. Boulouffe C, Doucet B, Muschart X, Charlin B, Vanpee D. Assessing clinical reasoning using a script concordance test with electrocardiogram in an emergency medicine clerkship rotation. Emerg Med J 2013;31:313–6.
20. Bounds R, Bush C, Aghera A, et al. Emergency medicine residents’ self-assessments play a critical role when receiving feedback. Acad Emerg Med 2013;20:1055–61.
21. Boutis K, Grootendorst P, Willan A, et al. Effect of the low risk ankle rule on the frequency of radiography in children with ankle injuries. CMAJ 2013;185:E731–8.
22. Chan T, Sabir K, Sanhan S, Sherbino J. Understanding the impact of residents’ interpersonal relationships during emergency department referrals and consultations. J Grad Med Educ 2013;5:576–81.
23. Chan T, Orlich D, Kulasegaram K, Sherbino J. Understanding communication between emergency and consulting physicians: a qualitative study that describes and defines the essential elements of the emergency department consultation-referral process for junior learners. CJEM 2013;15:42–51.
24. Chisholm CB, Dodge WR, Balise RR, Williams SR, Gharahbaghian L, Beraud AS. Focused cardiac ultrasound training: how much is enough? J Emerg Med 2013;44:818–22.


25. Cho CS, Delgado EM, Barg FK, Posner JC. Resident perspectives on professionalism lack common consensus. Ann Emerg Med 2014;63:61–7.
26. Cicero MV, Riera A, Northrup V, Auerbach M, Pearson K, Baum CR. Design, validity, and reliability of a pediatric resident jumpSTART disaster triage scoring instrument. Acad Pediatr 2013;13:48–54.
27. Clark TR, Brizendine EJ, Milbrandt JC, Rodgers KG. Impact of an anesthesiology rotation on subsequent endotracheal intubation success. J Grad Med Educ 2013;5:70–3.
28. Cobb T, Jeanmonod D, Jeanmonod R. The impact of working with medical students on resident productivity in the emergency department. West J Emerg Med 2013;14:585–9.
29. Drukteinis DA, O’Keefe K, Sanson T, Orban D. Preparing emergency physicians for malpractice litigation: a joint emergency medicine residency-law school mock trial competition. J Emerg Med 2013;46:95–103.
30. Fernandez R, Pearce M, Grand JA, et al. Evaluation of a computer-based educational intervention to improve medical teamwork and performance during simulated patient resuscitations. Crit Care Med 2013;41:2552–62.
31. Gugelmann H, Shofer FS, Meisel QF, Perrone J. Multidisciplinary intervention decreased the use of opioid medication discharge pack from two urban emergency departments. Am J Emerg Med 2013;31:1343–8.
32. Hafner JW, Bryant A, Huang F, Swisher K. Effectiveness of a drill-assisted intraosseous catheter versus manual intraosseous catheter by resident physicians in a swine model. West J Emerg Med 2013;14:629–32.
33. Hansen M, Cedar A, Yarris L, Spiro D, Ilgen JS, Meckler G. Development and implementation of a web-based instrument to assess management of pediatric respiratory emergencies among trainees. Pediatr Emerg Care 2013;29:1037–40.
34. Ilgen JS, Bowen JL, McIntyre LA, Banh KV, Barnes D. Comparing diagnostic performance and the utility of clinical vignette-based assessment under testing conditions designed to encourage either automatic or analytic thought. Acad Med 2013;88:1545–51.
35. Je S, Cho Y, Choi HJ, Kang B, Lim T, Kang H. An application of the learning curve-cumulative summation test to evaluate training for endotracheal intubation in emergency medicine. Emerg Med J 2013. doi:10.1136/emermed-2013-202470
36. Jordan J, Jalali A, Clarke S, Dyne P, Spector T, Coates W. Asynchronous versus didactic education: it’s too early to throw in the towel on tradition. BMC Med Educ 2013;13:105.
37. Kamdar G, Kessler DO, Tilt L, et al. Qualitative evaluation of just-in-time simulation-based learning. Simul Healthc 2013;8:43–8.
38. Kanaan Y, Knoepp UD, Kelly AM. The influence of education on appropriateness rates for CT pulmonary angiography in emergency department patients. Acad Radiol 2013;20:1107–14.


39. Moak JH, Rajkumar JS, Woods WA. The wire is really easy to see (WIRES): sonographic visualization of the guidewire by novices. CJEM 2013;15:18–23.
40. Nagler J, Pina C, Weinter DL, Nagler A, Monoteaux MC, Bachur RG. Use of an automated case log to improve trainee evaluations on a pediatric emergency medicine rotation. Pediatr Emerg Care 2013;29:314–8.
41. Patterson MD, Geis GL, LeMaster T, Wears RL. Impact of multidisciplinary simulation-based training on patient safety in a paediatric emergency department. BMJ Qual Saf 2013;22:383–93.
42. Pourmand A, Lucas R, Nourace M. Asynchronous web-based learning, a practical method to enhance teaching in emergency medicine. Telemed Health 2013;19:169–72.
43. Ryan J, Barias D, Pollack S. The relationship between faculty performance assessment and results on the in-training examination for residents in an emergency medicine training program. J Grad Med Educ 2013;5:582–6.
44. Schubert CC, Denmark K, Crandall B, Grome A, Pappas J. Characterizing novice-expert differences in macrocognition: an exploratory study of cognitive work in the emergency department. Ann Emerg Med 2013;61:96–109.
45. Sherbino J, Kulasegaram K, Worster A, Norman GR. The reliability of encounter cards to assess the CanMEDS roles. Adv Health Sci 2013;18:987–96.
46. Shokoohi H, Boniface K, McCarthy M, et al. Ultrasound-guided peripheral intravenous access program is associated with a marked reduction in central venous catheter use in noncritically ill emergency department patients. Ann Emerg Med 2013;61:198–203.
47. Springer R, Moh J, Shusdock I, Brautigam R, Donahue S, Butler K. Simulation training in critical care: does practice make perfect? Surgery 2013;154:345–50.


48. Sylvia MJ, Maranda L, Harris KL, Thompson J, Welsh BM. Comparison of success rates using video laryngoscopy versus direct laryngoscopy by residents during a simulated pediatric emergency. Simul Healthc 2013;8:155–61.
49. Tobin CD, Clark CA, McEvoy MD, et al. An approach to moderate sedation simulation training. Simul Healthc 2013;8:114–23.
50. Visconti A, Gaeta T, Cabezon M, Briggs W, Pyle M. Focused board intervention (FBI): a remediation program for written board preparation and the medical knowledge core competency. J Grad Med Educ 2013;5:464–7.
51. Warrington SJ, Beeson MS, Fire FL. Are simulation stethoscopes a useful adjunct for emergency residents’ training on high-fidelity mannequins? West J Emerg Med 2013;14:275–7.
52. Wiswell J, Tsao K, Bellolio F, Hess EP, Cabrera D. “Sick” or “not-sick”: accuracy of System 1 diagnostic reasoning for the prediction of disposition and acuity in patients presenting to an academic ED. Am J Emerg Med 2013;31:1448–57.
53. Reed DA, Cook DA, Beckman TJ, Levine RB, Kern DE, Wright SM. Association between funding and quality of published medical education research. JAMA 2007;298:1002–9.
54. Hutchinson L. Evaluating and researching the effectiveness of educational interventions. Br Med J 1999;318:1267–9.
55. Kirkpatrick DL. Evaluation of training. In: Craig RL, ed. Training and Development Handbook: A Guide to Human Resource Development. New York, NY: McGraw-Hill, 1976.
56. Lin M, Fisher J, Coates WC, et al. Critical appraisal of emergency medicine education research: the best publications of 2012. Acad Emerg Med 2014;21:322–33.
