Acta Ophthalmologica 2015

Simulation-based certification for cataract surgery

Ann Sofia Skou Thomsen,1,2 Jens Folke Kiilgaard,1 Hadi Kjærbo,1 Morten la Cour1 and Lars Konge2

1 Department of Ophthalmology, Glostrup University Hospital, Glostrup, Denmark
2 Centre for Clinical Education, Centre for HR, Capital Region of Denmark, Copenhagen, Denmark
ABSTRACT. Purpose: To evaluate the EyeSi™ simulator with regard to assessing competence in cataract surgery. The primary objective was to explore all simulator metrics to establish a proficiency-based test with solid evidence. The secondary objective was to evaluate whether the skill assessment was specific to cataract surgery. Methods: We included 26 ophthalmic trainees (no cataract surgery experience), 11 experienced cataract surgeons (>4000 cataract procedures) and five vitreoretinal surgeons. All subjects completed 13 different modules twice. Simulator metrics were used for the assessments. Results: Total module score on seven of 13 modules showed significant discriminative ability between the novices and experienced cataract surgeons. The intermodule reliability coefficient was 0.76 (p < 0.001). A pass/fail level was defined from the total score on these seven modules using the contrasting-groups method. The test had an overall ability to discriminate between novices and experienced cataract surgeons, as 21 of 26 novices (81%) versus one of 11 experienced surgeons (9%) did not pass the test. The vitreoretinal surgeons scored significantly higher than the novices (p = 0.006), but not significantly lower than the experienced cataract surgeons (p = 0.32). Conclusion: We have established a performance test, consisting of seven modules on the EyeSi™ simulator, which possesses validity evidence. The test is a useful and reliable tool for assessment of both cataract surgical and general microsurgical skills in vitro.

Key words: assessment – cataract surgery – proficiency-based training – standard setting – virtual reality simulation

Acta Ophthalmol. 2015: 93: 416–421 © 2015 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd

doi: 10.1111/aos.12691

Introduction

Cataract surgery, mostly done by phacoemulsification, is one of the most common surgical procedures performed in Western countries (Solborg Bjerrum et al. 2014). Exquisite hand–eye coordination is required to perform the procedure, and learning curves for novices are long (Randleman et al. 2007). Virtual reality simulation (VRS) training can offer a safe environment for the novice surgeon during the first part of the learning curve, and the use of VRS is increasing in cataract surgery training (Gillan & Saleh 2013; Saleh et al. 2013; Bergqvist et al. 2014; Kloek et al. 2014). Recent retrospective studies have shown significant beneficial effects of VRS cataract surgery training on subsequent operation duration (Pokroy et al. 2013), rate of errant capsulorhexes (McCannel et al. 2013), and mean phaco time and percentage phaco power in the operating room (Belyea et al. 2011). However, the amount of training needed on a cataract surgery VRS before reaching proficiency remains uncertain.

Learning curves are highly individual, and rates of skill acquisition vary widely between novices (Gallagher et al. 2005). Setting a fixed time limit for training is therefore not appropriate. Instead, a proficiency-based test is needed to ensure that novices are sufficiently qualified before moving on to in vivo surgery (Ward et al. 2006; Grantcharov & Reznick 2008). A proficiency-based test creates the opportunity for individualization – as well as evaluation – of the applied training programme. Furthermore, previous studies have shown that training to criterion enhances learning and retention (Seymour et al. 2002; Gershuni et al. 2013).

The important decision of when a trainee is ready to perform procedures on patients has to be based on a test with solid evidence. Validity and reliability are central concepts in performance testing, and a test without them has limited use. In modern validity theory, construct validity encompasses all types of validity, describing the extent to which a test measures what it claims to measure – in this regard, cataract surgery skills. The framework for validity used by the American Educational Research Association requires evidence regarding five aspects of the assessment (American Educational Research Association 1999). These aspects all fall under the concept of construct validity and include response process (e.g. quality control), internal structure (i.e. reliability), relations with other variables (e.g. performance related to surgical experience) and consequences (e.g. impact of the test). The fifth aspect, the content of the test, is also of great importance; to predict ability within a certain area, the assessment should be representative, in this case of a cataract surgery curriculum (Ackerman & Beier 2006). In this regard, it is still unknown whether the content of a cataract surgery VRS measures specific cataract surgery skills or general microsurgical skills.

It is possible to practise phacoemulsification on the EyeSi™ simulator, which is used in educational facilities throughout Europe and the USA. The EyeSi™ simulator consists of both abstract training modules and modules representing each step of the phacoemulsification procedure. Several studies have looked into the discriminative ability of single modules on the simulator, but none has investigated a broader range of modules or established pass/fail scores based on credible standard-setting methods (Privett et al. 2010; Le et al. 2011; Selvander & Åsman 2013; Spiteri et al. 2014). Our aim was to investigate all modules on a cataract surgery VRS, selecting those with discriminative ability, to design an evidence-based performance test. Furthermore, we wanted to investigate whether the simulator content is specific to cataract surgery or whether the skills tested represent general microsurgical skills.

Methods

The trial was conducted as a prospective controlled interventional study. It was carried out at the Centre for Clinical Education, Capital Region of Denmark. The phacoemulsification interface on the EyeSi™ simulator (VRmagic, version 2.8.10) was used for the study. The test was constructed by selecting modules corresponding to learning objectives in a cataract surgery curriculum. All modules on the simulator were included in the study except one, representing an alternative approach (chopping) not deemed relevant for the young trainee. Determination of the difficulty level of each module was based on results from previous validation studies on the EyeSi™ simulator (Mahr & Hodge 2008; Privett et al. 2010; Selvander & Åsman 2012, 2013; Le et al. 2011). Difficulty levels of modules not previously investigated were chosen based on a pilot study performed by two novices, one experienced cataract surgeon (HK) and one experienced vitreoretinal surgeon (MLC).

Three groups of participants were defined: (i) novices, (ii) experienced cataract surgeons and (iii) experienced vitreoretinal surgeons. Inclusion criteria for each of the groups were as follows: (i) doctors employed at an ophthalmology department without any cataract surgery experience, (ii) surgeons with >4000 cataract procedures and (iii) surgeons with >200 vitreoretinal procedures and limited cataract surgery experience. All novices were recruited from the Department of Ophthalmology, Glostrup University Hospital. Cataract and vitreoretinal surgeons were recruited from ophthalmology departments or private specialist clinics in the Zealand region of Denmark. All surgeons had to be surgically active at the time of the study, that is, operating at least one day per week. Study size: to justify the assumption of normally distributed test scores, we aimed for more than 10 experienced cataract surgeons (Bloch & Norman 2012). As novices perform less consistently (Magill 2007), we recruited all ophthalmic trainees meeting the inclusion criteria. Vitreoretinal surgeons were difficult to recruit, resulting in a convenience sample.

This study adhered to the tenets of the Declaration of Helsinki. All participants signed informed consent and completed a questionnaire regarding demographic data and experience (Table 1). Stereoacuity was measured using the TNO test (Lameris Ootech BV, 16th edition). All participants had to complete two test sessions, each session consisting of all 13 modules, preceded by 10 min of familiarization with the simulator. A maximum of two hours of testing was allowed. If only one session was completed within this time limit or if the participant was exhausted, a new session was scheduled within the same week to finish the testing. During testing, one author (AT) gave instructions to all participants. Instructions were given orally in a standardized manner based on a written document to ensure that the same instructions were given to all participants. Only the technical aspects of performance were intended to be assessed. Between 21 and 33 different outcomes were assessed for each module and were categorized into four domains by the simulator software: target achievement, efficiency, instrument handling and tissue treatment. The outcome for this study was the total module score, calculated by the simulator software from all of the above-mentioned domains, with a maximum value of 100 points.

SPSS software version 19.0 (SPSS, Inc., Chicago, IL, USA) was used for statistical analysis. When tested for normality, data on novices and experienced cataract surgeons were found to be normally distributed. Modules where novices had a higher mean module score than experienced cataract surgeons were excluded. One-tailed independent samples t-tests were used to investigate discriminative ability between novices and experienced cataract surgeons. Modules without statistically significant discriminative ability at the 5% level were removed. Intermodule reliability analysis was performed using intraclass correlation coefficients (average measures, absolute agreement definition). Module scores for the remaining modules were summed to a final test score. Mean test scores for novices and experienced cataract surgeons were compared using an independent samples t-test, whereas intergroup comparisons with the vitreoretinal surgeons were analysed using the Mann–Whitney U-test. The contrasting-groups method was used to calculate a proficiency level: the pass/fail score was determined at the intersection between the distributions of test scores obtained from the novices and the experienced cataract surgeons (Downing & Yudkowsky 2009). The Ethics Committee of the Capital Region of Denmark ruled that approval was not required for this study (protocol no. 210170).
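For illustration, the contrasting-groups cutoff can be computed numerically as the point where the two fitted normal densities intersect. The following is a minimal sketch, not the authors' analysis code; the function name is ours, and it assumes the group scores are well described by normal distributions, as the method presupposes:

```python
# Contrasting-groups standard setting: fit a normal distribution to each
# group's test scores and place the pass/fail cutoff where the two
# probability density functions cross (cf. Downing & Yudkowsky 2009).
from scipy.optimize import brentq
from scipy.stats import norm

def contrasting_groups_cutoff(mean_fail, sd_fail, mean_pass, sd_pass):
    """Score at which the 'fail' (novice) and 'pass' (experienced)
    normal densities intersect, searched between the two group means."""
    def density_gap(x):
        return norm.pdf(x, mean_fail, sd_fail) - norm.pdf(x, mean_pass, sd_pass)
    # For reasonably separated groups, the novice density dominates near its
    # own mean and the experienced density near its mean, so exactly one
    # sign change (hence one root) lies between the two means.
    return brentq(density_gap, mean_fail, mean_pass)
```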

Results

A total of 42 participants were included in the study: 26 novices, 11 experienced cataract surgeons and five vitreoretinal surgeons. All 42 participants completed the study. There were no significant differences in dexterity or stereoacuity between the three groups (Table 1). In the group of novices, 15 (58%) had prior experience with intraocular injections (mean number of injections 1400, range 15–2500), and 9 (35%) had prior microsurgical wet lab experience with a median of 14 training hours (range 12–500 hr). Of the included vitreoretinal surgeons, only three had prior experience with cataract surgery. One had performed 250 cataract surgeries 28 years ago. The two other vitreoretinal surgeons had performed 5 and 10 cataract surgeries, respectively, 9 and 2 years before inclusion in the study.

There was a significant improvement in module scores from the first to the second session in all groups, including the experienced cataract surgeons (Table 2). We therefore decided to use data only from the second session, as it was apparent that the first session served as additional familiarization with the simulator.

Table 1. Group characteristics.

|                                   | Novices (n = 26) | Vitreoretinal surgeons (n = 5) | Cataract surgeons (n = 11) |
|-----------------------------------|------------------|--------------------------------|----------------------------|
| Gender, male (n)                  | 12 (46%)         | 5 (100%)                       | 8 (73%)                    |
| Age (median)                      | 33               | 49                             | 53                         |
| Dexterity, right-handed (n)       | 23 (88%)         | 5 (100%)                       | 11 (100%)                  |
| TNO, median (range)               | 60 (15;480)      | 60 (30;60)                     | 60 (30;240)                |
| Phacoemulsification*              |                  |                                |                            |
|   Total                           | 0                | 50 (0;250)                     | 9820 (4000;24000)          |
|   Last 12 months                  | 0                | 0                              | 880 (400;1500)             |
|   Time since last surgery (years) | –                | 13 (2;28)                      | 0                          |
| Vitreoretinal procedures*         |                  |                                |                            |
|   Total                           | 0                | 3180 (400;8000)                | 20 (0;200)                 |
|   Last 12 months                  | 0                | 335 (200;500)                  | 0                          |
| Virtual reality simulation†       | 0.3              | 1.0                            | 0.0                        |

* Mean number of surgeries (range). † Mean number of hours spent on a virtual reality simulator.

Six modules could not discriminate between novices and experienced cataract surgeons, whereas the remaining seven of the 13 modules showed statistically significant discriminative ability during both sessions (Table 2). Those seven modules also showed good reliability (ICC 0.76, p < 0.001) and were included in the final performance test. Mean test scores (SD) were 333 (96), 462 (68) and 497 (52) for novices, vitreoretinal surgeons and experienced cataract surgeons, respectively (Fig. 1). There were statistically significant differences in test scores between novices and experienced cataract surgeons (p < 0.001) and between novices and vitreoretinal surgeons (p = 0.006). There was no statistically significant difference in mean test score between the vitreoretinal surgeons and the experienced cataract surgeons (p = 0.32). The pass/fail score was determined at 422 (Fig. 2). Consequence analysis showed that the test had an overall ability to distinguish between novices and experienced cataract surgeons, as 21 (81%) of the novices did not pass the proficiency test versus one (9%) of the experienced cataract surgeons (Fig. 3). The template for the assessment tool is available online (Fig. S1).
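Plugging the reported group means and standard deviations into the illustrative sketch from the Methods section reproduces the published standard and roughly predicts the observed misclassifications (again a sketch, not the authors' code; it reuses the hypothetical contrasting_groups_cutoff function defined above):

```python
from scipy.stats import norm

# Reported in the Results: novices 333 (SD 96), experienced surgeons 497 (SD 52).
cutoff = contrasting_groups_cutoff(333, 96, 497, 52)
print(round(cutoff))                  # -> 422, the published pass/fail score

# Expected misclassification under the fitted normal distributions:
print(26 * norm.sf(cutoff, 333, 96))  # ~4.6 novices expected to pass (5 observed)
print(11 * norm.cdf(cutoff, 497, 52)) # ~0.8 experienced expected to fail (1 observed)
```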

Table 2. Construct validity: Overview of all tested modules, including module scores, mean (SD), for experienced cataract surgeons and novices.

| Module | Task description               | Task type  | Level | Total no. of levels | Novice, session 1 | Novice, session 2 | Experienced, session 1 | Experienced, session 2 | p-value (Diff1) | p-value (Diff2) |
|--------|--------------------------------|------------|-------|---------------------|-------------------|-------------------|------------------------|------------------------|-----------------|-----------------|
| 1      | Navigation training            | Abstract   | 2     | 3                   | 76 (14)           | 84 (12)           | 85 (12)                | 91 (11)                | 0.028           | 0.055           |
| 2      | Intracapsular navigation       | Abstract   | 2     | 3                   | 41 (30)           | 69 (24)           | 86 (14)                | 94 (4)                 | 0.000           | 0.000*          |
| 3      | Antitremor training            | Abstract   | 4     | 7                   | 30 (28)           | 40 (25)           | 52 (36)                | 62 (29)                | 0.026           | 0.014*          |
| 4      | Intracapsular antitremor       | Abstract   | 2     | 5                   | 9 (36)            | 22 (26)           | 36 (38)                | 53 (30)                | 0.022           | 0.002*          |
| 5      | Forceps training               | Abstract   | 4     | 4                   | 41 (26)           | 57 (25)           | 61 (25)                | 86 (7)                 | 0.022           | 0.000*          |
| 6      | Bimanual training              | Abstract   | 5     | 5                   | 53 (12)           | 57 (11)           | 60 (13)                | 66 (13)                | 0.046           | 0.020*          |
| 7      | Cracking and chopping training | Abstract   | 8     | 8                   | 33 (38)           | 49 (42)           | 65 (35)                | 60 (33)                | 0.011           | 0.205           |
| 8      | Phaco training                 | Abstract   | 2     | 3                   | 46 (21)           | 51 (22)           | 47 (24)                | 52 (20)                | 0.479           | 0.441           |
| 9      | Capsulorhexis                  | Procedural | 1†    | 3                   | 33 (30)           | 35 (27)           | 62 (26)                | 73 (18)                | 0.004           | 0.000*          |
| 10     | Hydrodissection                | Procedural | 4     | 8                   | 72 (10)           | 76 (22)           | 62 (25)                | 73 (16)                | 0.105           | 0.362           |
| 11     | Phaco divide and conquer       | Procedural | 5     | 8                   | 31 (29)           | 53 (23)           | 54 (20)                | 64 (12)                | 0.011           | 0.030*          |
| 12     | Irrigation and aspiration      | Procedural | 1     | 5                   | 47 (35)           | 55 (35)           | 66 (29)                | 69 (27)                | 0.047           | 0.100           |
| 13     | IOL insertion                  | Procedural | 2     | 4                   | 45 (33)           | 66 (23)           | 49 (25)                | 77 (14)                | 0.362           | 0.079           |

The p-values refer to the statistical difference in module score between the two groups in the first (Diff1) and second (Diff2) session. Navigation: ability to move instruments. Antitremor: ability to stabilize instruments. Intracapsular: anatomical description. Cracking and chopping training: bimanual training procedure. Phaco training: hand–foot coordination module. Modules 9–13: specific cataract surgery procedures.
* Modules with statistically significant discriminative ability between experienced cataract surgeons and novices at the 5% level in both sessions.
† Capsulorhexis: weak zonula; no initial tear.


[Figure: box plots of virtual-reality simulator test scores (0–600) for novices, experienced cataract surgeons and vitreoretinal surgeons.]

Fig. 1. Score distribution between groups: Box-plot showing outliers, minimum, first quartile, median, third quartile and maximum.

[Figure: overlapping normal distributions of test scores (0–600) for novices and experienced cataract surgeons, intersecting at the pass/fail score of 422.]

Fig. 2. Setting a pass/fail standard using the contrasting-groups method: Proficiency level defined by the intersection between score distributions for novices and experienced cataract surgeons. The curves show the normal distribution of test scores using the mean test score and standard deviation for each group.

Discussion

We have designed an objective test of competence in cataract surgery on the EyeSi™ simulator based on validity evidence and established a pass/fail standard to ensure proficiency. The pass/fail standard represents the level at which trainees are expected to have received sufficient training on the simulator. To our knowledge, this is the first study to consider all modules on the EyeSi™ simulator, using evidence-based methods to evaluate the assessment, thereby addressing the need for a proficiency criterion to use in a pre-patient training programme in cataract surgery (Grantcharov & Reznick 2008).

We found validity evidence for the simulator metrics on seven modules of the EyeSi™ simulator with regard to content, discriminative ability, reliability and consequences (American Educational Research Association 1999). Thus, several validity aspects from the framework used by the American Educational Research Association were assessed. Modules without discriminative ability might have shown construct validity if other difficulty levels had been chosen. Ideally, all the specific procedural modules should have been included in the test to reflect a cataract surgery curriculum, whereas exclusion of the abstract (non-procedural) modules has no bearing on the content (Ackerman & Beier 2006). However, the two procedural modules included in the test (capsulorhexis and phaco divide-and-conquer) were previously rated the two most difficult steps in phacoemulsification by trainee surgeons, thus strengthening the usefulness of the test (Dooley & O'Brien 2006).

We found good reliability between the seven modules included in the test (ICC 0.76). The required level of reliability should reflect the purpose of the assessment; a reliability coefficient from 0.70 to 0.79 is appropriate for formative assessment (i.e. feedback), whereas above 0.8 is required for summative assessment (i.e. certification purposes) (Downing 2004). Applying a test twice will increase reliability (Streiner & Norman 2008). To ensure adequate reliability for the purpose of the assessment, we therefore recommend that the defined proficiency level be reached in two consecutive sessions before passing the trainee. The established pass/fail standard produced one false negative (an experienced cataract surgeon who failed) and five false positives (novices who passed). These misclassifications could be the result of one-time events, which is a further reason to require a pass in two consecutive sessions. It is also important to emphasize that the purpose of this test was to assess only technical skills in a controlled environment.
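The expected gain from requiring two sessions can be made explicit with the Spearman–Brown prophecy formula, the standard relation for doubling test length; this calculation is ours, consistent with the Streiner & Norman (2008) citation but not shown in the original:

$$R_2 = \frac{2R_1}{1+R_1} = \frac{2 \times 0.76}{1 + 0.76} \approx 0.86,$$

which lifts the predicted reliability above the 0.8 threshold cited for summative assessment.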


[Figure: individual virtual-reality test scores (0–600) for novices and experienced cataract surgeons plotted against the pass/fail standard, separating pass from fail.]

Fig. 3. Consequence analysis: Relationship between test score for study participants and the established pass/fail standard.

Becoming an expert requires more than technical skills, including cognitive and interpersonal skills (Norman et al. 2006). Also, the ability to cope with the unexpected is a characteristic of expertise. Furthermore, a significant proportion of the novices in our study had prior experience with wet lab training and/or intraocular injections under the microscope. This could have diminished the difference between the groups. Regarding generalization, it is an advantage that we included novices with ophthalmological experience rather than medical students. Administering the test to medical students would increase the reliability of the test, as the difference between medical students and experienced cataract surgeons is more pronounced (Streiner & Norman 2008). The resulting reliability, however, would not provide information about the utility of the test in the population of interest: trainees in ophthalmology. Other methods could be used to define a proficiency level. However, the applied approach is widely accepted and resulted in a credible distinction between our clearly defined groups, as shown by our consequence analysis.

We found no statistically significant difference between experienced cataract surgeons and vitreoretinal surgeons, suggesting that the simulator metrics assess general microsurgical skills. The lack of difference in test scores could also be explained by the small sample size or by previous cataract surgical experience in the group of vitreoretinal surgeons. As evident from Table 1, however, the vitreoretinal surgeons included in the study had only very limited prior experience in cataract surgery. The relatively high observed mean score for the vitreoretinal surgeons is therefore more likely due to extensive microsurgical experience. This finding is not unexpected, as both cataract and vitreoretinal surgery are microsurgical procedures, and previous studies have shown that some technical skills are transferable between surgical methods and tasks (Kwasnicki et al. 2013; Panait et al. 2014).

Spiteri and colleagues (Spiteri et al. 2014) have recently published a study on a stepwise training programme on the EyeSi™ simulator consisting of five modules at different difficulty levels. The stepwise structure of the programme seems appropriate for a curriculum, although it complicates the assessment of proficiency because fragmented outcomes of the simulator performance are evaluated. We chose to incorporate only the total score of all metrics assessed by the simulator, as this is the readily available output for the users. It is important to stress that a training programme may differ from a performance test. This means that all the modules on the EyeSi™ simulator can be used for training if they provide the student with either relevant cognitive or technical aspects of cataract surgery. Some of the simulator content is highly cataract surgery specific, such as the placement of the intraocular lens, which may supplement the learning of technical skills with theoretical knowledge. This was not investigated in this study, as detailed instructions about the content of the modules were given by an instructor.

Our aim was to provide a simple assessment tool for technical skills in cataract surgery. The pass/fail standard should not be interpreted as a single measure of proficiency; rather, it should be used in conjunction with other measures, that is, assessment of cognitive competency. Ethical issues make simulation-based learning inevitable, as traditional practice on patients no longer seems acceptable. Evidence indicates that operative performance improves when virtual reality simulation is introduced into the cataract surgery curriculum (Belyea et al. 2011; McCannel et al. 2013; Pokroy et al. 2013). Furthermore, an associated cost reduction in residency programmes has been shown (Lowry et al. 2013). Indeed, there is a need for an assessment tool and an associated criterion that objectively and in a standardized manner establish when the trainee has achieved proficiency within the area. The established pass/fail standard represents an accessible and meaningful use of the assessment instrument – the EyeSi™ simulator – both for possible future certification/recertification and for evaluation of applied training programmes in in vitro cataract surgery. Feudner et al. (2009) found that training on a VR simulator improved performance of capsulorhexis in the wet lab, but direct transfer to real operations remains to be shown. Experienced surgeons perform significantly better than ophthalmic trainees on a set of modules on the virtual reality simulator, and future studies should explore whether simulator training to the predefined performance level directly improves actual cataract surgical skills.

References

Ackerman PL & Beier ME (2006): Methods for studying the structure of expertise: psychometric approaches. In: Ericsson KA, Charness N, Hoffman RR & Feltovich PJ (eds). The Cambridge Handbook of Expertise and Expert Performance. New York: Cambridge University Press 147–166.
American Educational Research Association (1999): Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Belyea DA, Brown SE & Rajjoub LZ (2011): Influence of surgery simulator training on ophthalmology resident phacoemulsification performance. J Cataract Refract Surg 37: 1756–1761.
Bergqvist J, Person A, Vestergaard A & Grauslund J (2014): Establishment of a validated training programme on the EyeSi cataract simulator. A prospective randomized study. Acta Ophthalmol 92: 629–634.
Bloch R & Norman G (2012): Generalizability theory for the perplexed: a practical introduction and guide: AMEE Guide No. 68. Med Teach 34: 960–992.
Dooley IJ & O'Brien PD (2006): Subjective difficulty of each stage of phacoemulsification cataract surgery performed by basic surgical trainees. J Cataract Refract Surg 32: 604–608.
Downing SM (2004): Reliability: on the reproducibility of assessment data. Med Educ 38: 1006–1012.
Downing SM & Yudkowsky R (2009): Assessment in health professions education. New York: Routledge.
Feudner EM, Engel C, Neuhann IM, Petermeier K, Bartz-Schmidt KU & Szurman P (2009): Virtual reality training improves wet-lab performance of capsulorhexis: results of a randomized, controlled study. Graefes Arch Clin Exp Ophthalmol 247: 955–963.
Gallagher AG, Ritter EM, Champion H, Higgins G, Fried MP, Moses G, Smith CD & Satava RM (2005): Virtual reality simulation for the operating room: proficiency-based training as a paradigm shift in surgical skills training. Ann Surg 241: 364–372.
Gershuni V, Woodhouse J & Brunt LM (2013): Retention of suturing and knot-tying skills in senior medical students after proficiency-based training: results of a prospective, randomized trial. Surgery 154: 823–829.
Gillan SN & Saleh GM (2013): Ophthalmic surgical simulation: a new era. JAMA Ophthalmol 131: 1623–1624.
Grantcharov TP & Reznick RK (2008): Teaching procedural skills. BMJ 336: 1129–1131.
Kloek CE, Borboli-Gerogiannis S, Chang K, Kuperwaser M, Newman LR, Marie AM & Loewenstein JI (2014): A broadly applicable surgical teaching method: evaluation of a stepwise introduction to cataract surgery. J Surg Educ 71: 169–175.
Kwasnicki RM, Aggarwal R, Lewis TM, Purkayastha S, Darzi A & Paraskeva PA (2013): A comparison of skill acquisition and transfer in single incision and multi-port laparoscopic surgery. J Surg Educ 70: 172–179.
Le TDB, Adatia FA & Lam WC (2011): Virtual reality ophthalmic surgical simulation as a feasible training and assessment tool: results of a multicentre study. Can J Ophthalmol 46: 56–60.
Lowry EA, Porco TC & Naseri A (2013): Cost analysis of virtual-reality phacoemulsification simulation in ophthalmology training programs. J Cataract Refract Surg 39: 1616–1617.
Magill RA (2007): Motor learning and control: concepts and applications. New York: McGraw-Hill.
Mahr MA & Hodge DO (2008): Construct validity of anterior segment anti-tremor and forceps surgical simulator training modules: attending versus resident surgeon performance. J Cataract Refract Surg 34: 980–985.
McCannel CA, Reed DC & Goldman DR (2013): Ophthalmic surgery simulator training improves resident performance of capsulorhexis in the operating room. Ophthalmology 120: 2456–2461.
Norman G, Eva K, Brooks L & Hamstra S (2006): Expertise in medicine and surgery. In: Ericsson KA, Charness N, Hoffman RR & Feltovich PJ (eds). The Cambridge Handbook of Expertise and Expert Performance. New York: Cambridge University Press 339–353.
Panait L, Shetty S, Shewokis PA & Sanchez JA (2014): Do laparoscopic skills transfer to robotic surgery? J Surg Res 187: 53–58.
Pokroy R, Du E, Alzaga A, Khodadadeh S, Steen D, Bachynski B & Edwards P (2013): Impact of simulator training on resident cataract surgery. Graefes Arch Clin Exp Ophthalmol 251: 777–781.
Privett B, Greenlee E, Rogers G & Oetting TA (2010): Construct validity of a surgical simulator as a valid model for capsulorhexis training. J Cataract Refract Surg 36: 1835–1838.
Randleman JB, Wolfe JD, Woodward MW, Lynn MJ, Cherwek DH & Srivastava SK (2007): The resident surgeon phacoemulsification learning curve. Arch Ophthalmol 125: 1215–1219.
Saleh GM, Lamparter J, Sullivan PM, O'Sullivan F, Hussain B, Athanasiadis I, Litwin AS & Gillan SN (2013): The international forum of ophthalmic simulation: developing a virtual reality training curriculum for ophthalmology. Br J Ophthalmol 97: 789–792.
Selvander M & Åsman P (2012): Virtual reality cataract surgery training: learning curves and concurrent validity. Acta Ophthalmol 90: 412–417.
Selvander M & Åsman P (2013): Cataract surgeons outperform medical students in EyeSi virtual reality cataract surgery: evidence for construct validity. Acta Ophthalmol 91: 469–474.
Seymour NE, Gallagher AG, Roman SA, O'Brien MK, Bansal VK, Andersen DK & Satava RM (2002): Virtual reality training improves operating room performance: results of a randomized, double-blinded study. Ann Surg 236: 458–463.
Solborg Bjerrum S, Mikkelsen KL & la Cour M (2014): Epidemiology of 411 140 cataract operations performed in public hospitals and private hospitals/clinics in Denmark between 2004 and 2012. Acta Ophthalmol 93: 16–23.
Spiteri AV, Aggarwal R, Kersey TL, Sira M, Benjamin L, Darzi AW & Bloom PA (2014): Development of a virtual reality training curriculum for phacoemulsification surgery. Eye (Lond) 28: 78–84.
Streiner DL & Norman GR (2008): Reliability. In: Streiner DL & Norman GR (eds). Health Measurement Scales: a practical guide to their development and use. New York: Oxford University Press 167–210.
Ward P, Williams AM & Hancock PA (2006): Simulation for performance and training. In: Ericsson KA, Charness N, Hoffman RR & Feltovich PJ (eds). The Cambridge Handbook of Expertise and Expert Performance. New York: Cambridge University Press 243–262.

Received on October 17th, 2014. Accepted on January 12th, 2015. Correspondence: Ann Sofia Skou Thomsen Department of Ophthalmology Glostrup University Hospital Ndr. Ringvej 57, DK-2600 Glostrup, Denmark Tel: +45 38634700 Fax: +45 38634669 Email: [email protected] Parts of the study results have been presented at the Nordic Congress of Ophthalmology in Stockholm, August 2014. The study was funded by Fight for Sight Denmark and Synoptik Foundation. The funding organization had no role in the design or conduct of this research. All authors have completed the ICMJE uniform disclosure form (available on request from the corresponding author) and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; and no other relationships or activities that could appear to have influenced the submitted work. Statistical code and data set available from the corresponding author.

Supporting Information

Additional Supporting Information may be found in the online version of this article:
Figure S1. Template for assessment of cataract surgical proficiency on the EyeSi™ simulator.
