Social media

A novel use of Twitter to provide feedback and evaluations

Bobby Desai, Department of Emergency Medicine, University of Florida College of Medicine, Gainesville, Florida, USA

SUMMARY

Background: Inconsistencies in work schedules and faculty supervision are barriers to monthly emergency medicine (EM) resident doctor evaluations. Direct and contemporaneous feedback may be effective in providing the specific details that determine a resident's evaluation.

Objectives: To determine whether Twitter, an easy-to-use application available on the Internet via smartphones and desktops, can provide direct and contemporaneous feedback that is easily accessible, and easy to store and refer back to.

Methods: First- to third-year EM residents were administered a survey to assess their thoughts on the current monthly evaluation system. Subsequently, residents obtained a Twitter account and were instructed to follow a single general faculty Twitter account for ease of data collection. Following completion of an 8-week study period, a second survey was administered to assess resident thoughts on contemporaneous feedback and evaluations versus the traditional form.

Results: Of the 24 EM residents, 13 were available for study. A total of 220 'tweets' were provided by seven faculty members, with a mean of 11 tweets (range 8–17) per resident. The 13 residents received a total of eight formal evaluations from 19 faculty members. The second survey demonstrated that this method provided more detailed evaluations and increased the volume of feedback.

Conclusion: Contemporaneous feedback and evaluation provides a greater volume of more detailed feedback than end-of-course evaluations. Twitter is an effective and easy means of providing this feedback. Limitations included the length of the study period and the inability to involve all of the EM residents in the study.

© 2014 John Wiley & Sons Ltd. THE CLINICAL TEACHER 2014; 11: 141–145 141

INTRODUCTION

Performance feedback is essential in education, and is crucial for future development.1 Most learners agree that feedback is important to their education1,2; however, the delivery and reception of feedback are occasionally challenging. Studies show that students perceive a lack of feedback because of an actual lack of feedback, difficulties in assessing the feedback provided and not realising that feedback has occurred.3 In fact, residents are dissatisfied with current evaluation processes.4 Specific to emergency medicine, Croskerry cited case infrequency, shift changeover and shift work as problematic for feedback.5

Formal evaluations typically occur at the end of a course unit and are a summation of the feedback provided. These are 'summative' evaluations, and require the evaluator to recall specific events and any corrective feedback provided at the time.6 However, if feedback is not recognised as such, it may be difficult to recognise specific statements within an evaluation derived from that feedback. The corollary is that, in order to devise a constructive evaluation, it is important to assess how the learner processed the feedback given in a prior scenario and how they then used it in later encounters. The recollection of feedback given by evaluators may be problematic, especially if multiple evaluations are being performed at one time, and it can be assumed that such evaluations are less substantive than evaluations formulated contemporaneously. Shift work raises further problems: faculty members may work with a learner minimally, and sometimes not at all. Thus, it is difficult to assess the use of prior feedback in later encounters, and an evaluation is not helpful without prior feedback and witnessing change in response to that feedback.3

Daily evaluations provide more information about a learner's experience during clinical scenarios.7 These 'formative' evaluations are based on continuous evaluation, and eliminate two biases: 'recency', where end performance has a greater impact on the overall evaluation; and 'primacy', where first impressions override subsequent performance.6,8 Thus, in this context, the terms 'feedback' and 'evaluation' are synonymous.

Formative feedback and evaluation has several advantages. Information on performance can be disseminated rapidly: learners receive immediate feedback, changes in behaviour and performance can be implemented quickly, and improvement (or the lack thereof) is noted sooner. The delivery of formative evaluations is also important. If survey driven, questions should be kept to a minimum to ensure compliance. Secondly, data collection and analysis should be easy: the method of data collection is as important as the data themselves. Finally, how these evaluations are recorded and stored also matters. One study examined feedback provided to students via e-mail. E-mail was an effective way to disseminate information, but some responses were received only within 48 hours, rather than contemporaneously, although the amount of information obtained was deemed adequate.7

The Internet is a valuable tool for educators, and 'Web 2.0' applications facilitate information sharing and partnership.9 For this study, Twitter was ideal. Twitter is a social networking and microblogging site that allows users to send text-based 'tweets' via multiple modalities, including smartphones; in short, a tweet can be sent quickly and easily. Twitter provides significant advantages for feedback and evaluation. It is free and secure: direct messages are private, and can be sent and read only by permitted individuals. Messages are instant and restricted to 140 characters, ensuring pointed feedback, and can be collated. Thus, one can tweet about a response to specific feedback given in prior tweets, which can ultimately provide an effective evaluation. This study aims to evaluate this novel use of Twitter.
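The workflow described above, short timestamped messages collated per learner into an end-of-block evaluation, can be sketched in a few lines of Python. This is an illustrative model only, not the study's actual tooling: the `FeedbackLog` class, its method names and the sample feedback entries are all hypothetical, and the 140-character limit reflects Twitter's constraint at the time of the study.

```python
from dataclasses import dataclass, field
from datetime import datetime

TWEET_LIMIT = 140  # Twitter's per-message character limit at the time of the study

@dataclass
class FeedbackLog:
    """Collates contemporaneous feedback messages per resident (hypothetical sketch)."""
    entries: dict = field(default_factory=dict)

    def add(self, resident, text, when=None):
        """Record one short feedback message, enforcing the tweet-length constraint."""
        if len(text) > TWEET_LIMIT:
            raise ValueError(f"feedback exceeds {TWEET_LIMIT} characters")
        self.entries.setdefault(resident, []).append((when or datetime.now(), text))

    def end_of_block_summary(self, resident):
        """Return a chronological collation for writing the formal end-of-block evaluation."""
        items = sorted(self.entries.get(resident, []))
        return "\n".join(f"{ts:%Y-%m-%d}: {text}" for ts, text in items)

# Illustrative usage with made-up feedback entries
log = FeedbackLog()
log.add("resident_a", "Post-procedure: good landmark identification; review sterile prep.",
        datetime(2013, 5, 2))
log.add("resident_a", "Post-shift: clear handover; sterile technique much improved.",
        datetime(2013, 5, 9))
print(log.end_of_block_summary("resident_a"))
```

The design choice mirrors the paper's argument: because each entry is preserved with its date, the evaluator can see whether later entries show a response to earlier feedback, rather than relying on recall.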

METHODS

Participants

Twenty-four residents and five faculty members were recruited; the study was approved by our institutional review board.

Procedure and materials

Residents completed a closed-question survey of seven questions assessing their thoughts on the current feedback and evaluation process. Five questions concerned the current feedback process; two dealt with the current evaluation system. A 'useful' evaluation was defined as one that noted deficiencies and either documented improvement or proposed a mechanism for improving them. All questions used a five-point Likert scale, and are listed in Table 1.

Table 1. Pre-study survey (% of respondents)

Question | Strongly agree | Agree | Neutral | Disagree | Strongly disagree
I receive end-of-shift feedback | 0 | 15 | 20 | 10 | 55
I would like to receive end-of-shift feedback | 95 | 0 | 5 | 0 | 0
I receive feedback post-procedure | 0 | 20 | 20 | 60 | 0
I receive feedback post-resuscitation | 0 | 25 | 20 | 55 | 0
I would prefer more frequent feedback | 95 | 5 | 0 | 0 | 0
End-of-block evaluations alone are appropriate | 10 | 20 | 35 | 20 | 15
I receive useful end-of-rotation evaluations | 5 | 10 | 35 | 35 | 15

All participants obtained a Twitter account. All residents then 'followed' the faculty account, allowing both to 'see' each other; in this manner, participants could send private, bidirectional direct messages to each other. Faculty member participants were to provide feedback (immediate information that over time results in a change in behaviour) after three occurrences: post-shift, post-procedure and post-resuscitation. The response to feedback provided via Twitter would then be detailed in formal evaluations.

After the study period, a seven-question survey was sent to residents assessing usefulness, volume of feedback and the adequacy of the formal evaluations provided. Participating faculty members were surveyed on their perceptions of the feedback process, the use of Twitter and the construction of formative evaluations. The post-survey questions are listed in Tables 2 and 3.

RESULTS

Twenty out of 24 residents responded to the pre-study survey. Of these, 65 per cent disagreed that they received end-of-shift feedback, and 95 per cent wanted more consistent and frequent end-of-shift feedback; 60 and 55 per cent noted minimal feedback post-procedure and

Table 2. Post-study survey (% of respondents)

Question | Strongly agree | Agree | Neutral | Disagree | Strongly disagree
I receive more useful feedback contemporaneously with the Twitter method* | 92 | 8 | 0 | 0 | 0
I receive a greater volume of feedback with the Twitter method* | 100 | 0 | 0 | 0 | 0
I receive useful feedback post-procedure with the Twitter method* | 38 | 54 | 0 | 8 | 0
I receive useful post-resuscitation feedback with the Twitter method* | 38 | 54 | 0 | 8 | 0
I receive useful end-of-rotation evaluations with the current system | 0 | 31 | 45 | 8 | 16
I receive useful end-of-rotation evaluations with the Twitter method* | 77 | 23 | 0 | 0 | 0
I receive MORE end-of-rotation evaluations with the Twitter method* | 100 | 0 | 0 | 0 | 0

*Twitter method: evaluations provided by the faculty members participating in the study.
Table 3. Post-study survey of faculty members (% of respondents)

Question | Strongly agree | Agree | Neutral | Disagree | Strongly disagree
I found it easy to construct a formal end-of-block evaluation prior to using Twitter | 100 | 0 | 0 | 0 | 0
I felt I provided enough feedback prior to using Twitter | 0 | 0 | 40 | 40 | 20
I felt I provided enough post-procedure feedback prior to using Twitter | 0 | 100 | 0 | 0 | 0
I felt I provided enough post-resuscitation feedback prior to using Twitter | 0 | 100 | 0 | 0 | 0
This method has not changed the way I do evaluations | 0 | 0 | 0 | 0 | 100
The time spent in doing evaluations has not changed | 0 | 0 | 0 | 20 | 80
post-resuscitation, respectively ('disagreed'), with 20 per cent being neutral. Most were neutral or unsatisfied with the current evaluation system, and 70 per cent felt that end-of-month evaluations alone were not enough as a formative tool. Ninety per cent wanted more frequent feedback.

For the Twitter phase of the study, 13 of the 24 residents were available. A total of 220 'tweets' were provided, with a mean of 11 tweets per resident (range 8–17). During the study, the 13 residents received a total of eight formal evaluations from the 19 faculty members who were not participating in the Twitter pilot. The study faculty members provided a formal evaluation to each of the study residents at the end of each block period, a total of 65 formal evaluations per block.

A post-study survey was sent to all participating residents, with a 100 per cent response rate. All reported receiving a greater volume of feedback with Twitter, including post-procedure and post-resuscitation feedback. Faculty members used Twitter comments when writing formal evaluations, with all agreeing that these were easier to construct because the feedback information, along with any responses or changes in behaviour secondary to that feedback, was preserved within the Twitter feed.

All five participating faculty members responded to their survey. All were dissatisfied with the current evaluation system and felt that it lacked the breadth and depth provided by Twitter, as prior feedback comments were not recalled when formulating evaluations, and could not then be commented upon. All felt that it was easier to provide feedback post-procedure and post-resuscitation with Twitter, and 80 per cent commented that they did more evaluations in less time with Twitter because the feedback was already present, and all that was needed was to ascertain any response to it.

DISCUSSION

The study aim was to evaluate two separate processes, feedback and evaluation, under the hypothesis that a formative evaluation can take place only via the response to direct feedback. As daily feedback provides more information about a learner's experience during clinical scenarios, and eliminates bias, 'feedback' and 'evaluation' are here synonymous.7 Furthermore, the study aimed to assess how learners perceived instant feedback provided in a novel way, and how easily faculty members were able to provide contemporaneous feedback and, later, a formal written evaluation. The surveys were kept deliberately short to ensure compliance.

The first survey agrees with prior studies showing dissatisfaction with current evaluation processes.4 More frequent feedback was requested, especially in three instances: at the end of shifts, and after every procedure and resuscitation. Finally, although most felt that end-of-rotation evaluations could be adequate, most were neutral or dissatisfied with the current system. After the Twitter phase, residents agreed that they received more feedback contemporaneously, and agreed that the usefulness (detail and volume) of this feedback was greater. All residents agreed that the usefulness of end-of-block evaluations was greatly enhanced. We now hypothesise that Twitter is superior to the current

evaluation system because of the sheer volume of feedback. The volume of feedback provided allowed participating faculty members to give everyone a formal evaluation, whereas with the current system just eight formal evaluations were provided by the other faculty members. The simplicity of ‘tweeting’ seemingly engaged the faculty members more than the typical interactions, thereby increasing the overall volume of feedback and easing the preparation of a formal evaluation. Furthermore, as data were retained within Twitter, faculty members merely had to assess any change in practice or behaviour secondary to the feedback provided. All felt strongly that this method was easier and faster, and provided a more solid foundation for formal evaluation.

LIMITATIONS

Not all of the residents could be evaluated via Twitter during the study. Secondly, there was no time limit for submitting formal evaluations; thus, there was a lack of control evaluations. However, it is difficult to ascertain the significance of an evaluation several weeks or months after a rotation, as habits may have changed, and intellectual and clinical growth is expected. Lastly, there was selection bias among the faculty members: all were educators, and could be assumed to provide thorough and thoughtful feedback and evaluations. These limitations notwithstanding, Twitter remains an effective and easy means of providing feedback, and allows for the development of thoughtful and perceptive end-of-block evaluations.

REFERENCES

1. Sostok MA, Coberly L, Rouan G. Feedback process between faculty and students. Acad Med 2002;77:267.
2. Gil DH. Perceptions of medical school faculty members and students on clinical clerkship feedback. J Med Educ 1984;59:856–864.
3. Branch WT, Paranjape A. Feedback and reflection: teaching methods for clinical settings. Acad Med 2002;77:1185–1188.
4. Isaacson JH, Post LK, Litaker DG, Halperin AK. Resident perception of the evaluation process. J Gen Intern Med 1995;10:S89.
5. Croskerry P. The feedback sanction. Acad Emerg Med 2000;7:1232–1238.
6. Stieger S, Burger C. Let's go formative: continuous student ratings with Web 2.0 application Twitter. Cyberpsychol Behav Soc Netw 2010;13:163–167.
7. Kogan JR, Bellini LM, Shea JA. Have you had your feedback today? Acad Med 2000;75:1041.
8. Steiner DD, Rain JS. Immediate and delayed primacy and recency effects in performance evaluation. J Appl Psychol 1989;74:136–142.
9. O'Reilly T. What is Web 2.0. Available at http://oreilly.com/pub/a/web2/archive/what-is-web-20.html. Accessed 6 August 2011.

Corresponding author’s contact details: Dr Bobby Desai, Department of Emergency Medicine, University of Florida College of Medicine, PO Box 100186, Gainesville, FL 32610, USA. E-mail: [email protected]

Funding: None.

Conflict of interest: None.

Ethical approval: The University of Florida Institutional Review Board approved this study.

doi: 10.1111/tct.12086

