
Original Article

Provider impressions of the use of a mobile crowdsourcing app in medical practice

Health Informatics Journal 1–11. © The Author(s) 2014. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav. DOI: 10.1177/1460458214545896. jhi.sagepub.com

Max H Sims

University of Rochester, USA

Maria Fagnano, Jill S Halterman and Marc W Halterman
University of Rochester Medical Center, USA

Abstract

In our prior work, we conducted a field trial of the mobile application DocCHIRP (Crowdsourcing Health Information Retrieval Protocol for Doctors), designed to help clinicians problem-solve at the point of care by crowdsourcing their peers. Here, we present the results of our post-trial survey, which investigated participating clinicians' impressions of medical crowdsourcing and identified factors influencing adoption of the technology. In all, 72 valid surveys were received from 85 registered users (85% response rate). The majority of clinicians (>80%) felt crowdsourcing would be useful for diagnosing unusual cases, facilitating patient referrals, and problem-solving at the point of care. Perceived barriers to adoption included interruptions in workflow and reluctance to publicly expose knowledge gaps. While crowdsourcing was considered a useful alternative to existing methods, future studies are needed to investigate whether the approach and application can be modified to address these barriers effectively, and to determine whether crowdsourcing can enhance provider performance and the quality of care delivered.

Keywords

clinical decision-making, collaborative work practices, information and knowledge management, IT design and development, IT health care evaluation

Corresponding author:
Marc W Halterman, Department of Neurology, Center for Neural Development & Disease, University of Rochester Medical Center, 601 Elmwood Avenue, Box 645, Rochester, NY 14642, USA.
Email: [email protected]

Introduction

Given the pace of medical practice, opportunities for face-to-face collaboration between health care providers (HCPs) are sporadic, and neither text paging nor email lends itself to effective collaboration at the point of care.1


Physicians typically have two questions for every three patients encountered, yet quickly finding relevant articles that address focused questions is difficult; in practice, providers use medical references fewer than nine times per month.2

The "information overload" problem is not specific to the field of medicine, and other disciplines have started to adopt an evolving form of digital collaboration called crowdsourcing.3 Fueled by expanded access to the Internet and the availability of web-enabled smart devices, crowdsourcing has been used to engage online communities to accomplish tasks of varying complexity. Large organizations crowdsource the collective wisdom of employees and the public at large to solve internal operational issues, conduct market research, and advance their branding presence.4 Corporations are also beginning to consider the frequency and quality of employee engagement with internal crowdsourcing systems as part of the formal job review and promotion process.5 To a lesser degree, the medical community has begun to use both social media and crowdsourcing in practice, with doctors using Facebook and Twitter to engage their patients and promote their reputation.6 Examples of crowdsourcing have begun to emerge in the public domain, with companies proposing to connect patients with providers through programs like HealthTap's "Talk-to-Docs" or inviting HCPs to solve diagnostic dilemmas through sites like crowdmed.com.

It remains unclear whether physicians would use crowdsourcing for peer-to-peer collaboration. To study this question, we conducted a field trial of the mobile crowdsourcing application DocCHIRP (Crowdsourcing Health Information Retrieval Protocol for Doctors). In this study, we report the results of our post-trial provider survey, whose primary objective was to understand user perspectives on the pros and cons of using real-time crowdsourcing in clinical practice. Related objectives were to understand current provider information-seeking behaviors, characterize user opinion regarding the value of collaborative interactions, and define the potential barriers to integrating peer-to-peer crowdsourcing into the workflow of medical practice.

Methods

Program design

DocCHIRP is both a mobile and a web-based application that allows HCPs to post questions to trusted colleagues in real time. Details regarding program development and design have been reported previously.7 The mobile application was designed to run on both Apple and Android devices, and trial participants downloaded the program from the Apple App Store or the Android-compatible version directly from the DocCHIRP server. Network access was restricted to users holding verified server accounts, and providers were able to select and manage the members of a single crowd, set notification preferences (email, texting, or both), and publicly display their areas of expertise. When faced with a clinical question, the consulting provider (hereafter referred to as the index provider) could send consult questions and carry on one-to-many conversations with colleagues in real time. Responses, which were collated according to response time, were associated with the initial consult question and viewable by all participants in that provider's group.
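
For illustration, the sketch below shows one way the consult workflow described above could be modeled on the server side. It is a minimal sketch only: the names (ConsultQuestion, Response, notify) are hypothetical and are not drawn from the DocCHIRP codebase, which is described in our earlier report.7

```python
# Illustrative sketch of the consult model described above; all names
# (ConsultQuestion, Response, notify) are hypothetical, not DocCHIRP code.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class Response:
    responder: str
    text: str
    received_at: datetime  # responses are collated by response time

@dataclass
class ConsultQuestion:
    index_provider: str     # the consulting ("index") provider
    text: str
    crowd: List[str]        # members of the provider's single crowd
    responses: List[Response] = field(default_factory=list)

    def add_response(self, response: Response) -> None:
        """Attach a response and keep the thread ordered by response time,
        so the one-to-many conversation stays visible to the whole crowd."""
        self.responses.append(response)
        self.responses.sort(key=lambda r: r.received_at)

def notify(crowd: List[str], preferences: Dict[str, List[str]], message: str) -> None:
    """Fan a new consult out to each crowd member via their chosen
    channels (email, texting, or both), as described above."""
    for member in crowd:
        for channel in preferences.get(member, ["email"]):
            print(f"[{channel}] -> {member}: {message}")
```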

Study population

Both the DocCHIRP field trial and post-trial survey were approved by the University of Rochester Research Subjects Review Board (RSRB) and designated as minimal risk. Trial participants (n = 85) were recruited from the Division of Pediatric Neurology and the Departments of Neurology, Pediatrics, Neuroradiology, Psychiatry, Orthopedics, Emergency Medicine, Internal Medicine, and Family Medicine. We did not seek parity in either age or gender representation, and consent to participate was obtained as part of account activation. Participants included attending physicians (n = 63), residents (n = 13), fellows (n = 1), and nurse practitioners (n = 8). Subjects were invited to participate via an email that included a cover letter and the notice of approval from the RSRB. Follow-up emails and a written invitation were also sent over a period of 30 days. Participants received a coffee coupon 1 day prior to closing the survey at the end of the month.

Data analysis

The 10-min survey was anonymous and conducted online (https://www.surveymonkey.com). Closed-ended questions, in which the respondent picked an answer from a given number of mutually exclusive options (see supplemental materials), were grouped by category. Questions included items regarding provider demographics, current use of mobile technologies in clinical practice, frequency and modes of provider-to-provider communication, and impressions of medical crowdsourcing. To understand factors affecting technology adoption, survey respondents were asked to self-identify as users or non-users by recalling the frequency of program engagement (never, occasionally, regularly). Responses were verified against the server transcripts.
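
As a concrete illustration of this grouping step, the sketch below maps self-reported engagement to user/non-user status and cross-checks it against the server transcripts. The function names and the rule that recorded logins override a "never" self-report are assumptions for illustration, not the study's actual analysis code.

```python
# Hypothetical sketch of the user/non-user grouping described above;
# the override rule is an assumption, not the study's documented procedure.
from typing import Dict

def classify(self_report: str) -> str:
    """Map self-reported engagement (never/occasionally/regularly) to a group."""
    return "non-user" if self_report == "never" else "user"

def verify_against_transcripts(reports: Dict[str, str],
                               logins: Dict[str, int]) -> Dict[str, str]:
    """Cross-check each self-report against the server transcripts."""
    groups = {}
    for provider, report in reports.items():
        group = classify(report)
        # Assumed rule: recorded logins override a "never" self-report.
        if group == "non-user" and logins.get(provider, 0) > 0:
            group = "user"
        groups[provider] = group
    return groups

# Example: the second respondent under-reports engagement.
print(verify_against_transcripts(
    {"provider_a": "regularly", "provider_b": "never"},
    {"provider_a": 14, "provider_b": 2},
))  # {'provider_a': 'user', 'provider_b': 'user'}
```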

Statistical analyses

We performed all analyses using Statistical Product and Service Solutions (SPSS) version 15.0 software (SPSS Inc., Chicago, IL, USA). We used standard summary statistics to describe overall demographics and survey sub-domains. Data were organized in two-by-two tables, and Fisher's exact tests were performed to look for differences between users and non-users. An exact, two-sided α level of less than 0.05 was considered statistically significant.
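
The same test is straightforward to reproduce outside SPSS. The sketch below runs Fisher's exact test in Python with SciPy on the gender-by-user-status counts from Table 1; it is an illustrative re-analysis, not the study's SPSS syntax.

```python
# Fisher's exact test on a two-by-two table, mirroring the SPSS analysis
# described above (illustrative re-analysis, not the authors' code).
from scipy.stats import fisher_exact

# Rows: women, men; columns: users, non-users (counts from Table 1).
table = [[23, 11],
         [19, 19]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, exact two-sided p = {p_value:.3f}")
# p < 0.05 would be read as a significant difference between users and non-users.
```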

Results

Participant demographics

Of the 85 providers who created DocCHIRP accounts, we received responses from 72 participants, resulting in an 85 percent response rate. DocCHIRP registrants were divided into "user" and "non-user" groups based on self-report of whether they used DocCHIRP regularly or occasionally versus never; 40 respondents indicated that they had logged in more than once. The median age of study participants was 43 years, and neither age nor gender influenced whether participants used the mobile application (Table 1). The majority of participants (82%) accessed DocCHIRP using the iPhone; however, we found no correlation between the preferred mobile device(s) and user status.

Existing information seeking and communication behaviors

When asked how difficult it is to find good evidence or actionable information to solve clinical issues when needed, most providers indicated it was either "very easy" (17%) or "easy" (62%), while 22% reported having difficulty. To understand where clinicians turn when they need to close knowledge gaps and solve clinical problems, we assessed the range of reference materials providers reported using at the point of care on a weekly or daily basis (Table 2). Clinicians relied heavily on online resources such as UpToDate and eMedicine (90.1%) and face-to-face advice from colleagues (88.6%). We also found no differences between users and non-users in their use of mobile applications (e.g. Epocrates, 5-Minute Consult), the published literature, or paging a colleague.


Table 1. Study population.

                         Overall (n = 72),   User (n = 42),   Non-user (n = 30),   p value
                         n (%)               n (%)            n (%)
Age (median 43 years)
  ≤ median age           31 (43.1)           19 (61.3)        12 (38.7)            0.810
  > median age           41 (56.9)           23 (56.1)        18 (43.9)
Gender
  Women                  34 (47.2)           23 (54.8)        11 (36.7)            0.156
  Men                    38 (52.8)           19 (45.2)        19 (63.3)
Education
  MD or DO               53 (73.6)           28 (66.7)        25 (83.3)            0.283
  MD/PhD                 12 (16.7)           9 (21.4)         3 (10.0)
  NP                     7 (9.7)             5 (11.9)         2 (6.7)

                         Overall (n = 106),  User (n = 61),   Non-user (n = 45),   p value
                         n (%)               n (%)            n (%)
Mobile device^a
  iPhone                 59 (81.9)           35 (83.3)        24 (80.0)            0.763
  iPad                   34 (47.2)           19 (45.2)        15 (50.0)            0.812
  Droid                  8 (11.1)            5 (11.9)         3 (10.0)             1.000
  Blackberry             1 (1.4)             1 (2.4)          0 (0.0)              1.000
  Other                  4 (5.6)             1 (2.4)          3 (10.0)             0.301

DocCHIRP: Crowdsourcing Health Information Retrieval Protocol for Doctors.
^a Users interacted with DocCHIRP using more than one device in some cases, resulting in a higher number of overall devices registered relative to the number of respondents.

However, compared to non-users, DocCHIRP users were more likely to engage their colleagues through face-to-face discussions (92.5% vs 78.6%; p 
