preconceptions, histories and understandings to any endeavour.7,8 It is therefore not expected that the auditor or reviewer will come to exactly the same conclusions as the auditee or author, but that he or she will decide to what extent the research is credible, dependable and confirmable.4,7

REFERENCES

1 ten Cate O, Brewster D, Cruess R, Calman K, Rogers W, Supe A, Gruppen L. Research fraud and its combat: what can a journal do? Med Educ 2013;47:638–40.
2 Guba EG. Criteria for assessing the trustworthiness of naturalistic inquiries. Educ Comm Tech J 1981;29:75–91.
3 Lincoln YS, Guba EG. Naturalistic Inquiry. Newbury Park, CA: Sage Publications 1985.
4 Morse JM, Barrett M, Mayan M, Olson K, Spiers J. Verification strategies for establishing reliability and validity in qualitative research. Int J Qual Meth 2002;1:13–22.
5 Barbour R. Checklists for improving rigour in qualitative research: a case of the tail wagging the dog? BMJ 2001;322:1115–7.
6 Akkerman S, Admiraal W, Brekelmans M, Oost H. Auditing quality of research in social sciences. Qual Quant 2008;42:257–74.
7 Koch T. Establishing rigour in qualitative research: the decision trail. J Adv Nurs 2006;53:91–103.
8 Gadamer H-G. Philosophical Hermeneutics. Berkeley, CA: University of California Press 1976.

Removing the rose-coloured glasses: it’s high time we published the actual data

Martin V Pusic1

1 Department of Emergency Medicine, New York University, New York, New York, USA

Correspondence: Martin V Pusic, MD, PhD, 550 First Avenue, New York, New York 10016, USA. Tel: 00 1 212 263 2053; E-mail: [email protected]
doi: 10.1111/medu.12312

Editor – I congratulate Dr Ten Cate and colleagues for highlighting research malfeasance and making suggestions for its prevention.1 Their ideas would form the basis of an incremental improvement in the well-established process of peer review and research publication. I would encourage one further change to the process: namely, that the researcher(s) (and journal) should be required to publish a digital version of the full dataset that is the basis for the main results of the study. As Ten Cate et al.1 mention, allowing peer reviewers to examine the dataset would allow a more finely grained look at the data. This logic applies not only to studies that report numerical measurements, but also to qualitative studies, in which allowing third parties to see de-identified original transcripts might discourage fraud. Journal space constraints often mandate a keyhole view of the data, and the summary statistics chosen by the author(s) predominate. However, those space constraints are mainly historical and attributable to the costs of publishing journals in paper form. Given the ever-expanding web presence of journals such as Medical Education, the publication of digital datasets should not be difficult. A numerical dataset of 5000 participants with 30 numerical variables has a CSV (comma-separated values) file size of roughly 1 MB. A full transcript of a 1-hour focus group discussion uses even less space.
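As a rough check on the file-size estimate above, the following minimal sketch (in Python, assuming purely for illustration that each value is written with two decimal places; the variable names are hypothetical) generates a comparable dataset and reports the size of the resulting CSV text:

```python
import csv
import io
import random

# Illustrative size check: 5000 participants x 30 numerical variables,
# each value formatted with two decimal places (an assumption made here).
ROWS, COLS = 5000, 30

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow([f"var{i + 1}" for i in range(COLS)])  # header row
for _ in range(ROWS):
    writer.writerow([f"{random.uniform(0, 100):.2f}" for _ in range(COLS)])

size_mb = len(buf.getvalue().encode("utf-8")) / 1_000_000
print(f"CSV size: {size_mb:.2f} MB")  # lands on the order of 1 MB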

There would be logistical issues to overcome. Some standardisation, stipulating a file type that can be opened in common statistical packages, would be important. Journal editors and staff may have some, but not all, of the analytical skills necessary to oversee this process. Privacy concerns would need to be addressed.

We can imagine a number of other benefits to publishing research datasets. Consider how the ability to inspect a dataset might facilitate the work of an independent researcher seeking to replicate or synthesise research studies. With appropriate safeguards to blunt hindsight bias, the datasets could be re-analysed secondarily in ways that the original researchers may not have envisaged. These datasets might also deepen educational activities such as journal clubs and data analysis courses.


Research malfeasance is fortunately rare. The recommendation of Ten Cate et al.1 to have researchers submit datasets to peer reviewers will give editors and peer reviewers one more tool to help them identify research inconsistencies. However, publishing the actual datasets would not only encourage transparency but would also enhance all publications by allowing the research audience to engage more fully with the work of the researcher(s).


REFERENCE

1 ten Cate O, Brewster D, Cruess R, Calman K, Rogers W, Supe A, Gruppen L. Research fraud and its combat: what can a journal do? Med Educ 2013;47:638–40.

