Editorial peer review: methodology and data collection*†§

By Ann C. Weller, M.A.
Deputy Librarian
Library of the Health Sciences
University of Illinois at Chicago
P.O. Box 7509
Chicago, Illinois 60680

This study reports on the editorial peer review practices of two categories of U.S. medical journals indexed in Index Medicus. Journals in group 1 were included on each of three lists of recommended journals, had a circulation of at least 10,000, and were cited at least 5,000 times per year. Group 2 journals, also indexed in Index Medicus, met none of these criteria. After the instruments were pretested, data were collected through a series of interviews and questionnaires. A summary of the methodology and an analysis of the differences between data collected through questionnaires and interviews are reported. The study concluded that initial interviews are very helpful in designing a questionnaire; that a high percentage of editors agreed to be interviewed (100% of sixteen group 1 editors and 93.8% of sixteen group 2 editors); that a 69.4% response rate to the mailed questionnaire indicates either sufficient follow-up or a high level of interest in the subject matter; that no trend identified by the questionnaire was reversed by changes in answers given during the interviews; that approximately 11% to 15% of the answers differed between the questionnaire and the interview; and that for some sensitive issues, editors were more likely to answer the questionnaire according to what was perceived as the most appropriate answer rather than according to the journal's actual practices.

INTRODUCTION

Before publication a scientific manuscript undergoes critical evaluation through the process of editorial peer review. Manuscript reviewers are selected by the editor; reviewers are asked to judge the validity of the submitted manuscript and recommend its acceptability for publication. According to Ziman, the refereeing system is generally believed by scientists to provide the best method of imposing a uniform scientific standard; it is the "lynchpin about which the whole business of Science is pivoted" [1].

* Based on a paper presented May 18, 1987, at the Eighty-Seventh Annual Meeting of the Medical Library Association, Portland, Oregon.
† This research was supported in part by a Doctoral Fellowship from the Medical Library Association, May 1986.
§ This research was supported in part by the Faculty Development Allocations Committee, University Library, University of Illinois at Chicago, April 1988.


In his recent monograph on editorial peer review, Lock asked, "Is validation . . . [of scientific data] . . . really secured through the conventional system of manuscript review? The answer must be no: that is the role of time." Editorial peer review should "distinguish between the useful and the useless" [2]. Lock thoroughly reviewed the literature while concentrating on the British Medical Journal. Like Lock's monograph, most literature on editorial peer review of medical journals focuses on well-known journals and usually comes from editors reporting on their own journals.

Bull Med Libr Assoc 78(3) July 1990


Bailar and Patterson, after reviewing quantitative studies on editorial peer review, argued for better-designed studies [3]. They stated that studies on peer review are methodologically weak, "poorly conceived, . . . [and] . . . based on small samples." An underlying assumption of Bailar and Patterson's criticism and of Lock's monograph is that the process of editorial peer review is the same for all journals. Most studies of editorial peer review are based on this assumption; few have been undertaken in the field of medicine. Only twelve studies from the last ten years have been pertinent to medicine, and none of these used a random sample of journals [4]. Juhasz compared editorial peer review of large (more than 300 manuscripts per year) and small (fewer than 300 manuscripts per year) journals [5]. In Juhasz's study, journals were not selected randomly and no statistical analysis was undertaken. Journal titles were selected from the serials holdings list of a large science and technology library, and, to protect confidentiality, no disciplines were listed.

If editorial peer review imposes a uniform standard, as Ziman suggested, this standard should produce uniform editorial guidelines and procedures for all journals. The objective of the present study was to identify any characteristics of the editorial peer review process that differentiate two distinct categories of indexed U.S. medical journals. A systematic study of editorial peer review that examines different categories of medical journals has not previously been undertaken. This report examines the methodological problems connected with data collection and details the different responses obtained using two methods of survey research: questionnaires and interviews. The analysis of the results was published separately [6].

METHODOLOGY

Journal selection

Journals indexed in Index Medicus were used for this study to assure that all journals in the sample had obtained a national reputation as recognized by the National Library of Medicine (NLM). Journals indexed in Index Medicus were divided into two categories. The first category, group 1, met all of the following criteria:

* Listed in the 1987 Brandon/Hill list [7]*
* Listed in the 1987 Abridged Index Medicus
* Listed in the 1985 Library for Internists [8]
* Had a U.S. circulation of at least 10,000 [9-12]
* Cited at least 5,000 times per year [13]

The Brandon/Hill list, published biennially by the Medical Library Association, has been used as a collection development tool by medical libraries for twenty years. Abridged Index Medicus is a subset of journals indexed in Index Medicus and is also used as a collection development guide. The Library for Internists is a recommended list of journals published by the American College of Physicians. Requiring a circulation of at least 10,000 assured that each journal in group 1 had a wide distribution and would be of interest to a large medical community. The inclusion of a minimum number of cited references provided an indication of relative readership and of the subsequent use of material from the journal. Sixteen medical journals published in the United States met the criteria for a group 1 journal.

Journals in group 2 met none of the previously described criteria. The purpose of these criteria was to use a variety of measures that would produce two sets of journals distinguishable from each other. Journals were at both ends of a continuum of indexed U.S. medical journals, with the range of journals in the middle skipped. All journals in this study have met the rigorous requirements for inclusion in Index Medicus. There was no intention of placing any value judgement on the journals being studied. Many group 2 journals were specialized or interdisciplinary and therefore did not rank high on a citational or reputational scale. There is no implication that journals with lower ranks were of inferior quality [14].

* All group 1 titles were also listed in the 1989 Brandon/Hill list.

Data collection

This study examined the editorial peer review practices of these two categories of medical journals through a series of interviews and questionnaires. All sixteen group 1 editors were asked for an interview; group 1 editors received no questionnaires. Questionnaires were mailed to a statistically valid random sample of group 2 editors. Sixteen group 2 editors were asked for an interview. Information gathered in the first few interviews was used to develop the questionnaire for group 2 editors.

Interviews with group 1 editors

The purpose of the interviews with group 1 editors was to examine their procedures and opinions in depth, to identify important aspects of editorial peer review to include in the survey, and to provide a forum to discuss topics that might be considered sensitive. Only a limited amount of information can be obtained in either a telephone interview or a mailed questionnaire. Questionnaire respondents most likely spend no more than thirty minutes answering a survey [15]. The response rate to face-to-face or telephone interviews is about 70% to 80% for the general population, while the response rate to mailed questionnaires varies from 10% to 50%, even with follow-up mailings [16]. Group 1 contained some of the best-known U.S. medical journals. These editors are likely to be approached often for all types of survey research by both scholarly researchers and the popular press. Most editors are physicians, who often do not welcome interviews [17]. It was felt that group 1 editors would most likely not respond to a mailed questionnaire.

The interviews with editors fit the description of an "elite or specialized interview": an interview with a prominent or professional individual [18]. According to Dexter, in a specialized interview the interviewer must have a fair knowledge of the subject area and must be willing to let the respondent be instructive about the problem, situation, or question. The respondents in a specialized interview usually resent the typical restrictions of a conventional interview. They demand a more active interchange, want to explain their view of the situation, and prefer a discussion. They are intelligent, quick thinking, and at home in the realm of ideas, policies, and generalizations.

Five pilot interviews were conducted with editors who knew they were participating in a pretest. The analysis of these interviews was helpful in clarifying the wording of questions, pinpointing the exact information each question would generate, and identifying questions to add and to eliminate. Goodwill needed to be established with the editors from the beginning and maintained during all contact [19]. An unexpected condition revealed during the interview might adversely affect the editor's confidence in the interviewer. For that reason, a detailed introductory letter contained a brief description of the study, the investigator's academic affiliations, the source of financial support, the number of journals that met the criteria for a group 1 journal, and a request for an interview. The letter also guaranteed confidentiality, estimated the length of time the interview would take, and contained other organizational details. Included with the letter were a one-page summary of the study and a set of initial questions that probably could not be answered without checking editorial files. Initially, six interviews with group 1 editors were conducted and analyzed before proceeding further with the investigation. These six editors were reinterviewed by telephone at the approximate time of the remaining interviews.

Questionnaires to group 2 editors

Nineteen questionnaires were mailed to the randomly selected group 2 editors for the pilot study. Sixteen (84.2%) were returned. Data from these sixteen questionnaires and the first six group 1 interviews were compared using the two-tailed t test [20]. The analysis of the pilot study indicated that no major changes were needed in the questionnaire; the wording of some questions was clarified, and one question was dropped. The six-page questionnaire covered many aspects of the decision-making process used by the editors in reviewing a manuscript. Questions were arranged by subject, following the approximate pathway taken by a manuscript in the editorial offices (Appendix A). The rejection rate from the pilot study, 56.2% with a standard deviation of .1761, was used to estimate sample size. Calculations determined that about fifty group 2 editors were needed for a valid sample [21]. In case either the response rate was lower than expected or the standard deviation higher than expected, approximately 100 questionnaires were mailed to a random sample of group 2 editors. Essentially the same letter as was sent to group 1 was used for group 2. A duplicate follow-up mailing was sent to all nonrespondents about one week after the stated deadline for questionnaire return. In addition, a form was included for editors who chose not to participate, asking for the reason. Dillman demonstrated the importance of sufficient follow-up in raising the response rate [22]. All nonrespondents were telephoned about one week after the second deadline.
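The sample-size estimate above can be reproduced with the standard normal-approximation formula n = (z sigma / e)^2. The study cites Yamane [21] but does not state its confidence level or margin of error; the 95% confidence (z = 1.96) and plus-or-minus 5% margin used in this sketch are assumptions that happen to reproduce the reported figure of about fifty:

```python
import math

# Pilot-study standard deviation of the rejection rate (from the text).
sigma = 0.1761

# Assumed values, not stated in the article: 95% confidence, +/-5% margin.
z = 1.96
e = 0.05

# n = (z * sigma / e)^2, rounded up to the next whole editor.
n = math.ceil((z * sigma / e) ** 2)
print(n)  # 48, i.e., "about fifty group 2 editors"
```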

Interviews with group 2 editors

Sixteen randomly selected group 2 editors who returned the questionnaire were also asked for an interview. These interviews provided an opportunity for group 2 editors to explain procedures and express ideas; the interviews also provided a method of clarifying and validating the data from the questionnaire. The group 1 pilot study had shown that the necessary transfer of data from interview to questionnaire format for statistical analysis was difficult, since answers given verbally often did not correspond to the options listed on the questionnaire. Guidelines were therefore needed to transfer data from an interview to a questionnaire.

RESPONSES TO INTERVIEWS AND QUESTIONNAIRES

Interviews

Sixteen (100%) group 1 editors or managing editors were interviewed, including fifteen (93.8%) editors and one (6.2%) managing editor alone. Fifteen (93.8%) group 2 editors were interviewed; in three group 2 cases, the managing editor was also present during the interview, and in one case the editor was interviewed only by telephone. All tapes were transcribed, then topics were grouped by subject. To keep the amount of information manageable, answers were summarized. Information from the interviews was transferred to the questionnaire using the following guidelines. A description of an occurrence was converted to a percentage and vice versa: 98% to 100% was considered equivalent to "always"; 74% to 97% to "usually"; 25% to 75% to "sometimes"; 3% to 24% to "rarely"; and 1% to 2% to "never." If a percentage or number was given as the answer, the response was coded as stated and the data were subjected to statistical analysis. If a check was placed for the answer, or a verbal response indicated that a particular procedure was used but a percentage or number was not given or known, the answer was coded as a "yes" or "no" response. A second statistical analysis was carried out after all percentages and numerical data were converted to "yes" or "no" responses. If the respondent gave an exact percentage but the questionnaire had a range of options, the answer was coded as the range. If a reasonable correspondence could be made between the response and one of the fixed-alternative options, that option was coded as the answer. If no reasonable correspondence could be drawn between the stated answer and the options in the questionnaire, the answer was coded as "missing" or added as a response to the "other" option.
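The percentage-to-category conversion described above can be sketched as a small lookup function. The cut-points are taken from the text; note that the printed ranges overlap at 74% to 75% ("usually" vs. "sometimes"), so this hypothetical helper resolves that boundary upward, which is an assumption:

```python
def frequency_category(pct):
    """Map a stated percentage to the study's verbal frequency codes.

    Cut-points follow the article's guidelines; the treatment of the
    overlapping 74%-75% boundary and of 0% is an assumption.
    """
    if pct >= 98:
        return "always"     # 98%-100%
    if pct >= 74:
        return "usually"    # 74%-97%
    if pct >= 25:
        return "sometimes"  # 25%-73% here (printed range: 25%-75%)
    if pct >= 3:
        return "rarely"     # 3%-24%
    return "never"          # 0%-2% here (printed range: 1%-2%)

print(frequency_category(85))  # usually
print(frequency_category(10))  # rarely
```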

Questionnaires for group 2 editors

Of the 124 questionnaires mailed, 86 were returned, giving a response rate of 69.4%. Almost 25% of those not answering the questionnaire gave "lack of time" or "insufficient staff" as the reason for not responding. Five percent stated that they never answer questionnaires. Of the 38 who did not return the questionnaire, 20 (52.5%) either said they would complete it and did not, never returned a telephone call, or were never contacted by telephone (Table 1). Four editors who had not returned the questionnaire agreed to answer a few questions over the telephone. Their answers indicated they had editorial peer review practices similar to those of the editors who responded. These additional contacts with editors increased the response rate by 3.2%. Direct contact was made with all but 13.7% of those surveyed.
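The response-rate figures in this paragraph can be checked directly; the variable names below are illustrative only:

```python
# Counts from the text: questionnaires mailed and returned, plus the
# four nonrespondents who later answered questions by telephone.
mailed, returned, phoned = 124, 86, 4

rate = round(100 * returned / mailed, 1)
print(rate)  # 69.4

# The four telephone follow-ups raised the effective response rate.
boost = round(100 * phoned / mailed, 1)
print(boost)  # 3.2
```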

Table 1
Reasons for the 38 nonresponses to the questionnaire (group 2)

Reason for not answering the questionnaire        Number of responses
Too busy                                                  7 (18.4%)
Not enough staff to complete questionnaire                2 (5.3%)
Not a medical journal                                     2 (5.3%)
Editor not available                                      2 (5.3%)
New editor                                                2 (5.3%)
No longer editor                                          1 (2.6%)
Does not answer questionnaires                            2 (5.3%)
Subtotal                                                 18 (47.5%)

Reason for not answering the questionnaire could not be determined
Stated would complete, but did not                        6 (15.7%)
Editor never returned telephone calls                     9 (23.6%)
Could not locate current telephone number                 2 (5.3%)
Blank form, letter, or questionnaire returned             3 (7.9%)
Subtotal                                                 20 (52.5%)
Total nonrespondents                                     38 (100.0%)

Comparison of interviews and questionnaires for group 2

When the answers given during the interviews with group 2 editors did not correspond to the answers given on the questionnaires, the interview response was used for the analysis. The differences between answers given on the questionnaires and in the interviews were analyzed carefully. When an answer on the questionnaire contained more than one option, there were two methods of tabulating different responses between the questionnaires and interviews; these take into account both the total number of questions and the total number of options. The questionnaire had 56 questions with a total of 123 variables. In the first method, each of the fifty-six questions was counted as one unit. When the questionnaire had more than one option for an answer, a changed response between the questionnaire and the interview was counted as a fraction of the number of options. For example, question 8 asked how reviewers were identified and listed eight options. If an editor stated during the interview that a literature search was sometimes used to locate reviewers, but had not indicated this on the questionnaire, the additional information was tabulated as an increased response of one-eighth, or 0.125. In the second method, each of the 123 variables was counted as one additional response regardless of the number of options for a question. Table 2 summarizes the results using both methods.

Table 2
Group 2 editors: different responses between the interviews and the questionnaires

                                                 56 questions*     123 variables**
Answer changed                                   79.035 (61.7%)    106 (50.9%)
Question left blank on questionnaire
  but answered in interview                      25.295 (19.7%)     39 (18.7%)
Another reason added during interview             1.76  (1.4%)      10 (4.8%)
Answer on questionnaire changed
  to "not applicable"                            22.0  (17.2%)      53 (25.5%)
Number of responses altered by interview        128.09 (100.0%)    208 (99.9%)
Percentage of answers altered                    15.2%             11.2%

* 56 questions on the questionnaire. ** 123 variables on the questionnaire.

Four types of differences between questionnaires and interviews were identified: a change in the answer; a question left blank on the questionnaire and answered during the interview; information supplementing that given on the questionnaire; and an answer changed to "not applicable" because of a response given during the interview. The follow-up interviews supplemented or altered the information obtained in the questionnaires by approximately 11% to 15%.

In order not to interfere with the rapport established with the editors during the interviews, answers that differed from the questionnaire were not pointed out during the interviews. The purpose of the interviews was both to validate the information in the questionnaires and to obtain more data than could be acquired through questionnaires alone. Surprisingly, 50% to 60% of the differences between questionnaires and interviews were accounted for when an answer given on a questionnaire was different or changed from the answer given during the interview; this accounts for approximately 5% of all questions answered during the interview. Using either method, a question left blank on the questionnaire and answered during the interview accounted for about 20% of the changes between the questionnaires and the interviews.
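The two tabulation methods can be sketched as follows. Each recorded difference is represented here by the number of options the affected question offered; the data structure and example values are illustrative, not the study's raw data:

```python
def tally_differences(option_counts):
    """Tally altered responses under the study's two methods.

    option_counts: one entry per altered response, giving the number of
    options the affected question offered (an illustrative input format).
    """
    # Method 1: each question is one unit, so a change on a k-option
    # question counts as 1/k (e.g., one of question 8's eight options
    # counts as 0.125).
    method1 = sum(1.0 / k for k in option_counts)
    # Method 2: every altered variable counts as one full response,
    # regardless of how many options the question had.
    method2 = len(option_counts)
    return method1, method2

# A change on an 8-option question, a single-option question, and a
# 4-option question:
print(tally_differences([8, 1, 4]))  # (1.375, 3)
```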

Another reason was added to the "other" option of questions with fixed-alternative options ten times. When answering the questionnaire, respondents usually checked one of the options given rather than adding an alternative in the "other" option. Because of the time constraints of the interview, opinion questions were sometimes skipped. The number of "other" choices added during the interviews therefore represents a minimum number of potential changes. Depending on the method used, between 17% and 25% of all changes were accounted for when a question answered on the questionnaire was changed to "not applicable." The answer given by the editor on the questionnaire might have indicated how the editor would approach a particular situation, but in these cases the editor stated at the time of the interview that the situation had never before been encountered.

Table 3 compares the mean responses on the questionnaires with those from the interviews, using the 56-variables method, for questions where four or more editors altered an answer during the interview. Variations in the rejection rate illustrate the changes between questionnaires and interviews. Four of the six differences from the interviews were within 10% to 15% of the rate stated on the questionnaire. When answers from the fifteen editors who were interviewed were compared with their responses on the questionnaire, the mean rejection rate was 7% lower for the interviews. Similarly, eight editors stated on the questionnaire that 100% of all manuscripts were reviewed, but their answers given during the interviews ranged from 80% to 98%. For these eight editors, the mean percentage reviewed was 14% lower for the interviews than for the questionnaires. Along the same line, two editors answered on the questionnaire that the reviewer file was searchable by subject, yet demonstrated during the interview that it was not.

Several editors did not indicate unusual procedures or unique journal features when answering the questionnaire. One editor who edited more than one journal answered the questions according to how the second journal was handled; the journal named in the cover letter published only review articles, and the second journal did not meet the criteria for group 2. In another example, an editor answered on the questionnaire that 90% of manuscripts were accepted, but stated during the interview that all manuscripts were accepted. Another editor answered that about 37% of manuscripts were rejected, but during the interview stated that all manuscripts were accepted. There were nineteen variables that were statistically significant between group 1 interviews and group 2 questionnaires but not between the two groups of interviews.
Table 3
Comparison of questionnaire vs. interview responses: group 2 editors; four or more altered responses using the 56-variables method

Variable                                          Number of    Group 2 questionnaire     Group 2 interview
                                                  differences  (N = 15)                  (N = 15)
3. rejection rate                                      6       39.8% (n = 14)            32.8% (n = 15)
4a. percentage of all manuscripts reviewed             8       93.3% (n = 14)            87.2% (n = 15)
4c. percentage of rejected manuscripts reviewed        9       91.1% (n = 14)            77.1% (n = 14)
6. associate editor can                                4
   1) decide to review                                         60.0% (n = 13, NA = 3)    46.2% (n = 15, NA = 2)
   2) select reviewers                                         70.0% (n = 13, NA = 3)    53.8% (n = 15, NA = 2)
   3) decide to revise                                         90.0% (n = 13, NA = 3)    76.9% (n = 15, NA = 2)
   4) decide to reject                                         70.0% (n = 13, NA = 3)    53.8% (n = 15, NA = 2)
   5) decide to accept                                         70.0% (n = 13, NA = 3)    53.8% (n = 15, NA = 2)
9a. reviewer file searchable by subject                4       57.1% (n = 14)            42.9% (n = 15, NA = 1)
10a. use reviewer author requested                     4        9.1% (n = 14)             0.0% (n = 15, NA = 6)
10b. not use reviewer at author's request              4       70.0% (n = 14, NA = 4)    71.4% (n = 15, NA = 8)
15a. copy of reviewer's report to author               4       93.3% (n = 15)            80.0% (n = 15)
15b. paraphrase of report to author                    4       15.4% (n = 13)             6.7% (n = 15)
18. note quality of reviewers' reports                 4       38.5% (n = 13)            33.3% (n = 15)
20b. if statistics, use statistical reviewer           5       42.9% (n = 14)            26.7% (n = 15)
21d. revised manuscript returned to reviewer           5       73.3% (n = 14)            53.3% (n = 15, NA = 1)
28b. share all reviewers' reports with all reviewers   4       33.3% (n = 15)            35.7% (n = 15, NA = 1)

N = total number in group; n = number of respondents per question; NA = not applicable

Table 4 lists the ten variables for which the difference between the means is the same or larger between group 1 and group 2 interviews than between group 1 interviews and group 2 questionnaires. Table 5 lists the nine variables for which the difference between the means is smaller. The order relation, however, remained the same: if group 1 had a larger mean for one variable, it remained larger in the comparison of group 1 with both the group 2 questionnaires and the group 2 interviews.

Table 4
Comparison of group 2 questionnaires and group 1 interviews: difference in means greater for interviews, trends preserved, statistical significance less

Variable                                          Group 1 interview   Group 2 questionnaire      Group 2 interview
                                                  (N = 16)            (N = 86)                   (N = 15)
1b. number of hours work on journal per week      20.6 (n = 13)       12.6* (n = 73)             11.1 (P = .07) (n = 11)
8. reviewers located by 4) editorial board        68.8% (n = 16)      88.4%* (n = 86)            93.3% (P = .08) (n = 15)
21a. revision decision made by 4) editorial
  meeting                                         25.0% (n = 16)       7.0%* (n = 84)             6.7% (P > .1) (n = 15)
23a. hold editorial meetings                      81.3% (n = 16)      53.7%* (n = 82)            53.7% (P > .1) (n = 15)
27. acceptance decision made by 1) editor         75.0% (n = 16)      95.2%* (n = 84)           100.0% (P = .07) (n = 15)
28a. editor informs reviewer of final decision    93.8% (n = 16)      68.2%* (n = 85)            66.7% (P = .10) (n = 15)
28b. authors receive reviewers' reports           93.8% (n = 16)      83.1%* (n = 83)            80.0% (P = .07) (n = 15)
29a. if author complains, editor will:
   2) contact and explain                         43.8% (n = 16)      68.8%* (n = 85, NA = 8)    69.2% (P > .1) (n = 15, NA = 2)
   3) suggest 2d journal                           0.0% (n = 16)       9.1%* (n = 85, NA = 8)    15.4% (P > .1) (n = 15, NA = 2)
   5) reconsider in-house                          0.0% (n = 16)      13.0%* (n = 85, NA = 8)    15.4% (P = .09) (n = 15, NA = 2)

N = total number in group; n = number of respondents per question; NA = not applicable; *P < .05.

Table 5
Comparison of group 2 questionnaires and group 1 interviews: difference in means smaller, statistical significance less for interviews

Variable                                          Group 1 interview       Group 2 questionnaire      Group 2 interview
                                                  (N = 16)                (N = 86)                   (N = 15)
4a. percentage of all manuscripts reviewed        82.6% (n = 16)          93.1%* (n = 86)            87.2% (P > .1) (n = 15)
5c. solicited and unsolicited same review
  process                                         40.0% (n = 16, NA = 1)  73.8%* (n = 81, NA = 20)   55.6% (P > .1) (n = 15, NA = 6)
8. percentage of time reviewers located in
  the file                                        72.7% (n = 16)          40.7%* (n = 86)            44.2% (P = .07) (n = 15)
8. reviewers ever located by 1) reviewer file     93.8% (n = 16)          65.1%* (n = 86)            73.3% (P = .13) (n = 15)
14a. understood editorial board members are
  reviewers                                       73.3% (n = 15)          94.2%* (n = 69)            83.3% (P > .1) (n = 12)
15a. receive a paraphrase of reviewers' report     0.0% (n = 16)          28.0%* (n = 82)             6.7% (P > .1) (n = 15)
19b. types of reviewer bias: 1) different
  conclusions than reviewer expected              15.4% (n = 13)          38.2%* (n = 82, NA = 6)    30.8% (P = .09) (n = 15, NA = 2)
21d. revised manuscripts returned to reviewer     18.8% (n = 16)          63.1%* (n = 84)            53.3% (P > .1) (n = 15, NA = 1)
29a. if author complains, editor will
  1) do nothing                                    0.0% (n = 16)           7.8%* (n = 85, NA = 8)     0.0% (P > .1) (n = 15, NA = 2)

N = total number in group; n = number of respondents per question; NA = not applicable; *P < .05.

CONCLUSIONS

Initial interviews were very helpful in designing the questionnaire and in identifying those areas of particular interest to the editor. Requests for interviews resulted in a high acceptance rate: 100% for group 1 and 93.8% for group 2. The investigator did not speak directly to any editor who declined the request for an interview. Sufficient follow-up to the questionnaire, with both a second mailing and telephone contact, helped to produce a response rate of 69.4%. Several editors wrote letters expressing their interest in the study. Seventy (81.4%) editors requested a copy of the final report. The editors' level of interest might also have contributed to the high response rate.

Answering the questionnaire might have been more of an inconvenience for editors than for most questionnaire respondents. The survey was long and could not be completed without checking editorial files. Because of their dual or triple roles as editors, clinicians, and researchers, the editors are busy individuals with many demands on their time. The reasons editors gave for nonparticipation indicated that more follow-up would not have resulted in a substantial increase in the response rate. Most studies of survey research emphasize techniques that increase participation but do not investigate reasons for nonparticipation [23]. Results of this study showed that the inconvenience of answering the questionnaire, rather than any intrinsic differences between respondents and nonrespondents, was responsible for nonparticipation.

Approximately 11% to 15% of the answers on the questionnaire were altered as a result of the interviews with group 2 editors. Most survey research compares different methods of data collection but does not examine the different responses of the same individuals. Bradburn and Sudman concluded that responses to threatening questions differ depending on the method of survey research [24]. Although the questions in the present survey would more precisely be characterized as sensitive, the present study supports Bradburn and Sudman's conclusion. If unusual procedures were used, the editor might not include this information on the questionnaire but would mention it during the interview. On the questionnaire, an editor was more likely to represent the journal according to what the editor believed were the generally accepted procedures of editorial peer review, while during the interview the editor was more likely to answer closer to actual experience. The questions that had four or more alterations (Table 3) include the most sensitive issues. For example, most editors stated on the questionnaire that their journals reviewed all material received; in practice, however, almost all editors rejected a certain percentage of manuscripts that were considered inappropriate.

Fifty to sixty percent of the changes between the questionnaires and the interviews were accounted for by the respondent giving a different answer to a question during the interview. Editors discussed changes they had initiated; very few of the changed answers were the result of procedural changes in the interval between the two surveys. There was no indication from answers on the questionnaire that any question was misunderstood. One problematic question identified by the pilot study was removed from the questionnaire. The study showed that, although there were some examples of changes in answers that resulted in changes of statistical significance, there was not one example of a trend identified through the questionnaires that was reversed because of the interviews. If a variable from one group was significantly larger (P < .05) than a variable from the other group using the questionnaires, but not significantly larger using the interviews, the mean for the first group always remained larger than the mean for the other group. There were ten examples of a loss of statistical significance even though the difference in the means increased when variables in group 1 were compared with the group 2 interviews. When samples as small as fifteen or sixteen are compared, the difference between the means must be greater than for a larger sample in order to be statistically significant.
Bull Med Libr Assoc 78(3) July 1990

Editorial peer review

This study has examined the methodological problems of comparing data from a questionnaire with data from an interview. The interviews explored details and subtleties not brought out by the questionnaire responses, yielded considerably more detailed information, and identified sensitive topics. The results of the comparison of group 1 and group 2 are examined by Weller [25]. This portion of the study has shown the value of interviewing a sample of questionnaire respondents as a means of clarifying and validating information obtained on the questionnaire.

ACKNOWLEDGMENTS The author wishes to gratefully acknowledge the assistance of her dissertation committee at the Graduate Library School, University of Chicago. Don Swanson (chair), Julie Hurd, and Abraham Bookstein have guided this project and given substantial advice during its design, execution, and analysis.

REFERENCES
1. ZIMAN JM. Public knowledge: an essay concerning the social development of science. Cambridge: University Press, 1968.
2. LOCK S. A difficult balance: editorial peer review in medicine. Philadelphia: ISI Press, 1986.
3. BAILAR JC, III, PATTERSON K. Journal peer review: the need for a research agenda. N Engl J Med 1985 Mar 7;312(10):654-7.
4. IBID.
5. JUHASZ S, CALVERT E, JACKSON T, KRONICK DA, SHIPMAN J. Acceptance and rejection of manuscripts. IEEE Trans Prof Commun 1975 Sep;PC-18(3):177-85.
6. WELLER AC. Editorial peer review in U.S. medical journals. JAMA 1990 Mar 9;263(10):1344-7.
7. BRANDON AN, HILL DR. Selected list of books and journals for the small medical library. Bull Med Libr Assoc 1987 Apr;75(2):133-65.
8. LEWIS CS, JR. A library for internists. V. Recommended by the American College of Physicians. Ann Intern Med 1985 Mar;102(3):423-37.
9. The standard periodical directory. 10th ed. New York: Oxbridge Communications, 1987.
10. Ulrich's international periodicals directory 1986-1987. 25th ed. New York: R. R. Bowker, 1986.
11. 1987 Gale directory of publications. 119th ed. Detroit, MI: Gale Research, 1987.
12. The serials directory: an international reference book. Birmingham, AL: EBSCO Publishing, 1986.
13. GARFIELD E, ed. SCI journal citation reports: a bibliometric analysis of science journals in the ISI data base. Philadelphia: Institute for Scientific Information, 1986.
14. WEISHEIT RA, REGOLI RM. Ranking journals. Schol Publ 1984 Jul;15(4):313-25.
15. KIDDER LH. Selltiz, Wrightsman and Cook's research methods in social relations. 4th ed. New York: Holt, Rinehart and Winston, 1981:152.
16. IBID., 150.
17. DEXTER LA. The good will of important people: more on the jeopardy of the interview. Publ Opin Q 1964 Winter;28(4):556-63.
18. DEXTER LA. Elite and specialized interviewing. Evanston, IL: Northwestern University Press, 1970.
19. DEXTER LA. The good will, op. cit.
20. ANDREWS FM, KLEM L, DAVIDSON TN, O'MALLEY PM, RODGERS WL. A guide for selecting statistical techniques for analyzing social science data. 2d ed. Ann Arbor, MI: University of Michigan Press, 1981.
21. YAMANE T. Statistics, an introductory analysis. 3d ed. New York: Harper & Row, 1973:215.
22. DILLMAN DA, CHRISTENSON JA, CARPENTER EH, BROOKS RM. Increasing mail questionnaire response: a four state comparison. Am Sociol Rev 1974 Oct;39(5):744-56.
23. GOYDER J. Face-to-face interviews and mailed questionnaires: the net difference in response rate. Publ Opin Q 1985 Summer;49(2):234-52.
24. BRADBURN NM, SUDMAN S. Improving interview method and questionnaire design. San Francisco: Jossey-Bass, 1980.
25. WELLER AC, op. cit.

Received June 1989; accepted September 1989


Weller

APPENDIX

Questionnaire on Editorial Peer Review in U.S. Medical Journals

It would be appreciated if you would return the questionnaire by May 15, 1988. In this questionnaire, reviewer or review refers to peer review done by someone other than the editor, co-editor, or associate editors. Associate editor refers to any professional editorial staff other than the editor; it does not refer to clerical, secretarial, or copyediting staff.

1a. Are you the chief editor of the journal named in the cover letter? Yes  No
    If no, how many co-editors are there, excluding yourself? ___
 b. How many hours per week do you spend editing this journal? ___
 c. How many associate editors are there? ___

2. How many manuscripts did this journal receive in 1987? ___

3. What percent of all manuscripts received does this journal accept? ___%
   (Indicate the year this figure is from: 1986; 1987.)

4a. What percent of all manuscripts received undergo review? ___%
 b. What percent of accepted manuscripts undergo review? ___%
 c. What percent of rejected manuscripts undergo review? ___%
   If 0% for #4a, please return this portion of the questionnaire. It will be very useful to the study. Otherwise, please continue with #5a.

5a. Which of the following types of manuscripts are reviewed? (Use the most appropriate number: always-1; usually-2; sometimes-3; rarely-4; never-5)
                        Solicited manuscript:  Unsolicited manuscript:
    Original research          ___                    ___
    Review articles            ___                    ___
    Case reports               ___                    ___
    Editorials                 ___                    ___
    Letters                    ___                    ___
    Other (specify)            ___                    ___
 b. What percent of the following types of manuscripts are solicited?
    Original research ___%    Editorials ___%
    Review articles ___%      Letters ___%
    Case reports ___%         Other (specify) ___%
 c. If solicited manuscripts are reviewed, do they undergo the same review process as unsolicited manuscripts? Always  Usually  Sometimes  Rarely  Never
 d. What percent of solicited manuscripts are accepted? ___%

6. If the journal has any associate editors, do they have the authority to:
    Decide if a manuscript will be reviewed? Yes  No
    Select reviewers? Yes  No
    Decide if a manuscript will be revised? Yes  No
    Decide if a manuscript will be rejected? Yes  No
    Decide if a manuscript will be accepted? Yes  No

7. What level of in-house review does a manuscript receive? (Use the most appropriate number: always-1; usually-2; sometimes-3; rarely-4; never-5)
    A manuscript receives an in-house review by the editor or associate editor that is more thorough than a review by an external reviewer ___
    A manuscript receives an in-house review that is equivalent to a review by an external reviewer ___
    A manuscript receives an in-house review that is less thorough than a review by an external reviewer ___
    A manuscript receives a quick review in-house to determine if it should receive external review ___
    Other (specify) ___

8. How do you locate a reviewer for a manuscript? (Indicate percent of time used.)
    Names currently in the list of reviewers' names ___%
    Contacts at meetings ___%
    Personal acquaintances ___%
    Editorial board ___%
    Manuscript's bibliography ___%
    A society's membership list ___%
    Literature searches ___%
    Other (specify) ___%

9. If a file of reviewers' names is maintained,
 a. Is it searchable by subject? Yes  No
 b. Is this file computerized? Yes  No
 c. Are potential reviewers contacted before their names are added to the file? Yes  No

10a. Is an author's request not to use a particular reviewer honored? Always  Usually  Sometimes  Rarely  Never
  b. Is an author's request to use a particular reviewer honored? Always  Usually  Sometimes  Rarely  Never

11a. What is the average number of reviewers per manuscript? (Circle the most appropriate number.) 0  2  3  4  5  6+
  b. What is the average number of editors or associate editors that evaluate each manuscript? (Circle the most appropriate number.) 0  2  3  4  5  6+

12. What was the total number of reviewers used for this journal in 1987? ___

13. What is the average number of times during 1987 that each reviewer was used? 1-2  3-5  6-10  11-15  16+

14a. Are reviewers contacted before manuscripts are sent to them? Always  Usually  Sometimes  Rarely  Never
  b. Is it understood that board members will review manuscripts for the journal? Yes  No

15a. How often does the author receive a copy of the reviewers' reports? Always  Usually  Sometimes  Rarely  Never
  b. How often does the author receive a paraphrase of the reviewers' reports? Always  Usually  Sometimes  Rarely  Never

16a. Do you tell the author the names of the reviewers? Always  Usually  Sometimes  Rarely  Never
  b. Do you remove the reviewer's name, if the report is signed? Always  Usually  Sometimes  Rarely  Never

17. Do you inform the reviewer of the author's name or affiliation? Always  Usually  Sometimes  Rarely  Never

18. Is a record kept of the quality of the reviewer's reports? Always  Usually  Sometimes  Rarely  Never

19a. How often have you encountered evidence of reviewer bias? Always  Usually  Sometimes  Rarely  Never
  b. If you have ever encountered evidence of reviewer bias, even if only rarely, what do you think are the reasons for the bias? (Use the most appropriate number: always-1; usually-2; sometimes-3; rarely-4; never-5)
    Manuscript's conclusions are different from reviewer's conclusions in a similar study. ___
    Reviewer is known to have certain opinions on a particular subject. ___
    Reviewer has a personal bias for or against the author. ___
    Reviewer has a personal bias for or against the author's institution. ___
    Other (specify) ___

20a. What percent of manuscripts contain some statistical analysis? 1-20%  21-40%  41-60%  61-80%  81-100%
  b. For those manuscripts that use statistics, how often is a statistical reviewer used? Always  Usually  Sometimes  Rarely  Never
  c. If a statistical reviewer is ever used, under which circumstances is one used? (Check all that apply.)
    Any manuscript that uses statistics
    Any manuscript that uses statistics and has received favorable review
    A manuscript in which the reviewer has questioned the appropriateness or accuracy of the statistics
    A manuscript in which the editor has questioned the appropriateness or accuracy of the statistics
    It is assumed that the reviewer is also the statistical reviewer.
    Another reason (specify)

21a. Who decides if a manuscript is to be revised? (Use the most appropriate number: always-1; usually-2; sometimes-3; rarely-4; never-5)
    Editor ___   Associate editor(s) ___   Reviewers ___   Consensus at an editorial meeting ___   Other (specify) ___
  b. What percentage of accepted manuscripts are revised prior to acceptance? (Do not include copy-editing revisions.) 1-20%  21-40%  41-60%  61-80%  81-100%
  c. What percentage of manuscripts require more than 1 revision? 1-20%  21-40%  41-60%  61-80%  81-100%
  d. Do revised manuscripts go back to the same reviewers? Always  Usually  Sometimes  Rarely  Never

22. What percentage of published manuscripts are substantially strengthened by the review process? ___%

23a. Are editorial meetings held regularly? Yes  No
  b. If yes, how many meetings are held per month? (Circle the most appropriate number.)
