SPECIAL TOPIC

Structure and Establishing Validity in Survey Research

Michael T. Nolte, B.S.
Melissa J. Shauver, M.P.H.
Kevin C. Chung, M.D., M.S.

Ann Arbor, Mich.

Summary: Survey research is a commonly used tool for gathering opinions regarding emerging technologies and current practices. As randomized clinical trials rose to prominence, the Consolidated Standards of Reporting Trials Study Group created a checklist of items necessary for the complete and transparent reporting of randomized clinical trials. Similar checklists have been created for a variety of research methodologies. Notably missing is a checklist for the reporting of survey studies. This article examines the conduct and reporting of survey research and proposes a checklist to evaluate the validity of survey studies published in Plastic and Reconstructive Surgery. (Plast. Reconstr. Surg. 135: 216e, 2015.)

From the Department of Surgery, Section of Plastic Surgery, University of Michigan Medical School. Received for publication March 18, 2014; accepted May 21, 2014. Copyright © 2014 by the American Society of Plastic Surgeons. DOI: 10.1097/PRS.0000000000000794

Survey research, the strategy of gathering data from a subset of individuals to characterize a larger population, is among the most widely applied research methodologies.1 Survey research may be best known for its use in marketing and politics, but it is increasingly used in health care as well. Furthermore, advances in technology have eased the development and distribution of surveys and participation in them, making the methodology increasingly common.2 In 2011, approximately 17 percent of the 724,831 publications added to MEDLINE contained the keyword “survey,” and survey research was published in 83 percent of medical research journals.3 However, of the journals that published survey research, 92 percent failed to provide any guidance on how to conduct or report such studies.3 Of the journals that did provide guidance, most included only a single brief statement or direction, such as “If appropriate, include how many participants were assessed out of those enrolled, e.g., what was the response rate for a survey?” and “The results should include: … the number of patients in the updated series who were examined, the number who responded to questionnaires. …”3 In summary, survey research makes up a substantial proportion of all health care research, yet there is little quality guidance to inform proper reporting of this methodology.3,4 In 2010, the Consolidated Standards of Reporting Trials Study Group published a checklist of items
that should be included in any high-quality report of a randomized clinical trial.5 The 25-item checklist includes reporting on the creation of the randomization scheme, exactly who was blinded and how, and potential sources of bias. Including as many of these items as possible in an article helps to ensure a complete and transparent report of the research, such that any results can be fully assessed for quality and accuracy.5 The Consolidated Standards of Reporting Trials Study Group then proceeded to develop the Enhancing the Quality and Transparency of Health Research Network, a global initiative that has identified over 200 key reporting guidelines for many research methodologies, including the Preferred Reporting Items for Systematic Reviews and Meta-Analyses, the Standards for Reporting of Diagnostic Accuracy, and the Consolidated Health Economic Evaluation Reporting Standards statement, all striving to improve the value of published health research literature.6 The Network is a highly valuable resource to the medical community, but at this time it does not include a guide for the conduct and reporting of survey research. However, the Network has recognized and endorsed a number of documents that can contribute to the creation of such a guide. These include the Checklist for Reporting Results of Internet E-Surveys and the good practice in the conduct and reporting of survey research checklist.4,7 In addition, because surveys are a type of observational study aimed at providing a representative description of the opinions, views, thoughts, feelings, perspectives, and/or experiences of a specific population at a given time, we can apply many of the items in the Strengthening the Reporting of Observational Studies in Epidemiology statement.8 Although we encourage investigators who perform survey research to read these documents in detail, we have combined items from all three of these sources to provide a checklist of essential criteria for survey studies submitted to Plastic and Reconstructive Surgery (Table 1). When relevant, we provide examples from a 2011 national survey, published by Plastic and Reconstructive Surgery, regarding practice patterns of postmastectomy breast reconstruction.9 Although many of the criteria presented will not be new to readers, the publication and endorsement of guidelines reiterating the importance of transparent reporting methods have been associated with improved quality of research.10

Disclosure: The authors have no financial interest to declare in relation to the content of this article.

ESSENTIAL REPORTING CRITERIA FOR SURVEY RESEARCH

Survey Instrument

The most important element of a survey study is the actual survey instrument, the specific tool used for obtaining data from respondents. The survey itself must be designed to elicit responses that are truly representative of the sample while minimizing bias. Whenever possible, it is preferable to use an existing instrument. If an existing instrument is used, describe it thoroughly and provide justification for its use. If no existing survey addresses the research question, it may be necessary to create a novel instrument. In doing so, it is ideal to draw from existing and recognized surveys, with appropriate citations. Creating a novel instrument can be work intensive, however, and adequate time should be set aside for the development, testing, and revision of the instrument. Detailed instructions for writing a novel, high-quality research survey have previously been published.11

Asking the Appropriate Questions

A meaningful survey must pose questions that yield accurate and valid information regarding the population being studied. The development of such questions begins with clearly identifying the goals of the project and the data to be obtained. A valuable exercise is the creation of sample tables, even before any data are collected.12

Table 1. Criteria for Conducting and Reporting Survey Research

Introduction
  Research question: State one or more clear and understandable questions that the survey aims to answer.
  Background/rationale: Explain why the survey should be conducted.
  Specific objectives: Clearly state specific aims and hypotheses.

Methods
  Survey instrument: Describe the research tool used to gather data. If a new survey was created or the readership is likely to be unfamiliar with an existing instrument, provide a copy as an appendix.
  Method of administration: State the means by which the respondents were contacted and how the survey was completed (e.g., by Internet, phone, mail).

Survey sample
  Sampling frame: Describe the target population and the subset of individuals presented with the survey (who, when, where). Address any possible sampling bias and controls.
  Sample identification: Explain how this group of individuals was identified for completion of the survey.
  Sampling methodology: State the strategy by which respondents were selected (e.g., random versus nonrandom sampling), and any incentives offered.

Results
  Descriptive data: Report the demographic information of the respondents who successfully completed the survey.
  Outcome data: Report outcomes and statistical significance.
  Response rate: Include what percentage of the sample successfully completed the survey and potential reasons for nonparticipation.

Bias
  Nonresponse bias: Address how the answers of the respondents might differ from the answers of those who did not respond.
  Response bias: Discuss controls for possible social desirability and recall bias by the respondents.
  Adjustment for bias: Describe any statistical measures taken to adjust for potential bias.

Discussion
  Key results: Explain the key findings with regard to the specific aims.
  Generalizability: Clarify and elaborate the populations or subpopulations to which the results of the survey can be applied.
  Limitations: Acknowledge potential shortcomings of the study, and shed light on how these may be corrected in future studies.
  Conclusion: Explain the importance of the research and future directions.


This aids in the identification of potential data collection fields that may be inadvertently omitted during the planning process. Furthermore, it will help the researcher gain insights into the topic and allow him or her to answer questions such as, “Is the topic sharply defined and clear-cut, or is it more complex?” These steps will guide the researcher to the types of questions that should be asked.

There are two types of survey questions: open-ended and closed-ended. Open-ended questions allow the respondent to freely express himself or herself and can reveal nuances that closed-ended questions cannot. However, because of the potential for great variability in responses, open-ended questions can be tedious to analyze and can make drawing conclusions difficult.13 We have limited this discussion to the more commonly used closed-ended questions. These include selection, ranking, and the use of visual analogue scales.

For a selection question, the respondent selects a response from a list of options. The provided answer choices should be exhaustive (all possible responses are represented) and mutually exclusive (none of the responses overlap).11,12 For some questions, a simple true or false will satisfy these requirements, but for others, it might be necessary to have numerous choices. Despite the mutually exclusive nature of the response options, the directions should explicitly state that only one option should be selected, if this is desired. Lastly, it is inevitable that even with the most careful question construction and the most detailed instructions, some respondents will feel that the choices do not include their desired opinions or that more than one choice is appropriate. An a priori procedure to handle “write-in” and multiple responses should be in place. Many research aims may require more nuanced responses and may therefore be better analyzed through ranking questions.
A ranking question asks the respondents to specify the degree to which they endorse a particular concept relative to others. Ranking questions can be presented in several ways. In the first case, respondents can be asked to literally rank items according to preference, agreeability, or any other metric. For example, given the vast array of communications available, patients may be given a list of contact methods (e.g., postal mail, e-mail, phone, text) and asked to rank them in order, from most to least preferred. Alternatively, a Likert scale, which is a five- to seven-point scale of varying response options, may be used to measure respondents’ agreement with a particular topic. Likert scales are commonly used, for instance, to determine satisfaction (completely
satisfied, somewhat satisfied, neutral, somewhat dissatisfied, and completely dissatisfied). As with selection questions, it is inevitable that some respondents will be compelled to select more than one option or mark between options. Similarly, a large number of respondents may select the middle option (neutral) on a Likert scale.14 Because opinions regarding the utility of the neutral option vary, the scales can be written without this choice, thereby encouraging respondents to express an actual opinion.1,14

Ranking questions can be written in a third way, with only the extreme ends of possible responses provided. A visual analogue scale allows respondents to indicate their opinion on a particular topic more precisely than a Likert scale does. A line with two opposing concepts at either end is presented, and the respondent is asked to mark the point on the line where his or her agreement falls. Data are recorded and later analyzed by measuring the length along the line at which the respondent made his or her mark.

As a final note, ranking questions should be as specific as possible to narrow the focus of the respondent on a singular concept.15 Questions that are too broad may lead to a “double-barreled” effect, in which two distinct concepts are involved. An example of such a question might be, “How satisfied were you with the cost and the duration of your hospital visit?” Respondents may agree with or feel strongly about one item but not the other, making it difficult to select one response. This can lead to possible confounding effects in the data and should be avoided.

Identifying and Minimizing Sources of Bias

Validity is the relationship between survey responses and “reality.” In a survey of plastic surgeons, for example, validity is the similarity (or dissimilarity) between the survey results and what the plastic surgeons actually think and feel.9 It is a key factor in high-quality survey research, but the accuracy of the results can be influenced by bias.
Many types of bias exist, especially in survey research. One commonly encountered type is recall bias, which occurs because respondents remember past events in different ways, with more recent and more emotional events often recalled more vividly.1 Surveys that require the respondent to recall past events or past feelings should be careful to account for this bias. Similarly, leading questions, or those that reveal the response the researchers “desire,” can also introduce what is known as response bias.16 Questions should be worded as neutrally as possible and should not include unnecessary verbiage. Furthermore, the questions should be
written with vocabulary that is appropriate for the target respondents.17 The 2011 survey, for example, was written with terminology appropriate for highly trained physicians, and might not have been understood by all respondents if it had been administered to the general public.9 In fact, all health-related surveys for the general public should be written in plain language to ensure complete and rapid understanding by the reader.18 Lastly, response bias can also be introduced through social desirability, or the tendency of an individual to respond in a way that presents himself or herself in a favorable light. This type of bias can be minimized through a written or stated emphasis on the confidential nature of the survey (if this is the case) and through thoughtful wording of the questions.16 Phrasing questions in a way that asks the respondents whether they support or agree with a practice, rather than asking them to admit to engaging in such a practice, can result in a more accurate response, especially for sensitive topics.19 For example, asking a participant to rate his or her agreement with a statement such as, “Occasional recreational drug use isn’t a very big deal,” rather than asking, “Do you use recreational drugs?” may result in a more honest evaluation of the respondent’s views on recreational drug use.

Another potential source of bias is the method of survey administration, which includes how the survey is distributed and how data are recorded. Surveys can be administered in person, by the respondent, or by means of a proxy. Proxy administration is used only in specific circumstances and is not discussed in this article. In-person administration involves live and direct communication between the respondent and the researcher, who records the responses.
Although it is ideal for the respondent and researcher to be in the same physical location, the survey can also be administered over the telephone or through an online video chat. This in-person method allows both the researcher and the respondent to request clarification if needed.20 The researcher can also ensure that the survey is completed correctly, and he or she has the benefit of immediate access to the data. However, interaction and communication between the researcher and the respondent have the potential to bias results.21 Response bias, as discussed earlier, may affect a respondent’s willingness to provide accurate answers to the researcher’s questions. In surveys that pertain to satisfaction with medical care, for example, respondents may worry about hurting the researcher’s feelings or may fear that their medical care and/or insurance coverage may change based on the responses given.22

Providing the respondent with the survey and allowing the individual to complete it on his or her own may facilitate more honest responses. However, this method adds the burden of delivering the survey to the respondent and, even more challenging, retrieving it for data entry and analysis. To eliminate the latter burden, surveys can be completed in an environment where they can be submitted directly on completion, such as a hospital or clinic. Unfortunately, this is often not feasible and may hinder the respondent’s ability to provide honest answers.

A common method of survey research is the mail-based survey. Although this reinforces the anonymous nature of the survey and the value of confidentiality, the response rate may suffer because of the added inconvenience for the respondent of returning the survey.20,23

Online surveys, the final survey type we examine, have become increasingly popular. There are a variety of companies that facilitate free creation and dissemination of surveys, and there are paid services that will give access to panels of respondents selected for specific sampling needs.24 Online surveys provide the researcher a high level of control over responses.20,25 Questions can be set to permit only one response, and respondents cannot write extra information in the margins, avoiding the previously mentioned multiple-response dilemma. Furthermore, online surveys reduce data entry time because data are saved as the respondents complete the survey and can be easily downloaded for analysis. In addition, many survey builders will even produce tables and figures to specifications. Although the anonymity of the Internet can induce respondents to answer freely and honestly, the most apparent drawback is that every member of the sample must have Internet access.23 As Internet access becomes more prevalent, another problem is accommodating the wide variety of devices and platforms used to access the Internet.
At a minimum, surveys should be compatible with major Web browsers and with tablets, mobile devices, and desktop and laptop computers.

Survey Sample

Survey research, by definition, uses a sample of individuals from a population to characterize attributes of the entire population. Unless the researchers can conduct a census (discussed later), the validity of the survey’s results will depend on the characteristics of the sample matching the characteristics of the population. Differences between the sample and the population can lead to sampling error. For this reason, detailed information about sampling is absolutely
necessary for any reader to assess the legitimacy of survey research results.4

The sampling frame, or target population, is the who, when, and where of a study.26 In the case of the 2011 survey of plastic surgeons, the sampling frame was active members of the American Society of Plastic Surgeons who perform breast reconstruction (who), in 2008 (when), who resided in the continental United States (where).27 The results of this project were a generalization of the opinions of all the members of this group.

There are three basic sampling methods: census, random sampling, and nonrandom sampling. From a semantic standpoint, a census is not sampling at all.1 Rather than surveying a subset of a population, a census surveys all members. This is the ideal way to collect data but is feasible, from a cost and manpower standpoint, only if the sampling frame is small and easily accessible. For larger populations, a census is not possible and sampling must be performed.

Random sampling means that every member of the population has an equal chance of being selected for the sample. In a 2010 project surveying members of the American Society for Reconstructive Microsurgery, for example, we numbered each member alphabetically and used a random number generator to produce a list of 200 numbers, which we then matched to 200 surgeons.28 By doing so, each member of our sampling frame had an equal chance of being selected for the sample. Like a census, this requires a well-defined sampling frame, with all members identified and easily accessible.

Nonrandom sampling selects certain members of the population, using explicitly defined criteria, to represent the whole. For example, an ongoing survey and assessment study of patients with rheumatoid arthritis affecting the hands has strict inclusion criteria, including age and the degree and severity of deformity. In addition, the patients are drawn from only three hand centers.
The opinions of these patients, such as their reasons for choosing to undergo corrective hand surgery, are then extrapolated to the wider population, in this case all patients with rheumatoid arthritis affecting the hand in the United States and the United Kingdom.29 Both random and nonrandom sampling can be performed in a myriad of ways.26

Lastly, the number of individuals selected must be reported. There is no defined number of individuals that is considered “sufficient,” although a larger sample is generally more desirable. In other words, a larger number of selected respondents is more likely to
result in a smaller sampling error and therefore a more accurate representation of the population.1

Response Rate

Just as there is no proven acceptable sample size, there is no proven acceptable response rate, that is, the proportion of sampled individuals from whom data were collected.30 However, no academic journal will consider a report of a survey study for publication if it lacks a reported response rate. Response rates can provide valuable insights regarding the accessibility of the survey instrument and potential biases in the data.

Nonresponse Bias

Nonresponse bias can occur when sampled individuals who did not respond to the survey differ in some key way from those who did respond. If, for instance, the 24 percent of plastic surgeons who did not complete the survey had a significantly older mean age, there may be a nonresponse bias in the form of age. However, if age and the key outcome variable(s) are not related in the responders, this bias may not be statistically relevant.1 Nonresponder analysis can be hindered by the very limited information available about nonresponders. If nonresponse bias is suspected, it may be beneficial to try to contact those who did not complete the survey. Any potential sources of nonresponse bias, and any statistical methods used to counteract bias, should be carefully described.7,30

Generalizability

It bears repeating that the aim of any survey study is to characterize a large population based on a sample of that population. However, even in a best-case scenario, sampling error is inevitable, and survey results are only truly representative of the individuals in the sampling frame.
The results of the discussed survey can be generalized only to active members of the American Society of Plastic Surgeons who perform breast reconstruction, in 2008, who resided in the continental United States.9 The results cannot be assumed to represent the opinions of plastic surgeons who do not perform breast reconstruction, for example, nor can the views be assumed to represent those of plastic surgeons who are not members of the American Society of Plastic Surgeons. A high response rate alone does not ensure generalizability.30 Generalizability must be demonstrated by providing evidence that the survey sample is representative of the whole population. Although
it is true that the larger the sample size, the more likely the data will be representative of the larger population, size alone is not sufficient.
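As a rough illustration of the sampling concepts discussed in the preceding sections (random selection from a sampling frame, response rate, and the relationship between sample size and sampling error), the following Python sketch may be useful. All numbers are hypothetical: the frame size, sample size, and responder count are invented for demonstration, and the margin-of-error calculation assumes simple random sampling with a normal approximation for a proportion.

```python
import math
import random

# Hypothetical sampling frame: 1,000 society members, numbered
# alphabetically as in the randomization strategy described above.
frame = [f"surgeon_{i:04d}" for i in range(1, 1001)]

# Random sampling: every member has an equal chance of selection.
random.seed(42)  # fixed seed so the sketch is reproducible
sample = random.sample(frame, 200)

# Suppose 152 of the 200 sampled surgeons return the survey
# (hypothetical responder count).
responders = set(sample[:152])
response_rate = len(responders) / len(sample)
print(f"Response rate: {response_rate:.0%}")  # 76%

# Sampling error for a proportion: the standard error shrinks with
# the square root of n, so quadrupling n only halves the error.
def standard_error(p: float, n: int) -> float:
    return math.sqrt(p * (1 - p) / n)

for n in (50, 200, 800):
    se = standard_error(0.5, n)  # worst case, p = 0.5
    print(f"n = {n:3d}: 95% margin of error = ±{1.96 * se:.1%}")
```

The square-root relationship is why a larger sample reduces, but never eliminates, sampling error: going from 200 to 800 respondents only halves the margin of error, which is consistent with the point above that sample size alone does not guarantee a representative result.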

DISCUSSION

Plastic surgery is a unique field of medical research, often reliant on highly variable and personal outcome measures such as pain level, aesthetics, and patient satisfaction.31 These metrics can be measured in a myriad of ways and are sometimes reported using novel tools that have not undergone rigorous psychometric testing. This can render the reporting of plastic surgery research quite difficult. In response, Plastic and Reconstructive Surgery has become an ardent leader in complete and transparent research reporting. For example, conflicts of interest can contribute significantly to biased reporting.32 In 2006, Plastic and Reconstructive Surgery emphasized the importance of disclosing conflicts of interest and initiated new authorship requirements to improve the quality of publications.33 Since then, Plastic and Reconstructive Surgery has disclosed authors’ conflicts of interest significantly more often than the other top plastic surgery research journals.34 In emphasizing essential criteria for reporting survey research, Plastic and Reconstructive Surgery continues to serve as a leader in high-quality plastic surgery research.

CONCLUSIONS

Survey research, when conducted properly, is a powerful and efficient strategy for data collection. Precise and accurate use of this methodology can provide valuable insights, but incomplete and inadequate survey research can produce misleading and misrepresentative data. The ability to accurately represent and speak for a large population based on the results of a sample of individuals requires the explicit and transparent disclosure of study procedures. Readers of the medical literature must be able to look beyond a response rate or sample size to determine whether a survey study truly characterizes the population to which it relates. As technology continues to evolve and the interconnectedness of individuals continues to increase, the frequency of survey studies will steadily rise.2 Nevertheless, although medical journals publish many articles containing surveys, few of these journals provide adequate guidelines for the proper reporting of such research.3 The critical challenge to the medical community, therefore, is to conduct, report, and identify studies based on high-quality survey research.

Kevin C. Chung, M.D., M.S.
Section of Plastic Surgery
University of Michigan Health System
2130 Taubman Center, SPC 5340
1500 East Medical Center Drive
Ann Arbor, Mich. 48109-5340
[email protected]

REFERENCES
1. Fowler FJ. Survey Research Methods. 2nd ed. Newbury Park, Calif: Sage; 1993.
2. Evans JR, Mathur A. The value of online surveys. Internet Res. 2005;15:195–219.
3. Bennett C, Khangura S, Brehaut JC, et al. Reporting guidelines for survey research: An analysis of published guidance and reporting practices. PLoS Med. 2010;8:e1001069.
4. Kelley K, Clark B, Brown V, Sitzia J. Good practice in the conduct and reporting of survey research. Int J Qual Health Care 2003;15:261–266.
5. Moher D, Hopewell S, Schulz KF, et al.; Consolidated Standards of Reporting Trials Group. CONSORT 2010 Explanation and Elaboration: Updated guidelines for reporting parallel group randomised trials. J Clin Epidemiol. 2010;63:e1–e37.
6. Simera I. Get the content right: Following reporting guidelines will make your research paper more complete, transparent and usable. J Pak Med Assoc. 2013;63:283–285.
7. Eysenbach G. Improving the quality of Web surveys: The Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J Med Internet Res. 2004;6:e34.
8. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP; STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: Guidelines for reporting observational studies. J Clin Epidemiol. 2008;61:344–349.
9. Alderman AK, Atisha D, Streu R, et al. Patterns and correlates of postmastectomy breast reconstruction by U.S. plastic surgeons: Results from a national survey. Plast Reconstr Surg. 2011;127:1796–1803.
10. Plint AC, Moher D, Morrison A, et al. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust. 2006;185:263–267.
11. Alderman AK, Salem B. Survey research. Plast Reconstr Surg. 2010;126:1381–1389.
12. Sinkowitz-Cochran RL. Survey design: To ask or not to ask? That is the question. Clin Infect Dis. 2013;56:1159–1164.
13. Shauver MJ, Chung KC. A guide to qualitative research in plastic surgery. Plast Reconstr Surg. 2010;126:1089–1097.
14. Aday LA, Cornelius LJ. Formulating questions about knowledge and attitudes. In: Designing and Conducting Health Surveys: A Comprehensive Guide. 3rd ed. San Francisco: Jossey-Bass; 2006:243–260.
15. Fowler FJ. Improving Survey Questions: Design and Evaluation. Thousand Oaks, Calif: Sage; 1995.
16. Fisher RJ, Katz JE. Social-desirability bias and the validity of self-reported values. Psychol Market. 2000;17:105–120.
17. Hunt SD, Sparkman RD, Wilcox JB. The pretest in survey research: Issues and preliminary findings. J Market Res. 1982;19:269–273.
18. Stableford S, Mettger W. Plain language: A strategic response to the health literacy challenge. J Public Health Policy 2007;28:71–93.
19. Schuman H, Presser S. Question wording as an independent variable in survey analysis. Sociol Method Res. 1977;6:151–170.
20. Sprague S, Quigley L, Bhandari M. Survey design in orthopaedic surgery: Getting surgeons to respond. J Bone Joint Surg Am. 2009;91(Suppl 3):27–34.
21. Aday LA, Cornelius LJ. Monitoring and carrying out the survey. In: Designing and Conducting Health Surveys: A Comprehensive Guide. 3rd ed. San Francisco: Jossey-Bass; 2006:281–304.
22. George S, Duran N, Norris K. A systematic review of barriers and facilitators to minority research participation among African Americans, Latinos, Asian Americans, and Pacific Islanders. Am J Public Health 2014;104:e16–e31.
23. Lau FH, Chung KC. Survey research: A primer for hand surgery. J Hand Surg Am. 2005;30:893.e1–893.e11.
24. Alderman AK, Salem B. Survey research. Plast Reconstr Surg. 2010;126:1381–1389.
25. Dillman DA. Why choice of survey mode makes a difference. Public Health Rep. 2006;121:11–13.
26. Aday LA, Cornelius LJ. Deciding who will be in the sample. In: Designing and Conducting Health Surveys: A Comprehensive Guide. 3rd ed. San Francisco: Jossey-Bass; 2006:112–142.
27. Kulkarni AR, Sears ED, Atisha DM, Alderman AK. Use of autologous and microsurgical breast reconstruction by U.S. plastic surgeons. Plast Reconstr Surg. 2013;132:534–541.
28. Chung KC, Shauver MJ, Saddawi-Konefka D, Haase SC. A decision analysis of amputation versus reconstruction for severe open tibial fracture from the physician and patient perspectives. Ann Plast Surg. 2011;66:185–191.
29. Chung KC, Kotsis SV, Kim HM, Burke FD, Wilgis EF. Reasons why rheumatoid arthritis patients seek surgical treatment for hand deformities. J Hand Surg Am. 2006;31:289–294.
30. Johnson TP, Wislar JS. Response rates and nonresponse errors in surveys. JAMA 2012;307:1805–1806.
31. Alderman AK, Wilkins EG, Lowery JC, Kim M, Davis JA. Determinants of patient satisfaction in postmastectomy breast reconstruction. Plast Reconstr Surg. 2000;106:769–776.
32. Lopez J, Prifogle E, Nyame TT, Milton J, May JW Jr. The impact of conflicts of interest in plastic surgery: An analysis of acellular dermal matrix, implant-based breast reconstruction (Abstract 16). Plast Reconstr Surg. 2014;133(Suppl):984.
33. Rohrich RJ. Full disclosure: Conflict of interest in scientific publications. Plast Reconstr Surg. 2006;118:1649–1652.
34. Sinno H, Lutfy J, Tahiri Y, Neel OF, Gilardino M. Reporting disclosures to the reader in plastic surgery journal publications. Can J Plast Surg. 2012;20:e35–e36.
