Applied Nursing Research 27 (2014) 133–136


Clinical Methods

Challenges of conducting experimental studies within a clinical nursing context

M. Gustafsson, RN, PhD student ⁎, D.M. Bohman, RN, PhD, G. Borglin, RN, PhD

Department of Health Science, Blekinge Institute of Technology, SE-379 71 Blekinge, Sweden

Article history: Received 4 September 2013; Revised 14 November 2013; Accepted 17 November 2013

Keywords: Evidence-based nursing; Intervention studies; Nursing research; Quasi-experimental design

Abstract

In recent years, several distinguished scholars have advocated for nursing research that may carry strong evidence for practice. Their advocacy has highlighted that nursing science has reached a point where, as nurse researchers, we need to develop the questions we ask and design studies that have the power to produce solid, translational, evidence-based knowledge. To do so, we need to carry out experimental tests on complex, everyday nursing interventions and activities. We also need to create public space to present accounts of our endeavours pursuing this type of design in clinical practice. This paper will discuss some of the most important insights gained from conducting a quasi-experimental study in which the aim was to investigate the effect of a theory-based intervention targeting knowledge and attitudes among registered nurses regarding cancer pain management. The importance of careful practical and methodological planning is emphasised, and the need for participation-friendly interventions is discussed. © 2014 Elsevier Inc. All rights reserved.

Competing interests: The authors declare that they have no competing interests.
⁎ Corresponding author. Tel.: +46 455 385 485. E-mail address: [email protected] (M. Gustafsson).
http://dx.doi.org/10.1016/j.apnr.2013.11.013

1. Introduction

This paper will highlight the challenges and experiences of conducting an experimental study that aimed to implement and evaluate a theory-based educational intervention whose target group was registered nurses in clinical practice (Gustafsson & Borglin, 2013). What are these challenges, and how do they affect the practicality of experimental designs? We identify with the idea that experimental designs in nursing research can further improve patient care, as they offer a different approach and open up the potential to test a hypothesis in an actual care situation. According to Polit and Beck (2012), the use of experimental studies is regarded as a valid method for evaluating healthcare interventions. However, as Brink and Wood (1998) state, when performing experimental research in nursing science there are many aspects that need to be taken into account, partly for ethical reasons but also in the light of the discrepancy between a controlled environment and the actual nursing context. There is a call nowadays for the implementation of evidence-based care, which involves applying knowledge in the clinical nursing context in order to raise the quality of nursing care. Our standpoint is that nursing science can benefit from experimental designs. The barriers are nonetheless numerous and diverse and, in line with Richards and Hamers (2009), must be dealt with through careful practical and methodological planning. In many cases the use of a theoretical framework can help us to better understand and deal with pre-existing barriers (Craig et al., 2008). Stark, Craig, and Miller (2011) acknowledge

the importance of building a nursing intervention on top of a model in order to evaluate and refine the design of the intervention. However, we must also bear in mind that some barriers and facilitators may only become apparent during the implementation process (Grol, Wensing, & Eccles, 2004). The nursing environment is a unique setting, in which the nurse researcher must take into account numerous influencing factors that can make or break the intervention. In this paper we will discuss the importance of experimental designs as well as the challenges of conducting them.

2. Experimental studies within the clinical nursing context

The use of experimental studies is seen as a valid method for evaluating healthcare interventions (Polit & Beck, 2012), and the practice of conducting experimental studies in nursing science has increased over the years (Smith et al., 2008). True experimental studies offer the most convincing evidence about the relationship between cause and effect and consist of three elements: manipulation, control and randomisation. Manipulation is the researcher's ability to influence the independent variable; control is the researcher's control over the study environment and confounding factors; randomisation refers to how the study participants are divided into study groups (Polit & Beck, 2012). According to Brink and Wood (1998), there is no more suitable design when accounting for control of extraneous variables and maximising the effect of the independent variable. The manner in which experimental studies are reported is also important in order to ensure a high level of quality. The nurse researcher can use the CONSORT statement, a set of recommendations for reporting randomised controlled trials (RCTs), which is a requirement laid down by several high-quality, peer-reviewed


journals when reporting results from experimental studies (Lindsay, 2004). The CONSORT extension for pragmatic RCTs is an attempt to help researchers involved in pragmatic experimental research in a healthcare context (Zwarenstein et al., 2008). According to Borglin and Richards (2010), the pragmatic extension can be used to improve the experimental design, make it better suited to the clinical nursing context and generally improve the way in which experimental studies are reported within nursing research.

However, when designing an experimental study, the researcher is obliged to appraise scientific rigour in relation to the context and available resources (Thompson & Panacek, 2006). As resources are frequently scarce within healthcare management, the use of a true experimental design can lead to financial problems. We therefore believe that the research design must also be suitable for the nursing environment; relying solely on the 'gold standard', i.e. the RCT, can impede the progress of nursing care and slow down the implementation of knowledge, which needs to be continuous. Nowadays, there is a call for the implementation of evidence-based care, and to help initiate and evaluate new knowledge systematically, an experimental design that can be adapted to the organisation would be of great value. According to Brink and Wood (1998), a quasi-experimental design has less scientific rigour but is more cost-efficient and easier to adapt to the clinical nursing context. Depending on the research issue, a true experimental design might not always be the most suitable way, for ethical or legal reasons, to answer the question, and in such cases a quasi-experimental design could be of interest (Brink & Wood, 1998). Quasi-experimental studies often lack the randomisation element but retain the control and manipulation elements (Thompson & Panacek, 2006). Randomisation, however, only helps to check for selection bias, which is just one of the four biases within clinical research. Consequently, randomisation has no direct impact on detection, attrition and performance biases (Borglin & Richards, 2010). In instances where randomisation is not possible, conducting research that checks for the remaining biases is still far preferable to randomised research in which none of the biases is assessed accurately (Hill, 1962). Hence, this kind of experimental design could prove suitable for the implementation of different improvement projects, since the randomisation element can be removed, which allows for more pragmatic considerations that are better suited to the clinical nursing context.

3. Challenges when conducting experimental research within nursing

Experiences from the field reveal numerous factors that have the potential to influence the outcomes measured in an intervention within a clinical nursing context. According to Grol et al. (2004), when planning an experimental study in a healthcare context it is recommended that it first be checked carefully for possible influencing, context-specific elements, although certain barriers and facilitators may only appear during the actual implementation process. According to Rycroft‐Malone (2012), many evidence-based practice projects produce more questions than answers. This could be a result of the practical difficulties of working in an organisational context where full control over influencing factors cannot be achieved and every situation is unique in its own setting. Rycroft‐Malone also highlights the difficulties that arise in improving nursing practice that involves numerous interactions; these difficulties will ultimately govern the outcome of the study (Rycroft‐Malone, 2012). In an attempt to design and evaluate these kinds of multifaceted interventions systematically, the United Kingdom Medical Research Council (MRC) has created a framework for managing complex interventions that consists of five phases.
The first three phases are related to preparation for the RCT, and the two following phases make up the intervention and implementation process. The first phase establishes the theoretical basis for the intervention, and the second phase, the modelling phase,

recommends pilot testing in an attempt to increase knowledge of the components that make up the intervention. The third phase is the exploratory trial, in which the feasibility of the intervention is explored. These preparation phases aim to provide a solid methodological framework for interventions within a medical setting (Craig et al., 2008). In accordance with the recommendations from the MRC, we regard the first three phases as essential when planning for upcoming practical difficulties, and the use of pilot testing can help discover practical and methodological difficulties that may only become apparent when working in the actual context. This planning should also involve an assessment of how to effectively monitor the adherence to and competent delivery of the intervention, as this will otherwise affect the validity of the intervention.

Richards and Hamers (2009) suggest that the complexity of nursing and the artificial state of experimental research can be accommodated by means of carefully thought-out methodology. However, in a commentary response to the paper by Richards and Hamers, Rolfe (2009) argues that research into the complexity of nursing cannot rest on the assumption that this complexity can be broken down into different elements that can be identified and measured through RCTs. A complex nursing intervention is unique, and the same intervention in another setting might yield different outcomes (Rolfe, 2009). Forbes (2009) agrees that the nursing intervention is performed within a complex nursing context but emphasises that the intervention is a specific activity whose impact must be tested to ensure its usefulness to the patient. We stress that this could prove to be a problem with regard to the transferability of experimental studies across different clinical nursing contexts. Nevertheless, it also depends on the research issue. If our research aims to implement evidence-based nursing knowledge in the clinical nursing context, this might not be as important. It would, however, require extensive testing whenever new knowledge is to be implemented and if any generalisability is to be achieved.

Providing external validity is one of the difficulties in experimental nursing research. This is exemplified in a multifactorial intervention RCT examining the prevention of falls among psychogeriatric nursing home patients, in which the majority of nursing homes declined to take part due to merger procedures and internal reorganisation of the care process (Neyens et al., 2009). Drop-outs of this nature are a significant problem for the nurse researcher, since the organisational structure does not allow easy access and does not put enough effort into connecting nursing research with organisational development. If the healthcare organisation does not allow access, this clearly affects the practicality of experimental research and inhibits the progress of nursing care. We can see a distancing between nursing research and the healthcare organisation, where nursing research is not seen as something that is incorporated naturally and performed on a daily basis. Instead, it is viewed as a time-consuming activity that detracts from the nurses' care of the patients. Consequently, this could affect the response rate and adherence to the purpose of the experimental study. This is illustrated in a quasi-experimental study whose aim was to evaluate the impact of a computerised educational module on nurses' diabetes knowledge and confidence levels. Both the pre-test and the post-test survey had a low response rate, 44% and 9% respectively. The authors discussed frequent interruptions while the nurses filled in the surveys as an important barrier, although the introduction of a new electronic system could also have interrupted the process of completing the surveys (Eaton-Spiva & Day, 2011). However, occurrences of this nature are common in a clinical nursing context and should not influence adherence so strongly. If the organisation cannot work with the implementation of evidence-based care because of a lack of integration into its structures and resources, how can we improve care for our patients?

The experiences of working closely with experimental nursing research, both as researchers and as participating nurses, have led us


to the understanding that, despite careful methodological and practical planning and profound efforts to account for perceived and upcoming barriers within the current context (Borglin, Gustafsson, & Krona, 2011; Gustafsson & Borglin, 2013), there may still be a notable lack of adherence to the principles of the intervention. We would like to draw attention specifically to one of our most taxing design issues: patient recruitment. Careful negotiations about when to recruit, and about who should recruit, resulted in an agreement between the research team and the intervention ward. The ward nurses agreed to be in charge of recruiting patients at admission. This joint decision between the ward and the research team turned out to be detrimental to the study: it left us unable to investigate our secondary outcome. Thus, whether our theory-based educational intervention targeting the nurses actually had a positive effect on the patients' pain perception, i.e. patients reporting less pain, could not be investigated (Borglin et al., 2011; Gustafsson & Borglin, 2013). Unknown to us, as well as to the ward nurses, turbulent organisational changes that were impossible to foresee coincided with the study. We had anticipated that recruiting enough patients could become a barrier, but the problem turned out not to be a lack of admitted patients to recruit. Instead, the nurses reported back, after repeated prompting concerning recruitment, that they simply could not find the time to recruit participants. Later on, when raising the issue with the nurses who did not comply with the intervention instructions, a frequent comment was: “This research is really important for the patients, but if I have to fill in this survey or do this extra measurement I don't have time to take care of my patients. I feel sorry for not contributing, but I have to prioritise.”

Our experience highlights the importance of nurse researchers keeping in mind the practicality of, and adherence to, the intervention when designing the study. One way of addressing this could be to introduce a liaison nurse responsible for screening and recruiting the participants, thus assisting researchers from outside the organization (Weierbach, Glick, Fletcher, Rowlands, & Lyder, 2010). We believe such a liaison nurse could benefit the recruitment process: besides having the main responsibility for recruitment, they also have unique knowledge of the clinical nursing context. Additionally, they might be better placed to remain outside unexpected events within the organization. Dividing the responsibility for recruitment amongst all nurses, as we did, might not be a feasible approach. We believe that collaborative strategies between the academic and the clinical environment (Engelke & Marshburn, 2006) are crucial to an intervention's success or failure.

Sustaining the fidelity of the intervention, i.e. the adherence to and competent delivery of the intervention (Dumas, Lynch, Laughlin, Phillips Smith, & Prinz, 2001; Santacroce, Maccarelli, & Grey, 2004), can be a real challenge for the nurse researcher. But if we cannot ensure that the registered nurses are adhering to the interventional procedures, this will negatively affect the validity of the intervention (Santacroce et al., 2004). As a research team we thought we were aware of these challenges. We assumed that, over time, the registered nurses would adhere less to the intervention's principles as they forgot the procedures. We therefore applied methods aimed at lessening this potential lack of adherence. One of these methods was to be visible as researchers, both by regularly providing information and updates concerning the study and by regularly visiting the nurses' workplace to enquire about their understanding of the intervention and about any issues concerning its procedures. Dumas, Arriaga, Begle, and Longoria (2010) report their experiences of adapting a behavioural intervention to a group of people sharing a distinct set of cultural values and expectations. They emphasize that the intervention, besides being true to its original intentions, also has to be culturally appropriate for the target population (Dumas et al., 2010). We acknowledge the importance of taking into account the values and beliefs residing within the registered nurses' workplace, which have the potential to affect adherence to the intervention. To increase


adherence, Dumas et al. (2010) modified their intervention after gathering information through enquiries regarding the perceived need for the intervention amongst the targeted population and the distinct cultural values and beliefs involved. This information was obtained both from consultants and from the targeted population (Dumas et al., 2010). Conducting these types of investigations prior to the actual intervention could be crucial for adherence: if we do not ask the participants how they themselves perceive the intervention, how can we adjust for these diverse conditions? Another issue that could negatively affect adherence is the use of extensive research instruments. An extensive list of measurements for the validity and reliability of the study design might look very promising on paper, but when the study is implemented the nurses may not find the time or interest to fill in all the instruments, and the intended purpose is lost. In a study by Kocaman et al. (2010), a substantial number of the barriers to research utilisation in nursing in eight different countries were related to organisational problems, i.e. an organisation that is inadequate for implementation, lack of cooperation from physicians, lack of time and lack of cooperation from staff. We perceive the organisational environment as a real challenge, as even a well-planned intervention that takes into account leadership support, local barriers, simplicity and cooperating nurses with a perceived need to change may not necessarily lead to successful implementation (Van der Helm, Goossens, & Bossuyt, 2006). Our experiences indicate that the nurses need a mandate to carry out the instructions for the intervention, and implementation needs to be perceived as important and not as something that can be left undone. Fundamentally, there needs to be a working environment that can support nursing research. Research also highlights the significance of clear instructions and guidelines on how the nurses need to comply with the management of the intervention (Jansson, Pilhamar, & Forsberg, 2011). The intervention also needs to be participant-friendly if it is to achieve high instruction compliance. Nevertheless, even a practically and methodologically well-planned experimental study cannot accurately foresee how the actual implementation process will progress and to what extent the participating nurses will adhere to and complete all the necessary steps of the intervention.

4. Conclusion

Experimental designs can be very valuable in nursing research, especially in implementation science. However, the clinical nursing context is a challenging environment, with numerous barriers that are diverse in nature and in some cases virtually impossible to account for. One of our most prominent issues was the recruitment of participants. Collaborative strategies between academia and the clinical environment need to be further developed to avoid this issue. We have suggested that one viable strategy could be the introduction of a liaison nurse responsible for screening and participant recruitment. Careful practical and methodological planning of the experimental study is of crucial importance, and in many cases the use of a theoretical framework can help us deal better with pre-existing barriers. When designing the study, it is not only the scientific rigour that needs to be accounted for. Without taking into account how the nurses perceive the time constraints, the participation workload and the complexity of the study instruments and measurements used, adherence to the study may falter and the intended purpose may be lost.

References

Borglin, G., Gustafsson, M., & Krona, H. (2011). A theory-based educational intervention targeting nurses' attitudes and knowledge concerning cancer-related pain management: A study protocol of a quasi-experimental design. BMC Health Services Research, 11, 233.
Borglin, G., & Richards, D. (2010). Bias in experimental nursing research: Strategies to improve the quality and explanatory power of nursing science. International Journal of Nursing Studies, 47(1), 123–128.


Brink, P. J., & Wood, M. J. (1998). Advanced design in nursing research. Thousand Oaks, Calif.: Sage Publications.
Craig, P., Dieppe, P., Macintyre, S., Michie, S., Nazareth, I., & Petticrew, M. (2008). Developing and evaluating complex interventions: The new Medical Research Council guidance. BMJ (Clinical Research Ed.), 337, a1655.
Dumas, J. E., Arriaga, X., Begle, A. M., & Longoria, Z. (2010). 'When will your program be available in Spanish?' Adapting an early parenting intervention for Latino families. Cognitive and Behavioral Practice, 17(2), 176–187.
Dumas, J. E., Lynch, A. M., Laughlin, J. E., Phillips Smith, E., & Prinz, R. J. (2001). Promoting intervention fidelity: Conceptual issues, methods, and preliminary results from the EARLY ALLIANCE prevention trial. American Journal of Preventive Medicine, 20(1 Suppl.), 38–47.
Eaton-Spiva, L., & Day, A. (2011). Effectiveness of a computerized educational module on nurses' knowledge and confidence level related to diabetes. Journal for Nurses in Staff Development, 27(6), 285–289.
Engelke, M. K., & Marshburn, D. M. M. (2006). Collaborative strategies to enhance research and evidence-based practice. Journal of Nursing Administration, 36(3), 131–135.
Forbes, A. (2009). Clinical intervention research in nursing. International Journal of Nursing Studies, 46(4).
Grol, R., Wensing, M., & Eccles, M. (2004). Improving patient care: The implementation of change in clinical practice. Butterworth-Heinemann.
Gustafsson, M., & Borglin, G. (2013). Can a theory-based educational intervention change nurses' knowledge and attitudes concerning cancer pain management? A quasi-experimental design. BMC Health Services Research, 13(1), 328.
Hill, A. B. (1962). Statistical methods in clinical and preventive medicine. Edinburgh: Livingstone.
Jansson, I., Pilhamar, E., & Forsberg, A. (2011). Factors and conditions that have an impact in relation to the successful implementation and maintenance of individual care plans. Worldviews on Evidence-Based Nursing, 8(2), 66–75.
Kocaman, G., Seren, S., Lash, A. A., Kurt, S., Bengu, N., & Yurumezoglu, H. A. (2010). Barriers to research utilisation by staff nurses in a university hospital. Journal of Clinical Nursing, 19(13–14), 1908–1918.

Lindsay, B. (2004). Randomized controlled trials of socially complex nursing interventions: Creating bias and unreliability? Journal of Advanced Nursing, 45(1), 84–94.
Neyens, J. C. L., Dijcks, B. P. J., Twisk, J., Schols, J. M. G. A., van Haastregt, J. C. M., van den Heuvel, W. J. A., et al. (2009). A multifactorial intervention for the prevention of falls in psychogeriatric nursing home patients, a randomised controlled trial (RCT). Age and Ageing, 38(2).
Polit, D. F., & Beck, C. T. (2012). Nursing research: Generating and assessing evidence for nursing practice. Philadelphia: Wolters Kluwer Health/Lippincott Williams & Wilkins.
Richards, D. A., & Hamers, J. P. H. (2009). RCTs in complex nursing interventions and laboratory experimental studies. International Journal of Nursing Studies, 46(4), 588–592.
Rolfe, G. (2009). Complexity and uniqueness in nursing practice: Commentary on Richards and Hamers (2009). International Journal of Nursing Studies, 46(8), 1156–1158.
Rycroft‐Malone, J. (2012). Implementing evidence‐based practice in the reality of clinical practice. Worldviews on Evidence-Based Nursing, 9(1), 1.
Santacroce, S., Maccarelli, L., & Grey, M. (2004). Intervention fidelity. Nursing Research, 53(1), 63–66.
Smith, B. A., Lee, H.-J., Lee, J. H., Choi, M., Jones, D. E., Bausell, R. B., et al. (2008). Quality of reporting randomized controlled trials (RCTs) in the nursing literature: Application of the Consolidated Standards of Reporting Trials (CONSORT). Nursing Outlook, 56(1), 31–37.
Stark, M. A., Craig, J., & Miller, M. G. (2011). Designing an intervention: Therapeutic showering in labor. Applied Nursing Research, 24(4), 73–77.
Thompson, C. B., & Panacek, E. A. (2006). Research study designs: Experimental and quasi-experimental. Air Medical Journal, 25(6), 242–246.
Van der Helm, J., Goossens, A., & Bossuyt, P. (2006). When implementation fails: The case of a nursing guideline for fall prevention. Joint Commission Journal on Quality and Patient Safety, 32(3), 152–160.
Weierbach, F. M., Glick, D. F., Fletcher, K., Rowlands, A., & Lyder, C. H. (2010). Nursing research and participant recruitment: Organizational challenges and strategies. JONA: The Journal of Nursing Administration, 40(1), 43–48.
Zwarenstein, M., Treweek, S., Gagnier, J. J., Altman, D. G., Tunis, S., Haynes, B., et al. (2008). Improving the reporting of pragmatic trials: An extension of the CONSORT statement. BMJ (Clinical Research Ed.), 337, a2390.
