Carol A Deets, RN

Evaluating CE programs

All too often the evaluation of continuing education (CE) offerings is essentially unplanned. Systematic evaluation is frequently intended but seldom depicted in a formal plan. According to Collart, this process includes five steps: determine what to evaluate, decide on acceptable evidence, collect data, summarize data, and make a judgment of worth.1 Carrying out each of these steps provides a blueprint for an evaluation plan. "Reflections: awareness for action" could make a sound plan for evaluating continuing education programs. One must reflect on what has been accomplished in past evaluations, be aware of a wide range of techniques that can be used in the evaluation, and take positive action to choose and implement the evaluation plan.

Carol A Deets, RN, EdD, is director of the Center for Health Care Research and Evaluation, University of Texas at Austin. She is a graduate of the Presbyterian Hospital School of Nursing, Charlotte, NC, and received her BS degree in nursing from Queens College, Charlotte. Her MS degree in nursing and her EdD degree are from Indiana University, Bloomington. This paper was presented by the author at the 1977 AORN Congress in Anaheim.

AORN Journal, July 1977, Vol 26, No 1

Reflecting on past evaluations allows one to determine:
1. types of evaluations that were successful
2. types of evaluations that were not successful
3. factor or factors causing a plan to be successful
4. factor or factors causing a plan to fail.

Capitalizing on the work of others will help you proceed as effectively and efficiently as possible with your plan. When I refer to this activity as a review of the literature, I find many people are "turned off." On the other hand, when I talk about the useful and practical information that results from what others have done, people see the advantage of spending time in the library. I believe that the amount of time spent reviewing the literature will be more than saved when it comes to implementing an evaluation plan.

The format of the continuing education program is one area that has recently been deemed important to consider in developing a plan. There are many different ways to present content, but which is best for your participants? You must decide if you want to use practice sessions, group work, or some other technique. Once you have decided on a format, develop your evaluation plan so the value of using that format can be determined.

Recently, I was involved in a study in which we investigated the effect of practice on learning when practice was included as a part of the CE program. We wanted to know if the costs of a program increased when practice time was included in the offering. Would the costs be too high, making it impractical to include practice time in CE programs? Would the cost of including a practice component be offset by the amount of knowledge participants gained? We found that for our programs the cost of providing practice time usually did not appreciably increase the total cost. In fact, other factors largely determined the cost of the CE programs we evaluated, not whether or not practice was involved.2

Writing objectives is an important part of the evaluation process. All too often, objectives are stated at a simple level of behavior, for example, "identify the specific steps in procedure X." This type of objective only encourages memory work. A better objective might be "determines appropriate nursing actions for procedure X" or "provides rationale for nursing actions during procedure X." An individual must know the basic steps of procedure X to complete either of these objectives. This type of objective focuses on the learning of nursing actions and their rationale rather than on memorizing knowledge. It identifies the cognitive processes the participants are to use with the information provided during the CE workshop. If the CE program is designed only to impart information, with no attempt to allow participants time to assimilate and practice that information, the learning of information is all that will happen.
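The cost question examined in the practice-time study can be made concrete as a cost-per-gain comparison. The sketch below is illustrative only: the dollar amounts, test scores, and the helper function are invented for the example, not data from the study cited above.

```python
# Illustrative only: hypothetical figures, not data from the study cited above.
# Compares two CE program formats by cost per point of mean knowledge gain.

def cost_per_gain(total_cost, mean_pretest, mean_posttest):
    """Cost of one point of mean knowledge gain (posttest minus pretest)."""
    gain = mean_posttest - mean_pretest
    if gain <= 0:
        raise ValueError("no measurable gain; cost per gain is undefined")
    return total_cost / gain

# Hypothetical programs: one lecture only, one with a practice component.
lecture_only = cost_per_gain(total_cost=1200.0, mean_pretest=52.0, mean_posttest=68.0)
with_practice = cost_per_gain(total_cost=1400.0, mean_pretest=52.0, mean_posttest=79.0)

print(f"lecture only:  ${lecture_only:.2f} per point gained")
print(f"with practice: ${with_practice:.2f} per point gained")
```

On figures like these, the practice component raises total cost modestly but lowers the cost of each point gained, which is the kind of trade-off the study weighed.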

If you plan to include practice time in the CE workshop, your objective should be written accordingly. Objectives could be written at a general level such as "demonstrates steps in procedure X." On the other hand, you may want to further specify your expectations, for example, "correctly identifies PVCs on an EKG strip." The more specific your objectives, the easier it will be to evaluate whether or not the objectives are met.

Next, you must consider how you are going to obtain evaluative information. You may find that some of the tools used in the past are appropriate for your evaluation, but more likely a more specific measurement tool will be best. You probably are interested in participants' knowledge gain, performance of a specific skill, and/or attitudinal information. This presents a whole new set of problems, for you must define and measure the appropriate concept or concepts. You need to be aware of the current information and available tools for measuring attitudes, performance, and knowledge gained or lost.

Attitudes and their role in predicting successful changes in behavior are interesting and challenging. If you can create more positive attitudes toward the topic, you will be better able to predict the actual behavior (both cognitive and performance) of your participants. The problem is whether the desired changes can be precipitated. I have noted that it is seldom an objective of a continuing education program to change an attitude; however, this is a by-product of many programs. A physical assessment workshop I evaluated emphasized the new role of the nurse as an assessor and what this new role meant to each nurse. Although developing a positive attitude toward the new role should have been a stated objective for the program, it was not.
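Specific objectives also make scoring straightforward: each test item can be mapped to the objective it measures, and the results reported objective by objective. The sketch below assumes invented item names, objectives, and responses; it is one simple way to do such a tally, not a prescribed method.

```python
# A sketch of per-objective scoring. Item names, objectives, and the one
# participant's responses are all hypothetical, for illustration only.
from collections import defaultdict

# Which test item measures which objective (an assumed mapping).
item_to_objective = {
    "item1": "identifies PVCs on an EKG strip",
    "item2": "identifies PVCs on an EKG strip",
    "item3": "provides rationale for nursing actions in procedure X",
    "item4": "provides rationale for nursing actions in procedure X",
}

# One participant's responses: True means the item was answered correctly.
responses = {"item1": True, "item2": True, "item3": True, "item4": False}

# Group each item's result under its objective.
by_objective = defaultdict(list)
for item, objective in item_to_objective.items():
    by_objective[objective].append(responses[item])

# Report percent correct for each objective separately.
for objective, results in by_objective.items():
    pct = 100 * sum(results) / len(results)
    print(f"{pct:5.1f}%  {objective}")
```

A report broken out this way shows at a glance which objectives were met and which need reteaching, rather than burying both in a single total score.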


Figure 1
I am able to: (Yes / No)
1. Inspect the conjunctiva, sclera, and cornea
2. Check all pupillary responses
3. Check the gag reflex
4. Palpate for the thyroid
5. Palpate pulses
   a. carotid
   b. radial
   c. brachial
   d. popliteal
   e. dorsalis pedis
An example of a self-report tool.
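A yes/no checklist like the one in Figure 1 can be summarized mechanically. The sketch below assumes invented responses; the item wording follows the figure.

```python
# A sketch of tallying a yes/no self-report checklist like Figure 1.
# Item wording follows the figure; the answers are invented for illustration.
checklist = [
    ("Inspect the conjunctiva, sclera, and cornea", "yes"),
    ("Check all pupillary responses", "yes"),
    ("Check the gag reflex", "no"),
    ("Palpate for the thyroid", "yes"),
    ("Palpate pulses", "yes"),
]

performed = [task for task, answer in checklist if answer == "yes"]
not_performed = [task for task, answer in checklist if answer == "no"]

print(f"{len(performed)} of {len(checklist)} skills reported performed")
for task in not_performed:
    print("not yet performed:", task)
```

The same structure extends to the graded responses of Figure 2 by replacing the yes/no answer with a competence rating.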

However, the attitude did change, but the change would never have been observed if we had not had an evaluation plan that included the attitudinal dimension. The knowledge gained in the program was tremendous, and over a period of three months, that knowledge was maintained. In this case the change in attitude was probably what precipitated the knowledge gain.3

If your concern is whether a procedure has been learned, attitude is not going to be as important. If you are concerned, however, about the OR nurse learning assertive skills and being able to use them to benefit herself and her patients in the operating room, then attitudinal measurement and documentation is important.

Awareness of norm-referenced and criterion-referenced measurement techniques will allow better measurement of learning that results from continuing education programs. Norm-referenced measurement compares one student's score to the rest of the class. Criterion-referenced measurement compares a student's score with a preestablished standard of performance. The key concept in criterion-referenced measurement is the comparison of a performance to a preestablished standard. Gronlund points out that criterion-referenced testing requires:
1. clearly defined learning tasks
2. clearly defined instructional objectives written in behavioral terms
3. clearly defined standards of performance
4. adequate sampling of student performance
5. items that closely reflect the objectives
6. a scoring system that describes the student's performance on the defined tasks.4

In reality, testing by either norm-referenced or criterion-referenced techniques requires these specifications. The differences are the specificity required and the use of preestablished standards. By specificity, I mean that in norm-referenced measurement we often test on two or three units of material; whereas, in criterion-referenced measurement, a test is usually designed to measure accomplishment of only one learning task. The instructional objectives and standards for criterion testing are very specific:
- the student writes six criterion-referenced items for Objective A with no item construction errors
- 90% of the time, the student correctly identifies the surgical instruments when displayed via flashcards.

Awareness of different performance measures allows one to consider the area of psychomotor skills when evaluating a program. To date, there are few successful techniques for obtaining information about specific skills learned in a workshop setting. It


Figure 2
I feel competent to: (All the time / Most of the time / Seldom / Not at all)
1. Assess strength and character of apex beat
2. Assess cardiac rate and rhythm
3. Recognize thrills and friction rub
4. Recognize splitting of second sound
5. Inspect the conjunctiva, sclera, and cornea
6. Check all pupillary responses
An example of a self-report tool using degree of competence for responses.

does appear possible that this type of learning can occur and can be documented.

A simple technique that can be used for documentation of skills is a self-report tool. All the steps in a procedure are identified and specified with great care. A checklist is then created using each step in the procedure. Ten Brink details construction of such a checklist.5 This technique allows either the individual or someone else to observe the behaviors and indicate whether the skills have been successfully learned. The use of a checklist with observation in a real situation, ie, a hospital unit, is time consuming and expensive because an observer has to wait for the behavior to occur and then observe the behavior for as long as it takes to perform the task. It is much easier and probably as accurate to ask the nurse to report what she has done. One format is to request yes/no answers to a list of tasks (Fig 1); another is to use the same list and ask how competent the nurse feels when doing the task (Fig 2). It is amazing how accurate and specific nurses are; they are willing to admit and document what they have not done as well as what they have done. If they know they have not performed as well as they should, they will report that too.

Another technique to measure psychomotor skills is process audit. Using standards of nursing care for the operating room, one could generate process criteria to be used for an evaluation of overall care in the OR, specific aspects of OR care, and workshop behaviors (Fig 3). For instance, you have conducted a workshop for new employees on preoperative skin preparation. The five criteria stated in the May 1976 AORN Journal for preoperative skin preparation could be the starting point for a good process audit.6 Those criteria are then modified to reflect your agency's policies and procedures. The result is a process audit. One distinct advantage in using a process or outcome audit technique for your evaluation is that you can "kill two birds with one stone." You create criteria for audit acceptable by the Joint Commission on


Figure 3
(Responses: Clearly met / Clearly not met / Insufficient data)
1. Patient is taught dosage of each take home medication
2. Patient is taught about his diet
Process audit tool emphasizing quality of data.

Accreditation of Hospitals and also for use in evaluating a workshop (Fig 4). Comparing the results of pre and postworkshop audits, which were specific to the workshop's objectives, would allow one to see if changes have occurred as the result of the workshop. If the time span between the workshop and the audit is too long, or if all the people in the OR did not participate in the workshop, there may be other reasons for the change in the OR nurses' performance in addition to the workshop.

These techniques for measuring performance can be used for either norm-referenced or criterion-referenced measurement. They lend themselves to the more rigorous definition of task and standard necessary in criterion-referenced measurement. In my opinion, data collected using these techniques are more meaningful when using a criterion-referenced framework.

Awareness of techniques for developing cognitive measures is essential. Measurement of the knowledge domain is much more difficult than measurement of either the psychomotor or attitudinal domains. There are

guidelines for creating items to measure the cognitive domain. One of the most important is a table of specifications to determine the components of the objectives that are to be measured, the level of difficulty, and the appropriate number of items for each objective (Fig 5). A major advantage of using a table of specifications is that it provides evidence of content validity for the instrument being developed.

There is also a set of guidelines for developing items. The actual construction of the items is a creative, complex, and ongoing activity. After identifying the specific content for an item, the item stem must be written in a clear, concise manner. This is often easier said than done, especially when one attempts to write items for more complex cognitive abilities. I use Bloom's taxonomy as a means of determining the level I wish to test and to determine whether or not the items measure that level.7 The art of creating plausible distractors is difficult to communicate but essential if one wishes to construct items that measure the more complex cognitive abilities. I recommend some type of item analysis where you review your

Figure 4
Process criteria (Standard: 0% to 100%; Exceptions)
1. Patient is taught dosage of each take home medication
2. Patient is taught about his diet
Joint Commission on Accreditation of Hospitals audit tool using process criteria.

items over a period of time, revising, updating, and continually attempting to improve them. Cognitive tests can be either criterion referenced or norm referenced. About 50% to 60% of the people should answer norm-referenced items correctly. Criterion-referenced items are written at the level of difficulty specified by the standard. This usually means that about 80% to 90% of the people should answer criterion-referenced items correctly.

Awareness of different attitude measures allows one to select the most appropriate tool. From a theoretical point of view, it is often difficult to measure attitudes; however, there are two well-developed methodologies that can help you develop good tools. Osgood developed the semantic differential methodology to measure the semantic meaning of words and/or phrases.8 He asked people to indicate the meaning of a word by selecting points between two adjective pairs opposite in meaning, eg, good/bad, bitter/sweet. He found that the adjective pairs fell into three dimensions: activity, potency, and evaluative. It was found that adjective pairs on the evaluative dimension functioned similarly to more traditional attitude measures. Snider and Osgood have a handbook listing adjective pairs by their dimension.9

To create a tool to measure attitudes, select adjective pairs on the evaluative dimension for the concept you wish to measure, making certain that the adjective pairs are appropriate (Fig 6). For example, when measuring the concept "myself as a nurse," the adjective pair "rough/smooth" makes little sense. It is also important to select carefully the word or phrase you want to measure. A concept such as "expanded role" could


Figure 5
Application
1. States rules for developing CRF items
2. Writes three CRF items to measure an objective
The components of a table of specifications.

lead to problems because there are many expanded roles. "The expanded role of the operating room nurse" is more descriptive. Problems with using the semantic differential are (1) it looks different from other tools and (2) what it measures is not always obvious to respondents. Interestingly enough, these are also assets from a measurement point of view. Once your participants become familiar with this tool, you will find you can obtain good results.

The second methodology particularly useful in nursing was developed by Likert. Subjects respond to specific statements on a continuum from "strongly agree" to "strongly disagree." Developing the statements is the key to this instrument, and it is not easy. The statements must represent different aspects of the phenomenon being measured, and they must be clearly and concisely written. Usually this type of tool must be revised several times before its reliability reaches an acceptable level.10 The Likert tool looks like it measures what it is designed to measure and is usually familiar to subjects. With care and adequate revision, a Likert tool can measure as reliably as


Figure 6
OR Nurse
(seven points, 1 through 7, between each adjective pair)
1. Valuable      _:_:_:_:_:_:_      Worthless
2. Difficult     _:_:_:_:_:_:_      Easy
3. Applicable    _:_:_:_:_:_:_      Inapplicable
4. Inadequate    _:_:_:_:_:_:_      Adequate
5. Interesting   _:_:_:_:_:_:_      Boring
6. Pleasant      _:_:_:_:_:_:_      Unpleasant
7. Idealistic    _:_:_:_:_:_:_      Realistic
8. Active        _:_:_:_:_:_:_      Passive
An example of a semantic differential tool.

a semantic differential. The amount of time you have for developing the tool (Fig 7) may be the deciding factor in choosing the Likert or the semantic differential. Your ability to develop a series of descriptive statements may also influence your decision. I have not seen an attempt to use attitude data in a criterion-referenced framework. This is not to say one cannot. Criterion-referenced measurement is in its infancy, and attention has not yet been directed toward the attitudinal domain.

Now we come to the action phase of the plan, which includes collecting, analyzing, and interpreting the data to make a judgment of worth concerning the continuing education offering. This is an exciting stage: will we be able to demonstrate that the offering created change? If the offering is good and you have a complete plan, the answer should be yes. You will find there is a direct relationship between the quality of work in the other phases of the evaluation plan and the ease with


which you implement this action phase. If you have collected data in all the right dimensions, frequently enough, and with reliable and valid instruments, you will find that you can make a judgment of worth. When you conduct your analyses of the data, obtain consultation if you do not have these skills. Much time has been spent, so you want to get as much information from the data as possible. Not only will you be able to report many positive verbal statements about the offering, you will be able to determine whether knowledge was gained, whether practice was changed, whether these changes lasted over a period of six months, and what factors produced the changes. Further continuing education offerings can then be developed using these factors. What if the evaluation demonstrates that none of these happened? You should have enough data to determine why they did not happen and to institute corrective measures. The quality of future offerings will improve, and


Figure 7
(Responses: SA  A  N  D  SD)
. . . nurse is no longer a major contribution of the OR nurse
3. The OR nurse is the only one capable of handling the instruments in complex surgical procedures
An example of a Likert-type tool where the participant responds from strongly agree to strongly disagree.

you will be able to demonstrate more positive findings at a later date.

Problems may still occur even with an excellent evaluation plan. By using some of these techniques, you may get a negative reaction from participants. Most of us are not used to in-depth evaluations of offerings. I've noticed three areas of concern: test anxiety, perceived value, and time considerations.

Most nurses, in fact most adults, suffer from test anxiety. When participants find they must take a pretest, they often become anxious. They are not used to being "tested" and are concerned about not doing well. One thing I have done to alleviate some of this anxiety is call my pre and posttests by another name: pre and postinstructional instruments. I also encourage participants to decide whether they need the workshop information based on the results of the pretest. They determine how well they did and whether they need to take part in the workshop. Many participants will find they know more than they thought

they did and will be so pleased and relieved they will stay for the workshop and have a positive attitude toward it.

The "perceived value" problem tends to occur when attitude measures are obtained. Participants cannot see the value of these instruments, and a fair amount of discomfort can be created. A brief explanation about the tool often alleviates this problem.

Too little time is frequently a problem for the participants, faculty, and the evaluator. It takes time to fill out the data collection tools. The evaluator needs to be realistic and reduce data collection time to a minimum. As the faculty are able to use the results of evaluations to improve offerings, they become much more cooperative and encourage participants to take time for data collection tools. When participants realize that evaluation is part of the offering and offerings are likely to improve as a result of their comments, they are more willing to take the time to return the posttest and follow-up instruments. Exposure to techniques of evaluation and the opportunity to see the results of such evaluations through improved continuing education programs encourage participants to take part in evaluations.

I invite you to create a multidimensional evaluation plan, implement it, and see how useful the results of such an effort can be.

Notes
1. M E Collart, "An overview in planning, implementing, and evaluating continuing nursing education," The Journal of Continuing Education in Nursing 7 (1976) 9-22.
2. D Blume, C Deets, Evaluating the Effectiveness of Selected Continuing Education Programs (Austin, Tex: The University of Texas System School of Nursing, 1975).
3. Ibid.
4. N E Gronlund, Preparing Criterion-Referenced Tests for Classroom Instruction (New York: Macmillan Publishing Co, 1973).


5. T D Ten Brink, Evaluation: A Practical Guide for Teachers (New York: McGraw-Hill, 1974).
6. "Standards for preoperative skin preparation of patients," AORN Journal 23 (May 1976) 974.
7. B S Bloom, J T Hastings, G F Madaus, Handbook on Formative and Summative Evaluation of Student Learning (New York: McGraw-Hill, 1971).
8. C Osgood, G Suci, P Tannenbaum, The Measurement of Meaning (Urbana, Ill: University of Illinois Press, 1957).
9. J G Snider, C E Osgood, eds, Semantic Differential Technique: A Source Book (Chicago: Aldine, 1969).
10. Ten Brink, Evaluation: A Practical Guide.



