Journal of Cancer Education

ISSN: 0885-8195 (Print), 1543-0154 (Online)

Determining cancer-patient-management educational needs

William S. Donaldson, EdD, and Tennyson Williams, MD

To cite this article: William S. Donaldson EdD & Tennyson Williams MD (1991) Determining cancer-patient-management educational needs, Journal of Cancer Education, 6:1, 25-31

Published online: 01 Oct 2009.


Downloaded by [University of Auckland Library] on 09 August 2017, at 12:13

J. Cancer Education, Vol. 6, No. 1, pp. 25-31, 1991
Printed in the U.S.A.
Pergamon Press plc

0885-8195/91 $3.00 + .00
© 1991 American Association for Cancer Education





Abstract — The authors have proposed a model for curriculum planning and continuing medical education (CME) program development (JCE 4:255-259, 1989). This paper reports the results of an application of this model. The knowledge examination administered was developed by specialists and family practice physicians and covered four cancer sites and eight cancer-patient-management stages. The purpose of this application was to evaluate whether the model could identify specific core competencies to serve as the basis for curriculum planning and/or CME program planning. It was assumed that data from this assessment could be extrapolated from the residency context to the development of CME programs. A total of 200 respondent reports (interns, residents, faculty) from 15 institutions (medical schools or hospitals) were analyzed. In general, more education improved test scores; faculty (6.5% of the respondents) averaged about 66% correct answers. The highest percentages of correct responses were associated with the site breast (60%) and the stage shared care (70%). The lowest percentages of correct responses were associated with the site colorectal (45%) and the stage treatment (44%). The pair colorectal/treatment produced the lowest percentage of correct answers observed (27%). The proactive model applied in this study succeeded in determining core-competency deficiencies. Participant responses were useful for identifying cancer-patient-management core-competency deficiencies, knowledge gaps that could be used for both curriculum planning and CME program development. Residency directors from participating institutions have reported using this information to modify the educational experience of individual residents as well as to modify their cancer program curricula. Site- and stage-specific data from this study could be used by CME developers as topics for programs targeted at family practitioners.

This work was supported by a grant from the Cancer Control Consortium of Ohio, Inc.
*Adjunct Assistant Professor, Department of Family Medicine, The Ohio State University.
†Professor, Department of Family Medicine, The Ohio State University.
Reprint requests to: Tennyson Williams, MD, Department of Family Medicine, The Ohio State University, B0902 UHC, 456 West Tenth Avenue, Columbus, Ohio 43210.

INTRODUCTION

How does education relate to the basic core competencies (essential understandings relevant to the practice needs of family physicians) required to effect exemplary cancer-patient management (CPM)? Effective CPM by practicing physicians must include core competencies associated with both cancer-site and management-stage concerns, but current medical knowledge tests do not attempt to assure coverage of each stage of medical management. Therefore, the Professional Education Committee of the Cancer Control Consortium of Ohio sought to determine the cancer educational needs of residents and practicing physicians for each management stage of cancer-patient management.1

From the literature, two alternative methods for determining education-program content were considered. The reactive model asks physicians to identify topics of immediate interest.2 The proactive model attempts to address the differences between "ideal and current practice" by determining topics in which a target audience demonstrates weaknesses.3 The development of core competencies has been utilized as a method for identifying essential and basic information about a topic area.4 The authors have proposed a proactive method for determining content for the preparation of continuing medical education (CME) programs.5 It was decided to apply the proactive method with family practice interns, residents, and faculty members to determine its applicability to curriculum design and its potential for identification of CME topics.

Eight management stages were proposed, based on three criteria: (a) must be critical to




cancer-patient management, (b) must provide global coverage of core competencies acutely needed in untested areas, and (c) must represent interventions currently available. The stages are a modification of the major functions of health care as described by Donabedian6:

1. Monitoring risk factors (MRF)
2. Early disease screening (EDS)
3. Diagnosis (DIA)
4. Staging (STG)
5. Treatment (TRE)
6. Shared care (SHC)
7. Follow-up monitoring (FUM)
8. Advanced disease management (AVD)

Any test instrument utilized in this context should examine each management stage adequately; test items should represent discrete site/management issues of importance to service delivery. Items included in the test instrument should become the foundation for curriculum planning as determined by the three criteria listed above. Items should reflect a consensus between primary care and cancer-site specialties regarding relevance, importance, and availability.

MATERIALS AND METHODS

A proactive model for assessing core-competency needs has been described.5 To summarize this model's major points and strategies, the discussion below addresses only the five primary activities used for instrument development.

First, tracer sites were selected. Gradually, the use of indicators in the study of health services has become structured to reflect the general quality of care provided by study subjects. This structure has come to include the strategy of using tracers. Tracers are conditions selected as surrogates of the general care provided by given practitioners. Kessner and Kalk proposed characteristics that should be fulfilled by tracers,7 and these guidelines were followed in the selection of tracer cancers:

1. Significant functional impact on those affected.
2. Relatively well defined and easy to diagnose in field and practice settings.
3. A prevalence rate high enough to permit the collection of adequate data.
4. A natural history that will vary according to utilization and effectiveness of medical care.
5. Techniques of medical management that are well defined for at least one of the following processes: prevention, diagnosis, treatment, and rehabilitation or adjustment.
6. Understood effects on socioeconomic factors.

The "tracer" conditions implicit in this model are site and management stage. Skin, breast, uterus, and colorectum were the cancer sites selected.

Second, a group of medical content specialists (12 physicians, three for each site, teaching and practicing in central Ohio and recommended by their peers) identified the core competencies for the eight management stages of the selected tracer conditions, giving priority to competencies with the greatest potential for improving morbidity and mortality outcomes. Questions were then written to represent each core competency identified. Using the model of the Connecticut-Ohio Core Content of Family Medicine,8 the content specialists wrote a discussion for each question explaining the reason for the correct answer and the reasons why the distractors are incorrect. One or more references were included for the participant to use in expanding his or her knowledge of the core competency involved.

Third, questions were reviewed for practice relevance by a panel of three primary care physicians. On the basis of this review, questions unacceptable to the reviewers were deleted and new questions were written.

Fourth, the instrument was administered to a sample of medical students, residents, and practicing physicians, and the results were analyzed using conventional item-analysis procedures.9

Fifth, after item analysis, questions were revised to enhance item-group and item-total correlations. Questions with poor item-analysis characteristics were rewritten or deleted. Questions were written and evaluated with content validity as the foremost acceptance criterion; by definition and consensus among the specialties involved, these would be the basic core competencies associated with CPM. Items were rewritten to correct revealed limitations associated with syntax, word choice, and distractor relevance, but the core competency being examined remained inviolate. Items with a low proportion of correct responses that were judged by primary care reviewers to be written at the appropriate level of difficulty were retained, because they represent concepts most relevant to CPM. The purpose of this testing application was not to grade the performance of individuals, but to discover concepts that peers agree are important to quality CPM.10
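The conventional item-analysis statistics referred to above (item difficulty, item discrimination, and internal consistency) can be sketched in a few lines of code. The sketch below is an illustrative reconstruction, not the authors' original software; the function names are hypothetical, and the discrimination index shown is the classic upper-lower (27%) group method, with internal consistency estimated by Kuder-Richardson formula 20.

```python
# Illustrative item analysis for a dichotomously scored test.
# `responses` is a list of answer sheets: one row per examinee,
# one 0/1 entry per item.

def item_difficulty(responses):
    """Proportion of examinees answering each item correctly."""
    n = len(responses)
    k = len(responses[0])
    return [sum(row[j] for row in responses) / n for j in range(k)]

def item_discrimination(responses, fraction=0.27):
    """Upper-lower discrimination index: p(upper group) - p(lower group),
    using the top and bottom `fraction` of examinees by total score."""
    ranked = sorted(responses, key=sum, reverse=True)
    g = max(1, round(fraction * len(ranked)))
    upper, lower = ranked[:g], ranked[-g:]
    k = len(responses[0])
    return [sum(r[j] for r in upper) / g - sum(r[j] for r in lower) / g
            for j in range(k)]

def kr20(responses):
    """Kuder-Richardson formula 20 (internal consistency)."""
    n = len(responses)
    k = len(responses[0])
    p = item_difficulty(responses)
    pq = sum(pi * (1 - pi) for pi in p)
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - pq / var)
```

Feeding all 200 answer sheets to routines such as these would yield figures of the kind the study reports: a mean item difficulty of 0.46, a mean discrimination of 0.17, and an internal consistency coefficient of 0.82.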

The instrument prepared consisted of items representing core competencies (essential understandings relevant to the practice needs of family physicians) with valid and reliable item characteristics. In addition, a reporting methodology was developed to provide a report assessing overall knowledge by cancer site and management stage as well as by individual competency item. The final 108-item instrument consisted of 14 items written at the recall level, 31 at the comprehension level, 27 at the problem-solving level, and 36 at the application level. Five items each represented the management stages MRF, EDS, and DIA; three each the stages STG and TRE; and two each the stages FUM, SHC, and AVD, for a total of 27 questions for each of the four cancer sites. Test items were then randomized by site and management stage into four equal groups of 27 items to provide administrative flexibility. Discussion booklets were prepared for each 27-item set.

Phone solicitation (followed by written explanations and examples from the instrument) was made of 26 family practice residency directors, requesting that they administer the test instrument to residents and faculty members of their programs. Program directors were chosen from among those known to the second author to enhance participation, with geographic and university/community hospital diversity given consideration. The method of administration was the residency director's option, with the provision that the test be undertaken as a closed-book exercise. Inducements to participate as test-takers included feedback from the discussion booklet and from a customized, computer-generated report comparing individual performance by item, cancer site, and management stage with that of all 200 participants. Each residency director also received a printout comparing the performance of his participants with that of all 200 participants, by education level. Following test administration and receipt of all responses, each residency director was sent a questionnaire asking about administration problems, resident attitudes, and how the results were being applied in the program.

RESULTS

Demographics. Eighteen of the 26 program directors contacted agreed to participate (69%). Fifteen programs actually participated (58%), returning 211 answer sheets. This represented 64% of the total number of eligible residents in the 15 programs. Participation rates within programs varied between 17% and 100% across the 15 participating sites. Eleven returned answer sheets contained responses to one-half or fewer of the questions; these were excluded from the final data analysis, leaving 200 participants in the study. Seven of the programs were in Ohio, four in Pennsylvania, two in Texas, one in Indiana, and one in Michigan. Two of the programs were university programs, and 13 were community hospital programs.

Questionnaire returns indicated that residency directors were about evenly divided as to whether administration of the test was easy to accomplish or required one month or more of notice. They all agreed that division of the 108-item instrument into 27-question segments gave them more administrative flexibility. Respondents were neutral in their interest in participating. Most thought the questions were difficult but relevant to their future practice. A variety of uses were made of the results, but all directors found some application, including residents' use of the discussion booklets for immediate learning, modification of individual resident rotations, and modification of the overall program curriculum.





Table 1. Summary statistics for 108 items*

                        n     Mean   SD   Median   Mode   % correct
Sex
  Male                 156    57.9   7.6    58      58       54
  Female                42    59.7   7.2    60      55       55
  Missing data           2
Education level
  Transition intern      2    54.0   0.0    54      54       50
  1st-year resident     61    56.3   8.2    57      55       52
  2nd-year resident     62    58.2   6.9    58      58       54
  3rd-year resident     61    59.1   6.6    58      54       54
  Faculty               13    65.6   7.6    67      67       61
  Missing data           1
Total group            200

*Two respondents did not indicate their sex; one respondent could not be classified by education level.

Gross comparisons were made regarding the variables sex and education level (see Table 1); not all 200 respondents provided these data. Although mean values do increase with years of education, all scores fall somewhat below reasonable expectations, given the core-content basis used for item construction. No sex difference of interest was noted. Of significance was the finding that, on average, scores ranged between 56 (52% correct) and 59 (55% correct) across the three years of residency. Clearly, considering that 15 educationally separate and distinct institutions were involved in this survey, none of the programs observed offers a cancer education curriculum for family physicians in training that enabled its respondents to excel on this test.

Item analysis. Participants obtained 0%-25% correct responses on 19% of the items, 26%-50% on 25% of the items, 51%-75% on 33% of the items, and 76% or greater on 23% of the items. Of the site-related items, breast items received the highest correct-response rate (60%), followed by skin (57%), uterus (49%), and colorectal (45%), respectively (Table 2). With regard to management stage, shared care (70%) and follow-up monitoring (60%) rated highest; monitoring risk factors (46%) and treatment (44%) received the lowest ratings (Table 3). When viewed as individual residency programs, these trends were consistent, with rare and minimal exceptions.

Table 2. Percentage correct response by cancer site

Site           % correct response
Skin                  57
Breast                60
Uterus                49
Colorectal            45

Table 3. Percentage correct response by management stage

Management stage                     % correct response
Monitoring risk factors (MRF)               46
Early disease screening (EDS)               56
Diagnosis (DIA)                             52
Staging (STG)                               54
Treatment (TRE)                             44
Shared care (SHC)                           70
Follow-up monitoring (FUM)                  60
Advanced disease management (AVD)           48

A conventional item analysis of responses to the 108 test questions was conducted to afford the researchers access to certain statistics of interest. It is important to note, however, that these researchers, the group of cancer-site specialists, and the review panel of family medicine specialists (who made the final decisions regarding question content, format, etc.) were committed to the preparation of test items aimed at the identification of knowledge gaps. Thus, a low percentage of correct responses to an individual item was not necessarily construed as indicative of a "poor" item. Rather, the low percentage was interpreted as signifying a primary knowledge deficiency. This analysis produced an internal consistency coefficient of 0.82. Mean item difficulty and mean item discrimination are within acceptable ranges for decision-making purposes (Table 4). The five items below 0.00 on item discrimination present a nontrivial interpretation problem, given the item-development philosophy and methodology used in preparing the test instrument. An adequate explanation is yet to be determined.

Table 4. Item analysis results

A. Item difficulty distribution

Range        Number of items   Percentage of items
0.81-1.00          15                  14
0.61-0.80          20                  19
0.41-0.60          23                  21
0.21-0.40          30                  28
0.00-0.20          20                  19

Mean item difficulty = 0.46

B. Item discrimination distribution

Range        Number of items   Percentage of items
0.81-1.00           0                   0
0.61-0.80           0                   0
0.41-0.60           3                   3
0.21-0.40          31                  29
0.00-0.20          69                  64
Below 0.00          5                   5

Mean item discrimination = 0.17

Site and stage. Table 5 summarizes findings


from this study regarding site and stage percentages correct. The site breast (60% correct) and the stage shared care (70% correct) were the highest percentages observed. Low percentages were observed for the site colorectal (45% correct) and the stage treatment (44% correct). These gross observations, however, are not as revealing as those from site/stage pairs. Considerable row and column variation is readily apparent in Table 5. For example, diagnosis varies from 70% correct (skin) to 28% (colorectal). Moreover, four of the uterus management stages (monitoring risk factors, staging, treatment, and advanced disease management) produced values of less than 40% correct. Particularly disconcerting are the colorectal stages diagnosis (28%) and treatment (27%). The values in Table 5 do not suggest a knowledge base sufficient for the implementation of effective screening/early-detection programs and aggressive treatment processes.

Table 5. Stage x site % correct

Site          MRF   EDS   DIA   STG   TRE   SHC   FUM   AVD   All stages
Skin           46    41    70    60    52    80    66    59       57
Breast         51    68    59    62    60    82    50    49       60
Uterus         39    66    51    36    38    58    65    39       49
Colorectal     46    50    28    58    27    61    61    44       45
All sites      46    56    52    54    44    70    60    48

DISCUSSION

The purpose of this study was to implement the proactive model and to determine whether or not this model produces information valuable for curriculum and CME planning. Findings suggest that the model is viable for specific core competencies and that larger groupings (site and stage) could be used as global parameters about which information-dissemination strategies could be structured; ie, a "uterus/staging" topic might be announced, with content drawn generally from uterus and specifically from staging.

Does the proactive model reveal core-competency deficiencies? Based on the data obtained in this study, the answer is yes. The anticipated education factor was observed: more education, higher score. However, the 200 participants analyzed demonstrated marked differences regarding their individual capacities for correctly discarding extraneous data. Not all the differences were attributable to educational level (a first-year resident scored an 81). These data raise questions not answerable by this study. For example, some first-year residents performed well above expectations on this test, while others with more training did not do as well as expected. Further investigation of these issues could reveal information useful for CPM curriculum planning and/or revision.

The test instrument used for this study has not been applied and revised enough times to ensure its veracity. Although the development process did produce an instrument with content validity and acceptable item-analysis characteristics, subsequent applications should introduce refinements improving the test's likelihood of identifying deficient core competencies. A new panel of specialists should be convened to review the instrument and to offer suggested revisions. A larger sample of third-year residents and/or practitioners would be appropriate for replicating this study.

Family physicians who score, say, 50% on this test have a multitude of knowledge gaps in need of immediate attention. Which of these deficiencies is of greatest importance, given that a case of any particular type (site/stage combination) may present itself with the next patient examined? What is the significance of a high (65% or higher) score on this test? Core competencies should be a fundamental part of each practitioner's awareness.
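The site-by-stage tabulation underlying these observations (Table 5) is straightforward to reproduce from scored answer sheets. The sketch below is a hypothetical illustration, not the study's reporting software; the field names and the 40% threshold are assumptions, chosen to mirror the "less than 40% correct" criterion discussed above.

```python
# Sketch: aggregate scored items into a site-by-stage percent-correct
# matrix and flag deficient cells.  Field names and the threshold are
# hypothetical.
from collections import defaultdict

def gap_matrix(item_results, threshold=40):
    """item_results: iterable of (site, stage, n_correct, n_answered)
    tuples, one per item.  Returns the percent-correct matrix as a dict
    keyed by (site, stage), plus the sorted list of deficient cells."""
    correct = defaultdict(int)
    answered = defaultdict(int)
    for site, stage, n_ok, n_all in item_results:
        correct[(site, stage)] += n_ok
        answered[(site, stage)] += n_all
    pct = {cell: 100.0 * correct[cell] / answered[cell] for cell in answered}
    deficient = sorted(cell for cell, p in pct.items() if p < threshold)
    return pct, deficient
```

Run over the study's 200 answer sheets, such a routine would surface exactly the cells reported as alarming, such as colorectal/treatment at 27% correct.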
Based on the findings of this study, there are numerous cancer-patient-management core competencies that should be considered as bases for: (a) curriculum planning for residency programs of primary care specialties, (b) current CME program-planning purposes, and (c) CME credit conferences/workshops/seminars designed for family medicine specialists and general practitioners alike.

Moreover, the test results at both the site and stage level (Table 5) reveal specific combinations whose values are alarming. For example, the site/stage pair colorectal/treatment (27% correct) is the lowest value observed by the authors across four applications of the test instrument (three field tests during development and the present study). The values for monitoring risk factors (46%) and early disease screening (56%) are especially distressing, for it is in those two areas that the potential is greatest for effecting strategies calculated to result in cancer patients' presenting with early-stage disease. On the other hand, the percentage-correct values for diagnosis, staging, and treatment also are less than acceptable. Each residency program that provided data for this study reported concern about rethinking its cancer education curriculum. Since this test assessed 108 CPM core competencies, residency directors saw immediate and compelling steps to take toward upgrading their programs.

Revealed knowledge-gap findings from this study are unique to cancer patient management. Other areas of family medicine might show core-competency limitations, given application of the proactive model used for this study. Could this model be extrapolated to other patient-management concerns in family medicine? The answer to this question is an unqualified yes. The model applied in this study is generalizable across patient-management areas. Following the strategies explicit in the model should provide an adequate CME-planning database independent of the organ system or disease being evaluated. It is predictable that close adherence to these strategies will reveal core-competency deficiencies in any medical area evaluated. Further, this model is quantitatively superior to the reactive model, because the derivation of core competencies in the proactive model is based on experts' understanding of the health problem involved.
Acknowledgements — The authors thank the residency faculty members who volunteered the participation of their residency programs: Lauren Brown (Barberton Citizens Hospital, Barberton, OH), Mark E. Clasen (Department of Family Medicine, University of Texas Health Sciences Center, Houston, TX), Robert Guthrie (Department of Family Medicine, The Ohio State University College of Medicine), Douglas Haddock (Kalamazoo Family Medicine, FMC/Stryker Center, Kalamazoo, MI), Paul Hermany (Sacred Heart Hospital, Allentown, PA), Chris Marquart (while at St. Thomas Hospital, Akron, OH), Paul McCausland (Bryn Mawr Family Practice, Bryn Mawr, PA), Alan Peterson (Lancaster Family Practice, Lancaster, PA), Lawrence G. Ratliff (Family Practice Residency, Grant Medical Center, Columbus, OH), David Rudy (while at Monsour Family Practice Residency, Jeannette, PA), Christopher Shank (Fairfield General Family Practice, Cleveland, OH), Jerry L. Stucky (Fort Wayne Medical Education Program, Fort Wayne, IN), Jay C. Williamson (Akron City Family Practice, Akron, OH), Theodore Wymslo (Miami Valley Hospital Family Practice Residency, Dayton, OH).


REFERENCES

1. Cancer Control Consortium of Ohio. Annual Report 1986. Columbus, OH: Columbus Comprehensive Cancer Center, The Ohio State University, 1987.
2. Belsheim DJ: Models for continuing education. J Med Educ 61:971-978, 1986.
3. Lenhard RE, Waalkes DP, Herring D: Evaluation of the clinical management of cancer patients. JAMA 250:3310-3316, 1983.
4. Young EA, Weser E, McBride HM, et al: Development of core competencies in clinical nutrition. Am J Clin Nutr 38:800-810, 1983.
5. Williams T, Donaldson WS: Toward the identification of CME content needs for primary care physicians. J Cancer Educ 4:255-259, 1989.
6. Donabedian A: Explorations in quality assessment and monitoring. Vol III, The methods and findings of quality assessment and monitoring: An illustrated analysis. Ann Arbor, MI: Health Administration Press, 1985, p 324.
7. Kessner DM, Kalk CE: Contrasts in health status. Vol 2, A strategy for evaluating health services. Washington, DC: Institute of Medicine, National Academy of Sciences, 1973, pp 6-28.
8. A guide to test question development. Explanation and reference. The Core Content of Family Medicine. Undated mimeograph available from: Robert K. Shopter, MD, Editor and Education Director, The Core Content of Family Medicine, P.O. Box 30, Bloomfield, CT 06002.
9. Hubbard JP (ed): Measuring medical education: the tests and the experience of the National Board of Medical Examiners. 2nd ed. Philadelphia: Lea & Febiger, 1978, chapter 3.
10. Winickoff RN, Coltin KL, Morgan MA, et al: Improving physician performance through peer comparison feedback. Med Care 22:527-534, 1984.
