A Validity Problem in Measuring Exposure to Mass Media Campaigns

Jane D. Brown, PhD, Karl E. Bauman, PhD, Connie A. Padgett, BA

Recognition of radio and television messages included in three mass media campaigns designed to keep adolescents from starting to smoke cigarettes was measured in six treatment and four control cities (Standard Metropolitan Statistical Areas) in the southeastern United States. The telephone survey of 574 randomly selected adolescents found high recognition of campaign messages even in the areas where the campaigns had not been broadcast. Campaign messages that differed significantly from other anti-smoking messages were less likely to be falsely recognized. These results reinforce the need to include true control groups in mass media evaluations and to construct distinctive messages if exposure is an important aspect of campaign evaluation.

This research was supported by Grant CA38392 from the National Cancer Institute of the National Institutes of Health.

Jane D. Brown is with the School of Journalism, The University of North Carolina at Chapel Hill. Karl E. Bauman and Connie Padgett are with the School of Public Health, The University of North Carolina at Chapel Hill.

INTRODUCTION

Mass media health campaigns are designed to reach large audiences with specific messages. Since media campaigns can be effective only if they reach the intended audience, exposure to the campaigns often has been measured. This usually involves asking potential audience members whether they remember or recognize specific messages or materials from the campaign. Recent campaigns have reported high exposure rates. According to a Gallup Poll, about nine out of 10 adults reported having heard of the 1987 Great American Smokeout.1 Vaque and Salleras reported that three months after a multimedia antismoking campaign in Catalonia, Spain, 74% of those interviewed "remembered having seen some type of anti-smoking material."2 An evaluation of the "Time to Quit" smoking series that ran in a Buffalo, New York newspaper showed that 78% of the respondents to a telephone survey said they remembered seeing the series; almost half (47%) reported reading one or more of the articles in the series.3 More than 90% of the adults surveyed in a community participating in the Minnesota Heart Health Program (MHHP) reported "awareness" of at least one physical activity event that had been heavily publicized in the mass media.4

However, campaign evaluators should be aware that apparently simple exposure measures may provide inflated estimates of campaign exposure. Since evaluations often do not include true control groups, it often is impossible to assess the validity of these exposure measures. Some evaluations, however, provide clues that reported exposure rates may be inaccurate. Evaluators of the British government's AIDS public education campaign, for example, found that more than three-fourths of their survey respondents reported having read a brochure about AIDS even before distribution of brochures to every household in the country had been half completed. In the MHHP study, respondents who reported having heard of a particular event were then asked if they had participated in it. When compared to actual participation rates as measured by official registration at the event, participation estimates derived from survey responses were overestimated by 28%. The evaluation of Minnesota's state-wide radio and billboard campaigns designed to reduce tobacco use among young people reported high exposure rates of between 60 and 92% as measured by industry estimates, and aided recognition rates between 52 and 69%. But only between 12 and 15% of the adolescents interviewed after the campaign could spontaneously recall at least one ad from the campaign. From 20 to 33% of the adolescents interviewed before the campaigns began reported awareness of media messages against tobacco use. Posttest scores on this measure increased by only 6 to 13 percentage points.6
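
The 28% figure above is simply the relative difference between survey-projected and officially registered participation. The short Python sketch below makes that arithmetic explicit; the counts are hypothetical illustrations, not data from the MHHP evaluation.

```python
# Hypothetical counts, for illustration only (not the MHHP evaluation's data).
survey_estimated_participants = 6_400  # participation projected from survey self-reports
officially_registered = 5_000          # participants counted through official registration

# Relative overestimation of participation by the survey-based estimate.
overestimate = (survey_estimated_participants - officially_registered) / officially_registered
print(f"Survey-based estimate exceeds registration by {overestimate:.0%}")  # 28%
```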

The few mass media campaign evaluations that have included control groups clearly show the potential for inflated estimates of exposure. For example, in one evaluation of a radio campaign about family planning, 23% of teenagers interviewed in a city where advertisements had not run reported having heard messages about family planning.7 Udry's evaluation of a mass media campaign about family planning showed that this kind of response could be due to confusion about whether the respondent had seen the messages from the specific campaign being evaluated. In the control city before and after Udry's media campaigns, between 40 and 50% of respondents reported they had seen an ad about birth control. Analysis of open-ended questions about what respondents had seen indicated they were recalling public service advertising for specific birth control products or news items concerning birth control pills and their side effects.8

In this article we present findings from our mass media campaigns aimed at adolescents. We also consider how inflated estimates of campaign exposure might be minimized.

METHODS

The Campaigns and Evaluation Strategy

During Fall 1985 and Spring 1986, nine messages designed to keep adolescents from becoming cigarette smokers were broadcast in six Standard Metropolitan Statistical Areas (SMSAs) in the southeastern United States.9 Four SMSAs served as a control group and did not receive any of the messages.


Three campaigns using different combinations of radio and television messages were aired. In two SMSAs, 30-second messages that focused on the consequences of cigarette smoking expected by adolescents were broadcast over radio stations popular with the targeted age group (12-15 years old). Each of these messages featured an adolescent discussing why he or she had decided not to smoke cigarettes. Each message highlighted one of seven consequences adolescents often associate with smoking. The adolescent talked in a natural way about why smoking cigarettes might cause bad breath, difficulty concentrating, loss of friends, difficulty with adults, or loss of appetite, or might not result in an increase in fun or relaxation. In another two SMSAs, the radio messages that featured expected consequences were supplemented with a 60-second sweepstakes message that encouraged adolescents to pledge to remain nonsmokers and to enter a sweepstakes for a chance to win up to $2,000. In the final two treatment SMSAs, both kinds of messages were broadcast over both radio and television.

The campaigns were broadcast in three four-week periods over six months. The sweepstakes messages were included only in the first period. Broadcast time was purchased for the campaigns to assure exposure by the intended audience. The expected consequences messages were run in sequence so each was aired about the same number of times. In the SMSAs receiving only radio messages, an average of 315 messages were aired; in the other SMSAs, an average of 433 were aired. Estimates of the reach of the campaign were derived from Nielsen and Arbitron commercial rating service viewing data. This analysis predicted that between 60 and 80% of 12- to 17-year-old adolescents in the six treatment SMSAs would have seen or heard the messages an average of three to four times each four-week period.
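
The rating-service prediction above can be expressed through the standard media-planning identity in which gross rating points (GRPs) equal reach (in percent) multiplied by average frequency. The sketch below is only an illustration using the endpoints of the predicted range; it is not the actual Nielsen or Arbitron calculation, whose inputs are not reported here.

```python
# Media-planning identity: GRPs = reach (%) x average frequency.
# Endpoints of the range predicted for the campaigns, per four-week period.
for reach_pct, avg_frequency in [(60, 3), (80, 4)]:
    grps = reach_pct * avg_frequency
    print(f"reach {reach_pct}% x frequency {avg_frequency} = {grps} GRPs per four-week period")
```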

Evaluation of the long-term attitudinal and behavioral effects of the campaigns will be based on surveys conducted with adolescents and their mothers in each of the 10 SMSAs one year before and one year after the messages were broadcast. To assess campaign reach, a randomly selected subsample of 574 respondents from the adolescent panel was interviewed for an average of 15 minutes over the telephone by trained interviewers in the week immediately following the last month of the campaigns. We were unable to reach 97 of the original sample of 682, and 11 potential respondents refused to be interviewed. This resulted in a response rate of 84%. The campaign exposure data from this telephone survey are discussed here.
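
The response-rate arithmetic for the telephone subsample can be reproduced directly from the counts just given; the short Python sketch below (not part of the original study) recovers the 84% figure.

```python
# Recompute the telephone survey response rate from the counts reported above.
original_sample = 682  # adolescents selected for the exposure subsample
not_reached = 97       # could not be contacted
refused = 11           # declined the interview

completed = original_sample - not_reached - refused  # 574 completed interviews
response_rate = completed / original_sample
print(f"Completed interviews: {completed}")   # 574
print(f"Response rate: {response_rate:.0%}")  # 84%
```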

Measuring Campaign Exposure

Respondents were asked if they recognized the primary verbal content from each of the eight expected consequences messages after the introduction: "Now, I'm going to ask you about some specific 30-second TV or radio ads and you tell me if you've heard them or not." Respondents also were asked if they recognized the campaigns' theme line and a bogus message that was similar to the broadcast messages. These 10 items were presented to the respondent in random order to avoid serial effects. When respondents said they had heard or seen a message, they were asked if they had heard it on radio only, TV only, or on both radio and TV, and then were asked to estimate how many times they had heard the message in the last six months on whichever medium they remembered. Finally, they were asked to rate the message on a scale of one to 10, with one meaning they did not like it and 10 meaning they did like it. After this sequence of questions, respondents were asked similar questions about their recognition of the sweepstakes message.

Control SMSAs had been chosen carefully to eliminate any chance that the campaigns could be received from broadcast signals originating in the treatment SMSAs. Coverage maps obtained from each of the broadcast stations were examined and showed no indication of media overlap. An employee in each city called area cable operators to ask if any television channels from our treatment SMSAs were carried and listened for radio signals. These tests found no contamination of the control SMSAs or broadcast overlap between treatment SMSAs. The SMSAs included in the study were an average of 325 miles apart; the two closest SMSAs were 90 miles apart. Of course, adolescents from the control SMSAs could have seen or heard campaign messages if they had traveled through or visited the treatment SMSAs, but this should have occurred very rarely. Data from our sweepstakes entries provided further evidence of negligible contamination. Of the 13,020 sweepstakes entrants recruited from the broadcast sweepstakes message and a subsequent direct mail campaign, only five lived in SMSAs in this study that did not receive the sweepstakes treatment.
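
To illustrate the random ordering of the 10 recognition items described above, here is a minimal Python sketch; the item labels are placeholders rather than the actual campaign wording, and the seeding scheme is an assumption, not a description of the interviewing system actually used.

```python
import random

# Placeholder labels standing in for the 10 recognition items described above:
# eight expected-consequences messages, the campaign theme line, and one bogus message.
items = [f"consequences message {i}" for i in range(1, 9)]
items += ["campaign theme line", "bogus message (expense of smoking)"]

def item_order_for_respondent(respondent_id: int) -> list[str]:
    """Return the 10 items in an independent random order for one respondent,
    so no item systematically falls early or late in the interview."""
    rng = random.Random(respondent_id)  # seeding by respondent ID keeps the order reproducible
    order = items.copy()
    rng.shuffle(order)
    return order

print(item_order_for_respondent(respondent_id=1))
```

Seeding by respondent ID, as in this sketch, would also preserve a record of each respondent's item order; as noted in the Results, the computer-assisted system actually used did not retain that sequence.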

RESULTS

Analysis of the telephone survey data showed that between 86 and 97% of those interviewed in the treatment SMSAs said they remembered hearing or seeing at least one of the expected consequences messages on either radio or television. This rate was similar to the rates predicted by the rating services. However, almost three-fourths (72%) of the adolescents surveyed in the control SMSAs also reported exposure to at least one message. Since the SMSAs had no media overlap, we assumed that these messages were not seen or heard by these adolescents. Why did many who had not been exposed report recognition?

One explanation is that adolescents may have become frustrated at not being able to recognize any of the messages read to them during the interview. They may, after having to say "no" so many times, have said "yes" to please the interviewer or to make the experience seem more worthwhile. Public opinion survey methodologists have suggested that respondents may provide opinions about issues that they cannot possibly know about in an effort to save face. In one set of studies,10 an average of 30% of survey respondents volunteered an opinion about a fictitious public affairs issue, apparently because they did not want to appear uninformed. Some 5 to 10% of respondents persisted in providing an opinion even after being given the opportunity of saying they had not had enough time to think about the fictitious issue. One way to test whether this demand characteristic was affecting response validity11 would be to see if such face-saving occurred later in the sequence of questions, when respondents might become worried about always answering in the negative. Unfortunately, we were unable to determine this directly because the computer-assisted interviewing system presented the questions about each message to the interviewer in a random order and did not keep track of the sequence.

Other evidence, however, suggests that this demand characteristic may not provide a complete explanation. As can be seen in Table 1, an average of 11.2% of the respondents in the SMSAs where the sweepstakes did not run said they had seen the sweepstakes messages. If yea-saying had occurred consistently later in the interview, we would expect a higher false recognition rate for the sweepstakes message, which was always the last in the sequence of questions. Further evidence is provided by the bogus message that was included in the randomly ordered set of expected consequences messages. This bogus message was similar in style but discussed a different consequence (the expense of smoking cigarettes) than any of the messages that had been broadcast in the campaigns. Comparable proportions of respondents in both the control and campaign SMSAs (11.7% and 10.7%, respectively) said they recognized this bogus message. This would suggest that face-saving may have occurred at about the same rate in both the control and campaign SMSAs.

Face-saving may not tell the whole story, however. As can be seen in Table 1, there were marked differences in false recognition rates for the individual messages in the control SMSAs. While for each message (except the bogus message) recognition was significantly higher in the campaign SMSAs, the messages most frequently falsely recognized by the control groups were those that said that a teenager can have fun without smoking (#8) and that smoking causes bad breath.

Table 1. Percent Recognition of Campaign Messages

a n = 236. b n = 338. All differences between areas are statistically significant except for the bogus message.
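
The article does not say which test produced the significance results summarized in Table 1, but a standard two-proportion z-test illustrates the kind of between-area comparison involved. The sketch below (an assumption, not the authors' analysis) applies it to the bogus-message rates reported above: 11.7% of the 236 control-area respondents versus 10.7% of the 338 campaign-area respondents.

```python
from math import sqrt

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """Two-proportion z statistic using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Bogus-message recognition: control SMSAs vs campaign SMSAs.
z = two_proportion_z(0.117, 236, 0.107, 338)
print(f"z = {z:.2f}")  # about 0.38 -- far from conventional significance,
                       # consistent with the bogus message showing no between-area difference
```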
