to obtain the most accurate data and analyses of the various communications and events and their impacts, there is still considerable difficulty in being certain that particular causes are linked to particular effects. This is because each factor (variable, in formal terms) is not neatly controlled for. With careful design and operation, however, the experimental method can control each variable, vary it systematically, and provide more precise evidence of the impact of the independent variable on the dependent variable.
Arguably the most notable example of the use of the experimental method to study the media and politics is found in two studies, one by Iyengar and Kinder and the other by Ansolabehere and associates (although the first was not conducted in an electoral setting). As Iyengar and Kinder said: "The essence of the true experiment is control.... By creating the conditions of interest, the experimenter holds extraneous factors constant and ensures that [the subjects] will encounter conditions that differ only in theoretically decisive ways" (that is, in ways that relate to the theory of the nature of the impact being looked at).32 Such an experiment also randomly assigns the subjects, so there are no demographic or political orientation differences between the groups that could skew the findings. In addition to its use of the experimental method, the Iyengar and Kinder study also demonstrated the value of the multimethod approach by linking the experimental findings with content analyses of network news trends and with public opinion survey data. Drawing on agenda-setting research, they sought to test more rigorously whether TV news shows really could affect what issues people considered significant and even how people thought about those issues, as well as how those responses affected people's evaluations of presidential performance.
Two types of experiments were at the heart of the Iyengar-Kinder study. The first was "sequential experiments." The experiment subjects were asked not to watch the regular network news shows for the week of the experiment. The subjects in the treatment group were then exposed to one network news show each day for a week, with each news show carefully reedited to carry a story on a given issue (such as the adequacy of American defense preparedness); the control group saw news shows with no stories on the issue. The subjects' beliefs about the importance of specific national issues, as well as their evaluations of the president's performance, were measured by questionnaires administered immediately before watching the first experimental newscast and then a full day after watching the final newscast. This afforded the opportunity to identify changes in the subjects' opinions about the importance of the issues selected for study. Several different questions were asked along those lines, from which a composite index of the perceptions and opinions on the importance of the issues was compiled. The researchers also asked open-ended questions and looked both at the number of mentions of an issue's importance and at mentions of evaluations of the president's performance to see whether the subjects had been
"primed" to use those issues as criteria for evaluating that performance. Four such experiments were conducted.
Iyengar and Kinder also sought cross-checks on their findings regarding agenda-setting on various issues and the priming of people's presidential performance evaluations based on given issues. The researchers compiled data for several years leading up to the study dates from three major national opinion polls on the public's opinions on the three issues involved in one series of sequential experiments (energy, inflation, and unemployment). They also measured how much attention the network news shows actually paid to those three issues by content coding the abstracts of network news produced by the Vanderbilt Television News Archive. In time series tests, they matched the amount of network news attention to the given issues over several years with the patterns of public opinion during the same period, which gave them another way of assessing agenda-setting. That is, if the amount of network coverage of, say, energy went up in a given month and the percentage of the public indicating that energy was an important problem also went up in that period (normally with a slight delay for opinion to crystallize and be detected in the periodic polls), then this suggested that an agenda-setting function was at work. (This, of course, did not control for other factors that could affect that opinion pattern, but it was a strongly indicative correlation, if not direct proof of causation.)
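One way to see the logic of that time-series cross-check is a lagged correlation between monthly coverage counts and the percentage of poll respondents naming the issue as important. The sketch below uses invented numbers and a plain Pearson correlation with a one-month lag; the actual Iyengar-Kinder analysis was more elaborate.

```python
from statistics import mean, stdev

# Hypothetical monthly series: number of network news stories on an issue
# (e.g., energy) and the percentage of poll respondents calling it an
# important problem. All values are invented for illustration.
stories = [4, 6, 12, 20, 18, 9, 5, 7, 15, 22, 16, 8]
percent_important = [10, 11, 13, 19, 24, 22, 15, 12, 14, 20, 25, 21]

def pearson(x, y):
    """Plain Pearson correlation between two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Allow a one-month delay for opinion to "crystallize": compare coverage in
# month t with opinion in month t + 1.
lag = 1
r_same = pearson(stories, percent_important)
r_lagged = pearson(stories[:-lag], percent_important[lag:])

print(f"Same-month correlation: {r_same:.2f}")
print(f"Coverage leading opinion by {lag} month: {r_lagged:.2f}")
# A strong lagged correlation is consistent with, though not proof of,
# an agenda-setting effect running from coverage to opinion.
```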
"Assemblage experiments" were the second type of experiment that Iyengar and Kinder conducted. These allowed a more precise manipulation of the varia¬tions in news conditions to which the respondents were treated. Basically, the treatment subjects watched a collection of "typical" news stories that paid a greater or lesser amount of attention to a given issue (defense, energy, inflation, civil rights, and so on). In these experiments, only post-test questionnaires were given immediately following the viewing, with "the appropriate test of agenda-setting ... to compare the importance participants attach to target problems across different experimental conditions representing different levels of [news) coverage."33 Iyengar and Kinder recognized that an immediate post-test did not deal with how much of this agenda-setting and priming response remained with the subjects over the longer term, so they also did follow-up interviews a week later for some of the experiments. (In order not to unduly alert the subjects to the purpose of the interviews and thus possibly engender artificial responses, the subjects were told that the interviews were for a general community survey.)
Those assemblage experiments were not intended as media and elections studies; as a model of the experimental method for use in the election realm, then, a first weakness is that they were not done during an election period. An interesting and significant study by Ansolabehere, Iyengar, and others34 sought to deal with that complaint. The media subject of this set of experiments was political ads, and the issue was whether attack ads "demobilize"
voters. Their experimental studies took place during three different election campaigns in California, and they used real candidates in those elections as the subjects of the ads (unlike some studies that have used fictitious candidates). They had ads professionally produced that systematically varied the tone (negative versus positive) while keeping everything else in the ads identical, including the visuals (thus controlling for other factors). They then embedded the ads in a local newscast and showed it to groups of randomly assigned respondents.
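In its simplest form, the demobilization question reduces to a comparison of intended turnout across the randomly assigned ad-tone conditions. The sketch below uses invented responses and a bare difference in proportions; it illustrates the logic of the design, not the authors' actual analysis.

```python
from statistics import mean

# Hypothetical post-viewing responses: 1 if the respondent says they intend
# to vote, 0 otherwise, grouped by the tone of the ad embedded in the
# newscast they watched. All values are invented for illustration.
intend_to_vote = {
    "positive ad": [1, 1, 0, 1, 1, 1, 0, 1, 1, 1],
    "negative ad": [1, 0, 0, 1, 0, 1, 0, 1, 0, 1],
}

positive_rate = mean(intend_to_vote["positive ad"])
negative_rate = mean(intend_to_vote["negative ad"])

# "Demobilization" would show up as a lower intended-turnout rate in the
# negative-ad condition; random assignment lets that difference be read as
# an effect of ad tone rather than of who chose to watch which ad.
print(f"Intended turnout, positive-ad group: {positive_rate:.0%}")
print(f"Intended turnout, negative-ad group: {negative_rate:.0%}")
print(f"Estimated demobilization effect: {positive_rate - negative_rate:.0%}")
```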
The strength of the experimental approach is certainly the greater ability to control the various factors involved and to isolate and focus on one or a few variables of interest. A few problems with these experimental methods should be briefly noted, however. The first problem we might call the "reality factor." These experiments are intended to tell us how "regular" citizens, in their normal lives, are affected by the emphases of news shows. But when regular citizens are brought into a university setting to be part of a study (even when they are not told the actual nature of the study, as in these cases), they are not in a "natural" setting by any means. The Iyengar-Kinder study sought to make the room as comfortable as possible and to have some newspapers and magazines around, but the fact remains that such a study in such a place puts people in a very unnatural situation. The subjects are not only artificially alerted but also do not have the normal distractions of kids, the dog, phone calls from friends, dinner in front of them as they watch, and so on. This problem is very hard to get around, but we must be aware of it in assessing the study results. If the study design allows it, the best thing to do is to show the videotape stimulus material in the subjects' own homes. But that is time-consuming and expensive, and it has some of its own, at least potential, problems, including the fact that in some big-city areas it is just plain dangerous to send a researcher to do a home visit!
A second problem with the Iyengar-Kinder study was the lack of control of other sources of news and information on the issues. For example, for the experiments done "in New Haven" (apparently at Yale University), did some or most of the subjects read a major newspaper before they got to the experiment site, and what of the newspapers they "provided the subjects"? Third, although they used undergraduate students in only a couple of the experiments, that use raises another issue. The author of this book sees it as rarely legitimate to use students to, in effect, represent the general adult public; students operate in very different environments, they do not have the experience older citizens have in working through election information and choices, and so on. Using students is an inexpensive way to find subjects, but it is not a way to adequately test sociopolitical responses representative of the general adult population.
Some (Further) Issues and Difficulties in Methods of Studying the Media in Elections
Let us now discuss one problem in the method of content analysis that has been common to most studies using that method, including the Democracy '92 study the author was involved in. A central question in studies of the news media's coverage of election campaigns has been the extent to which policy issues are covered versus coverage of the "hor