Friday, November 7, 2008

Causing public opinion

It is interesting to consider what sorts of things cause shifts in public opinion about specific issues. This week's national election is one important example. But what about more focused issues -- for example, the ballot initiatives that were considered in many states? To what extent can we discover whether the organized efforts of advocacy groups -- advertising and other strategies for reaching the minds of voters -- have a measurable effect on public opinion?

In these cases we might imagine that voters begin with a prior set of attitudes towards the issue -- perhaps including a large number of "don't know/don't care" people. Then a set of advocates forms to lobby the public pro and con. They mount campaigns to influence voters' opinions towards the option they prefer. And on election day voters indicate their approval or disapproval -- often in proportions quite different from those measured in pre-campaign surveys. So something happened to change the composition of public opinion on the issue. The question here is whether it is possible to estimate the effects of the various possible influencers.

This seems like a potentially simple area of causal reasoning about social processes. The outcome variable is fairly observable through polling and the final election, and the interventions are usually observable as well, both in timing and magnitude. So the world may present us with a series of interventions and outcomes that support fairly strong causal conclusions -- for example, "each time ad campaign X hits the airwaves in a given market, there is an observed uptick in support for the proposition." It is unlikely that the correlation occurred as a result of random variations in both terms; we have a theory of how advertising influences voters; so we conclude that "ad campaign X was a causal factor in shaping voter opinion in this time period." (It is even possible that X played a role in both segments of opinion, resulting in an uptick in both yes and no responses. Then we might also judge that X was effective at polarizing voters -- not the effect the strategist would have aimed at.)
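As a rough illustration of the kind of check this reasoning relies on -- with entirely hypothetical data and variable names, not a description of any actual study -- one might compare average poll support in a short window before and after each airing of campaign X:

```python
# Hypothetical sketch: does support tick up after each airing of ad campaign X?
# "poll" is a daily series of percent-yes readings; "airing_days" lists the days X ran.
import statistics

def mean_shift_around_airings(poll, airing_days, window=3):
    """Average change in support from the window before each airing to the window after."""
    shifts = []
    for day in airing_days:
        before = poll[max(0, day - window):day]
        after = poll[day + 1:day + 1 + window]
        if before and after:
            shifts.append(statistics.mean(after) - statistics.mean(before))
    return statistics.mean(shifts) if shifts else None

# Toy data in which support steps upward after airings on day 10 and day 20.
poll = [40] * 10 + [42] * 10 + [44] * 10
print(mean_shift_around_airings(poll, airing_days=[10, 20]))  # 2.0: an uptick after each airing
```

A consistently positive shift across many airings and many markets is the sort of pattern that would license the singular causal conclusion.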

This is an example of singular causal reasoning, in that it has to do with one population, one issue, and a specific series of interventions. What would be needed in order to arrive at a conclusion with generic scope -- for example, "advertising along the lines of X is generally effective in increasing support for its issue"? The most straightforward argument to the generic conclusion would be a study of an extended set of cases with a variety of strategies in play. If we discover something like this -- "In 80% of cases where X is included in the mix it is observed to have a positive effect on opinion" -- then we would have inductive reason for accepting the generic causal claim as well. This is basic experimental reasoning.
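To make the inductive step concrete, the tally across a collection of campaigns might look like this (the per-case effect estimates are invented for the illustration):

```python
# Invented per-campaign effect estimates for strategy X (percentage-point shifts in support).
case_effects = [1.8, 0.4, -0.3, 2.1, 0.9, 1.2, -0.1, 0.7, 1.5, 0.6]

positive_share = sum(e > 0 for e in case_effects) / len(case_effects)
print(f"X had a positive estimated effect in {positive_share:.0%} of cases")  # 80% of cases
```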

Take a hypothetical issue -- a referendum on a proposal for changing the system the state uses for assessing business taxes. Suppose that a polling firm has done weekly polling on the question and has recorded "yes/no/no opinion" responses since October 2007. Suppose that two organizations emerged in December to advocate for and against the proposal; that each raised about $5 million; and that each included an advertising campaign in its strategy. Suppose further that the "no" campaign also included a well-organized effort at the parish level to persuade church members to vote against the measure on religious grounds, and that the "yes" campaign included a grassroots effort to persuade university students and staff to support the measure on pro-science and pro-economy grounds. And suppose each organization mounted a "new media" campaign using email lists and web communication to make its case. Finally, suppose we have good timeline data about the occurrence and volume of media spots throughout the period of June through November.

This scenario involves three types of causes, a timeline representing the application of the interventions, and a timeline representing the effects. From this body of data can we arrive at estimates of the relative efficacy of the three treatments? And does this set of conclusions provide credible guidance for other campaigns over other issues in other places?
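One way to picture the estimation problem -- again a minimal sketch on simulated data, with made-up intensity measures and effect sizes rather than anything drawn from a real campaign -- is a regression of the weekly change in "yes" support on net weekly intensities of the three intervention types:

```python
# Hypothetical sketch of the estimation problem posed above; all numbers are simulated.
# Each regressor is a net weekly intensity (pro minus con) for one intervention type,
# which we assume can be measured from the campaign timelines.
import numpy as np

rng = np.random.default_rng(7)
weeks = 55                                 # roughly October 2007 through election week
ads = rng.normal(0, 10, weeks)             # net broadcast ad spots
organizing = rng.normal(0, 4, weeks)       # net face-to-face organizing (parish, campus)
new_media = rng.normal(0, 6, weeks)        # net email/web pushes

# Simulated weekly change in "yes" support, built from assumed "true" effects plus noise.
delta_yes = 0.05 * ads + 0.30 * organizing + 0.10 * new_media + rng.normal(0, 0.5, weeks)

# Ordinary least squares of the weekly change on the three intervention intensities.
X = np.column_stack([np.ones(weeks), ads, organizing, new_media])
coefs, *_ = np.linalg.lstsq(X, delta_yes, rcond=None)
print(dict(zip(["intercept", "ads", "organizing", "new_media"], np.round(coefs, 3))))
```

A real analysis would have to confront lags between exposure and opinion change, confounding events in the news cycle, and the small number of weekly observations; the sketch only shows the shape of the inference, not a credible design.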

There is also the question of the efficacy of the implementation of the strategies. Take the ad campaigns. Whether a specific campaign succeeds in changing viewers' opinions depends on its content, message, and production quality. Does the message resonate with a target segment of voters? Does the production design stimulate emotions that will lead to the desired vote? So efficacy needs to be evaluated across individual instances of a medium as well as across varieties of media. (This is the function of focus groups and snap polls -- to evaluate the effects of specific messages and production choices on real voters.)

(Here is a link to some information about the process leading up to a positive vote on the Michigan Stem Cell initiative this month. A good general introduction to the social psychological theories about the formation of attitudes and opinions is Stuart Oskamp and P. Wesley Schultz, Attitudes and Opinions.)

1 comment:

John said...

A nice, clear account. This curmudgeon notes, however, that these sorts of issues have been discussed for decades, if not longer, in the marketing research literature. They are pretty much standard features of standard textbooks on the subject. See "test marketing," "pre- and post-testing," "campaign audits" and similar topics.