Paula Poindexter, associate professor in the School of Journalism, is an expert on women and young voters, African American voters, polling and media coverage of elections. She is the co-editor of “Women, Men, and News: Divided and Disconnected in the News Media Landscape” (Lawrence Erlbaum Associates, Inc./Taylor and Francis 2008) and co-author, with Maxwell McCombs, of the textbook “Research in Mass Communication: A Practical Guide” (Bedford/St. Martin’s 2000).
Every election season we’re bombarded with polls, and the 2010 mid-term election is no exception.
Polls lead the evening news. Newspapers publish them on their front pages and post them on their Web sites. Cable news talking heads recite poll numbers as they explain why candidates are ahead or behind. There are even poll tweets and blogs devoted exclusively to the latest poll numbers.
But does the ubiquitous election poll really matter? Polls can provide insight into voters, their preferences and worries. Polls can also undermine the democratic process by influencing factors that contribute to an election’s outcome.
One factor that contributes to an election’s outcome is news coverage. Studies show journalists often report their election stories in the context of a horserace — who’s winning, who’s losing. Candidates who are winning, according to the polls, get additional news coverage, better headlines and more favorable placement on the news pages and in network and local TV newscasts. These election stories can attract endorsements and more money, which can lead to robust poll numbers that may convince journalists and the electorate that the candidate has a strong likelihood of winning.
If a poll story with its horserace emphasis can influence factors that contribute to an election’s outcome, what if a poll is wrong? Might the best candidate lose? Polls are not perfect and they’re not exact.
The key is in knowing the critical factors that determine if a poll should be believed. A “no” answer to any of the following questions would call into question a poll’s reliability.
- Was the poll conducted by a credible organization?
- Was the poll conducted within the past 10 days?
- Were “likely” voters polled?
- Were landline and cell phone numbers of poll respondents randomly selected?
- Was the sample size at least 1,000?
- Was the sampling error no greater than plus or minus three percentage points?
Most, but not all, election polls are conducted by news organizations, and more often than not, their polls have been credible over time. Even so, when the poll was conducted matters. Because of news coverage and factors on the ground that can influence voters’ opinions, a poll conducted within the past 10 days is more believable than a poll conducted last month.
Polling a random sample of 1,000 likely voters should be believed over a poll of 1,000 registered voters who may or may not turn out to vote. Further, because more and more people have disconnected their landlines and use cell phones exclusively, polls that question likely voters on both landlines and cell phones are more believable than voters polled on landlines only.
Poll results should not be believed without factoring in sampling error. In an election in which it looks as if one candidate is leading, factoring in sampling error may reveal the candidates are actually tied. If sampling error is plus or minus three percentage points, the difference between candidates must exceed six percentage points for one candidate to be truly ahead.
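The arithmetic behind the ±3-point figure can be checked directly. Below is a minimal sketch (my own illustration, not from the column) of the standard 95 percent margin-of-error formula for a simple random sample, assuming the worst case of an even 50/50 split among respondents:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,000 likely voters yields roughly the plus-or-minus
# three percentage points cited in the checklist above.
moe = margin_of_error(1000) * 100   # in percentage points
print(round(moe, 1))                # about 3.1 points

# Under the rule of thumb in the text, a candidate's lead must exceed
# twice the margin of error before the race is more than a statistical tie.
print(round(2 * moe, 1))            # about 6.2 points
```

This matches the column’s guidance: a 1,000-person sample gives a margin of error of about three points, and doubling it yields the six-point lead a candidate needs before the polls show a real advantage rather than a statistical tie.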
Consistent with past elections, horserace polls are playing a leading role in news coverage of the 2010 mid-term election. Just because the latest poll results are published, posted and tweeted doesn’t mean everything the poll says should be believed. Understanding what makes a poll believable matters for voters, so they can cast their ballots on election day informed by credible information.
Visit the mid-term elections blog series home page for a complete lineup of faculty experts’ analyses.