

It’s Time for Pollsters to Report Margins of Error More Honestly

The purpose is not to point a finger at pollsters, but to point out that myths can have consequences.



In 2016, public opinion polling suffered two epic failures. Because polls erred in predicting the winners of both the Brexit referendum and the U.S. presidential election, critics have dissected pollsters’ questions and methods. But a more insidious source of error has remained hidden from public view: the way pollsters calculate and present margins of error.

With another important round of elections coming up this year, it’s important for the public to understand the real levels of uncertainty in poll results, and for pollsters to report sampling error more honestly.

We’ve all read disclaimers such as “The margin of sampling error is +/- 3 percentage points with a 95 percent level of confidence.” My research has found that such statements influence the level of trust readers place in a poll’s results.

Unfortunately, their trust is misplaced. A dirty secret of the polling business is that reported margins of error are essentially a myth. The manner in which margins are calculated and reported is unrelated to the actual results of the underlying poll. Consequently, these margins are misleading at best and fictitious at worst in representing the precision of the poll.

How can they be unrelated to a poll’s results? Here’s how:

First, a reported margin of sampling error may appear to apply to an entire poll, and research shows that many readers assume it does. In reality, it applies to only a single question.

Second, the single question used to calculate the margin is a fictitious one. It does not reflect the actual response to any question posed in the poll.

And finally, the formula for calculating the margin arbitrarily assumes that the answer to the fictitious question splits 50 percent “Yes” and 50 percent “No.” Only if, by chance, the polled individuals responded exactly that way would the reported margin be correct for that question.

The rationale offered for reporting margins of error this way is that a 50/50 split has the largest possible margin. If a poll reports the maximum sampling error possible, it is covering the worst-case scenario — for that (fictitious) question.
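
To see why a 50/50 split is the worst case, consider the standard formula for a Yes/No question: the 95 percent margin is roughly 1.96 × √(p(1 − p)/n), which peaks at p = 0.5. Here is a minimal sketch in Python; the sample size and answer splits are illustrative, not drawn from any actual poll:

    import math

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        # 95 percent margin of error for a single proportion p among n respondents
        return z * math.sqrt(p * (1 - p) / n)

    n = 1000  # illustrative sample size
    for p in (0.5, 0.6, 0.7, 0.9):
        print(f"answer split {p:.0%}: +/- {margin_of_error(p, n):.1%}")

    # answer split 50%: +/- 3.1%  (the maximum, and the figure usually reported)
    # answer split 60%: +/- 3.0%
    # answer split 70%: +/- 2.8%
    # answer split 90%: +/- 1.9%

Any lopsided answer carries a smaller margin than the 50/50 worst case, which is why the reported figure rarely matches any actual question.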

Except that the poll hasn’t necessarily covered the worst case. If a question has more than two possible answers, say, “Don’t Know” in addition to “Yes” and “No,” the worst case requires a different formula. And for subgroups such as “males” or “65 and over,” the sample is smaller, so the margins will be larger than the margin for the overall poll.
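
The subgroup effect follows directly from the same formula, because n shrinks. A quick illustration, with hypothetical subgroup sizes inside a 1,000-person poll:

    import math

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        return z * math.sqrt(p * (1 - p) / n)

    # Hypothetical subgroup sizes within a 1,000-person poll
    for label, n in (("full sample", 1000), ("males", 480), ("65 and over", 180)):
        print(f"{label} (n = {n}): +/- {margin_of_error(0.5, n):.1%}")

    # full sample (n = 1000): +/- 3.1%
    # males (n = 480): +/- 4.5%
    # 65 and over (n = 180): +/- 7.3%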

Most important, margins of error can be cumulative. Many answers to questions in a poll are related to one another, and collectively their margins add up to more than the margin for any single question. In a simple poll with only four questions, each reported at a 95 percent confidence level, the chance that at least one result falls outside its margin can approach 19 percent.
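
The arithmetic behind that 19 percent, assuming for simplicity that the four questions err independently:

    # Each question is inside its margin with 95 percent confidence, so the
    # chance that all four land inside simultaneously (if independent) is 0.95^4.
    all_inside = 0.95 ** 4  # about 0.815
    print(f"at least one miss: {1 - all_inside:.0%}")  # prints "at least one miss: 19%"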

There are more honest ways for polls to handle margins of error. One option is to report none at all. A recent Harvard CAPS-Harris Poll did just that.

Another option is to compute individual sampling error margins for every question in a poll and then report an average margin across all of them. Such estimates would still have weaknesses, but they would be superior to the single fictitious margin currently being reported.
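
A sketch of that per-question alternative, using hypothetical top-line results for a four-question poll of 1,000 people (the margin_of_error helper is the same one sketched above):

    import math

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        return z * math.sqrt(p * (1 - p) / n)

    # Hypothetical top-line shares for each question
    observed = {"Q1": 0.52, "Q2": 0.67, "Q3": 0.81, "Q4": 0.49}
    margins = {q: margin_of_error(p, 1000) for q, p in observed.items()}
    for q, m in margins.items():
        print(f"{q}: +/- {m:.1%}")
    print(f"average across questions: +/- {sum(margins.values()) / len(margins):.1%}")

    # Q1: +/- 3.1%   Q2: +/- 2.9%   Q3: +/- 2.4%   Q4: +/- 3.1%
    # average across questions: +/- 2.9%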

Ultimately, pollsters should devise a multiquestion approach: one that simultaneously takes into account the number, types, weights and relationships of a poll’s questions, and that reports individual sampling error margins for actual answers alongside one for the overall poll.

My purpose here is not to point a finger at pollsters, but to point out that myths can have consequences. When polls understate their margins of error, particularly in a close race, the public is being misled by a false sense of precision. That can lead to the “surprising” results we saw in 2016.

Besides, it’s in pollsters’ best interests to present their results in a more ethical way. After the battering they’ve taken recently, doing so would offer a chance to restore public confidence in their polls. The winners would be not just pollsters and the public, but our democracy as a whole.

Robert A. Peterson is the John T. Stuart III Centennial Chair in Business Administration in the McCombs School of Business at The University of Texas at Austin. 

A version of this op-ed appeared in Fortune.



Media Contact

University Communications
Email: UTMedia@utexas.edu
Phone: (512) 471-3151

Texas Perspectives is a wire-style service produced by The University of Texas at Austin that is intended to provide media outlets with meaningful and thoughtful opinion columns (op-eds) on a variety of topics and current events. Authors are faculty members and staffers at UT Austin who work with University Communications to craft columns that adhere to journalistic best practices and Associated Press style guidelines. The University of Texas at Austin offers these opinion articles for publication at no charge. Columns appearing on the service and this webpage represent the views of the authors, not of The University of Texas at Austin.
