How do you read poll results?
The question for you today is not only about how you read poll results but also about what thought process you have when reading a poll. You might say, “Well, it’s simple. 76% of people love Windows and 24% of people love Mac. Windows must be superior. End of story.”
Is it really that simple? Sometimes it is, but most of the time it’s not.
Politico published an article yesterday titled Poll: Just 21 percent approve of House’s Obamacare repeal bill. What’s your first reaction?
- Yeah, it’s about right. Nothing new here.
- Something's gotta be wrong with this.
- Not again. Please don’t use ridiculous numbers to draw people’s attention.
- Hmmm… interesting.
Whatever your reaction is, do you trust that number? If not, what would you do?
If you read the article, you will soon find out it's merely a summary of a poll conducted by Quinnipiac University Poll. The article, however, only provides details about the poll itself in the last paragraph, even though that is among the most relevant information I want when reading poll results. I don't know why it wasn't in the first or second paragraph.
Anyway, after reading the article, I did some research. I already knew the sample size: 1,078 voters nationwide. I knew the poll was conducted by Quinnipiac University Poll, and found that they have been doing a great job according to FiveThirtyEight's pollster ratings.
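As a quick sanity check on a sample size like that, you can compute the textbook 95% margin of error for a simple random sample. This is a hypothetical back-of-the-envelope calculation, not something from the article, and it assumes simple random sampling, which national polls only approximate:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a proportion estimated
    from a simple random sample of size n (p=0.5 maximizes it)."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"+/- {margin_of_error(1078):.1%}")  # prints "+/- 3.0%"
```

So a sample of 1,078 gives a margin of error of roughly plus or minus 3 percentage points, which is typical for national polls.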
In addition, that 21 percent approval rate of the revised GOP health plan seemed to be their key finding, because they put that conclusion in the title as well as in the first paragraph. But the associated question appeared quite late in the survey:
Question 60. There is a revised Republican health care plan to replace Obamacare, known as the American Health Care Act. Do you approve or disapprove of this revised Republican health care plan?
At this point, I was about 70% confident, non-statistically speaking, that the number was trustworthy. But I wanted to check one more thing – how, or whether, they dealt with sampling bias – because they did not mention the proportion of each sub-population in the survey sample in terms of party identification (Republican, Democrat, Independent). Nor did they mention whether those proportions were comparable to the ones in the population.
According to Pew Research Center, in 2016, 34% of registered voters identified as independents, 33% identified with the Democratic Party, and 29% identified as Republicans. If the survey sample did not consist of similar proportions, then the sample may not be representative of the population.
Of course, there is a statistical method to deal with this issue. The researchers at Quinnipiac University not only described their sampling procedures but also addressed the sampling bias in their Sample and Methodology section. Basically, the technique is to perform a weighting adjustment: give more weight to the under-represented groups and less weight to the over-represented ones.
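A minimal sketch of that weighting idea, assuming post-stratification by party ID. The population shares reuse the Pew figures above; the sample shares and per-group approval rates are made up for illustration, not Quinnipiac's actual numbers:

```python
# Post-stratification: weight = population share / sample share.
# Population shares from Pew (2016 party ID); sample shares are invented.
population = {"Independent": 0.34, "Democrat": 0.33, "Republican": 0.29}
sample     = {"Independent": 0.28, "Democrat": 0.45, "Republican": 0.27}

weights = {g: population[g] / sample[g] for g in population}

# Hypothetical approval rates within each group:
approval = {"Independent": 0.20, "Democrat": 0.05, "Republican": 0.45}

# The unweighted estimate reflects the (skewed) sample composition...
unweighted = sum(sample[g] * approval[g] for g in sample)
# ...while the weighted estimate reflects the population composition.
weighted = sum(sample[g] * weights[g] * approval[g] for g in sample)

print(f"unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")
# prints "unweighted: 20.0%, weighted: 21.5%"
```

Because Democrats are over-represented in this made-up sample and disapprove most, the unweighted number understates overall approval; the weights correct for that.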
Now I feel good about the poll results. It's not a tactic to draw people's attention. Next time you read a poll, try to find answers to these questions:
- How large was the sample size?
- How was the sample collected?
- Was the sampling bias addressed?
- Who conducted the poll?
By answering these questions, you should be able to figure out whether the results can be trusted.