Friday, July 17, 2009

Polling 101: Why different polls show different results

In 1936, one of the earliest nationally based opinion polls sought to predict whether Franklin Roosevelt would win a second term as President or be defeated by Kansas Republican Alf Landon.

The popular magazine Literary Digest released a poll just days before the election, its results based on two million responses to questionnaires mailed to its readers. Its finding? Alf Landon the victor in a landslide, winner of 370 electoral votes. The actual result? Franklin Roosevelt won reelection, taking all but two states in one of the most one-sided presidential elections in history.

It is generally agreed that the cause of the Literary Digest's mistake, one that continues to haunt pollsters to this day, was improper sampling. The magazine had considerably more Republican readers than Democrats, and those who returned questionnaires were understandably prepared to cast their votes for Landon over FDR. The disastrous error cost the Literary Digest dearly: its credibility plummeted overnight, and the publication never recovered.

Around the same time, George Gallup, an advertising executive, organized a poll of just 50,000 sampled respondents and correctly predicted both that Roosevelt would win reelection easily and that the Literary Digest's results would be wrong. Since that time, polling organizations like Gallup have played an essential part in elections and in charting public opinion on the issues.

Now, in 2009, you might expect opinion polling to be a sophisticated enough process that most pollsters would show similar results. So how is it that certain polls can show such different results? Here are some examples of the polling process and the difficulty of its task in rendering accuracy.

In the early summer of 2008, it was widely accepted that Barack Obama was leading John McCain in the race for the White House by a margin of four to eight percentage points. Right around that time, however, two very reputable publications, Newsweek and the Los Angeles Times, had the Illinois Senator leading his opponent by margins of 15 and 12 points, and their findings caused quite a stir. Flash forward to late October, when a Fox poll showed the presidential race in a virtual dead heat, strikingly different from the 5-9 point spread most other polls were indicating at the time.

How does this happen?

Well, it generally takes an in-depth look at the polls themselves to uncover how the top-line results generate their totals. The problem is that the actual poll report from the publication in question is often difficult to find, so readers are at times at the mercy of the media’s assessment. Right-leaning columnists will generally jump to report on a poll that looks particularly favorable to their candidates of choice, just as liberal reporters will thrill to see data that appears kind to their politicians. Even when dealing with something as easily depicted as numerical data, receiving “fair and balanced” reporting on polls can prove elusive.

Pollsters have adopted the term “outlier” to label polls that are significantly out of step with the average of other polls taken around the same time. That does not mean, however, that outlier polls are always flawed or incorrect, just that they are generally viewed with a healthy dose of skepticism. For an example of what an outlier is, let’s pretend you took five tests on a subject you were familiar with, say “American History,” and that the results from these tests were made public. Here are the scores below:

Test 1: 92
Test 2: 85
Test 3: 88
Test 4: 90
Test 5: 65

The fifth and final test would be considered the outlier, since its score was much lower than the average of the first four. The public would regard the fifth test as a fluke and assume your knowledge of the subject to be rather high. The teacher, however, with the answers to each question in hand, notices a pattern: the fifth test revolved around a specific topic of American history you were less familiar with, and the considerably lower score is testament to that.
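For the numerically inclined, here is a minimal sketch in Python of that same reasoning, flagging any score that sits far from the average of the other four. The 15-point threshold is purely an assumption chosen for illustration; real pollsters use more formal statistical tests.

# Toy sketch: flag a score as an outlier when it falls far from the
# average of the *other* scores -- the same logic as "much lower than
# the average of the first four." The 15-point threshold is an
# arbitrary choice for illustration.
scores = [92, 85, 88, 90, 65]
THRESHOLD = 15

for i, score in enumerate(scores, start=1):
    others = scores[:i - 1] + scores[i:]          # leave this score out
    avg_others = sum(others) / len(others)
    gap = score - avg_others
    flag = "  <- outlier" if abs(gap) > THRESHOLD else ""
    print(f"Test {i}: {score} (others average {avg_others:.1f}, gap {gap:+.1f}){flag}")

Run it and only Test 5 is flagged: its score sits nearly 24 points below the average of the other four.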

Now take another look at the aforementioned poll examples from the 2008 election cycle: two polls released simultaneously showing wildly positive numbers for Obama, and later in the year a Fox poll indicating a race that was practically tied. A deeper look revealed the answers. The LA Times and Newsweek polled a disproportionate number of Democrats, while the random sample used by Fox had nearly as many Republican respondents as Democrats, this despite partisan-trend data indicating that Democrats outnumbered Republicans nationwide by six to eight percentage points at the time.
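Correcting for this is what pollsters call weighting. Here is a rough sketch in Python of how party-ID weighting works; every figure below is invented for illustration and is not taken from any of the polls above.

# Hypothetical illustration of party-ID weighting. If a sample skews
# away from what we believe the electorate looks like, each group's
# answers are counted in proportion to its assumed true share.
# All numbers below are invented.
sample_share = {"Dem": 0.38, "Rep": 0.36, "Ind": 0.26}  # who actually responded
target_share = {"Dem": 0.44, "Rep": 0.37, "Ind": 0.19}  # assumed electorate

# Hypothetical candidate support within each group of respondents.
obama_support = {"Dem": 0.90, "Rep": 0.08, "Ind": 0.50}

raw = sum(sample_share[g] * obama_support[g] for g in sample_share)
weighted = sum(target_share[g] * obama_support[g] for g in target_share)

print(f"Unweighted Obama support: {raw:.1%}")       # ~50.1%
print(f"Weighted Obama support:   {weighted:.1%}")  # ~52.1%

A two-point swing from the weighting step alone, and the choice of target shares is itself a judgment call, which is one reason honest pollsters can still disagree.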

Naturally, there are other factors that make for “good” and “bad” polls. One such factor is the subjectivity of poll questions, which at its worst produces what critics label “push polls.” One of the more infamous examples of how push polls are used came in the 2000 Republican primary, when, in an attempt to torpedo Senator John McCain’s campaign, callers tied to the George W. Bush campaign controversially raised a possible scandal under the veil of a simple poll question: “Would you be more likely or less likely to vote for John McCain for president if you knew he had fathered an illegitimate black child?”

Sometimes surveys show push-poll tendencies unintentionally. Results gathered from questionnaires often depend heavily on who is asking and how the question is phrased. Let us use another hypothetical example of how two polls containing similar questions can render wildly different results.

Question 1: In light of the recent Supreme Court overruling of Judge Sonia Sotomayor’s decision to suspend promotions of New Haven firefighters in the Ricci v. DeStefano case, do you approve, disapprove, or are you unsure of her confirmation to the Supreme Court?

Question 2: Judge Sonia Sotomayor has recently come under criticism for her controversial decision to deny New Haven firefighters promotions based on racial bias in testing. In light of the Supreme Court’s overruling of her decision in the Ricci v. DeStefano case, do you approve, disapprove, or are you unsure of her confirmation to the Supreme Court?

Notice the difference in terminology between the two questions. The second in particular makes a point of informing the respondent that Judge Sotomayor “has recently come under criticism” and that her decision was deemed “controversial.” The first question makes no such claims and simply lays out the basic facts. If both questions were put to samples of Americans equally weighted along demographic, political, and ideological lines, it would not be surprising to see Sotomayor score considerably worse among respondents to the second question, who are being led, or “pushed,” toward a negative opinion of the nominee.

Lastly for our discussion, the accuracy of a poll is often dictated by its size. Over 130 million Americans cast their votes in the presidential election last November; opinion polls, however, are only able to survey a tiny fraction of those voters at any one time. So long as the polls are weighted properly to reflect the makeup of voters and as many variables as possible, generally the more people you interview, the better the results.

In North Carolina, for example, Public Policy Polling surveyed 2,100 “likely” voters, quite large for a normal state poll, compared with just 600 interviewed by Zogby. While both polls were close to the actual result, the PPP poll showed then-Senator Obama leading 50-49%, with Zogby showing McCain ahead by a 49-48% margin. Both polls were released two days before the election, and in the end the large sample size used by Public Policy Polling may have paid off to the tune of a perfectly accurate prediction.
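To put rough numbers on why sample size matters, here is a back-of-the-envelope calculation in Python using the standard 95% margin-of-error formula for a simple random sample. Real polls adjust this for weighting and design effects, so treat it as a floor rather than the pollsters’ actual published figures.

# Approximate 95% margin of error for a proportion: 1.96 * sqrt(p(1-p)/n).
# It is widest at p = 0.5, the conventional worst-case assumption.
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    return z * sqrt(p * (1 - p) / n)

for name, n in [("PPP", 2100), ("Zogby", 600)]:
    print(f"{name} (n={n:,}): +/- {margin_of_error(n):.1%}")

# PPP (n=2,100): +/- 2.1%
# Zogby (n=600): +/- 4.0%

With only a one-point spread between the candidates, both results sit comfortably inside those margins, which is how two polls can point in opposite directions and both still be statistically defensible.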

Just as with political reporting at large, conspiracy theories stretch far and wide regarding polling. That said, this writer believes most pollsters to be honest and legitimate in their search for accurate data on public opinion. Some pollsters are simply better and more consistent in the questions they ask, the sample sizes they use, and the data they report. Polling is an often arduous, time-consuming process that lacks perfection in almost every instance. The role that organizations like Gallup and their many offspring play in the shaping of personal opinions, public policy, and the candidates themselves, however, cannot be overstated.
