A Citizen's Guide to Polling


PS 101, University of Idaho, Prof. McQuide

Introduction

You will frequently see opinion polls in your local newspaper, on the TV news, or on Internet sites. Occasionally you may even be asked to participate in a poll. It is important to understand how polls are conducted, the different types of polls, and some of the problems associated with them.

First of all, why are opinion polls important? Here are several reasons:

- They allow us to obtain quick, repeated glimpses of US public opinion.
- They provide information about Americans' preferences; this is very important to policymakers in a democratic system.
- They can dispel myths about what Americans think. For example, Sniderman et al. (1993) found that liberals can be just as prejudiced toward blacks as conservatives, dispelling the old myth that conservatives were prejudiced and liberals were not.
- They help policymakers understand how to represent their constituents.
- They help politicians figure out which issues are important before running political campaigns.
- They can serve as important signals to elected officials when voters aren't happy with the job they are doing.

However, it is important to be a cautious consumer of polls. Public opinion polls can be, and have been, wrong. Two of the most infamous incorrect polls in American history occurred in 1936 and 1948.

1936 Presidential Election: Franklin D. Roosevelt was running against Alf Landon in 1936 for a second term as President. The Literary Digest had conducted presidential polls in 1920, 1924, 1928, and 1932 and correctly predicted the winner each time with only a small percentage of error. But in 1936, the Digest predicted Alf Landon to win.

Ballots returned: Landon 1,293,669; FDR 972,897

Predicted states and electoral votes:
- Landon: 32 states, 370 electoral votes
- FDR: 16 states, 161 electoral votes

Looks like a great poll, right? Over 2 million responses should be reasonably accurate, right? Oops! FDR won the election by a landslide, taking 62.5% of the vote and 523 electoral votes and carrying 46 of 48 states. The Literary Digest folded in 1937. George Gallup, meanwhile, had correctly predicted FDR to win in 1936, and his polling organization still stands today (www.gallup.com). What went wrong?

The Literary Digest committed a number of errors:

- It used telephone lists and auto-owner lists, which biased the sample in a wealthy, conservative direction and missed unemployed people, who were more likely to vote for FDR.
- It used a mail survey: it mailed out 10 million ballots and received around 2 million back. This is problematic because the sample was self-selected. Those who returned ballots were more likely to be interested in and concerned about the election, making the sample even less representative.
- It sent the ballots out early in the presidential campaign, missing the late shifts of October 1936.

1948 Presidential Election: How did the famous photograph of Truman holding up the "DEWEY DEFEATS TRUMAN" headline come about? Journalists and pollsters were so sure Thomas Dewey would defeat President Harry Truman, based on polling data collected through early October 1948, that the headline was printed before the returns were in.

What happened? This time, George Gallup and other pollsters predicted Thomas Dewey (Republican) to defeat Harry Truman (Democrat). They turned out to be wrong. By 1948, most polling organizations had learned from the Literary Digest disaster and implemented random sampling, sampling from all income levels, and so on. But this time, many pollsters believed that people's minds would be made up by October and that it wasn't worth the time or money to take any more polls. No polls were taken in the last two weeks before the election, even though the last poll taken had shown 15% of voters undecided! President Truman embarked on a two-week whistle-stop tour throughout the country in the last two weeks of the campaign, while Thomas Dewey slowed down his campaigning, confident that he was so far ahead in the polls he could start thinking about how to form his cabinet. As it turned out, many undecided voters shifted to Truman in the last two weeks, giving President Truman his astonishing victory. It is now known as one of the greatest comebacks in American election history. The lesson from this election was to keep polling right up to Election Day: pollsters learned that undecided voters can swing an election, especially a close one. All three of the major polling organizations got it wrong in 1948.
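The Digest's core mistake, a biased sampling frame, is easy to demonstrate in miniature. The sketch below is a toy simulation: the subgroup shares (30% of voters on phone/auto lists, 40% versus 71% FDR support) are invented for illustration, not historical estimates. A 200,000-response "poll" drawn only from the lists misses badly, while a 1,000-person random sample from the same electorate lands near the truth.

import random

random.seed(1936)

# Hypothetical electorate of 1,000,000 voters. All subgroup sizes and vote
# shares here are invented; only the moral of the story is historical.
population = []
for _ in range(1_000_000):
    on_lists = random.random() < 0.30              # assumed: 30% on phone/auto lists
    pro_fdr = random.random() < (0.40 if on_lists else 0.71)
    population.append((on_lists, pro_fdr))

true_share = sum(v for _, v in population) / len(population)

# Digest-style poll: 200,000 responses, but drawn only from the lists.
list_frame = [v for w, v in population if w]
biased = random.sample(list_frame, 200_000)

# Gallup-style poll: only 1,000 respondents, drawn at random from everyone.
srs = random.sample([v for _, v in population], 1_000)

print(f"True FDR share:          {true_share:.1%}")
print(f"Biased sample (200,000): {sum(biased) / len(biased):.1%}")  # ~40%: wrong winner
print(f"Random sample (1,000):   {sum(srs) / len(srs):.1%}")        # close to the truth

The point: once the frame is biased, collecting more responses just reproduces the bias more precisely.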

Types of Polls

To be a skeptical consumer of polls, we need to be able to identify the different kinds of polls that are often used and reported in the mainstream media.

1. Straw Polls: These are unscientific polls, often taken at political conventions, in which poll organizers ask for volunteers to respond to an informal poll regarding their preferences on presidential, senatorial, or gubernatorial candidates. These tend to attract party activists, who are more interested, informed, and partisan than the general population. Therefore, the results are often very skewed.

2. Interest Group Polls: Polls administered by interest groups are extremely unreliable, since the question wording is often written in such a way as to elicit responses sympathetic to their causes (and then the groups tout these results to the media as evidence that the American people are behind their cause!). Asher (2001) argues these should not even be considered polling. Look at the PETA poll below; I received this in my mailbox (most of you have probably seen something similar from other interest groups cluttering your mailbox). Look particularly at the question wording; you should be able to see how the questions are designed to obtain positive answers for the group. For example, Question 3 asks, "Do you approve of lethal animal experimentation to test new cosmetic and household products?" This is a loaded question: no one really wants to say yes to such a question, so people will put down no. And then PETA gets the results it wants to provide to the media as a "scientific" poll supporting its cause.

3. Presidential Approval Polls: Throughout a President's term, news organizations poll citizens to find out how they think the President is doing, usually on a weekly basis. The important thing to be aware of is that small shifts do not mean much of anything; ratings naturally fluctuate from week to week, especially since different people are polled each week. The time to take notice is when there is a sudden upswing or downswing in a President's ratings, such as Obama's approval rating dropping from 65% to 55%. That would indicate a major shift in the public's perception of the job he is doing as President.

Another thing to be aware of is the cycle effect. Typically, Presidents start out with high approval ratings during the "honeymoon period" at the beginning of the presidency, when Americans are excited about the new administration. Over the course of the term, approval ratings tend to decline steadily as people become discouraged by the slow pace of lawmaking and policy change and by the inevitable twists and turns of domestic and international events. Then, during the re-election campaign, approval ratings tend to spike again. The first graph below shows President Obama's approval ratings so far:

As you can see here, Obama began his term with high approval ratings of 69%, but they have recently fallen to 58% as Americans become frustrated with the pace of economic recovery, rising unemployment, and the slow pace of congressional reform of the financial system, health care, and other issues.

Now, compare this to President George W. Bush's approval ratings (source: Wall Street Journal, www.wsj.com).

As you can see, President Bush's approval ratings spiked to 90% after 9/11, rose again with his re-election in 2004, and then steadily declined, dropping drastically in his second term. The second-term decline was in part normal, as approval ratings tend to decline during a lame-duck presidency, when the President is ineligible for re-election. For Bush, however, the economic collapse brought his final approval rating down to 22%, the lowest final rating for any modern president.

4. Exit Polls: These are polls conducted on Election Day to help media outlets project the winner before the official vote counts are in. As voters leave the polling station, pollsters randomly pick every nth voter and ask whom they voted for and why. These polls are useful because they: (1) give the news media information with which to predict election outcomes; (2) provide valuable information to the media, scholars, and the public about why Americans voted for certain candidates; and (3) are much more reliable than pre-election polls, since the respondents are actual voters.

However, these polls are a major source of controversy:

- News media outlets use exit polls to try to be the first to predict the outcome of presidential races, often before the polls close in many states. In 1980, the TV media predicted Reagan's victory so early that President Carter conceded the election before the polls closed on the West Coast.
- We have four time zones in the continental US, and polls close at 8 pm local time. So when the polls close in Florida at 8 pm EST, voters in California are still voting, in fact in large numbers, since they are just getting off work at 5 pm PST. This means that if CBS predicts the presidential winner after the East Coast polls close, many voters on the West Coast may decide to stay home and not bother to vote. Imagine driving to the polls at 6 pm here in the Intermountain West and hearing on the radio that the election has just been called, the winner having been projected to take enough electoral votes in the eastern states. Now what do you do? Go on and vote even though the winner has been announced? Or turn around and drive home?
- More problematic, studies have found that those who agree to participate in exit polls are more likely to be Democratic voters, while Republican voters often refuse to participate, skewing the results (see the "Surveying the Damage" article in Module 4.6).

Different solutions have been suggested:

1. Prohibit the networks from releasing exit poll results until after the polls on the West Coast close.
2. Use the Canadian solution: restrict network coverage by time zone, from East to West. The networks could begin election coverage on the East Coast once the polls there have closed, but coverage would not extend to any other area of the US; it would remain regional until the last polls in the West Coast states have closed.
3. A voluntary agreement among the networks, made in 2000, not to release exit poll results for a state until after the polls closed in that state. However, this did not work well in Florida, where the networks called the state for Gore, then Bush, then retracted the call once they realized the race was too close to call.

None of these has been agreed to, and so the controversy continues. Exit polls did a better job of predicting results in 2008, but the West Coast problem remains.

5. Tracking Polls: These are polls taken during election campaigns to measure daily shifts in the electorate's support for the candidates. They can encourage a "horse-race" view of the election, in which political pundits try to predict the outcome as though they were betting at a racetrack. Here is an example from the close 2000 election:

You can see here how close the election was and how the polls shifted between Bush and Gore. One thing to watch for when you see tracking polls is whether the poll reports its margin of error, or sampling error; this may show that the results are actually closer than reported. For example, in the Minnick-Sali congressional race in Idaho's first district in 2008, Survey USA ran a poll in late October with the following results:

Walt Minnick (D): 51%
Bill Sali (R): 45%
Undecided: 4%
N = 613
Sampling error: ±4%
Survey conducted by telephone, October 18-19, 2008

How do you read these results? First, the N is the sample size. The sampling error of plus or minus 4% tells you that, because only a sample was interviewed, chance variation alone means each candidate's true support could be up to 4 percentage points above or below the reported figure. In this case, the race could look substantially different from the headline numbers: the true result could be a huge Minnick lead (55% to 41%) or a flipped result (47% to 49%). Another way to look at it is to view Minnick's true support range as 47-55% and Sali's as 41-49%. (A short sketch after item 6 below works through the arithmetic.)

6. Trial Heat Surveys: You will hear of these frequently early in the presidential election cycle. These surveys are designed to find out how different candidates would fare against one another. For example, a pollster in 2012 might ask, "If the election were held today, whom would you vote for: Governor Romney or President Obama?" and then ask, "Now, if the election were held today, whom would you vote for: Governor Sarah Palin or President Obama?" and so on.
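Where does a figure like ±4% come from? The minimal sketch below applies the standard 95% margin-of-error formula, MOE = z * sqrt(p(1 - p) / n) with z = 1.96; this is the textbook approximation, not necessarily Survey USA's exact procedure, but it reproduces the reported number. The helper name margin_of_error is mine.

import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    # 95% margin of error for an estimated proportion p from a random sample of n.
    return z * math.sqrt(p * (1 - p) / n)

n = 613                                    # Survey USA sample size
moe = margin_of_error(0.5, n)              # p = 0.5 is the conservative worst case
print(f"Margin of error: +/-{moe:.1%}")    # about +/-4.0%

# Support ranges implied by the poll's reported shares:
for name, share in [("Minnick", 0.51), ("Sali", 0.45)]:
    print(f"{name}: {share:.0%}, plausible range "
          f"{share - moe:.0%} to {share + moe:.0%}")

Note that the sample size n, not the population size, drives the margin: the same formula with n = 1,500 gives roughly ±2.5%, which is why the national samples discussed in the next section can be so small.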

7. Push Polls: These are unethical "polls" that have been used by campaigns in recent years as part of dirty campaigning tactics. They are not really polls designed to collect data; they are designed to persuade voters to change their minds about a candidate by feeding them false, damaging information about the opponent. For example, a push poll might sound like this:

Push Pollster: Good evening, ma'am. I am conducting a pre-election survey on the upcoming U.S. Senate race. Do you have a minute to participate?
Respondent: Yes.
Push Pollster: Which candidate are you planning to support, Bill Smith or Tom Jones?
Respondent: Tom Jones.
Push Pollster: Would you still vote for Tom Jones if you learned that he has been unfaithful to his wife and had seven extramarital affairs in the past two years?
Respondent: Maybe.
Push Pollster: Would you still be willing to vote for Tom Jones if you learned that he is being investigated by the IRS for tax evasion?
Respondent: I don't know.
Push Pollster: Would you still be willing to vote for Tom Jones if you learned that he is a closet homosexual and has frequented gay bars in Boise?
Respondent: I guess not.

As you can see, the real purpose of the "poll" is to feed the voter misleading, erroneous, or outright false information to plant doubts in the voter's mind about the candidate. It is not a real poll. Reputable professional polling organizations such as the American Association for Public Opinion Research have condemned these polls as unethical. If you are asked to participate in one, you may want to refuse, get the name and number of the campaign office, and report it to your state's elections commission.

How Are Polls Conducted?

Many Americans wonder how a survey based on the responses of 1,500 people could possibly reflect the national public opinion of 300 million people. The answer lies in statistical probability theory: as long as the survey uses random sampling techniques, the sample will yield results that are representative of the national population. Here is a list of common sampling methods (a short sketch at the end of this section illustrates the first three):

Simple Random Samples: Everyone has an equal chance of being selected. Computers generate random numbers for everyone in the population (phone number banks are often used) and then randomly select n people to contact. One difficulty with using phone numbers, of course, is that not everyone has a telephone; a more serious difficulty today is that many people have cell phones and have canceled their landline service. Another issue is that many people choose to have their phone numbers unlisted. These problems mean not everyone has an equal chance of being selected by random-digit-dialing methods.

Systematic Sampling: Pick every nth person from a list. For example, if you were to pick up your local phone book and start calling every 5th person in it, you would be engaging in systematic sampling.

Stratified Sampling: The population is divided into subsets according to characteristics, which greatly increases representativeness. For example, if you were doing a survey of the Seattle area, you would want to survey subsamples that included racial minorities, women, Hispanics, and so on.

Cluster Sampling: Running multiple interviews within the same geographic area (e.g., Cook County, IL). Many polling organizations use this method; they should report which cities or counties they used for their survey.

If the survey uses a random sampling method, it can be considered a representative sample of some geographic area (the USA, Idaho, Moscow, Latah County, etc.). The next thing to look for is the sample size and sampling error; these should always be reported with the survey results.

Survey methods that should be viewed with skepticism, because they do not use accepted random sampling methods, include:

Internet Surveys: Your textbook calls these "pseudo-surveys" because they are extremely problematic. You will often see them on the main pages of news sites such as the Moscow-Pullman Daily News, Washington Post, or CNN.com. The problem is that respondents self-select to participate; they are not randomly chosen. Thus, the sample can be disproportionately made up of people who are more interested in and passionate about the issue than the general population. There is another issue: many of these surveys allow you to vote multiple times! Try it sometime: go to CNN.com or to the d-news.com site and participate in a survey. After completing it, refresh the page and vote again. A third problem is that these surveys sample Internet users, who tend to be younger, higher-income, and more educated than the national population.

Man-on-the-street TV interviews: These are not representative; the respondents are not randomly drawn, nor is the sample size large enough.

Radio call-in interviews and mail-in newspaper surveys: These are not representative either, though they may elicit hundreds or even thousands of responses. No demographic information is collected, and they are subject to self-selection bias. Just because a survey reports a sample size of 45,000 does not make it representative!

Finally, randomly generated mail surveys have mixed results. The biggest problem with mail surveys is the very low response rate; polling organizations often offer monetary incentives, coupons, or gifts to entice people to complete and return the surveys. The upside is that these surveys usually bring more honest answers (studies of illicit behavior such as drug use have found that people are more willing to report their behavior on mail surveys than in phone or personal interviews), and interviewer effects are mitigated.
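To make the first three scientific designs above concrete, here is a minimal sketch; the 10,000-person frame, its region labels, and the target sample size are invented placeholders, not real survey data.

import random

random.seed(42)

# Invented sampling frame: 10,000 people, each tagged with a region.
frame = [{"id": i, "region": random.choice(["North", "South", "East", "West"])}
         for i in range(10_000)]
target = 500  # desired sample size

# 1. Simple random sample: every person has an equal chance of selection.
srs = random.sample(frame, target)

# 2. Systematic sample: random starting point, then every kth person on the list.
k = len(frame) // target
systematic = frame[random.randrange(k)::k]

# 3. Stratified sample: sample each region in proportion to its share of the
#    frame, guaranteeing that every subgroup is represented.
stratified = []
for region in ("North", "South", "East", "West"):
    stratum = [p for p in frame if p["region"] == region]
    n_region = round(target * len(stratum) / len(frame))
    stratified.extend(random.sample(stratum, n_region))

print(len(srs), len(systematic), len(stratified))  # roughly 500 each

Cluster sampling would add one more stage: first randomly choose geographic units (counties or precincts), then interview respondents within the chosen clusters.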

Common Problems and Errors to Beware of in Polls

There are a number of problems that could skew poll results. Below are some of the key ones we should be aware of as citizens:

Response rates: Sometimes organizations will claim their poll is valid on the basis of a large number of responses, when in actuality the methodology is unreliable. Example: Playboy and Cosmopolitan magazines sometimes run sexual-behavior surveys of their readers. They may get 100,000 replies, a much larger sample than those you see on CNN or ABC News (those organizations use random samples of about 1,000 or 1,500 Americans). However, the response rate is really only about 2%. And that is not all: these surveys are notoriously unreliable because the audience is inherently biased, there is self-selection bias, the survey is neither random nor representative of the US as a whole, and people lie on these types of surveys.

Self-selection bias: The problem that arises when people can volunteer for surveys rather than being selected by a computer or other random method. The best example is a poll in a magazine asking you to fill out the survey and mail it in: you selected yourself; no one picked you at random.

"Nonattitudes": The problem of people "responding to poll questions about which they have no genuine attitudes or opinions" (Asher, 2001, p. 25). A famous example: in 1995, the Washington Post decided to test whether the public would respond to a question about a non-existent act, something no one could know anything about (and thus really shouldn't have an opinion on). People were asked, "President Clinton said the 1975 Public Affairs Act should be repealed. Do you agree or disagree?" Surprisingly, 53% of the people polled stated an opinion about this non-existent act (Asher, 2001, pp. 26-27). Why would people do this? The problem is one of social context: people do not like to admit that they have no idea what the pollster is talking about, so they feel they must offer an opinion. The way to solve this is for pollsters to offer an "I don't know" option or to specifically tell people they can skip a question if they do not know about the topic. Another way is to ask, "Have you thought much about this issue?" If the answer is no, the pollster skips to the next issue; if yes, the pollster can ask further questions about it.

Even political candidates have fallen into this trap! During the 2000 New York senatorial debate between Rick Lazio (R) and Hillary Clinton (D), the candidates were asked: "How do you stand on federal bill 602P? Under the bill that's now before Congress, the US Postal Service would be able to bill e-mail users 5 cents for each e-mail they send. They want this to help recoup losses of about $230 million a year because of the proliferation of e-mail. So I'm wondering if you would vote for this bill, and do you see the Internet as a source of revenue for the government in the years to come?" Both candidates stated they would not vote for such a bill; Lazio went even further, arguing that this was an example of how greedy the federal government is and how we need to keep the government's hands off the Internet. The only problem: Bill 602P is fictional! It comes from an e-mail hoax that has been circulating in e-mail forwards and on Internet sites since 1999. Congressional bills are designated H.R. #### (e.g., H.R. 2450) or S. #### (e.g., S. 650). Had the candidates done their research on congressional lawmaking beforehand, they would have picked up on that right away. Oops!

"Middle responses": This problem is caused by questions that offer people three responses, with most people picking the middle answer. Here is a hypothetical example:

Q: As things presently stand, who do you think is ahead in the trade balance, the US or Canada, or do you think their trade balance is about the same?

Result:
US: 24%
Canada: 22%
About the same: 47%
Don't know / no response / refused to answer: 7%

The problem here is that if people have no idea, it is easy for them to just say "the same" instead of "I don't know." When you see surveys in the news, look for this problem; it is very common. It's like asking your friends, "Which ice cream do you like: vanilla, chocolate, or both?" You are very likely to get a large number of "both" answers.

This problem is also very common with "feeling thermometers," another type of poll you should be familiar with, often used in surveys about Presidents. The following hypothetical item is an example:

EX: On a scale of 0-100, with 0 meaning you feel very negatively about the person and 100 meaning you feel very positively, please rate the following politicians: President Barack Obama...

The problem is that many people will say a number between 50 and 60 because they have no real opinion or feeling about the elected official, and this range is a middle-response answer. Often people will have no clue about the politician being asked about, such as if the question above asked about Attorney General Eric Holder. Instead of admitting they have no idea who he is, people will just say, "Oh, about 53, I guess," or something to that effect. Pollsters need to avoid this by asking people to skip over any politicians' names they are unfamiliar with.

Question Wording and Context Problems:

1. Use of loaded words or inflammatory phrases can affect the pattern of responses to a survey question. EX: In the PETA poll (shown in the Interest Group Polls section earlier in this handout), the very first question asks, "Before reading this mailing, were you aware of the vast numbers of animals who perish and suffer every year in American research laboratories?" This question uses loaded words such as "perish" and "suffer," which prime the reader to answer the rest of the questions in a certain way. It is a misleading and manipulative tool that biases the survey results. Reputable pollsters do not use such wording.

2. Good surveys have questions worded in an unbiased, fair, and straightforward manner.

3. Question order also matters! EX: Suppose people in Illinois were given a survey about the Chief Illiniwek issue (the University of Illinois mascot), and the first 10 questions were related to Native American culture and traditions and the history of white attempts to defame Native Americans, followed by a final question about whether the University of Illinois should keep or retire the Chief. The answers might be quite different than if the survey asked that question in a different order.

4. Questions that leave people no acceptable answer are serious polling errors. EX (the classic example): "Do you still beat your wife?" This is a terrible question because it leaves the respondent guilty no matter what: if he says yes, he admits he is still beating his wife; if he says no, he admits to having beaten her in the past. Instead, the question needs to ask, "Have you ever beaten your spouse?" If the answer is yes, the pollster can follow up with, "Have you stopped?" A similar question in political science would be, "Do you still not vote?" If you say yes, you admit you still do not vote; if you say no, you admit to not having voted in the past.

5. Ambiguous questions. A good example: "Have you taken a vacation in the last few months?" (What does "few" mean? To some people it means 3, to others 4, and so on.) Or: "Would you support a small tax increase to pay for health care reform?" (What does "small" mean?)

6. Double negatives in question wording will confuse people! EX: A 1992 Roper poll asked, "Does it seem possible or does it seem impossible to you that the Nazi extermination of the Jews never happened?" 22% said it seemed possible that the Holocaust never happened, and another 12% said they did not know. This shocked the entire country, worrying Jewish and other civil rights leaders. Roper concluded that this was a faulty question and re-ran the poll, finding only 1% of the people polled saying it was possible the Holocaust never happened.

7. Question wording matters! Wording a question in slightly different ways can yield completely different answers. A New York Times story by Adam Clymer ("One Issue That Seems to Defy a Yes or No," 23 February 1986) cited abortion as an issue where question wording can dramatically affect citizen responses. He used a set of 1985 New York Times polls to show how different wordings produce very conflicting results:

"What do you think about abortion? Should it be legal as it is now, legal only in such cases as saving the life of the mother, rape, or incest, or should it not be permitted at all?"

Legal as it is now: 40%
Legal only to save the mother, or for rape or incest: 40%
Not permitted: 16%
Don't know: 4%

"Which of these statements comes closest to your opinion? Abortion is the same thing as murdering a child, or abortion is not murder because a fetus isn't really a person."

Murder: 55%
Not murder: 35%
Don't know: 10%

"Do you agree or disagree with the following statement? Abortion sometimes is the best course in a bad situation."

Agree: 66%
Disagree: 26%
Don't know: 8%

As you can see, these three differently worded questions received very different answers. Pro-life lawmakers would cite the second survey, while pro-choice lawmakers would use the third; the first is more mixed. How does one determine what public opinion really is on abortion? This is one of the most troubling aspects of polling.

Lying: People lie to pollsters, and this often occurs in election surveys. When people are asked, "Did you vote in the last election?" they are often embarrassed to admit that they did not vote, so they lie and say they did. We know this happens because poll results do not match actual voter turnout numbers. Surveys on sexual behavior, drug use, church attendance, and other personal attitudes or behaviors often bring misleading answers or outright lies from respondents. A good example of lying to pollsters is in sex surveys such as those conducted by the Durex Corporation (a condom manufacturer); this study was awarded the 1998 "Dubious Data" award by STATS. In the 2001 Durex Global Sex Survey, men claimed to have sex more often than women (102 times a year versus 91 times a year). Since every heterosexual encounter is counted once by a man and once by a woman, the two averages should be approximately equal; unless the percentage of male homosexuals is much higher than previously thought, this data cannot possibly be right. Men tend to over-report how much sex they are having, and women tend to under-report it.

And Finally...

Hopefully you now understand a bit more about polls and will be more skeptical in your consumption of them. Remember to look for the sample size, sampling error, sampling method, and question wording when you read or hear about a poll! Also, look at the source. There are many excellent, reputable polling organizations (Gallup, Pew Research, the Roper Center, CNN, the New York Times, etc.) that conduct polls with credible results. But there are plenty of others associated with corporations or think tanks with an agenda to sell. If this topic interests you, take a public opinion or survey research course to learn much more!