The Limits of Quantitative Polling and the Need for Qualitative Research in Elections
- Related article 1: Lessons from the 2016 U.S. presidential election: qualitative research mattered
- Related article 2: More Than Numbers: How Qualitative Research Can Win Elections
- Related article 3: The limits of quantitative polling and the answers qualitative data offers
- Related article 4: What the 2016 U.S. election results showed: the urgent need to balance quantitative and qualitative research
Why is it needed?
On the surface, an election is a contest of numbers, but analyzed more broadly, the outcome of an election is the product of social and cultural change and, ultimately, of the currents of its era.
Day-to-day swings in popularity and approval ratings matter, but accurately capturing the character and the sources of the public sentiment beneath them is a core part of building an election strategy.
In the UK, the US and other advanced democracies, qualitative research is already widely used to complement conventional quantitative polling.
Complementing the limits of conventional surveys
A survey yields answers only to the questions it asks; it cannot hear what it never asked about. Because it picks up only what it set out to hear, it can miss what voters actually want to say.
Even when the results for a survey item come in, working out what they mean is a matter of interpretation, and the survey alone does not deliver that interpretation.
A survey offers no way to verify how reliable the responses are.
To elicit clear answers, survey items tend to take an overly binary or simplified form, which yields no information about the context in which voters' perceptions sit or how those perceptions might shift.
Using qualitative research in elections
Qualitative research refers to methods such as online and offline conversations, focus group interviews (FGIs), in-depth interviews, participant observation and ethnography, used to analyze a community and its voters' perceptions deeply and comprehensively.
It can explain why and how voters' particular views were formed. This clarifies what the results of quantitative polling actually mean, and precise information about the "why" is highly useful in crafting an election strategy.
It can uncover the basic needs, frames of perception and social networks that lie beneath constantly shifting approval and popularity figures.
It captures and collects voters' own language in the field, enabling a candidate to communicate with voters in their language.
Reference:
Survey says … Trump won? Lessons from 2016 election polls
Joe Hopper looks at five lessons researchers should draw from the 2016 election polls.
Article ID: 20170426-1 | Published: April 24, 2017 | Author: Joe Hopper
Joe Hopper is president of Chicago-based Versta Research.
It wasn’t long after the shock of Election Day that a colleague asked, “What do you think about the validity and accuracy of surveys and polls now? I’d say they’re all hogwash.” She was not alone. “The vitriol targeting pollsters in the last few days has been intense and ugly,” wrote another colleague via the AAPOR online discussion forum.
Most of us in marketing research work outside of election polling but of course any survey research is related to public opinion polling and our methods are the same. If election polling provides the proof that survey methods work, what are we to make of Donald Trump’s 2016 win being a surprise?
Here is my take: This election put research methods to the test and subjected them to public and professional scrutiny like never before, and there is much to learn about what works and what does not. People still can't believe how wrong the polls seemed to be, and as an industry, marketing researchers can learn from those reactions. Some things went right and some things went wrong. But what?
Let’s look at five lessons to draw from the 2016 election polls.
1. Surveys work. And they work extremely well. This may sound ridiculous in the wake of pollsters’ failure to predict Trump winning the White House but the polls did not fail. It was the attention-hungry people who interpreted, reported and prognosticated based on the polls that failed, and they failed miserably.
Clinton got 48 percent of the national popular vote. Trump got 46 percent. Clinton won the popular vote by a comfortable margin and nine out of the 10 top polls correctly predicted this. On average, the top 10 polls had Clinton winning the popular vote by 3 percentage points. She won by 2 percentage points.
If you do not find this remarkable, you should. Despite the enormous challenges polling faces today with plummeting response rates and the unattainability of probability samples, the polls – both those conducted online and those conducted by phone – worked.
Suppose you could have a fancy marketing research tool that predicted, within a percentage point or two, how many of your customers would buy your new product over a competitor’s. Would you want it? You can have it. Well-done, rigorously-executed surveys do exactly this.
2. Weight your data. The polls were surprisingly accurate but they got the election wrong. We all know why, right? Election polling measured the popular vote but it is the Electoral College that chooses the president. Despite the popular vote, only 42 percent of electors voted for Clinton, while 57 percent of them voted for Trump.
Because of the ways in which electors are chosen and cast their votes, every popular vote for Clinton was, in effect, down-weighted to .87 and every popular vote for Trump was up-weighted to 1.24. All votes are not created equal.
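As a quick sanity check on that arithmetic: the effective weight of a popular vote is simply the candidate's share of electors divided by their share of the popular vote. A minimal sketch using the rounded figures cited above (small rounding differences from the .87 in the text are expected):

```python
# Effective weight of one popular vote = elector share / popular-vote share.
# Figures are the rounded 2016 shares cited above, so the output differs
# from the article's .87 only by rounding.
popular = {"Clinton": 0.48, "Trump": 0.46}   # share of national popular vote
electors = {"Clinton": 0.42, "Trump": 0.57}  # share of Electoral College votes

for candidate in popular:
    weight = electors[candidate] / popular[candidate]
    print(f"{candidate}: each popular vote counted as {weight:.2f}")
# Clinton: each popular vote counted as 0.88
# Trump: each popular vote counted as 1.24
```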
The inequality of votes is something we know all about in marketing research and it is a good reminder of why we weight – and how important it is to think through it carefully. Weighting data is all about making sure that the people we have in our data accurately reflect the population of decision-makers we care about. If my survey is about buying cars, I need to ensure my sample matches the car-buying population. If I have too many of a certain demographic group in my sample, their votes count for less. Weighting makes that happen.
All pollsters (we hope) weight their data to bring sampling into alignment with the true population of voters. But what if, after weighting their samples to the population of voters, they then weighted to the population of electors? If their samples were big enough (most of them aren't, but surely they could be), then polling might have better reflected the population of electors.
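To make the mechanics concrete, here is a minimal post-stratification sketch, not any pollster's actual procedure: each respondent's weight is their group's share of the target population divided by that group's share of the sample. The groups and shares below are invented for illustration; the same arithmetic applies whether the target is the population of voters or, as suggested above, the population of electors.

```python
from collections import Counter

def poststratify(sample_groups, target_shares):
    """Weight each respondent so the weighted sample matches target_shares.
    weight = target share of the group / sample share of the group."""
    n = len(sample_groups)
    sample_shares = {g: c / n for g, c in Counter(sample_groups).items()}
    return [target_shares[g] / sample_shares[g] for g in sample_groups]

# Invented example: the sample over-represents urban respondents.
sample = ["urban"] * 70 + ["rural"] * 30   # sample: 70% urban, 30% rural
target = {"urban": 0.55, "rural": 0.45}    # target population shares

weights = poststratify(sample, target)
print(round(weights[0], 2), round(weights[-1], 2))  # 0.79 (urban), 1.5 (rural)
```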
Easier said than done. And I say this with trepidation, because the fancy election forecasts did try to account for the Electoral College, though in different ways. Which brings us to our third sobering lesson from the 2016 election polling debacle.
3. Beware the math-meisters. In the months leading up to the election, I looked at the election forecast of The New York Times only once. It struck me as absurd and so I never looked again. And it convinced me never to look at Nate Silver’s FiveThirtyEight election forecast either – Silver being the math-meister inspiration for The New York Times’ efforts.
As if polls are not tricky enough, these election forecasts are complicated mathematical models fed by polling data and other fundamentals (like economic data) to arrive at probabilistic statements about who will win. On July 19, Clinton was declared to have a 76 percent chance of winning. On election day, her chances were up to 85 percent.
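Neither forecast publishes its machinery here, but probabilities of this kind typically come from simulation: draw plausible vote shares around the polling averages many times and count how often each candidate wins the Electoral College. A minimal sketch with invented states, margins and error assumptions (real forecasts also model errors as correlated across states, which this toy version omits):

```python
import random

# Invented three-state race: poll margin for candidate A (points) and electors.
states = {"X": (2.0, 20), "Y": (-1.0, 15), "Z": (0.5, 10)}
POLL_ERROR_SD = 3.0   # assumed standard deviation of polling error, in points
SIMS = 100_000
total_ev = sum(ev for _, ev in states.values())

wins = 0
for _ in range(SIMS):
    # Draw a simulated margin per state; A takes its electors if margin > 0.
    ev_a = sum(ev for margin, ev in states.values()
               if random.gauss(margin, POLL_ERROR_SD) > 0)
    wins += ev_a > total_ev / 2
print(f"Candidate A wins about {wins / SIMS:.0%} of simulations")
```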
But what on earth can such numbers mean? Does it mean that if we were to hold the exact same election 100 times, Clinton would win 85 times? No, that's absurd; the election happens only once. Does it mean that in the history of all our presidential elections (there have been only 57 of them) the Clinton-like candidate won 85 percent of the time? But wait, we've never had a Clinton-like candidate, nor a Trump-like candidate before now.
Fear not, the forecasters gave us helpful guideposts to make sense of it. In July, Clinton’s chance of losing was “about the same probability that an NBA player will miss a free throw.” And on election day her chance of losing was “about the same as the probability that an NFL kicker misses a 37-yard field goal.”
If you’re not laughing at this, you ought to be crying. These numbers are absurd and the precision they communicate is misleading. As much as I love building mathematical models – and we do make good use of them in our work when appropriate – it is no wonder that the public feels betrayed and our clients roll their eyes when we talk about margins of error.
4. Qualitative is critical. A shortcoming of nearly all surveys is that quantitative research rarely gives us a deep feel for what drives the numbers. This election highlights that more than ever. There will always be disappointed voters and “Don’t Blame Me – I Voted for the Other Guy” bumper stickers but this one seems different. There is genuine disbelief that Trump won and genuine disbelief that so many voters could align with the vision articulated by his campaign.
Unfortunately, survey data doesn’t help much. We know the demographics of who voted for whom and the geographies and economics of where they live. But none of it gives a deeper sense of who, why and how. With the right kind of research, we can and should be saying, “Of course, all of this polling data makes sense.”
Good qualitative research might look like J.D. Vance’s Hillbilly Elegy, which offers a first-person account of “what a social, regional and class decline feels like … for a large segment of this country.” Or it would take a deep sociological approach like Arlie Hochschild’s Strangers in Their Own Land. In Hochschild’s words, “Hidden beneath the right-wing hostility to almost all government intervention … lies an anguishing loss of honor, alienation and engagement in a hidden social class war.”
Marketing research is no different. We are increasingly dazzled by the promise of more and more data, all of it immediately accessible, transformed into insights with newer technologies. We have increasingly sophisticated computational models at our fingertips and with free, open-source software, no less. This election demonstrated how quantitative data can (and will) fall on its face if that is all we do. We need focus groups, in-depth interviews, design labs and ethnographies. We need good, insightful qualitative research or the numbers just won’t make sense.
5. Do it only if it matters. Election polling puzzled me before I started a career in research. It seemed like constantly shifting numbers meant that polling was baloney, or alternatively, that the polls were measuring Jell-O. Either way, who cares? The only election poll that matters is the election itself. Soon enough, we will all know who won, so what is the value of predicting it ahead of time?
From my vantage point today I can think of many reasons that polling might be valuable. If it is commissioned by a campaign for internal use, polling helps candidates understand what matters to voters and how to make their messages resonate. When polling asks about issues beyond picking candidates, it can offer valuable insight that ought to influence public policy. And given that polling works, it can reinforce the validity of elections when losers cry foul, or it can provide evidence of fraud when elections are rigged.
But beyond my professional curiosity and wanting to learn from them as much as I can, I have a hard time seeing much value in the polls we witnessed in the last 12 months. Did they matter? Could we do anything with the results? Are we better off for having that constant view-in-advance of what might happen? I have a hard time even understanding the utility of a mathematical model that aggregates all those polls into an ongoing probability of outcomes. Professionally, all of it is fascinating. Personally, not so much.
Most of us in marketing research are not in the business of election polling. But there is a lesson to be learned. No matter what research or survey work you are doing, ask yourself whether it matters. Specify how it will be used. Identify specific decisions that need to be made. Know in advance how decisions are contingent upon the findings you will report.
If you find yourself scratching your head and can’t specify exactly how the research will be used, then shift your budget to something else so that the research you do will matter.
Fundamentally sound
The most important lesson for marketing research from the 2016 election is that our basic methods of inquiry are fundamentally sound. But we need to be vigilant about who we are measuring and how. We need to triangulate with non-mathematical approaches that help explain the numbers. And we need to think more deeply about what we are doing and why.
However you feel about the confusion of presidential polling and predictions, I hope you have been giving deep thought to the various implications for your research.