As the 2024 presidential election draws nearer, people are increasingly reflecting on the 2020 election and how wrong the polls were.
You’ve probably heard some of the statistics: Biden was ahead by an average of about 7 percentage points in the national polls, but ended up winning by just a little over 4 points. While this wasn’t far off, the numbers were much further from the mark in many states: Maine (+3 Biden in the polls, +7 Trump in the result), Wisconsin (+10 Biden in the polls, <1 Biden in the result), Iowa (+1 Trump in the polls, +8 Trump in the result), and others. It’s clear that the polls systematically underestimated the Republican vote in many states, following roughly the same pattern as 2016.
There are a lot of theories as to how this might have happened. This article won’t try to answer that question. But we do want to clear the air about one thing: Political polling is a very different beast from market research.
Sure, political polling and market research employ some similar techniques, namely online and telephone surveys, but that's about where the similarities end. Here are some of the main reasons why misses in election polls don't shake our confidence in the efficacy of market research.
Political Polling Is About Prediction, Market Research Is About Insight
This is the most important difference between the two. Political polling is meant to forecast voting outcomes, while market research is meant to inform complex business decisions. In other words, polling only cares about the ultimate voting outcome, while market research is usually trying to get at the “whys” behind consumer behavior, i.e. insight into attitudes, preferences, motivations, etc. For this reason, market research surveys are typically much more complex than their poll-related counterparts, containing more varied types of questions and a broader selection of topics.
Polls are merely a snapshot of how a certain sample of a population felt at a certain point in time, so strictly speaking, a poll itself can never be wrong. But pollsters then feed that polling data into complex models that try to predict election outcomes from this snapshot in time. These models embed all sorts of assumptions that can be incorrect. To name just a few:
- An assumption that x% of people who say they intend to vote for a candidate will actually turn out to vote
- An assumption that turnout patterns among different demographics will be roughly similar to previous years
- An assumption that those who respond to political polls vote in basically similar ways to those who don't
- An assumption that no major events will change people's minds between now and election day
- An assumption that changes in voting methods (e.g., a massive pivot to mail-in voting) will not affect the turnout of specific types of voters
Because of all the assumptions involved, predicting a future outcome within a large population is much, much more difficult than taking a descriptive snapshot.
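The impact of just one of these assumptions can be seen with a little arithmetic. Here is a minimal sketch, with all figures hypothetical, of how a single turnout assumption can flip a predicted margin even when the underlying poll numbers haven't changed:

```python
# Illustrative sketch (all figures hypothetical): how one turnout
# assumption can flip a predicted election margin.

def predicted_margin(support_a, support_b, turnout_a, turnout_b):
    """Predicted vote margin for candidate A, given polled support and
    an assumed turnout rate among each candidate's supporters."""
    votes_a = support_a * turnout_a
    votes_b = support_b * turnout_b
    return (votes_a - votes_b) / (votes_a + votes_b)

# Poll snapshot: A leads 52% to 48% among respondents.
# If both sides turn out at the same rate, A wins by 4 points...
equal_turnout = predicted_margin(0.52, 0.48, 0.70, 0.70)

# ...but if B's supporters turn out at 78% instead of 70%, the
# predicted winner flips, even though the poll itself is unchanged.
skewed_turnout = predicted_margin(0.52, 0.48, 0.70, 0.78)

print(f"Equal turnout:  {equal_turnout:+.1%}")
print(f"Skewed turnout: {skewed_turnout:+.1%}")
```

An 8-point turnout gap is enough to turn a 4-point lead into a loss, which is why small errors in these assumptions can produce the state-level misses described above.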
Political Polling Is Done on a Vastly Shorter Time Scale than Market Research
Polls conducted more than a few days ago are generally ignored, because each day’s news brings new events that might alter people’s opinions. Market research needs to be agile and responsive to the end-client’s changing needs, but because it is focused on longer-term business issues and consumer habits and sentiments that are slow to change (e.g., loyalty), it’s not as affected by day-to-day changes.
This also means that market researchers typically have much more time to do rigorous, complex analysis and interpretation of their results. Market research reporting is usually more in-depth and often includes various statistical analyses and visualization techniques to get at the “whys” behind the data, whereas polls need to be reported to the press almost immediately, with little analysis.
The 2020 Polls Weren’t That Wrong
Pollsters misjudged exactly how much of the vote would go to Trump in many states, but their polls pointed to the wrong winner in only two: North Carolina and Florida. In fact, the average polling error in close states and blue states for the presidential race was only about 2.5 percentage points. (It was larger in safely red states, but this is probably due in part to a well-known effect called "the winner's bonus.")
So we’re talking about differences of a few percentage points—which matter greatly in a closely contested election, but less so in a market research setting. Often, the differences we find in market research are much wider and more robust than the slim margins between Democrats and Republicans in today’s America. And where we don’t find clear differences—i.e., where survey responses are close or within the margin of error—we don’t recommend basing a decision on those results. It’s as simple as that.
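For context on that last point, the margin of error for a proportion measured on a simple random sample follows a standard formula. A quick sketch, using a hypothetical 1,000-person survey and the usual normal approximation:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p measured on a simple
    random sample of n respondents (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person survey where 50% pick option A carries roughly a
# +/-3.1 point margin of error, so a 52%-48% split between two
# options is too close to call.
moe = margin_of_error(0.50, 1000)
print(f"+/-{moe:.1%}")
```

This is the worst case (p = 0.5); differences of 10 or 20 points, which are common in market research findings, sit far outside this band.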
All that said, market research isn’t perfect either—it still leaves plenty of room for error and uncertainty. But the point of market research isn’t to give us definitive answers; it’s to reduce uncertainty and minimize risk by giving us additional data points. Does it seem safer to make a business decision without doing any customer research whatsoever, going on gut instincts of internal stakeholders alone? Or is it safer to make decisions with information on customer attitudes and preferences, even if that info has some margin of error? The tools of market research are still the best—often the only—way to improve your understanding of your target audiences.
Want to learn more? Reach out!