If 2016 taught me anything, it is how disastrous political polls have become. And as we head into the 2018 midterms, we should take a hard look at how we conduct them, crunch the numbers, and use the results. Voters deserve it.
On November 2nd, 2016, Marquette University Law School in Wisconsin issued its highly regarded presidential polling results. It was memorable for two reasons. The first is that it was conducted from October 26th through the 31st, the window in which then-FBI Director James Comey told Congress the bureau was reviewing newly discovered emails tied to Democratic candidate Hillary Clinton's private email server, and respondents didn't seem fazed.
And yet, among the 1,255 likely Wisconsin voters surveyed, Hillary Clinton led with 46 percent to Republican candidate Donald Trump's 40 percent.
Charles Franklin, the director of the school's election poll, surmised that “Concern about Clinton’s use of a private email system does not appear to have shifted much in the wake of the FBI news.”
The second reason this poll is memorable for me is that Trump immediately slammed the brakes on his impending Wisconsin rally. After all, the Marquette poll had been spot on in the 2012 and 2014 elections. But a few days later he won the state. And then he flipped the Midwest. One after another, Iowa, Wisconsin, Michigan, Ohio, and Pennsylvania fell to the Republicans, and nearly all reputable state polling missed it.
MIXED-UP METHODS
It didn’t use to be this hard. Just a few decades ago, pollsters dealt with an expected 20 to 30 percent non-response rate. It’s much higher now, and so are the cost and time required to get good results.
"The first big problem isn’t anything nefarious; it’s part lazy collection and cheap skating the process,” says Steven Moore, a veteran public opinion researcher. He says that dealing with sample populations now compared to 20 years ago is, "Like comparing apples to broccoli."
“If you have a jar full of red and blue jellybeans, and every time you shake it up thoroughly for like five minutes and then pick one out, then shake it up again for five minutes and take another out, that’s probability theory,” says Moore. “But now, only the 5 percent at the top get shaken up, while 95 percent of the jellybeans sit at the bottom unshaken, and those could be all red.”
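Moore’s jar is, at bottom, a picture of probability sampling. Here is a minimal simulation sketch of his point, with invented numbers: when every jellybean is equally reachable, a modest sample recovers the jar’s true makeup; when only a skewed top layer is reachable, the estimate goes badly wrong no matter how many beans you draw.

```python
import random

# Hypothetical illustration of Moore's jellybean jar: the true jar is an
# even red/blue split, but pollsters can only reach a skewed "top layer."
random.seed(42)

jar = ["red"] * 5000 + ["blue"] * 5000   # true population: 50% red

# Well-mixed sampling: every bean has an equal chance of being drawn.
mixed_sample = random.sample(jar, 1000)

# Unmixed sampling: suppose the reachable top 5% (500 beans) happens to
# skew 80% blue (e.g., the easiest-to-reach respondents lean one way).
top_layer = ["blue"] * 400 + ["red"] * 100
unmixed_sample = [random.choice(top_layer) for _ in range(1000)]

def red_share(sample):
    return sum(bean == "red" for bean in sample) / len(sample)

print("true red share:    0.50")
print(f"well-mixed sample: {red_share(mixed_sample):.2f}")   # close to 0.50
print(f"top-layer sample:  {red_share(unmixed_sample):.2f}") # close to 0.20
```

Drawing more jellybeans from the unshaken top layer doesn’t help; a bigger sample of the wrong beans just gives a more precise wrong answer.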
Online polling can be notoriously skewed if not done carefully, and it isn’t accurate in low-population areas. Mail-in poll response rates have dwindled, according to the Columbus Dispatch, whose famous mail poll got it wrong the week before the presidential election as well. The paper's polling budget had also been cut, which meant fewer staff to help and fewer polls conducted. A common story.
As for dialing up an American, nearly everyone screens unknown numbers with caller ID. And take a guess who actually answers landline calls? “If we just called landlines, then 75 percent of our respondents would be 55 or older,” says Moore. That's not surprising, because most of America doesn't use a landline anymore. In 2016 the nation had just hit the halfway mark: 50 percent of all American households operated with only a cell phone, according to the Centers for Disease Control. Now that number is nearer 53 percent. And the less expensive automated dialing and questioning used on landlines, known as robocalling, is illegal on cell phones; hired staff must hand-dial instead, which is costly.
“You’ve got to split it up properly,” says Jim Hobart of Public Opinion Strategies. “A 60/40 ratio of cell phones to landlines is good. Then, if you need Spanish-language interviews, it's more money and more time.”
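To make that tradeoff concrete, here is a minimal sketch of what a 60/40 split implies for a hypothetical 1,000-interview phone poll. The per-interview costs below are invented for illustration and don’t come from any firm’s actual rate card.

```python
# Sketch: what a 60/40 cell-to-landline split implies for a hypothetical
# 1,000-interview phone poll. Cell phones must be hand-dialed by live
# staff (robocalling cell phones is illegal), so they cost more.
TOTAL_INTERVIEWS = 1000
CELL_SHARE = 0.60

# Invented per-completed-interview costs, for illustration only.
COST_CELL = 20.00      # live, hand-dialed interview
COST_LANDLINE = 8.00   # cheaper automated dialing is permitted

cell_n = int(TOTAL_INTERVIEWS * CELL_SHARE)   # 600 cell interviews
landline_n = TOTAL_INTERVIEWS - cell_n        # 400 landline interviews

total_cost = cell_n * COST_CELL + landline_n * COST_LANDLINE
print(f"{cell_n} cell + {landline_n} landline: ${total_cost:,.2f}")
# 600 cell + 400 landline: $15,200.00
```

Whatever the real numbers, the shape of the problem is the same: the more the sample leans on cell phones, the more every completed interview costs.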
Jim Lee owns the Susquehanna poll in Pennsylvania, which incorrectly called the 2016 race for Clinton. With a 50/50 split between automated and person-to-person interviews, his team still got it wrong. “Half of the surveys were done by live interviewers, and that group found Clinton winning by eight,” he said. “The automated interviews had Trump winning by two.”
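Averaging those two halves shows how a blended topline could still land on Clinton. A back-of-the-envelope sketch, assuming the two modes were weighted equally (the piece doesn’t specify Susquehanna’s actual weighting):

```python
# Back-of-the-envelope blend of the two interview modes Lee describes,
# assuming the halves were weighted equally. The real weighting scheme
# behind the published topline is not specified here.
live_margin = +8        # Clinton +8 among live-interview respondents
automated_margin = -2   # Trump +2, i.e., Clinton -2, among automated calls

blended = 0.5 * live_margin + 0.5 * automated_margin
print(f"blended topline margin: Clinton +{blended:.0f}")  # Clinton +3
```

A ten-point gap between modes, folded into a single Clinton +3 headline number, hides exactly the disagreement a reader would most want to see.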
Maybe Trump voters didn’t want to admit to a live person how they would vote. Perhaps the questions were loaded, leading, or vague. The public usually never finds out, because methodology isn't commonly addressed in the media, and it's time to change that.
MANNERS MAKE A DIFFERENCE
Words matter, and carefully constructed poll questions take time and attention to detail.
“It didn’t speed up the process, let’s put it that way," says Deborah Devedjian, the founder of The Chisel, a 100 percent bipartisan civic platform that recently undertook an intimidating project centered on neutral, bipartisan polling. The Chisel formed a coalition of 30 partner organizations from across the political spectrum that spent months constructing a groundbreaking survey of 1,318 eligible voters to pinpoint where Americans’ values diverge and where they agree. Now published in book form, the What’s Your American Dream? survey throws cold water on the common argument that America is fundamentally fractured.
The coalition combed liberal and conservative media to pinpoint the topic areas most commonly covered in the news and came up with 34 focal points. One of those was K-12 education. An example question: “What is your #1 goal or aspiration to make your American dream a reality for PreK-12 education?” Respondents were then given a list of "goals" for achieving a great K-12 education system.
The top three goals ranked in that category matched for the vast majority of respondents across the political spectrum. And across all 34 question categories, the top-ranked goals matched for 53 percent of the survey questions regardless of political affiliation.
Respondents also commented on their decisions for each topic, 5,000 comments in all.
The survey questions, their wording and order, the method of delivery, how respondents were reached (including through coalition members' respective membership rolls and websites), even the language and graphics used, all went through painstaking analysis to keep everything neutral. “What is the intention going in? How are the questions phrased? People might sense that there is a loaded aspect to it. And then you have bad data, and then people talk about it as if it’s real, and then away it goes in the media," says Devedjian.
The point here is that a large group of people with vastly different political ideologies acted on an expensive, time-consuming, bipartisan idea for the good of the voter, and it worked.
Devedjian says she knew she wasn’t getting the whole story from the widely cited polls back in 2016, and the media didn’t help. “The questions were so loaded that they were toxic. The questions were fundamentally so skewed that pollsters couldn’t get correct numbers," she says. "That’s why so many polls couldn’t figure out what was happening in the 2016 election. Five years from now, how are polls going to be conducted? What are we moving into?”
Good question.
While pollsters scramble to revise their methodology, it may help to take a cue from The Chisel and double-check their methods and prognostications.
Want to see a pretty good indicator of pollster bias in 2016? Check out Nate Silver’s pollster ratings over at FiveThirtyEight.