
The Electology.org poll, part 1/3: how could Trump have won?


I’m Jameson Quinn, a grad student in statistics at Harvard and a board member of the Center for Election Science, aka electology.org, a nonprofit that advocates for voting method reform. In these capacities, I’ve recently been analyzing a poll that the CES did in the last week before the election. (Some of you may have heard about that from a little bird in the comment sections here recently.)

I think there’s a lot of interesting lessons to be learned from this poll, so I’m going to be writing a series of three guest posts here. In this first installment, I’ll look at what it can teach us about the nature of the Clinton and Trump coalitions; in the second, I’ll discuss other candidates and hypothetical candidates, particularly Sanders; and in the last, I’ll look at what it teaches us about voting systems.

Before I start, I should say a few words about myself. Some of you perhaps know that I’m a regular reader here; and though I’ll say no more about that, it probably won’t surprise people to hear that I have a personal position on pretty much every one of the issues I’ll be discussing. I volunteered and voted for Hillary over Trump; earlier, for Bernie over Hillary; and when it comes to voting systems, I’m literally one of the biggest geeks and deeply committed to the idea of voting reform. But in writing this, I’ll try to keep my personal opinions out of the spotlight, and focus on what the data tells me.

Here’s what I think this data shows about the Hillary/Trump race:

  • It’s not that Clinton underperformed on election night; rather, Trump overperformed. This could be last-minute deciders (the Comey effect?), or it could be “shy Tories” who were always for Trump but were embarrassed to admit it.
  • Focusing on Trump’s bigotry seems to have been a strategic mistake. The data only tells me what did happen, not what might have happened, but it seems to me to suggest that the demographics that could be convinced by this argument already were, and that this argument didn’t make big inroads with Republican-leaners including women and pretty much any ethnic/racial category besides African-Americans.
  • It really looks to me as if vote suppression in Mississippi worked.
  • Controlling for a wide array of demographic factors, unlikely voters still had 1.25 times higher odds of supporting Clinton than likely voters. So if a certain demographic subgroup voted 50/50, the members of that subgroup who didn’t vote were likely split around 55/45 for Clinton. Gah! Looking on the bright side, at least there’s plenty of room for improvement on that score. This is after correcting for things like age, race, income, and education. Though in all honesty much or all of it may be a leftover effect of the bits of those aspects which escaped my coarse categorizations, it still emphasizes how important it is to improve this. I mean, it may be fruitless to hope that people making $26K start voting as much as those making $80K, but it is not crazy to hope that we can get them up to the level of somebody making $38K.
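For readers who want to check the odds arithmetic in that last bullet, here’s the conversion from an odds ratio to a vote split as a quick Python sketch (the 50/50 subgroup is hypothetical):

```python
# Hypothetical subgroup whose likely voters split 50/50 (odds = 1.0).
# Per the model, unlikely voters have 1.25 times those odds for Clinton:
odds_unlikely = 1.0 * 1.25
share_clinton = odds_unlikely / (1 + odds_unlikely)  # odds -> probability
print(round(share_clinton, 3))  # 0.556, i.e. roughly a 55/45 split
```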

We were really lucky to get a data set of this high quality. Our pollster was GfK, a market research firm. They have done a lot of work building up a randomly-sampled panel from across the whole US, including giving tablets and internet access to the panelists who didn’t already have them, so that they wouldn’t have to rely on phone polling. (I know that somebody in comments is going to complain that giving people internet access may change their behavior, but I’ll take that slight bias over the 93% nonresponse rate of a phone poll any day.) In this manner, they got an over-50% response rate; trust me, for polls these days, that is excellent. The total sample size was over 2000 respondents.

My primary tool in analyzing this data set is what Andrew Gelman calls Mr. P: Multilevel Regression and Poststratification. Stripped of (most of) the statistical jargon, this is actually a pretty simple idea. First, you use the polling data to make a model to predict what percentage of each given kind of person will respond to each binary choice. Then, you use demographic information about how many of each kind of person lives in each state to simulate how the full state population would choose. I’ll say some more about how Mr. P works in the “below-the-fold” part of this post.

But for now, the important thing is that I was able to model the simultaneous effects of six different demographic characteristics – age, gender, income, education, race/ethnicity, and state/region (divided into 4, 2, 4, 4, 5, and 51/6 categories, respectively). I also included the three biggest interaction effects between any two of those characteristics (grouping states into regions), which were: gender by income, ethnicity by education, and region by income. I included group-level predictors for the states based on the 2012 Obama/Romney percentage and on the total third-party vote percentage for 2000 and 2012 combined; this allowed my model to focus on learning what was new about 2016, without needing to relearn what was already known about the political landscape. I also included a term in the model for “likely voters” as classified by GfK.

Then, in order to project this model’s results down to each state and up to the country as a whole, I assumed that the 2012 turnout percentages by gender, ethnicity, and state would hold constant. This assumption let me predict the turnout for each state to within 5%, except for the following states: MS (real turnout was 15% lower than what I predicted!!!); DC and HI (real turnout was 5-7% lower); WV, NE, PA, VT (real turnout was 5-7% higher); and AK, FL, NH (real turnout was around 10% higher).

So that’s the first lesson of this data set: in Mississippi, voter suppression worked:

[Graph: the model’s predicted Clinton support vs. the actual results, by state.]
My simplistic model of voter turnout overestimated in MS by 15% of eligible voters; and my (unrelated) sophisticated model of voter behavior overestimated Clinton’s votes there by 12% of eligible voters. I’m not an expert in what happened there, but that’s two separate pieces of evidence that something is rotten in Mississippi. If my model is right, there are over 150K missing Clinton votes in that state alone — quite possibly, more than twice the combined margin she lost by in WI, MI, and PA.

A similar graph for Trump also shows some interesting things:

[Graph: the model’s predicted Trump support vs. the actual results, by state.]
First off, you can see the “pink” states, the ones I classed as “West” (AK, ID, IA, KS, MN, MO, MT, NE, ND, SD, WY), are all above the rest; for some reason, people in those states were particularly reluctant to reveal their Trump preferences to pollsters. You can see in the earlier graph that the pink states fall below the rest, meaning that people there seemed to have claimed to support Clinton when they didn’t; I also noticed that people in those states were more likely to refuse to state a 2-way preference. In the model whose parameters I give below, I artificially reweighted the respondents from this region (4/3 for Trump voters, 2/3 for Clinton voters, and 1 for the rest) to get the model to line up better with reality.

Second, even aside from those states (which, after all, are generally low in population), my model underestimated Trump support pretty consistently across most states, whether red or blue, and of various ethnic makeups. Obviously it’s impossible to know what really caused this mismatch, but it’s consistent with two familiar stories. It could be that some voters broke late toward Trump, perhaps because of Comey’s statements (though note that this poll was taken after the “retraction”). Or it could be “shy Tories”: people who supported Trump but did not say so to pollsters. In either case, what’s hardest to explain is how even this effect seems to have been across the board.

But aside from the “Western” states, the model did not correspondingly underestimate Clinton support. Clinton didn’t, as a whole, fail to get Democratic voters; it’s Trump who got extra voters.

Third, it was absolutely criminal that the Clinton team didn’t see Wisconsin coming, and didn’t at least do some polling there. According to my model, Wisconsin could have been a lot worse. (The Utah outlier is less of a surprise, and easier to explain.)

Now we come to the parameters of the model. And the one that immediately leaps out is race/ethnicity. All these numbers are in log-odds-ratios, which means that if the difference between group A and group B is 3.2, you google “exp(3.2)” to find that group A has about 25 times lower odds of voting for Clinton; for instance, if an individual from group B had 10:1 odds (91%), then an otherwise-similar person from group A would have 1:2.5 odds (about 29%). Not that I have anything against group A, mind you; some of my best friends are group A. So anyway, here are the ethnic effect sizes:
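If you want to play with those conversions yourself, here’s a small Python helper (a sketch for illustration, not part of the actual analysis) that applies a log-odds difference to a baseline probability:

```python
import math

def shift_odds(p, log_odds_diff):
    """Apply a log-odds difference to a baseline probability."""
    odds = p / (1 - p)                        # probability -> odds
    new_odds = odds * math.exp(log_odds_diff)
    return new_odds / (1 + new_odds)          # odds -> probability

print(round(math.exp(3.2), 1))              # 24.5: a 3.2 gap is ~25x the odds
print(round(shift_odds(10 / 11, -3.2), 2))  # 0.29: group B's 91% becomes ~29%
```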

[The effect sizes by race/ethnicity were shown as graphics: White, Non-Hispanic; Black, Non-Hispanic; Hispanic; Other, Non-Hispanic; 2+ Races, Non-Hispanic.]
Depressing. Black people are the only largely sane ones in the country; Hispanics are a bit over half sane; and everyone else is crazy, with White folks running the asylum.

Region is the next most important. The raw effects there are pretty much what you’d expect; slightly more interesting are the parameters after including 2012 as a predictor. These measure two things: swing since 2012, and, perhaps more importantly, the extent to which regional differences do not merely reflect demographics. By this measure, “Appalachia” (really, the region where a plurality report “American” ancestry, including AR and OK) is actually not as conservative as the South (a relative difference of 0.32 log-odds units) or the Rust Belt and “West” (relatively, 0.26 log-odds units). Meanwhile, the Southwest and West Coast are more liberal than you’d expect (by 0.39 and 0.25 units relative to Appalachia, respectively). Finally, the Northeast stood alongside Appalachia, with relatively little effect of region as opposed to demographics; though of course those two are very different in their leanings, the difference can mostly be explained by demographics alone.

Next most important is education. On this dimension, the middle two categories go to Trump, while the extremes go to Clinton:

[The effect sizes by education were shown as graphics: Less than high school; High school; Some college; Bachelor’s degree or higher.]
Next is gender, with a total effect size of only 0.51. That means that for an otherwise-similar woman and man, the woman has only 1.67 times the odds of voting Clinton versus Trump.

Next is age. Much has been said about this, but in fact, it seems to me that a lot of the apparent age gap is really an ethnic gap, so in a model that controls for ethnicity, the age gap is relatively small:

[The effect sizes by age group were shown as graphics.]

(The 18-24 group seems to stand out there, but you can’t really trust the parameters this kind of model gives for such a thin sliver of the population.)

And finally, income. As with education, this dimension sags in the middle, with the 25K-50K bracket being the most Trumped-up:

[The effect sizes by income were shown as graphics: Under $24,999; $25,000-$49,999; $50,000-$74,999; $75,000 or more.]
The gender by income interaction is also worth mentioning; for the 25K-50K bracket, the gender gap is about twice its average size, while it’s about half its average for the other brackets. (Since we’re working in log-odds units, “twice” means the square of the odds ratio, and “half” means the square root).
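Since “twice in log-odds” trips people up, here is that arithmetic spelled out, using the 0.51 overall gender effect from above:

```python
import math

base_gap = 0.51                     # overall gender effect, in log-odds
or_base = math.exp(base_gap)        # odds ratio, ~1.67
or_double = math.exp(2 * base_gap)  # "twice" the log-odds effect...
# ...equals the square of the odds ratio:
print(round(or_double, 2), round(or_base ** 2, 2))  # 2.77 2.77
```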

Once I had this model, I did a residual analysis, to see whether any of the demographic or opinion variables I’d left out had significant further impact. Things like marital status, household size, and home ownership correlated, but weakly; my model already accounted for most of what you could learn from them. Among Hispanics, Spanish fluency had an impact, but that’s to be expected. The notable finding was that “my family is falling behind financially” had only about 60% as much correlation with the model’s Trump underestimates as “the US economy is going in the wrong direction” did. So yes, “economic anxiety” does correlate with Trump support above and beyond what you’d expect from the demographic factors already listed, but that’s more true when such “anxiety” is generalized than when it’s personal.


More information on how “Mr. P” works. For instance, say the question were whether a person wears a hat. This model is built by assuming that each aspect of a person’s identity will have a certain impact on the odds; so if overall, 25% of people wear hats (1:3 odds), and college graduates have twice the odds, then 40% of them would (2:3 odds). To avoid overfitting to small sample sizes, the math tries to ensure that the effects of different groups along a given dimension — say, of the different income groupings — have impacts of similar magnitudes, grouped around zero. So if the survey data suggests that the income categories below 75K don’t impact hat-wearing much, but that there’s a huge effect starting at 75K, the model will suspect that that latter effect is partly a sampling artifact, and so scale it down some.
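The hat example from that paragraph, worked through in Python (hat-wearing and the numbers are of course hypothetical):

```python
def apply_odds_ratio(p, ratio):
    """Convert a probability to odds, scale the odds, convert back."""
    odds = p / (1 - p)
    return (odds * ratio) / (1 + odds * ratio)

# 25% of people wear hats (1:3 odds); college grads have twice the odds,
# i.e. 2:3 odds, which is 40%:
print(round(apply_odds_ratio(0.25, 2), 2))  # 0.4
```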

So such a model might say that x% of Hispanic women in California with some college making over $75K wear hats, while y% of non-Hispanic White men in Texas with a college degree making $25-50K do. We simply multiply those percentages by the number of actual people in each category to get the simulated total number of hat-wearers for each state and for the country as a whole.
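And the poststratification step itself is just a weighted sum over those demographic cells. A minimal sketch, with made-up cell rates and population counts:

```python
# Each cell is one demographic combination; "rate" is the model's predicted
# hat-wearing share, "count" the census population of the cell.
# (Both numbers here are invented for illustration.)
cells = [
    {"rate": 0.60, "count": 120_000},  # e.g. Hispanic women, CA, some college, >$75K
    {"rate": 0.35, "count": 300_000},  # e.g. White non-Hispanic men, TX, BA, $25-50K
]
hat_wearers = sum(c["rate"] * c["count"] for c in cells)
total = sum(c["count"] for c in cells)
print(round(hat_wearers), round(hat_wearers / total, 3))  # 177000 0.421
```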

As a statistician, I feel bad about putting these numbers out without specifying a margin of error / confidence interval. But it turns out that the tool I used doesn’t do this calculation for me, so I have to hand-code an add-on for that. I’ll get that done eventually, but in the meantime, a back-of-the-envelope calculation shows that the margin of error is definitely below 6%.
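For the curious, the crudest version of that back-of-the-envelope calculation is the simple-random-sample formula below; the real MRP intervals would be wider for individual states and subgroups, which is presumably where the below-6% bound comes from:

```python
import math

n = 2000                               # total sample size
moe = 1.96 * math.sqrt(0.5 * 0.5 / n)  # 95% MOE for a proportion near 50%
print(round(100 * moe, 1))             # 2.2 (percentage points), nationally
```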

Final note: this poll cost almost $10,000 for electology, and the analysis behind these three posts took significant unpaid work and expertise too. If you think this is worth it, consider donating to electology.org to help us support better voting methods in the US and around the world.
