"The Literary Digest Affair," as it came to be known, is perhaps the most prominent, and certainly the most classic, example of the failures of non-representative polling. In this second and final part, we examine two questions: Why shouldn't a neural network model be considered a natural progression of the evolution of public opinion science? Secondly, why shouldn't a quantum computer (QC) be the best-suited platform upon which to stage such an evolved model? Say hello to Mr. They were just yielding disastrous results, as though the same assumptions one could make about society in 1936 no longer applied to 2015, 2016, or 2020. Statistical adjustments were being made, and history seemed to indicate such adjustments should have been the proper ones. In Part 1 of this voyage of Scale, we referenced three major political elections - two in the US, one in the UK - where the offset between the predicted result and the final one was attributed by experts to bias or at least the wrong kind of bias. At another, it's the injection of bias into a pattern being learned, so that a network can trigger a reaction when it recognizes the pattern elsewhere. The only significant difference is that Gallup had certain categories already in mind by comparison, neural nets start blank, compartmentalizing along the way.Īt one level, weighting is a means of neutralizing bias by making adjustments in the opposite direction. When a neural net model is trained, the values it stores trigger the adjustment of coefficient weights, which influence the degrees to which that model "perceives" successive training elements. And let's be honest about this: Today's neural networks and deep learning experiments like to play like they're all about neurons and axons and deeply esoteric concepts called "perceptrons," as though they were introduced in Star Trek along with Klingons, but they're actually almost entirely about weighting. 
Gallup's great contribution to the science of understanding the behavior of large subsets of people, was weighting. With each group balanced out, he could assemble a snapshot of the entire nation based on small subsets. He would then use mathematical weights (coefficients) as a means of balancing one group's participation in the total poll sample, against that group's representation in the voting population at large. This was a prime example of what, for the first time, was called non-response bias.īy contrast, Gallup compartmentalized his poll's participants into groups, whose classification structure would later be dubbed demographics. One could hypothesize that these non-respondents might have more likely to vote Democratic than the survey group but were less likely to admit as much to a literary publisher. What's more, the participant count represented less than a quarter of the magazine's mailing list, meaning about 7.6 million members did not respond. Before long, the post mortem showed, it'd be a safe bet they were more likely to vote Republican, and proudly. Their likelihood of owning telephones, it turned out, was much greater than for the general population - for 1936, a surprisingly narrow subset. The 2.4 million survey respondents, Oxford realized, were the sorts of folks who would willingly subscribe to a literary magazine. Specifically, if you don't ask people for enough facts about themselves, you never attain the information you need to estimate whether the people around them think and act in similar ways. Gallup's poll was "scientific," and Oxford wanted to explain what that meant, and why opinion polling deserved that lofty moniker.įor the first time, the Oxford publication explained a concept called selection bias. 
The following January, Oxford University's Public Opinion Quarterly published an essay that examined how a seemingly much smaller survey of only 50,000 participants, conducted by a fellow named George Gallup, yielded a far more accurate result than did Literary Digest. The week after the election, the magazine's cover announced in bold, black letters the message, "Is Our Face Red!"Īlso: Could quantum computers fix political polls? By a margin of 57 to 43, those readers reported they favored the Republican governor of Kansas, Alf Landon, over the incumbent Democrat, Franklin D. In 1936, some 2.4 million members of the Literary Digest magazine's mailing list responded to its publisher by mail, in the broadest presidential candidates' opinion poll conducted in the United States to that time. ZDNet explores what quantum computers will and won't be able to do, and the challenges we still face. Quantum computers offer great promise for cryptography and optimization problems. Special Report: The CIO's guide to Quantum computing (free PDF)
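Gallup's balancing act can be made concrete with a short sketch. What follows is a minimal illustration of the weighting idea described above - scale each group by its population share divided by its sample share - and every demographic category and number in it is hypothetical, chosen purely for illustration:

```python
# Post-stratification weighting in the style Gallup pioneered: each group
# is weighted by (population share / sample share), so groups that are
# over- or under-represented among respondents are rebalanced before averaging.
# All categories and figures below are hypothetical, for illustration only.

population_share = {"urban": 0.55, "rural": 0.45}  # assumed shares of the electorate
sample_share     = {"urban": 0.70, "rural": 0.30}  # shares among poll respondents
support_by_group = {"urban": 0.40, "rural": 0.60}  # candidate support within each group

# Unweighted estimate: simply mirrors the biased mix of the sample.
raw = sum(sample_share[g] * support_by_group[g] for g in sample_share)

# Weighted estimate: each group's voice is scaled back to its population share.
weights  = {g: population_share[g] / sample_share[g] for g in sample_share}
weighted = sum(sample_share[g] * weights[g] * support_by_group[g]
               for g in sample_share)

print(f"unweighted estimate: {raw:.3f}")       # 0.460
print(f"weighted estimate:   {weighted:.3f}")  # 0.490
```

Because sample share times its weight equals population share exactly, the weighted figure is what the poll would have reported had each group been represented in proportion to its size.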
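The claim that neural-network training is "almost entirely about weighting" can likewise be sketched in a few lines. Below is a single artificial neuron trained by gradient descent - not any particular polling model, and the toy dataset, learning rate, and epoch count are all hypothetical - showing that learning amounts to the repeated adjustment of coefficient weights:

```python
import math

# A single "neuron": a weighted sum of inputs squashed to the range (0, 1).
def predict(weights, bias, x):
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Tiny, hypothetical training set: the label simply tracks the first feature.
data = [([0.0, 1.0], 0.0), ([1.0, 0.0], 1.0),
        ([1.0, 1.0], 1.0), ([0.0, 0.0], 0.0)]

weights, bias, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(500):                  # repeated exposure to training elements
    for x, target in data:
        err = predict(weights, bias, x) - target
        # The learning step: each coefficient weight is nudged against the
        # current error - this adjustment is the whole of "training" here.
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

# After training, the first weight dominates: the neuron has sorted the
# inputs into mattering and not mattering, by weight adjustment alone.
print([round(w, 2) for w in weights], round(bias, 2))
```

The neuron starts blank, with every weight at zero, and arrives at its own compartments - the software analog of the contrast drawn above between Gallup's pre-chosen categories and a net that categorizes as it goes.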