In 1954, American writer Darrell Huff penned How to Lie with Statistics—a seminal 20th century book, still in print today, which argues that statistical data is largely a game of swindles and cheats used to misrepresent reality. Those sympathetic to Huff’s diagnosis could do no better than point to the handling of the covid-19 pandemic by world governments. The epidemiological blizzard of numbers, figures and graphs, often lacking in deeper context, has tended to obscure a more accurate picture of goings-on.
As large as it is, this is just one object lesson in the problem of statistics. With contradictory data on every subject under the sun now being plugged frequently on social media by ever more polarised interests, how do we know what to believe? And how can we better distinguish valid statistics from those which may be flawed?
British economist and author Tim Harford wades into the seemingly piranha-infested waters in an attempt to answer these questions. His book, How to Make the World Add Up: Ten Rules for Thinking Differently About Numbers is a timely, entertaining and erudite look into the world of research data, figures and stats, exploring the hidden factors that can skew them. Harford wrote the book not only to create a user manual for navigating the information age, but also to save and redeem good statistics from Huffian cynicism, which he regards as excessive and myopic. The book aims to convey the worth of pure statistical truth by explaining how it can be corrupted – illustrating instances where flawed logic, emotions and cognitive errors affect both the creation and interpretation of data.
“What we count and what we fail to count,” he writes, “is often the result of unexamined choice, of subtle biases and hidden assumptions that we haven’t realised are leading us astray… we can and should remember to ask who or what might be missing from the data we’re being told about.”
Harford divides his chapters into 10 rules of thumb, or habits of mind, on how to approach information. “Rule One: Search Your Feelings” argues that how we feel about a topic will affect how we go about creating and interpreting statistics about it. We find ways, he says, to dismiss evidence we don’t like. When data seem to support our preconceptions, we are less likely to look more closely for holes. Harford likens this to seeing fouls in a football game committed by the other team, but not one’s own.
In “Rule Two: Ponder Your Personal Experience,” the author explains that statistical data can fly in the face of what we see with our own eyes. When that occurs, we need to scrutinise both together. Sometimes that will reveal a flaw in the data, at other times a flaw in our assumptions. When Transport for London released its ridership statistics one year, reporting average rider occupancy of 12 people per bus and 130 people per tube train, Harford was convinced the data was wrong and that the real numbers had to be higher. Those stats completely contradicted his lifetime of commuting experience on the city’s packed buses and trains. But, as he considered it more, he realised that his experience was limited to rush hour travel within central London. There are many other neighbourhoods whose buses and trains are far less crowded, especially outside peak hours, and whose ridership moderated those numbers.
Throughout his book, Harford repeatedly tells us to ask ourselves: what is actually being measured or counted that we can’t see? What contextual information are we not getting? When the United States in 2017 reported 39,717 “gun deaths,” the knee-jerk conclusion that people tended to draw, including the media that reported it, was that those were all homicides. Yet that shocking figure was often divorced from the (albeit equally shocking) fact that around 60 per cent of gun deaths in the US that year were suicides. Overlooking seemingly small but crucial details like that happens all the time, Harford says. “Avoid premature enumeration,” he adjures in his rule three.
Opinion polls, which at times can be notoriously inaccurate and thus a source of misinformation, also fall into the author’s huge catch net of examples. In the UK general election of 2015 and the American presidential election of 2016, major polls indicated that David Cameron and Donald Trump, respectively, would lose. Yet the polls were incorrect, and by very wide margins at that. How? Because of sample bias: the polls did not represent a wide enough segment of the general population to create an accurate picture. Most people no longer respond to pollsters’ requests to take part, as they once did. In the cases above, conservative voters were harder to reach and get answers from than liberals. Hence, Harford tells us in rule six, “Ask Who is Missing.”
A similar sample bias underlies the practice, often mentioned in this journal, of psychology and other experiments relying on western college students as participants, often with greater numbers of men than women. Clinical trials by drug companies also don’t do enough to disaggregate data to allow an exploration of whether there might be different results in men and in women. “It’d be nice to fondly imagine that high-quality statistics simply appear in a spreadsheet somewhere, divine providence from the numerical heavens,” Harford writes. “Yet any dataset begins with somebody deciding to collect numbers. What numbers are and aren’t collected, what is and isn’t measured, and who is included or excluded, are the result of all-too-human assumptions, preconceptions and oversights.”
How to Make the World Add Up is about more than just understanding stats and information; it’s about how better to see what we don’t realise is missing. As stats and data increasingly come at us from numerous sources and directions, all of us need to become more adept at not taking information at face value, at questioning assumptions about that information, and at trying to see what was not considered, or what may have skewed that data.
Harford argues that ‘good statistics,’ by comparison, are only as valuable as one’s willingness to accept them. That willingness is the corollary of navigating the pitfalls of flawed data. For that reason, the author ends his book by advocating a mental posture of curiosity and an open mind when weighing competing depictions of the world based on numbers. Keeping an open mind means being willing and able to change your mind if the facts change, as the British economist John Maynard Keynes was known to do. Keynes’s nimbleness of mind and ability to admit mistakes allowed him to adjust his views of the world and related activities such that he attained outstanding success in life.
Similarly, “being curious,” which is Harford’s “golden rule” at the end of his book, is the ability to engage with a statistical claim that challenges an existing worldview without an emotional or fearful response. That skill also makes us less prone to polarisation because we’re not invested in any particular outcome. Making the world add up, Harford asserts, is as much about allowing ourselves to be open to good, but inconvenient, statistics as it is about creating and reading statistics more accurately in the first place.
This review first appeared in the Human Givens Journal, Vol. 28, No. 2, 2021.