In a New York Times op-ed on the healthcare legislation, "The Fight Is Over, the Myths Remain," Brendan Nyhan writes:
Studies have shown that people tend to seek out information that is consistent with their views; think of liberal fans of MSNBC and conservative devotees of Fox News. Liberals and conservatives also tend to process the information that they receive with a bias toward their pre-existing opinions, accepting claims that are consistent with their point of view and rejecting those that are not. As a result, information that contradicts their prior attitudes or beliefs is often disregarded, especially if those beliefs are strongly held.
Nyhan addresses the curious tendency we humans have to regard opinion as factual information—in his example, popular myths about the content of the recently passed healthcare bill, now signed into law. In short, it all comes down to preconceived personal perspective. Here the old axiom about drawing your curve and then plotting your points is apropos. We tend to view the world through tinted lenses, all the while assuming that we are the only ones who see objectively.
I was intrigued to read about Michael Burry, a physician who left medicine and turned his economic insights into a popular financial blog. Impressed by his knowledge, Wall Street gurus began to take regular notice of his predictions. Indeed, many of the stocks he endorsed turned out to be winners in the market. Everyone, it seemed, was on the same financial page, until Burry noticed a disturbing trend. The seemingly solid institutions that went on to fail shared one thing in common: all had invested heavily in subprime mortgage securities. Eventually, Burry persuaded Wall Street firms to sell him credit default swaps, through which he bet against the popular tide and subsequently won big.
This scenario demonstrates Nyhan's premise: faced with the same set of factual data, observers generate wildly different interpretations. And acting on those interpretations carries enormous risk: you could win big, as Burry did, or you could lose just as big.
Which brings me to the role of science in contemporary society. Just how objective a discipline is science? When confronted with the same set of facts, how is it that scientists formulate theories with markedly different import?
Global warming: true or false?
Health care reform: good or bad?
Wall Street reform: desirable or undesirable?
In his new book "Wrong," science journalist David H. Freedman wonders why scientific pronouncements so often turn out to be misleading, exaggerated, or entirely off the mark. Part of the problem, he argues, is that scientists are frequently forced to rely on surrogate measurements because they cannot measure the things they need directly. They must therefore make inferences from suboptimal data.
Economists, for example, rely on economic indicators extracted from bits of data to identify trends and forecast the economic outlook. Unfortunately, most research papers published in economic journals don’t conclusively prove anything one way or the other. Freedman wonders: “If tests of the exact same idea routinely generate differing, even opposite, results, then what are we supposed to believe?”
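Freedman's question has a simple statistical illustration. The toy simulation below is only a sketch; the effect size, sample size, and significance threshold are assumptions chosen for illustration, not figures drawn from Freedman or Ioannidis. It runs many small, underpowered studies of the same modest true effect and tallies how their verdicts disagree:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # modest true difference between groups, in SD units (assumed)
N = 25              # per-group sample size of each small study (assumed)
STUDIES = 1000      # number of independent studies of the same question

def run_study():
    """One underpowered two-group study: returns the observed difference
    and whether it crossed the conventional |z| > 1.96 significance bar."""
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (2.0 / N) ** 0.5            # known sigma = 1, so this is a z-test
    return diff, abs(diff) / se > 1.96

results = [run_study() for _ in range(STUDIES)]
positive = sum(1 for d, sig in results if sig and d > 0)
negative = sum(1 for d, sig in results if sig and d < 0)
null = STUDIES - positive - negative

print(f"significant in the true direction:  {positive}")
print(f"significant in the WRONG direction: {negative}")
print(f"inconclusive:                       {null}")
```

Most runs come back inconclusive, a minority reach significance in the true direction, and typically a few reach significance in the opposite direction. Honest tests of the exact same idea really can return differing, even opposite, answers, purely through sampling noise.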
Freedman highlights the work of Dr. John Ioannidis, an M.D. with an undergraduate degree in mathematics, originally published in JAMA (John P.A. Ioannidis, “Contradicted and Initially Stronger Effects in Highly Cited Clinical Research,” Journal of the American Medical Association Vol. 294, No. 2 (2005): 218-28).
According to Ioannidis, “most medical treatment simply isn’t backed up by good, quantitative evidence.”
The whole point of carrying out a study is to rigorously examine a question using tools and techniques that would yield solid data, allowing a careful and conclusive analysis that would replace the conjecture, assumptions, and sloppy assessments that had preceded it. The data are supposed to be the path to truth. And yet these studies, and most types of studies Ioannidis looked at, were far more often than not driving to wrong answers.
Ioannidis felt he was confronting a mystery that spoke to the very foundation of medical wisdom. How can the research community claim to know what it’s doing, and to be making significant progress, if it can’t bring out studies in its top journals that correctly prove anything, or lead to better patient care?
One of the largest sources of wrongness in scientific studies is publication bias. Prestigious medical journals eagerly publish studies that report novel or unanticipated results. Witness Andrew Wakefield's bogus, and since retracted, study in The Lancet that purported to link the MMR vaccine to autism. The problem is compounded by the mainstream media, which is only too quick to disseminate such conclusions to the public at large. Misperceptions of this kind tend to persist for years.
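Publication bias itself can be sketched the same way. In this toy model (again using assumed numbers, not data from any study cited here), a "journal" publishes only the studies that cross the significance bar, and the published record ends up overstating the true effect:

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2   # the real effect, in SD units (assumed)
N = 25              # per-group sample size per study (assumed)
STUDIES = 2000      # studies submitted to the "journal"

def observed_effect():
    """Observed group difference from one small two-group study."""
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    return statistics.mean(treated) - statistics.mean(control)

se = (2.0 / N) ** 0.5
all_effects = [observed_effect() for _ in range(STUDIES)]

# The journal's filter: only "significant" results get published.
published = [d for d in all_effects if abs(d) / se > 1.96]

print(f"true effect:            {TRUE_EFFECT}")
print(f"mean over all studies:  {statistics.mean(all_effects):.3f}")
print(f"mean of published only: {statistics.mean(published):.3f}")
```

Because only the extreme outcomes survive the filter, the published average sits well above the true effect. That selection pressure is one mechanism behind the "contradicted and initially stronger effects" Ioannidis documented in highly cited research.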
In his classic treatise The Structure of Scientific Revolutions, Thomas Kuhn argued that "professionalization" leads to "an immense restriction of the scientist's vision and to a considerable resistance to paradigm change." Scientists, he suggests, become captives of a paradigm "like the typical character of Orwell's 1984, the victim of a history rewritten by the powers that be."
Perhaps scientists themselves possess their own set of preconceived notions, which in turn dictate how they interpret the data they measure. I suppose that it all depends on which side of the emotional aisle you happen to take your seat.
As Mr. Nyhan writes: “People seem to argue so vehemently against the corrective information that they end up strengthening the misperception in their own minds.”