On the double-edged humility of science

Science has a built-in humility: whereof we know nothing, we are wary of speaking. Science describes what it does know, to the best of our ability; as to what we do not know, it is sometimes able to speak clearly about the bounds of our ignorance; beyond that, it takes care to admit its ignorance.

Given the great extent of what science is able to describe well, its practitioners are generally wont to credit conjectures – about that which we do not know – more or less according to how well they harmonize with what existing science has to say about related matters and, inversely, how drastically one must revise existing theory to accommodate them. One may fairly argue that such a conservative bias – favouring that which is less disruptive – is supported by the scientific method itself, if only because there has developed, over the last five centuries, a steadily stronger correlation (despite occasional revolutions) between compatibility with prior theory and correctness of proposed explanations for the previously unexplained. However, ultimately, this is a bias of the practitioners of science, not of science itself; when presented (as by Darwin and Einstein) with a compelling case for revolutionary change, science casts aside the old in favour of the new, even if many of its practitioners are slow to come around. Failing all else, an older generation dies off and a newer grows up among which the new ideas take root; and, in their turn, the generations who first adopt the idea go on to teach it to those who follow, so that it becomes intuitive to them, even if their forebears found it paradoxical.

In response to this humility of science – the willingness to admit its ignorance and to over-throw its orthodox notions in favour of novel explanations – some fallacies have become common, of two kinds. On the one hand, those who disagree with science treat it, in diverse ways, as a licence to assert that their hypotheses form a valid part of the scientific discourse; on the other, some scientists grow impatient with arguing against ill-informed opinion and reject even the notion that those outside their discipline may be competent to examine their work and form their own opinion of its worth.

In stray moments over the last few years (2009/Spring), I have been reading The Fairy-Faith in Celtic Countries, by W. Y. Evans-Wentz [1911], which combines excellent accounts of then-surviving tales of fairy folk with an evident desire to believe there is at least some truth in the folk-lore. The author, as ever with those who want to believe, flirts with diverse fallacies in the attempt to justify such belief, despite clearly taking pains to be scientific. The anthropology is worth reading, so I take this excuse to link to it, although I find the passage that provoked me to begin writing this page does not, in all fairness, perpetrate the error I at first thought I saw in it.

Contra-orthodox fallacies

Those who disagree with scientific orthodoxies have fallen into various fallacies that depend on and abuse science's humility (and the occasional arrogance of scientists). Where science admits its ignorance, they leap in with their own pet hypotheses, expecting to be taken seriously despite a total lack of method (let alone scientific method) to their madness. Where science admits that further investigation may impel it to revise its orthodoxy, they take this as grounds for their wild conjectures to be accorded equal time with orthodoxy.

Filling the gaps

Whereof science knows nothing, it speaks not (although it may provide some hints, and its practitioners may espouse strong opinions). This does not mean that every conjecture, that purports to account for such unknowns, is equal in the eyes of science. If a purported explanation raises more questions than it answers, folk shall be justifiably cautious of accepting it as any kind of an improvement over simply acknowledging the prior unknowns as topics for further study. (For contrast, when quarks were added to the bestiary of subatomic particles, they provided a frame-work – for understanding the observed particles – which greatly reduced the complexity of the bestiary and explained so much that the undeniably uncomfortable features of the new theory – quarks are unobserved and the theory entertains no way of directly observing them – appear an acceptable price to pay. Whether the same may be said of the Higgs boson remains an open topic.)

Furthermore, science has rather particular criteria of explanative adequacy: although the conjectured theory may appear to you to explain some phenomenon, science asks what else that theory, and the modes of reasoning you use to reach your explanation, could as readily be found to explain. If your theory can as readily explain what is observed as it could explain wholly contrary observations (had we made them, instead), science shall reject it for lack of predictive power: had we not already known the facts it purports to explain, the theory would have failed to tell us them. To have predictive power, a theory must be able to tell us something, in advance of experiments, that could in principle be contradicted by those experiments: if no experiment could possibly have an outcome which would lead you to reject your theory, then no outcome of any experiment actually gives any information in support of your theory.
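That last point can be given a quantitative gloss – what follows is a Bayesian sketch of my own framing, not part of the original argument. Bayes' theorem relates the odds on a theory T after an observation E to the odds before it:

    \[ \frac{P(T \mid E)}{P(\neg T \mid E)} = \frac{P(E \mid T)}{P(E \mid \neg T)} \cdot \frac{P(T)}{P(\neg T)} \]

If T is so accommodating that it assigns every conceivable outcome the same probability as its rivals do, the likelihood ratio P(E | T) / P(E | ¬T) is one for every E, so no observation can shift the odds in T's favour: a theory that nothing could contradict is, in exactly this sense, a theory that nothing can support.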

Psychic research generally starts from the observation that there are many weird things folk have reported, that appear to indicate some supernatural or extra-terrestrial agency. Closer scrutiny eliminates most of that evidence as fraud, delusion or consequences of neurological anomalies. It tends to leave a residue, small in comparison to the initial body of reports, for which science is unable to state with certainty that it has an explanation. This does not mean that the residue could not be explained by any possible application of available science, only that the available analysis has yet to turn up such an explanation; and the fact that the vast majority of the initial reports do fall to such analysis is strong grounds to suspect the rest would follow, given better data on their circumstances and a closer analysis of the details. However, science does not claim certainty about such a conjecture, for it lacks the data to do so. Into this apparent void leap those with diverse and fascinating conjectures about aliens and spirits, eager to present science's proper humility as evidence for their pet theory. It is nothing of the kind.

Life's origins

Likewise, when science ponders the origin of life, we are compelled to acknowledge the near-total lack of data about the conditions in which the first life emerged and the details of its emergence and early development. Indeed, science can even go so far as to say that the available evidence we do have quite clearly indicates that early life was necessarily such insubstantial stuff as single-celled organisms, which are poorly preserved even in the recent fossil record, so that we should expect the surviving fossil record to contain so little trace of the details of life's origin that our failure to find any such trace merely matches the little we know. If we found fossils of bunny rabbits from three milliards of years ago, it would contradict our theory; if we found fossils of single-celled organisms from that epoch, those organisms would support what science has to say although the survival of the fossils would be something of a surprise – raising, indeed, the question of how they managed to survive well enough to be recognizable. That science thus remains silent upon the details of the early origin of life, although scientists do speculate about the possible mechanisms, is thus simply science's proper humility, declining to claim knowledge where it has no data on which to base knowledge.

All the same, science has a clear understanding of the workings of life in the brief part of Earth's history from which we have information; and, from that understanding, science is well equipped to extrapolate backwards into the early history of life and say what it can. Thin though this is on detail, it leaves ample room for complex life to have arisen from simpler, by the systematic processes of inheritance and natural selection, leavened with the sporadic intervention of mutation. Thus, in contemplating life's beginnings, science is happy to consider anything at all that is capable of even a rough approximation to self-reproduction; from any such beginning, the established mechanisms of life can bring forth the wonders we see in detail. Any reasonable conjecture, compatible with what we know to have existed at an early enough point in the Earth's development, is thus plausible: but science, for want of hard data, is unable to answer the question of which is the actual mechanism by which life arose (from which a quantum mechanic might build a meta-theory in which all possible mechanisms are superposed in parallel).

What we know of the early Earth is that it was a spinning ball of initially hot rock cloaked in an atmosphere (albeit of very different composition to what we observe today) which included a great deal of water vapour. As the rock cooled, there came a point where that water vapour condensed; as the surface of the rock was not perfectly even, some of it was covered in water while some remained exposed to the still-vaporous parts of the atmosphere. As the sun illuminated only one side of the spinning ball, it was heated unevenly and variably; as the water covered only parts of it, day's heating dried out the uncovered parts and made them hot but let them get colder at night; while day evaporated water steadily off the covered parts, whose temperature was thus more stable than the land. Thus parts of the atmosphere over land and sea experienced different heating – leading to pressure variations, hence winds, cyclones and anti-cyclones. Water evaporating into the atmosphere over seas by day thus moved about, some of it coming to be over land where, due both to night's cooling and to the effects of being carried over the contours of the land-mass, it formed clouds and rain. Thus the rock was weathered and erosion produced dust and mud, with which the ball of rock partially covered itself. The chemical composition of that mud varied, for diverse reasons: the rocks from which it came were themselves of diverse chemical composition; the more water-soluble components were more apt to be washed away, although even insoluble grains left thereby would move with the water; the less soluble components were more apt to settle out where the water's flow slowed; where floods carried water out over relatively level ground, pools of water left there would dry out, leaving all their solutes behind; and the size of subsequent floods would affect which such pools would be washed away and which refilled with more mess, to name but a few of the processes leading to inhomogeneity.

Such are the conditions science tells us must have been present on the early Earth; any proposed explanation for the origin of life may freely begin with this. The fertile imaginations of scientists have devised several mechanisms by which brute chemistry may lead to nearly-enough self-reproducing systems capable of serving as the start-points of life. Whether any of the mechanisms thus far documented is the actual process life followed is beyond our means to know: but the very fact that we are able to imagine plausible processes, starting from what we have prior reasons to expect were present, strongly encourages the expectation that brute chemistry, from the raw conditions of the early Earth, could indeed (given time – and it had plenty) bring forth some primitive seed from which life could begin. Any proposed explanation for the origin of life – if it calls for us to believe in the action of anything more than brute chemistry on a muddy ball of rock spinning in the heat and light of a young star – thus has strong presumptions against it, unless it can show compelling other grounds to believe in the actor and action in question. To offer the proposal of such an explanation as if it were the only possible explanation for the origin of life is either intellectually dishonest or evidence of a failure to review the prior literature: to attempt to use (as some creationists do) this explanation of the origin of life as proof of the existence of the conjectured actor, God, reveals nothing but a determined effort to find some pretext for dragging into science the need for a hypothesis that Laplace famously found unnecessary in Napoleon's time.

Science might be wrong, ergo I'm right

Also known as They laughed at Galileo, too – but I'll show them ! Science is not dogmatic (although some scientists fall into that error): it admits it might be wrong. None the less, a competing account of matters must show itself comprehensively right, on all the things orthodoxy gets right, before science has any reason to listen to its proponents. Even then, if they cannot show some virtue to their model, either in conceptual simplicity or in scope of applicability, science has no particular cause to set aside orthodoxy in their favour.

When Einstein introduced general relativity, with its radically different account of gravitation, he had to show that its predictions agree with Newton's theory, to within the precision of all prior measurements, on all the observations that Newton's theory accounts for correctly. Doing that established general relativity as a competitor to Newtonian gravitation. Going beyond that, by bringing gravitation into accord with the relativistic framework that Maxwell's electrodynamics demands and by accounting for the precession of the orbit of Mercury – measured by Victorian astronomers to great enough precision that they were aware that it deviated from Newtonian predictions – made it a more effective theory. Predicting the gravitational deflection of star-light by the Sun enabled astronomers to perform a measurement (by observing stars near, in the sky, to Sun and Moon during an eclipse) that could have contradicted Einstein, so that the observed match between observation and prediction served as experimental confirmation of general relativity. Only a theory which passes comparably stringent tests has any prospect of over-throwing orthodoxy.
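For the record, the two classic numbers are worth setting down (standard results, not spelled out above). General relativity predicts a perihelion advance, per orbit, of

    \[ \Delta\varphi = \frac{6 \pi G M}{c^{2} a (1 - e^{2})} \]

for an orbit of semi-major axis a and eccentricity e about a mass M – which, for Mercury, comes to roughly 43 seconds of arc per century beyond the Newtonian perturbations, just the anomaly Le Verrier had measured – and a deflection of star-light grazing the Sun's limb of

    \[ \theta = \frac{4 G M_{\odot}}{c^{2} R_{\odot}} \approx 1.75'' , \]

twice what a naïve Newtonian corpuscular calculation yields, so the 1919 eclipse observations genuinely could have come out against Einstein.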

Galileo, indeed, worked at a time when quite different criteria of explanative adequacy held sway: he was among those who began the process of moving away from the confident conclusions of armchair speculators – who would believe what made most sense to them without thought to actually conducting experiments in the material world – to the modern scientific method. When he dropped two cannon-balls off a high tower, his peers observed that the larger fell faster than the smaller, as they expected, without care to the fact that their actual orthodoxy, handed down from Aristotle, claimed the larger should have fallen faster in proportion to its size, which it plainly did not – it did not reach the ground in half the time the smaller took, but won by only a narrow margin, small compared to the total time taken to fall. Galileo's theory, that things fall at the same rate, was thus wrong, but far more accurate than the orthodoxy it ultimately displaced (with corrections, to address air resistance, accounting for the small difference observed). The theory he set out to over-turn was so utterly wrong-headed that being more right than it was easy: what was harder was the task of persuading his peers to care about his novel methods and criteria for preferring one theory over another.
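To put rough numbers on that margin (my arithmetic, for illustration – the tower's height is an assumption): ignoring air resistance, a body dropped from height h takes a time

    \[ t = \sqrt{2 h / g} \]

to fall, independent of its mass – about 3.4 seconds from a 55 metre tower, for both balls alike. Aristotle's account, with speed proportional to weight, would have a ball twice as heavy land in half that time; what observers actually saw was both balls landing within a small fraction of a second of one another, the heavier marginally ahead.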

Galileo's theory was better than the one it set out to replace, considered from our modern scientific perspective, but the real import of his work – and the real reason why his contemporaries either mocked or feared it – was that it called for a radical change in methodology. Classical philosophers spurned experiment – because its results are marred by the imperfections of the material world and our senses – trusting instead to their (essentially aesthetic) judgment as to which conjecture best accorded with their conceptions of how the world should work. Galileo's radical proposal (although he may not have phrased it quite so) was that we should perform experiments, determine what effects the admitted imperfections do introduce, design experiments which limit the imperfections' scope for messing up results and describe the results in terms which ascribe only a de minimis rôle to the imperfections. This natural philosophy produces answers which actually come close to providing a correct description of the behaviour of real things; and, instead of merely dismissing errors in that description as resulting from imperfections of the world, seeks to refine our understanding of the world by measuring the effects of those imperfections and adapting the model to take them into account, thereby removing them from the category of imperfections and placing them among the complications to which due attention is paid.

Anyone who, today, comes along with a theory, which contradicts orthodoxy as radically as Galileo's work contradicted Aristotle's theories, would do well to consider what benefit anyone else has cause to see in it. If the new theory results from a radically new approach to the pursuit of knowledge of the world, what is that approach ? More importantly, why should anyone prefer that approach ? The present methodology of science, that Galileo pioneered, is well-adapted to the task of telling us how things in the material world behave: if you have really made as radical a conceptual innovation as Galileo, I am bound to suspect your approach solves some other problem; however much that may seem important to you, pause to consider whether it shall seem an improvement to those of us busy mastering the material world. At the very least, to have any relevance to science, you'll need to make the case for your new methodology in terms that science can understand. Alternatively, accept that what you are doing is not science, 'though it may have related value, just as natural philosophy came, in time, to simply ignore philosophy as an unhelpful distraction and, by giving people more useful answers than philosophy could, supplanted philosophy as a means of arriving at truth.

The fallacy of scientific authority

In light of the prevalence of ill-informed people peddling ill-formed delusions as theories, though these often do not even rise to the status of meaningful conjectures, I am not at all surprised that some scientists grow impatient enough to forget science's humility and make dogma of their orthodoxies, sometimes even rejecting the freedom to question orthodoxy. Understandable though impatience is, such dogmatic positions run against the spirit of proper scientific inquiry.

An argument from authority points to who says something and offers who they are as grounds for believing what they say. Every religion uses this when it points to its holy revelation and asserts that it is to be believed because of its (alleged) source. Journalists use it in reports about science, when they attribute some assertion to the nebulous group known as scientists. In practice, arguments from authority form the mainstay of any individual's knowledge of the world – we identify the sources of information that we trust and take what they say as authoritative. As a heuristic for knowing what's going on in the world, this is the only practical way to proceed if you are neither omnipresent nor omniscient. However, it does not qualify as a valid mechanism for establishing the truth of a scientific conjecture. That Einstein said He does not throw dice does not prove that the universe is deterministic; it just tells us that one very prominent physicist was deeply uncomfortable with the philosophical ramifications of quantum mechanics.

Indeed, science may be characterized by a systematic insistence on obtaining truth without resort to arguments from authority: that is, science insists that it is irrelevant who told you something – what matters is that you are able to test what you have been told, for yourself, and that it passes the tests you have applied to it. In practice, of course, we cannot all repeat every one of the experiments that have led science to its present set of theories, let alone all of the experiments by which those theories have subsequently been tested: yet science is adamant that what matters is not who did the experiments, or was persuaded by their results, but what experiments have been done and why they establish the theory.

Yet in practice we all learn about the world initially by being told and taking on trust, so we are all familiar with accepting arguments from authority; it should thus be no surprise when the public takes (and journalists quote) the word of orthodoxy as authoritative; nor when proponents of an orthodoxy become impatient with those who question it – particularly in light of the plainly political agendas of those challenging evolution and climate change orthodoxies, as when the tobacco industry denied the health risks associated with its products. Unsurprising as they are, it remains that such habits are misguided. Indeed, scientists of all disciplines have a duty to scrutinize the work of their peers in other disciplines, trusting only the science, not the persons who have done it. The reasons are essentially those Mill espoused in On Liberty – albeit aiming (I suspect) at religious controversy rather than scientific – so I shall use his reasons (albeit paraphrased somewhat) as a framework.

Orthodoxy may be wrong

In stark contrast to most religion, science has the humility to acknowledge that it may be wrong and the willingness to change in the light of new evidence. It would not, otherwise, have become the source of so much useful information as it has.

Of course, any challenge to orthodoxy is obliged to meet a high standard before it may justly expect to be taken seriously. The easiest challenge to scientific orthodoxy is to demonstrate an experiment whose results contradict the predictions of that orthodoxy: the experiment must be described in such detail that others can independently reproduce its results; while the conflict between its result and the predictions of orthodoxy may turn out only to reveal that orthodox theory has been mis-applied (to produce wrong predictions) rather than that the underlying theory is wrong. All the same, what's required for an experimental challenge to orthodoxy is fairly straight-forward.

To challenge the theory with a proposal for an alternative theory is harder: the new theory must correctly account for the vast body of experimental data previously accounted for by the old theory. In accounting for that data, it must be clear how the new theory would predict that data, and nothing else, had it been known before the experiments were conducted; and it must be clear how others – besides those proposing the new theory – may use it to make definite predictions about future experiments, so that they may test these predictions against the results of such experiments.

In principle, that is sufficient to make it a theory worthy of consideration; in practice, however, adherents of orthodoxy are unlikely to adopt it unless the new theory offers some advantage over what they have already taken the trouble to learn. Converts may be won over by simplicity in the new theory, whether that be simplicity in the performance of the calculations needed to make predictions, simplicity in the conceptual infrastructure needed to comprehend the theory or such simplifications as make it easier to explain the theory lucidly, which makes it far easier to teach. If it cannot offer greater simplicity – and especially if it is more complex – a new theory more or less has to account for more observed data than the old, although it may achieve this by enabling a unification of two (or more) bodies of theory which one was previously obliged to consider separately (which, however, I would count as a kind of simplification).

All the same, if science demands high standards of any challenge, it does so as a result of the high standards it demands of itself: just as the challenger must show how others may reproduce an experiment or use the theory to make predictions, so must science; just as a rival's predictions must match what experiment reveals, so must orthodoxy's; just as a rival's predictions must be definite for their experimental confirmation to count for anything, so must orthodoxy's. Science has its own rules of evidence and rules for scrutiny of its own methods of arriving at conclusions: and these, in turn, imply a third kind of challenge to an orthodoxy.

A methodological challenge to an orthodoxy argues that the process by which it arrived at its conclusions is flawed. Such a challenge does not claim to have found an actual experiment whose outcome contradicts the orthodoxy; nor does it offer a replacement theory that performs better than the existing one. Instead, it points to the methods by which orthodoxy purported to arrive at its conclusion and argues that those methods have no justification, do not properly imply the conclusions reached, or are capable of being applied in ways that lead to contrary conclusions. The last means that the given methods of reasoning cannot be trusted to arrive at right answers. The previous arises where the orthodoxy's authors exercised more choice than they realized in the course of arriving at their conclusions; their data and reasons may have suggested the answer, but did not oblige them to accept it. The first points to methods of arriving at answers that, though the methods may be valid, have not actually been shown to be valid. A methodological challenge calls into doubt one's reasons for believing a conclusion, more than it challenges the actual conclusion itself.

Even errors may contain the seeds of improvement

Even when a challenger is wrong, their attempt to account for results in a novel way, or to find fault with the orthodox account, may lead those well-versed in the orthodoxy to understand it in a new way, that may help them to improve the orthodoxy. The new insight may come from seeing the familiar through the distorting lens of the failed challenge or it may arise from the clarity of exposition of the orthodoxy needed to establish the flaws in the challenger. The improvement may extend the orthodoxy to address situations to which it had not previously been applied; or it may enable a simplification of the orthodoxy (for any of the senses of simplicity discussed above). Suppressing wrong ideas robs those who are working with right ideas of stimuli that may help them to improve their right ideas.

Understanding is more precious than knowledge

Those who re-invent the wheel – even poorly – are thereby better equipped, than those who have not, to fully comprehend why our diverse ways of making wheels are right for their purposes and to recognize when a novel approach may be useful. Those who have been taught all the right answers may seem to know everything of value: yet those who have struggled with wrong answers and learned for themselves to know the right from the wrong end up with a clearer understanding of the right answers than those who merely memorized them.

Dogma becomes meaningless

When contradictions to orthodoxy are suppressed, when only the received wisdom may be published, the orthodoxy thus propagated is wont to degenerate into a meaningless form of words. If the student does not challenge what is taught, and receive a well-reasoned account of why it is true, the student will not comprehend what is taught, no matter how well it be memorized. If the teacher is not familiar enough with the reasons for the orthodoxy to make a clear case – to persuade, based only on the merits of the case, not on the authority that comes with being the teacher – in support of it, then the teaching will not communicate the ideas clearly and the students shall not learn what the orthodoxy means. Those who learn the truth without a thorough exploration of the alternatives rejected in its favour may be able to recite the liturgy of their faith, but their recital becomes nothing but the rote repetition of meaningless words.

Argumentation is the forge on which we try our answers and, in testing them, discover what they really mean.


Written by Eddy.