contracelsum

"What agreement has Jerusalem with Athens?"

Justification of Knowledge and Truth (Part II)

A Bold Plan for Perpetual Ignorance

How do we know that what we know is actually true?  Wittgenstein and the post-modernists argued that we will never know absolute truth; all we can have are perspectives (including this one).  This has not sat well with traditional philosophers, but the perspectival view has certainly had a field day in academia.  We now have endless lists of perspectives as subjects for study: queer literature, minority art, feminist politics, proletarian ethics, trans-gender discourse, and so on, ad nauseam.  The end result is lots and lots of insignificant jobs for post-modern academics.  Lots of heat, but not much light.

We have argued that rationalism–one of the traditional justifications of knowledge–elides into tautologies and irrationalism.  The second major attempt to justify knowledge as true truth is empiricism.  It is probably the most popular approach today–largely due to the widespread belief that science (which employs the “empirical” method) delivers the truth.  As Frame explains:

The common view is that during the ancient and medieval periods, the growth of human knowledge was slow because the methods of acquiring it were based on tradition and speculation.  Great thinkers like Bacon and Newton, however, convinced the world of a better way: forget traditions and speculations.  Verify your hypotheses by going to the facts.  Experiment. Observe. Measure.  Gradually, observed facts will accumulate into a dependable body of knowledge. . . . That kind of investigation is successful, the argument goes, because it provides publicly observable checking procedures.  If you do not agree with a theory, you can go and check it out.  The facts are there for all to see; just compare the theory with the facts. [John Frame, The Doctrine of the Knowledge of God (Nutley: Presbyterian and Reformed Publishing Company, 1987), p.115.]

This approach is widespread and appeals to the modern tendency that equates “science” with near certainty.  But, in the end, empiricism fails as a justification of truth and knowledge.

1. In reality we know many things which we cannot check out ourselves.  For example, we believe most certainly that the City of London exists, although we have never verified it personally.  We believe that quarks exist, that Julius Caesar crossed the Rubicon, and that man has landed on the moon.  We know these things to be true not because we have verified them by checking the facts, but by accepting the testimony of others whom we trust.

2. Our senses often deceive us.

3. There is no purely empirical inquiry.  All data has to be first recorded and reported, then analyzed and evaluated.  The analysis and evaluation of data draws upon our pre-commitments.

What we “see”, “hear”, “smell”, “taste”, and “feel”, is influenced by our expectations.  Those expectations do not come just from sense-experience but from theories, cultural experience, group loyalties, prejudice, religious commitments and so forth.  Thus there is no “purely empirical” inquiry.  We never encounter “brute”, that is, uninterpreted facts.  We only encounter facts that have been interpreted in terms of our existing commitments.  [Ibid., p. 117.]

Science is not purely empirical; therefore, it cannot bear the weight which empiricism wants to hoist upon its shoulders.

4.  Empiricism cannot verify propositions which we all accept as true truth.  For example, “all men are mortal” cannot be verified by empiricism, although everyone accepts the proposition as true.  To prove or disprove such propositions empirically, one would need to examine the entire universe.  Mathematical propositions cannot be verified empirically, although most accept such propositions as universally true.  In fact, empiricism relies upon such mathematical propositions in testing and examining the facts.  Empiricism cannot prove, nor therefore accept as true, the proposition that the sun will rise tomorrow.  Anything to do with the future must remain in perpetual doubt for the empiricist.  Moreover, empiricism can say nothing about ethical values–for example, that murder is wrong.  True, the empiricist will accept that murder occurs, but he has no basis on which to assert that it is right or wrong.  The proposition, “man ought not to kill another human being”, is beyond the ability of empiricism to validate or establish as true.  In the final analysis, empiricism cannot verify or justify empiricism itself.

Frame concludes:

Like the problems of rationalism, the problems of empiricism are essentially spiritual.  Like rationalists, empiricists have tried to find certainty apart from God’s revelation, and that false certainty has shown itself to be bankrupt.  Even if the laws of logic are known to us (and it is unclear how they could be on an empirical basis), we could deduce nothing from statements about sensation except, at most, other statements about sensation.  Thus, once again, rationalism becomes irrationalism: a bold plan for autonomously building the edifice of knowledge ends up in total ignorance.  [Ibid., p. 119]

Letter From America (About "Same-old, Same-old . . . ")

Faux-Science Fascistas

Over the years, we have published a plethora of articles on the subject of Global Warming being a faux-science.  But claims that it is the only true and genuine science of climate continue to be made.  So we will continue to publish pieces that tear that canard apart.

To that end, here is an excellent piece from Patterico:

Is Study of So-Called Climate Change Even “Science”?

Burn the heretic:

In early May, Lennart Bengtsson, a Swedish climate scientist and meteorologist, joined the advisory council of the Global Warming Policy Foundation, a group that questions the reliability of climate change and the costs of policies taken to address it. While Bengtsson maintains he’d always been a skeptic as any scientist ought to be, the foundation and climate-change skeptics proudly announced it as a defection from the scientific consensus.

Just a week later, he says he’s been forced to resign from the group.
The abuse he’s received from the climate-science community has made it impossible to carry on his academic work and made him fear for his own safety. A once-peaceful community, he says in his resignation letter, now reminds him of McCarthyism.

“I had not expect[ed] such an enormous world-wide pressure put at me from a community that I have been close to all my active life,” he wrote in his resignation. “Colleagues are withdrawing their support, other colleagues are withdrawing from joint authorship.”

When scientists cannot abide people questioning their hypotheses, something besides “science” is going on.

But there is an even deeper and more fundamental problem here. Is study of climate change even “science” to begin with?  In a court of law, jurors are told to take the opinions of experts into account — but not to blindly accept them. And indeed, it would be difficult to blindly accept all expert opinion in an adversarial setting, where you often find “experts” on opposing sides, saying completely different things that cannot be reconciled.  But in the field of “science” we are told to trust the “experts.” To do anything else is to reject “science” and that is ignorant and wrong.

That may be, in fields that actually deserve the name “science.” I’m just not sure that study of so-called “climate change” merits that label.  “Science” is based on the scientific method: scientists propose a hypothesis, and then test it through experimentation. When a result can be reliably replicated, the hypothesis gains credibility. When it cannot, it is discarded.

Under this definition, I’m not sure that study of so-called climate quite deserves to be called “science.” The public has been shown no track record of hypotheses that are reliably confirmed by experimentation. Instead, we are told that over 95% of climate scientists agree on . . . something. (Then we find out that the number is phony, because it proposes a test for determining who supports the “humans cause global warming” theory that includes most skeptics among the supposed supporters.)

You want to know what else more than 95% of “climate change” scientists agreed on? That their models predicted high temperatures in 2013 — higher, in fact, than the temperatures turned out to be. Climate change models are routinely wrong, and scientists are being forced to admit it. It’s the biggest issue facing those who study the climate.

Scientists are dealing with a system that is so complex, it’s difficult to make pronouncements. In this respect, it reminds me of economics. There is a priesthood of Keynesians who assure you that, for example, the Obama stimulus will “work” as defined by some set of benchmarks — and then, when those benchmarks are not met, we are told things would have been worse. And we are supposed to believe that because the guy telling us is Paul Krugman, and he has a Nobel Prize and you don’t, so how dare you question him?
That being said, I don’t agree with the idea that economists — or the climate scientists — are the priesthood, and we need only have faith in their pronouncements, no matter how often they’re shown to be wrong.

I don’t think that makes me “anti-science.” I think it makes me pro-science.

Not So Definitive After All

Problems with scientific research

How science goes wrong

Scientific research has changed the world. Now it needs to change itself

A SIMPLE idea underpins science: “trust, but verify”. Results should always be subject to challenge from experiment. That simple but powerful idea has generated a vast body of knowledge. Since its birth in the 17th century, modern science has changed the world beyond recognition, and overwhelmingly for the better.
But success can breed complacency. Modern scientists are doing too much trusting and not enough verifying—to the detriment of the whole of science, and of humanity.

Too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis (see article). A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.

Even when flawed research does not put people’s lives at risk—and much of it is too far from the market to do so—it squanders money and the efforts of some of the world’s best minds. The opportunity costs of stymied progress are hard to quantify, but they are likely to be vast. And they could be rising.

One reason is the competitiveness of science. In the 1950s, when modern academic research took shape after its successes in the second world war, it was still a rarefied pastime. The entire club of scientists numbered a few hundred thousand. As their ranks have swelled, to 6m-7m active researchers on the latest reckoning, scientists have lost their taste for self-policing and quality control. The obligation to “publish or perish” has come to rule over academic life. Competition for jobs is cut-throat. Full professors in America earned on average $135,000 in 2012—more than judges did. Every year six freshly minted PhDs vie for every academic post. Nowadays verification (the replication of other people’s results) does little to advance a researcher’s career. And without verification, dubious findings live on to mislead.

Careerism also encourages exaggeration and the cherry-picking of results. In order to safeguard their exclusivity, the leading journals impose high rejection rates: in excess of 90% of submitted manuscripts. The most striking findings have the greatest chance of making it onto the page. Little wonder that one in three researchers knows of a colleague who has pepped up a paper by, say, excluding inconvenient data from results “based on a gut feeling”. And as more research teams around the world work on a problem, the odds shorten that at least one will fall prey to an honest confusion between the sweet signal of a genuine discovery and a freak of the statistical noise. Such spurious correlations are often recorded in journals eager for startling papers. If they touch on drinking wine, going senile or letting children play video games, they may well command the front pages of newspapers, too.

Conversely, failures to prove a hypothesis are rarely even offered for publication, let alone accepted. “Negative results” now account for only 14% of published papers, down from 30% in 1990. Yet knowing what is false is as important to science as knowing what is true. The failure to report failures means that researchers waste money and effort exploring blind alleys already investigated by other scientists.

The hallowed process of peer review is not all it is cracked up to be, either. When a prominent medical journal ran research past other experts in the field, it found that most of the reviewers failed to spot mistakes it had deliberately inserted into papers, even after being told they were being tested.

All this makes a shaky foundation for an enterprise dedicated to discovering the truth about the world. What might be done to shore it up? One priority should be for all disciplines to follow the example of those that have done most to tighten standards. A start would be getting to grips with statistics, especially in the growing number of fields that sift through untold oodles of data looking for patterns. Geneticists have done this, and turned an early torrent of specious results from genome sequencing into a trickle of truly significant ones.

Ideally, research protocols should be registered in advance and monitored in virtual notebooks. This would curb the temptation to fiddle with the experiment’s design midstream so as to make the results look more substantial than they are. (It is already meant to happen in clinical trials of drugs, but compliance is patchy.) Where possible, trial data also should be open for other researchers to inspect and test.

The most enlightened journals are already becoming less averse to humdrum papers. Some government funding agencies, including America’s National Institutes of Health, which dish out $30 billion on research each year, are working out how best to encourage replication. And growing numbers of scientists, especially young ones, understand statistics. But these trends need to go much further. Journals should allocate space for “uninteresting” work, and grant-givers should set aside money to pay for it. Peer review should be tightened—or perhaps dispensed with altogether, in favour of post-publication evaluation in the form of appended comments. That system has worked well in recent years in physics and mathematics. Lastly, policymakers should ensure that institutions using public money also respect the rules.

Science still commands enormous—if sometimes bemused—respect. But its privileged status is founded on the capacity to be right most of the time and to correct its mistakes when it gets things wrong. And it is not as if the universe is short of genuine mysteries to keep generations of scientists hard at work. The false trails laid down by shoddy research are an unforgivable barrier to understanding.
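The statistical point made in the piece above (the “honest confusion between the sweet signal of a genuine discovery and a freak of the statistical noise”) can be made concrete.  The following is a minimal sketch of our own, not anything from The Economist article, written in Python and assuming numpy and scipy are available.  It simulates many research teams, each testing for an effect that does not exist; at the conventional 5% significance threshold, a predictable fraction of them still obtain a publishable “finding”.

```python
# Toy illustration (our own, hypothetical): many teams test for a null effect,
# yet a fixed 5% significance threshold still hands some of them a "discovery".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_teams = 1000       # independent research teams
n_per_group = 30     # subjects per treatment / control group
alpha = 0.05         # conventional significance threshold

false_positives = 0
for _ in range(n_teams):
    # Both groups are drawn from the same distribution: there is no true effect.
    treatment = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value < alpha:
        false_positives += 1

print(f"{false_positives} of {n_teams} teams found a 'significant' effect "
      f"({100 * false_positives / n_teams:.1f}%), though no effect exists.")
# If journals publish mainly the "significant" results and negative results go
# unreported, it is largely these spurious findings that enter the literature.
```

The same arithmetic lies behind the geneticists’ remedy mentioned above: when thousands of hypotheses are sifted at once, the significance threshold is made far stricter (for example, by dividing it by the number of tests), which turns the torrent of specious hits into a trickle.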

Global Warming Games

More Time Please

There is a very fine line these days between science and propaganda.  Old school science–that is, science in the good old days–was largely populated by a bunch of ruthless sceptics who believed very little, challenged everything, and wanted experimental proof.

This is not to say that every scientist believed he had to “go back to the beginning” and re-work every experiment to prove for himself the laws of motion or the veracity reflected in the periodic table of the elements.  As Michael Polanyi has argued, all scientists operate within a tradition of knowledge passed on from practitioners to neophytes that represented what he called tacit knowledge.  But, let a few experiments throw up results that are not expected, and old school scientists would get a rush of blood to the head.  The labs would be booked out for months, arguments would rage, and debates would go long into the night.

These days much of this rigour has disappeared–particularly in those “disciplines” where proof or disproof cannot be offered via experimental tests.  Quickly these “disciplines” have degenerated into Soviet-style science, where society dictated from the outset what results would be forthcoming from any scientific research.  Then a bunch of scientists scurried around “proving” what society was expecting.  However, we are not speaking of the Soviet Union here, but of science in the West.

Take a couple of examples.  The first is evolutionism.  There is not a shred of credible experimental evidence to support this inane theory.  Experimental proof would demonstrate biological modification in the lab that changed one species into another–say, a fish into a baboon or a baboon into a fish.  Faced with this embarrassment, the reflexive pseudo-justification usually runs along these lines: well, this process took billions upon billions upon billions of years to complete; it can’t be reproduced in the lab–which is just another way of saying that evolutionism is not in the least scientific.

Yet, it has been inordinately successful in capturing the halls of academia, the media, and governments.  The unexpected consequence is that evolutionism’s popularity in the public mind has undermined the credibility of science everywhere.  Why insist upon scientific rigour when propaganda comes up trumps?  And if propaganda has been successful in winning ideological control, so that the power structures of society are all bent to serve and endow the legend with money and favour, why not elsewhere and in other fields of “science”?

Enter the second example: climate science.  Over the past thirty years, climate science has morphed into prophetic soothsaying.  Don’t for a moment think that modern climate science is grounded in rigorous experimentation or the laboratory environment.  It’s just speculative theory projected out a few centuries.  It cannot be disproved.  It just is.

For nigh on fifteen years global temperatures have not risen.  Does this threaten the legend?  Not at all.  It was never grounded on experimental scrutiny and confirmation or rejection in the first place.  It was grounded in an ideological world view which rejected industrialisation and economic development in principle.  The legend of global warming became a useful “just so” story to oppose economic development and industrialisation around the globe.  The scientific foundations of the legend were never important or necessary.  That is why it so quickly became politicised and contentious.

Every time you see a modern wind turbine, think of it as a monument to folly, ignorance and propaganda.  To be sure, windmills, when first invented in the Middle Ages, were a tremendous technological boon.  Not only were they useful to grind grains to feed people, but the clever Netherlanders used them to pump water off the lowlands and greatly expand their economy and wealth.  But today’s power turbines are so costly and inefficient they can only operate if taxpayers subsidise them at every turn.  That’s what happens when propaganda replaces science.

The United Nations has just produced its global climate assessment.  It is troubled by the hiatus in global warming.  How to explain it?  Apart from a few half-baked hypothetical speculations, it has no answer.  Its real argument is, “It won’t happen overnight.  But it will happen!  More time please.”  Sounds just like the spurious justification for evolutionism.  Funny that.

The Ugly Scientist

Fraud and Deception

Science trades under the mantle of objective, tested, authenticated conclusions.  It turns out, however, that in all scientific endeavour there is a strong dose of interpretation and subjectivity.  This is not just the case for science; it is true of all human actions.

Larry Woiwode illustrates the principle:

No fact exists without an interpretation of it, as a philosopher by the name of Cornelius Van Til once said.  What he meant is if I say, “The Civil War”, anybody who hears those words is stormed by sets of facts, some merely by naming it as I have.  If you view it as a war of northern aggression, you have facts to support that.  If another sees it as a conflict that installed commercial manufacture over agrarian interests, facts might well support that view.  If I say its genesis was slavery, I might well be closer to the truth, but I would have to summon my series of facts to support that. . . .

Take a step farther back.  If you believe trees are a result of random happenstance or believe they were ordained to look as they do, part of a design fulfilled, then your view of the tree and facts about it will differ, according to your ideology.  [Larry Woiwode, Words for Readers and Writers (Wheaton, Illinois: Crossway, 2013), p. 33f.]

Modern science is presently dominated by a perverse ideology: scientism–which holds that the only reality, the only thing which exists, is matter.  It denies the subjective, ideological construction of all scientific endeavour.  It claims pure, brute objectivity.

As a result, it perversely drives science into being a more ideologically bound and subjectively dominated enterprise.  There are some things which absolutely must be true–for philosophical or ideological reasons.  “Science” becomes a club to beat upon those who would disagree.  Moreover, the belief that one is purely and absolutely objective easily results in self-willed blindness.  It comes as no surprise, then, that fraud has become more common in the scientific enterprise.  Since the data represents brute factuality and pure objectivity, manipulation of it must be equally brute, objective, and factual.  One man’s data is as good as another’s.

One of the more recent pieces from Creation Ministries International highlights the growing incidence of deceit and fraud in scientific research.  Some excerpts:

Most of the known cases of modern-day fraud are in the life sciences.  In the biomedical field alone, fully 127 new misconduct cases were lodged with the Office of Research Integrity (US Department of Health & Human Services) in the year 2001. This was the third consecutive rise in the number of cases since 1998. This concern is not of mere academic interest, but also profoundly affects human health and life. Much more than money and prestige are at stake—the fact is, fraud is ‘potentially deadly’, and in the area of medicine, researchers are ‘playing with lives’. The problem is worldwide. In Australia misconduct allegations have created such a problem that the issue has even been raised in the Australian Parliament, and researchers have called for an ‘office of research integrity’. . . .

The major problem with fraud is that of science itself, namely that scientists ‘see their own profession in terms of the powerfully appealing ideal that the philosophers and sociologists have constructed. Like all believers they tend to interpret what they see of the world in terms of what the faith says is there.’ And, unfortunately, science is a ‘complex process in which the observer can see almost anything he wants provided he narrows his vision sufficiently’. An example of this problem is James Randi’s conclusion that scientists are among the easiest of persons to fool with magic tricks. The problem of objectivity is very serious because most researchers believe passionately in their work and the theories they are trying to prove. While this passion may enable the scientist to sustain the effort necessary to produce results, it may also colour and even distort those results.

Many examples exist to support the conclusion that researchers’ propensity for self-delusion is particularly strong, especially when examining ideas and data that impinge on their core belief structure. The fact is ‘all human observers, however well trained, have a strong tendency to see what they expect to see’. Nowhere is this more evident than in the admittedly highly emotional area of evolution.

[The original article includes citations and footnotes]

Taking Refuge in Vain Inanities

Over-Egging “Consensus Science”

It is often argued that the general consensus of scientists and the peer review process ensure the integrity of all scientific results and conclusions, and guard against faulty reasoning, over-extrapolation, poor methodology, and similar. . . . But . . . the way scientific research is actually undertaken reveals a very different story. 

Firstly, consensus should never be used to determine truth since this would be committing the logical fallacy of argumentum ad numerum.  Moreover, consensus also seems to be applied rather inconsistently.  For example, many Christians accept the scientific consensus that the universe is 8-15 billion years old, yet those same Christians are usually vehemently opposed to the consensus that all life came about by naturalistic evolution. 

Secondly, history shows that the consensus has often been wrong–indeed, hopelessly wrong.

Thirdly, as Kuhn points out, scientists do not start from scratch rediscovering all the currently known scientific facts and repeating all the experiments that lead to major new discoveries. . . . Rather, as students, they learn and accept the currently held theories on the authority of their teachers and textbooks.  This is indoctrination not consensus.

Fourthly, much of the consensus is artificial and enforced.  Scientists have to choose which projects to pursue and how to allocate their time.  Younger scientists need to choose which research projects will lead to tenure, gain them grants, or lead to controlling a laboratory.  These goals will not be achieved by attacking well established and widely accepted scientific tenets and theories.  As a visiting fellow at Australian National University recently pointed out, many researchers feel that any new research which challenges or threatens established ideas is unlikely to be funded, and therefore, they do not even bother to put in an application.  Older scientists, on the other hand, have reputations to defend.  Thus Bauman concludes: “Whether we want to admit it or not, there is a remarkably comprehensive scientific orthodoxy to which scientists must subscribe if they want to get a job, get promotion, get a research grant, get tenured, or get published.  If they resist they get forgotten.”  [Andrew S. Kulikovsky, Creation, Fall, Restoration: A Biblical Theology of Creation (Fearn, Ross-shire: Mentor/Christian Focus Publications Ltd, 2009), p. 43f.]

To our mind, there is nothing sinister in this sociology of scientific knowledge.  It is the way all knowledge normally progresses.  What becomes sinister is when the existing orthodoxy or tacit consensus (to employ Michael Polanyi’s construct) is over-egged to claim it therefore represents infallible and certain truth–as in, “X must be true because every reputable scientist agrees.”  The fallacy of circularity is blatantly to the fore in such tautological assertions.  Vain, ignorant, and foolish are those who find comfort or take refuge in such inanities.

Exposing Pre-Commitments

A Moment of Rare Honesty

Every so often an Unbeliever comes along who shows he is aware of his assumptions and pre-commitments.  It does not happen very often.  The cry of our age is, “Just the facts, ma’am.”  We just focus upon facts and follow wherever they take us.  The problem with this is that our presuppositions have already determined what will be accepted as fact, and what will be discarded.

For example, the Christian presupposes the existence of the God revealed in the Scriptures.  Therefore, His revelation of Himself is factual, objectively and eternally so.  It is true truth.  The Unbeliever is predisposed to deny the existence of God.  Therefore, he regards the Bible as anti-factual or non-factual or mythical.  The Unbeliever’s presuppositions have already determined not only what the facts will be but also how and where they will be discovered or learnt.  So Unbelief is plagued by two self-deceits.  The first is that it has no fundamental suppositions or presuppositions.  The second is that Unbelief is objective: it follows the facts wherever the facts lead.

But then again, every so often an Unbeliever comes along who is self-conscious about these things and admits them to himself and the world.  Such an Unbeliever is self-consciously presuppositional in the way that he thinks and operates–which is to say, he is more honest than the “just the facts, ma’am” brigade.

Harvard geneticist Richard Lewontin is one such.

In an article arguing for the superiority of science over religion . . . Lewontin freely admits that science has its own problems.  It has created many of our social problems (like ecological disasters), and many scientific theories are no more than “unsubstantiated just-so stories.”  Nevertheless, “in the struggle between science and the supernatural,” we “take the side of science.”  Why?  “Because we have a prior commitment to materialism.” [Chuck Colson and Nancy Pearcey, How Now Shall We Live (Wheaton, Illinois: Tyndale House Publishers, 1999), p. 96.  (Emphasis, ours).]

 This “prior commitment to materialism” is not factual; it is a religious commitment that makes modern Unbelieving science atheistic in its procession.  But Unbelieving science also, apart from more honest brokers like Lewontin, wilfully obscures its presuppositions, and claims instead that it is only driven by the facts, the empirical data, to its conclusions.  

And there is more, for Lewontin says even the methods of science are driven by materialistic philosophy.  The rules that define what qualifies as science in the first place have been crafted by materialists in such a way as to ensure they get only materialistic theories.  Or, as Lewontin puts it, “we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations.” (Ibid.)

This, as Colson and Pearcey observe, is a stunning admission.  But it is stunning only in its honesty and candidness, not in its substance.  What Lewontin is describing is both known and expected by Believers.  Unbelief reasons in a vicious circle: circular because we are creatures, vicious because the Unbelieving circle turns upon self.  To put it crudely, Unbelieving science produces Unbelieving facts, theories, and explanations.  Believing science produces Believing facts, theories, and explanations.

There is a popular saying to the effect that one may be entitled to one’s own opinions, but not one’s own facts.  This is a naive oversimplification.  How often have we been told that it is a fact that the world is warming?  The temperature data proves it, we are informed.  But the temperature data record is patchy, incomplete, and full of data holes.  Therefore, temperature data is “adjusted”, “interpolated” and “filled in”.  The facts are massaged.  And the massaging has a number of assumptions built in which already pre-commit the masseuse to have the “facts” produce a certain pre-determined outcome–that is, the earth is warming.
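To see how much hangs on those assumptions, here is a minimal sketch of our own in Python.  It uses entirely invented numbers and is a toy example, not any agency’s actual homogenisation procedure; it assumes numpy is available.  The same patchy record yields a different warming trend depending on which rule is used to fill the holes.

```python
# Toy sketch (our own, with invented numbers): where a temperature record has
# holes, the values that fill them are manufactured by the chosen assumption,
# and the computed trend moves with that choice.
import numpy as np

years = np.arange(2000, 2015)
# A made-up station record (degrees C), with np.nan where readings are missing.
temps = np.array([14.1, 14.3, np.nan, 14.2, 14.6, np.nan, np.nan, 14.4,
                  14.9, np.nan, 14.7, 15.0, np.nan, 14.8, 15.1])

known = ~np.isnan(temps)

def trend_per_decade(x, y):
    """Least-squares slope of temperature against year, in degrees per decade."""
    slope, _intercept = np.polyfit(x, y, 1)
    return slope * 10

# Option 1: use only the readings that actually exist.
raw_trend = trend_per_decade(years[known], temps[known])

# Option 2: fill the gaps by linear interpolation between neighbouring readings.
interp = temps.copy()
interp[~known] = np.interp(years[~known], years[known], temps[known])
interp_trend = trend_per_decade(years, interp)

# Option 3: fill the gaps with the long-run mean of the known readings.
mean_fill = temps.copy()
mean_fill[~known] = temps[known].mean()
mean_trend = trend_per_decade(years, mean_fill)

print(f"trend, gaps ignored:      {raw_trend:.3f} C per decade")
print(f"trend, gaps interpolated: {interp_trend:.3f} C per decade")
print(f"trend, gaps mean-filled:  {mean_trend:.3f} C per decade")
```

The invented figures are beside the point.  The point is that the filled-in “facts”, and therefore the trend computed from them, carry the fingerprints of whichever filling rule was chosen beforehand.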

The facts are not always what they seem, which is why credible science proceeds on the basis of testing, retesting, replication, and retesting again.  But when everyone in the hen house is clucking the same sound, the rigour of replication often softens.  The “apparatus of investigation” elides into an exercise to produce certain explanations and certain results that reinforce the consensus of the day.

The moral of the story is that whilst it is essential we talk about facts and empirical data, it is equally essential that we are aware of our philosophy of fact, and talk about that as well.  Full disclosure is always the best policy.  But Unbelief hates such notions because they skewer Unbelief’s apparatus of self-deceit.  That’s one reason Unbelievers often get angry when debating with Christians.

The truth often hurts.

Foundering Rocks, Part III

Lip Service in the Academy

In the scientific community the vast majority of scientists accept evolutionism on the basis of authority.  “I believe evolutionism to be true because everyone else does” is the actual epistemic state of play of this bizarre theory.  After all, evolution calls for billions of years in order to work its “magic”.  On the theory’s own terms, no-one can be around that long to observe it.  Proving the theory to be true by means of scientific experimentation is impossible from the get-go.

It has been intriguing to watch experimental physicists strive mightily, expending great effort and expense to prove their theories about the state and nature of the atom and the existence, behaviour, and function of sub-atomic particles.  The Large Hadron Collider is just one example, albeit the largest, of experimental equipment built to confirm or deny the kaleidoscope of theories about how atoms are structured and how they work.  But when it comes to evolutionism, nada.  There ain’t no Darwinian Collider.  An experimental machine to reproduce experiments and tests covering a process allegedly taking place over billions of years is a bit of a tall order.  This is one reason why evolutionism fails to meet the basic requirements of a scientific theory: it can neither be proved nor disproved.  It is a philosophical cosmology, not a scientific theory.

Why do scientists hold to the “theory”, then?  Because they believe everyone else does.  Michael Behe has a revealing chapter in his book, Darwin’s Black Box.  He says that when the molecular basis of life became known, a specialty journal emerged in 1971 called the Journal of Molecular Evolution (JME).  It was devoted to research explaining how life at the molecular level came into existence.  It has turned out to be a miserable failure.

In fact none of the papers published in JME over the entire course of its life as a journal has ever proposed a detailed model by which a complex biochemical system might have been produced in a gradual step-by-step Darwinian fashion.  Although many scientists ask how sequences can change or how chemicals necessary for life might be produced in the absence of cells, no one has ever asked in the pages of JME such questions as the following: How did the photosynthetic reaction centre develop?  How did intramolecular transport start?  How did cholesterol biosynthesis begin?  How did retinal become involved in vision?  How did phosphoprotein signaling pathways develop?  The very fact that none of these problems is even addressed, let alone solved, is a very strong indication that Darwinism is an inadequate framework for understanding the origin of complex biochemical systems. [Michael J. Behe, Darwin’s Black Box: The Biochemical Challenge to Evolution, 10th edition (New York: Free Press, {1996}, 2006), p.176.] 

But maybe this reflects the failure of just one journal?  No.  It’s the norm.  Behe turns to the Proceedings of the National Academy of Sciences.

But the great majority (of published papers in molecular biology) are concerned with sequence analysis, just as most papers in JME were, passing over the fundamental question of  how. . . . No papers were published in (the Proceedings) that proposed detailed routes by which complex biochemical structures might have developed.  Surveys of other biochemistry journals show the same result: sequences upon sequences, but no explanations. (Ibid., p. 178.) 

What about the authoritative textbooks in the field?  When it comes to asserting that they contain explanations of evolution, they are fulsome indeed.  Behe cites one of the most successful texts in biochemistry, used in many science academies over the years.  It claims that it will teach its readers and students how biomolecules are the products of evolutionary selection, so that they may be the fittest possible molecules for their biological function.  (Behe is referring to a text by Albert Lehninger, professor of biophysics at Johns Hopkins.)  It turns out that just two of the 6,000 entries in the index refer to evolution.

With just 2 citations out of 6,000, Lehninger’s teacherly advice to his students concerning the importance of evolution to their studies is belied by his index.  In it Lehninger included virtually everything of relevance to biochemistry.  Apparently, though, evolution is rarely a relevant topic.  (Ibid., p. 181.)

Behe then makes the critical assertion: evolution is a wand waved over mysteries.  It is a warranting concept.  It is invoked to justify things too difficult to explain–things which appear to be beyond explanation.  As an example, he cites the index entry “evolution, adaptation of sperm whale”.

When we flip to the indicated page, we learn that sperm whales have several tons of oil in their heads which becomes more dense at colder temperatures.  This allows the whale to match the density of the water at the great depths where it often dives and so swim more easily.  After describing the whale the textbook remarks, “Thus, we see in the sperm whale a remarkable anatomical and biochemical adaptation, perfected by evolution.”  But that single line is all that’s said!  The whale is stamped “perfected by evolution,” and then everybody goes home.  The authors make no attempt to explain how the sperm whale came to have the structure it has.  (Ibid.)

The emperor has no clothes.  This modus operandi is eerily similar to the practice of alchemy in the Middle Ages.  By endless experimentation and research alchemists learned a great deal about the properties of the four “elements” (fire, water, air and earth) when heated and combined.  But they failed to produce gold.

Evolutionists have studied assiduously the biomolecular structure of life, and at least some have sought to demonstrate how it came to be via evolutionary processes.  Along the way, they have learnt a great deal about the structures of living cells.  But the gold has never been produced.  Most biomolecular scientists, however, just pay lip service to evolutionism.  It’s nothing more than a mantra to gain entrance to the academy.  After that, they get on with the real, genuine science.

The Overreach of Stupid Science, Part V

The Eclipse of Ethics

[Part V of The Folly of Scientism by Austin L. Hughes
Originally published in The New Atlantis]

Perhaps no area of philosophy has seen a greater effort at appropriation by advocates of scientism than ethics. Many of them tend toward a position of moral relativism. According to this position, science deals with the objective and the factual, whereas statements of ethics merely represent people’s subjective feelings; there can be no universal right or wrong. Not surprisingly, there are philosophers who have codified this opinion. The positivist tradition made much of a “fact-value distinction,” in which science was said to deal with facts, leaving fields like ethics (and aesthetics) to deal with the more nebulous and utterly disparate world of values. In his influential book Ethics: Inventing Right and Wrong (1977), the philosopher J. L. Mackie went even further, arguing that ethics is fundamentally based on a false theory about reality.

Evolutionary biology has often been seen as highly relevant to ethics, beginning in the nineteenth century. Social Darwinism — at least as it came to be explained and understood by later generations — was an ideology that justified laissez-faire capitalism with reference to the natural “struggle for existence.” In the writings of authors such as Herbert Spencer, the accumulation of wealth with little regard for those less fortunate was justified as “nature’s way.” Of course, the “struggle” involved in natural selection is not a struggle to accumulate a stock portfolio but a struggle to reproduce — and ironically, Social Darwinism arose at the very time that the affluent classes of Western nations were beginning to limit their reproduction (the so-called “demographic transition”) with the result that the economic struggle and the Darwinian struggle were at cross-purposes.

Partly in response to this contradiction, the eugenics movement arose, with its battle cry, “The unfit are reproducing like rabbits; we must do something to stop them!”
Although plenty of prominent Darwinians endorsed such sentiments in their day, no more incoherent a plea can be imagined from a Darwinian point of view: If the great unwashed are out-reproducing the genteel classes, that can only imply that it is the great unwashed who are the fittest — not the supposed “winners” in the economic struggle. It is the genteel classes, with their restrained reproduction, who are the unfit. So the foundations of eugenics are complete nonsense from a Darwinian point of view.

The unsavory nature of Social Darwinism and associated ideas such as eugenics caused a marked eclipse in the enterprise of evolutionary ethics. But since the 1970s, with the rise of sociobiology and its more recent offspring evolutionary psychology, there has been a huge resurgence of interest in evolutionary ethics on the part of philosophers, biologists, psychologists, and popular writers.

It should be emphasized that there is such a thing as a genuinely scientific human sociobiology or evolutionary psychology. In this field, falsifiable hypotheses are proposed and tested with real data on human behavior. The basic methods are akin to those of behavioral ecology, which have been applied with some success to understanding the behavioral adaptations of nonhuman animals, and can shed similar light on aspects of human behavior — although these efforts are complicated by human cultural variability. On the other hand, there is also a large literature devoted to a kind of pop sociobiology that deals in untested — and often untestable — speculations, and it is the pop sociobiologists who are most likely to tout the ethical relevance of their ostensible discoveries.

When evolutionary psychology emerged, its practitioners were generally quick to repudiate Social Darwinism and eugenics, labeling them as “misuses” of evolutionary ideas. It is true that both were based on incoherent reasoning that is inconsistent with the basic concepts of biological evolution; but it is also worth remembering that some very important figures in the history of evolutionary biology did not see these inconsistencies, being blinded, it seems, by their social and ideological prejudices. The history of these ideas is another cautionary tale of the fallibility of institutional science when it comes to getting even its own theories straight.

Just the same, what evolutionary psychology was about, we were told, was something quite different than Social Darwinism. It avoided the political and focused on the personal. One area of human life to which the field has devoted considerable attention is sex, spinning out just-so stories to explain the “adaptive” nature of every sort of behavior, from infidelity to rape. As with the epistemological explanations, since natural selection “should” have favored this or that behavior, it is often simply concluded that it must have done so.

The tacit assumption seems to be that merely reciting the story somehow renders it factual. (There often even seems to be a sort of relish with which these stories are elaborated — the more so the more thoroughly caddish the behavior.) The typical next move is to deplore the very behaviors the evolutionary psychologist has just designated as part of our evolutionary heritage, and perhaps our instinct: To be sure, we don’t approve of such things today, lest anyone get the wrong idea. This deploring is often accompanied by a pious invocation of the fact-value distinction (even though typically no facts at all have made an appearance — merely speculations).

There seems to be a thirst for this kind of explanation, but the pop evolutionary psychologists generally pay little attention to the philosophical issues raised by their evolutionary scenarios. Most obviously, if “we now know” that the selfish behavior attributed to our ancestors is morally reprehensible, how have “we” come to know this? What basis do we have for saying that anything is wrong at all if our behaviors are no more than the consequence of past natural selection? And if we desire to be morally better than our ancestors were, are we even free to do so? Or are we programmed to behave in a certain way that we now, for some reason, have come to deplore?

On the other hand, there is a more serious philosophical literature that attempts to confront some of the issues in the foundations of ethics that arise from reflections on human evolutionary biology — for example, Richard Joyce’s 2006 book The Evolution of Morality. Unfortunately, much of this literature consists of still more storytelling — scenarios whereby natural selection might have favored a generalized moral sense or the tendency to approve of certain behaviors such as cooperation. There is nothing inherently implausible about such scenarios, but they remain in the realm of pure speculation and are essentially impossible to test in any rigorous way. Still, these ideas have gained wide influence.

Part of this evolutionary approach to ethics tends toward a debunking of morality. Since our standards of morality result from natural selection for traits that were useful to our ancestors, the debunkers argue, these moral standards must not refer to any objective ethical truths. But just because certain beliefs about morality were useful for our ancestors does not make them necessarily false. It would be hard to make a similar case, for example, against the accuracy of our visual perception based on its usefulness to our ancestors, or against the truth of arithmetic based on the same.

True ethical statements — if indeed they exist — are of a very different sort from true statements of arithmetic or observational science. One might argue that our ancestors evolved the ability to understand human nature and, therefore, they could derive true ethical statements from an understanding of that nature. But this is hardly a novel discovery of modern science: Aristotle made the latter point in the Nicomachean Ethics. If human beings are the products of evolution, then it is in some sense true that everything we do is the result of an evolutionary process — but it is difficult to see what is added to Aristotle’s understanding if we say that we are able to reason as he did as the result of an evolutionary process. (A parallel argument could be made about Kantian ethics.)

Not all advocates of scientism fall for the problems of reducing ethics to evolution. Sam Harris, in his 2010 book The Moral Landscape, is one advocate of scientism who takes issue with the whole project of evolutionary ethics. Yet he wishes to substitute an offshoot of scientism that is perhaps even more problematic, and certainly more well-worn: utilitarianism. Under Harris’s ethical framework, the central criteria for judging if a behavior is moral is whether or not it contributes to the “well-being of conscious creatures.” Harris’s ideas have all of the problems that have plagued utilitarian philosophy from the beginning. As utilitarians have for some time, Harris purports to challenge the fact-value distinction, or rather, to sidestep the tricky question of values entirely by just focusing on facts. But, as has also been true of utilitarians for some time, this move ends up being a way to advance certain values over others without arguing for them, and to leave large questions about those values unresolved.

Harris does not, for example, address the time-bound nature of such evaluations: Do we consider only the well-being of creatures that are conscious at the precise moment of our analysis? If yes, why should we accept such a bias? What of creatures that are going to possess consciousness in the near future — or would without human intervention — such as human embryos, whose destruction Harris staunchly advocates for the purposes of stem cell research? What of comatose patients, whose consciousness, and prospects for future consciousness, are uncertain? Harris might respond that he is only concerned with the well-being of creatures now experiencing consciousness, not any potentially future conscious creatures. But if so, should he not, for example, advocate expending all of the earth’s nonrenewable resources in one big here-and-now blowout, enhancing the physical well-being of those now living, and let future generations be damned? Yet Harris claims to be a conservationist. Surely the best justification for resource conservation on the basis of his ethics would be that it enhances the well-being of future generations of conscious creatures. If those potential future creatures merit our consideration, why should we not extend the same consideration to creatures already in existence, whose potential future involves consciousness?

Moreover, the factual analysis Harris touts cannot nearly bear the weight of the ethical inquiry he claims it does. Harris argues that the question of what factors contribute to the “well-being of conscious creatures” is a factual one, and furthermore that science can provide insights into these factors, and someday perhaps even give definitive accounts of them. Harris himself has been involved in research that examines the brain states of human subjects engaged in a variety of tasks. Although there has been much overhyping of brain imaging, the limitations of this sort of research are becoming increasingly obvious. Even on their own terms, these studies at best provide evidence of correlation, not of causation, and of correlations mixed in with the unfathomably complex interplay of cause and effect that are the brain and the mind. These studies inherently claim to get around the problems of understanding subjective consciousness by examining the brain, but the basic unlikeness of first-person qualitative experience and third-person events that can be examined by anyone places fundamental limits on the usual reductive techniques of empirical science.

We might still grant Harris’s assumption that neuroscience will someday reveal, in great biochemical and physiological detail, a set of factors highly associated with a sense of well-being. Even so, there would be limitations on how much this knowledge would advance human happiness. For comparison, we know quite a lot about the physiology of digestion, and we are able to describe in great detail the physiological differences between the digestive system of a person who is starving and that of a person who has just eaten a satisfying and nutritionally balanced meal. But this knowledge contributes little to solving world hunger. This is because the factor that makes the difference — that is, the meal — comes from outside the person. Unless the factors causing our well-being come primarily from within, and are totally independent of what happens in our environment, Harris’s project will not be the key to achieving universal well-being.

Harris is aware that external circumstances play a vital role in our sense of well-being, and he summarizes some research that addresses these factors. But most of this research is soft science of the very softest sort — questionnaire surveys that ask people in a variety of circumstances about their feelings of happiness. As Harris himself notes, most of the results tell us nothing we did not already know. (Unsurprisingly, Harris, an atheist polemicist, fails to acknowledge any studies that have supported a spiritual or religious component in happiness.) Moreover, there is reason for questioning to what extent the self-reported “happiness” in population surveys relates to real happiness. Recent data indicating that both states and countries with high rates of reported “happiness” also have high rates of suicide suggest that people’s answers to surveys may not always provide a reliable indicator of societal well-being, or even of happiness.

This, too, is a point as old as philosophy: As Aristotle noted in the Nicomachean Ethics, there is much disagreement between people as to what happiness is, “and often even the same man identifies it with different things, with health when he is ill, with wealth when he is poor.” Again, understanding values requires philosophy; the task cannot be sidestepped simply by wrapping values in a numerical package. Harris is right that new scientific information can guide our decisions by enlightening our application of moral principles — a conclusion that would not have been troubling to Kant or Aquinas. But this is a far cry from scientific information shaping or determining our moral principles themselves, an idea for which Harris is unable to make a case.

A striking inconsistency in Harris’s thought is his adherence to determinism, which seems to go against his insistence that there are right and wrong choices. This is a tension widely evident in pop sociobiology. Harris seems to think that free will is an illusion and that our decisions are really driven by thoughts that arise unbidden in our brains. He does not explain the origin of these thoughts, nor how their origin relates to moral choices.

Harris gives a hint of an answer to this question when, in speaking of criminals, he attributes their actions to “some combination of bad genes, bad parents, bad ideas, and bad luck.” Each of us, he says, “could have been dealt a very different hand in life” and “it seems immoral not to recognize just how much luck is involved in morality itself.” Harris’s reference to “bad genes” puts him back closer to the territory of eugenics and Social Darwinism than he seems to realize, making morality the privilege of the lucky few. Although Harris admits that we have a lot to learn about what makes for happiness, he does venture that happy people have “careers that are intellectually stimulating and financially rewarding” and “basic control over their lives.”

This view undermines the possibility of happiness and moral behavior for those who are dealt a bad hand, and so does more to degrade than uplift at the individual level. But worse, it does little to advance the well-being of society as a whole. The importance of good circumstances, and of guaranteeing them for as many as possible, is already widely understood and appreciated. But the question remains how to bring about these circumstances for everyone, and no economic system has yet been devised to ensure this. Short of this, difficult discussions of philosophy, justice, politics, and all of the other fields concerned with public life will be required to understand what the good life is and how to provide it to many given the limitations and inequalities of what circumstance brings to each of us.

On these points, as with so many others, scientism tends to present as bold, novel solutions what are really just the beginning terms of the problem as it is already widely understood.

The Overreach of Stupid Science, Part IV

The Eclipse of Epistemology

[Part IV of The Folly of Scientism by Austin L. Hughes
Originally published in The New Atlantis]

Hawking and Mlodinow, in the chapter of their book called “The Theory of Everything,” quote Albert Einstein: “The most incomprehensible thing about the universe is that it is comprehensible.” In response, Hawking and Mlodinow offer this crashing banality: “The universe is comprehensible because it is governed by scientific laws; that is to say, its behavior can be modeled.” Later, the authors invite us to give ourselves a collective pat on the back: “The fact that we human beings — who are ourselves mere collections of fundamental particles of nature — have been able to come this close to an understanding of the laws governing us and our universe is a great triumph.”

Great triumph or no, none of this addresses Einstein’s paradox, because no explanation is offered as to why our universe is “governed by scientific laws.”  Moreover, even if we can be confident that our universe has unchanging physical laws — which many of the new speculative cosmologies call into question — how is it that we “mere collections of particles” are able to discern those laws? How can we be confident that we will continue to discern them better, until we understand them fully?

A common response to these questions invokes what has become the catch-all explanatory tool of advocates of scientism: evolution.

W. V. O. Quine was one of the first modern philosophers to apply evolutionary concepts to epistemology, when he argued in Ontological Relativity and Other Essays (1969) that natural selection should have favored the development of traits in human beings that lead us to distinguish truth from falsehood, on the grounds that believing false things is detrimental to fitness. More recently, scientific theories themselves have come to be considered the objects of natural selection. For example, philosopher Bastiaan C. van Fraassen argued in his 1980 book The Scientific Image:

the success of current scientific theories is no miracle. It is not even surprising to the scientific (Darwinist) mind. For any scientific theory is born into a life of fierce competition, a jungle red in tooth and claw. Only the successful theories survive — the ones which in fact latched onto actual regularities in nature.

Richard Dawkins has famously extended this analysis to ideas in general, which he calls “memes.”

The notion that our minds and senses are adapted to find knowledge does have some intuitive appeal; as Aristotle observed long before Darwin, “all men, by nature, desire to know.” But from an evolutionary perspective, it is by no means obvious that there is always a fitness advantage to knowing the truth. One might grant that it may be very beneficial to my fitness to know certain facts in certain contexts: For instance, if a saber-toothed tiger is about to attack me, it is likely to be to my advantage to be aware of that fact.

Accurate perception in general is likely to be advantageous. And simple mathematics, such as counting, might be advantageous to fitness in many contexts — for example, in keeping track of my numerous offspring when saber-toothed cats are about. Plausibly, even the human propensity for gathering genealogical information, and with it an intuitive sense of degrees of relatedness among social group members, might have been advantageous because it served to increase the propensity of an organism to protect members of the species with genotypes similar to its own. But the general epistemological argument offered by these authors goes far beyond any such elementary needs. While it may be plausible to imagine a fitness advantage to simple skills of classification and counting, it is very hard to see such an advantage to DNA sequence analysis or quantum theory.

Similar points apply whether one is considering the ideas themselves or the traits that allow us to form ideas as the objects of natural selection. In either case, the “fitness” of an idea hinges on its ability to gain wide adherence and acceptance. But there is little reason to suppose that natural selection would have favored the ability or desire to perceive the truth in all cases, rather than just some useful approximation of it. Indeed, in some contexts, a certain degree of self-deception may actually be advantageous from the point of view of fitness. There is a substantial sociobiological literature regarding the possible fitness advantages of self-deception in humans (the evolutionary biologist Robert L. Trivers reviewed these in a 2000 article in the Annals of the New York Academy of Sciences).

These invocations of evolution also highlight another common misuse of evolutionary ideas: namely, the idea that some trait must have evolved merely because we can imagine a scenario under which possession of that trait would have been advantageous to fitness. Unfortunately, biologists as well as philosophers have all too often been guilty of this sort of invalid inference. Such forays into evolutionary explanation amount ultimately to storytelling rather than to hypothesis-testing in the scientific sense. For a complete evolutionary account of a phenomenon, it is not enough to construct a story about how the trait might have evolved in response to a given selection pressure; rather, one must provide some sort of evidence that it really did so evolve. This is a very tall order, especially when we are dealing with human mental or behavioral traits, the genetic basis of which we are far from understanding.

Evolutionary biologists today are less inclined than Darwin was to expect that every trait of every organism must be explicable by positive selection. In fact, there is abundant evidence — as described in books like Motoo Kimura’s The Neutral Theory of Molecular Evolution (1983), Stephen Jay Gould’s The Structure of Evolutionary Theory (2002), and Michael Lynch’s The Origins of Genome Architecture (2007) — that many features of organisms arose by mutations that were fixed by chance, and were neither selectively favored nor disfavored. The fact that any species, including ours, has traits that might confer no obvious fitness benefit is perfectly consistent with what we know of evolution. Natural selection can explain much about why species are the way they are, but it does not necessarily offer a specific explanation for human intellectual powers, much less any sort of basis for confidence in the reliability of science.

What van Fraassen, Quine, and these other thinkers are appealing to is a kind of popularized and misapplied Darwinism that bears little relationship to how evolution really operates, yet that appears in popular writings of all sorts — and even, as I have discovered in my own work as an evolutionary biologist, in the peer-reviewed literature.

To speak of a “Darwinian” process of selection among culturally transmitted ideas, whether scientific theories or memes, is at best only a loose analogy with highly misleading implications. It easily becomes an interpretive blank check, permitting speculation that seems to explain any describable human trait. Moreover, even in the strongest possible interpretation of these arguments, at best they help a little in explaining why we human beings are capable of comprehending the universe — but they still say nothing about why the universe itself is comprehensible.

The Overreach of Stupid Science, Part III

The Eclipse of Metaphysics

[Part III of The Folly of Scientism by Austin L. Hughes
Originally published in The New Atlantis]

There are at least three areas of inquiry traditionally in the purview of philosophy that now are often claimed to be best — or only — studied scientifically: metaphysics, epistemology, and ethics. Let us discuss each in turn.

Physicists Stephen Hawking and Leonard Mlodinow open their 2010 book The Grand Design by asking:

What is the nature of reality? Where did all this come from? Did the universe need a creator? … Traditionally these are questions for philosophy, but philosophy is dead. Philosophy has not kept up with modern developments in science, particularly physics. Scientists have become the bearers of the torch of discovery in our quest for knowledge.

Physicists might once have been dismissive of metaphysics as mere speculation, but they would also have regarded such questions as beyond their own realm of expertise. The claims of Hawking and Mlodinow, and many other writers, thus represent a striking departure from that traditional view.

In contrast to these authors’ claims of philosophical obsolescence, there has arisen a curious consilience between the findings of modern cosmology and some traditional understandings of the creation of the universe. For example, theists have noted that the model known as the Big Bang has a certain consistency with the Judeo-Christian notion of creation ex nihilo, a consistency not seen in other cosmologies that postulated an eternally existent universe. (In fact, when the astronomer-priest Georges Lemaître first postulated the theory, he was met with such skepticism by proponents of an eternal universe that the name “Big Bang” was coined by his opponents — as a term of ridicule.) Likewise, many cosmologists have articulated various forms of what is known as the “anthropic principle” — that is, the observation that the basic laws of the universe seem to be “fine-tuned” in such a way as to be favorable to life, including human life.

It is perhaps in part as a response to this apparent consilience that we owe the rise of a large professional and popular literature in recent decades dedicated to theories about multiverses, “many worlds,” and “landscapes” of reality that would seem to restore the lack of any special favoring of humanity. Hawking and Mlodinow, for example, state that

the fine-tunings in the laws of nature can be explained by the existence of multiple universes. Many people through the ages have attributed to God the beauty and complexity of nature that in their time seemed to have no scientific explanation. But just as Darwin and Wallace explained how the apparently miraculous design of living forms could appear without intervention by a supreme being, the multiverse concept can explain the fine-tuning of physical law without the need for a benevolent creator who made the universe for our benefit.

The multiverse theory holds that there are many different universes, of which ours is just one, and that each has its own system of physical laws. The argument Hawking and Mlodinow offer is essentially one from the laws of probability: If there are enough universes, one or more whose laws are suitable for the evolution of intelligent life is more or less bound to occur.
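
Stated in elementary terms, the probabilistic reasoning being invoked here runs roughly as follows (a sketch only; the symbols p and N are illustrative and are not drawn from The Grand Design). If each universe independently has some fixed probability p > 0 of possessing laws suitable for intelligent life, then across N universes the probability that at least one life-permitting universe exists is 1 − (1 − p)^N, which approaches certainty as N grows without bound. The force of the argument therefore rests entirely on the supposition that N is sufficiently large.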

Physicist Lee Smolin, in his 1997 book The Life of the Cosmos, goes one step further by applying the principles of natural selection to a multiverse model. Smolin postulates that black holes give rise to new universes, and that the physical laws of a universe determine its propensity to give rise to black holes. A universe’s set of physical laws thus serves as its “genome,” and these “genomes” differ with respect to their propensity to allow a universe to “reproduce” by creating new universes. For example, it happens that a universe with a lot of carbon is very good at making black holes — and a universe with a lot of carbon is also one favorable to the evolution of life. In order for his evolutionary process to work, Smolin also assumes a kind of mutational mechanism whereby the physical laws of a universe may be slightly modified in progeny universes. For Smolin, then, not only is our universe bound to occur because there have been many rolls of the dice, but the dice are loaded in favor of a universe like ours because it happens to be a particularly “fit” universe.

Though these arguments may do some work in evading the conclusion that our universe is fine-tuned with us in mind, they cannot sidestep, or even address, the fundamental metaphysical questions raised by the fact that something — whether one or many universes — exists rather than nothing. The main fault of these arguments lies in their failure to distinguish between necessary and contingent being. A contingent being is one that might or might not exist, and thus might or might not have certain properties. In the context of modern quantum physics, or population genetics, one might even assign probability values to the existence or non-existence of some contingent being. But a necessary being is one that must exist, and whose properties could not be other than they are.

Multiverse theorists are simply saying that our universe and its laws have merely contingent being, and that other universes are conceivable and so also may exist, albeit contingently. The idea of the contingent nature of our universe may cut against the grain of modern materialism, and so seem novel to many physicists and philosophers, but it is not in fact new. Thomas Aquinas, for example, began the third of his famous five proofs of the existence of God (a being “necessary in itself”) with the observation of contingent being (“we find among things certain ones that might or might not be”). Whether or not one is convinced by Aquinas, it should be clear that the “discovery” that our universe is a contingent event among other contingent events is perfectly consistent with his argument.

Writers like Hawking, Mlodinow, and Smolin, however, use the contingent nature of our universe and its laws to argue for a very different conclusion from that of Aquinas — namely, that some contingent universe (whether or not it turned out to be our own) must have come into being, without the existence of any necessary being. Here again probability is essential to the argument: While any universe with a particular set of laws may be very improbable, with enough universes out there the existence of at least one such universe becomes highly probable. This is the same principle behind the fact that, when I toss a coin, even though there is some probability that I will get heads and some probability that I will get tails, it is certain that I will get heads or tails. Similarly, modern theorists imply, the multiverse has necessary being even though any given universe does not.

The problem with this argument is that certainty in the sense of probability is not the same thing as necessary being: If I toss a coin, it is certain that I will get heads or tails, but that outcome depends on my tossing the coin, which I may not necessarily do. Likewise, any particular universe may follow from the existence of a multiverse, but the existence of the multiverse remains to be explained. In particular, the universe-generating process assumed by some multiverse theories is itself contingent because it depends on the action of laws assumed by the theory. The latter might be called meta-laws, since they form the basis for the origin of the individual universes, each with its own individual set of laws. So what determines the meta-laws? Either we must introduce meta-meta-laws, and so on in infinite regression, or we must hold that the meta-laws themselves are necessary — and so we have in effect just changed our understanding of what the fundamental universe is to one that contains many universes. In that case, we are still left without ultimate explanations as to why that universe exists or has the characteristics it does.

When it comes to such metaphysical questions, science and scientific speculation may offer much in fleshing out details, but they have so far failed to offer any explanations that are fundamentally novel to philosophy — much less have they supplanted it entirely.

The Overreach of Stupid Science, Part II

The Abdication of the Philosophers

[Part II of The Folly of Scientism by Austin L. Hughes
Originally published in The New Atlantis]

If philosophy is regarded as a legitimate and necessary discipline, then one might think that a certain degree of philosophical training would be very useful to a scientist. Scientists ought to be able to recognize how often philosophical issues arise in their work — that is, issues that cannot be resolved by arguments that make recourse solely to inference and empirical observation. In most cases, these issues arise because practising scientists, like all people, are prone to philosophical errors.

To take an obvious example, scientists can be prone to errors of elementary logic, and these can often go undetected by the peer review process and have a major impact on the literature — for instance, confusing correlation and causation, or confusing implication with a biconditional. Philosophy can provide a way of understanding and correcting such errors. It addresses a largely distinct set of questions that natural science alone cannot answer, but that must be answered for natural science to be properly conducted.

These questions include how we define and understand science itself.

One group of theories of science — the set that best supports a clear distinction between science and philosophy, and a necessary role for each — can broadly be classified as “essentialist.” These theories attempt to identify the essential traits that distinguish science from other human activities, or differentiate true science from nonscientific and pseudoscientific forms of inquiry. Among the most influential and compelling of these is Karl Popper’s criterion of falsifiability outlined in The Logic of Scientific Discovery (1959).

A falsifiable theory is one that makes a specific prediction about what results are supposed to occur under a set of experimental conditions, so that the theory might be falsified by performing the experiment and comparing predicted to actual results. A theory or explanation that cannot be falsified falls outside the domain of science. For example, Freudian psychoanalysis, which does not make specific experimental predictions, is able to revise its theory to match any observations, in order to avoid rejecting the theory altogether. By this reckoning, Freudianism is a pseudoscience, a theory that purports to be scientific but is in fact immune to falsification. In contrast, for example, Einstein’s theory of relativity made predictions (like the bending of starlight around the sun) that were novel and specific, and provided opportunities to disprove the theory by direct experimental observation. Advocates of Popper’s definition would seem to place on the same level as pseudoscience or nonscience every statement — of metaphysics, ethics, theology, literary criticism, and indeed daily life — that does not meet the criterion of falsifiability.

The criterion of falsifiability is appealing in that it highlights similarities between science and the trial-and-error methods we use in everyday problem-solving. If I have misplaced my keys, I immediately begin to construct scenarios — hypotheses, if you will — that might account for their whereabouts: Did I leave them in the ignition or in the front door lock? Were they in the pocket of the jeans I put in the laundry basket? Did I drop them while mowing the lawn? I then proceed to evaluate these scenarios systematically, by testing predictions that I would expect to be true under each scenario — in other words, by using a sort of Popperian method. The everyday, commonsense nature of the falsifiability criterion has the virtue of showing how science is grounded in basic ideas of rationality and observation, and thereby of stripping away from science the aura of sacred mystery with which some would seek to surround it.

An additional strength of the falsifiability criterion is that it makes possible a clear distinction between science properly speaking and the opinions of scientists on nonscientific subjects. We have seen in recent years a growing tendency to treat as “scientific” anything that scientists say or believe. The debates over stem cell research, for example, have often been described, both within the scientific community and in the mass media, as clashes between science and religion. It is true that many, but by no means all, of the most vocal defenders of embryonic stem cell research were scientists, and that many, but by no means all, of its most vocal opponents were religious. But in fact, there was little science being disputed: the central controversy was between two opposing views on a particular ethical dilemma, neither of which was inherently more scientific than the other. If we confine our definition of the scientific to the falsifiable, we clearly will not conclude that a particular ethical view is dictated by science just because it is the view of a substantial number of scientists. The same logic applies to the judgments of scientists on political, aesthetic, or other nonscientific issues. If a poll shows that a large majority of scientists prefers neutral colors in bathrooms, for example, it does not follow that this preference is “scientific.”

Popper’s falsifiability criterion and similar essentialist definitions of science highlight the distinct but vital roles of both science and philosophy. The definitions show the necessary role of philosophy in undergirding and justifying science — protecting it from its potential for excess and self-devolution by, among other things, proposing clear distinctions between legitimate scientific theories and pseudoscientific theories that masquerade as science.

In contrast to Popper, many thinkers have advanced understandings of philosophy and science that blur such distinctions, resulting in an inflated role for science and an ancillary one for philosophy. In part, philosophers have no one but themselves to blame for the low state to which their discipline has fallen — thanks especially to the logical positivist and analytic strain that has been dominant for about a century in the English-speaking world. For example, the influential twentieth-century American philosopher W. V. O. Quine spoke modestly of a “philosophy continuous with science” and vowed to eschew philosophy’s traditional concern with metaphysical questions that might claim to sit in judgment on the natural sciences. Science, Quine and many of his contemporaries seemed to say, is where the real action is, while philosophers ought to celebrate science from the sidelines.

This attitude has been articulated in the other main group of theories of science, which rivals the essentialist understandings — namely, the “institutional” theories, which identify science with the social institution of science and its practitioners. The institutional approach may be useful to historians of science, as it allows them to accept the various definitions of fields used by the scientists they study. But some philosophers go so far as to use “institutional factors” as the criteria of good science. Ladyman, Ross, and Spurrett, for instance, say that they “demarcate good science — around lines which are inevitably fuzzy near the boundary — by reference to institutional factors, not to directly epistemological ones.” By this criterion, we would differentiate good science from bad science simply by asking which proposals agencies like the National Science Foundation deem worthy of funding, or which papers peer-review committees deem worthy of publication.

The problems with this definition of science are myriad. First, it is essentially circular: science simply is what scientists do. Second, the high confidence in funding and peer-review panels should seem misplaced to anyone who has served on these panels and witnessed the extent to which preconceived notions, personal vendettas, and the like can torpedo even the best proposals. Moreover, simplistically defining science by its institutions is complicated by the ample history of scientific institutions that have been notoriously unreliable.

Consider the decades during which Soviet biology was dominated by the ideologically motivated theories of the agronomist Trofim Lysenko, who rejected Mendelian genetics as inconsistent with Marxism and insisted that acquired characteristics could be inherited. An observer who distinguishes good science from bad science “by reference to institutional factors” alone would have difficulty seeing the difference between the unproductive and corrupt genetics in the Soviet Union and the fruitful research of Watson and Crick in 1950s Cambridge. Can we be certain that there are not sub-disciplines of science in which even today most scientists accept without question theories that will in the future be shown to be as preposterous as Lysenkoism? Many working scientists can surely think of at least one candidate — that is, a theory widely accepted in their field that is almost certainly false, even preposterous.

Confronted with such examples, defenders of the institutional approach will often point to the supposedly self-correcting nature of science. Ladyman, Ross, and Spurrett assert that “although scientific progress is far from smooth and linear, it never simply oscillates or goes backwards. Every scientific development influences future science, and it never repeats itself.” Alas, in the thirty or so years I have been watching, I have observed quite a few scientific sub-fields (such as behavioral ecology) oscillating happily and showing every sign of continuing to do so for the foreseeable future. The history of science provides examples of the eventual discarding of erroneous theories. But we should not be overly confident that such self-correction will inevitably occur, nor that the institutional mechanisms of science will be so robust as to preclude the occurrence of long dark ages in which false theories hold sway.

The fundamental problem raised by the identification of “good science” with “institutional science” is that it assumes the practitioners of science to be inherently exempt, at least in the long term, from the corrupting influences that affect all other human practices and institutions. Ladyman, Ross, and Spurrett explicitly state that most human institutions, including “governments, political parties, churches, firms, NGOs, ethnic associations, families … are hardly epistemically reliable at all.” However, “our grounding assumption is that the specific institutional processes of science have inductively established peculiar epistemic reliability.” This assumption is at best naïve and at worst dangerous. If any human institution is held to be exempt from the petty, self-serving, and corrupting motivations that plague us all, the result will almost inevitably be the creation of a priestly caste demanding adulation and required to answer to no one but itself.

It is something approaching this adulation that seems to underlie the abdication of the philosophers and the rise of the scientists as the authorities of our age on all intellectual questions. Reading the work of Quine, Rudolf Carnap, and other philosophers of the positivist tradition, as well as their more recent successors, one is struck by the aura of hero-worship accorded to science and scientists. In spite of their idealization of science, the philosophers of this school show surprisingly little interest in science itself — that is, in the results of scientific inquiry and their potential philosophical implications. As a biologist, I must admit to finding Quine’s constant invocation of “nerve-endings” as an all-purpose explanation of human behavior to be embarrassingly simplistic. Especially given Quine’s intellectual commitment to behaviorism, it is surprising yet characteristic that he had little apparent interest in the actual mechanisms by which the nervous system functions.

Ross, Ladyman, and Spurrett may be right to assume that science possesses a “peculiar epistemic reliability” that is lacking in other forms of inquiry. But they have taken the strange step of identifying that reliability with the institutions and practitioners of science, rather than with any particular rational, empirical, or methodological criterion that scientists are bound (but often fail) to uphold. Thus a (largely justifiable) admiration for the work of scientists has led to a peculiar, unjustified role for scientists themselves — so that, increasingly, what is believed by scientists and the public to be “scientific” is simply any claim that is upheld by many scientists, or that is based on language and ideas that sound sufficiently similar to scientific theories.

Secular Theology

The Religious Foundation of Modern Science

It was from the intellectual ferment brought about by the merging of Greek philosophy and Judeo-Islamic-Christian thought that modern science emerged with its unidirectional linear time (rather than cyclic), its insistence on nature’s rationality, and its emphasis on mathematical principles.  All the early scientists such as Newton were religious in one way or another. . . . In the ensuing 300 years, the theological dimension of science has faded.  People take for granted that the physical world is both ordered and intelligible.

The underlying order in nature–the laws of physics–is simply accepted as given, as brute fact.  Nobody asks where the laws come from–at least they don’t in polite company.  However, even the most atheistic science accepts as an act of faith (i.e. an assumption) the existence of a law-like order in nature that is at least in part comprehensible to us.  So science can proceed only if the scientist adopts an essentially theological worldview.  [Paul Davies, “Design in Physics and Cosmology,” in God and Design: The Teleological Argument and Modern Science, ed. Neil A. Manson (London: Routledge, 2003), p. 148.]

Kuhn’s Revolution

Fifty Years On

Several decades ago during post-grad studies we were introduced to Thomas Kuhn’s The Structure of Scientific Revolutions.  Kuhn was working as an academic physicist and was asked to teach a course on science for humanities students.  Very cleverly, Kuhn decided to teach the course looking at the history of science–that is, science from a humanities perspective.

His research led him to the discoveries he summarises in The Structure.  He demonstrated that in the history of science, knowledge has been communal, not objective.  It is socially conditioned.  These ideas are now accepted by all but the most purblind.  Kuhn, of course, was not researching and writing in a vacuum.  Wittgenstein had already laid the philosophical foundations for this thesis.  Michael Polanyi had already articulated the idea that all scientific knowledge was personal knowledge.  Kuhn was building upon these notions.

Fifty years have now passed since The Structure first appeared.  John Naughton, writing in the Observer, chronicles the origins and influence of the work that reverberates to this day.

Thomas Kuhn: the man who changed the way the world looked at science 

John Naughton

Fifty years ago, a book by Thomas Kuhn altered the way we look at the philosophy behind science, as well as introducing the much abused phrase ‘paradigm shift’

Fifty years ago this month, one of the most influential books of the 20th century was published by the University of Chicago Press. Many if not most lay people have probably never heard of its author, Thomas Kuhn, or of his book, The Structure of Scientific Revolutions, but their thinking has almost certainly been influenced by his ideas. The litmus test is whether you’ve ever heard or used the term “paradigm shift”, which is probably the most used – and abused – term in contemporary discussions of organisational change and intellectual progress.

A Google search for it returns more than 10 million hits, for example. And it currently turns up inside no fewer than 18,300 of the books marketed by Amazon. It is also one of the most cited academic books of all time. So if ever a big idea went viral, this is it.

The real measure of Kuhn’s importance, however, lies not in the infectiousness of one of his concepts but in the fact that he singlehandedly changed the way we think about mankind’s most organised attempt to understand the world. Before Kuhn, our view of science was dominated by philosophical ideas about how it ought to develop (“the scientific method”), together with a heroic narrative of scientific progress as “the addition of new truths to the stock of old truths, or the increasing approximation of theories to the truth, and in the odd case, the correction of past errors”, as the Stanford Encyclopaedia of Philosophy puts it. Before Kuhn, in other words, we had what amounted to the Whig interpretation of scientific history, in which past researchers, theorists and experimenters had engaged in a long march, if not towards “truth”, then at least towards greater and greater understanding of the natural world.

Kuhn’s version of how science develops differed dramatically from the Whig version. Where the standard account saw steady, cumulative “progress”, he saw discontinuities – a set of alternating “normal” and “revolutionary” phases in which communities of specialists in particular fields are plunged into periods of turmoil, uncertainty and angst. These revolutionary phases – for example the transition from Newtonian mechanics to quantum physics – correspond to great conceptual breakthroughs and lay the basis for a succeeding phase of business as usual. The fact that his version seems unremarkable now is, in a way, the greatest measure of his success. But in 1962 almost everything about it was controversial because of the challenge it posed to powerful, entrenched philosophical assumptions about how science did – and should – work.

What made it worse for philosophers of science was that Kuhn wasn’t even a philosopher: he was a physicist, dammit. Born in 1922 in Cincinnati, he studied physics at Harvard, graduating summa cum laude in 1943, after which he was swept up by the war effort to work on radar. He returned to Harvard after the war to do a PhD – again in physics – which he obtained in 1949. He was then elected into the university’s elite Society of Fellows and might have continued to work on quantum physics until the end of his days had he not been commissioned to teach a course on science for humanities students as part of the General Education in Science curriculum. This was the brainchild of Harvard’s reforming president, James Conant, who believed that every educated person should know something about science.

The course was centred around historical case studies and teaching it forced Kuhn to study old scientific texts in detail for the first time. (Physicists, then as now, don’t go in much for history.) Kuhn’s encounter with the scientific work of Aristotle turned out to be a life- and career-changing epiphany.

“The question I hoped to answer,” he recalled later, “was how much mechanics Aristotle had known, how much he had left for people such as Galileo and Newton to discover. Given that formulation, I rapidly discovered that Aristotle had known almost no mechanics at all… that conclusion was standard and it might in principle have been right. But I found it bothersome because, as I was reading him, Aristotle appeared not only ignorant of mechanics, but a dreadfully bad physical scientist as well. About motion, in particular, his writings seemed to me full of egregious errors, both of logic and of observation.”

What Kuhn had run up against was the central weakness of the Whig interpretation of history. By the standards of present-day physics, Aristotle looks like an idiot. And yet we know he wasn’t. Kuhn’s blinding insight came from the sudden realisation that if one is to understand Aristotelian science, one must know about the intellectual tradition within which Aristotle worked. One must understand, for example, that for him the term “motion” meant change in general – not just the change in position of a physical body, which is how we think of it. Or, to put it in more general terms, to understand scientific development one must understand the intellectual frameworks within which scientists work. That insight is the engine that drives Kuhn’s great book.

Kuhn remained at Harvard until 1956 and, having failed to get tenure, moved to the University of California at Berkeley where he wrote Structure… and was promoted to a professorship in 1961. The following year, the book was published by the University of Chicago Press. Despite the 172 pages of the first edition, Kuhn – in his characteristic, old-world scholarly style – always referred to it as a mere “sketch”. He would doubtless have preferred to have written an 800-page doorstop.

But in the event, the readability and relative brevity of the “sketch” was a key factor in its eventual success. Although the book was a slow starter, selling only 919 copies in 1962-3, by mid-1987 it had sold 650,000 copies and sales to date now stand at 1.4 million copies. For a cerebral work of this calibre, these are Harry Potter-scale numbers.

Kuhn’s central claim is that a careful study of the history of science reveals that development in any scientific field happens via a series of phases. The first he christened “normal science” – business as usual, if you like. In this phase, a community of researchers who share a common intellectual framework – called a paradigm or a “disciplinary matrix” – engage in solving puzzles thrown up by discrepancies (anomalies) between what the paradigm predicts and what is revealed by observation or experiment. Most of the time, the anomalies are resolved either by incremental changes to the paradigm or by uncovering observational or experimental error. As philosopher Ian Hacking puts it in his terrific preface to the new edition of Structure: “Normal science does not aim at novelty but at clearing up the status quo. It tends to discover what it expects to discover.”

The trouble is that over longer periods unresolved anomalies accumulate and eventually get to the point where some scientists begin to question the paradigm itself. At this point, the discipline enters a period of crisis characterised by, in Kuhn’s words, “a proliferation of compelling articulations, the willingness to try anything, the expression of explicit discontent, the recourse to philosophy and to debate over fundamentals”.

In the end, the crisis is resolved by a revolutionary change in world-view in which the now-deficient paradigm is replaced by a newer one. This is the paradigm shift of modern parlance and after it has happened the scientific field returns to normal science, based on the new framework. And so it goes on.  This brutal summary of the revolutionary process does not do justice to the complexity and subtlety of Kuhn’s thinking. To appreciate these, you have to read his book. But it does perhaps indicate why Structure… came as such a bombshell to the philosophers and historians who had pieced together the Whig interpretation of scientific progress.

As an illustration, take Kuhn’s portrayal of “normal” science. The most influential philosopher of science in 1962 was Karl Popper, described by Hacking as “the most widely read, and to some extent believed, by practising scientists”. Popper summed up the essence of “the” scientific method in the title of one of his books: Conjectures and Refutations. According to Popper, real scientists (as opposed to, say, psychoanalysts) were distinguished by the fact that they tried to refute rather than confirm their theories. And yet Kuhn’s version suggested that the last thing normal scientists seek to do is to refute the theories embedded in their paradigm!

Many people were also enraged by Kuhn’s description of most scientific activity as mere “puzzle-solving” – as if mankind’s most earnest quest for knowledge was akin to doing the Times crossword. But in fact these critics were over-sensitive. A puzzle is something to which there is a solution. That doesn’t mean that finding it is easy or that it will not require great ingenuity and sustained effort. The unconscionably expensive quest for the Higgs boson that has recently come to fruition at Cern, for example, is a prime example of puzzle-solving because the existence of the particle was predicted by the prevailing paradigm, the so-called “standard model” of particle physics.

But what really set the cat among the philosophical pigeons was one implication of Kuhn’s account of the process of paradigm change. He argued that competing paradigms are “incommensurable”: that is to say, there exists no objective way of assessing their relative merits. There’s no way, for example, that one could make a checklist comparing the merits of Newtonian mechanics (which applies to snooker balls and planets but not to anything that goes on inside the atom) and quantum mechanics (which deals with what happens at the sub-atomic level). But if rival paradigms are really incommensurable, then doesn’t that imply that scientific revolutions must be based – at least in part – on irrational grounds? In which case, are not the paradigm shifts that we celebrate as great intellectual breakthroughs merely the result of outbreaks of mob psychology?

Kuhn’s book spawned a whole industry of commentary, interpretation and exegesis. His emphasis on the importance of communities of scientists clustered round a shared paradigm essentially triggered the growth of a new academic discipline – the sociology of science – in which researchers began to examine scientific disciplines much as anthropologists studied exotic tribes, and in which science was regarded not as a sacred, untouchable product of the Enlightenment but as just another subculture.

As for his big idea – that of a “paradigm” as an intellectual framework that makes research possible – well, it quickly escaped into the wild and took on a life of its own. Hucksters, marketers and business school professors adopted it as a way of explaining the need for radical changes of world-view in their clients. And social scientists saw the adoption of a paradigm as a route to respectability and research funding, which in due course led to the emergence of pathological paradigms in fields such as economics, which came to esteem mastery of mathematics over an understanding of how banking actually works, with the consequences that we now have to endure.

The most intriguing idea, however, is to use Kuhn’s thinking to interpret his own achievement. In his quiet way, he brought about a conceptual revolution by triggering a shift in our understanding of science from a Whiggish paradigm to a Kuhnian one, and much of what is now done in the history and philosophy of science might be regarded as “normal” science within the new paradigm. But already the anomalies are beginning to accumulate. Kuhn, like Popper, thought that science was mainly about theory, but an increasing amount of cutting-edge scientific research is data- rather than theory-driven. And while physics was undoubtedly the Queen of the Sciences when Structure… was being written, that role has now passed to molecular genetics and biotechnology. Does Kuhn’s analysis hold good for these new areas of science? And if not, isn’t it time for a paradigm shift?

In the meantime, if you’re making a list of books to read before you die, Kuhn’s masterwork is one.

Eruptions and Vented Spleen over Charter Schools

Turning Up the Heat

The brouhaha over charter schools (called Partnership Schools in New Zealand) is merrily spewing forth ash clouds reminiscent of the recent eruption at Mount Tongariro.  We have had one Robin Duff, head of a teachers’ union, protesting the very idea that taxpayers’ money would be used to fund a school which taught the biblical doctrine of creation.

The Post Primary Teachers Association has concerns about public money funding religious activities in schools, and president Robin Duff said the types of people who appeared to be interested in charter schools, would not have made it through teacher education. “In the case of the trust, we’d be concerned if an organisation with a ‘statement of faith’ that denies evolution and claims creation according to the Bible is a historical event, were to receive state-funding.”

“Given the criticism of public schools over the quality of science teaching, you’d think they’d have concerns about taxpayer dollars being used to fund religious indoctrination rather than education, but apparently not.”

This sounds horrendous.  Robby’s problem is that creation is “unscientific”.  He lambasts those who would teach children fairy stories and myths in place of good old hard science.  And to add insult to injury, the gummint is going to fund it.

Let’s unpack the spleen, bit by bit.