Over the last week, I’ve been involved in an argument with another blogger, Justin Martyr, after Larry Niven linked us both to one of his posts. I challenged Justin (on his own blog) over his comments on ID (Intelligent Design), contending that his version was effectively a ‘God-of-the-gaps’ argument. Don’t read the thread – it becomes tiresome.
Justin tended to take the argument in all sorts of directions, and I tended to follow, but it ultimately became focused on Popper’s criterion of falsifiability for a scientific theory. First of all, notice that I use the word falsifiability (which isn’t even in the dictionary) whereas Justin used the word falsification. It’s a subtle difference, but it highlights a difference in interpretation. It also highlighted to me that some people don’t understand what Popper’s criterion really means, or why it’s so significant in scientific epistemology.
I know that, for some of you who read this blog, this will be boring, but, for others, it may be enlightening. Popper originally proposed his criterion to eliminate pseudo-scientific theories (he was targeting Freud at the time) whereby the theory is always true for all answers and all circumstances, no matter what the evidence. The best contemporary examples are creationism and ID, because God can explain everything, no matter what the evidence entails. There is no test, experiment or observation one can do that will eliminate God as a hypothesis. On the other hand, there are lots of tests and observations (many of which have been done) that could eliminate evolutionary theory.
As an aside, bringing God into science stops science, which is an argument I once had with William Lane Craig and posted as The God hypothesis (Dec.08).
When scientists and philosophers first cited Popper’s criterion as a reason for rejecting creationism as ‘science’, many creationists (like Duane T. Gish, for example) claimed that evolution can’t be a valid scientific theory either, as no one has ever observed evolution taking place: it’s pure conjecture. So this was the first hurdle of misunderstanding: evolutionary theory can generate hypotheses that can be tested. If those hypotheses weren’t falsifiable, then Gish would have had a case. The point is that all the discoveries made since Darwin and Wallace postulated their theory of natural selection have only confirmed the theory.
Now, this is where some people, like Justin, for example, think Popper’s specific criterion of ‘falsification’ should really be ‘verification’. They would argue that all scientific theories are verified, not falsified, so Popper’s criterion has it backwards. But the truth is you can’t have one without the other. The important point is that the evidence is not neutral. In the case of evolution, the same palaeontological and genetic evidence that has proved evolutionary theory correct could just as readily have proven it wrong – which is what you would expect if the theory were wrong.
Justin made a big deal about me using the word testable (for a theory) in lieu of the word falsification, as if they referred to different criteria. But a test is not a test if it can’t be failed. So Popper was saying that a theory has to be put at risk to be a valid theory: if you can’t, in principle, prove the theory wrong, then it has no validity in science.
Another example of a theory that can’t be tested is string theory, but for different reasons. String theory is not considered pseudo-science because it has a very sound mathematical basis, but it has effectively been stagnant for the last 20 years, despite some of the best brains in the world working on it. In principle, it does meet Popper’s criterion, because it makes specific predictions, but in practice those predictions are beyond our current technological abilities to either confirm or reject.
As I’ve said in previous posts, science is a dialectic between theory and experimentation or observation. String theory is an example where half the dialectic is missing (refer my post on Layers of nature, May.09). This means science is epistemologically dynamic, which leads to another misinterpretation of Popper’s criterion. In effect, any theory is contingent on possibly being proved incorrect, and we find that, after years of confirmation, some theories are proved incorrect under certain circumstances. The best-known example would be Newton’s theories of mechanics and gravity being overtaken by Einstein’s special and general theories of relativity. Actually, Einstein didn’t prove Newton’s theories wrong so much as demonstrate their epistemological limitations. In fact, if Einstein’s equations couldn’t be reduced to Newton’s equations (in the limit where speeds are small compared with the speed of light, c) then he would have had to reject them.
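To illustrate that reduction (my own sketch, not part of the original argument): expanding the relativistic kinetic energy for speeds small compared with c recovers Newton’s familiar formula.

```latex
% Relativistic kinetic energy:
T = (\gamma - 1)\,m c^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}

% For v \ll c, the binomial expansion gives \gamma \approx 1 + \frac{v^2}{2c^2}, so:
T \approx \left(1 + \frac{v^2}{2c^2} - 1\right) m c^2 = \tfrac{1}{2} m v^2
```

The correction terms are of order v⁴/c², which is why Newton’s mechanics works so well at everyday speeds.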
Thomas Kuhn held the philosophical position that science proceeds by revolutions, and Einstein’s theories are often cited as an example of Kuhn’s thesis in action. Some philosophers of science (Steve Fuller, for example) have argued that Kuhn’s and Popper’s positions are at odds, but I disagree. Both Newton’s and Einstein’s theories fulfil Popper’s criterion of falsifiability, and both have been verified by empirical evidence. It’s just that Einstein’s theories take over from Newton’s when certain parameters become dominant. We also have quantum mechanics, which effectively puts them both in the shade, but no one uses a quantum mechanical equation, or even a relativistic one, when a Newtonian one will suffice.
Kuhn effectively said that scientific revolutions come about when the evidence for a theory becomes inexplicable to the extent that a new theory is required. This is part of the dialectic that I referred to, but the theory part of the dialectic always has to make predictions that the evidence part can verify or reject.
Justin also got caught up in believing that the methodology determines whether a theory is falsifiable or not, claiming that some analyses, like Bayesian probabilities for example, are impossible to falsify. I’m not overly familiar with Bayesian probabilities, but I know that they involve an iterative process, whereby a result is fed back into the equation, which hones the result. Justin was probably under the impression that this honing in on a more accurate result made it an unfalsifiable technique. But, actually, it’s all dependent on the input data. Bruce Bueno de Mesquita, whom New Scientist claims is the most successful predictor in the world, uses Bayesian techniques along with game theory to make predictions. But a prediction is falsifiable by definition, otherwise it’s not a prediction. It’s the evidence that determines if the prediction is true or false, not the method one uses to make the prediction.
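For readers unfamiliar with the mechanics, here is a minimal sketch of that iterative process (my own illustration, nothing to do with de Mesquita’s actual models): each posterior probability is fed back in as the prior for the next piece of evidence, via Bayes’ rule.

```python
# Iterative Bayesian updating: the posterior from one step becomes
# the prior for the next, so each new piece of evidence hones the result.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) from a prior P(H) and the two likelihoods of the evidence."""
    numerator = p_e_given_h * prior
    total_evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / total_evidence

# Start agnostic about hypothesis H, then observe three pieces of evidence,
# each twice as likely if H is true (0.8) as if it is false (0.4).
posterior = 0.5
for _ in range(3):
    posterior = bayes_update(posterior, p_e_given_h=0.8, p_e_given_not_h=0.4)

print(round(posterior, 3))  # prints 0.889
```

Note that the conclusion is entirely driven by the input data: feed in evidence that is *less* likely under H and the same machinery pushes the probability down, which is exactly why the technique doesn’t insulate a hypothesis from being proven wrong.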
In summary: a theory makes predictions, which could be right or wrong. It’s the evidence that should decide whether the theory is right or wrong; not the method by which one makes the prediction (a mathematical formula, for example); nor the method by which one gains the evidence (the experimental design). And it’s the right or wrong part that defines falsifiability as the criterion.
To give Justin due credit, he allowed me the last word on his blog.
Footnote: for a more esoteric discussion on Steve Fuller’s book, Kuhn vs. Popper: The Struggle for the Soul of Science, in a political context, I suggest the following. My discussion is far more prosaic and pragmatic in approach, not to mention, un-academic.
Addendum: (29 March 2010) Please read April's comment below, which points out the errors in this post concerning Popper's own point of view.
Addendum 2: This is one post where the dialogue in the comments (below) is probably more informative than the post, owing to contributors knowing more about Popper than I do, which I readily acknowledge.
Addendum 3: (18 Feb. 2012) Here is an excellent biography of Popper in Philosophy Now, with particular emphasis on his contribution to the philosophy of science.