Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Sunday 9 May 2010

Aerodynamics demystified

I know: you don’t believe me; but you haven’t read Henk Tennekes’s book, The Simple Science of Flight: From Insects to Jumbo Jets. This is a book that should be taught in high school, not to get people into the aerospace industry, but to demonstrate how science works in the real world. It is probably the best example I’ve come across, ever.

I admit, I have an advantage with this book, because I have an engineering background, but the truth is that anyone, with a rudimentary high school education in mathematics, should be able to follow this book. By rudimentary, I mean you don’t need to know calculus, just how to manipulate basic equations. Aerodynamics is one of the most esoteric subjects on the planet – all the more reason that Tennekes’s book should be part of a high school curriculum. It demonstrates the availability of science to the layperson better than any book I’ve read on a single subject.

Firstly, you must appreciate that mathematics is about the relationship between numbers rather than the numbers themselves. This is why an equation can be written without any numbers at all, but with symbols (letters of the alphabet) representing numbers. The numbers can have any value as long as the relationship between them is dictated by the equation. So, for an equation containing 3 symbols, if you know 2 of the values, you can work out the third. Elementary, really. To give an example from Tennekes’s book:

W/S = 0.38 V²

Where W is the weight of the flying object (in newtons), S is the area of the wing (in square metres) and V is cruising speed (in metres per second). The factor 0.38 depends on the angle of attack of the wing (average 6 degrees) and on the density of the medium (air at sea level, roughly 1.25 kg/m³). What Tennekes reveals graphically is that you can apply this equation to everything from a fruit fly (Drosophila melanogaster) to an Airbus A380 on what he calls The Great Flight Diagram (see bottom of post). (Mind you, his graph is logarithmic along both axes, but that’s being academic, quite literally.)
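Rearranged for V, the rule becomes a one-line cruising-speed estimator. Here is a minimal sketch in Python; the glider's weight and wing area are purely illustrative numbers of my own, not values from the book:

```python
import math

def cruise_speed(weight_newtons, wing_area_m2, k=0.38):
    """Cruising speed (m/s) from Tennekes's rule W/S = k * V**2,
    rearranged to V = sqrt((W/S) / k).  k = 0.38 holds near sea level."""
    wing_loading = weight_newtons / wing_area_m2  # N/m^2
    return math.sqrt(wing_loading / k)

# Illustrative only: a 10 N model glider with 0.5 m^2 of wing
print(round(cruise_speed(10, 0.5), 1))  # 7.3 m/s
```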

I’ve used a small sleight-of-hand here, because the equation for the graph is actually:

W/S = c × W^(1/3)

W/S (weight divided by wing area, which gives pressure) is called ‘wing loading’ and is proportional to the cube root of the weight, which is a direct consequence of the first equation (that I haven’t explained, though Tennekes does). Tennekes’s ‘Great Flight Diagram’ employs the second equation, but gives V (flight cruise speed) as one of the axes (horizontal) against weight (vertical axis); both logarithmic as I said. At the risk of confusing you, the second equation graphs better (it gives a straight line on a logarithmic scale), but the relationships of both equations are effectively entailed in the one graph, because W, W/S and V can all be read from it.
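Combining the two rules gives 0.38 V² = c × W^(1/3), so V grows only as the sixth root of weight, which is why the diagram can span roughly twelve orders of magnitude in weight but only about two in speed. A small sketch of that consequence (the weights are illustrative, not from the book):

```python
# W/S = 0.38 * V**2 and W/S = c * W**(1/3) together imply
# 0.38 * V**2 = c * W**(1/3), i.e. V = (c / 0.38)**0.5 * W**(1/6):
# cruising speed grows only as the sixth root of weight.

def speed_ratio(w_heavy, w_light):
    """How much faster the heavier flyer cruises, from V ∝ W^(1/6)."""
    return (w_heavy / w_light) ** (1 / 6)

# Twelve orders of magnitude in weight -> only about two in speed:
print(round(speed_ratio(1e12, 1.0)))  # 100
```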

I was amazed that one equation could virtually cover the entire range of flight dynamics for winged objects on the planet. The equations also effectively explain the range of flight dynamics that nature allows to take place. The heavier something is, the faster it has to fly to stay in the air, which is why 747s consistently fly 200 times faster than fruit flies. The equation shows that there is a relationship between weight, wing area and air speed at all scales, and while that relationship can be stretched it has limits. Flyers (both natural and artificial) left of the curve are slow for their size and ones to the right are fast for their size – they represent the effective limits. (A line on a graph is called a ‘curve’, even if it’s straight, to distinguish it from a grid-line.) So a side-benefit of the book is that it provides a demonstration of how mathematics is not only a tool of analysis, but how it reveals nature’s limits within a specific medium – in this case, air in earth’s gravitational field. It reminded me of why I fell in love with physics when I was in high school – nature’s secrets revealed through mathematics.

The iconic Supermarine Spitfire is one of the few that is right on the curve, but, as Tennekes points out, it was built to climb fast as an interceptor, not for outright speed.

Now, those who know more about this subject than I do may ask: what about Reynolds numbers? Well, I know Reynolds numbers are used by aeronautical engineers to scale up aerodynamic phenomena from the small-scale models they use in wind tunnels to full-scale aeroplanes. Tennekes conveniently leaves this out, but then he’s not explaining how we use models to provide data for their full-scale equivalents – he’s explaining what actually happens at every scale. So speed increases with weight and therefore scale – we are not looking for a conversion factor to take us from one scale to another, which is what Reynolds numbers provide. (Actually, there’s a lot more to Reynolds numbers than that, but it’s beyond my intellectual ken.) I’m not an aeronautical engineer, though I did work in a very minor role on the design of a wind tunnel once. By minor role, I mean I took the minutes of the meetings held by the real experts.

When I was in high school, I was told that winged flight was all explained by the Bernoulli effect, which Tennekes describes as a ‘polite fiction’. That little gem alone makes Tennekes’s book a worthwhile addition to any school’s science library.

But the real value in this book comes when he starts to talk about migrating birds and the relationship between energy and flight. Not only does he compare aeroplanes with other forms of transport, thus explaining why flight is the most economical means of travel over long distances, as nature has already proven with birds, but he analyses what it takes for the longest-flying birds to achieve their goals, and how they live at the limit of what nature allows them to do. Again, he uses mathematics that the reader can work through for themselves, converting calories from food into muscle power into flight speed and distance, to verify that the best-travelled migratory birds don’t cheat nature, but live at its limits.

The most extraordinary example is the bar-tailed godwit, which flies across the entire Pacific Ocean from Alaska to New Zealand and to Australia’s Northern Territory – a total of 11,000 km (7,000 miles) non-stop. It’s such a feat that Tennekes claims it requires a rethink on the metabolic efficiency of these birds’ muscles, and he provides the numbers to support his argument. He also explains how birds can convert fat directly into energy for muscles, something we can’t do (we have to convert it into sugar first). Some migratory birds even begin to atrophy their wing muscles and heart muscles to extend their trip – they literally burn up their own muscles for fuel.

So he combines physics with biology with zoology with mathematics, all in one chapter, on one specific subject: bird migration.

He uses another equation, along with a graphic display of vectors, that explains how flapping wings work on exactly the same principle as ice skating in humans. What’s more, he doesn’t even tell the reader that he’s working with vectors, or use trigonometry to explain it, yet anyone would be able to understand the connection. That’s just brilliant exposition.

In a nutshell (without the diagrams) power equals force times speed: P=FV. For the same amount of Power, you can have a large Force and small Velocity or the converse.

In other words, a large force times a small velocity can be transformed into a small force with a large velocity, with very little energy loss if friction is minimalised. This applies to both skaters and birds. The large force, in skating, is your leg pushing sideways against your skate, with a small sideways velocity, resulting in a large velocity forwards, from a small force on the skate backwards. Because the skate is at a slight angle, the force sideways (from your leg) is much greater than the force backwards, but it translates into a high velocity forwards.
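The skating trade-off can be sketched with the little bit of trigonometry that Tennekes deliberately spares his readers. The push force, sideways speed and blade angle below are assumed, illustrative values of mine:

```python
import math

# A skate blade angled theta from the line of travel converts a large
# sideways push at a small sideways speed into a small backward force
# at a large forward speed.  With friction ignored, P = F * V is the
# same on both sides of the conversion.

def skate(f_side, v_side, theta_deg):
    t = math.tan(math.radians(theta_deg))
    f_back = f_side * t      # small propulsive force
    v_forward = v_side / t   # large forward speed
    return f_back, v_forward

# Illustrative: 300 N sideways push, 0.5 m/s sideways, 10-degree blade
f_back, v_fwd = skate(300.0, 0.5, 10)
print(math.isclose(f_back * v_fwd, 300.0 * 0.5))  # True: power is conserved
```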

The same applies to birds on their downstroke: a large force vertically, at a slight forward angle, gives a higher velocity forward. Tennekes says that the ratio of wing-tip velocity to forward velocity for birds is typically 1 to 3, though the factor varies between 2 and 4. If a bird wants to fly faster, it doesn’t flap quicker, it increases the amplitude, which, at the same frequency, increases wing-tip speed, which increases forward flight speed. Simple, isn’t it? The sound you hear when pigeons or doves take off vertically is their wing tips actually touching (on both strokes). Actually, what you hear is the whistle of air escaping the closing gap, as a continuous chirp, which reveals their flapping frequency. So when they take off, they don’t double their wing-flapping frequency, they double their wing-flapping amplitude, which doubles their wing-tip speed at the same frequency: the wing tip has to travel double the distance in the same time.
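The amplitude-versus-frequency point can be checked with a toy kinematic model. This is my simplification, not Tennekes's formula, and the amplitude and frequency values are illustrative guesses:

```python
def wing_tip_speed(amplitude_m, frequency_hz):
    """Mean wing-tip speed: each full stroke the tip covers roughly
    2 * amplitude, frequency times per second (a toy model)."""
    return 2 * amplitude_m * frequency_hz

v_normal = wing_tip_speed(0.15, 8)   # illustrative pigeon-ish numbers
v_takeoff = wing_tip_speed(0.30, 8)  # double amplitude, same frequency
print(v_takeoff / v_normal)          # 2.0 -- tip speed doubles
```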

One possible point of confusion is a term Tennekes uses called ‘specific energy consumption’, which is a ratio, not an amount of energy as its name implies. It is used to compare energy consumption, or energy efficiency, between different birds (or planes), irrespective of what units of energy one uses. The inverse of the ratio gives the glide ratio (for both birds and planes), or what the French call ‘finesse’ – a term that has special appeal to Tennekes. So a lower specific energy consumption gives a higher glide ratio, and vice versa, as one would expect.
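A quick sketch of that inverse relationship, with an illustrative number of my own choosing:

```python
# 'Specific energy consumption' is dimensionless: energy spent per unit
# of weight per unit of distance travelled.  Its inverse is the glide
# ratio -- the French 'finesse': metres travelled per metre of height lost.

def glide_ratio(specific_energy_consumption):
    return 1.0 / specific_energy_consumption

# Illustrative: a specific energy consumption of 0.05 means a 20:1 glide
print(glide_ratio(0.05))  # 20.0
```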

Tennekes finally gets into esoteric territory when he discusses drag and vortices, but he’s clever enough to perform an integral without introducing his readers to calculus. He’s even more clever when he derives an equation based on vortices and links it back to the original equation that I referenced at the beginning of this post. Again, he’s demonstrating how mathematics keeps us honest. To give another, completely unrelated example: if Einstein’s general theory of relativity couldn’t be linked to Newton’s general equation of gravity, then Einstein would have had to abandon it. Tennekes does exactly the same thing for exactly the same reason: to show that his new equation agrees with what has already been demonstrated empirically. Although it’s not his equation, but Ludwig Prandtl’s, whom he calls the ‘German grandfather of aerodynamics’.

Prandtl based his equation on an analogy with electromagnetic induction, which Tennekes explains in some detail. Both deal with an induced phenomenon that occurs in a circular loop perpendicular to the core axis. Vortices create drag, but, counterintuitively, this drag actually goes down with speed, which explains why flight is so economical compared to other forms of travel, both for birds and for planes. The drag from vortices is called ‘induced’ drag, not to be confused with ‘frictional’ drag, which does increase with air speed, so at some point there is an optimal speed, and, logically, Tennekes provides the equation that gives us that as well. He also explains how it’s the vortices from wing tips that cause many long-distance flyers, like geese and swans, to fly in V formation. The vortex supplies an updraft just aft of, and adjacent to, the wingtip, which the following bird takes advantage of.
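The optimal-speed idea can be sketched numerically: model total drag as an induced part falling with the square of speed plus a frictional part rising with it, and scan for the minimum. The constants a and b below are invented for illustration, not taken from Tennekes or Prandtl:

```python
# Toy drag model: induced (vortex) drag falls as 1/V**2 while frictional
# drag rises as V**2, so total drag D(V) = a/V**2 + b*V**2 has a minimum
# at V* = (a/b)**0.25.

def total_drag(v, a=1e6, b=0.1):
    return a / v**2 + b * v**2

v_best = min(range(10, 301), key=total_drag)
print(v_best)  # 56 -- close to the analytic (1e6 / 0.1)**0.25 ≈ 56.2
```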

Tennekes uses his equations to explain why human-powered flight is the preserve of professional cyclists, and not a recreational sport like hang-gliding or conventional gliding. Americans apparently use the term sailplane instead of glider, and Tennekes uses both without explaining that he’s referring to the same thing.

Tennekes reveals that his doctoral thesis (in 1964) critiqued the Concorde (still on the drawing board back then) as ‘a step backward in the history of aviation.’ This was considered heretical at the time, but not now; history has vindicated him.

The Concorde is now given as an example, in psychology, of how humans are the only species that don’t know when to give up (called the ‘Concorde fallacy’). Unlike other species, humans evaluate the effort they’ve put into an endeavour, and sometimes, the more effort they invest, the more determined they become to succeed. Whether this is a good or bad trait is purely subjective, but it can evolve into a combination of pride, egotism and even denial. In the case of the Concorde, Tennekes likens it to a manifestation of ‘megalomania’, comparable to Howard Hughes’ infamous Spruce Goose.

Tennekes’s favourite plane is the Boeing 747, which is the complete antithesis to the Concorde, in evolutionary terms, and developed at the same time; apparently so it could be converted to a freight plane when supersonic flight became the expected norm. So, in some respects, the 747, and its successors, were an ironic by-product of the Concorde-inspired thinking of the time.

My only criticism of Tennekes is that he persistently refers to a budgerigar as a parakeet. This is parochialism on my part: in Australia, where they are native, we call them budgies.

Great Flight diagram.

Sunday 11 April 2010

To have or not to have free will

In some respects this post is a continuation of the last one. The following week’s issue of New Scientist (3 April 2010) had a cover story on ‘Frontiers of the Mind’ covering what it called Nine Big Brain Questions. One of these addressed the question of free will, which happened to be where my last post ended. In the commentary on question 8: How Powerful is the Subconscious? New Scientist refers to well-known studies demonstrating that neuron activity precedes conscious decision-making by 50 milliseconds. In fact, John-Dylan Haynes of the Bernstein Centre for Computational Neuroscience, Berlin, has ‘found brain activity up to 10 seconds before a conscious decision to move [a finger].’ To quote Haynes: “The conscious mind is not free. What we think of as ‘free will’ is actually found in the subconscious.”

New Scientist actually reported Haynes' work in this field back in their 19 April 2008 issue. Curiously, in the same issue, they carried an interview with Jill Bolte Taylor, who was recovering from a stroke, and claimed that she "was consciously choosing and rebuilding my brain to be what I wanted it to be". I wrote to New Scientist at the time, and the letter can still be found on the Net:

You report John-Dylan Haynes finding it possible to detect a decision to press a button up to 7 seconds before subjects are aware of deciding to do so (19 April, p 14). Haynes then concludes: "I think it says there is no free will."

In the same issue Michael Reilly interviews Jill Bolte Taylor, who says she "was consciously choosing and rebuilding my brain to be what I wanted it to be" while recovering from a stroke affecting her cerebral cortex (p 42) . Taylor obviously believes she was executing free will.

If free will is an illusion, Taylor's experience suggests that the brain can subconsciously rewire itself while giving us the illusion that it was our decision to make it do so. There comes a point where the illusion makes less sense than the reality.

To add more confusion, during the last week, I heard an interview with Norman Doidge MD, Research psychiatrist at the Columbia University Psychoanalytic Centre and the University of Toronto, who wrote the book, The Brain That Changes Itself. I haven’t read the book, but the interview was all about brain plasticity, and Doidge specifically asserts that we can physically change our brains, just through thought.

What Haynes' experimentation demonstrates is that consciousness is dependent on brain neuronal activity, and that’s exactly the point I made in my last post. Our subconscious becomes conscious when it goes ‘global’, so one would expect a time-lapse between a ‘local’ brain activity (that is subconscious) and the more global brain activity (that is conscious). But the weird part, suggested by Taylor’s experience and Doidge’s assertions, is that our conscious thoughts can also affect our brain at the neuron level. This reminds me of Douglas Hofstadter’s thesis that we are all a ‘strange loop’, which he introduced in his book, Godel, Escher, Bach, and then elaborated on in a book called I am a Strange Loop. I’ve read the former tome but not the latter one (refer my post on AI & Consciousness, Feb.2009).

We will learn more and more about consciousness, I’m sure, but I’m not at all sure that we will ever truly understand it. As John Searle points out in his book, Mind, at the end of the day, it is an experience, and a totally subjective experience at that. In regard to studying it and analysing it, we can only ever treat it as an objective phenomenon. The Dalai Lama makes the same point in his book, The Universe in a Single Atom.

People tend to think about this from a purely reductionist viewpoint: once we understand the correlation between neuron activity and conscious experience, the mystery stops being a mystery. But I disagree: I expect the more we understand, the bigger the mystery will become. If consciousness is no less weird than quantum mechanics, I’ll be very surprised. And we are already seeing quite a lot of weirdness, when consciousness is clearly dependent on neuronal activity, and yet the brain’s plasticity can be affected by conscious thought.

So where does this leave free will? Well, I don’t think that we are automatons, and I admit I would find it very depressing if that were the case. The last of the Nine Questions in last week’s New Scientist asks: will AI ever become sentient? In its response, New Scientist reports on some of the latest developments in AI, where they talk about ‘subconscious’ and ‘conscious’ layers of activity (read software). Raul Arrabales of the Carlos III University of Madrid has developed ‘software agents’ called IDA (Intelligent Distribution Agent) and is currently working on LIDA (Learning IDA). By ‘subconscious’ and ‘conscious’ levels, the scientists are really talking about tiers of ‘decision-making’, or a hierarchic learning structure, which is an idea I’ve explored in my own fiction. At the top level, the AI has goals, which are effectively criteria of success or failure. At the lower level it explores various avenues until something is ‘found’ that can be passed on to the higher level. In effect, the higher level chooses the best option from the lower level. The scientists working on this two-level arrangement have even given their AI ‘emotions’, which are built-in biases that direct them in certain directions. I also explored this in my fiction, with the notion of artificial attachment to a human subject that would simulate loyalty.
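A toy sketch of such a two-tier arrangement, entirely my own illustration (not the IDA/LIDA code): the lower level proposes options, the higher level picks whichever best serves its goal, and an ‘emotion’ is just a built-in bias nudging the choice. All names and scores are invented:

```python
def lower_level():
    """Explore: candidate actions with raw scores (illustrative)."""
    return [("flee", 0.4), ("freeze", 0.3), ("approach", 0.6)]

def higher_level(candidates, bias="flee", bias_strength=0.3):
    """Choose: the best-scoring option after the 'emotional' bias is added."""
    return max(candidates, key=lambda c: c[1] + (bias_strength if c[0] == bias else 0.0))

print(higher_level(lower_level())[0])  # flee -- the built-in bias tips the choice
```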

But, even in my fiction, I tend to agree with Searle, that these are all simulations, which might conceivably convince a human that an AI entity really thinks like us. But I don’t believe the brain is a computer, so I think it will only ever be an analogy or a very good simulation.

Both this development in AI and the conscious/subconscious loop we seem to have in our own brains reminds me of the ‘Bayesian’ model of the brain developed by Karl Friston and also reported in New Scientist (31 May 2008). They mention it again in an unrelated article in last week’s issue – one of the little unremarkable reports they do – this time on how the brain predicts the future. Friston effectively argues that the brain, and therefore the mind, makes predictions and then modifies the predictions based on feedback. It’s effectively how the scientific method works as well, but we do it all the time in everyday encounters, without even thinking about it. But Friston argues that it works at the neuron level as well as the cognitive level. Neuron pathways are reinforced through use, which is a point that Norman Doidge makes in his interview. We now know that the brain literally rewires itself, based on repeated neuron firings.
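Friston's predict-and-correct idea can be caricatured in a few lines: an estimate is repeatedly nudged by a fraction of the prediction error, the same delta rule used in simple models of learning. This is my illustration, not Friston's actual formulation, and the learning rate and signal are invented:

```python
def update(estimate, observation, learning_rate=0.3):
    error = observation - estimate           # prediction error (feedback)
    return estimate + learning_rate * error  # corrected prediction

estimate = 0.0
for obs in [10, 10, 10, 10, 10]:
    estimate = update(estimate, obs)
print(round(estimate, 2))  # 8.32 -- converging toward the signal
```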

Because we think in a language, which has become a default ‘software’ for ourselves, we tend to think that we really are just ‘wetware’ computers, yet we don’t share this ability with other species. We are the only species that ‘downloads’ a language to our progeny, independently of our genetic material. And our genetic material (DNA) really is software, as it is for every life form on the planet. We have a 4-letter code that provides the instructions to create an entire organism, materially and functionally – nature’s greatest magical trick.

One of the most important aspects of consciousness, not only in humans, but for most of the animal kingdom (one suspects) is that we all ‘feel’. I don’t expect an AI ever to feel anything, even if we programme it to have emotions.

But it is because we can all ‘feel’, that our lives mean so much to us. So, whether we have free will or not, what really matters is what we feel. And without feeling, I would argue that we would not only be not human, but not sentient.


Footnote: If you're interested in neuroscience at all, the interview linked above is well worth listening to, even though it's 40 mins long.

Saturday 3 April 2010

Consciousness explained (well, almost, sort of)

As anyone who has followed this blog for any length of time knows, I’ve touched on this subject a number of times. It deals with so many issues, including the possibilities inherent in AI and the subject of free will (the latter being one of my earliest posts).

Just to clarify one point: I haven’t read Daniel C. Dennett’s book of the same name. Paul Davies once gave it the very generous accolade of referencing it as one of the four most influential books he’s read (in company with Douglas Hofstadter’s Godel, Escher, Bach). He said: “[It] may not live up to its claim… it definitely set the agenda for how we should think about thinking.” Then, in parenthesis, he quipped: “Some people say Dennett explained consciousness away.”

In an interview in Philosophy Now (early last year) Dennett echoed David Chalmers’ famous quote that “a thermostat thinks: it thinks it’s too hot, or it thinks it’s too cold, or it thinks the temperature is just right.” And I don’t think Dennett was talking metaphorically. This, by itself, doesn’t imbue a thermostat with consciousness, if one argues that most of our ‘thinking’ happens subconsciously.

I recently had a discussion with Larry Niven on his blog, on this very topic, where we to-and-fro’d over the merits of John Searle’s book, Mind. Needless to say, Larry and I have different, though mutually respectful, views on this subject.

In reference to Mind, Searle addresses that very quote by Chalmers by saying: “Consciousness is not spread out like jam on a piece of bread…” However, if one believes that consciousness is an ‘emergent’ property, it may very well be ‘spread out like jam on a piece of bread’, and evidence suggests, in fact, that this may well be the case.

This brings me to the reason for writing this post: New Scientist, 20 March 2010, pp.39-41; an article entitled Brain Chat by Anil Ananthaswamy (consulting editor). The article refers to a theory proposed originally by Bernard Baars of The Neuroscience Institute in San Diego, California. In essence, Baars differentiated between ‘local’ brain activity and ‘global’ brain activity, since dubbed the ‘global workspace’ theory of consciousness.

According to the article, this has now been demonstrated by experiment, the details of which, I won’t go into. Essentially, it has been demonstrated that when a person thinks of something subconsciously, it is local in the brain, but when it becomes conscious it becomes more global: ‘…signals are broadcast to an assembly of neurons distributed across many different regions of the brain.’

One of the benefits of this mechanism is that it effectively filters out anything that’s irrelevant. What becomes conscious is what the brain considers important. What criterion the brain uses to determine this is not discussed. So this is not the explanation that people really want – it’s merely postulating a neuronal mechanism that correlates with consciousness as we experience it. Another benefit of this theory is that it explains why we can’t consider 2 conflicting images at once. Everyone has seen the duck/rabbit combination and there are numerous other examples. Try listening to a Bach contrapuntal fugue so that you listen to both melodies at once – you can’t. The brain mechanism (as proposed above) says that only one of these can go global, not both. It doesn’t explain, of course, how we manage to consciously ‘switch’ from one to the other.

However, both the experimental evidence and the theory are consistent with something that we’ve known for a long time: a lot of our thinking happens subconsciously. Everyone has come across a puzzle that they can’t solve; they walk away from it, or sleep on it overnight, and the next time they look at it, the solution just jumps out at them. Professor Robert Winston demonstrated this once on TV, with himself as the guinea pig. He was trying to solve a visual puzzle (find an animal in a camouflaged background) and when he had that ‘Ah-ha’ experience, it showed up as a spike on his brain waves – possibly the very signal of it going global, although I’m only speculating based on my new-found knowledge.

Mathematicians have this experience a lot, but so do artists. No artist knows where their art comes from. Writing a story, for me, is a lot like trying to solve a puzzle. Quite often, I have no better idea what’s going to happen than the reader does. As Woody Allen once said, it’s like you get to read it first. (Actually, he said it’s like you hear the joke first.) But his point is that all artists feel the creative act is like receiving something rather than creating it. So we all know that something is happening in the subconscious – a lot of our thinking happens where we’re unaware of it.

As I alluded to in my introduction, there are 2 issues that are closely related to consciousness, which are AI and free will. I’ve said enough about AI in previous posts, so I won’t digress, except to restate my position that I think AI will never exhibit consciousness. I also concede that one day someone may prove me wrong. It’s one aspect of consciousness that I believe will be resolved one day, one way or the other.

One rarely sees a discussion on consciousness that includes free will (Searle’s aforementioned book, Mind, is an exception, and he devotes an entire chapter to it). Science seems to have an aversion to free will (refer my post, Sep.07) which is perfectly understandable. Behaviours can only be explained by genes or environment or the interaction of the two – free will is a loose cannon and explains nothing. So for many scientists, and philosophers, free will is seen as a nice-to-have illusion.

Consciousness evolved, but if most of our thinking is subconscious, it raises the question: why? As I expounded on Larry’s blog, I believe that one day we will have AI that will ‘learn’; what Penrose calls ‘bottom-up’ AI. Some people might argue that we require consciousness for learning, but insects demonstrate learning capabilities, albeit rudimentary compared to what we achieve. Insects may have consciousness, by the way, but learning can be achieved by reinforcement and punishment – we’ve seen it demonstrated in animals at all levels – they don’t have to be conscious of what they’re doing in order to learn.

So the only evolutionary reason I can see for consciousness is free will, and I’m not confining this to the human species. If, as science likes to claim, we don’t need, or indeed don’t have, free will, then arguably, we don’t need consciousness either.

To demonstrate what I mean, I will relate 2 stories of people reacting in an aggressive manner in a hostile situation, even though they were unconscious.

One case, within the last 10 years, was in Sydney, Australia (from memory), when a female security guard was knocked unconscious and her bag (of money) was taken from her. In front of witnesses, she got up, walked over to the guy (who was now in his car), pulled out her gun and shot him dead. She had no recollection of doing it. Now, you may say that’s a good defence, but I know of at least one other similar incident.

My father was a boxer, and when he first told me this story, I didn’t believe him, until I heard of other cases. He was knocked out, and when he came to, he was standing and the other guy was on the deck. He had to ask his second what happened. He gave up boxing after that, by the way.

The point is that both of those cases illustrate that humans can perform complicated acts of self-defence without being consciously cognisant of it. The question is: why is this the exception and not the norm?


Addendum: Nicholas Humphrey, whom I have possibly incorrectly criticised in the past, has an interesting evolutionary explanation: consciousness allows us to read others’ minds. Previously, I thought he authored an article in SEED magazine (2008) that argued that consciousness is an illusion, but I can only conclude that it must have been someone else. Humphrey discovered ‘blindsight’ in a monkey (called Helen) with a surgically-removed visual cortex, which is an example of a subconscious phenomenon (sight) with no conscious correlation. (This specific phenomenon has since been found in humans with a damaged visual cortex as well.)


Addendum 2: I have since written a post called Consciousness unexplained in Dec. 2011 for those interested.

Sunday 28 March 2010

Karl Popper’s criterion

Over the last week, I’ve been involved in an argument with another blogger, Justin Martyr, after Larry Niven linked us both to one of his posts. I challenged Justin (on his own blog) over his comments on ID (Intelligent Design), contending that his version was effectively a ‘God-of-the-gaps’ argument. Don’t read the thread – it becomes tiresome.

Justin tended to take the argument in all sorts of directions, and I tended to follow, but it ultimately became focused on Popper’s criterion of falsifiability for a scientific theory. First of all, notice that I use the word falsifiability (not even in the dictionary), whereas Justin used the word falsification. It’s a subtle difference, but it highlights a difference in interpretation. It also highlighted to me that some people don’t understand what Popper’s criterion really means or why it’s so significant in scientific epistemology.

I know that, for some of you who read this blog, this will be boring, but, for others, it may be enlightening. Popper originally proposed his criterion to eliminate pseudo-scientific theories (he was targeting Freud at the time) whereby the theory is always true for all answers and all circumstances, no matter what the evidence. The best contemporary example is creationism and ID, because God can explain everything no matter what it entails. There is no test or experiment or observation one can do that will eliminate God as a hypothesis. On the other hand, there are lots of tests and observations (that have been done) that could eliminate evolutionary theory.

As an aside, bringing God into science stops science, which is an argument I once had with William Lane Craig and posted as The God hypothesis (Dec.08).

When scientists and philosophers first cited Popper’s criterion as a reason for rejecting creationism as ‘science’, many creationists (like Duane T. Gish, for example) claimed that evolution can’t be a valid scientific theory either, as no one has ever observed evolution taking place: it’s pure conjecture. So this was the first hurdle of misunderstanding. Firstly, evolutionary theory can generate hypotheses that can be tested. If the hypotheses aren’t falsifiable, then Gish would have had a case. The point is that all the discoveries that have been made, since Darwin and Wallace postulated their theory of natural selection, have only confirmed the theory.

Now, this is where some people, like Justin, for example, think Popper’s specific criterion of ‘falsification’ should really be ‘verification’. They would argue that all scientific theories are verified not falsified, so Popper’s criterion has it backwards. But the truth is you can’t have one without the other. The important point is that the evidence is not neutral. In the case of evolution, the same palaeontological and genetic evidence that has proved evolutionary theory correct could just as readily have proven it wrong, which is what you would expect if the theory was wrong.

Justin made a big deal about me using the word testable (for a theory) in lieu of the word, falsification, as if they referred to different criteria. But a test is not a test if it can’t be failed. So Popper was saying that a theory has to be put at risk to be a valid theory. If you can’t, in principle, prove the theory wrong, then it has no validity in science.

Another example of a theory that can’t be tested is string theory, but for different reasons. String theory is not considered pseudo-science because it has a very sound mathematical basis, but it has effectively been stagnant for the last 20 years, despite some of the best brains in the world working on it. In principle, it does meet Popper’s criterion, because it makes specific predictions, but in practice those predictions are beyond our current technological abilities to either confirm or reject.

As I’ve said in previous posts, science is a dialectic between theory and experimentation or observation. String theory is an example where half the dialectic is missing (refer my post on Layers of nature, May.09). This means science is epistemologically dynamic, which leads to another misinterpretation of Popper’s criterion. In effect, any theory is contingent on being proved incorrect, and we find that, after years of confirmation, some theories are proved incorrect in certain circumstances. The best known example would be Newton’s theories of mechanics and gravity being overtaken by Einstein’s special and general theories of relativity. Actually, Einstein didn’t prove Newton’s theories wrong so much as demonstrate their epistemological limitation. In fact, if Einstein’s equations couldn’t be reduced to Newton’s equations (in the limit where speeds are negligible compared to the speed of light, c) then he would have had to reject them.

Thomas Kuhn had a philosophical position that science proceeds by revolutions, and Einstein’s theories are often cited as an example of Kuhn’s thesis in action. Some science philosophers (Steve Fuller) have argued that Kuhn’s and Popper’s positions are at odds, but I disagree. Both Newton’s and Einstein’s theories fulfill Popper’s criterion of falsifiability, and have been verified by empirical evidence. It’s just that Einstein’s theories take over from Newton’s when certain parameters become dominant. We also have quantum mechanics, which effectively puts them both in the shade, but no one uses a quantum mechanical equation, or even a relativistic one, when a Newtonian one will suffice.

Kuhn effectively said that scientific revolutions come about when the evidence for a theory becomes inexplicable to the extent that a new theory is required. This is part of the dialectic that I referred to, but the theory part of the dialectic always has to make predictions that the evidence part can verify or reject.

Justin also got caught up in believing that the methodology determines whether a theory is falsifiable or not, claiming that some analyses, like Bayesian probabilities for example, are impossible to falsify. I’m not overly familiar with Bayesian probabilities, but I know that they involve an iterative process, whereby a result is fed back into the equation, which hones the result. Justin was probably under the impression that this homing in on a more accurate result made it an unfalsifiable technique. But, actually, it’s all dependent on the input data. Bruce Bueno de Mesquita, whom New Scientist claim is the most successful predictor in the world, uses Bayesian techniques along with game theory to make predictions. But a prediction is falsifiable by definition, otherwise it’s not a prediction. It’s the evidence that determines if the prediction is true or false, not the method one uses to make the prediction.
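The iterative feedback described above can be sketched in a few lines of Python. This is a generic, hypothetical coin-bias example of Bayesian updating (my own illustration, not anything from the predictive models mentioned above): each observation’s posterior belief becomes the prior for the next observation.

```python
# Hypothetical example: is a coin biased (90% heads) or fair (50% heads)?
# Each posterior is fed back in as the next prior - the iterative process
# described above.

def update(prior_biased, p_heads_if_biased=0.9, p_heads_if_fair=0.5, saw_heads=True):
    """Return P(biased | observation), given P(biased) as the prior (Bayes' rule)."""
    likelihood_biased = p_heads_if_biased if saw_heads else 1 - p_heads_if_biased
    likelihood_fair = p_heads_if_fair if saw_heads else 1 - p_heads_if_fair
    numerator = likelihood_biased * prior_biased
    evidence = numerator + likelihood_fair * (1 - prior_biased)
    return numerator / evidence

belief = 0.5  # start undecided
for saw_heads in [True, True, True, False, True]:  # mostly heads
    belief = update(belief, saw_heads=saw_heads)

print(round(belief, 3))  # prints 0.677
```

Note that a long run of tails would drive the belief towards zero: the conclusion is entirely hostage to the input data, which is the sense in which such predictions remain falsifiable.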

In summary: a theory makes predictions, which could be right or wrong. It’s the evidence that should decide whether the theory is right or wrong; not the method by which one makes the prediction (a mathematical formula, for example); nor the method by which one gains the evidence (the experimental design). And it’s the right or wrong part that defines falsifiability as the criterion.

To give Justin due credit, he allowed me the last word on his blog.

Footnote: for a more esoteric discussion on Steve Fuller’s book, Kuhn vs. Popper: The Struggle for the Soul of Science, in a political context, I suggest the following. My discussion is far more prosaic and pragmatic in approach, not to mention, un-academic.

Addendum: (29 March 2010) Please read April’s comment below, which points out the errors in this post concerning Popper’s own point of view.

Addendum 2: This is one post where the dialogue in the comments (below) is probably more informative than the post, owing to contributors knowing more about Popper than I do, which I readily acknowledge.

Addendum 3: (18 Feb. 2012) Here is an excellent biography of Popper in Philosophy Now, with particular emphasis on his contribution to the philosophy of science.

Tuesday 16 March 2010

Speciation: still one of nature’s great mysteries

First of all a disclaimer: I’m a self-confessed dilettante, not a real philosopher, and, even though I read widely and take an interest in all sorts of things scientific, I’m not a scientist either. I know a little bit more about physics and mathematics than I do biology, but I can say with some confidence that evolution, like consciousness and quantum mechanics, is one of nature’s great mysteries. But, like consciousness and quantum mechanics, just because it’s a mystery doesn’t make it any less real. Back in Nov.07, I wrote a post titled: Is evolution fact? Is creationism myth?

First, I defined what I meant by ‘fact’: it’s either true or false, not something in between. So it has to be one or the other: like does the earth go round the sun or does the sun go round the earth? One of those is right and one is wrong, and the one that is right is a fact.

Well, I put evolution into that category: it makes no sense to say that evolution only worked for some species and not others; or that it occurred millions of years ago but doesn’t occur now; or the converse, that it occurs now but not in the distant past. Either it occurs or it never occurred, and all the evidence, and I mean all of the evidence, in every area of science (genetics, zoology, palaeontology, virology), suggests it does. There are so many ways that evolution could have been proven false in the 150 years since Darwin’s and Wallace’s theory of natural selection that it’s just as unassailable as quantum mechanics. Natural selection, by the way, is not a theory; it’s a law of nature.

Now, both proponents and opponents of evolutionary theory often make the mistake of assuming that natural selection is the whole story of evolution and there’s nothing else to explain. So I can confidently say that natural selection is a natural law because we see evidence of it everywhere in the natural world, but it doesn’t explain speciation, and that is another part of the story that is rarely discussed. But it’s also why it’s one of nature’s great mysteries. To quote from this week’s New Scientist (13 March, 2010, p.31): ‘Speciation still remains one of the biggest mysteries in evolutionary biology.’

This is a rare admission in a science magazine, because many people on both sides of the ideological divide (which evolution has created in some parts of the world, like the US) believe that it opens up a crack in the scientific edifice for creationists and intelligent design advocates to pull it down.

But again, let’s compare this to quantum mechanics. In a recent post on Quantum Entanglement (Jan.10), where I reviewed Louisa Gilder’s outstanding and very accessible book on the subject, I explain just how big a mystery it remains, even after more than a century of experimentation, verification and speculation. Yet, no one, whether a religious fundamentalist or not, wants to replace it with a religious text or any other so-called paradigm or theory. This is because quantum mechanics doesn’t challenge anything in the Bible, because the Bible, unsurprisingly, doesn’t include anything about physics or mathematics.

Now, the Bible doesn’t include anything about biology either, but the story of Genesis, which is still a story after all the analysis, has been substantially overtaken by scientific discoveries, especially in the last 2 centuries.

But it’s because of this ridiculous debate, that has taken on a political force in the most powerful and wealthy nation in the world, that no one ever mentions that we really don’t know how speciation works. People are sure to counter this with one word, mutation, but mutations and genetic drift don’t explain how genetic anomalies amongst individuals lead to new species. It is assumed that they accumulate to the point that, in combination with natural selection, a new species branches off. But the New Scientist cover story, reporting on work done by Mark Pagel (an evolutionary biologist at the University of Reading, UK) challenges this conventionally held view.

To quote Pagel: “I think the unexamined view that most people have of speciation is this gradual accumulation by natural selection of a whole lot of changes, until you get a group of individuals that can no longer mate with their old population.”

Before I’m misconstrued, I’m not saying that mutation doesn’t play a fundamental role, as it obviously does, which I elaborate on below. But mutations within individuals don’t axiomatically lead to new species. This is a point that Erwin Schrodinger attempted to address in his book, What is Life? (see my review posted Nov.09).

Years ago, I wrote a letter to science journalist, John Horgan, after reading his excellent book The End of Science (a collection of interviews and reflections by some of the world’s greatest minds in the late 20th Century). I suggested to him an analogy between genes and environment interacting to create a human personality, and the interaction between speciation and natural selection creating biological evolution. I postulated back then that we had the environment part, which was natural selection, but not the gene part of the analogy, which is speciation. In other words, I suggested that there is still more to learn, just like there is still more to learn about quantum mechanics. We always assume that we know everything that there is to know, when clearly we don’t. The mystery inherent in quantum mechanics indicates that there is something that we don’t know, and the same is true for evolution.

Mark Pagel’s research is paradigm-challenging, because he’s demonstrated statistically that genetic drift by mutation doesn’t give the right answers. I need to explain this without getting too esoteric. Pagel looked at the branches of 101 various (evolutionary) trees, including: ‘cats, bumblebees, hawks, roses and the like’. By doing a statistical analysis of the time between speciation events (the length of the branches) he expected to get a Bell curve distribution which would account for the conventional view, but instead he got an exponential curve.

To quote New Scientist: ‘The exponential is the pattern you get when you are waiting for some single, infrequent event to happen… the length of time it takes a radioactive atom to decay, and the distance between roadkills on a highway.’

In other words, as the New Scientist article expounds in some detail, new species happen purely by accident. What I found curious about the above quote is the reference to ‘radioactive decay’ which was the starting point for Erwin Schrodinger’s explanation of mutation events, which is why mutation is still a critical factor in the whole process.
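The waiting-time pattern New Scientist describes can be illustrated with a small simulation (my own sketch, not Pagel’s actual analysis). Waiting times for a single infrequent random event are exponentially distributed, whereas the ‘gradual accumulation’ picture, in which many small changes add up, produces a bell curve; one hallmark of the exponential is that its standard deviation equals its mean.

```python
# Illustrative sketch: exponential waiting times vs bell-curve accumulation.
import random

random.seed(42)
N = 100_000

# Inter-event times of a random (Poisson) process with rate 1 per unit time
waits = [random.expovariate(1.0) for _ in range(N)]

# Accumulation of 30 small independent changes (central limit theorem)
sums = [sum(random.random() for _ in range(30)) for _ in range(N)]

def mean_and_sd(xs):
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return m, sd

m_w, sd_w = mean_and_sd(waits)  # both close to 1.0: exponential signature
m_s, sd_s = mean_and_sd(sums)   # mean ~15, sd ~1.6: a narrow bell curve

print(round(sd_w / m_w, 2), round(sd_s / m_s, 2))
```

The first ratio comes out close to 1 (exponential), the second much smaller (bell curve), which is the statistical fingerprint Pagel reportedly used to distinguish the two hypotheses.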

Schrodinger went to great lengths, very early in his exposition, to explain that nearly all of physics is statistical, and gave examples from magnetism to thermal activity to radioactive decay. He explained how this same statistical process works in creating mutations. Schrodinger coined a term, ‘statistico-deterministic’, but in regard to quantum mechanics rather than physics in general. Nevertheless, chaos and complexity theory reinforce the view that the universe is far from deterministic at almost every level that one cares to examine it. As the New Scientist article argues, Pagel’s revelation supports Stephen Jay Gould’s assertion: ‘that if you were able to rewind history and replay the evolution on Earth, it would turn out differently every time.’

I’ve left a lot out in this brief exposition, including those who challenge Pagel’s analysis, and how his new paradigm interacts with natural selection and geographical separation, which are also part of the overall picture. Pagel describes his own epiphany when he was in Tanzania: ‘watching two species of colobus monkeys frolic in the canopy 40 metres overhead. “Apart from the fact that one is black and white and one is red, they do all the same things... I can remember thinking that speciation was very arbitrary. And here we are – that’s what our models are telling us.”’ In other words, natural selection and niche-filling are not enough to explain diversification and speciation.

What I find interesting is that wherever we look in science, chance plays a far greater role than we credit. It’s not just the cosmos at one end of the scale, and quantum mechanics at the other end, that rides on chance, but evolution, like earthquakes and other unpredictable events, also seems to be totally dependent on the metaphorical roll of the dice.

Addendum 1 : (18 March 2010)

Comments posted on New Scientist challenge the idea that a ‘bell curve’ distribution should have been expected at all. I won’t go into that, because it doesn’t change the outcome: 78% of the ‘branches’ statistically analysed (from 110) were exponential and 0% were a normal distribution (bell curve). Whatever the causal factors, in which mutation plays a definitive role, speciation is as unpredictable as earthquakes, weather events and radioactive decay (for an individual atom).

Addendum 2: (18 March 2010)

Writing this post reminded me of Einstein’s famous quote that ‘God does not play dice’. Well, I couldn’t disagree more. If there is a creator-God (in the Einstein mould) then, first and foremost, he or she is a mathematician. Secondly, he or she is a gambler who loves to play the odds. The role of chance in the natural world is more fundamental and universally manifest than we realise. In nature, small variances can have large consequences: we see that with quantum theory, chaos theory and evolutionary theory. There appears to be little room for determinism in the overall play of the universe.

Sunday 7 March 2010

The world badly needs a radical idea

Over the last week, a few items, in the limited media that I access, have increased my awareness that the world needs a new radical idea, and I don’t have it. At the start of the 21st Century we are like a species on steroids, from the planet’s point of view, and that’s not healthy for the planet. And if it’s not healthy for the planet, it’s not healthy for us. Why do so few of us even seem to be aware of this?

It started with last week’s New Scientist cover story, Earth’s Nine Lives, in which environmental journalist Fred Pearce looks at nine natural parameters that give an indication of the health of the planet from a human perspective. By this, I mean he looks at limits set by scientists and how close we are to them. He calls them boundaries, and we are closing in on all of them, or have already passed them, with the possible exception of one. They are: ocean acidity; ozone depletion; fresh water; biodiversity; nitrogen and phosphorus cycles; land use; climate change; atmospheric aerosol loading; and chemical pollution.

Out of these, ozone depletion seems to be the only one going in the right direction, and, according to Pearce, three of them, including climate change, have actually crossed their specified boundaries already. But, arguably, the most disturbing is fresh water where he believes the boundary will be crossed mid-century. It’s worth quoting the conclusion in its entirety.

However you cut it, our life-support systems are not in good shape. Three of nine boundaries - climate change, biodiversity and nitrogen fixation - have been exceeded. We are fast approaching boundaries for the use of fresh water and land, and the ocean acidification boundary seems to be looming in some oceans. For two of the remaining three, we do not yet have the science to even guess where the boundaries are.

That leaves one piece of good news. Having come close to destroying the ozone layer, exposing both ourselves and ecosystems to dangerous ultraviolet radiation, we have successfully stepped back from the brink. The ozone hole is gradually healing. That lifeline has been grabbed. At least it shows action is possible - and can be successful.


The obvious common denominator here is human population, which I’ve talked about before (Living in the 21st Century, Sep.07 and more recently, Utopia or dystopia, Sep.09; and my review of Tim Flannery’s book, The Weathermakers, Dec. 09).

In the same week (Friday), I heard an interview with Clive Hamilton, who is Charles Sturt Professor of Public Ethics at the Centre for Applied Philosophy and Public Ethics at the Australian National University, Canberra. He’s just written a book on climate change and despairs at the ideological versus scientific struggle that is taking place globally on this issue. He believes that the Copenhagen summit was actually a backward step compared to the Kyoto protocol.

Then, today (Saturday) Paul Carlin sent me a transcript of an interview with A.C. Grayling, who is currently visiting Australia. The topic of the interview is ‘Religion in its death throes’, but he’s talking about religion in politics rather than religion in a genuinely secularised society.

He’s looking forward to a time when religion is a personal thing rather than a political weapon, that effectively divides people and creates the ‘us and them’ environment we seem to be in at the moment. Australia is relatively free from this, but the internet and other global media means we are not immune. In fact, people have been radicalised in this country, and some of them are now serving jail sentences as a consequence.

To quote Grayling, predicting a more tolerant future:

‘And people who didn't have a religious commitment wouldn't mind if other people did privately and they wouldn't attack or criticise them.

So there was an unwritten agreement that the matter was going to be left quiet. So in a future where the religious organisations and religious individuals had returned to something much more private, much more inward looking, we might have that kind of public domain where people were able to rub along with one another with much less friction than we're seeing at the moment.’


To a large extent, I feel we already have that in Australia, and it’s certainly a position I’ve been arguing for, ever since I started this blog.

But Grayling also mentions climate change, when asked by his interviewer, Leigh Sales, and hints, rather than specifies, that a debate between a science expert on climatology and a so-called climate-change sceptic would not be very helpful, because they are arguing from completely different places. One is arguing from scientific data and accepted peer-reviewed knowledge, and the other is arguing from an ideological position, because he or she sees economic woe, job losses and political hardship. It’s as if climate change is a political position and not a science-based reality. It certainly comes across that way in this country. As Clive Hamilton argues, people look out their windows and everything looks much the same, so they ask: why should I believe these guys in their ivory towers, who create dramas for us because that’s how they make their living? I’m not being cynical – many people do actually think like that.

But this is all related to the original topic formulated by the New Scientist article – it goes beyond climate change – there are a range of issues where we are impacting the planet, and in every case it’s the scientists’ faint, but portentously reliable, voices that are largely ignored by the politicians and decision-makers of the world who set our economic course. And that’s why the world badly needs a radical idea. Politicians, the world over, worship the god of economic growth – it’s the mantra heard everywhere from China to Africa to Europe to America to Australia. And economic growth propels population growth and all the boundary-pushing ills that the planet is currently facing.

The radical idea we so badly need is an alternative to economic growth and the consumer-driven society. I really, badly wish two things: I wish I was wrong and I wish I knew what the radical idea was.

Saturday 20 February 2010

On the subject of good and evil and God

I wrote a lengthy dissertation on the subject of evil, very early on in this blog (Evil, Oct.07) and I don’t intend to repeat myself here.

This post has arisen as the result of something I wrote on Stephen Law’s blog, in response to a paper that Stephen has written (an academic paper, not just a blog post). To put it into context, Stephen’s paper addresses what is known in classical philosophy as the ‘problem of evil’: how can one reconcile an omniscient, omnipotent and inherently good god with the evil and suffering we witness every day in the physical world? It therefore specifically addresses the biblical god who is represented by the three main monotheistic religions.

Stephen’s thesis, in a nutshell, is that, based on the evidence, an evil god makes more sense than a good god. I’m not going to address Stephen’s argument directly, and I’m not an academic. My response is quite literally my own take on the subject that has been evoked by reading Stephen’s paper, and I neither critique nor analyse his arguments.

My argument, in a nutshell, is that God can represent either good or evil, because it’s dependent on the person who perceives God. As I’ve said previously (many times, in fact) the only evidence we have of God is in someone’s mind, not out there. The point is that people project their own beliefs and prejudices onto this God. So God with capital ‘G’, as opposed to god with small ‘g’, is the God that an individual finds within themselves. God with small ‘g’ is an intellectual construct that could represent the ‘Creator’ or a reference to one of many religious texts – I make this distinction, because one is experienced internally and the other is simply contemplated intellectually. Obviously, I think the Creator-God is an intellectual construct, not to be confused with the ‘feeling’ people express of their God. Not everyone makes this distinction.

Below is an edited version of my comment on Stephen’s blog.

I feel this all comes back to consciousness or sentient beings. Without consciousness there would be no good and evil, and no God either. God is a projection who can represent good or evil, depending on the beholder. Evil is invariably a perversion, because the person who commits evil (like genocide, for example) can always justify it as being for the ‘greater good’.

People who attribute natural disasters to God or Divine forces are especially prone to a perverted view of God. They perversely attribute natural disasters to human activity because God is ‘not happy’. We live in a lottery universe, and whilst we owe our very existence to supernovae, another supernova could just as easily wipe us all out in a blink, depending on its proximity.

God, at his or her best, represents the sense of connection we have to all of humanity, and, by extension, nature. Whether that sense be judgmental and exclusive, or compassionate and inclusive, depends on the individual, and the God one believes in simply reflects that. Even atheists sense this connection, though they don’t personify it or conceptualise it like theists do. At the end of the day, what matters is how one perceives and treats their fellows, not whether they are theists or atheists; the latter being a consequence of the former (for theists), not the other way round.

Evil is an intrinsic attribute of human nature, but its origins are evolutionary, not mythical or Divine (I expound upon this in detail in my post on Evil). God is a projection of the ideal self, and therefore encompasses all that is good and bad in all of us.

That is the end of my (edited) comment on Stephen’s blog. My point is that the ‘problem of evil’, as it is formulated in classical philosophy, leads to a very narrow discussion concerning a hypothetical entity, when the problem really exists within the human mind.

Saturday 6 February 2010

Existentialism in a movie

Up in the Air, the latest George Clooney film, is only the third movie I’ve reviewed on this blog, and the only common thread they share is that they all deal with philosophical issues, albeit in completely different ways with totally different themes.

Man on Wire was a documentary about a truly extraordinary, eccentric and uniquely talented human being. It’s hard to imagine we will ever see another Philippe Petit – certainly what he did was a once-in-a-lifetime event.

Watchmen was a fantasy comic-book movie that captured an entire era, specifically the cold war, and consequently brought to the screen the contradictions that many of us, on the outside at least, find in the American psyche.

Up in the Air is an existentialist movie for our time, and, even though it is marketed as a romantic-comedy, anyone expecting it to be a typical feel-good movie that promotes happily-ever-after scenarios will be disappointed. It is uplifting, I believe, but not in the way many people will expect. It has had good reviews, at least in this country, and deservedly so in my opinion.

Firstly, I cannot over-emphasise how well-written this movie is. When I ran a writing course, I told my students that good writing is transparent. Even in a novel, I contend that no one notices good writing, they only notice bad writing. The reader should be so engaged in the story and the characters, that the writing becomes a transparent medium.

Well, in a movie the writing is even more transparent, because it comes out of the mouths of actors or is seen through the direction of the director. To give an example, there is a scene in this movie where two different people get two different reactions from the same person in the same scenario. This scene could easily have appeared contrived and unconvincing, but it was completely believable, because all the characters were so well written. But, as far as the audience is concerned, all credit goes to the actors, and that’s what I mean when I say the writing is transparent.

Now, I know that, according to the internet, part of this movie is unscripted and they used real people (non-actors) who had been laid off, but their participation is seamless. And, paradoxically, it’s the script-writing that allows this to happen invisibly.

Personally, I felt some resonances with this movie, because, when I was in America in 2001/2, I was living the same type of life as the protagonist (George Clooney’s Ryan Bingham): living in hotels and flying or driving around the country. No, I wasn’t a high-flying executive or a hatchet-man; just an ordinary bloke working in a foreign country, so I had no home so to speak. I also wrote half a novel during that period, which, in retrospect, I find quite incredible, as I now struggle to write at all, even though I live alone and no longer hold a full time job. I digress and indulge – two unpardonable sins – so let me get back to my theme.

The movie opens with a montage of people being sacked (fired) with a voice-over of Clooney explaining his job. This cuts to the core of the movie for me: what do we live for? For many people their job defines them – it is their identity, in their own eyes and the eyes of their society. So cutting off someone’s job is like cutting off their life – it’s humiliating at the very least, suicidally depressing at worst and life-changing at best.

The movie doesn’t explore these various scenarios, just hints at them, but that’s all it needs to do. A story puts you in the picture - makes you empathise - otherwise it doesn’t work as a story. So, sitting in the cinema, we all identify with these people, feel their pain, their humiliation, their sense of betrayal and their sense of failure. It’s done very succinctly – the movie doesn’t dwell on the consequences – it hits us where it hurts, then leaves us to contemplate. Just how important is your job to you? Is it just a meal ticket or does it define who you are?

In another sub-plot or 2 or 3, the movie explores relationships and family. So in one film all the existential questions concerning 21st Century living are asked. Getting married, having kids, bringing up the next generation – is that what it’s all about? There’s no definitive answer; the question is left hanging because it’s entirely up to the individual.

Recently (last month) Stephen Law tackled the Meaning of Life question from a humanist perspective. Basically he was defending the position against those who claim that there is no meaning to life without religion, specifically, without God. God doesn’t get a mention in this movie, but that’s not surprising; God rarely gets a mention in the best of fiction. In a recent post of my own (Jesus’ Philosophy) I reviewed Don Cupitt’s book on that very topic, and Cupitt makes the observation that it was the introduction of the novel that brought humanist morality into ordinary discourse. And he’s right: one rarely finds a story where God provides the answer to a moral dilemma – the characters are left to work it out for themselves, and we would be hugely disappointed if that wasn’t the case.

In one scene in the movie, Bingham tells a teary-eyed middle-aged man that he can finally follow the dream he gave up in his youth of becoming a pastry chef or something similar. Existential psychology, if not existential philosophy, emphasises the importance of authenticity – it’s what all artists strive for – it’s about being honest with one’s self. How many of us fail to follow the dream, and instead follow the path of least resistance, which is to do what is expected of us by our family, our church, our society or our spouse?

For most of us, finding meaning in our life has very little to do with God, even those of us who believe in a god, because it’s something internal not external. Viktor Frankl, in his autobiographical book about Auschwitz (Man's Search for Meaning), contends that there are 3 ways we find meaning in our life: through a project, through a relationship and through adversity. This movie, with no detour to a war zone or prison camp, hints at all three. Yet it’s in the context of relationships that the real questions are asked.

On Stephen’s blog, another blogger wrote a response which was drenched in cynicism – effectively saying that anyone seeking meaning, in a metaphysical sense, is delusional. And that Stephen’s criteria about what constitutes a ‘meaningful’ life was purely subjective. I challenged this guy by saying: ‘I don’t believe you’; everyone seeks meaning in their life, through their work or their art or their relationships, but, in particular, through relationships.

At the end of one’s life, I would suggest that one would judge it on how many lives one touched and how meaningfully they touched them; all other criteria would pale by comparison. To quote one of my favourite quotes from Eastern antiquity: If you want to judge the true worth of a person, observe the effects they have on other people’s lives.

Sunday 24 January 2010

Interview with David Kilcullen; expert on Afghanistan and counter-insurgency

Anyone who thinks that there are simplistic solutions to the crisis in Afghanistan and Pakistan should listen to this interview.

Kilcullen also provides some background on the Iraqi conflict, especially from the perspective of the US State Department.

What he had to say about Indonesia (and SE Asia in general) towards the end of the interview was very informative.

Most quotable quote: “We need to get out of the business of invading other people’s countries just because there are terrorist cells there [though] I would never say never.”

Sunday 17 January 2010

Quantum Entanglement; nature’s great tease

I’ve just read the best book on the history of quantum mechanics that I’ve come across, called The Age of Entanglement, subtitled When Quantum Physics was Reborn, by Louisa Gilder. It’s an even more extraordinary achievement when one discovers that it’s Gilder’s first book, yet one is not surprised to learn it had an 8-year gestation. It’s hard to imagine a better researched book in this field.

Gilder takes an unusual approach, where she creates conversations between key players, as she portrays them, using letters and anecdotal references by the protagonists. She explains how she does this, by way of example, in a preface titled A Note To The Reader, so as not to mislead us that these little scenarios were actually recorded. Sometimes she quotes straight from letters.

When I taught a fiction-writing course early last year, someone asked me whether biography is fiction or non-fiction. My answer was that as soon as you add dialogue, unless it’s been recorded, it becomes fiction. An example is Schindler’s Ark by Thomas Keneally, who explained that he wrote it as a novel because ‘that is his craft’. In the case of Gilder’s book, I would call these scenarios quasi-fictional. The point is that they work very well, whether they be fictional or not, in giving flesh to the characters as well as the ideas they were exploring.

She provides an insight into these people and their interactions at a level of perspicuity that one rarely sees. In particular, she reveals the personal philosophies and prejudices that drove their explorations and their arguments. The heart of the book is Bell’s Theorem, or Bell’s Inequality, which I’ve written about before (refer Quantum Mechanical Philosophy, Jul.09). She starts the book off like a Hollywood movie, providing an excellent exposition of Bell’s Theorem for laypeople (first revealed in 1964), then jumping back in time to the very birth of quantum mechanics (1900), when Planck introduced the constant h (now known as Planck’s constant) to satisfactorily explain black-body radiation. Proceeding from this point, Gilder follows the whole story and its amazing cast of characters right up to 2005.

In between there were 2 world wars, a number of Nobel Prizes, the construction of some very expensive particle accelerators and a cold war, which all played their parts in the narrative.

David Mermin, a solid state physicist at Cornell, gave the best exposition of Bell’s Theorem to non-physicists, for which the great communicator, Richard Feynman, paid him the ultimate accolade: he told Mermin that he had achieved what Feynman himself had attempted but failed to do.

Bell’s Theorem, in essence, makes predictions about entangled particles. Entangled particles counter-intuitively suggest action-at-a-distance occurring simultaneously, contradicting everything else we know about reality, otherwise known as ‘classical physics’. Classical physics includes relativity theory, which states that nothing, including communication between distinct objects, can travel faster than the speed of light. This is called ‘locality’. Entanglement, which is what Bell’s Theorem entails, suggests the opposite, which we call ‘non-locality’.

Gilder’s abridged version of Mermin’s exposition is lengthy and difficult to summarise, but, by the use of tables, she manages to convey how Bell’s Theorem defies common sense, and that’s the really important bit to understand. Quantum mechanics defies what our expectations are, and Bell’s great contribution to quantum physics was that his famous Inequality puts the conundrum into a succinct and testable formula.
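For those who want to see the ‘succinct and testable formula’ in action, here is a short sketch of my own (not drawn from Gilder’s book or Mermin’s exposition), using the CHSH form of Bell’s Inequality, due to Clauser, Horne, Shimony and Holt. Any ‘local’ classical theory must satisfy |S| ≤ 2, whereas quantum mechanics, with the correlation E(a,b) = −cos(a−b) for a pair of entangled spin-½ particles, predicts |S| = 2√2 ≈ 2.83 for suitably chosen detector angles:

```python
import math

def E(a, b):
    # Quantum-mechanical correlation between measurement outcomes for two
    # entangled spin-1/2 particles (singlet state), at detector angles a and b.
    return -math.cos(a - b)

# Detector settings (in radians) that maximise the quantum violation
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# The CHSH combination: every local 'classical' theory obeys |S| <= 2
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S))  # 2.828..., i.e. 2*sqrt(2), exceeding the classical bound of 2
```

The point of the sketch is exactly the point of the book: the quantum prediction cannot be reproduced by any theory in which the two particles carry pre-agreed local instructions, and that prediction is what the experiments later confirmed.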

Most people know that Bohr and Einstein were key players and philosophical antagonists over quantum theory. The general view is that Bohr ultimately won the argument, and was further justified by the successful verification of Bell’s Theorem, while Einstein was consigned to history as having discovered two of the most important theories in physics (the special and general theories of relativity) but stubbornly rejected the most empirically successful theory of all time, quantum mechanics. Gilder’s book provides a subtle but significantly different perspective. Whilst she portrays Einstein as unapologetically stubborn, he played a far greater role in the development of quantum theory than popular history tends to grant him. In particular, it could be argued that he understood the significance of Bell’s Theorem many decades before Bell actually conceived it.

Correspondence, referenced by Gilder, suggests that Schrodinger’s famous Cat thought experiment originally arose from a suggestion by Einstein, only Einstein envisaged a box containing explosives that were both exploded and un-exploded at the same time. Einstein also supported De Broglie at a time when everyone else ignored him, and he acknowledged that de Broglie had ‘lifted a corner of the great veil’.

Curiously, the cover of her book contains 3 medallion-like photographic portraits, in decreasing size: Albert Einstein, Erwin Schrodinger and Louis de Broglie; all heretics of quantum mechanics. Gilder could easily have included David Bohm and John Bell as well, if that was her theme.

Why heretics? Because they all challenged the Copenhagen interpretation of quantum mechanics, led by Bohr and Heisenberg, and which remains the ‘conventional’ interpretation to this day, even though the inherent conundrum of entanglement remains its greatest enigma.

It was Bohr who apparently said that anyone who claims they understand quantum mechanics doesn’t comprehend it at all, or words to that effect. When we come across something that is new to us, that we don’t readily understand, the brain looks for an already known context in which to place it. In an essay I wrote on Epistemology (July 2008) I made the point that we only understand new knowledge when we incorporate it into existing knowledge. The best example is when we look up a word in a dictionary – it’s always explained by using words that we already know. I also pointed out that this plays a role in storytelling where we are continuously incorporating new knowledge into existing knowledge as the story progresses. Without this innate human cognitive ability we’d give up on a story after the first page.

Well the trap with quantum mechanics is that we attempt to understand it in the context of what we already know, when, really, we can’t. It’s only when you understand the mystery of quantum mechanics that you can truly say: I understand it. In other words, when you finally understand what can’t be known, or can’t be predicted, as we generally do with so-called ‘classical physics’. Quantum mechanics obeys different rules, and when you appreciate that they don’t meet our normal ‘cause and effect’ expectations, then you are on the track of appreciating the conundrum. It’s a great credit to Gilder that she conveys this aspect of quantum physics, both in theory and in experiment, better than any other writer I’ve read.

Some thumbnail sketches based on Gilder’s research are worth relaying. She consistently portrays Niels Bohr as a charismatic leader who dominated as much by personality as by intellect. People loved him but, consequently, found it difficult to oppose him – that is the impression she gives. The great and famous exception was Einstein, who truly did have a mind of his own, but also Wolfgang Pauli, famously the most unsparing critic among physicists.

John Wheeler, who became Bohr’s greatest champion in the latter part of the 20th Century, said of his early days with Bohr: “Nothing has done more to convince me that there once existed friends of mankind with the human wisdom of Confucius and Buddha, Jesus and Pericles, Erasmus and Lincoln, than walks and talks under the beech trees of Klampenborg Forest with Niels Bohr.” Could there be any greater praise?

Einstein wrote of Max Planck: “an utterly honest man who thinks of others rather than himself. He has, however, one fault: he is clumsy in finding his way about foreign trains of thought.” As for Lorentz, with whom he was corresponding at the same time as Planck, he found him “astonishingly profound… I admire this man as no other, I would say I love him.”

Much later in the story, Gilder relates an account of how a 75-year-old Planck made a personal appeal to Hitler, attempting to explain how the dismissal of Jewish scientists from academic positions would have disastrous consequences for Germany. Apparently, he barely opened his mouth before he was given a severe dressing-down by the dictator and told where to go. Nevertheless, the story supports Einstein’s appraisal of the man from a generation earlier.

Gilder doesn’t provide a detailed portrait of Paul Dirac, or P.A.M. Dirac as he’s better known, but we know he was a very reserved and eccentric individual, whose mathematical prowess effectively forecast the existence of anti-matter. The Dirac equation is no less prophetic than Einstein’s famous equation, E=mc2.

Wolfgang Pauli’s great contribution to physics was the famous Pauli exclusion principle, which I learnt in high school; it explains why atoms don’t all collapse in on each other and why, when you touch something, you don’t sink into it. He also predicted the existence of the neutrino. Pauli’s personal life went into a steep decline in the 1930s, when he suffered from chronic depression and alcoholism. His life turned around after he met Carl Jung, who became a lifelong friend. ‘In two years of Jung’s personal analysis and friendship, Pauli shed his depression. In 1934 he met and married Franca Bertram, who would be his companion for the rest of his life.’

This friendship with Jung led to a contradiction in the light of our 21st Century sensibilities, according to Gilder:

’Pauli could tell Bohr to “shut up” and Einstein that his ideas were “actually not stupid”… But in the words of Franca Pauli, “the extremely rational thinker subjected himself to total dependence on Jung’s magical personality.”’

Schrodinger is as well known for his libertine attitude towards sexual relationships as he is for his famous equation. His own wife became the mistress of Schrodinger’s close friend, the mathematician Hermann Weyl, whilst Schrodinger had a string of mistresses. But the identity of his lover-companion when he was famously convalescing from tuberculosis in the Alpine resort of Arosa, and conjured up the wave equation that bears his name, is still unknown to this day.

When Schrodinger died in 1961, Max Born (another Nobel Prize winner in the history of quantum mechanics) wrote the following eulogy:

“His private life seemed strange to bourgeois people like ourselves. But all this does not matter. He was a most loveable person, independent, amusing, temperamental, kind, and generous, and he had a most perfect and efficient brain.”

It was Born who turned Schrodinger’s equation into a probability function that every quantum theorist uses to this day. Born was a regular correspondent with Einstein, but is now almost as well known in pop culture as the grandfather of Australian songstress, Olivia Newton-John (not mentioned in Gilder’s book).

Gilder provides a relatively detailed and bitter-sweet history of the relationship between David Bohm and J. Robert Oppenheimer, both affected in adverse ways by the cold war and McCarthy’s ‘House Un-American Activities Committee’.

I personally identify with Gilder’s portrait of Bohm more than I anticipated, not because of his brilliance or his courage, but because of his apparent neurotic disposition and insecurity and his almost naïve honesty.

Gilder has obviously accessed transcripts of his interrogation, where he repeatedly declined to answer questions “on the ground that it might incriminate and degrade me, and also, I think it infringes on my rights as guaranteed by the First Amendment.”

When he was eventually asked if he belonged to a political party, he finally said, “Yes, I am. I would say ‘Yes’ to that question.”

This raised everyone’s interest, naturally, but when asked the next question, “What party or association is that?”, he said, “I would say definitely that I voted for the Democratic ticket.” ‘The representative from Missouri’, who asked the question, must have been truly pissed off when he pointed out that that wasn’t what he meant. To which Bohm said, in all honesty no doubt, “How does one become a member of the Democratic Party?”

Bohm lost his career, his income, his status and everything else at a time when he should have been at the peak of his academic abilities. Even Einstein’s letter of recommendation couldn’t get him a position at the University of Manchester, and he eventually went to São Paulo in Brazil, though he never felt at home there. Gilder sets one of her quasi-fictional scenarios in a bar, when Feynman was visiting Brazil and socialising with Bohm, deliberately juxtaposing the two personalities. She portrays Bohm as not being jealous of Feynman’s mind, but of his easy confidence in a foreign country and his sex-appeal to women. That’s the David Bohm I can identify with at a similar age.

Bohm eventually migrated to England where he lived for the rest of his life. I don’t believe he ever returned to America, though I can’t be sure how true that is. I do know he became a close friend to the Dalai Lama, because the Dalai Lama mentions their friendship in one of his many autobiographies.

According to Gilder, it’s unclear if Bohm ever forgave Oppenheimer for ‘selling out’ his friend, Bernard Peters, both of whom hero-worshipped Oppenheimer. Certainly, at the time that Oppenheimer ‘outed’ Peters as a ‘crazy red’, Bohm felt that he had betrayed him.

Bohm made a joke of the House Un-American Activities Committee based on the famous logic conundrum postulated by Bertrand Russell: “If the barber is the man who shaves all men who do not shave themselves, who shaves the barber?” Bohm’s version: “Congress should appoint a committee to investigate all committees that do not investigate themselves.”

But of all the characters, John Bell is the one about whom I knew the least, and yet he is the principal character in Gilder’s narrative, because he was not only able to grasp the essential character of quantum mechanics but to quantify it in a way that could be verified. I won’t go into the long story of how it evolved from the Einstein-Podolsky-Rosen (EPR) conjecture, except to say that Gilder covers it extremely well.

What I did find interesting was that, after Bell presented his Inequality, the people who wanted to test it were not supported or encouraged on either side of the Atlantic. It was considered a career-stopper, and Bell himself even discouraged up-and-coming physicists from pursuing it. That all changed, of course, when results finally came out.

After reading Gilder’s account, I went back to the interview that Paul Davies had with Bell (The Ghost in the Atom, 1986), after the famous Alain Aspect experiment had demonstrated the violation of Bell’s Inequality, confirming the predictions of quantum mechanics.

Bell is critical of the conventional Copenhagen interpretation because, he argues, where do you draw the line between the quantum world and the classical world when you make your ‘observation’? Is it at the equipment, in the optic nerve going to your brain, or at a neuron in the brain itself? He’s deliberately mocking the view that ‘consciousness’ is the cause of the ‘collapse’ of the quantum wave function.

In the interview he makes specific references to de Broglie and Bohm. Gilder, I noticed, sourced the same material.

“One of the things that I specifically wanted to do was to see whether there was any real objection to this idea put forward long ago by de Broglie and Bohm that you could give a completely realistic account of all quantum phenomena. De Broglie had done that in 1927, and was laughed out of court in a way that I now regard as disgraceful, because his arguments were trampled on. Bohm resurrected that theory in 1952, and was rather ignored. I thought that the theory of Bohm and de Broglie was in all ways equivalent to quantum mechanics for experimental purposes, but nevertheless it was realistic and unambiguous. But it did have the remarkable feature of action-at-a-distance. You could see that when something happened at one point there were consequences immediately over the whole of space unrestricted by the velocity of light.”

Perhaps that should be the last word in this dissertation, but I would like to point out, that, according to Gilder, Einstein made the exact same observation in 1927, when he tried to comprehend the double-slit experiment in terms of Schrodinger’s waves.

Monday 4 January 2010

Jesus' philosophy

Normally, I wouldn’t look twice at a book with the title, Jesus & Philosophy, but when the author’s name is Don Cupitt, that changes everything. In September last year, I reviewed his book, Above Us Only Sky (under a post titled The Existential God) which is effectively a manifesto on the ‘religion of ordinary life’ to use his own words.

Cupitt takes a very scholarly approach to his topic, referencing The Gospel of Jesus, which arose from the ‘Jesus Seminar’ (1985 to 1995). And, in fact, Cupitt dedicates the book to the seminar’s founder, Robert W. Funk. He also references a document called ‘Q’. For those, like myself, who’ve never heard of Q, I quote Cupitt himself:

“Q, it should be said in parenthesis here, is the term used by Gospel critics to describe a hypothetical sayings-Gospel, written somewhere between the years 50 and 70 CE, and drawn upon extensively by both Matthew and Luke.”

Cupitt is a most unusual theologian in that he has all but disassembled orthodox Christian theology, and he now sees himself more as a philosopher. The overarching thesis of his book is that Jesus was the first humanist. From anyone else, this could be dismissed as liberal-theological claptrap, but Cupitt is not anyone else; he commands you to take him seriously by the simple merit of his erudition and his lack of academic pretension or arrogance. You don’t have to agree with him but you can’t dismiss him as a ratbag either.

Many people, these days, even question whether Jesus ever existed. Stephen Law has posed the question more than once on his blog, but, besides provoking intelligent debate, he’s merely revealed how little we actually know. Cupitt doesn’t even raise this question; he assumes that there was an historical Jesus in the same way that we assume there was an historical Buddha, who, like Jesus, kept no records of his teachings. In fact, Cupitt makes this very same comparison. He argues that Jesus’ sayings, like the Buddha’s, would have been remembered orally before anyone wrote them down, and later narratives were attached to them, which became the gospels we know today. He doesn’t question that the biblical stories are fictional, but he believes that behind them was a real person, whose teachings have been perverted by the long history of the Church. He doesn’t use that term, but I do, because it’s what I’ve long believed. The distortion, if not the perversion, was started by Paul, who is the person most responsible for the concept of Jesus as saviour or messiah that we have today.

I actually disagree with Stephen Law’s thesis, and I’ve contended it on his blog, because a completely fictional Jesus doesn’t make a lot of sense to me. If you are going to create a fictional saviour (who is a Deity), then why make him a mortal first, and why make him a complete failure, which he was? On the other hand, deifying a mortal after their death at the hands of their enemy, to become a saviour for an oppressed people, makes a lot of sense. A failure in mortal flesh becomes a messiah in a future kingdom beyond death.

Also, if Jesus is completely fictional, who was the original author? The logical answer is Paul, but records of Jesus precede Paul, so Paul must have known he was fictional, if that was the case. I’m not an expert in this area, but Cupitt is not the first person to make a distinction between a Jesus who took on the religious establishment of his day and stood up for the outcast and disenfranchised in his society, and Paul’s version, who both knew and prophesied that he was the ‘Son of God’. H.G. Wells, in his encyclopedic book, The Outline of History (written after WWI), remarks similarly on a discontinuity in the Jesus story as we know it.

But all this speculation is secondary, though not irrelevant, to Cupitt’s core thesis. Cupitt creates a simple imagery concerning the two conflicting strands of morality, theistic and humanistic, as being vertical and horizontal. The vertical strand comes straight from God or Heaven, which makes it an unassailable authority, and the horizontal strand stems from the human ‘heart’.

His argument, in essence, is that Jesus’ teachings, when analysed, appealed to the heart, not to God’s authority, and, in this respect, he had more in common with Buddha and Confucius than with Moses or Abraham or David. In fact, more than once, Cupitt likens Jesus to an Eastern sage (his words) who drew together a group of disciples and, through examples and teachings, taught a simple philosophy, not only of reciprocity, but of forgiveness.

In fact, Cupitt contends that reciprocity was not Jesus’ core teaching, and, even in his Preface, before he gets into the body of his text, he quotes from the ‘Gospel of Jesus’ to make his point: “If you do good to those who do good to you, what merit is there in that?” (Gospel of Jesus, 7.4; Q/Luke 7.33). Cupitt argues that one of Jesus’ most salient messages was to break the cycle of violence that afflicts all of humanity, and which we see, ironically, most prominently demonstrated in modern-day Palestine.

Cupitt uses the term 'ressentiment' to convey this peculiar human affliction: the inability to let go of a grievance, especially when it involves a loved one, but also when it involves more amorphous forms of identity, like nation or race or creed (see my post on Evil, Oct. 07). According to Cupitt, “Jesus says: ‘Don’t let yourself be provoked into ressentiment by the prosperity of the wicked. Instead, be magnanimous, and teach yourself to see in it the grace of God, giving them time to repent. Too many people who have seen the blood of the innocent crying out for vengeance have allowed themselves to develop the revolting belief in a sadistic and vengeful God.’” (Cupitt doesn’t give a reference for this ‘saying’, however.)

I don’t necessarily agree with Cupitt’s conclusion that Jesus is the historical ‘hinge’ from the vertical strand to the horizontal strand, which is the case he makes over 90-odd pages. I think there have been others, notably Gautama Siddhartha (Buddha) and Confucius, who were arguably just as secular as Jesus was, and who preceded him by 500 years, though their spheres of influence were geographically distinct from Jesus’.

Obviously, I haven’t covered all of Cupitt’s thesis, including references to Plato and Kant, and the historical relationship between the vertical and horizontal strands of morality. He makes compelling arguments that Jesus has long been misrepresented by the Church, in particular, that Jesus challenged his society’s dependence on dogmatic religious laws.

One interesting point Cupitt makes, almost as a side issue, is that it was the introduction of the novel that brought humanist morality into intellectual discourse. Novels, and their modern derivatives in film and television, have invariably portrayed moral issues as being inter-human not God-human. As Cupitt remarks, you will go a long way before you will find a novel that portrays morality as being God-given. Even so-called religious writers, like Graham Greene and Morris West, were always analysing morality through human interaction (Greene was a master of the moral dilemma) and if God entered one of their narratives, ‘He’ was an intellectual concept, not a character in the story.

There is one aspect of Jesus that Cupitt doesn’t address, and it’s the fact that so many Christians claim to have a personal relationship with him. This, of course, is not isolated to Jesus. I know people who claim to have a personal relationship with Quan Yin (the Buddhist Goddess of Mercy) and others claim a relationship with the Madonna and others with Allah and others with Yahweh and so on. So what is all this? This phenomenon, so widespread, has fascinated me all my life, and the simple answer is that it’s a projection. There is nothing judgmental in this hypothesis. My reasoning is that for every individual, the projection is unique. Everyone who believes in this inner Jesus has their own specific version of him. I don’t knock this, but, as I’ve said before, the Deity someone believes in says more about them than it says about the Deity. If this Deity represents all that is potentially good in humanity then there is no greater aspiration.

In the beginning of Cupitt’s book, even before the Preface, he presents William Blake’s poem, The Divine Image. In particular, I like the last verse, which could sum up Cupitt’s humanist thesis.

And all must love the human form,
In heathen, Turk, or Jew;
Where Mercy, Love, & Pity dwell
There God is dwelling too.


In other words, God represents the feeling we have for all of humanity, which is not only subjective, but covers every possible attribute. If you believe in a vengeful, judgmental God, then you might not have a high opinion of humanity, but if you believe in a forgiving and loving God, then maybe that’s where your heart lies. As for those who claim God is both, then I can only assume they are as schizoid in their relationships as their Deity is.

Friday 1 January 2010

Stieg Larsson’s The Girl with the Dragon Tattoo

I don’t normally review novels, but, to be honest, I don’t read a lot of them either, which is an incredible admission for a want-to-be author to make (actually, I’m a real author, just not a very successful one). Most of my reading is non-fiction, at least 90%, and when given a choice between a novel and a non-fiction book, I’ll invariably end up with the latter. There is always a stack of unread books in front of me, all of them non-fiction.

The Girl with the Dragon Tattoo was an exception – this was a book I had to read – simply because I’d heard so much about it. It’s the first in a trilogy by Stieg Larsson, who unfortunately died before they became monstrously successful. The first won a Galaxy British Books award for the ‘Crime Thriller of the Year 2009’. I’m not sure if it has the same status in America as it has in the rest of the world, but, if it hasn’t, I expect that would change if the movie went international.

Larsson was not much younger than me, and was a journalist and editor-in-chief of Expo from 1999. He died in 2004, only 50 years old, just after he delivered all three manuscripts to his Swedish publisher. For a first novel, I’m extraordinarily impressed, and, from my own experience of publishing my first novel at a similar age, I suspect he must have been practicing the art of fiction well beforehand. Very few journalists make the jump from non-fiction to fiction (refer my post on Storytelling, Jul.09), even though their craft of creating easy-to-read yet meaningful and emotively charged prose is well honed. It’s a big leap from writing stories about real people and real events to imaginary scenarios populated by fictional yet believable characters. The craft of writing engaging dialogue looks deceptively simple, yet it can stump the most practiced wordsmith if they’ve never attempted it before.

Larsson has two protagonists, one middle-aged male and one mid-twenties female, who are opposites in almost every respect except intelligence. Mikael Blomkvist could easily be an alter-ego for Larsson, as he’s a financial investigative journalist who jointly runs a magazine with his ‘occasional lover’, Erika Berger, but, being an author myself, I don’t necessarily jump to such obvious conclusions. When people see an actor in a role on the screen, they often assume that that is what he or she must be like in real life, yet that’s the actor’s job: to make you believe the screen persona is a real person. Well, we authors have to create the same illusion – we are all magicians in that sense. There’s no doubt that Larsson used his inside knowledge in developing the story and background for his character, but the personality of Blomkvist may be quite different to Larsson’s. In fact, there is no reason not to assume that his other protagonist, albeit of a different age, sex and occupation, may be closer in personality to her creator. Having said that, Lisbeth Salander (who is the girl with the dragon tattoo) is a dysfunctional personality, possibly with a variant of Asperger’s, which makes one pause. The point is that she’s just as well drawn as Blomkvist, perhaps even better.

My point is that authors, myself included, often create characters who have characteristics that we wish we had but know we haven’t. Blomkvist is an easy-going, tolerant person with liberal views, who charms the pants off women, but has an incisive mind that sees through deception. Larsson may have had these qualities or some of these qualities and added others. We all do this to our protagonists.

One of the strengths of this book is that, whilst it entertains in a way that we expect thrillers to, it exhibits a social conscience with a strong feminist subtext. There are 4 parts, comprising 29 chapters, bookended with a prologue and epilogue. Each part has its own title page, and they all contain a statistic concerning violence against women in Sweden. I will quote the last one in the book, on the title page for Part 4, Hostile Takeover: “92% of women in Sweden who have been subjected to sexual assault have not reported the most recent violent incident to the Police.”

One of the things I personally like about this book is that it challenges our natural tendency to judge people by appearances. Larsson does this with Salander all through the book, where people continually misjudge her, and, all through her fictional life, she’s been undervalued and written off as a ‘retard’ and social misfit.

As a young person growing up, two of the most influential people in my life were eccentrics, both women, one my own age and one 2 generations older. This has made me more tolerant and less judgemental than most people I know. To this extent I can identify with Blomkvist. It’s one of the resonances I felt most strongly in this novel.

Larsson is a very good writer on all fronts. It is halfway through the book (over 500 pages, approx. 150,000 words) before anything truly dramatic happens on the page. Beforehand we have lots of mystery and lots of intrigue, but not a lot of action or real suspense. It’s a great credit to Larsson that he keeps you engaged for that entire time (it’s a genuine page-turner) without resorting to mini-climaxes. Dan Brown could learn a lot from Stieg Larsson. Brown is a master of riddles, but Larsson’s writing has a depth, both in characterisation and subtext, that’s way over Brown’s head.

There’s nothing much else to say – I’m yet to read the other two in the series. I assume they are self-contained stories with the same protagonists. One is naturally interested in how their relationship develops, both professionally and personally. I often believe that what raises one novel above another is the psychological believability of the characters, to coin my own phrase. Last year I reviewed the film, Watchmen (Oct. 09), a movie based on a graphic novel, but it was the depth of characterisation, along with its substantial subtext, that lifted it above the expected norm for comic-book movies. The Girl with the Dragon Tattoo is in a similar class of fiction. I don’t judge books or films by their genre; I judge them, primarily, by how well-written they are – it’s probably the criterion we all use, but aren’t consciously aware of.