Paul P. Mealing


Sunday, 23 May 2010

Why religion is not the root of all evil

Last week (19 May 2010, Sydney time) I heard an interview with William Dalrymple, who is currently attending the Sydney Writers’ Festival. The interview centred on his latest book, Nine Lives: In Search of the Sacred in Modern India.

Dalrymple was born in Edinburgh but has travelled widely in India, and the book apparently examines the lives of nine religious followers there. I haven’t read the book myself, but, following the interview, I’m tempted to seek it out.

As the title of his book suggests, Dalrymple appears fascinated with the religious in general, although he gave no indication in the interview what his own beliefs may be. His knowledge of India’s religions seems extensive and there are a couple of points he raised which I found relevant to the West’s perspective on Eastern religions and the current antagonistic attitudes towards religious issues: Islam, in particular.

As I say, I haven’t read the book, but the gist of it, according to the interview, is that he interviewed nine people who lead distinctly different cultural lives in India, and wrote a chapter on each one. One of the points he raised, which I found relevant to my own viewpoint, is the idea that God exists inside us and not out there. This is something that I’ve discussed before and I don’t wish to dwell on here, but he implied that the idea can be found in Sufism as well as Hinduism. It should be pointed out, by the way, that there is not one Hindu religion; in fact, Hinduism is really a collection of religions that the West tends to put in one conceptual basket. Dalrymple remarked on the similarity between Islamic Sufism and some types of Hinduism which have flourished in India. In particular, he pointed out that the Sufis are the strongest opponents of Wahhabi-style Islam in Pakistan, which is very similar to the fundamentalism of the Taliban. I raise this point because many people are unaware that there is huge diversity in Islam, with liberal attitudes pitted against conservative attitudes, the same as we find in any society worldwide, secular or otherwise.

This contradicts the view expressed by Hitchens and Harris (Dawkins has never expressed it, as far as I’m aware, but I’m sure he would concur) that people with moderate religious views somehow give succour to the fundamentalists and extremists in the world. This view is not just counter-productive; it’s divisive, simplistic, falsely based and deeply prejudicial. And it makes me bloody angry.

These are very intelligent, very knowledgeable and very articulate men, but this stance is an intellectualisation of a deeply held prejudice against religion in general. Because they are atheists, they believe it gives them a special position – they see themselves as being outside the equation – because they have no religious belief, they are objective, which gives them a special status. My point is that they can hardly ask for people with religious views to show tolerance towards each other if they can intellectualise their own intolerance towards all religions. By expressing the view, no matter how obtuse, that any religious tolerance somehow creates a shelter or cover for extremists, they are fomenting intolerance towards those who are actually practicing tolerance.

Dawkins was in Australia for an international atheist convention in Melbourne earlier this year. Religion is not a hot topic in this country, but, of course, it becomes a hot topic while he’s visiting, which makes me really glad that he doesn’t live here full time. On a TV panel show, he made the provocative claim that no evil has ever come from atheism. So atheists are not only intellectually superior to everyone else but they are also morally superior. What he said, and what he meant, is that no atheist has ever attempted genocide on a religious group because of his or her atheism (therefore religious belief), but lots of political groups have, which may or may not be atheistic. In other words, when it comes to practising genocide, whether the identification of the outgroup is religious or political becomes irrelevant. We don’t need religion to create politically unstable groups; they can be created by atheists as easily as they can by religious zealots. Dawkins, of course, chooses his words carefully, to give the impression that no atheist would ever indulge in an act of genocide, be it psychological or physical, but we all know that political ideology is no less dangerous than religious ideology.

One of Dawkins’ favourite utterances is: “There is no such thing as a Muslim child.” If one takes that statement to its logical conclusion, he’s advocating that all children should be disassociated from their cultural heritage. Is he aware of how totalitarian that idea is? He wants to live in a mono-culture, where everyone gets the correct education that axiomatically will ensure they will never believe in the delusion of God. Well, I don’t want to live in that world, so, frankly, he can have it.

People like to point to all the conflicts in the world of the last half century, from Ireland to the Balkans to the Middle East, as examples of how religion creates conflicts. The unstated corollary is that if we could rid the world of religion we would rid it of its main source of conflict. This is not just naïve, it’s blatantly wrong. All these conflicts are about the haves and have-nots. Almost all conflicts, including the most recent one in Thailand, are about one group having economic control over another. That’s what happened in Ireland, in the former Yugoslavia, and, most significantly, in Palestine. In many parts of the world, Iraq, Iran and Afghanistan being typical examples, religion and politics are inseparable. It’s naïve in the extreme to believe, from the vantage of a secular society, that if you rid a society of its religious beliefs you will somehow rid it of its politics, or make the politics more stable. You make the politics more stable by getting rid of nepotism and corruption. In Afghanistan, the religious fundamentalists have persuasion and political credibility because the current alternative is corrupt and financially self-serving.

It should be obvious for anyone who follows my blog that I’m not anti-atheist. In fact, I’ve done my best to stay out of this debate. But, to be honest, I refuse to take sides in the way some commentators infer we should. I don’t see it as an US and THEM debate, because I don’t live in a country where people with religious agendas are trying to take control of the parliament. We have self-confessed creationists in our political system, but, as was demonstrated on the same panel that Dawkins was on, they are reluctant to express that view in public, and they have no agenda, hidden or otherwise, for changing the school curricula. I live in a country where you can have a religious point of view and you won’t be hung up and scrutinised by every political commentator in the land.

Religion has a bad rap, not helped by the Catholic Church’s ‘above the law’ attitude towards sexual abuse scandals, but religious belief per se should never be the litmus test for someone’s intelligence, moral integrity or strength of character, either way.

Sunday, 9 May 2010

Aerodynamics demystified

I know: you don’t believe me; but you haven’t read Henk Tennekes’s book, The Simple Science of Flight: From Insects to Jumbo Jets. This is a book that should be taught in high school, not to get people into the aerospace industry, but to demonstrate how science works in the real world. It is probably the best example I’ve come across, ever.

I admit I have an advantage with this book, because I have an engineering background, but the truth is that anyone with a rudimentary high school education in mathematics should be able to follow it. By rudimentary, I mean you don’t need to know calculus, just how to manipulate basic equations. Aerodynamics is one of the most esoteric subjects on the planet – all the more reason that Tennekes’s book should be part of a high school curriculum. It demonstrates the accessibility of science to the layperson better than any book I’ve read on a single subject.

Firstly, you must appreciate that mathematics is about the relationship between numbers rather than the numbers themselves. This is why an equation can be written without any numbers at all, but with symbols (letters of the alphabet) representing numbers. The numbers can have any value as long as the relationship between them is dictated by the equation. So, for an equation containing 3 symbols, if you know 2 of the values, you can work out the third. Elementary, really. To give an example from Tennekes’s book:

W/S = 0.38 V²

Where W is the weight of the flying object (in Newtons), S is the area of the wing (square metres) and V is cruising speed (metres per second). The factor 0.38 depends on the angle of attack of the wing (average 6 degrees) and the density of the medium (air at sea level, roughly 1.25 kg/m³). What Tennekes reveals graphically is that you can apply this equation to everything from a fruit fly (Drosophila melanogaster) to an Airbus A380 on what he calls The Great Flight Diagram (see bottom of post). (Mind you, his graph is logarithmic along both axes, but that’s being academic, quite literally.)
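
To make the relationship concrete, here’s a minimal sketch in Python (my own illustration, not from the book; the hang-glider figures are invented purely for the example):

def cruise_speed(weight_newtons, wing_area_m2, k=0.38):
    # Tennekes's rule of thumb W/S = k * V**2, solved for V (in m/s)
    wing_loading = weight_newtons / wing_area_m2   # N per square metre
    return (wing_loading / k) ** 0.5

# Invented example: ~90 kg of pilot plus glider on 15 square metres of wing
print(cruise_speed(90 * 9.8, 15.0))   # about 12 m/s, i.e. roughly 45 km/h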

I’ve used a small sleight-of-hand here, because the equation for the graph is actually:

W/S = c × W^(1/3)

W/S (weight divided by wing area, which gives pressure) is called ‘wing loading’ and is proportional to the cube root of the weight, which is a direct consequence of the first equation (that I haven’t explained, though Tennekes does). Tennekes’s ‘Great Flight Diagram’ employs the second equation, but gives V (flight cruise speed) as one of the axes (horizontal) against weight (vertical axis); both logarithmic, as I said. At the risk of confusing you, the second equation graphs better (it gives a straight line on a logarithmic scale), but the relationships of both equations are effectively entailed in the one graph, because W, W/S and V can all be read from it.
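
Putting the two equations together gives a feel for why the diagram spans such an enormous range. This is my own back-of-envelope combination, not Tennekes’s wording, and the constant c is left as a free parameter because the post doesn’t quote its value:

# From W/S = 0.38 * V**2 and W/S = c * W**(1/3) it follows that
#   V = (c / 0.38) ** 0.5 * W ** (1/6)
# i.e. cruising speed grows only as the sixth root of weight.

def cruise_speed_from_weight(weight_newtons, c):
    return (c / 0.38) ** 0.5 * weight_newtons ** (1.0 / 6.0)

# So, for any fixed c, a flyer a million times heavier than another
# cruises only about ten times faster:
print(1_000_000 ** (1.0 / 6.0))   # approximately 10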

I was amazed that one equation could virtually cover the entire range of flight dynamics for winged objects on the planet. The equations also effectively explain the range of flight dynamics that nature allows to take place. The heavier something is, the faster it has to fly to stay in the air, which is why 747s consistently fly 200 times faster than fruit flies. The equation shows that there is a relationship between weight, wing area and air speed at all scales, and while that relationship can be stretched it has limits. Flyers (both natural and artificial) left of the curve are slow for their size and ones to the right are fast for their size – they represent the effective limits. (A line on a graph is called a ‘curve’, even if it’s straight, to distinguish it from a grid-line.) So a side-benefit of the book is that it provides a demonstration of how mathematics is not only a tool of analysis, but how it reveals nature’s limits within a specific medium – in this case, air in earth’s gravitational field. It reminded me of why I fell in love with physics when I was in high school – nature’s secrets revealed through mathematics.

The iconic Supermarine Spitfire is one of the few that is right on the curve, but, as Tennekes points out, it was built to climb fast as an interceptor, not for outright speed.

Now, those who know more about this subject than I do may ask: what about Reynolds numbers? Well, I know Reynolds numbers are used by aeronautical engineers to scale up aerodynamic phenomena from the small-scale models they use in wind tunnels to full-scale aeroplanes. Tennekes conveniently leaves this out, but then he’s not explaining how we use models to provide data for their full-scale equivalents – he’s describing what actually happens at whatever scale the flyer operates. So speed increases with weight and therefore with scale – we are not looking for a conversion factor to take us from one scale to another, which is what Reynolds numbers provide. (Actually, there’s a lot more to Reynolds numbers than that, but it’s beyond my intellectual ken.) I’m not an aeronautical engineer, though I did work in a very minor role on the design of a wind tunnel once. By minor role, I mean I took minutes of the meetings held by the real experts.

When I was in high school, I was told that winged flight was all explained by the Bernoulli effect, which Tennekes describes as a ‘polite fiction’. So that little gem alone makes Tennekes’s book a worthwhile addition to any school’s science library.

But the real value in this book comes when he starts to talk about migrating birds and the relationship between energy and flight. Not only does he compare aeroplanes with other forms of transport, thus explaining why flight is the most economical means of travel over long distances, as nature has already proven with birds, but he analyses what it takes for the longest-flying birds to achieve their goals, and how they live at the limit of what nature allows them to do. Again, he uses mathematics that the reader can work out for themselves, to convert calories from food into muscle power into flight speed and distance, to verify that the best-travelled migratory birds don’t cheat nature, but live at its limits.

The most extraordinary example is the bar-tailed godwit, which flies across the entire Pacific Ocean from Alaska to New Zealand and to Australia’s Northern Territory – a total of 11,000 km (7,000 miles) non-stop. It’s such a feat that Tennekes claims it requires a rethink on the metabolic efficiency of these birds’ muscles, and he provides the numbers to support his argument. He also explains how birds can convert fat directly into energy for muscles, something we can’t do (we have to convert it into sugar first). He also explains how some migratory birds even start to atrophy their wing muscles and heart muscles to extend their trip – they literally burn up their own muscles for fuel.

So he combines physics with biology with zoology with mathematics, all in one chapter, on one specific subject: bird migration.

He uses another equation, along with a graphic display of vectors, that explains how flapping wings work on exactly the same principle as ice skating in humans. What’s more, he doesn’t even tell the reader that he’s working with vectors, or use trigonometry to explain it, yet anyone would be able to understand the connection. That’s just brilliant exposition.

In a nutshell (without the diagrams) power equals force times speed: P=FV. For the same amount of Power, you can have a large Force and small Velocity or the converse.

In other words, a large force times a small velocity can be transformed into a small force with a large velocity, with very little energy loss if friction is minimised. This applies to both skaters and birds. In skating, the large force is your leg pushing sideways against your skate, with a small sideways velocity; this results in a large velocity forwards from a small force on the skate backwards. Because the skate is at a slight angle, the force sideways (from your leg) is much greater than the force backwards, but it translates into a high velocity forwards.
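
A toy calculation shows the trade-off; the numbers are my own, purely for illustration:

# Power = force * speed, so the same power can appear as a large force at
# low speed (the leg's sideways push) or a small force at high speed (the
# skate's forward glide), ignoring losses.
leg_force = 300.0       # N, sideways push against the skate (invented figure)
leg_speed = 0.5         # m/s, sideways speed of that push (invented figure)
power = leg_force * leg_speed        # 150 W

forward_force = 25.0    # N, much smaller force along the direction of travel
forward_speed = power / forward_force
print(power, forward_speed)          # 150 W re-emerges as 6 m/s forwards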

The same applies to birds on their downstroke: a large force vertically, at a slight forward angle, gives a higher velocity forward. Tennekes says that the ratio of wing tip velocity to forward velocity for birds is typically 1 to 3, though the larger figure varies between 2 and 4. If a bird wants to fly faster, it doesn’t flap quicker, it increases the amplitude, which, at the same frequency, increases wing tip speed, which increases forward flight speed. Simple, isn’t it? The sound you hear when pigeons or doves take off vertically is their wing tips actually touching (on both strokes). Actually, what you hear is the whistle of air escaping the closed gap, as a continuous chirp, which is their flapping frequency. So when they take off, they don’t double their wing flapping frequency, they double their wing flapping amplitude, which doubles their wing tip speed at the same frequency: the wing tip has to travel double the distance in the same time.

One possible point of confusion is a term Tennekes uses called ‘specific energy consumption’, which is a ratio, not an amount of energy as its description implies. It is used to compare energy consumption or energy efficiency between different birds (or planes), irrespective of what units of energy one uses. The inverse of this ratio gives the glide ratio (for both birds and planes), or what the French call ‘finesse’ – a term that has special appeal to Tennekes. So a lower energy consumption gives a higher glide ratio, and vice versa, as one would expect.
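
In code form (a trivial sketch of my own, with an invented glide ratio):

glide_ratio = 20.0                           # metres travelled per metre of height lost
specific_energy_consumption = 1.0 / glide_ratio   # dimensionless, as described above
print(specific_energy_consumption)           # 0.05: lower consumption, higher 'finesse'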

Tennekes finally gets into esoteric territory when he discusses drag and vortices, but he’s clever enough to perform an integral without introducing his readers to calculus. He’s even more clever when he derives an equation based on vortices and links it back to the original equation that I referenced at the beginning of this post. Again, he’s demonstrating how mathematics keeps us honest. To give another, completely unrelated example: if Einstein’s general theory of relativity couldn’t be linked to Newton’s general equation of gravity, then Einstein would have had to abandon it. Tennekes does exactly the same thing for exactly the same reason: to show that his new equation agrees with what has already been demonstrated empirically. Although it’s not his equation, but Ludwig Prandtl’s, whom he calls the ‘German grandfather of aerodynamics’.

Prandtl based his equation on an analogy with electromagnetic induction, which Tennekes explains in some detail. They both deal with an induced phenomenon that occurs in a circular loop perpendicular to the core axis. Vortices create drag, but in aerodynamics it actually goes down with speed, which is highly counterintuitive, but explains why flight is so economical compared to other forms of travel, both for birds and for planes. The drag from vortices is called ‘induced’ drag, not to be confused with ‘frictional’ drag that does increase with air speed, so at some point there is an optimal speed, and, logically, Tennekes provides the equation that gives us that as well. He also explains how it’s the vortices from wing tips that cause many long distance flyers, like geese and swans, to fly in V formation. The vortex supplies an updraft just aft and adjacent to the wingtip that the following bird takes advantage of.
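
The shape of that trade-off is easy to sketch. The decomposition below is the standard textbook one (induced plus frictional drag), not necessarily the exact form Tennekes uses, and the constants are invented:

def total_drag(v, a=2.0e5, b=0.02):
    # a and b are made-up constants standing in for the induced and
    # frictional contributions; real values depend on the aircraft.
    induced = a / v**2        # falls as speed rises
    frictional = b * v**2     # rises as speed rises
    return induced + frictional

# The minimum sits where the two terms are equal, at v = (a/b) ** 0.25
best_v = (2.0e5 / 0.02) ** 0.25
print(best_v, total_drag(best_v))   # about 56 m/s for these made-up numbers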

Tennekes uses his equations to explain why human-powered flight is the preserve of professional cyclists, and not a recreational sport like hang-gliding or conventional gliding. Americans apparently use the term ‘sailplane’ instead of ‘glider’, and Tennekes uses both without explaining that he’s referring to the same thing.

Tennekes reveals that his doctoral thesis (in 1964) critiqued the Concorde (still on the drawing board back then) as ‘a step backward in the history of aviation.’ This was considered heretical at the time, but not now, as history has demonstrated to his credit.

The Concorde is now given as an example, in psychology, of how humans are the only species that don’t know when to give up (called the ‘Concorde effect’). Unlike other species, humans evaluate the effort they’ve put into an endeavour, and sometimes, the more effort they invest, the more determined they become to succeed. Whether this is a good or bad trait is purely subjective, but it can evolve into a combination of pride, egotism and even denial. In the case of the Concorde, Tennekes likens it to a manifestation of ‘megalomania’, comparable to Howard Hughes’ infamous Spruce Goose.

Tennekes’s favourite plane is the Boeing 747, which is the complete antithesis of the Concorde in evolutionary terms, yet was developed at the same time; apparently it was designed so that it could be converted to a freighter once supersonic flight became the expected norm. So, in some respects, the 747, and its successors, were an ironic by-product of the Concorde-inspired thinking of the time.

My only criticism of Tennekes is that he persistently refers to a budgerigar as a parakeet. This is parochialism on my part: in Australia, where they are native, we call them budgies.

Great Flight diagram.

Sunday, 11 April 2010

To have or not to have free will

In some respects this post is a continuation of the last one. The following week’s issue of New Scientist (3 April 2010) had a cover story on ‘Frontiers of the Mind’ covering what it called Nine Big Brain Questions. One of these addressed the question of free will, which happened to be where my last post ended. In the commentary on question 8: How Powerful is the Subconscious? New Scientist refers to well-known studies demonstrating that neuron activity precedes conscious decision-making by 50 milliseconds. In fact, John-Dylan Haynes of the Bernstein Centre for Computational Neuroscience, Berlin, has ‘found brain activity up to 10 seconds before a conscious decision to move [a finger].’ To quote Haynes: “The conscious mind is not free. What we think of as ‘free will’ is actually found in the subconscious.”

New Scientist actually reported Haynes' work in this field back in their 19 April 2008 issue. Curiously, in the same issue, they carried an interview with Jill Bolte Taylor, who was recovering from a stroke, and claimed that she "was consciously choosing and rebuilding my brain to be what I wanted it to be". I wrote to New Scientist at the time, and the letter can still be found on the Net:

You report John-Dylan Haynes finding it possible to detect a decision to press a button up to 7 seconds before subjects are aware of deciding to do so (19 April, p 14). Haynes then concludes: "I think it says there is no free will."

In the same issue Michael Reilly interviews Jill Bolte Taylor, who says she "was consciously choosing and rebuilding my brain to be what I wanted it to be" while recovering from a stroke affecting her cerebral cortex (p 42) . Taylor obviously believes she was executing free will.

If free will is an illusion, Taylor's experience suggests that the brain can subconsciously rewire itself while giving us the illusion that it was our decision to make it do so. There comes a point where the illusion makes less sense than the reality.

To add more confusion, during the last week, I heard an interview with Norman Doidge MD, Research psychiatrist at the Columbia University Psychoanalytic Centre and the University of Toronto, who wrote the book, The Brain That Changes Itself. I haven’t read the book, but the interview was all about brain plasticity, and Doidge specifically asserts that we can physically change our brains, just through thought.

What Haynes' experimentation demonstrates is that consciousness is dependent on brain neuronal activity, and that’s exactly the point I made in my last post. Our subconscious becomes conscious when it goes ‘global’, so one would expect a time-lapse between a ‘local’ brain activity (that is subconscious) and the more global brain activity (that is conscious). But the weird part, if Taylor’s experience and Doidge’s assertions are right, is that our conscious thoughts can also affect our brain at the neuron level. This reminds me of Douglas Hofstadter’s thesis that we are all a ‘strange loop’, which he introduced in his book, Godel, Escher, Bach, and then elaborated on in a book called I am a Strange Loop. I’ve read the former tome but not the latter one (refer my post on AI & Consciousness, Feb.2009).

We will learn more and more about consciousness, I’m sure, but I’m not at all sure that we will ever truly understand it. As John Searle points out in his book, Mind, at the end of the day, it is an experience, and a totally subjective experience at that. In regard to studying it and analysing it, we can only ever treat it as an objective phenomenon. The Dalai Lama makes the same point in his book, The Universe in a Single Atom.

People tend to think about this from a purely reductionist viewpoint: once we understand the correlation between neuron activity and conscious experience, the mystery stops being a mystery. But I disagree: I expect the more we understand, the bigger the mystery will become. If consciousness turns out to be less weird than quantum mechanics, I’ll be very surprised. And we are already seeing quite a lot of weirdness, when consciousness is clearly dependent on neuronal activity, and yet the brain’s plasticity can be affected by conscious thought.

So where does this leave free will? Well, I don’t think that we are automatons, and I admit I would find it very depressing if that was the case. The last of the Nine Questions in last week’s New Scientist asks: will AI ever become sentient? In its response, New Scientist reports on some of the latest developments in AI, where they talk about ‘subconscious’ and ‘conscious’ layers of activity (read software). Raul Arrabales of the Carlos III University of Madrid has developed ‘software agents’ called IDA (Intelligent Distribution Agent) and is currently working on LIDA (Learning IDA). By ‘subconscious’ and ‘conscious’ levels, the scientists are really talking about tiers of ‘decision-making’, or a hierarchic learning structure, which is an idea I’ve explored in my own fiction. At the top level, the AI has goals, which are effectively criteria of success or failure. At the lower level it explores various avenues until something is ‘found’ that can be passed onto the higher level. In effect, the higher level chooses the best option from the lower level. The scientists working on this two-level arrangement have even given their AI ‘emotions’, which are built-in biases that direct them in certain directions. I also explored this in my fiction, with the notion of artificial attachment to a human subject that would simulate loyalty.

But, even in my fiction, I tend to agree with Searle, that these are all simulations, which might conceivably convince a human that an AI entity really thinks like us. But I don’t believe the brain is a computer, so I think it will only ever be an analogy or a very good simulation.

Both this development in AI and the conscious/subconscious loop we seem to have in our own brains remind me of the ‘Bayesian’ model of the brain developed by Karl Friston and also reported in New Scientist (31 May 2008). They mention it again in an unrelated article in last week’s issue – one of the little unremarkable reports they do – this time on how the brain predicts the future. Friston effectively argues that the brain, and therefore the mind, makes predictions and then modifies those predictions based on feedback. It’s effectively how the scientific method works as well, but we do it all the time in everyday encounters, without even thinking about it. But Friston argues that it works at the neuron level as well as the cognitive level. Neuron pathways are reinforced through use, which is a point that Norman Doidge makes in his interview. We now know that the brain literally rewires itself, based on repeated neuron firings.
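
As a cartoon of that predict-and-correct loop (my own sketch, nothing like Friston’s actual mathematics):

def update(prediction, observation, learning_rate=0.2):
    # Nudge the prediction towards the feedback by a fraction of the error
    error = observation - prediction
    return prediction + learning_rate * error

estimate = 0.0
for observed in [1.0, 1.0, 1.0, 1.0, 1.0]:
    estimate = update(estimate, observed)
print(round(estimate, 3))   # 0.672: the estimate creeps towards 1.0 with repetition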

Because we think in a language, which has become a default ‘software’ for ourselves, we tend to think that we really are just ‘wetware’ computers, yet we don’t share this ability with other species. We are the only species that ‘downloads’ a language to our progeny, independently of our genetic material. And our genetic material (DNA) really is software, as it is for every life form on the planet. We have a 4-letter code that provides the instructions to create an entire organism, materially and functionally – nature’s greatest magical trick.

One of the most important aspects of consciousness, not only in humans, but for most of the animal kingdom (one suspects) is that we all ‘feel’. I don’t expect an AI ever to feel anything, even if we programme it to have emotions.

But it is because we can all ‘feel’, that our lives mean so much to us. So, whether we have free will or not, what really matters is what we feel. And without feeling, I would argue that we would not only be not human, but not sentient.


Footnote: If you're interested in neuroscience at all, the interview linked above is well worth listening to, even though it's 40 mins long.

Saturday, 3 April 2010

Consciousness explained (well, almost, sort of)

As anyone knows, who has followed this blog for any length of time, I’ve touched on this subject a number of times. It deals with so many issues, including the possibilities inherent in AI and the subject of free will (the latter being one of my earliest posts).

Just to clarify one point: I haven’t read Daniel C. Dennett’s book of the same name. Paul Davies once gave him the very generous accolade by referencing it as 1 of the 4 most influential books he’s read (in company with Douglas Hofstadter’s Godel, Escher, Bach). He said: “[It] may not live up to its claim… it definitely set the agenda for how we should think about thinking.” Then, in parenthesis, he quipped: “Some people say Dennett explained consciousness away.”

In an interview in Philosophy Now (early last year) Dennett echoed David Chalmers’ famous quote that “a thermostat thinks: it thinks it’s too hot, or it thinks it’s too cold, or it thinks the temperature is just right.” And I don’t think Dennett was talking metaphorically. This, by itself, doesn’t imbue a thermostat with consciousness, if one argues that most of our ‘thinking’ happens subconsciously.

I recently had a discussion with Larry Niven on his blog, on this very topic, where we to-and-fro’d over the merits of John Searle’s book, Mind. Needless to say, Larry and I have different, though mutually respectful, views on this subject.

In reference to Mind, Searle addresses that very quote by Chalmers by saying: “Consciousness is not spread out like jam on a piece of bread…” However, if one believes that consciousness is an ‘emergent’ property, it may very well be ‘spread out like jam on a piece of bread’, and evidence suggests, in fact, that this may well be the case.

This brings me to the reason for writing this post: New Scientist, 20 March 2010, pp.39-41; an article entitled Brain Chat by Anil Ananthaswamy (consulting editor). The article refers to a theory proposed originally by Bernard Baars of The Neuroscience Institute in San Diego, California. In essence, Baars differentiated between ‘local’ brain activity and ‘global’ brain activity, since dubbed the ‘global workspace’ theory of consciousness.

According to the article, this has now been demonstrated by experiment, the details of which, I won’t go into. Essentially, it has been demonstrated that when a person thinks of something subconsciously, it is local in the brain, but when it becomes conscious it becomes more global: ‘…signals are broadcast to an assembly of neurons distributed across many different regions of the brain.’

One of the benefits of this mechanism is that it effectively filters out anything that’s irrelevant. What becomes conscious is what the brain considers important. What criterion the brain uses to determine this is not discussed. So this is not the explanation that people really want – it’s merely postulating a neuronal mechanism that correlates with consciousness as we experience it. Another benefit of this theory is that it explains why we can’t consider 2 conflicting images at once. Everyone has seen the duck/rabbit combination and there are numerous other examples. Try listening to a Bach contrapuntal fugue so that you listen to both melodies at once – you can’t. The brain mechanism (as proposed above) says that only one of these can go global, not both. It doesn’t explain, of course, how we manage to consciously ‘switch’ from one to the other.

However, both the experimental evidence and the theory are consistent with something that we’ve known for a long time: a lot of our thinking happens subconsciously. Everyone has come across a puzzle that they can’t solve; they walk away from it, or sleep on it overnight, and the next time they look at it, the solution just jumps out at them. Professor Robert Winston demonstrated this once on TV, with himself as the guinea pig. He was trying to solve a visual puzzle (find an animal in a camouflaged background) and when he had that ‘Ah-ha’ experience, it showed up as a spike on his brain waves. Possibly the very signal of it going global, although I’m only speculating based on my new-found knowledge.

Mathematicians have this experience a lot, but so do artists. No artist knows where their art comes from. Writing a story, for me, is a lot like trying to solve a puzzle. Quite often, I have no better idea what’s going to happen than the reader does. As Woody Allen once said, it’s like you get to read it first. (Actually, he said it’s like you hear the joke first.) But his point is that all artists feel the creative act is like receiving something rather than creating it. So we all know that something is happening in the subconscious – a lot of our thinking happens where we’re unaware of it.

As I alluded to in my introduction, there are 2 issues that are closely related to consciousness, which are AI and free will. I’ve said enough about AI in previous posts, so I won’t digress, except to restate my position that I think AI will never exhibit consciousness. I also concede that one day someone may prove me wrong. It’s one aspect of consciousness that I believe will be resolved one day, one way or the other.

One rarely sees a discussion on consciousness that includes free will (Searle’s aforementioned book, Mind, is an exception, and he devotes an entire chapter to it). Science seems to have an aversion to free will (refer my post, Sep.07) which is perfectly understandable. Behaviours can only be explained by genes or environment or the interaction of the two – free will is a loose cannon and explains nothing. So for many scientists, and philosophers, free will is seen as a nice-to-have illusion.

Consciousness evolved, but if most of our thinking is subconscious, it raises the question: why? As I expounded on Larry’s blog, I believe that one day we will have AI that will ‘learn’; what Penrose calls ‘bottom-up’ AI. Some people might argue that we require consciousness for learning, but insects demonstrate learning capabilities, albeit rudimentary compared to what we achieve. Insects may have consciousness, by the way, but learning can be achieved by reinforcement and punishment – we’ve seen it demonstrated in animals at all levels – they don’t have to be conscious of what they’re doing in order to learn.

So the only evolutionary reason I can see for consciousness is free will, and I’m not confining this to the human species. If, as science likes to claim, we don’t need, or indeed don’t have, free will, then arguably, we don’t need consciousness either.

To demonstrate what I mean, I will relate 2 stories of people reacting in an aggressive manner in a hostile situation with no conscious awareness of what they were doing.

One case was in the last 10 years, in Sydney, Australia (from memory), when a female security guard was knocked unconscious and her bag (of money) was taken from her. In front of witnesses, she got up, walked over to the guy (who was now in his car), pulled out her gun and shot him dead. She had no recollection of doing it. Now, you may say that’s a good defence, but I know of at least one other similar incident.

My father was a boxer, and when he first told me this story, I didn’t believe him, until I heard of other cases. He was knocked out, and when he came to, he was standing and the other guy was on the deck. He had to ask his second what happened. He gave up boxing after that, by the way.

The point is that both of those cases illustrate that humans can perform complicated acts of self-defence without being consciously cognisant of it. The question is: why is this the exception and not the norm?


Addendum: Nicholas Humphrey, whom I have possibly incorrectly criticised in the past, has an interesting evolutionary explanation: consciousness allows us to read others’ minds. Previously, I thought he authored an article in SEED magazine (2008) that argued that consciousness is an illusion, but I can only conclude that it must have been someone else. Humphrey discovered ‘blindsight’ in a monkey (called Helen) with a surgically removed visual cortex, which is an example of a subconscious phenomenon (sight) with no conscious correlate. (This specific phenomenon has since been found in humans with a damaged visual cortex as well.)


Addendum 2: I have since written a post called Consciousness unexplained in Dec. 2011 for those interested.

Sunday, 28 March 2010

Karl Popper’s criterion

Over the last week, I’ve been involved in an argument with another blogger, Justin Martyr, after Larry Niven linked us both to one of his posts. I challenged Justin (on his own blog) over his comments on ID (Intelligent Design), contending that his version was effectively a ‘God-of-the-gaps’ argument. Don’t read the thread – it becomes tiresome.

Justin tended to take the argument in all sorts of directions, and I tended to follow, but it ultimately became focused on Popper’s criterion of falsifiability for a scientific theory. First of all, notice that I use the word, falsifiability (not even in the dictionary) whereas Justin used the word, falsification. It’s a subtle difference but it highlights a difference in interpretation. It also highlighted to me that some people don’t understand what Popper’s criterion really means or why it’s so significant in scientific epistemology.

I know that, for some of you who read this blog, this will be boring, but, for others, it may be enlightening. Popper originally proposed his criterion to eliminate pseudo-scientific theories (he was targeting Freud at the time) whereby the theory is always true for all answers and all circumstances, no matter what the evidence. The best contemporary example is creationism and ID, because God can explain everything no matter what it entails. There is no test or experiment or observation one can do that will eliminate God as a hypothesis. On the other hand, there are lots of tests and observations (that have been done) that could eliminate evolutionary theory.

As an aside, bringing God into science stops science, which is an argument I once had with William Lane Craig and posted as The God hypothesis (Dec.08).

When scientists and philosophers first cited Popper’s criterion as a reason for rejecting creationism as ‘science’, many creationists (like Duane T. Gish, for example) claimed that evolution can’t be a valid scientific theory either, as no one has ever observed evolution taking place: it’s pure conjecture. So this was the first hurdle of misunderstanding. Firstly, evolutionary theory can generate hypotheses that can be tested. If the hypotheses aren’t falsifiable, then Gish would have had a case. The point is that all the discoveries that have been made, since Darwin and Wallace postulated their theory of natural selection, have only confirmed the theory.

Now, this is where some people, like Justin, think Popper’s specific criterion of ‘falsification’ should really be ‘verification’. They would argue that all scientific theories are verified not falsified, so Popper’s criterion has it backwards. But the truth is you can’t have one without the other. The important point is that the evidence is not neutral. In the case of evolution, the same palaeontological and genetic evidence that has proved evolutionary theory correct could have just as readily proven it wrong, which is what you would expect if the theory were wrong.

Justin made a big deal about me using the word testable (for a theory) in lieu of the word, falsification, as if they referred to different criteria. But a test is not a test if it can’t be failed. So Popper was saying that a theory has to be put at risk to be a valid theory. If you can’t, in principle, prove the theory wrong, then it has no validity in science.

Another example of a theory that can’t be tested is string theory, but for different reasons. String theory is not considered pseudo-science because it has a very sound mathematical basis, but it has effectively been stagnant for the last 20 years, despite some of the best brains in the world working on it. In principle, it does meet Popper’s criterion, because it makes specific predictions, but in practice those predictions are beyond our current technological abilities to either confirm or reject.

As I’ve said in previous posts, science is a dialectic between theory and experimentation or observation. String theory is an example where half the dialectic is missing (refer my post on Layers of nature, May.09). This means science is epistemologically dynamic, which leads to another misinterpretation of Popper’s criterion. In effect, any theory is contingent on being proved incorrect, and we find that, after years of confirmation, some theories are proved incorrect depending on circumstances. The best known example would be Newton’s theories of mechanics and gravity being overtaken by Einstein’s special and general theories of relativity. Actually, Einstein didn’t prove Newton’s theories wrong so much as demonstrate their epistemological limitations. In fact, if Einstein’s equations couldn’t be reduced to Newton’s equations (in the limit where speeds are much smaller than the speed of light, c), then he would have had to reject them.

Thomas Kuhn had a philosophical position that science proceeds by revolutions, and Einstein’s theories are often cited as an example of Kuhn’s thesis in action. Some philosophers of science (Steve Fuller, for example) have argued that Kuhn’s and Popper’s positions are at odds, but I disagree. Both Newton’s and Einstein’s theories fulfil Popper’s criterion of falsifiability, and have been verified by empirical evidence. It’s just that Einstein’s theories take over from Newton’s when certain parameters become dominant. We also have quantum mechanics, which effectively puts them both in the shade, but no one uses a quantum mechanical equation, or even a relativistic one, when a Newtonian one will suffice.

Kuhn effectively said that scientific revolutions come about when the evidence for a theory becomes inexplicable to the extent that a new theory is required. This is part of the dialectic that I referred to, but the theory part of the dialectic always has to make predictions that the evidence part can verify or reject.

Justin also got caught up in believing that the methodology determines whether a theory is falsifiable or not, claiming that some analyses, like Bayesian probabilities for example, are impossible to falsify. I’m not overly familiar with Bayesian probabilities, but I know that they involve an iterative process, whereby a result is fed back into the equation, which hones the result. Justin was probably under the impression that this homing in on a more accurate result makes it an unfalsifiable technique. But, actually, it’s all dependent on the input data. Bruce Bueno de Mesquita, who, New Scientist claims, is the most successful predictor in the world, uses Bayesian techniques along with game theory to make predictions. But a prediction is falsifiable by definition, otherwise it’s not a prediction. It’s the evidence that determines if the prediction is true or false, not the method one uses to make the prediction.
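
To illustrate the point, here is a minimal Bayes-rule sketch (a toy example of my own, nothing to do with Bueno de Mesquita’s models): the updating machinery is neutral, and it is the evidence fed into it that drives a hypothesis towards acceptance or rejection:

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    # Posterior probability of the hypothesis after one piece of evidence
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1.0 - prior) * likelihood_if_false)

belief = 0.5   # start undecided about some hypothesis
for _ in range(3):
    # evidence twice as likely if the hypothesis is true than if it is false
    belief = bayes_update(belief, 0.8, 0.4)
print(round(belief, 3))   # 0.889 -- contrary evidence would push it down instead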

In summary: a theory makes predictions, which could be right or wrong. It’s the evidence that should decide whether the theory is right or wrong; not the method by which one makes the prediction (a mathematical formula, for example); nor the method by which one gains the evidence (the experimental design). And it’s the right or wrong part that defines falsifiability as the criterion.

To give Justin due credit, he allowed me the last word on his blog.

Footnote: for a more esoteric discussion on Steve Fuller’s book, Kuhn vs. Popper: The Struggle for the Soul of Science, in a political context, I suggest the following. My discussion is far more prosaic and pragmatic in approach, not to mention, un-academic.

Addendum: (29 March 2010) Please read April's comment below, who points out the errors in this post concerning Popper's own point of view.

Addendum 2: This is one post where the dialogue in the comments (below) is probably more informative than the post, owing to contributors knowing more about Popper than I do, which I readily acknowledge.

Addendum 3: (18 Feb. 2012) Here is an excellent biography of Popper in Philosophy Now, with particular emphasis on his contribution to the philosophy of science.

 

Tuesday, 16 March 2010

Speciation: still one of nature’s great mysteries

First of all a disclaimer: I’m a self-confessed dilettante, not a real philosopher, and, even though I read widely and take an interest in all sorts of things scientific, I’m not a scientist either. I know a little bit more about physics and mathematics than I do biology, but I can say with some confidence that evolution, like consciousness and quantum mechanics, is one of nature’s great mysteries. But, like consciousness and quantum mechanics, just because it’s a mystery doesn’t make it any less real. Back in Nov.07, I wrote a post titled: Is evolution fact? Is creationism myth?

First, I defined what I meant by ‘fact’: it’s either true or false, not something in between. So it has to be one or the other: like does the earth go round the sun or does the sun go round the earth? One of those is right and one is wrong, and the one that is right is a fact.

Well, I put evolution into that category: it makes no sense to say that evolution only worked for some species and not others; or that it occurred millions of years ago but doesn’t occur now; or the converse, that it occurs now but not in the distant past. Either it occurs or it never occurred, and all the evidence, and I mean all of the evidence, in every area of science (genetics, zoology, palaeontology, virology) suggests it does. There are so many ways that evolution could have been proven false in the 150 years since Darwin and Wallace postulated their theory of natural selection that it’s just as unassailable as quantum mechanics. Natural selection, by the way, is not a theory; it’s a law of nature.

Now, both proponents and opponents of evolutionary theory often make the mistake of assuming that natural selection is the whole story of evolution and there’s nothing else to explain. So I can confidently say that natural selection is a natural law because we see evidence of it everywhere in the natural world, but it doesn’t explain speciation, and that is another part of the story that is rarely discussed. But it’s also why it’s one of nature’s great mysteries. To quote from this week’s New Scientist (13 March, 2010, p.31): ‘Speciation still remains one of the biggest mysteries in evolutionary biology.’

This is a rare admission in a science magazine, because many people, on both sides of the ideological divide that evolution has created in some parts of the world (like the US), believe that it opens up a crack in the scientific edifice for creationists and intelligent design advocates to pull it down.

But again, let’s compare this to quantum mechanics. In a recent post on Quantum Entanglement (Jan.10), where I reviewed Louisa Gilder’s outstanding and very accessible book on the subject, I explain just how big a mystery it remains, even after more than a century of experimentation, verification and speculation. Yet, no one, whether a religious fundamentalist or not, wants to replace it with a religious text or any other so-called paradigm or theory. This is because quantum mechanics doesn’t challenge anything in the Bible, because the Bible, unsurprisingly, doesn’t include anything about physics or mathematics.

Now, the Bible doesn’t include anything about biology either, but the story of Genesis, which is still a story after all the analysis, has been substantially overtaken by scientific discoveries, especially in the last 2 centuries.

But it’s because of this ridiculous debate, which has taken on a political force in the most powerful and wealthy nation in the world, that no one ever mentions that we really don’t know how speciation works. People are sure to counter this with one word, mutation, but mutations and genetic drift don’t explain how genetic anomalies amongst individuals lead to new species. It is assumed that they accumulate to the point that, in combination with natural selection, a new species branches off. But the New Scientist cover story, reporting on work done by Mark Pagel (an evolutionary biologist at the University of Reading, UK), challenges this conventionally held view.

To quote Pagel: “I think the unexamined view that most people have of speciation is this gradual accumulation by natural selection of a whole lot of changes, until you get a group of individuals that can no longer mate with their old population.”

Before I’m misconstrued, I’m not saying that mutation doesn’t play a fundamental role, as it obviously does, which I elaborate on below. But mutations within individuals don’t axiomatically lead to new species. This is a point that Erwin Schrodinger attempted to address in his book, What is Life? (see my review posted Nov.09).

Years ago, I wrote a letter to science journalist, John Horgan, after reading his excellent book The End of Science (a collection of interviews and reflections by some of the world’s greatest minds in the late 20th Century). I suggested to him an analogy between genes and environment interacting to create a human personality, and the interaction between speciation and natural selection creating biological evolution. I postulated back then that we had the environment part, which was natural selection, but not the gene part of the analogy, which is speciation. In other words, I suggested that there is still more to learn, just like there is still more to learn about quantum mechanics. We always assume that we know everything that there is to know, when clearly we don’t. The mystery inherent in quantum mechanics indicates that there is something that we don’t know, and the same is true for evolution.

Mark Pagel’s research is paradigm-challenging, because he’s demonstrated statistically that genetic drift by mutation doesn’t give the right answers. I need to explain this without getting too esoteric. Pagel looked at the branches of 101 different evolutionary trees, including: ‘cats, bumblebees, hawks, roses and the like’. By doing a statistical analysis of the time between speciation events (the length of the branches), he expected to get a bell curve distribution, which would accord with the conventional view, but instead he got an exponential curve.

To quote New Scientist: ‘The exponential is the pattern you get when you are waiting for some single, infrequent event to happen… the length of time it takes a radioactive atom to decay, and the distance between roadkills on a highway.’

In other words, as the New Scientist article expounds in some detail, new species happen purely by accident. What I found curious about the above quote is the reference to ‘radioactive decay’ which was the starting point for Erwin Schrodinger’s explanation of mutation events, which is why mutation is still a critical factor in the whole process.
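
To see the distinction Pagel is drawing, here’s a toy simulation of my own (not his analysis): waiting times for a single rare, random event are exponentially distributed, with many short gaps and a few very long ones, whereas the ‘gradual accumulation’ picture suggests something more like a symmetric bell curve:

import random

random.seed(1)
exp_waits = [random.expovariate(1.0) for _ in range(10000)]    # single chance event
bell_waits = [random.gauss(1.0, 0.2) for _ in range(10000)]    # gradual accumulation

# The exponential's median sits well below its longest waits; the bell curve's
# median sits much closer to its extremes by comparison.
print(sorted(exp_waits)[5000], max(exp_waits))
print(sorted(bell_waits)[5000], max(bell_waits))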

Schrodinger went to great lengths, very early in his exposition, to explain that nearly all of physics is statistical, and gave examples from magnetism to thermal activity to radioactive decay. He explained how this same statistical process works in creating mutations. Schrodinger coined a term, ‘statistico-deterministic’, but in regard to quantum mechanics rather than physics in general. Nevertheless, chaos and complexity theory reinforce the view that the universe is far from deterministic at almost every level that one cares to examine it. As the New Scientist article argues, Pagel’s revelation supports Stephen Jay Gould’s assertion: ‘that if you were able to rewind history and replay the evolution on Earth, it would turn out differently every time.’

I’ve left a lot out in this brief exposition, including those who challenge Pagel’s analysis, and how his new paradigm interacts with natural selection and geographical separation, which are also part of the overall picture. Pagel describes his own epiphany when he was in Tanzania: ‘watching two species of colobus monkeys frolic in the canopy 40 metres overhead. “Apart from the fact that one is black and white and one is red, they do all the same things... I can remember thinking that speciation was very arbitrary. And here we are – that’s what our models are telling us.”’ In other words, natural selection and niche-filling are not enough to explain diversification and speciation.

What I find interesting is that wherever we look in science, chance plays a far greater role than we credit. It’s not just the cosmos at one end of the scale, and quantum mechanics at the other end, that rides on chance, but evolution, like earthquakes and other unpredictable events, also seems to be totally dependent on the metaphorical roll of the dice.

Addendum 1 : (18 March 2010)

Comments posted on New Scientist challenge the idea that a ‘bell curve’ distribution should have been expected at all. I won’t go into that, because it doesn’t change the outcome: 78% of ‘branches’ statistically analysed (from 110) were exponential and 0% were a normal distribution (bell curve). Whatever the causal factors, in which mutation plays a definitive role, speciation is as unpredictable as earthquakes, weather events and radioactive decay (for an individual atom).

Addendum 2: (18 March 2010)

Writing this post, reminded me of Einstein’s famous quote that ‘God does not play with dice’. Well, I couldn’t disagree more. If there is a creator-God (in the Einstein mould) then first and foremost, he or she is a mathematician. Secondly, he or she is a gambler who loves to play the odds. The role of chance in the natural world is more fundamental and universally manifest than we realise. In nature, small variances can have large consequences: we see that with quantum theory, chaos theory and evolutionary theory. There appears to be little room for determinism in the overall play of the universe.