Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Tuesday 1 September 2009

The Existential God

I was introduced to Don Cupitt on Stephen Law’s blog, about a year ago, or even earlier, when Law provided a link to a radio interview with Cupitt on a BBC philosophy programme. Cupitt is a theologian, and he was being quizzed on his particularly unorthodox view of God, which, from memory, was more humanist than sacred.

More recently, I acquired his book, Above Us Only Sky, followed by a Chinese character, which I assume means ‘sky’. Inside, the book is subtitled, The Religion of Ordinary Life, which pretty well sums up Cupitt’s entire philosophy. The book’s title is obviously a direct reference to the line in John Lennon’s song, Imagine, which also includes the line, ‘And no religion too’, and, despite Cupitt being a theologian, that line could equally apply to his book. Right at the start of his book, he provides 27 points in what one might call a manifesto for living. Point 22 is headed:

“Even the Supreme Good must be left behind at once.

I, all my expressions, and even the Summum Bonum, the Supreme Good itself, are all of them transient. Eternal happiness may be great enough to make one feel that one’s whole life has been worthwhile, but it is utterly transient. Let it go!”

His book, as the above quote exemplifies, is even more humanist than his interview, and, in fact, I would call it existentialist, hence the title of this post. I have also called myself existentialist on more than one occasion, but then so is Viktor Frankl in my view, who is not entirely an atheist either. Existentialism is not synonymous with atheism, by the way, though most theists think it is. By existentialist, I mean that we are responsible for our own destiny, which makes God less significant in the overall scheme of things. In other words, a belief in God is less relevant when one considers that moral choices, and any other choice for that matter, are completely dependent on the individual. I take the extreme view and suggest that we are responsible for God rather than God being responsible for us, but that’s so heretical I’ll desist from pursuing it for the sake of continuity.

But Cupitt’s book was a genuine surprise, because, despite its glib title, it’s actually a very meaty book on philosophy. For a start, Cupitt puts emphasis on language as the prism, or even filter, through which we analyse and conceptualise the world. To quote his point 6:

“Life is a continuous streaming process of symbolic expression and exchange.

The motion of language logically precedes the appearing of a formed and ‘definite’ world. It is in this sense that it was once said that ‘In the beginning was the Word’.”

I don’t entirely agree with him, concerning his implication that language determines our reality, but I need to digress a bit before I can address that specifically.

A central tenet of his thesis is that our Platonic heritage of a ‘perfect’ world is an illusion that we are only just starting to shed. Life is exactly what we get and that’s all it is. His philosophy is that once we realise this ‘truth’, we can live the ‘religion of ordinary life’ as his title suggests, and his manifesto specifies. In fact, he argues that this is what we already do in a secular humanist society, but we just don’t articulate it as such. Curiously, I made a similar point in a post I wrote on this blog almost 2 years ago, titled, Existentialism: the unconscious philosophy (Oct.07). Basically, I contended that, following the global Western cultural revolution of the 1960s, we adopted an existentialist philosophy without specifying it as such, or even realising it. We effectively said that we are responsible for our actions and their consequences and God has very little to do with it. I believe Cupitt is making a similar point: we achieved a revelation that humanity’s future is in our hands, and, unless we accept that responsibility, we will fail it.

But it’s in his discussion on rejecting Plato and the illusion that we inherited from him, that he returns to the significance of language:

“You can have more-or-less anything, provided only that you understand and accept that you can have it only language-wrapped – that is, mediated by language’s secondary, symbolic and always-ambiguous quality.” (Emphasis in the original.)

In highlighting this point, I’ve skipped a lot of his text, including an entire chapter on ‘Truth’ and a discussion on Descartes, and, in particular, Kant’s Critique of Pure Reason, where Kant famously contends that we will never understand the ‘thing-in-itself’, which is a direct reference to Plato’s ‘Forms’.

Like many dissertations on epistemology, Cupitt glosses over the significance of mathematics, which is arguably the most stubborn relic of Plato’s philosophy, and one that effectively side-steps Cupitt’s ‘language-wrapped’ dependence that I quoted above. I’m not a physicist but physics has interested me my whole life, and I’ve long believed that, as a discipline, it provides us with the best means of interpreting the universe and revealing its mysteries. In fact, without physics, our knowledge of the universe would still be stuck in the dark ages. But Cupitt alludes to a deep scepticism when he describes it thus:

“The physicist sets out his definitions of matter, space and time, then his laws of motion, and then his formulae for making calculations. But when he has developed his system of mathematical physics – a system of ideas – how is he to prove that there is a Real World out there of which the system is true and to which it applies? …whence do all its ideas get their ‘objective reality’?”

In other words, Cupitt is sceptical that a ‘system of ideas’, even one imbedded in mathematics, can provide an ‘objective reality’. But there are 2 points that Cupitt fails to address in his dissertation on ‘truth’ and ‘language’. Firstly, mathematics is not a language in the normal sense, although many people refer to it as if it is. Mathematical symbols are a language of sorts, but the concepts they represent, and, in particular, the relationships that mathematics describes, are the closest we will get to ‘“God-given” truth’, to quote Roger Penrose (The Emperor’s New Mind). In other words, they have a universal quality unlike any other epistemic system that we know of, and arguably contain truths independent of the human mind. Now, I know many philosophers dispute this, but mathematical ‘truths’ (wherever they come from) are arguably the only abstract truths we can rely on, and we do rely on them all the time, in every technological marvel we exploit.

So mathematics provides us with ‘truths’ as well as a window into the ‘reality’ of the universe that we would never otherwise possess. It is on this point that I believe Cupitt and I epistemologically differ.

But it’s not epistemology per se that Cupitt is challenging when he explicates that ideas are ‘language-wrapped’; he has a deeper, theological motive. He points out how absurd it is to think that God provided us with scriptures using a language humans invented, especially since God should be outside language in the same way ‘He’ is supposedly outside the very universe in which we exist. What’s more, Cupitt challenges the very notion that our experience of ‘God’ through prayer can be validated without language. I believe Cupitt makes a very good point here: if our ideas are language-wrapped then so is our idea of God.

In an earlier post (May 09) I referenced an essay by Raymond Smullyan called Is God a Taoist? In my post, I made a connection between Smullyan’s idea of God or Tao as ‘the scheme of things’ and the mathematical laws at different levels of scale that the universe appears to obey. This particular concept of an impersonal, non-language-wrapped God in combination with a Platonic mathematical realm is entirely compatible with Cupitt’s stated philosophy, though I doubt he would accept it. Cupitt provides an allegorical tale of a large group of Buddhist monks, one of whom gets up to speak about the Tao (Cupitt uses the term, ‘Supreme Good’), saying that: ‘No words can speak of it… It is beyond speech, it is even beyond all thought.’ When he sits down, another monk stands to address the same crowd: ‘Did the last speaker say anything?’

In a recent post on Storytelling (July 09), I made the point that without language we would think in the language of dreams, which is imagery and emotion. In fact, I argued that the language of dreams is the language of storytelling, only we are unaware of it. The story is ‘language-wrapped’ but the emotional content of the story is not, and neither is the imagery it conjures in our minds. Without these 2 components, the story is lifeless, just words on paper – it fails to engage the mind as story.

I’m not surprised that many cultures include dreaming as part of their religion – the American Indians are possibly the best known. Australian Aborigines use the term ‘Dream-time’ (at least, that’s its English translation) in reference to their religion, full of mythical creatures and mythical tales. In recent posts on Larry Niven’s blog, there have been a couple of references to the comparison between religion and music that people often make. Many people have made the observation that music transcends language, and to some extent that is true. The only reason we can say that is because music moves us emotionally, and whilst language can describe those emotions it can’t convey them, whereas music can. So I would argue that religious experience is not language-wrapped in the same way that musical experience is not language-wrapped. Again, Cupitt would beg to differ. In fact, he would dispute the religious experience and call it illusion, and he is not alone. Most philosophers would agree with him completely.

Cupitt devotes an entire chapter to the subject, ‘Religion’, where he describes it as a ‘standard’ (as in a flag) to which people rally and with which they identify, and which, he rightly acknowledges, represents a view of God that is no longer tenable or of value. This is the religion that divides people and incorporates an infinite being who stands outside the world and judges us all. On this point, Cupitt and I are in agreement: it’s an entirely outdated, even dangerous, concept.

“…those who split the world between good and evil in effect split their own psyches too, and the puritans, the wowsers, the morality-campaigners, the condemners and the persecutors end up as unhappy people, Bible-bashers who are themselves without religion.”

This is the origin of the neurosis that made people of my generation revolt. Cupitt also makes reference to the 1960s when he describes the change in the zeitgeist that is effectively the theme of his book. Neurosis is like hypnotism – your brain tells you to do one thing but you do something else over which you feel you have no control. If you put your mind in a strait-jacket then it will revolt in a way that will shock you. Religion can do exactly that. To quote Cupitt again:

“In one form or another around the world, organized religion still manages to keep a large percentage of humanity locked in the most wretched mental poverty and backwardness.”

Cupitt goes on to express his individualistic philosophy that he calls ‘solar living’ (living, like the sun, by expending oneself), but I would call it existentialism, or a variant thereof. The fundamentals of his religious philosophy are to replace ‘God’ with ‘Life’, and, rather than have a relationship between an earthly existence and an immortal one, to have a relationship between the individual’s life and the continuing stream of life that involves the rest of humanity.

My own approach is to refer to the internal and external world, which is the cornerstone of my entire world philosophy, but is effectively the same concept that Cupitt expresses here, albeit using different language. (Later in the book, Cupitt rejects the inner life concept altogether.) However, unlike Cupitt, I would contend that religion is part of one’s inner world rather than the external world. This makes religion completely subjective, and, in many respects, in conflict with organised or institutionalised religion. I’ve made this point before on previous posts, and Cupitt makes a similar point, arguably the most important in the book for me:

“The only ideas, thoughts, convictions that stay with you and give you real support are ones you have formulated yourself and tested out in your own life… In effect, the only religion that can save you is one you have made up for yourself and tested out for yourself: in short, a heresy.”

Cupitt always brings the discussion back to language, and this is the source of my personal dissent with his philosophy. He makes the apparently self-evident point: ‘…there is no meaning, no truth, no reality, and no knowledge without language.’ This is true for us humans, cognitively, but the unstated corollary is that, because none of these things can exist without language, they can’t exist without humanity either. This is the crux of his entire epistemological thesis.

Language is the most obvious bridge between our internal and external world, and almost nothing can be conveyed without language, but lots of things can be felt and experienced without the intervention of language. But Cupitt would argue that any experience is meaningless, quite literally, if it can’t be expressed in language. In other words, because it comes ‘language-wrapped’, that’s what makes it real. One needs to be careful here to distinguish epistemology from ontology, and I think the 2 are being confounded.

I think religion, as it is experienced by the individual, actually has little to do with language and more to do with emotion, just like music or even storytelling. As I described above, a story is written in words, but if it doesn’t transcend the words then it’s not a story. On the other hand, Cupitt argues, categorically, that there is nothing meaningful ‘outside language’.

Religion, and therefore God, is a psychological phenomenon, just like colour. Now, everyone thinks this is a misguided analogy, but colour does not exist out there in the external world, it only exists in your mind. What exists in the external world are light waves reflected off objects. You could probably build a robot that could delineate different wavelengths of light and associate a range of wavelengths with a label, like red for instance. But the robot wouldn’t actually see the colour red like you and I do. Some monkeys can’t see colours that we can see, because they only have dichromatic vision, not trichromatic, but, if they are genetically engineered, they can. Yes, that’s a fact, not internet bullshit – it’s been done.
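To make the robot analogy concrete, here is a minimal sketch in Python (purely hypothetical, mine rather than Cupitt’s or anyone else’s) of the sort of mapping such a robot might use: it delineates approximate bands of visible light by wavelength and attaches a label to each. The point of the analogy survives in the code: ‘red’ is only ever a string attached to a band of numbers, and the experience of redness is nowhere in it.

# Hypothetical sketch: a 'robot' that labels wavelengths of visible light.
# The band boundaries are the usual approximate values, in nanometres.
VISIBLE_BANDS_NM = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def label_wavelength(nm):
    # Return a colour label for a wavelength in nanometres.
    for low, high, name in VISIBLE_BANDS_NM:
        if low <= nm < high:
            return name
    return "outside visible range"

for wavelength in (430, 520, 640, 900):
    print(wavelength, "nm ->", label_wavelength(wavelength))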

Anyway, God is an experience that some people have that ‘feels’ like something outside themselves even though it only occurs in their minds. Many people never have this experience, so they don’t believe in God. The problem with this is that some of the people who think they have this experience believe that it makes them superior to those who don’t, and likewise some of the people who don’t have it believe they must be axiomatically superior to those who claim they do, because the latter are obviously nuts.

Cupitt doesn’t address this, but I do, because it’s what creates the whole divide that is actually so unimportant. I contend it’s like heterosexuals believing that everyone should be heterosexual, because it’s unimaginable to them to be anything else, and homosexuals arguing that everyone should be homosexual (which, of course, they never do). But, in the same way that I think people who are heterosexual should behave as heterosexuals and people who are homosexual should behave as homosexuals, I believe that people who have an experience that they call God should be theists, and those who don’t should be atheists.

At the end of the day, I think God is a projection. I believe that the God someone believes in says more about them than it says about God (I’ve made this point before). That way people get the God they deserve. I call it the existential God.

I’ve now gone completely away from Cupitt’s book, but don’t be put off: it’s a very good book. And it’s very good philosophy because it provokes critical thinking. Another person would write something completely different to what I have written because they would focus on something else. This is a book to which, I admit, I have not done justice. It is worth acquiring just to read the essay he wrote for a symposium on Judaic-Christian dialogue – not what people expected, I’m sure.

Cupitt ultimately argues for a universal morality that ignores identity of any kind, just like Lennon’s song. Accordingly, I’ll give Cupitt the last word(s):

“Our moral posture and practice must never be associated with a claim to be… an adherent of some particular ethnic or religious group, because all those who retreat into ‘identity’ have given up universal morality and have embraced some form of partisan fundamentalism – which means paranoia and hatred of humanity.”

“…any philosopher who is serious about religion should avoid all contact with ‘organized religion’. …Which is why, on the day this book is published, I shall finally and sadly terminate my own lifelong connection with organized religion.”

Saturday 8 August 2009

Memetics

Susan Blackmore is a well-known proponent of ‘memes’, and she wrote an article in New Scientist, 1 August 2009, called The Third Replicator, which is about the rise of the machines. No, this has nothing to do with the so-called Singularity Prophecy (see my post of that title in April this year). I haven’t read any of Blackmore’s books, but I’ve read articles by her before. She’s very well respected in her field, which is evolutionary psychology. By the ‘Third Replicator’ she’s talking about the next level of evolution, following genes and memes: the evolution of machine knowledge, if I get the gist of her thesis. I find Blackmore a very erudite scholar and writer, but I have philosophical differences.

I’ve long had a problem with the term meme, partly because I believe it is over-used and over-interpreted, though I admit it is a handy metaphor. When I first heard the term meme, it was used in the context of cultural norms or social norms, so I thought: why not use ‘social norms’ as we do in social psychology? Yes, they get passed on from generation to generation and they ‘mutate’, and one could even say that they compete, but the analogy with genes has a limit, and the limit is that there are no analogous phenotypes and genotypes with memes as there are with genes (I made the same point in a post on Human Nature in Nov.07). And Dawkins makes the exact same point himself in his discussion on memes in The God Delusion. Dawkins talks about ‘memeplexes’ arising from a ‘meme-pool’, and in terms of cultural evolution one can see merit in the meme called meme, but I believe it ignores other relevant factors, as I discuss below.

Earlier this year I referenced essays in Hofstadter and Dennett’s The Mind’s I (Subjectivity, Jun.09; and Taoism, May 09). One of the essays included is Dawkins’ Selfish Genes and Selfish Memes. In another New Scientist issue (18 July 2009), Fern Elsdon-Baker, head of the British Council’s Darwin Now project, is critical of what she calls the Dawkins dogma, saying: ‘Metaphors that have done wonders for people’s understanding of evolution are now getting in the way’; and ‘Dawkins’ contribution is indisputable, but his narrow view of evolution is being called into question.’ Effectively, Elsdon-Baker is saying that the ‘selfish gene’ metaphor has limitations as well, which I won’t discuss here, but I certainly think the ‘selfish meme’ metaphor can be taken too literally. People tend to forget that neither genes nor memes have any ‘will’ (Dawkins would be the first to point this out), yet the qualifier, ‘selfish’, implies just that. However, it’s a metaphor, remember, so there’s no contradiction. Now I know that everyone knows this, but in the case of memes, I think it’s necessary to state it explicitly, especially when Blackmore (and Dawkins) compare memes to biological parasites.

Getting back to Blackmore’s article: the first replicators are biological, being genes; the second replicators are human brains, because we replicate knowledge; and the third replicators will be computers, because they will eventually replicate knowledge or information independently of us. This is an intriguing prediction and there’s little doubt that it will come to pass in some form or another. Machines will pass on ‘code’ analogous to the way we do, since DNA is effectively ‘code’, albeit written in sequences of nucleotide bases rather than binary digits. But I think Blackmore means something else: machines will share knowledge and change it independently of us, which is a subtly different interpretation. In effect, she’s saying that computers will develop their own ‘culture’ independently of ours, in the same way that we have created culture independently of our biological genes. (I will return to this point later.)

And this is where the concept of meme originally came from: the idea that cultural evolution, specifically in the human species, overtook biological evolution. I first came across this idea, long before I’d heard of memes, when I read Arthur Koestler’s The Ghost in the Machine. Koestler gave his own analogy, which I’ve never forgotten. He made the point that the human brain really hasn’t changed much since Homo sapiens first started walking on the planet, but what we had managed to do with it had changed irrevocably. The analogy he gave was to imagine someone, say a usurer, living in medieval times, who used an abacus to work out their accounts; then one morning they woke up to find it had been replaced with an IBM mainframe computer. That is what the human brain was like when it first evolved – we really had no idea what it was capable of. But culturally we evolved independently of biological evolution, and from this observation Dawkins coined the term, meme, as an analogy to biological genes, and, in his own words, the unit of selection.

But reading Blackmore: ‘In all my previous work in memetics I have used the term “meme” to apply to any information that is copied between people…’. So, by this definition, the word meme covers everything that the human mind has ever invented, including stories, language, musical tunes, mathematics, people’s names, you name it. When you use one idea to encompass everything, the term tends to lose its usefulness. I think there’s another way of looking at this, and it’s to do with examining the root cause of our accelerated accumulation of knowledge.

In response to a comment on a recent post (Storytelling, last month) I pointed out how our ability to create script effectively allows us to extend our long-term memory, even across generations. Without script, as we observe in many indigenous cultures, dance and music allow the transmission of knowledge across generations orally. But it is this fundamental ability, amplified by the written word, that has really driven the evolution of culture, whether it be in scientific theories, mathematical ideas, stories, music, even history. Are all these things memes? By Blackmore’s definition (above) the answer is yes, but I think that’s stretching the analogy, if for no other reason than that many of these creations are designed, not selected. But leaving that aside, the ability to record knowledge for future generations has arguably been the real accelerant in the evolution of culture, in all its manifestations. We can literally extend our memories across generations – something that no other species can do. So where does this leave memes? As I alluded to above, not everything generated by the human mind is memetic in my opinion, but I’ll address that at the end.

Going back to my original understanding of meme as a cultural or social norm, I can see its metaphorical value. I still see it as an analogy to genes – in other words, as a metaphor. Literally, memes are social norms, but they are better known for their metaphorical meaning as analogous to genes. If, on the other hand, memes are all knowledge – in other words, everything that is imbedded in human language – then the metaphor has been stretched too far to be meaningful in my view. A metaphor is an analogy without the word ‘like’, and analogies are the most common means to explain a new concept or idea to someone else. It is always possible that people can take a metaphor too literally, and I believe memes have suffered that fate.

As for the ‘third replicator’, it’s an intriguing and provocative idea. Will machines create a culture independently of human culture that will evolutionarily outstrip ours? It’s the stuff of science fiction, which, of course, doesn’t make it nonsense. I think there is the real possibility of machines evolving, and I’ve explored it in my own ventures into sci-fi, but how independent they will become of their creators (us) is yet to be seen. Certainly, I see the symbiotic relationship between us and technology only becoming more interdependent, which means that true independence may never actually occur.

However, the idea that machine-generated ideas will take on a life of their own is not entirely new. What Blackmore is suggesting is that such ideas won’t necessarily interact with humanity for selection and propagation. As she points out, we already have viruses and search engines that effectively do this, but it’s their interaction with humanity that eventually determines their value and their longevity, thus far. One can imagine, however, a virus remaining dormant and then becoming active later, like a recessive gene, so there: the metaphor has just been used. Because computers use code, analogous to DNA, then comparisons are unavoidable, but this is not what Blackmore is referring to.

Picture this purely SF scenario: we populate a planet with drones to ‘seed’ it for future life, so that for generations they have no human contact. Could they develop a culture? This is Asimov territory, and at this stage of technological development, it is dependent on the reader’s, or author’s, imagination.

One of Blackmore’s principal contentions is that memes have almost been our undoing as a species in the past, but we have managed to survive all the destructive ones so far. What she means is that some ideas have been so successful, yet so destructive, that they could have killed off the entire human race (any ideology-based premise for global warfare would have sufficed). Her concern now is that the third replicator (machines) could create the same effect. In other words, AI could create a run-away idea that could ultimately be our undoing. Again, this has been explored in SF, including stories I’ve written myself. But, even in my stories, the ‘source’ of the ‘idea’ was originally human.

However, as far as human constructs go, we’re not out of the woods by a long shot, with the most likely contender being infinite economic growth. I suspect Blackmore would call it a meme but I would call it a paradigm. The problem is that a meme implies it’s successful because people select it, whereas I think paradigms are successful simply because they work at whatever they predict, like scientific theories and mathematical formulae, all of which are inherently un-memetic. In other words, they are not successful because we select them, but we select them because they are successful, which turns the meme idea on its head.

But whatever you want to call it, economic growth is so overwhelmingly successful – socially, productively, politically, on a micro and macro scale – that it is absolutely guaranteed to create a catastrophic failure if we continue to assume the Earth has infinite resources. But that’s a subject for another post. Of course, I hope I’m totally wrong, but I think that’s called denial. Which raises the question: is denial a meme?

Sunday 2 August 2009

Einstein's words

Today I bought a special edition of the science magazine, DISCOVER (July 2009), with the auspicious title, DISCOVER presents EINSTEIN. The magazine opens with an essay that Einstein wrote in 1931 (so before World War II). Or, at least, it was published in 1931, copyrighted by The Hebrew University of Jerusalem. The essay is titled, The World as I see It, which one assumes was provided by Einstein himself.

For the rest of this post I will remain silent; I merely wish to present some very eloquent excerpts that provide an insight into Einstein’s personal philosophy.

How strange is the lot of us mortals! Each of us here for a brief sojourn; for what purpose he knows not, though sometimes he thinks he senses it.

A hundred times every day I remind myself that my inner and outer life are based on the labors of other men, living and dead, and that I must exert myself in order to give in the same measure as I have received and am still receiving. I am strongly drawn to a frugal life and am often oppressively aware that I am engrossing an undue amount of the labor of my fellow men. I regard class distinctions as unjustified and, in the last resort, based on force. I also believe that a simple and unassuming life is good for everybody, physically and mentally.

Schopenhauer’s saying “A man can do what he wants but not want what he wants” has been a very real inspiration to me since my youth; it has been a continual consolation in the face of life’s hardships, my own and others’, and an unfailing wellspring of tolerance. This realization mercifully mitigates the easily paralyzing sense of responsibility and prevents us from taking ourselves and other people all too seriously; it is conducive to a view of life which, in particular, gives humor its due.

I have never looked upon ease and happiness as ends in themselves – this ethical basis I call the ideal of a pigsty. The ideals that have lighted my way, and time after time have given me new courage to face life cheerfully, have been Kindness, Beauty and Truth. Without the sense of kinship with men of like mind, without the occupation with the objective world, the eternally unattainable in the field of art and scientific endeavors, life would have seemed to me empty. The trite objects of human efforts – possessions, outward success, luxury – have always seemed to me contemptible.

I am truly a “lone traveler” and have never belonged to my country, my home, my friends, or even my immediate family with my whole heart; in the face of all these ties, I have never lost a sense of distance and a need for solitude – feelings which increase with years. One becomes sharply aware, but without regret, of the limits of mutual understanding and consonance with other people. No doubt such a person loses some of his innocence and unconcern; on the other hand, he is largely independent of the opinions, habits, and judgments of his fellows and avoids the temptation to build his inner equilibrium upon such insecure foundations.

My political ideal is democracy. Let every man be respected as an individual and no man idolized. It is an irony of fate that I myself have been the recipient of excessive admiration and reverence from my fellow-beings, through no fault, and no merit, of my own. The cause of this may well be the desire, unattainable for many, to understand the few ideas to which I have with my feeble powers attained through ceaseless struggle.

The led must not be coerced; they must be able to choose their leader. An autocratic system of coercion, in my opinion, soon degenerates. Force attracts men of low morality, and I believe it to be an invariable rule that tyrants of genius are succeeded by scoundrels. For this reason I have always been passionately opposed to systems such as we see in Italy and Russia today.

The really valuable thing in the pageant of human life seems to me not the political state but the creative, sentient individual, the personality; it alone creates the noble and the sublime, while the herd as such remains dull in thought and dull in feeling.

This topic brings me to that worst outcrop of herd life, the military system, which I abhor. That a man can take pleasure in marching in fours to the strains of a band is enough to make me despise him. He has only been given his big brain by mistake; unprotected spinal marrow was all he needed. This plague-spot of civilization ought to be abolished with all possible speed. Heroism on command, senseless violence, and all the loathsome nonsense that goes by the name of patriotism – how passionately I hate them! How vile and despicable seems war to me! I would rather be hacked to pieces than take part in such an abominable business.

The most beautiful experience we can have is the mysterious. It is the fundamental emotion that stands at the cradle of true art and true science. Whoever does not know it and can no longer wonder, no longer marvel, is as good as dead, and his eyes are dimmed. It was the experience of mystery – even if mixed with fear – that engendered religion. A knowledge of the existence of something we cannot penetrate, our perceptions of the profoundest reason and the most radiant beauty, which only in their most primitive forms are accessible to our minds – it is this knowledge and this emotion that constitutes true religiosity, and in this sense, and this sense alone, I am a deeply religious man.

I cannot conceive of a God who rewards and punishes his creatures, or has a will of the kind that we experience in ourselves. Neither can I nor would I want to conceive of an individual that survives his physical death; let feeble souls, from fear or absurd egoism, cherish such thoughts. I am satisfied with the mystery of the eternity of life and with the awareness and a glimpse of the marvelous structure of the existing world, together with the devoted striving to comprehend a portion, be it ever so tiny, of the Reason that manifests itself in nature.

Saturday 1 August 2009

Interview with a disillusioned nun

This is another radio interview (Friday 31 July 2009) which I strongly recommend, both inspiring and counter-expectative. (The link is only available for the next 2 weeks)

Dr. Colette Livermore worked with Mother Teresa's Order before leaving and studying to become a medical practitioner. She's written a book on her experiences titled, Hope Endures.

This is a repeat interview and I had heard it before. In between I read Robert Hutchison's book on Opus Dei, Their Kingdom Come, which I wrote about in another post in June this year, Politics in religion, religion in politics. In light of what I had learnt from Hutchison's book, Sister Colette's experiences in the Order made a lot more sense.

This is religion at its most perverse, where obedience is rated higher than normally-accepted standards of moral behaviour (it will make you fiercely angry). As Dr. Colette explains herself, it actually flies in the face of what Jesus taught.

You can download the audio as a podcast and listen to it whenever you want, but you won't get the musical selection. On the other hand, you can listen to it now and get the music as well. Either way, it's only available on line for the next 2 weeks. It is the 31 July interview in the list.

Tuesday 28 July 2009

Storytelling, Art and the Evolution of Mind

This is in response to a book, On the Origin of Stories by a Kiwi academic, Brian Boyd, subtitled Evolution, Cognition and Fiction. According to the back flap of the cover, Brian Boyd is ‘University Distinguished Professor in the Department of English, University of Auckland [and] is the world’s foremost authority on the works of Vladimir Nabokov.’

It’s an ambitious work in that Boyd attempts to explain, or at least understand, the role of art, and stories in particular, in the evolutionary development of the human mind. In this endeavour, he references the work of well-known exponents in the field, like Richard Dawkins and Steven Pinker, but also many others.

Storytelling, or more specifically, literature, is a subject that attracted attention on Stephen Law’s blog earlier this month, and was taken up by others: Larry Niven and Faith in Honest Doubt are two that I’m aware of. Boyd’s book, like all good philosophical treatises, is provocative and introduces novel perspectives.

I’ll warn you in advance that this is a very lengthy post, even by my standards. Having said that, I’ve written much longer treatises on the subject than this; so in some respects this could be considered the abridged version.

As a writer of fiction, and having once taught a course in fiction writing, I obviously have particular views of my own. I once wrote a letter to New Scientist (which was subsequently published) supporting the idea that art was like the ‘Peacock’s Tail’ in driving the development of the human brain. It’s an idea originally proposed by Geoffrey Miller, that art and intelligence evolved together in humans by ‘sexual selection’. Boyd makes the point that this is not the whole story and I suspect he’s correct, but I’m getting ahead of myself, so I will backtrack slightly.

Boyd’s book is broken down into 2 major parts (Book I & Book II), with the first part looking at evolution and cognition and art, and the second part looking at 2 iconic works in particular: Homer’s Odyssey and Dr Seuss’s Horton Hears a Who! I’ll address Book I mainly, as it captures both the essence and the detail of Boyd’s ‘evolutionary’ thesis.

There are 2 main strands to his thesis on evolutionary human development: co-operation and ‘cognitive play’; the latter term being one that Boyd has coined himself to explain the origins of art per se. Co-operation, as Boyd expounds in detail, is a necessary attribute of any social species, of which there are innumerable examples in all areas of the animal kingdom from ants and bees to top predators. I won’t elaborate on this aspect of Boyd’s thesis, even though he returns to it often, but its significance to storytelling is that stories give ‘lessons’ in the role of co-operation or the consequences of betrayal – in other words, moral lessons. But this is only one component of a very complex picture.

Boyd’s elaboration on ‘cognitive play’ is far more interesting, if for no other reason than that it’s a novel concept that fits our experience and observations. He starts off by pointing out how play is an important part of the development of many species in that it tunes their sensory-motor responses and effectively ‘wires’ their brains in ways that are crucial for their survival as adults. The same, of course, is true for humans, but our development is particularly prolonged, and has been focused by cultural enhancements. And, in humans, the development of mental acuity is just as important (arguably even more important) as physical acuity, hence the role of cognitive play, which Boyd argues is the origin of art.

So cognitive play fills the same role as the physical play observed in other species, only humans have taken it to another level, as we tend to do with anything mental. Boyd points out that birdsong, or the ‘artwork’ of a bowerbird, are other examples, but these are sexual-selection behaviours, which may or may not be relevant to human artistic endeavours. In fact, Boyd argues that there are numerous examples of human art that are not performed or produced for sexual selection, which may be a by-product rather than the primary driver.

What he’s saying is that cognitive play, in the form of artistic, creative acts, is a means to ‘tune’ our brains for better cognitive skills rather than to impress the opposite sex, though that does happen, as we are all aware – but so does winning on the sports field.

And certainly, when we are children, we see art as playing, or certainly I did. Whenever I was given any spare time at all, I drew pictures and I drew compulsively right up to puberty when I lost interest altogether. So I see merit in Boyd’s notion of cognitive play, even if it’s based on my personal experience rather than objective observation.

Where I disagree with Boyd is over what constitutes the reward in art. Boyd argues that it is pattern that provides the pleasure (he says reward) and gives the example of music, as well as story. He explains how we have been ‘designed’ (he has no problem using the ‘d’ word in evolution, and neither do I) to look for patterns, and art rewards us in this regard. In music we anticipate the melody even when we listen to it for the first time, and we find it unsatisfactory when it doesn’t meet our expectations of harmony or rhythm. The same is true of stories, where we have expectations provided by plot development and are disappointed if our expectations are not met.

But, personally, I think Boyd is slightly off track here. What music and stories have in common is that they create tension and then resolve that tension. It is the resolution of the tension that gives the most satisfying reward. It’s unsurprisingly similar to the sexual experience, and in all cases we are rewarded with dopamine. It is no coincidence that the word ‘climax’ is used in all three contexts: music, stories and sex.

But there are other rewards: highly specific emotional rewards. I recently overheard a friend recommending to another friend a book she had read, because it had aroused all her emotions. She said that it had made her laugh, cry, feel scared and angry. She had felt compassion, sexiness, excitement, anxiety, despair and moral outrage, all in one book. Without, at least, some of these emotional rewards, stories would not hold our attention for long, if at all.

Speaking of attention, Boyd raises it as a special attribute, not just in reference to storytelling, but in regard to humanity in general. Attention seeking and attention sharing are apparent in infants from an early age, and, according to Boyd, are unique to humanity in their overriding dominance in infant behaviour. To quote: ‘Human one-year-olds engage in joint attention… indicating objects or events simply for the sake of sharing attention to them, something that apes never do. They expect others to share interest, attention, and response: “This by itself is rewarding for infants – apparently in a way it is not for any other species on the planet.”’ This last quotation is Boyd quoting D.S. Wilson quoting Michael Tomasello.

The upshot is that ‘attention seeking’ is one of the main drivers behind artistic endeavour and, based on personal experience, I would concur. Boyd quotes H.G. Wells: “Scarcely any artist will hesitate in the choice between money and attention.” This explains why the great bulk of artists, now and in antiquity, rarely received a livable income from their efforts. It’s a misapprehension, as I know from personal conversations, that artists seek fame to gain fortune; otherwise they’d all give up early. It’s equally misperceived that artists are happy to create works just for themselves or for the sake of the doing. Artists crave an audience above all else – it’s their whole raison d’être.

Boyd talks about ‘creativity’ in Darwinian terms: how it’s almost a random process, or variations on accepted themes (like mutations), that get selected by the artist’s milieu or audience. He points out that it doesn’t have ‘value’ in biological terms, but he gives examples of how it’s value-added in technology, and of how technology and art have had a symbiotic relationship (my terminology, not his). Printing allowed the production of novels that could be read in one’s own time, film technology gave us movies, and recordings gave us music on demand; these are all iconic examples.

But, to my mind, this strictly Darwinian interpretation underplays the role of imagination; although, to be fair to Boyd, he’s not dismissive of it, quite the contrary. Imagination is the ability to perceive some event or place or person that is not in the here and now. It could be in the past or the future, or another geographical location, or it could be completely fictional. As I pointed out in a previous post, we know that some animals have imagination, because they can imagine the outcome of a hunting strategy; otherwise, how or why would they be able to do it? (Refer: Imagination, Mar. 08.)

But we humans take imagination to another level, and art, all art, is effectively the projection of an individual’s imagination in the form of some external manifestation so that others can also experience it. In fact, this is as close to a definition of art as I can give. Imagination is the key to creativity – I find it impossible to conceive of one without the other. But imagination is also the key to the comprehension of a story (Boyd also appreciates this as I explicate below, though he uses different terminology).

After a lengthy exposition on the ‘theory of mind’ – how it has evolved in primates and other species, and the stages it reaches at different ages in children, from causal inferences to the perception of others’ beliefs – Boyd eventually reveals an insight that, as a writer, I already knew.

But first he explains the difference between semantic and episodic memory: the former dealing with facts and the latter dealing with events or experiences. Importantly, he references the work of Frederic Bartlett who made us aware that episodic memories are reconstructed in a way that we recollect the ‘gist’ of an event rather than any particular detail. Boyd points out that we reconstruct an episodic memory for its value in future encounters, rather than a need for knowledge per se, as we do with semantic memory.

Then he says: ‘Tellingly for this constructive episodic simulation hypothesis, imagining the future recruits most of the same brain areas as recalling the past… to provide a form of “life simulator” that allows us to test options without trying them in real life.’ (Italics in the original.) This, of course, is an accurate description of ‘fiction’, but it also occurs in dreams. As a writer, I’ve always known that the ‘medium’ for a story is not the words on the page, but the reader’s imagination, and, effectively, that is what Boyd is saying.

He makes the point even more emphatically when he quotes Barsalou: “As people comprehend a text, they construct simulations to represent its perceptual, motor, and affective content. Simulations appear central to the representation of meaning.” Boyd then goes on to explain how this specific human ability allows us to follow a particular character (he uses the word, agent) in a narrative. (I’ll come back to this when I discuss viewpoint.)

By the time Boyd starts to talk about narrative you’re well into the book, and what he’s really talking about is gossip, where we relate events to others concerning our interpretation of someone else’s viewpoint. Is this how fiction arose? I’m not sure. In the modern world it’s equivalent to journalism, and the differences between journalism and fiction are much greater than people realise. For a start, journalism is not art, and that’s a big distinction. Art must always engage one emotionally, and whilst both gossip and journalism can fulfill that function, fiction works on another level. Biography comes closest: a well-written biography can engage a reader as well as any fiction; but fiction is an art that few people master, in the same way that few people master musical composition. In fact, I would suggest that the difference between journalism and fiction is like the difference between someone performing a song and someone composing it.

I’ve always compared fiction writing to musical composition, even though I’ve done one and not the other. It’s just that writing fiction has more in common, in my mind, with music than with writing non-fiction. Someone (I can’t remember who) said that all art comes back to music, or words to that effect, and, in my limited experience as an artist, I would have to agree. In fiction you create moods and emotions and responses, not unlike music, which is completely different to non-fiction. In journalism you can sometimes achieve the same, but it’s not the raison d’être of journalism as it is of fiction.

Or is it? Consider that the ‘stories’ that attract attention are always the ones that horrify us, and if they don’t, the media ‘sensationalises’ them just for the sake of ‘good copy’. Just today, I heard an 8 minute interview with a survivor of the recent bombing in Indonesia, and it was the man’s authenticity and sincerity that engaged me. But why do I take this vicarious interest in someone else’s misfortune? Is this the same reason that I read fiction? Perhaps it is, but I expect not. If we can get all the vicarious emotive responses we need from all the world’s disasters then why do we need fiction? Boyd doesn’t address this, but maybe it’s unanswerable.

I have my own theory: fiction, from childhood to adulthood, is about escapism. People ‘indulge’ in fiction to escape. Therefore, in my view, it has more in common, historically, with mythology than gossip. Comic book superheroes are our equivalent to mythology, from Tarzan living with the apes to Superman who came from outer space to provide a moral code that is as indomitable as his abilities. So fiction arose from the imagination escaping way beyond the bounds of our mortal existence. But with subtlety and more down-to-earth realism it became our earliest model of psychology, which Boyd alludes to on more than one occasion. I’ve always contended that fiction is a mixture of reality and fantasy, and how it’s blended varies according to the genre and the author’s proclivities.

Boyd doesn’t use the word ‘escapism’, but the term ‘pretend-play’, as the catalyst for storytelling, along with the need for novelty and surprise, especially amongst young children. He points out that only humans can pretend something to be something else, like an analogy, and children demonstrate pretend-play, including pretend-attributed objects (like sticks for swords and guns), from a very early age. Pretend-play certainly exercises the imagination, and escapism is the logical consequence of that. Escapism means setting the imagination free: allowing it to roam beyond the everyday. The imagination needs exercise, in the form of ‘cognitive play’, just like any other aspect of our physiology. So I believe Boyd has provided a valuable insight with this novel concept.

If fiction originally started as play in the form of drama, then Boyd’s contention that cognitive play is the root of fiction makes a lot of sense, though I don’t believe that’s what he had in mind. Performing as a character for an audience is certainly one of the best sources of natural opiates one can acquire, as I can attest from personal experience. Probably equivalent to performing on a sports field, though I’m not in a position to compare. But if fiction started off as performance, then it makes more sense to me than the idea that it originated from our propensity for gossip, and I expect Boyd would agree. However, we tend to think that fiction started as an oral tradition, as Boyd explores in his analysis of Homer’s Odyssey, but that too is a performance, albeit of a different kind to acting out a drama. Few people appreciate the similarity between writing fiction and acting a role, yet it requires a writer to create the role in their head even before the actor has seen it. I’ve always believed that the mental process is the same for both. It requires them both to mentally step into someone else’s shoes. When a reader becomes engaged in reading a work of fiction they become the actor in their own mind, only they’re not consciously aware of it.

Boyd repeatedly makes allusions to empathy and ‘mirror neurons’ in his text. In the 25 June 2008 issue of New Scientist, under the heading, The Science of Fiction, they report on psychometric studies done to show how reading fiction improves empathy. The specific test for empathy was reading the emotional content of eyes revealed in a letter-box view. You wouldn’t think that reading fiction would improve the reading of eyes, but it’s not so surprising when one considers that empathy is a pre-requisite for fiction to work at all. So reading fiction actually exercises our empathy.

Regarding his analysis of the Odyssey, I have a subtly different perspective to Boyd’s, whose exposition I won’t go into. This is such an iconic work that ‘odyssey’ refers to a genre in its own right. It represents, in one epic work, the most universal theme of all stories: the hero overcoming a string of adversities to achieve a life-saving, even soul-saving, goal. I believe this is such a universal theme in fiction because it’s how we all gain self-knowledge and wisdom, even though we rarely admit it. It’s Socrates’ adage about the unexamined life in narrative form: it’s only when we are seriously challenged that we seriously examine ourselves. It’s a universal theme that can be found in all cultures, including the Chinese I Ching: ‘Adversity is the reverse of success, but it can lead to success if it befalls the right person.’ Those very words encapsulate the theme of almost any work of fiction one cares to nominate.

One of the aspects of fiction that Boyd touches on obliquely is our ability to follow its thread when our limited working memory doesn’t allow us to keep the whole work in our mind for the story’s duration. In fact, a lengthy novel can be read over days without us losing our way, like rejoining a path after having a night’s sleep. One reason is that a well-written story states its premise* early on, and Boyd gives the example of the Odyssey, where we know the goal from the beginning. But a more contemporary example would be J.K. Rowling’s Harry Potter series, where she gives the premise for the entire 7 books in the first few chapters of the first book, so we know what it’s all about all the way through.

Subplots can be followed if they are all interwoven with the plot that the protagonist is following. In fact, every relationship in a story is its own subplot, and if all the relationships involve the protagonist then it’s no more difficult to follow than what we encounter in our own lives. And likewise, the hero’s journey is analogous to what we experience in real life, albeit the hero’s world is completely different to the one we live in. So when we read the story we are in the hero’s here and now, and we find that no more difficult than living in our own here and now. This is the power of human imagination - we can live a vicarious life as easily as we can live our own – escapism is fiction, or, more accurately, fiction is escapism, almost by definition.

On another level, there is a cognitive aspect to this that is more universal. We only comprehend new knowledge when we integrate it into existing knowledge. For example: we generally only understand a new word when it is explained using known words (just look up a dictionary). With a story, we are constantly integrating new knowledge into existing knowledge as the story progresses. So we are exercising a fundamental, uniquely human, cognitive skill while we are being entertained.

In his lengthy discussion on the Odyssey, Boyd alludes more than once to every writer’s dilemma: how to meet the reader’s expectations, which the premise itself often creates, and also surprise them. Expectations are necessary if a storyteller wishes to engage their audience, but without surprises they will be less than satisfied. It’s the tension between plot and character – which Boyd obviously appreciates, but struggles to articulate – that creates the dilemma, but also resolves it if the writer knows how. The plot provides the expectations and it’s the characters that provide the surprises – this is my own personal experience as a writer, and one of the secrets, I believe, of our craft. If a character surprises you as a writer, then you know they will also surprise the reader. The secret is to give your characters ‘free will’; that way they provide the spontaneity that differentiates fresh fiction from stale. Not all writers agree with me on this, but if my characters don’t take on a life of their own, then I know my story is not worth pursuing. (Refer my Dec.08 post, Zen; an interpretation, for an artist’s perspective, specifically Escher’s.)

This leads to another aspect of prose fiction, that Boyd explores in his analysis of Odyssey, which is multiple viewpoints. Good fiction doesn’t need a narrator, because it’s the characters that tell the story, which is another secret of the craft. Viewpoint should be internal not external, whether it be first person or third person intimate, and that is why they are the most popular viewpoints used in novels. Obviously, third person intimate allows greater flexibility and that’s why it is the most popular method of all. Character is the inner world and plot is the outer world, which makes plot synonymous with fate and character synonymous with free will; that is the secret of writing fiction in a nutshell.

This has been a much longer essay than I intended, but then fiction is a personal passion of mine, and Boyd’s tome covers an enormous territory.

However, there is still one aspect of fiction, specifically prose fiction, that hasn’t been addressed, and it’s not really addressed by Boyd either. He refers to ‘life-simulation’, as I mentioned earlier, which in effect is the imagining of future scenarios, and this is what allows fiction to work, not only for the writer but also for the reader. What he doesn’t mention is the crucial role of imagery.

Right at the end of his book, Boyd discusses in some detail Dr. Seuss’s Horton Hears a Who!, which is a classic, and highly successful, children’s picture-book. Only once (in a radio interview with Margaret Throsby, ABC Classic FM) have I heard the issue raised as to why we progress from books with pictures to books without pictures, and it was raised by Margaret, from memory, not the interviewee, whom I’ve since forgotten.

I can still remember the first stories that entranced me, before I could read myself, and I believe it was the pictures, more than the words, that engaged me. (I also started drawing my own pictures from a very early age.) We had a series of classic fairy tales, in a comic book style, but with almost photographic-style coloured images, not cartoonish at all. But some of them I got my mother to read over and over again, though I used to look at the pictures while she read them.

Of course, when I was older, in the days before TV, I listened to radio serials and read comic books, which are closer to film than literature. Unlike other children, I would read the same comic over and over until I was well and truly sated with it. I liked the experience so much I would repeat it until it no longer engaged me.

But at some point, we make the transition to books without pictures, and we actually prefer them, because, for some reason, they engage us more. And I believe the reason is twofold. Firstly, we get inside the heads of the characters (via viewpoint as I mentioned earlier) in a way that can’t happen with comics, or even movies, and that is why books of fiction are not yet dead. Secondly, we eventually reach a point where it is just as satisfying to create our own pictures in our own mind, which, of course, we do without conscious effort. But it is like we learn to ‘translate’ words into the pictures of story, via our capacious and almost preternatural imaginations.

I made the point earlier, and in another post, that without the facility of imagination, no one would even be able to appreciate a story, let alone write one. But there is more: without the innate, indeed, prime-natural ability we have for imagery, cinema would have killed the novel a century ago (as I alluded to above). I’ve speculated previously that without language we would think in imagery. My evidence is dreams. We all dream in imagery and metaphor, and that is why stories are so easy for us. The language of story is the language of dreams.


* ‘Premise’, I’ve found in American texts on fiction, is often confounded with ‘theme’, even though dictionary definitions clearly delineate them. Premise is the foundation or starting point, whether in the context of an argument or a story. Theme is a recurring motif, originally applied to music, but, in the context of a story, it can be a message or a moral or an allegory or just an idea. The premise and the theme of a story can be the same, but mostly they’re not, and some stories don’t even have a substantial theme. But God help the reader of a story that doesn’t have a premise.

Wednesday 8 July 2009

Quantum Mechanical Philosophy

Following on from my last post, Subjectivity: The Mind’s I (Part 1), I read Paul Davies’ Other Worlds, for a couple of reasons. One, Hofstadter mentioned it in his ‘reflections’ that I referred to in that post, plus, I had recently come across it and already put it aside with the intention of re-reading it anyway. Also the subject of the post led me to contemplate the philosophical ramifications of quantum mechanics (hence the title), and Davies’ book was a good place to start.

As it turned out, I hadn’t read it, even though I’ve owned it for over 20 years, and I was confusing it with another one of his books, The Ghost in the Atom, which was a compilation of BBC interviews transcribed and published in the same decade (1980s). So, logically, I read them both.

Both of these were published in England before Davies came to Australia, where he wrote a string of books on philosophy and science: The Cosmic Blueprint (about chaos theory), The Mind of God (about cosmology), God and the New Physics (much the same territory as Dawkins’ The God Delusion, only written 20 years earlier, but with more depth in my view and a different emphasis), About Time (about time in every respect), The Origin of Life (about microbiology), and these are just the ones I’ve read. He now lives in America, working as an astrobiologist at Arizona State University, and has since published The Goldilocks Enigma (about John Wheeler’s conjecture that the universe effectively exists as a cosmological-scale, quantum-phenomenal loop, and to whom Davies dedicated the book). This is arguably his best book, philosophically, because it entails a lifetime’s contemplation on science, epistemology and ontology.

At the top of my blog, I have inscribed a little aphorism, which some may see as a definition for philosophy, but I see as a criterion. If you want a definition, I refer you to an earlier post: What is philosophy? (March 08) To quote: ‘In a nutshell, philosophy is a point of view supported by rational argument. A corollary to this is that it requires argument to practice philosophy.’ But in reference to my criterion, as well as my definition, Davies fulfils both of them admirably. It is impossible to read Davies without challenging your deepest held beliefs, especially about reality, the universe and our place in it. No, he’s not a science ‘heretic’, far from it: he just writes very well on difficult subjects about which he has a lot of knowledge.

The Ghost in the Atom (1986) has 2 authors credited: J. Brown and P.C.W. Davies. Brown was ‘Radio Producer in the BBC Science Unit, London’, whereas Davies was ‘Professor of Theoretical Physics [at] the University of Newcastle upon Tyne’. The book is a collection of radio interview transcripts (edited) of some very big names in physics: Alain Aspect, John Bell, John Wheeler, David Deutsch, David Bohm; and these are just the ones I’d heard of. It also includes Rudolf Peierls, John Taylor and Basil Hiley, whom I hadn’t heard of. The interviews have been edited, but I get the impression from the book’s Foreword that the text may actually contain more material than was originally put to air. I assume Davies was the interviewer in the programme, and he tended to play devil’s advocate to whomever he engaged. The book is a treasure, if for no other reason than some of these great minds are no longer with us.

In my encounters on the blogosphere, I’ve come across more than a few people who seem to think that philosophy has largely been overtaken by science, and that any distinction is, at best, academic and, at worst, irrelevant. But there are fundamental differences, as I recently pointed out in a comment on another post, The Mirror Paradox (July 08): science often deals in right and wrong answers, whereas philosophy often does not.

In a more recent post (Nature’s Layers of Reality) I made the point that ‘quantum mechanics is where science and philosophy collide, and philosophy is still all at sea.’ Quantum mechanics is arguably the most empirically successful meta-theory ever, so it’s been inordinately successful as a sieve for right and wrong answers. But philosophically it conjures up more questions than answers. (Davies makes the exact same point, albeit with more authority, in The Goldilocks Enigma.)

In The Ghost in the Atom, Rudolf Peierls argues for the traditional Copenhagen interpretation, largely formulated and promoted by Niels Bohr, and, right at the start, Peierls bridles at the word ‘interpretation’, because, as far as he is concerned, there are no alternative ‘interpretations’. He also baulks at the word ‘reality’, at least in the context of the discussion. To him, physics can only give a description, and, in the case of quantum mechanics, ‘reality’ is a misnomer.

In each of the interviews, the discussion tended to centre on John Bell’s famous theorem and Alain Aspect’s consequent experiment, which showed that Bell’s ‘inequality’, as it is called, is violated, just as quantum mechanics predicts. This originally arose from a famous thought experiment proposed by Einstein and elaborated on by Podolsky and Rosen, so it became known as the Einstein-Podolsky-Rosen or EPR experiment. It examines the purported ‘action-at-a-distance’ phenomenon predicted by quantum physics for certain traits of particles or photons.

If you measure the trait of one of a pair of particles (of common origin), you instantaneously get the correlated result of its complementary partner, even though you couldn’t possibly know either result beforehand. In a very truncated nutshell, quantum physics says you won’t know what state either particle is in until you observe one of them or take a measurement of it, which will automatically affect the other particle, even if it’s on the other side of the universe. Einstein originally formulated this in a thought experiment to prove Bohr wrong, because, according to his own (proven) theory of special relativity, it should be impossible. John Bell worked out a mathematical theorem that would prove Einstein right or wrong, depending on a number of correlated outcomes. Alain Aspect then created a real experiment to test Bell’s theorem (made the thought experiment actually happen), which ultimately proved Einstein wrong and quantum theory correct. (This was long after Einstein had died, by the way.)
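
To put some standard textbook notation to this (my own illustrative sketch, not anything quoted from the book): a pair of spin-half particles of common origin can be prepared in the ‘singlet’ state,

|\psi\rangle = \frac{1}{\sqrt{2}}\left( |\uparrow\rangle_A |\downarrow\rangle_B - |\downarrow\rangle_A |\uparrow\rangle_B \right)

Measuring particle A along any chosen axis gives ‘up’ or ‘down’ completely at random, but the moment that result is in, particle B is guaranteed to give the opposite result along the same axis, however far away it is. Because A’s individual result is random, the correlation can’t be exploited to send a message, which is why relativity survives in practice even while our intuition doesn’t.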

The various physicists, interviewed by Davies in The Ghost in the Atom, explained the outcome of Aspect’s experiment based on their (philosophical) interpretation of quantum physics. No one disputed the actual results.

John Wheeler, who was a protégé of Bohr’s, also defended the Copenhagen interpretation, but there was a subtle difference from Peierls’s interpretation as to what constituted an observation or a measurement. For Wheeler, the quantum ‘wave packet’ collapsed (into one state or another) when, for example, a photon changed the chemical composition of a film or set off a Geiger counter or a photomultiplier. But Peierls took Eugene Wigner’s extreme interpretation that the ‘collapse’ only occurs when the result is observed by a conscious observer. For Wigner, consciousness was intrinsically involved in forming ‘reality’, although Peierls argued that we can’t talk about ‘reality’ in this context, which was how he side-stepped the obvious conundrum this view poses (it verges on solipsism).

Wheeler took the more accepted or conventional view that quantum phenomena become ‘real’ when they interact with a ‘macro’ object. But Wheeler acknowledged that the choice of apparatus, or the preparation of the experiment affected the outcome. He argued that even if you made a ‘delayed-choice’ of what to measure, you would still get a quantum mechanical outcome. For example, in the famous Young double-slit experiment, if you measure or observe what goes through each slit, you won’t get the double slit interference that is observable when you choose not to ‘observe’ individual slits. Wheeler conjectured that this would still occur even if you made the measurement or observation after the photon or particle had traversed the slits, and he has since been proven correct. In other words, Wheeler is saying that you effectively create a causal effect backwards in time, quantum mechanically. But Wheeler goes further and conjectures that this would even happen on a cosmological scale if, instead of using 2 slits, you used a galaxy lensing light from a distant quasar to create interference or not. This is theoretically possible, if technologically impossible to confirm (at this point in time). It must be pointed out that this phenomenon does not allow communication backwards in time, so paradoxes of the sort that we often see in science fiction would not be possible, but it’s still very counter-intuitive to say the least.
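
The difference Wheeler is describing can be put in standard notation (again, my own gloss rather than anything from the interviews). If \psi_1(x) and \psi_2(x) are the amplitudes for a photon reaching point x on the screen via slit 1 and slit 2 respectively, then with no which-path observation the probability is

P(x) = |\psi_1(x) + \psi_2(x)|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}\left[\psi_1^{*}\psi_2\right]

and the cross term is the interference pattern. Observing which slit the photon went through (even retrospectively, in the delayed-choice version) leaves only

P(x) = |\psi_1(x)|^2 + |\psi_2(x)|^2

with no interference.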

David Deutsch defended the ‘many-worlds’ interpretation, originally proposed by Hugh Everett, which I referenced (via Hofstadter) in my last post. Deutsch’s interpretation is subtly different to Everett’s (in fact, many of the interviewees revealed the subtle variations that exist within this field) in that the worlds don’t bifurcate but are already in existence – not a huge step if there are an infinite number of them. But Deutsch did introduce a novel idea: the parallel universes not only diverge but can also ‘fuse’, which is how he explained the interference.

In The Goldilocks Enigma (published 20 years later), Davies makes the observation that whilst the ‘multiverse’ started off as a quantum mechanical interpretation, it is now very popular amongst cosmologists in conjunction with the ‘anthropic principle’. Both Martin Rees (Just Six Numbers) and Richard Dawkins (The God Delusion) appropriate it to explain our peculiarly privileged existence in the overall scheme of things. Not just our existence, but the existence of life in general.

The most interesting interviewee, from my perspective, both now and when I originally read the book about 20 years ago, is David Bohm. I’m a great fan of Bohm’s, if for no other reason than he defied McCarthy, even though it meant that he spent the rest of his life in England. He wrote a book on philosophy in his later years (but prior to this interview) called Wholeness and the Implicate Order, which I’ve read. Bohm is a great mind but not a great writer, which is unfortunate for laypeople like me. The advantage of the interview is that someone as knowledgeable and astute as Davies can draw out the ideas, and the elaboration of the ideas, that you long to comprehend. But it helps, in this case, if one understands the implications of the Bell inequality.

The Bell inequality can be distilled into the mandatory abandonment of one of two highly cherished and long-assumed ideas: objective reality or locality. Objective reality requires no explanation. But abandoning locality means accepting some non-local influence (usually short-handed as non-locality) that acts instantaneously, effectively faster than light, which appears to breach Einstein’s special theory of relativity, even though it can’t be used to send a signal. Hence the reason that Einstein originally created the thought experiment which led to Bell’s theorem.
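
For anyone who wants the inequality itself, Aspect tested what is usually written as the CHSH form of Bell’s theorem (a standard textbook statement, not something taken from the book). For measurement settings a, a' on one particle and b, b' on the other, with E denoting the correlation of the paired results, any local hidden-variable theory must satisfy

|E(a,b) - E(a,b') + E(a',b) + E(a',b')| \leq 2

whereas quantum mechanics predicts values up to 2\sqrt{2} \approx 2.83 for suitably chosen settings, and Aspect’s photons duly exceeded 2.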

In ordinary parlance, non-locality refers to an unseen and undetectable connection between 2 objects separated in space and time, which is a more intuitive concept to grasp. Einstein called it ‘ghostly action-at-a-distance’, which provides the title of the book.

In effect, Bohm is willing to entertain the possibility of non-locality in order to hang on to objective reality. He calls it a ‘hidden variables’ theory, but it is also known as the ‘quantum potential’ theory. The labels are unimportant; it’s the ideas he has behind them that I believe are worth pursuing. Like Bohm, I find it the easiest interpretation to live with, philosophically.

To quote David Deutsch, when he was discussing David Bohm’s interpretation with Davies: ‘A non-local hidden variable theory means, in ordinary language, a theory in which influences propagate across space and time without passing through the space in between.’

I couldn’t have expressed it better myself, and neither, I suspect, could Bohm.

Basically, Bohm is saying that there is something hidden underneath that we have not uncovered, which is why he uses the term ‘implicate order’. He gives the analogy of folding up a piece of paper and drawing lines on it; then, when you unfold it, you get a pattern. In quantum phenomena we see the pattern but not the ‘order’ underneath. My own interpretation is that quantum phenomena may be the surface effects of a hidden dimension (or multiple hidden dimensions). Instead of many ‘hidden’ universes perhaps there are ‘hidden’ dimensions, but I suspect you would really only need one.

If you take a box and unfold it into 2 dimensions, you get a cross. If you go off the end of one of the branches of the cross in 2 dimensions, you would end up on the opposite branch if you were in 3 dimensions. An extra dimension allows you to ‘cut’ through space and time. Bohm even entertains the heresy of heresies that backward communication may be possible (within limitations). Bohm doesn’t discuss extra dimensions; it’s just my mind trying to come up with a ‘physical’ interpretation that would allow both non-locality and objective reality (I’m really not familiar enough with the physics to conjecture further).

The last person interviewed in the book is Basil Hiley, who worked with Bohm on the ‘quantum potential’ theory. He has come up with a mathematical interpretation using Schrödinger’s equation, in conjunction with a ‘quantum potential’ that allows non-locality (a sort of ‘absolute space-time [like] a quantum aether’, to use his own description) and, to quote: ‘[from] the statistical results of typical quantum experiments you find that they are still Lorentz invariant’ (they obey Einstein’s relativity theory).

When Davies quizzed Hiley about Heisenberg’s Uncertainty Principle, Hiley attempted to explain it as a thermodynamic statistical effect. Davies then said, if that was the case, you wouldn’t need Planck’s constant, and Hiley said: ‘To me the value of Planck’s constant is not really relevant to quantum mechanics’, which is an extraordinary statement considering Planck’s constant is what initiated quantum theory in the first place. But to be fair to Hiley, he acknowledges this and makes the point that many people believe that if you brought Planck’s constant to zero you would get classical physics, but he asserts ‘nothing could be further from the truth.’

The value of Planck’s constant has always intrigued me: it places a limit on our ability to perceive the world. It also explains (to me) why quantum effects are scale dependent, though many people claim they are not, and mathematically that is true. But perhaps, and this is a big perhaps from someone as ignorant as me, Planck’s constant determines the hidden dimension, if there is one. This is pure speculation and obviously incorrect; otherwise, I’m sure, someone would have explored it well before now.
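
One standard way of seeing the scale dependence (a textbook illustration, quite separate from my speculation above) is the de Broglie wavelength,

\lambda = \frac{h}{p} = \frac{h}{mv}

For an electron moving at 10^6 m/s, \lambda is roughly 7 \times 10^{-10} m, about the size of an atom, so wave effects are unavoidable; for a 1 kg ball at 1 m/s, \lambda is about 6.6 \times 10^{-34} m, unimaginably smaller than the ball itself, so no interference could ever be observed. In that sense it is the sheer smallness of Planck’s constant that keeps quantum behaviour confined to the micro-world.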


Addendum: There is a detailed discussion on this topic in the March 2009 issue of Scientific American. The online version can be found here, where the article has attracted 152 comments to date.