Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Sunday 16 January 2011

Cycles of Time – a new theory of cosmology

Cycles of Time, subtitled An Extraordinary New View of the Universe, is a very recent book by Roger Penrose; so recent that I pre-ordered it. Anyone who has followed my blog over the last few years will know that I’m a big fan of Penrose. Along with Paul Davies and Richard Feynman, I think he’s one of the top physics writers for laypeople ever. John Gribbin and James Gleick are also very good but not quite in the same league in my opinion. Davies, Feynman and Penrose all have different strengths so comparisons are not entirely fair. Feynman was the great communicator of some of the most esoteric theories in physics, and if you want to grasp the physics, he’s the best. Davies is, in my view, the best philosophical writer and covers the widest field: astrophysics, the origin of life, cosmology, chaos theory, the nature of time and, in The Goldilocks Enigma, the meaning of life, the universe and everything.

Penrose is actually a mathematician and made significant contributions to tessellation (tiles, map boundaries etc), but he’s also won at least one award in physics (1988 Wolf Prize jointly with Stephen Hawking) and his dissertations on the subject of consciousness reveal him as an erudite and compelling polymath.

My favourite book of his is The Emperor’s New Mind (1989), where he first tackled the subject of consciousness and challenged the prevailing view that Artificial Intelligence would usher in a new consciousness equivalent to or better than our own. But the book also covers almost the entire field of physics, argues cogently for a Platonic view of mathematics, explains the role of entropy on a cosmic scale, and devotes an entire chapter to the contingent nature of ‘truth’ in science. A must-read for anyone who thinks we know everything or are on the verge of knowing everything.

Now I’m the first to admit that I can quickly get out of my depth on this topic, and I can’t defend all the arguments that Penrose delivers, because, quite frankly, I don’t understand all the physics that lies behind them. But he’s one of the few people, with the relevant intellectual credentials, who can challenge the prevailing view on our universe’s origins and not lose credibility in the process.

For a start, reading this book makes one realise how little we do know and how speculative some of our theories are. Many commentators treat theoreticians who challenge string theory, and its latest incarnation, M theory, as modern-day luddites, which is entirely unfair considering that string theory has no experimental or observational successes to its name. In other words, it’s a work of mathematical genius that may or may not reflect reality. Penrose’s CCC (Conformal Cyclic Cosmology) is also a mathematically consistent theory with no empirical evidence to either confirm or deny it. (Penrose does, however, suggest avenues of enquiry to rectify that.)

I first came across CCC in a book, On Space and Time (2008), a collection of ‘essays’ by people like Alain Connes, Shahn Majid, Andrew Taylor and of course Sir Roger Penrose. It also included John Polkinghorne and Michael Heller to provide a theological perspective. Personally, I think it would have been a better book if it had stuck to the physics, because I don’t think metaphysical philosophies are any help in understanding cosmology, even though one could argue that mathematical Platonism is a metaphysical philosophy. I don’t mind that people want to reconcile scientific knowledge with their personal religious beliefs, but it’s misleading to imply that religion can inform science. And science can only inform religion if one conscientiously rejects all the mythology that religions seem to attract and generate. Putting that personal caveat aside, I can highly recommend this book, edited by Shahn Majid, for an overview of current thinking on cosmology and all the mysteries that this topic entails. This is true frontier-science and that perspective should never be lost in any such discussion.

Getting back to Penrose, his latest book tackles cosmology on the grandest scale, from the universe’s Big Bang to its inevitable demise. Along the way he challenges the accepted wisdom of inflation, amongst other prevailing ideas. He commences with a detailed description of entropy because it lies at the heart of the conundrum as he sees it. It’s entropy that makes the Big Bang so very special, and he spends almost half the book expounding why.

Penrose describes specific aspects of time that I referred to in a post last year (The enigma we call time, July 2010). He gives the same example I did of an egg falling off a table demonstrating the inherent relationship between entropy (the 2nd law of thermodynamics) and the arrow of time we are all familiar with. He even cites a film running backwards showing an egg reconstituting itself and rising from the floor as an example of time reversal and a violation of the 2nd law of thermodynamics acting simultaneously, just as I did. He also explains how time doesn’t exist without mass, because for photons (light rays), which are massless, time is always zero.
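
For the record, here’s the standard special-relativity sketch behind that last claim (my notation, not Penrose’s):

```latex
% Proper time (the time a clock carries along its own worldline),
% from the Minkowski line element:
d\tau^2 = dt^2 - \frac{dx^2 + dy^2 + dz^2}{c^2} = dt^2\left(1 - \frac{v^2}{c^2}\right)
% For a photon v = c, so d\tau = 0: no proper time elapses along a light ray,
% which is the sense in which 'time is always zero' for massless particles.
```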

The prevailing view, according to almost everything I read on this subject via science magazines, is that we live in a multiverse where universes pop out like exploding bubbles, of which the Big Bang and its consequent ‘inflation’ was just one. In the Christmas/New Year edition of New Scientist (25 December 2010/1 January 2011, p.9) there is an article that claims we may have ‘evidence’ of ‘bruising’ in the CMB (Cosmic Microwave Background) resulting from ‘collisions’ with other universes. (The cosmic background radiation was predicted by the Big Bang and discovered purely by accident, which makes it the best evidence we have that our universe did indeed begin with the Big Bang.)

Some people also believe there is an asymmetry to the universe, implying there is an ‘axis’, which would be consistent with us being ‘joined’ to a ‘neighbouring universe’. But be careful with all these speculative scenarios fed by inexplicable and potentially paradigm-changing observations – they just confirm how little we really know.

The multiverse in conjunction with the ‘anthropic principle’ appears to be the most widely accepted explanation for the how, why and wherewithal of our hard-to-believe existence. If we live in possibly the only habitable universe of an infinite number, then naturally it is the only universe we have knowledge of. If all the other universes, or almost all, are uninhabitable then no one will ever observe them. Ergo we observe this universe because it’s the one that produced life, of which we are the ultimate example.

Paul Davies, in The Goldilocks Enigma, spends a page and a half discussing both the virtues and pitfalls of the multiverse proposition. In particular, he discusses what he calls ‘...the extreme multiverse model proposed by Max Tegmark in which all possible worlds of any description really exist…’ In other words, whatever mathematics allows can exist. Quoting Davies again: ‘The advantage of the extreme multiverse is that it explains everything because it contains everything.’ However, as he also points out, because it explains everything it virtually explains nothing. As someone else, a theologian (I can’t remember who), once pointed out, in a discussion with Richard Dawkins, it’s no more helpful than a ‘God-of-the-gaps’ argument, which also explains everything and therefore ultimately explains nothing.

Stephen Hawking has also come out with a new book, written with Leonard Mlodinow and titled The Grand Design, which I haven’t read, though I’ve read reviews of it, in particular in Scientific American. Someone in America (Dale, who has a blog, Faith in Honest Doubt) put me onto a radio podcast by some guys who go by the name Reasonable Doubts, and who ran a 3-part series on Buddhism. At the end of one of their programmes they took Hawking to task for making what they saw as the absurd claim that the universe could be ‘something from nothing’.

I left a comment on their blog that this was not a new idea:

I'm not sure why you got in a tizz about Hawking's position, though I haven't read his latest book, but I read an editorial comment in Scientific American under the heading, Hawking vs God. The idea that the universe could be 'something from nothing' is not new. Paul Davies discussed it over 20 years ago in God and the New Physics (1983) in a chapter titled: Is the universe a free lunch? He says almost exactly what Hawking is credited with saying (according to Scientific American): the universe (according to the 'free lunch' scenario) can account for itself; the only things that are unaccountable are the laws of nature that apparently brought it about. Davies quotes physicist Alan Guth: "It's often said that there is no such thing as a free lunch. The universe, however, is a free lunch."

Davies, Hawking and Penrose are not loonies – they are all highly respected physicists. We’ve learned from Einstein and Bohr that nature doesn’t obey rules according to our common sense view of the world, and, arguably, the universe’s origin is the greatest of all unsolved mysteries. Why is there something instead of nothing? And is there any reason to assume that there wasn’t nothing before we had something?

What, you may ask, has any of this to do with Penrose’s CCC theory? It’s just a detour to synoptically describe the intellectual landscape that his theory inhabits.

As I alluded to earlier, Penrose focuses on the biggest conundrum in the universe, namely entropy, and how it makes the Big Bang so ultra-ultra special. Few discussions I’ve read on cosmology even mention the role of entropy, yet it drives the entire universe’s evolution. Paul Davies doesn’t shy away from it in God and the New Physics, but otherwise, in my reading experience, only Penrose puts it centre stage.

Both Davies and Penrose discuss it in terms of ‘phase space’, which is really hard to explain and really hard to envisage without thinking about multi-dimensional space. But effectively the equation for entropy is the logarithm of a volume of phase space multiplied by Boltzmann’s constant: S = k log(V). The use of a logarithm means that when the phase-space volumes of independent systems multiply, their entropies simply add, which allows one to compare entropies in a dynamic system. Significantly, one can only ‘take away’ entropy by adding it to somewhere else that’s external to the ‘closed’ environment one is studying. The most obvious example is a refrigerator that keeps cold by dumping heat externally to the ambient air in a room (the fridge loses entropy by adding it externally). As Penrose points out, the only reason the Sun’s energy is ‘useful’ to us is because it’s a ‘locally’ hot spot in an otherwise cold space. If it were in thermal equilibrium with its environment it would be useless to Earth. ‘Work’ can only be done when there is an imbalance in energy (usually temperature) between a system and its environment.
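
In symbols, a minimal sketch of why the logarithm matters (using the same notation as above):

```latex
% Boltzmann entropy, with V the phase-space volume and k Boltzmann's constant:
S = k \log V
% For two independent systems the phase-space volumes multiply,
% so the logarithm turns that multiplication into simple addition of entropies:
S_{1+2} = k \log(V_1 V_2) = k \log V_1 + k \log V_2 = S_1 + S_2
```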

But more significantly, to decrease the entropy in a ‘closed’ system (like a refrigerator or Earth) there must be an increase in entropy externally. So ultimately the entire universe’s entropy must always be increasing. The corollary to that is that the universe must have started with a very small entropy indeed, and that is what makes the Big Bang so very special. In fact Penrose calculates the ultimate phase space volume of the entire universe as e raised to the power of 10 raised to the power of 123, e^(10^123), or, if it’s easier to comprehend, roughly 10^(10^123): that’s a 1 followed by 10^123 noughts, where 10^123 is itself a 1 followed by 123 noughts. To reverse this calculation, it means that the precision of the Big Bang needed to create the universe that we live in is one part in 10^(10^123), or 10^(-10^123): a decimal point followed by roughly 10^123 noughts and then a 1.
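
In cleaner notation (my transcription of the figures above; at numbers this size the base of the exponent hardly matters):

```latex
% Penrose's total phase-space volume, and the fine-tuning it implies:
V \sim e^{10^{123}} \approx 10^{10^{123}},
\qquad
\text{precision of the Big Bang} \sim \frac{1}{10^{10^{123}}} = 10^{-10^{123}}
```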

Penrose takes the universe in its current state and extrapolates it back to its near-origin at the so-called inflationary stage, between 10^-35 and 10^-32 seconds from its birth. He also extrapolates it into its distant future, making some assumptions, and finds that the two states are ‘conformally’ equivalent. One of his key assumptions is that the universe is inherently hyperbolic, with a small but positive cosmological constant. This means that the universe will always expand and never collapse back onto itself. Penrose provides good arguments, which I won’t attempt to replicate here, that a ‘Big Bounce’ scenario could not produce the necessary entropic precision that we appear to need for the Big Bang. In other words, it would be a violation of the 2nd law of thermodynamics.

Penrose assumes that the far-future universe will consist almost entirely of black holes, like those that exist at the centres of all known galaxies. As these black holes become ‘hotter’ than the space that surrounds them, they will evaporate through Hawking radiation, so that eventually the entire universe will be radiation in the form of electromagnetic waves and gravitons. Significantly, there will be virtually no mass and therefore no clocks, and, from what I can understand, that’s what makes the universe conformal. It will have a ‘conformal boundary’. Penrose’s bold hypothesis is that this conformal boundary will become the conformal boundary that we envisage at the end of the inflationary period of our universe. Hence the death of one universe becomes the birth of the next.
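
For what it’s worth, the mathematical device behind the word ‘conformal’, as I understand it, is a rescaling of the spacetime metric that preserves angles and light cones but not distances:

```latex
% A conformal rescaling of the metric, with \Omega a smooth positive function:
\hat{g}_{ab} = \Omega^2 \, g_{ab}
% Null (light-ray) directions are unchanged, so massless physics carries over;
% in CCC the remote future of one aeon is conformally 'squashed' and the big bang
% of the next conformally 'stretched', so the two boundaries can be joined smoothly.
```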

What of the conundrum of the 2nd law of thermodynamics? Penrose spends considerable time discussing whether or not information is lost in black holes, which is a contentious point. Hawking once argued that information was lost, but now argues otherwise. Penrose thinks he should have stuck to his guns. Many scientists believe it’s a serious flaw in cosmological thinking to consider that information could be lost in black holes. Many scientists and philosophers argue that ‘everything’ is information, including us. There’s an argument that teleportation is theoretically achievable, even on a macro scale, because everything is just information at base. I’ve never been convinced of that premise, but leaving that aside, I think that information could be lost in black holes and so does Penrose. If this is true then all information regarding our universe will no longer exist after all the black holes evaporate, and, arguably, entropy will be reset, along with time. I’ve simplified this part of Penrose’s treatise, so I may not be doing him justice, but I know that the loss of information through multiple black hole evaporation is crucial to his theory.

When I first came across this thesis in On Space and Time, I admit that it appealed to me philosophically. The idea that the end of the universe could be mathematically and physically equivalent to its beginning, and therefore could recycle endlessly, is an intellectually attractive one. Nature is full of beginnings and endings on all sorts of scales; why not on the cosmological scale? Infinity is the scariest concept there is if you think about it seriously – the alternative is oblivion, nihilism effectively. We have a life of finite length that we are only aware of while we are living it, yet we know that existence goes on before we arrive and after we’re gone. Why should it be any different for the universe itself?

I admit I don’t understand all the physics, and there still seems to be the issue of going from a cold universe of maximum entropy to a hot universe of minimum entropy, yet Penrose seems to believe that his ‘conformal boundary’ at both ends allows for that eventuality.

Saturday 8 January 2011

It's women who choose, not men

Not so recently, I told someone I had a blog called Journeyman Philosopher and they couldn’t stop themselves from laughing. I said, ‘Yes, it is a bit wankerish.’ Especially for an Aussie. But I’m not and never will be the real thing – a philosopher, that is – yet I practice philosophy, by attempting to emulate the credo I have inscribed at the top. The truth is that none of us who value knowledge for its own sake ever stop learning, and I’ve made it a lifetime passion. This blog does little more than pass on and share, and occasionally provide insights. But I also attempt to provoke thought, and if I should ever fail at that then I should call it quits.

So this post is one of those thought-provoking ones, because it challenges centuries of culturally accepted norms. I’m a single bloke who’s never married, so I’m hardly an expert on relationships, but this is a philosophical position on relationships garnered both from experience and observation.

Recently, I took part in a discussion on Eli’s blog, Rustbelt Philosophy, in which I cited Nietzsche’s observation in Beyond Good and Evil that most people take a philosophical position on visceral grounds and then rationalise it with an argument. As I commented on Eli’s blog, I think this is especially true of religious arguments, but it has wider applications as well. The more we invest in a theory (for example), the less likely we are to reject it, even in the face of conflicting evidence.

I’m currently reading Roger Penrose’s latest book, Cycles of Time (to be the subject of a future post) and he readily acknowledges his personal prejudices in outlining his iconoclastic theory for the origins of our universe. The point I’m making, and its relevance to this post, is that I too have prejudices that shape my views on this topic.

In the last decade or 2 there has been a strong and popular resurgence of interest in Jane Austen’s novels (through film and TV), which indicates they have strong universal themes. Jane Austen suffered from the prejudices of her day, when women were not supposed to earn money, and the class structure in which she lived precluded intelligent women, like herself, from attaining fulfilling lives. Everything depended on them marrying the right bloke, or, more clinically, marrying into the right family. I have to say that I’ve seen examples of that narrow-minded thinking even in my own lifetime. Austen had her novels published through a male intermediary, and on her grave there is no mention that she was an author, because it was considered a slight for a woman to admit she had a profession.

But the theme of every Austen novel that I’ve seen (I haven’t read any of them) is that the woman finds the right bloke despite the obstacles that her society puts in her way. And the right bloke is the one who demonstrates that he’s a genuine friend and not someone who is playing the social game according to the rules of their society. Austen was an iconoclast in her own right, and the fact that her stories still ring true today indicates that she was revealing a universal truth.

Somewhere in my childhood I realised that women are in fact the stronger sex, and that, whilst men can’t live without women, women can live without us. But this is only one reason that I believe women should do the choosing and not the men. The mechanics of courtship also indicate that it is the woman who chooses, even though the bloke thinks it’s him. I remember seeing a documentary on speed-dating once, and the facilitator made exactly the same observation. Personally, I wasn’t surprised.

In many respects, I think the best analogy in the animal kingdom is with birds. The male really just wants to have sex, so what does he do? He sings or he flashes colourful plumage or he performs a dance or he builds a bower, and then the female chooses the one she thinks is best, not the other way round. Now, this is an analogy, but I think it applies to humans just as well. Whilst it is the woman who might arguably wear the plumage, she does the selecting, and it is the men who perform. We show off our wit and conversation, we drive flash cars and buy big houses and use whatever talents we may have to impress. I read somewhere recently (Scientific American Mind) that in mixed company it is the men who tell the jokes and the women who do the laughing.

So my argument is that we woo but women select. I believe this is the natural order and centuries of cultural, religious and political control have attempted to overturn it. All our institutions have been patriarchal and marriage is arguably the most patriarchal of them all.

And this is where my argument reflects the sentiment expressed by Nietzsche, because I have a rational justification to support my intuitively-premised prejudice. It is the woman who has most to lose in a relationship because she’s the one who gets pregnant. So what I’m arguing is that it should be her choice all the way down the line. It is the woman who should determine the parameters and limits of a relationship. It is she who should decide how intimate it should become and whether marriage is an option, not the bloke. I would even argue that men cope with rejection better than women. Our sex drive is like a tap, easy to turn on, not so easy to turn off, but that’s what masturbation is for.

Anyone who has read my book, Elvene, will recognise a feminist theme that pretty well reflects the philosophy I’ve outlined above. It wasn’t intentional, and it was only afterwards that I realised that I had encapsulated that theme into my writing. Considering it’s set in the future, not the past, it has little in common with Jane Austen. As one of its reviewers pointed out, the book also deals with relationship issues like respect, honesty and generosity of spirit.

In essence, I think the patriarchal cultural mores that we’ve had for centuries are not only past their use-by date, but are in conflict with the natural order for human relationships. Our societies would be a lot more psychologically healthy if that were acknowledged.

Sunday 2 January 2011

The Number Devil by Hans Magnus Enzensberger

It’s been a while since I’ve written anything really meaty on my blog and an entire year since I last wrote a post that reviewed a book on mathematics.

But what I really like about this particular post is that it connects the local to the global. It arose from a Christmas drink that I had with my neighbour across the road, Sarah, who lent me a book that she never lends, on the proviso I write it up on my blog. So from my neighbour, who literally lives directly opposite me with her 2 sons, Andre and Emelio, to the blogosphere.

Over a bottle of Aussie red (Barossa Valley Shiraz 2008) – yes that’s worth mentioning because we both agreed that it was a bloody good drop (literally and figuratively) – we somehow got into a discussion on mathematics and the teaching of mathematics in particular, which led us to swapping books the next day.

On Christmas Day 2009, I published a post on The Bedside Book of Algebra (Michael Willers), which is the book I swapped with Sarah. The Number Devil: A Mathematical Adventure covers some of the same material, but it’s aimed at a younger audience and it has a different approach. The whole purpose of this book is to reveal to young people that mathematics is a world worth exploring and not just a sadistic intellectual exercise designed by teachers to torment young developing minds. Sarah’s book has 2 bookmarks in it: one for her and one for her 7-year-old son; and her son’s bookmark is further advanced than hers.

It is written in novel form and the premise of the narrative is very simple: the protagonist, Robert, is having tormenting dreams when he is visited by a devil, who calls himself the ‘Number Devil’ and begins to give him lessons in mathematics. It’s extremely clever, because it’s engaging and contains entertaining and informative illustrations, as well as providing exposition on some of the more esoteric mathematical concepts like infinity, transfinite numbers, combinations and permutations, Pascal’s triangle, Fibonacci numbers, prime numbers and Goldbach’s conjecture.

Whilst Enzensberger reveals the relationship between Pascal’s triangle and Fibonacci numbers, he doesn’t explain the relationship between Pascal’s triangle and the binomial theorem, which I learned in high school. He also explains the relationship between Pascal’s triangle and the combination algorithm, but not the way I learned it, which I think is more intuitive and useful. He uses diagonals (within Pascal’s triangle) whereas I learned it by using the rows, as sketched below.
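
For anyone curious, here’s a minimal sketch (mine, not the book’s) of the row-based relationships: row n of Pascal’s triangle gives the binomial coefficients of (a + b)^n, the same entries count combinations (‘n choose k’), and the sums along the shallow diagonals give the Fibonacci numbers.

```python
# A sketch (not from the book) of the relationships mentioned above.

def pascal_rows(n):
    """Return the first n rows of Pascal's triangle."""
    rows = [[1]]
    for _ in range(n - 1):
        prev = rows[-1]
        rows.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
    return rows

rows = pascal_rows(10)

# Row n expands (a + b)^n: row 4 is 1, 4, 6, 4, 1,
# i.e. (a + b)^4 = a^4 + 4a^3b + 6a^2b^2 + 4ab^3 + b^4.
print(rows[4])        # [1, 4, 6, 4, 1]

# The same entry, read by row and position, is the combination count:
# C(4, 2) = 6 ways of choosing 2 items from 4.
print(rows[4][2])     # 6

# Sums along the shallow diagonals give the Fibonacci numbers.
fibs = [sum(rows[n - k][k] for k in range(n // 2 + 1)) for n in range(len(rows))]
print(fibs)           # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```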

The cleverness is that he provides these expositions without revealing to the reader how advanced these mathematical ‘lessons’ are. In fact, the reader is introduced to the ‘mysteries’ that have fascinated ‘ancients’ from many cultures across the world. Enzensberger’s inspired approach is to reveal the appeal of mathematics (that most mathematicians only find in adulthood) to young people before they are turned off it forever. He demonstrates that esoteric concepts can be taught without emphasising their esoterica.

Even the idea of a ‘number devil’ is inspired because mathematics is considered to be so devilish, and, in some cultures, mathematicians were considered to be devil’s apprentices (refer my recent post on Hypatia). In the second chapter (chapters are sequential nights of dreaming) Robert finds himself in a cave with the Number Devil, and the illustration is an obvious allusion to Plato’s cave, though no mention is made of this in the text.

At the end, the Number Devil takes Robert to ‘Number Heaven’ and ‘Number Hell’, though they appear to be the same place, where he meets some of the ‘masters’ like Russell, Fibonacci, Archimedes and a Chinese man whose name we don’t learn. We don’t meet Pythagoras who lives in a higher realm altogether, up in the clouds.

I’d recommend this book to any parent whose children show the slightest mathematical inclination and also adults who want an introduction to this esoteric world. As Sarah said, it’s like a mathematical version of Jostein Gaarder’s Sophie’s World, which is a high enough recommendation in itself.

Oh, I should mention that the illustrations are by Rotraut Susanne Berner; they augment the text perfectly.

Tuesday 21 December 2010

Ayaan Hirsi Ali

This woman should need no introduction; she’s been in the media in most Western countries, I’m sure. I thought this was a really good interview (Tue. 21 Dec. 2010) because it gives an insight into her background as well as a candid exposition of her political and philosophical views.

I haven’t read either of her biographies, but I’ve read second-hand criticism which led me to believe she was anti-Islamic. This is not entirely true, depending on how one defines Islam. To quote her own words: “I have no problem with the religious dimension of Islam.” She’s not the first Muslim I’ve come across to differentiate between religious and political Islam. Most Westerners, especially since 9/11, believe that any such distinction is artificial. I beg to differ.

She makes it very clear that she’s against the imposition of Sharia law, the subjugation of women and any form of totalitarianism premised on religious scripture (irrespective of the religion). In short, she’s a feminist. She decries the trivial arguments over dress when there are other issues of far greater import, like arranged marriages, so-called circumcision of women and honour killings. (For an intelligent debate on whether the burqa should be ‘outlawed’ I refer you to this.)

What I found remarkable, and almost unimaginable, was how violent her childhood and upbringing were. There was violence in the school, violence in the home, violence in politics. As she points out, it was so pervasive that a peaceful environment was considered unthinkable. One of the most poignant stories she tells is of when she went to Holland to seek asylum: on going to a police station to register, the policeman asked her if she would like a cup of coffee or tea. This was a revelation to her: that a man in uniform should offer a woman, a stranger and a foreigner, a cup of coffee or tea was simply mind-blowing.

It is beyond most of us to imagine a childhood where violence is the principal form of interaction and negotiation between people in all walks of life: home, education and work; yet that was her life. That she can now talk of falling in love and of writing a letter to her unborn child for a hopeful future is close to miraculous.

What resonated with me was her argument that it doesn’t take 600 years to reconcile Islam with the modern secular world, but only 4 generations. I have Muslim friends, both in America and in Australia, and they belie the belief, held by many in Western societies, that Muslims can’t assimilate and yet keep their cultural and spiritual beliefs. They demonstrate to me that Ayaan Hirsi Ali is correct in her fundamental assumptions and philosophical approach.

Saturday 11 December 2010

On-line interview for Elvene

This is a blatant promotion. Obviously the interview is totally contrived by the publisher, and if you press TOP at the end of my piece, you will get an overview of the current state of play in Oz publishing and distribution, from the perspective of one of the (minor) players.

Having said that, the questions were not vetted by me and the answers are all my own.

The term 'jack-of-all-trades' is a complete misnomer. Anyone who actually knows me, knows that I'm totally useless at all trades involving genuine dexterous skill. The rest is mostly true, though I've only written one screenplay and one novel that I'm willing to own up to.

The interview contains some of my philosophy on writing in 'nutshell' form, with the added relevance of referencing something that I've written.

Friday 3 December 2010

Hypatia


Last week I saw a movie by Alejandro Amenabar called Agora, which is effectively the story of Hypatia and her death at the hands of Christian zealots in Alexandria, towards the end of the Roman Empire, in AD 415. So the film is based on a real event and a real person, though it is a fictional account.

Amenabar also made the excellent film, The Sea Inside, starring Javier Bardem, which was also based on a real person’s life: in this case, a fictionalised account of a quadriplegic’s battle with the Church and government in Spain for the right to take his own life through euthanasia.

I first came across Hypatia in Clifford A. Pickover’s encyclopedic tome, The Math Book, subtitled From Pythagoras to the 57th Dimension, 250 Milestones in the History of Mathematics. He allots one double page (a page of brief exposition juxtaposed with a graphic or a photo) to each milestone he’s selected. He presents Hypatia as the first historically recognised woman mathematician. In fact she was a philosopher and teacher at the famous Library of Alexandria; a Greek, like her father, she practiced philosophy, science, mathematics and astronomy in the tradition of Plato and Aristotle. By all accounts, she was attractive, but never married, and, according to Pickover, once said she was ‘wedded to the truth’. The film gives a plausible account of her celibacy, when her father explains to a suitor that, in order to marry, she would have to give up her pursuit of knowledge, and that would be like a slow death for her.

The film stars Rachel Weisz in the role of Hypatia and it’s a convincing portrayal of an independent, highly intelligent woman, respected by men of political power and persuasion. The complex political scene is also well depicted with the rise of Christianity creating an escalating conflict with Jews that the waning Roman military government seems incapable of controlling.

It’s a time when the Christians are beginning to exert their newly-found political power, and their Biblical-derived authority justifies their intention to convert everyone to their cause or destroy those who oppose them. There is a scene where they drive all the Jews out of Alexandria, which they justify by citing Biblical text. The film, of course, resonates with 20th Century examples of ‘ethnic cleansing’ and the role of religious fundamentalism in justifying human atrocities. Hypatia’s own slave (a fictionalised character, no doubt) is persuaded to join the Christians where he can turn his built-up resentment into justified slaughter.

Hypatia would have been influenced by Pythagoras’s quadrivium, upon which Plato’s Academy was based: arithmetic, geometry, astronomy and music. In the movie she is depicted as a ‘truth-seeker’, who questions Ptolemy’s version of the solar system and performs an experiment to prove to herself, if no one else, that the Earth could move without us being aware of its motion. I suspect this is poetic licence on the part of Amenabar, along with the inference that she may have foreseen that the Earth’s orbit is elliptical rather than circular. What matters, though, is that she took her philosophy very seriously, and she appreciated the role of mathematics in discerning truth in the natural world. There is a scene where she rejects Christianity on the basis that she can’t accept knowledge without questioning it. It would have gone against her very being.

There is also a scene in which the Church’s hierarchy reads the well-known text from Timothy: “I suffer not a woman to teach or to control a man”, which is directed at the Roman Prefect, who holds Hypatia in high regard. The priest claims this is the word of God, when, in fact, it’s the word of Paul. Paul, arguably, influenced the direction of Christianity even more than Jesus. After all, Jesus never wrote anything down, yet Paul’s ‘letters’ are predominant in the New Testament.

Hypatia’s death, in the film, is sanitised, but history records it as brutal in the extreme. One account is that she was dragged through the streets behind a chariot and the other is that she had her flesh scraped from her by shards of pottery or sharp shells. History also records that the Bishop, Cyril, held responsible for her death, was canonised as a saint. The film gives a credible political reason for her death: that she had too much influence over the Prefect, and while they couldn’t touch him in such a malicious way, they could her.

But I can’t help but wonder at the extent of their hatred, to so mutilate her body and exact such a brutal end to an educated woman. I can only conclude that she represented such a threat to their power for two reasons: one, she was a woman who refused to acknowledge their superiority both in terms of gender and in terms of religious authority; and two, she represented a search for knowledge beyond the scriptures that could ultimately challenge their authority. I think it was this last reason that motivated their hatred so strongly. As a philosopher, whose role it was to seek knowledge and question dogma, she represented a real threat, especially when she taught ‘disciples’, some of whom became political leaders. A woman who thinks was the most dangerous enemy they knew.


Addendum: I've since read a book called Hypatia of Alexandria by Michael Deakin, Honorary Research Fellow at the School of Mathematical Sciences of Monash University (Melbourne, Australia). In the appendix, Deakin includes letters written to Hypatia by another Bishop, Synesius of Cyrene, who clearly respected, even adored her, as a former student.

Saturday 6 November 2010

We have to win the war against stupidity first

In Oz, we have a paper called The Australian, which was created by Rupert Murdoch when he was still a young bloke. Overseas visitors, therefore, may find it anomalous that in last weekend’s The Weekend Australian Magazine there was an article by Johann Hari that was critical of US policy in Afghanistan and Pakistan. Specifically, the use of drones in the so-called war on terror. The same magazine, by the way, runs a weekly column by one of Australia’s leading left-wing commentators, Phillip Adams, and has done so for decades. In his country of origin, it appears, Murdoch is something of a softie. Having said that, the article cannot be found on the magazine’s web page. Murdoch wouldn’t want to dilute his overseas persona apparently.

If I could provide a link, I obviously would, because I can’t relate this story any more eloquently than the journalist does. He starts off by asking the reader to imagine the street where they live being bombed by a robotic plane controlled by pilots on the other side of the world in a ‘rogue state’ called the USA. That’s a bit of poetic licence on my part: Hari does not use the term ‘rogue state’, and he asks you to imagine that the drone is controlled from Pakistan, not America. Significantly, the ‘pilots’ are sitting at a console with a joystick as if they’re playing a video game. But this ‘game’ has both fatal and global consequences.

The gist of Hari’s article is that this policy, endorsed by Obama’s administration and “the only show in town” according to some back-room analysts, actually enlists more jihadists than it destroys.

David Kilcullen, an Australian expert on Afghanistan and once advisor to the American State Department ‘…has shown that two percent of the people killed by the robot-planes in Pakistan are jihadis. The remaining 98 percent are as innocent as the victims of 9/11. He says: “It’s not moral.” And it gets worse: “Every one of these dead non-combatants represents an alienated family, and more recruits for a militant movement that has grown exponentially as drone strikes have increased.”’

David Kilcullen, who was an advisor to Condoleezza Rice during Bush’s administration, once said in an ABC (Oz BC) radio interview that ‘…we need to get out of the business of invading other people’s countries because we believe they may harbour terrorists.’

‘Juan Cole, Professor of Middle Eastern History at the University of Michigan, puts it more bluntly: “When you bomb people and kill their family, it pisses them off. They form lifelong grudges… This is not rocket science. If they were not sympathetic to the Taliban and al-Qa’ida before, after you bomb the shit out of them they will be.”’

According to Hari, drones were originally developed by Israel and are routinely deployed to bomb the Gaza Strip. Not surprisingly, the US government won’t even officially acknowledge that their programme exists. Having said that, Bob Woodward, in his book, Obama’s Wars, claims that ‘the US has an immediate plan to bomb 150 targets in Pakistan if there is a jihadi attack inside America.’ In other words, the people who promote this strategy see it as a deterrent, when all evidence points to the opposite outcome. As Hari points out, in 2004, a ‘report commissioned by Donald Rumsfeld said that “American direct intervention in the Muslim world” was the primary reason for jihadism'.

I could fill this entire post with pertinent quotes, but the message is clear to anyone who engages their brain over their emotions: you don’t stop people building bombs to kill innocent civilians in your country by doing it to them in their country.

Sunday 17 October 2010

ELVENE, the 2nd edition



My one and only novel, ELVENE, has been published as an e-book by IP (Interactive Publications) and also POD at Glasshouse Books, a Queensland based company. The cover is by Aaron Pocock, so it’s an all-Aussie affair, though I believe Dr. David Reiter, who founded IP, is an ex-pat American.

I haven’t met David or Aaron, or even spoken to them, such is the facility of the internet. Even though IP engaged Aaron (I paid for the artwork), we corresponded via an intermediary, and I’m very pleased with the results. I believe he captured both the atmosphere and the right degree of sensuality that is reflected in the text itself. I’ve always been a strong believer that the cover should reflect the content of the book, both contextually and emotionally.

If you read the blurb on the web site (written by me) you may be mistaken in the belief that this is a variation on James Cameron’s Avatar. Nothing against Avatar, but I need to point out that ELVENE was written in 2001/2, about 8 years before Avatar was released, but I suspect we have been influenced by the same predecessors, in particular, Frank Herbert’s 1965 classic, DUNE. If any of you have seen Miyazaki’s anime, Nausicaa of the Valley of the Wind (refer my recent post, 5 Oct.10) you may also see some similarities. I did when I saw it in 2006, even though it was first released in 1984. Obviously I can’t be influenced by something I didn’t even know existed, but I’m happy to be compared with Miyazaki anytime.

The book contains oblique references to Clarke, Kubrick, Coleridge, Kipling and even Barbarella (her ship was called Alfie, for you train-spotters). So, whilst Avatar could be best described as Dune meets Dances with Wolves, Elvene is Dune meets Dances with Wolves, meets Ursula Le Guin, meets Ian Fleming, meets Barbarella, meets Edgar Rice Burroughs. My influences began with the comic books I read in the 1960s, not to mention the radio serials I listened to before TV (yes, I’m that old). At the age of 9, I started writing my own Tarzan scripts, and I started drawing my own superheroes about the same time, possibly a bit older.

I once described ELVENE as a graphic novel without the graphics, and more than one person has told me that it’s ‘a very visual story’. An interesting achievement, considering I believe description to be the most boring form of prose (refer my August post on Creative Writing).

Most people who’ve read it ask: where’s the next one? Well, the truth is that I have started a sequel, but I find it hard to believe I will ever write anything as good as ELVENE again. It really feels like an aberration to me. I’m not a writer by profession, more a hobbyist; nevertheless I’m proud of my achievement. It’s not for everyone, but I’ve found that women like it in particular, including those who have never read a Sci-Fi book before. Maybe it’s a Sci-Fi book for people who don’t read Sci-Fi. I can only let others be the judge.

Two unsolicited reviews can be found at YABooksCentral: one by a teenager and one by a retired schoolteacher (both women).

More reviews can be found here. (Note: the top review contains spoilers)

Also available on Amazon, iBookstore, Lightning Source (Ingram) and ContentReserve.com.

Sunday 10 October 2010

The Festival of Dangerous Ideas

This is a post where I really don't have much to say at all, because this video says it all.

If you can't access the video, you can still read the transcript.

Where else would you find a truly international panel, with representatives from Indonesia, Pakistan, America, England and, of course, the host nation, Oz? I think the only internationally renowned participant is Geoffrey Robertson QC, who famously took up Salman Rushdie's case when he was subjected to a death-sentence fatwa by Iran's Ayatollah Khomeini (late 1980s early 90s). I suspect the rest of the panel are only well-known in their countries of origin.

Believe me, this discussion is well worth the 1 hour of your time.

Tuesday 5 October 2010

Nausicaa of the Valley of the Wind by Hayao Miyazaki

I’ve just read this 7-volume graphic novel over a single weekend. I saw the anime version a few years back at a cinematic mini-festival of Miyazaki’s work. As it turned out, it was the first of his movies I ever saw, and it’s still my favourite. Most people would declare Spirited Away or Princess Mononoke his best works, and they’re probably right, but I liked Nausicaa because certain elements of the story resonated with my own modest fictional creation, Elvene. You can see a Japanese trailer of the anime here.

The movie was released in 1984, but the graphic novels were only translated into English in 1997. I didn’t even know they existed until I looked it up on the Internet to inform a friend. And then a graphic novelist guest at our book club (see my blog list) told me that the local library has all 7 volumes; they’re catalogued under ‘graphic novel – teenager’. Even though Miyazaki is better known for his animated movies (Studio Ghibli), the film version of Nausicaa barely scratches the surface. The graphic novels are on the scale of Lord of the Rings or Star Wars or Dune. Of the 7 volumes, the shortest is 120 pages and the last is over 200 pages. If Miyazaki wasn’t Japanese, I’m sure this would be recognised as a classic of the genre.

Being Japanese, they’re read from right to left, so the back cover is actually the front cover and vice versa. I thought: why didn’t they just reverse the pagination for Western readers? But, of course, the graphics have to be read right to left as well. In other words, to Westernise them they’d have to be mirror-reversed, so wisely the publishers left them alone.

On the inside back cover (front cover for us) Miyazaki explains the inspiration for the character. Of course, Nausicaa was originally a character in Homer’s The Odyssey, but Miyazaki first came across her in a Japanese translation of Bernard Evslin’s dictionary of Greek mythology. Evslin apparently gave 3 pages to Nausicaa but only one page each to Zeus and Achilles, so Miyazaki was a little disappointed when he read Homer’s original and found that she played such a small yet pivotal role in Odysseus’s journey. He was also influenced by a Japanese fictional heroine in The Tales of Past and Present called “the princess who loved insects”.

Those who are familiar with Miyazaki know that all his stories have strong female roles, and, personally, I think Nausicaa is the best of them, albeit one of the youngest.

But this reference to Homer’s Odyssey raises a point that has long fascinated me about graphic novels (or comic books, as they were known when I was a kid). They are arguably the only literary form which echoes back to the mythical world of the ancients, where characters have god-like abilities with human attributes. Now some of you may ask: what about fantasy fiction of the sword-and-wizard variety? King Arthur, Merlin and Gandalf surely fall into that category. Yes, they are somewhat in between, but they are not superheroes, of whom Superman is the archetype. Bryan Singer’s film version, Superman Returns, which polarised critics and audiences, makes the allusion to Christ most overtly and, I suspect, deliberately.

It’s not just the Bible that provides a literary world where humanity and Gods meet (well there are 2 God characters in the Bible, the Father and the Son, not to mention Satan). Moses talked to a burning bush, Abraham was visited by angels, and Jesus conversed with Satan, God and ordinary mortals, including prostitutes.

The Mahabharata is a classic Hindu text involving deities and warring families, and of course there’s Homer’s tales, where the Greek gods take sides in battles and make deals with mortals.

Well, Miyazaki’s Nausicaa falls into this category, in my view, even though there’s not a deity in sight. Nausicaa is probably the most Christ-like character I’ve come across in contemporary fiction since Superman. However, that’s a Western interpretation – I expect Miyazaki would be more influenced by the Goddess of Mercy (Guan Yin in China, Kannon in Japan).

Nausicaa is a warrior princess with prodigious fighting abilities but her greatest ability is to empathise with all living creatures and to win over people to her side through her sheer personality and integrity. This last attribute is actually the most believable part of the novel, and when she continually wins respect and trust, Miyazaki convinces us that this human aspect of her character is real. But there are supernatural qualities as well. Her heart is so pure that she is able to lead the most evil character in the story into the afterlife (reminiscent of a scene in Harry Potter with a different outcome). In the last volume there is a warrior-god intent on destruction (an artificial life-form) whom she bends to her will through her sheer compassion because he believes she is his mother.

There are numerous other characters, but Princess Kushana is probably the most complex. She is involved in a mortal struggle with her emperor father and throne-contending brothers, but the most interesting relationship she has is with her ambitious Chief of Staff, Kurotowa. Early in the story she tries to have him killed; much later she saves his life.

Like Princess Mononoke, Miyazaki’s tale is a cautionary one about how humanity is destroying the ecology of the planet. Other subplots warn against religious dogma being used as a political weapon to manipulate people into war, and petty royal rivalries decimating populations through war and creating starving refugee communities out of the survivors.

There are, of course, a small group of characters who see Nausicaa as a prophet, and even a goddess, which creates problems for her in and of itself.

This is a rich story of many layers, not just a boy’s (or girl’s) own adventure. Nausicaa is a classic of the graphic novel genre – it’s just not recognised as such because it’s not American.

Thursday 23 September 2010

Happiness and the role of empathy

It’s been a while between posts, but I’ve been busy on many fronts, including preparing Elvene for a second edition as an e-book and POD (print on demand). I’ll write a future post on that when it’s released in a couple of months. I’m also back to working full time (my real job is engineering) so my time is spread thinner than it used to be.

I subscribe to Philosophy Now, which is an excellent magazine even if its publication is as erratic as my blog, and it always comes out with a theme. In this issue (No 80, August/September 2010) the theme, always given on the cover, is the human condition: is it really that bad? This post arose from a conflation in my mind of two of its essays. One is on Compassion & Peace, by Michael Allen Fox, Professor Emeritus of Philosophy at Queen’s University, Canada, and Adjunct Professor, School of Humanities, University of New England, Australia. (Philosophy Now is a UK publication, btw.) The other is an essay by Dr. Kathleen O’Dwyer, who describes herself as ‘a scholar, teacher and author’ (my type of academic). Her essay is titled Can we be happy? but it’s really a discussion of Bertrand Russell’s treatise, The Conquest of Happiness, with a few other references thrown in, like Freud, Laing and Grayling, amongst others.

I will dive right in with a working definition that O’Dwyer provides and is hard to beat:

“…a feeling of well-being – physical, emotional, spiritual or psychological; a feeling that one’s needs are being met – or at least that one has the power to strive towards the satisfaction of the most significant of such needs; a feeling that one is being authentic in living one’s life and in one’s relations with significant others; a feeling that one is using one’s potential as far as this is possible; a feeling that one is contributing to life in some way – that one’s life is making a difference.”

As she says, it’s all about ‘feeling’, which is not only highly subjective but based on perceptions. Nevertheless, she covers most bases, in particular the sense of freedom to pursue one’s dreams and the need to belong, though she doesn’t use either of those phrases specifically. However, I would argue that these are the 2 fundamental criteria that one can distill from her synopsis.

Her discussion of Russell leads to talk about the opposite of happiness, its apparent causes and how to overcome it. Russell, like myself, suffered from depression in his early years, and this invariably affords a degree of self-examination that can either lead to self-obsession or revelation, and, in my case, both: one came before the other; and I don’t have to tell you in what order.

But Russell expresses the release or transcendence from this ‘possession’ rather succinctly as “a diminishing preoccupation with myself”. And this is the key to happiness in a nutshell, as also expressed by psychiatrist George Vaillant, from the Harvard Medical School, who was interviewed in May this year on ABC’s 7.30 Report (see embedded video below).

And this segues into empathy, which I contend is the most important word in the English language. Fox goes to some length to explain the differences between compassion, empathy, sympathy and sacrifice, which, personally, I find unnecessary. They all extend from the inherent ability to put oneself in someone else’s shoes, and that is effectively what empathy is. So I put empathy at the head of all these terms, and as the source of altruism for most people. Studies have demonstrated that reading fiction improves empathy (refer my post on Storytelling, July 2009). The psychometric test is very simple: determining the emotional content of eyes with no other available cues. As a writer, I don’t find this surprising, because, without empathy, fiction simply doesn’t work. As I mentioned in that post, the reader becomes an actor in their own mind but they’re not consciously aware of it.

But, more significantly, I would argue that all art exercises empathy, because it’s the projection of one individual’s imagination into another’s. Many artists, myself included, feel it’s their duty to put the reader or their audience in someone else’s shoes. It’s no surprise to me that art flourishes in all human societies and is often the most resilient endeavour when oppression is dominant.

But, more significant to the topic at hand, empathy and happiness are inseparable in my view. Contrary to some people’s beliefs and political ideologies, one rarely, if ever, gains happiness from another person’s suffering. Hence the message of Fox’s essay: peace and compassion go hand in hand.

The theme of Russell’s thesis (as revealed by O’Dwyer) and the message illuminated by George Vaillant below are exactly the same. We don’t find happiness in self-obsession, but in its opposite: the ability to empathise and give love to others.

Saturday 14 August 2010

How to create an imaginary, believable world.

Earlier this week (last Tuesday, in fact) I was invited to take a U3A class as a 'guest speaker', with the title of this post as the topic. I was invited by Shirley Randles, whom I already knew (see below). In preparation, I wrote out the following, even though I had no intention of reading it out; it was just an exercise to collect my thoughts. As it turned out, Shirley wasn't able to attend due to a family illness, and the 'talk' became a free-form discussion that made the 1¾ hrs go very quickly. In the last 15-20 minutes, I gave them a short writing exercise, which everyone seemed to enjoy and performed admirably.

Some of you may have read a post I wrote last year on Storytelling, so there is some repetition, though a different emphasis, in this post.



Firstly, I want to thank Shirley for inviting me to come and talk. I just want to say that I’m not a bestselling author, or even a prolific writer. But I have given courses in creative writing and Shirley interviewed me a few years back and liked what I write and liked what I had to say as well.

Science fiction and fantasy are my genres, but what I have to say applies to all genres, because all fiction involves immersing your reader in an imaginary world. And if that world is not believable then you won’t engage them. We call it suspension of disbelief. It’s very similar to being in a dream, because, whilst we are in a dream, we believe it totally, even though, when we awake and analyse it, it defies our common sense view of the world. And I will come back to the dream analogy later, because I think it’s more than a coincidence; I think that stories are the language of dreams.

There are 3 components to all stories: character, plot and world. I don’t know if any of you saw the PIXAR exhibition a couple of years ago at ACMI, but it was broken down into those 3 areas, only they called plot ‘story’. Now, everyone knows about plot and character, but most people don’t pay much attention to world. It is largely seen as a sub-component of plot. But I make the distinction, if for no other reason, than they all require different skills as a writer.

But I’m going to talk about plot and character first, because the world only makes sense in the context of the other two. And also, character and plot are very important components in making a story believable.

It is through character that a reader is engaged. The character, especially the protagonist, is your window into a story. In fact, I think character is the most important component of all. When I think of an idea for a story, it always comes with the character foremost. I can’t speak for other writers, but, for me, the character invariably comes with the initial idea.

All stories are an interaction between plot and character, and I have a particular philosophical view on this. The character and plot are the inner and outer world of the story, and this has a direct parallel in real life. We all, individually, have an inner and outer world, and, in life, the outer world is fate and the inner world is free will. So, to me, fate and free will are not contradictory but complementary. Fate represents everything we have no control over and free will represents our own actions. So, in a story, the plot is synonymous with fate and character is synonymous with free will. Just like in real life, a character is challenged, shaped and changed by fate: the events that directly impact on him or her. And this is the fundamental secret to storytelling. The character in the story reacts to events, and, as a result, changes and, hopefully, grows.

Now, I’m going to take this analogy one step further, because, ideally, as a writer, I believe you should give your characters free will. As Colleen McCullough once said, you play God in that you create all the obstacles and hurdles for your characters to deal with, but, for me, the creative process only works when the characters take on a life of their own.

To explain what I mean, I will quote the famous artist, M.C. Escher: "While drawing I sometimes feel as if I were a spiritualist medium, controlled by the creatures I am conjuring up." Now, I think most artists have experienced this at some point, but especially writers. When you are in the zone (to use a sporting reference) you can feel like you are channelling a character. I call it a Zen experience. Richard Tognetti, the virtuoso violinist with the ACO (Australian Chamber Orchestra), once made the comment that it’s like ‘you’re not there’, which I thought summed it up perfectly. Strange as it may sound, the best writing you will ever do is when your ego is not involved – you are just a medium, as Escher so eloquently put it.

There is a philosophical debate amongst writers about whether to outline or not. Most of the writers I’ve met argue that you shouldn’t, whereas most books on the topic argue that you should. Both Peter Temple and Peter Corris argue against it. Stephen King is contemptuous of anyone who outlines, whereas J.K. Rowling famously plotted out all 7 novels of Harry Potter. My advice: you have to find what works for you.

Personally, I do an outline, but it’s very broad brush – more like scaffolding than a blueprint. I found this technique through trial and error, and I suggest anyone else do the same, because what works for me won’t necessarily work for you.

Now, I’m finally going to talk about world. After all, it’s what this talk is meant to be about, isn’t it? Well, yes and no. Creating a believable world actually starts with character, in my opinion. The more real your characters are, the more likely you are to engage your readers. This is why books like Lord of the Rings and the Harry Potter series are so successful, even though the worlds and plots they describe are so fantastical.

All works of fiction are a combination of reality and fantasy, and how you mix them varies according to genre. But grounding a story in a believable character is not only the easiest method, it’s also the most successful. The quickest way to break the spell of a story, for me, is to make a character do something completely out of character. So-called reversals, where the hero suddenly turns out to be the villain, are the cheapest of plot devices as far as I’m concerned. There are exceptions, and to give one example: Snape in Harry Potter is actually a ‘double agent’, so his reversal is totally believable, and when we learn about it, a lot of things suddenly make sense. Also, a character who is not what they appear to be is not what I’m talking about here. Ideally, a character reveals themselves gradually over the story, and can even change and grow, as I described above, but a complete reversal is a lot harder to swallow, especially when it’s done as a final ‘twist’ to dupe the reader.

The first thing to know about world is what it is not. It is not just background or setting; it’s an interactive component of the story. One of the things that distinguishes fiction from non-fiction is the message, because the message in fiction is always emotive. You have to engage the reader emotionally, and that includes the world. There are 5 narrative styles that I’m aware of, though some people may contend that there are fewer or more. Basically, they are description, exposition, dialogue, action and introspection. By introspection I mean what’s going on inside the character’s head. Most books on writing will tell you that exposition is the most boring, but I disagree. I think description is the most boring – it’s the part of the text that readers skip over to get on with the story.

If you read the classics from the 19th century, and even the early, or not-so-early, 20th century, you will find that writers would describe scenes in great detail. TV and movies changed all that, for 2 reasons: one, we became more impatient; and two, cinema and video eliminated the need for description almost completely. So novels developed a shorthand whereby scenes are rendered more like impressionist paintings. But what’s most important, when you set up a scene, is to create atmosphere and mood, because that’s what engages the reader emotionally.

And here I return to my earlier reference to dreams, because I believe that dreams are our primal language. The language of dreams is imagery and emotion, and that’s also the language of story. The reason I believe that written stories (as opposed to cinema) facilitate imagery in our minds is that we already do it in our dreams. The medium for a novel is not the words on the page but the reader’s imagination. You have to engage the reader’s imagination, otherwise the story is lifeless – just words on a page.

One final point, which brings me back to character. If you tell the story from a character’s point of view, then you engage that character’s emotions and senses. So if you relate a scene through the character’s eyes, ears, nose and sense of touch, you overcome the boredom of description more readily.

Friday 23 July 2010

The enigma we call time

The June 2010 edition of Scientific American had an article called Is Time an Illusion? The article was written by Craig Callender, a ‘philosophy professor at the University of California, San Diego’, and explains how 20th Century physics has all but explained time away. In fact, according to him, some scientists believe it already has. It reminds me of how many scientists believe that free will and consciousness have been explained away as well, or, if not, then that the terms have passed their use-by date. I once had a brief correspondence with Peter Watson, who wrote A Terrible Beauty, an extraordinarily well-researched and well-written book that attempts to cover the great minds and great ideas of the 20th Century, mostly in art and science rather than politics and history. He contended that words like imagination and mind were no longer meaningful because they referred to an inner state of which we have no real understanding. He effectively argued that everything we contemplate as ‘internal’ is really dependent on our ‘external’ world, including the language we use to express it. But I’m getting off the track before I’ve even started. My point is that time, like consciousness and free will, and even imagination, is an experience that we all have, which makes it as real as any empirically derived quantity that we know.

But isn’t time an empirically derived quantity as well? Well, that’s effectively the subject of Callender’s essay. Attempts to rewrite Einstein’s theory of general relativity (gravity) in the same form as electromagnetism, as John Wheeler and Bryce DeWitt did in the late 1960s, resulted in an equation from which time (denoted as t) simply disappeared. As Callender explains, time is the real stumbling block in any attempt at a theory of quantum gravity, which would combine quantum mechanics with Einstein’s general relativity. According to the theory of relativity, time is completely dependent on the observer, where the perceived sequence of events can differ from one observer to another depending on their relative positions and velocities, though causality is always conserved. On the other hand, quantum mechanics, through entanglement, can defy Einstein’s equations altogether (see my post on Entanglement, Jan 2010).
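For anyone who wants to see what a disappearing t looks like on paper: the equation in question is the Wheeler–DeWitt equation (my gloss – Callender’s article itself prints no equations). Compare it with the ordinary time-dependent Schrödinger equation:

```latex
% Ordinary quantum mechanics: the state evolves in time
i\hbar \frac{\partial \Psi}{\partial t} = \hat{H}\Psi

% Wheeler-DeWitt (canonical quantum gravity): the same Hamiltonian
% structure, applied to the whole universe, but t has vanished
\hat{H}\Psi = 0
```

In the second equation the wavefunction of the universe is simply annihilated by the Hamiltonian constraint; there is no t left to evolve in, which is precisely the ‘problem of time’ the article describes.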

But let’s start with our experience of time, since it entails our entire life, from the moment we start storing memories up to our death. And this storing of memories is a crucial point; otherwise we’d have no sense of time at all, no sense of past or future, just a continuous present. Oliver Sacks, in his book The Man Who Mistook His Wife for a Hat, tells the story of a man (in ‘The Lost Mariner’) who suffered anterograde amnesia through excessive alcoholism, and who, in the 1970s when Sacks met him, still thought he was living in 1949 or thereabouts, when he left the navy after WW2. The man was unable to create new memories, so he was effectively stuck in time, at least psychologically.

Kant famously argued, in his Critique of Pure Reason, that both time and space are projections of the human mind. Personally, I’ve always had a problem with Kant’s thesis on this subject, because I contend that both time and space exist independently of the human mind. In fact, they are the very fabric of the universe – but I’m getting ahead of myself again.

Without memory we would have no sense of the past, and without imagination, no sense of the future. Brian Boyd, in his book The Origin of Stories (see my review called Storytelling, July 2009), referenced neurological evidence showing that we use the same parts of the brain when we envisage the past as when we envisage the future. In both cases, we create the scenario in our mind, so how do we tell the difference?

Raymond Tallis, who writes a regular column in Philosophy Now (Tallis in Wonderland), wrote a very insightful essay in the April/May 2010 edition (the Is God really Dead? issue) ‘on the true mystery of memory’, where he explains the fundamental difference between memory in humans and memory in computers. It is impossible for me to do justice to such a brilliant essay, but effectively he asks how the neuron or neurons that supposedly store a memory know, or tell us, when the memory was made in a temporal sense – something we all intuitively sense. Memory in a computer, on the other hand, simply has a date and time stamp on it, a label in effect, but is otherwise physically identical to when it was created.

In the case of the brain, long-term memories are generated in the hippocampus, where new neurons are created when something eventful happens, tying events together. Long-term memory is facilitated by association, and so is learning, which is why analogies and metaphors are so useful for comprehending new knowledge – but I’m getting off the track again.

The human brain – and any other brain, one expects – recreates the memory in our imagination, so it’s not always accurate and certainly lacks photographic detail, but it somehow conjures up a sense of the past, even of distance in time. Why are we able to distinguish this from an imaginary scenario that has never actually happened? Of course, we can’t always, and false memories have been clinically demonstrated to occur.

Have you ever noticed that in dreams (see previous post) we experience a continuous present? Our dreams never have a history and never a future; they just happen, and often morph into a new scenario in such a way that any dislocation in time is not even registered, except when we wake up and try to recall them. Sometimes in a dream, I have a sense of memory attached to it, like I’ve had the dream before, yet when I wake up that sense is immediately lost. I wonder if this is what happens when people experience déjà vu (when they’re awake, of course). I’ve had episodes of TGA (Transient Global Amnesia), where one’s thoughts seem to go in loops. It’s very disorienting, even scary, and the first time I experienced it, I described it to my GP as being like ‘memories from the future’, which made him seriously consider referring me to a psychiatrist.

So time, as we experience it, is intrinsically related to memory, yet there is another way we experience time, all the time, at least while we are conscious. And it is this ‘other way’ that made me challenge Kant’s thesis when I first read it and was asked to write an essay on it. All animals with sight experience time through their eyes, because our eyes record the world quite literally as it passes us by, in so many frames a second – in the case of humans, twenty-something. Movies and television need to run at a higher frequency (24 frames per second for film, from memory) for us to see movement fluidly. But many birds have a higher rate than us, so they would see a TV as jerky. When we see small birds flick their heads about in quick movements, they would see the same movements as fluid, which is why they can catch insects in mid-flight and we haven’t got Buckley’s. The point is that we literally see time, but different species see time at different rates.
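To put rough numbers on this (my own back-of-envelope illustration; the bird’s figure is an assumption, purely for the sake of the arithmetic):

```latex
\frac{1}{24}\,\text{s} \approx 42\,\text{ms per film frame},
\qquad
\frac{1}{100}\,\text{s} = 10\,\text{ms per bird `frame'}
```

So a bird that resolved, say, 100 images per second would refresh its view four times while a single cinema frame sat unchanged on the screen – a slide show rather than fluid motion.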

We all know that our very existence in this world, on a cosmic scale, is just a blink, and a subliminal blink at that. On the scale of the universe at large, we barely register. Yet think upon this: without consciousness, time might as well not exist, because without consciousness the idea of a past or future is irrelevant, arguably non-existent. In this sense, Kant was right. It is only consciousness that has a sense of past and future; certainly nothing inanimate has a sense of past and future, even if it exists in a causal relationship with something else.

But of course, we believe that time does exist without consciousness, because we believe the universe had a cosmic history long before consciousness even evolved and will continue to exist long after the planet, upon which we are dependent for our very existence, and the sun, upon which we are dependent for all our needs, both cease to exist.

There has been one term that keeps cropping up in this dissertation which has time written all over it, and that’s causality. Causality is totally independent of the human mind or any other mind (I’m not going to argue about the ‘mind of God’). Causality, which we not only witness every day but which is intrinsic to all physical phenomena, is the greatest evidence we have that time is real. Even Einstein’s theories of relativity, which, as Callender argues, effectively dismiss the idea of a universal (or absolute) time, still allow for causality.

David Hume famously challenged our common-sense view of causality, arguing that it can never be proven; only that one event has followed another. John Searle gives the best counter-argument I’ve read, in his book, Mind, but I won’t digress, as both of their arguments are outside the scope of this topic. However, every animal that pursues its own food believes in causality, even if it doesn’t think about it the way philosophers do. Causality only makes sense if time exists, so if causality is a real phenomenon then so is time. I might add that causality is also a linchpin of physics; without it, conservation of momentum suddenly becomes a non sequitur.

My knowledge of relativity theory and quantum mechanics is very rudimentary, to say the least; nevertheless, I believe I know enough to explain a few basic principles. In a way, light replaces time in relativity theory; that’s because, for a ray of light, time really does not exist. For a photon, time is always zero – it only becomes a temporal entity for an observer who either receives it or transmits it. That is why light always marks the shortest path between 2 events, whether you want to travel between them or send a message. Einstein’s great revelation was to appreciate that this effectively turned time into a dimension commensurate with a spatial dimension. Equations for space-time include a term that is the speed of light multiplied by time, which effectively gives another dimension in addition to the 3 dimensions of space we are familiar with. You can literally see this dimension of time when you look at a night sky or peer through an astronomical telescope, because the stars you are observing are separated from us not only by space but also by time – thousands of years, in fact.
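That ‘speed of light multiplied by time’ term can be written down explicitly (standard textbook notation – my addition, not a quote from any of the books mentioned). The space-time separation s between two events is:

```latex
s^2 = (ct)^2 - x^2 - y^2 - z^2
```

For a ray of light, the spatial distance covered is exactly ct, so s² = 0: in this precise sense a photon ‘experiences zero time’ between emission and absorption, however far apart the two events are in space.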

But quantum mechanics is even more bizarre and difficult to reconcile with our common-or-garden view of the world. A lot of quantum weirdness stems from the fact that under certain conditions, like quantum tunneling and entanglement, time and space seem to become irrelevant. Entanglement implies that instantaneous connections are possible, across any distance, completely contrary to the constraints of relativity that I described above (see addendum below). And quantum tunneling also disregards relativity theory, in that time can literally disappear, albeit temporarily and within energy constraints (refer my post, Oct.09).

But relativity and quantum mechanics are not the end of the story of time in physics; there is another aspect, which is perhaps even more intriguing, because it gives us the well-known arrow of time. Last year I wrote a review of Erwin Schrödinger’s book, What is Life? (Nov.09), a recommended read for anyone with an interest in philosophy or science. In it, Schrödinger reveals that one of his heroes was Ludwig Boltzmann, and it was Boltzmann who elucidated for us the second law of thermodynamics: the law that entropy never decreases. It is entropy that apparently drives the arrow of time, as Penrose, Feynman and Schrödinger have all pointed out in various books aimed at laypeople like myself. But it was Penrose who first explained to me (in The Emperor’s New Mind) that whilst both relativity theory and quantum mechanics allow for time reversal, entropy does not.
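Boltzmann’s own formulation – engraved on his tombstone, and found in any thermodynamics text – compresses the idea into one line:

```latex
S = k \ln W
```

where W counts the number of microscopic arrangements consistent with a given macroscopic state and k is Boltzmann’s constant. Disordered arrangements vastly outnumber ordered ones, so entropy overwhelmingly increases; on this view, the arrow of time is just statistics.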

Callender, very early in his Scientific American article, posits the idea that time may be an emergent property of the universe, and entropy seems to fit that role. Entropy is why you can’t reconstitute an egg into its original form after you’ve dropped it on the floor, broken its shell and spilled its contents into the carpet. You can run a film backwards showing a broken egg coming back together and rising from the floor with no trace of a stain on the carpet, but we immediately know it’s false. And that’s exactly what you would expect to see if time ran backwards, even though it never does. The two perceptions are related: entropy says that the egg can’t be recovered from its fall, and so does the arrow of time; they are the same thing.
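The broken-egg argument can be made concrete with a toy model. The sketch below is purely my own illustration (the classic Ehrenfest urn model – nothing of the sort appears in Callender’s article or Penrose’s book): start with all 50 ‘molecules’ in the left half of a box, let one randomly chosen molecule hop across the divide each step, and watch the system drift towards half-and-half and essentially never return.

```python
import random

def ehrenfest(n=50, steps=2000, seed=1):
    """Ehrenfest urn model: a toy arrow of time.

    All n molecules start in the left half of a box. Each step, one
    molecule chosen uniformly at random hops to the other half. The
    microscopic rule is perfectly time-symmetric, yet the occupancy
    of the left half drifts towards n/2 and, for large n, effectively
    never returns to the fully ordered starting state.
    """
    random.seed(seed)
    left = n                       # every molecule starts on the left
    history = [left]
    for _ in range(steps):
        # the chance the chosen molecule is currently in the left half
        if random.random() < left / n:
            left -= 1              # it hops to the right
        else:
            left += 1              # it hops to the left
        history.append(left)
    return history

if __name__ == "__main__":
    n = 50
    h = ehrenfest(n=n)
    print("left-half count: start", h[0], "-> end", h[-1])
    print("steps at which all molecules were back on one side:",
          sum(1 for x in h[1:] if x in (0, n)))
```

Every individual step is reversible, just like the underlying physics, yet the ordered starting state is never revisited in any run of practical length. The egg does not unbreak.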

But Penrose, in his exposition, goes further, and explains that the entire cosmos follows this law, from the moment of the Big Bang until the death throes of the universe – it’s a universal law.

But this in itself raises another question: if a photon experiences zero time, and the early universe (like its eventual death) consisted entirely of radiation, where then is time? And without time, how did the universe evolve into a realm that is not entirely radiation? Well, there is a clue in the radiation itself, because all radiation has a frequency, and from the frequency it has an energy, defined by Planck’s famous equation, E = hf, where f is frequency and h is Planck’s constant. So the very equation that gives us the energy of the universe also entails time, because frequency is meaningless without time. But if photons experience zero time, how is this possible?

Also, as any particle approaches the velocity of a photon, its time approaches zero. This is what happens when something falls into a black hole: it becomes frozen in time to an external observer. Perhaps there is more than one type of time: a relativistic time that varies from one observer to another (a known fact, because the accuracy of GPS signals transmitted from satellites is dependent on it), and an entropic time that drives the entire universe and stops time from running backwards, thus ensuring causality is never violated. And what of time in quantum mechanics? Well, quantum mechanics hints that there is something about our universe that we still don’t know or understand, and, to (mis)quote Wittgenstein: of that which one does not know, one should not speak.
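Two of the claims in that paragraph can be checked with a few lines of arithmetic. The sketch below uses textbook constants and standard first-order formulas (the figures and variable names are mine; the post itself quotes no numbers) to evaluate Planck’s E = hf for visible light, and the two relativistic clock offsets that GPS has to correct for:

```python
import math

# Physical constants (SI units)
h   = 6.62607e-34       # Planck's constant, J*s
c   = 2.99792458e8      # speed of light, m/s
GM  = 3.986004e14       # Earth's gravitational parameter, m^3/s^2
R_E = 6.371e6           # mean radius of the Earth, m

# 1. Planck's relation E = hf, for green light (f ~ 5.6e14 Hz)
f = 5.6e14
E = h * f
print(f"E = hf = {E:.2e} J (~{E / 1.602e-19:.1f} eV per photon)")

# 2. GPS clock offsets. The satellites orbit roughly 20,200 km up.
r_sat = R_E + 2.02e7             # orbital radius, m
v_sat = math.sqrt(GM / r_sat)    # circular orbital speed, m/s
day   = 86400                    # seconds per day

# Special relativity: the moving clock runs SLOW by ~v^2/(2c^2)
sr = -(v_sat**2) / (2 * c**2) * day
# General relativity: the higher clock (weaker gravity) runs FAST
gr = GM / c**2 * (1 / R_E - 1 / r_sat) * day

print(f"special relativity: {sr * 1e6:+.2f} microseconds/day")
print(f"general relativity: {gr * 1e6:+.2f} microseconds/day")
print(f"net offset:         {(sr + gr) * 1e6:+.2f} microseconds/day")
```

The net drift comes out at roughly +38 microseconds a day. Left uncorrected, that alone would walk GPS positions off by kilometres within a day, which is why the relativistic corrections are a ‘known fact’ rather than a theoretical nicety.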

Addendum: Timmo, who is a real physicist, has pointed out that my comment on entanglement could be misconstrued. Specifically, entanglement does not allow faster-than-light communication. For a more comprehensive discussion on entanglement, I refer you to an earlier post.

Addendum 2: I revisited this topic in Oct. 2011 with a post, Where does time go? (in quantum mechanics).

Sunday 20 June 2010

What dreams are made of

Last week’s New Scientist (12 June 2010) had a very interesting article on dreams, in particular ‘lucid dreaming’, by Jessica Hamzelou. She references numerous people: Ursula Voss (University of Frankfurt), Patrick McNamara (Boston University), Allan Hobson (Harvard Medical School), Eric Nofzinger (University of Pittsburgh), Victor Spoormaker (Utrecht University) and Michael Czisch (Max Planck Institute); so it’s a serious topic all over the world.

Ursula Voss argues that there are 2 states of consciousness, which she calls primary and secondary. ‘Primary’ is what most animals perceive – raw sensations and emotions – whereas ‘secondary’ is unique to humans, according to Voss, because only humans are “aware of being aware”. This in itself is an interesting premise.

I don’t agree with the well-known supposition that most animals don’t have a sense of ‘self’ because they don’t recognise themselves in a mirror. Even New Scientist reported on challenges to this view many years ago (before I started blogging). The lack of recognition of one’s own reflection is obviously a cognitive misperception, but it doesn’t axiomatically mean that the animal doesn’t have a sense of its own individuality relative to other members of its own species, which is how I would define a sense of self. In other words, a sense of self is the ability to differentiate one’s self from others. The fact that an animal mistakenly perceives its own reflection as an ‘other’ doesn’t imply the converse: that it can’t distinguish its self from a genuine other – in fact, if anything, it confirms that cognitive ability, albeit erroneously exercised.

That’s a slight detour from the main topic; nevertheless, it’s relevant, because I believe it’s not what Voss is referring to, which is our ability ‘to reflect upon ourselves and our feelings’. It’s hard to imagine that any animal can contemplate its own thoughts the way we do. What makes us unique, cognitively, is our ability to create concepts within concepts ad infinitum, which is why I can write an essay like this and no other primate can. I always thought this was my own personal philosophical insight until I read Gödel, Escher, Bach and realised that Douglas Hofstadter had reached it many years before. And, as Hofstadter would point out, it’s this very ability, which allows us to look at ourselves almost objectively, just as we do others, that we call self-contemplation. If this is what Voss is referring to when she talks about ‘secondary consciousness’, then I would probably agree with her premise.

So what has this to do with dreams? Well, one of the aspects of dreams that distinguishes them from reality is that they defy rational expectations, yet we seem totally accepting of this. Voss contends that it’s because we lose our ‘secondary’ consciousness during dreaming that we lose our rational radar, so to speak (my turn of phrase, not hers).

The article argues that with lucid dreaming we can get our secondary consciousness back, and there is some neurological evidence to support this conjecture, but I’m getting ahead of myself. For those who haven’t come across the term before, lucid dreaming is the ability to take conscious control of one’s dream. In effect, one becomes aware that one is dreaming. Hamzelou even provides a 5-step procedure to induce lucid dreams.

Now, from personal experience, any time I’ve realised I’m dreaming, it has immediately popped me out of the dream. Nevertheless, I believe I’ve experienced lucid dreaming, or at least a form of it. According to Patrick McNamara (Boston University), our dream life goes downhill as we age, especially once we’ve passed adolescence. Well, I have a very rich dream life, virtually every night, but then I’ve learnt, from anecdotal evidence at least, that storytellers seem to dream more, or recall their dreams more, than other people do. I’d be interested to know if there is any hard evidence to support this.

Certainly, storytellers understand the connection between story and dreaming, because, like stories, dreams put us in situations that we don’t face every day. In fact, it has been argued that dreams’ evolutionary purpose was to remind us that the world can be a dangerous place. But I’m getting off the track again, because, as a storyteller, I believe that my stories come from the same place my dreams do. In other words, in my dreams I meet all sorts of characters that I would never meet in real life, and have experiences that I would never have in real life. But I’ve long been aware that there are 2 parts to my dreams: one part being generated by some unknown source and the other part being my subjective experience of it. In the dream, I behave as a conscious being, just as I would in the real world, and I wonder if this is what is meant by lucid dreaming. Likewise, when one is writing a story, there is often a sense that it comes from an unknown source, and you consciously inhabit the character who is experiencing it. Which is exactly what actors do, by the way, only the dream they are inhabiting is a movie set or a stage.

Neurological studies have shown that there is one area of the brain that shuts down during REM (Rapid Eye Movement) sleep, which is the signature behavioural symptom of dreaming. The dorsolateral prefrontal cortex (DLPFC) was ‘remarkably subdued during REM sleep, compared with during wakefulness’. Allan Hobson (Harvard) believes that this is our rationality filter (again, my term, not his) because its inactivity correlates with our acceptance of completely irrational and dislocated events. Neurological studies of lucid dreams have been difficult to capture, but one intriguing finding has been an increase in a specific brainwave at 40 hertz in the frontal regions. In fact, the neurological studies done so far point to brain activity somewhere in between normal REM sleep and full wakefulness. The studies aren’t sensitive enough to determine whether the DLPFC plays a role in lucid dreams or not, but the 40 hertz brainwave is certainly more characteristic of wakefulness.

To me, dreams are what-if scenarios, and opportunities to gain self-knowledge. I’ve long believed that one can learn from one’s dreams, not in a Jungian or Freudian sense, but more pragmatically. I’ve always believed that the way I behave in a dream simulates the way I would behave in real life. If I behave in a way that I’m not comfortable with, it makes me contemplate ways of self-improvement. Dreams allow us to face situations that we might not want to confront in reality. It’s our ability for self-reflection, which Voss calls secondary consciousness, that makes dreams valuable tools for self-knowledge. Stories often serve the same purpose. A story that really impacts on us is usually one that confronts issues relevant to our lives, or makes us aware of issues we prefer to ignore. In this sense, both dreams and stories can be a good antidote to denial.

Sunday 23 May 2010

Why religion is not the root of all evil

Last week (19 May 2010, Sydney time) I heard an interview with William Dalrymple, who is currently attending the Sydney Writers’ Festival. The interview centred on his latest book, Nine Lives: In Search of the Sacred in Modern India.

Dalrymple was born in Edinburgh but has travelled widely in India, and the book apparently examines the lives of nine religious followers there. I haven’t read the book myself, but, following the interview, I’m tempted to seek it out.

As the title of his book suggests, Dalrymple appears fascinated by the religious in general, although he gave no indication in the interview of what his own beliefs may be. His knowledge of India’s religions seems extensive, and he raised a couple of points which I found relevant to the West’s perspective on Eastern religions and to the current antagonistic attitudes towards religious issues: Islam, in particular.

As I say, I haven’t read the book, but the gist of it, according to the interview, is that he interviewed nine people who lead distinctly different cultural lives in India, and wrote a chapter on each one. One of the points he raised, which I found relevant to my own viewpoint, is the idea that God exists inside us, not out there. This is something I’ve discussed before and don’t wish to dwell on here, but he implied that the idea can be found in Sufism as well as Hinduism. It should be pointed out, by the way, that there is not one Hindu religion; in fact, Hinduism is really a collection of religions that the West tends to put in one conceptual basket. Dalrymple remarked on the similarity between Islamic Sufism and some types of Hinduism which have flourished in India. In particular, he pointed out that the Sufis are the strongest opponents of Wahhabi-style Islam in Pakistan, which is very similar to the fundamentalism of the Taliban. I raise this point because many people are unaware that there is huge diversity within Islam, with liberal attitudes pitted against conservative attitudes, the same as we find in any society worldwide, secular or otherwise.

This contradicts the view expressed by Hitchens and Harris (Dawkins has never expressed it, as far as I’m aware, but I’m sure he would concur) that people with moderate religious views somehow give succour to the fundamentalists and extremists of the world. That view is not just counter-productive; it’s divisive, simplistic, falsely based and deeply prejudicial. And it makes me bloody angry.

These are very intelligent, very knowledgeable and very articulate men, but this stance is an intellectualisation of a deeply held prejudice against religion in general. Because they are atheists, they believe they occupy a special position – they see themselves as being outside the equation: having no religious belief, they are objective, which supposedly gives them a special status. My point is that they can hardly ask people with religious views to show tolerance towards each other if they can intellectualise their own intolerance towards all religions. By expressing the view, no matter how obtuse, that any religious tolerance somehow creates a shelter or cover for extremists, they are fomenting intolerance towards those who are actually practising tolerance.

Dawkins was in Australia for an international atheist convention in Melbourne earlier this year. Religion is not a hot topic in this country, but, of course, it becomes one while he’s visiting, which makes me really glad he doesn’t live here full time. On a TV panel show, he made the provocative claim that no evil has ever come from atheism. So atheists are not only intellectually superior to everyone else, but morally superior as well. What he said, and what he meant, is that no atheist has ever attempted genocide on a religious group because of his or her atheism (and, therefore, the group’s religious belief), but lots of political groups have, which may or may not have been atheistic. In other words, when it comes to practising genocide, whether the identification of the outgroup is religious or political becomes irrelevant. We don’t need religion to create politically unstable groups; they can be created by atheists as easily as by religious zealots. Dawkins, of course, chooses his words carefully, to give the impression that no atheist would ever indulge in an act of genocide, be it psychological or physical, but we all know that political ideology is no less dangerous than religious ideology.

One of Dawkins’ favourite utterances is: “There is no such thing as a Muslim child.” If one takes that statement to its logical conclusion, he’s advocating that all children should be disassociated from their cultural heritage. Is he aware of how totalitarian that idea is? He wants to live in a mono-culture, where everyone gets the correct education that axiomatically will ensure they will never believe in the delusion of God. Well, I don’t want to live in that world, so, frankly, he can have it.

People like to point to all the conflicts of the last half century, from Ireland to the Balkans to the Middle East, as examples of how religion creates conflict. The unstated corollary is that if we could rid the world of religion, we would rid it of its main source of conflict. This is not just naïve; it’s blatantly wrong. All these conflicts are about the haves and have-nots. Almost all conflicts, including the most recent one in Thailand, are about one group having economic control over another. That’s what happened in Ireland, in the former Yugoslavia and, most significantly, in Palestine. In many parts of the world – Iraq, Iran and Afghanistan being typical examples – religion and politics are inseparable. It’s naïve in the extreme to believe, from the vantage of a secular society, that if you rid a society of its religious beliefs you will somehow rid it of its politics, or make its politics more stable. You make politics more stable by getting rid of nepotism and corruption. In Afghanistan, the religious fundamentalists have persuasive power and political credibility because the current alternative is corrupt and financially self-serving.

It should be obvious to anyone who follows my blog that I’m not anti-atheist. In fact, I’ve done my best to stay out of this debate. But, to be honest, I refuse to take sides in the way some commentators imply we should. I don’t see it as an ‘us and them’ debate, because I don’t live in a country where people with religious agendas are trying to take control of the parliament. We have self-confessed creationists in our political system, but, as was demonstrated on the same panel that Dawkins was on, they are reluctant to express that view in public, and they have no agenda, hidden or otherwise, for changing school curricula. I live in a country where you can have a religious point of view without being held up and scrutinised by every political commentator in the land.

Religion has a bad rap, not helped by the Catholic Church’s ‘above the law’ attitude towards sexual abuse scandals, but religious belief per se should never be the litmus test for someone’s intelligence, moral integrity or strength of character, either way.