Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Tuesday 21 December 2010

Ayaan Hirsi Ali

This woman should need no introduction; she’s been in the media in most Western countries, I’m sure. I thought this was a really good interview (Tue. 21 Dec. 2010) because it gives an insight into her background as well as a candid exposition of her political and philosophical views.

I haven’t read either of her biographies, but I’ve read second-hand criticism which led me to believe she was anti-Islamic. This is not entirely true, depending on how one defines Islam. To quote her own words: “I have no problem with the religious dimension of Islam.” She’s not the first Muslim I’ve come across to differentiate between religious and political Islam. Most Westerners, especially since 9/11, believe that any such distinction is artificial. I beg to differ.

She makes it very clear that she’s against the imposition of Sharia law, the subjugation of women and any form of totalitarianism premised on religious scripture (irrespective of the religion). In short, she’s a feminist. She decries the trivial arguments over dress when there are other issues of far greater import, like arranged marriages, so-called circumcision of women and honour killings. (For an intelligent debate on whether the burqa should be ‘outlawed’ I refer you to this.)

What I found remarkable, and almost unimaginable, was how violent her childhood and upbringing were. There was violence in the school, violence in the home, violence in politics. As she points out, it was so pervasive that a peaceful environment was considered unthinkable. One of the most poignant stories she tells is of when she went to Holland to seek asylum: on going to a police station to register, the policeman asked her if she would like a cup of coffee or tea. This was a revelation to her: that a man in uniform should offer a woman, a stranger and a foreigner, a cup of coffee or tea was simply mind-blowing.

It is beyond most of us to imagine a childhood where violence is the principal form of interaction and negotiation between people in all walks of life: home, education and work; yet that was her life. That she can now talk of falling in love and of writing a letter to her unborn child for a hopeful future is close to miraculous.

What resonated with me was her argument that it doesn’t take 600 years to reconcile Islam with the modern secular world, but only 4 generations. I have Muslim friends, both in America and in Australia, and they belie the belief, held by many in Western societies, that Muslims can’t assimilate and yet keep their cultural and spiritual beliefs. They demonstrate to me that Ayaan Hirsi Ali is correct in her fundamental assumptions and philosophical approach.

Saturday 11 December 2010

On-line interview for Elvene

This is a blatant promotion. Obviously the interview is totally contrived by the publisher, and if you press TOP at the end of my piece, you will get an overview of the current state of play in Oz publishing and distribution, from the perspective of one of the (minor) players.

Having said that, the questions were not vetted by me and the answers are all my own.

The term 'jack-of-all-trades' is a complete misnomer. Anyone who actually knows me, knows that I'm totally useless at all trades involving genuine dexterous skill. The rest is mostly true, though I've only written one screenplay and one novel that I'm willing to own up to.

The interview contains some of my philosophy on writing in 'nutshell' form, with the added relevance of referencing something that I've written.

Friday 3 December 2010

Hypatia


Last week I saw a movie by Alejandro Amenabar called Agora, which is effectively the story of Hypatia and her death at the hands of Christian zealots in Alexandria, towards the end of the Roman Empire, in AD 415. So the film is based on a real event and a real person, though it is a fictional account.

Amenabar also made the excellent film, The Sea Inside, starring Javier Bardem, which was also based on a real person’s life. In that case it was a fictionalised account of a quadriplegic’s battle with the Church and government in Spain to take his own life through euthanasia.

I first came across Hypatia in Clifford A. Pickover’s encyclopedic tome, The Math Book, subtitled From Pythagoras to the 57th Dimension, 250 Milestones in the History of Mathematics. He allots one double page (a page of brief exposition juxtaposed with a graphic or a photo) to each milestone he’s selected. He presents Hypatia as the first historically recognised woman mathematician. In fact she was a philosopher and teacher at the famous Library of Alexandria; she was a Greek and, like her father, practised philosophy, science, mathematics and astronomy in the tradition of Plato and Aristotle. By all accounts, she was attractive but never married, and, according to Pickover, once said she was ‘wedded to the truth’. The film gives a plausible account of her celibacy, when her father explains to a suitor that, in order to marry, she would have to give up her pursuit of knowledge, and that would be like a slow death for her.

The film stars Rachel Weisz in the role of Hypatia and it’s a convincing portrayal of an independent, highly intelligent woman, respected by men of political power and persuasion. The complex political scene is also well depicted with the rise of Christianity creating an escalating conflict with Jews that the waning Roman military government seems incapable of controlling.

It’s a time when the Christians are beginning to exert their newly found political power, and their Biblically derived authority justifies their intention to convert everyone to their cause or destroy those who oppose them. There is a scene where they drive all the Jews out of Alexandria, which they justify by citing Biblical text. The film, of course, resonates with 20th Century examples of ‘ethnic cleansing’ and the role of religious fundamentalism in justifying human atrocities. Hypatia’s own slave (a fictionalised character, no doubt) is persuaded to join the Christians, where he can turn his built-up resentment into justified slaughter.

Hypatia would have been influenced by Pythagoras’s quadrivium, upon which Plato’s Academy was based: arithmetic, geometry, astronomy and music. In the movie she is depicted as a ‘truth-seeker’ who questions Ptolemy’s version of the solar system and performs an experiment to prove to herself, if no one else, that the Earth could move without us being aware of its motion. I suspect this is poetic licence on the part of Amenabar, along with the suggestion that she may have foreseen that the Earth’s orbit is elliptical rather than circular. What matters, though, is that she took her philosophy very seriously, and she appreciated the role of mathematics in discerning truth in the natural world. There is a scene where she rejects Christianity on the basis that she can’t accept knowledge without questioning it. It would have gone against her very being.

There is also a scene in which the Church’s hierarchy reads the well-known text from Timothy: “I suffer not a woman to teach or to control a man”, which is directed at the Roman Prefect, who holds Hypatia in high regard. The priest claims this is the word of God, when, in fact, it’s the word of Paul. Paul, arguably, influenced the direction of Christianity even more than Jesus. After all, Jesus never wrote anything down, yet Paul’s ‘letters’ are predominant in the New Testament.

Hypatia’s death, in the film, is sanitised, but history records it as brutal in the extreme. One account is that she was dragged through the streets behind a chariot and the other is that she had her flesh scraped from her by shards of pottery or sharp shells. History also records that the Bishop, Cyril, held responsible for her death, was canonised as a saint. The film gives a credible political reason for her death: that she had too much influence over the Prefect, and while they couldn’t touch him in such a malicious way, they could her.

But I can’t help but wonder at the extent of their hatred, to so mutilate her body and exact such a brutal end to an educated woman. I can only conclude that she represented a threat to their power for two reasons: one, she was a woman who refused to acknowledge their superiority, both in terms of gender and in terms of religious authority; and two, she represented a search for knowledge beyond the scriptures that could ultimately challenge their authority. I think it was this last reason that motivated their hatred so strongly. As a philosopher, whose role it was to seek knowledge and question dogma, she represented a real threat, especially when she taught ‘disciples’, some of whom became political leaders. A woman who thinks was the most dangerous enemy they knew.


Addendum: I've since read a book called Hypatia of Alexandria by Michael Deakin, Honorary Research Fellow at the School of Mathematical Sciences of Monash University (Melbourne, Australia). In the appendix, Deakin includes letters written to Hypatia by another Bishop, Synesius of Cyrene, who clearly respected, even adored her, as a former student.

Saturday 6 November 2010

We have to win the war against stupidity first

In Oz, we have a paper called The Australian, which was created by Rupert Murdoch when he was still a young bloke. Overseas visitors, therefore, may find it anomalous that in last weekend’s The Weekend Australian Magazine there was an article by Johann Hari that was critical of US policy in Afghanistan and Pakistan. Specifically, the use of drones in the so-called war on terror. The same magazine, by the way, runs a weekly column by one of Australia’s leading left-wing commentators, Phillip Adams, and has done so for decades. In his country of origin, it appears, Murdoch is something of a softie. Having said that, the article cannot be found on the magazine’s web page. Murdoch wouldn’t want to dilute his overseas persona apparently.

If I could provide a link, I obviously would, because I can’t relate this story any more eloquently than the journalist does. He starts off by asking the reader to imagine the street where they live being bombed by a robotic plane controlled by pilots on the other side of the world in a ‘rogue state’ called the USA. A bit of poetic licence on my part, because Hari does not use the term ‘rogue state’ and he asks you to imagine that the drone is controlled from Pakistan, not America. Significantly, the ‘pilots’ are sitting at a console with a joystick as if they’re playing a video game. But this ‘game’ has both fatal and global consequences.

The gist of Hari’s article is that this policy, endorsed by Obama’s administration and “the only show in town” according to some back-room analysts, actually enlists more jihadists than it destroys.

David Kilcullen, an Australian expert on Afghanistan and once advisor to the American State Department ‘…has shown that two percent of the people killed by the robot-planes in Pakistan are jihadis. The remaining 98 percent are as innocent as the victims of 9/11. He says: “It’s not moral.” And it gets worse: “Every one of these dead non-combatants represents an alienated family, and more recruits for a militant movement that has grown exponentially as drone strikes have increased.”’

David Kilcullen, who was once advisor to Condoleezza Rice during Bush’s administration, once said in an ABC (Oz BC) radio interview, that ‘…we need to get out of the business of invading other people’s countries because we believe they may harbour terrorists.’

‘Juan Cole, Professor of Middle Eastern History at the University of Michigan, puts it more bluntly: “When you bomb people and kill their family, it pisses them off. They form lifelong grudges… This is not rocket science. If they were not sympathetic to the Taliban and al-Qa’ida before, after you bomb the shit out of them they will be.”’

According to Hari, drones were originally developed by Israel and are routinely deployed to bomb the Gaza Strip. Not surprisingly, the US government won’t even officially acknowledge that their programme exists. Having said that, Bob Woodward, in his book, Obama’s Wars, claims that ‘the US has an immediate plan to bomb 150 targets in Pakistan if there is a jihadi attack inside America.’ In other words, the people who promote this strategy see it as a deterrent, when all evidence points to the opposite outcome. As Hari points out, in 2004, a ‘report commissioned by Donald Rumsfeld said that “American direct intervention in the Muslim world” was the primary reason for jihadism'.

I could fill this entire post with pertinent quotes, but the message is clear to anyone who engages their brain over their emotions: you don’t stop people building bombs to kill innocent civilians in your country by doing it to them in their country.

Sunday 17 October 2010

ELVENE, the 2nd edition



My one and only novel, ELVENE, has been published as an e-book by IP (Interactive Publications) and also POD at Glasshouse Books, a Queensland-based company. The cover is by Aaron Pocock, so it’s an all-Aussie affair, though I believe Dr. David Reiter, who founded IP, is an ex-pat American.

I haven’t met David or Aaron, or even spoken to them, such is the facility of the internet. Even though IP engaged Aaron (I paid for the artwork), we corresponded via an intermediary, and I’m very pleased with the results. I believe he captured both the atmosphere and the right degree of sensuality that is reflected in the text itself. I’ve always been a strong believer that the cover should reflect the content of the book, both contextually and emotionally.

If you read the blurb on the web site (written by me) you may be led to believe that this is a variation on James Cameron’s Avatar. Nothing against Avatar, but I need to point out that ELVENE was written in 2001/2, about 8 years before Avatar was released, though I suspect we have been influenced by the same predecessors, in particular Frank Herbert’s 1965 classic, DUNE. If any of you have seen Miyazaki’s anime, Nausicaa of the Valley of the Wind (refer my recent post, 5 Oct.10), you may also see some similarities. I did when I saw it in 2006, even though it was first released in 1984. Obviously I can’t be influenced by something I didn’t even know existed, but I’m happy to be compared with Miyazaki anytime.

The book contains oblique references to Clarke, Kubrick, Coleridge, Kipling and even Barbarella (her ship was called Alfie, for you train-spotters). So, whilst Avatar could be best described as Dune meets Dances with Wolves, Elvene is Dune meets Dances with Wolves, meets Ursula Le Guin, meets Ian Fleming, meets Barbarella, meets Edgar Rice Burroughs. My influences began with the comic books I read in the 1960s, not to mention the radio serials I listened to before TV (yes, I’m that old). At the age of 9, I started writing my own Tarzan scripts, and I started drawing my own superheroes at about the same time, possibly a bit older.

I once described ELVENE as a graphic novel without the graphics, and more than one person has told me that it’s ‘a very visual story’. An interesting achievement, considering I believe description to be the most boring form of prose (refer my August post on Creative Writing).

Most people who’ve read it ask: where’s the next one? Well, the truth is that I have started a sequel, but I find it hard to believe I will ever write anything as good as ELVENE again. It really feels like an aberration to me. I’m not a writer by profession, more a hobbyist; nevertheless, I’m proud of my achievement. It’s not for everyone, but I’ve found that women like it in particular, including those who have never read a Sci-Fi book before. Maybe it’s a Sci-Fi book for people who don’t read Sci-Fi. I can only let others be the judge.

Two unsolicited reviews can be found at YABooksCentral: one by a teenager and one by a retired schoolteacher (both women).

More reviews can be found here. (Note: the top review contains spoilers)

Also available on Amazon, iBookstore, Lightning Source (Ingram) and ContentReserve.com.

Sunday 10 October 2010

The Festival of Dangerous Ideas

This is a post where I really don't have much to say at all, because this video says it all.

If you can't access the video, you can still read the transcript.

Where else would you find a truly international panel, with representatives from Indonesia, Pakistan, America, England and, of course, the host nation, Oz? I think the only internationally renowned participant is Geoffrey Robertson QC, who famously took up Salman Rushdie's case when he was subjected to a death-sentence fatwa by Iran's Ayatollah Khomeini (late 1980s, early 90s). I suspect the rest of the panel are only well known in their countries of origin.

Believe me, this discussion is well worth the 1 hour of your time.

Tuesday 5 October 2010

Nausicaa of the Valley of the Wind by Hayao Miyazaki

I’ve just read this 7-volume graphic novel over a single weekend. I saw the anime version a few years back at a cinematic mini-festival of Miyazaki’s work. As it turned out, it was the first of his movies I ever saw, and it’s still my favourite. Most people would declare Spirited Away or Princess Mononoke as his best works, and they’re probably right, but I liked Nausicaa because certain elements of the story resonated with my own modest fictional creation, Elvene. You can see a Japanese trailer of the anime here.

The movie was released in 1984 and the graphic novels were only translated into English in 1997. I didn’t even know they existed until I looked it up on the Internet to inform a friend. And then a graphic novelist guest at our book club (see my blog list) told me that the local library has all 7 volumes; they’re catalogued under ‘graphic novel – teenager’. Even though Miyazaki is better known for his animated movies (Studio Ghibli), the film version of Nausicaa barely scratches the surface. The graphic novels are on the scale of Lord of the Rings or Star Wars or Dune. Of the 7 volumes, the shortest is 120 pages and the last is over 200 pages. If Miyazaki wasn’t Japanese, I’m sure this would be a classic of the genre.

Being Japanese, they’re read from right to left, so the back cover is actually the front cover and vice versa. I thought: why didn’t they just reverse the pagination for Western readers? But, of course, the graphics have to be read right to left as well. In other words, to Westernise them they’d have to be mirror-reversed, so wisely the publishers left them alone.

On the inside back cover (front cover for us) Miyazaki explains the inspiration for the character. Of course, Nausicaa was originally a character in Homer’s The Odyssey, but Miyazaki first came across her in a Japanese translation of Bernard Evslin’s dictionary of Greek mythology. Evslin apparently gave 3 pages to Nausicaa but only one page each to Zeus and Achilles, so Miyazaki was a little disappointed when he read Homer’s original and found that she played such a small yet pivotal role in Odysseus’s journey. He was also influenced by a Japanese fictional heroine in The Tales of Past and Present called “the princess who loved insects”.

Those who are familiar with Miyazaki know that all his stories have strong female roles, and, personally, I think Nausicaa is the best of them, albeit she is one of the youngest.

But this reference to Homer’s Odyssey raises a point that has long fascinated me about graphic novels (or comic books, as they were known when I was a kid). They are arguably the only literary form which echoes back to the mythical world of the ancients, where characters have god-like abilities with human attributes. Now some of you may ask: what about fantasy fiction of the sword and wizard variety? King Arthur, Merlin and Gandalf surely fall into that category. Yes, they are somewhat in between, but they are not superheroes, of whom Superman is the archetype. Bryan Singer’s film version, Superman Returns, which polarised critics and audiences, makes the allusion to Christ most overtly and, I suspect, deliberately.

It’s not just the Bible that provides a literary world where humanity and Gods meet (well there are 2 God characters in the Bible, the Father and the Son, not to mention Satan). Moses talked to a burning bush, Abraham was visited by angels, and Jesus conversed with Satan, God and ordinary mortals, including prostitutes.

The Mahabharata is a classic Hindu text involving deities and warring families, and of course there’s Homer’s tales, where the Greek gods take sides in battles and make deals with mortals.

Well, Miyazaki’s Nausicaa falls into this category, in my view, even though there’s not a deity in sight. Nausicaa is probably the most Christ-like character I’ve come across in contemporary fiction since Superman. However, that’s a Western interpretation – I expect Miyazaki would be more influenced by the Goddess of Mercy (Guan Yin in China, Kannon in Japan).

Nausicaa is a warrior princess with prodigious fighting abilities but her greatest ability is to empathise with all living creatures and to win over people to her side through her sheer personality and integrity. This last attribute is actually the most believable part of the novel, and when she continually wins respect and trust, Miyazaki convinces us that this human aspect of her character is real. But there are supernatural qualities as well. Her heart is so pure that she is able to lead the most evil character in the story into the afterlife (reminiscent of a scene in Harry Potter with a different outcome). In the last volume there is a warrior-god intent on destruction (an artificial life-form) whom she bends to her will through her sheer compassion because he believes she is his mother.

There are numerous other characters, but Princess Kushana is probably the most complex. She is involved in a mortal struggle with her emperor father and throne-contending brothers, but the most interesting relationship she has is with her ambitious Chief of Staff, Kurotowa. Early in the story she tries to have him killed; much later she saves his life.

Like Princess Mononoke, Miyazaki’s tale is a cautionary one about how humanity is destroying the ecology of the planet. Other subplots warn against religious dogma being used as a political weapon to manipulate people into war, and petty royal rivalries decimating populations through war and creating starving refugee communities out of the survivors.

There is, of course, a small group of characters who see Nausicaa as a prophet, and even a goddess, which creates problems for her in and of itself.

This is a rich story of many layers, not just a boy’s (or girl’s) own adventure. Nausicaa is a classic of the graphic novel genre – it’s just not recognised as such because it’s not American.

Thursday 23 September 2010

Happiness and the role of empathy

It’s been a while between posts but I’ve been busy on many fronts, including preparing Elvene for a second edition as an e-book and POD (print on demand). I’ll write a future post on that when it’s released in a couple of months. I’m also back to working full time (my real job is as an engineer) so my time is spread thinner than it used to be.

I subscribe to Philosophy Now, which is an excellent magazine even if its publication is as erratic as my blog, and it always comes out with a theme. In this issue (No 80, August/September 2010) the theme, always given on the cover, is the human condition: is it really that bad? This post arose from a conflation in my mind of two of its essays. One was on Compassion & Peace by Michael Allen Fox, Professor Emeritus of Philosophy at Queen’s University, Canada, and Adjunct Professor, School of Humanities, University of New England, Australia. (Philosophy Now is a UK publication, btw.) The other was an essay by Dr. Kathleen O’Dwyer, who describes herself as ‘a scholar, teacher and author’ (my type of academic). Her essay is titled Can we be happy? But it’s really a discussion of Bertrand Russell’s treatise, The Conquest of Happiness, with a few other references thrown in, like Freud, Laing and Grayling, amongst others.

I will dive right in with a working definition that O’Dwyer provides and is hard to beat:

“…a feeling of well-being – physical, emotional, spiritual or psychological; a feeling that one’s needs are being met – or at least that one has the power to strive towards the satisfaction of the most significant of such needs; a feeling that one is being authentic in living one’s life and in one’s relations with significant others; a feeling that one is using one’s potential as far as this is possible; a feeling that one is contributing to life in some way – that one’s life is making a difference.”

As she says, it’s all about ‘feeling’, which is not only highly subjective but based on perceptions. Nevertheless, she covers most bases, in particular the sense of freedom to pursue one’s dreams and the need to feel that one belongs, though she doesn’t use either of those phrases specifically. However, I would argue that these are the 2 fundamental criteria that one can distil from her synopsis.

Her discussion of Russell leads to talk about the opposite of happiness, its apparent causes and how to overcome it. Russell, like myself, suffered from depression in his early years, and this invariably affords a degree of self-examination that can either lead to self-obsession or revelation, and, in my case, both: one came before the other; and I don’t have to tell you in what order.

But Russell expresses the release or transcendence from this ‘possession’ rather succinctly as “a diminishing preoccupation with myself”. And this is the key to happiness in a nutshell, as also expressed by psychiatrist George Vaillant, of Harvard Medical School, interviewed in May this year on ABC’s 7.30 Report (see embedded video below).

And this segues into empathy, which I contend is the most important word in the English language. Fox goes to some length to explain the differences between compassion, empathy, sympathy and sacrifice, which, personally, I find unnecessary. They all extend from the inherent ability to put oneself in someone else’s shoes, and that is effectively what empathy is. So I put empathy at the head of all these terms, and see it as the source of altruism for most people. Studies have been done demonstrating that reading fiction improves empathy (refer my post on Storytelling, July 2009). The psychometric test is very simple: determining the emotional content of eyes with no other available cues. As a writer, I don’t find this surprising, because, without empathy, fiction simply doesn’t work. As I mentioned in that post, the reader becomes an actor in their own mind, but they’re not consciously aware of it.

But, more significantly, I would argue that all art exercises empathy, because it’s the projection of one individual’s imagination into another’s. Many artists, myself included, feel it’s their duty to put the reader or their audience in someone else’s shoes. It’s no surprise to me that art flourishes in all human societies and is often the most resilient endeavour when oppression is dominant.

But, more significant to the topic at hand, empathy and happiness are inseparable in my view. Contrary to some people’s beliefs and political ideologies, one rarely, if ever, gains happiness over another person’s suffering. Hence the message of Fox’s essay: peace and compassion go hand in hand.

The theme of Russell’s thesis (as revealed by O’Dwyer) and the message illuminated by George Vaillant below are exactly the same. We don’t find happiness in self-obsession, but in its opposite: the ability to empathise and give love to others.

Saturday 14 August 2010

How to create an imaginary, believable world.

Earlier this week (last Tuesday, in fact) I was invited to take a U3A class as a 'guest speaker', with the title of this post as the topic. I was invited by Shirley Randles, whom I already knew (see below). In preparation, I wrote out the following, even though I had no intention of reading it out; it was just an exercise to collect my thoughts. As it turned out, Shirley wasn't able to attend due to a family illness, and the 'talk' became a free-form discussion that made the 1¾ hours go very quickly. In the last 15-20 minutes, I gave them a short writing exercise, which everyone seemed to enjoy and performed admirably.

Some of you may have read a post I wrote last year on Storytelling, so there is some repetition, though a different emphasis, in this post.



Firstly, I want to thank Shirley for inviting me to come and talk. I just want to say that I’m not a bestselling author, or even a prolific writer. But I have given courses in creative writing and Shirley interviewed me a few years back and liked what I write and liked what I had to say as well.

Science fiction and fantasy are my genres, but what I have to say applies to all genres, because all fiction involves immersing your reader in an imaginary world. And if that world is not believable then you won’t engage them. We call it suspension of disbelief. It’s very similar to being in a dream, because, whilst we are in a dream, we believe it totally, even though, when we awake and analyse it, it defies our common sense view of the world. And I will come back to the dream analogy later, because I think it’s more than a coincidence; I think that stories are the language of dreams.

There are 3 components to all stories: character, plot and world. I don’t know if any of you saw the PIXAR exhibition a couple of years ago at ACMI, but it was broken down into those 3 areas, only they called plot ‘story’. Now, everyone knows about plot and character, but most people don’t pay much attention to world. It is largely seen as a sub-component of plot. But I make the distinction, if for no other reason, than they all require different skills as a writer.

But I’m going to talk about plot and character first, because the world only makes sense in the context of the other two. And also, character and plot are very important components in making a story believable.

It is through character that a reader is engaged. The character, especially the protagonist, is your window into a story. In fact, I think character is the most important component of all. When I think of an idea for a story, it always comes with the character foremost. I can’t speak for other writers, but, for me, the character invariably comes with the initial idea.

All stories are an interaction between plot and character, and I have a particular philosophical view on this. The character and plot are the inner and outer world of the story, and this has a direct parallel in real life. We all, individually, have an inner and outer world, and, in life, the outer world is fate and the inner world is free will. So, to me, fate and free will are not contradictory but complementary. Fate represents everything we have no control over and free will represents our own actions. So, in a story, the plot is synonymous with fate and character is synonymous with free will. Just like in real life, a character is challenged, shaped and changed by fate: the events that directly impact on him or her. And this is the fundamental secret of storytelling. The character in the story reacts to events, and, as a result, changes and, hopefully, grows.

Now, I’m going to take this analogy one step further, because, ideally, as a writer, I believe you should give your characters free will. As Colleen McCullough once said, you play God in that you create all the obstacles and hurdles for your characters to deal with, but, for me, the creative process only works when the characters take on a life of their own.

To explain what I mean, I will quote the famous artist, M.C. Escher: "While drawing I sometimes feel as if I were a spiritualist medium, controlled by the creatures I am conjuring up." Now, I think most artists have experienced this at some point, but especially writers. When you are in the zone (to use a sporting reference) you can feel like you are channeling a character. I call it a Zen experience. Richard Tognetti, the virtuoso violinist with the ACO (Australian Chamber Orchestra) once made the comment that it’s like ‘you’re not there’, which I thought summed it up perfectly. Strange as it may sound, the best writing you will ever do is when your ego is not involved – you are just a medium, as Escher so eloquently put it.

There is a philosophical debate amongst writers about whether to outline or not to outline. Most of the writers I’ve met argue that you shouldn’t, whereas most books you read on the topic argue that you should. Both Peter Temple and Peter Corris argue that you shouldn’t. Stephen King is contemptuous of anyone who does an outline, whereas J.K. Rowling famously plotted out all 7 novels of Harry Potter. My advice: you have to find what works for you.

Personally, I do an outline but it’s very broad brush – it’s like scaffolding that I follow. I found this technique through trial and error, and I suggest that anyone else should do the same. It’s what works for me and you have to find what works for you.

Now, I’m finally going to talk about world. After all, it’s what this talk is meant to be about, isn’t it? Well, yes and no. To create a believable world actually starts with character in my opinion. The more real your characters are, the more likely you are to engage your readers. This is why books like Lord of the Rings and the Harry Potter series are so successful, even though the worlds and the plots they describe are so fantastical.

All works of fiction are a combination of reality and fantasy, and how you mix them varies according to genre. But grounding a story in a believable character is not only the easiest method, it’s also the most successful. The quickest way to break the spell in a story, for me, is to make the character do something completely out of character. So-called reversals, where the hero suddenly turns out to be the villain, are the cheapest of plot devices as far as I am concerned. There are exceptions, and to give one example: Snape in Harry Potter is actually a ‘double-agent’, so his reversal is totally believable, and when we learn about it, a lot of things suddenly make sense. Also, having a character who is not what they appear to be is not what I am talking about here. Ideally, a character reveals themselves gradually over the story, and can even change and grow, as I described above, but a complete reversal is a lot harder to swallow, especially when it’s done as a final ‘twist’ to dupe the reader.

The first thing to know about world is to understand what it is not. It is not just background or setting; it’s an interactive component of the story. One of the things that distinguishes fiction from non-fiction is the message, because the message is always emotive in fiction. You have to engage the reader emotionally, and that includes the world. There are 5 narrative styles that I am aware of, though some people may contend that there are fewer or more. Basically, they are description, exposition, dialogue, action and introspection. By introspection I mean what’s going on inside the character’s head. Most books on writing will tell you that exposition is the most boring, but I disagree. I think that description is the most boring – it’s the part of the text that readers will skip over to get on with the story.

If you read the classics from the 19th century, and even early, or not-so-early, 20th century, you will find that writers would describe scenes in great detail. TV and movies changed all that, for 2 reasons. One, we became more impatient, and two, cinema and video completely eliminated the need for description. So novels started to develop a shorthand whereby scenes are more like impressionists' paintings. But what’s more important, when you set up a scene, is to create atmosphere and mood, because that’s what engages the reader emotionally.

And here I return to my earlier reference to dreams, because I believe that dreams are our primal language. The language of dreams is imagery and emotion, and that’s also the language of story. The reason I believe that written stories (as opposed to cinema) facilitate imagery in our minds is because we do it in our dreams. The medium for a novel is not the words on the page but the reader’s imagination. You have to engage the reader’s imagination, otherwise the story is lifeless, just words on a page.

One final point, which brings me back to character. If you tell the story from a character’s point of view, then you engage that character’s emotions and senses. So if you relate a scene through the character’s eyes, ears, nose and touch, then you overcome the boredom of description more readily.

Friday 23 July 2010

The enigma we call time

The June 2010 edition of Scientific American had an article called Is Time an Illusion? The article, written by Craig Callender, a ‘philosophy professor at the University of California, San Diego’, explains how 20th Century physics has all but explained time away. In fact, according to him, some scientists believe it has. It reminds me of how many scientists believe that free will and consciousness have been explained away as well, or, if not, then the terms have passed their use-by-date. I once had a brief correspondence with Peter Watson, who wrote A Terrible Beauty, an extraordinarily well-researched and well-written book that attempts to cover the great minds and great ideas of the 20th Century, mostly in art and science, rather than politics and history. He contended that words like imagination and mind were no longer meaningful because they referred to an inner state of which we have no real understanding. He effectively argued that everything we contemplate as ‘internal’ is really dependent on our ‘external’ world, including the language we use to express it. But I’m getting off the track before I’ve even started. My point is that time, like consciousness and free will, and even imagination, is something we all experience, which makes it as real as any empirically derived quantity that we know.

But isn’t time an empirically derived quantity as well? Well, that’s effectively the subject of Callender’s essay. Attempts to rewrite Einstein’s theory of general relativity (gravity) in the same form as electromagnetism, as John Wheeler and Bryce DeWitt did in the late 1960s, resulted in an equation where time (denoted as t) simply disappeared. As Callender explains, time is the real stumbling block to any attempt at a theory of quantum gravity, which tries to combine quantum mechanics with Einstein’s general relativity. According to the theory of relativity, time is completely dependent on the observer, where the perceived sequence of events can differ from one observer to another depending on their relative positions and velocities, though causality is always preserved. On the other hand, quantum mechanics, through entanglement, can seem to defy Einstein’s equations altogether (see my post on Entanglement, Jan 2010).
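For readers who like to see what that disappearance looks like, here is a schematic gloss of my own (added purely for illustration, not quoted from Callender’s article). Ordinary quantum mechanics evolves a state in time via the Schrödinger equation,

$$ i\hbar\,\frac{\partial \Psi}{\partial t} = \hat{H}\,\Psi $$

whereas the Wheeler-DeWitt equation of canonical quantum gravity takes the form of a constraint,

$$ \hat{H}\,\Psi = 0 $$

in which the time parameter t simply never appears.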

But let’s start with our experience of time, since it entails our entire life, from the moment we start storing memories up to our death. And this storing of memories is a crucial point, otherwise we’d have no sense of time at all, no sense of past or future, just a continuous present. Oliver Sacks, in his book, The Man Who Mistook his Wife for a Hat, tells the story of a man (‘The Lost Mariner’) who suffered anterograde amnesia through excessive alcoholism, and in the 1970s, when Sacks met him, still thought he was living in 1949 or thereabouts, when he left the navy after WW2. The man was unable to create new memories, so that he was effectively stuck in time, at least psychologically.

Kant famously argued in his Critique of Pure Reason, that both time and space were projections of the human mind. Personally, I always had a problem with Kant’s thesis on this subject, because I contend that both time and space exist independently of the human mind. In fact, they are the very fabric of the universe, but I’m getting ahead of myself again.

Without memory we would have no sense of the past and without imagination, no sense of the future. Brian Boyd, in his book The Origin of Stories (see my review called Storytelling, July 2009) referenced neurological evidence to explain how we use the same parts of the brain when we envisage the past as we do when we envisage the future. In both cases, we create the scenario in our mind, so how do we tell the difference?

Raymond Tallis, who writes a regular column in Philosophy Now (Tallis in Wonderland), wrote a very insightful essay in the April/May 2010 edition (the Is God really Dead? issue) ‘on the true mystery of memory’, where he explains the fundamental difference between memory in humans and memory in computers. It is impossible for me to do justice to such a brilliant essay, but effectively he questions how the neuron or neurons that supposedly store a memory know, or tell us, when the memory was made in a temporal sense, even though it is something that we all intuitively sense. Memory in a computer, on the other hand, simply has a date and time stamp on it, a label in effect, but is otherwise physically identical to when it was created.

In the case of the brain, long-term memories are generated in the hippocampus, where new neurons are created when something eventful happens, tying events together. Long-term memory is facilitated by association, and so is learning, which is why analogies and metaphors are so useful for comprehending new knowledge, but I’m getting off the track again.

The human brain, and any other brain, one expects, recreates the memory in our imagination so that it’s not always accurate and certainly lacks photographic detail, but somehow conjures up a sense of past, even distance in time. Why are we able to distinguish this from an imaginary scenario that has never actually happened? Of course we can’t always, and false memories have been clinically demonstrated to occur.

Have you ever noticed that in dreams (see previous post), we experience a continuous present? Our dreams never have a history and never a future, they just happen, and often morph into a new scenario in such a way that any dislocation in time is not even registered, except when we wake up and try to recall them. Sometimes in a dream, I have a sense of memory attached to it, like I’ve had the dream before, yet when I wake up that sense is immediately lost. I wonder if this is what happens when people experience déjà vu (when they’re awake of course). I’ve had episodes of TGA (Transient Global Amnesia) where one’s thoughts seem to go in loops. It’s very disorienting, even scary, and the first time I experienced this, I described it to my GP as being like ‘memories from the future’, which made him seriously consider referring me to a psychiatrist.

So time, as we experience it, is intrinsically related to memory, yet there is another way we experience time, all the time, at least while we are conscious. And it is this ‘other way’ that made me challenge Kant’s thesis, when I first read it and was asked to write an essay on it. All animals, with sight, experience time through their eyes, because our eyes record the world quite literally as it passes us by, in so many frames a second. In the case of humans it’s twenty something. Movies and television need to have a higher frequency (24 from memory) in order for us to see movement fluidly. But many birds have a higher rate than us, so they would see a TV as jerky. When we see small birds flick their heads about in quick movement, they would see the same movement as fluid, which is why they can catch insects in mid-flight and we haven’t got Buckley’s. The point is that we literally see time, but different species see time at different rates.

We all know that our very existence in this world, on a cosmic scale, is just a blink, and a subliminal blink at that. On the scale of the universe at large, we barely register. Yet think upon this: without consciousness, time might as well not exist, because without consciousness the idea of a past or future is irrelevant, arguably non-existent. In this sense, Kant was right. It is only consciousness that has a sense of past and future; certainly nothing inanimate has a sense of past and future, even if it exists in a causal relationship with something else.

But of course, we believe that time does exist without consciousness, because we believe the universe had a cosmic history long before consciousness even evolved and will continue to exist long after the planet, upon which we are dependent for our very existence, and the sun, upon which we are dependent for all our needs, both cease to exist.

There has been one term that keeps cropping up in this dissertation, which has time written all over it, and it’s called causality. Causality is totally independent of the human mind or any other mind (I’m not going to argue about the ‘mind of God’). Causality, which we not only witness every day but is intrinsic to all physical phenomena, is the greatest evidence we have that time is real. Even Einstein’s theories of relativity, which, as Callender argues, effectively dismiss the idea of a universal time (or absolute time), still allow for causality.

David Hume famously challenged our common sense view of causality, arguing that it can never be proven; only that one event has followed another. John Searle gives the best counter-argument I’ve read, in his book, Mind, but I won’t digress, as both of their arguments are outside the scope of this topic. However, every animal that pursues its own food believes in causality, even if it doesn’t think about it the way philosophers do. Causality only makes sense if time exists, so if causality is a real phenomenon then so is time. I might add that causality is also a linchpin of physics, otherwise conservation of momentum suddenly becomes a non sequitur.

My knowledge of relativity theory and quantum mechanics is very rudimentary, to say the least, nevertheless I believe I know enough to explain a few basic principles. In a way, light replaces time in relativity theory; that’s because, for a ray of light, time really does not exist. For a photon, time is always zero – it only becomes a temporal entity for an observer who either receives it or transmits it. That is why light is always the shortest distance between 2 events, whether you want to travel between them or send a message. Einstein’s great revelation was to appreciate that this effectively turned time into a dimension that was commensurate with a spatial dimension. Equations for space-time include a term that is the speed of light multiplied by time, which effectively gives another dimension in addition to the other 3 dimensions of space that we are familiar with. You can literally see this dimension of time when you look at a night sky or peer through an astronomical telescope, because the stars you are observing are not only separated from us by space but also by time – thousands of years in fact.
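To make the ‘speed of light multiplied by time’ remark concrete (my own illustration, not something quoted from Callender’s article), the space-time separation between two events in special relativity is usually written

$$ s^2 = (ct)^2 - x^2 - y^2 - z^2 $$

so the term ct enters on the same footing as the three spatial coordinates. For two events joined by a ray of light this separation is exactly zero, which is one way of seeing the earlier claim that, for a photon, time effectively does not pass.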

But quantum mechanics is even more bizarre and difficult to reconcile with our common-or-garden view of the world. A lot of quantum weirdness stems from the fact that under certain conditions, like quantum tunneling and entanglement, time and space seem to become irrelevant. Entanglement implies that instantaneous connections are possible, across any distance, completely contrary to the constraints of relativity that I described above (see addendum below). And quantum tunneling also disregards relativity theory, where time can literally disappear, albeit temporarily and within energy constraints (refer my post, Oct.09).

But relativity and quantum mechanics are not the end of the story of time in physics; there is another aspect, which is perhaps even more intriguing, because it gives us the well-known arrow of time. Last year I wrote a review of Erwin Schrodinger’s book, What is Life? (Nov.09), a recommended read for anyone with an interest in philosophy or science. In it, Schrodinger reveals that one of his heroes was Ludwig Boltzmann, and it was Boltzmann who elucidated for us the second law of thermodynamics, otherwise known as the law of entropy. It is entropy that apparently drives the arrow of time, as Penrose, Feynman and Schrodinger have all pointed out in various books aimed at laypeople like myself. But it was Penrose who first explained to me (in The Emperor’s New Mind) that whilst both relativity theory and quantum mechanics allow for time reversal, entropy does not.

Callender, very early in his Scientific American article, posits the idea that time may be an emergent property of the universe, and entropy seems to fit that role. Entropy is why you can’t reconstitute an egg into its original form after you’ve dropped it on the floor, broken its shell and spilled all its contents into the carpet. You can run a film backwards showing a broken egg coming back together and rising from the floor with no trace of a stain on the carpet, but we immediately know it’s false. And that’s exactly what you would expect to see if time ran backwards, even though it never does. The two perceptions are related: entropy says that the egg can’t be recovered from its fall, and so does the arrow of time; they are the same thing.

But Penrose, in his exposition, goes further, and explains that the entire cosmos follows this law, from the moment of the Big Bang until the death throes of the universe – it’s a universal law.

But this in itself raises another question: if a photon experiences zero time, and the early universe (as well as its death) was entirely radiation, where then is time? And without time, how did the universe evolve into a realm that is not entirely radiation? Well, there is a clue in the radiation itself, because all radiation has a frequency, and from the frequency it has an energy, defined by Planck’s famous equation: E = hf, where f is frequency and h is Planck’s constant. So the very equation that gives us the energy of the universe also entails time, because frequency is meaningless without time. But if photons have zero time, how is this possible? Also, as any particle approaches the same velocity as the photon, its time approaches zero. And this happens when something falls into a black hole, so it becomes frozen in time to an external observer.

Perhaps there is more than one type of time: a relativistic time that varies from one observer to another (this is a known fact, because the accuracy of GPS signals transmitted from satellites is dependent on it) and an entropic time that drives the entire universe and stops time from running backwards, thus ensuring causality is never violated. And what of time in quantum mechanics? Well, quantum mechanics hints that there is something about our universe that we still don’t know or understand, and to (mis)quote Wittgenstein: Of that which one does not know, one should not speak.
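A quick numerical aside on E = hf (my own figures, added purely for illustration): for green light, with a frequency of roughly 5.6 × 10^14 Hz,

$$ E = hf \approx (6.63 \times 10^{-34}\ \mathrm{J\,s}) \times (5.6 \times 10^{14}\ \mathrm{Hz}) \approx 3.7 \times 10^{-19}\ \mathrm{J} $$

or about 2.3 electron volts per photon. The frequency is defined as cycles per second, which is just another way of saying the equation quietly presupposes time.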

Addendum: Timmo, who is a real physicist, has pointed out that my comment on entanglement could be misconstrued. Specifically, entanglement does not allow faster-than-light communication. For a more comprehensive discussion on entanglement, I refer you to an earlier post.

Addendum 2: I revisited this topic in Oct. 2011 with a post, Where does time go? (in quantum mechanics).

Sunday 20 June 2010

What dreams are made of

Last week’s New Scientist (12 June 2010) had a very interesting article on dreams, in particular ‘lucid dreaming’, by Jessica Hamzelou. She references numerous people: Ursula Voss (University of Frankfurt), Patrick McNamara (Boston University), Allan Hobson (Harvard Medical School), Eric Nofzinger (University of Pittsburgh), Victor Spoormaker (Utrecht University) and Michael Czisch (Max Planck Institute); so it’s a serious topic all over the world.

Ursula Voss argues that there are 2 states of consciousness, which she calls primary and secondary. ‘Primary’ being what most animals perceive: raw sensations and emotions; whereas ‘secondary’ is unique to humans, according to Voss, because only humans are “aware of being aware”. This in itself is an interesting premise.

I don’t agree with the well-known supposition that most animals don’t have a sense of ‘self’ because they don’t recognise themselves in a mirror. Even New Scientist reported on challenges to this view many years ago (before I started blogging). The lack of recognition of one’s own reflection is obviously a cognitive misperception, but it doesn’t axiomatically mean that the animal doesn’t have a sense of its own individuality relative to other members of its own species, which is how I would define a sense of self. In other words, a sense of self is the ability to differentiate one’s self from others. The fact that it mistakenly perceives its own reflection as an ‘other’, doesn’t imply the converse: that it can’t distinguish its self from a genuine other – in fact, if anything, it confirms that cognitive ability, albeit erroneously.

That’s a slight detour to the main topic, nevertheless it’s relevant, because I believe it’s not what Voss is referring to, which is our ability ‘to reflect upon ourselves and our feelings’. It’s hard to imagine that any animal can contemplate upon its own thoughts the way we do. What makes us unique, cognitively, is our ability to create concepts within concepts ad infinitum, which is why I can write an essay like this, but no other primate can. I always thought this was my own personal philosophical insight until I read Gödel, Escher, Bach and realised that Douglas Hofstadter had reached it many years before. And, as Hofstadter would point out, it’s this very ability which allows us to look at ourselves almost objectively, just as we do others, that we call self-contemplation. If this is what Voss is referring to, when she talks about ‘secondary consciousness’, then I would probably agree with her premise.

So what has this to do with dreams? Well, one of the aspects of dreams that distinguishes them from reality is that they defy rational expectations, yet we seem totally accepting of this. Voss contends that it’s because we lose our ‘secondary’ consciousness during dreaming that we lose our rational radar, so to speak (my turn of phrase, not hers).

The article argues that with lucid dreaming we can get our secondary consciousness back, and there is some neurological evidence to support this conjecture, but I’m getting ahead of myself. For those who haven’t come across the term before, lucid dreaming is the ability to take conscious control of one’s dream. In effect, one becomes aware that one is dreaming. Hamzelou even provides a 5-step procedure to induce lucid dreams.

Now, from personal experience, any time I’ve realised I’m dreaming, it immediately pops me out of the dream. Nevertheless, I believe I’ve experienced lucid dreaming, or at least a form of it. According to Patrick McNamara (Boston University), our dream life goes downhill as we age, especially once we’ve passed adolescence. Well, I have a very rich dream life, virtually every night, but then I’ve learnt, from anecdotal evidence at least, that storytellers seem to dream more, or recall their dreams more, than other people do. I’d be interested to know if there is any hard evidence to support this.

Certainly, storytellers understand the connection between story and dreaming, because, like stories, dreams put us in situations that we don’t face every day. In fact, it has been argued that dreams’ evolutionary purpose was to remind us that the world can be a dangerous place. But I’m getting off the track again, because, as a storyteller, I believe that my stories come from the same place that my dreams do. In other words, in my dreams I meet all sorts of characters that I would never meet in real life, and have experiences that I would never have in real life. But I’ve long been aware that there are 2 parts to my dreams: one part being generated by some unknown source and the other part being my subjective experience of it. In the dream, I behave as a conscious being, just as I would in the real world, and I wonder if this is what is meant by lucid dreaming. Likewise, when one is writing a story, there is often a sense that it comes from an unknown source, and you consciously inhabit the character who is experiencing it. Which is exactly what actors do, by the way, only the dream they are inhabiting is a movie set or a stage.

Neurological studies have shown that there is one area of the brain that shuts down during REM (Rapid Eye Movement) sleep, which is the signature behavioural symptom of dreaming. The ‘dorsolateral prefrontal cortex (DLPFC) was remarkably subdued during REM sleep, compared with during wakefulness.’ Allan Hobson (Harvard) believes that this is our rationality filter (again, my term, not his) because its inactivity correlates with our acceptance of completely irrational and dislocated events. Neurological studies of lucid dreams have been difficult to capture, but one intriguing finding has been an increase in a specific brainwave at 40 hertz in the frontal regions. In fact, the neurological studies done so far point to brain activity being somewhere in between normal REM sleep and full wakefulness. The studies aren’t sensitive enough to determine if the DLPFC plays a role in lucid dreams or not, but the 40 hertz brainwave is certainly more characteristic of wakefulness.

To me, dreams are what-if scenarios, and are opportunities to gain self-knowledge. I’ve long believed that one can learn from one’s dreams, not in a Jungian or Freudian sense, but more pragmatically. I’ve always believed that the way I behave in a dream simulates the way I would behave in real life. If I behave in a way that I’m not comfortable with, it makes me contemplate ways of self-improvement. Dreams allow us to face situations that we might not want to confront in reality. It’s our ability for self-reflection, which Voss calls secondary consciousness, that makes dreams valuable tools for self-knowledge. Stories often serve the same purpose. A story that really impacts on us is usually one that confronts issues relevant to our lives, or makes us aware of issues we prefer to ignore. In this sense, both dreams and stories can be a good antidote for denial.

Sunday 23 May 2010

Why religion is not the root of all evil

I heard an interview with William Dalrymple last week (19 May 2010, Sydney time) who is currently attending the Sydney Writers’ Festival. The interview centred around his latest book, Nine Lives: In Search of the Sacred in Modern India.

Dalrymple was born in Edinburgh but has traveled widely in India and the book apparently examines the lives of nine religious followers in India. I haven’t read the book myself, but, following the interview, I’m tempted to seek it out.

As the title of his book suggests, Dalrymple appears fascinated with the religious in general, although he gave no indication in the interview what his own beliefs may be. His knowledge of India’s religions seems extensive and there are a couple of points he raised which I found relevant to the West’s perspective on Eastern religions and the current antagonistic attitudes towards religious issues: Islam, in particular.

As I say, I haven’t read the book, but the gist of it, according to the interview, is that he interviewed nine people who lead distinctly different cultural lives in India, and wrote a chapter on each one. One of the points he raised, which I found relevant to my own viewpoint, is the idea that God exists inside us and not out there. This is something that I’ve discussed before and I don’t wish to dwell on here, but he implied that the idea can be found in Sufism as well as Hinduism. It should be pointed out, by the way, that there is not one Hindu religion; in fact, Hinduism is really a collection of religions that the West tends to put all in one conceptual basket. Dalrymple remarked on the similarity between Islamic Sufism and some types of Hinduism, which have flourished in India. In particular, he pointed out that the Sufis are the strongest opponents of Wahhabi-style Islam in Pakistan, which is very similar to the fundamentalism of the Taliban. I raise this point because many people are unaware that there is huge diversity in Islam, with liberal attitudes pitted against conservative attitudes, the same as we find in any society worldwide, secular or otherwise.

This contradicts the view expressed by Hitchens and Harris (Dawkins has never expressed it, as far as I’m aware, but I’m sure he would concur) that people with moderate religious views somehow give succour to the fundamentalists and extremists in the world. This view is not just counter-productive; it’s divisive, simplistic, falsely based and deeply prejudicial. And it makes me bloody angry.

These are very intelligent, very knowledgeable and very articulate men, but this stance is an intellectualisation of a deeply held prejudice against religion in general. Because they are atheists, they believe they occupy a special position: they see themselves as being outside the equation – having no religious belief, they consider themselves objective. My point is that they can hardly ask people with religious views to show tolerance towards each other if they intellectualise their own intolerance towards all religions. By expressing the view, no matter how obtuse, that any religious tolerance somehow creates a shelter or cover for extremists, they are fomenting intolerance towards those who are actually practising tolerance.

Dawkins was in Australia for an international atheist convention in Melbourne earlier this year. Religion is not a hot topic in this country, but, of course, it becomes a hot topic while he’s visiting, which makes me really glad that he doesn’t live here full time. On a TV panel show, he made the provocative claim that no evil has ever come from atheism. So atheists are not only intellectually superior to everyone else, they are also morally superior. What he said, and what he meant, is that no atheist has ever attempted genocide on a religious group because of his or her atheism (that is, because of the group’s religious belief), but lots of political groups have, which may or may not be atheistic. In other words, when it comes to practising genocide, whether the identification of the outgroup is religious or political becomes irrelevant. We don’t need religion to create politically unstable groups; they can be created by atheists as easily as they can by religious zealots. Dawkins, of course, chooses his words carefully, to give the impression that no atheist would ever indulge in an act of genocide, be it psychological or physical, but we all know that political ideology is no less dangerous than religious ideology.

One of Dawkins’ favourite utterances is: “There is no such thing as a Muslim child.” If one takes that statement to its logical conclusion, he’s advocating that all children should be disassociated from their cultural heritage. Is he aware of how totalitarian that idea is? He wants to live in a mono-culture, where everyone gets the correct education that axiomatically will ensure they will never believe in the delusion of God. Well, I don’t want to live in that world, so, frankly, he can have it.

People like to point to all the conflicts in the world over the last half century, from Ireland to the Balkans to the Middle East, as examples of how religion creates conflicts. The unstated corollary is that if we could rid the world of religion we would rid it of its main source of conflict. This is not just naïve, it’s blatantly wrong. All these conflicts are about the haves and have-nots. Almost all conflicts, including the most recent one in Thailand, are about one group having economic control over another. That’s what happened in Ireland, in the former Yugoslavia, and, most significantly, in Palestine. In many parts of the world, Iraq, Iran and Afghanistan being typical examples, religion and politics are inseparable. It’s naïve in the extreme to believe, from the vantage of a secular society, that if you rid a society of its religious beliefs you will somehow rid it of its politics, or make the politics more stable. You make the politics more stable by getting rid of nepotism and corruption. In Afghanistan, the religious fundamentalists have persuasive power and political credibility because the current alternative is corrupt and financially self-serving.

It should be obvious to anyone who follows my blog that I’m not anti-atheist. In fact, I’ve done my best to stay out of this debate. But, to be honest, I refuse to take sides in the way some commentators imply we should. I don’t see it as an US and THEM debate, because I don’t live in a country where people with religious agendas are trying to take control of the parliament. We have self-confessed creationists in our political system, but, as was demonstrated on the same panel that Dawkins was on, they are reluctant to express that view in public, and they have no agenda, hidden or otherwise, for changing the school curricula. I live in a country where you can have a religious point of view and you won’t be held up and scrutinised by every political commentator in the land.

Religion has a bad rap, not helped by the Catholic Church’s ‘above the law’ attitude towards sexual abuse scandals, but religious belief per se should never be the litmus test for someone’s intelligence, moral integrity or strength of character, either way.

Sunday 9 May 2010

Aerodynamics demystified

I know: you don’t believe me; but you haven’t read Henk Tennekes’s book, The Simple Science of Flight: From Insects to Jumbo Jets. This is a book that should be taught in high school, not to get people into the aerospace industry, but to demonstrate how science works in the real world. It is probably the best example I’ve come across, ever.

I admit I have an advantage with this book, because I have an engineering background, but the truth is that anyone with a rudimentary high school education in mathematics should be able to follow it. By rudimentary, I mean you don’t need to know calculus, just how to manipulate basic equations. Aerodynamics is one of the most esoteric subjects on the planet – all the more reason that Tennekes’s book should be part of a high school curriculum. It demonstrates the accessibility of science to the layperson better than any book I’ve read on a single subject.

Firstly, you must appreciate that mathematics is about the relationship between numbers rather than the numbers themselves. This is why an equation can be written without any numbers at all, but with symbols (letters of the alphabet) representing numbers. The numbers can have any value as long as the relationship between them is dictated by the equation. So, for an equation containing 3 symbols, if you know 2 of the values, you can work out the third. Elementary, really. To give an example from Tennekes’s book:

W/S = 0.38 V^2

Where W is the weight of the flying object (in newtons), S is the area of the wing (square metres) and V is cruising speed (metres per second). The 0.38 is a factor dependent on the angle of attack of the wing (average 6 degrees) and the density of the medium (0.3125 kg/m³; air at sea level). What Tennekes reveals graphically is that you can apply this equation to everything from a fruit fly (Drosophila melanogaster) to an Airbus A380 on what he calls The Great Flight Diagram (see bottom of post). (Mind you, his graph is logarithmic along both axes, but that’s being academic, quite literally.)
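To make the relationship concrete, here is the rule of thumb as a few lines of Python. This is my own sketch, not from the book, and the weights and wing areas I’ve plugged in (a pigeon and an A380) are rough, assumed figures, just to show how one formula spans the scales:

# Tennekes's rule of thumb: W/S = 0.38 * V^2
# (W in newtons, S in square metres, V in metres per second).

def cruise_speed(weight_n, wing_area_m2, k=0.38):
    """Solve W/S = k * V^2 for the cruising speed V."""
    return (weight_n / (k * wing_area_m2)) ** 0.5

# Assumed, rough figures for illustration only:
flyers = {
    "pigeon (approx. 4 N, 0.06 m^2)": (4.0, 0.06),
    "Airbus A380 (approx. 5.6e6 N, 845 m^2)": (5.6e6, 845.0),
}

for name, (w, s) in flyers.items():
    print(f"{name}: V is roughly {cruise_speed(w, s):.0f} m/s")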

I’ve used a small sleight-of-hand here, because the equation for the graph is actually:

W/S = c × W^(1/3)

W/S (weight divided by wing area, which gives a pressure) is called ‘wing loading’ and is proportional to the cube root of the weight, which is a direct consequence of the first equation (that I haven’t explained, though Tennekes does). Tennekes’s ‘Great Flight Diagram’ employs the second equation, but gives V (flight cruise speed) as one of the axes (horizontal) against weight (vertical axis); both logarithmic, as I said. At the risk of confusing you, the second equation graphs better (it gives a straight line on a logarithmic scale), but the relationships of both equations are effectively entailed in the one graph, because W, W/S and V can all be read from it.

I was amazed that one equation could virtually cover the entire range of flight dynamics for winged objects on the planet. The equations also effectively explain the range of flight dynamics that nature allows to take place. The heavier something is, the faster it has to fly to stay in the air, which is why 747s consistently fly 200 times faster than fruit flies. The equation shows that there is a relationship between weight, wing area and air speed at all scales, and while that relationship can be stretched it has limits. Flyers (both natural and artificial) left of the curve are slow for their size and ones to the right are fast for their size – they represent the effective limits. (A line on a graph is called a ‘curve’, even if it’s straight, to distinguish it from a grid-line.) So a side-benefit of the book is that it provides a demonstration of how mathematics is not only a tool of analysis, but how it reveals nature’s limits within a specific medium – in this case, air in earth’s gravitational field. It reminded me of why I fell in love with physics when I was in high school – nature’s secrets revealed through mathematics.
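Combining the two equations shows why the diagram slopes the way it does: if W/S = 0.38 V^2 and W/S = c × W^(1/3), then V grows as the sixth root of W. A quick sketch of the arithmetic (my own, with illustrative numbers):

# From 0.38 * V^2 = c * W^(1/3), cruising speed scales as V ~ W^(1/6).

def speed_ratio(weight_ratio):
    """How much faster a geometrically similar flyer must cruise if it is heavier."""
    return weight_ratio ** (1.0 / 6.0)

print(speed_ratio(64))     # a flyer 64 times heavier cruises about 2 times faster
print(speed_ratio(1e6))    # a million times heavier, only about 10 times faster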

The iconic Supermarine Spitfire is one of the few that is right on the curve, but, as Tennekes points out, it was built to climb fast as an interceptor, not for outright speed.

Now, those who know more about this subject than I do may ask: what about Reynolds numbers? Well, I know Reynolds numbers are used by aeronautical engineers to scale up aerodynamic phenomena from the small-scale models they use in wind tunnels to full-scale aeroplanes. Tennekes conveniently leaves this out, but then he’s not explaining how we use models to provide data for their full-scale equivalents – he’s explaining what happens at full scale, whatever that scale happens to be. So speed increases with weight and therefore scale – we are not looking for a conversion factor to take us from one scale to another, which is what Reynolds numbers provide. (Actually, there’s a lot more to Reynolds numbers than that, but it’s beyond my intellectual ken.) I’m not an aeronautical engineer, though I did work in a very minor role on the design of a wind tunnel once. By minor, I mean I took the minutes of the meetings held by the real experts.

When I was in high school, I was told that winged flight was all explained by the Bernoulli effect, which Tennekes describes as a ‘polite fiction’. So that little gem alone makes Tennekes’s book a worthwhile addition to any school’s science library.

But the real value in this book comes when he starts to talk about migrating birds and the relationship between energy and flight. Not only does he compare aeroplanes with other forms of transport, thus explaining why flight is the most economical means of travel over long distances, as nature has already proven with birds, but he analyses what it takes for the longest-flying birds to achieve their goals, and how they live at the limit of what nature allows them to do. Again, he uses mathematics that the reader can work out for themselves, converting calories from food into muscle power into flight speed and distance, to verify that the best-travelled migratory birds don’t cheat nature, but live at its limits.

The most extraordinary example is the bar-tailed godwit, which flies across the entire Pacific Ocean from Alaska to New Zealand and to Australia’s Northern Territory – a total of 11,000 km (7,000 miles) non-stop. It’s such a feat that Tennekes claims it requires a rethink of the metabolic efficiency of these birds’ muscles, and he provides the numbers to support his argument. He also explains how birds can convert fat directly into energy for their muscles, something we can’t do (we have to convert it into sugar first). He also explains how some migratory birds even start to atrophy their wing muscles and heart muscles to extend their trip – they literally burn up their own muscles for fuel.
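For readers who want to see roughly how such a calculation runs, here is a back-of-envelope version in Python. The inputs (fat energy content, muscle efficiency, glide ratio, start and end masses) are textbook-style assumed values, not Tennekes’s own figures; notably, with conventional efficiency numbers the estimated range falls short of 11,000 km, which is in the spirit of Tennekes’s point that the godwit forces a rethink of metabolic efficiency.

import math

def still_air_range_km(fat_energy_j_per_kg, muscle_efficiency, glide_ratio,
                       start_mass_kg, end_mass_kg, g=9.81):
    """Breguet-style range estimate for a bird burning fat in still air.

    Assumed model: usable work = efficiency * fat energy; distance covered
    per unit of work scales with the glide ratio (lift-to-drag) and falls
    with weight, integrated as the bird gets lighter.
    """
    return (muscle_efficiency * fat_energy_j_per_kg * glide_ratio
            * math.log(start_mass_kg / end_mass_kg) / g) / 1000.0

# Illustrative, assumed inputs (not from the book):
print(round(still_air_range_km(
    fat_energy_j_per_kg=3.9e7,   # energy density of fat
    muscle_efficiency=0.23,      # conventional muscle + propulsive efficiency
    glide_ratio=12,              # assumed lift-to-drag for a godwit
    start_mass_kg=0.55,          # departs Alaska heavily fattened
    end_mass_kg=0.26)))          # arrives with the fat (and some muscle) burnt off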

So he combines physics with biology with zoology with mathematics, all in one chapter, on one specific subject: bird migration.

He uses another equation, along with a graphic display of vectors, to explain how flapping wings work on exactly the same principle as ice skating in humans. What’s more, he doesn’t even tell the reader that he’s working with vectors, or use trigonometry to explain it, yet anyone would be able to understand the connection. That’s just brilliant exposition.

In a nutshell (without the diagrams): power equals force times speed, P = FV. For the same amount of power, you can have a large force and a small velocity, or the converse.

In other words, a large force times a small velocity can be transformed into a small force with a large velocity, with very little energy loss if friction is minimised. This applies to both skaters and birds. In skating, the large force is your leg pushing sideways against the skate with a small sideways velocity; because the skate sits at a slight angle, the sideways force (from your leg) is much greater than the backwards force on the skate, and the result is a high velocity forwards.

The same applies to birds on their downstroke: a large force vertically, at a slight forward angle, gives a higher velocity forward. Tennekes says that the ratio of wing-tip velocity to forward velocity for birds is typically 1 to 3, though it varies between 2 and 4. If a bird wants to fly faster, it doesn’t flap quicker; it increases the amplitude, which, at the same frequency, increases wing-tip speed, which increases forward flight speed. Simple, isn’t it? The sound you hear when pigeons or doves take off vertically is their wing tips actually touching (on both strokes). Actually, what you hear is the whistle of air escaping the closed gap, as a continuous chirp at their flapping frequency. So when they take off, they don’t double their wing-flapping frequency, they double their wing-flapping amplitude, which doubles their wing-tip speed at the same frequency: the wing tip has to travel double the distance in the same time.
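A tiny numerical sketch of that last point (my own illustration, with assumed figures for the stroke and the flapping rate): the power identity P = FV stays the same, while doubling the stroke amplitude at a fixed frequency doubles the wing-tip speed and hence the forward speed.

def forward_speed(stroke_arc_m, flap_freq_hz, forward_to_tip_ratio=3.0):
    """Forward speed implied by wing-tip speed.

    The tip sweeps the stroke arc twice per beat (down and up), and Tennekes
    quotes a forward speed of typically ~3 times the tip speed (range 2 to 4).
    """
    tip_speed = 2.0 * stroke_arc_m * flap_freq_hz
    return forward_to_tip_ratio * tip_speed

# Assumed: a 0.2 m stroke arc at 5 beats per second...
print(forward_speed(0.2, 5.0))   # ~6 m/s
# ...doubling the amplitude at the same frequency doubles the result:
print(forward_speed(0.4, 5.0))   # ~12 m/s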

One possible point of confusion is a term Tennekes uses called ‘specific energy consumption’, which is a ratio, not an amount of energy as its name implies. It is used to compare energy consumption, or energy efficiency, between different birds (or planes), irrespective of what units of energy one uses. The inverse of the ratio gives the glide ratio (for both birds and planes), or what the French call ‘Finesse’ – a term that has special appeal to Tennekes. So a lower specific energy consumption gives a higher glide ratio, and vice versa, as one would expect.
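In case the inversion isn’t obvious, a minimal sketch (my own): specific energy consumption is the energy spent per unit of weight per unit of distance, which makes it dimensionless, and flipping it over gives the glide ratio.

def specific_energy_consumption(energy_j, weight_n, distance_m):
    """Dimensionless: joules spent, per newton of weight, per metre travelled."""
    return energy_j / (weight_n * distance_m)

def finesse(spec_consumption):
    """The glide ratio ('finesse') is simply the inverse."""
    return 1.0 / spec_consumption

# Assumed figures: a 10 N bird spending 1,000 J over 1.5 km of cruising:
s = specific_energy_consumption(1000.0, 10.0, 1500.0)
print(s, finesse(s))   # ~0.067, giving a glide ratio of ~15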

Tennekes finally gets into esoteric territory when he discusses drag and vortices, but he’s clever enough to perform an integral without introducing his readers to calculus. He’s even more clever when he derives an equation based on vortices and links it back to the original equation that I referenced at the beginning of this post. Again, he’s demonstrating how mathematics keeps us honest. To give another, completely unrelated example: if Einstein’s general theory of relativity couldn’t be shown to reproduce Newton’s law of gravity in the everyday limit, Einstein would have had to abandon it. Tennekes does exactly the same thing for exactly the same reason: to show that his new equation agrees with what has already been demonstrated empirically. Although it’s not his equation but Ludwig Prandtl’s, whom he calls the ‘German grandfather of aerodynamics’.

Prandtl based his equation on an analogy with electromagnetic induction, which Tennekes explains in some detail. Both deal with an induced phenomenon that occurs in a circular loop perpendicular to the core axis. Vortices create drag, but this induced drag actually goes down with speed, which is highly counterintuitive, yet it explains why flight is so economical compared to other forms of travel, both for birds and for planes. Induced drag is not to be confused with ‘frictional’ drag, which does increase with air speed, so at some point there is an optimal speed, and, logically, Tennekes provides the equation that gives us that as well. He also explains how it’s the vortices from wing tips that cause many long-distance flyers, like geese and swans, to fly in V formation: the vortex supplies an updraft just aft of, and adjacent to, the wingtip, which the following bird takes advantage of.
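The optimal speed drops out of the fact that the two kinds of drag pull in opposite directions. I don’t have Tennekes’s exact expression in front of me, so the sketch below uses the standard textbook form: frictional (parasite) drag growing with V squared, induced drag falling with V squared, and made-up coefficients.

def total_drag(v, a, b):
    """Assumed textbook form: a*V^2 (frictional) + b/V^2 (induced)."""
    return a * v**2 + b / v**2

def optimal_speed(a, b):
    """The sum is smallest where the two terms are equal, i.e. V = (b/a)^(1/4)."""
    return (b / a) ** 0.25

a, b = 0.02, 2000.0          # illustrative coefficients only
v_opt = optimal_speed(a, b)
print(round(v_opt, 1), round(total_drag(v_opt, a, b), 1))   # ~17.8 m/s, ~12.6 N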

Tennekes uses his equations to explain why human-powered flight is the preserve of professional cyclists, and not a recreational sport like hang-gliding or conventional gliding. Americans apparently use the term ‘sailplane’ instead of ‘glider’, and Tennekes uses both without explaining that he’s referring to the same thing.

Tennekes reveals that his doctoral thesis (in 1964) critiqued the Concorde (still on the drawing board back then) as ‘a step backward in the history of aviation.’ This was considered heretical at the time, but not now, as history has demonstrated to his credit.

The Concorde is now given as an example, in psychology, of how humans are the only species that don’t know when to give up (called the ‘Concorde effect’). Unlike other species, humans evaluate the effort they’ve put into an endeavour, and sometimes, the more effort they invest, the more determined they become to succeed. Whether this is a good or bad trait is purely subjective, but it can evolve into a combination of pride, egotism and even denial. In the case of the Concorde, Tennekes likens it to a manifestation of ‘megalomania’, comparable to Howard Hughes’ infamous Spruce Goose.

Tennekes’s favourite plane is the Boeing 747, which is the complete antithesis of the Concorde in evolutionary terms, yet was developed at the same time; apparently it was designed so that it could be converted into a freight plane once supersonic flight became the expected norm. So, in some respects, the 747, and its successors, were an ironic by-product of the Concorde-inspired thinking of the time.

My only criticism of Tennekes is that he persistently refers to a budgerigar as a parakeet. This is parochialism on my part: in Australia, where they are native, we call them budgies.

Great Flight diagram.

Sunday 11 April 2010

To have or not to have free will

In some respects this post is a continuation of the last one. The following week’s issue of New Scientist (3 April 2010) had a cover story on ‘Frontiers of the Mind’ covering what it called Nine Big Brain Questions. One of these addressed the question of free will, which happened to be where my last post ended. In the commentary on question 8: How Powerful is the Subconscious? New Scientist refers to well-known studies demonstrating that neuron activity precedes conscious decision-making by 50 milliseconds. In fact, John-Dylan Haynes of the Bernstein Centre for Computational Neuroscience, Berlin, has ‘found brain activity up to 10 seconds before a conscious decision to move [a finger].’ To quote Haynes: “The conscious mind is not free. What we think of as ‘free will’ is actually found in the subconscious.”

New Scientist actually reported Haynes' work in this field back in their 19 April 2008 issue. Curiously, in the same issue, they carried an interview with Jill Bolte Taylor, who was recovering from a stroke, and claimed that she "was consciously choosing and rebuilding my brain to be what I wanted it to be". I wrote to New Scientist at the time, and the letter can still be found on the Net:

You report John-Dylan Haynes finding it possible to detect a decision to press a button up to 7 seconds before subjects are aware of deciding to do so (19 April, p 14). Haynes then concludes: "I think it says there is no free will."

In the same issue Michael Reilly interviews Jill Bolte Taylor, who says she "was consciously choosing and rebuilding my brain to be what I wanted it to be" while recovering from a stroke affecting her cerebral cortex (p 42). Taylor obviously believes she was executing free will.

If free will is an illusion, Taylor's experience suggests that the brain can subconsciously rewire itself while giving us the illusion that it was our decision to make it do so. There comes a point where the illusion makes less sense than the reality.

To add more confusion, during the last week I heard an interview with Norman Doidge MD, a research psychiatrist at the Columbia University Psychoanalytic Centre and the University of Toronto, who wrote the book The Brain That Changes Itself. I haven’t read the book, but the interview was all about brain plasticity, and Doidge specifically asserts that we can physically change our brains just through thought.

What Haynes' experimentation demonstrates is that consciousness is dependent on neuronal activity in the brain, and that’s exactly the point I made in my last post. Our subconscious becomes conscious when it goes ‘global’, so one would expect a time lapse between a ‘local’ brain activity (that is subconscious) and the more global brain activity (that is conscious). But the weird part, suggested by Taylor’s experience and Doidge’s assertions, is that our conscious thoughts can also affect our brains at the neuron level. This reminds me of Douglas Hofstadter’s thesis that we are all a ‘strange loop’, which he introduced in his book Godel, Escher, Bach, and then elaborated on in a book called I Am a Strange Loop. I’ve read the former tome but not the latter one (refer my post on AI & Consciousness, Feb. 2009).

We will learn more and more about consciousness, I’m sure, but I’m not at all sure that we will ever truly understand it. As John Searle points out in his book, Mind, at the end of the day, it is an experience, and a totally subjective experience at that. In regard to studying it and analysing it, we can only ever treat it as an objective phenomenon. The Dalai Lama makes the same point in his book, The Universe in a Single Atom.

People tend to think about this from a purely reductionist viewpoint: once we understand the correlation between neuron activity and conscious experience, the mystery stops being a mystery. But I disagree: I expect the more we understand, the bigger the mystery will become. If consciousness turns out to be any less weird than quantum mechanics, I’ll be very surprised. And we are already seeing quite a lot of weirdness, when consciousness is clearly dependent on neuronal activity, and yet the brain’s plasticity can be affected by conscious thought.

So where does this leave free will? Well, I don’t think that we are automatons, and I admit I would find it very depressing if that were the case. The last of the Nine Questions in last week’s New Scientist asks: will AI ever become sentient? In its response, New Scientist reports on some of the latest developments in AI, where they talk about ‘subconscious’ and ‘conscious’ layers of activity (read: software). Raul Arrables of the Carlos III University of Madrid has developed ‘software agents’ called IDA (Intelligent Distribution Agent) and is currently working on LIDA (Learning IDA). By ‘subconscious’ and ‘conscious’ levels, the scientists are really talking about tiers of ‘decision-making’, or a hierarchic learning structure, which is an idea I’ve explored in my own fiction. At the top level, the AI has goals, which are effectively criteria of success or failure. At the lower level it explores various avenues until something is ‘found’ that can be passed on to the higher level. In effect, the higher level chooses the best option from the lower level. The scientists working on this 2-level arrangement have even given their AI ‘emotions’, which are built-in biases that direct them in certain directions. I also explored this in my fiction, with the notion of artificial attachment to a human subject that would simulate loyalty.
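To make the two-tier idea concrete, here is a toy sketch in Python of how I read that description; it is not the IDA/LIDA code, and the names and numbers are all my own invented placeholders. The lower tier blindly explores options; the upper tier scores whatever it is handed against its goals, with a built-in ‘emotional’ bias nudging the choice.

import random

def explore(options, n_samples=20):
    """'Subconscious' tier: cheaply sample candidate actions without judging them."""
    return random.sample(options, min(n_samples, len(options)))

def choose(candidates, goal_score, emotion_bias):
    """'Conscious' tier: pick the candidate that best satisfies the goals,
    with 'emotions' acting as fixed, built-in biases on the scoring."""
    return max(candidates, key=lambda c: goal_score(c) + emotion_bias(c))

# Invented example: the goal is to get close to a target value of 42,
# and the 'emotion' is a mild preference for even numbers.
options = list(range(100))
goal_score = lambda c: -abs(c - 42)
emotion_bias = lambda c: 0.5 if c % 2 == 0 else 0.0

print(choose(explore(options), goal_score, emotion_bias))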

But, even in my fiction, I tend to agree with Searle, that these are all simulations, which might conceivably convince a human that an AI entity really thinks like us. But I don’t believe the brain is a computer, so I think it will only ever be an analogy or a very good simulation.

Both this development in AI and the conscious/subconscious loop we seem to have in our own brains remind me of the ‘Bayesian’ model of the brain developed by Karl Friston, also reported in New Scientist (31 May 2008). They mention it again in an unrelated article in last week’s issue – one of the little unremarkable reports they do – this time on how the brain predicts the future. Friston effectively argues that the brain, and therefore the mind, makes predictions and then modifies those predictions based on feedback. It’s effectively how the scientific method works as well, but we do it all the time in everyday encounters, without even thinking about it. Friston argues that it works at the neuron level as well as the cognitive level. Neuron pathways are reinforced through use, which is a point that Norman Doidge makes in his interview. We now know that the brain literally rewires itself, based on repeated neuron firings.
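The simplest way I can illustrate the predict-and-correct idea is the little loop below. It is only a caricature (a plain error-correction update, nothing like Friston’s actual formalism), but it shows the flavour: the prediction is nudged by the prediction error, and repeated exposure to the same signal ‘reinforces’ it.

def update_prediction(prediction, observation, learning_rate=0.1):
    """Nudge the prediction towards the observation in proportion to the error."""
    error = observation - prediction
    return prediction + learning_rate * error

# Start with no expectation, then 'experience' the same signal twenty times:
p = 0.0
for observation in [1.0] * 20:
    p = update_prediction(p, observation)

print(round(p, 3))   # creeps towards 1.0 as the pathway is 'reinforced'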

Because we think in a language, which has become a default ‘software’ for ourselves, we tend to think that we really are just ‘wetware’ computers, yet we don’t share this ability with other species. We are the only species that ‘downloads’ a language to our progeny, independently of our genetic material. And our genetic material (DNA) really is software, as it is for every life form on the planet. We have a 4-letter code that provides the instructions to create an entire organism, materially and functionally – nature’s greatest magical trick.

One of the most important aspects of consciousness, not only in humans, but for most of the animal kingdom (one suspects) is that we all ‘feel’. I don’t expect an AI ever to feel anything, even if we programme it to have emotions.

But it is because we can all ‘feel’, that our lives mean so much to us. So, whether we have free will or not, what really matters is what we feel. And without feeling, I would argue that we would not only be not human, but not sentient.


Footnote: If you're interested in neuroscience at all, the interview linked above is well worth listening to, even though it's 40 mins long.

Saturday 3 April 2010

Consciousness explained (well, almost, sort of)

As anyone who has followed this blog for any length of time knows, I’ve touched on this subject a number of times. It deals with so many issues, including the possibilities inherent in AI and the subject of free will (the latter being one of my earliest posts).

Just to clarify one point: I haven’t read Daniel C. Dennett’s book of the same name. Paul Davies once gave it a very generous accolade, referencing it as 1 of the 4 most influential books he’s read (in company with Douglas Hofstadter’s Godel, Escher, Bach). He said: “[It] may not live up to its claim… it definitely set the agenda for how we should think about thinking.” Then, in parenthesis, he quipped: “Some people say Dennett explained consciousness away.”

In an interview in Philosophy Now (early last year) Dennett echoed David Chalmers’ famous quote that “a thermostat thinks: it thinks it’s too hot, or it thinks it’s too cold, or it thinks the temperature is just right.” And I don’t think Dennett was talking metaphorically. This, by itself, doesn’t imbue a thermostat with consciousness, if one argues that most of our ‘thinking’ happens subconsciously.

I recently had a discussion with Larry Niven on his blog, on this very topic, where we to-and-fro’d over the merits of John Searle’s book, Mind. Needless to say, Larry and I have different, though mutually respectful, views on this subject.

In reference to Mind, Searle addresses that very quote from Chalmers by saying: “Consciousness is not spread out like jam on a piece of bread…” However, if one believes that consciousness is an ‘emergent’ property, it may very well be ‘spread out like jam on a piece of bread’, and the evidence suggests that this may well be the case.

This brings me to the reason for writing this post: New Scientist, 20 March 2010, pp. 39-41; an article entitled Brain Chat by Anil Ananthaswamy (consulting editor). The article refers to a theory proposed originally by Bernard Baars of The Neuroscience Institute in San Diego, California. In essence, Baars differentiated between ‘local’ brain activity and ‘global’ brain activity, since dubbed the ‘global workspace’ theory of consciousness.

According to the article, this has now been demonstrated by experiment, the details of which I won’t go into. Essentially, it has been shown that when a person thinks of something subconsciously, the activity is local in the brain, but when it becomes conscious it becomes more global: ‘…signals are broadcast to an assembly of neurons distributed across many different regions of the brain.’

One of the benefits of this mechanism is that it effectively filters out anything that’s irrelevant. What becomes conscious is what the brain considers important. What criterion the brain uses to determine this is not discussed. So this is not the explanation that people really want – it merely postulates a neuronal mechanism that correlates with consciousness as we experience it. Another benefit of this theory is that it explains why we can’t consider 2 conflicting images at once. Everyone has seen the duck/rabbit combination, and there are numerous other examples. Try listening to a Bach contrapuntal fugue so that you hear both melodies at once – you can’t. The brain mechanism (as proposed above) says that only one of these can go global, not both. It doesn’t explain, of course, how we manage to consciously ‘switch’ from one to the other.

However, both the experimental evidence and the theory are consistent with something that we’ve known for a long time: a lot of our thinking happens subconsciously. Everyone has come across a puzzle that they can’t solve; they walk away from it, or sleep on it overnight, and the next time they look at it, the solution just jumps out at them. Professor Robert Winston demonstrated this once on TV, with himself as the guinea pig. He was trying to solve a visual puzzle (find an animal in a camouflaged background), and when he had that ‘Ah-ha’ experience, it showed up as a spike on his brain waves – possibly the very signal of it going global, although I’m only speculating based on my new-found knowledge.

Mathematicians have this experience a lot, but so do artists. No artist knows where their art comes from. Writing a story, for me, is a lot like trying to solve a puzzle. Quite often, I have no better idea what’s going to happen than the reader does. As Woody Allen once said, it’s like you get to read it first. (Actually, he said it’s like you hear the joke first.) But his point is that all artists feel the creative act is like receiving something rather than creating it. So we all know that something is happening in the subconscious – a lot of our thinking happens where we’re unaware of it.

As I alluded to in my introduction, there are 2 issues that are closely related to consciousness, which are AI and free will. I’ve said enough about AI in previous posts, so I won’t digress, except to restate my position that I think AI will never exhibit consciousness. I also concede that one day someone may prove me wrong. It’s one aspect of consciousness that I believe will be resolved one day, one way or the other.

One rarely sees a discussion on consciousness that includes free will (Searle’s aforementioned book, Mind, is an exception, and he devotes an entire chapter to it). Science seems to have an aversion to free will (refer my post, Sep.07) which is perfectly understandable. Behaviours can only be explained by genes or environment or the interaction of the two – free will is a loose cannon and explains nothing. So for many scientists, and philosophers, free will is seen as a nice-to-have illusion.

Consciousness evolved, but if most of our thinking is subconscious, it raises the question: why? As I expounded on Larry’s blog, I believe that one day we will have AI that will ‘learn’; what Penrose calls ‘bottom-up’ AI. Some people might argue that we require consciousness for learning, but insects demonstrate learning capabilities, albeit rudimentary compared to what we achieve. Insects may have consciousness, by the way, but learning can be achieved by reinforcement and punishment – we’ve seen it demonstrated in animals at all levels – they don’t have to be conscious of what they’re doing in order to learn.

So the only evolutionary reason I can see for consciousness is free will, and I’m not confining this to the human species. If, as science likes to claim, we don’t need, or indeed don’t have, free will, then arguably, we don’t need consciousness either.

To demonstrate what I mean, I will relate 2 stories of people reacting in an aggressive manner in a hostile situation, even though they were unconscious.

One case was within the last 10 years, in Sydney, Australia (from memory), when a female security guard was knocked unconscious and her bag (of money) was taken from her. In front of witnesses, she got up, walked over to the guy (who was now in his car), pulled out her gun and shot him dead. She had no recollection of doing it. Now, you may say that’s a good defence, but I know of at least one other similar incident.

My father was a boxer, and when he first told me this story, I didn’t believe him, until I heard of other cases. He was knocked out, and when he came to, he was standing and the other guy was on the deck. He had to ask his second what happened. He gave up boxing after that, by the way.

The point is that both of those cases illustrate that humans can perform complicated acts of self-defence without being consciously cognisant of it. The question is: why is this the exception and not the norm?


Addendum: Nicholas Humphrey, whom I have possibly incorrectly criticised in the past, has an interesting evolutionary explanation: consciousness allows us to read others’ minds. Previously, I thought he authored an article in SEED magazine (2008) that argued that consciousness is an illusion, but I can only conclude that it must have been someone else. Humphrey discovered ‘blindsight’ in a monkey (called Helen) with a surgically removed visual cortex, which is an example of a subconscious phenomenon (sight) with no conscious correlate. (This specific phenomenon has since been found in humans with a damaged visual cortex as well.)


Addendum 2: I have since written a post called Consciousness unexplained in Dec. 2011 for those interested.