Paul P. Mealing


Saturday, 21 July 2012

Why is there something rather than nothing?


Jim Holt has written an entire book on this subject, titled Why Does the World Exist? An Existential Detective Story. Holt is a philosopher and frequent contributor to The New Yorker, the New York Times and the London Review of Books, according to the blurb on the inner title page. He’s also very knowledgeable in mathematics and physics, and has the intellectual credentials to gain access to some of the world’s most eminent thinkers, like David Deutsch, Richard Swinburne, Steven Weinberg, Roger Penrose and the late John Updike, amongst others. I’m stating the obvious when I say that he is both cleverer and better read than me.

The above-referenced, often-quoted existential question is generally attributed to Gottfried Leibniz, who posed it in the early 18th Century, towards the end of his life, in connection with his “Principle of Sufficient Reason”, which, according to Holt, ‘…says, in essence, that there is an explanation for every fact, an answer to every question.’ Given the time in which he lived, it’s not surprising that Leibniz’s answer was ‘God’. Whilst Leibniz acknowledged that the physical world is contingent, God, he argued, is a ‘necessary being’.

For some people (like Richard Swinburne), this is still the only relevant and pertinent answer, but considering Holt makes this point on page 21 of a 280-page book, it’s obviously an historical starting point and not a conclusion. He goes on to discuss Hume’s and Kant’s responses, but here I’ll digress. In Feb. 2011, I wrote a post on metaphysics, where I pointed out that there is no reason for God to exist if we didn’t exist, so I think the logic is back to front. As I’ve argued elsewhere (March 2012), the argument for a God existing independently of humanity is a non sequitur. This is not something I’ll dwell on – I’m just putting the argument for God into perspective and don’t intend to reference it again.

Sorry, I’ll take that back. In Nov 2011, I got into an argument with Emanuel Rutten on his blog, after he claimed that he had proven that God ‘necessarily exists’ using modal logic. Interestingly, Holt, who understands modal logic better than me, raises this same issue. Holt references Alvin Plantinga’s argument, which he describes as ‘dauntingly technical’. In a nutshell: because of God’s ‘maximal greatness’, if one concedes he can exist in one possible world, he must necessarily exist in all possible worlds, because ‘maximal greatness’ must exist in all possible worlds. Apparently, this was the basis of Gödel’s argument (by logic) for the existence of God. But Holt contends that the argument can just as easily be reversed by claiming that there exists a possible world where ‘maximal greatness’ is absent. And ‘if God is absent from any possible world, he is absent from all possible worlds…’ (italics in the original). Rutten, by the way, tried to have it both ways: a personal God necessarily exists, but a non-personal God must necessarily not exist. If you don’t believe me, check out the argument thread on his own blog, which I link from my own post, Trying to define God (Nov. 2011).
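For readers who want to see the bare bones of the reasoning, here is a minimal sketch of the modal argument and its reversal, in standard S5 notation (the formalisation is my own shorthand, not Holt’s or Plantinga’s). Let G stand for ‘a maximally great being exists’, where maximal greatness is defined so that, necessarily, G entails its own necessity.

    Plantinga's direction:
      1. ◇G      (premise: God exists in some possible world)
      2. ◇□G     (from 1 and the definition of maximal greatness)
      3. □G      (S5: whatever is possibly necessary is necessary)

    Holt's reversal, from the opposite premise:
      1'. ◇¬G    (premise: some possible world lacks maximal greatness)
      2'. ¬□G    (from 1')
      3'. □¬G    (since G, if it held anywhere, would have to be necessary)

The machinery is perfectly symmetrical; everything hangs on which possibility premise you find more plausible.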

Holt starts off with a brief history lesson and, just when you think he can have nothing more to say on the subject, takes us on a globe-trotting journey, engaging some truly Olympian intellects. As the book progresses, the topic becomes more engaging and more thought-provoking. At the very least, Holt makes you think, as all good philosophy should. Holt acknowledges the influence of Thomas Nagel, whom he didn’t speak with but describes as ‘…a philosopher I have always revered for his originality, depth and integrity.’

I found the most interesting person Holt interviewed to be David Deutsch, who is best known as an advocate for Hugh Everett’s ‘many worlds’ interpretation of quantum mechanics. Holt had expected a frosty response from Deutsch, based on a review he’d written of Deutsch’s book, The Fabric of Reality, for the Wall Street Journal, in which he’d applied the famous description of Lord Byron: “mad, bad and dangerous to know”. But he left Deutsch’s company with quite a different impression: ‘…he had revealed a real sweetness of character and intellectual generosity.’

I didn’t know this, but Deutsch had extended Turing’s proof of a universal computer to a quantum version, whereby ‘…in principle, it could simulate any physically possible environment. It was the ultimate “virtual reality” machine.’ In fact, Deutsch had presented his proof to Richard Feynman shortly before Feynman’s death in 1988; Feynman got up as Deutsch was writing it on a blackboard, took the chalk from him and finished it off. Holt found out, from his conversation with Deutsch, that he didn’t believe we live in a ‘quantum computer simulation’.

Deutsch outlined his philosophy in The Fabric of Reality, according to Holt (I haven’t read it):

Life and thought, [Deutsch] declared, determine the very warp and woof of the quantum multiverse… knowledge-bearing structures – embodied in physical minds – arise from evolutionary processes that ensure they are nearly identical across different universes. From the perspective of the quantum multiverse as a whole, mind is a pervasive ordering principle, like a giant crystal.

When Holt asked Deutsch ‘Why is there a “fabric of reality” at all?’ he said “[it] could only be answered by finding a more encompassing fabric of which the physical multiverse was a part. But there is no ultimate answer.” He added: “I would start with the principle of comprehensibility.”

He gave the example of a quasar in the universe and a model of the quasar in someone’s brain “…yet they embody the same mathematical relationships.” For Deutsch, it’s the comprehensibility of the universe (in particular, its mathematical comprehensibility) that provides a basis for the ‘fabric of reality’. I’ll return to this point later.

The most insightful aspect of Holt’s discourse with Deutsch was his differentiation between explanation by laws and explanation of specifics. For example, Newton’s theory of gravitation gave laws to explain what Kepler could only explain by specifics: the orbits of planets in the solar system. Likewise, Darwin and Wallace’s theory of natural selection gave a law for evolutionary speciation rather than an explanation for every individual species. Despite his affinity for ‘comprehensibility’, Deutsch also claimed: “No, none of the laws of physics can possibly answer the question of why the multiverse is there.”

It needs to be pointed out that Deutsch’s quantum multiverse is not the same as the multiverse generated by an ‘eternally-inflating universe’. Apparently, Leonard Susskind has argued that “the two may really be the same thing”, but Steven Weinberg, in conversation with Holt, thinks they’re “completely perpendicular”.

Holt’s conversation with Penrose held few surprises for me. In particular, Penrose described his 3 worlds philosophy: the Platonic (mathematical) world, the physical world and the mental world. I’ve expounded on this in previous posts, including the one on metaphysics I mentioned earlier but also when I reviewed Mario Livio’s book, Is God a Mathematician? (March 2009).

Penrose argues that mathematics is part of our mental world (in fact, the most complex and advanced part) whilst our mental world is produced by the most advanced and complex part of the physical world (our brains). But Penrose is a mathematical Platonist, and conjectures that the universe is effectively a product of the Platonic world, which creates an existential circle when you contemplate all three. Holt found Penrose’s ideas too ‘mystical’ and suggests that he was perhaps more Pythagorean than Platonist. However, I couldn’t help but see a connection with Deutsch’s ‘comprehensibility’ philosophy. The mathematical model in the brain (of a quasar, for example) has the same ‘mathematical relationships’ as the quasar itself. Epistemologically, mathematics is the bridge between our comprehensibility and the machinations of the universe.

One thing that struck me right from the start of Holt’s book, yet he doesn’t address till the very end, is the fact that without consciousness there might as well be nothing. Nothingness is what happens when we die, and what existed before we were born. It’s consciousness that determines the difference between ‘something’ and ‘nothing’. Schrödinger, in What is Life?, made the observation that consciousness exists in a continuous present. Possibly, it’s the only thing that does. After all, we know that photons don’t (for a photon, travelling at the speed of light, no time passes at all). As Raymond Tallis keeps reminding us, without consciousness, there is no past, present or future. It also means that without memory we would not experience consciousness. So some states of unconsciousness could simply mean that we are not creating any memories.

Another interesting personality in Holt’s engagements was Derek Parfit, who contemplated a hypothetical ‘selector’ to choose a universe. Both Holt and Parfit concluded, through pure logic, using ‘simplicity’ as the criterion, that there would be no selector and ‘lots of generic possibilities’ which would lead to a ‘thoroughly mediocre universe’. I’ve short-circuited the argument for brevity, but, contrary to Holt’s and Parfit’s conclusion, I would contend that it doesn’t fit the evidence. Our universe is far from mediocre if it’s produced life and consciousness. The ‘selector’, it should be pointed out, could be a condition like ‘goodness’ or ‘fullness’. But, after reading their discussion, I concluded that the logical ‘selector’ is the anthropic principle, because that’s what we’ve got: a universe that’s comprehensible containing conscious entities that comprehend it.

P.S. I wrote a post on The Anthropic Principle last month.


Addendum 1: In reference to the anthropic principle, the abovementioned post specifies a ‘weak’ version and a ‘strong’ version, but it’s perhaps best understood as a ‘passive’ version and an ‘active’ version. To combine both posts, I would argue that the fundamental ontological question in my title points to an obvious, fundamental ontological fact that I expound upon in the second-last paragraph: ‘without consciousness, there might as well be nothing.’ This leads me to be an advocate for the ‘strong’ version of the anthropic principle. I’m not saying that something can’t exist without consciousness, as it obviously can and has, but, without consciousness, it’s irrelevant.


Addendum 2 (18 Nov. 2012): Four months ago I wrote a comment in response to someone recommending Robert Amneus's book, The Origin of the Universe; Case Closed (only available as an e-book, apparently).

In particular, Amneus is correct in asserting that if you have an infinitely large universe with infinite time, then anything that could happen will happen an infinite number of times, which explains how the most improbable events can become, not only possible, but actual. So mathematically, given enough space and time, anything that can happen will happen. I would contend that this is as good an answer to the question in my heading as you are likely to get.
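Spelling out the arithmetic behind that claim (my own gloss, not Amneus’s): if an event has any non-zero probability p of occurring in a given trial, the chance that it never occurs in n independent trials is (1 - p)^n, which shrinks towards zero as n grows without bound:

    P(at least one occurrence in n trials) = 1 - (1 - p)^n  ->  1  as n -> infinity   (for any p > 0)

With infinitely many independent trials, any event of non-zero probability not only occurs but occurs infinitely often.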

Wednesday, 18 July 2012

The real war in Afghanistan is set in hell for young girls


This is probably the most disturbing documentary I’ve seen on television, yet it elevates 4 Corners to the best current-affairs programme in Australia and, possibly, the world. I remember reading in USA Today, when American and coalition forces first went into Afghanistan after 9/11 (yes, I was in America at the time), a naïve journalist actually worrying that the change to democracy in Afghanistan might occur too quickly. I found it extraordinary that a journalist covering international affairs had such a limited view of the world outside their own country.

My understanding of Afghanistan is limited and obviously filtered through the eyes, ears and words of journalists, but there appear to be two worlds: one trying to break into the 21st Century through youthful television programmes (amongst other means) and one dominated by tribal affiliations and centuries-old customs and laws. In the latter, it is the custom to settle disputes by the perpetrator’s family giving land or daughters to the victim’s family. In other words, daughters are treated as currency and as bargaining chips in negotiations. In recent times, this has had tragic consequences resulting from a NATO-backed policy to destroy opium crops, which are the only real way that Afghan farmers can make money. Opium is the source of income for the Taliban, but the trade is run by drug smugglers based in Pakistan. They are the Afghan equivalent of the mafia in that they are merciless. When the crops that the drug smugglers have financed are destroyed, the smugglers abduct the farmers’ daughters, some as young as 7 (as evident in the 4 Corners programme), as payment for the farmers’ debts. The government and NATO are simply ignoring the problem, and as far as the Taliban is concerned, it’s an issue between the drug smugglers and the farmers.

This is a world that most of us cannot comprehend. If you put yourself in their shoes and ask what you would do, then, unless you are delusional, the answer has to be that you would do the same as them: you’d have no choice. It’s hard for us to imagine that there exists a world where life is so cheap, yet poverty, perpetual conflict and no control over one’s destiny inevitably lead to such a world. I hope this programme opens people’s eyes and breaks through the cocoon that most of us inhabit.

More than anything else, it demonstrates the moral bankruptcy of the Taliban, the cultural ignorance of the coalition and the inadequacy of Pakistani law enforcement.

Wednesday, 11 July 2012

It’s time the Catholic Church came out of the Closet


This programme was aired a couple of weeks ago on ABC’s 4 Corners, but it demonstrates how out-of-touch the Catholic Church is, not only with reality, but with community expectations. More than anything else, the Church lets down its own followers, betrays them in fact.

The programme deals specifically with a couple of cases in Australia, and it’s amazing that it takes investigative journalism to shine a light on them. Most damning for the Church is evidence that protecting their paedophilic clergy and their own reputation was more important than protecting members of their congregation.

The most significant problem, highlighted by the programme, is the implicit belief, held by the Church and evidenced by their actions, that they are literally above the law that applies to everyone else.

This is an institution that claims to have the high moral ground on issues like abortion, therapeutic cloning, gay marriage and euthanasia, to nominate the most controversial ones, when it so clearly lacks any moral credibility. Most people in the West simply ignore the Catholic Church’s more inane teachings regarding contraception, but in developing countries, the Church has real clout. In countries where protection against AIDS and birth control are important issues, both for health and economic reasons, the Church’s attitude is morally irresponsible. The Catholic Church pretends that it should be respected and taken seriously, and perhaps one day it will be, when it enters the 21st Century and actually holds itself to the same laws as the people it supposedly preaches to.


Saturday, 30 June 2012

The Anthropic Principle


I’ve been procrastinating over this topic for some time, probably a whole year, such is the epistemological depth hidden behind its title; plus it has religious as well as scientific overtones. So I recently re-read John D. Barrow’s The Constants of Nature with this specific topic in mind. I’ve only read 3 of Barrow’s books, though his bibliography is extensive, and the anthropic principle is never far from the surface of his writing.

To put it into context, Barrow co-wrote a book titled, The Anthropic Cosmological Principle, with Frank J. Tipler in 1986, that covers the subject in enormous depth, both technically and historically. But it’s a dense read and The Constants of Nature, written in 2002, is not only more accessible but possibly more germane because it delineates the role of constants, dimensions and time in making the universe ultimately livable. I discussed Barrow’s The Book of Universes in May 2011, which, amongst other things, explains why the universe has to be so large and so old if life is to exist at all. In March this year, I also discussed the role of ‘chaos’ in the evolution of the universe and life, which leads me (at least) to contend that the universe is purpose-built for life to emerge (but I’m getting ahead of myself).

We have the unique ability (amongst species on this planet) to not only contemplate the origins of our existence, but to ruminate on the origins of the universe itself. Therefore it’s both humbling, and more than a little disconcerting, to learn that the universe is possibly even more unique than we are. This, in effect, is the subject of Barrow’s book.

Towards the end of the 19th Century, an Irish physicist, George Johnstone Stoney, attempted to come up with a set of ‘units’ based on known physical constants like c (the speed of light), e (the charge on an electron) and G (Newton’s gravitational constant). At the start of the 20th Century, Max Planck did the same, adding h (Planck’s quantum constant) to the mix. The problem was that these constants produced either very large numbers or very small ones, but they pointed the way to understanding the universe in terms of ‘Nature’s constants’.
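As a rough illustration of how such ‘natural units’ fall out of the constants, here is a short Python sketch combining G, h-bar and c into the Planck length, time and mass. The numerical values are approximate CODATA figures I’ve supplied for illustration; they’re not taken from Barrow’s book.

    # A minimal sketch of Planck's natural units, built from G, hbar and c.
    # The constant values are approximate CODATA figures, supplied for illustration.
    G    = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
    hbar = 1.055e-34     # reduced Planck constant, J s
    c    = 2.998e8       # speed of light, m/s

    planck_length = (hbar * G / c**3) ** 0.5    # ~1.6e-35 m
    planck_time   = (hbar * G / c**5) ** 0.5    # ~5.4e-44 s
    planck_mass   = (hbar * c / G) ** 0.5       # ~2.2e-8 kg

    print(f"Planck length: {planck_length:.2e} m")
    print(f"Planck time:   {planck_time:.2e} s")
    print(f"Planck mass:   {planck_mass:.2e} kg")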

Around the same time, Einstein developed his theory of relativity, which was effectively an extension of the Copernican principle that no observer has a special frame of reference compared to anyone else. Specifically, the constant, c, is the same irrespective of an observer’s position or velocity. In correspondence with Ilse Rosenthal-Schneider (1891-1990), Einstein expressed a wish that there would be dimensionless constants that arose from theory. In other words, Einstein wanted to believe that nature’s constants were not only absolute but could take no other value. In his own words, he wanted to know whether “God had any choice in making the world”. In some respects this sums up Barrow’s book, because nature’s constants do, to a great extent, determine whether the universe could be life-producing.

On page 167 of the paperback edition (Vintage Books), Barrow produces a graph that shows the narrow region allowed by the electromagnetic coupling constant, α, and the mass ratio of an electron to a proton, β, for a habitable universe with stars and self-reproducible molecules. Not surprisingly, our universe is effectively in the middle of the region. On page 168, he produces another graph of α against the strong coupling constant, αs, that allows the carbon atom to be stable. In this case, the region is extraordinarily small (in both graphs, the scales are logarithmic).
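For concreteness, the two dimensionless numbers on that first graph can be computed directly from the measured constants. A small Python sketch (again using approximate CODATA values of my own choosing, not figures quoted by Barrow):

    # The two dimensionless numbers on Barrow's graph: alpha (the electromagnetic,
    # or fine structure, coupling constant) and beta (the electron-to-proton mass
    # ratio). Constant values are approximate CODATA figures, for illustration.
    import math

    e        = 1.602e-19    # elementary charge, C
    epsilon0 = 8.854e-12    # vacuum permittivity, F/m
    hbar     = 1.055e-34    # reduced Planck constant, J s
    c        = 2.998e8      # speed of light, m/s
    m_e      = 9.109e-31    # electron mass, kg
    m_p      = 1.673e-27    # proton mass, kg

    alpha = e**2 / (4 * math.pi * epsilon0 * hbar * c)
    beta  = m_e / m_p

    print(f"alpha ~ 1/{1/alpha:.0f}")    # roughly 1/137
    print(f"beta  ~ 1/{1/beta:.0f}")     # roughly 1/1836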

I was surprised to learn that Immanuel Kant was possibly the first to appreciate the relationship between Newton’s theory of gravity being an inverse square law and the 3 dimensions of space. He concluded that the universe was 3D because of the inverse square law, whereas, in fact, we would conclude the converse. Paul Ehrenfest (1880 – 1933), who was a friend of Einstein, extended Kant’s insight when he theorised that stable planetary orbits were only possible in 3 dimensions (refer my post, This is so COOL, May 2012). But Ehrenfest had another insight when he realised that 3-dimensional waves were special. In even dimensions, different parts of a ‘wavy disturbance’ travel at different speeds, and, whilst waves in odd dimensions have disturbances all travelling at the same speed, they become increasingly distorted in dimensions other than 3. On page 222, Barrow produces another graph demonstrating that only a universe with 3 dimensions of space and one of time can produce a universe that is neither unpredictable, unstable nor too simple.
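The standard way to see Ehrenfest’s point about orbits (my summary of the textbook argument, not Barrow’s wording) is via the effective potential. In d spatial dimensions, Gauss’s law makes the gravitational force fall off as 1/r^(d-1), so a body of mass m and angular momentum L moves in the effective potential

    V_eff(r) = L^2 / (2 m r^2)  -  k / ((d - 2) r^(d-2))      (d ≠ 2)

As r shrinks, the repulsive centrifugal term grows like 1/r^2 while the attractive term grows like 1/r^(d-2), so a stabilising minimum exists only when 2 > d - 2, i.e. d < 4. Three spatial dimensions is the largest number in which bound, stable planetary orbits survive a small perturbation.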

But the most intriguing and informative chapter in his book concerns research performed by Barrow himself with John Webb, Mike Murphy, Victor Flambaum, Vladimir Dzuba, Chris Churchill, Michael Drinkwater, Jason Prochaska and Art Wolfe, suggesting that the fine structure constant (α) may have had a different value in the far distant past, by the minuscule amount of 0.5 x 10^-5, which equates to 5 x 10^-16 per year. Barrow speculates that there are fundamentally 3 ages to the universe, which he calls the radiation age, the cold dark matter age and the vacuum energy age or curvature age (being negative curvature), and we are at the start of the third age. He simplifies this as the radiation era, the dust era and the curvature era. He contends that the fine structure constant increased in the dust era but is constant in the curvature era. Likewise, he believes that the gravitational constant, G, has decreased in the dust era but remains constant in the curvature era. He contends: ‘The vacuum energy and the curvature are the brake-pads of the Universe that turn off variations in the constants of Nature.’
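The per-year figure is just the fractional shift divided by the look-back time. A shift of about 0.5 x 10^-5 accumulated over roughly ten billion years (the ten-billion-year look-back is my inference from the two quoted numbers, not a figure Barrow states explicitly) gives

    (0.5 x 10^-5) / (10^10 yr)  =  5 x 10^-16 per year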

Towards the end of the book, he contemplates the idea of the multiverse, and unlike other discussions on the topic, points out how many variations one can have. Do you just have different constants or do you have different dimensions, of both space and/or time? If you have every possible universe then you can have an infinite number, which means that there are an infinite number of every universe, including ours. He made this point in The Book of Universes as well.

I’ve barely scratched the surface of Barrow’s book, which, over 300 pages, provides ample discussion on all of the above topics plus more. But I can’t leave the subject without providing a definition of both the weak anthropic principle and the strong anthropic principle as given by Brandon Carter.

The weak principle: ‘that what we can expect to observe must be restricted by the conditions necessary for our presence as observers.’

The strong principle: ‘that the universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage.’

The weak principle is effectively a tautology: only a universe that could produce observers could actually be observed. The strong principle is a stronger contention and is an existential one. Note that the ‘observers’ need not be human, and, given the sheer expanse of the universe, it is plausible that other ‘intelligent’ life-forms could exist that could also comprehend the universe. Having said that, Tipler and Barrow, in The Anthropic Cosmological Principle, contended that the consensus amongst evolutionary biologists was that the evolution of human-like intelligent beings elsewhere in the universe was unlikely.

Whilst this was written in 1986, Nick Lane (first Provost Venture Research Fellow at University College London) has done research on the origin of life (funded by the Leverhulme Trust) and reported in New Scientist (23 June 2012, pp.33-37) that complex life was a ‘once in four billion years of evolution… freak accident’. Lane provides a compelling argument, based on evidence and the energy requirements for cellular life, that simple life is plausibly widespread in the universe but complex life (requiring mitochondria) ‘…seems to hinge on a single fluke event – the acquisition of one simple cell by another.’ As he points out: ‘All the complex life on Earth – animals, plants, fungi and so on – are eukaryotes, and they all evolved from the same ancestor.’

I’ve said before that the greatest mystery of the universe is that it created the means to understand itself. We just happen to be the means, and, yes, that makes us special, whether we like it or not. Another species could have evolved to the same degree and may do over many more billions of years and may have elsewhere in the universe, though Nick Lane’s research suggests that this is less likely than is widely believed.

The universe, and life on Earth, could have evolved differently as chaos theory tells us, so some other forms of intelligence could have evolved, and possibly have that we are unaware of. The Universe has provided a window for life, consciousness and intelligence to evolve, and we are the evidence. Everything else is speculation.

Saturday, 23 June 2012

Alan Turing’s 100th Birthday today


Alan Turing is not as well known as Albert Einstein, yet he arguably had a greater impact on the 20th Century and was no less a genius. Turing was not only one of the great minds of the 20th Century but one of the great minds in Western philosophy. In fact, in January, Nature called him “one of the top scientific minds of all time”. He literally invented the modern computer in his head in the 1930s as a thought experiment, whilst simultaneously solving one of the great mathematical problems of his age: the so-called ‘halting problem’. I’ve described this in a previous post (Jan. 2008) whilst reviewing Gregory Chaitin’s book, Thinking about Gödel and Turing, but the occasion warrants some repetition.

The 2 June 2012 edition of New Scientist had a feature on Turing by John Graham-Cumming, which covers the subject in greater detail, and with more erudition, than anything I can write here. For the public at large, Turing is probably best known for his role at Bletchley Park, in the 2nd World War, deciphering the Enigma code used by German U-boats. Turing’s contribution remained ‘classified’ until after his death, though, according to Wikipedia, he received an OBE ‘for his work at the Foreign Office’. Turing worked with Gordon Welchman on the Bombe, a machine they designed to run ‘cribs’ to decipher the Enigma code. And, with mathematician Bill Tutte, he also developed a method to decode the Tunny cipher, which was used for high-level messages in Hitler’s command.

Turing also developed a ‘portable’ code called ‘Delilah’, which was unique in that it depended on clock-arithmetic, making it very difficult to decode compared to other ciphers. According to Graham-Cumming, the details of this have only recently been declassified.

Turing also became fascinated in childhood with mathematics in nature, like the recurrence of Fibonacci sequences in the spiral patterns of daisy petals and sunflower heads. In 1952 he published a paper on “The chemical basis of morphogenesis”, proposing that ‘…specific chemical reactions were responsible for the irregular spots and patches on the skin of animals like leopards or cows, and the ridges inside the roof of the mouth.’ He provided a mathematical model (a computer simulation) of 2 chemicals interacting via diffusion and reaction in a chaotic yet repetitive fashion that would result in a variegated pattern. He speculated that this could become manifest as a literal pattern on animal skins if the 2 chemicals turned specific cells on or off. Again, according to Graham-Cumming, as recently as January this year, researchers at King’s College London demonstrated Turing’s theory ‘…that 2 chemicals control the ridge patterns inside a mouse’s mouth.’
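For the curious, the kind of two-chemical system Turing analysed can be sketched in a few lines of Python. This is a generic Gray-Scott style toy model; the equations and parameter values are my illustrative choices, not the model from Turing’s 1952 paper or the King’s College study, but it produces the same qualitative spots-and-stripes behaviour.

    # A toy two-chemical reaction-diffusion simulation, in the spirit of Turing's
    # 1952 paper. The Gray-Scott equations and parameters are my own choices.
    import numpy as np

    n, steps = 100, 5000
    Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060   # diffusion rates, feed rate, kill rate

    U = np.ones((n, n))          # chemical U everywhere
    V = np.zeros((n, n))         # chemical V absent, except for a small seed
    U[45:55, 45:55] = 0.50
    V[45:55, 45:55] = 0.25

    def laplacian(Z):
        # discrete Laplacian with periodic (wrap-around) boundaries
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

    for _ in range(steps):
        uvv = U * V * V                            # reaction term
        U += Du * laplacian(U) - uvv + F * (1 - U)
        V += Dv * laplacian(V) + uvv - (F + k) * V

    # V now carries a variegated spot/stripe pattern; print a crude ASCII picture
    for row in V[::4, ::4]:
        print("".join("#" if v > 0.2 else "." for v in row))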

But, in scientific and mathematical circles, Turing is best known for his proof concerning the ‘halting problem’, which is actually very simple to formulate but difficult to prove. Basically, Turing conjured a thought experiment of a machine that computes an algorithm until it either finds an answer or runs forever (hence the ‘halting problem’). Turing was able to prove that there is no general method for determining in advance whether an arbitrary algorithm will stop or not. An example is Goldbach’s conjecture, which can be easily formulated as an algorithm and run on a computer (a sketch follows the quote below). At present there is no proof of the Goldbach conjecture, but it has been verified by computers up to 100 trillion, or 10^14. Obviously, if we could determine in advance whether such a search would ever stop, we would know whether the conjecture is true for all numbers. The same is true for Riemann’s hypothesis, probably the most famous unsolved problem in mathematics. Chaitin (mentioned above) has invented a term, Ω (Omega), to designate the probability of a programme ‘halting’. To quote from a previous post:

Chaitin claims that this is his major contribution to mathematics, arising from his invention of the term ‘Ω’ (Omega), though he calls it a discovery, to designate the probability of a programme ‘halting’, otherwise known as the ‘halting probability’.
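Here is the kind of search I mean: a short Python sketch (mine, purely illustrative) that hunts for a counterexample to Goldbach’s conjecture. It halts if and only if it finds an even number that is not the sum of two primes, and Turing’s result tells us there is no general method for knowing in advance whether a program like this will ever halt.

    # Searches for a counterexample to Goldbach's conjecture: an even number >= 4
    # that is not the sum of two primes. The loop halts only if such a number
    # exists, which is precisely the sort of question Turing showed cannot, in
    # general, be decided in advance.
    def is_prime(n):
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def satisfies_goldbach(n):
        # True if the even number n can be written as a sum of two primes
        return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

    n = 4
    while True:
        if not satisfies_goldbach(n):
            print(f"Counterexample found: {n}")   # no one expects this to print
            break
        n += 2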

But it was in conjuring his ‘thought experiment’ that Turing mentally invented what we now call a computer. I expect computers would have been invented without Turing in the same way relativity would have been discovered without Einstein, yet that is not to diminish either man’s genius or singular contribution. Turing’s insight was to imagine a ‘tape’ of infinite length with instructions that not only performed the algorithm but performed actions on the tape itself. It’s what we recognise today as software. Turing realised that this allowed a ‘universal’ machine to exist, now called a ‘universal Turing machine’, because the tape could instruct one machine to do what all possible machines could do. All modern computers are examples of Universal Turing Machines, including the one I’m using to write and post this blog.
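A Turing machine is easy to sketch in a few lines of modern code, which makes the point rather neatly, since the simulator below is itself just software running on a universal machine. The ‘program’ here is my own toy example: a transition table for a machine that flips every bit on its tape and then halts.

    # A minimal Turing machine simulator. The 'program' is a transition table:
    # (state, symbol) -> (symbol to write, head move, next state).
    # The toy machine below flips every bit on its tape and then halts.
    def run(program, tape, state="start", halt="halt"):
        cells = dict(enumerate(tape))        # sparse tape; blank cells read as '_'
        head = 0
        while state != halt:
            symbol = cells.get(head, "_")
            write, move, state = program[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    flip_bits = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run(flip_bits, "010011"))   # prints 101100_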

One cannot discuss Turing without talking about the circumstances of his death, because it was a tragedy comparable to the deaths of Socrates and Lavoisier. Turing was persecuted for being a homosexual after he went to the police to report a burglary. He was given a choice of imprisonment or chemical castration by hormone treatment, and he accepted the latter. In 1954, at the relatively young age of 41, he committed suicide and the world lost a visionary, a genius and a truly great mind. John Graham-Cumming, the author of the 5-page feature in New Scientist, successfully campaigned for an official apology for Turing from the UK government in 2009. Given the current debate about gay marriage, it is apposite to remember the injustice that was done just over half a century ago to one of the greatest minds of all time. I’ve no doubt that there are many people who believe that Turing could have been ‘cured’, such is the ignorance that still pervades many of the world’s societies, and is often promulgated by conservative religious groups, who have a peculiarly backward and anachronistic view of the world. Turing was ahead of his time in many ways, but in one way, tragically.

Addendum: For more detailed information, there is the Wiki site linked above, and Andrew Hodges’ dedicated site. The Stanford Encyclopedia of Philosophy gives a good account of Turing’s seminal work in artificial intelligence. Hodges also describes Turing’s openness about his homosexuality, unusual for his time, and gives an insight into his modest character. There is a very strong sense of an extraordinary visionary intellect who was a victim of prejudice.


Tuesday, 12 June 2012

Prometheus, the movie


Everyone is comparing Ridley Scott’s new film with his original Alien, and there are parallels, not just the fact that it’s meant to be a prequel. The crew include an android, a corporate nasty and a gutsy heroine, just like the first two movies. There are also encounters with unpleasant creatures. Alien was a seminal movie, which spawned its own sequels, albeit under different directors, yet it was more horror movie than Sci-Fi. But SF often combines genres and is invariably expected to be a thriller. Prometheus is not as graphically or viscerally scary as Alien, but it’s more a true Sci-Fi than a horror flick. In that respect I think it’s a better movie, though most reviewers I’ve read disagree with me.

Prometheus is a good title because it invokes the Greek myth in which Prometheus defies the gods to give humankind some of their abilities (most famously, fire). Scott’s tale is a 21st Century creation myth, whereby mankind goes in search of the ‘people’ who supposedly ‘engineered’ us. One of the characters in the film quips in response to this claim: ‘There goes 3 centuries of Darwinism.’ From a purely scientific perspective, it’s possible that DNA originally came from somewhere else, either as spores or in meteorites or an icy comet, but it would have been very simple life forms at the start of evolution, not the end of it. The idea that someone engineered our DNA so it would be compatible with Earthbound DNA destroys the suspension of disbelief required for the story, so it’s best to ignore that point.

But lots of Sci-Fi stories overlook this fundamental point when aliens meet Earthlings and interbreed for example (Avatar). And I’ve done it myself (in my fiction) though only to the extent that humans could eat food found on another planet. I suspect we could only do that, in reality, if the food contained DNA with the same chirality as ours. The universal unidirectional chirality of DNA is one of the strongest evidential factors that all life on Earth had a common origin.

But I have to admit that Ridley has me intrigued and I’m looking forward to the sequel, as the final scenes effectively promise us one. One of the major differences between this film and Alien and its spinoffs is that there is a mystery in this story, and the heroine is bent on finding the answer to it. She wants to find out who made us, where they came from and why they did it. There is an obvious religious allusion here, but this is closer to the Greek gods suggested by the title than to the Biblical God. Having said that, our heroine wears a cross and this is emphasised. I expect Ridley wants us to make a religious connection.

Good Sci-Fi, in my view, should contain a bit of philosophy – it should make us think about stuff. In this case, stuff includes the possibility of life on other worlds and the possibility that there may exist civilizations greater than ours, to the extent that they could have created us. We find it hard to imagine that we are the end result of a process that started from stardust; that something as complex and intelligent as us could have arisen without a greater intelligence creating it. Ridley brings that point home when the android asks someone how they would feel about meeting their maker, as he has had to. So I’m happy to see where Ridley is going with this – it’s a question that most people have asked and not been satisfied with the answer. I don’t think Ridley is going to give us a metaphysical answer. I expect he’s going to challenge what it means to be human and what responsibilities that entails in the universe’s creation.