Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Thursday, 9 May 2019

The Universe's natural units

It’s well known that Einstein’s general theory of relativity and quantum mechanics (QM) are incompatible. Actually, Viktor T. Toth (IT pro and part-time physicist) claims ‘incompatible’ is a misnomer, in as much as:

We can do quantum field theory just fine on the curved spacetime background of general relativity.

Then he adds this caveat:

What we have so far been unable to do in a convincing manner is turn gravity itself into a quantum field theory.

These carefully selected quotes are from a recent post by Toth on Quora where he is a regular contributor. His area of expertise is in cosmology, including the study of black holes. On another post he explains how the 2 theories are mathematically ‘incompatible’ (my term, not his):

The equation is Einstein’s field equation for gravitation, the equation that is, in many ways, the embodiment of general relativity:

Rμν − ½R gμν = 8πG Tμν

The left-hand side of this equation represents a quantity formed from the spacetime metric, which determines the “deformation of spacetime”. The right-hand side of this equation is a quantity that is formed from the energy, momentum, angular momentum and internal stresses and pressure of matter.

He then goes on to explain that, while the RHS of the equation can be reformulated in QM nomenclature, the LHS can’t. There is a way out of this, which is to ‘average’ the QM side of the equation to get it into units compatible with the classical side, and this is called ‘semi-classical gravity’. But, again, in his own words:

…it is hideously inelegant, essentially an ad-hoc averaging of the equation that is really, really ugly and is not derived from any basic principle that we know.

Anyway, the point of this mini-exposition is that there is a mathematical conflict, if not an incompatibility, inherent in Einstein’s equation itself. One side of the equation can be expressed quantum mechanically and the other side can’t. What’s more, the resolution is to ‘bastardise’ the QM side to make it compatible with the classical side.

You may be wondering what all this has to do with the title of this post. The fundamental constant at the heart of general relativity is, of course, G, the same constant that Newton used in his famous formula for the gravitational force between two masses:

                                               F = Gm₁m₂/r²
On the other hand, the fundamental constant used in QM is Planck’s constant, h, most famously used by Einstein to explain the photo-electric effect. It was this paper (not his papers on relativity) that garnered Einstein his Nobel Prize. The constant is best known from Planck’s equation:

                                               E = hf

Where E is energy and f is the frequency of the photon. You may or may not know that Planck determined h empirically by studying black body radiation, where he used it to resolve a particularly difficult thermodynamics problem. From Planck’s perspective, h was a mathematical invention and had no bearing on reality.
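As a quick worked example of Planck’s equation (my own illustration, not from the original post), here is the energy of a single photon of green light, taking a wavelength of 500 nm:

```python
# Photon energy via Planck's equation, E = hf.
# The 500 nm wavelength (green light) is an illustrative choice.
h = 6.62607015e-34   # Planck's constant (J s)
c = 299792458        # speed of light (m/s)

wavelength = 500e-9  # 500 nm
f = c / wavelength   # frequency of the photon (Hz)
E = h * f            # photon energy, roughly 4e-19 joules
print(E)
```

A single visible-light photon carries less than half a billionth of a billionth of a joule, which is why h went unnoticed until Planck needed it.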

G was also determined empirically, by Cavendish in 1798 (well after Newton) and, of course, is used to mathematically track the course of the planets and the stars. There is no obvious or logical connection between these 2 constants based on their empirical origins.

There is a third constant I will bring into this discussion: c, the speed of light, which also involves Einstein via his famous equation:

                                                                E = mc²

Now, having set the stage, I will invoke the subject of this post. If one uses Planck units, also known as ‘natural units’, one can see how these 3 constants are interrelated.

I will introduce another Quora contributor, Jeremiah Johnson (a self-described ‘physics theorist’) to explain:

The way we can arrive at these units of Planck Length and Planck Time is through the mathematical application of non-dimensionalization. What this does is take known constants and find what value each fundamental unit should be set to so they all equal one. (See below.)
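As a sketch of what that non-dimensionalisation yields, the Planck length, time and mass can be computed directly from c, G and ħ using their standard definitions (the numerical values are the ones quoted in this post; treat this as an illustration, not a derivation):

```python
import math

c    = 299792458       # speed of light (m/s)
G    = 6.67408e-11     # Newton's gravitational constant (m^3 kg^-1 s^-2)
hbar = 1.0545718e-34   # reduced Planck constant, h/2π (J s)

# Standard definitions of the Planck units in terms of the three constants
l_P = math.sqrt(hbar * G / c**3)   # Planck length, ~1.616e-35 m
t_P = math.sqrt(hbar * G / c**5)   # Planck time,   ~5.391e-44 s
m_P = math.sqrt(hbar * c / G)      # Planck mass,   ~2.176e-8 kg

print(l_P, t_P, m_P)
```

In these units, c, G and ħ all come out equal to 1, which is exactly the non-dimensionalisation Johnson describes.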

Toth (whom I referenced earlier) makes the salient point that many people believe the Planck units represent the smallest physical components of spacetime, and are therefore evidence, if not proof, that the Universe is inherently granular. But, as Toth points out, spacetime could still be continuous (or non-discrete), with the Planck units representing the limits of what we can know rather than the limits of what exists. I’ve written about the issue of ‘discreteness’ of the Universe before and concluded that it’s not discrete (which, of course, doesn’t mean I’m right).

Planck units in ‘free space’ are the Universe’s ‘natural units’. They are literally the smallest units we can theoretically measure, therefore lending themselves to being the metrics of the Universe.

The Planck length is

1 ℓP = 1.61622837 × 10⁻³⁵ m

And Planck time is

1 tP = 5.3911613 × 10⁻⁴⁴ s

If you divide one by the other you get:

1 ℓP / 1 tP = 299,792,458 m/s

Which, of course, is the speed of light. As Johnson quips: “Isn’t that cool?”
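The arithmetic is easy to check (a trivial sketch, using the rounded values quoted above):

```python
l_P = 1.61622837e-35   # Planck length (m), as quoted above
t_P = 5.3911613e-44    # Planck time (s), as quoted above

c = l_P / t_P          # recovers the speed of light
print(c)               # agrees with 299,792,458 m/s to within the rounding of the inputs
```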

Now, Max Planck derived these ‘natural units’ himself by looking at 5 equations and adjusting the scale of the units so that they would not only be consistent across the equations, but would non-dimensionalise the constants so they all equal 1 (as Johnson described above).

In fact, the definition of the Planck units (except charge) includes both G and ħ (which is h/2π). I checked the relationship between h and G by calculating G from h, knowing the Planck units for length and time. In effect, I reverse-engineered the calculation, having to find the Planck mass from ħ, then putting it into the formula for G.


G = 6.67408 × 10⁻¹¹ m³ kg⁻¹ s⁻²

The point is that I was able to derive G from h using Planck units. The Universe displays a consistency across metrics and natural phenomena based on units derived from constants that represent the extremes of scale, h and G. The constant, c, is also part of the derivation, and is essential to the dimension of time. It’s not such a mystery when one realises that the ‘units’ are derived from empirically determined constants.
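A sketch of that reverse-engineering (dimensional analysis gives m_P = ħt_P/ℓ_P² and G = ℓ_P³/(m_P t_P²); the numbers are the ones quoted in this post):

```python
hbar = 1.0545718e-34   # reduced Planck constant, h/2π (J s)
l_P  = 1.61622837e-35  # Planck length (m)
t_P  = 5.3911613e-44   # Planck time (s)

# Planck mass recovered from h-bar and the Planck length and time
m_P = hbar * t_P / l_P**2          # ~2.176e-8 kg

# Newton's constant recovered from the Planck units
G = l_P**3 / (m_P * t_P**2)        # ~6.674e-11 m^3 kg^-1 s^-2
print(m_P, G)
```

Starting from h and the Planck length and time, the calculation lands back on the Cavendish value of G, which is the consistency the post is pointing at.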



Addendum: For a comprehensive, yet easy-to-read, historical account, I’d recommend John D. Barrow’s book, The Constants of Nature: From Alpha to Omega.

Addendum (13 April 2023): According to Toth, the Planck units don't provide a limit on anything:

...the Planck scale is not an inherent limit of anything. It is simply the set of “natural” units that characterize Nature.

So the Planck scale is not a physical limit or a limit on what can be observed; rather, it’s a limitation of the theory that we use to describe the quantum world.

Read his erudite exposition on the subject here:

https://www.quora.com/Why-is-the-Planck-length-the-smallest-measurable-length-Why-cant-it-be-smaller/answer/Viktor-T-Toth-1  

Friday, 3 May 2019

What is the third way?

This is the latest Question of the Month from Philosophy Now. I should point out that my last entry didn’t get published. After a run of something like 5 in a row, that was a bit of a perspective-changer; I had been getting cocksure. So I have 7 out of 9 published and am now aiming for 8 out of 10. But 7 out of 10 is still a good result over a period of around 10 years. I should add that I don’t enter every single one of them; I pick and choose, which skews my chances of success.

The ‘third way’ referenced in the question is basically a reference to an alternative societal paradigm to capitalism and communism. I expect that most, if not all responses will be variations on a 'middle way'. But if there is a completely out-of-the-box answer, I’ll be curious to read it. So, maybe the way the question is addressed will be just as important, if not more important, than the proposed resolution.



I think this is the most difficult question Philosophy Now has thrown at us in the decade or two I’ve been reading it. I think there definitely will be a third way by the end of this century, but I’m not entirely sure what it will be. Is that a copout? No, I’m going to attempt to forecast the future by looking at the past.

If one goes back before the industrial revolution, no one would have predicted that feudalism would not continue forever. But the industrial revolution unintentionally spawned two social experiments, communism and capitalism, that spanned the 20th Century. I think one can fairly say that capitalism ultimately prevailed, because all communist-inspired revolutions became State-run oligarchies that led to the worst excesses of totalitarianism.

What’s more, we saw more societal and technological change in the 20th Century than in all previous history. There is no reason to believe that the 21st Century won’t be even more transformative. We are currently going through a technological revolution in every way analogous to the industrial revolution of the 19th Century, and it will be just as socially disruptive and economically challenging.

Capitalism has become so successful globally, especially in the high-tech industries, that corporations are starting to eclipse governments in their influence and power, and, to some extent, now embody the feudal system we thought we’d left behind. I’m referring to third world countries providing exploited labour and resources for the affluent elite, which includes me.

There is an increasing need to stop the wasteful production of goods on the altar of economic growth. It’s not only damaging the environment, it increases the gap between those who consume and those who produce. So a global economy would give the wealth to those who produce and not just those who are their puppet masters. This would require equitable wealth distribution on a global scale, not just nationally.

Future technologies will become more advanced to the point that there will be a symbiosis between humans and machines, and this will have a dramatic impact on economic drivers. A universal basic income, which is unthinkable now, will become a necessity because so many jobs will be AI executed.

People and their ideas are only considered progressive in hindsight. But what was radical in the past often becomes the status quo in the present; and voila: no one can imagine it any other way.


Addendum: I changed the last sentence of the third-last paragraph before I sent it off.

Friday, 26 April 2019

What use is philosophy?

Today I received my copy of Philosophy Now (Issue 131, April/May 2019), a UK based periodical I subscribe to.

Leafing through its pages, I came across the Letters section and saw my name. I had written a letter that I had forgotten about. It was in response to an article (referenced below), in the previous issue, about whether philosophy had lost its relevance in the modern world. Did it still have a role in the 21st Century of economic paradigms and technological miracles?


There are many aspects to Daniel Kaufman’s discussion on The Decline & Rebirth of Philosophy (Issue 130, Feb/Mar 2019, pp. 34-7), but mine is the perspective of an ‘outsider’, in as much as I’m not an academic in any field and I’m not a professional philosopher.

I think the major problem with philosophy, as it’s practiced today as an academic activity, is that it doesn’t fit into the current economic paradigm which specifically or tacitly governs all value judgements of a profession or an activity. In other words, it has no perceived economic value to either corporations or governments.

On the other hand, everyone can see the benefits of science in the form of the technological marvels they use every day, along with all the infrastructure that they quite literally couldn’t live without. Yet I would argue that science and philosophy are joined at the hip. Plato’s Academy was based on Pythagoras’s quadrivium: arithmetic, geometry, astronomy and music. In Western culture, science, mathematics and philosophy have a common origin.

The same people who benefit from the ‘magic’ of modern technology are mostly unaware of the long road from the Enlightenment to the industrial revolution, the formulation of the laws of thermodynamics, followed closely by the laws of electromagnetism, followed by the laws of quantum mechanics, upon which every electronic device depends.

John Wheeler, best known for coining the term ‘black hole’ (in cosmology), said:

We live on an island of knowledge surrounded by a sea of ignorance. As our island of knowledge grows, so does the shore of our ignorance.

I contend that the ‘island of knowledge’ is science and the ‘shore of ignorance’ is philosophy. Philosophy is at the frontier of knowledge and because the ‘sea of ignorance’ is infinite, there will always be a role for it. Philosophy is not divorced from science and mathematics; it’s just not obviously in the guise it once was.

The marriage between science and philosophy in the 21st Century is about how we are going to live on a planet with limited resources. We need a philosophy to guide us into a collaborative global society that realises we need Earth more than it needs us.

Thursday, 4 April 2019

Is time a psychological illusion or a parameter of the Universe?

I’ve recently read Paul Davies’ latest book, The Demon in the Machine (released in Feb) and I would highly recommend it.

We have reached a stage in politics and media generally that you are either for or against a person, an idea or an ideology. Anyone who studies philosophy in any depth realises that there are many points of view on a single topic. There are many voices that I admire yet there is not one that I completely agree with on everything they announce or proclaim or theorise about.

Paul Davies’ new book is a case in point. This book is very intellectually stimulating, even provocative, which is what I expect and is what makes it worth reading. Within its 200-plus pages, there was one short, well-written and erudite passage where I found myself in serious disagreement. It was his discussion on time and its relation to our perceptions.

He starts with Einstein’s well known quote: ‘The past, present and future is only a stubbornly persistent illusion.’ It’s important to put this into its proper context. Einstein wrote this in a letter to a mother of a friend who had recently died. It was written, of course, not only to console her, but to reveal his own conclusions arising from his theories of relativity and their inherent effect on time.

A consequence of Einstein’s theory was that simultaneity was dependent on the observer, so it was possible that 2 observers could disagree on the sequence of events occurring (depending on their respective frames of reference). Note that this is only true if there is no causal relationship between these events.

Also, Einstein believed in what’s now called a ‘block universe’ whereby the future is as fixed as the past. Some physicists still argue this, in the same way that some (if not many) argue that we live in a computer simulation (Davies, it should be pointed out, definitely does not).

I’m getting off the track, because what Davies argues is that the so-called ‘arrow of time’ is an ‘illusion’, as is the ‘flow of time’. He goes so far as to contentiously claim that time can’t be measured. His argument is simple: if time were to ‘slow down’ or ‘speed up’, everything, from your heart rate to atomic clocks, would do so as well, so there is no way to perceive it or measure it. He argues that you can’t measure time against time: “It has to be ‘One second per second’ – a tautology!” However, as Davies well knows, Einstein’s theory of relativity tells us that you can measure the ‘rate of time’ of one clock against another, and this is done and allowed for in GPS calculations. See my post on the special theory of relativity where I describe this very phenomenon.

Davies argues that there is no ‘backwards or forwards in time’ and the arrow of time is a ‘misnomer’, a metaphor we use to describe a psychological phenomenon. According to him, it’s our persistent belief in a continuity of self that creates the illusion of ‘time passing’. But I think he has it back-to-front. (I’ll return to this later.)

So, if there is no direction of time and no flow of time, how do we describe it? Well, one way is to talk about whether phenomena are symmetrical or asymmetrical in time. In other words, if you were to reverse a sequence of events would you get back to where you started, or is that even possible? Davies argues that entropy or the second law of thermodynamics accounts for this perception. But here’s the thing: without time, motion would not exist and causation would not exist; both of which we witness all the time. And if time does not ‘pass’ or ‘flow’, then what does it do?

Mathematically, time is a dimension, which even has a smallest unit, called ‘Planck time’. Davies says it’s not measurable, but we do, even to the extent that we derive an age of the Universe. John Barrow, in his The Book of Universes, even provides an estimate in ‘Planck units’. Mathematically, we provide 4 co-ordinates for any event in the Universe – 3 of space and 1 of time. And, obviously, they can all change, but time is unique in that it appears to change continuously.*

And time is ‘fluid’ for want of a better word. Its ‘rate’ can change in gravity and relativistically because the speed of light is constant. The speed of light is the only thing that stops everything from happening at once, and for a photon, time is zero. A photon traverses the entire universe in zero time (from the photon’s perspective).

But for the rest of us, time is a constraint created by light. Everything you observe has already happened because it always takes a finite amount of time (from your perspective) for the photon to reach you and nothing can travel faster than light (because it travels in zero time). This is the paradox, but it’s the relationship between light and time that governs our understanding of the Universe. If something speeds up relative to something else (you), then the light it emits increases in frequency if it’s coming towards you and decreases if it’s moving away. Obviously, the very fact that you can measure its frequency means you can measure its velocity (relative to you), which is meaningless without the dimension of time.

So note that all observations (involving light) mean that everything you perceive is in the past – it’s impossible to see into the future. So the ‘arrow of time’, that Davies specifically calls a ‘misnomer’, is a pertinent description of this everyday perception – we can only observe time in one direction, which is the past.

Davies explains our perception of time as a neurological effect:

It is incontestable that we possess a very strong psychological impression that our awareness is being swept along on an unstoppable current of time, and it is perfectly legitimate to seek a scientific explanation for the feeling that time passes. The explanation of this familiar psychological flux is, in my view, to be found in neuroscience, not physics. (emphasis in the original.)

I’ve argued previously that perhaps it is only consciousness that exists in a constant present. It is certainly true that only consciousness can perceive time as a dynamic entity. Everything around us becomes instantly the past like we are standing in a river where we can’t see upstream. It is for this reason that the concepts of past, present and future are uniquely perceived by a conscious mind. Davies effectively argues that this is the sole representation of time: that ‘time passing’ only exists in our minds and not in reality. But if our minds exist in a constant present (relative to everything else) then time does pass us by; and past, present and future is not an illusion, but a consequence of consciousness interacting with reality.

There are causal events that occur around us all the time, but, like a photographic image, they become past events as soon as they happen. I believe there is a universal ‘now’, otherwise the idea of the age of the Universe makes no sense. But, possibly, only conscious entities ride this constant now, which is why everything else is dynamically going past us in a literal, not just a psychological, sense. This is where Davies and I disagree.

Meanwhile, the future exists in light beams yet to be seen. Quantum mechanically, a photon is a wave function (ψ) that’s in the future of whatever it interacts with. A photon is only observed in retrospect, along with its path, and that’s true for all quantum events, including the famous double slit experiment. As Freeman Dyson points out, QM gives us probabilities which are in the future. To paraphrase: ‘quantum mechanics describes the future and classical physics describes the past’. Most physicists (including Davies, I suspect) would disagree. The orthodox view is that classical physics is a special case of quantum mechanics and, in quantum cosmology, time mathematically disappears.

Footnote: I should point out that Paul Davies is someone I’ve greatly admired and respected for many years.

*Paradoxically, at the event horizon of a black hole, time stops and we enter the world of quantum gravity. The evidence for black holes includes accretion disks, where the matter from a companion star forms a ring at the event horizon and emits high-energy radiation as a result, which can be observed. However, from everything I've read, we need new physics to understand what happens beyond the event horizon of a black hole.

Addendum: I've since resolved this paradox to my satisfaction: it's space that crosses the event horizon at c. Then I learned that Kip Thorne effectively provided the same explanation, demonstrated with graphics, in Scientific American in 1967. He cited David Finkelstein who demonstrated it mathematically in 1958.


Friday, 15 February 2019

3 rules for humans

A very odd post. I joined a Science Fiction group on Facebook, which has its moments. I sometimes comment, even have conversations, and most of the time manage to avoid conflicts. I’ve been a participator on the internet long enough to know when to shut up, or let something through to the keeper, as we say in Oz. In other words, avoid being baited. Most of the time I succeed, considering how opinionated I can be.

Someone asked the question: what would the equivalent 3 laws for humans be, analogous to Asimov’s 3 laws for robotics?

The 3 laws of robotics (without looking them up) are about avoiding harm to humans within certain constraints and then avoiding harm to robots or itself. It’s hierarchical, with human safety at the top as the first law (from memory).

So I submitted an answer, which I can no longer find, so maybe someone took the post down. But it got me thinking, and I found that what I came up with was more like a manifesto than laws per se; so they're nothing like Asimov’s 3 laws for robotics.

In the end, my so-called laws aren't exactly what I submitted but they are succinct and logically consistent, with enough substance to elaborate upon.

                        1.    Don’t try or pretend to be something you’re not

This is a direct attempt at what existentialists call ‘authenticity’, but it’s as plain as one can make it. I originally thought of something Socrates apparently said:

   To live with honour in this world, actually be what you try to seem to be.

And my Rule No. 1 (‘rule’ being preferable to ‘law’) is really another way of saying the same thing, only it’s more direct, and it has a cultural origin as well. As a child, growing up, ‘having tickets on yourself’, or ‘being up yourself’, to use some local colloquialisms, was considered the greatest sin. So I grew up with an ingrained disdain for pretentiousness. But there is more to it than that: I don’t believe in false modesty either.

There is a particular profession where being someone you’re not is an essential skill. I’m talking about acting. Spying also comes to mind, but the secret there, I believe, is to become invisible, which is the opposite of what James Bond does. That’s why John le Carré’s George Smiley seems more like the real thing than 007 does. Going undercover, by the way, is extremely stressful and potentially detrimental to your health – just ask anyone who’s done it.

But actors routinely become someone they’re not. Many years ago, I used to watch a TV programme called The Actor’s Studio, where well known actors were interviewed, and I have to say that many of them struck me with their authenticity, which seems like a contradiction. But an Australian actress, Kerry Armstrong, once pointed out that acting requires leaving your ego behind. It struck me that actors know better than anyone else what the difference is between being yourself and being someone else.

I’m not an actor but I create characters in fiction, and I’ve always believed the process is mentally the same. Someone once said that ‘acting requires you to say something as if you’ve just thought of it, and not everyone can do that.’ So it’s spontaneity that matters. Someone else once said that acting requires you to always be in the moment. Writing fiction, I would contend, requires the same attributes. Writing, at least for me, requires you to inhabit the character, and that’s why the dialogue feels spontaneous, because it is. But paradoxically, it also requires authenticity. The secret is to leave yourself out of it.

The Chinese hold modesty in high regard. The I Ching has a lot to say about modesty, but basically we all like and admire people who are what they appear to be, as Socrates himself said.

We all wear masks, but I think those rare people who seem most comfortable without a mask are those we intrinsically admire the most.

                                   2.    Honesty starts with honesty to yourself

It’s not hard to see that this is directly related to Rule 1. The truth is that we can’t be honest to others if we are not honest to ourselves. It should be no surprise that sociopathic narcissists are also serial liars. Narcissists, from my experience, and from what I’ve read, create a ‘reality distortion field’ that is often at odds with everyone else except for their most loyal followers.

There is an argument that this should be Rule 1. They are obviously interdependent. But Rule 1 seems to be the logical starting point for me. Rule 2 is a consequence of Rule 1 rather than the other way round.

Hugh Mackay made the observation in his book, Right & Wrong: How to Decide for Yourself, that ‘The most damaging lies are the ones we tell ourselves’. From this, neurosis is born and many of the ills that beleaguer us. Self-honesty can be much harder than we think. Obviously, if we are deceiving ourselves, then, by definition, we are unaware of it. But the real objective of self-honesty is so we can have social intercourse with others and all that entails.

So you can see there is a hierarchy in my rules. It goes from how we perceive ourselves to how others perceive us, and logically to how we interact with them.

But before leaving Rule 2, I would like to mention a movie I saw a few years back called Ali’s Wedding, which was an Australian Muslim rom-com. Yes, it sounds like an oxymoron, but it was a really good film, partly because it was based on real events experienced by the filmmaker. The music by Nigel Westlake was so good, I bought the soundtrack. Its relevance to this discussion is that the movie opens with a quote from the Quran about lying. It effectively says that lies have a habit of snowballing, so you dig yourself deeper the further you go. It’s the premise upon which the entire film is based.

                              3.    Assume all humans have the same rights as you

This is so fundamental, it could be Rule 1, but I would argue that you can’t put this into practice without Rules 1 and 2. It’s the opposite to narcissism, which is what Rules 1 and 2 are attempting to counter.

One can see that a direct consequence is Confucius’s dictum: ‘Don’t do to others what you wouldn’t want done to yourself’; better known in the West as the Golden Rule: ‘Do unto others as you would have others do unto you’; and attributed to Jesus of course.

It’s also the premise behind the United Nations Bill of Human Rights. All these rules are actually hard to live by, and I include myself in that broad statement.

A couple of years back, I wrote a post in response to the question, Is morality objective? There, I effectively argued that Rule No. 3 is the only objective morality.

Friday, 8 February 2019

Some people might be offended by this

I read an article recently in The New Yorker (Issue Jan. 21, 2019) by Vinson Cunningham called The Bad Place; How the idea of Hell has shaped the way we think. I think it was meant to be a review of a book, called, aptly enough, The Penguin Book of Hell, edited by Scott G. Bruce, but Cunningham’s discussion meanders widely, including Dante’s Inferno and Homer’s Odyssey, as well as his own Christian upbringing in a Harlem church.

I was reminded of the cultural difference between America and Australia when it comes to religion – a difference I was very aware of when I lived and worked in America over a combined period of 9 months, in New Jersey, Texas and California.

It’s hard to imagine any mainstream magazine or newspaper having this discussion in Australia, or, if they did, it would be more academic. I was in the US post 9/11 – in fact, I landed in New York the night before. I remember reading an editorial in a newspaper where people were arguing about whether the victims of the attack would go to heaven or not. I thought: how ridiculous. In the end, someone quoted from the Bible, as if that resolved all arguments  – even more ridiculous, from my perspective.

I remember reading in an altogether different context someone criticising a doctor for facilitating prayer meetings in a Jewish hospital because the people weren’t praying to Jesus, so their prayers would be ineffective. This was a cultural shock to me. No one discussed these issues or had these arguments in Australian media. At least, not in mainstream media, be it conservative or liberal.

Reading Cunningham’s article reminded me of all this because he talks about how real hell is for many people. To be fair, he also talks about how hell has been sidelined in secular societies. In Australia, people don’t discuss their religious views that much, so one can’t be sure what people really believe. But I was part of a generation that all but rejected institutionalised religion. I’ve met many people from succeeding generations who have no knowledge of biblical stories, whereas for me, it was simply part of one’s education.

One of the best ‘modern’ examples of hell or the underworld I found was in Neil Gaiman’s Sandman graphic novel series. It’s arguably the best graphic novel series written by anyone, though I’m sure aficionados of the medium may beg to differ. Gaiman borrowed freely from a range of mythologies, including Orpheus, the Bible (in particular the story of Cain and Abel) and even Shakespeare. His hero has to go to Hell and gets out by answering a riddle from its caretaker, the details of which I’ve long forgotten, but I remember thinking it to be one of those gems that writers of fiction (like me) envy. 

Gaiman also co-wrote a book with Terry Pratchett called Good Omens: The Nice and Accurate Prophecies of Agnes Nutter (1990) which is a great deal of fun. The premise, as described in Wikipedia: ‘The book is a comedy about the birth of the son of Satan, the coming of the End Times.’ Both authors are English, which possibly allows them a sense of irreverence that many Americans would find hard to manage. I might be wrong, but it seems to me that Americans take their religion way more seriously than the rest of us English-speaking nations, and this is reflected in their media.

And this brings me back to Cunningham’s article because it’s written in a cultural context that I simply don’t share. And I feel that’s the crux of this issue. Religion and all its mental constructs are cultural, and hell is nothing if not a mental construct.

My own father, whom I’ve written about before, witnessed hell first hand. He was in the Field Ambulance Corps in WW2, so he retrieved bodies in various states of beyond-repair from both sides of the conflict. He also spent 2.5 years as a POW in Germany. I bring this up because, when I was a teenager, he told me why he didn’t believe in the biblical hell. He said, in effect, he couldn’t believe in a ‘father’ who sent his children to everlasting torment. I immediately saw the sense in his argument and I rejected the biblical god from that day on. This is the same man, I should point out, who believed it was his duty to see that I had a Christian education. I thank him for that, otherwise I’d know nothing about it. When I was young I believed everything I was taught, which perversely made it easier to reject when I started questioning things. I know many people who had the same experience. The more they believed, the stronger their rejection.

I recently watched an excellent 3-part series, available on YouTube, called Testing God, which is really a discussion about science and religion. It was made by the UK’s Channel 4 in 2001, and includes some celebrity names in science, like Roger Penrose, Paul Davies and Richard Dawkins, as well as theologians; in particular, theologians who were, or had been, scientists.

In the last episode they interviewed someone who suffered horrendously in the war – he was German, and a victim of the fire-storm bombing. Unlike many who have had similar experiences, he found God, whereas before he’d been an atheist. But his idea of God is of someone who is patiently waiting for us.

I’ve long argued that God is subjective, not objective. If humans are the only connection between the Universe and God, then, without humans, there is no reason for God to exist. There is no doubt in my mind that God is a projection, otherwise there wouldn’t be so many variants. Xenophanes, who lived in the 6th and 5th centuries BC, famously said:

The Ethiops say that their gods are flat-nosed and black,
While the Thracians say that theirs have blue eyes and red hair.
Yet if cattle or horses or lions had hands and could draw,
And could sculpt like men, then the horses would draw their gods
Like horses, and cattle like cattle; and each they would shape
Bodies of gods in the likeness, each kind, of their own.

At the risk of offending people even further, the idea that the God one finds in oneself is the Creator of the Universe is a non sequitur. My point is that there are two concepts of God which are commonly conflated: God as Creator and God as mystic experience, and there is no reason to believe that they are one and the same. In fact, God as experience is unique to the person who has it, whilst God as Creator is, by definition, outside of space and time. One does not logically follow from the other.

In a different YouTube video, I watched an interview with Freeman Dyson on science and religion. He argues that they are quite separate, and that there is only conflict when people try to adapt religion to science or science to religion. In fact, he is critical of Einstein, because Dyson believes that Einstein made science a religion. Einstein was influenced by Spinoza and would have argued, I believe, that the laws of physics are God.

John Barrow, in one of his books (Pi in the Sky), half-seriously suggests that the traditional God could be replaced by mathematics.

This brings me to a joke, which I’ve told elsewhere, but is appropriate, given the context.

What is the difference between a physicist and a mathematician?
A physicist studies the laws that God chose for the Universe to obey.
A mathematician studies the laws that God has to obey.


Einstein, in a letter to a friend, once asked the rhetorical question: Do you think God had a choice in creating the laws of the Universe?

I expect that’s unanswerable, but I would argue that if God created mathematics, he had no choice. It’s not difficult to see that God can’t make a prime number non-prime, nor can he change the value of pi. To put it more succinctly: God can’t exist without mathematics, but mathematics can exist without God.

In light of this, I expect Freeman Dyson would accuse me of the same philosophical faux pas as Einstein.

As for hell, it’s a cultural artefact, a mental construct devised to manipulate people on a political scale. An anachronism at best and a perverse psychological contrivance at worst.