Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). 2 Reviews: here. Also this promotional Q&A on-line.

Sunday, 19 May 2013

Is the universe a computer?

In New Scientist (9 February 2013, pp.30-31) Ken Wharton presented an abridged version of his essay, The universe is not a computer, which won him third prize in the 2012 Foundational Questions Institute essay contest. Wharton is a quantum physicist at San Jose State University, California. I found it an interesting and well-written article that not only put this question into an historical perspective, but addressed a fundamental metaphysical issue that’s relevant to the way we do science and view the universe itself. It also made me revisit Paul Davies’ The Goldilocks Enigma, because he addresses the same issue and more.

Firstly, Wharton argues that Newton changed fundamentally the way we do science when he used his newly discovered (invented) differential calculus (which he called fluxions) to describe the orbits of the planets in the solar system, and simultaneously confirmed, via mathematics, that the gravity that keeps our feet on the ground is the very same phenomenon that keeps the Earth in orbit around the sun. This of itself doesn’t mean the universe is a computer, but Wharton argues that Newton’s use of mathematics to uncover a natural law of the universe created a precedent in the way we do physics and subliminally the way we perceive the universe.

Wharton refers to a ‘Newtonian schema’ that tacitly supports the idea that because we predict future natural phenomena via calculation, perhaps the universe itself behaves in a similar manner. To quote: ‘But even though we’ve moved well beyond Newtonian physics, we haven’t moved beyond the new Newtonian schema. The universe, we almost can’t help but imagine, is some cosmic computer that generates the future from the past via some master “software” (the laws of physics) and some initial input (the big bang).’

Wharton is quick to point out that this is not the same thing as believing that the universe is a computer simulation – they are entirely different issues – Paul Davies and David Deutsch make the same point in their respective books (I reviewed Deutsch’s book, The Fabric of Reality, in September 2012, and Davies I discuss below). In fact, Deutsch argues that the universe is a ‘cosmic computer’ and Davies argues that it isn’t, but I’m getting ahead of myself.

Wharton’s point is that this belief is a tacit assumption underlying all of physics: ‘…where our cosmic computer assumption is so deeply ingrained that we don’t even realise we are making it.’

A significant part of Wharton’s article entails an exposition on the “Lagrangian”, which has dominated physics over the last century, though it was first formulated by Joseph Louis Lagrange in 1788 and foreseen, in essence, by Pierre de Fermat in the previous century, when he proposed the ‘least time’ principle for refracted light. A ray of light will always take the path of least time when it passes between media – like air and water or air and glass. James Gleick, in his biography of Richard Feynman, GENIUS, gives the example of a lifesaver having to run at an angle along a beach and then swim through surf to reach a swimmer in trouble. The point is that there is a path of ‘least time’ for the lifesaver, amongst an infinite number of paths he could take. The two extremes are that he could run perpendicularly into the surf and swim diagonally to the swimmer, or run diagonally along the beach to the point opposite the swimmer and swim perpendicularly to him or her. Somewhere between these two extremes there is an optimum path that takes the least time (Wharton uses the same analogy in his article). In the case of light travelling obliquely through two different media at different speeds, the light automatically takes the path of ‘least time’. This is ‘Fermat’s principle’, even though he couldn’t prove it at the time he formulated it.
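The lifesaver analogy can be made concrete with a short numerical sketch. The figures below (distances, speeds) are my own illustrative choices, not from Wharton’s or Gleick’s accounts: the code brute-forces the entry point that minimises total time, then checks that at the optimum the ratio sin(angle)/speed is the same for the run leg and the swim leg – which is exactly Snell’s law for refracted light.

```python
import math

# Illustrative figures (my own, not from the article): distances in
# metres, speeds in metres per second.
D_BEACH, D_WATER, SEP = 20.0, 10.0, 30.0  # run depth, swim depth, offset
V_RUN, V_SWIM = 7.0, 1.5                  # the lifesaver's two speeds

def travel_time(x):
    """Total time if the lifesaver enters the water at horizontal
    offset x: a straight run leg plus a straight swim leg."""
    run = math.hypot(x, D_BEACH) / V_RUN
    swim = math.hypot(SEP - x, D_WATER) / V_SWIM
    return run + swim

# Brute-force the least-time entry point on a fine grid.
xs = [i * SEP / 100000 for i in range(100001)]
x_best = min(xs, key=travel_time)

# Fermat's principle in Snell's-law form: sin(angle)/speed is the
# same for both legs along the time-minimising path.
snell_run = (x_best / math.hypot(x_best, D_BEACH)) / V_RUN
snell_swim = ((SEP - x_best) / math.hypot(SEP - x_best, D_WATER)) / V_SWIM
print(x_best, snell_run, snell_swim)
```

Running it shows the optimal entry point lies strictly between the two extreme paths, with the two Snell ratios agreeing to several decimal places.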

Richard Feynman, in particular, used this principle of ‘least action’, as it’s called, to formulate his path integral method of quantum mechanics. In fact, as Brian Cox and Jeff Forshaw point out in The Quantum Universe (reviewed December, 2011), Planck’s constant, h, is expressed in units of ‘action’, and Feynman famously derived Schrodinger’s equation from a paper that Paul Dirac wrote on ‘The Lagrangian in Quantum Mechanics’. Feynman also described the significance of the principle, as applied to gravity, in Six Not-So-Easy Pieces - in effect, it dictates the path of a body in a gravitational field. In a nutshell, the Lagrangian is the difference between the kinetic and potential energy of the body, and the ‘action’ is this quantity accumulated over the whole path. Nature contrives that it will always be a minimum, hence the description, ‘principle of least action’.
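The gravitational case can be illustrated numerically too. In this sketch (all the numbers are assumptions of mine), a path with both ends pinned to the ground is discretised into short segments, and each interior point is repeatedly nudged to the value that minimises the action – the kinetic minus potential energy accumulated over the path. The path it settles on is the familiar ballistic parabola.

```python
G, T, N = 9.8, 2.0, 100    # gravity (m/s^2), flight time (s), grid steps
DT = T / N                 # time step; mass taken as 1 kg

def action(y):
    """Discretised action: sum over segments of (kinetic - potential) * dt,
    with the potential evaluated at the segment's average height."""
    s = 0.0
    for i in range(N):
        v = (y[i + 1] - y[i]) / DT
        s += (0.5 * v * v - G * 0.5 * (y[i] + y[i + 1])) * DT
    return s

# Relaxation: each interior point is repeatedly set to the value that
# minimises the (convex) action with its two neighbours held fixed.
y = [0.0] * (N + 1)        # path pinned to height 0 at both ends
for _ in range(20000):
    for i in range(1, N):
        y[i] = 0.5 * (y[i - 1] + y[i + 1] + G * DT * DT)

# The relaxed path reproduces the ballistic parabola y(t) = (g/2) t (T - t).
exact_mid = 0.5 * G * (T / 2) * (T / 2)  # height at mid-flight, 4.9 m
print(y[N // 2], exact_mid)              # both come out close to 4.9
```

The relaxed path also has a strictly lower action than simply staying on the ground, which is the sense in which nature ‘chooses’ the parabola.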

A bit of a detour, but it seems to be a universal principle that appears in every area of physics. Its relevance to Wharton’s thesis is that ‘…physicists tend to view it as a mathematical trick rather than an alternative framework for how the universe might really work.’

However, Wharton argues that the mathematics of a ‘Lagrangian-friendly formulation of quantum theory [proposed by him] could be taken literally’. So Wharton is not eschewing mathematics or natural laws in mathematical guise (which is what a Lagrangian really is); he’s contending that the Newtonian schema no longer applies to quantum mechanics because of its inherent uncertainty and the need for a ‘…”collapse”, when all the built-up uncertainty suddenly emerges into reality.’

David Deutsch, for those who are familiar with his ideas, overcomes this obstacle by contending that we live in a quantum multiverse, so there is no ‘collapse’, just a number of realities, all consequences of the multiverse behaving like a cosmic quantum computer. I’ve discussed this and my particular contentions with it in another post.

Paul Davies discusses these same issues in the context of the universe’s evolution and all the diverse philosophical views that such a discussion encompasses. Davies devotes many pages of print to this topic and to present it in a few paragraphs is a travesty, but that’s what I’m going to do. In particular, Davies equates mathematical Platonism with Wharton’s Newtonian schema, though he doesn’t specifically reference Newton. He provides a compelling argument that a finite universe can’t possibly do calculus-type calculations requiring infinite elements of information. And that’s the real schema (or paradigm) that modern physics seems to embrace: that everything in the universe from quantum phenomena to thermodynamics to DNA can be understood in terms of information; in ‘bits’, which makes the computer analogy not only relevant but impossible to ignore. Personally, I think the computer analogy is apposite only because we live in the ‘computer age’. It’s not only the universe that is seen as a computer, but also the human brain (and other species, no doubt). The question I always ask is: where is the software? But that’s another topic.

DNA, to all intents and purposes, is a form of natural software, where the code is written in sequences of bases (which specify amino acids) and the hardware is the proteins that are constructed and manipulated on a daily basis. DNA is a set of instructions to build a functioning biological organism – it’s as teleological as nature gets. A large part of Davies’ discussion entails teleology and its effective expulsion from science after Darwin, but the construction of every living organism on the planet is teleological even though its evolution is not. Another detour, though not an irrelevant one.

Davies argues that he’s not a Platonist, whilst acknowledging that most physicists conduct science in the Platonist tradition, even if they don’t admit it. Specifically, Davies challenges the Platonist precept that the laws of nature exist independently of the universe. Instead, he supports John Wheeler’s philosophy that ‘the laws of the universe emerged… “higgledy-piggledy”… and gradually congealed over time.’ I disagree with Davies, fundamentally on this point, not because the laws of the universe couldn’t have evolved over time, but because there is simply more mathematics than the universe needs to exist.

Davies also discusses at length the anthropic principle, both the weak and strong versions, and calls Deutsch’s version the ‘final anthropic principle’. Davies acknowledges that the strong version is contrary to the scientific precept that the universe is not teleological, yet, like me, points out the nihilistic conclusion (my term, not his) of a universe without consciousness. Davies overcomes this by embracing Wheeler’s philosophical idea that we are part of a cosmological quantum loop – an intriguing but not physically impossible concept. In fact, Davies’ book is as much a homage to Wheeler as it is an expression of his own philosophy.

My own view is much closer to Roger Penrose’s: that there are three worlds, the mental, the Platonic and the physical, and that they can be understood in a paradoxical cyclic loop. By Platonic, he means mathematical, which exists independently of humanity and the universe, yet we only comprehend it as a product of the human mind, which is a product of the physical universe, which arose from a set of mathematical laws – hence the loop. In my view this doesn’t make the universe a computer. I agree with Wharton on this point, but I see quantum mechanics as a substrate of the physical universe that existed before the universe as we know it evolved. This is consistent with the Hartle-Hawking cosmological view that the universe had no beginning in time, as well as being consistent with Davies’ exposition that the ‘…vanishing of time for the entire universe becomes very explicit in quantum cosmology, where the time variable simply drops out of the quantum description.’

I’ve discussed this cosmological viewpoint before, but if the quantum substrate exists outside of time, then Wheeler’s and Davies’ version of the anthropic principle suddenly becomes more tenable.

Addendum: I wrote another post on this in 2018, which I feel is a stronger argument, and, in particular, includes the role of chaos.


Eli said...

One of my professors told me that the mind has historically been compared to the medium of the age - tablets, paper, now computers (I'm leaving some out, but you get the point). I think he intended that information to induce doubt on my part, but I dunno. Ultimately it's the evidence that'll decide the issue.

Paul P. Mealing said...

Ex-pat Aussie and pioneer roboticist, Rodney Brooks, makes a similar point. To quote from another post of mine (on the Singularity):

Brooks states at the outset that he sees biological organisms, and therefore the brain, as a ‘machine’. But the analogy for interpretation has changed over time, depending on the technology of the age. During the 17th Century (Descartes’ time), the model was hydrodynamics, and in the 20th century it has gone from a telephone exchange, to a logic circuit, to a digital computer to even the world wide web (Brooks’ exposition in brief).

Regards, Paul.