Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Thursday, 22 November 2018

The search for ultimate truth is unattainable

Someone lent me a really good philosophy book called Ultimate Questions by Bryan Magee. To quote directly from the back flyleaf: “Bryan Magee has had an unusually multifaceted career as a professor of philosophy, music and theatre critic, BBC broadcaster and member of [British] Parliament.” It so happens I have another of his books, The Story of Philosophy, which is really a series of interviews with philosophers about philosophers, and I expect it’s a transcription of radio broadcasts. Magee was over 80 when he wrote Ultimate Questions, which must have been before 2016, when the book was published.

This is a very thought-provoking book, which is what you'd expect from a philosopher. To a large extent, and to my surprise, Magee and I have come to similar positions on fundamental epistemological and ontological issues, albeit by different paths. However, there is also a difference, possibly a divide, which I’ll come to later.

Where to start? I’ll start at the end because it coincides with my beginning. It’s not a lengthy tome (120+ pages) and it comprises 7 chapters or topics, which are really discussions. In the last chapter, Our Predicament Summarized, he emphasises his view of an inner and outer world, both of which elude full comprehension, that he’s spent the best part of the book elaborating on.

As I’ve discussed previously, the inner and outer world is effectively the starting point for my own world view. The major difference between Magee and myself is the paths we’ve taken. My path has been a scientific one, in particular the science of physics, encapsulating, as it does, the extremes of the physical universe, from the cosmos to the infinitesimal.

Magee’s path has been the empirical philosophers from Locke to Hume to Kant to Schopenhauer and eventually arriving at Wittgenstein. His most salient and persistent point is that our belief that we can comprehend everything there is to comprehend about the ‘world’ is a delusion. He tells an anecdotal story of when he was a student of philosophy and he was told that the word ‘World’ comprised not only what we know but everything we can know. He makes the point, that many people fail to grasp, that there could be concepts that are beyond our grasp in the same way that there are concepts we do understand but are nevertheless beyond the comprehension of the most intelligent of chimpanzees or dolphins or any creature other than human. None of these creatures can appreciate the extent of the heavens the way we can or even the way our ancient forebears could. Astronomy has a long history. Even indigenous cultures, without the benefit of script, have learned to navigate long distances with the aid of the stars. We have a comprehension of the world that no other creature has (on this planet) so it’s quite reasonable to assume that there are aspects of our world that we can’t imagine either.

Because my path to philosophy has been through science, I have a subtly different appreciation of this very salient point. I wrote a post based on Noson Yanofsky’s The Outer Limits of Reason, which addresses this very issue: there are limits in logic, mathematics and science, and there always will be. But I’m under the impression that Magee takes this point further. He expounds, better than anyone else I’ve read, that there are actual limits to what our brains can not only perceive but conceptualise, which leads to the possibility, one most of us ignore, that there are things beyond our ken completely and always.

As Magee himself states, this opens the door to religion, which he discusses at length, yet he gives this warning: “Anyone who sets off in honest and serious pursuit of truth needs to know that in doing that he is leaving religion behind.” It’s a bit unfair to provide this quote out of context, as it comes at the end of a lengthy discussion; nevertheless, it’s the word ‘truth’ that gives his statement cogency. My own view is that religion is not an epistemology, it’s an experience. What’s more, it’s an experience (including the experience of God) that is unique to the person who has it and can’t be shared with anyone else. This puts individual religious experience at odds with institutionalised religions, and as someone pointed out (Yuval Harari, from memory) this means that the people who have religious experiences are all iconoclasts.

I’m getting off the point, but it’s relevant inasmuch as arguments involving science and religion have no common ground. I find them ridiculous because they usually involve pitting an ancient text (of so-called prophecy) against modern scientific knowledge and all the technology it has propagated, which we all rely upon for our day-to-day existence. If religion ever had an epistemological role it has long been usurped.

On the other hand, if religion is an experience, it is part of the unfathomable which lies outside our rational senses, and is not captured by words. Magee contends that the best one can say about an afterlife or the existence of a God, is that ‘we don’t know’. He calls himself an agnostic but not just in the narrow sense relating to a Deity, but in the much broader sense of acknowledging our ignorance. He discusses these issues in much more depth than my succinct paraphrasing implies. He gives the example of music as something we experience that can’t be expressed in words. Many people have used music as an analogy for religious experience, but, as Magee points out, music has a material basis in instruments and a score and sound waves, whereas religion does not.

Coincidentally, someone today showed me a text on Socrates, from a much larger volume on classical Greece. Socrates famously proclaimed his ignorance as the foundation of his wisdom. In regard to science, he said: “Each mystery, when solved, reveals a deeper mystery.” This statement is so prophetic; it captures the essence of science as we know it today, some 2500 years after Socrates. It’s also the reason I agree with Magee.

John Wheeler conceived a metaphor that I envisaged independently of him. (Further evidence that I’ve never had an original idea.)

We live on an island of knowledge surrounded by a sea of ignorance.
As our island of knowledge grows, so does the shore of our ignorance.


I contend that the island is science and the shoreline is philosophy, which implies that philosophy feeds science, but also that they are inseparable. By philosophy, in this context, I mean epistemology.

To give an example that confirms both Socrates and Wheeler, the discovery and extensive research into DNA provides both evidence and a mechanism for biological evolution from the earliest life forms to the most complex; yet the emergence of DNA as providing ‘instructions’ for the teleological development of an organism is no less a mystery looking for a solution than evolution itself.

The salient point of Wheeler's metaphor is that the sea of ignorance is infinite and so the island grows but is never complete. In his last chapter, Magee makes the point that truth (even in science) is something we progress towards without attaining. “So rationality requires us to renounce the pursuit of proof in favour of the pursuit of progress.” (My emphasis.) However, ‘the pursuit of proof’ is something we’ve done successfully in mathematics ever since Euclid. It is on this point that I feel Magee and I part company.

Like many philosophers, when discussing epistemology, Magee hardly mentions mathematics. Only once, as far as I can tell, towards the very end (in the context of the quote about ‘proof’ I referenced above), does he include it in the same sentence as science, logic and philosophy as inherited from Descartes, and he has this to say: “It is extraordinary to get people, including oneself, to give up this long-established pursuit of the unattainable.” He is right in as much as there will always be truths, including mathematical truths, that we can never know (refer my recent post on Godel, Turing and Chaitin). But there are also innumerable (mathematical) truths that we have discovered and will continue to discover into the future (part of the island of knowledge). As Freeman Dyson points out, 'Mathematics is forever', whilst discussing the legacy of Srinivasa Ramanujan's genius. In other words, mathematical truths don't become obsolete in the same way that science does.

I don’t know what Magee’s philosophical stance is on mathematics, but not giving it any special consideration tells me something already. I imagine, from his perspective, it serves no special epistemological role, except to give quantitative evidence for the validity of competing scientific theories.

In one of his earlier chapters, Magee talks about the ‘apparatus’ we have in the form of our senses and our brain that provide a limited means to perceive our external world. We have developed technological means to augment our senses; microscopes and telescopes being the most obvious. But we now have particle accelerators and radio telescopes that explore worlds we didn’t even know existed less than a century ago.

Mathematics, I would contend, is part of that extended apparatus. Riemann’s geometry allowed Einstein to perceive a universe that was ‘curved’ and Euler’s equation allowed Schrodinger to conceive a wave function. Both of these mathematically enhanced ‘discoveries’ revolutionised science at opposite ends of the epistemological spectrum: the cosmological and the subatomic.
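The connection runs through Euler’s formula, e^(iθ) = cos θ + i sin θ, which turns complex exponentials into waves. A quick numerical sketch (my illustration, not anything from Magee’s book) shows the formula at work and the simplest seed of a wave function, a plane wave:

```python
# Euler's formula, e^(i*theta) = cos(theta) + i*sin(theta), is the bridge
# from exponentials to waves. A plane wave e^(i(kx - wt)) is the simplest
# precursor of Schrodinger's wave function.
import cmath
import math

theta = 0.7
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
assert abs(lhs - rhs) < 1e-12  # Euler's formula, checked numerically

# A plane wave sampled at position x and time t (k and w chosen arbitrarily):
k, w = 2.0, 5.0
def psi(x, t):
    return cmath.exp(1j * (k * x - w * t))

# Its modulus is always 1: the wave is a pure rotation in the complex plane.
assert abs(abs(psi(0.3, 0.1)) - 1.0) < 1e-12
```

The point of the sketch is only that the ‘wave’ lives in the complex plane, which is what Euler’s equation made conceivable in the first place.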

Magee rightly points out our near-insignificance in both space and time as far as the Universe is concerned. We are figuratively like the blink of an eye on a grain of sand, yet reality has no meaning without our participation. In reference to the internal and external worlds that formulate this reality, Magee has this to say: “But then the most extraordinary thing is that the world of interaction between these two unintelligibles is rationally intelligible.” Einstein famously made a similar point: "The most incomprehensible thing about the Universe is that it’s comprehensible.”

One can’t contemplate that statement, especially in the context of Einstein’s iconic achievements, without considering the specific and necessary role of mathematics. Raymond Tallis, who writes a regular column in Philosophy Now, and for whom I have great respect, nevertheless downplays the role of mathematics. He once made the comment that mathematical Platonists (like me) 'make the error of confusing the map for the terrain.’ I wrote a response, saying: ‘the use of that metaphor implies the map is human-made, but what if the map preceded the terrain.’ (The response wasn’t published.) The Universe obeys laws that are mathematically in situ, as first intimated by Galileo, given credence by Kepler, Newton, Maxwell; then Einstein, Schrodinger, Heisenberg and Bohr.

I’d like to finish by quoting Paul Davies:

We have a closed circle of consistency here: the laws of physics produce complex systems, and these complex systems lead to consciousness, which then produces mathematics, which can then encode in a succinct and inspiring way the very underlying laws of physics that gave rise to it.

This, of course, is another way of formulating Roger Penrose’s 3 Worlds, and it’s the mathematical world that is, for me, the missing piece in Magee’s otherwise thought-provoking discourse.


Last word: I’ve long argued that mathematics determines the limits of our knowledge of the physical world. Science to date has demonstrated that Socrates was right: the resolution of one mystery invariably leads to another. And I agree with Magee that consciousness is a phenomenon that may elude us forever.

Addendum: I came across this discussion between Magee and Harvard philosopher, Hilary Putnam, from 1977 (so over 40 years ago), where Magee exhibits a more nuanced view on the philosophy of science and mathematics (the subject of their discussion) than I gave him credit for in my post. Both of these men take their philosophy of science from philosophers, like Kant, Descartes and Hume, whereas I take my philosophy of science from scientists: principally, Paul Davies, Roger Penrose and Richard Feynman, and to a lesser extent, John Wheeler and Freeman Dyson; I believe this is the main distinction between their views and mine. They even discuss this 'distinction' at one point, with the conclusion that scientists, and particularly physicists, are stuck in the past - they haven't caught up (my terminology, not theirs). They even talk about the scientific method as if it's obsolete or anachronistic, though again, they don't use those specific terms. But I'd point to the LHC (built decades after this discussion) as evidence that the scientific method is alive and well, and it works. (I intend to make this a subject of a separate post.)

Friday, 9 November 2018

Can AI be self-aware?

I recently got involved in a discussion on Facebook with a science fiction group on the subject of artificial intelligence. Basically, it started with a video claiming that a robot had discovered self-awareness, which is purportedly an early criterion for consciousness. But if you analyse what they actually achieved: it’s clever sleight-of-hand at best and pseudo self-awareness at worst. The sleight-of-hand is to turn fairly basic machine logic into an emotive gesture to fool humans (like you and me) into thinking it looks and acts like a human, which I’ll describe in detail below.

And it’s pseudo self-awareness in that it’s make-believe, in the same way that pseudo science is make-believe science, meaning pretend science, like creationism. We have a toy that we pretend exhibits self-awareness. So it is we who do the make-believe and pretending, not the toy.

If you watch the video you’ll see that they have 3 robots and they give them a ‘dumbing pill’ (meaning a switch was pressed) so they can’t talk. But one of them is not dumb and they are asked: “Which pill did you receive?” One of them dramatically stands up and says: “I don’t know”. But then waves its arm and says, “I’m sorry, I know now. I was able to prove I was not given the dumbing pill.”

Obviously, the entire routine could have been programmed, but let’s assume it’s not. It’s a simple TRUE/FALSE logic test. The so-called self-awareness is a consequence of the T/F test being self-referential – whether it can talk or not. It verifies that it’s False because it hears its own voice. Notice the human-laden words like ‘hears’ and ‘voice’ (my anthropomorphic phrasing). Basically, it has a sensor to detect sound that it makes itself, which logically determines whether the statement, it’s ‘dumb’, is true or false. It says, ‘I was not given a dumbing pill’, which means its sound was not switched off. Very simple logic.
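The logic just described can be sketched in a few lines of Python (a sketch only: the experimenters’ actual code isn’t public, and every name here is made up):

```python
# A minimal sketch of the self-referential TRUE/FALSE test described above.
# All class and method names are hypothetical.

class Robot:
    def __init__(self, dumbed):
        self.dumbed = dumbed  # the 'dumbing pill': speech switched off

    def try_to_speak(self, phrase):
        """Attempt to speak; return the sound produced (None if dumbed)."""
        return None if self.dumbed else phrase

    def answer_pill_question(self):
        # The robot doesn't inspect its own 'dumbed' flag directly; it tests
        # it by speaking and 'hearing' whether any sound came out.
        heard = self.try_to_speak("I don't know")
        if heard is not None:
            # It heard its own voice, so 'I am dumb' must be false.
            return "I was not given the dumbing pill"
        return None  # a dumbed robot can't report anything

robots = [Robot(dumbed=True), Robot(dumbed=True), Robot(dumbed=False)]
answers = [r.answer_pill_question() for r in robots]
# Only the non-dumbed robot 'discovers' its own state.
```

As the sketch makes plain, the ‘self-awareness’ is just a sensor feeding back into a truth test: no inner experience is required at any step.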

I found an on-line article by Steven Schkolne (a PhD in Computer Science from Caltech), so someone with far more expertise in this area than me, yet I found his arguments for so-called computer self-awareness a bit misleading, to say the least. He talks about 2 different types of self-awareness (specifically for computers): external and internal. An example of external self-awareness is an iPhone knowing where it is, thanks to GPS. An example of internal self-awareness is a computer responding to someone touching the keyboard. He argues that “machines, unlike humans, have a complete and total self-awareness of their internal state”. For example, a computer can find every file on its hard drive and even tell you its date of origin, which is something no human can do.

From my perspective, this is a bit like the argument that a thermostat can ‘think’. ‘It thinks it’s too hot or it thinks it’s too cold, or it thinks the temperature is just right.’ I don’t know who originally said that, but I’ve seen it quoted by Daniel Dennett, and I’m still not sure if he was joking or serious.

Computers use data in a way that humans can’t and never will, which is why their memory recall is superhuman compared to ours. Anyone who can even remotely recall data like a computer is called a savant, like the famous ‘Rain Man’. The point is that machines don’t ‘think’ like humans at all. I’ll elaborate on this point later. Schkolne’s description of self-awareness for a machine has no cognitive relationship to our experience of self-awareness. As Schkolne says himself: “It is a mistake if, in looking for machine self-awareness, we look for direct analogues to human experience.” Which leads me to argue that what he calls self-awareness in a machine is not self-awareness at all.

A machine accesses data, like GPS data, which it can turn into a graphic of a map or just numbers representing co-ordinates. Does the machine actually ‘know’ where it is? You can ask Siri (as Schkolne suggests) and she will tell you, but he acknowledges that it’s not Siri’s technology of voice recognition and voice replication that makes your iPhone self-aware. No, the machine creates a map, so you know where ‘You’ are. Logically, a machine, like an aeroplane or a ship, could navigate over large distances with GPS with no humans aboard, like drones do. That doesn’t make them self-aware; it makes their logic self-referential, like the toy robot in my introductory example. So what Schkolne calls self-awareness, I call self-referential machine logic. Self-awareness in humans is dependent on consciousness: something we experience, not something we deduce.

And this is the nub of the argument. The argument goes that if self-awareness amongst humans and other species is a consequence of consciousness, then machines exhibiting self-awareness must be the first sign of consciousness in machines. However, self-referential logic, coded into software doesn’t require consciousness, it just requires machine logic suitably programmed. I’m saying that the argument is back-to-front. Consciousness can definitely imbue self-awareness, but a self-referential logic coded machine does not reverse the process and imbue consciousness.

I can extend this argument more generally to contend that computers will never be conscious for as long as they are based on logic. What I’d call problem-solving logic came late, evolutionarily, in the animal kingdom. Animals are largely driven by emotions and feelings, which I argue came first in evolutionary terms. But as intelligence increased so did social skills, planning and co-operation.

Now, insect colonies seem to give the lie to this. They are arguably closer to how computers work, based on algorithms that are possibly chemically driven (I actually don’t know). The point is that we don’t think of ants and bees as having human-like intelligence. A termite nest is an organic marvel, yet we don’t think the termites actually plan its construction the way a group of humans would. In fact, some would probably argue that insects don’t even have consciousness. Actually, I think they do. But to give another well-known example, I think the dance that bees do is ‘programmed’ into their DNA, whereas humans would have to ‘learn’ it from their predecessors.

There is a way in which humans are like computers, which I think muddies the waters, and leads people into believing that the way we think and the way machines ‘think’ is similar if not synonymous.

Humans are unique within the animal kingdom in that we use language like software; we effectively ‘download’ it from generation to generation and it limits what we can conceive and think about, as Wittgenstein pointed out. In fact, without this facility, culture and civilization would not have evolved. We are the only species (that we are aware of) that develops concepts and then manipulates them mentally because we learn a symbol-based language that gives us that unique facility. But we invented this with our brains; just as we invent symbols for mathematics and symbols for computer software. Computer software is, in effect, a language and it’s more than an analogy.

We may be the only species that uses symbolic language, but so do computers. Note that computers are like us in this respect, rather than us being like computers. With us, consciousness is required first, whereas with AI, people seem to think the process can be reversed: if you create a machine logic language with enough intelligence, then you will achieve consciousness. It’s back-to-front, just as self-referential logic creating self-aware consciousness is back-to-front.

I don't think AI will ever be conscious or sentient. There seems to be an assumption that if you make a computer more intelligent it will eventually become sentient. But I contend that consciousness is not an attribute of intelligence. I don't believe that more intelligent species are more conscious or more sentient. In other words, I don't think the two attributes are concordant, even though there is an obvious dependency between consciousness and intelligence in animals. But it’s a one-way dependency; if consciousness were dependent on intelligence then computers would already be conscious.


Addendum: The so-called Turing test is really a test for humans, not robots, as this 'interview' with 'Sophia' illustrates.

Thursday, 11 October 2018

My philosophy in 24 dot points

A friend (Erroll Treslan) posted on Facebook a link to a matrix that attempts to encapsulate the history of (Western) philosophy by listing the most influential people and linking their ideas, either conflicting or in agreement.

I decided to attempt the same for myself and have included those people, whom I believe influenced me, which is not to say they agree with me. In the case of some of my psychological points I haven’t cited anyone as I’ve forgotten where my beliefs came from (in those cases).

  • There are 3 worlds: physical, mental and mathematical. (Penrose)
  • Consciousness exists in a constant present; classical physics describes the past and quantum mechanics describes the future. (Schrodinger, Bragg, Dyson)
  • Reality requires both consciousness and a physical universe. You can have a universe without consciousness, which was the case in the past, but it has no meaning and no purpose. (Barrow, Davies)
  • Purpose has evolved but the Universe is not teleological in that it is not determinable. (Davies)
  • There is a cosmic anthropic principle; without sentient beings there might as well be nothing. (Carter, Barrow, Davies)
  • Mathematics exists independently from humans and the Universe. (Barrow, Penrose, Pythagoras, Plato)
  • There will always be mathematical truths we don’t know. (Godel, Turing, Chaitin)
  • Mathematics is not a language per se. It starts with the prime numbers, called the 'atoms of mathematics', yet extends to infinity and the transcendental. (Euclid, Euler, Riemann)
  • The Universe created the means to understand itself, with mathematics the medium and humans the only known agents. (Einstein, Wigner)
  • The Universe obeys laws dependent on fine-tuned mathematical parameters. (Hoyle, Barrow, Davies)
  • The Universe is not a computer; chaos rules and is not predictable. (Stewart, Gleick)
  • The brain does not run on algorithms; there is no software. (Penrose, Searle)
  • Human language is analogous to software because we ‘download’ it from generation to generation and it ‘mutates’; if I can mix my metaphors. (Dawkins, Hofstadter)
  • We think and conceptualise in a language. Axiomatically, this limits what we can conceive and think about. (Wittgenstein)
  • We only learn something new when we integrate it into what we already know. (Wittgenstein)
  • Humans have the unique ability to nest concepts within concepts ad-infinitum, which mirror the physical world. (Hofstadter)
  • Morality is largely subjective, dependent on cultural norms but malleable by milieu, conditioning and cognitive dissonance. (Mill, Zimbardo)
  • It is inherently human to form groups with an ingroup-outgroup mentality.
  • Evil requires the denial of humanity in others.
  • Empathy is the key to social reciprocity at all levels of society. (Confucius, Jesus)
  • Quality of life is dependent on our interaction with others from birth to death. (Aristotle, Buddha)
  • Wisdom comes from adversity. The premise of every story ever told is about a protagonist dealing with adversity – it’s a universal theme (Frankl, I Ching).
  • God is an experience that is internal, yet is perceived as external. (Feuerbach)
  • Religion is the mind’s quest to find meaning for its own existence.

Addendum: I’ve changed it from 23 points to 24 by adding point 22. It’s actually a belief I’ve held for some time. They are all ‘beliefs’ except point 7, which arises from a theorem.

Monday, 3 September 2018

Is the world continuous or discrete?

There is an excellent series on YouTube called ‘Closer to Truth’, where the host, Robert Lawrence Kuhn, interviews some of the cleverest people on the planet (about existential and epistemological issues) in such a way that ordinary people, like you and me, can follow. I understand from Wikipedia that it’s really a television series started in 2000 on America’s PBS.

In an interview with Gregory Chaitin, Kuhn asks the above question, which made me go back and re-read Chaitin’s book, Thinking about Godel and Turing, which I originally bought and read over a decade ago, and then posted about on this blog (not long after I created it). It’s really a collection of talks and abridged papers given by Chaitin from 1970 to 2007, so there’s a lot of repetition but also an evolution in his narrative and ideas. Reading it for the second time (from cover to cover) over a decade later has the benefit of using the filter of all the accumulated knowledge that I’ve acquired in the interim.

More than one person (Umberto Eco and Jeremy Lent, for example) has wondered if the discreteness we find in the world, and which we logically apply to mathematics, is a consequence of a human projection rather than an objective reality. In other words, is it an epistemological bias rather than an ontological condition? I’ll return to this point later.

Back to Chaitin’s opus, he effectively takes us through the same logical and historical evolution over and over again, which ultimately leads to the same conclusions. I’ll summarise briefly. In 1931, Kurt Godel proved a theorem that effectively tells us that, within a formal axiom-based mathematical system, there will always be mathematical truths that can’t be proved. Then in 1936, Alan Turing proved, with a thought experiment that presaged the modern computer, that there will always be machine calculations that may never stop, and that we can’t predict in advance whether they will or not. For example, Riemann’s hypothesis can be checked by an algorithm to whatever limit you like (and is being checked somewhere right now, probably) but you can never know in advance whether the program will ever stop (by finding a false result). As Chaitin points out, this is an extension of Godel’s theorem, and Godel’s theorem can be deduced from Turing’s.
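Turing’s result has a simple logical shape that can be seen in miniature. Testing Riemann’s hypothesis needs heavy machinery, so this sketch (mine, not Chaitin’s) uses Goldbach’s conjecture, that every even number greater than 2 is a sum of two primes, which has the same shape: a search for a counterexample halts if and only if the conjecture is false, and in general no one knows in advance which it is.

```python
# A counterexample-search with the same logical shape Turing identified:
# the unbounded version of this search halts if and only if Goldbach's
# conjecture is false -- and nobody knows whether it is.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_goldbach_counterexample(n):
    """True if even n > 2 is NOT the sum of two primes."""
    return not any(is_prime(p) and is_prime(n - p)
                   for p in range(2, n // 2 + 1))

def search(limit):
    # Bounded version: always halts. Remove the limit and you can never
    # know in advance whether it will ever stop.
    for n in range(4, limit, 2):
        if is_goldbach_counterexample(n):
            return n
    return None

result = search(10_000)  # no counterexample below 10,000
```

Run to any finite limit, the search proves nothing either way, which is exactly the predicament Turing formalised.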

Then Chaitin himself proved, by inventing (or discovering) a mathematical device called Omega (Ω), that there are innumerable numbers that can never be completely calculated (Omega gives the probability of a Turing program halting). In fact, there are more incomputable numbers than computable ones, even though both are infinite in extent. The computable Reals are countably infinite while the incomputable Reals are uncountably infinite. I’ve mentioned this previously when discussing Noson Yanofsky’s excellent book, The Outer Limits of Reason: What Science, Mathematics, and Logic CANNOT Tell Us. Chaitin claims that this proves that Godel’s Incompleteness Theorem is not some aberration, but is part of the foundation of mathematics – there are infinitely more numbers that can’t be calculated than those that can.

So that’s the gist of Chaitin’s book, but he draws some interesting conclusions on the side, so to speak. For a start, he argues that maths should be done more like physics, and that maybe we should accept some unproved theorems (like Riemann’s) as new axioms, as one would in physics. In fact, this is happening almost by default, inasmuch as there already exist new theorems that are dependent on Riemann’s conjecture being true. In other words, Riemann’s hypothesis has effectively morphed into a mathematical caveat so people can explore its consequences.

The other area of discussion that Chaitin gets into, which is relevant to this discussion is whether the Universe is like a computer. He cites Stephen Wolfram (who invented Mathematica) and Edward Fredkin.

According to Pythagoras everything is number, and God is a mathematician… However, now a neo-Pythagorean doctrine is emerging, according to which everything is 0/1 bits, and the world is built entirely out of digital information. In other words, now everything is software, God is a computer programmer, not a mathematician, and the world is a giant information-processing system, a giant computer [Fredkin, 2004, Wolfram, 2002, Chaitin, 2005].

Carlo Rovelli also argues that the Universe is discrete, but for different reasons. It’s discrete because quantum mechanics (QM) has a Planck limit for both time and space, which would suggest that even space-time is discrete. Therefore it would seem to lend itself to being made up of ‘bits’. This fits in with the current paradigm that QM and therefore reality, is really about ‘information’ and information, as we know, comes in ‘bits’.

Chaitin, at one point, goes so far as to suggest that the Universe calculates its future state from the current state. This is very similar to Newton’s clockwork universe, whereby Laplace famously claimed that given the position of every particle in the Universe and all the relevant forces, one could, in principle, ‘read the future just as readily as the past’. These days we know that’s not correct, because we’ve since discovered QM, but people are arguing that a QM computer could do the same thing. David Deutsch is one who argues that (in principle).

There is a fundamental issue with all this that everyone seems to have either forgotten or ignored. Prior to the last century, a man called Henri Poincare discovered some mathematical gremlins that seemed of little relevance to reality, but eventually led to a physics discipline which became known as chaos theory.

So after re-reading Chaitin’s book, I decided to re-read Ian Stewart’s erudite and deeply informative book, Does God Play Dice? The New Mathematics of Chaos.

Not quite a third of the way through, Stewart introduces Chaitin’s theorem (of incomputable numbers) to demonstrate why the initial conditions in chaos theory can never be computed, which I thought was a very nice and tidy way to bring the 2 philosophically opposed ideas together. Chaos theory effectively tells us that a computer can never predict the future evolution of the Universe, and it’s Chaitin’s own theorem which provides the key.
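Why incomputable initial conditions matter can be seen with the logistic map, the standard textbook example of chaos that Stewart discusses at length (the sketch below is mine, not code from the book):

```python
# Sensitive dependence on initial conditions, using the logistic map
# x -> r*x*(1-x) at r = 4. Two starting points differing by one part in
# ten billion diverge completely within a few dozen iterations, which is
# why any rounding of the initial condition eventually destroys the
# prediction.

def logistic_orbit(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.3)
b = logistic_orbit(0.3 + 1e-10)

early_gap = abs(a[5] - b[5])    # still close to the initial 1e-10 error
late_gap = abs(a[50] - b[50])   # typically of order one: the orbits are unrelated
```

Since the true initial condition is, by Chaitin’s theorem, typically an incomputable number, the divergence above isn’t a practical nuisance but a limit in principle.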

At another point, Stewart quips that God uses an analogue computer. He’s referring to the fact that most differential equations (used by scientists and engineers) are linear whilst nature is clearly nonlinear.

Today’s science shows that nature is relentlessly nonlinear. So whatever God deals with… God’s got an analogue computer as versatile as the entire universe to play with – in fact, it is the entire universe. (Emphasis in the original.)

As all scientists know (and Yanofsky points out in his book) we mostly use statistical methods to understand nature’s dynamics, not the motion of individual particles, which would be impossible. Erwin Schrodinger made a similar point in his excellent tome, What is Life? To give just one example that most people are aware of: radioactive decay (an example Schrodinger used). Statistically, we know the half-lives of radioactive decay, which follow a precise exponential rule, but no one can predict the radioactive decay of an individual atom.
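Schrodinger’s point can be simulated in a few lines (a toy model of mine; the parameters are arbitrary): each atom’s decay is random, yet the population tracks the exponential law almost exactly.

```python
# Each atom's decay is individually random and unpredictable, yet a large
# population follows the exponential decay law N(t) = N0 * (1 - p)^t
# almost exactly.
import random

random.seed(1)
p_decay = 0.05            # per-step decay probability for a single atom
n_atoms = 100_000

survivors = n_atoms
counts = [survivors]
for step in range(40):
    # every surviving atom independently 'decides' whether to decay
    survivors = sum(1 for _ in range(survivors) if random.random() > p_decay)
    counts.append(survivors)

# The ensemble closely tracks the exponential curve...
expected = [n_atoms * (1 - p_decay) ** t for t in range(41)]
# ...but nothing in the model says *which* atom decays at which step.
```

The statistical regularity emerges from the ensemble alone, which is precisely why half-lives are precise while individual decays are not.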

Whilst on the subject of Schrodinger, his eponymous equation is both linear and deterministic, which seems to contradict the discrete and probabilistic effects at the heart of QM. Perhaps that is why Carlo Rovelli contends that Schrodinger’s wavefunction has misled our attempts to understand QM’s relationship to reality.

Roger Penrose explicates QM in phases: U, R and C (he always displays them in bold), depicting the wavefunction phase, the measurement phase and the classical physics phase. Logically, Schrodinger’s wavefunction only exists in the U phase, prior to measurement or observation. If it weren’t linear you couldn’t add the waves together (of all possible paths), which is essential for determining the probabilities and is also fundamental to QED (the latest iteration of QM). The fact that it’s deterministic means that it can calculate symmetrically forward and backward in time.
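The importance of linearity can be illustrated with a toy two-path calculation (my own numbers, not Penrose’s): the amplitudes of the paths are added first (superposition), and only then squared (the Born rule) to give a probability, which is what produces interference.

```python
import cmath

# Two possible paths, each carrying a complex amplitude. Because the
# wavefunction is linear, the amplitudes are summed before squaring.
def probability(phase1, phase2):
    amp1 = cmath.exp(1j * phase1) / 2   # amplitude for path 1
    amp2 = cmath.exp(1j * phase2) / 2   # amplitude for path 2
    total = amp1 + amp2                 # linear superposition
    return abs(total) ** 2              # probability = |amplitude|^2

print(probability(0.0, 0.0))        # paths in phase: constructive, 1.0
print(probability(0.0, cmath.pi))   # paths out of phase: destructive, ~0
```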

My own take on this is that QM and classical physics obey different rules, and the rule for classical physics is chaos, which is neither predictable nor linear. Both lead to unpredictability, but for different reasons and using different mathematics. Stewart has argued that just maybe you could describe QM using chaos theory, and David Deutsch has argued the opposite: that you could use the many-worlds interpretation of QM to explain chaos theory. I think they’re both wrong-headed, but I’m the first to admit that all these people know far more than me. Freeman Dyson (one of the smartest physicists not to win a Nobel Prize) is the only other person I know of who believes that QM and classical physics may be distinct. He’s pointed out that classical physics describes events in the past and QM provides future probabilities. It’s not a great leap from there to suggest that the wavefunction exists in the future.

You may have noticed that I’ve wandered away from my original question, so maybe I should wander my way back. In my introduction, I mentioned the epistemological point, considered by some, that maybe our employment of mathematics, which is based on integers, has made us project discreteness onto the world.

Chaitin’s theorem demonstrates that most of mathematics is not discrete at all. In fact, he cites his hero, Gottfried Leibniz, to the effect that most of mathematics is ‘transcendental’, meaning it’s beyond our intellectual grasp. This turns on its head the general perception that mathematics is a totally logical construct. We access mathematics using logic, but if there is an uncountable infinity of Reals that are not computable, then, logically, they are not accessible to logic, including computer logic. This is a consequence of Chaitin’s own theorem, yet he argues that this very inaccessibility is the reason the incomputable Reals can’t be part of reality.

In fact, Chaitin would argue that it’s because of that inaccessibility that a discrete universe makes sense. In other words, a discrete universe would be computable. However, chaos theory suggests God would have to keep resetting his parameters. (There is such a thing as ‘chaotic control’, called ‘Proportional Perturbation Feedback’ (PPF), which is how pacemakers work.)

Ian Stewart has something to say on this, albeit while talking about something else. He makes the valid point that there is a practical limit to how many decimal places a computer can use:

The philosophical point is that the discrete computer model we end up with is not the same as the discrete model given by atomic physics.

Continuity requires calculus, as in the case of Schrodinger’s equation (referenced above) and also Einstein’s field equations, and calculus uses infinitesimals to maintain continuity mathematically. A computer doing calculus ‘cheats’ (as Stewart points out) by quite literally adding finite differences.
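Here is a minimal sketch of that ‘cheating’ (the equation and step size are my own illustrative choices): the Euler method replaces the infinitesimal of calculus with a literal finite difference, so the computed answer approaches, but never equals, the continuous solution.

```python
import math

# Solve dy/dt = y with y(0) = 1, whose exact solution is e**t, by
# repeatedly adding finite differences of size h instead of a true
# infinitesimal.
def euler_exp(t_end=1.0, h=0.001):
    y, t = 1.0, 0.0
    while t < t_end - 1e-12:
        y += h * y   # finite-difference step: dy is approximated by y*dt
        t += h
    return y

approx = euler_exp()
exact = math.e
# The finite-difference answer gets closer as h shrinks, but for any
# finite h it never equals the continuous solution e.
print(exact - approx)
```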

This leads Stewart to make the following observation:

Computers can work with a small number of particles. Continuum mechanics can work with infinitely many. Zero or infinity. Mother Nature slips neatly into the gap between the two.

Wolfram argues that the Universe is pseudo-random, which would allow it to run on algorithms. But there are two levels of randomness, one caused by QM and one caused by chaos. (Chaos can create stability as well, which I've discussed elsewhere.) The point is that initial conditions have to be calculated to infinite precision to determine chaotic phenomena (like weather), and this applies to virtually everything in nature. Even the orbits of the planets are chaotic, but over millions, even billions, of years. So at some level the Universe may be discrete, even at the Planck scale, but when it comes to evolutionary phenomena, chaos rules, and the Universe is neither computably determinable (long term) nor computably discrete.

There is one aspect of this that I’ve never seen discussed, and that is the relationship between chaos theory and time. Carlo Rovelli, in his recent book, The Order of Time, argues that ‘time’s arrow’ can only be explained by entropy, but another physicist, Richard A. Muller, in his book, Now: The Physics of Time, argues the converse. Muller provides a lengthy and compelling argument on why entropy doesn’t explain the arrow of time.

This may sound simplistic, but entropy is really about probabilities. As time progresses, a dynamic system, if left to its own devices, progresses to states of higher probability. For example, perfume released from a bottle in one corner of a room soon dissipates throughout the room because there is a much higher probability of that than of it accumulating in one spot. A broken egg has an infinitesimally low probability of coming back together again. The caveat, ‘left to its own devices’, simply means that the system is in equilibrium with no external source of energy to keep it out of equilibrium.
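A toy random-walk sketch (my own illustration) shows entropy-as-probability at work: particles released in one cell of a ‘room’ end up spread nearly uniformly, simply because spread-out arrangements vastly outnumber clumped ones.

```python
import random

# 1000 'perfume particles' all start in cell 0 of a 20-cell room and
# take random steps. Dissipation wins because it is overwhelmingly the
# more probable arrangement, which is all entropy increase amounts to here.
random.seed(42)

def diffuse(n_particles=1000, width=20, steps=1000):
    """Random walk on cells 0..width-1 (walls reflect); return final cells."""
    positions = [0] * n_particles
    for _ in range(steps):
        for i in range(n_particles):
            positions[i] = min(max(positions[i] + random.choice((-1, 1)), 0),
                               width - 1)
    return positions

final = diffuse()
fraction_at_start = final.count(0) / len(final)
# Initially 100% of particles were in cell 0; now only about 1/20 remain.
print(fraction_at_start)
```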

What has this to do with chaos theory? Well, chaotic phenomena are time-asymmetrical (rerun them and you get a different result). Take weather. If weather were time-symmetrical, forecasts would be easy. And weather is not in a state of equilibrium, so entropy is not the dominant factor. Take another example: biological evolution. It’s not driven by entropy, because it increases in complexity, but it’s definitely time-asymmetrical, and it’s chaotic. In fact, speciation appears to be fractal, and fractals are a signature of chaos.

Now, I pointed out that the U phase of Penrose’s explication of QM is time-symmetrical, but I would contend that the overall U, R, C sequence is not. I contend that there is a sequence from QM to classical physics that is time-asymmetrical. This implies, of course, that QM and classical physics are distinct.


Addendum 1: This is slightly off-topic, but relevant to my own philosophical viewpoint. Freeman Dyson delivers a lecture on QM, and, in the period from 22:15 to 24:00, he argues that the wavefunction and QM can only tell us about the future and not the past.

Addendum 2 (Conclusion): Someone told me that this was difficult to follow, so I've written a summary based on a comment I gave below.

Chaitin's theorem arises from his derivation of omega (Ω), which is the 'halting probability', an extension of Turing's famous halting theorem. You can read about it here, including its significance to incomputability.

I agree with Chaitin 'mathematically' in that I think there are infinitely more incomputable Reals than computable Reals. Transcendental numbers like π and e already point in this direction: they can be computed to whatever resolution you like, but (of course) never to infinity. Chaitin's Ω goes further still: it can only ever be approximated, never computed digit by digit.
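The distinction can be illustrated for a computable real (my own illustration): e can be approximated to any desired precision by summing its series term by term, something no analogous procedure can do for the digits of Ω.

```python
from fractions import Fraction

# e = 1/0! + 1/1! + 1/2! + ... : each extra term yields more correct
# digits, so e is computable to any finite resolution, never to infinity.
def e_approx(terms=20):
    total, factorial = Fraction(0), 1
    for n in range(terms):
        total += Fraction(1, factorial)
        factorial *= n + 1
    return total

approx = float(e_approx())
print(approx)  # 2.718281828459045, i.e. e to double precision
```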

I disagree with him 'philosophically' in that I don't think the Universe is necessarily discrete and can be reduced to 0s and 1s (bits). In other words, I don't think the Universe is like a computer.

Curiously and ironically, Chaitin has proved that the foundation of mathematics consists mostly of incomputable Reals, yet he believes the Universe is computable. I agree with him on the first part but not the second.

Addendum 3: I discussed the idea of the Universe being like a computer in an earlier post, with no reference to Chaitin or Stewart.

Addendum 4: I recently read Jim Al-Khalili's chapter on 'Laplace's demon' in his book, Paradox: The Nine Greatest Enigmas in Physics, which is specifically a discussion of chaos theory. Al-Khalili contends that 'the Universe is almost certainly deterministic', but I think his definition of 'deterministic' might be subtly different to mine. He rightly points out that chaos is deterministic but unpredictable. What this means is that everything in the past and everything in the future has a cause and effect. So there is a causal path from any current event to as far into the past as you want to go. And there will also be a causal path from that same event into the future; it's just that you won't be able to predict it because it's incomputable. In that sense the future is deterministic but not determinable. However (as Ian Stewart points out in Does God Play Dice?), if you rerun a chaotic experiment you will get a different result, which is almost a definition of chaos; tossing a coin is the most obvious example (cited by Stewart). My point is that if the Universe is chaotic, then it follows that if you were to rerun the Universe you'd get a different result. So it may be 'deterministic' but it's not 'determined'. I might elaborate on this in a separate post.

Saturday, 25 August 2018

Do you know truth when you see it?

I saw an interesting two-part documentary on the Judgement Day (21 May 2011) prediction by Harold Camping (who has since died, on 15 December 2013) on a programme called Compass, a long-running programme on Australia’s ABC covering a range of religious topics and some not so religious. It’s a very secular programme. Its highest-rating episode for many years was Richard Dawkins’ The Root of All Evil (which at the time had never been shown in the US), but that was at least 10 years ago (pre The God Delusion).

Harold Camping hosted a radio programme on American Christian radio and he had a small but committed following. He had arrived at the date by doing calculations based on biblical scripture that, apparently, only he could follow. He had made a prediction before but admitted that he had made a mistake. This time he was absolutely 100% positive that he’d got it right, as he’d rerun and double-checked his calculations a number of times.

What was interesting about this show was the psychology of belief and the severe cognitive dissonance people suffered when it didn’t come to pass. It’s very easy to be dismissive and say these people are gullible but at least some of the ones interviewed came across as intelligent and responsible, not crazies. I confess to being conned by smooth talkers who know how to read people and press the emotional buttons that can make you drop your guard. It’s only in hindsight that one thinks: how could I be so stupid? What I’m talking about is people whom you trust to deliver on something that is in fact a scam. Hopefully, you will learn and be more alert next time; you put it down to experience.

This situation is subtly different in that people accept as truth something that many of us would be sceptical of. It made me think about what criteria we use to consider something true. Many people consider the Bible to contain truths in the form of prophecies, and they will give you examples, challenging you to prove them wrong. They will cite the evidence (like the Dead Sea Scrolls supposedly predicting Christ’s crucifixion 350 years before it happened) and dare you to refute it. In other words, you’re the fool for not accepting something that is clearly described.

I give this example because I had this very discussion with someone recently who was an intelligent professional person. He went so far as to claim that the crucifixion as a form of execution hadn’t even been invented then. As soon as he told me that I knew it had to be wrong: someone doesn’t describe something in detail hundreds of years before anyone thought of it. But some research on Google showed that the Persians invented crucifixion around 400BC, so maybe it was described in the Dead Sea Scrolls – still not a prophecy. I told him that I simply don’t believe in prophecy because it implies that all of history is pre-ordained and I don’t accept that. Chaos theory alone dictates that determinism is pretty much an impossibility (and that’s without considering quantum mechanics).

That’s a short detour, but it illustrates the point that many well educated people believe that the Bible contains truths straight from God, which is the highest authority one can claim. It’s not such a great leap from that belief to God also providing the exact date for the end of the world, if one knows how to decipher His hidden code. I’m always wary of people who claim to know ‘the mind of God’ (unless they’re an atheist like Stephen Hawking).

In the current issue of Philosophy Now (Issue 127, Aug/Sep 2018), Sandy Grant (a philosopher at the University of Cambridge) published an essay titled Dogmas, in which she points out the pitfalls of accepting points of view on ‘authority’ without subjecting them to critical analysis. I immediately thought of climate change, though she doesn’t discuss it specifically, and how many people believe that it is a dogma based on an authority that we can’t question, because said authorities live in academia, a place most of us never visit; and if we did, we wouldn’t speak the language.

People who view the Bible as a source of prophecy have something in common with people who are sceptical of climate change: an ingroup-outgroup mentality. It becomes tribal. In other words, we all listen to people whom we already agree with on a specific subject, and that becomes our main criterion for truth. This is the case with the followers of Harold Camping as well as with people who claim that climate scientists are fraudulent so they can keep their jobs. You think I’m joking, but that’s what many people in Australia believe is the ‘truth’ about climate change.

Of course, one can argue the converse for climate change – that the people who believe in it (like me) are part of their own ingroup, but there are major differences. The people who are warning us about climate change actually know what they’re talking about, in the same way that a structural engineer discussing what caused the WTC towers to collapse would know what they’re talking about (as opposed to a conspiracy theorist).

I wrote a letter to Philosophy Now regarding Grant’s essay, specifically referencing climate change, which, as I’ve already mentioned, she didn’t address. This is an extract relevant to this discussion.

Opponents of climate change would call it dogma… This is a case where we are dependent on expertise that most of us don't have. But this is not the exception; it is, in fact, the norm with virtually all scientific knowledge.

We trust science because it's given us all the infrastructure and tools that we totally depend on to live a normal life in all Western societies around the globe. However, political ideology can suddenly transform this trust into dogma, and dogma, almost by definition, shouldn't be trusted if you're a thinking person, as Grant advises.

Sometimes, what people call dogma isn't dogma, but a lengthy process of investigation and research that has been hijacked and stigmatised by those who oppose its findings.


In both the case of climate change and Harold Camping’s prediction, it’s ultimately evidence that provides ‘truth’. 21 May 2011 came and went without the end of the world, and evidence of climate change is already apparent, with glaciers retreating, ice shelves melting and migratory species adapting; it will become more apparent as time passes, with sea-level rise being the most obvious sign. What’s harder to predict is the time frame and its ultimate impact, not its manifestation.

One of the reasons I’ve become more interested in mathematics as I’ve got older is that it’s a source of objective universal truth that seems to transcend the Universe. This point of view is itself contentious, but I can provide arguments. The most salient being that there will always be mathematical truths that we will never know, yet we know that they exist in some abstract space that can only be accessed by some intelligent being (or possibly a machine).

In science, I know that Einstein’s theories of relativity are true (both of them), not least because the satnav in my car wouldn’t work without them. I also know that quantum mechanics (QM) is true because every electronic device (including the PC I’m using to write this) depends on it. Yet both these theories defy our common sense knowledge of the world.

Truth is elusive and for some people can’t be distinguished from delusion. In both the case of Harold Camping’s prediction and climate change, one’s belief in the ‘truth’ is taken from purported authorities. But ultimately the truth only becomes manifest in hindsight, provided by evidence even ordinary people can’t ignore.

Tuesday, 21 August 2018

Is the world an illusion?

This is the latest Question of the Month in Philosophy Now (Issue 127, August/September 2018). I don't enter them all, but I confess I wrote this one in half an hour, though I spent a lot of time polishing it. Some readers will note that it comprises variations on ideas I've expressed before. The rules stipulate that it must be less than 400 words, so this is 399.


There are two aspects to this question: epistemological and ontological. Dreams are obviously illusory: time and space are often distorted, yet we are completely accepting of these perceptual dislocations until we wake up and recall them. Dreams are also the only examples of solipsism we ever experience. But here’s the thing: how do you know that the so-called reality we all take for granted is not as illusory as a dream? Philosophers often claim that you don’t, in the same way that you don’t know if you’re a brain in a vat.

The standard rendition of solipsism is that you could be the only one, because everyone you know and meet could exist solely in your mind. So the answer to the difference between a dream and reality is the converse. If I meet someone in a dream, I’m the only one who is aware of it. On the other hand, if I meet someone in real life, we can both remember it. We both have a subjective conscious experience that is concordant with our respective memories. This doesn’t happen in a dream. By inference, everyone’s individual subjective experience is not only their particular reality but a reality that they share with others. We’ve all had shared experiences that we individually recall, correlate and mutually confirm.

The world, which I will call the Universe, exists on many levels, from the subatomic to the astronomical. Our normal perception of it covers only one level, intermediate between these extremes. The different realities of scale are deduced through mathematics and empirical evidence, from radio waves collected in arrays of gigantic dishes (at the largest scale) to trails of high-energy particles in the Large Hadron Collider (at the smallest scale). Kant once argued that we can never know ‘the thing-in-itself’, and he was right, because the thing-in-itself changes according to the scale we observe it at.

In the last century we learned that everything we can see and touch is made of atoms that are mostly empty space. It requires advanced mathematics and a knowledge of quantum mechanics (using the Pauli Exclusion Principle) to demonstrate how it is that these atoms (of mostly empty space) don’t allow us to all fall through the floor that we are standing on. So we depend on the illusion that we are not predominantly empty space just to exist.


I wrote a much longer discussion on this issue (almost 2 years ago) in response to an academic paper that claims only conscious agents exist, and that nothing else (including spacetime) exists 'unperceived'.