Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Saturday, 22 December 2018

When real life overtakes fiction

I occasionally write science fiction; a genre I chose out of fundamental laziness. I knew I could write in that medium without having to do any research to speak of. I liked the idea of creating the entire edifice - world, story and characters - from my imagination with no constraints except the bounds of logic.

There are many subgenres of sci-fi: extraterrestrial exploration, alien encounters, time travel, robots & cyborgs, inter-galactic warfare, genetically engineered life-forms; but most SF stories, including mine, are a combination of some of these. Most sci-fi can be divided into 2 broad categories – space opera and speculative fiction, sometimes called hardcore SF. Space operas, exemplified by the Star Wars franchise, Star Trek and Dr Who, generally take more liberties with the science part of science fiction.

I would call my own fictional adventures science-fantasy, in the mould of Frank Herbert’s Dune series or Ursula K Le Guin’s fiction; though it has to be said, I don’t compete with them on any level.

I make no attempt to predict the future, even though the medium seems to demand it. Science fiction is a landscape that I use to explore ideas in the guise of a character-oriented story. I discovered, truly by accident, that I write stories about relationships. Not just relationships between lovers, but between mother and daughter, daughter and father(s), protagonist and nemesis, protagonist and machine.

One of the problems with writing science fiction is that the technology available today seems to overtake what one imagines. In my fiction no one uses a mobile phone. I can see a future where people can just talk to someone in the ether, because they can connect in their home or in their car, without a device per se. People can connect via a holographic form of Skype, which means they can have a meeting with someone in another location. We are already doing this, of course, and variations on this theme have been used in Star Wars and other space operas. But most of the interactions I describe are very old-fashioned face-to-face, because that's still the best way to tell a story.

If you watch (or read) crime fiction you’ll generally find it’s very suspenseful with violence not too far away. But if you analyse it, you’ll find it’s a long series of conversations, with occasional action and most of the violence occurring off-screen (or off-the-page). In other words, it’s more about personal interactions than you realise, and that’s what generally attracts you, probably without you even knowing it.

This is a longwinded introduction to explain why I am really no better qualified to predict future societies than anyone else. I subscribe to New Scientist and The New Yorker, both of which give insights into the future by examining the present. In particular, I recently read an article in The New Yorker (Dec. 17, 2018) by David Owen about facial recognition, called Here’s Looking At You; the technology is already being used by police forces in America to target arrests without any transparency. Mozilla (in a podcast last year) described how a man had been misidentified twice, was arrested and subsequently lost his job and his career. I also read in last week’s New Scientist (15 Dec. 2018) how databases are being developed to know everything about a person, even what TV shows they watch and their internet use. It’s well known that in China there is a credit-point system that determines what buildings you can access and what jobs you can apply for. China has the most surveillance cameras anywhere in the world, and they intend to combine them with the latest facial recognition software.

Yuval Harari, in Homo Deus, talks about how algorithms are going to take over our lives, but I think he missed the mark. We are slowly becoming more Orwellian, with social media already determining election results. In the same issue of New Scientist, journalist Chelsea Whyte asks: Is it time to unfriend the social network? with specific reference to Facebook’s recently exposed track record. According to her: “Facebook’s motto was once ‘move fast and break things.’ Now everything is broken.” Quoting from the same article:

Now, the UK parliament has published internal Facebook emails that expose the mindset inside the company. They reveal discussions among staff over whether to collect users’ phone call logs and SMS texts through its Android app. “This is a pretty high-risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it.” (So said Product Manager Michael LeBeau in an email from 2015)

Even without Edward Snowden’s whistle-blowing exposé, we know that governments the world over are collecting our data, because the technological ability to do that is now available. We are approaching a period in our so-called civilised development where we all have an on-line life (if you are reading this) and it can be accessed by governments and corporations alike. I’ve long known that anyone can learn everything they need to know about me from my computer, and increasingly they don’t even need the computer.

In one of my fictional stories, I created a dystopian world where everyone had a ‘chip’ that allowed all conversations to be recorded, so there was literally no privacy. We are fast approaching that scenario in some totalitarian societies. In Communist China under Mao, and the Soviet Union under Stalin, people found the circle of people they could trust got smaller and smaller. Now, with AI capabilities and internet-wide databases, privacy is becoming illusory. With constant surveillance, all subversion can be tracked and subsequently prosecuted. Someone once said that only societies that are open to new ideas progress. If you live in a society where new ideas are censored, then you get stagnation.

In my latest fiction I’ve created another autocratic world, where everyone is tracked because everywhere they go they interact with very realistic androids who act as servants, butlers and concierges, but, in reality, keep track of what everyone’s doing. The only ‘futuristic’ aspects of this are the androids and the fact that I’ve set it on some alien world. (My worlds aren’t terra-formed; people live in bubbles that create a human-friendly environment.)

After reading these very recent articles in New Scientist and TNY, I’ve concluded that our world is closer to the one I’ve created in my imagination than I thought.


Addendum 1: This is a podcast about so-called Surveillance Capitalism, from Mozilla. Obviously, I use Google and I'm also on Facebook, but I don't use Twitter. Am I part of the problem or part of the solution? The truth is I don't know. I try to make people think and share ideas. I have political leanings, obviously, but they're transparent. Foremost, I believe that if you can't put your name to something, you shouldn't post it.

Thursday, 22 November 2018

The search for ultimate truth is unattainable

Someone lent me a really good philosophy book called Ultimate Questions by Bryan Magee. To quote directly from the back flyleaf: “Bryan Magee has had an unusually multifaceted career as a professor of philosophy, music and theatre critic, BBC broadcaster and member of [British] Parliament.” It so happens I have another of his books, The Story of Philosophy, which is really a series of interviews with philosophers about philosophers, and I expect it’s a transcription of radio podcasts. Magee was over 80 when he wrote Ultimate Questions, which must have been prior to 2016, when the book was published.

This is a very thought-provoking book, which is what you'd expect from a philosopher. To a large extent, and to my surprise, Magee and I have come to similar positions on fundamental epistemological and ontological issues, albeit by different paths. However, there is also a difference, possibly a divide, which I’ll come to later.

Where to start? I’ll start at the end because it coincides with my beginning. It’s not a lengthy tome (120+ pages) and it comprises 7 chapters or topics, which are really discussions. In the last chapter, Our Predicament Summarized, he summarises his view, elaborated over the best part of the book, of an inner and an outer world, both of which elude full comprehension.

As I’ve discussed previously, the inner and outer world is effectively the starting point for my own world view. The major difference between Magee and myself is the paths we’ve taken. My path has been a scientific one, in particular the science of physics, encapsulating as it does the extremes of the physical universe, from the cosmos to the infinitesimal.

Magee’s path has been through the empirical philosophers, from Locke to Hume to Kant to Schopenhauer, eventually arriving at Wittgenstein. His most salient and persistent point is that our belief that we can comprehend everything there is to comprehend about the ‘world’ is a delusion. He tells an anecdotal story of when he was a student of philosophy and he was told that the word ‘World’ comprised not only what we know but everything we can know. He makes the point, that many people fail to grasp, that there could be concepts that are beyond our grasp in the same way that there are concepts we do understand but are nevertheless beyond the comprehension of the most intelligent of chimpanzees or dolphins or any creature other than human. None of these creatures can appreciate the extent of the heavens the way we can or even the way our ancient forebears could. Astronomy has a long history. Even indigenous cultures, without the benefit of script, have learned to navigate long distances with the aid of the stars. We have a comprehension of the world that no other creature has (on this planet) so it’s quite reasonable to assume that there are aspects of our world that we can’t imagine either.

Because my path to philosophy has been through science, I have a subtly different appreciation of this very salient point. I wrote a post based on Noson Yanofsky’s The Outer Limits of Reason, which addresses this very issue: there are limits in logic, mathematics and science, and there always will be. But I’m under the impression that Magee takes this point further. He expounds, better than anyone else I’ve read, on the actual limits to what our brains can not only perceive but conceptualise, which leads to the possibility, ignored by most of us, that there are things completely and always beyond our ken.

As Magee himself states, this opens the door to religion, which he discusses at length, yet he gives this warning: “Anyone who sets off in honest and serious pursuit of truth needs to know that in doing that he is leaving religion behind.” It’s a bit unfair to provide this quote out of context, as it comes at the end of a lengthy discussion; nevertheless, it’s the word ‘truth’ that gives his statement cogency. My own view is that religion is not an epistemology, it’s an experience. What’s more, it’s an experience (including the experience of God) that is unique to the person who has it and can’t be shared with anyone else. This puts individual religious experience at odds with institutionalised religions, and as someone pointed out (Yuval Harari, from memory), this means that the people who have religious experiences are all iconoclasts.

I’m getting off the point, but it’s relevant inasmuch as arguments involving science and religion have no common ground. I find them ridiculous because they usually involve pitting an ancient text (of so-called prophecy) against modern scientific knowledge and all the technology it has propagated, which we all rely upon for our day-to-day existence. If religion ever had an epistemological role it has long been usurped.

On the other hand, if religion is an experience, it is part of the unfathomable which lies outside our rational senses and is not captured by words. Magee contends that the best one can say about an afterlife or the existence of a God is that ‘we don’t know’. He calls himself an agnostic, not just in the narrow sense relating to a Deity, but in the much broader sense of acknowledging our ignorance. He discusses these issues in much more depth than my succinct paraphrasing implies. He gives the example of music as something we experience that can’t be expressed in words. Many people have used music as an analogy for religious experience, but, as Magee points out, music has a material basis in instruments and a score and sound waves, whereas religion does not.

Coincidentally, someone today showed me a text on Socrates, from a much larger volume on classical Greece. Socrates famously proclaimed his ignorance as the foundation of his wisdom. In regard to science, he said: “Each mystery, when solved, reveals a deeper mystery.” This statement is so prophetic; it captures the essence of science as we know it today, some 2500 years after Socrates. It’s also the reason I agree with Magee.

John Wheeler conceived a metaphor that I envisaged independently of him. (Further evidence that I’ve never had an original idea.)

We live on an island of knowledge surrounded by a sea of ignorance.
As our island of knowledge grows, so does the shore of our ignorance.


I contend that the island is science and the shoreline is philosophy, which implies that philosophy feeds science, but also that they are inseparable. By philosophy, in this context, I mean epistemology.

To give an example that confirms both Socrates and Wheeler, the discovery and extensive research into DNA provides both evidence and a mechanism for biological evolution from the earliest life forms to the most complex; yet the emergence of DNA as providing ‘instructions’ for the teleological development of an organism is no less a mystery looking for a solution than evolution itself.

The salient point of Wheeler's metaphor is that the sea of ignorance is infinite and so the island grows but is never complete. In his last chapter, Magee makes the point that truth (even in science) is something we progress towards without attaining. “So rationality requires us to renounce the pursuit of proof in favour of the pursuit of progress.” (My emphasis.) However, ‘the pursuit of proof’ is something we’ve done successfully in mathematics ever since Euclid. It is on this point that I feel Magee and I part company.

Like many philosophers, when discussing epistemology, Magee hardly mentions mathematics. Only once, as far as I can tell, towards the very end (in the context of the quote I referenced above about ‘proof’), does he include it in the same sentence as science, logic and philosophy as inherited from Descartes, where he has this to say: “It is extraordinary to get people, including oneself, to give up this long-established pursuit of the unattainable.” He is right inasmuch as there will always be truths, including mathematical truths, that we can never know (refer to my recent post on Godel, Turing and Chaitin). But there are also innumerable (mathematical) truths that we have discovered and will continue to discover into the future (part of the island of knowledge). As Freeman Dyson points out, whilst discussing the legacy of Srinivasa Ramanujan's genius, 'Mathematics is forever'. In other words, mathematical truths don't become obsolete in the same way that science does.

I don’t know what Magee’s philosophical stance is on mathematics, but not giving it any special consideration tells me something already. I imagine, from his perspective, it serves no special epistemological role, except to give quantitative evidence for the validity of competing scientific theories.

In one of his earlier chapters, Magee talks about the ‘apparatus’ we have in the form of our senses and our brain that provide a limited means to perceive our external world. We have developed technological means to augment our senses; microscopes and telescopes being the most obvious. But we now have particle accelerators and radio telescopes that explore worlds we didn’t even know existed less than a century ago.

Mathematics, I would contend, is part of that extended apparatus. Riemann’s geometry allowed Einstein to perceive a universe that was ‘curved’ and Euler’s equation allowed Schrodinger to conceive a wave function. Both of these mathematically enhanced ‘discoveries’ revolutionised science at opposite ends of the epistemological spectrum: the cosmological and the subatomic.

Magee rightly points out our almost insignificance in both space and time as far as the Universe is concerned. We are figuratively like the blink of an eye on a grain of sand, yet reality has no meaning without our participation. In reference to the internal and external worlds that formulate this reality, Magee has this to say: “But then the most extraordinary thing is that the world of interaction between these two unintelligibles is rationally intelligible.” Einstein famously made a similar point: "The most incomprehensible thing about the Universe is that it’s comprehensible.”

One can’t contemplate that statement, especially in the context of Einstein’s iconic achievements, without considering the specific and necessary role of mathematics. Raymond Tallis, who writes a regular column in Philosophy Now, and for whom I have great respect, nevertheless downplays the role of mathematics. He once made the comment that mathematical Platonists (like me) ‘make the error of confusing the map for the terrain’. I wrote a response, saying: ‘the use of that metaphor infers the map is human-made, but what if the map preceded the terrain.’ (The response wasn’t published.) The Universe obeys laws that are mathematically in situ, as first intimated by Galileo, given credence by Kepler, Newton, Maxwell; then Einstein, Schrodinger, Heisenberg and Bohr.

I’d like to finish by quoting Paul Davies:

We have a closed circle of consistency here: the laws of physics produce complex systems, and these complex systems lead to consciousness, which then produces mathematics, which can then encode in a succinct and inspiring way the very underlying laws of physics that gave rise to it.

This, of course, is another way of formulating Roger Penrose’s 3 Worlds, and it’s the mathematical world that is, for me, the missing piece in Magee’s otherwise thought-provoking discourse.


Last word: I’ve long argued that mathematics determines the limits of our knowledge of the physical world. Science to date has demonstrated that Socrates was right: the resolution of one mystery invariably leads to another. And I agree with Magee that consciousness is a phenomenon that may elude us forever.

Addendum: I came across this discussion between Magee and Harvard philosopher, Hilary Putnam, from 1977 (so over 40 years ago), where Magee exhibits a more nuanced view on the philosophy of science and mathematics (the subject of their discussion) than I gave him credit for in my post. Both of these men take their philosophy of science from philosophers, like Kant, Descartes and Hume, whereas I take my philosophy of science from scientists: principally, Paul Davies, Roger Penrose and Richard Feynman, and to a lesser extent, John Wheeler and Freeman Dyson; I believe this is the main distinction between their views and mine. They even discuss this 'distinction' at one point, with the conclusion that scientists, and particularly physicists, are stuck in the past - they haven't caught up (my terminology, not theirs). They even talk about the scientific method as if it's obsolete or anachronistic, though again, they don't use those specific terms. But I'd point to the LHC (built decades after this discussion) as evidence that the scientific method is alive and well, and it works. (I intend to make this a subject of a separate post.)

Friday, 9 November 2018

Can AI be self-aware?

I recently got involved in a discussion on Facebook with a science fiction group on the subject of artificial intelligence. Basically, it started with a video claiming that a robot had discovered self-awareness, which is purportedly an early criterion for consciousness. But if you analyse what they actually achieved, it’s clever sleight-of-hand at best and pseudo self-awareness at worst. The sleight-of-hand is to turn fairly basic machine logic into an emotive gesture that fools humans (like you and me) into thinking the robot looks and acts like a human, which I’ll describe in detail below.

And it’s pseudo self-awareness in that it’s make-believe, in the same way that pseudo science is make-believe science, meaning pretend science, like creationism. We have a toy that we pretend exhibits self-awareness. So it is we who do the make-believe and pretending, not the toy.

If you watch the video you’ll see that they have 3 robots and they give them a ‘dumbing pill’ (meaning a switch was pressed) so they can’t talk. But one of them is not dumb, and they are asked: “Which pill did you receive?” One of them dramatically stands up and says: “I don’t know”. But then it waves its arm and says, “I’m sorry, I know now. I was able to prove I was not given the dumbing pill.”

Obviously, the entire routine could have been programmed, but let’s assume it’s not. It’s a simple TRUE/FALSE logic test. The so-called self-awareness is a consequence of the T/F test being self-referential – whether it can talk or not. It verifies that it’s False because it hears its own voice. Notice the human-laden words like ‘hears’ and ‘voice’ (my anthropomorphic phrasing). Basically, it has a sensor to detect sound that it makes itself, which logically determines whether the statement, it’s ‘dumb’, is true or false. It says, ‘I was not given a dumbing pill’, which means its sound was not switched off. Very simple logic.
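To make the simplicity of this concrete, here is a minimal sketch in Python of the logic as I’ve described it. The class, the muting flag and the phrasing are entirely my own invention for illustration; they’re not the actual robot’s software.

```python
# A minimal sketch (my own illustration, not the actual robot's code) of the
# self-referential TRUE/FALSE test described above.

class Robot:
    def __init__(self, muted: bool):
        self.muted = muted              # the 'dumbing pill': speech switched off

    def speak(self, phrase: str) -> bool:
        """Attempt to speak; return True if the robot detects its own voice."""
        if self.muted:
            return False                # no sound produced, nothing to 'hear'
        print(phrase)
        return True                     # its sound sensor picks up its own output

    def which_pill(self) -> str:
        heard_self = self.speak("I don't know")
        # The self-referential step: hearing its own voice falsifies 'I am dumb'.
        if heard_self:
            return "I was able to prove I was not given the dumbing pill."
        return ""                       # a muted robot can't report anything

robots = [Robot(muted=True), Robot(muted=True), Robot(muted=False)]
for robot in robots:
    conclusion = robot.which_pill()
    if conclusion:
        robot.speak(conclusion)         # only the non-muted robot 'passes' the test
```

As the sketch shows, the ‘self-awareness’ is a dozen lines of branching logic hung on a sound sensor; nothing in it requires consciousness.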

I found an on-line article by Steven Schkolne (PhD in Computer Science at Caltech), so someone with far more expertise in this area than me; yet I found his arguments for so-called computer self-awareness a bit misleading, to say the least. He talks about 2 different types of self-awareness (specifically for computers): external and internal. An example of external self-awareness is an iPhone knowing where it is, thanks to GPS. An example of internal self-awareness is a computer responding to someone touching the keyboard. He argues that “machines, unlike humans, have a complete and total self-awareness of their internal state”. For example, a computer can find every file on its hard drive and even tell you its date of origin, which is something no human can do.

From my perspective, this is a bit like the argument that a thermostat can ‘think’. ‘It thinks it’s too hot or it thinks it’s too cold, or it thinks the temperature is just right.’ I don’t know who originally said that, but I’ve seen it quoted by Daniel Dennett, and I’m still not sure if he was joking or serious.

Computers use data in a way that humans can’t and never will, which is why their memory recall is superhuman compared to ours. Anyone who can even remotely recall data like a computer is called a savant, like the famous ‘Rain Man’. The point is that machines don’t ‘think’ like humans at all. I’ll elaborate on this point later. Schkolne’s description of self-awareness for a machine has no cognitive relationship to our experience of self-awareness. As Schkolne says himself: “It is a mistake if, in looking for machine self-awareness, we look for direct analogues to human experience.” Which leads me to argue that what he calls self-awareness in a machine is not self-awareness at all.

A machine accesses data, like GPS data, which it can turn into a graphic of a map or just numbers representing co-ordinates. Does the machine actually ‘know’ where it is? You can ask Siri (as Schkolne suggests) and she will tell you, but he acknowledges that it’s not Siri’s technology of voice recognition and voice replication that makes your iPhone self-aware. No, the machine creates a map, so you know where ‘You’ are. Logically, a machine, like an aeroplane or a ship, could navigate over large distances with GPS with no humans aboard, like drones do. That doesn’t make them self-aware; it makes their logic self-referential, like the toy robot in my introductory example. So what Schkolne calls self-awareness, I call self-referential machine logic. Self-awareness in humans is dependent on consciousness: something we experience, not something we deduce.

And this is the nub of the argument. The argument goes that if self-awareness amongst humans and other species is a consequence of consciousness, then machines exhibiting self-awareness must be the first sign of consciousness in machines. However, self-referential logic, coded into software, doesn’t require consciousness; it just requires machine logic suitably programmed. I’m saying that the argument is back-to-front. Consciousness can definitely imbue self-awareness, but a self-referential logic coded machine does not reverse the process and imbue consciousness.

I can extend this argument more generally to contend that computers will never be conscious for as long as they are based on logic. What I’d call problem-solving logic came late, evolutionarily, in the animal kingdom. Animals are largely driven by emotions and feelings, which I argue came first in evolutionary terms. But as intelligence increased so did social skills, planning and co-operation.

Now, insect colonies seem to give the lie to this. They are arguably closer to how computers work, based on algorithms that are possibly chemically driven (I actually don’t know). The point is that we don’t think of ants and bees as having human-like intelligence. A termite nest is an organic marvel, yet we don’t think the termites actually plan its construction the way a group of humans would. In fact, some would probably argue that insects don’t even have consciousness. Actually, I think they do. But to give another well-known example, I think the dance that bees do is ‘programmed’ into their DNA, whereas humans would have to ‘learn’ it from their predecessors.

There is a way in which humans are like computers, which I think muddies the waters, and leads people into believing that the way we think and the way machines ‘think’ is similar if not synonymous.

Humans are unique within the animal kingdom in that we use language like software; we effectively ‘download’ it from generation to generation and it limits what we can conceive and think about, as Wittgenstein pointed out. In fact, without this facility, culture and civilization would not have evolved. We are the only species (that we are aware of) that develops concepts and then manipulates them mentally because we learn a symbol-based language that gives us that unique facility. But we invented this with our brains; just as we invent symbols for mathematics and symbols for computer software. Computer software is, in effect, a language and it’s more than an analogy.

We may be the only species that uses symbolic language, but so do computers. Note that computers are like us in this respect, rather than us being like computers. With us, consciousness is required first, whereas with AI, people seem to think the process can be reversed: if you create a machine logic language with enough intelligence, then you will achieve consciousness. It’s back-to-front, just as self-referential logic creating self-aware consciousness is back-to-front.

I don't think AI will ever be conscious or sentient. There seems to be an assumption that if you make a computer more intelligent it will eventually become sentient. But I contend that consciousness is not an attribute of intelligence. I don't believe that more intelligent species are more conscious or more sentient. In other words, I don't think the two attributes are concordant, even though there is an obvious dependency between consciousness and intelligence in animals. But it’s a one-way dependency; if consciousness were dependent on intelligence then computers would already be conscious.


Addendum: The so-called Turing test is really a test for humans, not robots, as this 'interview' with 'Sophia' illustrates.

Thursday, 11 October 2018

My philosophy in 24 dot points

A friend (Erroll Treslan) posted on Facebook a link to a matrix that attempts to encapsulate the history of (Western) philosophy by listing the most influential people and linking their ideas, either conflicting or in agreement.

I decided to attempt the same for myself and have included those people who, I believe, influenced me, which is not to say they agree with me. In the case of some of my psychological points I haven’t cited anyone, as I’ve forgotten where my beliefs came from (in those cases).

  • There are 3 worlds: physical, mental and mathematical. (Penrose)
  • Consciousness exists in a constant present; classical physics describes the past and quantum mechanics describes the future. (Schrodinger, Bragg, Dyson)
  • Reality requires both consciousness and a physical universe. You can have a universe without consciousness, which was the case in the past, but it has no meaning and no purpose. (Barrow, Davies)
  • Purpose has evolved but the Universe is not teleological in that it is not determinable. (Davies)
  • There is a cosmic anthropic principle; without sentient beings there might as well be nothing. (Carter, Barrow, Davies)
  • Mathematics exists independently from humans and the Universe. (Barrow, Penrose, Pythagoras, Plato)
  • There will always be mathematical truths we don’t know. (Godel, Turing, Chaitin)
  • Mathematics is not a language per se. It starts with the prime numbers, called the 'atoms of mathematics', yet extends to infinity and the transcendental. (Euclid, Euler, Riemann)
  • The Universe created the means to understand itself, with mathematics the medium and humans the only known agents. (Einstein, Wigner)
  • The Universe obeys laws dependent on fine-tuned mathematical parameters. (Hoyle, Barrow, Davies)
  • The Universe is not a computer; chaos rules and is not predictable. (Stewart, Gleick)
  • The brain does not run on algorithms; there is no software. (Penrose, Searle)
  • Human language is analogous to software because we ‘download’ it from generation to generation and it ‘mutates’; if I can mix my metaphors. (Dawkins, Hofstadter)
  • We think and conceptualise in a language. Axiomatically, this limits what we can conceive and think about. (Wittgenstein)
  • We only learn something new when we integrate it into what we already know. (Wittgenstein)
  • Humans have the unique ability to nest concepts within concepts ad-infinitum, which mirror the physical world. (Hofstadter)
  • Morality is largely subjective, dependent on cultural norms but malleable by milieu, conditioning and cognitive dissonance. (Mill, Zimbardo)
  • It is inherently human to form groups with an ingroup-outgroup mentality.
  • Evil requires the denial of humanity in others.
  • Empathy is the key to social reciprocity at all levels of society. (Confucius, Jesus)
  • Quality of life is dependent on our interaction with others from birth to death. (Aristotle, Buddha)
  • Wisdom comes from adversity. The premise of every story ever told is about a protagonist dealing with adversity – it’s a universal theme (Frankl, I Ching).
  • God is an experience that is internal, yet is perceived as external. (Feuerbach)
  • Religion is the mind’s quest to find meaning for its own existence.

Addendum: I’ve changed it from 23 points to 24 by adding point 22. It’s actually a belief I’ve held for some time. They are all ‘beliefs’ except point 7, which arises from a theorem.

Monday, 3 September 2018

Is the world continuous or discrete?

There is an excellent series on YouTube called ‘Closer to Truth’, where the host, Robert Lawrence Kuhn, interviews some of the cleverest people on the planet (about existential and epistemological issues) in such a way that ordinary people, like you and me, can follow. I understand from Wikipedia that it’s really a television series started in 2000 on America’s PBS.

In an interview with Gregory Chaitin, he asks the above question, which made me go back and re-read Chaitin’s book, Thinking about Godel and Turing, which I originally bought and read over a decade ago and then posted about on this blog (not long after I created it). It’s really a collection of talks and abridged papers given by Chaitin from 1970 to 2007, so there’s a lot of repetition but also an evolution in his narrative and ideas. Reading it for the second time (from cover to cover) over a decade later has the benefit of using the filter of all the accumulated knowledge that I’ve acquired in the interim.

More than one person (Umberto Eco and Jeremy Lent, for example) has wondered if the discreteness we find in the world, and which we logically apply to mathematics, is a consequence of a human projection rather than an objective reality. In other words, is it an epistemological bias rather than an ontological condition? I’ll return to this point later.

Back to Chaitin’s opus: he effectively takes us through the same logical and historical evolution over and over again, which ultimately leads to the same conclusions. I’ll summarise briefly. In 1931, Kurt Godel proved a theorem that effectively tells us that, within a formal axiom-based mathematical system, there will always be mathematical truths that can’t be proved. Then in 1936, Alan Turing proved, with a thought experiment that presaged the modern computer, that there will always be machine calculations that may never stop, and we can’t predict whether they will or not. For example, Riemann’s hypothesis can be checked by an algorithm to whatever limit you like (and is being checked somewhere right now, probably) but you can never know in advance if the algorithm will ever stop (by finding a false result). As Chaitin points out, this is an extension of Godel’s theorem, and Godel’s theorem can be deduced from Turing’s.
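To illustrate Turing’s point with something easier to code than Riemann’s hypothesis, here is a sketch in Python of the kind of open-ended search he was talking about. Goldbach’s conjecture is my own stand-in example: the program halts only if it finds a counterexample, and no general method can tell us in advance whether it ever will.

```python
# A sketch of the kind of open-ended calculation Turing's theorem applies to.
# Goldbach's conjecture stands in here for Riemann's hypothesis: the program
# halts only if it finds an even number that is NOT the sum of two primes,
# and there is no general way to know in advance whether it will ever stop.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def satisfies_goldbach(n: int) -> bool:
    """True if even n is the sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

n = 4
while True:                 # runs indefinitely unless a counterexample turns up
    if not satisfies_goldbach(n):
        print(f"Counterexample found: {n}")   # this line may never execute
        break
    n += 2
```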

Then Chaitin himself proved, by inventing (or discovering) a mathematical device called Omega (Ω), that there are innumerable numbers that can never be completely calculated (Omega gives the probability of a Turing program halting). In fact, there are more incomputable numbers than computable ones, even though both sets are infinite in extent: the computable Reals are countably infinite while the incomputable Reals are uncountably infinite. I’ve mentioned this previously when discussing Noson Yanofsky’s excellent book, The Outer Limits of Reason; What Science, Mathematics, and Logic CANNOT Tell Us. Chaitin claims that this proves that Godel’s Incompleteness Theorem is not some aberration, but is part of the foundation of mathematics – there are infinitely more numbers that can’t be calculated than those that can.

So that’s the gist of Chaitin’s book, but he draws some interesting conclusions on the side, so-to-speak. For a start, he argues that maths should be done more like physics, and maybe we should accept some unproved conjectures (like Riemann’s) as new axioms, as one would in physics. In fact, this is happening almost by default, inasmuch as there already exist new theorems that are dependent on Riemann’s conjecture being true. In other words, Riemann’s hypothesis has effectively morphed into a mathematical caveat so people can explore its consequences.

The other area of discussion that Chaitin gets into, which is relevant to this discussion, is whether the Universe is like a computer. He cites Stephen Wolfram (who invented Mathematica) and Edward Fredkin.

According to Pythagoras everything is number, and God is a mathematician… However, now a neo-Pythagorean doctrine is emerging, according to which everything is 0/1 bits, and the world is built entirely out of digital information. In other words, now everything is software, God is a computer programmer, not a mathematician, and the world is a giant information-processing system, a giant computer [Fredkin, 2004, Wolfram, 2002, Chaitin, 2005].

Carlo Rovelli also argues that the Universe is discrete, but for different reasons. It’s discrete because quantum mechanics (QM) has a Planck limit for both time and space, which would suggest that even space-time is discrete. Therefore it would seem to lend itself to being made up of ‘bits’. This fits in with the current paradigm that QM and therefore reality, is really about ‘information’ and information, as we know, comes in ‘bits’.

Chaitin, at one point, goes so far as to suggest that the Universe calculates its future state from the current state. This is very similar to Newton’s clockwork universe, whereby Laplace famously claimed that given the position of every particle in the Universe and all the relevant forces, one could, in principle, ‘read the future just as readily as the past’. These days we know that’s not correct, because we’ve since discovered QM, but people are arguing that a QM computer could do the same thing. David Deutsch is one who argues that (in principle).

There is a fundamental issue with all this that everyone seems to have either forgotten or ignored. Prior to the last century, a man called Henri Poincare discovered some mathematical gremlins that seemed of little relevance to reality, but eventually led to a physics discipline which became known as chaos theory.

So after re-reading Chaitin’s book, I decided to re-read Ian Stewart’s erudite and deeply informative book, Does God Play Dice? The New Mathematics of Chaos.

Not quite a third of the way through, Stewart introduces Chaitin’s theorem (of incomputable numbers) to demonstrate why the initial conditions in chaos theory can never be computed, which I thought was a very nice and tidy way to bring the 2 philosophically opposed ideas together. Chaos theory effectively tells us that a computer can never predict the future evolvement of the Universe, and it’s Chaitin’s own theorem which provides the key.
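A toy example makes the connection concrete. The logistic map below is a standard chaotic system (my own illustration, not Stewart’s): it is fully deterministic, yet two starting values differing by one part in a trillion soon diverge completely, which is why an initial condition that can’t be computed exactly defeats long-term prediction.

```python
# The logistic map x -> 4x(1 - x): deterministic, but sensitive to initial
# conditions. Two starting points differing by 1e-12 diverge within ~40 steps.

x1, x2 = 0.3, 0.3 + 1e-12

for step in range(1, 61):
    x1 = 4 * x1 * (1 - x1)
    x2 = 4 * x2 * (1 - x2)
    if step % 10 == 0:
        print(f"step {step:2d}: difference = {abs(x1 - x2):.9f}")

# By around step 40 the difference is of order 1: every digit of the initial
# condition eventually matters, so incomputable initial conditions make the
# future incomputable too, exactly as Stewart uses Chaitin's theorem to argue.
```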

At another point, Stewart quips that God uses an analogue computer. He’s referring to the fact that most differential equations (used by scientists and engineers) are linear whilst nature is clearly nonlinear.

Today’s science shows that nature is relentlessly nonlinear. So whatever God deals with… God’s got an analogue computer as versatile as the entire universe to play with – in fact, it is the entire universe. (Emphasis in the original.)

As all scientists know (and Yanofsky points out in his book), we mostly use statistical methods to understand nature’s dynamics, not the motion of individual particles, which would be impossible. Erwin Schrodinger made a similar point in his excellent tome, What is Life? To give just one example that most people are aware of: radioactive decay (an example Schrodinger used). Statistically, we know the half-lives of radioactive isotopes, which follow a precise exponential rule, but no one can predict the decay of an individual atom.
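To see how a precise statistical law coexists with individual unpredictability, here is a small simulation (my own illustration, with a hypothetical isotope) in Python.

```python
# Each atom decays at random, yet the aggregate tracks the exponential rule
# N(t) = N0 * (1/2)**(t / half_life). The isotope here is hypothetical.

import random

N0, half_life = 100_000, 10.0            # 100,000 atoms, half-life of 10 time units
p_step = 1 - 0.5 ** (1 / half_life)      # per-step decay probability per atom

atoms = N0
for t in range(1, 31):
    atoms -= sum(1 for _ in range(atoms) if random.random() < p_step)
    if t % 10 == 0:
        predicted = N0 * 0.5 ** (t / half_life)
        print(f"t = {t}: simulated {atoms}, exponential rule predicts {predicted:.0f}")

# No one can say which atom decays when, but the totals follow the half-life
# law to within a fraction of a percent: Schrodinger's point exactly.
```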

Whilst on the subject of Schrodinger, his eponymous equation is both linear and deterministic, which seems to contradict the discrete and probabilistic effects of QM. Perhaps that is why Carlo Rovelli contends that Schrodinger’s wavefunction has misled our attempts to understand QM in reality.

Roger Penrose explicates QM in phases: U, R and C (he always displays them bold), depicting the wave function phase; the measurement phase; and the classical physics phase. Logically, Schrodinger’s wave function only exists in the U phase, prior to measurement or observation. If it wasn’t linear you couldn’t add the waves together (of all possible paths) which is essential for determining the probabilities and is also fundamental to QED (which is the latest iteration of QM). The fact that it’s deterministic means that it can calculate symmetrically forward and backward in time.
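Why linearity matters can be shown with a toy two-path calculation (my own illustration, not Penrose’s): the amplitudes for the paths are added first, and only then squared to get a relative probability, which is what produces interference.

```python
# A toy two-path interference sum (my own illustration). Linearity lets us add
# the complex amplitudes of the two paths; the (relative) probability is the
# squared magnitude of the sum, not the sum of the separate probabilities.

import cmath

def path_amplitude(phase: float) -> complex:
    return cmath.exp(1j * phase) / 2 ** 0.5   # unit-strength path with a phase

for delta in (0.0, cmath.pi / 2, cmath.pi):
    total = path_amplitude(0.0) + path_amplitude(delta)   # the linear step
    print(f"phase difference {delta:.2f}: relative probability {abs(total)**2:.2f}")

# Output runs from 2.0 (constructive) through 1.0 down to 0.0 (destructive).
# Without a linear wave equation the amplitudes couldn't be added like this,
# and the interference pattern central to QED would disappear.
```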

My own take on this is that QM and classical physics obey different rules, and the rules for classical physics are chaos, which is neither predictable nor linear. Both lead to unpredictability, but for different reasons and using different mathematics. Stewart has argued that just maybe you could describe QM using chaos theory, and David Deutsch has argued the opposite: that you could use the many-worlds interpretation of QM to explain chaos theory. I think they’re both wrong-headed, but I’m the first to admit that all these people know far more than me. Freeman Dyson (one of the smartest physicists not to win a Nobel Prize) is the only other person I know who believes that maybe QM and classical physics are distinct. He’s pointed out that classical physics describes events in the past and QM provides future probabilities. It’s not a great leap from there to suggest that the wavefunction exists in the future.

You may have noticed that I’ve wandered away from my original question, so maybe I should wander my way back. In my introduction, I mentioned the epistemological point, considered by some, that maybe our employment of mathematics, which is based on integers, has made us project discreteness onto the world.

Chaitin’s theorem demonstrates that most of mathematics is not discrete at all. In fact, he cites his hero, Gottfried Leibniz: most of mathematics is ‘transcendental’, which means it’s beyond our intellectual grasp. This turns the general perception that mathematics is a totally logical construct on its head. We access mathematics using logic, but if there is an uncountable infinity of Reals that are not computable, then, logically, they are not accessible to logic, including computer logic. This is a consequence of Chaitin’s own theorem, yet he argues that this is precisely why such mathematics can’t be part of reality.

In fact, Chaitin would argue that it’s because of that inaccessibility that a discrete universe makes sense. In other words, a discrete universe would be computable. However, chaos theory suggests God would have to keep resetting his parameters. (There is such a thing as ‘chaotic control’, called ‘Proportional Perturbation Feedback’, PPF, which is how pacemakers work.)

Ian Stewart has something to say on this, albeit while talking about something else. He makes the valid point that there is a limit to how many decimal places a computer can use, which has practical consequences:

The philosophical point is that the discrete computer model we end up with is not the same as the discrete model given by atomic physics.

Continuity uses calculus, as in the case of Schrodinger’s equation (referenced above) but also Einstein’s field equations, and calculus uses infinitesimals to maintain continuity mathematically. A computer doing calculus ‘cheats’ (as Stewart points out) by adding differences quite literally.

This leads Stewart to make the following observation:

Computers can work with a small number of particles. Continuum mechanics can work with infinitely many. Zero or infinity. Mother Nature slips neatly into the gap between the two.

Wolfram argues that the Universe is pseudo-random, which would allow it to run on algorithms. But there are 2 levels of randomness: one caused by QM and one caused by chaos. (Chaos can create stability as well, which I’ve discussed elsewhere.) The point is that initial conditions have to be calculated to infinity to determine chaotic phenomena (like weather), and this applies to virtually everything in nature. Even the orbits of the planets are chaotic, but over millions, even billions of years. So at some level the Universe may be discrete, even at the Planck scale, but when it comes to evolutionary phenomena, chaos rules, and it’s neither computably determinable (long term) nor computably discrete.

There is one aspect of this that I’ve never seen discussed and that is the relationship between chaos theory and time. Carlo Rovelli, in his recent book, The Order of Time, argues that ‘time’s arrow’ can only be explained by entropy, but another physicist, Richard A Muller, in his book, NOW; The Physics of Time, argues the converse. Muller provides a lengthy and compelling argument on why entropy doesn’t explain the arrow of time.

This may sound simplistic, but entropy is really about probabilities. As time progresses, a dynamic system, if left to its own devices, progresses to states of higher probability. For example, perfume released from a bottle in one corner of a room soon dissipates throughout the room, because there is a much higher probability of that than of it accumulating in one spot. A broken egg has an infinitesimally low probability of coming back together again. The caveat, ‘left to its own devices’, simply means that the system is in equilibrium with no external source of energy to keep it out of equilibrium.
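The probabilities involved can be made vivid with one line of arithmetic (my own back-of-envelope numbers): the chance that N molecules, each free to occupy either half of a room, all sit in one particular half is (1/2) to the power N.

```python
# A back-of-envelope illustration of 'entropy is really about probabilities':
# the chance that N molecules, each free to be in either half of a room,
# all happen to sit in one particular half is (1/2)**N.

for n in (10, 100, 1000):
    print(f"N = {n:4d}: probability all in one half = {0.5 ** n:.3e}")

# For N = 1000 the odds are already ~1e-301. A real room holds ~1e25 molecules,
# so the 'perfume gathered in one corner' state is not forbidden, merely so
# improbable that the universe's lifetime is nowhere near long enough to see it.
```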

What has this to do with chaos theory? Well, chaotic phenomena are time asymmetrical (rerun them and you get a different result). Take weather. If weather were time-symmetrical, forecasts would be easy. And weather is not in a state of equilibrium, so entropy is not the dominant factor. Take another example: biological evolution. It’s not driven by entropy, because it increases in complexity, but it’s definitely time asymmetrical and it’s chaotic. In fact, speciation appears to be fractal, which is a hallmark of chaos.

Now, I pointed out that the U phase of Penrose’s explication of QM is time symmetrical, but I would contend that the overall U, R, C sequence is not. I contend that there is a sequence from QM to classical physics that is time asymmetrical. This implies, of course, that QM and classical physics are distinct.


Addendum 1: This is slightly off-topic, but relevant to my own philosophical viewpoint. Freeman Dyson delivers a lecture on QM and, in the 22:15 to 24:00 segment, he argues that the wavefunction and QM can only tell us about the future and not the past.

Addendum 2 (Conclusion): Someone told me that this was difficult to follow, so I've written a summary based on a comment I gave below.

Chaitin's theorem arises from his derivation of omega (Ω), which is the 'halting probability', an extension of Turing's famous halting theorem. You can read about it here, including its significance to incomputability.

I agree with Chaitin 'mathematically' in that I think there are infinitely more incomputable Reals than computable Reals. Familiar transcendental numbers like π and e are still computable: an algorithm can generate their digits to whatever resolution you like, though (of course) not to infinity. Chaitin's Ω is genuinely incomputable: beyond a finite number of digits, its bits can't be calculated at all.

I disagree with him 'philosophically' in that I don't think the Universe is necessarily discrete and can be reduced to 0s and 1s (bits). In other words, I don't think the Universe is like a computer.

Curiously and ironically, Chaitin has proved that the foundation of mathematics consists mostly of incomputable Reals, yet he believes the Universe is computable. I agree with him on the first part but not the second.

Addendum 3: I discussed the idea of the Universe being like a computer in an earlier post, with no reference to Chaitin or Stewart.

Addendum 4: I recently read Jim Al-Khalili's chapter on 'Laplace's demon' in his book, Paradox; The Nine Greatest Enigmas in Physics, which is specifically a discussion on 'chaos theory'. Al-Khalili contends that 'the Universe is almost certainly deterministic', but I think his definition of 'deterministic' might be subtly different to mine. He rightly points out that chaos is deterministic but unpredictable. What this means is that everything in the past and everything in the future has a cause and effect. So there is a causal path from any current event to as far into the past as you want to go. And there will also be a causal path from that same event into the future; it's just that you won't be able to predict it because it's uncomputable. In that sense the future is deterministic but not determinable. However (as Ian Stewart points out in Does God Play Dice?) if you re-run a chaotic experiment you will get a different result, which is almost a definition of chaos; tossing a coin is the most obvious example (cited by Stewart). My point is that if the Universe is chaotic then it follows that if you were to rerun the Universe you'd get a different result. So it may be 'deterministic' but it's not 'determined'. I might elaborate on this in a separate post.

Saturday, 25 August 2018

Do you know truth when you see it?

I saw an interesting 2-part documentary on the Judgement Day (21 May 2011) prediction by Harold Camping (who has since died, on 15 Dec 2013) on a programme called Compass, a long-running programme on Australia’s ABC covering a range of religious topics and some not so religious. It’s a very secular programme. Their highest-rating episode for many years was Richard Dawkins’ The Root of all Evil (which at the time had never been shown in the US) but that was at least 10 years ago (pre The God Delusion).

Harold Camping hosted a programme on American Christian radio and he had a small but committed following. He had arrived at the date by doing calculations based on biblical scripture that, apparently, only he could follow. He had made a prediction before but admitted that he had made a mistake. This time he was absolutely 100% positive that he’d got it right, as he’d rerun and double-checked his calculations a number of times.

What was interesting about this show was the psychology of belief and the severe cognitive dissonance people suffered when it didn’t come to pass. It’s very easy to be dismissive and say these people are gullible but at least some of the ones interviewed came across as intelligent and responsible, not crazies. I confess to being conned by smooth talkers who know how to read people and press the emotional buttons that can make you drop your guard. It’s only in hindsight that one thinks: how could I be so stupid? What I’m talking about is people whom you trust to deliver on something that is in fact a scam. Hopefully, you will learn and be more alert next time; you put it down to experience.

This situation is subtly different in that people accept as truth something that many of us would be sceptical of. It made me think about what criteria we use to consider something true. Many people consider the Bible to contain truths in the form of prophecies and they will give you examples, challenging you to prove them wrong. They will cite the evidence (like the Dead Sea Scrolls predicting Christ’s crucifixion 350 years before it happened) and dare you to refute it. In other words, you’re the fool for not accepting something that is clearly described.

I give this example because I had this very discussion with someone recently who was an intelligent professional person. He went so far as to claim that the crucifixion as a form of execution hadn’t even been invented then. As soon as he told me that I knew it had to be wrong: someone doesn’t describe something in detail hundreds of years before anyone thought of it. But some research on Google showed that the Persians invented crucifixion around 400BC, so maybe it was described in the Dead Sea Scrolls – still not a prophecy. I told him that I simply don’t believe in prophecy because it implies that all of history is pre-ordained and I don’t accept that. Chaos theory alone dictates that determinism is pretty much an impossibility (and that’s without considering quantum mechanics).

That’s a short detour, but it illustrates the point that many well educated people believe that the Bible contains truths straight from God, which is the highest authority one can claim. It’s not such a great leap from that belief to God also providing the exact date for the end of the world, if one knows how to decipher His hidden code. I’m always wary of people who claim to know ‘the mind of God’ (unless they’re an atheist like Stephen Hawking).

In the current issue of Philosophy Now (Issue 127, Aug/Sep 2018), Sandy Grant (philosopher at the University of Cambridge) published an essay titled Dogmas, in which she points out the pitfalls of accepting points of view on ‘authority’ without affording them critical analysis. I immediately thought of climate change, though she doesn’t discuss it specifically, and how many people believe that it is a dogma based on an authority that we can’t question, because said authorities live in academia; a place most of us never visit, and if we did we wouldn’t speak the language.

People who view the Bible as a source of prophecy have in common with people who are sceptical of climate change an ingroup-outgroup mentality. It becomes tribal. In other words, we all listen to people whom we already agree with on a specific subject, and that becomes our main criterion for truth. This is the case with the followers of Harold Camping as well as people who claim that climate scientists are fraudulent so they can keep their jobs. You think I’m joking, but that’s what many people in Australia believe is the ‘truth’ about climate change.

Of course, one can argue the converse for climate change – that the people who believe in it (like me) are part of their own ingroup, but there are major differences. The people who are warning us about climate change actually know what they’re talking about, in the same way that a structural engineer discussing what caused the WTC towers to collapse would know what they’re talking about (as opposed to a conspiracy theorist).

I wrote a letter to Philosophy Now regarding Grant’s essay, specifically referencing climate change, which, as I’ve already mentioned, she didn’t address. This is an extract relevant to this discussion.

Opponents of climate change would call it dogma… This is a case where we are dependent on expertise that most of us don't have. But this is not the exception; it is, in fact, the norm with virtually all scientific knowledge.

We trust science because it's given us all the infrastructure and tools that we totally depend on to live a normal life in all Western societies around the globe. However, political ideology can suddenly transform this trust into dogma, and dogma, almost by definition, shouldn't be trusted if you're a thinking person, as Grant advises.

Sometimes, what people call dogma isn't dogma, but a lengthy process of investigation and research that has been hijacked and stigmatised by those who oppose its findings.


In the cases of both climate change and Harold Camping’s prediction, it’s ultimately evidence that provides ‘truth’. 21 May 2011 came and went without the end of the world, and evidence of climate change is already apparent with glaciers retreating, ice shelves melting and migratory species adapting; it will become more apparent as time passes, with sea-level rise being the most obvious sign. What’s harder to predict is the time frame and its ultimate impact, not its manifestation.

One of the reasons I’ve become more interested in mathematics as I’ve got older is that it’s a source of objective universal truth that seems to transcend the Universe. This point of view is itself contentious, but I can provide arguments, the most salient being that there will always be mathematical truths that we will never know, yet we know that they exist in some abstract space that can only be accessed by some intelligent being (or possibly a machine).

In science, I know that Einstein’s theories of relativity are true (both of them), not least because the satnav in my car wouldn’t work without them. I also know that quantum mechanics (QM) is true because every electronic device (including the PC I’m using to write this) depends on it. Yet both these theories defy our common sense knowledge of the world.
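For the satnav claim, the standard back-of-envelope numbers (my own worked example, using commonly quoted figures for GPS satellite clocks) show just how quickly relativity matters.

```python
# Commonly quoted figures for GPS clocks (my assumptions, not exact values):
# special relativity slows a satellite clock by ~7 microseconds/day, while
# general relativity (weaker gravity in orbit) speeds it up by ~45.

c = 299_792_458                      # speed of light, m/s

sr_drift_us = -7                     # microseconds per day, special relativity
gr_drift_us = +45                    # microseconds per day, general relativity
net_us = sr_drift_us + gr_drift_us   # ~38 microseconds per day net

range_error_km = c * net_us * 1e-6 / 1000
print(f"Net clock drift: {net_us} microseconds/day")
print(f"Uncorrected ranging error: ~{range_error_km:.0f} km per day")

# ~11 km of accumulated position error per day: without both corrections,
# satnav would drift into uselessness within minutes of being switched on.
```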

Truth is elusive, and for some people it can’t be distinguished from delusion. In the cases of both Harold Camping’s prediction and climate change, one’s belief in the ‘truth’ is taken from purported authorities. But ultimately the truth only becomes manifest in hindsight, provided by evidence even ordinary people can’t ignore.

Tuesday, 21 August 2018

Is the world an illusion?

This is the latest Question of the Month in Philosophy Now (Issue 127, August/September 2018). I don't enter them all, but I confess I wrote this one in half an hour, though I spent a lot of time polishing it. Some readers will note that it comprises variations on ideas I've expressed before. The rules stipulate that it must be less than 400 words, so this is 399.


There are two aspects to this question: epistemological and ontological. Dreams are obviously illusional, where time and space are often distorted, yet we are completely accepting of these perceptual dislocations until we wake up and recall them. Dreams are also the only examples of solipsism we ever experience. But here’s the thing: how do you know that the so-called reality we all take for granted is not as illusory as a dream? Philosophers often claim that you don’t, in the same way that you don’t know if you’re a brain in a vat.

The standard claim of solipsism is that you could be the only one, because everyone you know and meet may exist only in your mind. So the answer to the difference between a dream and reality is the converse. If I meet someone in a dream, I'm the only one who is aware of it. On the other hand, if I meet someone in real life, we can both remember it. We both have a subjective conscious experience that is concordant with our respective memories. This doesn't happen in a dream. By inference, everyone's individual subjective experience is not only their particular reality but a reality that they share with others. We've all had shared experiences that we individually recall, correlate and mutually confirm.

The world, which I will call the Universe, exists on many levels, from the subatomic to the astronomical. Our normal perception of it only covers one level, which is intermediate between these extremes. The different realities of scale are deduced through mathematics and empirical evidence, ranging from radio waves collected in an array of gigantic dishes (at the largest scale) to trails of high energy particles in the Large Hadron Collider (at the smallest scale). Kant once argued that we can never know 'the thing-in-itself', and he was right, because the thing-in-itself changes according to the scale we observe it at.

In the last century we learned that everything we can see and touch is made of atoms that are mostly empty space. It requires advanced mathematics and a knowledge of quantum mechanics (specifically the Pauli Exclusion Principle) to demonstrate how these atoms (of mostly empty space) stop us falling through the floor we stand on. So we depend on the illusion that we are not predominantly empty space just to exist.


I wrote a much longer discussion on this issue (almost 2 years ago) in response to an academic paper that claims only conscious agents exist, and that nothing else (including spacetime) exists 'unperceived'.

Friday, 17 August 2018

Aretha Franklin - undisputed Queen of Soul

25 March 1942 (Memphis, Tennessee) to 16 August 2018 (Detroit, Michigan)

Quote: Being a singer is a natural gift. It means I'm using to the highest degree possible the gift that God gave me to use. I'm happy with that.


I can't watch this video without tears. She lived through the civil rights years, won 18 Grammy Awards and was the first woman to be inducted into The Rock and Roll Hall of Fame. A gospel singer by origin, she was one of the Truly Greats.

There have been many tributes but Barack Obama probably sums it up best:

Aretha helped define the American experience. In her voice, we could feel our history, all of it and in every shade—our power and our pain, our darkness and our light, our quest for redemption and our hard-won respect. May the Queen of Soul rest in eternal peace. 

Saturday, 4 August 2018

Jordan Peterson revisited: feminism, #metoo and leadership

I wrote a post on Jordan Peterson after I read his book, 12 Rules for Life: An Antidote to Chaos. I wrote this essay prior to that post but after I'd read the first chapter (Rule 1), which was about lobsters, amongst other things (discussed below). But it discusses issues not covered in that post and is therefore still worth publishing, somewhat belatedly.

I came across Jordan Peterson via Stephen Law's blog, which had a link to a somewhat famous (or infamous) interview by Cathy Newman on Britain's Channel 4. The interview gained some notoriety because he effectively turned the tables on her. Basically, he was better prepared than she was. She underestimated him, thinking his arguments or positions were facile and easy to knock over, when, in fact, he argued articulately and precisely, maintained his composure, and backed his arguments with statistics and evidence that she couldn't counter. I'll come back to some of these positions and arguments later.

This led me to watch a number of his YouTube videos and even buy his aforementioned book. I also read an article of his in The Weekend Australian about his concern for the future of boys growing up into a world dominated by women, specifically in the humanities in universities. Unfortunately, I no longer have the article, so I can't reference its original publication. I will come back to this issue later as well.

He's a practising clinical psychologist and Professor of Psychology at the University of Toronto, and you can watch some of his lectures on YouTube as well. He makes provocative statements and then backs them up with sound arguments, which is why I wanted to read his book.

He’s been called a ‘public intellectual’ but I would call him a ‘celebrity intellectual’. In that respect I would compare him to well known science celebrities like Richard Dawkins, Stephen Hawking and Brian Cox, along with equally talented, if not so famous figures, like Paul Davies, Roger Penrose, John Barrow and Richard Feynman. If I mentally put him in the same room with these people I don’t find him so intellectually intimidating and daunting to challenge.

I’ve said in a recent post that no one completely agrees philosophically with someone else, and the corollary to that is that no one completely disagrees with someone else either. Okay, there may be exceptions but I’ve never come across anyone that I completely disagree with on every single topic and I’m pretty argumentative.

Peterson is lauded by the ‘Right’ apparently (a ‘poster boy’, I think is the unfortunate phrase) but he rails against what he calls the ‘neo-Marxist post-modernists’. I’m honestly unsure what those terms mean but, given that I consistently argue against the universally accepted paradigm of infinite economic growth, I suspect it includes me.

I don’t know what Peterson would make of my blog if he read it, but one of his pet peeves is the trait of agreeableness. Peterson knows, as a psychologist, that there are personality traits that we are born with which tend to be associated with the left or the right of politics and agreeableness is associated with the left. This seems to be an issue with Peterson, because he raises it in the Cathy Newman interview and elsewhere. I expect, therefore, he would find me far too agreeable for my own good. Agreeableness, according to Peterson, is not a trait that is associated with leadership. I’ll come back to this point later as well.

I'm quite confident that Peterson would never read my blog because I'm way below him on every measure, whether it be celebrity status, academic status, professional status or social status. Having said that, we do similar things, albeit he does it far more successfully and effectively than me. Like him, I have strong opinions that I try to share with as wide an audience as possible. It's just that we do it on completely different scales, and we have different special interests; but we both practise philosophy in our own ways and our ideas clash and sometimes concur, as I'll try to delineate.

I’ll start with the Cathy Newman interview because one of the things he talks about is ‘men who don’t grow up’ and I seriously wondered if that included me. When I revisited the interview I decided that it didn’t, but even the fact that I would consider it gives one pause. I think I am and always have been (from a very early age) more conscious of my flaws and faults than I believe most people are. For me, self-examination and self-honesty are important traits, and I suspect Peterson might agree, because they are the first steps to being responsible, and being responsible is something that he talks about a lot. I’ve previously referenced Viktor Frankl (Man’s Search for Meaning) who argued the importance of adversity in shaping one’s life. Peterson makes similar references when he talks about the Buddha, who, the story goes, discovered suffering and mortality after being brought up isolated from these life-challenging experiences. The lesson, as Peterson points out, is that no one escapes suffering in their life, and, in fact, it’s essential in creating a personality worthy of the name.

Another issue raised in the Cathy Newman interview was Peterson's comparison of human status-seeking with that of lobsters, which, from an evolutionary perspective, go back before the dinosaurs. This is also the subject of the first chapter of his book (or Rule 1), which is effectively a defence of social Darwinism. The inequality we see in society is part of the natural order (he doesn't use that term, but it's implied), whereby the fact that 1% of the population controls 50% of the wealth is just a consequence of a natural evolutionary process, which lobsters and virtually all social animals demonstrate. He picks lobsters as his example because they are so 'old' on the evolutionary scale.

The specific point he made to Newman was that it proves that the patriarchal hierarchy as a cultural phenomenon is a myth. ‘It’s not a matter of opinion,’ he says, ‘it’s a fact [that it’s a biological mechanism going way back in evolutionary time]’. Well, sex is a biological mechanism with an even older evolutionary history, but its cultural evolution in human societies can’t be compared to the sexual activities of a dog in the street or a bull in a paddock, to give examples with a closer evolutionary connection than lobsters. In other words, comparing the hierarchy of human social structures with lobsters is not very nuanced. I will discuss leadership later, which is really what this is about. Having said that, Trump’s election ticks all of Peterson’s social Darwinian boxes, even to the extent that Trump believes he’s entitled to all the ‘pussy’ he can ‘grab’, which is completely in line with the lobster comparison.

In the same chapter, Peterson discusses bullying and its deleterious effects, and this is something that I have personal experience with. On this issue, I think he and I would agree in that standing up to bullies, be it in the workplace or wherever, is important for your own self-esteem. For better or worse, I grew up with a chip on my shoulder and I don’t take kindly to bullies, but, as I’ve previously revealed, I’ve never solved a problem with my fists.

Another point raised by Newman was Peterson's refusal to use transgender pronouns apparently legislated by the Canadian government. I'm unsure about the details as I don't live in Canada but, from what I can gather, I completely support him on this stance. Legislating language is Orwellian at best and totalitarian at worst.

On another tack, I believe Peterson's concern with how boys are being raised, and the effect on their self-esteem and their chances of success later in life, is misplaced. I happened to see a documentary (the same week) filmed at a primary school in Britain (the Isle of Wight, from memory) in which self-assessment in various activities and abilities was compared between the sexes, and the boys comprehensively had it over the girls when it came to self-esteem. In reality, the change tends to occur in high school, where the girls tend to excel over the boys because scholastic achievement for girls is not as 'uncool' as it is for boys. At least I would suggest that's the case in Australia.

I grew up in a country town and I have nieces with boys growing up in country towns, and how a boy performs at cricket and football counts for a lot more than how he scores at mathematics and literature. That apparently hasn’t changed since my time. What has changed is that education for girls is now taken far more seriously (than it was in my time) and they’re overtaking the boys. Peterson’s answer, if I read him correctly, is to bring boys up to be more masculine. Given that domestic violence and violence towards women in general is a major issue all over the world, I don’t think making boys more masculine is the answer.

And this brings me to the so-called ‘MeToo’ phenomenon and a panel discussion I saw on this issue (in Australia) at about the same time. All 3 female panelists had suffered from direct physical forms of sexual harassment (all job related) and the only male panelist was a lawyer with extensive experience in dealing with sexual assault cases. He related how, by the time the women came to him, they were in very distressed states. He said that doctors advise women who have been raped not to pursue the matter in court as it will destroy their health. This alone suggests that our justice system (in Australia) needs a complete overhaul in the area of female sexual assault.

But even more pertinent to this discussion was the last question from the audience (which included school children), asking each panelist what advice they would give to their 12-year-old selves. I have to admit that I could not readily find an answer to this question, but I couldn't leave it alone over the next day or so. In the end, after a lot of soul-searching, I decided I would give advice on how to deal with rejection or unrequited love, as I believe it is a universal experience for both sexes. But it seems to me that boys, in particular, don't deal with rejection well. The most important thing is not to blame the other; it is not their fault. And there is a logic to this, because if it really was their fault, why would you want to go back to them?

Friendship can easily slide into creepiness if a man’s advances are not welcome. But it’s easily remedied by simply retreating. If there is a genuine friendship then it will recover, and, if not, then it won’t. But again, it’s not her fault. I wrote a post a number of years ago where I argued that women choose. I believe that women should determine the limits of a relationship and that includes friendship as well as sexual relationships. Persisting in the face of rejection only leads to resentment on both sides. I’ve long argued that no one gains happiness at the expense of another’s unhappiness.

This doesn't fit very well with Peterson's social Darwinist model where the top guys get the best girls and the top girls vie for the top men, like a reality TV show. I've never married so I'm not best placed to judge, but I value the friendships I've gained with women over a number of decades, so I don't feel that I've necessarily missed out. To be fair to Peterson, he argues that women choose, so we agree on that point. I think if society recognised this and cultivated it as a social norm whereby women set the limits of a relationship, then society would function better. It is the woman who has most to lose in a relationship, and this should be recognised by society as a whole. Peterson makes a similar point in one of his YouTube lectures.

Finally, getting back to the Newman interview, Peterson makes the point that being agreeable doesn't tally with the evidence when it comes to getting top jobs. This makes me wonder if that's why it's been claimed that the ideal psychological profile for a corporate leader is a sociopath. My observation is that goal-oriented leaders without very good people skills will promote people with personalities similar to their own. In cases where I've seen people with good people skills (as well as goal-oriented skills) achieve top management positions, they've invariably changed the culture of the entire organisation for the better.

I’ve argued many times that good leadership brings out the best in others. I once read of a study that was done on the most successful sporting teams in a range of sports and countries where they looked at a number of factors. The conclusion from the study was that the success of the team ultimately came down to just one factor and that was leadership. In a team sport it’s not about individual performances per se, yet in a sense it is. The best teams are not dependent on a few key players but on every member performing at their best. The best captains have the ability to get each member of their team to do just that. I’ve experienced this myself when I took part in the 2010 Melbourne Corporate Games in dragon boat racing. There were only 2 members of the team with previous experience (both women) including our captain. Against all expectations in a field of 32 teams, we won bronze. The Australian Navy came first. I give full credit to our captain, whom I know would prefer anonymity.


Footnote: I originally wrote this around 6 months ago (before my first post on Peterson). I've since watched the Newman interview again, and I think she handled herself reasonably well, and Peterson even seemed to enjoy the combative nature of it. In just the last week, I read investigative journalist Lauren Collins's exposé on the BBC gender pay gap (The New Yorker, July 23, 2018, pp. 34-43) and, in light of this, I think Peterson's counter argument on this issue is largely smoke and mirrors. The BBC clearly has egg on its face, having outright lied to (at least some of) its prominent female employees over their pay entitlements. And we know it's happened elsewhere (including Australia).

Sunday, 17 June 2018

In defence of (Australia’s) ABC

I have just finished reading a book by the IPA titled Against Public Broadcasting; Why We Should Privatise the ABC and How to Do It (authors: Chris Berg and Sinclair Davidson).  The IPA is the Institute of Public Affairs, and according to Wikipedia ‘…is a conservative public policy think tank based in Melbourne Australia. It advocates free market economic policies such as privatisation and deregulation of state-owned enterprises, trade liberalisation and deregulated workplaces, climate change scepticism, the abolition of the minimum wage, and the repeal of parts of the Racial Discrimination Act 1975.’ From that description alone, one can see that the ABC represents everything IPA opposes.

There has long been a fractured relationship between the ABC and consecutive Liberal governments, but the situation has deteriorated recently, and I see it as a symptom of the polarisation of politics occurring everywhere in the Western world.

It should be obvious where I stand on this issue, so I am biased in the same way that the IPA is biased, though we are on opposing sides. A friend of mine and work colleague for nearly 3 decades recently told me that I had lost 'objectivity' on political issues, and he was specifically referring to my stance on the ABC and offshore detention of refugees. I told him that 'mathematics is the only intellectual endeavour that is truly objective'. Philosophy is not objective; ever since Socrates, it's all about argument, and argument axiomatically assumes alternative points of view.

The IPA’s book is generally well argued in as much as they provide economic rationales and counter arguments to the most common reasons given for keeping the ABC. I won’t go into these in depth, as I don’t have time; instead I will focus more on the ideological divide that I believe has led to this becoming a political issue.

One of the authors’ themes is that the ABC is anachronistic, which implies it no longer serves the purpose for which it was originally intended. But they go further in that they effectively argue that the policy of having a public broadcaster in the English BBC mould was flawed from the beginning and that Australia should have adopted the American model of a free market, so no public broadcaster in the first place.

The authors refer to a survey done by the Dix inquiry in 1981 as ‘the most comprehensive investigation into the ABC in the public broadcaster’s history.’ Dix gave emphasis to a number of population surveys, but tellingly the authors say that ‘audience surveys are a thin foundation on which to mount an argument for public broadcasting.’ I’m not sure they’d mount that argument if the audience survey had come out negative.

The fact is that throughout the entire history of Australian broadcasting, we have adopted a combined A and B (public and commercial) approach that seems to be complementary rather than conflicting. The IPA would argue that this view is erroneous. Given Australia's small population base over an enormous territory (Australia is roughly the area of the US without Alaska, but with 60% of California's population), this mix has proven effective. Arguably, with the Internet and on-line entertainment services, the world has changed. The point is that the ABC has adapted, but commercial entities claim that the ABC has an unfair advantage because it's government subsidised. Be that as it may, the ABC provides quality services that the other networks don't (elaborated on below).

According to figures provided in their book, the ABC captures between 19% and 25% market share for both television and radio (the 19% is for prime time viewing, but they get 24-28% overall). Given that there are 3 other TV networks plus subscription services like Netflix and Stan, this seems a reasonable share. It would have been interesting to see what market share the other networks capture, for a valid comparison. But if one of the commercial networks dominates with 30% or more, then the other 2 networks would have less share than the ABC (a 30% leader and a 25% ABC leave only 45% to split between the remaining two). Despite this, the authors claim, in other parts of the book, that the ABC has a 'fraction' of the market. Well, ¼ is a sizeable fraction, all things considered.

One of the points the authors make is that there seem to be conflicting objectives in practice as well as theory. Basically, it is argued that the ABC provides media services that are not provided by the private sector, yet it competes with the private sector in areas like news, current affairs, drama and children's education programmes. Many people who oppose the ABC (not just the IPA) argue that the ABC should not compete with commercial entities. But they have the same market from which to draw consumers, so how can they not? Many of these same people will tell you that they never watch the ABC, which effectively negates their argument.

But the argument disguises an unexpressed desire for the ABC to become irrelevant by choice of content. What they are saying, in effect, is that the ABC should only produce programmes that nobody wants to watch or listen to. The inference is that they don't mind if the ABC exists, as long as it doesn't compete with other networks; in other words, as long as it just produces crap.

In some respects they don't compete with other networks, because, as the authors say themselves, they produce 'quality' programmes. In fact, the authors, in an extraordinary piece of legerdemain, say: "If private media outlets are producing only 'commercial trash', then that could very well be because the ABC has cornered the market for quality." I thought this argument so hilarious that I call it an 'own goal'.

It’s such a specious argument. The authors strongly believe that the market sorts everything out, so it’s just a matter of supply and demand. But here’s the thing: if the commercial networks don’t produce ‘quality’ programmes (to use the authors’ own nomenclature) when they have competition from the ABC, why would they bother when the ABC no longer exists?

For the authors this was a throwaway comment, but for me, it's the raison d'être of the ABC. The reason I oppose the abandonment of the ABC is that I don't want mediocrity to rule unopposed.

Paul Barry, who has worked in both public and commercial television, recalls an occasion when he was covering a Federal election for a commercial network. He wanted to produce some analytical data, and his producer quickly squashed it, saying ‘no one wants to watch that crap’ or words to that effect. Barry said he quickly realised that he was dealing with a completely different audience. Many people call this elitist, and I agree. But if elitist means I want intellectual content in my viewing, then I plead guilty.

If the ABC was an abject failure, if it didn't have reasonable market share, if it wasn't held in such high regard by the general public (including people who don't use it, as pointed out by the authors themselves), there would appear to be no need to write a book proposing a rationale and finely tuned argument for its planned obsolescence. In other words, the book has been written principally to explain why a successful enterprise should be abandoned or changed at its roots. Since the ABC has been a continuing, evolving success story in the face of technological changes to media distribution, the argument to radically alter its operations, even abandon it, appears specious and suggests ulterior motives.

In fact, I would argue that it’s only because the ABC is so successful that its most virulent critics want to dismantle it and erase it from the Australian collective consciousness. For some people, including the IPA (I suspect), the reasons are ideological. They simply don’t want such a successful media enterprise that doesn’t follow their particular political and ideological goals to have the coverage and popularity that the ABC benefits from.

This brings us to the core of the issue: the ABC's perceived political bias. Unlike most supporters of the ABC, I think this bias is real, but the authors themselves make the point that the ABC should not be privatised for 'retribution'. The authors give specific examples where they believe political bias has been demonstrated, but it's hard to argue that it's endemic. The ABC goes to lengths that most other services don't, to acquire an opposing point of view. To give a contemporary example: 4 Corners, which is a leading investigative journalism programme, is currently running a 3-part series on Donald Trump and his Russian connections. The journalist, Sarah Ferguson, has lengthy interviews with the people under scrutiny (who have been implicated) and effectively gives them the right of reply to accusations made against them by media and those critical of their conduct. She seeks the counsel of experts, not all of whom agree, and lets the viewer make their own judgements. It's a very professional dissection (by an outsider) of a major political controversy.

So-called political bias is subjective, completely dependent on the bias of the person making the judgment. I’m an exception in that I share the ABC’s bias yet acknowledge it. Most people who share the ABC’s bias (or any entity’s bias) will claim it’s not biased, but any entity (media or otherwise) with a different view to theirs will be biased according to them. From this perspective, I expect the IPA to consider the ABC biased, because they have a specific political agenda (as spelt out in the opening paragraph of this post) that is the opposite of the ABC’s own political inclinations. The authors acknowledge that intellectuals are statistically left leaning and journalists are predominantly intellectual.

To illustrate my point, the authors give 2 specific examples that they claim demonstrate the ABC's lack of impartiality. One is that the ABC doesn't give air time to climate change sceptics. But from my point of view, it's an example of the ABC's integrity in that they won't give credibility to bogus science. In fact, they had a zealous climate change sceptic on a panel with Brian Cox, who annihilated him with facts from NASA. Not surprisingly, the sceptic argued the data was contaminated. Apparently, this embarrassment of a climate change denier on national television is an example of unacceptable political bias on the part of the ABC. The IPA, as mentioned earlier, is a peddler of climate change scepticism.

The other example mentioned by the authors is that the ABC doesn’t give enough support to the government’s policy of offshore detention. In fact, the ABC (and SBS) are the only mainstream media outlets in Australia that are openly critical of this policy, which is a platform of both major political parties, so political bias for one party over the other is not an issue in this case.

A few years ago, under Prime Minister Tony Abbott, laws were introduced to threaten health workers at Nauru and Manus Island (where asylum seekers are kept in detention) if they reported abuse. This was hypocritical in the extreme when health workers on mainland Australia are obliged by law to report suspected abuse. The ABC interviewed whistleblowers who risked jail, which the government of the day would have seen as a form of betrayal: giving a voice to people they wanted to silence.

As recently as last week there has been another death (of a 26-year-old Iranian), but it never made it into any mainstream media report.

One only has to visit the web page of the publishers of the book, Connor Court Publishing, to see that they specialise in disseminating conservative political agendas like climate change scepticism and offshore detention.

To give a flavour of the IPA, there was recently a Royal Commission into the banking and finance sector which uncovered widespread corruption and rorting. On IPA’s website, I saw a comment about the hearings that compared them to a Soviet-style show trial. The ABC, it should be noted, reported the facts without emotive rhetoric but fielded comments from politicians on both sides of politics.

At the end of the book, the authors discuss how the ABC could be privatised. Basically, there are 2 alternatives: tender it to a private conglomerate (which could be overseas based) or offer it to public shareholders, similar to what was done with Telstra (a telecommunications company). The IPA's proposal is to make the employees the shareholders, so that they have full financial responsibility for their own performance. Their argument is that, because they would be forced to appeal to a wider audience, they would have to change their political stripes. In other words, they would need to appeal to the populist movements that are gaining political momentum in all Western democracies, though the authors don't specifically say this. This seems like an exercise in cynicism, as I'm unaware of any large, complex media enterprise that is 'owned' by its employees. It seems to my inexpert eye like a recipe for failure, which I believe is their unstated objective.

Their best argument is that it costs roughly $1B a year that could be better spent elsewhere and that it’s an unnecessary burden on our national debt. This comes to 14c a day per capita, apparently. I see it as part of the government’s investment in arts and culture.

There is a larger context that the book glosses over, which is the role of media in keeping a democracy honest. The ABC is possibly unique in that it’s a taxpayer funded media that holds the government of the day to account (for both sides of politics). I think the authors’ arguments are ideologically motivated. In short, the book is a rational economic argument to undermine, if not destroy, an effective media enterprise that doesn’t reflect the IPA’s own political ambitions.

Sunday, 20 May 2018

Quantum mechanics and the arrow of time

Before I get started I need to make an important point. Every now and then I hear or read about someone who puts my life into perspective. Recently, I read an article on Lisa Harvey-Smith, a 39-year-old, educated in England, who is 'Group Leader' of astronomy at Australia's CSIRO. She appeared on an ABC programme called Stargazing Live (last year) with Brian Cox and Julia Zemiro. She won the 2016 Eureka Science Prize for 'promoting science research in Australia'. She also runs ultra-marathons (up to 24 hours) and is an activist for LGBTI people. The point I'm making is that she's a real scientist, and by comparison, I'm a pretender.

And I make this point because many people who know more about this subject than me will tell you that much of what I have to say is wrong. So why should you even listen to me? Because I have a philosophical point of view on a subject with many philosophical points of view, some of which border on science fiction: for example, interacting parallel universes, or physical reality only becoming manifest when perceived by a conscious observer. I've written about both of these philosophical perspectives in other posts, but they indicate how much we don't know and how difficult it is to reconcile quantum mechanics (QM) with what we actually perceive in our everyday interaction with the world.

I recently read a very good book on this subject by Philip Ball titled Beyond Weird. He gives a history lesson whilst simultaneously discussing the philosophical nuances inherent in QM in the context of experimental evidence. Ball, more than any other author I've read in recent times, challenges my perspective, which makes him all the more worthwhile to read. In so doing, I'm able to distinguish with more confidence between the more and less contentious aspects of my viewpoint. In fact, there is one point which I now realise is the most contentious of all, and it is related to time.

Regarding the title of this post, they seem like separate topics, but I'm aware others have made this connection; in particular Richard A Muller in NOW: The Physics of Time, though he didn't really elaborate. He did, however, elaborate on why entropy does not provide the arrow of time, which is an oft-made misconception. It is one I've made myself in the past, but I now believe the cause and effect is the other way round: entropy increases with time due to probabilities. There is a much higher probability of disorder than order, provided the system is in equilibrium. If there is an energy source (like the sun) that keeps a system out of equilibrium, then you can have self-organising complexity occurring (such as life).
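To make the probability point concrete, here's a toy calculation in Python (my own illustration, not Muller's), treating a system of 100 two-state particles like coin tosses:

```python
from math import comb

# Toy illustration: for N two-state particles, the number of
# microstates for each macrostate (number of 'heads') is the
# binomial coefficient C(N, k). 'Disordered' macrostates near
# k = N/2 vastly outnumber 'ordered' ones like all-tails.
N = 100
ordered = comb(N, 0)          # all tails: exactly 1 microstate
disordered = comb(N, N // 2)  # half heads: ~1e29 microstates

print(f"ordered macrostate:    {ordered} microstate")
print(f"disordered macrostate: {disordered:.3e} microstates")
```

With odds of roughly 10^29 to 1, a system wandering at random through its microstates (i.e. in equilibrium) is overwhelmingly likely to be found disordered, so entropy increases with time as a matter of statistics, not as a fundamental arrow.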

I’m unsure if QM provides an ‘arrow of time’, as people like to express it, but I do believe it provides an asymmetry, which is best expounded by Roger Penrose’s 3 phases of U, R and C. U is the evolution of the wave function as expressed by Schrodinger’s equation, R is the measurement or observation process (also called decoherence of the wave function) and C is the classical physics world which we generally call reality. These always occur in that sequence, hence the logical temporal connection.
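To make that sequence concrete, here is a minimal numerical sketch in Python (my own toy example, not Penrose's): a single qubit evolves deterministically under Schrodinger's equation (U), is then measured according to the Born rule (R), and yields one definite classical outcome (C).

```python
import numpy as np

rng = np.random.default_rng(42)

# U phase: evolve |0> under Schrodinger's equation with H = sigma_x
# (hbar = 1). The evolution U(t) = exp(-iHt) is built from the
# eigendecomposition of H, and is fully deterministic and reversible.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
t = 0.7
evals, evecs = np.linalg.eigh(sigma_x)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
psi = U @ np.array([1, 0], dtype=complex)   # wave function at time t

# R phase: measurement. The Born rule turns amplitudes into
# probabilities, and one outcome is selected at random.
probs = np.abs(psi) ** 2
outcome = rng.choice([0, 1], p=probs)

# C phase: a single classical result; the superposition is gone.
print(f"P(0) = {probs[0]:.3f}, P(1) = {probs[1]:.3f}, measured: {outcome}")
```

Run it repeatedly and the U phase always produces the same wave function, while the R phase produces different outcomes with the predicted frequencies: U, then R, then C, never the reverse.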

I say 'always', yet Ball gives an example whereby physicists in Canada in 2015 'reversed the entanglement of photons' in a crystal, which Ball calls 'recoherence'. But he also describes it as '...the kind of exception that, in the proper sense, proves the rule.' The 'rule', according to Ball, is that decoherence is the loss of quantum information to the environment. This is a specific interpretation by Ball, which has merit and is analogous to entropy (though he doesn't make that connection), and is therefore time-directional in the same way that entropy is.

Towards the end of his book, Ball effectively argues that an ‘information’ approach to QM is the most logical approach to take and talks about a ‘reconstruction’ of QM based on principles like the ‘no cloning’ rule (quantum particles can’t be copied so teleportation destroys the original), the 'no-signalling' rule (you can’t transmit information faster than light) and there is ‘no unconditional secure bit commitment’ (which limits quantum encryption). These 3 were called ‘no-go principles’ by Rob Clifton, Jeffrey Bub and Hans Halvorson. To quote Bub from the University of Maryland: ‘[QM] is fundamentally about the representation and manipulation of information, not a theory about the mechanics of nonclassical waves or particles’. In other words, we scrap wave functions and start again with information. Basically, Ball is arguing that QM should be based on a set of principles and not mathematical formulations, especially ones that describe things we can't perceive directly (we only see interference patterns, not waves per se).

Of course, we've known right from its original formulation that we don't need Schrodinger's equation or his wave function to perform calculations in QM (I'll talk about QED later). Heisenberg's matrices preceded Schrodinger's equation and gave the same results without a wave function in sight. So how can they be reconciled philosophically, if they are mathematically equivalent but conceptually at odds?

From my limited perspective, it seems to me that Heisenberg’s and Schrodinger’s respective mathematical approaches reflect their philosophical approaches. In fact, I would argue that they approached the subject from 2 different sides, even opposite sides, and came up with the same answer, which, if I’m correct, says a lot.

Basically, Schrodinger approached it from the quantum side or U phase (to use Penrose’s nomenclature) and Heisenberg approached it from the measurement side or R phase. I’m reading another book on the same subject, What is Real? by Adam Becker, which I acquired at the same time as Ball’s book, and they are complementary, in that Becker’s approach is more historical yet also examines the philosophical aspects. Heisenberg was disappointed (pissed off may be more accurate) at Schrodinger’s success, even though Heisenberg’s matrix approach preceded Schrodinger’s wave function.

But it was Heisenberg’s specific interest in the ‘measurement problem' that led him to his famous Uncertainty Principle and a Nobel Prize. Schrodinger’s wave function, using a Fourier transform, also gives the Uncertainty Principle, so mathematically they are still equivalent in their outcomes. But the point is that Schrodinger’s wave function effectively disappears as soon as a measurement is made, and Heisenberg’s matrices with their eigenvalues don’t tell us anything about the evolution of any wave function because they don’t express it mathematically.
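For readers who want the Fourier connection spelt out (standard textbook material, not specific to Ball or Becker): the position and momentum wave functions are a Fourier-transform pair, so a narrow spread in one mathematically forces a broad spread in the other:

$$\varphi(p) = \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty}\psi(x)\,e^{-ipx/\hbar}\,dx \quad\Longrightarrow\quad \Delta x\,\Delta p \ge \frac{\hbar}{2}$$

The bound is saturated by a Gaussian wave packet, and no cleverness of preparation or measurement can evade it, because it is a property of the transform itself, not of any particular apparatus.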

Ball makes the point that Schrodinger’s and Heisenberg’s approaches reflect an ontological and epistemological consideration respectively, which he delineates using the shorthand, ‘ontic’ and ‘epistemic’. In this sense, the wave function is an ontic theory (this is what exists) and Heisenberg and Bohr’s interpretation is purely epistemic (this is what we know).

I’m getting off the track but it’s all relevant. About a month ago, I wrote a letter to New Scientist on 'time'. This is an extract:

There is an obvious difference between time in physics - be it governed by relativity, entropy or quantum mechanics - and time experienced psychologically by us. Erwin Schrodinger in his seminal tome, What is Life? made the observation that consciousness exists in a constant present, and I would contend that it's the only thing that does; everything else we perceive has already happened, except quantum mechanics, which prior to a 'measurement' or 'observation', exists in the future as probabilities. An idea alluded to by Sir William Lawrence Bragg, albeit using different imagery: the future are waves and the past are particles – "The advancing sieve of time coagulates waves into particles at the moment ‘now’". So it's not surprising that the concepts of past, present and future are only meaningful to something with consciousness, because only the continuous ‘now’ of consciousness provides a reference.

Those of you who regularly read my blog will notice that this is consistent with a post I wrote earlier.

The letter was never published and New Scientist inform you in advance that they refer letters to ‘experts’ and that they don’t provide explanations if they don’t publish, which is all very fair and reasonable. I expect in this case the expert (possibly Philip Ball, as I referenced his review of Carlo Rovelli’s book) probably said that this is so wrong-headed that it shouldn’t be published. On the other hand, their expert (whoever it was) may have said this insight is so obvious it’s not worth mentioning (but I doubt it).

I expect that both my citing of Erwin Schrodinger and of Sir William Lawrence Bragg would have been considered, if not contentious, then out of date, and that my views are far too simplistic.

So let me address these issues individually. One reads a lot of words (both in science and philosophical essays) on the so-called ‘flow of time’, and whether it’s an illusion or whether it’s only in the mind or whether it’s the wrong metaphor altogether; as if time is a river and we stand in it and watch it go by.

But staying with that metaphor, the place where we are standing remains ‘now’ for ever and always, whilst we watch the future become the past in a series of endless instants. In fact, we never see the future at all, which is why I say that ‘everything we perceive has already happened’. But the idea that this constant now that we all experience is a consequence of consciousness is contentious in itself. We don’t see ourselves as privileged in that sense; we assume that it only seems a privileged position because we witness it. We assume that everything in the Universe rides this wave of now. But, for everything else, the now becomes frozen, especially if ‘now’ represents the decoherence of a quantum wave function into a classical particle. Without consciousness, ‘now’ becomes relative, an objective point in time between a future event and a past event that quickly only becomes perceived as a past event.

Let's look at light, because it's the most ubiquitous quantum phenomenon that we all witness all the time (when we are awake). The other thing about light is that we can examine it on a cosmic scale. The Magellanic Clouds (galaxies) are approximately 200,000 light years from here and we can see them with the naked eye in Australia, if you can get away from townships on a clear night. So we can literally look 200,000 years into the past. (That is roughly when Homo sapiens evolved in Africa, according to one reference I looked up.)

Now, in my previous post I argued that light is effectively in the future until it interacts with matter, so how is that possible if it took the entire history of humanity to arrive at my retina? Well, from the star’s perspective (in the Magellanic Cloud) it’s in the future because it’s going away from it into the future, quite literally. And no one can perceive the light ray until it interacts with something, so it’s always in the future of whatever it interacts with. For the photon itself, it travels in zero time. Light turns time into distance, which is why there is really only spacetime, and if light didn’t do that (because it has a constant velocity) then everything would happen at once. So, as soon as it hits my retina and not before, I can see 200,000 years into the past. That's a quantum event.

Early in his book, Adam Becker (What is Real?) provides a very good metaphor. A traveller arrives at a fork in a path and we don’t know which one he takes until he arrives at his destination. According to QM he took both at once until someone actually meets him and then we learn he only took one. The 2 paths he can take are in the future and the one he actually took is in the past. But wait, you say: in QM a photon or particle can literally take 2 paths at once and create an interference pattern. Actually, the interference pattern is created by the probabilistic outcomes of individual photons or particles, so there is still only one path for each one.
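This point can be illustrated with a short simulation (my own sketch, with arbitrary units and a stylised fringe formula): sample single-photon detection positions from a double-slit intensity pattern and watch the fringes emerge only in the statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Detector positions and a stylised two-slit intensity pattern:
# cosine-squared fringes under a single-slit diffraction envelope.
x = np.linspace(-10, 10, 2001)
intensity = np.cos(2.0 * x) ** 2 * np.sinc(x / np.pi) ** 2
prob = intensity / intensity.sum()   # Born rule: normalise to probabilities

# Detect photons one at a time; each lands at exactly one point.
for n in (10, 100, 10_000):
    hits = rng.choice(x, size=n, p=prob)
    counts, _ = np.histogram(hits, bins=20)
    print(f"{n:>6} photons:", counts)
```

With 10 photons the detections look random; with 10,000 the fringe structure is unmistakable, even though each individual photon was only ever detected at one place.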

Superposition is a much misunderstood concept. As Ball explains in a footnote: "…superposition is not really 'two states at once', but a circumstance in which either state is a possible measurement outcome."

He gives a very good description of the Schrodinger wave function and its role in QM:

The Schrodinger equation defines and embraces all possible observable states of a quantum system. Before the wave function collapses (whatever that means) there is no reason to attribute any greater degree of reality to any of these possible states than to any other. For remember that quantum mechanics does not imply that the quantum system is actually in one or other of these states but we don’t know which. We can confidently say that it is not in any one of these states, but is properly described by the wavefunction itself, which in some sense ‘permits’ them all as observational outcomes. Where then do they all go, bar one, when the wavefunction collapses? (emphasis in the original)

He was making this point in the context of explaining why the parallel universe or ‘Many World Interpretation’ (which he calls MWI) is so popular and seductive, because in the MWI they do all exist. Ball, by the way, is not a fan of MWI and gives extensive and persuasive arguments against it.

This leads logically to Feynman's path integral method, or his version of QED (quantum electrodynamics), where all paths are allowed, but the phase interaction of the superposed wave functions cancels most of them out. Only a wave function version of QM, with its time dependent phases, can provide this interaction. Brian Cox gives a very good, succinct exposition of Feynman's version of QM on YouTube. And Freeman Dyson, who worked with Feynman and originally showed that the independent work of Schwinger, Feynman and Tomonaga was equivalent (which got them all the Nobel Prize, except Dyson), explains that Feynman's method predicts 'future probabilities from a sum over histories'. The point is, as Ball says himself, none of these histories actually happen. I argue that they never happen because they're all in the future. Certainly, we never see them or measure them, but one of the probability outcomes will be realised when it becomes the past.
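Here is a toy 'sum over histories' in Python (my own drastically simplified sketch, not Feynman's full QED): a free particle travels from x = 0 to x = 1 in unit time via a single intermediate point, each candidate path contributes a phase exp(iS/ħ), and summing the phases over different regions of path space shows the cancellation at work.

```python
import numpy as np

hbar = 0.05        # an artificially small hbar exaggerates the cancellation
m, dt = 1.0, 0.5   # particle mass and duration of each path segment

def action(x_mid):
    # Classical action S = sum of (1/2) m v^2 dt over the two
    # straight segments of the broken-line path through x_mid.
    v1, v2 = (x_mid - 0.0) / dt, (1.0 - x_mid) / dt
    return 0.5 * m * (v1 ** 2 + v2 ** 2) * dt

x_mids = np.linspace(-2, 3, 5001)   # one slice through 'path space'
phases = np.exp(1j * np.array([action(x) for x in x_mids]) / hbar)

# Sum the phases over windows of path space. Only the window around
# the classical midpoint (x_mid = 0.5, where S is stationary) adds
# up coherently; elsewhere the contributions largely cancel.
for lo, hi in [(-2.0, -1.0), (0.0, 1.0), (2.0, 3.0)]:
    mask = (x_mids >= lo) & (x_mids < hi)
    print(f"x_mid in [{lo:5.1f}, {hi:4.1f}): |sum of phases| = {abs(phases[mask].sum()):8.1f}")
```

The window containing the classical path dominates by a couple of orders of magnitude; the 'histories' far from it contribute phases that wash each other out, which is the cancellation Cox and Dyson describe.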

Because a specific path is only known once an observation is made, it appears that we are determining the path backwards-in-time, which has been demonstrated experimentally. I feel this is the key to the whole enigma, like the photon coming from the Magellanic Clouds – the path is revealed in retrospect. Until it’s revealed, it’s effectively in our future. Also this is consistent with the asymmetry in time we all experience. The future is many paths (as per QED) but the past is only one.

Ball argues consistently that there is a transition from ‘quantumness’ to classical physics (as per Penrose, though he doesn’t reference Penrose) but he argues that classical physics is a special case of QM (which is the orthodoxy).

His best argument is that decoherence is the loss of quantum information to the environment, which can happen over time, so not necessarily in an instant. He uses the same idea to explain why large objects decohere virtually instantaneously: they are exposed to such a massive expanse of the environment.

There is much about QM I don’t discuss, like spin states that distinguish bosons from fermions and the role of symmetry and Emmy Noether’s famous theorem that relates symmetry to conservation laws (not only in QM but relativity theory).

I'm trying to understand QM and how it relates to time. Why is it, as Ball himself asks, that many possibilities become one? My contention is that this is exactly what distinguishes the future from the past as we experience it. The enigma with QM, as when we look backwards in time through the entire cosmos, is that those many paths only become one when the quantum object (photon or particle) interacts with something, forcing a wave function collapse or decoherence. Is there a backwards-in-time cosmic scale loop, as proposed by John Wheeler? Maybe there is. Maybe the arrow of time goes both ways.


Footnote: This video gives a good summary of QM as discussed above; in particular, the presenter discusses the fundamental enigma of the many possibilities becoming one, and the many paths becoming one, only when an observation or measurement is made. He specifically discusses the so-called Copenhagen interpretation, but in effect describes QED.

Addendum 1: Sometimes I can't stop thinking about what I've written. I'm aware that there is a paradox with a light ray from the past intersecting with our future, so I've shown it in a very crude spacetime diagram, with time on the vertical axis and space on the horizontal axis. The Magellanic Clouds and Earth are 200,000 light years apart and there is a light cone which goes at 45 degrees from the source to the Earth 200,000 years into the future. (Actually, the small Magellanic Cloud is 199,000 light years away while the large one is 158,000, which is probably the one you can see with the naked eye, so maybe you need a telescope for this after all.)

The diagram assumes that the distance between the Magellanic Clouds and Earth doesn't change (for simplicity), which is almost certainly not true. This allows the Earth to be a vertical line on the spacetime diagram with light at 45 degrees, so they intersect 200,000 years in the future.
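For anyone who wants to redraw it, here is a minimal matplotlib version of the diagram as I've described it (my reconstruction, with the same fixed-distance simplification):

```python
import matplotlib.pyplot as plt

D = 200_000  # Earth-Magellanic Cloud separation in light years (held fixed)

# Time on the vertical axis, space on the horizontal axis (in years
# and light years respectively, so light travels at 45 degrees).
plt.plot([0, 0], [0, D], label="Earth worldline (x = 0)")
plt.plot([D, D], [0, D], linestyle=":", label="Magellanic Cloud worldline")
plt.plot([D, 0], [0, D], linestyle="--", label="light ray (45 degrees)")
plt.scatter([0], [D], zorder=3)  # the 'now' where the photon meets a retina

plt.xlabel("distance (light years)")
plt.ylabel("time (years)")
plt.title("Light emitted 200,000 years ago arriving 'now'")
plt.legend()
plt.show()
```

The photon's entire worldline sits on the 45-degree ray; nothing on Earth's worldline intersects it until the single event at the top left, which is the point I'm making about light only becoming 'now' when it interacts.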

It also suggests that the photon exists in a constant 'now' (until it interacts with something). As I said before, light is unique in that it has zero time, which explains that particular effect. Consciousness is unique in that it provides a reference for ‘now’ all the time. Light is always in the future of whatever it interacts with, when it becomes ‘now’, then becomes frozen in the past, possibly as an image (e.g. a photo) or a dot on a screen. Consciousness never becomes frozen, but it does become blank sometimes.

Addendum 2: This is a Youtube lecture by Carlo Rovelli, who would tell you that virtually everything I've said above is wrong, including what I said about entropy and time.

Addendum 3 (Conclusion): I've since read Carlo Rovelli's latest book, The Order of Time, where he completely dismantles our intuitive concept of time. For one, he says that Einstein has demonstrated that time doesn't flow at the same rate everywhere, but then effectively says that time doesn't flow at all. He points out that in QM, time can flow both ways mathematically, which is the U phase (using Penrose's nomenclature), and that the only time direction comes from entropy, which is contentious, in as much as many physicists believe that entropy is not the cause of time's apparent direction, but a consequence.

He says that there is no objective 'now', yet elsewhere I've read him being quoted as saying 'now' is the edge of the big bang. In his book, he doesn't discuss the age of the Universe at all, yet it has obvious ramifications to this topic.

There are 4 ways of looking at QM (5 if you include multiple worlds, which I'm not). There is the Copenhagen interpretation, which effectively says the only reality is what we measure or observe, and the wave function is simply a mathematical device for making probabilistic predictions prior to that.

There is Bohm's pilot wave theory, which says there was always a path, created by the pilot wave but not known until after our observation.

There is QED, in particular Feynman's sum over histories interpretation, which says there are an infinitude of paths, most of which cancel each other out, giving the most probable outcome. When the outcome is known, they all become irrelevant.

There is a so-called transactional interpretation that says the wave function goes both forwards and backwards in time, formulated by John Cramer in the mid 1980s, but foreshadowed by Schrodinger himself in 1941 (John Gribbin. Erwin Schrodinger and the Quantum Revolution, pp.161-4).

My interpretation effectively captures all of these (except multiple worlds). I don't think there is a pilot wave, but I think there is 'one path' that is discovered after the observation. If you take the example I use in the main text, of observing the light from a star in the Magellanic Cloud: when you see it, you instantly look 200,000 years into the past (or thereabouts). So there is a link between your current 'now' and a 'now' 200,000 years ago. My contention is that this is only possible because there is a backwards-in-time path from your eyeball to the star.

Addendum 4: Much of what I discuss above was foreshadowed in a post I wrote over 2 years ago; possibly more succinct and more accessible.

Addendum 5: This is a brief interview with Freeman Dyson, which has some relevance to this post. I have to say that Dyson probably comes closest to expressing my own views on QM and classical physics: that they are, in essence, incompatible. By his own admission, these views are not shared by most other physicists (if any).