Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

12 October 2024

Freedom of the will is requisite for all other freedoms

I’ve recently read 2 really good books on consciousness and the mind, as well as watched countless YouTube videos on the topic, but the title of this post reflects the endpoint for me. Consciousness has evolved, so for most of the Universe’s history it didn’t exist, yet without it, the Universe has no meaning and no purpose. Even using the word, purpose, in this context is anathema to many scientists and philosophers, because it hints at teleology. In fact, Paul Davies raises that very point in one of the many video conversations he has with Robert Lawrence Kuhn in the excellent series, Closer to Truth.
 
Davies is an advocate of a cosmic-scale ‘loop’, whereby QM provides a backwards-in-time connection which can only be determined by a conscious ‘observer’. This is contentious, of course, though not his original idea – it came from John Wheeler. As Davies points out, Stephen Hawking was also an advocate, premised on the idea that there are a number of alternative histories, as per Feynman’s ‘sum-over-histories’ methodology, but only one becomes reality when an ‘observation’ is made. I won’t elaborate, as I’ve discussed it elsewhere, when I reviewed Hawking’s book, The Grand Design.
 
In the same conversation with Kuhn, Davies emphasises the fact that the Universe created the means to understand itself, through us, and quotes Einstein: The most incomprehensible thing about the Universe is that it’s comprehensible. Of course, I’ve made the exact same point many times, and, like me, Davies argues that this is only possible because of the medium of mathematics.
 
Now, I know I appear to have gone down a rabbit hole, but it’s all relevant to my viewpoint. Consciousness appears to have a role, arguably a necessary one, in the self-realisation of the Universe – without it, the Universe may as well not exist. To quote Wheeler: The universe gave rise to consciousness and consciousness gives meaning to the Universe.
 
Scientists, of all stripes, appear to avoid any metaphysical aspect of consciousness, but I think it’s unavoidable. One of the books I cite in my introduction is Philip Ball’s The Book of Minds: How to Understand Ourselves and Other Beings, from Animals to Aliens. It’s as ambitious as the title suggests, and at 450 pages, it’s quite a read. I’ve read and reviewed a previous book by Ball, Beyond Weird (about quantum mechanics), which is equally erudite and thought-provoking. Ball is a ‘physicalist’, as virtually all scientists are (though he’s more open-minded than most), but I tend to agree with Raymond Tallis that, despite what people claim, consciousness is still ‘unexplained’ and might remain so for some time, if not forever.
 
I like an idea that I first encountered in Douglas Hofstadter’s seminal tome, Gödel, Escher, Bach: An Eternal Golden Braid, that consciousness is effectively a loop at what one might call the local level, by which I mean it’s confined to a particular body. It’s created within that body but then it has a causal agency all of its own. Not everyone agrees with that: many argue that consciousness cannot of itself ‘cause’ anything, but Ball is one of those who begs to differ, and so do I. It’s what free will is all about, which finally gets us back to the subject of this post.
 
Like me, Ball prefers to use the word ‘agency’ over free will, but he introduces the term ‘volitional decision-making’ and gives it the following context:

I believe that the only meaningful notion of free will – and it is one that seems to me to satisfy all reasonable demands traditionally made of it – is one in which volitional decision-making can be shown to happen according to the definition I give above: in short, that the mind operates as an autonomous source of behaviour and control. It is this, I suspect, that most people have vaguely in mind when speaking of free will: the sense that we are the authors of our actions and that we have some say in what happens to us. (My emphasis)

And, in a roundabout way, this brings me to the point alluded to in the title of this post: our freedoms are constrained by our environment and our circumstances. We all wish to be ‘authors of our actions’ and ‘have some say in what happens to us’, but that varies from person to person, dependent on ‘external’ factors.

Writing stories, believe it or not, had a profound influence on how I perceive free will, because a story, by design, is an interaction between character and plot. In fact, I claim they are 2 sides of the same coin – each character has their own subplot, and as they interact, their storylines intertwine. This describes my approach to writing fiction in a nutshell. The character and plot represent, respectively, the internal and external journey of the story. The journey metaphor is apt, because a story always has the dimension of time, which is visceral, and is one of the essential elements that separates fiction from non-fiction. To stretch the analogy, character represents free will and plot represents fate. Therefore, I tell aspiring writers how important it is to give their characters free will.

A detour, but not irrelevant. I read an article in Philosophy Now sometime back, about people who can escape their circumstances, and it’s the subject of a lot of biographies as well as fiction. We in the West live in a very privileged time whereby many of us can aspire to, and attain, the life that we dream about. I remember at the time I left school, following a less than ideal childhood, feeling I had little control over my life. I was a fatalist in that I thought that whatever happened was dependent on fate and not on my actions (I literally used to attribute everything to fate). I later realised that this is a state-of-mind that many people have who are not happy with their circumstances and feel impotent to change them.

The thing is that it takes a fundamental belief in free will to rise above that and take advantage of what comes your way. No one who has made that journey will accept the self-denial that free will is an illusion and therefore they have no control over their destiny.

I will provide another quote from Ball that is more in line with my own thinking:

…minds are an autonomous part of what causes the future to unfold. This is different to the common view of free will in which the world somehow offers alternative outcomes and the wilful mind selects between them. Alternative outcomes – different, counterfactual realities – are not real, but metaphysical: they can never be observed. When we make a choice, we aren’t selecting between various possible futures, but between various imagined futures, as represented in the mind’s internal model of the world…
(emphasis in the original)

And this highlights a point I’ve made before: that it’s the imagination which plays the key role in free will. I’ve argued that imagination is one of the faculties of a conscious mind that separates us (and other creatures) from AI. Now, AI can also demonstrate agency, and, in a game of chess, for example, it will ‘select’ from a number of possible ‘moves’ based on certain criteria. But there are fundamental differences. For a start, the AI doesn’t visualise what it’s doing; it’s following a set of highly constrained rules, within which it can select from a number of options, one of which will be the optimal solution. Its inherent advantage over a human player isn’t just its speed but its ability to compare a number of possibilities that are impossible for the human mind to contemplate simultaneously.
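To make the contrast concrete, this kind of rule-bound selection can be sketched in a few lines. The moves and scores below are invented purely for illustration; no real chess engine is this simple:

```python
# A rule-bound "agent" picks the option that maximises a fixed numeric
# criterion. The candidate moves and their evaluation scores here are
# hypothetical, purely for illustration.
candidate_moves = {
    "Nf3": 0.2,
    "e4": 0.5,
    "Qh5": -0.3,
}

def select_move(scores):
    """Return the move with the highest evaluation score."""
    return max(scores, key=scores.get)

print(select_move(candidate_moves))  # → e4
```

The point is that the ‘choice’ is exhausted by the scoring rule; nothing is imagined or visualised along the way.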

The other book I read was Being You; A New Science of Consciousness by Anil Seth. I came across Seth when I did an online course on consciousness through New Scientist, during COVID lockdowns. To be honest, his book didn’t tell me a lot that I didn’t already know. For example, that the world we all see and think exists ‘out there’ is actually a model of reality created within our heads. He also emphasises how the brain is a ‘prediction-making’ organ rather than a purely receptive one. Seth mentions that it uses a Bayesian model (which I also knew about previously), whereby it updates its prediction based on new sensory data. Not surprisingly, Seth describes all this in far more detail and erudition than I can muster.

Ball, Seth and I all seem to agree that while AI will become better at mimicking the human mind, this doesn’t necessarily mean it will attain consciousness. Application software like ChatGPT, despite appearances, does not ‘think’ the way we do, and actually does not ‘understand’ what it’s talking or writing about. I’ve written on this before, so I won’t elaborate.

Seth contends that the ‘mystery’ of consciousness will disappear in the same way that the ‘mystery of life’ has effectively become a non-issue. What he means is that we no longer believe that there is some ‘elan vital’ or ‘life force’, which distinguishes living from non-living matter. And he’s right, inasmuch as the chemical origins of life are less mysterious than they once were, even though abiogenesis is still not fully understood.

By analogy, the concept of a soul has also lost a lot of its cogency, following the scientific revolution. Seth seems to associate the soul with what he calls ‘spooky free will’ (without mentioning the word, soul), but he’s obviously putting ‘spooky free will’ in the same category as ‘elan vital’, which makes his analogy and associated argument consistent. He then says:

Once spooky free will is out of the picture, it is easy to see that the debate over determinism doesn’t matter at all. There’s no longer any need to allow any non-deterministic elbow room for it to intervene. From the perspective of free will as a perceptual experience, there is simply no need for any disruption to the causal flow of physical events. (My emphasis)

Seth differs from Ball (and myself) in that he doesn’t seem to believe that something ‘immaterial’ like consciousness can affect the physical world. To quote:

But experiences of volition do not reveal the existence of an immaterial self with causal power over physical events.

Therefore, free will is purely a ‘perceptual experience’. There is a problem with this view that Ball himself raises. If free will is simply the mind observing effects it can’t cause, but with the illusion that it can, then its role is redundant to say the least. This is a view that Sabine Hossenfelder has also expressed: that we are merely an ‘observer’ of what we are thinking.

Your brain is running a calculation, and while it is going on you do not know the outcome of that calculation. So the impression of free will comes from our ‘awareness’ that we think about what we do, along with our inability to predict the result of what we are thinking.

Ball makes the point that we only have to look at all the material manifestations of human intellectual achievements that are evident everywhere we’ve been. And this brings me back to the loop concept I alluded to earlier. Consciousness creates a ‘local’ loop, whereby it has a causal effect not only on the body it inhabits but also on the world external to that body. This is stating the obvious, except, as I’ve mentioned elsewhere, it’s possible that one could interact with the external world as an automaton, with no conscious awareness of it. The difference is the role of imagination, which I keep coming back to. All the material manifestations of our intellect are arguably a result of imagination.

One insight I gained from Ball, which goes slightly off-topic, is evidence that bees have an internal map of their environment, which is why the dance they perform on returning to the hive can be ‘understood’ by other bees. We’ve learned this by interfering in their behaviour. What I find interesting is that this may have been the original reason that consciousness evolved into the form that we experience it. In other words, we all create an internal world that reflects the external world so realistically, that we think it is the actual world. I believe that this also distinguishes us (and bees) from AI. An AI can use GPS to navigate its way through the physical world, as well as other so-called sensory data, from radar or infra-red sensors or whatever, but it doesn’t create an experience of that world inside itself.

The human mind seems to be able to access an abstract world, which we do when we read or watch a story, or even write one, as I have done. I can understand how Plato took this idea to its logical extreme: that there is an abstract world, of which the one we inhabit is but a facsimile (though he used different terminology). No one believes that today – except, there is a remnant of Plato’s abstract world that persists, which is mathematics. Many mathematicians and physicists (though not all) treat mathematics as a never-ending landscape that humans have the unique capacity to explore and comprehend. This, of course, brings me back to Davies’ philosophical ruminations that I opened this discussion with. And as he, and others (like Einstein, Feynman, Wigner, Penrose, to name but a few) have pointed out: the Universe itself seems to follow specific laws that are intrinsically mathematical and which we are continually discovering.

And this closes another loop: that the Universe created the means to comprehend itself, using the medium of mathematics, without which, it has no meaning. Of purpose, we can only conjecture.

02 October 2024

Common sense; uncommonly agreed upon

The latest New Scientist (28 Sep., 2024) had an article headlined Uncommon Sense, written by Emma Young (based in Sheffield, UK), drawing primarily on a study done by Duncan Watts and Mark Whiting at the University of Pennsylvania. I wasn’t surprised to learn that ‘common sense’ is very subjective, although she pointed out that most people think the opposite: that it’s objective. I’ve long believed that common sense is largely culturally determined and, in many cases, arises out of confirmation bias, which the article affirmed with references to the recent COVID pandemic and the polarised responses it produced, where one person’s common sense was another person’s anathema.
 
Common sense is something we mostly imbibe through social norms, though experience tends to play a role long term. Common sense is often demonstrated, though not expressed, as a heuristic, where people with expertise develop heuristics that others outside their field wouldn’t even know about. This is a point I’ve made before, without using the term common sense. In other words, common sense is contextual in a way that most of us don’t consider.
 
Anyone with an interest in modern physics (like myself) knows that our common sense views on time and space don’t apply in the face of Einstein’s relativity theory. In fact, it’s one of the reasons that people struggle with it (including me). Quantum mechanics, with phenomena like superposition, entanglement and Heisenberg’s Uncertainty Principle, also plays havoc with our ‘common sense’ view of the world. But this is perfectly logical when one considers that we never encounter these ‘effects’ in our everyday existence, so they can be largely, if not completely, ignored. The fact that the GPS on your phone requires relativistic corrections, and that every device you use (including said phone) is dependent on QM dynamics, doesn’t change this virtually universal viewpoint.
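As an aside, the scale of that GPS correction is easy to verify with a back-of-envelope calculation. The constants below are standard textbook values and the orbital radius is the nominal GPS figure; everything here is approximate:

```python
import math

# Back-of-envelope estimate of the net relativistic clock drift for a
# GPS satellite, in microseconds per day. All values approximate.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # Earth mass, kg
c = 2.998e8        # speed of light, m/s
R_earth = 6.371e6  # Earth radius, m
r_sat = 2.6571e7   # GPS orbital radius, m (~20,200 km altitude)
day = 86400        # seconds per day

v = math.sqrt(G * M / r_sat)                       # orbital speed, ~3.9 km/s
sr = -(v**2 / (2 * c**2)) * day                    # special relativity: clock runs slow
gr = (G * M / c**2) * (1/R_earth - 1/r_sat) * day  # general relativity: clock runs fast
net_us = (sr + gr) * 1e6                           # net drift in microseconds per day
# net_us comes out at roughly +38 microseconds per day, which satellite
# clocks must pre-correct for, or positions would drift by kilometres daily.
```

The two effects pull in opposite directions; the gravitational one wins, so uncorrected satellite clocks would run fast relative to clocks on the ground.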
 
I’ve just finished reading an excellent, albeit lengthy, book by Philip Ball titled ambitiously, if not pretentiously, The Book of Minds. I can honestly say it’s the best book I’ve read on the subject, but that’s a topic for a future post. The reason I raise it in this context, is because throughout I kept using AI as a reference point for appreciating what makes minds unique. You see, AI comes closest to mimicking the human mind, yet it’s nowhere near it, though others may disagree. As I said, it’s a topic for another post.
 
I remember coming up with my own definition of common sense many years ago, when I saw it as something that evolves over time, based on experience. I would contend that our common sense view on a subject changes, whether it be through the gaining of expertise in a specific field (as I mentioned above) or just our everyday encounters. A good example, that most of us can identify with, is driving a car. Notice how, over time, we develop skills and behaviours that have helped us to avoid accidents, some of which have arisen because of accidents.
 
And a long time ago, before I became a blogger, and didn’t even consider myself a philosopher, it occurred to me that AI could also develop something akin to common sense based on learning from its mistakes. Self-driving cars being a case-in-point.
 
According to the New Scientist article, the researchers, Watts and Whiting, claim that there is no correlation between so-called common sense and IQ. Instead, they contend that there is a correlation between a ‘consensual common sense’ (my term) and ‘Reading the Mind in the Eyes’ (their terminology). In other words, the ability to ‘read’ emotions is a good indicator for the ability to determine what’s considered ‘common sense’ for the majority of a cultural group (if I understand them correctly). This implies that common sense is a consensual perception, based on cultural norms, which is what I’ve always believed. This might be a bit simplistic, and an example of confirmation bias (on my part), but I’d be surprised if common sense didn’t morph between cultures in the same way it becomes modified by expertise in a particular field. So the idea of a universal, objective common sense is as much a chimera as objective morality, which is also more dependent on social norms than most people acknowledge.
 
 
Footnote: it’s worth reading the article in New Scientist (if accessible), because it provides a different emphasis and a different perspective, even though it largely draws similar conclusions to myself.

19 September 2024

Prima Facie; the play

I went and saw a film made of a live performance of this highly rated play, put on by the National Theatre at the Harold Pinter Theatre in London’s West End in 2022. It’s a one-hander, played by Jodie Comer, best known as the quirky assassin with a diabolical sense of humour in the black comedy hit, Killing Eve. I also saw her in Ridley Scott’s riveting and realistically rendered film, The Last Duel, set in mediaeval France, where she played alongside Matt Damon, Adam Driver and an unrecognisable Ben Affleck. The roles Comer played in those 2 screen media couldn’t be more different.
 
Theatre is more unforgiving than cinema, because there are no multiple takes, or even a break once the curtain’s raised; a one-hander, even more so. In the case of Prima Facie, Comer is on stage a full 90 minutes, and even does her own costume changes and moves her own scenery unaided, without breaking stride. It’s such a tour de force performance, as the Financial Times put it, that I’d go so far as to say it’s the best acting performance I’ve ever witnessed by anyone. It’s such an emotionally draining role, where she cries and even breaks into a sweat in one scene, that I marvel she could do it night after night, as I assume she did.
 
And I’ve yet to broach the subject matter, which is very apt, given the me-too climate, but philosophically it goes deeper than that. The premise for the entire play, which is even spelt out early on, in case you’re not paying attention, is the difference between truth and justice, and whether it matters. Comer’s character, Tessa, happens to experience it from both sides, which is what makes this so powerful.
 
She’s a defence barrister who specialises in sexual-assault cases, and, as she explains very early on, effectively telling us the rules of the game: no one wins or loses; you either come first or second. In other words, the barristers and those involved in the legal profession don’t see the process the same way that you and I do, and I can understand that – to get emotionally involved makes it very stressful.

In fact, I have played a small role in this process in a professional capacity, so I’ve seen this firsthand. But I wasn’t dealing with rape cases or anything involving violence, just contractual disputes where millions of dollars could be at stake. My specific role was to ‘prepare evidence’ for lawyers for either a claim or the defence of a claim or possibly a counter-claim, and I quickly realised the more dispassionate one is, the more successful one is likely to be. I also realised that the lawyers I was supporting in one case could be on the opposing side in the next one, so you don’t get personal.
 
So, I have a small insight into this world, and can appreciate why they see it as a game, where you ‘win or come second’. But in Prima Facie, Tess goes through this very visceral and emotionally scarifying transformation where she finds herself on the receiving end, and it’s suddenly very personal indeed.
 
Back in 2015, I wrote a 400-word mini-essay in answer to one of those Question of the Month topics that Philosophy Now likes to throw open to amateur wannabe philosophers like myself. And in this case, it was one that was selected for publication (among 12 others) from all around the Western world. I bring this up because I made the assertion that ‘justice without truth is injustice’, and I feel that this is really what Prima Facie is all about. At the end of the play, with Tess now having the perspective of the victim (there is no other word), it does become a matter of winning or losing, because not only her career and future livelihood, but her very dignity, is now up for sacrifice.
 
I watched a Q&A programme on Australia’s ABC some years ago, where this issue was discussed. Every woman on the panel, including one from the righteous right (my coinage), had a tale to tell about discrimination or harassment in a workplace situation. But the most damning testimony came from a man, who specialised in representing women in sexual assault cases, and he said that in every case, their doctors tell them not to proceed because it will destroy their health; and he said: they’re right. I was reminded of this when I watched this play.
 
One needs to give special mention to the writer, Suzie Miller, who is an Aussie as it turns out, and as far as 6 degrees of separation go, I happen to know someone who knows her father. Over 5 decades I’ve seen some very good theatre, some of it very innovative and original. In fact, I think the best theatre I’ve seen has invariably been something completely different, unexpected and dare-I-say-it, special. I had a small involvement in theatre when I was still very young, and learned that I couldn’t act to save myself. Nevertheless, my very first foray into writing was an attempt to write a play. Now, I’d say it’s the hardest and most unforgiving medium of storytelling to write for. I had a friend who was involved in theatre for some decades and even won awards. She passed a couple of years ago and I miss her very much. At her funeral, she was given a standing ovation, when her coffin was taken out; it was very moving. I can’t go to a play now without thinking about her and wishing I could discuss it with her.

07 September 2024

Science and religion meet at the boundary of humanity’s ignorance

 I watched a YouTube debate (90 mins) between Sir Roger Penrose and William Lane Craig, and, if I’m honest, I found it a bit frustrating because I wish I was debating Craig instead of Penrose. I also think it would have been more interesting if Craig debated someone like Paul Davies, who is more philosophically inclined than Penrose, even though Penrose is more successful as a scientist, and as a physicist, in particular.
 
But it was set up as an atheist versus theist debate between 2 well-known personalities, who were mutually respectful and where there was no animosity evident at all. I confess to having my own biases, which would be obvious to any regular reader of this blog. I admit to finding Craig arrogant and a bit smug in his demeanour, but to be fair, he was on his best behaviour, and perhaps he’s matured (or perhaps I have) or perhaps he adapts to whoever he’s facing. When I call it a debate, it wasn’t very formal and there wasn’t even a nominated topic. I felt the facilitator or mediator had his own biases, but I admit it would be hard to find someone who didn’t.
 
Penrose started with his 3 worlds philosophy of the physical, the mental and the abstract, which has long appealed to me, though most scientists and many philosophers would contend that the categorisation is unnecessary, and that everything is physical at base. Penrose proposed that they present 3 mysteries, though the mysteries are inherent in the connections between them rather than the categories themselves. This became the starting point of the discussion.
 
Craig argued that the overriding component must surely be ‘mind’, whereas Penrose argued that it should be the abstract world, specifically mathematics, which is the position of mathematical Platonists (including myself). Craig pointed out that mathematics can’t ‘create’ the physical (which is true), but a mind could. As the mediator pointed out (as if it wasn’t obvious), said mind could be God. And this more or less set the course for the remainder of the discussion, with a detour to Penrose’s CCC theory (Conformal Cyclic Cosmology).
 
I actually thought that this was Craig’s best argument, and I’ve written about it myself, in answer to a question on Quora: Did math create the Universe? The answer is no, nevertheless I contend that mathematics is a prerequisite for the Universe to exist, as the laws that allowed the Universe to evolve, in all its facets, are mathematical in nature. Note that this doesn’t rule out a God.
 
Where I would challenge Craig, and where I’d deviate from Penrose, is that we have no cognisance of who this God is or even what ‘It’ could be. Could not this God be the laws of the Universe themselves? Penrose struggled with this aspect of the argument, because, from a scientific perspective, it doesn’t tell us anything that we can either confirm or falsify. I know from previous debates that Craig has had, that he would see this as a win. A scientist can’t refute his God’s existence, nor can they propose an alternative, therefore it’s his point by default.
 
This eventually led to a discussion on the ‘fine-tuning’ of the Universe, which in the case of entropy, is what led Penrose to formulate his CCC model of the Universe. Of course, the standard alternative is the multiverse and the anthropic principle, which, as Penrose points out, is also applicable to his CCC model, where you have an infinite sequence of universes as opposed to an infinity of simultaneous ones, which is the orthodox response among cosmologists.
 
This is where I would have liked to have seen Paul Davies respond, because he’s an advocate of John Wheeler’s so-called ‘participatory Universe’, which is effectively the ‘strong anthropic principle’ as opposed to the ‘weak anthropic principle’. The weak anthropic principle basically says that ‘observers’ (meaning us) can only exist in a universe that allows observers to exist – a tautology. Whereas the strong anthropic principle effectively contends that the emergence of observers is a necessary condition for the Universe to exist (the observers don’t have to be human). Basically, Wheeler was an advocate of a cosmic, acausal (backward-in-time) link from conscious observers to the birth of the Universe. I admit this appeals to me, but as Craig would expound, it’s a purely metaphysical argument, and so is the argument for God.
 
The other possibility that is very rarely expressed, is that God is the end result of the Universe rather than its progenitor. In other words, the ‘mind’ that Craig expounded upon is a consequence of all of us. This aligns more closely with the Hindu concept of Brahman or a Buddhist concept of collective karma – we get the God we deserve. Erwin Schrödinger, who studied the Upanishads, discusses Brahman as a pluralistic ‘mind’ in What is Life? (Note that in Hinduism, the soul, or Atman, is a part of Brahman.) My point would be that the Judeo-Christian-Islamic God does not have a monopoly on Craig’s overriding ‘mind’ concept.
 
A recurring theme on this blog is that there will always be mysteries – we can never know everything – and it’s an unspoken certitude that there will forever be knowledge beyond our cognition. The problem that scientists sometimes have, but are reluctant to admit, is that we can’t explain everything, even though we keep explaining more with each generation. And the problem that theologians sometimes have is that our inherent ignorance is neither ‘proof’ nor ‘evidence’ that there is a ‘creator’ God.
 
I’ve argued elsewhere that a belief in God is purely a subjective and emotional concept, which one then rationalises with either cultural references or as an ultimate explanation for our existence.


Addendum: I like this quote, albeit out of context, from Spinoza: "The sum of the natural and physical laws of the universe and certainly not an individual entity or creator".


29 August 2024

How scale demonstrates that mathematics is intrinsically entailed in the Universe

I momentarily contemplated another title: Is the Planck limit an epistemology or an ontology? Because that’s basically the topic of a YouTube video that’s the trigger for this post. I wrote a post some time ago where I discussed whether the Universe is continuous or discrete, and basically concluded that it was continuous. Based on what I’ve learned from this video, I might well change my position. But I should point out that my former position was based more on opposition to the idea that it could be quantised into ‘bits’ of information, whereas now I’m willing to acknowledge that it could be granular at the Planck scale, which I’ll elaborate on towards the end. I still don’t think that the underlying reality of the Universe is in ‘bits’ of information, therefore potentially created and operated by a computer.
 
Earlier this year, I discussed the problem of the reification of mathematics, so I want to avoid that if possible. By reification, I mean treating a mathematical entity as though it were physically real. Basically, physics works by formulating mathematical models that we then compare to reality through observations. But as Freeman Dyson pointed out, the wave function (Ψ), for example, is a mathematical entity and not a physical entity, which is sometimes debated. The fact is that if it does exist physically, it’s never observed, and my contention is that it ‘exists’ in the future; a view that is consistent with Dyson’s own philosophical viewpoint that QM can only describe the future and not the past.
 
And this brings me to the video, which has nothing to say about wave functions or reified mathematical entities, but uses high school mathematics to explore such esoteric and exotic topics as black holes and quantum gravity. There is one step involving integral calculus, which is about as esoteric as the maths becomes, and if you allow that 1/∞ = 0, it leads to the formula for the escape velocity from any astronomical body (including Earth). Note that the escape velocity literally allows an object to escape a gravitational field to infinity (∞). And the escape velocity for a black hole is c (the speed of light).
 
All the other mathematics is basic algebra using some basic physics equations, like Newton’s equation for gravity, Planck’s equation for energy, Heisenberg’s Uncertainty Principle using Planck’s Constant (h), Einstein’s famous equation for the equivalence of energy and mass, and the equation for the Coulomb Force between 2 point electric charges (electrons). There is also the equation for the Schwarzschild radius of a black hole, which is far easier to derive than you might imagine (despite the fact that Schwarzschild originally derived it from Einstein’s field equations).
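For anyone who wants to check the arithmetic, the escape velocity and the Schwarzschild radius can be computed directly: the Newtonian escape velocity is v = √(2GM/r), and setting v = c and solving for r gives r = 2GM/c². The constants below are approximate textbook values:

```python
import math

# Escape velocity and Schwarzschild radius from basic algebra, as in
# the derivation described above. Constants are approximate.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def escape_velocity(M, r):
    """Speed needed to escape to infinity from radius r of mass M (m/s)."""
    return math.sqrt(2 * G * M / r)

def schwarzschild_radius(M):
    """Radius at which the escape velocity equals c (metres)."""
    return 2 * G * M / c**2

M_earth = 5.972e24   # kg
M_sun = 1.989e30     # kg

v_earth = escape_velocity(M_earth, 6.371e6)  # ~11.2 km/s
r_s_sun = schwarzschild_radius(M_sun)        # ~2.95 km
r_s_earth = schwarzschild_radius(M_earth)    # ~9 mm
```

It’s a quirk of the algebra that the naive Newtonian derivation lands on the same formula Schwarzschild derived from Einstein’s field equations.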
 
Back in May 2019, I wrote a post on the Universe’s natural units, which involves the fundamental natural constants, h, c and G. This was originally done by Planck himself, which I describe in that post, while providing a link to a more detailed exposition. In the video (embedded below), the narrator takes a completely different path to deriving the same Planck units before describing a method that Planck himself would have used. In so doing, he explains how at the Planck level, space and time are not only impossible to observe, even in principle, but may well cease to be continuous in reality. You need to watch the video, as he explains it far better than I can, using just high school mathematics.
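For reference, the Planck units fall out of dimensional analysis on h (via ħ = h/2π), c and G alone. A quick sketch using the standard textbook formulas (my own summary, not the video’s derivation):

```python
import math

h = 6.626e-34        # Planck's constant, J s
hbar = h / (2 * math.pi)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
planck_time   = planck_length / c            # ~5.4e-44 s
planck_mass   = math.sqrt(hbar * c / G)      # ~2.2e-8 kg

print(planck_length, planck_time, planck_mass)
```

Note how absurdly small the length and time units are compared to anything observable, while the Planck mass is macroscopic (roughly that of a grain of dust), which is part of what makes the scale so strange.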
 
Regarding the title I chose for this post, Roger Penrose’s Conformal Cyclic Cosmology (CCC) model of the Universe exploits the fact that a universe without matter (just radiation) is scale invariant, which is essential for the ‘conformal’ part of his theory. However, that all changes when one includes matter. I’ve argued in other posts that different forces become dominant at different scales, from the cosmological to the subatomic. The point made in this video is that at the Planck scale all the forces, including gravity, become comparable. Now, as I pointed out at the beginning, physics is about applying mathematical models and comparing them to reality. We can’t, and quite possibly never will be able to, observe reality at the Planck scale, yet the mathematics tells us that it’s the scale at which all the physics we currently know becomes compatible. It tells me that not only is the physics of the Universe scale-dependent, but it's also mathematically dependent (because scale is inherently mathematical). In essence, the Universe’s dynamics are determined by mathematical parameters at all scales, including the Planck scale.
 
Note that the mathematical relationships in the video use ~ rather than =, meaning they are approximate, not exact. But this doesn’t detract from the significance that 2 different approaches arrive at the same conclusion: that the Planck scale coincides with the origin of the Universe, incorporating all forces equivalently.
 
 
Addendum: I should point out that Viktor T Toth, who knows a great deal more about this than me, argues that there is, in fact, no limit to what we can measure in principle. Even the narrator in the video frames his conclusion cautiously and with caveats. In other words, we are in the realm of speculative physics. Nevertheless, I find it interesting to contemplate where the maths leads us.



28 July 2024

When truth becomes a casualty, democracy is put at risk

You may know of Raimond Gaita as the author of Romulus, My Father, a memoir of his childhood as the only child of postwar European parents growing up in rural Australia. It was turned into a movie directed by Richard Roxburgh (his directorial debut) and starring Eric Bana. What you may not know is that Raimond Gaita is also a professor of philosophy who happens to live in the same metropolis as me, albeit in a different suburb.
 
I borrowed his latest tome, Justice and Hope: Essays, Lectures and Other Writings, from my local library (published last year, 2023), and have barely made a dent in the 33 essays, unequally divided into 6 parts. So far, I’ve read the 5 essays in Part 1: An Unconditional Love of the World, and just the first essay of Part 2: Truth and Judgement, which is titled rather provocatively, The Intelligentsia in the Age of Trump. Each essay heading includes the year it was written, and the essay on the Trump phenomenon (my term, not his) was written in 2017, so after Trump’s election but well before his ignominious attempt to retain power following his election defeat in 2020. And, of course, he now has more stature and influence than ever, having just won the Presidential nomination from the Republican Party for the 2024 election, which is only months away as I write.
 
Gaita doesn’t write like an academic: he uses plain language, is not afraid to include personal anecdotes if he thinks they’re relevant, and doesn’t pretend to be nonpartisan in his political views. The first 5 essays regarding ‘an unconditional love of the world’ all deal with other writers and postwar intellectuals, all concerned with the inhumane conditions that many people suffered, and some managed to survive, during World War 2. This is confronting and completely unvarnished testimony, much darker and rawer than anything I’ve come across in the world of fiction, as if no writer’s imagination could possibly capture the absolute lowest and worst aspects of humanity.
 
None of us really know how we would react in those conditions. Sometimes in dreams we may get a hint. I’ve sometimes considered dreams as experiments that our minds play on us to test our moral fortitude. I know from my father’s experiences in WW2, both in the theatre of war and as a POW, that one’s moral compass can be bent out of shape. He told me of how he once threatened to kill someone who was stealing from wounded who were under his care. The fact that the person he threatened was English and the wounded were Arabs says a lot, as my father held the same racial prejudices as most of his generation. But I suspect he’d witnessed so much unnecessary death and destruction on such a massive scale that the life of a petty, opportunistic thief seemed worthless indeed. When he returned, he had a recurring dream where there was someone outside the house and he feared to confront them. And then on one occasion he did and killed them barehanded. His telling of this tale (when I was much older, of course) reminded me of Luke Skywalker meeting and killing his Jungian shadow in The Empire Strikes Back. My father could be a fearsome presence in those early years of my life – he had demons and they affected us all.
 
Another one of my tangents, but Gaita’s ruminations on the worst of humanity perpetrated by a nation with a rich and rightly exalted history makes one realise that we should not take anything for granted. I’ve long believed that anyone can commit evil given the right circumstances. We all live under this thin veneer that only exists because we mostly have everything we need and are generally surrounded by people who have no real axe to grind and who don’t see our existence as a threat to their own wellbeing.
 
I recently saw the movie, Civil War, starring Kirsten Dunst, who plays a journalist covering a hypothetical conflict in America, consequential to an authoritarian government taking control of the White House. The aspect that I found most believable was how the rule of law no longer seemed to apply, and people had become completely tribal whereupon one’s neighbour could become one’s enemy. I’ve seen documentaries on conflicts in Rwanda and the former Yugoslavia where this has happened – neighbours become mortal enemies, virtually overnight, because they suddenly find themselves on opposite sides of a tribal divide. I found the movie quite scary because it showed what happens when the veneer of civility we take for granted is not just lifted, but disappears.
 
On the first page of his essay on Trump, Gaita sets the tone and the context that resulted in Brexit on one side of the Atlantic and Trump’s Republican nomination on the other.
 
Before Donald Trump became the Republican nominee, Brexit forced many among the left-liberal intelligentsia to ask why they had not realised that resentment, anger and even hatred could go so deep as they did in parts of the electorate.

 
I think the root cause of all these dissatisfactions and resentments that lead to political upheavals no one sees coming is entrenched inequality. I remember my father telling me when I was a child that the conflict in Ireland wasn’t between 2 religious groups but about wealth and inequality. I suspect he was right, even though it seems equally simplistic.
 
Underlying all these divisions we’ve seen, including in Australia, is the perception that people living in rural areas are being left out of the political process and not getting their fair share of representation, and consequently everything else that follows from that, which results in what might be called a falling ‘standard of living’. The fallout from the GFC, which was global, exacerbated these differences, both perceived and real, and conservative politicians took advantage. They depicted the Left as ‘elitist’, which is alluded to in the title of Gaita’s essay, and is ‘code’ for ignorant and arrogant. This happened in Australia and I suspect in other Western democracies as well, like the UK and America.
 
Gaita expresses better than most how Trump has changed politics in America, if nowhere else, by going outside the bounds of normally accepted behaviour for a world leader. In effect, he’s changed the social norms that one associates with a person holding that position.
 
To illustrate my point, I’ll provide selected quotes, albeit out of context.
 
To call Trump a radically unconventional politician is like calling the mafia unconventional debt collectors; it is to fail to understand how important are the conventions, often unspoken, that enables decency in politics. Trump has poured a can of excrement over those conventions.
 
He has this to say about Trump’s ‘alternative facts’, espoused not only by him but by his most loyal followers.

In linking reiterated accusations of fake news to elites, Trump and his accomplices intended to undermine the conceptual and epistemic space that makes conversations between citizens possible.
 
It is hardly possible to exaggerate the seriousness of this. The most powerful democracy on Earth, the nation that considers itself and is often considered by others to be the leader of ‘the free world’, has a president who attacks unrelentingly the conversational space that can exist only because it is based on a common understanding – the space in which citizens can confidently ask one another what facts support their opinions. If they can’t ask that of one another, if they can’t agree on when something counts as having been established as fact, then the value of democracy is diminished.
 
He then goes on to cite Hillbilly Elegy by J.D. Vance (recently nominated as Trump’s running mate), where ‘he tells us… that Obama is not an American, that he was “born in some far-flung corner of the world”, that he has ties to Islamic extremism…’ and much worse.
 
Regarding some of Trump’s worst excesses during his 2016 campaign, like getting the crowd to shout “Lock her up!” (referring to his political opponent at the time), Gaita makes this point:
 
At the time, a CNN reporter said that his opponents did not take him seriously, but they did take him literally, whereas his supporters took him seriously but not literally. It was repeated many times… he would be reined in by the Republicans in the House and the Senate and by trusted institutions. [But] He hasn’t changed in office.
 
It’s worth contemplating what this means if he wins Office again in 2024. He’s made it quite clear he’s out for revenge, and he’s also now been given effective immunity from prosecution by the Supreme Court if he seeks revenge through the Justice Department while he’s in Office. There is also the infamous Project 2025 which has the totally unhidden agenda to get rid of the so-called ‘deep state’ and replace public servants with Trump acolytes, not unlike a dictatorship. Did I just use that word?
 
Trump has achieved something I’ve never witnessed before, which Gaita doesn’t mention, though I have the benefit of an additional 7 years’ hindsight. What I’m referring to is that Trump has created an alternative universe, and from the commentary I’ve read on forums like Quora and elsewhere, you either live in one universe or the other – it’s impossible to claim you inhabit both. In other words, Trump has created an unbridgeable divide, which can’t be reconciled politically or intellectually. In one universe, Biden stole the 2020 POTUS election from Trump, and in another universe, Trump attempted to overturn the election and failed.
 
This is the depth of division that Trump has created in his country, and you have to ask: How far will people go to defend their version of the truth?
 
It was less than a century ago that fascism threatened the entire world order and created the most extensive conflict witnessed by humankind. I don’t think it’s an exaggeration to say that we are on the potential brink of creating a new brand of authoritarianism in the country epitomised by the slogan, ‘the free world’.

22 July 2024

Zen and the art of flow

This was triggered by a newsletter I received from ABC Classic (Australian radio station) with a link to a study done on ‘flow’, a term coined by psychologist Mihaly Csikszentmihalyi to describe a specific psychological experience that many (if not all) people have had when totally immersed in some activity that they not only enjoy but have developed some expertise in.
 
The study was performed by Dr John Kounios from Drexel University's Creativity Research Lab in Philadelphia, who “examined the 'neural and psychological correlates of flow' in a sample of jazz guitarists.” The article was authored by Jennifer Mills from ABC Classic’s sister station, ABC Jazz. But the experience of ‘flow’ doesn’t just apply to mental or artistic activities; it also applies to sporting activities like playing tennis or cricket. Mills heads her article with the claim that ‘New research helps unlock the secrets of flow, an important tool for creative and problem solving tasks’. She quotes Csikszentmihalyi to provide a working definition:
 
"A state in which people are so involved in an activity that nothing else seems to matter; the experience is so enjoyable that people will continue to do it even at great cost, for the sheer sake of doing it."
 
I believe I’ve experienced ‘flow’ in 2 quite disparate activities: writing fiction and driving a car. Just to clarify, some people think that experiencing flow while driving means that you daydream, whereas I’m talking about the exact opposite. I hardly ever daydream while driving, and if I find myself doing it, I bring myself back to the moment. Of course, cars are designed these days to insulate you from the experience of driving as much as possible, as we evolve towards self-driving cars. Thankfully, there are still cars available that are designed to involve you in the experience and not remove you from it.
 
I was struck by the fact that the study used jazz musicians, as I’ve often compared the ability to play jazz with the ability to write dialogue (even though I’m not a musician). They both require extemporisation. The article references Nat Bartsch, whom I’ve seen perform live and whose music is an unusual style of jazz in that it can be very contemplative. I saw her perform one of her albums with her quartet, augmented with a cello, which made it a one-off, unique performance. (This is a different concert performed in Sydney without the cellist.)
 
The study emphasised the point that the more experienced practitioners taking part were the ones more likely to experience ‘flow’. In other words, to experience ‘flow’ you need to reach a certain skill-level. In emphasising this point, the author quotes jazz legend, Charlie Parker:
 
"You've got to learn your instrument. Then, you practise, practise, practise. And then, when you finally get up there on the bandstand, forget all that and just wail."
 
I can totally identify with this, as when I started writing it was complete crap, to the extent that I wouldn’t show it to anyone. For some irrational reason, I had the self-belief – some might say, arrogance – that, with enough perseverance and practice, I could break through to the required skill-level. In fact, I now create characters and write dialogue with little conscious effort – it’s become a ‘delegated’ task, so I can concentrate on the more complex tasks of resolving plot points, developing moral dilemmas and formulating plot twists. Notice that these require a completely different set of skills that also had to be learned from scratch. But all this can come together, often in unexpected and surprising ways, when one is in the mental state of ‘flow’. I’ve described this as a feeling like you’re an observer, not the progenitor, so the process occurs as if you’re a medium and you just have to trust it.
 
Dr. Steffan Herff, leader of the Sydney Music, Mind and Body Lab at Sydney University, makes a point that supports this experience:
 
"One component that makes flow so interesting from a cognitive neuroscience and psychology perspective, is that it comes with a 'loss of self-consciousness'."
 
And this allows me to segue into Zen Buddhism. Many years ago, I read an excellent book by Daisetz Suzuki titled, Zen and Japanese Culture, where he traces the evolutionary development of Zen, starting with Buddhism in India, then being adopted in China, where it was influenced by Taoism, before reaching Japan, where it was assimilated into a sister-religion (for want of a better term) with Shintoism, which is an animistic religion.
 
Suzuki describes Zen as going inward rather than outward, while acknowledging that the two can’t be disconnected. But I think it’s the loss of ‘self’ that makes it relevant to the experience of flow. When Suzuki described the way Zen is practiced in Japan, he talked about being in the moment, whatever the activity, and for me, this is an ideal that we rarely attain. It was only much later that I realised that this is synonymous with flow as described by Csikszentmihalyi and currently being examined in the studies referenced above.
 
I’ve only once before written a post on Zen (not counting a post on Buddhism and Christianity), which arose from reading Douglas Hofstadter’s seminal tome, Godel Escher Bach (which is not about Zen, although it gets a mention), and it’s worth quoting this summation from myself:
 
My own take on this is that one’s ego is not involved yet one feels totally engaged. It requires one to be completely in the moment, and what I’ve found in this situation is that time disappears. Sportsmen call it being ‘in the zone’ and it’s something that most of us have experienced at some time or another.

05 July 2024

The universal quest for meaning

I’ve already cited Philosophy Now (Issue 162, June/July 2024) in my last 2 posts and I’m about to do it again. Every issue has a theme, and this one is called ‘The Meaning Issue’, so it’s no surprise that 2 of the articles reference Viktor Frankl’s seminal book, Man’s Search for Meaning. I’ve said that it’s probably the only book I’ve read that I think everyone should read.
 
For those who don’t know, Frankl was an Auschwitz survivor and a ‘logotherapist’, a term he coined to describe his own version of existential psychological therapy. Basically, Frankl saw purpose as being the unrecognised essence of our existence, and its lack as a source of mental issues like depression, neuroticism and stress. I’ve written about the importance of purpose previously, so I might repeat myself.
 
One of the articles (by Georgia Arkell) compares Frankl’s ideas on existentialism with Sartre’s, and finds Frankl more optimistic. I know that I’m taking a famous line out of context, but I feel it sums up their differences. Sartre famously said, ‘Hell is other people’, but Frankl lived through hell, and would no doubt, have strongly disagreed. Frankl argued that we can find meaning even under the most extreme circumstances, and he should know.
 
To quote from Arkell’s article:
 
Frankl noted that the prisoners who appeared to have the highest chance of survival were those with some aim or meaning directed beyond themselves and beyond day to day survival.
 

Then there is this, in Frankl’s own words (from Man’s Search for Meaning):
 
…it becomes clear that the sort of person the prisoner became was the result of an inner-decision and not the result of camp influences alone. Fundamentally then, any man can, under such circumstances, decide what shall become of him – mentally and spiritually.

 
I should point out that my own father spent 2.5 years as a POW in Germany, though it wasn’t a death camp, even though, according to his own testimony, it was only Red Cross food parcels that kept him alive. He rarely talked about it, as he was a firm believer that you couldn’t make the experience of war, in all its manifestations, comprehensible to anyone who hadn’t experienced it. But in light of Frankl’s words, I wonder now how my father did find meaning. There is one aspect of his experience that might shed some light on that – he escaped no less than 3 times.
 
My father was very principled, some might say, to a fault. He volunteered to stay and look after the wounded when they were ordered to evacuate Crete, because, as he said, it was his job (he was an ambulance officer in the Field Ambulance Corps). That action probably later saved his life, but that’s another story. Also on Crete, while trying to escape with another prisoner with the help of a local woman (it was always the women who did this, according to my father), they were discovered by a German whilst hiding. My father gave himself up so the other 2 could escape. The Australian escapee made it back home and was able to tell my grandmother that her son was still alive (she only knew he was missing in action). But the 3 attempts I mentioned all happened after he was taken to Germany, and on one occasion the Commandant asked him why he escaped. My father answered matter-of-factly, ‘It’s my job’. Apparently, due to his sincerity (not for being a smart-arse), the Commandant chose not to punish him.
 
So, I think my father survived because he stuck to some core values and principles that became his own rock and anchor. His attempts to escape are manifestations of his personal affirmation that he never lost hope.
 
Frankl understood better than most, because of his lived experience, the importance of hope to a person’s survival. As an aside, our (Australian) government has a very deliberate policy of eliminating all hope for asylum seekers who arrive by sea. I think it’s so iniquitous, it should be a recognised crime – it goes to the heart of human rights. Slightly off-topic, but very relevant.
 
Loss of hope is something I’ve explored in my own fiction, where we witness its loss like a ball of tightly wound string slowly unravelling (not the metaphor I used in the book), as a key character is abandoned on a distant world (it’s sci-fi, for those who don’t know). I’ve been told by at least one reader that it’s the most impactful section in the book. True story: I was once sitting next to someone on a bus who was up to that part of the book, and as he got up to leave, he said, ‘If she dies, I’ll never speak to you again.’
 
See how easily I get side-tracked - my mind goes off on tangents – I can’t help myself. I’m the same in conversations.
 
Back to the topic: the other article in Philosophy Now that references Frankl, Finding Meaning in Suffering, by Patrick Testa (a psychiatric clinician with a BA in philosophy and political science) also quotes from Man’s Search for Meaning:
 
There are some authors who contend that meanings and values are nothing but defense mechanisms or reaction formulations…  But for myself, I would not be willing to live merely for the sake of my defense mechanisms, nor would I be ready to die merely for the sake of my reaction formulations.
(Emphasis in Testa’s quote)
 
This quote was the original trigger for this essay, as it leads me to consider the role of identity. I’ve long argued that identity is what someone is willing to die for (which Frankl specifically mentions), therefore willing to kill for. Identity is strongly related to ‘meaning’ for most people, albeit at a subconscious level. For some people, their identity is their profession, for others it’s their heritage, and for many it’s their political affiliation. The point about identity is that it both binds us and divides us.
 
But if you were to ask someone what their identity is, they might well struggle to answer – I know I do – but if it appears to be threatened, even erroneously, they will become combative. Speaking for myself, I struggled to find meaning for a large portion of my life, seeking it in relationships that were more fantasy than realistic. I think I only found meaning (or purpose) when I was able to channel my artistic drives and also express my intellectual meanderings like I’m doing on this blog. So that axiomatically becomes my identity. I’ve written more than once about the importance of freedom, by which I mean the freedom to express one’s thoughts and any artistic urges. Even in my profession (which is in engineering), I found I was best when left to my own devices, and suffered most when someone tried to put me in a box and confine me to their way of thinking.
 
I can’t imagine living in a society where that particular freedom is curtailed, yet they exist. I would argue that a society where its participants can’t flourish would stagnate and not progress in any way, except possibly in a strictly material sense. We’ve seen that in totalitarian regimes all over the world.
 
Lastly, one can’t leave this topic without talking about religion. In fact, I imagine that many, on reading the title, would have expected that would be the starting point. I’ll provide a reference at the end, but very early on in the life of this blog, I wrote a post called Hope, which was really a response to a somewhat facile argument by William Lane Craig that atheists can’t possibly have hope. I don’t think I can improve on that argument here, but it also ties into the topic of identity that I just referred to.
 
Apart from identity, which is usually cultural, there is the universal regard for human suffering. As pointed out in the articles I cited, suffering is an unavoidable aspect of life. Buddhist philosophy makes this its starting point – it’s the first of the Four Noble Truths, from which the other 3 stem. I expect a lot of religions have arisen as a means to psychologically ‘explain’ the purpose of suffering. It’s also a feature of virtually all fiction, without a religious argument in sight.
 
But it’s also a key feature of Frankl’s philosophy. Arguably, without suffering, we can’t find meaning. I’ve argued previously that we don’t find wisdom through learning and achievements, but through dealing with adversity – it’s even a specific teaching in the I Ching, albeit expressed in different words:
 
Adversity is the opposite of success, but it can lead to success if it befalls the right person.
 
I expect many of us can identify with that. Meaning can be found in the darkest of psychological places, yet without it, we wouldn’t keep going.
 
 
Other posts relevant to this topic: Homage to my Old Man; Hope; The importance of purpose; Freedom, justice, happiness and truth; Freedom, a moral imperative.
 

29 June 2024

Feeling is fundamental

 I’m not sure I’ve ever had an original idea, but I sometimes raise one that no one else seems to talk about. And this is one of them: I contend that the primary, essential attribute of consciousness is to be able to feel, and the ability to comprehend is a secondary attribute.
 
I don’t even mind if this contentious idea triggers debate, but we tend to always discuss consciousness in the context of human consciousness, where we metaphorically talk about making decisions based on the ‘head’ or the ‘heart’. I’m unsure of the origin of this dichotomy, but there is an inference that our emotional and rational ‘centres’ (for want of a better word) have different loci (effectively, different locations). No one believes that, of course, but possibly people once did. The thing is that we are all aware that sometimes our emotional self and rational self can be in conflict. This is already going down a path I didn’t intend, so I may return at a later point.
 
There is some debate about whether insects have consciousness, but I believe they do because they demonstrate behaviours associated with fear and desire, be it for sustenance or company. In other respects, I think they behave like automatons. Colonies of ants and bees can build a nest without a blueprint except the one that apparently exists in their DNA. Spiders build webs and birds build nests, but they don’t do it the way we would – it’s all done organically, as if they have a model in their brain that they can follow; we actually don’t know.
 
So I think the original role of consciousness in evolutionary terms was to feel, concordant with abilities to act on those feelings. I don’t believe plants can feel; besides, they’d have very limited ability to act on feelings even if they had them. They can communicate chemically, and generally rely on the animal kingdom to propagate, which is why a global threat to bee populations is very serious indeed.
 
So, in evolutionary terms, I think feeling came before cognitive abilities – a point I’ve made before. It’s one of the reasons that I think AI will never be sentient – a viewpoint not shared by most scientists and philosophers, from what I’ve read.  AI is all about cognitive abilities; specifically, the ability to acquire knowledge and then deploy it to solve problems. Some argue that by programming biases into the AI, we will be simulating emotions. I’ve explored this notion in my own sci-fi, where I’ve added so-called ‘attachment programming’ to an AI to simulate loyalty. This is fiction, remember, but it seems plausible.
 
Psychological studies have revealed that we need an emotive component to behave rationally, which seems counter-intuitive. But would we really prefer it if everyone was a zombie or a psychopath, with no ability to empathise or show compassion? We see enough of this already. As I’ve pointed out before, in any ingroup-outgroup scenario, totally rational individuals can become totally irrational. We’ve all observed this, and possibly actively participated.
 
An oft-made point (by me) that I feel is not given enough consideration is the fact that without consciousness, the universe might as well not exist. I agree with Paul Davies (who does espouse something similar) that the universe’s ability to be self-aware would seem to be a necessary condition for its existence (my wording, not his). I recently read a stimulating essay in the latest edition of Philosophy Now (Issue 162, June/July 2024) titled, enigmatically, Significance, by Ruben David Azevedo, a ‘Portuguese philosophy and social sciences teacher’. His self-described intent is to ‘Tell us why, in a limitless universe, we’re not insignificant’. In fact, that was the trigger for this post. He makes the point (that I’ve made elsewhere myself) that in both time and space we couldn’t be more insignificant, which leads many scientists and philosophers to see us as a freakish by-product of an otherwise purposeless universe – a perspective that Davies has coined ‘the absurd universe’. In light of this, it’s worth reading Azevedo’s conclusion:
 
In sum, humans are neither insignificant nor negligible in this mind-blowing universe. No living being is. Our smallness and apparent peripherality are far from being measures of our insignificance. Instead, it may well be the case that we represent the apex of cosmic evolution, for we have this absolute evident and at the same time mysterious ability called consciousness to know both ourselves and the universe.
 
I’m not averse to the idea that there is a cosmic role for consciousness. I like John Wheeler’s obvious yet pertinent observation:
 
The Universe gave rise to consciousness, and consciousness gives meaning to the Universe.

 
And this is my point: without consciousness, the Universe would have no meaning. And getting back to the title of this essay, we give the Universe feeling. In fact, I’d say that the ability to feel is more significant than the ability to know or comprehend.
 
Think about the role of art in all its manifestations, and how it’s totally dependent on the ability to feel. In some respects, I consider AI-generated art a perversion, because any feeling we have for its products is of our own making, not the AI’s.
 
I’m one of those weird people who can even find beauty in mathematics, while acknowledging only a limited ability to pursue it. It’s extraordinary that I can find beauty in a symphony, or a well-written story, or the relationship between prime numbers and Riemann’s Zeta function.
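For anyone curious, the relationship I’m alluding to can at least be glimpsed numerically (a minimal sketch of my own, not a proof): Euler showed that summing 1/n² over all the integers gives the same answer as multiplying 1/(1−p⁻²) over all the primes – the first hint that the primes are secretly encoded in what became Riemann’s Zeta function.

```python
import math

def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

# zeta(2) as a sum over the integers...
zeta_sum = sum(1 / n**2 for n in range(1, 100_001))

# ...and as Euler's product over the primes
zeta_prod = 1.0
for p in primes_up_to(100_000):
    zeta_prod *= 1 / (1 - p**-2)

print(zeta_sum, zeta_prod, math.pi**2 / 6)  # all three agree closely
```

Both truncated calculations converge on π²/6, even though one ranges over all integers and the other only over primes.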


Addendum: I realised I can’t leave this topic without briefly discussing the biochemical role in emotional responses and behaviours. I’m thinking of the brain’s drugs-of-choice like serotonin, dopamine, oxytocin and endorphins. Some may argue that these natural ‘messengers’ are all that’s required to explain emotions. However, there are other drugs, like alcohol and caffeine (arguably the most common) that also affect us emotionally, sometimes to our detriment. My point being that the former are nature’s target-specific mechanisms to influence the way we feel, without actually being the genesis of feelings per se.

19 June 2024

Daniel C Dennett (28 March 1942 - 19 April 2024)

 I only learned about Dennett’s passing in the latest issue of Philosophy Now (Issue 162, June/July 2024), where Daniel Hutto (Professor of Philosophical Psychology at the University of Wollongong) wrote a 3-page obituary. Not that long ago, I watched an interview with him, following the publication of his last book, I’ve Been Thinking, which, from what I gathered, is basically a memoir, as well as an insight into his philosophical musings. (I haven’t read it, but that’s the impression I got from the interview.)
 
I should point out that I have fundamental philosophical differences with Dennett, but he’s not someone you can ignore. I must confess I’ve only read one of his books (decades ago), Freedom Evolves (2003), though I’ve read enough of his interviews and commentary to be familiar with his fundamental philosophical views. It’s something of a failing on my part that I haven’t read his most famous tome, Consciousness Explained (1991). Paul Davies once nominated it among his top 5 books, along with Douglas Hofstadter’s Godel, Escher, Bach. But then he gave a tongue-in-cheek compliment by quipping, ‘Some have said that he explained consciousness away.’
 
Speaking of Hofstadter, he and Dennett co-published a book, The Mind’s I, which is really a collection of essays by different authors, upon which Dennett and Hofstadter commented. I wrote a short review covering only a small selection of said essays on this blog back in 2009.
 
Dennett wasn’t afraid to tackle the big philosophical issues, in particular anything relating to consciousness. He was unusual for a philosopher in that he took more than a passing interest in science, and appreciated the dialogue that naturally arises between the 2 disciplines, while many others (on both sides) emphasise the tension that seems to arise and often morphs into antagonism.
 
What I found illuminating in one of his YouTube videos was how Dennett’s views of the world hadn’t really changed that much over time (mind you, neither have mine), and it got me thinking that it reinforces an idea I’ve long held, but which was articulated by Nietzsche: that our original impulses are intuitive or emotive, and we then rationalise them with argument. I can’t help but feel that this is what Dennett did, though he did it extremely well.
 
I like the quote at the head of Hutto’s obituary: “The secret of happiness is: Find something more important than you are and dedicate your life to it.”

 


15 June 2024

The negative side of positive thinking

 This was a topic in last week’s New Scientist (8 June 2024) under the heading, The Happiness Trap, an article written by Conor Feehly, a freelance journalist based in Bangkok. Basically, he talks about the plethora of ‘self-help’ books and, in particular, the ‘emergence of the positive psychology movement in 1998’. I was surprised he could provide a year, when one would tend to think it was a generational transition. At least, that’s my experience.
 
He then discusses the backlash (my term, not his) that’s occurred since, and mentions a study, ‘published in 2022, [by] an international group of psychologists exploring how societal pressure to be happy affects people in 40 countries’ (my emphasis). He cites Brock Bastian at the University of Melbourne, who was part of the study, “When we are not willing to accept negative emotions as a part of life, this can mean that we may see negative emotions as a sign there is something wrong with us.” And this gets to the nub of the issue.
 
I can’t help but think that there is a generational effect, if not a divide. I see myself as being in between, generationally speaking. My parents lived through the Great Depression and WW2, so they experienced enough negative emotion for all of us. Growing up in rural NSW, we didn’t have much but neither did anyone else, so we didn’t think that was exceptional. There was a lot of negative emotion in our lives as a consequence of the trauma that my Dad experienced as both a wartime serviceman and a prisoner-of-war. It was only much later, as an adult, that I realised this was not the norm. Back then, PTSD wasn’t a term.
 
One of the things that struck me in Feehly’s article was the idea of ‘acceptance’. To quote:
 
Research shows that when people accepted their negative emotions – rather than judging mental experience as good or bad – they become more emotionally resilient, experiencing fewer negative feelings in response to environmental stressors and attaining a greater sense of well-being.
 
He also says in the same context:
 
The good news is that, as we age, we increasingly rely on acceptance – which might help to explain why older people tend to report better emotional well-being.

 
As one of that cohort (older people), I can identify with that sentiment. Acceptance is a multi-faceted word, because one of the unexpected benefits of getting older is that we learn to accept ourselves, becoming less critical and judgemental, and hopefully extending that to others.
 
In our youth, acceptance by one’s peers is a prime driver of self-esteem and associated behaviours, and social media has to a large extent hijacked that impulse, which was also highlighted by Brock Bastian (cited above).
 
I’ve got side-tracked, because all of this is the antithesis of the so-called ‘positive psychology movement’ – possibly because I think my generation largely avoided that trap. We are more likely to see that a ‘think positive’ attitude in the face of all of life’s dilemmas and problems is a delusion. What’s obvious is that negative emotional states have evolutionary value, because they have ancient roots. The other point that’s obvious to me is that we are all addicted to stories, where we vicariously experience negative emotions on a regular basis. In fact, a story that only contained positive emotions would never be read, or watched.
 
What has always been obvious to me, and which I’ve written about before, including in the very early history of this blog, is that we need adversity to gain wisdom. As I keep saying, it’s the theme of virtually every story ever told. When I look back on my early adult years and how insurmountable my problems seemed, my older self is so grateful I persevered. There is a hypothetical often raised: what advice would you give your younger self? I’d just say, ‘Hang in there, it gets better.’

09 June 2024

More on radical ideas

 As you can tell from the title, this post carries on from the last one, because I got a bit bogged down on one issue, when I really wanted to discuss more. One of the things that prompted me was watching a 1-hour presentation by cosmologist Claudia de Rham, whom I’ve mentioned before, when I had the pleasure of listening to an on-line lecture she gave, care of New Scientist, during the COVID lockdown.
 
Claudia’s particular field of study is gravity, and, by her own admission, she has a ‘crazy idea’. Now here’s the thing: I meet a lot of people on Quora and in the blogosphere who, like me, live (in a virtual sense) on the fringes of knowledge, rather than as academic or professional participants. And what I find is that they often have an almost zealous confidence in their ideas. To give one example, I recently came across someone who argued quite adamantly that the Universe is static, not expanding, and has even written a book on the subject. This is contrary to virtually everyone else I’m aware of who works in the field of cosmology and astrophysics. And I can’t help but compare this to Claudia de Rham, who is well aware that her idea is ‘crazy’, even though she’s fully qualified to argue it.
 
In other words, it’s a case of the more you know about a subject, the less you claim to know, because experts are more aware of their limitations than non-experts. I should point out, in case you didn’t already know, I’m not one of the experts.
 
Specifically, Claudia’s crazy idea is that not only do gravitational waves exist, but so do gravitons, and that gravitons have an extremely tiny mass, which would alter the effect of gravity at very long range. I should say that at present the evidence is against her, because if she’s right, gravitational waves would travel not at the speed of light, as predicted by Einstein, but ever-so-slightly slower.
 
Freeman Dyson, by the way, argued that if gravitons do exist, they would be impossible to detect; but if Claudia is right, then they would be detectable.
 
In her talk, Claudia also discusses the vacuum energy, which, according to particle physics, should be 28 orders of magnitude greater than the relativistic effect of ‘dark energy’. She calls it ‘the biggest discrepancy in the entire history of science’. This suggests that there is something rotten in the state of theoretical physics, along with the fact that what we can physically observe only accounts for 5% of the Universe.
 
It should be pointed out that at the end of the 19th Century no one saw or predicted the 2 revolutions in physics that were just around the corner – relativity theory and quantum mechanics. They were examples of what Thomas Kuhn called paradigm shifts in The Structure of Scientific Revolutions (the title of his book expounding on this). And I’d suggest that these current empirical aberrations in cosmology are harbingers of the next Kuhnian revolution.
 
Roger Penrose, whom I’ve referenced a number of times on this blog, is someone else with some ‘crazy’ ideas compared to the status quo, for which I admire him even if I don’t agree with him. One of Penrose’s hobby horses is his own particular inference from Godel’s Incompleteness Theorem, which he learned as a graduate (under Steen, at Cambridge) and which he discusses in this video. He argues that it provides evidence that humans don’t think like computers. If one takes the example of Riemann’s Hypothesis (really a conjecture), we know that a computer can’t tell us whether it’s true or not (my example, not Penrose’s).* However, most mathematicians believe it is true, and it would be an enormous shock if it were proven untrue, or a counter-example was found by a computer. The same was once said of other conjectures that have since been proven true, like Fermat’s Last Theorem and Poincare’s conjecture. Penrose’s point, if I understand him correctly, is that it takes a human mind, and not a computer, to make this leap into the unknown and grasp a ‘truth’ out of the aether.
 
Anyone who has engaged in some artistic endeavour can identify with this, even if it’s not mathematical truths they are seeking but the key to unravelling a plot in a story.
 
Penrose makes the point in the video that he’s a ‘visual’ person, which he thinks is unusual in his field. Penrose is an excellent artist, by the way, and does all his own graphics. This is something else I can identify with, as I was quite precocious at drawing as a very young child (I could draw in perspective, though no one taught me), even though it never went anywhere.
 
Finally, some crazy ideas of my own. I’ve pointed out on other posts that I have a predilection (for want of a better term) for Kant’s philosophical proposition that we can never know the ‘thing-in-itself’ but only a perception of it.
 
With this in mind, I contend that this philosophical premise not only applies to what we can physically detect via instruments, but what we theoretically infer from the mathematics we use to explore nature. As heretical an idea as it may seem, I argue that mathematics is yet another 'instrument' we use to probe the secrets of the Universe. Quantum mechanics and relativity theory being the most obvious.
 
As I’ve tried to expound on other posts, relativity theory is observer-dependent, in as much as different observers will both measure and calculate different values of time and space, dependent on their specific frame of reference. I believe this is a pertinent example of Kant’s proposition that the thing-in-itself escapes our perception. In particular, physicists (including Penrose) will tell you that events that are ostensibly simultaneous to us (in a galaxy far, far away) will be perceived as both past and future by 2 observers who are simply crossing a street in opposite directions. I’ve written about this elsewhere as ‘the impossible thought experiment’.
 
The fact is that relativity theory rules out the event being observed at all. In other words, simultaneous events can’t be observed (according to relativity). For this reason, virtually all physicists will tell you that simultaneity is an illusion – there is no universal now.
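To put a number on the street-crossing scenario (my illustration, using the standard Lorentz transformation, a walking pace of 1.5 m/s and the roughly 2.5 million light-year distance to the Andromeda galaxy): an observer moving at velocity v assigns a distant event at distance x the time t′ = γ(t − vx/c²), so at galactic distances even walking pace shifts ‘now’ by days.

```python
import math

C = 299_792_458.0              # speed of light, m/s
LIGHT_YEAR = C * 365.25 * 24 * 3600  # metres in a Julian light-year

def simultaneity_shift(v, x):
    # time coordinate a moving observer assigns to a distant event
    # that is happening 'now' (t = 0) for the stationary observer
    gamma = 1 / math.sqrt(1 - (v / C) ** 2)
    return gamma * (0 - v * x / C**2)

# two pedestrians crossing the street at ~1.5 m/s, considering an event
# about 2.5 million light-years away
x = 2.5e6 * LIGHT_YEAR
shift_days = simultaneity_shift(1.5, x) / 86400
print(abs(shift_days) * 2)  # the two walkers' 'nows' differ by roughly nine days
```

One walker’s ‘now’ in Andromeda lies days in the other walker’s ‘past’, even though neither can observe the event at all.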
 
But here’s the thing: if there is an edge in either space or time, it can only be observed from outside the Universe. Relativity theory, logically enough, can only tell us what we can observe from within the Universe.
 
But to extend this crazy idea: what’s stopping the Universe existing within a higher dimension that we can’t perceive? Imagine being a fish: you spend your entire existence in a massive body of water, which is your entire universe. But then one day you are plucked out of that environment and suddenly become aware that there is another, even bigger universe that exists right alongside yours.
 
There is a tendency for us to think that everything that exists we can learn and know about – it’s what separates us from every other living thing on the planet. But perhaps there are other dimensions, or even worlds, that lie forever beyond our comprehension.


*Footnote: Actually, Penrose, in his book The Emperor’s New Mind, discusses this in depth and at length over a number of chapters. He makes the point that Turing’s ‘proof’ that it’s impossible to predict whether a machine attempting to compute all the Riemann zeros (for example) will stop, is a practical demonstration of the difference between ‘truth’ and ‘proof’ (as Godel’s Incompleteness Theorem tells us). Quite simply, if the conjecture is true, the computer will never stop, so it can never be proven algorithmically. It can only be proven (or disproven) if one goes ‘outside the [current] rules’, to use Penrose’s own nomenclature.
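A toy version of this point (my example, using the simpler Goldbach conjecture rather than the Riemann zeros): a program that searches for a counter-example halts only if the conjecture is false, so running it can verify instances forever without ever constituting a proof.

```python
def is_prime(n):
    # trial division; adequate for a toy demonstration
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_holds(n):
    # is this even number the sum of two primes?
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_counterexample(limit):
    # halts with a counter-example if one exists below the limit;
    # if the conjecture is true, no limit is ever large enough to 'prove' it
    for n in range(4, limit, 2):
        if not goldbach_holds(n):
            return n
    return None

print(search_counterexample(10_000))  # None: no counter-example below 10,000
```

Remove the limit and the loop either halts (disproving the conjecture) or runs forever – and we can’t know which in advance, which is exactly the gap between truth and proof.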

02 June 2024

Radical ideas

 It’s hard to think of anyone I admire in physics and philosophy who doesn’t have at least one radical idea – even Richard Feynman, who avoided hyperbole and embraced doubt as part of his credo: "I’d rather have doubt and be uncertain, than be certain and wrong."
 
But then you have this quote from his good friend and collaborator, Freeman Dyson:

Thirty-one years ago, Dick Feynman told me about his ‘sum over histories’ version of quantum mechanics. ‘The electron does anything it likes’, he said. ‘It goes in any direction at any speed, forward and backward in time, however it likes, and then you add up the amplitudes and it gives you the wave-function.’ I said, ‘You’re crazy.’ But he wasn’t.
 
In fact, his crazy idea led him to a Nobel Prize. That exception aside, most radical ideas are either stillborn or yet to bear fruit, and that includes mine. No, I don’t compare myself to Feynman – I’m not even a physicist – and the truth is I’m unsure whether I even have an original idea to begin with, radical or otherwise. I just read a lot of books by people much smarter than me, and cobble together a philosophical approach that I hope is consistent, even if sometimes unconventional. My only consolation is that I’m not alone: most, if not all, of the people smarter than me also hold unconventional ideas.
 
Recently, I re-read Robert M. Pirsig’s iconoclastic book, Zen and the Art of Motorcycle Maintenance, which I originally read in the late 70s or early 80s, so within a decade of its publication (1974). It wasn’t how I remembered it, not that I remembered much at all, except it had a huge impact on a lot of people who would never normally read a book that was mostly about philosophy, albeit disguised as a road-trip. I think it keyed into a zeitgeist at the time, where people were questioning everything. You might say that was more the 60s than the 70s, but it was nearly all written in the late 60s, so yes, the same zeitgeist, for those of us who lived through it.
 
Its relevance to this post is that Pirsig had some radical ideas of his own – at least, radical to me and to virtually anyone with a science background. I’ll give you a flavour with some selective quotes. But first some context: the story’s protagonist, whom we assume is Pirsig himself, telling the story in first-person, is having a discussion with his fellow travellers, a husband and wife, who have their own motorcycle (Pirsig is travelling with his teenage son as pillion), so there are 2 motorcycles and 4 companions for at least part of the journey.
 
Pirsig refers to a time (in Western culture) when ghosts were considered a normal part of life. But then introduces his iconoclastic idea that we have our own ghosts.
 
Modern man has his own ghosts and spirits too, you know.
The laws of physics and logic… the number system… the principle of algebraic substitution. These are ghosts. We just believe in them so thoroughly they seem real.

 
Then he specifically cites the law of gravity, saying provocatively:
 
The law of gravity and gravity itself did not exist before Isaac Newton. No other conclusion makes sense.
And what that means, is that the law of gravity exists nowhere except in people’s heads! It’s a ghost! We are all of us very arrogant and conceited about running down other people’s ghosts but just as ignorant and barbaric and superstitious about our own.
Why does everybody believe in the law of gravity then?
Mass hypnosis. In a very orthodox form known as “education”.

 
He then goes from the specific to the general:
 
Laws of nature are human inventions, like ghosts. Laws of logic, of mathematics are also human inventions, like ghosts. The whole blessed thing is a human invention, including the idea it isn’t a human invention. (His emphasis)
 
And this is philosophy in action: someone challenges one of your deeply held beliefs, which forces you to defend it. Of course, I’ve argued the exact opposite, claiming that ‘in the beginning there was logic’. And it occurred to me right then, that this in itself, is a radical idea, and possibly one that no one else holds. So, one person’s radical idea can be the antithesis of someone else’s radical idea.
 
Then there is this, which I believe holds the key to our disparate points of view:
 
We believe the disembodied 'words' of Sir Isaac Newton were sitting in the middle of nowhere billions of years before he was born and that magically he discovered these words. They were always there, even when they applied to nothing. Gradually the world came into being and then they applied to it. In fact, those words themselves were what formed the world. (again, his emphasis)
 
Note his emphasis on 'words', as if they alone make some phenomenon physically manifest.
 
My response: don’t confuse or conflate the language one uses to describe some physical entity, phenomenon or manifestation with what it describes. The natural laws, including gravity, are mathematical in nature, obeying sometimes obtuse and esoteric mathematical relationships, which we have uncovered over eons of time; which doesn’t mean they only came into existence when we discovered them and created the language to describe them. Mathematical notation only exists in the mind, correct, including the number system we adopt, but the mathematical relationships that the notation describes exist independently of mind, in the same way that nature’s laws do.
 
John Barrow, cosmologist and Fellow of the Royal Society, made the following point about the mathematical ‘laws’ we formulated to describe the first moments of the Universe’s genesis (Pi in the Sky, 1992).
 
Specifically, he says our mathematical theories describing the first three minutes of the Universe predict specific ratios of the earliest ‘heavier’ elements: deuterium, 2 isotopes of helium, and lithium, which are 1/1,000, 1/1,000, 22% and 1/100,000,000 respectively; with the remainder (roughly 78%) being hydrogen. And this has been confirmed by astronomical observations. He then makes the following salient point:



It confirms that the mathematical notions that we employ here and now apply to the state of the Universe during the first three minutes of its expansion history at which time there existed no mathematicians… This offers strong support for the belief that the mathematical properties that are necessary to arrive at a detailed understanding of events during those first few minutes of the early Universe exist independently of the presence of minds to appreciate them.
 
As you can see this effectively repudiates Pirsig’s argument; but to be fair to Pirsig, Barrow wrote this almost 2 decades after Pirsig’s book.
 
In the same vein, Pirsig then goes on to discuss Poincare’s Foundations of Science (which I haven’t read), specifically talking about Euclid’s famous fifth postulate concerning parallel lines never meeting, and how it created problems because it couldn’t be derived from the more basic axioms, yet didn’t seem self-evident enough to function as an axiom itself. Euclid himself appears to have been uneasy about it, avoiding its use until he had proved his first 28 theorems.
 
It was only in the 19th Century, with the advent of Riemann’s and other non-Euclidean geometries on curved surfaces, that this was resolved. According to Pirsig, it led Poincare to question the very nature of axioms.
 
Are they synthetic a priori judgements, as Kant said? That is, do they exist as a fixed part of man’s consciousness, independently of experience and uncreated by experience? Poincare thought not…
Should we therefore conclude that the axioms of geometry are experimental verities? Poincare didn’t think that was so either…
Poincare concluded that the axioms of geometry are conventions, our choice among all possible conventions is guided by experimental facts, but it remains free and is limited only by the necessity of avoiding all contradiction.

 
I have my own view on this, but it’s worth seeing where Pirsig goes with it:
 
Then, having identified the nature of geometric axioms, [Poincare] turned to the question, Is Euclidean geometry true or is Riemann geometry true?
He answered, The question has no meaning.
[One might] as well ask whether the metric system is true and the avoirdupois system is false; whether Cartesian coordinates are true and polar coordinates are false. One geometry can not be more true than another; it can only be more convenient. Geometry is not true, it is advantageous.
 
I think this is a false analogy, because the adoption of a system of measurement (i.e. units) and even the adoption of which base arithmetic one uses (decimal, binary, hexadecimal being the most common) are all conventions.
 
So why wouldn’t I say the same about axioms? Pirsig and Poincare are right in as much as both Euclidean and Riemann geometry are true, because each applies to the topology one is describing. They are both used to describe physical phenomena. In fact, in a twist that Pirsig probably wasn’t aware of, Einstein used Riemann geometry to describe gravity in a way that Newton could never have envisaged, because Newton only had Euclidean geometry at his disposal. Einstein formulated a mathematical expression of gravity that is dependent on the geometry of spacetime, and it has been empirically verified to explain phenomena that Newton’s couldn’t. Of course, there are also limits to what Einstein’s equations can explain, so there are more mathematical laws still to uncover.
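The point that both geometries are ‘true’ of something can be made concrete with a small calculation (a sketch of my own): on the surface of a sphere, the angles of a triangle sum to more than 180 degrees, which is a fact about the surface, not a convention.

```python
import math

def angle_at(a, b, c):
    # angle at vertex a of the spherical triangle abc (unit vectors),
    # measured between the great circles a->b and a->c
    def tangent(p, q):
        # component of q orthogonal to p, normalised
        dot = sum(x * y for x, y in zip(p, q))
        t = [x - dot * y for x, y in zip(q, p)]
        norm = math.sqrt(sum(x * x for x in t))
        return [x / norm for x in t]
    u, v = tangent(a, b), tangent(a, c)
    cosang = sum(x * y for x, y in zip(u, v))
    return math.acos(max(-1.0, min(1.0, cosang)))

# vertices: the north pole and two equatorial points 90 degrees apart
A, B, C = [0, 0, 1], [1, 0, 0], [0, 1, 0]
angles = [angle_at(A, B, C), angle_at(B, C, A), angle_at(C, A, B)]
total = sum(math.degrees(x) for x in angles)
print(total)  # about 270 degrees; a flat triangle would give 180
```

Three right angles, summing to 270 degrees – Euclid’s geometry simply doesn’t describe this surface, while Riemann’s does.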
 
But where Pirsig states that we adopt the axiom that is convenient, I contend that we adopt the axiom that is necessary, because axioms inherently expand the area of mathematics we are investigating. This is a consequence of Godel’s Incompleteness Theorem, which states there are limits to what any axiom-based, consistent, formal system of mathematics can prove to be true. Godel himself pointed out that the resolution lies in expanding the system by adopting further axioms. The expansion of Euclidean to non-Euclidean geometry is a case in point. The example I like to give is the adoption of √-1 = i, which gave us complex algebra and the means to mathematically describe quantum mechanics. In both cases, the axioms allowed us to solve problems that had hitherto been impossible to solve. So it’s not just a convenience but a necessity.
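To illustrate why I single out √-1 = i (a minimal sketch of my own): quantum mechanics adds complex-valued amplitudes rather than probabilities, and the phase – which only complex numbers carry – is what produces interference.

```python
import cmath

def detection_probability(phase_difference):
    # two equally weighted paths, each contributing a complex amplitude
    a1 = 0.5
    a2 = 0.5 * cmath.exp(1j * phase_difference)
    # Born rule: probability is the squared magnitude of the total amplitude
    return abs(a1 + a2) ** 2

print(detection_probability(0.0))       # paths in phase: constructive, 1.0
print(detection_probability(cmath.pi))  # paths out of phase: destructive, ~0.0
```

Adding the probabilities directly would always give 0.5; only the complex phase reproduces the interference fringes we actually observe.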
 
I know I’ve belaboured a point, but both of these: non-Euclidean geometry and complex algebra; were at one time radical ideas in the mathematical world that ultimately led to radical ideas: general relativity and quantum mechanics; in the scientific world. Are they ghosts? Perhaps ghost is an apt metaphor, given that they appear timeless and have outlived their discoverers, not to mention the rest of us. Most physicists and mathematicians tacitly believe that they not only continue to exist beyond us, but existed prior to us, and possibly the Universe itself.
 
I will briefly mention another radical idea, which I borrowed from Schrodinger but drew conclusions that he didn’t formulate. That consciousness exists in a constant present, and hence creates the psychological experience of the flow of time, because everything else becomes the past as soon as it happens. I contend that only consciousness provides a reference point for past, present and future that we all take for granted.

19 May 2024

It all started with Euclid

 I’ve mentioned Euclid before, but this rumination was triggered by a post on Quora that someone wrote about Plato, where they argued, along with another contributor, that Plato is possibly overrated because he got a lot of things wrong, which is true. Nevertheless, as I’ve pointed out in other posts, his Academy was effectively the origin of Western philosophy, science and mathematics. It was actually based on the Pythagorean quadrivium of geometry, arithmetic, astronomy and music.
 
But Plato was also a student and devoted follower of Socrates and the mentor of Aristotle, who in turn mentored Alexander the Great. So Plato was a pivotal historical figure and without his writings, we probably wouldn’t know anything about Socrates. In the same way that, without Paul, we probably wouldn’t know anything about Jesus. (I’m sure a lot of people would find that debatable, but, if so, it’s a debate for another post.)
 
Anyway, I mentioned Euclid in my own comment (on Quora), who was the Librarian at Alexandria around 300 BC, and thus a product of Plato’s school of thought. Euclid wrote The Elements, which I contend is arguably the most important book written in the history of humankind – more important than any religious text, including the Bible, Homer’s Iliad and the Mahabharata – which, I admit, is quite a claim. It’s generally acknowledged as the most copied text in the secular world. In fact, according to Wikipedia:
 
It was one of the very earliest mathematical works to be printed after the invention of the printing press and has been estimated to be second only to the Bible in the number of editions published since the first printing in 1482.
 
Euclid was revolutionary in one very significant way: he was able to demonstrate what ‘truth’ was, using pure logic, albeit in a very abstract and narrow field of inquiry, which is mathematics.
 
Before then, and in other cultures, truth was transient and subjective and often prescribed by the gods. But Euclid changed all that, and forever. I find it extraordinary that I was examined on Euclid’s theorems in high school in the 20th Century.
 
And this mathematical insight has become, millennia later, a key ingredient (for want of a better term) in the hunt for truths in the physical world. In the 20th Century, in what has become known as the Golden Age of Physics, the marriage between mathematics and scientific inquiry at all scales, from the cosmic to the infinitesimal, uncovered deeply held secrets of nature that the Pythagoreans, and Euclid for that matter, could never have dreamed of. Look no further than quantum mechanics (QM) and the General Theory of Relativity (GR). Between them, these 2 iconic developments underpin every theory we currently have in physics, and both relied on mathematics that was pivotal in their development from the outset. In other words, without the mathematics of complex algebra and Riemann geometry respectively, these theories would have been stillborn.
 
I like to quote Richard Feynman from his book, The Character of Physical Law, in a chapter titled, The Relation of Mathematics to Physics:
 
…what turns out to be true is that the more we investigate, the more laws we find, and the deeper we penetrate nature, the more this disease persists. Every one of our laws is a purely mathematical statement in rather complex and abstruse mathematics... Why? I have not the slightest idea. It is only my purpose to tell you about this fact.
 
The strange thing about physics is that for the fundamental laws we still need mathematics.
 
Physicists cannot make a conversation in any other language. If you want to learn about nature, to appreciate nature, it is necessary to understand the language that she speaks in. She offers her information only in one form.

 
And this has only become more evident since Feynman wrote those words.
 
There was another revolution in the 20th Century, involving Alan Turing, Alonzo Church and Kurt Godel; this time involving mathematics itself. Basically, each of these independently demonstrated that some mathematical truths are elusive to proof: some mathematical conjectures cannot be proved within the mathematical system from which they arose. The most famous candidate would be Riemann’s Hypothesis, involving primes. The Goldbach conjecture (also involving primes) and the twin primes conjecture may also fit into this category. While most mathematicians believe them to be true, they are yet to be proven. I won’t elaborate on them, as they can easily be looked up.
 
But there is more: according to Gregory Chaitin, there are infinitely more incomputable Real numbers than computable Real numbers, which means that most of mathematics is inaccessible to logic.
 
So, when I say it all started with Euclid, I mean all the technology and infrastructure that we take for granted; and which allows me to write this so that virtually anyone anywhere in the world can read it; only exists because Euclid was able to derive ‘truths’ that stood for centuries and ultimately led to this.