Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Saturday, 7 September 2024

Science and religion meet at the boundary of humanity’s ignorance

 I watched a YouTube debate (90 mins) between Sir Roger Penrose and William Lane Craig, and, if I’m honest, I found it a bit frustrating because I wish I was debating Craig instead of Penrose. I also think it would have been more interesting if Craig debated someone like Paul Davies, who is more philosophically inclined than Penrose, even though Penrose is more successful as a scientist, and as a physicist, in particular.
 
But it was set up as an atheist versus theist debate between 2 well known personalities, who were mutually respectful and where there was no animosity evident at all. I confess to having my own biases, which would be obvious to any regular reader of this blog. I admit to finding Craig arrogant and a bit smug in his demeanour, but to be fair, he was on his best behaviour, and perhaps he’s matured (or perhaps I have) or perhaps he adapts to whoever he’s facing. When I call it a debate, it wasn’t very formal and there wasn’t even a nominated topic. I felt the facilitator or mediator had his own biases, but I admit it would be hard to find someone who didn’t.
 
Penrose started with his 3 worlds philosophy of the physical, the mental and the abstract, which has long appealed to me, though most scientists and many philosophers would contend that the categorisation is unnecessary, and that everything is physical at base. Penrose proposed that they present 3 mysteries, though the mysteries are inherent in the connections between them rather than the categories themselves. This became the starting point of the discussion.
 
Craig argued that the overriding component must surely be ‘mind’, whereas Penrose argued that it should be the abstract world, specifically mathematics, which is the position of mathematical Platonists (including myself). Craig pointed out that mathematics can’t ‘create’ the physical (which is true), but a mind could. As the mediator pointed out (as if it wasn’t obvious), said mind could be God. And this more or less set the course for the remainder of the discussion, with a detour to Penrose’s CCC theory (Conformal Cyclic Cosmology).
 
I actually thought that this was Craig’s best argument, and I’ve written about it myself, in answer to a question on Quora: Did math create the Universe? The answer is no, nevertheless I contend that mathematics is a prerequisite for the Universe to exist, as the laws that allowed the Universe to evolve, in all its facets, are mathematical in nature. Note that this doesn’t rule out a God.
 
Where I would challenge Craig, and where I’d deviate from Penrose, is that we have no cognisance of who this God is or even what ‘It’ could be. Could not this God be the laws of the Universe themselves? Penrose struggled with this aspect of the argument, because, from a scientific perspective, it doesn’t tell us anything that we can either confirm or falsify. I know from previous debates Craig has had that he would see this as a win. A scientist can’t refute his God’s existence, nor can they propose an alternative, so the point goes to him by default.
 
This eventually led to a discussion on the ‘fine-tuning’ of the Universe, which in the case of entropy, is what led Penrose to formulate his CCC model of the Universe. Of course, the standard alternative is the multiverse and the anthropic principle, which, as Penrose points out, is also applicable to his CCC model, where you have an infinite sequence of universes as opposed to an infinity of simultaneous ones, which is the orthodox response among cosmologists.
 
This is where I would have liked to have seen Paul Davies respond, because he’s an advocate of John Wheeler’s so-called ‘participatory Universe’, which is effectively the ‘strong anthropic principle’ as opposed to the ‘weak anthropic principle’. The weak anthropic principle basically says that ‘observers’ (meaning us) can only exist in a universe that allows observers to exist – a tautology. Whereas the strong anthropic principle effectively contends that the emergence of observers is a necessary condition for the Universe to exist (the observers don’t have to be human). Basically, Wheeler was an advocate of a cosmic, acausal (backward-in-time) link from conscious observers to the birth of the Universe. I admit this appeals to me, but as Craig would expound, it’s a purely metaphysical argument, and so is the argument for God.
 
The other possibility that is very rarely expressed, is that God is the end result of the Universe rather than its progenitor. In other words, the ‘mind’ that Craig expounded upon is a consequence of all of us. This aligns more closely with the Hindu concept of Brahman or a Buddhist concept of collective karma – we get the God we deserve. Erwin Schrödinger, who studied the Upanishads, discusses Brahman as a pluralistic ‘mind’ in What is Life?. (Note that in Hinduism, the soul or Atman is a part of Brahman). My point would be that the Judeo-Christian-Islamic God does not have a monopoly on Craig’s overriding ‘mind’ concept.
 
A recurring theme on this blog is that there will always be mysteries – we can never know everything – and it’s an unspoken certitude that there will forever be knowledge beyond our cognition. The problem that scientists sometimes have, but are reluctant to admit, is that we can’t explain everything, even though we keep explaining more with each generation. And the problem that theologians sometimes have is that our inherent ignorance is neither ‘proof’ nor ‘evidence’ that there is a ‘creator’ God.
 
I’ve argued elsewhere that a belief in God is purely a subjective and emotional concept, which one then rationalises with either cultural references or as an ultimate explanation for our existence.


Addendum: I like this quote, albeit out of context, from Spinoza: "The sum of the natural and physical laws of the universe and certainly not an individual entity or creator".


Thursday, 29 August 2024

How scale demonstrates that mathematics is intrinsically entailed in the Universe

 I momentarily contemplated another title: Is the Planck limit an epistemology or an ontology? Because that’s basically the topic of a YouTube video that’s the trigger for this post. I wrote a post some time ago where I discussed whether the Universe is continuous or discrete, and basically concluded that it was continuous. Based on what I’ve learned from this video, I might well change my position. But I should point out that my former opposition was based more on the idea that it could be quantised into ‘bits’ of information, whereas now I’m willing to acknowledge that it could be granular at the Planck scale, which I’ll elaborate on towards the end. I still don’t think that the underlying reality of the Universe is in ‘bits’ of information, therefore potentially created and operated by a computer.
 
Earlier this year, I discussed the problem of reification of mathematics so I want to avoid that if possible. By reification, I mean treating a mathematical entity as if it were physically real. Basically, physics works by formulating mathematical models that we then compare to reality through observations. But as Freeman Dyson pointed out, the wave function (Ψ), for example, is a mathematical entity and not a physical entity, which is sometimes debated. The fact is that if it does exist physically, it’s never observed, and my contention is that it ‘exists’ in the future; a view that is consistent with Dyson’s own philosophical viewpoint that QM can only describe the future and not the past.
 
And this brings me to the video, which has nothing to say about wave functions or reified mathematical entities, but uses high school mathematics to explore such esoteric and exotic topics as black holes and quantum gravity. There is one step involving integral calculus, which is about as esoteric as the maths becomes, and if you allow that 1/∞ = 0, it leads to the formula for the escape velocity from any astronomical body (including Earth). Note that the escape velocity literally allows an object to escape a gravitational field to infinity (∞). And the escape velocity for a black hole is c (the speed of light).
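As a sanity check on that derivation (this is my own sketch, not the video’s working, using standard rounded values for the constants), the escape-velocity formula and the black-hole condition can be evaluated numerically:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def escape_velocity(M, r):
    """v_esc = sqrt(2GM/r): integrating Newton's gravity from r out to
    infinity (where 1/infinity = 0) gives the kinetic energy needed to escape."""
    return math.sqrt(2 * G * M / r)

def schwarzschild_radius(M):
    """Setting v_esc = c and solving for r gives r_s = 2GM/c^2."""
    return 2 * G * M / c**2

v_earth = escape_velocity(5.972e24, 6.371e6)   # Earth: ~11.2 km/s
r_sun = schwarzschild_radius(1.989e30)         # solar mass: ~3 km
```

The familiar 11.2 km/s for Earth drops out directly, and the same formula, with the escape velocity set to c, reproduces the Schwarzschild radius that Schwarzschild originally derived from Einstein’s field equations.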
 
All the other mathematics is basic algebra using some basic physics equations, like Newton’s equation for gravity, Planck’s equation for energy, Heisenberg’s Uncertainty Principle using Planck’s Constant (h), Einstein’s famous equation for the equivalence of energy and mass, and the equation for the Coulomb Force between 2 point electric charges (electrons). There is also the equation for the Schwarzschild radius of a black hole, which is far easier to derive than you might imagine (despite the fact that Schwarzschild originally derived it from Einstein’s field equations).
 
Back in May 2019, I wrote a post on the Universe’s natural units, which involves the fundamental natural constants, h, c and G. This was originally done by Planck himself, which I describe in that post, while providing a link to a more detailed exposition. In the video (embedded below), the narrator takes a completely different path to deriving the same Planck units before describing a method that Planck himself would have used. In so doing, he explains how at the Planck level, space and time are not only impossible to observe, even in principle, but may well fail to remain continuous in reality. You need to watch the video, as he explains it far better than I can, just using high school mathematics.
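For reference, the Planck units come from combining the constants so that the dimensions cancel. A minimal sketch (my own, with standard rounded values, and using the reduced constant ħ rather than h):

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s

# Dimensional analysis of hbar, G and c yields unique length, time and mass:
l_planck = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
t_planck = math.sqrt(hbar * G / c**5)   # ~5.4e-44 s (= l_planck / c)
m_planck = math.sqrt(hbar * c / G)      # ~2.2e-8 kg
```

Note how lopsided the scales are: the Planck length and time are absurdly small, while the Planck mass is macroscopic (roughly that of a grain of dust), which is part of what makes the Planck regime so strange.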
 
Regarding the title I chose for this post, Roger Penrose’s Conformal Cyclic Cosmology (CCC) model of the Universe, exploits the fact that a universe without matter (just radiation) is scale invariant, which is essential for the ‘conformal’ part of his theory. However, that all changes when one includes matter. I’ve argued in other posts that different forces become dominant at different scales, from the cosmological to the subatomic. The point made in this video is that at the Planck scale all the forces, including gravity, become comparable. Now, as I pointed out at the beginning, physics is about applying mathematical models and comparing them to reality. We can’t, and quite possibly never will, be able to observe reality at the Planck scale, yet the mathematics tells us that it’s where all the physics we currently know is compatible. It tells me that not only is the physics of the Universe scale-dependent, but it's also mathematically dependent (because scale is inherently mathematical). In essence, the Universe’s dynamics are determined by mathematical parameters at all scales, including the Planck scale.
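The claim that the forces become comparable at the Planck scale can be illustrated with a back-of-envelope ratio (again my own sketch, not the video’s). Because gravity and the Coulomb force both fall off as 1/r², their ratio is distance-independent: for two electrons it is a vanishing ~10⁻⁴³, but substitute Planck masses for the electron masses and it jumps to ~137 (the inverse fine-structure constant) – the two forces land in the same ballpark.

```python
import math

G = 6.674e-11     # gravitational constant
k = 8.988e9       # Coulomb constant, N m^2 C^-2
e = 1.602e-19     # elementary charge, C
m_e = 9.109e-31   # electron mass, kg
hbar = 1.0546e-34
c = 2.998e8

m_planck = math.sqrt(hbar * c / G)   # ~2.2e-8 kg

def gravity_to_coulomb(m):
    """Ratio of gravitational to Coulomb force between two charges e of mass m;
    the 1/r^2 factors cancel, so the ratio is independent of separation."""
    return G * m**2 / (k * e**2)

ratio_electrons = gravity_to_coulomb(m_e)       # ~2.4e-43: gravity negligible
ratio_planck = gravity_to_coulomb(m_planck)     # ~137: forces comparable
```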
 
Note that the mathematical relationships in the video use ~ not = which means that they are approximate, not exact. But this doesn’t detract from the significance that 2 different approaches arrive at the same conclusion, which is that the Planck scale coincides with the origin of the Universe incorporating all forces equivalently.
 
 
Addendum: I should point out that Viktor T Toth, who knows a great deal more about this than me, argues that there is, in fact, no limit to what we can measure in principle. Even the narrator in the video frames his conclusion cautiously and with caveats. In other words, we are in the realm of speculative physics. Nevertheless, I find it interesting to contemplate where the maths leads us.



Sunday, 28 July 2024

When truth becomes a casualty, democracy is put at risk

 You may know of Raimond Gaita as the author of Romulus, My Father, a memoir of his childhood, as the only child of postwar European parents growing up in rural Australia. It was turned into a movie directed by Richard Roxburgh (his directorial debut) and starring Eric Bana. What you may not know is that Raimond Gaita is also a professor of philosophy who happens to live in the same metropolis as me, albeit in different suburbs.
 
I borrowed his latest tome, Justice and Hope: Essays, Lectures and Other Writings, from my local library (published last year, 2023), and have barely made a dent in the 33 essays, unequally divided into 6 parts. So far, I’ve read the 5 essays in Part 1: An Unconditional Love of the World, and just the first essay of Part 2: Truth and Judgement, which is titled rather provocatively, The Intelligentsia in the Age of Trump. Each essay heading includes the year it was written, and the essay on the Trump phenomenon (my term, not his) was written in 2017, so after Trump’s election but well before his ignominious attempt to retain power following his election defeat in 2020. And, of course, he now has more stature and influence than ever, having just won the Presidential nomination from the Republican Party for the 2024 election, which is only months away as I write.
 
Gaita doesn’t write like an academic in that he uses plain language and is not afraid to include personal anecdotes if he thinks they’re relevant, and doesn’t pretend that he’s nonpartisan in his political views. The first 5 essays regarding ‘an unconditional love of the world’ all deal with other writers and postwar intellectuals, all concerned with the inhumane conditions that many people suffered, and some managed to survive, during World War 2. This is confronting and completely unvarnished testimony, much darker and rawer than anything I’ve come across in the world of fiction, as if no writer’s imagination could possibly capture the absolute lowest and worst aspects of humanity.
 
None of us really know how we would react in those conditions. Sometimes in dreams we may get a hint. I’ve sometimes considered dreams as experiments that our minds play on us to test our moral fortitude. I know from my father’s experiences in WW2, both in the theatre of war and as a POW, that one’s moral compass can be bent out of shape. He told me of how he once threatened to kill someone who was stealing from wounded who were under his care. The fact that the person he threatened was English and the wounded were Arabs says a lot, as my father held the same racial prejudices as most of his generation. But I suspect he’d witnessed so much unnecessary death and destruction on such a massive scale that the life of a petty, opportunistic thief seemed worthless indeed. When he returned, he had a recurring dream where there was someone outside the house and he feared to confront them. And then on one occasion he did and killed them barehanded. His telling of this tale (when I was much older, of course) reminded me of Luke Skywalker meeting and killing his Jungian shadow in The Empire Strikes Back. My father could be a fearsome presence in those early years of my life – he had demons and they affected us all.
 
Another one of my tangents, but Gaita’s ruminations on the worst of humanity perpetrated by a nation with a rich and rightly exalted history makes one realise that we should not take anything for granted. I’ve long believed that anyone can commit evil given the right circumstances. We all live under this thin veneer that only exists because we mostly have everything we need and are generally surrounded by people who have no real axe to grind and who don’t see our existence as a threat to their own wellbeing.
 
I recently saw the movie, Civil War, starring Kirsten Dunst, who plays a journalist covering a hypothetical conflict in America, consequential to an authoritarian government taking control of the White House. The aspect that I found most believable was how the rule of law no longer seemed to apply, and people had become completely tribal whereupon one’s neighbour could become one’s enemy. I’ve seen documentaries on conflicts in Rwanda and the former Yugoslavia where this has happened – neighbours become mortal enemies, virtually overnight, because they suddenly find themselves on opposite sides of a tribal divide. I found the movie quite scary because it showed what happens when the veneer of civility we take for granted is not just lifted, but disappears.
 
On the first page of his essay on Trump, Gaita sets the tone and the context that resulted in Brexit on one side of the Atlantic and Trump’s Republican nomination on the other.
 
Before Donald Trump became the Republican nominee, Brexit forced many among the left-liberal intelligentsia to ask why they had not realised that resentment, anger and even hatred could go so deep as they did in parts of the electorate.

 
I think the root cause of all these dissatisfactions and resentments that lead to political upheavals that no one sees coming is trenchant inequality. I remember my father telling me when I was a child that the conflict in Ireland wasn’t between 2 religious groups but about wealth and inequality. I suspect he was right, even though it seems equally simplistic.
 
Running through all these divisions that we’ve seen, including in Australia, is the perception that people living in rural areas are being left out of the political process and not getting their fair share of representation, and consequentially everything else that follows from that, which results in what might be called a falling ‘standard of living’. The fallout from the GFC, which was global, exacerbated these differences, both perceived and real, and conservative politicians took advantage. They depicted the Left as ‘elitist’, which is alluded to in the title of Gaita’s essay, and is ‘code’ for ignorant and arrogant. This happened in Australia and I suspect in other Western democracies as well, like the UK and America.
 
Gaita expresses better than most how Trump has changed politics in America, if nowhere else, by going outside the bounds of normal accepted behaviour for a world leader. In effect, he’s changed the social norms that one associates with a person holding that position.
 
To illustrate my point, I’ll provide selected quotes, albeit out of context.
 
To call Trump a radically unconventional politician is like calling the mafia unconventional debt collectors; it is to fail to understand how important are the conventions, often unspoken, that enables decency in politics. Trump has poured a can of excrement over those conventions.
 
He has this to say about Trump’s ‘alternative facts’ not only espoused by him, but his most loyal followers.

In linking reiterated accusations of fake news to elites, Trump and his accomplices intended to undermine the conceptual and epistemic space that makes conversations between citizens possible.
 
It is hardly possible to exaggerate the seriousness of this. The most powerful democracy on Earth, the nation that considers itself and is often considered by others to be the leader of ‘the free world’, has a president who attacks unrelentingly the conversational space that can exist only because it is based on a common understanding – the space in which citizens can confidently ask one another what facts support their opinions. If they can’t ask that of one another, if they can’t agree on when something counts as having been established as fact, then the value of democracy is diminished.
 
He then goes on to cite Hillbilly Elegy by J.D. Vance (recently nominated as Trump’s running mate), where ‘he tells us… that Obama is not an American, that he was “born in some far-flung corner of the world”, that he has ties to Islamic extremism…’ and much worse.
 
Regarding some of Trump’s worst excesses during his 2016 campaign, like getting the crowd to shout “Lock her up!” (aimed at his political opponent at the time), Gaita makes this point:
 
At the time, a CNN reporter said that his opponents did not take him seriously, but they did take him literally, whereas his supporters took him seriously but not literally. It was repeated many times… he would be reined in by the Republicans in the House and the Senate and by trusted institutions. [But] He hasn’t changed in office.
 
It’s worth contemplating what this means if he wins Office again in 2024. He’s made it quite clear he’s out for revenge, and he’s also now been given effective immunity from prosecution by the Supreme Court if he seeks revenge through the Justice Department while he’s in Office. There is also the infamous Project 2025 which has the totally unhidden agenda to get rid of the so-called ‘deep state’ and replace public servants with Trump acolytes, not unlike a dictatorship. Did I just use that word?
 
Trump has achieved something I’ve never witnessed before, which Gaita doesn’t mention, though I have the benefit of an additional 7 years hindsight. What I’m referring to is that Trump has created an alternative universe, and from the commentary I’ve read on forums like Quora and elsewhere, you either live in one universe or the other – it’s impossible to claim you inhabit both. In other words, Trump has created an unbridgeable divide, which can’t be reconciled politically or intellectually. In one universe, Biden stole the 2020 POTUS election from Trump, and in another universe, Trump attempted to overturn the election and failed.
 
This is the depth of division that Trump has created in his country, and you have to ask: How far will people go to defend their version of the truth?
 
It was less than a century ago that fascism threatened the entire world order and created the most extensive conflict witnessed by humankind. I don’t think it’s an exaggeration to say that we are on the potential brink of creating a new brand of authoritarianism in the country epitomised by the slogan, ‘the free world’.

Monday, 22 July 2024

Zen and the art of flow

 This was triggered by a newsletter I received from ABC Classic (Australian radio station) with a link to a study done on ‘flow’, which is a term coined by psychologist, Mihaly Csikszentmihalyi, to describe a specific psychological experience that many (if not all) people have had when totally immersed in some activity that they not only enjoy but have developed some expertise in.
 
The study was performed by Dr John Kounios from Drexel University's Creative Research Lab in Philadelphia, who “examined the 'neural and psychological correlates of flow' in a sample of jazz guitarists.” The article was authored by Jennifer Mills from ABC Classic’s sister station, ABC Jazz. But the experience of ‘flow’ doesn’t just apply to mental or artistic activities, but also to sporting activities like playing tennis or cricket. Mills heads her article with the claim that ‘New research helps unlock the secrets of flow, an important tool for creative and problem solving tasks’. She quotes Csikszentmihalyi to provide a working definition:
 
"A state in which people are so involved in an activity that nothing else seems to matter; the experience is so enjoyable that people will continue to do it even at great cost, for the sheer sake of doing it."
 
I believe I’ve experienced ‘flow’ in 2 quite disparate activities: writing fiction and driving a car. Just to clarify, some people think that experiencing flow while driving means that you daydream, whereas I’m talking about the exact opposite. I hardly ever daydream while driving, and if I find myself doing it, I bring myself back to the moment. Of course, cars are designed these days to insulate you from the experience of driving as much as possible, as we evolve towards self-driving cars. Thankfully, there are still cars available that are designed to involve you in the experience and not remove you from it.
 
I was struck by the fact that the study used jazz musicians, as I’ve often compared the ability to play jazz with the ability to write dialogue (even though I’m not a musician). They both require extemporisation. The article references Nat Bartsch, whom I’ve seen perform live and whose music is an unusual style of jazz in that it can be very contemplative. I saw her perform one of her albums with her quartet, augmented with a cello, which made it a one-off, unique performance. (This is a different concert performed in Sydney without the cellist.)
 
The study emphasised the point that the more experienced practitioners taking part were the ones more likely to experience ‘flow’. In other words, to experience ‘flow’ you need to reach a certain skill-level. In emphasising this point, the author quotes jazz legend, Charlie Parker:
 
"You've got to learn your instrument. Then, you practise, practise, practise. And then, when you finally get up there on the bandstand, forget all that and just wail."
 
I can totally identify with this, as when I started writing it was complete crap, to the extent that I wouldn’t show it to anyone. For some irrational reason, I had the self-belief – some might say, arrogance – that, with enough perseverance and practice, I could ‘break through’ into the required skill-level. In fact, I now create characters and write dialogue with little conscious effort – it’s become a ‘delegated’ task, so I can concentrate on the more complex tasks of resolving plot points, developing moral dilemmas and formulating plot twists. Notice that these require a completely different set of skills that also had to be learned from scratch. But all this can come together, often in unexpected and surprising ways, when one is in the mental state of ‘flow’. I’ve described this as a feeling like you’re an observer, not the progenitor, so the process occurs as if you’re a medium and you just have to trust it.
 
Dr. Steffan Herff, leader of the Sydney Music, Mind and Body Lab at Sydney University, makes a point that supports this experience:
 
"One component that makes flow so interesting from a cognitive neuroscience and psychology perspective, is that it comes with a 'loss of self-consciousness'."
 
And this allows me to segue into Zen Buddhism. Many years ago, I read an excellent book by Daisetz Suzuki titled, Zen and Japanese Culture, where he traces the evolutionary development of Zen, starting with Buddhism in India, then being adopted in China, where it was influenced by Taoism, before reaching Japan, where it was assimilated into a sister-religion (for want of a better term) with Shintoism, which is an animistic religion.
 
Suzuki describes Zen as going inward rather than outward, while acknowledging that the two can’t be disconnected. But I think it’s the loss of ‘self’ that makes it relevant to the experience of flow. When Suzuki described the way Zen is practiced in Japan, he talked about being in the moment, whatever the activity, and for me, this is an ideal that we rarely attain. It was only much later that I realised that this is synonymous with flow as described by Csikszentmihalyi and currently being examined in the studies referenced above.
 
I’ve only once before written a post on Zen (not counting a post on Buddhism and Christianity), which arose from reading Douglas Hofstadter’s seminal tome, Gödel, Escher, Bach (which is not about Zen, although it gets a mention), and it’s worth quoting this summation from myself:
 
My own take on this is that one’s ego is not involved yet one feels totally engaged. It requires one to be completely in the moment, and what I’ve found in this situation is that time disappears. Sportsmen call it being ‘in the zone’ and it’s something that most of us have experienced at some time or another.

Friday, 5 July 2024

The universal quest for meaning

I’ve already cited Philosophy Now (Issue 162, June/July 2024) in my last 2 posts and I’m about to do it again. Every issue has a theme, and this one is called ‘The Meaning Issue’, so it’s no surprise that 2 of the articles reference Viktor Frankl’s seminal book, Man’s Search for Meaning. I’ve said that it’s probably the only book I’ve read that I think everyone should read.
 
For those who don’t know, Frankl was an Auschwitz survivor and a ‘logotherapist’, a term he coined to describe his own version of existential psychological therapy. Basically, Frankl saw purpose as being the unrecognised essence of our existence, and its lack as a source of mental issues like depression, neuroticism and stress. I’ve written about the importance of purpose previously, so I might repeat myself.
 
One of the articles (by Georgia Arkell) compares Frankl’s ideas on existentialism with Sartre’s, and finds Frankl more optimistic. I know that I’m taking a famous line out of context, but I feel it sums up their differences. Sartre famously said, ‘Hell is other people’, but Frankl lived through hell, and would no doubt, have strongly disagreed. Frankl argued that we can find meaning even under the most extreme circumstances, and he should know.
 
To quote from Arkell’s article:
 
Frankl noted that the prisoners who appeared to have the highest chance of survival were those with some aim or meaning directed beyond themselves and beyond day to day survival.
 

Then there is this, in Frankl’s own words (from Man’s Search for Meaning):
 
…it becomes clear that the sort of person the prisoner became was the result of an inner decision and not the result of camp influences alone. Fundamentally then, any man can, under such circumstances, decide what shall become of him – mentally and spiritually.

 
I should point out that my own father spent 2.5 years as a POW in Germany, though it wasn’t a death camp, even though, according to his own testimony, it was only Red Cross food parcels that kept him alive. He rarely talked about it, as he was a firm believer that you couldn’t make the experience of war, in all its manifestations, comprehensible to anyone who hadn’t experienced it. But in light of Frankl’s words, I wonder now how my father did find meaning. There is one aspect of his experience that might shed some light on that – he escaped no less than 3 times.
 
My father was very principled, some might say, to a fault. He volunteered to stay and look after the wounded when they were ordered to evacuate Crete, because, as he said, it was his job (he was an ambulance officer in the Field Ambulance Corps). That action probably later saved his life, but that’s another story. Also on Crete, while trying to escape with another prisoner with the help of a local woman (it was always the women who did this, according to my father), they were discovered by a German, whilst hiding. My father gave himself up so the other 2 could escape. The Australian escapee made it back home and was able to tell my grandmother that her son was still alive (she only knew he was missing in action). But the 3 attempts I mentioned all happened after he was taken to Germany, and on one occasion, the Commandant asked him, why did he escape? My father answered matter-of-factly, ‘It’s my job’. Apparently, due to his sincerity (not for being a smart-arse), the Commandant chose not to punish him.
 
So, I think my father survived because he stuck to some core values and principles that became his own rock and anchor. His attempts to escape are manifestations of his personal affirmation that he never lost hope.
 
Frankl understood better than most, because of his lived experience, the importance of hope to a person’s survival. As an aside, our (Australian) government has a very deliberate policy of eliminating all hope for asylum seekers who arrive by sea. I think it’s so iniquitous, it should be a recognised crime – it goes to the heart of human rights. Slightly off-topic, but very relevant.
 
Loss of hope is something I’ve explored in my own fiction, where we witness its loss like a ball of tightly wound string slowly unravelling (not the metaphor I used in the book), as a key character is abandoned on a distant world (it’s sci-fi, for those who don’t know). I’ve been told by at least one reader that it’s the most impactful section in the book. True story: I was once sitting next to someone on a bus who was up to that part of the book, and as he got up to leave, he said, ‘If she dies, I’ll never speak to you again.’
 
See how easily I get side-tracked - my mind goes off on tangents – I can’t help myself. I’m the same in conversations.
 
Back to the topic: the other article in Philosophy Now that references Frankl, Finding Meaning in Suffering, by Patrick Testa (a psychiatric clinician with a BA in philosophy and political science) also quotes from Man’s Search for Meaning:
 
There are some authors who contend that meanings and values are nothing but defense mechanisms or reaction formulations…  But for myself, I would not be willing to live merely for the sake of my defense mechanisms, nor would I be ready to die merely for the sake of my reaction formulations.
(Emphasis in Testa’s quote)
 
This quote was the original trigger for this essay, as it leads me to consider the role of identity. I’ve long argued that identity is what someone is willing to die for (which Frankl specifically mentions), therefore willing to kill for. Identity is strongly related to ‘meaning’ for most people, albeit at a subconscious level. For some people, their identity is their profession, for others it’s their heritage, and for many it’s their political affiliation. The point about identity is that it both binds us and divides us.
 
But if you were to ask someone what their identity is, they might well struggle to answer – I know I do – but if it appears to be threatened, even erroneously, they will become combative. Speaking for myself, I struggled to find meaning for a large portion of my life, seeking it in relationships that were more fantasy than realistic. I think I only found meaning (or purpose) when I was able to channel my artistic drives and also express my intellectual meanderings like I’m doing on this blog. So that axiomatically becomes my identity. I’ve written more than once about the importance of freedom, by which I mean the freedom to express one’s thoughts and any artistic urges. Even in my profession (which is in engineering), I found I was best when left to my own devices, and suffered most when someone tried to put me in a box and confine me to their way of thinking.
 
I can’t imagine living in a society where that particular freedom is curtailed, yet they exist. I would argue that a society where its participants can’t flourish would stagnate and not progress in any way, except possibly in a strictly material sense. We’ve seen that in totalitarian regimes all over the world.
 
Lastly, one can’t leave this topic without talking about religion. In fact, I imagine that many, on reading the title, would have expected that would be the starting point. I’ll provide a reference at the end, but very early on in the life of this blog, I wrote a post called Hope, which was really a response to a somewhat facile argument by William Lane Craig that atheists can’t possibly have hope. I don’t think I can improve on that argument here, but it also ties into the topic of identity that I just referred to.
 
Apart from identity, which is usually cultural, there is the universal regard for human suffering. As pointed out in the articles I cited, suffering is an unavoidable aspect of life. Buddhist philosophy makes this its starting point – it’s the first of the Four Noble Truths, from which the other 3 stem. I expect a lot of religions have arisen as a means to psychologically ‘explain’ the purpose of suffering. It’s also a feature of virtually all fiction, without a religious argument in sight.
 
But it’s also a key feature of Frankl’s philosophy. Arguably, without suffering, we can’t find meaning. I’ve argued previously that we don’t find wisdom through learning and achievements, but through dealing with adversity – it’s even a specific teaching in the I Ching, albeit expressed in different words:
 
Adversity is the opposite of success, but it can lead to success if it befalls the right person.
 
I expect many of us can identify with that. Meaning can be found in the darkest of psychological places, yet without it, we wouldn’t keep going.
 
 
Other posts relevant to this topic: Homage to my Old Man; Hope; The importance of purpose; Freedom, justice, happiness and truth; Freedom, a moral imperative.
 

Saturday, 29 June 2024

Feeling is fundamental

 I’m not sure I’ve ever had an original idea, but I sometimes raise one that no one else seems to talk about. And this is one of them: I contend that the primary, essential attribute of consciousness is to be able to feel, and the ability to comprehend is a secondary attribute.
 
I don’t even mind if this contentious idea triggers debate, but we tend to always discuss consciousness in the context of human consciousness, where we metaphorically talk about making decisions based on the ‘head’ or the ‘heart’. I’m unsure of the origin of this dichotomy, but there is an inference that our emotional and rational ‘centres’ (for want of a better word) have different loci (effectively, different locations). No one believes that, of course, but possibly people once did. The thing is that we are all aware that sometimes our emotional self and rational self can be in conflict. This is already going down a path I didn’t intend, so I may return at a later point.
 
There is some debate about whether insects have consciousness, but I believe they do because they demonstrate behaviours associated with fear and desire, be it for sustenance or company. In other respects, I think they behave like automatons. Colonies of ants and bees can build a nest without a blueprint except the one that apparently exists in their DNA. Spiders build webs and birds build nests, but they don’t do it the way we would – it’s all done organically, as if they have a model in their brain that they can follow; we actually don’t know.
 
So I think the original role of consciousness in evolutionary terms was to feel, concordant with abilities to act on those feelings. I don’t believe plants can feel; even if they could, they’d have very limited ability to act on those feelings. They can communicate chemically, and generally rely on the animal kingdom to propagate, which is why a global threat to bee populations is very serious indeed.
 
So, in evolutionary terms, I think feeling came before cognitive abilities – a point I’ve made before. It’s one of the reasons that I think AI will never be sentient – a viewpoint not shared by most scientists and philosophers, from what I’ve read.  AI is all about cognitive abilities; specifically, the ability to acquire knowledge and then deploy it to solve problems. Some argue that by programming biases into the AI, we will be simulating emotions. I’ve explored this notion in my own sci-fi, where I’ve added so-called ‘attachment programming’ to an AI to simulate loyalty. This is fiction, remember, but it seems plausible.
 
Psychological studies have revealed that we need an emotive component to behave rationally, which seems counter-intuitive. But would we really prefer it if everyone were a zombie or a psychopath, with no ability to empathise or show compassion? We see enough of this already. As I’ve pointed out before, in any ingroup-outgroup scenario, totally rational individuals can become totally irrational. We’ve all observed this, and possibly actively participated.
 
An oft-made point (by me), which I feel is not given enough consideration, is that without consciousness, the universe might as well not exist. I agree with Paul Davies (who espouses something similar) that the universe’s ability to be self-aware would seem to be a necessary condition for its existence (my wording, not his). I recently read a stimulating essay in the latest edition of Philosophy Now (Issue 162, June/July 2024), enigmatically titled Significance, by Ruben David Azevedo, a ‘Portuguese philosophy and social sciences teacher’. His self-described intent is to ‘Tell us why, in a limitless universe, we’re not insignificant’. In fact, that was the trigger for this post. He makes the point (that I’ve made elsewhere myself) that in both time and space we couldn’t be more insignificant, which leads many scientists and philosophers to see us as a freakish by-product of an otherwise purposeless universe – a perspective that Davies has coined ‘the absurd universe’. In light of this, it’s worth reading Azevedo’s conclusion:
 
In sum, humans are neither insignificant nor negligible in this mind-blowing universe. No living being is. Our smallness and apparent peripherality are far from being measures of our insignificance. Instead, it may well be the case that we represent the apex of cosmic evolution, for we have this absolute evident and at the same time mysterious ability called consciousness to know both ourselves and the universe.
 
I’m not averse to the idea that there is a cosmic role for consciousness. I like John Wheeler’s obvious yet pertinent observation:
 
The Universe gave rise to consciousness, and consciousness gives meaning to the Universe.

 
And this is my point: without consciousness, the Universe would have no meaning. And getting back to the title of this essay, we give the Universe feeling. In fact, I’d say that the ability to feel is more significant than the ability to know or comprehend.
 
Think about the role of art in all its manifestations, and how it’s totally dependent on the ability to feel. In some respects, I consider AI-generated art a perversion, because any feeling we have for its products is of our own making, not the AI’s.
 
I’m one of those weird people who can even find beauty in mathematics, while acknowledging only a limited ability to pursue it. It’s extraordinary that I can find beauty in a symphony, or a well-written story, or the relationship between prime numbers and Riemann’s Zeta function.


Addendum: I realised I can’t leave this topic without briefly discussing the biochemical role in emotional responses and behaviours. I’m thinking of the brain’s drugs-of-choice like serotonin, dopamine, oxytocin and endorphins. Some may argue that these natural ‘messengers’ are all that’s required to explain emotions. However, there are other drugs, like alcohol and caffeine (arguably the most common) that also affect us emotionally, sometimes to our detriment. My point being that the former are nature’s target-specific mechanisms to influence the way we feel, without actually being the genesis of feelings per se.

Wednesday, 19 June 2024

Daniel C Dennett (28 March 1942 - 19 April 2024)

 I only learned about Dennett’s passing in the latest issue of Philosophy Now (Issue 162, June/July 2024), where Daniel Hutto (Professor of Philosophical Psychology at the University of Wollongong) wrote a 3-page obituary. Not that long ago, I watched an interview with him, following the publication of his last book, I’ve Been Thinking, which, from what I gathered, is basically a memoir, as well as an insight into his philosophical musings. (I haven’t read it, but that’s the impression I got from the interview.)
 
I should point out that I have fundamental philosophical differences with Dennett, but he’s not someone you can ignore. I must confess I’ve only read one of his books (decades ago), Freedom Evolves (2003), though I’ve read enough of his interviews and commentary to be familiar with his fundamental philosophical views. It’s something of a failing on my part that I haven’t read his most famous tome, Consciousness Explained (1991). Paul Davies once nominated it among his top 5 books, along with Douglas Hofstadter’s Godel, Escher, Bach. But then he gave a tongue-in-cheek compliment by quipping, ‘Some have said that he explained consciousness away.’
 
Speaking of Hofstadter, he and Dennett co-published a book, The Mind’s I, which is really a collection of essays by different authors, upon which Dennett and Hofstadter commented. I wrote a short review covering only a small selection of said essays on this blog back in 2009.
 
Dennett wasn’t afraid to tackle the big philosophical issues, in particular, anything relating to consciousness. He was unusual for a philosopher in that he took more than a passing interest in science, and appreciated the discourse that axiomatically arises between the 2 disciplines, while many others (on both sides) emphasise the tension that seems to arise and often morphs into antagonism.
 
What I found illuminating in one of his YouTube videos was how Dennett’s views of the world hadn’t really changed that much over time (mind you, neither have mine), and it got me thinking that it reinforces an idea I’ve long held, but which was once articulated by Nietzsche: that our original impulses are intuitive or emotive, and we then rationalise them with argument. I can’t help but feel that this is what Dennett did, though he did it extremely well.
 
I like the quote at the head of Hutto’s obituary: “The secret of happiness is: Find something more important than you are and dedicate your life to it.”

 


Saturday, 15 June 2024

The negative side of positive thinking

This was a topic in last week’s New Scientist (8 June 2024) under the heading, The Happiness Trap, an article written by Conor Feehly, a freelance journalist based in Bangkok. Basically, he talks about the plethora of ‘self-help’ books and in particular, the ‘emergence of the positive psychology movement in 1998’. I was surprised he could provide a year, when one would tend to think it was a generational transition. At least, that’s my experience.
 
He then discusses the backlash (my term, not his) that’s occurred since, and mentions a study, ‘published in 2022, [by] an international group of psychologists exploring how societal pressure to be happy affects people in 40 countries’ (my emphasis). He cites Brock Bastian at the University of Melbourne, who was part of the study, “When we are not willing to accept negative emotions as a part of life, this can mean that we may see negative emotions as a sign there is something wrong with us.” And this gets to the nub of the issue.
 
I can’t help but think that there is a generational effect, if not a divide. I see myself as being in between, generationally speaking. My parents lived through the Great Depression and WW2, so they experienced enough negative emotion for all of us. Growing up in rural NSW, we didn’t have much but neither did anyone else, so we didn’t think that was exceptional. There was a lot of negative emotion in our lives as a consequence of the trauma that my Dad experienced as both a wartime serviceman and a prisoner-of-war. It was only much later, as an adult, that I realised this was not the norm. Back then, PTSD wasn’t a term.
 
One of the things that struck me in Feehly’s article was the idea of ‘acceptance’. To quote:
 
Research shows that when people accepted their negative emotions – rather than judging mental experience as good or bad – they become more emotionally resilient, experiencing fewer negative feelings in response to environmental stressors and attaining a greater sense of well-being.
 
He also says in the same context:
 
The good news is that, as we age, we increasingly rely on acceptance – which might help to explain why older people tend to report better emotional well-being.

 
As one of that cohort (older people), I can identify with that sentiment. Acceptance is a multi-faceted word, because one of the unexpected benefits of getting older is that we learn to accept ourselves, becoming less critical and judgemental, and hopefully extending that to others.
 
In our youth, acceptance by one’s peers is a prime driver of self-esteem and associated behaviours, and social media has to a large extent hijacked that impulse, which was also highlighted by Brock Bastian (cited above).
 
I’ve got side-tracked, to the point where this has become the antithesis of the so-called ‘positive psychology movement’ – possibly because I think my generation largely avoided that trap. We are more likely to see that a ‘think positive’ attitude in the face of all of life’s dilemmas and problems is a delusion. What’s obvious is that negative emotional states have evolutionary value, because they have ancient roots. The other point that’s obvious to me is that we are all addicted to stories, where we vicariously experience negative emotions on a regular basis. In fact, a story that contained only positive emotions would never be read, or watched.
 
What has always been obvious to me, and which I’ve written about before, including in the very early history of this blog, is that we need adversity to gain wisdom. As I keep saying, it’s the theme of virtually every story ever told. When I look back on my early adult years and how insurmountable my problems seemed at the time, my older self is so grateful I persevered. There is a hypothetical often raised: what advice would you give your younger self? I’d just say, ‘Hang in there, it gets better.’

Sunday, 9 June 2024

More on radical ideas

As you can tell from the title, this post carries on from the last one, because I got a bit bogged down on one issue, when I really wanted to discuss more. One of the things that prompted me was watching a 1hr presentation by cosmologist, Claudia de Rham, whom I’ve mentioned before, when I had the pleasure of listening to an on-line lecture she gave, courtesy of New Scientist, during the COVID lockdown.
 
Claudia’s particular field of study is gravity, and, by her own admission, she has a ‘crazy idea’. Now here’s the thing: I meet a lot of people on Quora and in the blogosphere, who, like me, live (in a virtual sense) on the fringes of knowledge rather than as academic or professional participants. And what I find is that they often have an almost zealous confidence in their ideas. To give one example, I recently came across someone who argued quite adamantly that the Universe is static, not expanding, and has even written a book on the subject. This is contrary to virtually everyone else I’m aware of who works in the field of cosmology and astrophysics. And I can’t help but compare this to Claudia de Rham, who is well aware that her idea is ‘crazy’, even though she’s fully qualified to argue it.
 
In other words, it’s a case of the more you know about a subject, the less you claim to know, because experts are more aware of their limitations than non-experts. I should point out, in case you didn’t already know, I’m not one of the experts.
 
Specifically, Claudia’s crazy idea is that not only do gravitational waves exist, but so do gravitons, and that gravitons carry an extremely tiny amount of mass, which would alter the effect of gravity at very long range. I should say that at present the evidence is against her, because if she’s right, gravitational waves would travel not at the speed of light, as predicted by Einstein, but ever-so-slightly slower.
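For context (my addition, not from her talk): this follows from the standard special-relativistic relation between energy and speed. A particle of mass m and energy E travels at

```latex
v = c\,\sqrt{1 - \frac{m^{2}c^{4}}{E^{2}}}
```

so a massless graviton travels at exactly c, while one with a tiny but nonzero mass travels just below c, with the shortfall greatest at the lowest energies (longest wavelengths) – which is why very long-range gravity is where such an effect would show up.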
 
Freeman Dyson, by the way, has argued that if gravitons do exist, they would be impossible to detect, but if Claudia is right, then they would be.
 
In her talk, Claudia also discusses the vacuum energy, which, according to particle physics, should be 28 orders of magnitude greater than the relativistic effect of ‘dark energy’. She calls it ‘the biggest discrepancy in the entire history of science’. This suggests that there is something rotten in the state of theoretical physics, along with the fact that what we can physically observe accounts for only 5% of the Universe.
 
It should be pointed out that at the end of the 19th Century no one saw or predicted the 2 revolutions in physics that were just around the corner – relativity theory and quantum mechanics. They were examples of what Thomas Kuhn called a paradigm shift (expounded in his book, The Structure of Scientific Revolutions). And I’d suggest that these current empirical aberrations in cosmology are harbingers of the next Kuhnian revolution.
 
Roger Penrose, whom I’ve referenced a number of times on this blog, is someone else with some ‘crazy’ ideas compared to the status quo, for which I admire him even if I don’t agree with him. One of Penrose’s hobby horses is his own particular inference from Godel’s Incompleteness Theorem, which he learned as a graduate (under Steen, at Cambridge) and which he discusses in this video. He argues that it provides evidence that humans don’t think like computers. If one takes the example of Riemann’s Hypothesis (really a conjecture), we know that a computer can’t tell us if it’s true or not (my example, not Penrose’s).* However, most mathematicians believe it is true, and it would be an enormous shock if it were proven untrue, or a counter-example were found by a computer. The same applies to other conjectures that have since been proven true, like Fermat’s Last Theorem and Poincare’s conjecture. Penrose’s point, if I understand him correctly, is that it takes a human mind, not a computer, to make this leap into the unknown and grasp a ‘truth’ out of the aether.
 
Anyone who has engaged in some artistic endeavour can identify with this, even if it’s not mathematical truths they are seeking but the key to unravelling a plot in a story.
 
Penrose makes the point in the video that he’s a ‘visual’ person, which he thinks is unusual in his field. Penrose is an excellent artist, by the way, and does all his own graphics. This is something else I can identify with, as I was quite precocious as a very young child at drawing (I could draw in perspective, though no one taught me) even though it never went anywhere.
 
Finally, some crazy ideas of my own. I’ve pointed out on other posts that I have a predilection (for want of a better term) for Kant’s philosophical proposition that we can never know the ‘thing-in-itself’ but only a perception of it.
 
With this in mind, I contend that this philosophical premise not only applies to what we can physically detect via instruments, but what we theoretically infer from the mathematics we use to explore nature. As heretical an idea as it may seem, I argue that mathematics is yet another 'instrument' we use to probe the secrets of the Universe. Quantum mechanics and relativity theory being the most obvious.
 
As I’ve tried to expound on other posts, relativity theory is observer-dependent, in as much as different observers will both measure and calculate different values of time and space, dependent on their specific frame of reference. I believe this is a pertinent example of Kant’s proposition that the thing-in-itself escapes our perception. In particular, physicists (including Penrose) will tell you that events that are ostensibly simultaneous to us (in a galaxy far, far away) will be perceived as both past and future by 2 observers who are simply crossing a street in opposite directions. I’ve written about this elsewhere as ‘the impossible thought experiment’.
 
The fact is that relativity theory rules out the event being observed at all. In other words, simultaneous events can’t be observed (according to relativity). For this reason, virtually all physicists will tell you that simultaneity is an illusion – there is no universal now.
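To put a number on that street-crossing scenario (my own back-of-envelope illustration, using figures I’ve chosen for the example): in special relativity, an observer moving at speed v shifts the plane of simultaneity for an event at distance dx by roughly dt = v·dx/c². For two pedestrians passing each other, that shift at intergalactic distances amounts to days.

```python
# Back-of-envelope "Andromeda paradox" calculation: how much the
# plane of simultaneity for a distant event shifts between two
# observers walking past each other in opposite directions.

C = 299_792_458.0                      # speed of light, m/s
LIGHT_YEAR = C * 365.25 * 24 * 3600    # metres in one Julian year

def simultaneity_shift(relative_speed_ms: float, distance_ly: float) -> float:
    """Shift (seconds) of 'now' at the given distance, per the Lorentz
    transform: dt = v * dx / c**2 (v << c, so gamma is effectively 1)."""
    dx = distance_ly * LIGHT_YEAR
    return relative_speed_ms * dx / C**2

# Two pedestrians crossing a street, relative speed ~3 m/s, considering
# an event in a galaxy 2.5 million light years away (Andromeda-ish):
shift_days = simultaneity_shift(3.0, 2.5e6) / 86_400
print(f"Their 'now' at that galaxy differs by about {shift_days:.1f} days")
```

Run as written, the difference comes out at roughly nine days – a macroscopic disagreement about ‘now’, produced by nothing more than walking speed.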
 
But here’s the thing: if there is an edge in either space or time, it can only be observed from outside the Universe. Relativity theory, logically enough, can only tell us what we can observe from within the Universe.
 
But to extend this crazy idea: what’s stopping the Universe existing within a higher dimension that we can’t perceive? Imagine being a fish and you spend your entire existence in a massive body of water, which is your entire universe. But then one day you are plucked out of that environment and you suddenly become aware that there is another, even bigger universe that exists right alongside yours.
 
There is a tendency for us to think that everything that exists we can learn and know about – it’s what separates us from every other living thing on the planet. But perhaps there are other dimensions, or even worlds, that lie forever beyond our comprehension.


*Footnote: Actually, Penrose in his book, The Emperor’s New Mind, discusses this in depth and at length over a number of chapters. He makes the point that Turing’s ‘proof’ that it’s impossible to predict whether a machine attempting to compute all the Riemann zeros (for example) will stop, is a practical demonstration of the difference between ‘truth’ and ‘proof’ (as Godel’s Incompleteness Theorem tells us). Quite simply, if the hypothesis is true, the computer will never stop, so it can never be proven algorithmically. It can only be proven (or disproven) if one goes ‘outside the [current] rules’, to use Penrose’s own nomenclature.
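This truth-versus-proof asymmetry is easy to demonstrate with a simpler conjecture than Riemann’s (my stand-in, not Penrose’s example): Goldbach’s conjecture, that every even number greater than 2 is the sum of two primes. A brute-force search for a counter-example halts only if the conjecture is false; while it remains true, the search runs forever, so running the program can refute the conjecture but never prove it.

```python
# Truth vs proof, Turing-style: a search that halts only on a
# counter-example. If Goldbach's conjecture is true, the unbounded
# version of this search never stops - so it can never deliver a proof.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return False
    return True

def is_goldbach_sum(n: int) -> bool:
    """True if even n > 2 can be written as a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_counterexample(limit: int):
    """Bounded stand-in for the unbounded search: returns the first even
    counter-example below `limit`, or None. The unbounded version would
    simply never return while the conjecture holds."""
    for n in range(4, limit, 2):
        if not is_goldbach_sum(n):
            return n
    return None

print(search_counterexample(10_000))  # no counter-example below 10,000
```

Mathematicians are as confident of Goldbach as they are of Riemann, yet no amount of running this program can turn that confidence into proof – which is exactly the gap Penrose is pointing at.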

Sunday, 2 June 2024

Radical ideas

 It’s hard to think of anyone I admire in physics and philosophy who doesn’t have at least one radical idea. Even Richard Feynman, who avoided hyperbole and embraced doubt as part of his credo: "I’d rather have doubt and be uncertain, than be certain and wrong."
 
But then you have this quote from his good friend and collaborator, Freeman Dyson:

Thirty-one years ago, Dick Feynman told me about his ‘sum over histories’ version of quantum mechanics. ‘The electron does anything it likes’, he said. ‘It goes in any direction at any speed, forward and backward in time, however it likes, and then you add up the amplitudes and it gives you the wave-function.’ I said, ‘You’re crazy.’ But he wasn’t.
 
In fact, his crazy idea led him to a Nobel Prize. That exception aside, most radical ideas are either still-born or yet to bear fruit, and that includes mine. No, I don’t compare myself to Feynman – I’m not even a physicist – and the truth is I’m unsure if I even have an original idea to begin with, radical or otherwise. I just read a lot of books by people much smarter than me, and cobble together a philosophical approach that I hope is consistent, even if sometimes unconventional. My only consolation is that I’m not alone. Most, if not all, of the people smarter than me also hold unconventional ideas.
 
Recently, I re-read Robert M. Pirsig’s iconoclastic book, Zen and the Art of Motorcycle Maintenance, which I originally read in the late 70s or early 80s, so within a decade of its publication (1974). It wasn’t how I remembered it, not that I remembered much at all, except it had a huge impact on a lot of people who would never normally read a book that was mostly about philosophy, albeit disguised as a road-trip. I think it keyed into a zeitgeist at the time, where people were questioning everything. You might say that was more the 60s than the 70s, but it was nearly all written in the late 60s, so yes, the same zeitgeist, for those of us who lived through it.
 
Its relevance to this post is that Pirsig had some radical ideas of his own – at least, radical to me and to virtually anyone with a science background. I’ll give you a flavour with some selective quotes. But first some context: the story’s protagonist, whom we assume is Pirsig himself, telling the story in first-person, is having a discussion with his fellow travellers, a husband and wife, who have their own motorcycle (Pirsig is travelling with his teenage son as pillion), so there are 2 motorcycles and 4 companions for at least part of the journey.
 
Pirsig refers to a time (in Western culture) when ghosts were considered a normal part of life. But then he introduces his iconoclastic idea that we have our own ghosts.
 
Modern man has his own ghosts and spirits too, you know.
The laws of physics and logic… the number system… the principle of algebraic substitution. These are ghosts. We just believe in them so thoroughly they seem real.

 
Then he specifically cites the law of gravity, saying provocatively:
 
The law of gravity and gravity itself did not exist before Isaac Newton. No other conclusion makes sense.
And what that means, is that the law of gravity exists nowhere except in people’s heads! It’s a ghost! We are all of us very arrogant and conceited about running down other people’s ghosts but just as ignorant and barbaric and superstitious about our own.
Why does everybody believe in the law of gravity then?
Mass hypnosis. In a very orthodox form known as “education”.

 
He then goes from the specific to the general:
 
Laws of nature are human inventions, like ghosts. Laws of logic, of mathematics are also human inventions, like ghosts. The whole blessed thing is a human invention, including the idea it isn’t a human invention. (His emphasis)
 
And this is philosophy in action: someone challenges one of your deeply held beliefs, which forces you to defend it. Of course, I’ve argued the exact opposite, claiming that ‘in the beginning there was logic’. And it occurred to me right then, that this in itself, is a radical idea, and possibly one that no one else holds. So, one person’s radical idea can be the antithesis of someone else’s radical idea.
 
Then there is this, which I believe holds the key to our disparate points of view:
 
We believe the disembodied 'words' of Sir Isaac Newton were sitting in the middle of nowhere billions of years before he was born and that magically he discovered these words. They were always there, even when they applied to nothing. Gradually the world came into being and then they applied to it. In fact, those words themselves were what formed the world. (again, his emphasis)
 
Note his emphasis on 'words', as if they alone make some phenomenon physically manifest.
 
My response: don’t confuse or conflate the language one uses to describe some physical entity, phenomena or manifestation with what it describes. The natural laws, including gravity, are mathematical in nature, obeying sometimes obtuse and esoteric mathematical relationships, which we have uncovered over eons of time, which doesn’t mean they only came into existence when we discovered them and created the language to describe them. Mathematical notation only exists in the mind, correct, including the number system we adopt, but the mathematical relationships that notation describes, exist independently of mind in the same way that nature’s laws do.
 
John Barrow, cosmologist and Fellow of the Royal Society, made the following point about the mathematical ‘laws’ we formulated to describe the first moments of the Universe’s genesis (Pi in the Sky, 1992).
 
Specifically, he says our mathematical theories describing the first three minutes of the Universe predict specific abundances for the earliest ‘heavier’ elements – deuterium, 2 isotopes of helium and lithium – of 1/1000, 1/1000, 22% and 1/100,000,000 respectively, with the remaining (roughly 78%) being hydrogen. And this has been confirmed by astronomical observations. He then makes the following salient point:



It confirms that the mathematical notions that we employ here and now apply to the state of the Universe during the first three minutes of its expansion history at which time there existed no mathematicians… This offers strong support for the belief that the mathematical properties that are necessary to arrive at a detailed understanding of events during those first few minutes of the early Universe exist independently of the presence of minds to appreciate them.
 
As you can see this effectively repudiates Pirsig’s argument; but to be fair to Pirsig, Barrow wrote this almost 2 decades after Pirsig’s book.
 
In the same vein, Pirsig then goes on to discuss Poincare’s Foundations of Science (which I haven’t read), specifically talking about Euclid’s famous fifth postulate concerning parallel lines never meeting, and how it created problems because it couldn’t be derived from more basic axioms, yet didn’t seem self-evident the way the other axioms did. Euclid himself appears to have been wary of it, avoiding its use for as long as he could (his first 28 propositions don’t depend on it).
 
It was only in the 19th Century, with the advent of Riemann and other non-Euclidean geometries on curved surfaces that this was resolved. According to Pirsig, it led Poincare to question the very nature of axioms.
 
Are they synthetic a priori judgements, as Kant said? That is, do they exist as a fixed part of man’s consciousness, independently of experience and uncreated by experience? Poincare thought not…
Should we therefore conclude that the axioms of geometry are experimental verities? Poincare didn’t think that was so either…
Poincare concluded that the axioms of geometry are conventions, our choice among all possible conventions is guided by experimental facts, but it remains free and is limited only by the necessity of avoiding all contradiction.

 
I have my own view on this, but it’s worth seeing where Pirsig goes with it:
 
Then, having identified the nature of geometric axioms, [Poincare] turned to the question, Is Euclidean geometry true or is Riemann geometry true?
He answered, The question has no meaning.
[One might] as well ask whether the metric system is true and the avoirdupois system is false; whether Cartesian coordinates are true and polar coordinates are false. One geometry can not be more true than another; it can only be more convenient. Geometry is not true, it is advantageous.
 
I think this is a false analogy, because the adoption of a system of measurement (i.e. units) and even the adoption of which base arithmetic one uses (decimal, binary, hexadecimal being the most common) are all conventions.
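The point about conventions can be made concrete with a minimal sketch (in Python, my own illustration, not anything from Pirsig or Poincare): the same quantity rendered in decimal, binary and hexadecimal is still the same quantity; only the notation changes.

```python
# One number under three notational conventions: the quantity is identical,
# only the representation differs.
n = 255
print(bin(n), hex(n), n)   # 0b11111111 0xff 255

# Converting between conventions loses nothing:
assert int('11111111', 2) == int('ff', 16) == 255
```

That is what a genuine convention looks like: any choice works, and translation between choices is lossless.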
 
So why wouldn’t I say the same about axioms? Pirsig and Poincare are right inasmuch as both Euclidean and Riemann geometry are true, because which one applies depends on the topology that one is describing. They are both used to describe physical phenomena. In fact, in a twist that Pirsig probably wasn’t aware of, Einstein used Riemann geometry to describe gravity in a way that Newton could never have envisaged, because Newton only had Euclidean geometry at his disposal. Einstein formulated a mathematical expression of gravity that is dependent on the geometry of spacetime, and it has been empirically verified to explain phenomena that Newton’s theory couldn’t. Of course, there are also limits to what Einstein’s equations can explain, so there are more mathematical laws still to uncover.
 
But where Pirsig states that we adopt the axiom that is convenient, I contend that we adopt the axiom that is necessary, because axioms inherently expand the area of mathematics we are investigating. This is a consequence of Godel’s Incompleteness Theorem, which states that there are limits to what any axiom-based, consistent, formal system of mathematics can prove to be true. Godel himself pointed out that the resolution lies in expanding the system by adopting further axioms. The expansion of Euclidean to non-Euclidean geometry is a case in point. The example I like to give is the adoption of √-1 = i, which gave us complex algebra and the means to mathematically describe quantum mechanics. In both cases, the axioms allowed us to solve problems that had hitherto been impossible to solve. So it’s not just a convenience but a necessity.
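As a small illustration of that last point (my own sketch, using Python’s standard cmath module), adopting i as an axiom lets us solve equations that are literally impossible over the real numbers:

```python
import cmath

# The axiom i**2 = -1 gives us solutions that don't exist among the reals.
root = cmath.sqrt(-1)        # written 1j in Python's notation
print(root ** 2)             # (-1+0j)

# A quadratic with no real solutions, x**2 + 2x + 5 = 0, now has two:
a, b, c = 1, 2, 5
disc = cmath.sqrt(b * b - 4 * a * c)   # sqrt(-16) = 4j
x1 = (-b + disc) / (2 * a)
x2 = (-b - disc) / (2 * a)
print(x1, x2)                # (-1+2j) (-1-2j)
```

Before the axiom, such equations were simply unsolvable; after it, a whole new territory of mathematics opens up, which is the sense in which I call the axiom necessary rather than merely convenient.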
 
I know I’ve belaboured a point, but both of these (non-Euclidean geometry and complex algebra) were at one time radical ideas in the mathematical world that ultimately led to radical ideas (general relativity and quantum mechanics) in the scientific world. Are they ghosts? Perhaps ghost is an apt metaphor, given that they appear timeless and have outlived their discoverers, not to mention the rest of us. Most physicists and mathematicians tacitly believe that they not only continue to exist beyond us, but existed prior to us, and possibly prior to the Universe itself.
 
I will briefly mention another radical idea, which I borrowed from Schrodinger but drew conclusions from that he didn’t formulate: that consciousness exists in a constant present, and hence creates the psychological experience of the flow of time, because everything else becomes the past as soon as it happens. I contend that only consciousness provides the reference point for past, present and future that we all take for granted.

Sunday, 19 May 2024

It all started with Euclid

 I’ve mentioned Euclid before, but this rumination was triggered by a post on Quora that someone wrote about Plato, where they argued, along with another contributor, that Plato is possibly overrated because he got a lot of things wrong, which is true. Nevertheless, as I’ve pointed out in other posts, his Academy was effectively the origin of Western philosophy, science and mathematics. It was actually based on the Pythagorean quadrivium of geometry, arithmetic, astronomy and music.
 
But Plato was also a student and devoted follower of Socrates, and the mentor of Aristotle, who in turn mentored Alexander the Great. So Plato was a pivotal historical figure, and without his writings we probably wouldn’t know anything about Socrates – in the same way that, without Paul, we probably wouldn’t know anything about Jesus. (I’m sure a lot of people would find that debatable, but, if so, it’s a debate for another post.)
 
Anyway, in my own comment (on Quora), I mentioned Euclid, who was the Librarian at Alexandria around 300BC, and thus a product of Plato’s school of thought. Euclid wrote The Elements, which I contend is arguably the most important book written in the history of humankind – more important than any religious text, including the Bible, Homer’s Iliad and the Mahabharata – which, I admit, is quite a claim. It’s generally acknowledged as the most copied text in the secular world. In fact, according to Wikipedia:
 
It was one of the very earliest mathematical works to be printed after the invention of the printing press and has been estimated to be second only to the Bible in the number of editions published since the first printing in 1482.
 
Euclid was revolutionary in one very significant way: he was able to demonstrate what ‘truth’ was, using pure logic, albeit in a very abstract and narrow field of inquiry, which is mathematics.
 
Before then, and in other cultures, truth was transient and subjective and often prescribed by the gods. But Euclid changed all that, and forever. I find it extraordinary that I was examined on Euclid’s theorems in high school in the 20th Century.
 
And this mathematical insight has become, millennia later, a key ingredient (for want of a better term) in the hunt for truths in the physical world. In the 20th Century, in what has become known as the Golden Age of Physics, the marriage between mathematics and scientific inquiry at all scales, from the cosmic to the infinitesimal, has uncovered deeply held secrets of nature that the Pythagoreans, and Euclid for that matter, could never have dreamed of. Look no further than quantum mechanics (QM) and the General Theory of Relativity (GR). Between them, these 2 iconic developments underpin every theory we currently have in physics, and both rely on mathematics that was pivotal to their development from the outset. In other words, without the mathematics of complex algebra and Riemann geometry respectively, these theories would have been stillborn.
 
I like to quote Richard Feynman from his book, The Character of Physical Law, in a chapter titled, The Relation of Mathematics to Physics:
 
…what turns out to be true is that the more we investigate, the more laws we find, and the deeper we penetrate nature, the more this disease persists. Every one of our laws is a purely mathematical statement in rather complex and abstruse mathematics... Why? I have not the slightest idea. It is only my purpose to tell you about this fact.
 
The strange thing about physics is that for the fundamental laws we still need mathematics.
 
Physicists cannot make a conversation in any other language. If you want to learn about nature, to appreciate nature, it is necessary to understand the language that she speaks in. She offers her information only in one form.

 
And this has only become more evident since Feynman wrote those words.
 
There was another revolution in the 20th Century, involving Alan Turing, Alonzo Church and Kurt Godel; this time involving mathematics itself. Basically, each of them independently demonstrated that some mathematical truths elude proof: some mathematical conjectures cannot be proved within the mathematical system from which they arose. Famous candidate examples are Riemann’s Hypothesis, the Goldbach conjecture and the twin-prime conjecture (all involving primes), though strictly speaking no one has shown that any of these is actually unprovable. While most mathematicians believe them to be true, they are yet to be proven. I won’t elaborate on them, as they can easily be looked up.
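To make the flavour of these conjectures concrete, here is a quick empirical check of the Goldbach conjecture in Python (my own sketch; checking cases is, of course, nothing like a proof):

```python
# Goldbach's conjecture: every even number > 2 is the sum of two primes.

def is_prime(n: int) -> bool:
    """Trial division, adequate for small numbers."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def goldbach_pair(n: int):
    """Return a pair of primes summing to even n > 2, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Holds for every even number we try, yet no one can prove it always will.
for n in range(4, 10001, 2):
    assert goldbach_pair(n) is not None

print(goldbach_pair(100))   # (3, 97)
```

This is exactly the predicament Godel’s theorem allows for: a statement that may well be true of every number, while remaining beyond the reach of proof within the system.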
 
But there is more: according to Gregory Chaitin, there are infinitely more incomputable Real numbers than computable Real numbers, which means that most of mathematics is inaccessible to logic.
 
So, when I say it all started with Euclid, I mean that all the technology and infrastructure we take for granted – which allows me to write this so that virtually anyone anywhere in the world can read it – only exists because Euclid was able to derive ‘truths’ that stood for centuries and ultimately led to this.

Sunday, 5 May 2024

Why you need memory to have free will

 This is so obvious once I explain it to you, you’ll wonder why no one else ever mentions it. I’ve pointed out a number of times before that consciousness exists in a constant present, so the time is always ‘now’ for us. I credit Erwin Schrodinger for providing this insight in his lectures, Mind and Matter, appended to his short tome (an oxymoron), What is Life?
 
A logical consequence is that, without memory, you wouldn’t know you’re conscious. And this has actually happened: people have been knocked unconscious, then acted as if they were conscious in order to defend themselves, but have no memory of it. It happened to my father in a boxing ring (I didn’t believe him when he first told me), and it happened to a female security guard (in Sydney), who shot her assailant after he knocked her out. In both cases, they claimed they had no memory of the incident.
 
And, as I’ve pointed out before, this raises a question: if we can survive an attack without being consciously aware of it, then why did evolution select for consciousness? In other words, we could be automatons. The difference is that we have memory.
 
The brain is effectively a memory storage device, without which we would function quite differently. Perhaps this is the real difference between animals and plants. Perhaps plants are sentient, but without memories they can’t ‘think’. There are different types of memory. There is so-called muscle memory, whereby, once we learn a new skill, we don’t have to keep relearning it, and eventually we do it without really thinking about it. Driving a car is an example most of us are familiar with, but it applies to most sports and the playing of musical instruments. I’ve learned that this applies to cognitive skills as well. For example, I write stories, and creating characters is something I do without thinking about it too much.
 
People who suffer from retrograde amnesia (as described by Oliver Sacks in his seminal book, The Man Who Mistook His Wife for a Hat, in the chapter titled, The Lost Mariner) don’t lose their memory of specific skills, or what we call muscle-memory. So you could have muscle-memory and still be an automaton, as I described above.
 
Other types of memory are semantic memory and episodic memory. Semantic memory, which is essential to learning a language, is basically our ability to remember facts, which may or may not require a specific context. Rote learning is just exercising semantic memory, which doesn’t necessarily require a deep understanding of a subject, but that’s another topic.
 
Episodic memory is the one I’m most concerned with here. It’s the ability to recount an event in one’s life – a form of time-travelling we all indulge in from time to time. Unlike a computer memory, it’s not an exact recollection – we reconstruct it – which is why it can change over time and why it doesn’t necessarily agree with someone else’s recollection of the same event. Then there is imagination, which I believe is the key to it all. Apparently, imagination uses the same part of the brain as episodic memory. In effect, we are creating a memory of something that is yet to happen – an attempt to time-travel into the future. And this, I argue, is how free will works.

Philosophers have invented a term called ‘intentionality’, which is not what you might think it is. I’ll give a dictionary definition:
 
The quality of mental states (e.g. thoughts, beliefs, desires, hopes) which consists in their being directed towards some object or state of affairs.
 
Philosophers who write on the topic of consciousness, like Daniel C Dennett and John Searle, like to use the term ‘aboutness’ to describe intentionality, and if you break down the definition I gave above, you might discern what they mean. It’s effectively the ability to direct ‘thoughts… towards some object or state of affairs’. But I see this as either episodic memory or imagination. In other words, the ‘object or state of affairs’ could be historical or yet to happen or pure fantasy. We can imagine events we’ve never experienced, though we may have read or heard about them, and they may not only have happened in another time but also another place – so mental time-travelling.
 
As well as being a memory storage device, the brain is a prediction device – it literally thinks a fraction of a second ahead. I’ve pointed out in another post that the brain creates a model in space and time so we can interact with the real world of space and time, which allows us to survive it. And one of the facets of that model is that it’s actually fractionally ahead of the real world; otherwise we wouldn’t even be able to catch a ball. In other words, it makes predictions that our life depends on. But I contend that this doesn’t need episodic memory or imagination either, because it happens subconsciously and is part of our automaton brain.
 
My point is that the automaton brain, as I’ve coined it, could have evolved by natural selection, without memory. The major difference memory makes is that we become self-aware, and it gives consciousness a role it would otherwise not possess. And that role is what we call free will. I like a definition that philosopher and neuroscientist, Raymond Tallis, gave:
 
Free agents, then, are free because they select between imagined possibilities, and use actualities to bring about one rather than another.
 
So, as I said earlier, I think imagination is key. Free will requires imagination, which I argue is called ‘aboutness’ or ‘intentionality’ in philosophical jargon (though others may differ). And imagination requires episodic memory or mental time-travelling, without which we would all be automatons; still able to interact with the real world of space and time and to acquire skills necessary for survival.
 
And if one goes back to the very beginning of this essay, it is all premised on the observed and experiential phenomenon that consciousness exists in a constant present. We take this for granted, yet nothing else does. Everything becomes the past as soon as it happens, which I keep repeating, is demonstrated every time someone takes a photo. The only exception I can think of is a photon of light, for which time is zero. Our very thoughts become memory as soon as we think them, otherwise we wouldn’t know we exist, yet we could apparently survive without it.
 
Just today, I read a review in New Scientist (27 April 2024) of a book, The Elephant and the Blind: The experience of pure consciousness – philosophy, science and 500+ experiential reports by Thomas Metzinger. Apparently, Metzinger did an ‘online survey of meditators from 57 countries providing over 500 reports for the book.’ Basically, he argues that one can achieve a state that he calls ‘pure consciousness’ whereby the practitioner loses all sense of self. In effect, he argues (according to the reviewer, Alun Anderson):
 
 That a first-person perspective isn’t necessary for consciousness at all: your sense of self, of a continuous “you”, is part of the content of consciousness, not consciousness itself.

 
A provocative and contentious perspective, yet it reminds me of studies, also reported in New Scientist many years ago, using brain-scan imagery, of people experiencing ‘God’ also having a sense of being ‘self-less’, if I can use that term. Personally, I think consciousness is something fundamental, with a possible existence independent of anything physical. It has a physical manifestation, if you like, purely because of memory, because our brains are effectively a storage device for consciousness.
 
This is a radical idea, but it is one I woke up with one day as if it was an epiphany, and realised that it was quite a departure from what I normally think. Raymond Tallis, whom I’ve already mentioned, once made the claim that science can only study objects and phenomena that can be measured. I claim that consciousness can’t be measured, but because we can measure brain waves and neuron activity many people argue that we are measuring consciousness.
 
But here’s the thing: if we didn’t experience consciousness, then scientists would tell us it doesn’t exist in the same way they tell us that free will doesn’t exist. I can make this claim because the same scientists argue that eventually AI will exhibit consciousness while simultaneously telling us that we will know this from the way the AI behaves, not because anyone will be measuring anything.

 

Addendum: I came across this related video by self-described philosopher-physicist, Avshalom Elitzur, who takes a subtly different approach to the same issue, giving examples from the animal kingdom. Towards the end, he talks about specific 'isms' (e.g. physicalism and dualism), but he doesn't mention the one I'm an advocate of, which is a 'loop' - that matter interacts with consciousness, via neurons, and then consciousness interacts with matter, which is necessary for free will.

Basically, he argues that consciousness interacting with matter breaks conservation laws (watch the video) but the brain consumes energy whether it's doing a maths calculation, running around an oval or lying asleep. Running around an oval is arguably consciousness interacting with matter - the same for an animal chasing prey - because one assumes they're based on a conscious decision, which is based on an imagined future, as per my thesis above. Also, processing information uses energy, which is why computers get hot, with no consciousness required. I fail to see what the difference is.

Tuesday, 30 April 2024

Logic rules

I’ve written on this topic before, but a question on Quora made me revisit it.
 
Self-referencing can lead to contradiction or to illumination. It was a recurring theme in Douglas Hofstadter’s Godel Escher Bach, and it’s key to Godel’s famous Incompleteness Theorem, which has far-reaching ramifications for mathematics if not epistemology generally. We can never know everything there is to know, which effectively means there will always be known unknowns and unknown unknowns, with possibly infinitely more of the latter than the former.
 
I recently came across a question on Quora: Will a philosopher typically say that their belief that the phenomenal world "abides by all the laws of logic" is an entailment of those laws being tautologies? Or would they rather consider that belief to be an assumption made outside of logic?

If you’re like me, you might struggle with even understanding this question. But it seems to me to be a question about self-referencing. In other words, my understanding is that it’s postulating, albeit as a question, that a belief in logic requires logic. The alternative being ‘the belief is an assumption made outside of logic’. It’s made more confusing by suggesting that the belief is a tautology because it’s self-referencing.
 
I avoided all that by claiming that logic is fundamental, even to the extent that it transcends the Universe, so it is not a ‘belief’ as such. And you will say that even making that statement is a belief. My response is that logic exists independently of us or any belief system. Basically, I’m arguing that logic is fundamental in that its rules govern the so-called laws of the Universe, which are independent of our cognisance of them, and therefore independent of whether we believe in them or not.
 
I’ve said on previous occasions that logic should be a verb, because it’s something we do, and not just humans, but other creatures, and even machines. But that can’t be completely true if it really does transcend the Universe. My main argument is hypothetical in that, if there is a hypothetical God, then said God also has to obey the rules of logic. God can’t tell us the last digit of pi (it doesn’t exist) and he can’t make a prime number non-prime or vice versa, because they are determined by pure logic, not divine fiat.
 
And now, of course, I’ve introduced mathematics into the equation (pun intended) because mathematics and logic are inseparable, as probably best demonstrated by Godel’s famous theorem. It was Euclid (circa 300BC) who introduced the concept of proof into mathematics, and a linchpin of many mathematical proofs is the fundamental principle of logic that you can’t have a contradiction, including Euclid’s own relatively simple proof that there are infinitely many primes. Back to Godel (or forward 2,300 years, to be more accurate): he effectively proved that there is a distinction between ‘proof’ and ‘truth’ in mathematics, in as much as there will always be mathematical truths that can’t be proven true within a given axiom-based, consistent, mathematical system. In practical terms, you need to keep extending the ‘system’ to formulate more truths into proofs.
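Euclid’s proof is worth sketching, since it shows the no-contradiction principle at work. A small Python illustration (my own, purely for demonstration): given any finite list of primes, the product of the list plus 1 is divisible by none of them, so its prime factors must lie outside the list, contradicting any claim that the list was complete.

```python
from math import prod

def is_prime(n: int) -> bool:
    """Trial division, adequate for small numbers."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Suppose (for contradiction) this finite list contained ALL the primes:
primes = [2, 3, 5, 7, 11, 13]

# Their product plus 1 leaves remainder 1 on division by every prime listed:
n = prod(primes) + 1          # 30031
assert all(n % p != 0 for p in primes)

# n itself need not be prime, but its prime factors are new primes
# outside the list, so the list could not have been complete.
new_factors = [d for d in range(2, n + 1) if n % d == 0 and is_prime(d)]
print(new_factors)            # [59, 509]
```

The argument works for any finite list whatsoever, which is why the conclusion that there are infinitely many primes follows from logic alone.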
 
It’s not a surprise that the ‘laws of the Universe’ that I alluded to above seem to obey mathematical ‘rules’, and in fact it’s only because of our prodigious ability to mine the mathematical landscape that we understand the Universe (at every observable scale) to the extent that we do, including scales that were unimaginable even a century ago.
 
I’ve spoken before about Penrose’s 3 Worlds: Physical, Mental and Platonic; which represent the Universe, consciousness and mathematics respectively. What links them all is logic. The Universe is riddled with paradoxes, yet even paradoxes obey logic, and the deeper we look into the Universe’s secrets the more advanced mathematics we need, just to describe it, let alone understand it. And logic is the means by which humans access mathematics, which closes the loop.
 


 Addendum:
I'd forgotten that I wrote a similar post almost 5 years ago, where, unsurprisingly, I came to much the same conclusion. However, there's no reference to God, and I provide a specific example.

Monday, 22 April 2024

Kant’s 300th Birthday (22nd April)

 I wouldn’t have known this if I hadn’t read about it in Philosophy Now. I confess I’ve only read the first and most famous of his 3 ‘Critiques’, The Critique of Pure Reason. I have to say that I think Kant was the first philosopher I read where I realised that it’s not about trying to convince everyone you’re right (even though that’s effectively the methodology) so much as making people think outside their own box.
 
Kant famously attempted to bridge ‘empiricism’ (a la Hume) with ‘reason’ (a la Leibniz), as both necessary in the pursuit of knowledge. In other words, you can’t rely on just one of these fundamental approaches to epistemology. He also famously categorised them as ‘a posteriori’ and ‘a priori’ respectively, meaning that reason or logic is knowledge gained prior to, or independently of, observation, while empirically derived evidence comes after an observed event (by necessity). Curiously, he categorised space and time as a priori, meaning they were mental states. I’ve quoted this often from The Critique of Pure Reason:
 
But this space and this time, and with them all appearances, are not in themselves things; they are nothing but representations and cannot exist outside our minds.
 
I’ve always fundamentally disagreed with this, but the fact that Kant challenges our intuitively held comprehension of space and time, based on our everyday experience, makes one think more deeply about it, if one wants to present a counter-argument.
 
He’s also famous for coining the term, ‘transcendental idealism’, which is like some exotic taxonomy in the library of philosophical ideas. Again, I’ll quote from the source:

All these faculties have a transcendental (as well as an empirical) employment which concerns the form alone, and is possible a priori.
 
By ‘all these faculties’, he’s talking about our mental faculties to use reason to understand something ‘a priori’. I concluded in an essay I wrote on this very topic, when I studied Kant, that the logical and practical realisation of ‘transcendental idealism’ is mathematics, though I doubt that’s what Kant meant. The fact is that in the intervening 200+ years, epistemology has been dominated by physics, which combines empirical evidence with mathematics in a dialectical relationship, so it’s become impossible to do one without the other. So, in a way, I think Kant foresaw this relationship before it evolved into the profound and materially successful enterprise that we call science.
 
A couple of things I didn’t know. In his early years before he gained tenure, he supplemented his meagre income by private tutoring and hustling at billiards – who would have thought.
 
He also got into trouble with the newly crowned king, Friedrich Wilhelm II, for his critiques of religion. Much earlier, he had published The General Natural History and Theory of the Heavens in 1755, arguing for a purely physical explanation of the Universe’s origins, a good 200 years before it became acceptable. In effect, he was censored, and he didn’t publish anything else on religion until after Friedrich died, whereupon he immediately made up for lost time.