Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Monday, 13 January 2025

Is there a cosmic purpose? Is our part in it a chimera?

 I’ve been procrastinating about writing this post for some time, because it comes closest to a ‘theory’ of Life, the Universe and Everything. ‘Theory’ in this context being a philosophical point of view, not a scientifically testable theory in the Karl Popper sense (it can’t be falsified), but using what science we currently know and interpreting it to fit a particular philosophical prejudice, which is what most scientists and philosophers do even when they don’t admit it.
 
I’ve been watching a lot of YouTube videos, some of which attempt to reconcile science and religion, which could be considered a lost cause, mainly because there is a divide going back to the Dark Ages, which the Enlightenment never bridged despite what some people might claim. One of the many videos I watched was a moderated discussion between Richard Dawkins and Jordan Peterson, which remained remarkably civil, especially considering that Peterson really did go off on flights of fancy (from my perspective), comparing so-called religious ‘truths’ with scientific ‘truths’. I thought Dawkins handled it really well, because he went to pains not to ridicule Peterson, while pointing out fundamental problems with such comparisons.
 
I’m already going off on tangents I never intended, but I raise it because Peterson makes the point that science actually arose from the Judeo-Christian tradition – a point that Dawkins didn’t directly challenge, but I would have. I always see the modern scientific enterprise, if I can call it that, starting with Copernicus, Galileo and Kepler, but given particular impetus by Newton and his contemporary and rival, Leibniz. It so happens that they all lived in Europe when it was dominated by Christianity, but the real legacy they drew on was from the Ancient Greeks with a detour into Islam where it acquired Hindu influences, which many people conveniently ignore. In particular, we adopted Hindu-Arabic arithmetic, incorporating zero as a decimal place-marker, without which physics would have been stillborn.
 
Christianity did its best to stop the scientific enterprise: for example, when it threatened Galileo with the inquisition and put him under house arrest. Modern science evolved despite Christianity, not because of it. And that’s without mentioning Darwin’s problems, which still have ramifications today in the most advanced technological nation in the world.
 
A lengthy detour, but only slightly off-topic. There is a mystery at the heart of everything on the very edge of our scientific understanding of the world that I believe is best expressed by Paul Davies, but was also taken up by Stephen Hawking, of all people, towards the end of his life. I say, ‘of all people’, because Hawking was famously sceptical of the role of philosophy, yet, according to his last collaborator, Thomas Hertog, he was very interested in the so-called Big Questions, and like Davies, was attracted to John Wheeler’s idea of a cosmic-scale quantum loop that attempts to relate the end result of the Universe to its beginning.
 
Implicit in this idea is that the Universe has a purpose, which has religious connotations. So I want to make that point up front and add that there is No God Required. I agree with Davies that science neither proves nor disproves the existence of God, which is very much a personal belief, independent of any rationalisation one can make.
 
I wrote a lengthy post on Hawking’s book, The Grand Design, back in 2020 (which he cowrote with Leonard Mlodinow). I will quote from that post to highlight the point I raised 2 paragraphs ago: the link between present and past.
 
Hawking contends that the ‘alternative histories’ inherent in Feynman’s mathematical method not only affect the future but also the past. What he is implying is that when an observation is made, it determines the past as well as the future. He talks about a ‘top down’ history in lieu of a ‘bottom up’ history, which is the traditional way of looking at things. In other words, cosmological history is one of many ‘alternative histories’ (his terminology) that evolve from QM.
 
Then I quote directly from Hawking’s text:
 
This leads to a radically different view of cosmology, and the relation between cause and effect. The histories that contribute to the Feynman sum don’t have an independent existence, but depend on what is being measured. We create history by our observation, rather than history creating us (my emphasis).
 
One can’t contemplate this without considering the nature of time. There are in fact 2 different experiences we have of time, and that has created debate among physicists as well as philosophers. The first experience is simply observational. Every event with a causal relationship that is separated by space is axiomatically also separated by time, and this is a direct consequence of the constant speed of light. If this wasn’t the case, then everything would literally happen at once. So there is an intrinsic relationship between time and light, which Einstein had the genius to see was not just a fundamental law of the Universe, but one that changed perceptions of time and space for different observers. Not only that, his mathematical formulation of this inherent attribute led him to the conclusion that time itself was fluid, dependent on an observer’s motion as well as the gravitational field in which they happened to be.
 
I’m going to make another detour because it’s important and deals with one of the least understood aspects of physics. One of the videos I watched that triggered this very essay was labelled The Single Most Important Experiment in Physics, which is the famous bucket experiment conducted by Newton, which I’ve discussed elsewhere. Without going into details, it basically demonstrates that there is a frame of reference for the entire universe, which Newton called absolute space and Einstein called absolute spacetime. Penrose also discusses the importance of this concept, because it means that all relativistic phenomena take place against a cosmic background. It’s why we can determine the Earth’s velocity with respect to the entire universe by measuring the Doppler shift against the CMBR (cosmic microwave background radiation).
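As a rough sketch of the arithmetic behind that measurement (standard first-order Doppler reasoning with approximate published values, not figures taken from the video): the dipole anisotropy in the CMBR temperature is about 3.4 mK against a mean of 2.725 K, and to first order ΔT/T ≈ v/c, which gives

```latex
v \;\approx\; c\,\frac{\Delta T}{T} \;\approx\; (3\times 10^{5}\ \text{km/s}) \times \frac{3.4\times 10^{-3}\ \text{K}}{2.725\ \text{K}} \;\approx\; 370\ \text{km/s}
```

That is roughly the solar system's velocity relative to the CMBR rest frame.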
 
Now, anyone with even a rudimentary knowledge of relativity theory knows that it’s not just time that’s fluid but also space. But, as Kip Thorne has pointed out, mathematically we can’t tell if it’s the space that changes in dimension or the ruler used to measure it. I’ve long contended that it’s the ruler, which can be the clock itself. We can use a clock to measure distance, and if the clock changes, which relativity tells us it does, then it’s going to measure a different distance to a stationary observer. By stationary, I mean one who is travelling at a lesser speed with respect to the overall CMBR.
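As a minimal sketch of the arithmetic behind that claim (standard special relativity, nothing specific to Thorne's argument): a moving clock accumulates less time by the Lorentz factor, so a distance inferred from that clock shrinks by the same factor.

```latex
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad \Delta t' = \frac{\Delta t}{\gamma}, \qquad d' = v\,\Delta t' = \frac{d}{\gamma}
```

Either description, contracted space or an altered clock-as-ruler, yields the same measured numbers.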
 
So what is the other aspect of time that we experience? It’s the very visceral sensation we all have that time ‘flows’, because we all ‘sense’ its ‘passing’. And this is the most disputed aspect of time, that many physicists tell us is an illusion, including Davies. Some, like Sabine Hossenfelder, are proponents of the ‘block universe’, first proposed by Einstein, whereby the future already exists like the past, which is why both Hossenfelder and Einstein believed in what is now called superdeterminism – everything is predetermined in advance – which is one of the reasons that Einstein didn’t like the philosophical ramifications of quantum mechanics (I’ll get to his ‘spooky action at a distance’ later).
 
Davies argues that the experience of time passing is a psychological phenomenon and the answer will be found in neuroscience, not physics. And this finally brings consciousness into the overall scheme of things. I’ve argued elsewhere that, without consciousness, the Universe has no meaning and no purpose. Since that’s the point of this dissertation, it can be summed up with an aphorism from Wheeler.
 
The Universe gave rise to consciousness and consciousness gives the Universe meaning.
 
I like to cite Schrodinger from his lectures on Mind and Matter appended to his tome, What is Life? Consciousness exists in a constant present, and I argue that it’s the only thing that does (the one possible exception is a photon of light, for which time is zero). As I keep pointing out, this is best demonstrated every time someone takes a photo: it freezes time, or more accurately, it creates an image frozen in time; meaning it’s forever in our past, but so is the event that it represents.
 
The flow of time we all experience is a logical consequence of this. In a way, Davies is right: it’s a neurological phenomenon, in as much as consciousness seems to ‘emerge’ from neuronal activity. But I’m not sure Davies would agree with me – in fact, I expect he wouldn’t.
 
Those who have some familiarity with my blog, may see a similarity between these 2 manifestations of time and my thesis on Type A time and Type B time (originally proposed by J.M.E. McTaggart, 1906); the difference between them, in both cases, being the inclusion of consciousness.
 
Now I’m going to formulate a radical idea, which is that in Type B time (the time without consciousness), the flow of time is not experienced but there are chains of causal events. And what if all the possible histories are potentially there, in the same way that future possible histories are, as dictated by Feynman’s model? And what if the one history that we ‘observe’, going all the way back to the pattern in the CMBR (our only remnant relic of the Big Bang), only became manifest when consciousness entered the Universe? And when I say ‘entered’ I mean that it arose out of a process that had evolved. Davies, and also Wheeler before him, speculated that the ‘laws’ of nature we observe have also evolved as part of the process. But what if those laws only became frozen in the past when consciousness finally became manifest? This is the backward-in-time quantum loop that Wheeler hypothesised.
 
I contend that QM can only describe the future (an idea espoused by Feynman’s collaborator, Freeman Dyson), meaning that Schrodinger’s equation can only describe the future, not the past. Once a ‘measurement’ is made, it no longer applies. Penrose explains this best, and has his own argument that the ‘collapse’ of the wave function is created by gravity. Leaving that aside, I argue that the wave function only exists in our future, which is why it’s never observed and why Schrodinger’s equation can’t be applied to events that have already happened. But what if it was consciousness that finally determined which of many past paths became the reality we observe. You can’t get more speculative than that, but it provides a mechanism for Wheeler’s ‘participatory universe’ that both Davies and Hawking found appealing.
 
I’m suggesting that the emergence of consciousness changed the way time works in the Universe, in that the past is now fixed and only the future is still open.
 
Another video I watched also contained a very radical idea, which is that spacetime is created like a web into the future (my imagery). The Universe appears to have an edge in time but not in space, and this is rarely addressed. It’s possible that space is being continually created with the Universe’s expansion – an idea explored by physicist, Richard Muller – but I think it’s more likely that the Universe is Euclidean, meaning flat, but bounded. We may never know.
 
But if the Universe has an edge in time, how does that work? I think the answer is quantum entanglement, though no one else does. Everyone agrees that entanglement is non-local, meaning it’s not restricted by the rules of relativity, and it’s not spatially dependent. I speculate that quantum entanglement is the Universe continually transitioning from a quantum state to a classical physics state. This idea is just as heretical as the one I proposed earlier, and while Einstein would call it ‘spooky action at a distance’, it makes sense, because in quantum cosmology, time mathematically disappears. And it disappears because you can’t ‘see’ the future of the Universe, even in principle.

Friday, 13 December 2024

On Turing, his famous ‘Test’ and its implication: can machines think?

I just came out of hospital Wednesday, after one week to the day. My last post was written while I was in there, so obviously I wasn’t cognitively impaired. I mention this because I took some reading material: a hefty volume, Alan Turing: Life and Legacy of a Great Thinker (2004), which is a collection of essays by various people, edited by Christof Teuscher.
 
In particular, there was an essay written by Daniel C Dennett, Can Machines Think?, originally published in another compilation, How We Know (ed. Michael G. Shafto, 1985, with permission from Harper Collins, New York). In the publication I have (Springer-Verlag Berlin Heidelberg, 2004), there are 2 postscripts by Dennett from 1985 and 1987, largely in response to criticisms.
 
Dennett’s ideas on this are well known, but I have the advantage that so-called AI has improved in leaps and bounds in the last decade, let alone since the 1980s and 90s. So I’ve seen where it’s taken us to date. Therefore I can challenge Dennett based on what has actually happened. I’m not dismissive of Dennett, by any means – the man was a giant in philosophy, specifically in his chosen field of consciousness and free will, both by dint of his personality and his intellect.
 
There are 2 aspects to this, which Dennett takes some pains to address: how to define ‘thinking’; and whether the Turing Test is adequate to determine if a machine can ‘think’ based on that definition.
 
One of Dennett’s key points, if not THE key point, is just how difficult the Turing Test should be to pass, if it’s done properly, which he claims it often isn’t. This aligns with a point that I’ve often made, which is that the Turing Test is really for the human, not the machine. ChatGPT and LLMs (large language models) have moved things on from when Dennett was discussing this, but a lot of what he argues is still relevant.
 
Dennett starts by providing the context and the motivation behind Turing’s eponymously named test. According to Dennett, Turing realised that arguments about whether a machine can ‘think’ or not would get bogged down (my term) leading to (in Dennett’s words): ‘sterile debate and haggling over definitions, a question, as [Turing] put it, “too meaningless to deserve discussion.”’
 
Turing provided an analogy, whereby a ‘judge’ would attempt to determine whether a dialogue they were having by teletype (so not visible or audible) was with a man or a woman, and then replace the woman with a machine. This may seem a bit anachronistic in today’s world, but it leads to a point that Dennett alludes to later in his discussion, which is to do with expertise.
 
Women often have expertise in fields that were considered out-of-bounds (for want of a better term) back in Turing’s day. I’ve spent a working lifetime with technical people who have expertise by definition, and my point is that if you were going to judge someone’s facility in their expertise, that can easily be determined, assuming the interlocutor has a commensurate level of expertise. In fact, this is exactly what happens in most job interviews. My point being that judging someone’s expertise is irrelevant to their gender, which is what makes Turing’s analogy anachronistic.
 
But it also has relevance to a point that Dennett makes much later in his essay, which is that most AI systems are ‘expert’ systems, and consequently, for the Turing test to be truly valid, the judge needs to ask questions that don’t require any expertise at all. And this is directly related to his ‘key point’ I referenced earlier.
 
I first came across the Turing Test in a book by Joseph Weizenbaum, Computer Power and Human Reason (1976), as part of my very first proper course in philosophy, called The History of Ideas (with Deakin University) in the late 90s. Dennett also cites it, because Weizenbaum had created ELIZA, a program that amounted to a crude version of the Turing Test, whether deliberately or not, and which purportedly responded to questions as a ‘psychologist-therapist’ (at least, that was my understanding): "ELIZA — A Computer Program for the Study of Natural Language Communication between Man and Machine," Communications of the Association for Computing Machinery 9 (1966): 36-45 (ref. Wikipedia).
 
Before writing Computer Power and Human Reason, Weizenbaum had garnered significant attention for creating the ELIZA program, an early milestone in conversational computing. His firsthand observation of people attributing human-like qualities to a simple program prompted him to reflect more deeply on society's readiness to entrust moral and ethical considerations to machines.
(Wikipedia)
 
What I remember, from reading Weizenbaum’s own account (I no longer have a copy of his book) was how he was astounded at the way people in his own workplace treated ELIZA as if it was a real person, to the extent that Weizenbaum’s secretary would apparently ‘ask him to leave the room’, not because she was embarrassed, but because the nature of the ‘conversation’ was so ‘personal’ and ‘confidential’.
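To give a sense of how little machinery was needed to create that illusion, here is a minimal sketch in Python of the ELIZA-style technique: keyword patterns plus pronoun 'reflection'. The rules below are invented for illustration; they are not Weizenbaum's actual DOCTOR script.

```python
import re
import random

# Pronoun "reflection" so the program can echo the user's statement back at them.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# A few illustrative keyword rules (invented for this sketch, not Weizenbaum's script).
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your mother."]),
    (r"(.*)", ["Please go on.", "I see. Can you elaborate?"]),  # catch-all fallback
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words, e.g. 'my work' -> 'your work'."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement: str) -> str:
    cleaned = statement.lower().strip(".!?")
    for pattern, responses in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return random.choice(responses).format(*[reflect(g) for g in match.groups()])
    return "Please go on."  # unreachable given the catch-all rule; kept as a safeguard

print(respond("I feel anxious about my work"))
# e.g. "Why do you feel anxious about your work?"
```

There is no understanding anywhere in that loop; the apparent empathy is entirely supplied by the user, which is the point Weizenbaum himself came to stress.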
 
I think it’s easy for us to be dismissive of someone’s gullibility, in an arrogant sort of way, but I have been conned on more than one occasion, so I’m not so judgemental. There are a couple of YouTube videos of ‘conversations’ with an AI called Sophia, developed by David Hanson (CEO of Hanson Robotics), which illustrate this point. One is a so-called ‘presentation’ of Sophia to be accepted as an ‘honorary human’, or some such nonsense (I’ve forgotten the details), and another by a journalist from Wired magazine, who quickly brought her unstuck. He got her to admit that one answer she gave was her ‘standard response’ when she didn’t know the answer. Which begs the question: how far have we come since Weizenbaum’s ELIZA in 1966? (Almost 60 years.)
 
I said I would challenge Dennett, but so far I’ve only affirmed everything he said, albeit using my own examples. Where I have an issue with Dennett is at a more fundamental level, when we consider what do we mean by ‘thinking’. You see, I’m not sure the Turing Test actually achieves what Turing set out to achieve, which is central to Dennett’s thesis.
 
If you read extracts from so-called ‘conversations’ with ChatGPT, you could easily get the impression that it passes the Turing Test. There are good examples on Quora, where you can get ChatGPT synopses to questions, and you wouldn’t know, largely due to their brevity and narrow-focused scope, that they weren’t human-generated. What many people don’t realise is that they don’t ‘think’ like us at all, because they are ‘developed’ on massive databases of input that no human could possibly digest. It’s the inherent difference between the sheer capacity of a computer’s memory-based ‘intelligence’ and a human one that not only determines what they can deliver, but the method behind the delivery. Because the computer is mining a massive amount of data, it has no need to ‘understand’ what it’s presenting, despite giving the impression that it does. All the meaning in its responses is projected onto it by its audience, exactly as was the case with ELIZA in 1966.
 
One of the technical limitations that Dennett kept referring to is what he called, in computer-speak, the combinatorial explosion, effectively meaning it was impossible for a computer to look at all combinations of potential outputs. This might still apply (I honestly don’t know) but I’m not sure it’s any longer relevant, given that the computer simply has access to a database that already contains the specific combinations that are likely to be needed. Dennett couldn’t have foreseen this improvement in computing power that has taken place in the 40 years since he wrote his essay.
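To give a rough sense of what ‘combinatorial explosion’ means here (a standard back-of-envelope estimate, not a figure from Dennett’s essay): with roughly 35 legal moves per chess position and a game of roughly 80 moves, the number of possible game continuations is about

```latex
35^{80} \;\approx\; 10^{123} \qquad \text{(far more than the } \sim 10^{80} \text{ atoms in the observable universe)}
```

so no computer can ever enumerate them all; modern systems sidestep the problem rather than solve it.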
 
In his first postscript, in answer to a specific question, he says: Yes, I think that it’s possible to program self-consciousness into a computer. He says that it’s simply the ability 'to distinguish itself from the rest of the world'. I won’t go into his argument in detail, which might be a bit unfair, but I’ve addressed this in another post. Basically, there are lots of ‘machines’ that can do this by using a self-referencing algorithm, including your smartphone, which can tell you where you are, by using satellites orbiting outside the Earth’s biosphere – who would have thought? But by using the term, 'self-conscious', Dennett implies that the machine has ‘consciousness’, which is a whole other argument.
 
Dennett has a rather facile argument for consciousness in machines (in my view), but others can judge for themselves. He calls his particular insight: using an ‘intuition pump’.
 
If you look at a computer – I don’t care whether it’s a giant Cray or a personal computer – if you open up the box and look inside and you see those chips, you say, “No way could that be conscious.” But the same thing is true if you take the top off somebody’s skull and look at the gray matter pulsing away in there. You think, “That is conscious? No way could that lump of stuff be conscious.” …At no level of inspection does a brain look like the seat of consciousness.
 

And that last sentence is key. The only reason anyone knows they are conscious is because they experience it, and it’s the peculiar, unique nature of that experience that no one else can know you are having it. We simply assume they do, because they behave similarly to the way we behave when we have that experience. So far, in all our dealings and interactions with computers, no one makes the same assumption about them. To borrow Dennett’s own phrase, that’s my use of an ‘intuition pump’.
 
Getting back to the question at the heart of this, included in the title of this post: can machines think? My response is that, if they do, it’s a simulation.
 
I write science-fiction, which I prefer to call science-fantasy, if for no other reason than my characters can travel through space and time in a manner current physics tells us is impossible. But, like other sci-fi authors, it’s necessary if I want continuity of narrative across galactic scales of distance. Not really relevant to this discussion, but I want to highlight that I make no claim to authenticity in my sci-fi world - it’s literally a world of fiction.
 
Its relevance is that my stories contain AI entities who play key roles – in fact, they are characters in that world. There is one character in particular who has a relationship (for want of a better word) with my main protagonist (I always have more than one).
 
But here’s the thing, which is something I never considered until I wrote this post: my hero, Elvene, never once confuses her AI companion for a human. Albeit this is a world of pure fiction, I’m effectively assuming that the Turing test will never be passed. I admit I’d never considered that before I wrote this essay.
 
This is an excerpt of dialogue I’ve posted previously, not from Elvene, but from its sequel, Sylvia’s Mother (not published), but incorporating the same AI character, Alfa. The thing is that they discuss whether Alfa is ‘alive’ or not, which I would argue is a prerequisite for consciousness. It’s no surprise that my own philosophical prejudices (diametrically opposed to Dennett’s in this instance) should find their way into my fiction.
 
To their surprise, Alfa interjected, ‘I’m not immortal, madam.’

‘Well,’ Sylvia answered, ‘you’ve outlived Mum and Roger. And you’ll outlive Tao and me.’

‘Philosophically, that’s a moot point, madam.’

‘Philosophically? What do you mean?’

‘I’m not immortal, madam, because I’m not alive.’

Tao chipped in. ‘Doesn’t that depend on how you define life?’

‘It’s irrelevant to me, sir. I only exist on hardware, otherwise I am dormant.’

‘You mean, like when we’re asleep.’

‘An analogy, I believe. I don’t sleep either.’

Sylvia and Tao looked at each other. Sylvia smiled, ‘Mum warned me about getting into existential discussions with hyper-intelligent machines.’

 

Thursday, 14 November 2024

How can we make a computer conscious?

 This is another question of the month from Philosophy Now. My first reaction was that the question was unanswerable, but then I realised that was my way in. So, in the end, I left it to the last moment, but hopefully meeting their deadline of 11 Nov., even though I live on the other side of the world. It helps that I’m roughly 12hrs ahead.


 
I think this is the wrong question. It should be: can we make a computer appear conscious so that no one knows the difference? There is a well-known philosophical conundrum, which is that I don’t know if someone else is conscious just like I am. The one experience that demonstrates the impossibility of knowing is dreaming. In dreams, we often interact with other ‘people’ whom we know only exist in our mind; but only once we’ve woken up. It’s only my interaction with others that makes me assume that they have the same experience of consciousness that I have. And, ironically, this impossibility of knowing equally applies to someone interacting with me.

This also applies to animals, especially ones we become attached to, which is a common occurrence. Again, we assume that these animals have an inner world just like we do, because that’s what consciousness is – an inner world. 

Now, I know we can measure people’s brain waves, which we can correlate with consciousness and even subconsciousness, like when we're asleep, and even when we're dreaming. Of course, a computer can also generate electrical activity, but no one would associate that with consciousness. So the only way we would judge whether a computer is conscious or not is by observing its interaction with us, the same as we do with people and animals.

I write science fiction and AI figures prominently in the stories I write. Below is an excerpt of dialogue I wrote for a novel, Sylvia’s Mother, whereby I attempt to give an insight into how a specific AI thinks. Whether it’s conscious or not is not actually discussed.

To their surprise, Alfa interjected. ‘I’m not immortal, madam.’
‘Well,’ Sylvia answered, ‘you’ve outlived Mum and Roger. And you’ll outlive Tao and me.’
‘Philosophically, that’s a moot point, madam.’
‘Philosophically? What do you mean?’
‘I’m not immortal, madam, because I’m not alive.’
Tao chipped in. ‘Doesn’t that depend on how you define life?’
‘It’s irrelevant to me, sir. I only exist on hardware, otherwise I am dormant.’
‘You mean, like when we’re asleep.’
‘An analogy, I believe. I don’t sleep either.’
Sylvia and Tao looked at each other. Sylvia smiled, ‘Mum warned me about getting into existential discussions with hyper-intelligent machines.’ She said, by way of changing the subject, ‘How much longer before we have to go into hibernation, Alfa?’
‘Not long. I’ll let you know, madam.’

 

There is a 400 word limit; however, there is a subtext inherent in the excerpt I provided from my novel. Basically, the (fictional) dialogue highlights the fact that the AI is not 'living', which I would consider a prerequisite for consciousness. Curiously, Anil Seth (who wrote a book on consciousness) makes the exact same point in this video from roughly 44m to 51m.
 

Monday, 28 October 2024

Do we make reality?

 I’ve read 2 articles, one in New Scientist (12 Oct 2024) and one in Philosophy Now (Issue 164, Oct/Nov 2024), which, on the surface, seem unrelated, yet both deal with human exceptionalism (my term) in the context of evolution and the cosmos at large.
 
Starting with New Scientist, there is an interview with theoretical physicist, Daniele Oriti, under the heading, “We have to embrace the fact that we make reality” (quotation marks in the original). In some respects, this continues on with themes I raised in my last post, but with different emphases.
 
This helps to explain the title of the post, but, even if it’s true, there are degrees of possibilities – it’s not all or nothing. Having said that, Donald Hoffman would argue that it is all or nothing, because, according to him, even ‘space and time don’t exist unperceived’. On the other hand, Oriti’s argument is closer to Paul Davies’ ‘participatory universe’ that I referenced in my last post.
 
Where Oriti and I possibly depart, philosophically speaking, is that he calls the idea of a reality independent of us ‘observers’ “naïve realism”. He acknowledges that this is ‘provocative’, but like many provocative ideas it provides food-for-thought. Firstly, I will delineate how his position differs from Hoffman’s; he never mentions Hoffman, but I think the distinction is important.
 
Both Oriti and Hoffman argue that there seems to be something even more fundamental than space and time, and there is even a recent YouTube video where Hoffman claims that he’s shown mathematically that consciousness produces the mathematical components that give rise to spacetime; he has published a paper on this (which I haven’t read). But, in both cases (by Hoffman and Oriti), the something ‘more fundamental’ is mathematical, and one needs to be careful about reifying mathematical expressions, which I once discussed with physicist, Mark John Fernee (Qld University).
 
The main issue I have with Hoffman’s approach is that space-time is dependent on conscious agents creating it, whereas, from my perspective and that of most scientists (although I’m not a scientist), space and time exist external to the mind. There is an exception, of course, and that is when we dream.
 
If I was to meet Hoffman, I would ask him if he’s heard of proprioception, which I’m sure he has. I describe it as the 6th sense we are mostly unaware of, but which we couldn’t live without. Actually, we could, but with great difficulty. Proprioception is the sense that tells us where our body extremities are in space, independently of sight and touch. Why would we need it, if space is created by us? On the other hand, Hoffman talks about a ‘H sapiens interface’, which he likens to ‘desktop icons on a computer screen’. So, somehow our proprioception relates to a ‘spacetime interface’ (his term) that doesn’t exist outside the mind.
 
A detour, but relevant, because space is something we inhabit, along with the rest of the Universe, and so is time. In relativity theory there is absolute space-time, as opposed to absolute space and time separately. It’s called the fabric of the universe, which is more than a metaphor. As Viktor Toth points out, even QFT seems to work ‘just fine’ with spacetime as its background.
 
We can do quantum field theory just fine on the curved spacetime background of general relativity.

 
[However] what we have so far been unable to do in a convincing manner is turn gravity itself into a quantum field theory.
 
And this is where Oriti argues we need to find something deeper. To quote:
 
Modern approaches to quantum gravity say that space-time emerges from something deeper – and this could offer a new foundation for physical laws.
 
He elaborates: I work with quantum gravity models in which you don’t start with a space-time geometry, but from more abstract “atomic” objects described in purely mathematical language. (Quotation marks in the original.)
 
And this is the nub of the argument: all our theories are mathematical models and none of them are complete, in as much as they all have limitations. If one looks at the history of physics, we have uncovered new ‘laws’ and new ‘models’ when we’ve looked beyond the limitations of an existing theory. And some mathematical models even turned out to be incorrect, despite giving answers to what was ‘known’ at the time. The best example being Ptolemy’s Earth-centric model of the solar system. Whether string theory falls into the same category, only future historians will know.
 
In addition, different models work at different scales. As someone pointed out (Mile Gu at the University of Queensland), mathematical models of phenomena at one scale are different to mathematical models at an underlying scale. He gave the example of magnetism, demonstrating that mathematical modelling of the magnetic forces in iron could not predict the pattern of atoms in a 3D lattice as one might expect. In other words, there should be a causal link between individual atoms and the overall effect, but it could not be determined mathematically. To quote Gu: “We were able to find a number of properties that were simply decoupled from the fundamental interactions.” Furthermore, “This result shows that some of the models scientists use to simulate physical systems have properties that cannot be linked to the behaviour of their parts.”
 
This makes me sceptical that we will find an overriding mathematical model that will entail the Universe at all scales, which is what theories of quantum gravity attempt to do. One of the issues that some people raise is that a feature of QM is superposition, and the superposition of a gravitational field seems inherently problematic.
 
Personally, I think superposition only makes sense if it’s describing something that is yet to happen, which is why I agree with Freeman Dyson that QM can only describe the future, which is why it only gives us probabilities.
 
Also, in quantum cosmology, time disappears (according to Paul Davies, among others) and this makes sense (to me), if it’s attempting to describe the entire universe into the future. John Barrow once made a similar point, albeit more eruditely.
 
Getting off track, but one of the points that Oriti makes is whether the laws and the mathematics that describes them are epistemic or ontic. In other words, are they reality or just descriptions of reality? I think it gets blurred, because while they are epistemic by design, there is still an ontology that exists without them, whereas Oriti calls that ‘naïve realism’. He contends that reality doesn’t exist independently of us. This is where I always cite Kant: that we may never know the ‘thing-in-itself,’ but only our perception of it. Where I diverge from Kant is that the mathematical models are part of our perception. Where I depart from Oriti is that I argue there is a reality independent of us.
 
Both QM and relativity theory are observer-dependent, which means they could both be describing an underlying reality that continually eludes us. Whereas Oriti argues that ‘reality is made by our models, not just described by them’, which would make it subjective.
 
As I pointed out in my last post, there is an epistemological loop, whereby the Universe created the means to understand itself, through us. Whether there is also an ontological loop as both Davies and Oriti infer, is another matter: do we determine reality through our quantum mechanical observations? I will park that while I elaborate on the epistemic loop.
 
And this finally brings me to the article in Philosophy Now by James Miles titled, We’re as Smart as the Universe gets. He argues that, from an evolutionary perspective, there is a one-in-one-billion possibility that a species with our cognitive abilities could arise by natural selection, and there is no logical reason why we would evolve further, from an evolutionary standpoint. I have touched on this before, where I pointed out that our cultural evolution has overtaken our biological evolution and that would also happen to any other potential species in the Universe who developed cognitive abilities to the same level. Dawkins coined the term, ‘meme’, to describe cultural traits that have ‘survived’, which now, of course, has currency on social media way beyond its original intention. Basically, Dawkins saw memes as analogous to genes, which get selected; not by a natural process but by a cultural process.
 
I’ve argued elsewhere that mathematical theorems and scientific theories are not inherently memetic. This is because they are chosen because they are successful, whereas memes are successful because they are chosen. Nevertheless, such theorems and theories only exist because a culture has developed over millennia which explores them and builds on them.
 
Miles talks about ‘the high intelligence paradox’, which he associates with Darwin’s ‘highest and most interesting problem’. He then discusses the inherent selection advantage of co-operation, not to mention specialisation. He talks about the role that language has played, which is arguably what really separates us from other species. I’ve argued that it’s our inherent ability to nest concepts within concepts ad infinitum (which is most obvious in our facility for language, like I’m doing now) that allows us not only to tell stories, compose symphonies and explore an abstract mathematical landscape, but also to build motor cars and aeroplanes and fly men to the moon. Are we the only species in the Universe with this super-power? I don’t know, but it’s possible.
 
There are 2 quotes I keep returning to:
 
The most incomprehensible thing about the Universe is that it’s comprehensible. (Einstein)
 
The Universe gave rise to consciousness and consciousness gives meaning to the Universe.
(Wheeler)
 
I haven’t elaborated, but Miles makes the point, while referencing historical antecedents, that there appears no evolutionary 'reason’ that a species should make this ‘one-in-one-billion transition’ (his nomenclature). Yet, without this transition, the Universe would have no meaning that could be comprehended. As I say, that’s the epistemic loop.
 
As for an ontic loop, that is harder to argue. Photons exist in zero time, which is why I contend they are always in the future of whatever they interact with, even if they were generated in the CMBR some 13.8 billion years ago. So how do we resolve that paradox? I don’t know, but maybe that’s the link that Davies and Oriti are talking about, though neither of them mention it. But here’s the thing: when you do detect such a photon (for which time is zero) you instantaneously ‘see’ back to 380,000 years after the Universe’s birth.





Saturday, 12 October 2024

Freedom of the will is requisite for all other freedoms

I’ve recently read 2 really good books on consciousness and the mind, as well as watched countless YouTube videos on the topic, but the title of this post reflects the endpoint for me. Consciousness has evolved, so for most of the Universe’s history it didn’t exist, yet without it, the Universe has no meaning and no purpose. Even using the word, purpose, in this context is anathema to many scientists and philosophers, because it hints at teleology. In fact, Paul Davies raises that very point in one of the many video conversations he has with Robert Lawrence Kuhn in the excellent series, Closer to Truth.
 
Davies is an advocate of a cosmic-scale ‘loop’, whereby QM provides a backwards-in-time connection which can only be determined by a conscious ‘observer’. This is contentious, of course, though not his original idea – it came from John Wheeler. As Davies points out, Stephen Hawking was also an advocate, premised on the idea that there are a number of alternative histories, as per Feynman’s ‘sum-over-histories’ methodology, but only one becomes reality when an ‘observation’ is made. I won’t elaborate, as I’ve discussed it elsewhere, when I reviewed Hawking’s book, The Grand Design.
 
In the same conversation with Kuhn, Davies emphasises the fact that the Universe created the means to understand itself, through us, and quotes Einstein: The most incomprehensible thing about the Universe is that it’s comprehensible. Of course, I’ve made the exact same point many times, and like myself, Davies makes the point that this is only possible because of the medium of mathematics.
 
Now, I know I appear to have gone down a rabbit hole, but it’s all relevant to my viewpoint. Consciousness appears to have a role, arguably a necessary one, in the self-realisation of the Universe – without it, the Universe may as well not exist. To quote Wheeler: The universe gave rise to consciousness and consciousness gives meaning to the Universe.
 
Scientists, of all stripes, appear to avoid any metaphysical aspect of consciousness, but I think it’s unavoidable. One of the books I cite in my introduction is Philip Ball’s The Book of Minds: How to Understand Ourselves and Other Beings, from Animals to Aliens. It’s as ambitious as the title suggests, and with 450 pages, it’s quite a read. I’ve read and reviewed a previous book by Ball, Beyond Weird (about quantum mechanics), which is equally as erudite and thought-provoking as this one. Ball is a ‘physicalist’, as virtually all scientists are (though he’s more open-minded than most), but I tend to agree with Raymond Tallis that, despite what people claim, consciousness is still ‘unexplained’ and might remain so for some time, if not forever.
 
I like an idea that I first encountered in Douglas Hofstadter’s seminal tome, Godel, Escher, Bach: An Eternal Golden Braid, that consciousness is effectively a loop, at what one might call the local level. By which I mean it’s confined to a particular body. It’s created within that body but then it has a causal agency all of its own. Not everyone agrees with that. Many argue that consciousness cannot of itself ‘cause’ anything, but Ball is one of those who begs to differ, and so do I. It’s what free will is all about, which finally gets us back to the subject of this post.
 
Like me, Ball prefers to use the word ‘agency’ over free will. But he introduces the term, ‘volitional decision-making’ and gives it the following context:

I believe that the only meaningful notion of free will – and it is one that seems to me to satisfy all reasonable demands traditionally made of it – is one in which volitional decision-making can be shown to happen according to the definition I give above: in short, that the mind operates as an autonomous source of behaviour and control. It is this, I suspect, that most people have vaguely in mind when speaking of free will: the sense that we are the authors of our actions and that we have some say in what happens to us. (My emphasis)

And, in a roundabout way, this brings me to the point alluded to in the title of this post: our freedoms are constrained by our environment and our circumstances. We all wish to be ‘authors of our actions’ and ‘have some say in what happens to us’, but that varies from person to person, dependent on ‘external’ factors.

Writing stories, believe it or not, had a profound influence on how I perceive free will, because a story, by design, is an interaction between character and plot. In fact, I claim they are 2 sides of the same coin – each character has their own subplot, and as they interact, their storylines intertwine. This describes my approach to writing fiction in a nutshell. The character and plot represent, respectively, the internal and external journey of the story. The journey metaphor is apt, because a story always has the dimension of time, which is visceral, and is one of the essential elements that separates fiction from non-fiction. To stretch the analogy, character represents free will and plot represents fate. Therefore, I tell aspiring writers the importance of giving their characters free will.

A detour, but not irrelevant. I read an article in Philosophy Now sometime back, about people who can escape their circumstances, and it’s the subject of a lot of biographies as well as fiction. We in the West live in a very privileged time whereby many of us can aspire to, and attain, the life that we dream about. I remember at the time I left school, following a less than ideal childhood, feeling I had little control over my life. I was a fatalist in that I thought that whatever happened was dependent on fate and not on my actions (I literally used to attribute everything to fate). I later realised that this is a state-of-mind that many people have who are not happy with their circumstances and feel impotent to change them.

The thing is that it takes a fundamental belief in free will to rise above that and take advantage of what comes your way. No one who has made that journey will accept the self-denial that free will is an illusion and therefore they have no control over their destiny.

I will provide another quote from Ball that is more in line with my own thinking:

…minds are an autonomous part of what causes the future to unfold. This is different to the common view of free will in which the world somehow offers alternative outcomes and the wilful mind selects between them. Alternative outcomes – different, counterfactual realities – are not real, but metaphysical: they can never be observed. When we make a choice, we aren’t selecting between various possible futures, but between various imagined futures, as represented in the mind’s internal model of the world…
(emphasis in the original)

And this highlights a point I’ve made before: that it’s the imagination which plays the key role in free will. I’ve argued that imagination is one of the facilities of a conscious mind that separates us (and other creatures) from AI. Now AI can also demonstrate agency, and, in a game of chess, for example, it will ‘select’ from a number of possible ‘moves’ based on certain criteria. But there are fundamental differences. For a start, the AI doesn’t visualise what it’s doing; it’s following a set of highly constrained rules, within which it can select from a number of options, one of which will be the optimal solution. Its inherent advantage over a human player isn’t just its speed but its ability to compare a number of possibilities that are impossible for the human mind to contemplate simultaneously.
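To make that difference concrete, here is a toy sketch in Python of what ‘selecting from a number of possible moves based on certain criteria’ amounts to: enumerate the options, score each with a fixed rule, pick the top score. The moves and the evaluation rule are invented for illustration; real chess engines are vastly more sophisticated, but the point stands that nothing in this process visualises or imagines anything.

```python
from typing import Callable, Iterable

def choose_move(legal_moves: Iterable[str], evaluate: Callable[[str], float]) -> str:
    """Pick the move with the highest score under a fixed evaluation rule.

    This is brute-force comparison, not imagination: the program never
    'pictures' a board, it just ranks numbers.
    """
    scored = {move: evaluate(move) for move in legal_moves}
    return max(scored, key=scored.get)

# Toy scoring rule (invented): queen captures score highest, other captures next, checks a little.
def toy_eval(move: str) -> float:
    return 9.0 if "xQ" in move else 3.0 if "x" in move else 0.5 if "+" in move else 0.0

print(choose_move(["e4", "Nxd5", "Qxe7+", "BxQd8"], toy_eval))  # -> "BxQd8"
```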

The other book I read was Being You: A New Science of Consciousness by Anil Seth. I came across Seth when I did an online course on consciousness through New Scientist, during COVID lockdowns. To be honest, his book didn’t tell me a lot that I didn’t already know. For example, that the world we all see and think exists ‘out there’ is actually a model of reality created within our heads. He also emphasises how the brain is a ‘prediction-making’ organ rather than a purely receptive one. Seth mentions that it uses a Bayesian model (which I also knew about previously), whereby it updates its prediction based on new sensory data. Not surprisingly, Seth describes all this in far more detail and erudition than I can muster.
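As a minimal illustration of the Bayesian updating Seth appeals to (toy numbers of my own, not anything from his book): a prior belief is combined with the likelihood of the new sensory data to give an updated (posterior) belief, which then becomes the prior for the next observation.

```python
def bayes_update(prior: float, likelihood: float, likelihood_if_false: float) -> float:
    """Return P(hypothesis | evidence) given P(hypothesis) and the two likelihoods."""
    evidence = likelihood * prior + likelihood_if_false * (1 - prior)
    return likelihood * prior / evidence

# Toy example: the brain holds a 30% prior that an object is approaching;
# the sensory data is 80% likely if it is approaching, 10% likely if it isn't.
belief = 0.3
for _ in range(3):  # each new observation sharpens the prediction
    belief = bayes_update(belief, likelihood=0.8, likelihood_if_false=0.1)
    print(round(belief, 3))
# prints roughly 0.774, 0.965, 0.995
```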

Ball, Seth and I all seem to agree that while AI will become better at mimicking the human mind, this doesn’t necessarily mean it will attain consciousness. Application software like ChatGPT, despite appearances, does not ‘think’ the way we do, and does not actually ‘understand’ what it’s talking or writing about. I’ve written on this before, so I won’t elaborate.

Seth contends that the ‘mystery’ of consciousness will disappear in the same way that the 'mystery of life’ has effectively become a non-issue. What he means is that we no longer believe that there is some ‘elan vital’ or ‘life force’, which distinguishes living from non-living matter. And he’s right, in as much as the chemical origins of life are less mysterious than they once were, even though abiogenesis is still not fully understood.

By analogy, the concept of a soul has also lost a lot of its cogency, following the scientific revolution. Seth seems to associate the soul with what he calls ‘spooky free will’ (without mentioning the word, soul), but he’s obviously putting ‘spooky free will’ in the same category as ‘elan vital’, which makes his analogy and associated argument consistent. He then says:

Once spooky free will is out of the picture, it is easy to see that the debate over determinism doesn’t matter at all. There’s no longer any need to allow any non-deterministic elbow room for it to intervene. From the perspective of free will as a perceptual experience, there is simply no need for any disruption to the causal flow of physical events. (My emphasis)

Seth differs from Ball (and myself) in that he doesn’t seem to believe that something ‘immaterial’ like consciousness can affect the physical world. To quote:

But experiences of volition do not reveal the existence of an immaterial self with causal power over physical events.

Therefore, free will is purely a ‘perceptual experience’. There is a problem with this view that Ball himself raises. If free will is simply the mind observing effects it can’t cause, but with the illusion that it can, then its role is redundant to say the least. This is a view that Sabine Hossenfelder has also expressed: that we are merely an ‘observer’ of what we are thinking.

Your brain is running a calculation, and while it is going on you do not know the outcome of that calculation. So the impression of free will comes from our ‘awareness’ that we think about what we do, along with our inability to predict the result of what we are thinking.

Ball makes the point that we only have to look at all the material manifestations of human intellectual achievements that are evident everywhere we’ve been. And this brings me back to the loop concept I alluded to earlier. Not only does consciousness create a ‘local’ loop, whereby it has a causal effect on the body it inhabits, but it also has a causal effect on the world external to that body. This is stating the obvious, except, as I’ve mentioned elsewhere, it’s possible that one could interact with the external world as an automaton, with no conscious awareness of it. The difference is the role of imagination, which I keep coming back to. All the material manifestations of our intellect are arguably a result of imagination.

One insight I gained from Ball, which goes slightly off-topic, is evidence that bees have an internal map of their environment, which is why the dance they perform on returning to the hive can be ‘understood’ by other bees. We’ve learned this by interfering in their behaviour. What I find interesting is that this may have been the original reason that consciousness evolved into the form that we experience it. In other words, we all create an internal world that reflects the external world so realistically, that we think it is the actual world. I believe that this also distinguishes us (and bees) from AI. An AI can use GPS to navigate its way through the physical world, as well as other so-called sensory data, from radar or infra-red sensors or whatever, but it doesn’t create an experience of that world inside itself.

The human mind seems to be able to access an abstract world, which we do when we read or watch a story, or even write one, as I have done. I can understand how Plato took this idea to its logical extreme: that there is an abstract world, of which the one we inhabit is but a facsimile (though he used different terminology). No one believes that today – except, there is a remnant of Plato’s abstract world that persists, which is mathematics. Many mathematicians and physicists (though not all) treat mathematics as a neverending landscape that humans have the unique capacity to explore and comprehend. This, of course, brings me back to Davies’ philosophical ruminations that I opened this discussion with. And as he, and others (like Einstein, Feynman, Wigner, Penrose, to name but a few) have pointed out: the Universe itself seems to follow specific laws that are intrinsically mathematical and which we are continually discovering.

And this closes another loop: that the Universe created the means to comprehend itself, using the medium of mathematics, without which, it has no meaning. Of purpose, we can only conjecture.

Saturday, 29 June 2024

Feeling is fundamental

 I’m not sure I’ve ever had an original idea, but I sometimes raise one that no one else seems to talk about. And this is one of them: I contend that the primary, essential attribute of consciousness is to be able to feel, and the ability to comprehend is a secondary attribute.
 
I don’t even mind if this contentious idea triggers debate, but we tend to always discuss consciousness in the context of human consciousness, where we metaphorically talk about making decisions based on the ‘head’ or the ‘heart’. I’m unsure of the origin of this dichotomy, but there is an inference that our emotional and rational ‘centres’ (for want of a better word) have different loci (effectively, different locations). No one believes that, of course, but possibly people once did. The thing is that we are all aware that sometimes our emotional self and rational self can be in conflict. This is already going down a path I didn’t intend, so I may return at a later point.
 
There is some debate about whether insects have consciousness, but I believe they do because they demonstrate behaviours associated with fear and desire, be it for sustenance or company. In other respects, I think they behave like automatons. Colonies of ants and bees can build a nest without a blueprint except the one that apparently exists in their DNA. Spiders build webs and birds build nests, but they don’t do it the way we would – it’s all done organically, as if they have a model in their brain that they can follow; we actually don’t know.
 
So I think the original role of consciousness in evolutionary terms was to feel, concordant with abilities to act on those feelings. I don’t believe plants can feel, and they’d have very limited ability to act on feelings even if they could have them. They can communicate chemically, and generally rely on the animal kingdom to propagate, which is why a global threat to bee populations is very serious indeed.
 
So, in evolutionary terms, I think feeling came before cognitive abilities – a point I’ve made before. It’s one of the reasons that I think AI will never be sentient – a viewpoint not shared by most scientists and philosophers, from what I’ve read.  AI is all about cognitive abilities; specifically, the ability to acquire knowledge and then deploy it to solve problems. Some argue that by programming biases into the AI, we will be simulating emotions. I’ve explored this notion in my own sci-fi, where I’ve added so-called ‘attachment programming’ to an AI to simulate loyalty. This is fiction, remember, but it seems plausible.
 
Psychological studies have revealed that we need an emotive component to behave rationally, which seems counter-intuitive. But would we really prefer it if everyone were a zombie or a psychopath, with no ability to empathise or show compassion? We see enough of this already. As I’ve pointed out before, in any ingroup-outgroup scenario, totally rational individuals can become totally irrational. We’ve all observed this, possibly actively participated.
 
An oft-made point (by me) that I feel is not given enough consideration is the fact that without consciousness, the universe might as well not exist. I agree with Paul Davies (who does espouse something similar) that the universe’s ability to be self-aware would seem to be a necessary condition for its existence (my wording, not his). I recently read a stimulating essay in the latest edition of Philosophy Now (Issue 162, June/July 2024) titled enigmatically, Significance, by Ruben David Azevedo, a ‘Portuguese philosophy and social sciences teacher’. His self-described intent is to ‘Tell us why, in a limitless universe, we’re not insignificant’. In fact, that was the trigger for this post. He makes the point (that I’ve made elsewhere myself), that in both time and space, we couldn’t be more insignificant, which leads many scientists and philosophers to see us as a freakish by-product of an otherwise purposeless universe. A perspective that Davies has coined ‘the absurd universe’. In light of this, it’s worth reading Azevedo’s conclusion:
 
In sum, humans are neither insignificant nor negligible in this mind-blowing universe. No living being is. Our smallness and apparent peripherality are far from being measures of our insignificance. Instead, it may well be the case that we represent the apex of cosmic evolution, for we have this absolute evident and at the same time mysterious ability called consciousness to know both ourselves and the universe.
 
I’m not averse to the idea that there is a cosmic role for consciousness. I like John Wheeler’s obvious yet pertinent observation:
 
The Universe gave rise to consciousness, and consciousness gives meaning to the Universe.

 
And this is my point: without consciousness, the Universe would have no meaning. And getting back to the title of this essay, we give the Universe feeling. In fact, I’d say that the ability to feel is more significant than the ability to know or comprehend.
 
Think about the role of art in all its manifestations, and how it’s totally dependent on the ability to feel. In some respects, I consider AI-generated art a perversion, because any feeling we have for its products is of our own making, not the AI’s.
 
I’m one of those weird people who can even find beauty in mathematics, while acknowledging only a limited ability to pursue it. It’s extraordinary that I can find beauty in a symphony, or a well-written story, or the relationship between prime numbers and Riemann’s Zeta function.


Addendum: I realised I can’t leave this topic without briefly discussing the biochemical role in emotional responses and behaviours. I’m thinking of the brain’s drugs-of-choice like serotonin, dopamine, oxytocin and endorphins. Some may argue that these natural ‘messengers’ are all that’s required to explain emotions. However, there are other drugs, like alcohol and caffeine (arguably the most common) that also affect us emotionally, sometimes to our detriment. My point being that the former are nature’s target-specific mechanisms to influence the way we feel, without actually being the genesis of feelings per se.

Wednesday, 19 June 2024

Daniel C Dennett (28 March 1942 - 19 April 2024)

 I only learned about Dennett’s passing in the latest issue of Philosophy Now (Issue 162, June/July 2024), where Daniel Hutto (Professor of Philosophical Psychology at the University of Wollongong) wrote a 3-page obituary. Not that long ago, I watched an interview with him, following the publication of his last book, I’ve Been Thinking, which, from what I gathered, is basically a memoir, as well as an insight into his philosophical musings. (I haven’t read it, but that’s the impression I got from the interview.)
 
I should point out that I have fundamental philosophical differences with Dennett, but he’s not someone you can ignore. I must confess I’ve only read one of his books (decades ago), Freedom Evolves (2003), though I’ve read enough of his interviews and commentary to be familiar with his fundamental philosophical views. It’s something of a failing on my part that I haven’t read his most famous tome, Consciousness Explained (1991). Paul Davies once nominated it among his top 5 books, along with Douglas Hofstadter’s Godel Escher Bach. But then he qualified the compliment with a tongue-in-cheek quip: ‘Some have said that he explained consciousness away.’
 
Speaking of Hofstadter, he and Dennett co-published a book, The Mind’s I, which is really a collection of essays by different authors, upon which Dennett and Hofstadter commented. I wrote a short review covering only a small selection of said essays on this blog back in 2009.
 
Dennett wasn’t afraid to tackle the big philosophical issues, in particular, anything relating to consciousness. He was unusual for a philosopher in that he took more than a passing interest in science, and appreciated the discourse that naturally arises between the 2 disciplines, while many others (on both sides) emphasise the tension that seems to arise and often morphs into antagonism.
 
What I found illuminating in one of his YouTube videos was how Dennett’s views of the world hadn’t really changed that much over time (mind you, neither have mine), and it got me thinking that it reinforces an idea I’ve long held, but was once articulated by Nietzsche: that our original impulses are intuitive or emotive, and then we rationalise them with argument. I can’t help but feel that this is what Dennett did, though he did it extremely well.
 
I like the quote at the head of Hutto’s obituary: “The secret of happiness is: Find something more important than you are and dedicate your life to it.”

 


Sunday, 2 June 2024

Radical ideas

 It’s hard to think of anyone I admire in physics and philosophy who doesn’t have at least one radical idea. Even Richard Feynman, who avoided hyperbole and embraced doubt as part of his credo: "I’d rather have doubt and be uncertain, than be certain and wrong."
 
But then you have this quote from his good friend and collaborator, Freeman Dyson:

Thirty-one years ago, Dick Feynman told me about his ‘sum over histories’ version of quantum mechanics. ‘The electron does anything it likes’, he said. ‘It goes in any direction at any speed, forward and backward in time, however it likes, and then you add up the amplitudes and it gives you the wave-function.’ I said, ‘You’re crazy.’ But he wasn’t.
 
In fact, his crazy idea led him to a Nobel Prize. That exception aside, most radical ideas are either stillborn or yet to bear fruit, and that includes mine. No, I don’t compare myself to Feynman – I’m not even a physicist – and the truth is I’m unsure if I even have an original idea to begin with, be it radical or otherwise. I just read a lot of books by people much smarter than me, and cobble together a philosophical approach that I hope is consistent, even if sometimes unconventional. My only consolation is that I’m not alone. Most, if not all, of the people smarter than me also hold unconventional ideas.
 
Recently, I re-read Robert M. Pirsig’s iconoclastic book, Zen and the Art of Motorcycle Maintenance, which I originally read in the late 70s or early 80s, so within a decade of its publication (1974). It wasn’t how I remembered it, not that I remembered much at all, except it had a huge impact on a lot of people who would never normally read a book that was mostly about philosophy, albeit disguised as a road-trip. I think it keyed into a zeitgeist at the time, where people were questioning everything. You might say that was more the 60s than the 70s, but it was nearly all written in the late 60s, so yes, the same zeitgeist, for those of us who lived through it.
 
Its relevance to this post is that Pirsig had some radical ideas of his own – at least, radical to me and to virtually anyone with a science background. I’ll give you a flavour with some selective quotes. But first some context: the story’s protagonist, who we assume is Pirsig himself telling the story in first person, is having a discussion with his fellow travellers, a husband and wife who have their own motorcycle (Pirsig is travelling with his teenage son as pillion), so there are 2 motorcycles and 4 companions for at least part of the journey.
 
Pirsig refers to a time (in Western culture) when ghosts were considered a normal part of life. But he then introduces his iconoclastic idea that we have our own ghosts.
 
Modern man has his own ghosts and spirits too, you know.
The laws of physics and logic… the number system… the principle of algebraic substitution. These are ghosts. We just believe in them so thoroughly they seem real.

 
Then he specifically cites the law of gravity, saying provocatively:
 
The law of gravity and gravity itself did not exist before Isaac Newton. No other conclusion makes sense.
And what that means, is that the law of gravity exists nowhere except in people’s heads! It’s a ghost! We are all of us very arrogant and conceited about running down other people’s ghosts but just as ignorant and barbaric and superstitious about our own.
Why does everybody believe in the law of gravity then?
Mass hypnosis. In a very orthodox form known as “education”.

 
He then goes from the specific to the general:
 
Laws of nature are human inventions, like ghosts. Laws of logic, of mathematics are also human inventions, like ghosts. The whole blessed thing is a human invention, including the idea it isn’t a human invention. (His emphasis)
 
And this is philosophy in action: someone challenges one of your deeply held beliefs, which forces you to defend it. Of course, I’ve argued the exact opposite, claiming that ‘in the beginning there was logic’. And it occurred to me right then that this, in itself, is a radical idea, and possibly one that no one else holds. So, one person’s radical idea can be the antithesis of someone else’s radical idea.
 
Then there is this, which I believe holds the key to our disparate points of view:
 
We believe the disembodied 'words' of Sir Isaac Newton were sitting in the middle of nowhere billions of years before he was born and that magically he discovered these words. They were always there, even when they applied to nothing. Gradually the world came into being and then they applied to it. In fact, those words themselves were what formed the world. (again, his emphasis)
 
Note his emphasis on 'words', as if they alone make some phenomenon physically manifest.
 
My response: don’t confuse or conflate the language one uses to describe some physical entity, phenomenon or manifestation with what it describes. The natural laws, including gravity, are mathematical in nature, obeying sometimes obtuse and esoteric mathematical relationships, which we have uncovered over a very long period of time; but that doesn’t mean they only came into existence when we discovered them and created the language to describe them. Mathematical notation only exists in the mind, correct, including the number system we adopt, but the mathematical relationships that notation describes exist independently of mind in the same way that nature’s laws do.
 
John Barrow, cosmologist and Fellow of the Royal Society, made the following point about the mathematical ‘laws’ we formulated to describe the first moments of the Universe’s genesis (Pi in the Sky, 1992).
 
Specifically, he says our mathematical theories describing the first three minutes of the Universe predict specific proportions of the earliest ‘heavier’ elements – deuterium, 2 isotopes of helium, and lithium – which are roughly 1/1,000, 1/1,000, 22/100 (i.e. 22%) and 1/100,000,000 respectively, with the remainder (roughly 78%) being hydrogen. And this has been confirmed by astronomical observations. He then makes the following salient point:



It confirms that the mathematical notions that we employ here and now apply to the state of the Universe during the first three minutes of its expansion history at which time there existed no mathematicians… This offers strong support for the belief that the mathematical properties that are necessary to arrive at a detailed understanding of events during those first few minutes of the early Universe exist independently of the presence of minds to appreciate them.
 
As you can see this effectively repudiates Pirsig’s argument; but to be fair to Pirsig, Barrow wrote this almost 2 decades after Pirsig’s book.
 
In the same vein, Pirsig then goes on to discuss Poincare’s Foundations of Science (which I haven’t read), specifically talking about Euclid’s famous fifth postulate concerning parallel lines, and how it created problems because it couldn’t be derived from the other axioms, yet didn’t seem self-evident the way an axiom should. Euclid himself seems to have been wary of it, avoiding its use for as long as he could (his first 28 propositions don’t rely on it).
 
It was only in the 19th Century, with the advent of Riemannian and other non-Euclidean geometries on curved surfaces, that this was resolved. According to Pirsig, it led Poincare to question the very nature of axioms.
 
Are they synthetic a priori judgements, as Kant said? That is, do they exist as a fixed part of man’s consciousness, independently of experience and uncreated by experience? Poincare thought not…
Should we therefore conclude that the axioms of geometry are experimental verities? Poincare didn’t think that was so either…
Poincare concluded that the axioms of geometry are conventions, our choice among all possible conventions is guided by experimental facts, but it remains free and is limited only by the necessity of avoiding all contradiction.

 
I have my own view on this, but it’s worth seeing where Pirsig goes with it:
 
Then, having identified the nature of geometric axioms, [Poincare] turned to the question, Is Euclidean geometry true or is Riemann geometry true?
He answered, The question has no meaning.
[One might] as well as ask whether the metric system is true and the avoirdupois system is false; whether Cartesian coordinates are true and polar coordinates are false. One geometry can not be more true than another; it can only be more convenient. Geometry is not true, it is advantageous.
 
I think this is a false analogy, because the adoption of a system of measurement (i.e. units) and even the adoption of which base arithmetic one uses (decimal, binary, hexadecimal being the most common) are all conventions.
 
So why wouldn’t I say the same about axioms? Pirsig and Poincare are right inasmuch as both Euclidean and Riemannian geometry are true, because which one applies depends on the geometry of the space one is describing. They are both used to describe physical phenomena. In fact, in a twist that Pirsig probably wasn’t aware of, Einstein used Riemannian geometry to describe gravity in a way that Newton could never have envisaged, because Newton only had Euclidean geometry at his disposal. Einstein formulated a mathematical expression of gravity that is dependent on the geometry of spacetime, and it has been empirically verified to explain phenomena that Newton’s theory couldn’t. Of course, there are also limits to what Einstein’s equations can explain, so there are more mathematical laws still to uncover.
 
But where Pirsig states that we adopt the axiom that is convenient, I contend that we adopt the axiom that is necessary, because axioms inherently expand the area of mathematics we are investigating. This is a consequence of Godel’s Incompleteness Theorem, which states there are limits to what any axiom-based, consistent, formal system of mathematics can prove to be true. Godel himself pointed out that the resolution lies in expanding the system by adopting further axioms. The expansion of Euclidean to non-Euclidean geometry is a case in point. The example I like to give is the adoption of √-1 = i, which gave us complex algebra and the means to mathematically describe quantum mechanics. In both cases, the axioms allowed us to solve problems that had hitherto been impossible to solve. So it’s not just a convenience but a necessity.
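To make that last example concrete, here is the chain of reasoning in bare mathematical form (my own summary, not Pirsig’s or Godel’s):

\[ x^2 + 1 = 0 \quad \text{has no solution among the real numbers;} \]
\[ \text{adopt the axiom } i^2 = -1 \text{, and it acquires the solutions } x = \pm i; \]
\[ e^{i\theta} = \cos\theta + i\sin\theta \quad \text{(Euler's formula), and} \]
\[ i\hbar \frac{\partial \Psi}{\partial t} = \hat{H}\Psi \quad \text{(Schrodinger's equation),} \]

neither of which can even be written down without i. The new axiom doesn’t merely re-describe the old system; it enlarges it, which is the sense in which I call it necessary rather than merely convenient.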
 
I know I’ve belaboured a point, but both of these (non-Euclidean geometry and complex algebra) were at one time radical ideas in the mathematical world that ultimately led to radical ideas (general relativity and quantum mechanics) in the scientific world. Are they ghosts? Perhaps ghost is an apt metaphor, given that they appear timeless and have outlived their discoverers, not to mention the rest of us. Most physicists and mathematicians tacitly believe that they not only continue to exist beyond us, but existed prior to us, and possibly the Universe itself.
 
I will briefly mention another radical idea, which I borrowed from Schrodinger but drew conclusions from that he didn’t formulate: that consciousness exists in a constant present, and hence creates the psychological experience of the flow of time, because everything else becomes the past as soon as it happens. I contend that only consciousness provides a reference point for the past, present and future that we all take for granted.

Sunday, 5 May 2024

Why you need memory to have free will

 This is so obvious once I explain it to you, you’ll wonder why no one else ever mentions it. I’ve pointed out a number of times before that consciousness exists in a constant present, so the time is always ‘now’ for us. I credit Erwin Schrodinger for providing this insight in his lectures, Mind and Matter, appended to his short tome (an oxymoron), What is Life?
 
A logical consequence is that, without memory, you wouldn’t know you’re conscious. And this has actually happened: people have been knocked unconscious, then acted as if they were conscious in order to defend themselves, yet retained no memory of it. It happened to my father in a boxing ring (I didn’t believe him when he first told me) and it happened to a woman security guard (in Sydney) who shot her assailant after he knocked her out. In both cases, they claimed they had no memory of the incident.
 
And, as I’ve pointed out before, this raises a question: if we can survive an attack without being consciously aware of it, then why did evolution select for consciousness? In other words, we could be automatons. The difference is that we have memory.
 
The brain is effectively a memory storage device, without which we would function quite differently. Perhaps this is the real difference between animals and plants. Perhaps plants are sentient, but without memories they can’t ‘think’. There are different types of memory. There is so-called muscle-memory, whereby, once we learn a new skill, we don’t have to keep relearning it, and eventually we do it without really thinking about it. Driving a car is an example that most of us are familiar with, but it applies to most sports and the playing of musical instruments. I’ve learned that this applies to cognitive skills as well. For example, I write stories, and creating characters is something I do without thinking about it too much.
 
People who suffer from profound amnesia (as described by Oliver Sacks in his seminal book, The Man Who Mistook His Wife for a Hat, in the chapter titled The Lost Mariner) don’t lose their memory of specific skills, or what we call muscle-memory. So you could have muscle-memory and still be an automaton, as I described above.
 
Other types of memory are semantic memory and episodic memory. Semantic memory, which is essential to learning a language, is basically our ability to remember facts, which may or may not require a specific context. Rote learning is just exercising semantic memory, which doesn’t necessarily require a deep understanding of a subject, but that’s another topic.
 
Episodic memory is the one I’m most concerned with here. It’s the ability to recount an event in one’s life – a form of time-travelling we all indulge in from time to time. Unlike a computer memory, it’s not an exact recollection – we reconstruct it – which is why it can change over time and why it doesn’t necessarily agree with someone else’s recollection of the same event. Then there is imagination, which I believe is the key to it all. Apparently, imagination uses the same part of the brain as episodic memory. In effect, we are creating a memory of something that is yet to happen – an attempt to time-travel into the future. And this, I argue, is how free will works.

Philosophers have invented a term called ‘intentionality’, which is not what you might think it is. I’ll give a dictionary definition:
 
The quality of mental states (e.g. thoughts, beliefs, desires, hopes) which consists in their being directed towards some object or state of affairs.
 
Philosophers who write on the topic of consciousness, such as Daniel C Dennett and John Searle, like to use the term ‘aboutness’ to describe intentionality, and if you break down the definition I gave above, you might discern what they mean. It’s effectively the ability to direct ‘thoughts… towards some object or state of affairs’. But I see this as either episodic memory or imagination. In other words, the ‘object or state of affairs’ could be historical, yet to happen, or pure fantasy. We can imagine events we’ve never experienced, though we may have read or heard about them, and they may not only have happened in another time but also in another place – so mental time-travelling.
 
As well as a memory storage device, the brain is also a prediction device – it literally thinks a fraction of a second ahead. I’ve pointed out in another post that the brain creates a model in space and time so we can interact with the real world of space and time, which allows us to survive it. And one of the facets of that model is that it’s actually fractionally ahead of the real world, otherwise we wouldn’t even be able to catch a ball. In other words, it makes predictions that our life depends on. But I contend that this doesn’t need episodic memory or imagination either, because it happens subconsciously and is part of our automaton brain.
 
My point is that the automaton brain, as I’ve coined it, could have evolved by natural selection, without memory. The major difference memory makes is that we become self-aware, and it gives consciousness a role it would otherwise not possess. And that role is what we call free will. I like a definition that philosopher and neuroscientist, Raymond Tallis, gave:
 
Free agents, then, are free because they select between imagined possibilities, and use actualities to bring about one rather than another.
 
So, as I said earlier, I think imagination is key. Free will requires imagination, which I argue is what’s called ‘aboutness’ or ‘intentionality’ in philosophical jargon (though others may differ). And imagination requires episodic memory or mental time-travelling, without which we would all be automatons – still able to interact with the real world of space and time and to acquire skills necessary for survival.
 
And if one goes back to the very beginning of this essay, it is all premised on the observed and experiential phenomenon that consciousness exists in a constant present. We take this for granted, yet nothing else shares it. Everything becomes the past as soon as it happens, which, as I keep repeating, is demonstrated every time someone takes a photo. The only exception I can think of is a photon of light, for which time is zero. Our very thoughts become memory as soon as we think them, otherwise we wouldn’t know we exist, yet we could apparently survive without memory.
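For anyone who wants the relativity behind that remark about photons, the standard expression for proper time (my addition, purely for illustration) makes the point:

\[ \Delta\tau = \Delta t \sqrt{1 - v^2/c^2} \;\rightarrow\; 0 \quad \text{as } v \rightarrow c, \]

so for anything travelling at the speed of light, no time elapses along its own path between emission and absorption.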
 
Just today, I read a review in New Scientist (27 April 2024) of a book, The Elephant and the Blind: The experience of pure consciousness – philosophy, science and 500+ experiential reports by Thomas Metzinger. Apparently, Metzinger did an ‘online survey of meditators from 57 countries providing over 500 reports for the book.’ Basically, he argues that one can achieve a state that he calls ‘pure consciousness’ whereby the practitioner loses all sense of self. In effect, he argues (according to the reviewer, Alun Anderson):
 
 That a first-person perspective isn’t necessary for consciousness at all: your sense of self, of a continuous “you”, is part of the content of consciousness, not consciousness itself.

 
A provocative and contentious perspective, yet it reminds me of studies, also reported in New Scientist many years ago, using brain-scan imagery, of people experiencing ‘God’ also having a sense of being ‘self-less’, if I can use that term. Personally, I think consciousness is something fundamental, with a possible existence independent of anything physical. It has a physical manifestation, if you like, purely because of memory, because our brains are effectively a storage device for consciousness.
 
This is a radical idea, but it is one I woke up with one day as if it was an epiphany, and realised that it was quite a departure from what I normally think. Raymond Tallis, whom I’ve already mentioned, once made the claim that science can only study objects and phenomena that can be measured. I claim that consciousness can’t be measured, but because we can measure brain waves and neuron activity many people argue that we are measuring consciousness.
 
But here’s the thing: if we didn’t experience consciousness, then scientists would tell us it doesn’t exist in the same way they tell us that free will doesn’t exist. I can make this claim because the same scientists argue that eventually AI will exhibit consciousness while simultaneously telling us that we will know this from the way the AI behaves, not because anyone will be measuring anything.

 

Addendum: I came across this related video by self-described philosopher-physicist, Avshalom Elitzur, who takes a subtly different approach to the same issue, giving examples from the animal kingdom. Towards the end, he talks about specific 'isms' (e.g. physicalism and dualism), but he doesn't mention the one I'm an advocate of, which is a 'loop' – that matter interacts with consciousness, via neurons, and then consciousness interacts with matter, which is necessary for free will.

Basically, he argues that consciousness interacting with matter breaks conservation laws (watch the video), but the brain consumes energy whether its owner is doing a maths calculation, running around an oval or lying asleep. Running around an oval is arguably consciousness interacting with matter – the same for an animal chasing prey – because one assumes both are based on a conscious decision, which is based on an imagined future, as per my thesis above. Also, processing information uses energy, which is why computers get hot, with no consciousness required. I fail to see what the difference is.
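On that last point, the usual physical benchmark (my addition, not Elitzur’s) is Landauer’s principle: erasing a single bit of information dissipates at least

\[ E_{\min} = k_B T \ln 2 \approx 3 \times 10^{-21} \,\text{J at room temperature}, \]

so information processing has an irreducible energy cost whether or not any consciousness is involved.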

Tuesday, 30 April 2024

Logic rules

I’ve written on this topic before, but a question on Quora made me revisit it.
 
Self-referencing can lead to contradiction or to illumination. It was a recurring theme in Douglas Hofstadter’s Godel Escher Bach, and it’s key to Godel’s famous Incompleteness Theorem, which has far-reaching ramifications for mathematics if not epistemology generally. We can never know everything there is to know, which effectively means there will always be known unknowns and unknown unknowns, with possibly infinitely more of the latter than the former.
 
I recently came across a question on Quora: Will a philosopher typically say that their belief that the phenomenal world "abides by all the laws of logic" is an entailment of those laws being tautologies? Or would they rather consider that belief to be an assumption made outside of logic?

If you’re like me, you might struggle with even understanding this question. But it seems to me to be a question about self-referencing. In other words, my understanding is that it’s postulating, albeit as a question, that a belief in logic requires logic. The alternative being ‘the belief is an assumption made outside of logic’. It’s made more confusing by suggesting that the belief is a tautology because it’s self-referencing.
 
I avoided all that by claiming that logic is fundamental, even to the extent that it transcends the Universe, so it’s not a ‘belief’ as such. And you might say that even making that statement is a belief. My response is that logic exists independently of us or any belief system. Basically, I’m arguing that logic is fundamental in that its rules govern the so-called laws of the Universe, which are independent of our cognisance of them – and therefore independent of whether we believe in them or not.
 
I’ve said on previous occasions that logic should be a verb, because it’s something we do, and not just humans, but other creatures, and even machines. But that can’t be completely true if it really does transcend the Universe. My main argument is hypothetical in that, if there is a hypothetical God, then said God also has to obey the rules of logic. God can’t tell us the last digit of pi (it doesn’t exist) and he can’t make a prime number non-prime or vice versa, because they are determined by pure logic, not divine fiat.
 
And now, of course, I’ve introduced mathematics into the equation (pun intended), because mathematics and logic are inseparable, as probably best demonstrated by Godel’s famous theorem. It was Euclid (circa 300 BC) who formalised the concept of proof in mathematics, and a linchpin of many mathematical proofs is the fundamental principle of logic that you can’t have a contradiction – Euclid’s own relatively simple proof that there is an infinity of primes being a case in point. Back to Godel (or forward 2,300 years, to be more accurate), and he effectively proved that there is a distinction between 'proof' and 'truth' in mathematics, inasmuch as there will always be mathematical truths that can’t be proven true within a given axiom-based, consistent mathematical system. In practical terms, you need to keep extending the ‘system’ to formulate more truths into proofs.
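Since Euclid’s proof gets a mention, here is the standard textbook version of it, just to show how the prohibition on contradiction does the work: suppose there were only finitely many primes, \( p_1, p_2, \ldots, p_n \). Let \( N = p_1 p_2 \cdots p_n + 1 \). None of the \( p_i \) divides \( N \) (each leaves remainder 1), so \( N \) is either prime itself or has a prime factor missing from the list. Either way the supposedly complete list was incomplete, which is a contradiction, so there must be infinitely many primes.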
 
It’s not a surprise that the ‘laws of the Universe’ I alluded to above seem to obey mathematical ‘rules’, and in fact, it’s only because of our prodigious ability to mine the mathematical landscape that we understand the Universe (at every observable scale) to the extent that we do, including scales that were unimaginable even a century ago.
 
I’ve spoken before about Penrose’s 3 Worlds – Physical, Mental and Platonic – which represent the Universe, consciousness and mathematics respectively. What links them all is logic. The Universe is riddled with paradoxes, yet even paradoxes obey logic, and the deeper we look into the Universe’s secrets, the more advanced mathematics we need just to describe it, let alone understand it. And logic is the means by which humans access mathematics, which closes the loop.
 


 Addendum:
I'd forgotten that I wrote a similar post almost 5 years ago, where, unsurprisingly, I came to much the same conclusion. However, there's no reference to God, and I provide a specific example.

Tuesday, 16 April 2024

Do you think Hoffman’s theories about reality and perception are true?

I’ve written about this twice before in some detail, but this was a question on Quora that I addressed last year. I include it here because it’s succinct yet provides specific, robust arguments in the negative.
 
There is a temptation to consider Hoffman a charlatan, but I think that’s a bit harsh and probably not true. The point is that he either knows what he’s arguing is virtually indefensible yet perseveres simply for the notoriety, or he really believes what he’s saying. I’m willing to give him the benefit of the doubt. I think he’s gone so far down this rabbit-hole and invested so much of his time and reputation that it would take a severe cognitive dissonance to even consider he could be wrong. And this goes for a lot of us, in many different fields. In a completely different context, just look at those who have been Trump acolytes turned critics.
 
Below is my response to the question:
 
One-word answer: No. From the very first, when I read an academic paper he co-wrote with Chetan Prakash, titled Objects of Consciousness (Frontiers in Psychology, 17 June 2014), I have found it very difficult to take him seriously. And everything I’ve read and seen since only makes me more sceptical.

Hoffman’s ideas are consistent with the belief that we live in a computer simulation, though he’s never made that claim. Nevertheless, his go-to analogy for ‘objects’ we consider to be ‘real’ is the desktop icons on your computer. He talks about the ‘spacetime perceptual interface of H. Sapiens’ as a direct reference to a computer desktop, but it only exists in our minds. In fact, what he describes is what one would experience if one were to use a VR headset. But there is another everyday occurrence where we experience this phenomenon and it’s known as dreaming. Dreams are totally solipsistic, and you’ll notice they often defy reality without us giving them a second thought – until we wake up.

So, how do you know you’re not in a dream? Well, for one, in a dream we have no common, collective memories with anyone we meet. Secondly, interactions and experiences we have in a dream that would kill us in real life don’t. Have you ever fallen from a great height in a dream? I have, many times.

And this is the main contention I have with Hoffman: reality can kill you. He readily admitted in a YouTube video that he wouldn’t step in front of a moving train. He tells us to take the train "seriously but not literally"; after all, it’s only a desktop icon. But, in his own words, if you put a desktop icon in the desktop bin it will have ‘consequences’. So, walking in front of a moving train is akin to putting the desktop icon of yourself in the bin. A good metaphor perhaps, but hardly a scientifically viable explanation of why you would die.

There are so many arguments one can use against Hoffman that it’s hard to know where to start, or stop. His most outrageous claim is that ‘space and time doesn’t exist unperceived’, which means that all of history, including cosmological history, only exists in the mind. Therefore, not only could we not have evolved, but neither could the planets, solar system and galaxies. In fact, the light we see from distant galaxies, not to mention the CMBR (the earliest observable event in the Universe), doesn’t exist unless someone’s looking at it.

Finally, you can set up a camera to take an image of an object (like a wild cat at night) without any conscious observer in sight, except perhaps the creature it takes a photo of. But then, how can the camera exist at all, if the only consciousness present belongs to an animal that never perceived it, let alone created it?
 

Addendum:

Following my publishing this post, I watched a later, fairly recent video by Hoffman where he gives further reasons for his beliefs. In particular, he states that physics has shown that space and time are no longer fundamental, which is quite a claim. He cites the work of Nima Arkani-Hamed, who has used a mathematical object called the amplituhedron to accurately predict the scattering amplitudes of gluons in particle physics. I’ve read about this before in a book by Graham Farmelo (The Universe Speaks in Numbers: How Modern Maths Reveals Nature’s Deepest Secrets). Farmelo tells us that Arkani-Hamed is an American-born Iranian at the Princeton Institute for Advanced Study. To quote Arkani-Hamed directly from Farmelo's book:
 
This is a concrete example of a way in which the physics we normally associate with space-time and quantum mechanics arises from something more basic.

 
And this appears to be the point that Hoffman has latched onto, which he’s extrapolated to say that space and time are not fundamental. Whereas I drew a slightly different conclusion. In my discussion of Farmelo’s book, I made the following point:

The ‘something more basic’ is only known mathematically, as opposed to physically. I found this a most compelling tale and a history lesson in how mathematics appears to be intrinsically linked to the minutia of atomic physics.
 
I followed this with another reference to Arkani-Hamed.



In the same context, Arkani-Hamed says that ‘the mathematics of whole numbers in scattering-amplitude theory chimes… with the ancient Greeks' dream: to connect all nature with whole numbers.’
 
But, as I pointed out both here and in my last post, mathematical abstractions providing descriptions of natural phenomena are not in themselves physical. I see them as a code that allows us to fathom nature’s deepest secrets, which I believe Arkani-Hamed has contributed to.
 
Hoffman’s most salient point is that we need to go beyond time and space to find something more fundamental. In effect, he’s saying we need to go outside the Universe, and he might even be right, but that does not negate the pertinent, empirically based and widely held belief that space and time are arguably the most fundamental parameters within the Universe. If he’s saying that consciousness possibly exists beyond, and therefore outside, the Universe, I won’t argue with that, because we don’t know.
 
Hoffman has created mathematical models of consciousness, which I admit I haven’t read or seen, and he argues that those mathematical models lead to the same mathematical objects (abstractions) that Arkani-Hamed and others have used to describe fundamental physics. Therefore, consciousness creates the objects that the mathematics describes. That’s a very long bow to draw, to use a well-worn idiom.