Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Saturday, 15 June 2024

The negative side of positive thinking

 This was a topic in last week’s New Scientist (8 June 2024) under the heading, The Happiness Trap, an article written by Conor Feehly, a freelance journalist based in Bangkok. Basically, he talks about the plethora of ‘self-help’ books and in particular, the ‘emergence of the positive psychology movement in 1998’. I was surprised he could provide a year, when one would tend to think it was a generational transition. At least, that’s my experience.
 
He then discusses the backlash (my term, not his) that’s occurred since, and mentions a study, ‘published in 2022, [by] an international group of psychologists exploring how societal pressure to be happy affects people in 40 countries’ (my emphasis). He cites Brock Bastian at the University of Melbourne, who was part of the study, “When we are not willing to accept negative emotions as a part of life, this can mean that we may see negative emotions as a sign there is something wrong with us.” And this gets to the nub of the issue.
 
I can’t help but think that there is a generational effect, if not a divide. I see myself as being in between, generationally speaking. My parents lived through the Great Depression and WW2, so they experienced enough negative emotion for all of us. Growing up in rural NSW, we didn’t have much but neither did anyone else, so we didn’t think that was exceptional. There was a lot of negative emotion in our lives as a consequence of the trauma that my Dad experienced as both a wartime serviceman and a prisoner-of-war. It was only much later, as an adult, that I realised this was not the norm. Back then, PTSD wasn’t a term.
 
One of the things that struck me in Feehly’s article was the idea of ‘acceptance’. To quote:
 
Research shows that when people accepted their negative emotions – rather than judging mental experience as good or bad – they become more emotionally resilient, experiencing fewer negative feelings in response to environmental stressors and attaining a greater sense of well-being.
 
He also says in the same context:
 
The good news is that, as we age, we increasingly rely on acceptance – which might help to explain why older people tend to report better emotional well-being.

 
As one of that cohort (older people), I can identify with that sentiment. Acceptance is a multi-faceted word, because one of the unexpected benefits of getting older is that we learn to accept ourselves, becoming less critical and judgemental, and hopefully extending that to others.
 
In our youth, acceptance by one’s peers is a prime driver of self-esteem and associated behaviours, and social media has to a large extent hijacked that impulse, which was also highlighted by Brock Bastian (cited above).
 
I’ve got side-tracked to the extent that this is the antithesis of the so-called ‘positive psychology movement’, possibly because I think my generation largely avoided that trap. We are more likely to see that a ‘think positive’ attitude in the face of all of life’s dilemmas and problems is a delusion. What’s obvious is that negative emotional states have evolutionary value, because they have ancient roots. The other point that’s obvious to me is that we are all addicted to stories, where we vicariously experience negative emotions on a regular basis. In fact, a story that only contained positive emotions would never be read, or watched.
 
What has always been obvious to me, and which I’ve written about before, including in the very early history of this blog, is that we need adversity to gain wisdom. As I keep saying, it’s the theme of virtually every story ever told. When I look back on my early adult years and how seemingly insurmountable they felt, my older self is so grateful I persevered. There is a hypothetical often raised: what advice would you give your younger self? I’d just say, ‘Hang in there, it gets better.’

Sunday, 9 June 2024

More on radical ideas

 As you can tell from the title, this post carries on from the last one, because I got a bit bogged down on one issue, when I really wanted to discuss more. One of the things that prompted me was watching a 1-hour presentation by cosmologist, Claudia de Rham, whom I’ve mentioned before, when I had the pleasure of listening to an on-line lecture she gave, courtesy of New Scientist, during the COVID lockdown.
 
 Claudia’s particular field of study is gravity, and, by her own admission, she has a ‘crazy idea’. Now here’s the thing: I meet a lot of people on Quora and in the blogosphere, who, like me, live (in a virtual sense) on the fringes of knowledge rather than as academic or professional participants. And what I find is that they often have an almost zealous confidence in their ideas. To give one example, I recently came across someone who argued quite adamantly that the Universe is static, not expanding, and has even written a book on the subject. This is contrary to virtually everyone else I’m aware of who works in the field of cosmology and astrophysics. And I can’t help but compare this to Claudia de Rham, who is well aware that her idea is ‘crazy’, even though she’s fully qualified to argue it.
 
In other words, it’s a case of the more you know about a subject, the less you claim to know, because experts are more aware of their limitations than non-experts. I should point out, in case you didn’t already know, I’m not one of the experts.
 
 Specifically, Claudia’s crazy idea is that not only do gravitational waves exist, but so do gravitons, and that gravitons have an extremely tiny amount of mass, which would alter the effect of gravity at very long range. I should say that at present, the evidence is against her, because if she’s right, gravity waves would travel not at the speed of light, as predicted by Einstein, but ever-so-slightly slower.
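To put an illustrative number on how slight that difference would be: a massive graviton would obey the standard relativistic dispersion relation, E² = (pc)² + (mc²)², so a wave of frequency f travels slower than light by a fractional amount of roughly ½(mgc²/hf)². A minimal sketch, where the mass value is simply an assumption near the published observational upper bound (not a figure from de Rham’s talk):

```python
# Fractional speed deficit of a gravitational wave if the graviton has mass,
# assuming the standard relativistic dispersion E^2 = (pc)^2 + (mc^2)^2.
h = 4.135667e-15        # Planck constant, eV·s
m_g = 1e-23             # illustrative graviton mass, eV/c^2 (near the LIGO upper bound)
f = 100.0               # wave frequency, Hz (typical LIGO band)
E = h * f               # energy of one wave quantum, eV
deficit = 0.5 * (m_g / E) ** 2   # 1 - v/c, valid for m_g*c^2 << E
print(f"1 - v/c ≈ {deficit:.1e}")   # prints: 1 - v/c ≈ 2.9e-22
```

A difference of one part in 10²², which is why the idea remains so hard to test.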
 
 Freeman Dyson, by the way, has argued that if gravitons do exist, they would be impossible to detect, but if Claudia is right, then they would be detectable.
 
 In her talk, Claudia also discusses the vacuum energy, which, according to particle physics, should be 28 orders of magnitude greater than the relativistic effect of ‘dark energy’. She calls it ‘the biggest discrepancy in the entire history of science’. This suggests that there is something rotten in the state of theoretical physics, along with the fact that what we can physically observe only accounts for 5% of the Universe.
 
 It should be pointed out that at the end of the 19th Century no one saw or predicted the 2 revolutions in physics that were just around the corner – relativity theory and quantum mechanics. They were examples of what Thomas Kuhn called a paradigm shift, which he expounded in his book, The Structure of Scientific Revolutions. And I’d suggest that these current empirical aberrations in cosmology are harbingers of the next Kuhnian revolution.
 
 Roger Penrose, whom I’ve referenced a number of times on this blog, is someone else with some ‘crazy’ ideas compared to the status quo, for which I admire him even if I don’t agree with him. One of Penrose’s hobby horses is his own particular inference from Godel’s Incompleteness Theorem, which he learned as a graduate (under Steen, at Cambridge) and which he discusses in this video. He argues that it provides evidence that humans don’t think like computers. If one takes the example of Riemann’s Hypothesis (really a conjecture), we know that a computer can’t tell us if it’s true or not (my example, not Penrose’s).* However, most mathematicians believe it is true, and it would be an enormous shock if it was proven untrue, or a counter-example was found by a computer. This is the case with other conjectures that have since been proven true, like Fermat’s Last Theorem and Poincare’s conjecture. Penrose’s point, if I understand him correctly, is that it takes a human mind and not a computer to make this leap into the unknown and grasp a ‘truth’ out of the aether.
 
Anyone who has engaged in some artistic endeavour can identify with this, even if it’s not mathematical truths they are seeking but the key to unravelling a plot in a story.
 
Penrose makes the point in the video that he’s a ‘visual’ person, which he thinks is unusual in his field. Penrose is an excellent artist, by the way, and does all his own graphics. This is something else I can identify with, as I was quite precocious as a very young child at drawing (I could draw in perspective, though no one taught me) even though it never went anywhere.
 
Finally, some crazy ideas of my own. I’ve pointed out on other posts that I have a predilection (for want of a better term) for Kant’s philosophical proposition that we can never know the ‘thing-in-itself’ but only a perception of it.
 
 With this in mind, I contend that this philosophical premise not only applies to what we can physically detect via instruments, but also to what we theoretically infer from the mathematics we use to explore nature. As heretical an idea as it may seem, I argue that mathematics is yet another 'instrument' we use to probe the secrets of the Universe, quantum mechanics and relativity theory being the most obvious examples.
 
As I’ve tried to expound on other posts, relativity theory is observer-dependent, in as much as different observers will both measure and calculate different values of time and space, dependent on their specific frame of reference. I believe this is a pertinent example of Kant’s proposition that the thing-in-itself escapes our perception. In particular, physicists (including Penrose) will tell you that events that are ostensibly simultaneous to us (in a galaxy far, far away) will be perceived as both past and future by 2 observers who are simply crossing a street in opposite directions. I’ve written about this elsewhere as ‘the impossible thought experiment’.
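The effect of those 2 pedestrians can even be put into numbers. To first order in v/c, the relativity of simultaneity shifts the ‘now’ at a distant location by Δt = Lv/c². A quick back-of-the-envelope calculation, using Andromeda as the distant galaxy and an everyday walking speed (both figures are just illustrative assumptions):

```python
# Relativity of simultaneity for two walkers crossing a street in opposite
# directions: the 'now' at a distant galaxy shifts by dt = L*v/c^2.
c = 299_792_458                   # speed of light, m/s
ly = 9.4607e15                    # one light-year in metres
L = 2.5e6 * ly                    # distance to Andromeda, ~2.5 million light-years
v = 1.4                           # typical walking speed, m/s
dt = L * v / c**2                 # shift in the distant 'now', seconds
print(f"Δt ≈ {dt/86400:.1f} days")   # prints: Δt ≈ 4.3 days
```

So 2 people strolling past each other disagree by days about what is happening ‘now’ in Andromeda, despite neither of them being able to observe it.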
 
The fact is that relativity theory rules out the event being observed at all. In other words, simultaneous events can’t be observed (according to relativity). For this reason, virtually all physicists will tell you that simultaneity is an illusion – there is no universal now.
 
But here’s the thing: if there is an edge in either space or time, it can only be observed from outside the Universe. Relativity theory, logically enough, can only tell us what we can observe from within the Universe.
 
 But to extend this crazy idea, what’s stopping the Universe existing within a higher dimension that we can’t perceive? Imagine being a fish: you spend your entire existence in a massive body of water, which is your entire universe. But then one day you are plucked out of that environment and you suddenly become aware that there is another, even bigger universe that exists right alongside yours.
 
There is a tendency for us to think that everything that exists we can learn and know about – it’s what separates us from every other living thing on the planet. But perhaps there are other dimensions, or even worlds, that lie forever beyond our comprehension.


*Footnote: Actually, Penrose in his book, The Emperor’s New Mind, discusses this in depth and at length over a number of chapters. He makes the point that Turing’s ‘proof’ that it’s impossible to predict whether a machine attempting to compute all the Riemann zeros (for example) will stop, is a practical demonstration of the difference between ‘truth’ and ‘proof’ (as Godel’s Incompleteness Theorem tells us). Quite simply, if the theorem is true, the computer will never stop, so it can never be proven algorithmically. It can only be proven (or disproven) if one goes ‘outside the [current] rules’, to use Penrose’s own nomenclature.

Sunday, 2 June 2024

Radical ideas

 It’s hard to think of anyone I admire in physics and philosophy who doesn’t have at least one radical idea. Even Richard Feynman had one, though he avoided hyperbole and embraced doubt as part of his credo: "I’d rather have doubt and be uncertain, than be certain and wrong."
 
But then you have this quote from his good friend and collaborator, Freeman Dyson:

Thirty-one years ago, Dick Feynman told me about his ‘sum over histories’ version of quantum mechanics. ‘The electron does anything it likes’, he said. ‘It goes in any direction at any speed, forward and backward in time, however it likes, and then you add up the amplitudes and it gives you the wave-function.’ I said, ‘You’re crazy.’ But he wasn’t.
 
 In fact, his crazy idea led him to a Nobel Prize. That exception aside, most radical ideas are either stillborn or yet to bear fruit, and that includes mine. No, I don’t compare myself to Feynman – I’m not even a physicist – and the truth is I’m unsure if I even have an original idea to begin with, radical or otherwise. I just read a lot of books by people much smarter than me, and cobble together a philosophical approach that I hope is consistent, even if sometimes unconventional. My only consolation is that I’m not alone: most, if not all, of the people smarter than me also hold unconventional ideas.
 
Recently, I re-read Robert M. Pirsig’s iconoclastic book, Zen and the Art of Motorcycle Maintenance, which I originally read in the late 70s or early 80s, so within a decade of its publication (1974). It wasn’t how I remembered it, not that I remembered much at all, except it had a huge impact on a lot of people who would never normally read a book that was mostly about philosophy, albeit disguised as a road-trip. I think it keyed into a zeitgeist at the time, where people were questioning everything. You might say that was more the 60s than the 70s, but it was nearly all written in the late 60s, so yes, the same zeitgeist, for those of us who lived through it.
 
 Its relevance to this post is that Pirsig had some radical ideas of his own – at least, radical to me and to virtually anyone with a science background. I’ll give you a flavour with some selective quotes. But first some context: the story’s protagonist, who, we assume, is Pirsig himself, telling the story in first-person, is having a discussion with his fellow travellers, a husband and wife, who have their own motorcycle (Pirsig is travelling with his teenage son as pillion), so there are 2 motorcycles and 4 companions for at least part of the journey.
 
 Pirsig refers to a time (in Western culture) when ghosts were considered a normal part of life. But he then introduces his iconoclastic idea that we have our own ghosts.
 
Modern man has his own ghosts and spirits too, you know.
The laws of physics and logic… the number system… the principle of algebraic substitution. These are ghosts. We just believe in them so thoroughly they seem real.

 
Then he specifically cites the law of gravity, saying provocatively:
 
The law of gravity and gravity itself did not exist before Isaac Newton. No other conclusion makes sense.
And what that means, is that the law of gravity exists nowhere except in people’s heads! It’s a ghost! We are all of us very arrogant and conceited about running down other people’s ghosts but just as ignorant and barbaric and superstitious about our own.
Why does everybody believe in the law of gravity then?
Mass hypnosis. In a very orthodox form known as “education”.

 
He then goes from the specific to the general:
 
Laws of nature are human inventions, like ghosts. Laws of logic, of mathematics are also human inventions, like ghosts. The whole blessed thing is a human invention, including the idea it isn’t a human invention. (His emphasis)
 
And this is philosophy in action: someone challenges one of your deeply held beliefs, which forces you to defend it. Of course, I’ve argued the exact opposite, claiming that ‘in the beginning there was logic’. And it occurred to me right then, that this in itself, is a radical idea, and possibly one that no one else holds. So, one person’s radical idea can be the antithesis of someone else’s radical idea.
 
Then there is this, which I believe holds the key to our disparate points of view:
 
We believe the disembodied 'words' of Sir Isaac Newton were sitting in the middle of nowhere billions of years before he was born and that magically he discovered these words. They were always there, even when they applied to nothing. Gradually the world came into being and then they applied to it. In fact, those words themselves were what formed the world. (again, his emphasis)
 
Note his emphasis on 'words', as if they alone make some phenomenon physically manifest.
 
 My response: don’t confuse or conflate the language one uses to describe some physical entity, phenomenon or manifestation with what it describes. The natural laws, including gravity, are mathematical in nature, obeying sometimes obtuse and esoteric mathematical relationships, which we have uncovered over eons, which doesn’t mean they only came into existence when we discovered them and created the language to describe them. Mathematical notation only exists in the mind, correct, including the number system we adopt, but the mathematical relationships that notation describes exist independently of mind, in the same way that nature’s laws do.
 
John Barrow, cosmologist and Fellow of the Royal Society, made the following point about the mathematical ‘laws’ we formulated to describe the first moments of the Universe’s genesis (Pi in the Sky, 1992).
 
 Specifically, he says our mathematical theories describing the first three minutes of the Universe predict specific ratios of the earliest ‘heavier’ elements: deuterium, 2 isotopes of helium and lithium, which are 1/1000, 1/1000, 22/100 and 1/100,000,000 respectively; with the remaining (roughly 78%) being hydrogen. And this has been confirmed by astronomical observations. He then makes the following salient point:



It confirms that the mathematical notions that we employ here and now apply to the state of the Universe during the first three minutes of its expansion history at which time there existed no mathematicians… This offers strong support for the belief that the mathematical properties that are necessary to arrive at a detailed understanding of events during those first few minutes of the early Universe exist independently of the presence of minds to appreciate them.
 
As you can see this effectively repudiates Pirsig’s argument; but to be fair to Pirsig, Barrow wrote this almost 2 decades after Pirsig’s book.
 
In the same vein, Pirsig then goes on to discuss Poincare’s Foundations of Science (which I haven’t read), specifically talking about Euclid’s famous fifth postulate concerning parallel lines never meeting, and how it created problems because it couldn’t be derived from more basic axioms and yet didn’t, of itself, function as an axiom. Euclid himself was aware of this, and never used it as an axiom to prove any of his theorems.
 
 It was only in the 19th Century, with the advent of Riemann’s and other non-Euclidean geometries on curved surfaces, that this was resolved. According to Pirsig, it led Poincare to question the very nature of axioms.
 
Are they synthetic a priori judgements, as Kant said? That is, do they exist as a fixed part of man’s consciousness, independently of experience and uncreated by experience? Poincare thought not…
Should we therefore conclude that the axioms of geometry are experimental verities? Poincare didn’t think that was so either…
Poincare concluded that the axioms of geometry are conventions, our choice among all possible conventions is guided by experimental facts, but it remains free and is limited only by the necessity of avoiding all contradiction.

 
I have my own view on this, but it’s worth seeing where Pirsig goes with it:
 
Then, having identified the nature of geometric axioms, [Poincare] turned to the question, Is Euclidean geometry true or is Riemann geometry true?
He answered, The question has no meaning.
[One might] as well as ask whether the metric system is true and the avoirdupois system is false; whether Cartesian coordinates are true and polar coordinates are false. One geometry can not be more true than another; it can only be more convenient. Geometry is not true, it is advantageous.
 
I think this is a false analogy, because the adoption of a system of measurement (i.e. units) and even the adoption of which base arithmetic one uses (decimal, binary, hexadecimal being the most common) are all conventions.
 
 So why wouldn’t I say the same about axioms? Pirsig and Poincare are right in as much as both Euclidean and Riemann geometry are true because they’re dependent on the topology that one is describing. They are both used to describe physical phenomena. In fact, in a twist that Pirsig probably wasn’t aware of, Einstein used Riemann geometry to describe gravity in a way that Newton could never have envisaged, because Newton only had Euclidean geometry at his disposal. Einstein formulated a mathematical expression of gravity that is dependent on the geometry of spacetime, and has been empirically verified to explain phenomena that Newton couldn’t. Of course, there are also limits to what Einstein’s equations can explain, so there are more mathematical laws still to uncover.
 
 But where Pirsig states that we adopt the axiom that is convenient, I contend that we adopt the axiom that is necessary, because axioms inherently expand the area of mathematics we are investigating. This is a consequence of Godel’s Incompleteness Theorem, which states there are limits to what any axiom-based, consistent, formal system of mathematics can prove to be true. Godel himself pointed out that the resolution lies in expanding the system by adopting further axioms. The expansion of Euclidean to non-Euclidean geometry is a case in point. The example I like to give is the adoption of √-1 = i, which gave us complex algebra and the means to mathematically describe quantum mechanics. In both cases, the axioms allowed us to solve problems that had hitherto been impossible to solve. So it’s not just a convenience but a necessity.
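As a toy illustration of how a new axiom expands the system: adopting i with i² = -1 immediately yields solutions to equations that have no solution over the real numbers, such as x² + 1 = 0, and gives us Euler’s formula, which sits at the heart of the quantum-mechanical wave-function. A sketch using only Python’s standard library:

```python
import cmath

# The axiom i^2 = -1 extends the reals to the complex numbers.
i = complex(0, 1)
assert i ** 2 == -1

# x^2 + 1 = 0 has no real solution, but two complex ones.
roots = [i, -i]
for r in roots:
    assert r ** 2 + 1 == 0

# Euler's formula, e^(i*pi) = -1, central to wave-functions in QM.
print(cmath.exp(1j * cmath.pi))   # ≈ -1 (up to floating-point error)
```

The point is that nothing in real arithmetic forced this move; the axiom was adopted because it was necessary to solve problems the old system couldn’t.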
 
 I know I’ve belaboured a point, but both of these (non-Euclidean geometry and complex algebra) were at one time radical ideas in the mathematical world that ultimately led to radical ideas (general relativity and quantum mechanics) in the scientific world. Are they ghosts? Perhaps ghost is an apt metaphor, given that they appear timeless and have outlived their discoverers, not to mention the rest of us. Most physicists and mathematicians tacitly believe that they not only continue to exist beyond us, but existed prior to us, and possibly the Universe itself.
 
 I will briefly mention another radical idea, which I borrowed from Schrodinger but drew conclusions from that he didn’t formulate: that consciousness exists in a constant present, and hence creates the psychological experience of the flow of time, because everything else becomes the past as soon as it happens. I contend that only consciousness provides a reference point for past, present and future that we all take for granted.

Sunday, 19 May 2024

It all started with Euclid

 I’ve mentioned Euclid before, but this rumination was triggered by a post on Quora that someone wrote about Plato, where they argued, along with another contributor, that Plato is possibly overrated because he got a lot of things wrong, which is true. Nevertheless, as I’ve pointed out in other posts, his Academy was effectively the origin of Western philosophy, science and mathematics. It was actually based on the Pythagorean quadrivium of geometry, arithmetic, astronomy and music.
 
But Plato was also a student and devoted follower of Socrates and the mentor of Aristotle, who in turn mentored Alexander the Great. So Plato was a pivotal historical figure and without his writings, we probably wouldn’t know anything about Socrates. In the same way that, without Paul, we probably wouldn’t know anything about Jesus. (I’m sure a lot of people would find that debatable, but, if so, it’s a debate for another post.)
 
Anyway, I mentioned Euclid in my own comment (on Quora), who was the Librarian at Alexandria around 300BC, and thus a product of Plato’s school of thought. Euclid wrote The Elements, which I contend is arguably the most important book written in the history of humankind – more important than any religious text, including the Bible, Homer’s Iliad and the Mahabharata, which, I admit, is quite a claim. It's generally acknowledged as the most copied text in the secular world. In fact, according to Wikipedia:
 
It was one of the very earliest mathematical works to be printed after the invention of the printing press and has been estimated to be second only to the Bible in the number of editions published since the first printing in 1482.
 
Euclid was revolutionary in one very significant way: he was able to demonstrate what ‘truth’ was, using pure logic, albeit in a very abstract and narrow field of inquiry, which is mathematics.
 
Before then, and in other cultures, truth was transient and subjective and often prescribed by the gods. But Euclid changed all that, and forever. I find it extraordinary that I was examined on Euclid’s theorems in high school in the 20th Century.
 
 And this mathematical insight has become, millennia later, a key ingredient (for want of a better term) in the hunt for truths in the physical world. In the 20th Century, in what has become known as the Golden Age of Physics, the marriage between mathematics and scientific inquiry at all scales, from the cosmic to the infinitesimal, has uncovered deeply held secrets of nature that the Pythagoreans, and Euclid for that matter, could never have dreamed of. Look no further than quantum mechanics (QM) and the General Theory of Relativity (GR). Between them, these 2 iconic developments underpin every theory we currently have in physics, and they both rely on mathematics that was pivotal in their development from the outset. In other words, without the mathematics of complex algebra and Riemann geometry respectively, these theories would have been stillborn.
 
I like to quote Richard Feynman from his book, The Character of Physical Law, in a chapter titled, The Relation of Mathematics to Physics:
 
…what turns out to be true is that the more we investigate, the more laws we find, and the deeper we penetrate nature, the more this disease persists. Every one of our laws is a purely mathematical statement in rather complex and abstruse mathematics... Why? I have not the slightest idea. It is only my purpose to tell you about this fact.
 
The strange thing about physics is that for the fundamental laws we still need mathematics.
 
Physicists cannot make a conversation in any other language. If you want to learn about nature, to appreciate nature, it is necessary to understand the language that she speaks in. She offers her information only in one form.

 
And this has only become more evident since Feynman wrote those words.
 
 There was another revolution in the 20th Century, involving Alan Turing, Alonzo Church and Kurt Godel; this time involving mathematics itself. Basically, each of these men independently demonstrated that some mathematical truths are elusive to proof: some mathematical conjectures cannot be proved within the mathematical system from which they arose. The most famous example would be Riemann’s Hypothesis, involving primes. But the Goldbach conjecture (also involving primes) and the twin-prime conjecture also fit into this category. While most mathematicians believe them to be true, they are yet to be proven. I won’t elaborate on them, as they can easily be looked up.
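The Goldbach conjecture makes the gap between truth and proof tangible: a brute-force search for a counter-example halts only if the conjecture is false, so if it is true (as most mathematicians believe) the unbounded search runs forever without ever settling the matter. A sketch of the finite version:

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_holds(n):
    """True if the even number n is the sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# Check every even number up to a (finite!) bound. The unbounded version of
# this loop halts only on a counter-example, so it could never *prove* the
# conjecture true, however long it runs.
assert all(goldbach_holds(n) for n in range(4, 1000, 2))
```

Every even number ever tested passes, yet no amount of such testing amounts to a proof, which is exactly Penrose’s point about truth outrunning algorithmic proof.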
 
But there is more: according to Gregory Chaitin, there are infinitely more incomputable Real numbers than computable Real numbers, which means that most of mathematics is inaccessible to logic.
 
 So, when I say it all started with Euclid, I mean that all the technology and infrastructure we take for granted – which allows me to write this so that virtually anyone anywhere in the world can read it – only exists because Euclid was able to derive ‘truths’ that stood for centuries and ultimately led to this.

Sunday, 5 May 2024

Why you need memory to have free will

 This is so obvious once I explain it to you, you’ll wonder why no one else ever mentions it. I’ve pointed out a number of times before that consciousness exists in a constant present, so the time is always ‘now’ for us. I credit Erwin Schrodinger for providing this insight in his lectures, Mind and Matter, appended to his short tome (an oxymoron), What is Life?
 
A logical consequence is that, without memory, you wouldn’t know you’re conscious. And this has actually happened, where people have been knocked unconscious, then acted as if they were conscious in order to defend themselves, but have no memory of it. It happened to my father in a boxing ring (I didn’t believe him when he first told me) and it happened to a woman security guard (in Sydney) where she shot her assailant after he knocked her out. In both cases, they claimed they had no memory of the incident.
 
And, as I’ve pointed out before, this begs a question: if we can survive an attack without being consciously aware of it, then why did evolution select for consciousness? In other words, we could be automatons. The difference is that we have memory.
 
The brain is effectively a memory storage device, without which we would function quite differently. Perhaps this is the real difference between animals and plants. Perhaps plants are sentient, but without memories they can’t ‘think’. There are different types of memory. There is so-called muscle-memory, whereby when we learn a new skill we don’t have to keep relearning it, and eventually we do it without really thinking about it. Driving a car is an example that most of us are familiar with, but it applies to most sports and the playing of musical instruments. I’ve learned that this applies to cognitive skills as well. For example, I write stories and creating characters is something I do without thinking about it too much.
 
People who suffer from retrograde amnesia (as described by Oliver Sacks in his seminal book, The Man Who Mistook His Wife for a Hat, in the chapter titled, The Lost Mariner) don’t lose their memory of specific skills, or what we call muscle-memory. So you could have muscle-memory and still be an automaton, as I described above.
 
Other types of memory are semantic memory and episodic memory. Semantic memory, which is essential to learning a language, is basically our ability to remember facts, which may or may not require a specific context. Rote learning is just exercising semantic memory, which doesn’t necessarily require a deep understanding of a subject, but that’s another topic.
 
Episodic memory is the one I’m most concerned with here. It’s the ability to recount an event in one’s life – a form of time-travelling we all indulge in from time to time. Unlike a computer memory, it’s not an exact recollection – we reconstruct it – which is why it can change over time and why it doesn’t necessarily agree with someone else’s recollection of the same event. Then there is imagination, which I believe is the key to it all. Apparently, imagination uses the same part of the brain as episodic memory. In effect, we are creating a memory of something that is yet to happen – an attempt to time-travel into the future. And this, I argue, is how free will works.

Philosophers have invented a term called ‘intentionality’, which is not what you might think it is. I’ll give a dictionary definition:
 
The quality of mental states (e.g. thoughts, beliefs, desires, hopes) which consists in their being directed towards some object or state of affairs.
 
Philosophers who write on the topic of consciousness, like Daniel C Dennett and John Searle, like to use the term ‘aboutness’ to describe intentionality, and if you break down the definition I gave above, you might discern what they mean. It’s effectively the ability to direct ‘thoughts… towards some object or state of affairs’. But I see this as either episodic memory or imagination. In other words, the ‘object or state of affairs’ could be historical or yet to happen or pure fantasy. We can imagine events we’ve never experienced, though we may have read or heard about them, and they may not only have happened in another time but also another place – so mental time-travelling.
 
As well as being a memory storage device, the brain is a prediction device – it literally thinks a fraction of a second ahead. I’ve pointed out in another post that the brain creates a model in space and time so we can interact with the real world of space and time, which allows us to survive it. And one of the facets of that model is that it’s actually minutely ahead of the real world, otherwise we wouldn’t even be able to catch a ball. In other words, it makes predictions that our life depends on. But I contend that this doesn’t need episodic memory or imagination either, because it happens subconsciously and is part of our automaton brain.
 
My point is that the automaton brain, as I’ve coined it, could have evolved by natural selection without episodic memory. The major difference such memory makes is that we become self-aware, and it gives consciousness a role it would otherwise not possess. And that role is what we call free will. I like a definition that philosopher and neuroscientist Raymond Tallis gave:
 
Free agents, then, are free because they select between imagined possibilities, and use actualities to bring about one rather than another.
 
So, as I said earlier, I think imagination is key. Free will requires imagination, which I argue is what’s called ‘aboutness’ or ‘intentionality’ in philosophical jargon (though others may differ). And imagination requires episodic memory, or mental time-travelling, without which we would all be automatons: still able to interact with the real world of space and time and to acquire skills necessary for survival, but nothing more.
 
And if one goes back to the very beginning of this essay, it is all premised on the observed and experiential phenomenon that consciousness exists in a constant present. We take this for granted, yet nothing else does. Everything becomes the past as soon as it happens, which, as I keep repeating, is demonstrated every time someone takes a photo. The only exception I can think of is a photon of light, for which time is zero. Our very thoughts become memory as soon as we think them, otherwise we wouldn’t know we exist, yet we could apparently survive without it.
 
Just today, I read a review in New Scientist (27 April 2024) of a book, The Elephant and the Blind: The experience of pure consciousness – philosophy, science and 500+ experiential reports by Thomas Metzinger. Apparently, Metzinger did an ‘online survey of meditators from 57 countries providing over 500 reports for the book.’ Basically, he argues that one can achieve a state that he calls ‘pure consciousness’ whereby the practitioner loses all sense of self. In effect, he argues (according to the reviewer, Alun Anderson):
 
 That a first-person perspective isn’t necessary for consciousness at all: your sense of self, of a continuous “you”, is part of the content of consciousness, not consciousness itself.
 
A provocative and contentious perspective, yet it reminds me of studies, also reported in New Scientist many years ago, using brain-scan imagery, of people experiencing ‘God’ also having a sense of being ‘self-less’, if I can use that term. Personally, I think consciousness is something fundamental, with a possible existence independent of anything physical. It has a physical manifestation, if you like, purely because of memory, because our brains are effectively a storage device for consciousness.
 
This is a radical idea, but it is one I woke up with one day as if it were an epiphany, and realised it was quite a departure from what I normally think. Raymond Tallis, whom I’ve already mentioned, once made the claim that science can only study objects and phenomena that can be measured. I claim that consciousness can’t be measured, but because we can measure brain waves and neuron activity, many people argue that we are measuring consciousness.
 
But here’s the thing: if we didn’t experience consciousness, then scientists would tell us it doesn’t exist in the same way they tell us that free will doesn’t exist. I can make this claim because the same scientists argue that eventually AI will exhibit consciousness while simultaneously telling us that we will know this from the way the AI behaves, not because anyone will be measuring anything.
 
Addendum: I came across this related video by self-described philosopher-physicist, Avshalom Elitzur, who takes a subtly different approach to the same issue, giving examples from the animal kingdom. Towards the end, he talks about specific 'isms' (e.g. physicalism and dualism), but he doesn't mention the one I'm an advocate of, which is a 'loop' - that matter interacts with consciousness, via neurons, and then consciousness interacts with matter, which is necessary for free will.

Basically, he argues that consciousness interacting with matter breaks conservation laws (watch the video), but the brain consumes energy whether it's doing a maths calculation, running around an oval or lying asleep. Running around an oval is arguably consciousness interacting with matter - the same for an animal chasing prey - because one assumes they're based on a conscious decision, which is based on an imagined future, as per my thesis above. Also, processing information uses energy, which is why computers get hot, with no consciousness required. I fail to see what the difference is.

Tuesday, 30 April 2024

Logic rules

I’ve written on this topic before, but a question on Quora made me revisit it.
 
Self-referencing can lead to contradiction or to illumination. It was a recurring theme in Douglas Hofstadter’s Godel, Escher, Bach, and it’s key to Godel’s famous Incompleteness Theorem, which has far-reaching ramifications for mathematics, if not epistemology generally. We can never know everything there is to know, which effectively means there will always be known unknowns and unknown unknowns, with possibly infinitely more of the latter than the former.
 
I recently came across a question on Quora: Will a philosopher typically say that their belief that the phenomenal world "abides by all the laws of logic" is an entailment of those laws being tautologies? Or would they rather consider that belief to be an assumption made outside of logic?

If you’re like me, you might struggle with even understanding this question. But it seems to me to be a question about self-referencing. In other words, my understanding is that it’s postulating, albeit as a question, that a belief in logic requires logic. The alternative being ‘the belief is an assumption made outside of logic’. It’s made more confusing by suggesting that the belief is a tautology because it’s self-referencing.
 
I avoided all that by claiming that logic is fundamental, even to the extent that it transcends the Universe, so it’s not a ‘belief’ as such. And you might say that even making that statement is a belief. My response is that logic exists independently of us or any belief system. Basically, I’m arguing that logic is fundamental in that its rules govern the so-called laws of the Universe, which are independent of our cognisance of them, and therefore independent of whether we believe in them or not.
 
I’ve said on previous occasions that logic should be a verb, because it’s something we do, and not just humans, but other creatures, and even machines. But that can’t be completely true if it really does transcend the Universe. My main argument is hypothetical in that, if there is a hypothetical God, then said God also has to obey the rules of logic. God can’t tell us the last digit of pi (it doesn’t exist) and he can’t make a prime number non-prime or vice versa, because they are determined by pure logic, not divine fiat.
 
And now, of course, I’ve introduced mathematics into the equation (pun intended), because mathematics and logic are inseparable, as probably best demonstrated by Godel’s famous theorem. It was Euclid (circa 300 BC) who introduced the concept of proof into mathematics, and a linchpin of many mathematical proofs is the fundamental principle of logic that you can’t have a contradiction, including Euclid’s own relatively simple proof that there are an infinity of primes. Back to Godel (or forward 2,300 years, to be more accurate), who effectively proved that there is a distinction between 'proof' and 'truth' in mathematics, inasmuch as there will always be mathematical truths that can’t be proven true within a given axiom-based, consistent mathematical system. In practical terms, you need to keep extending the ‘system’ to formulate more truths into proofs.
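Euclid’s proof-by-contradiction is simple enough to make concrete: assume some finite list contains every prime, multiply its members together and add one; any prime factor of that number leaves a remainder of 1 when divided by each prime on the list, so it can’t be on the list – a contradiction. Here is a small sketch of my own (the function names are mine, not from any source I’ve quoted) that plays out the argument:

```python
# Euclid's argument made concrete: from any finite list of primes,
# construct a prime that is guaranteed not to be on the list.

def smallest_prime_factor(n):
    """Return the smallest prime factor of n (for n >= 2) by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n has no factor up to its square root, so n itself is prime

def prime_outside(primes):
    """Given a finite list of primes, produce a prime not in the list."""
    product_plus_one = 1
    for p in primes:
        product_plus_one *= p
    product_plus_one += 1
    # Dividing product_plus_one by any prime on the list leaves remainder 1,
    # so its smallest prime factor cannot be one of them.
    return smallest_prime_factor(product_plus_one)

print(prime_outside([2, 3, 5, 7]))  # 2*3*5*7 + 1 = 211, which is itself prime
```

Note that the product plus one needn’t be prime itself: for [2, 3, 5, 7, 11, 13] it is 30031 = 59 × 509, and the argument delivers the new prime 59. Either way, no finite list can contain every prime.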
 
It's not a surprise that the ‘laws of the Universe’ that I alluded to above seem to obey mathematical ‘rules’, and in fact it’s only because of our prodigious abilities to mine the mathematical landscape that we understand the Universe (at every observable scale) to the extent that we do, including scales that were unimaginable even a century ago.
 
I’ve spoken before about Penrose’s 3 Worlds: Physical, Mental and Platonic; which represent the Universe, consciousness and mathematics respectively. What links them all is logic. The Universe is riddled with paradoxes, yet even paradoxes obey logic, and the deeper we look into the Universe’s secrets the more advanced mathematics we need, just to describe it, let alone understand it. And logic is the means by which humans access mathematics, which closes the loop.
 


Addendum:
I'd forgotten that I wrote a similar post almost 5 years ago, where, unsurprisingly, I came to much the same conclusion. However, there's no reference to God, and I provide a specific example.