Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.
Showing posts with label Being. Show all posts

Wednesday 24 January 2024

Can AI have free will?

This is a question I’ve never seen asked, let alone answered. I think there are good reasons for that, which I’ll come to later.
 
The latest issue of Philosophy Now (Issue 159, Dec 2023/Jan 2024), which I’ve already referred to in 2 previous posts, has as its theme (they always have a theme), Freewill Versus Determinism. I’ll concentrate on an article by the Editor, Grant Bartley, titled What Is Free Will? That’s partly because he and I have similar views on the topic, and partly because reading the article led me to ask the question at the head of this post (I should point out that he never mentions AI).
 
It's a lengthy article, meaning I won’t be able to fully do it justice, or even cover all aspects that he discusses. For instance, towards the end, he posits a personal ‘pet’ theory that there is a quantum aspect to the internal choice we make in our minds. And he even provides a link to videos he’s made on this topic. I mention this in passing, and will make 2 comments: one, I also have ‘pet’ theories, so I can’t dismiss him out-of-hand; and two, I haven’t watched the videos, so I can’t comment on the theory’s plausibility.
 
He starts with an attempt to define what we mean by free will, and what it doesn’t mean. For instance, he differentiates between subconscious choices, which he calls ‘impulses’, and free will, which requires a conscious choice. He also distinguishes what he calls ‘making a decision’ from choice. I will quote him directly, as I still see this as involving free will, if it’s based on making a ‘decision’ from alternative possibilities (as he explains).
 
…sometimes, our decision-making is a choice, that is, mentally deciding between alternative possibilities present to your awareness. But your mind doesn’t always explicitly present you with multiple choices from which to choose. Sometimes no distinct options are present in your awareness, and you must cause your next contents of your mind on the basis of the present content, through intuition and imagination. This is not choice so much as making a decision. (My emphasis)
 
This is worth a detour, because I see what he’s describing in this passage as the process I experience when writing fiction, which is ‘creating’. In this case, some of the content, if not all of it, is subconscious. When you write a story, it feels to you (but no one else) that the characters are real and the story you’re telling already exists. Nevertheless, I still think there’s an element of free will, because you make choices and judgements about what your imagination presents to your consciousness. As I said, this is a detour.
 
I don’t think this is what he’s referring to, and I’ll come back to it later when I introduce AI into the discussion. Meanwhile, I’ll discuss what I think is the nub of his thesis and my own perspective, which is the apparent dependency between consciousness and free will.
 
If conscious causation is not real, why did consciousness evolve at all? What would be the function of awareness if it can’t change behaviour? How could an impotent awareness evolve if it cannot change what the brain’s going to do to help the human body or its genes survive?
(Italics in the original)
 
This is a point I’ve made myself, but Bartley goes further and argues “Since determinism can’t answer these questions, we can know determinism is false.” This is the opposite to Sabine Hossenfelder’s argument (declaration really) that ‘free will is an illusion [therefore false]’.
 
Note that Bartley coins the term, ‘conscious causation’, as a de facto synonym for free will. In fact, he says this explicitly in his conclusion: “If you say there is no free will, you’re basically saying there is no such thing as conscious causation.” I’d have to agree.
 
I made the point in another post that consciousness seems to act outside the causal chain of the Universe, and I feel that’s what Bartley is getting at. In fact, he explicitly cites Kant on this point, who (according to Bartley) “calls the will ‘transcendental’…” He talks at length about ‘soft (or weak) determinism’ and ‘strong determinism’, which I’ve also discussed. Now, the usual argument is that consciousness is ‘caused’ by neuron activity, therefore strong determinism is not broken.
 
To quote Hossenfelder: Your brain is running a calculation, and while it is going on you do not know the outcome of that calculation. So the impression of free will comes from our ‘awareness’ that we think about what we do, along with our inability to predict the result of what we are thinking. (Hossenfelder even uses the term ‘software’ to describe what does the ‘calculating’ in your brain.)
 
And this allows me to segue into AI, because what Hossenfelder describes is what we expect a computer to do. The thing is that while most scientists (and others) believe that AI will eventually become conscious (not sure what Hossenfelder thinks), I’ve never heard or seen anyone argue that AI will have free will. And this is why I don’t think the question at the head of this post has ever been asked. Many of the people who believe that AI will become conscious also don’t believe free will exists.
 
There is another component to this, which I’ve raised before and that’s imagination. I like to quote Raymond Tallis (neuroscientist and also a contributor to Philosophy Now).
 
Free agents, then, are free because they select between imagined possibilities, and use actualities to bring about one rather than another.
(My emphasis)
 
Now, in another post, I argued that AI can’t have imagination in the way we experience it, yet I acknowledge that AI can look at numerous possibilities (like in a game of chess) and 'choose' what it ‘thinks’ is the optimum action. So, in this sense, AI would have ‘agency’, but that’s not free will, because it’s not ‘conscious causation’. And in this sense, I agree with Bartley that ‘making a decision’ does not constitute free will, if it’s what an AI does. So the difference is consciousness. To quote from that same post on this topic:
 
But the key here is imagination. It is because we can imagine a future that we attempt to bring it about - that's free will. And what we imagine is affected by our past, our emotions and our intellectual considerations, but that doesn't make it predetermined.
 
So, if imagination and consciousness are both faculties that separate us from AI, then I can’t see AI having free will, even though it will make ‘decisions’ based on data it receives (as inputs), and those decisions may not be predictable.
 
And this means that AI may not be deterministic either, in the ‘strong’ sense. One of the differences with humans, and other creatures that evolved consciousness, is that consciousness can apparently change the neural pathways of the brain, which I’d argue is the ‘strange loop’ posited by Douglas Hofstadter. (I have discussed free will and brain-plasticity in another post.)
 
But there’s another way of looking at this, which differentiates humans from AI. Our decision-making is a combination of logical reasoning and emotion. AI only uses logic, and even then, it uses logic differently to us. It uses a database of samples and possibilities to come up with a ‘decision’ (or output), but without using logic to arrive at that decision the way we would. In other words, it doesn’t ‘understand’ the decision, like when it translates between languages, for example.
 
There is a subconscious and conscious component to our decision-making. Arguably, the subconscious component is analogous to what a computer does with algorithm-based software (as per Hossenfelder’s description). But there is no analogous conscious component in AI, which makes a choice or decision. In other words, there is no ‘conscious causation’, therefore no free will, as per Bartley’s definition.
 

Sunday 3 December 2023

Philosophy in practice

 As someone recently pointed out, my posts on this blog invariably arise from things I have read (sometimes watched) and I’ve already written a post based on a column I read in the last issue of Philosophy Now (No 158, Oct/Nov 2023).
 
Well, I’ve since read a few more articles and they have prompted quite a lot of thinking. Firstly, there is an article called What Happened to Philosophy? by Dr Alexander Jeuk, who is, to quote: “an independent researcher writing on philosophy, economics, politics and the institutional structure of science.” He compares classical philosophy (in his own words, the ‘great philosophers’) with the way philosophy is practiced today in academia – that place most of us don’t visit, and whose language we wouldn’t understand if we did.
 
I don’t want to dwell on it, but its relevance to this post is that he laments the specialisation of philosophy, which he blames (if I can use that word) on the specialisation of science. The specialisation of most things is not a surprise to anyone who works in a technical field (I work in engineering). I should point out that I’m not a technical person, so I’m a non-specialist who works in a specialist field. Maybe that puts me in a better position than most to address this. I have a curious mind that started young and my curiosity shifted as I got older, which means I never really settled into one area of knowledge, and, if I had, I didn’t quite have the intellectual ability to become competent in it. And that’s why this blog is a bit eclectic.
 
In his conclusion, Jeuk suggests that ‘great philosophy’ should be looked for ‘in the classics, and perhaps encourage a re-emergence of great philosophical thought from outside academia.’ He mentions social media and the internet, which is relevant to this blog. I don’t claim to do ‘great philosophy’; I just attempt to disperse ideas and provoke thought. But I think that’s what philosophy represents to most people outside of academia. Academic philosophy has become lost in its obsession with language, whilst using language that most find obtuse, if not opaque.
 
Another article was titled Does a Just Society Require Just Citizens? by Jimmy Alfonso Licon, Assistant Teaching Professor in Philosophy at Arizona State University. I wouldn’t call the title misleading, but it doesn’t really describe the content of the essay, or even get to the gist of it, in my view. Licon introduces a term, ‘moral mediocrity’, which might have been a better title, if an enigmatic one, as it’s effectively what he discusses for the next, not-quite 3 pages.
 
He makes the point that our moral behaviour stems from social norms – a point I’ve made myself – but he makes it more compellingly. Most of us do ‘moral’ acts because that’s what our peers do, and we are species-destined (my term, not his) to conform. This is what he calls moral mediocrity, because we don’t really think it through or deliberate on whether it’s right or wrong, though we might convince ourselves that we do. He makes the salient point that if we had lived when slavery was the norm, we would have been slave-owners (assuming the reader is white, affluent and male). Likewise, suffrage was once anathema to a lot of women, as well as men. This supports my view that morality changes, and what was once considered radical becomes conservative. And such changes are usually generational, as we are witnessing in the current age with marriage equality.
 
He coins another term, when he says ‘we are the recipients of a moral inheritance’ (his italics). In other words, the moral norms we follow today, we’ve inherited from our forebears. Towards the end of his essay, he discusses Kant’s ideas on ‘duty’. I won’t go into that, but, if I understand Licon’s argument correctly, he’s saying that a ‘just society’ is one that has norms and laws that allow moral mediocrity, whereby its members don’t have to think about what’s right or wrong; they just follow the rules. This leads to his very last sentence: And this is fundamentally the moral problem with moral mediocrity: it is wrongly motivated.
 
I’ve written on this before, and, given the title as well as the content, I needed to think on what I consider leads to a ‘just society’. And I keep coming back to the essential need for trust. Societies don’t function without some level of trust, but neither do personal relationships, contractual arrangements or the raising of children.
 
And this leads to the third article in the same issue, Seeing Through Transparency, by Paul Doolan, who ‘teaches philosophy at Zurich International School and is the author of Collective Memory and the Dutch East Indies: Unremembering Decolonization (Amsterdam Univ Press, 2021)’.
 
In effect, he discusses the paradoxical nature of modern societies, whereby we insist on ‘transparency’ yet claim that privacy is sacrosanct – see the contradiction? Is this hypocrisy? And this relates directly to trust. Without transparency, be it corporate or governmental, we have trust issues. My experience is that when it comes to personal relationships, it’s a given, a social norm in fact, that a person reveals as much of their interior life as they want to, and it’s not ours to mine. An example of moral mediocrity perhaps. And yet, as Doolan points out, we give away so much on social media, where our online persona takes on a life of its own, which we cultivate (this blog not being an exception).
 
I think there does need to be transparency about decisions that affect our lives collectively, as opposed to secrets we all keep for the sake of our sanity. I have written dystopian fiction where people are surveilled to the point of monitoring all speech, and explored how it affects personal relationships. This already happens in some parts of the world. I’ve also explored a dystopian scenario where the surveillance is less obvious – every household has an android that monitors all activity. We might already have that with certain devices in our homes. Can you turn them off?  Do you have a device that monitors everyone who comes to your door?
 
The thing is that we become habituated to their presence, and it becomes part of our societal structure. As I said earlier, social norms change and are largely generational. Now they incorporate AI as well, and it’s happening without a lot of oversight or consultation with users. I don’t want to foster paranoia, but the genie has already escaped and I’d suggest it’s a matter of how we use it rather than how we put it back in the bottle.

Leaving that aside, Doolan also asks if you would behave differently if you could be completely invisible, which, of course, has been explored in fiction. We all know that anonymity fosters bad behaviour – just look online. One of my tenets is that honesty starts with honesty to oneself; it determines how we behave towards others.
 
I also know that an extreme environment, like a prison camp, can change one’s moral compass. I’ve never experienced it, but my father did. It brings out the best and worst in people, and I’d contend that you wouldn’t know how you’d be affected if you haven’t experienced it. This is an environment that turns Licon’s question on its head: can you be just in an intrinsically unjust environment?

Monday 23 October 2023

The mystery of reality

Many will say, ‘What mystery? Surely, reality just is.’ So, where to start? I’ll start with an essay by Raymond Tallis, who has a regular column in Philosophy Now called, Tallis in Wonderland – sometimes contentious, often provocative, always thought-expanding. His latest in Issue 157, Aug/Sep 2023 (new one must be due) is called Reflections on Reality, and it’s all of the above.
 
I’ve written on this topic many times before, so I’m sure to repeat myself. But Tallis’s essay, I felt, deserved both consideration and a response, partly because he starts with the one aspect of reality that we hardly ever ponder, which is doubting its existence.
 
Actually, not so much its existence, but whether our senses fool us, which they sometimes do, like when we dream (a point Tallis makes himself). And this brings me to the first point about reality that no one ever seems to discuss, and that is its dependence on consciousness, because when you’re unconscious, reality ceases to exist, for You. Now, you might argue that you’re unconscious when you dream, but I disagree; it’s just that your consciousness is misled. The point is that we sometimes remember our dreams, and I can’t see how that’s possible unless there is consciousness involved. If you think about it, everything you remember was laid down by a conscious thought or experience.
 
So, just to be clear, I’m not saying that the objective material world ceases to exist without consciousness – a philosophical position called idealism (advocated by Donald Hoffman) – but that the material objective world is ‘unknown’ and, to all intents and purposes, might as well not exist if it’s unperceived by conscious agents (like us). Try to imagine the Universe if no one observed it. It’s impossible, because the word, ‘imagine’, axiomatically requires a conscious agent.
 
Tallis proffers a quote from celebrated sci-fi author, Philip K Dick: 'Reality is that which, when you stop believing in it, doesn’t go away' (from The Shifting Realities of Philip K Dick, 1995). And this allows me to segue into the world of fiction, which Tallis doesn’t really discuss, but it’s another arena where we willingly ‘suspend disbelief’ to temporarily and deliberately conflate reality with non-reality. This is something I have in common with Dick, because we have both created imaginary worlds that are more than distorted versions of the reality we experience every day; they’re entirely new worlds that no one has ever experienced in real life. But Dick’s aphorism expresses this succinctly. The so-called reality of these worlds, in these stories, only exists while we believe in them.
 
I’ve discussed elsewhere how the brain (not just human but animal brains, generally) creates a model of reality that is so ‘realistic’, we actually believe it exists outside our head.
 
I recently had a cataract operation, which was most illuminating when I took the bandage off, because my vision in that eye was so distorted, it made me feel seasick. Everything had a lean to it and it really did feel like I was looking through a lens; I thought they had botched the operation. With both eyes open, it looked like objects were peeling apart. So I put a new eye patch on, and distracted myself for an hour by doing a Sudoku problem. When I had finished it, I took the patch off and my vision was restored. The brain had made the necessary adjustments to restore the illusion of reality as I normally interacted with it. And that’s the key point: the brain creates a model so accurately, integrating all our senses, but especially sight, sound and touch, that we think the model is the reality. And all creatures have evolved that facility simply so they can survive; it’s a matter of life-and-death.
 
But having said all that, there are some aspects of reality that really do only exist in your mind, and not ‘out there’. Colour is the most obvious, but so are sound and smell, which may all be experienced differently by other species – how are we to know? Actually, we do know that some animals can hear sounds that we can’t and see colours that we don’t, and vice versa. And I contend that these sensory experiences are among the attributes that keep us distinct from AI.
 
Tallis makes a passing reference to Kant, who argued that space and time are also aspects of reality that are produced by the mind. I have always struggled to understand how Kant got that so wrong. Mind you, he lived more than a century before Einstein all but proved that space and time are fundamental parameters of the Universe. Nevertheless, there are more than a few physicists who argue that the ‘flow of time’ is a purely psychological phenomenon. They may be right (but arguably for different reasons). If consciousness exists in a constant present (as expounded by Schrodinger) and everything else becomes the past as soon as it happens, then the flow of time is guaranteed for any entity with consciousness. However, many physicists (like Sabine Hossenfelder), if not most, argue that there is no ‘now’ – it’s an illusion.
 
Speaking of Schrodinger, he pointed out that there are fundamental differences between how we sense sight and sound, even though they are both waves. In the case of colour, we can blend them to get a new colour, and in fact, as we all know, all the colours we can see can be generated by just 3 colours, which is how the screens on all your devices work. However, that’s not the case with sound, otherwise we wouldn’t be able to distinguish all the different instruments in an orchestra. Just think: all the complexity is generated by a vibrating membrane (in the case of a speaker) and somehow our hearing separates it all. Of course, it can be done mathematically with a Fourier transform, but I don’t think that’s how our brains work, though I could be wrong.
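The Fourier separation mentioned above can be illustrated with a toy sketch (my own illustration, with an arbitrary sample rate and tone frequencies, not anything from Tallis’s essay):

```python
import numpy as np

# A speaker membrane carries many tones at once as a single waveform.
# Mix two pure tones (50 Hz and 120 Hz) sampled over one second:
rate = 1000                      # samples per second (arbitrary, for illustration)
t = np.arange(rate) / rate
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# The Fourier transform separates the blend back into its component frequencies.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / rate)

# The two strongest spectral peaks recover the original tones.
peaks = sorted(float(f) for f in freqs[np.argsort(spectrum)[-2:]])
print(peaks)  # → [50.0, 120.0]
```

As noted, this is what the mathematics can do; it almost certainly isn’t how the brain actually separates an orchestra into instruments.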
 
And this leads me to discuss the role of science, and how it challenges our everyday experience of reality. Not surprisingly, Tallis also took his discussion in that direction. Quantum mechanics (QM) is the logical starting point, and Tallis references Bohr’s Copenhagen interpretation, ‘the view that the world has no definite state in the absence of observation.’ Now, I happen to think that there is a logical explanation for this, though I’m not sure anyone else agrees. If we go back to Schrodinger again, but this time his eponymous equation, it describes events before the ‘observation’ takes place, albeit with probabilities. What’s more, all the weird aspects of QM, like the Uncertainty Principle, superposition and entanglement, are all mathematically entailed in that equation. What’s missing is relativity theory, which has since been incorporated into QED or QFT.
 
But here’s the thing: once an observation or ‘measurement’ has taken place, Schrodinger’s equation no longer applies. In other words, you can’t use Schrodinger’s equation to describe something that has already happened. This is known as the ‘measurement problem’, because no one can explain it. But if QM only describes things that are yet to happen, then all the weird aspects aren’t so weird.
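For reference, the time-dependent Schrödinger equation under discussion, together with the Born rule that converts the wavefunction into the probabilities mentioned above (a standard textbook statement, not Bartley’s or Tallis’s formulation):

```latex
i\hbar \frac{\partial \Psi(x,t)}{\partial t} = \hat{H}\,\Psi(x,t),
\qquad P(x,t) = |\Psi(x,t)|^2
```

The equation evolves the wavefunction Ψ deterministically; it is only at measurement that the probabilities |Ψ|² are realised, which is the discontinuity known as the measurement problem.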
 
Tallis also mentions Einstein’s 'block universe', which implies that past, present and future all exist simultaneously. In fact, that’s what Sabine Hossenfelder says in her book, Existential Physics:
 
The idea that the past and future exist in the same way as the present is compatible with all we currently know.

 
And:

Once you agree that anything exists now elsewhere, even though you see it only later, you are forced to accept that everything in the universe exists now. (Her emphasis.)
 
I’m not sure how she reconciles this with cosmological history, but it does explain why she believes in superdeterminism (meaning the future is fixed), which axiomatically leads to her other strongly held belief that free will is an illusion; but so did Einstein, so she’s in good company.
 
In a passing remark, Tallis says, ‘science is entirely based on measurement’. I know from other essays that Tallis has written, that he believes the entire edifice of mathematics only exists because we can measure things, which we then applied to the natural world, which is why we have so-called ‘natural laws’. I’ve discussed his ideas on this elsewhere, but I think he has it back-to-front, whilst acknowledging that our ability to measure things, which is an extension of counting, is how humanity was introduced to mathematics. In fact, the ancient Greeks put geometry above arithmetic because it’s so physical. This is why there were no negative numbers in their mathematics, because the idea of a negative volume or area made no sense.
 
But, in the intervening 2 millennia, mathematics took on a life of its own, with such exotic entities as the square roots of negative numbers and non-Euclidean geometry, which in turn suddenly found an unexpected home in QM and relativity theory respectively. All of a sudden, mathematics was informing us about reality before measurements were even made. Take Schrodinger’s wavefunction, which lies at the heart of his equation, and can’t be measured because it only exists in the future, assuming what I said above is correct.
 
But I think Tallis has a point, and I would argue that consciousness can’t be measured, which is why it might remain inexplicable to science, correlation with brain waves and their like notwithstanding.
 
So what is the mystery? Well, there’s more than one. For a start there is consciousness, without which reality would not be perceived or even be known, which seems to me to be pretty fundamental. Then there are the aspects of reality which have only recently been discovered, like the fact that time and space can have different ‘measurements’ dependent on the observer’s frame of reference. Then there is the increasing role of mathematics in our comprehension of reality at scales both cosmic and subatomic. In fact, given the role of numbers and mathematical relationships in determining fundamental constants and natural laws of the Universe, it would seem that mathematics is an inherent facet of reality.
 

Sunday 15 October 2023

What is your philosophy of life and why?

This was a question I answered on Quora, and, without specifically intending to, I brought together 2 apparently unrelated topics. The reason I discuss language is because it’s so intrinsic to our identity, not only as a species, but as an individual within our species. I’ve written an earlier post on language (in response to a Philosophy Now question-of-the-month), which has a different focus, and I deliberately avoided referencing that.
 
A ‘philosophy of life’ can be represented in many ways, but my perspective is within the context of relationships, in all their variety and manifestations. It also includes a recurring theme of mine.



First of all, what does one mean by ‘philosophy of life’? For some people, it means a religious or cultural way-of-life. For others it might mean a category of philosophy, like post-modernism or existentialism or logical positivism.
 
For me, it means a philosophy on how I should live, and on how I both look at and interact with the world. This is not only dependent on my intrinsic beliefs that I might have grown up with, but also on how I conduct myself professionally and socially. So it’s something that has evolved over time.
 
I think that almost all aspects of our lives are dependent on our interactions with others, which starts right from when we were born, and really only ends when we die. And the thing is that everything we do, including all our failures and successes, occurs in this context.
 
Just to underline the significance of this dependence, we all think in a language, and we all gain our language from our milieu at an age before we can rationally and critically think, especially compared to when we mature. In fact, language is analogous to software that gets downloaded from generation to generation, so that knowledge can also be passed on and accumulated over ages, which has given rise to civilizations and disciplines like science, mathematics and art.
 
This all sounds off-topic, but it’s core to who we are and it’s what distinguishes us from other creatures. Language is also key to our relationships with others, both socially and professionally. But I take it further, because I’m a storyteller and language is the medium I use to create a world inside your head, populated by characters who feel like real people and who interact in ways we find believable. More than any other activity, this illustrates how powerful language is.
 
But it’s the necessity of relationships in all their manifestations that determines how one lives one’s life. As a consequence, my philosophy of life centres around one core value and that is trust. Without trust, I believe I am of no value. But, not only that, trust is the foundational value upon which a society either flourishes or devolves into a state of oppression with its antithesis, rebellion.

 

Saturday 16 September 2023

Modes of thinking

I’ve written a few posts on creative thinking as well as analytical and critical thinking. But, not that long ago, I read a not-so-recently published book (2015) by 2 psychologists (John Kounios and Mark Beeman) titled The Eureka Factor: Creative Insights and the Brain. To quote from the back fly-leaf:
 
Dr John Kounios is Professor of Psychology at Drexel University and has published cognitive neuroscience research on insight, creativity, problem solving, memory, knowledge representation and Alzheimer’s disease.
 
Dr Mark Beeman is Professor of Psychology and Neuroscience at Northwestern University, and researches creative problem solving and creative cognition, language comprehension and how the right and left hemispheres process information.

 
They divide people into 2 broad groups: ‘Insightfuls’ and ‘analytical thinkers’. Personally, I think the coined term ‘Insightfuls’ is misleading, or too narrow in its definition, and I prefer the term ‘creatives’. More on that below.
 
As the authors say, themselves, ‘People often use the terms “insight” and “creativity” interchangeably.’ So that’s obviously what they mean by the term. However, the dictionary definition of ‘insight’ is ‘an accurate and deep understanding’, which I’d argue can also be obtained by analytical thinking. Later in the book, they describe insights obtained by analytical thinking as ‘pseudo-insights’, and the difference can be ‘seen’ with neuro-imaging techniques.
 
All that aside, they do provide compelling arguments that there are 2 distinct modes of thinking that most of us experience. Very early in the book (in the preface, actually), they describe the ‘ah-ha’ experience that we’ve all had at some point, where we’re trying to solve a puzzle and then it comes to us unexpectedly, like a light-bulb going off in our head. They then relate something that I didn’t know, which is that neurological studies show that when we have this ‘insight’ there’s a spike in our brain waves and it comes from a location in the right hemisphere of the brain.
 
Many years ago (decades) I read a book called Drawing on the Right Side of the Brain by Betty Edwards. I thought neuroscientists would disparage this as pop-science, but Kounios and Beeman seem to give it some credence. Later in the book, they describe this in more detail, where there are signs of activity in other parts of the brain, but the ah-ha experience has a unique EEG signature and it’s in the right hemisphere.
 
The authors distinguish this unexpected insightful experience from an insight that is a consequence of expertise. I made this point myself, in another post, where experts make intuitive shortcuts based on experience that the rest of us don’t have in our mental toolkits.
 
They also spend an entire chapter on examples involving a special type of insight, where someone spends a lot of time thinking about a problem or an issue, and then the solution comes to them unexpectedly. A lot of scientific breakthroughs follow this pattern, and the point is that the insight wouldn’t happen at all without all the rumination taking place beforehand, often over a period of weeks or months, sometimes years. I’ve experienced this myself, when writing a story, and I’ll return to that experience later.
 
A lot of what we’ve learned about the brain’s functions has come from studying people with damage to specific areas of the brain. You may have heard of a condition called ‘aphasia’, which is when someone develops a serious disability in language processing following damage to the left hemisphere (possibly from a stroke). What you probably don’t know (I didn’t) is that damage to the right hemisphere, while not directly affecting one’s ability with language, can interfere with its more nuanced interpretations, like sarcasm or even getting a joke. I’ve long believed that when I’m writing fiction, I’m using the right hemisphere as much as the left, but it never occurred to me that readers (or viewers) need the right hemisphere in order to follow a story.
 
According to the authors, the difference between the left and right neo-cortex is one of connections. The left hemisphere has ‘local’ connections, whereas the right hemisphere has more widely spread connections. This seems to correspond to an ‘analytic’ ability in the left hemisphere, and a more ‘creative’ ability in the right hemisphere, where we make conceptual connections that are more wide-ranging. I’ve probably oversimplified that, but it was the gist I got from their exposition.
 
Like most books and videos on ‘creative thinking’ or ‘insights’ (as the authors prefer), they spend a lot of time giving hints and advice on how to improve your own creativity. It’s not until one is more than halfway through the book, in a chapter titled, The Insightful and the Analyst, that they get to the crux of the issue, and describe how there are effectively 2 different types who think differently, even in a ‘resting state’, and how there is a strong genetic component.
 
I’m not surprised by this, as I saw it in my own family, where the difference is very distinct. In another chapter, they describe the relationship between creativity and mental illness, but they don’t discuss how artists are often moody and neurotic, which is a personality trait. Openness is another personality trait associated with creative people. I would add another point, based on my own experience: if someone is creative and not creating, they can suffer depression. This is not discussed by the authors either.
 
Regarding the 2 types they refer to, they acknowledge there is a spectrum, and I can’t help but wonder where I sit on it. I spent a working lifetime in engineering, which is full of analytic types, though I didn’t work in a technical capacity. Instead, I worked with a lot of technical people of all disciplines: from software engineers to civil and structural engineers to architects, not to mention lawyers and accountants, because I worked on disputes as well.
 
The curious thing is that I was aware of 2 modes of thinking, where I was either looking at the ‘big-picture’ or looking at the detail. I worked as a planner, and one of my ‘tricks’ was the ability to distil a large and complex project into a one-page ‘Gantt’ chart (bar chart). For the individual disciplines, I’d provide a multipage detailed ‘program’ just for them.
 
Of course, I also write stories, where the 2 components are plot and character. Creating characters is purely a non-analytic process, which requires a lot of extemporising. I try my best not to interfere, and I do this by treating them as if they are real people, independent of me. Plotting, on the other hand, requires a big-picture approach, but I almost never know the ending until I get there. In the last story I wrote, I was in COVID lockdown when I knew the ending was close, so I wrote some ‘notes’ in an attempt to work out what happens. Then, sometime later (like a month), I had one sleepless night when it all came to me. Afterwards, I went back and looked at my notes, and they were all questions – I didn’t have a clue.

Wednesday 7 June 2023

Consciousness, free will, determinism, chaos theory – all connected

 I’ve said many times that philosophy is all about argument. And if you’re serious about philosophy, you want to be challenged. And if you want to be challenged you should seek out people who are both smarter and more knowledgeable than you. And, in my case, Sabine Hossenfelder fits the bill.
 
When I read people like Sabine, and others whom I interact with on Quora, I’m aware of how limited my knowledge is. I don’t even have a university degree, though I’ve attempted a number of times. I’ve spent my whole life in the company of people smarter than me, including at school. Believe it or not, I still have occasional contact with them, through social media and school reunions. I grew up in a small rural town, where the people you went to school with feel like siblings.
 
Likewise, in my professional life, I have always encountered people cleverer than me – it provides perspective.
 
In her book, Existential Physics: A Scientist’s Guide to Life’s Biggest Questions, Sabine interviews people who are possibly even smarter than she is, and I sometimes found their conversations difficult to follow. To be fair to Sabine, she also sought out people who have different philosophical views to her, and also have the intellect to match her.
 
I’m telling you all this to put things in perspective. Sabine has her prejudices like everyone else, some of which she defends better than others. I concede that my views are probably more simplistic than hers, and I support my challenges with examples that are hopefully easy to follow. Our points of disagreement can be distilled down to a few pertinent topics, which are time, consciousness, free will and chaos. Not surprisingly, they are all related – what you believe about one, affects what you believe about the others.
 
Sabine is very strict about what constitutes a scientific theory. She argues that so-called theories like the multiverse have ‘no explanatory power’, because they can’t be verified or rejected by evidence, and she calls them ‘ascientific’. She’s critical of popularisers like Brian Cox who tell us that there could be an infinite number of ‘you(s)’ in an infinite multiverse. She distinguishes between beliefs and knowledge, which is a point I’ve made myself. Having said that, I’ve also argued that beliefs matter in science. She puts all interpretations of quantum mechanics (QM) in this category. She keeps emphasising that it doesn’t mean they are wrong, but they are ‘ascientific’. It’s part of the distinction that I make between philosophy and science, and why I perceive science as having a dialectical relationship with philosophy.
 
I’ll start with time, as Sabine does, because it affects everything else. In fact, the first chapter in her book is titled, Does The Past Still Exist? Basically, she argues for Einstein’s ‘block universe’ model of time, but it’s her conclusion that ‘now is an illusion’ that is probably the most contentious. This critique will cite a lot of her declarations, so I will start with her description of the block universe:
 
The idea that the past and future exist in the same way as the present is compatible with all we currently know.
 
This viewpoint arises from the fact that, according to relativity theory, simultaneity is completely observer-dependent. I’ve discussed this before, where I argue that an observer who is moving relative to a source, or stationary relative to a moving source (like the observer standing on the platform in Einstein’s original thought experiment while a train goes past), knows this because of the Doppler effect. In other words, an observer who doesn’t see a Doppler effect is in a privileged position, because they are in the same frame of reference as the source of the signal. This is why we know the Universe is expanding with respect to us, and why we can work out our movement with respect to the CMBR (cosmic microwave background radiation), hence to the overall universe (just think about that).
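The Doppler point can be made concrete with the standard longitudinal relativistic Doppler formula (this is textbook physics, not something from Sabine’s book, and the speed chosen is purely illustrative):

```python
import math

def doppler_factor(beta):
    """f_observed / f_emitted for a source receding at v = beta * c."""
    return math.sqrt((1.0 - beta) / (1.0 + beta))

# In the same frame as the source: no shift at all.
print(doppler_factor(0.0))            # 1.0

# Receding at 10% of c: the observed frequency drops (a redshift),
# which is how an observer detects motion relative to the source.
print(round(doppler_factor(0.1), 4))  # 0.9045
```

An observer who measures a factor of exactly 1 is in the privileged frame of the source, which is the point about the platform observer and the CMBR.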
 
Sabine clinches her argument by drawing a spacetime diagram in which 2 independent observers moving away from each other observe a pulsar with 2 different simultaneities. One, who is travelling towards the pulsar, sees the pulsar simultaneously with someone’s birth on Earth, while the one travelling away from the pulsar sees it simultaneously with the same person’s death. This is her slam-dunk argument that ‘now’ is an illusion, if it can produce such a dramatic contradiction.
 
However, I drew up my own spacetime diagram of the exact same scenario, where no one is travelling relative to anyone else, yet it creates the same apparent contradiction.


 My diagram follows the convention in that the horizontal axis represents space (all 3 dimensions) and the vertical axis represents time. So the 4 dotted lines represent 4 observers who are ‘stationary’ but ‘travelling through time’ (vertically). As per convention, light and other signals are represented as diagonal lines of 45 degrees, as they are travelling through both space and time, and nothing can travel faster than them. So they also represent the ‘edge’ of their light cones.
 
So notice that observer A sees the birth of Albert when he sees the pulsar and observer B sees the death of Albert when he sees the pulsar, which is exactly the same as Sabine’s scenario, with no relativity theory required. Albert, by the way, for the sake of scalability, must have lived for thousands of years, so he might be a tree or a robot.
 
But I’ve also added 2 other observers, C and D, who see the pulsar before Albert is born and after Albert dies respectively. But, of course, there’s no contradiction, because it’s completely dependent on how far away they are from the sources of the signals (the pulsar and Earth).
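The distance-dependence can be reduced to one line of arithmetic. This is a toy sketch of my own (the distances are invented for illustration, not taken from either diagram): a stationary observer sees an event at its emission time plus the light-travel time.

```python
# Toy model of the spacetime diagram: a stationary observer sees an event
# when its light arrives, i.e. at t_emit + distance / c.
# Units: light-years for distance, years for time, so c = 1.
# All numbers are invented for illustration.

def arrival_time(t_emit, distance, c=1.0):
    """Time at which a stationary observer sees an event."""
    return t_emit + distance / c

# A pulsar flashes at t = 0; Albert is born on Earth at t = 1000.
pulsar_flash, albert_born = 0.0, 1000.0

# Observer A: 1000 ly from the pulsar, right next to Earth (0 ly).
a_sees_pulsar = arrival_time(pulsar_flash, 1000.0)   # 1000.0
a_sees_birth  = arrival_time(albert_born, 0.0)       # 1000.0
print(a_sees_pulsar == a_sees_birth)  # True: 'simultaneous' for A

# Observer C: much closer to the pulsar, so the flash arrives
# long before any signal of Albert's birth can reach them.
c_sees_pulsar = arrival_time(pulsar_flash, 100.0)    # 100.0
c_sees_birth  = arrival_time(albert_born, 900.0)     # 1900.0
print(c_sees_pulsar < c_sees_birth)   # True: no contradiction, just distance
```

The apparent ‘simultaneity’ is entirely an artefact of how far each observer is from the two sources, with no relativity theory required.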
 
This is Sabine’s perspective:
 
Once you agree that anything exists now elsewhere, even though you see it only later, you are forced to accept that everything in the universe exists now. (Her emphasis.)
 
I actually find this statement illogical. If you take it to its logical conclusion, then the Big Bang exists now and so does everything in the universe that’s yet to happen. If you look at the first quote I cited, she effectively argues that the past and future exist alongside the present.
 
One of the points she makes is that, for events with causal relationships, all observers see the events happening in the same sequence. Where different observers see events in different sequences, the events have no causal relationship. But this raises a question: what makes causal events exceptional? What’s more, this is fundamental, because the whole of physics is premised on the principle of causality. In addition, I fail to see how you can have causality without time. In fact, causality is governed by the constant speed of light – it’s literally what stops everything from happening at once.
 
Einstein also believed in the block universe, and like Sabine, he argued that, as a consequence, there is no free will. Sabine is adamant that both ‘now’ and ‘free will’ are illusions. She argues that the now we all experience is a consequence of memory. She quotes Carnap that our experience of ‘past, present and future can be described and explained by psychology’ – a point also made by Paul Davies. Basically, she argues that what separates our experience of now from the reality of no-now (my expression, not hers) is our memory.
 
Whereas, I think she has it back-to-front, because, as I’ve pointed out before, without memory, we wouldn’t know we are conscious. Our brains are effectively a storage device that allows us to have a continuity of self through time, otherwise we would not even be aware that we exist. Memory doesn’t create the sense of now; it records it just like a photograph does. The photograph is evidence that the present becomes the past as soon as it happens. And our thoughts become memories as soon as they happen, otherwise we wouldn’t know we think.
 
Sabine spends an entire chapter on free will, where she persistently iterates variations on the following mantra:
 
The future is fixed except for occasional quantum events that we cannot influence.

 
But she acknowledges that while the future is ‘fixed’, it’s not predictable. And this brings us to chaos theory. Sabine discusses chaos late in the book and not in relation to free will. She explicates what she calls the ‘real butterfly effect’.
 
The real butterfly effect… means that even arbitrarily precise initial data allow predictions for only a finite amount of time. A system with this behaviour would be deterministic and yet unpredictable.
 
Now, if deterministic means everything physically manifest has a causal relationship with something prior, then I agree with her. If she means that therefore ‘the future is fixed’, I’m not so sure, and I’ll explain why. By specifying ‘physically manifest’, I’m excluding thoughts and computer algorithms that can have an effect on something physical, whereas the cause is not so easily determined. For example, in the case of the algorithm, does it go back to the coder who wrote it?
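A standard toy example, the logistic map (my choice of illustration, not one Sabine uses), shows exactly this ‘deterministic yet unpredictable’ behaviour: the rule is completely deterministic, yet two starting values differing in the 15th decimal place part company within about 50 iterations.

```python
# The logistic map x -> r*x*(1-x) with r = 4 is fully deterministic,
# yet chaotic: nearby initial conditions diverge exponentially, so any
# finite precision in the initial data gives a finite prediction horizon.

def logistic_orbit(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.3, 60)
b = logistic_orbit(0.3 + 1e-15, 60)  # differs in the 15th decimal place

# Early on the two orbits agree to high precision...
print(abs(a[5] - b[5]) < 1e-10)   # True
# ...but within about 50 steps they have completely parted company.
print(max(abs(a[i] - b[i]) for i in range(45, 61)) > 0.1)  # True
```

Arbitrarily precise (but finite) initial data buys only a finite window of prediction, which is the ‘real butterfly effect’ in miniature.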
 
My go-to example for chaos is tossing coins, because it’s so easy to demonstrate and it’s linked to probability theory, as well as being the very essence of a random event. One of the key, if not definitive, features of a chaotic phenomenon is that, if you were to rerun it, you’d get a different result, and that’s fundamental to probability theory – every coin toss is independent of any previous toss – they are causally independent. Unrepeatability is common among chaotic systems (like the weather). Even the Earth and Moon were created from a chaotic event.
 
I recently read another book called Quantum Physics Made Me Do It by Jeremie Harris, who argues that tossing a coin is not random – in fact, he’s very confident about it. He’s not alone. Mark John Fernee, a physicist with Qld Uni, in a personal exchange on Quora argued that, in principle, it should be possible to devise a robot to perform perfectly predictable tosses every time, like a tennis ball launcher. But, as another Quora contributor and physicist, Richard Muller, pointed out: it’s not dependent on the throw but the surface it lands on. Marcus du Sautoy makes the same point about throwing dice and provides evidence to support it.
 
Getting back to Sabine. She doesn’t discuss tossing coins, but she might think that the ‘imprecise initial data’ is the actual act of tossing, and after that the outcome is determined, even if it can’t be predicted. However, the deterministic chain is broken as soon as the coin hits a surface.
 
Just before she gets to chaos theory, she talks about computability, with respect to Gödel’s Theorem and a discussion she had with Roger Penrose (included in the book), where she says:
 
The current laws of nature are computable, except for that random element from quantum mechanics.
 
Now, I’m quoting this out of context, because she then argues that if they were uncomputable, they open the door to unpredictability.
 
My point is that the laws of nature are uncomputable because of chaos theory, and I cite Ian Stewart’s book, Does God Play Dice? In fact, Stewart even wonders if QM could be explained using chaos (I don’t think so). Chaos theory has mathematical roots, because not only are the ‘initial conditions’ of a chaotic event impossible to measure, they are impossible to compute – you have to calculate to infinite decimal places. And this is why I disagree with Sabine that the ‘future is fixed’.
 
It’s impossible to discuss everything in a 223-page book in a blog post, but there is one other topic she raises where we disagree, and that’s the Mary’s Room thought experiment. As she explains, it was proposed by philosopher Frank Jackson in 1982, but she also claims that he abandoned his own argument. After describing the experiment (refer to this video, if you’re not familiar with it), she says:
 
The flaw in this argument is that it confuses knowledge about the perception of colour with the actual perception of it.
 
Whereas, I thought the scenario actually delineated the difference – that perception of colour is not the same as knowledge of it. A person who was severely colour-blind might never have experienced the colour red (the specified colour in the thought experiment), but they could be told what objects might be red. It’s well known that some animals are colour-blind compared to us, and some specifically can’t discern red. Colour is totally a subjective experience. But I think the Mary’s Room thought experiment highlights the difference between human perception and AI. An AI can be designed to delineate colours by wavelength, but it would not experience colour the way we do. I wrote a separate post on this.
 
Sabine gives the impression that she thinks consciousness is a non-issue. She talks about the brain like it’s a computer.
 
You feel you have free will, but… really, you’re running a sophisticated computation on your neural processor.
 
Now, many people, including most scientists, think that, because our brains are just like computers, it’s only a matter of time before AI also shows signs of consciousness. Sabine doesn’t make this connection, even when she talks about AI. Nevertheless, she discusses one of the leading theories of neuroscience (IIT, Integrated Information Theory), based on calculating the amount of information processed, which gives a number called phi (Φ). I came across this when I did an online course on consciousness through New Scientist, during COVID lockdown. According to the theory, this number provides a ‘measure of consciousness’, which suggests that it could also be used with AI, though Sabine doesn’t pursue that possibility.
 
Instead, Sabine cites an interview in New Scientist with Daniel Bor from the University of Cambridge: “Phi should decrease when you go to sleep or are sedated… but work in Bor’s laboratory has shown that it doesn’t.”
 
Sabine’s own view:
 
Personally, I am highly skeptical that any measure consisting of a single number will ever adequately represent something as complex as human consciousness.
 
Sabine discusses consciousness at length, especially following her interview with Penrose, and she gives one of the best arguments against panpsychism I’ve read. Her interview with Penrose, along with a discussion of Gödel’s Theorem (another topic), addresses whether consciousness is computable or not. I don’t think it is, and I don’t think it’s algorithmic.
 
She makes a very strong argument for reductionism: that the properties we observe of a system can be understood from studying the properties of its underlying parts. In other words, emergent properties can be understood in terms of the properties they emerge from. And this includes consciousness. I’m one of those who really think that consciousness is the exception. Thoughts can cause actions, which is known as ‘agency’.
 
I don’t claim to understand consciousness, but I’m not averse to the idea that it could exist outside the Universe – that it’s something we tap into. This is completely ascientific, to borrow from Sabine. As I said, our brains are storage devices, and sometimes they let us down; without them, we wouldn’t even know we are conscious. I don’t believe in a soul. I think the continuity of the self is a function of memory – just read The Lost Mariner chapter in Oliver Sacks’ book, The Man Who Mistook His Wife For A Hat. It’s about a man suffering from anterograde amnesia, so his life is stuck in the past because he’s unable to create new memories.
 
At the end of her book, Sabine surprises us by talking about religion, and how she agrees with Stephen Jay Gould that religion and science are two ‘nonoverlapping magisteria’. She makes the point that a lot of scientists have religious beliefs but won’t discuss them in public because it’s taboo.
 
I don’t doubt that Sabine has answers to all my challenges.
 
There is one more thing: Sabine talks about an epiphany, following her introduction to physics in middle school, which started in frustration.
 
Wasn’t there some minimal set of equations, I wanted to know, from which all the rest could be derived?
 
When the principle of least action was introduced, it was a revelation: there was indeed a procedure to arrive at all these equations! Why hadn’t anybody told me?

 
The principle of least action is one concept common to both the general theory of relativity and quantum mechanics. It’s arguably the most fundamental principle in physics. And yes, I posted on that too.

 

Wednesday 31 May 2023

Immortality: from the Pharaohs to cryonics

I thought the term was cryogenics, but a feature article in the Weekend Australian Magazine (27-28 May 2023) calls the facilities that perform this process cryonics facilities, and looking it up in my dictionary, there is a distinction. Cryogenics is about low-temperature freezing in general, whereas cryonics deals specifically with the deep-freezing of bodies, with the intention of one day reviving them.
 
The article cites a few people, but the author, Ross Bilton, features an Australian, Peter Tsolakides, who is in my age group. From what the article tells me, he’s a software engineer who has seen many generations of computer code and has also been a ‘globe-trotting executive for ExxonMobil’.
 
He’s one of the drivers behind a cryonic facility in Australia – its first – located at Holbrook, which is roughly halfway between Melbourne and Sydney. In fact, I often stop at Holbrook for a break and meal on my interstate trips. According to my car’s odometer it is almost exactly half way between my home and my destination, which is a good hour short of Sydney, so it’s actually closer to Melbourne, but not by much.
 
I’m not sure when Tsolakides plans to enter the facility, but he’s forecasting his resurrection in around 250 years’ time, when he expects he may live for another thousand years. Yes, this is science fiction to most of us, but there are some science facts that lend some credence to this venture.
 
For a start, we already cryogenically freeze embryos and sperm, and we know it works for them. There is also the case of Ewa Wisnierska, 35, a German paraglider taking part in an international competition in Australia, who was sucked into a storm and elevated to 9,947 metres (jumbo-jet territory, and higher than Everest). Needless to say, she lost consciousness and spent a frozen 45 minutes before she came back to Earth. Quite a miracle, and I’ve watched a doco on it. She made a full recovery and was back at her sport within a couple of weeks. And I know of other cases where the brain of a living person has been frozen to keep them alive, as counterintuitive as that may sound.
 
Believe it or not, scientists are divided on this, or at least cautious about dismissing it outright. Many take the position, ‘Never say never’. And I think that’s fair enough, because it really is impossible to predict the future when it comes to humanity. It’s not surprising that advocates, like Tsolakides, can see a future where this will become normal for most humans. People who decline immortality will be the exception and not the norm. And I can imagine that, if this ‘procedure’ became successful and commonplace, few would say no.
 
Now, I write science fiction, and I have written a story where a group of people decided to create an immortal human race, who were part machine. It’s a reflection of my own prejudices that I portrayed this as a dystopia, but I could have done the opposite.
 
There may be an assumption that if you write science fiction then you are attempting to predict the future, but I make no such claim. My science fiction is complete fantasy, but, like all science fiction, it addresses issues relevant to the contemporary society in which it was created.
 
Getting back to the article in the Weekend Australian, there is an aspect of this that no one addressed – not directly, anyway. There’s no point in cheating death if you can’t cheat old age. In the case of old age, you are dealing with a fundamental law of the Universe, entropy, the second law of thermodynamics. No one asked the obvious question: how do you expect to live for 1,000 years without getting dementia?
 
I think some have thought about this, because, in the same article, they discuss the ultimate goal of downloading their memories and their thinking apparatus (for want of a better term) into a computer. I’ve written on this before, so I won’t go into details.
 
Curiously, I’m currently reading a book by Sabine Hossenfelder called Existential Physics: A Scientist’s Guide to Life’s Biggest Questions, which you would think could not possibly have anything to say on this topic. Nevertheless:
 
The information that makes you you can be encoded in many different physical forms. The possibility that you might one day upload yourself to a computer and continue living a virtual life is arguably beyond present-day technology. It might sound entirely crazy, but it’s compatible with all we currently know.
 
I promise to write another post on Sabine’s book, because she’s nothing if not thought-provoking.
 
So where do I stand? I don’t want immortality – I don’t even want a gravestone, and neither did my father. I have no dependents, so I won’t live on in anyone’s memory. The closest I’ll get to immortality are the words on this blog.

Thursday 25 May 2023

Philosophy’s 2 disparate strands: what can we know; how can we live

The question I’d like to ask is: is there a philosophical view that encompasses both? Some may argue that Aristotle attempted that, but I’m going to take a different approach.
 
For a start, the first part can arguably be broken into 2 further strands: physics and metaphysics. And even this divide is contentious, with some arguing that metaphysics is an ‘abstract theory with no basis in reality’ (one dictionary definition).
 
I wrote an earlier post arguing that we are ‘metaphysical animals’ after discussing a book of the same name, though it was really a biography of 4 Oxford women in the 20th Century: Elizabeth Anscombe, Mary Midgley, Philippa Foot and Iris Murdoch. But I’ll start with this quote from said book.
 
Poetry, art, religion, history, literature and comedy are all metaphysical tools. They are how metaphysical animals explore, discover and describe what is real (and beautiful and good). (My emphasis.)
 
So, arguably, metaphysics could give us a connection between the 2 ‘strands’ in the title. Now here’s the thing: I contend that mathematics should be part of that list, hence part of metaphysics. And, of course, we all know that mathematics is essential to physics as an epistemology. So physics and metaphysics, in my philosophy, are linked in a rather intimate way.
 
The curious thing about mathematics, or anything metaphysical for that matter, is that, without human consciousness, they don’t really exist, or are certainly not manifest. Everything on that list is a product of human consciousness, notwithstanding that there could be other conscious entities somewhere in the universe with the same capacity.
 
But again, I would argue that mathematics is an exception. I agree with a lot of mathematicians and physicists that while we create the symbols and language of mathematics, we don’t create the intrinsic relationships that said language describes. And furthermore, some of those relationships seem to govern the universe itself.
 
And completely relevant to the first part of this discussion, the limits of our knowledge of mathematics seem to determine the limits of our knowledge of the physical world.
 
I’ve written other posts on how to live, specifically, 3 rules for humans and How should I live? But I’m going to go via metaphysics again, specifically storytelling, because that’s something I do. Storytelling requires an inner and outer world, manifest as character and plot, which is analogous to free will and fate in the real world. Now, even these concepts are contentious, especially free will, because many scientists tell us it’s an illusion. Again, I’ve written about this many times, but its relevance to my approach to fiction is that I try to give my characters free will. An important part of my fiction is that the characters are independent of me. If my characters don’t take on a life of their own, then I know I’m wasting my time, and I’ll ditch that story.
 
Its relevance to ‘how to live’ is authenticity. Artists understand better than most the importance of authenticity in their work, which really means keeping themselves out of it. But authenticity has ramifications, as any existentialist will tell you. To live authentically requires an honesty to oneself that is integral to one’s being. And ‘being’ in this sense is about being human rather than its broader ontological meaning. In other words, it’s a fundamental aspect of our psychology, because it evolves and changes according to our environment and milieu. Also, in the world of fiction, it's a fundamental dynamic.
 
What's more, if you can maintain this authenticity (and it’s genuine), then you gain people’s trust, and that becomes your currency, whether in your professional life or your social life. However, there is nothing more fake than false authenticity; examples abound.
 
I’ll give the last word to Socrates, arguably the first existentialist.
 
To live with honour in this world, actually be what you try to appear to be.


Tuesday 4 April 2023

Finding purpose without a fortune teller

 I just started watching a show on Apple TV+ called The Big Door Prize, starring Irish actor, Chris O’Dowd, set in suburban America (Deerfield). It’s listed as a comedy, but it might be a black comedy or a satire; I haven’t watched it long enough to judge.
 
It has an interesting premise: the local store has a machine, which, for small change, will tell you what your ‘potential’ is. Not that surprisingly, people start queuing up to find their potential (or purpose). I say, ‘not surprising’, because people consult Tarot cards or the I Ching for the same reason, not to mention weekly astrological charts found in the local newspaper, magazine or whatever. And of course, if the ‘reading’ coincides with our specific desire or wish, we wholeheartedly agree, whereas, if it doesn’t, we dismiss it as rubbish.
 
I’ve written previously about the importance of finding purpose, and, in fact, it’s considered necessary for one’s psychological health. But this is a subtly different take on it, prompted by the aforementioned premise. I have the advantage of over half a century of hindsight because I think I found my purpose late, yet it was hiding in plain sight all along.
 
We sometimes think of our purpose as a calling or vocation. In my case, I believe it was to be a writer. Now, even though I’m not a successful writer by any stretch of the imagination, the fact that I do write is important to me. It gives me a sense of purpose that I don’t find in my job or my relationships, even though they are all important to me. I don’t often agree with Jordan Peterson, but he once made the comment that creative people who don’t create are like ‘broken sticks’. I totally identify with that.
 
I only have to look to my early childhood (pre-high school) when I started to write stories and draw my own superheroes. But as a teenager and a young adult (in my 20s), I found I couldn’t write to save myself, including essays (like I write on this blog), let alone attempts at fiction. But here’s the thing: when I did start writing fiction, I knew it was terrible – so terrible, I didn’t even tell anyone – yet I persevered because I ‘knew’ that I could. And I think that’s the key point: if you have a purpose, you can visualise it even when everything you’re doing tells you that you should give it up.
 
So, you don’t need a ‘machine’ or Tarot cards, just self-belief. Purpose comes to those who look for it, and know it when they see it, even in its emerging phase, when no one else can see it.
 
 
Now, I’m going to tell you a story about someone else, whom I knew for over 4 decades and who found her ‘purpose’ in spite of circumstances that might have prevented it, or at least worked against it. She was a single Mum who raised 3 daughters and simultaneously found a role in theatre. The thing is that she never gained any substantial financial reward, yet she won awards, both as an actor and director. She even took part in a theatre festival in Monaco, though it took a government grant to get her there. She had very little in terms of material wealth, but it never bothered her and she was generous to a fault. She was a trained nurse, but had no other qualifications – certainly none relevant to her theatrical career. She passed last year and she is sorely missed, not only by me, but by the many lives she touched. She was, by anyone’s judgement, a force of nature.
 
 
 
This is a review of a play, Tuesdays with Morrie, for which Liz Bradley won an award. I happened to attend the opening with her, so it has a special memory for me. Dylan Muir, especially mentioned as providing the vocal, is Liz’s daughter.


Tuesday 28 March 2023

Why do philosophers think differently?

 This was a question on Quora, and this is my answer, which, hopefully, explains the shameless self-referencing to this blog.

 

Who says they do? I think this is one of those questions that should be reworded: what distinguishes a philosopher’s thinking from most other people’s? I’m not sure there is a definitive answer to this, because, like other individuals, every philosopher is unique. The major difference is that they spend more time writing down what they’re thinking than most people, and I’m a case in point.
 
Not that I’m a proper philosopher, in that it’s not my profession – I’m an amateur, a dilettante. I wrote a little aphorism at the head of my blog that might provide a clue.

Philosophy, at its best, challenges our long held views, such that we examine them more deeply than we might otherwise consider.

Philosophy, going back to Socrates, is all about argument. Basically, Socrates challenged the dogma of his day and it ultimately cost him his life. I write a philosophy blog and it’s full of arguments, not that I believe I can convince everyone to agree with my point of view. But basically, I hope to make people think outside their comfort zone, and that’s the best I can do.
 
Socrates is my role model, because he was the first (that we know of) to challenge the perceived wisdom of figures of authority. In the Western tradition, for most of the 2 millennia since Socrates, figures of authority were associated with the Church, in all its manifestations, and challenging them could result in death or torture, or both.
 
That’s no longer the case – well, not quite: try following that path if you’re a woman in Saudi Arabia or Iran. But for most of us, living in a Western society, one can challenge anything at all, including whether the Earth is a sphere.
 
Back to the question: I don’t think it can be answered, even in the reworded form I substituted. Personally, I think philosophy in the modern world requires analysis and a healthy dose of humility. The one thing I’ve learned from reading and listening to many people much smarter than me is that what we actually know is but a blip, and it always will be. Nowhere is this more evident than in mathematics: there are infinitely more incomputable numbers than computable ones. So, if our knowledge of maths is just the tip of a universe-sized iceberg, what does that say about anything else we can possibly know?
 
Perhaps what separates a philosopher’s thinking from most other people’s is that they are acutely aware of how little we know. Come to think of it, Socrates famously made the same point.

Tuesday 20 December 2022

What grounds morality?

 In the most recent issue of Philosophy Now (No 153, Dec 2022/Jan 2023), they’ve published the answers to the last Question of the Month: What Grounds or Justifies Morality? I submitted an answer that wasn’t included, and having read the 10 selected, I believe I could have done better. In my answer, I said, ‘courage’, based on the fact that it takes courage for someone to take a stand against the tide of demonisation of the ‘other’, which we witness so often in history and even contemporary society.
 
However, that is too specific and doesn’t really answer the question, which arguably is seeking a principle, like the ‘Golden Rule’ or the Utilitarian principle of ‘the greatest happiness to the greatest number’. Many answers cited Kant’s appeal to ‘reason’, and some cited religion and others, some form of relativism. All in all, I thought they were good answers without singling any one out.
 
So what did I come up with? Well, partly based on observations of my own fiction and my own life, I decided that morality needs to be grounded in trust. I’ve written about trust at least twice before, and I think it’s fundamental because neither one-on-one relationships (of all types) nor society as a whole can function properly without it. If you think about it, how well you trust someone is a good measure of your assessment of their moral character. And it functions at all levels of society. Imagine living in a society where you can’t say what you think, where you have to obey strict rules of secrecy and deception or be punished. Such societies exist.
 
I’ve noticed a recurring motif in my stories (not deliberate) of loyalties being tested and of moral dilemmas. Both in my private life and professional life, I think trust is paramount. It’s my currency. I realised a long time ago that if people don’t trust me, I have no worth.

Wednesday 28 September 2022

Humanity’s Achilles’ heel

Good and evil are characteristics that imbue almost every aspect of our nature, which is why they are the subject of so many narratives, including mythologies and religions, not to mention actual real-world histories. Together they effectively define what we are, what we are capable of, and what we are destined to be.
 
I’ve discussed evil in one of my earliest posts, along with its recurring motif in fiction. Humanity is unique, at least on this small world we call home, in that we can change the world on a biblical scale, both intentionally and unintentionally – climate change being the most obvious and recent example. At the same time, we are creating the fastest-growing extinction event in the planet’s history, of which most of us are blissfully ignorant.
 
This post is already going off on tangents, but it’s hard to stay on track when there are so many ramifications; because none of these issues are the Achilles’ heel to which the title refers.
 
We have the incurable disease of following leaders who will unleash the worst of humanity onto itself. I wrote a post back in 2015, a year before Trump was elected POTUS, that was very prescient given the events that have occurred since. There are two traits such leaders have that not only define them but paradoxically explain their success.
 
Firstly, they are narcissistic in the extreme, which means that their self-belief is unassailable, no matter what happens. The entire world can collapse around them and somehow they’re untouchable. Secondly, they always come to power in times of division, which they exploit and then escalate to even greater effect. Humans are most irrational in ingroup-outgroup situations, which could be anything from a family dispute to a nationwide political division. Narcissists thrive in this environment, creating a narrative that only resembles the reality inside their head, but which their followers accept unquestioningly.
 
I’ve talked about leadership in other posts, but only fleetingly, and it’s an inherent and necessary quality in almost all endeavours; be it on a sporting field, on an engineering project, in a theatre or in a ‘house’ of government. There is a Confucian saying (so neither Western nor modern): If you want to know the true worth of a person, observe the effects they have on other people’s lives. I’ve long contended that the best leaders are those who bring out the best in the people they lead, which is the opposite of narcissists, who bring out the worst.
 
I’ve argued elsewhere that we are at a crossroads, which will determine the future of humanity for decades, if not centuries ahead. No one can predict what this century will bring, in the same way that no one predicted all the changes that occurred in the last century. My only prediction is that the changes in this century will be even greater and more impactful than the last. And whether that will be for the better or the worse, I don’t believe anyone can say.
 
Do I have an answer? Of course not, but I will make some observations. Virtually my whole working life was spent on engineering projects, which have invariably involved an ingroup-outgroup dynamic. Many people believe that conflict is healthy because it creates competition and by some social-Darwinian effect, the best ideas succeed and are adopted. Well, I’ve seen the exact opposite, and I witness it in our political environment all the time.
 
In reality, what happens is that one side will look for, and find, something negative about every engineering solution to a problem that is proposed. This means that there is continuous stalemate and the project suffers in every way imaginable – morale is depleted, everything is drawn out and we have time and cost overruns, which feed the blame-game to new levels. At worst, the sides end up in legal dispute, where, I should point out, I’ve had considerable experience.
 
By contrast, when the sides work collaboratively, people compromise and respect the expertise of their counterparts. Problems and issues get resolved, and the project is ultimately successful. A lot of this depends on the temperament and skills of the project leader. Leadership requires good people skills.
 
Someone once did a study in the United States last century (I no longer have the reference), looking for the traits of individuals who were eminently successful. What they found was that the determining factor was not education or IQ, though they helped. The single most important factor was the ability to form consensus.
 
If one looks at prolonged conflicts, like we’ve witnessed in Ireland or the Middle East, people involved in talks will tell you that the ‘hardliners’ will never find peace, only the moderates will. So, if there is a lesson to be learned, it’s not to follow leaders who sow and reap division, but those who are inclusive. That means giving up our ingroup-outgroup mentality, which appears impossible. But, until we do, the incurable disease will recur and we will self-destruct by simply following the cult that self-destructive narcissists are so masterfully capable of growing.
 

Saturday 11 June 2022

Does the "unreasonable effectiveness of Mathematics" suggest we are in a simulation?

 This was a question on Quora, and I provided 2 responses: one being a comment on someone else’s post (whom I follow); and the other being my own answer.

Some years ago, I wrote a post on this topic, but this one offers a different perspective, or rather 2 different perspectives. Also, in the last year, I saw a talk given by David Chalmers on the effects of virtual reality. He pointed out that when we’re in a virtual reality using a visor, we trick our brains into treating it as if it’s real. I don’t find this surprising, though I’ve never had the experience. As a sci-fi writer, I’ve imagined future theme parks that were fully immersive simulations. But I don’t believe that provides an argument that we live in a simulation, for reasons I give in my Quora responses, below.

 

Comment:

 

Actually, we create a ‘simulacrum’ of the ‘observable’ world in our heads, which is different to what other species might have. For example, most birds have 300 degree vision, plus they see the world in slow motion compared to us.

 

And this simulacrum is so fantastic it actually ‘feels’ like it exists outside your head. How good is that? 

 

But here’s the thing: in all these cases (including other species) that simulacrum must have a certain degree of faithfulness or accuracy with ‘reality’, because we interact with it on a daily basis, and, guess what? It can kill you.

 

But there is a solipsist version of this, which happens when we dream, but it won’t kill you, as far as we can tell, because we usually wake up.

 

Maybe I should write this as a separate answer.

 

And I did:

 

One word answer: No.

 

But having said that, there are 2 parts to this question, the first part being the quote from the title of Eugene Wigner’s famous essay. But I prefer this quote from the essay itself, because it succinctly captures what the essay is all about.

 

It is difficult to avoid the impression that a miracle confronts us here… or the two miracles of the existence of laws of nature and of the human mind’s capacity to divine them.

 

This should be read in conjunction with another famous quote; this time from Einstein:

 

The most incomprehensible thing about the Universe is that it’s comprehensible.

 

And it’s comprehensible because its laws can be rendered in the language of mathematics, and humans have the unique ability (at least on Earth) to comprehend that language, even though it appears to be never-ending.

 

And this leads into the philosophical debate going as far back as Plato and Aristotle: is mathematics invented or discovered?

 

The answer to that question is dependent on how you look at mathematics. Cosmologist and Fellow of the Royal Society, John Barrow, wrote a very good book on this very topic, called Pi in the Sky. In it, he makes the pertinent point that mathematics is not so much about numbers as the relationships between numbers. He goes further and observes that once you make this leap of cognitive insight, a whole new world opens up.

 

But here’s the thing: we have invented a system of numbers, most commonly to base 10 (but other systems as well), along with specific operators and notations that provide a language to describe and mentally manipulate these relationships. But the relationships themselves are not created by us: they become manifest in our explorations. To give an extremely basic example: prime numbers. You cannot create a prime number – they simply exist – and you can’t change one into a non-prime number or vice versa. And this example is basic in the literal sense: primes are called the ‘atoms’ of mathematics, because all the other ‘natural’ numbers can be derived from them.
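To make this concrete, here’s a small Python sketch of my own (names and all are my invention, not from any reference): it recovers the prime ‘atoms’ of a number by trial division. We chose the notation, but no choice of ours can alter which primes come out.

```python
def prime_factors(n):
    """Return the prime factorisation of n (n > 1) by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime as often as it appears
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

# 360 decomposes one way only: 2 x 2 x 2 x 3 x 3 x 5
print(prime_factors(360))
```

The same factorisation falls out regardless of the base or notation we use to write the number, which is the point: the relationship is discovered, not invented.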

 

An interest in the stars started early among humans, and eventually some very bright people, notably Kepler and Newton, came to realise that the movement of the planets could be described very precisely by mathematics. And then Einstein, using Riemannian geometry, vectors, calculus, matrices and the Lorentz transformation, was able to describe the planets even more accurately, and even provide very accurate models of the entire observable universe – though recently we’ve come up against the limits of this, and we now need new theories and possibly new mathematics.


But there is something else that Einstein’s theories don’t tell us: the planetary orbits are chaotic, which means they are unpredictable, and that means they could eventually unravel. And here’s another thing: to calculate chaotic phenomena exactly would require computation to infinite decimal places. Therefore I contend the Universe can’t be a computer simulation. So that’s the long version of NO.
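To illustrate the sensitivity involved, here’s a toy Python sketch of my own, using the logistic map as a stand-in for a chaotic orbit (real planetary dynamics are far more complex): two starting values differing at the 12th decimal place soon bear no resemblance to each other, which is why any finite-precision simulation of chaos eventually fails.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a standard toy model of chaos."""
    return r * x * (1 - x)

x, y = 0.4, 0.4 + 1e-12   # initial conditions differing at the 12th decimal
max_gap = 0.0
for _ in range(50):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

# The tiny initial difference is roughly doubled at every step, until the
# two trajectories diverge completely (the gap reaches order 1).
print(max_gap)
```

However many decimal places you carry, a chaotic system will eventually amplify the digits you dropped, so no finite computation can track it forever.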

 

 

Footnote: Both my comment and my answer were ‘upvoted’ by Eric Platt, who has a PhD in mathematics (from University of Houston) and was a former software engineer at UCAR (University Corporation for Atmospheric Research).


Sunday 22 May 2022

We are metaphysical animals

 I’m reading a book called Metaphysical Animals (How Four Women Brought Philosophy Back To Life). The four women were Mary Midgley, Iris Murdoch, Philippa Foot and Elizabeth Anscombe. The first two I’m acquainted with and the last two, not. They were all at Oxford during the War (WW2) at a time when women were barely tolerated in academia and had to be ‘chaperoned’ to attend lectures. Also a time when some women students ended up marrying their tutors. 

The book is authored by Clare Mac Cumhaill and Rachael Wiseman, both philosophy lecturers who became friends with Mary Midgley in her final years (Mary died in 2018, aged 99). The book is part biographical of all 4 women and part discussion of the philosophical ideas they explored.

 

Bringing ‘philosophy back to life’ is an allusion to the response (backlash is too strong a word) to the empiricism, logical positivism and general rejection of metaphysics that had taken hold of English philosophy, also known as analytical philosophy. Iris spent time in postwar Paris where she was heavily influenced by existentialism and Jean-Paul Sartre, in particular, whom she met and conversed with. 

 

If I were to categorise myself, I’m a combination of analytical philosopher and existentialist, which I suspect many would see as a contradiction. But this isn’t deliberate on my part – more a consequence of pursuing my interests, which are science on one hand (with a liberal dose of mathematical Platonism) and how to live a ‘good life’ (to paraphrase Aristotle) on the other.

 

Iris was intellectually seduced by Sartre’s exhortation: “Man is nothing else but that which he makes of himself”. But as her own love life fell apart, along with all its inherent dreams and promises, she found Sartre’s implicit doctrine – of standing solitary and independent of one’s milieu – difficult to put into practice. I’m not sure if Iris was already a budding novelist at this stage of her life, but anyone who writes fiction knows that this is what it’s all about: the protagonist sailing their lone ship on a sea full of icebergs and other vessels, all of which are outside their control. Life, like the best fiction, is an interaction between the individual and everyone else they meet. Your moral compass, in particular, is often tested. Existentialism can be seen as an attempt to rise above this, but most of us don’t.

 

Not surprisingly, Wittgenstein looms large in many of the pages, and at least one of the women, Elizabeth Anscombe, had significant interaction with him. With Wittgenstein comes an emphasis on language, which has arguably determined the path of philosophy since. I’m not a scholar of Wittgenstein by any stretch of the imagination, but one thing he taught, or that people took from him, was that the meaning we give to words is a consequence of how they are used in ordinary discourse. Language requires a widespread consensus to actually work. It’s something we rarely think about but we all take for granted, otherwise there would be no social discourse or interaction at all. There is an assumption that when I write these words, they have the same meaning for you as they do for me, otherwise I am wasting my time.

 

But there is a way in which language is truly powerful, and I have done this myself. I can write a passage that creates a scene inside your mind complete with characters who interact and can cause you to laugh or cry, or pretty much any other emotion, as if you were present; as if you were in a dream.

 

There are a couple of specific examples in the book which illustrate Wittgenstein’s influence on Elizabeth and how she used them in debate. They are both topics I have discussed myself without knowing of these previous discourses.

 

In 1947, so just after the war, Elizabeth presented a paper to the Cambridge Moral Sciences Club, which she began with the following disclosure:

 

Everywhere in this paper I have imitated Dr Wittgenstein’s ideas and methods of discussion. The best that I have written is a weak copy of some features of the original, and its value depends only on my capacity to understand and use Dr Wittgenstein’s work.

 

The subject of her talk was whether one can truly talk about the past, a question that goes back to the pre-Socratic philosopher, Parmenides. In her own words, paraphrasing Parmenides, ‘To speak of something past’ would then be to ‘point our thought’ at ‘something there’, but out of reach. Bringing Wittgenstein into the discussion, she claimed that Parmenides’ specific paradox about the past arose ‘from the way that thought and language connect to the world’.

 

We apply language to objects by naming them, but, in the case of the past, the objects no longer exist. She attempts to resolve this epistemological dilemma by discussing the nature of time as we experience it, which is like a series of pictures that move on a timeline while we stay in the present. This is analogous to my analysis that everything we observe becomes the past as soon as it happens, which is exemplified every time someone takes a photo, but we remain in the present – the time for us is always ‘now’.

 

She explains that the past is a collective recollection, recorded in documents and photos, so it’s dependent on a shared memory. I would say that this is what separates our recollection of a real event from a dream, which is solipsistic and not shared with anyone else. But it doesn’t explain why the past appears fixed and the future unknown, which she also attempted to address. I don’t think this can be addressed without discussing physics.

 

Most physicists will tell you that the asymmetry between the past and future can only be explained by the second law of thermodynamics, but I disagree. I think it is described, if not explained, by quantum mechanics (QM) where the future is probabilistic with an infinitude of possible paths and classical physics is a probability of ONE because it’s already happened and been ‘observed’. In QM, the wave function that gives the probabilities and superpositional states is NEVER observed. The alternative is that all the futures are realised in alternative universes. Of course, Elizabeth Anscombe would know nothing of these conjectures.

 

But I would make the point that language alone does not resolve this. Language can only describe these paradoxes and dilemmas but not explain them.

 

Of course, there is a psychological perspective to this, which many people claim, including physicists, gives the only sense of time passing. According to them, it’s fixed: past, present and future; and our minds create this distinction. I think our minds create the distinction because only consciousness creates a reference point for the present. Everything non-sentient is in a causal relationship that doesn’t sense time. Photons of light, for example, exist in zero time, yet they determine causality. Only light separates everything in time as well as space. I’ve gone off-topic.

 

Elizabeth touched on the psychological aspect, possibly unintentionally (I’ve never read her paper, so I could be wrong): our memories of the past are actually imagined. We use the same part of the brain to imagine the past as we do to imagine the future, but again, Elizabeth wouldn’t have known this. Nevertheless, she understood that our (only) knowledge of the past is a thought that we turn into language in order to describe it.

 

The other point I wish to discuss is a famous debate she had with C.S. Lewis. This is quite something, because back then, C.S. Lewis was a formidable intellectual figure. Elizabeth’s challenge was all the more remarkable because Lewis’s argument appeared on the surface to be very sound. Lewis argued that the ‘naturalist’ position was self-refuting if it was dependent on ‘reason’, because reason by definition (not his terminology) is based on the premise of cause and effect and human reason has no cause. That’s a simplification, nevertheless it’s the gist of it. Elizabeth’s retort:

 

What I shall discuss is this argument’s central claim that a belief in the validity of reason is inconsistent with the idea that human thought can be fully explained as the product of non-rational causes.

 

In effect, she argued that reason is what humans do perfectly naturally, even if the underlying ‘cause’ is unknown. Not knowing the cause does not make the reasoning irrational nor unnatural. Elizabeth specifically cited the language that Lewis used. She accused him of confusing the concepts of “reason”, “cause” and “explanation”.

 

My argument would be subtly different. For a start, I would contend that by ‘reason’, he meant ‘logic’, because drawing conclusions based on cause and effect is logic, even if the causal relations (under consideration) are assumed or implied rather than observed. And here I contend that logic is not a ‘thing’ – it’s not an entity; it’s an action – something we do. In the modern age, machines perform logic, sometimes better than we do.

 

Secondly, I would ask Lewis, does he think reason only happens in humans and not other animals? I would contend that animals also use logic, though without language. I imagine they’d visualise their logic rather than express it in vocal calls. The difference with humans is that we can perform logic at a whole different level, but the underpinnings in our brains are surely the same. Elizabeth was right: not knowing its physical origins does not make it irrational; they are separate issues.

 

Elizabeth had a strong connection to Wittgenstein right up to his death. She worked with him on a translation and edit of Philosophical Investigations, and he bequeathed her a third of his estate and a third of his copyright.

 

It’s apparent from Iris’s diaries and other sources that Elizabeth and Iris fell in love at one point in their friendship, which caused them both a lot of angst and guilt because of their Catholicism. Despite marrying, Iris later had an affair with Pip (Philippa).

 

Despite my discussion of just 2 of Elizabeth’s arguments, I don’t have the level of erudition necessary to address most of the topics that these 4 philosophers published in. Just reading the 4-page Afterword, it’s clear that I haven’t even brushed the surface of what they achieved. Nevertheless, I have a philosophical perspective that I think finds some resonance with their mutual ideas.

 

I’ve consistently contended that the starting point for my philosophy is that for each of us individually, there is an inner and outer world. It even dictates the way I approach fiction. 

 

In the latest issue of Philosophy Now (Issue 149, April/May 2022), Richard Oxenberg, who teaches philosophy at Endicott College in Beverly, Massachusetts, wrote an article titled, What Is Truth? wherein he describes an interaction between 2 people, but only from a purely biological and mechanical perspective, and asks, ‘What is missing?’ Well, even though he doesn’t spell it out, what is missing is the emotional aspect. Our inner world is dominated by emotional content and one suspects that this is not unique to humans. I’m pretty sure that other creatures feel emotions like fear, affection and attachment. What’s more I contend that this is what separates, not just us, but the majority of the animal kingdom, from artificial intelligence.

 

But humans are unique, even among other creatures, in our ability to create an inner world every bit as rich as the one we inhabit. And this creates a dichotomy that is reflected in our division of arts and science. There is a passage on page 230 (where the authors discuss R.G. Collingwood’s influence on Mary), and provide an unexpected definition.

 

Poetry, art, religion, history, literature and comedy are all metaphysical tools. They are how metaphysical animals explore, discover and describe what is real (and beautiful and good). (My emphasis.)

 

I thought this summed up what they mean by their coinage, metaphysical animals, which gives the book its title and arguably describes humanity’s most unique quality. Descriptions of metaphysics vary and elude precise definition, but the word ‘transcendent’ comes to mind. By which I mean it’s knowledge or experience that transcends the physical world and is most evident in art, music and storytelling, but also includes mathematics in my Platonic worldview.


 

Footnote: I should point out that certain chapters in the book give considerable emphasis to moral philosophy, which I haven’t even touched on, so another reader might well discuss other perspectives.