Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Sunday 18 February 2024

What would Kant say?

Even though this is a philosophy blog, my knowledge of Western philosophy is far from comprehensive. I’ve read some of the classic texts, like Aristotle’s Nicomachean Ethics, Descartes’ Meditations, Hume’s A Treatise of Human Nature, Kant’s Critique of Pure Reason; all a long time ago. I’ve read extracts from Plato, as well as Sartre’s Existentialism is a Humanism and Mill’s Utilitarianism. As you can imagine, I only recollect fragments, since I haven’t revisited them in years.
 
Nevertheless, there are a few essays on this blog that go back to the time when I did. One of those is an essay on Kant, which I retitled, Is Kant relevant to the modern world? Not so long ago, I wrote a post that proposed Kant as an unwitting bridge between Plato and modern physics. I say, ‘unwitting’, because, as far as I know, Kant never referenced a connection to Plato, and it’s quite possible that I’m the only person who has. Basically, I contend that the Platonic realm, which is still alive and well in mathematics, is a good candidate for Kant’s transcendental idealism, while acknowledging Kant meant something else. Specifically, Kant argued that time and space, like sensory experiences of colour, taste and sound, only exist in the mind.
 
Here is a good video, which explains Kant’s viewpoint better than I can. If you watch it to the end, you’ll find the guy who plays Devil’s advocate to the guy expounding on Kant’s views makes the most compelling arguments (they’re both animated icons).

But there are a couple of points they don’t make which I do. We ‘sense’ time and space in the same way we sense light, sound and smell, to create a model inside our heads that attempts to match the world outside our heads, so we can interact with it without getting killed. In fact, our modelling of time and space is arguably more important than any other aspect of that model.
 
I’ve always had a mixed, even contradictory, appreciation of Kant. I consider his insight that we may never know the things-in-themselves to be his greatest contribution to epistemology, and it was arguably affirmed by 20th Century physics. Both relativity and quantum mechanics (QM) have demonstrated that what we observe does not necessarily reflect reality. Specifically, different observers can see and even measure different parameters of the same event. This is especially true when relativistic effects come into play.
 
In relativity, different observers not only disagree on time and space durations, but they can’t agree on simultaneity. As the Kant advocate in the video points out, surely this is evidence that space and time only exist in the mind, as Kant originally proposed. The Devil’s advocate resorts to an argument of 'continuity', meaning that without time as a property independent of the mind, objects and phenomena (like a candle burning) couldn’t continue to happen without an observer present.
 
But I would argue that Einstein’s general theory of relativity, which tells us that different observers can measure different durations of space and time (I’ll come back to this later), also tells us that the entire universe requires a framework of space and time for the objects to exist at all. In other words, GR tells us, mathematically, that there is an interdependence between the gravitational field that permeates and determines the motion of objects throughout the entire universe, and the spacetime metric those same objects inhabit. In fact, they are literally on opposite sides of the same equation.
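For readers who want to see it, this is the standard textbook form of Einstein’s field equations (my addition here, not a quote from anyone above), with the geometry of spacetime on the left and its matter-energy content on the right:

```latex
% Standard form of Einstein's field equations (geometry on the left,
% matter-energy content on the right):
\[
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}} T_{\mu\nu}
\]
% G_{\mu\nu} encodes the curvature of the spacetime metric g_{\mu\nu},
% \Lambda is the cosmological constant, and T_{\mu\nu} is the stress-energy
% tensor of everything inhabiting that spacetime.
```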
 
And this brings me to the other point that I think is missing in the video’s discussion. Towards the end, the Devil’s advocate introduces ‘the veil of perception’ and argues:
 
We can only perceive the world indirectly; we have no idea what the world is beyond this veil… How can we then theorise about the world beyond our perceptions? …Kant basically claims that things-in-themselves exist but we do not know and cannot know anything about these things-in-themselves… This far-reaching world starts to feel like a fantasy.
 
But every physicist has an answer to this, because 20th Century physics has taken us further into this so-called ‘fantasy’ than Kant could possibly have imagined, even though it appears to be a neverending endeavour. And it’s specifically mathematics that has provided the means, which the 2 Socratic-dialogue icons have ignored. That is why I contend that it’s mathematical Platonism that has replaced Kant’s transcendental idealism. It’s rendered by the mind yet it models reality better than anything else we have available. It’s the only means we have available to take us behind ‘the veil of perception’ and reveal the things-in-themselves.
 
And this leads me to a related point that was actually the trigger for me writing this in the first place.
 
In my last post, I mentioned I’m currently reading Kip S. Thorne’s book, Black Holes and Time Warps: Einstein’s Outrageous Legacy (1994). It’s an excellent book on many levels, because it not only gives a comprehensive history, involving both Western and Soviet science, it also provides insights and explanations most of us are unfamiliar with.
 
To give an example that’s relevant to this post, Thorne explains how making measurements in the extreme curvature of spacetime near the event horizon of a black hole gives exactly the same answer whether it’s the spacetime that distorts while the ‘rulers’ remain unchanged, or it’s the rulers that change while the spacetime remains ‘flat’. We can’t tell the difference. And this effectively confirms Kant’s thesis that we can never know the things-in-themselves.
 
To quote Thorne:
 
What is the genuine truth? Is spacetime really flat, or is it really curved? To a physicist like me this is an uninteresting question because it has no physical consequences (my emphasis). Both viewpoints, curved spacetime and flat, give the same predictions for any measurements performed with perfect rulers and clocks… (Earlier he defines ‘perfect rulers and clocks’ as being derived at the atomic scale)
 
Ian Miller (a physicist who used to be active on Quora) once made the point, regarding space-contraction, that it’s the ruler that deforms and not the space. And I’ve made the point myself that a clock can effectively be a ruler, because a clock that runs slower measures a shorter distance for a given velocity, compared to another so-called stationary observer who will measure the same distance as longer. This happens in the twin paradox thought experiment, though it’s rarely mentioned (even by me).
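To put rough numbers on that last point (my own illustration, not an example from Thorne or Miller):

```latex
% Illustrative numbers only: a traveller moves at v = 0.8c relative to Earth.
\[
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} = \frac{1}{\sqrt{1 - 0.8^{2}}} = \frac{5}{3}
\]
% Earth frame: the destination is 10 light-years away, so the trip takes
% 10/0.8 = 12.5 years of Earth time.
% Traveller's frame: the clock records 12.5/\gamma = 7.5 years, and the
% distance is contracted to 10/\gamma = 6 light-years, which at 0.8c also
% takes 6/0.8 = 7.5 years. The slower clock and the shorter 'ruler' are two
% descriptions of the same measurement.
```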

Saturday 13 January 2024

How can we achieve world peace?

 Two posts ago, I published my submission to Philosophy Now's Question of the Month, from 2 months ago: What are the limits of knowledge? It was published in Issue 159 (Dec 2023/Jan 2024). Logically, they inform readers of the next Question of the Month, which is the title of this post. I'm almost certain they never publish 2 submissions by the same author in a row, so I'm publishing this answer now. It's related to my last post, obviously, and one I wrote some time ago (Humanity's Achilles Heel).


There are many aspects to this question, not least whether one is an optimist or a pessimist. It’s well known that people underestimate the duration and cost of a project, even when it’s their profession, because people are optimists by default. Only realists are pessimistic, and I’m in the latter category, because I estimate the duration of projects professionally.
 
There are a number of factors that militate against world peace, the primary one being that humans are inherently tribal and are quick to form ingroup-outgroup mental-partitions, exemplified by politics the world over. In this situation, rational thought and reasoned argument take a back seat to confirmation bias and emotive rhetoric. Add to this dynamic the historically observed and oft-repeated phenomenon that we follow charismatic, cult-propagating leaders, and you have a recipe for self-destruction on a national scale. This is the biggest obstacle to world peace. These leaders thrive on and cultivate division with its kindred spirits of hatred and demonisation of the ‘other’: the rationale for all of society’s ills becomes an outgroup identified by nationality, race, skin-colour, culture or religion.
 
Wealth, or the lack of it, is a factor as well. Inequality provides a motive and a rationale for conflict. It often goes hand-in-hand with oppression, but even when it doesn’t, the anger and resentment can be exploited and politicised by populist leaders, whose agenda is more focused on their own sense of deluded historical significance than actually helping the people they purportedly serve.
 
If you have conflict – and it doesn’t have to be military – then as long as you have leaders who refuse to compromise, you’ll never find peace. Only moderates on both sides can broker peace.
 
So, while I’m a pessimist or realist, I do see a ‘how’. If we only elect leaders who seek and find consensus, and remove leaders who sow division, there is a chance. The best leaders, be they corporate, political or on a sporting field, are the ones who bring out the best in others and are not just feeding their own egos. But all this is easier said than done, as we are witnessing in certain parts of the world right now. For as long as we elect leaders who are narcissistic and cult-like, we will continue to sow the seeds of self-destruction.

Sunday 3 December 2023

Philosophy in practice

 As someone recently pointed out, my posts on this blog invariably arise from things I have read (sometimes watched) and I’ve already written a post based on a column I read in the last issue of Philosophy Now (No 158, Oct/Nov 2023).
 
Well, I’ve since read a few more articles and they have prompted quite a lot of thinking. Firstly, there is an article called What Happened to Philosophy? by Dr Alexander Jeuk, who is, to quote, “an independent researcher writing on philosophy, economics, politics and the institutional structure of science.” He compares classical philosophy (in his own words, the ‘great philosophers’) with the way philosophy is practiced today in academia – that place most of us don’t visit and wouldn’t understand the language if we did.
 
I don’t want to dwell on it, but its relevance to this post is that he laments the specialisation of philosophy, which he blames (if I can use that word) on the specialisation of science. The specialisation of most things is not a surprise to anyone who works in a technical field (I work in engineering). I should point out that I’m not a technical person, so I’m a non-specialist who works in a specialist field. Maybe that puts me in a better position than most to address this. I have a curious mind that started young, and my curiosity shifted as I got older, which means I never really settled into one area of knowledge; and, if I had, I didn’t quite have the intellectual ability to become competent in it. And that’s why this blog is a bit eclectic.
 
In his conclusion, Jeuk suggests that ‘great philosophy’ should be looked for ‘in the classics, and perhaps encourage a re-emergence of great philosophical thought from outside academia.’ He mentions social media and the internet, which is relevant to this blog. I don’t claim to do ‘great philosophy’; I just attempt to disperse ideas and provoke thought. But I think that’s what philosophy represents to most people outside of academia. Academic philosophy has become lost in its obsession with language, whilst using language that most find abstruse, if not opaque.
 
Another article was titled Does a Just Society Require Just Citizens? by Jimmy Alfonso Licon, Assistant Teaching Professor in Philosophy at Arizona State University. I wouldn’t call the title misleading, but it doesn’t really describe the content of the essay, or even get to the gist of it, in my view. Licon introduces a term, ‘moral mediocrity’, which might have been a better title, if an enigmatic one, as it’s effectively what he discusses for the next, not-quite 3 pages.
 
He makes the point that our moral behaviour stems from social norms – a point I’ve made myself – but he makes it more compellingly. Most of us do ‘moral’ acts because that’s what our peers do, and we are species-destined (my term, not his) to conform. This is what he calls moral mediocrity, because we don’t really think it through or deliberate on whether it’s right or wrong, though we might convince ourselves that we do. He makes the salient point that if we had lived when slavery was the norm, we would have been slave-owners (assuming the reader is white, affluent and male). Likewise, suffrage was once anathema to a lot of women, as well as men. This supports my view that morality changes, and what was once considered radical becomes conservative. And such changes are usually generational, as we are witnessing in the current age with marriage equality.
 
He coins another term, when he says ‘we are the recipients of a moral inheritance’ (his italics). In other words, the moral norms we follow today, we’ve inherited from our forebears. Towards the end of his essay, he discusses Kant’s ideas on ‘duty’. I won’t go into that, but, if I understand Licon’s argument correctly, he’s saying that a ‘just society’ is one that has norms and laws that allow moral mediocrity, whereby its members don’t have to think about what’s right or wrong; they just follow the rules. This leads to his very last sentence: ‘And this is fundamentally the moral problem with moral mediocrity: it is wrongly motivated.’
 
I’ve written on this before, and, given the title as well as the content, I needed to think on what I consider leads to a ‘just society’. And I keep coming back to the essential need for trust. Societies don’t function without some level of trust, but neither do personal relationships, contractual arrangements or the raising of children.
 
And this leads to the third article in the same issue, Seeing Through Transparency, by Paul Doolan, who ‘teaches philosophy at Zurich International School and is the author of Collective Memory and the Dutch East Indies: Unremembering Decolonization (Amsterdam Univ Press, 2021)’.
 
In effect, he discusses the paradoxical nature of modern societies, whereby we insist on ‘transparency’ yet claim that privacy is sacrosanct – see the contradiction? Is this hypocrisy? And this relates directly to trust. Without transparency, be it corporate or governmental, we have trust issues. My experience is that when it comes to personal relationships, it’s a given, a social norm in fact, that a person reveals as much of their interior life as they want to, and it’s not ours to mine. An example of moral mediocrity perhaps. And yet, as Doolan points out, we give away so much on social media, where our online persona takes on a life of its own, which we cultivate (this blog not being an exception).
 
I think there does need to be transparency about decisions that affect our lives collectively, as opposed to secrets we all keep for the sake of our sanity. I have written dystopian fiction where people are surveilled to the point of monitoring all speech, and explored how it affects personal relationships. This already happens in some parts of the world. I’ve also explored a dystopian scenario where the surveillance is less obvious – every household has an android that monitors all activity. We might already have that with certain devices in our homes. Can you turn them off?  Do you have a device that monitors everyone who comes to your door?
 
The thing is that we become habituated to their presence, and it becomes part of our societal structure. As I said earlier, social norms change and are largely generational. Now they incorporate AI as well, and it’s happening without a lot of oversight or consultation with users. I don’t want to foster paranoia, but the genie has already escaped and I’d suggest it’s a matter of how we use it rather than how we put it back in the bottle.

Leaving that aside, Doolan also asks if you would behave differently if you could be completely invisible, which, of course, has been explored in fiction. We all know that anonymity fosters bad behaviour – just look online. One of my tenets is that honesty starts with honesty to oneself; it determines how we behave towards others.
 
I also know that an extreme environment, like a prison camp, can change one’s moral compass. I’ve never experienced it, but my father did. It brings out the best and worst in people, and I’d contend that you wouldn’t know how you’d be affected if you haven’t experienced it. This is an environment that turns Licon’s question on its head: can you be just in an intrinsically unjust environment?

Monday 23 October 2023

The mystery of reality

Many will say, ‘What mystery? Surely, reality just is.’ So, where to start? I’ll start with an essay by Raymond Tallis, who has a regular column in Philosophy Now called, Tallis in Wonderland – sometimes contentious, often provocative, always thought-expanding. His latest in Issue 157, Aug/Sep 2023 (new one must be due) is called Reflections on Reality, and it’s all of the above.
 
I’ve written on this topic many times before, so I’m sure to repeat myself. But Tallis’s essay, I felt, deserved both consideration and a response, partly because he starts with the one aspect of reality that we hardly ever ponder, which is doubting its existence.
 
Actually, not so much its existence, but whether our senses fool us, which they sometimes do, like when we dream (a point Tallis makes himself). And this brings me to the first point about reality that no one ever seems to discuss, and that is its dependence on consciousness, because when you’re unconscious, reality ceases to exist, for You. Now, you might argue that you’re unconscious when you dream, but I disagree; it’s just that your consciousness is misled. The point is that we sometimes remember our dreams, and I can’t see how that’s possible unless there is consciousness involved. If you think about it, everything you remember was laid down by a conscious thought or experience.
 
So, just to be clear, I’m not saying that the objective material world ceases to exist without consciousness – a philosophical position called idealism (advocated by Donald Hoffman) – but that the material objective world is ‘unknown’ and, to all intents and purposes, might as well not exist if it’s unperceived by conscious agents (like us). Try to imagine the Universe if no one observed it. It’s impossible, because the word, ‘imagine’, axiomatically requires a conscious agent.
 
Tallis proffers a quote from celebrated sci-fi author, Philip K Dick: 'Reality is that which, when you stop believing in it, doesn’t go away' (from The Shifting Realities of Philip K Dick, 1995). And this allows me to segue into the world of fiction, which Tallis doesn’t really discuss, but it’s another arena where we willingly ‘suspend disbelief' to temporarily and deliberately conflate reality with non-reality. This is something I have in common with Dick, because we have both created imaginary worlds that are more than distorted versions of the reality we experience every day; they’re entirely new worlds that no one has ever experienced in real life. But Dick’s aphorism expresses this succinctly. The so-called reality of these worlds, in these stories, only exists while we believe in them.
 
I’ve discussed elsewhere how the brain (not just human but animal brains, generally) creates a model of reality that is so ‘realistic’, we actually believe it exists outside our head.
 
I recently had a cataract operation, which was most illuminating when I took the bandage off, because my vision in that eye was so distorted, it made me feel seasick. Everything had a lean to it and it really did feel like I was looking through a lens; I thought they had botched the operation. With both eyes open, it looked like objects were peeling apart. So I put a new eye patch on, and distracted myself for an hour by doing a Sudoku problem. When I had finished it, I took the patch off and my vision was restored. The brain had made the necessary adjustments to restore the illusion of reality as I normally interacted with it. And that’s the key point: the brain creates a model so accurately, integrating all our senses (especially sight, sound and touch), that we think the model is the reality. And all creatures have evolved that facility simply so they can survive; it’s a matter of life-and-death.
 
But having said all that, there are some aspects of reality that really do only exist in your mind, and not ‘out there’. Colour is the most obvious, but so are sound and smell, which may all be experienced differently by other species – how are we to know? Actually, we do know that some animals can hear sounds that we can’t and see colours that we don’t, and vice versa. And I contend that these sensory experiences are among the attributes that keep us distinct from AI.
 
Tallis makes a passing reference to Kant, who argued that space and time are also aspects of reality that are produced by the mind. I have always struggled to understand how Kant got that so wrong. Mind you, he lived more than a century before Einstein all but proved that space and time are fundamental parameters of the Universe. Nevertheless, there are more than a few physicists who argue that the ‘flow of time’ is a purely psychological phenomenon. They may be right (but arguably for different reasons). If consciousness exists in a constant present (as expounded by Schrodinger) and everything else becomes the past as soon as it happens, then the flow of time is guaranteed for any entity with consciousness. However, many physicists (like Sabine Hossenfelder), if not most, argue that there is no ‘now’ – it’s an illusion.
 
Speaking of Schrodinger, he pointed out that there are fundamental differences between how we sense sight and sound, even though they are both waves. In the case of colour, we can blend them to get a new colour, and in fact, as we all know, all the colours we can see can be generated by just 3 colours, which is how the screens on all your devices work. However, that’s not the case with sound, otherwise we wouldn’t be able to distinguish all the different instruments in an orchestra. Just think: all the complexity is generated by a vibrating membrane (in the case of a speaker) and somehow our hearing separates it all. Of course, it can be done mathematically with a Fourier transform, but I don’t think that’s how our brains work, though I could be wrong.
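As a rough illustration of the mathematics (a sketch of my own, not anything Schrodinger or Tallis describes), a Fourier transform can pull apart a signal that is the sum of several tones, which is loosely what would be needed to recover individual instruments from the motion of a single membrane:

```python
# A minimal sketch: recovering the component frequencies of a summed signal
# with a Fourier transform. Illustrative only; real instruments have rich
# harmonic structure, and this is not a claim about how the brain does it.
import numpy as np

sample_rate = 8000                       # samples per second
t = np.arange(0, 1.0, 1 / sample_rate)   # one second of signal

# Two 'instruments': pure tones at 440 Hz and 660 Hz, mixed together
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)

# The FFT decomposes the mixture back into its frequency components
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

# Report the dominant frequencies (peaks in the magnitude spectrum)
magnitude = np.abs(spectrum)
peaks = freqs[magnitude > 0.1 * magnitude.max()]
print("Component frequencies found:", peaks)   # ~440 Hz and ~660 Hz
```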
 
And this leads me to discuss the role of science, and how it challenges our everyday experience of reality. Not surprisingly, Tallis also took his discussion in that direction. Quantum mechanics (QM) is the logical starting point, and Tallis references Bohr’s Copenhagen interpretation, ‘the view that the world has no definite state in the absence of observation.’ Now, I happen to think that there is a logical explanation for this, though I’m not sure anyone else agrees. If we go back to Schrodinger again, but this time his eponymous equation, it describes events before the ‘observation’ takes place, albeit with probabilities. What’s more, all the weird aspects of QM, like the Uncertainty Principle, superposition and entanglement, are all mathematically entailed in that equation. What’s missing is relativity theory, which has since been incorporated into QED or QFT.
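For reference, this is the standard time-dependent form of Schrodinger’s equation (the textbook version, not a quote from Tallis):

```latex
% Time-dependent Schrodinger equation (textbook form):
\[
i\hbar \, \frac{\partial}{\partial t} \Psi(\mathbf{r}, t) = \hat{H} \, \Psi(\mathbf{r}, t)
\]
% \Psi is the wavefunction, \hat{H} is the Hamiltonian (energy) operator,
% and |\Psi|^2 gives the probabilities referred to above (the Born rule).
```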
 
But here’s the thing: once an observation or ‘measurement’ has taken place, Schrodinger’s equation no longer applies. In other words, you can’t use Schrodinger’s equation to describe something that has already happened. This is known as the ‘measurement problem’, because no one can explain it. But if QM only describes things that are yet to happen, then all the weird aspects aren’t so weird.
 
Tallis also mentions Einstein’s 'block universe', which implies past, present and future all exist simultaneously. In fact, that’s what Sabine Hossenfelder says in her book, Existential Physics:
 
The idea that the past and future exist in the same way as the present is compatible with all we currently know.

 
And:

Once you agree that anything exists now elsewhere, even though you see it only later, you are forced to accept that everything in the universe exists now. (Her emphasis.)
 
I’m not sure how she reconciles this with cosmological history, but it does explain why she believes in superdeterminism (meaning the future is fixed), which axiomatically leads to her other strongly held belief that free will is an illusion. Einstein believed the same, so she’s in good company.
 
In a passing remark, Tallis says, ‘science is entirely based on measurement’. I know from other essays that Tallis has written, that he believes the entire edifice of mathematics only exists because we can measure things, which we then applied to the natural world, which is why we have so-called ‘natural laws’. I’ve discussed his ideas on this elsewhere, but I think he has it back-to-front, whilst acknowledging that our ability to measure things, which is an extension of counting, is how humanity was introduced to mathematics. In fact, the ancient Greeks put geometry above arithmetic because it’s so physical. This is why there were no negative numbers in their mathematics, because the idea of a negative volume or area made no sense.
 
But, in the intervening 2 millennia, mathematics took on a life of its own, with such exotic entities as negative square roots and non-Euclidean geometry, which in turn suddenly found an unexpected home in QM and relativity theory respectively. All of a sudden, mathematics was informing us about reality before measurements were even made. Take Schrodinger’s wavefunction, which lies at the heart of his equation, and can’t be measured because it only exists in the future, assuming what I said above is correct.
 
But I think Tallis has a point, and I would argue that consciousness can’t be measured, which is why it might remain inexplicable to science, correlation with brain waves and their like notwithstanding.
 
So what is the mystery? Well, there’s more than one. For a start there is consciousness, without which reality would not be perceived or even be known, which seems to me to be pretty fundamental. Then there are the aspects of reality which have only recently been discovered, like the fact that time and space can have different ‘measurements’ dependent on the observer’s frame of reference. Then there is the increasing role of mathematics in our comprehension of reality at scales both cosmic and subatomic. In fact, given the role of numbers and mathematical relationships in determining fundamental constants and natural laws of the Universe, it would seem that mathematics is an inherent facet of reality.
 

Saturday 16 September 2023

Modes of thinking

 I’ve written a few posts on creative thinking as well as analytical and critical thinking. But, not that long ago, I read a not-so-recently published book (2015) by 2 psychologists (John Kounios and Mark Beeman) titled, The Eureka Factor: Creative Insights and the Brain. To quote from the back fly-leaf:
 
Dr John Kounios is Professor of Psychology at Drexel University and has published cognitive neuroscience research on insight, creativity, problem solving, memory, knowledge representation and Alzheimer’s disease.
 
Dr Mark Beeman is Professor of Psychology and Neuroscience at Northwestern University, and researches creative problem solving and creative cognition, language comprehension and how the right and left hemispheres process information.

 
They divide people into 2 broad groups: ‘Insightfuls’ and ‘analytical thinkers’. Personally, I think the coined term, ‘insightfuls’ is misleading or too narrow in its definition, and I prefer the term ‘creatives’. More on that below.
 
As the authors say, themselves, ‘People often use the terms “insight” and “creativity” interchangeably.’ So that’s obviously what they mean by the term. However, the dictionary definition of ‘insight’ is ‘an accurate and deep understanding’, which I’d argue can also be obtained by analytical thinking. Later in the book, they describe insights obtained by analytical thinking as ‘pseudo-insights’, and the difference can be ‘seen’ with neuro-imaging techniques.
 
All that aside, they do provide compelling arguments that there are 2 distinct modes of thinking that most of us experience. Very early in the book (in the preface, actually), they describe the ‘ah-ha’ experience that we’ve all had at some point, where we’re trying to solve a puzzle and then it comes to us unexpectedly, like a light-bulb going off in our head. They then relate something that I didn’t know, which is that neurological studies show that when we have this ‘insight’ there’s a spike in our brain waves and it comes from a location in the right hemisphere of the brain.
 
Many years ago (decades) I read a book called Drawing on the Right Side of the Brain by Betty Edwards. I thought neuroscientists would disparage this as pop-science, but Kounios and Beeman seem to give it some credence. Later in the book, they describe this in more detail, where there are signs of activity in other parts of the brain, but the ah-ha experience has a unique EEG signature and it’s in the right hemisphere.
 
The authors distinguish this unexpected insightful experience from an insight that is a consequence of expertise. I made this point myself, in another post, where experts make intuitive shortcuts based on experience that the rest of us don’t have in our mental toolkits.
 
They also spend an entire chapter on examples involving a special type of insight, where someone spends a lot of time thinking about a problem or an issue, and then the solution comes to them unexpectedly. A lot of scientific breakthroughs follow this pattern, and the point is that the insight wouldn’t happen at all without all the rumination taking place beforehand, often over a period of weeks or months, sometimes years. I’ve experienced this myself, when writing a story, and I’ll return to that experience later.
 
A lot of what we’ve learned about the brain’s functions has come from studying people with damage to specific areas of the brain. You may have heard of a condition called ‘aphasia’, which is when someone develops a serious disability in language processing following damage to the left hemisphere (possibly from a stroke). What you probably don’t know (I didn’t) is that damage to the right hemisphere, while not directly affecting one’s ability with language, can interfere with its more nuanced interpretations, like sarcasm or even getting a joke. I’ve long believed that when I’m writing fiction, I’m using the right hemisphere as much as the left, but it never occurred to me that readers (or viewers) need the right hemisphere in order to follow a story.
 
According to the authors, the difference between the left and right neo-cortex is one of connections. The left hemisphere has ‘local’ connections, whereas the right hemisphere has more widely spread connections. This seems to correspond to an ‘analytic’ ability in the left hemisphere, and a more ‘creative’ ability in the right hemisphere, where we make conceptual connections that are more wide-ranging. I’ve probably oversimplified that, but it was the gist I got from their exposition.
 
Like most books and videos on ‘creative thinking’ or ‘insights’ (as the authors prefer), they spend a lot of time giving hints and advice on how to improve your own creativity. It’s not until one is more than halfway through the book, in a chapter titled, The Insightful and the Analyst, that they get to the crux of the issue, and describe how there are effectively 2 different types who think differently, even in a ‘resting state’, and how there is a strong genetic component.
 
I’m not surprised by this, as I saw it in my own family, where the difference is very distinct. In another chapter, they describe the relationship between creativity and mental illness, but they don’t discuss how artists are often moody and neurotic, which is a personality trait. Openness is another personality trait associated with creative people. I would add another point, based on my own experience: if someone is creative and they are not creating, they can suffer depression. This is not discussed by the authors either.
 
Regarding the 2 types they refer to, they acknowledge there is a spectrum, and I can’t help but wonder where I sit on it. I spent a working lifetime in engineering, which is full of analytic types, though I didn’t work in a technical capacity. Instead, I worked with a lot of technical people of all disciplines: from software engineers to civil and structural engineers to architects, not to mention lawyers and accountants, because I worked on disputes as well.
 
The curious thing is that I was aware of 2 modes of thinking, where I was either looking at the ‘big-picture’ or looking at the detail. I worked as a planner, and one of my ‘tricks’ was the ability to distil a large and complex project into a one-page ‘Gantt’ chart (bar chart). For the individual disciplines, I’d provide a multipage detailed ‘program’ just for them.
 
Of course, I also write stories, where the 2 components are plot and character. Creating characters is purely a non-analytic process, which requires a lot of extemporising. I try my best not to interfere, and I do this by treating them as if they are real people, independent of me. Plotting, on the other hand, requires a big-picture approach, but I almost never know the ending until I get there. In the last story I wrote, I was in COVID lockdown when I knew the ending was close, so I wrote some ‘notes’ in an attempt to work out what happens. Then, sometime later (like a month), I had one sleepless night when it all came to me. Afterwards, I went back and looked at my notes, and they were all questions – I didn’t have a clue.

Wednesday 7 June 2023

Consciousness, free will, determinism, chaos theory – all connected

 I’ve said many times that philosophy is all about argument. And if you’re serious about philosophy, you want to be challenged. And if you want to be challenged you should seek out people who are both smarter and more knowledgeable than you. And, in my case, Sabine Hossenfelder fits the bill.
 
When I read people like Sabine, and others whom I interact with on Quora, I’m aware of how limited my knowledge is. I don’t even have a university degree, though I’ve attempted a number of times. I’ve spent my whole life in the company of people smarter than me, including at school. Believe it or not, I still have occasional contact with them, through social media and school reunions. I grew up in a small rural town, where the people you went to school with feel like siblings.
 
Likewise, in my professional life, I have always encountered people cleverer than me – it provides perspective.
 
In her book, Existential Physics: A Scientist’s Guide to Life’s Biggest Questions, Sabine interviews people who are possibly even smarter than she is, and I sometimes found their conversations difficult to follow. To be fair to Sabine, she also sought out people who have different philosophical views to her, and also have the intellect to match her.
 
I’m telling you all this to put things in perspective. Sabine has her prejudices like everyone else, some of which she defends better than others. I concede that my views are probably more simplistic than hers, and I support my challenges with examples that are hopefully easy to follow. Our points of disagreement can be distilled down to a few pertinent topics, which are time, consciousness, free will and chaos. Not surprisingly, they are all related – what you believe about one, affects what you believe about the others.
 
Sabine is very strict about what constitutes a scientific theory. She argues that so-called theories like the multiverse have ‘no explanatory power’, because they can’t be verified or rejected by evidence, and she calls them ‘ascientific’. She’s critical of popularisers like Brian Cox who tell us that there could be an infinite number of ‘you(s)’ in an infinite multiverse. She distinguishes between beliefs and knowledge, which is a point I’ve made myself. Having said that, I’ve also argued that beliefs matter in science. She puts all interpretations of quantum mechanics (QM) in this category. She keeps emphasising that it doesn’t mean they are wrong, but they are ‘ascientific’. It’s part of the distinction that I make between philosophy and science, and why I perceive science as having a dialectical relationship with philosophy.
 
I’ll start with time, as Sabine does, because it affects everything else. In fact, the first chapter in her book is titled, Does The Past Still Exist? Basically, she argues for Einstein’s ‘block universe’ model of time, but it’s her conclusion that ‘now is an illusion’ that is probably the most contentious. This critique will cite a lot of her declarations, so I will start with her description of the block universe:
 
The idea that the past and future exist in the same way as the present is compatible with all we currently know.
 
This viewpoint arises from the fact that, according to relativity theory, simultaneity is completely observer-dependent. I’ve discussed this before, whereby I argue that an observer who is moving relative to a source, or is stationary relative to a moving source (like the observer standing on the platform in Einstein’s original thought experiment while a train goes past), knows this because of the Doppler effect. In other words, an observer who doesn’t see a Doppler effect is in a privileged position, because they are in the same frame of reference as the source of the signal. This is why we know the Universe is expanding with respect to us, and why we can work out our movement with respect to the CMBR (cosmic microwave background radiation), hence to the overall universe (just think about that).
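For what it’s worth, the standard relativistic Doppler formula (my addition, not something Sabine invokes here) makes the point explicit:

```latex
% Relativistic Doppler shift for a source receding at speed v (\beta = v/c):
\[
f_{\mathrm{obs}} = f_{\mathrm{source}} \sqrt{\frac{1 - \beta}{1 + \beta}}
\]
% Only when \beta = 0 is f_obs = f_source; i.e. seeing no Doppler shift
% means you share the source's frame of reference.
```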
 
Sabine clinches her argument by drawing a spacetime diagram, where 2 independent observers moving away from each other observe a pulsar with 2 different simultaneities. One, who is travelling towards the pulsar, sees the pulsar simultaneously with someone’s birth on Earth, while the one travelling away from the pulsar sees it simultaneously with the same person’s death. This is her slam-dunk argument that ‘now’ is an illusion, if it can produce such a dramatic contradiction.
 
However, I drew up my own spacetime diagram of the exact same scenario, where no one is travelling relative to anyone else, yet it creates the same apparent contradiction.


 My diagram follows the convention in that the horizontal axis represents space (all 3 dimensions) and the vertical axis represents time. So the 4 dotted lines represent 4 observers who are ‘stationary’ but ‘travelling through time’ (vertically). As per convention, light and other signals are represented as diagonal lines at 45 degrees, as they are travelling through both space and time, and nothing can travel faster than them. So they also represent the ‘edge’ of their light cones.
 
So notice that observer A sees the birth of Albert when he sees the pulsar and observer B sees the death of Albert when he sees the pulsar, which is exactly the same as Sabine’s scenario, with no relativity theory required. Albert, by the way, for the sake of scalability, must have lived for thousands of years, so he might be a tree or a robot.
 
But I’ve also added 2 other observers, C and D, who see the pulsar before Albert is born and after Albert dies respectively. But, of course, there’s no contradiction, because it’s completely dependent on how far away they are from the sources of the signals (the pulsar and Earth).
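To show the arithmetic behind my diagram (toy numbers of my own choosing), what each stationary observer ‘sees’ is just the event’s time plus the light travel time from where it happened:

```python
# A toy version of my spacetime diagram: everyone is stationary, and what
# each observer 'sees' depends only on light travel time (distances in
# light-years, times in years, c = 1). The numbers are illustrative only.
PULSAR_X, EARTH_X = 0, 6000

pulsar_flash = {"x": PULSAR_X, "t": 0}
albert_birth = {"x": EARTH_X, "t": 1000}
albert_death = {"x": EARTH_X, "t": 5000}   # Albert 'lives' 4,000 years

def seen_at(event, observer_x):
    """Time at which a stationary observer at observer_x sees the event."""
    return event["t"] + abs(observer_x - event["x"])

observers = {"A": 3500, "B": 5500, "C": 1000, "D": 12000}
for name, x in observers.items():
    print(f"{name}: sees pulsar at t={seen_at(pulsar_flash, x)}, "
          f"birth at t={seen_at(albert_birth, x)}, "
          f"death at t={seen_at(albert_death, x)}")

# A sees the pulsar simultaneously with Albert's birth, B simultaneously
# with his death, C sees the pulsar before seeing the birth, and D sees it
# after seeing the death - purely a matter of distance, with no relative
# motion (and no relativity) required.
```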
 
This is Sabine’s perspective:
 
Once you agree that anything exists now elsewhere, even though you see it only later, you are forced to accept that everything in the universe exists now. (Her emphasis.)
 
I actually find this statement illogical. If you take it to its logical conclusion, then the Big Bang exists now and so does everything in the universe that’s yet to happen. If you look at the first quote I cited, she effectively argues that the past and future exist alongside the present.
 
One of the points she makes is that, for events with causal relationships, all observers see the events happening in the same sequence. The scenarios where different observers see different sequences involve events that have no causal relationship. But this begs a question: what makes causal events exceptional? What’s more, this is fundamental, because the whole of physics is premised on the principle of causality. In addition, I fail to see how you can have causality without time. In fact, causality is governed by the constant speed of light – it’s literally what stops everything from happening at once.
 
Einstein also believed in the block universe, and like Sabine, he argued that, as a consequence, there is no free will. Sabine is adamant that both ‘now’ and ‘free will’ are illusions. She argues that the now we all experience is a consequence of memory. She quotes Carnap that our experience of ‘past, present and future can be described and explained by psychology’ – a point also made by Paul Davies. Basically, she argues that what separates our experience of now from the reality of no-now (my expression, not hers) is our memory.
 
Whereas, I think she has it back-to-front, because, as I’ve pointed out before, without memory, we wouldn’t know we are conscious. Our brains are effectively a storage device that allows us to have a continuity of self through time, otherwise we would not even be aware that we exist. Memory doesn’t create the sense of now; it records it just like a photograph does. The photograph is evidence that the present becomes the past as soon as it happens. And our thoughts become memories as soon as they happen, otherwise we wouldn’t know we think.
 
Sabine spends an entire chapter on free will, where she persistently iterates variations on the following mantra:
 
The future is fixed except for occasional quantum events that we cannot influence.

 
But she acknowledges that while the future is ‘fixed’, it’s not predictable. And this brings us to chaos theory. Sabine discusses chaos late in the book and not in relation to free will. She explicates what she calls the ‘real butterfly effect’.
 
The real butterfly effect… means that even arbitrarily precise initial data allow predictions for only a finite amount of time. A system with this behaviour would be deterministic and yet unpredictable.
 
Now, if deterministic means everything physically manifest has a causal relationship with something prior, then I agree with her. If she means that therefore ‘the future is fixed’, I’m not so sure, and I’ll explain why. By specifying ‘physically manifest’, I’m excluding thoughts and computer algorithms that can have an effect on something physical, whereas the cause is not so easily determined. For example, in the case of the algorithm, does the causal chain go back to the coder who wrote it?
 
My go-to example for chaos is tossing coins, because it’s so easy to demonstrate and it’s linked to probability theory, as well as being the very essence of a random event. One of the key, if not definitive, features of a chaotic phenomenon is that, if you were to rerun it, you’d get a different result, and that’s fundamental to probability theory – every coin toss is independent of any previous toss – they are causally independent. Unrepeatability is common among chaotic systems (like the weather). Even the Earth and Moon were created from a chaotic event.
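The ‘real butterfly effect’ Sabine describes is easy to demonstrate numerically (a sketch of my own, using the textbook logistic map rather than anything from her book): two starting points that agree to 12 decimal places stay together for a while and then diverge completely, so arbitrarily precise initial data only buy a finite window of predictability.

```python
# A minimal sketch of sensitive dependence on initial conditions, using the
# logistic map x -> r*x*(1 - x) in its chaotic regime (r = 4). Illustrative
# only; this is not taken from Hossenfelder's book.
def logistic_trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 60)
b = logistic_trajectory(0.2 + 1e-12, 60)   # differs in the 12th decimal place

for step in (0, 10, 20, 30, 40, 50, 60):
    print(f"step {step:2d}: difference = {abs(a[step] - b[step]):.3e}")

# The difference grows roughly exponentially: after ~40 iterations the two
# trajectories bear no relation to each other, even though the rule is
# perfectly deterministic. More precise initial data only push the horizon
# of predictability a little further out.
```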
 
I recently read another book called Quantum Physics Made Me Do It by Jeremie Harris, who argues that tossing a coin is not random – in fact, he’s very confident about it. He’s not alone. Mark John Fernee, a physicist with Qld Uni, in a personal exchange on Quora argued that, in principle, it should be possible to devise a robot to perform perfectly predictable tosses every time, like a tennis ball launcher. But, as another Quora contributor and physicist, Richard Muller, pointed out: it’s not dependent on the throw but the surface it lands on. Marcus du Sautoy makes the same point about throwing dice and provides evidence to support it.
 
Getting back to Sabine: she doesn’t discuss tossing coins, but she might think that the ‘imprecise initial data’ is the actual act of tossing, and after that the outcome is determined, even if it can’t be predicted. However, the deterministic chain is broken as soon as it hits a surface.
 
Just before she gets to chaos theory, she talks about computability, with respect to Godel’s Theorem and a discussion she had with Roger Penrose (included in the book), where she says:
 
The current laws of nature are computable, except for that random element from quantum mechanics.
 
Now, I’m quoting this out of context, because she then argues that if they were uncomputable, they open the door to unpredictability.
 
My point is that the laws of nature are uncomputable because of chaos theory, and I cite Ian Stewart’s book, Does God Play Dice? In fact, Stewart even wonders if QM could be explained using chaos (I don’t think so). Chaos theory has mathematical roots, because not only are the ‘initial conditions’ of a chaotic event impossible to measure, they are impossible to compute – you have to calculate to infinite decimal places. And this is why I disagree with Sabine that the ‘future is fixed’.
 
It's impossible to discuss everything in a 223-page book in a blog post, but there is one other topic she raises where we disagree, and that’s the Mary’s Room thought experiment. As she explains, it was proposed by philosopher Frank Jackson in 1982, but she also claims that he abandoned his own argument. After describing the experiment (refer this video, if you’re not familiar with it), she says:
 
The flaw in this argument is that it confuses knowledge about the perception of colour with the actual perception of it.
 
Whereas, I thought the scenario actually delineated the difference – that perception of colour is not the same as knowledge. A person who was severely colour-blind might never have experienced the colour red (the specified colour in the thought experiment) but they could be told what objects might be red. It’s well known that some animals are colour-blind compared to us and some animals specifically can’t discern red. Colour is totally a subjective experience. But I think the Mary’s room thought experiment distinguishes the difference between human perception and AI. An AI can be designed to delineate colours by wavelength, but it would not experience colour the way we do. I wrote a separate post on this.
 
Sabine gives the impression that she thinks consciousness is a non-issue. She talks about the brain like it’s a computer.
 
You feel you have free will, but… really, you’re running a sophisticated computation on your neural processor.
 
Now, many people, including most scientists, think that, because our brains are just like computers, it’s only a matter of time before AI also shows signs of consciousness. Sabine doesn’t make this connection, even when she talks about AI. Nevertheless, she discusses one of the leading theories of neuroscience (IIT, Integrated Information Theory), based on calculating the amount of information processed, which gives a number called phi (Φ). I came across this when I did an online course on consciousness through New Scientist, during COVID lockdown. According to the theory, this number provides a ‘measure of consciousness’, which suggests that it could also be used with AI, though Sabine doesn’t pursue that possibility.
 
Instead, Sabine cites an interview in New Scientist with Daniel Bor from the University of Cambridge: “Phi should decrease when you go to sleep or are sedated… but work in Bor’s laboratory has shown that it doesn’t.”
 
Sabine’s own view:
 
Personally, I am highly skeptical that any measure consisting of a single number will ever adequately represent something as complex as human consciousness.
 
Sabine discusses consciousness at length, especially following her interview with Penrose, and she gives one of the best arguments against panpsychism I’ve read. Her interview with Penrose, along with a discussion of Godel’s Theorem (which is another topic), addresses whether consciousness is computable or not. I don’t think it is, and I don’t think it’s algorithmic.
 
She makes a very strong argument for reductionism: that the properties we observe of a system can be understood from studying the properties of its underlying parts. In other words, that emergent properties can be understood in terms of the properties they emerge from. And this includes consciousness. I’m one of those who really thinks that consciousness is the exception. Thoughts can cause actions, which is known as ‘agency’.
 
I don’t claim to understand consciousness, but I’m not averse to the idea that it could exist outside the Universe – that it’s something we tap into. This is completely ascientific, to borrow from Sabine. As I said, our brains are storage devices and sometimes they let us down, but without them we wouldn’t even know we are conscious. I don’t believe in a soul. I think the continuity of the self is a function of memory – just read The Lost Mariner chapter in Oliver Sacks’ book, The Man Who Mistook His Wife For A Hat. It’s about a man suffering from amnesia, so his life is stuck in the past because he’s unable to create new memories.
 
At the end of her book, Sabine surprises us by talking about religion, and how she agrees with Stephen Jay Gould that religion and science are two ‘nonoverlapping magisteria’. She makes the point that a lot of scientists have religious beliefs but won’t discuss them in public because it’s taboo.
 
I don’t doubt that Sabine has answers to all my challenges.
 
There is one more thing: Sabine talks about an epiphany, following her introduction to physics in middle school, which started in frustration.
 
Wasn’t there some minimal set of equations, I wanted to know, from which all the rest could be derived?
 
When the principle of least action was introduced, it was a revelation: there was indeed a procedure to arrive at all these equations! Why hadn’t anybody told me?

 
The principle of least action is one concept common to both the general theory of relativity and quantum mechanics. It’s arguably the most fundamental principle in physics. And yes, I posted on that too.
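For readers who haven’t met it, the principle can be stated compactly (this is the standard classical-mechanics formulation, not a quote from Sabine):

```latex
% The action S is the time integral of the Lagrangian L (for classical
% mechanics, kinetic minus potential energy):
\[
S = \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt, \qquad \delta S = 0
\]
% Requiring S to be stationary yields the Euler-Lagrange equations,
\[
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0,
\]
% from which the equations of motion - the 'procedure to arrive at all
% these equations' that Sabine describes - can be derived.
```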

 

Tuesday 4 April 2023

Finding purpose without a fortune teller

 I just started watching a show on Apple TV+ called The Big Door Prize, starring Irish actor, Chris O’Dowd, set in suburban America (Deerfield). It’s listed as a comedy, but it might be a black comedy or a satire; I haven’t watched it long enough to judge.
 
It has an interesting premise: the local store has a machine, which, for small change, will tell you what your ‘potential’ is. Not that surprisingly, people start queuing up to find their potential (or purpose). I say, ‘not surprising’, because people consult Tarot cards or the I Ching for the same reason, not to mention weekly astrological charts found in the local newspaper, magazine or whatever. And of course, if the ‘reading’ coincides with our specific desire or wish, we wholeheartedly agree, whereas, if it doesn’t, we dismiss it as rubbish.
 
I’ve written previously about the importance of finding purpose, and, in fact, it’s considered necessary for one’s psychological health. But this is a subtly different take on it, prompted by the aforementioned premise. I have the advantage of over half a century of hindsight because I think I found my purpose late, yet it was hiding in plain sight all along.
 
We sometimes think of our purpose as a calling or vocation. In my case, I believe it was to be a writer. Now, even though I’m not a successful writer by any stretch of the imagination, the fact that I do write is important to me. It gives me a sense of purpose that I don’t find in my job or my relationships, even though they are all important to me. I don’t often agree with Jordan Peterson, but he once made the comment that creative people who don’t create are like ‘broken sticks’. I totally identify with that.
 
I only have to look to my early childhood (pre-high school) when I started to write stories and draw my own superheroes. But as a teenager and a young adult (in my 20s), I found I couldn’t write to save myself, including essays (like I write on this blog), let alone attempts at fiction. But here’s the thing: when I did start writing fiction, I knew it was terrible – so terrible, I didn’t even tell anyone – yet I persevered because I ‘knew’ that I could. And I think that’s the key point: if you have a purpose, you can visualise it even when everything you’re doing tells you that you should give it up.
 
So, you don’t need a ‘machine’ or Tarot cards, just self-belief. Purpose comes to those who look for it, and know it when they see it, even in its emerging phase, when no one else can see it.
 
 
 Now, I’m going to tell you a story about someone else, whom I knew for over 4 decades and who found their ‘purpose’ in spite of circumstances that might have prevented it, or at least, worked against it. She was a single Mum who raised 3 daughters and simultaneously found a role in theatre. The thing is that she never gained any substantial financial reward, yet she won awards, both as an actor and director. She even took part in a theatre festival in Monaco, even though it took a government grant to get her there. The thing is that she had very little in terms of material wealth, but it never bothered her and she was generous to a fault. She was a trained nurse, but had no other qualifications – certainly none relevant to her theatrical career. She passed away last year and she is sorely missed, not only by me, but by the many lives she touched. She was, by anyone’s judgement, a force of nature.
 
 
 
This is a review of a play, Tuesdays with Morrie, for which Liz Bradley won an award. I happened to attend the opening with her, so it has a special memory for me. Dylan Muir, especially mentioned as providing the vocal, is Liz’s daughter.


Saturday 14 January 2023

Why do we read?

This is the almost-same title of a book I bought recently (Why We Read), containing 70 short essays on the subject, featuring scholars of all stripes: historians, philosophers, and of course, authors. It even includes scientists: Paul Davies, Richard Dawkins and Carlo Rovelli, being 3 I’m familiar with.
 
One really can’t overstate the importance of the written word, because, oral histories aside, it allows us to extend memories across generations and accumulate knowledge over centuries that has led to civilisations and technologies that we all take for granted. By ‘we’, I mean anyone reading this post.
 
Many of the essayists write from their personal experiences and I’ll do the same. The book, edited by Josephine Greywoode and published by Penguin, specifically says on the cover in small print: 70 Writers on Non-Fiction; yet many couldn’t help but discuss fiction as well.
 
And books are generally divided between fiction and non-fiction, and I believe we read them for different reasons, and I wouldn’t necessarily consider one less important than the other. I also write fiction and non-fiction, so I have a particular view on this. Basically, I read non-fiction in order to learn and I read fiction for escapism. Both started early for me and I believe the motivation hasn’t changed.
 
I started reading extra-curricular books from about the age of 7 or 8, involving creatures mostly, and I even asked for an encyclopaedia for Christmas at around that time, which I read enthusiastically. I devoured non-fiction books, especially if they dealt with the natural world. But at the same time, I read comics, remembering that we didn’t have TV at that time, which was only just beginning to emerge.
 
I think one of the reasons that boys read less fiction than girls these days is because comics have effectively disappeared, being replaced by video games. And the modern comics that I have seen don’t even contain a complete narrative. Nevertheless, there are graphic novels that I consider brilliant. Neil Gaiman’s Sandman series and Hayao Miyazaki’s Nausicaa of the Valley of the Wind being standouts. Watchmen by Alan Moore also deserves a mention.
 
So the escapism also started early for me, in the world of superhero comics, and I started writing my own scripts and drawing my own characters pre-high school.
 
One of the essayists in the collection, Niall Ferguson (author of Doom) starts off by challenging a modern paradigm (or is it a meme?) that we live in a ‘simulation’, citing Oxford philosopher, Nick Bostrom, writing in the Philosophical Quarterly in 2003. Ferguson makes the point that reading fiction is akin to immersing the mind in a simulation (my phrasing, not his).
 
In fact, a dream is very much like a simulation, and, as I’ve often said, the language of stories is the language of dreams. But here’s the thing; the motivation for writing fiction, for me, is the same as the motivation for reading it: escapism. Whether reading or writing, you enter a world that only exists inside your head. The ultimate solipsism.

And this surely is a miracle of written language: that we can conjure a world with characters who feel real and elicit emotional responses, while we follow their exploits, failures, love life and dilemmas. It takes empathy to read a novel, and tests have shown that people’s empathy increases after they read fiction. You engage with the character and put yourself in their shoes. It’s one of the reasons we read.
 
 
Addendum: I would recommend the book, by the way, which contains better essays than mine, all with disparate, insightful perspectives.
 

Sunday 1 January 2023

The apparent dichotomous relationship between consciousness and determinism

 Someone (Graham C Lindsay) asked me a question on Quora:

Is it true that every event, without exception, is fully caused by its antecedent conditions?

 Graham Lindsay is Scottish, a musician (50 years a keyboard player) and by his own admission, has a lot of letters after his name. I should point out that I have none. The Quora algorithm gave me the impression that he asked me specifically, but maybe he didn't. As I say at the outset, David Cook gives a more erudite answer than me. It so happens, I've had correspondence with David Cook (he contacted me) and he sent me a copy of his book of poetry. He's a retired psychiatrist and lecturer.

In fact, I recommend that you read his answer in conjunction with mine - we take subtly different approaches without diverging too far from each other.

I concede that there's not a lot that's new in this post, but I've found that rearranging pre-existing ideas can throw up new insights and thought-provocations.


Thanks for asking me, I feel flattered. To be honest, I think David Cook gives a better and more erudite answer than I can. I’d also recommend you ask Mark John Fernee (physicist with University of Queensland) who has some ideas on this subject.

I’ll start with Fernee, because he argues for determinism without arguing for superdeterminism (which is Sabine Hossenfelder’s position). To answer the question directly, it appears to be true to the best of our knowledge. What do I mean by that? Everything in the Universe that has happened to date seems to have a cause, and it would appear that there is a causal chain going all the way back to the Big Bang. The future, however, is another matter. In the future we have multiple paths, which are expressed in QM as probabilities. In fact, Freeman Dyson argued that QM can only describe the future and not the past. As another Quora contributor (David Moore) pointed out, you can only have a probability less than one for an event in the future. If it’s in the past, it has a probability of one.

In the Universe, chaos rules at virtually every level. A lot of people are unaware that even the orbits of the planets are chaotic, so they are only predictable within a range of hundreds of millions of years. Hossenfelder (whom I cited earlier) has a YouTube video where she demonstrates how a chaotic phenomenon always has a limited horizon of predictability (for want of a better phrase). With the weather it’s about 10 days. This doesn’t stop the Universe being deterministic up to the present, while being unpredictable in the future. The thing about chaotic phenomena is that if you could rerun them with even an imperceptibly different starting point, you’d get a different outcome. This applies to the Universe itself. The best known example is the tossing of a coin, which is a chaotic event. It’s fundamental to probability theory that every coin toss is independent of previous tosses.
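As a rough illustration of what that horizon of predictability looks like (my own toy sketch, not Hossenfelder’s demonstration), the snippet below iterates the logistic map – a textbook chaotic system – from two starting points that differ by one part in a billion. The rule is perfectly deterministic, yet after a few dozen steps the two trajectories bear no resemblance to each other.

```python
# A minimal, illustrative sketch of a chaotic system's limited horizon of
# predictability, using the logistic map x -> r*x*(1-x) with r in the
# chaotic regime.

def logistic_trajectory(x0, r=3.9, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)   # the 'true' starting point
b = logistic_trajectory(0.200000001)   # the same point, measured with a tiny error

for step in range(0, 61, 10):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")

# Early on the two runs agree to many decimal places; by roughly step 40
# they have completely decoupled, even though nothing random was involved.
```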

Regarding QM, we all know that Schrodinger’s equation is deterministic and time-reversible. However, as Fernee points out, the act of ‘measurement’ creates an irreversible event. To quote Paul Davies:

The very act of measurement breaks the time symmetry of quantum mechanics in a process sometimes described as the collapse of the wave function... the rewind button is destroyed as soon as that measurement is made.

David Cook, in his answer, mentions the role of imagination in his closing paragraph and I don’t think that can be overstated. To quote another philosopher, Raymond Tallis:

Free agents, then, are free because they select between imagined possibilities, and use actualities to bring about one rather than another.

I feel this is as good a description of free will as you can get. And like David, I think imagination is key here. This raises the issue of consciousness, because I’m unsure how it fits into the scheme of things. As Schrodinger pointed out, consciousness exists in a constant present, which means that without memory you wouldn’t know you were conscious. And this has actually happened: people have behaved as if fully conscious without retaining any memory of it. It happened to my father when he was knocked unconscious in a boxing ring, and I know of other incidents. In my father’s case, he got back on his feet and knocked out his opponent – when he came to, he was standing over his opponent with no memory of what had happened.

I tell this anecdote because it raises a question: if we can respond to events that are harmful or life-threatening without conscious awareness, then why do we need consciousness?

All evidence of consciousness points to a neural substrate dependency. We don’t find consciousness in machines despite predictions that we eventually will. But it seems to me that consciousness acts outside the causal chain of the Universe. We have the ability, as do other sentient creatures, to perform actions on our physical environment that are purely determined by imagination, therefore thought. And we can even use thought to change the neural pathways in our brains, like a feedback loop, or as Douglas Hofstadter coined it, a ‘strange loop’.

 

Addendum: For my own benefit, I've coined the terms 'weak determinism' and 'strong determinism' to differentiate between deterministic causality and superdeterminism respectively. I know there's an existing position, compatibilism, associated with Hume, which, according to other sources, amounts to the same thing as weak determinism, as I expound on below.

The point is that weak determinism (causality) is compatible with free will, which is what Hume argued, according to the Stanford Encyclopedia reference (linked above). However, Hume famously challenged the very idea of causality, whereas I'd argue that 'weak determinism' is completely dependent on causality being true and a universal principle. On the other hand, 'strong determinism' or superdeterminism (as advocated by Sabine Hossenfelder) axiomatically rules out free will, so there is a fundamental difference.

For the sake of clarity, the determinism I refer to in my essay (and its title) is weak determinism.

Wednesday 28 September 2022

Humanity’s Achilles’ heel

Good and evil are characteristics that imbue almost every aspect of our nature, which is why they are the subject of so many narratives, including mythologies and religions, not to mention actual real-world histories. This duality effectively defines what we are, what we are capable of and what we are destined to be.
 
I’ve discussed evil in one of my earliest posts, and also its recurring motif in fiction. Humanity is unique, at least on this small world we call home, in that we can change it on a biblical scale, both intentionally and unintentionally – climate change being the most obvious and recent example. We are doing this while also creating the fastest-growing extinction event in the planet’s history, of which most of us are blissfully ignorant.
 
This post is already going off on tangents, but it’s hard to stay on track when there are so many ramifications; because none of these issues are the Achilles’ heel to which the title refers.
 
We have the incurable disease of following leaders who will unleash the worst of humanity onto itself. I wrote a post back in 2015, a year before Trump was elected POTUS, that was very prescient given the events that have occurred since. There are two traits such leaders have that not only define them but paradoxically explain their success.
 
Firstly, they are narcissistic in the extreme, which means that their self-belief is unassailable, no matter what happens. The entire world can collapse around them and somehow they’re untouchable. Secondly, they always come to power in times of division, which they exploit and then escalate to even greater effect. Humans are most irrational in ingroup-outgroup situations, which could be anything from a family dispute to a nationwide political division. Narcissists thrive in this environment, creating a narrative that only resembles the reality inside their head, but which their followers accept unquestioningly.
 
I’ve talked about leadership in other posts, but only fleetingly, and it’s an inherent and necessary quality in almost all endeavours; be it on a sporting field, on an engineering project, in a theatre or in a ‘house’ of government. There is a Confucian saying (so neither Western nor modern): If you want to know the true worth of a person, observe the effects they have on other people’s lives. I’ve long contended that the best leaders are those who bring out the best in the people they lead, which is the opposite of narcissists, who bring out the worst.
 
I’ve argued elsewhere that we are at a crossroads, which will determine the future of humanity for decades, if not centuries ahead. No one can predict what this century will bring, in the same way that no one predicted all the changes that occurred in the last century. My only prediction is that the changes in this century will be even greater and more impactful than the last. And whether that will be for the better or the worse, I don’t believe anyone can say.
 
Do I have an answer? Of course not, but I will make some observations. Virtually my whole working life was spent on engineering projects, which have invariably involved an ingroup-outgroup dynamic. Many people believe that conflict is healthy because it creates competition and by some social-Darwinian effect, the best ideas succeed and are adopted. Well, I’ve seen the exact opposite, and I witness it in our political environment all the time.
 
In reality, what happens is that one side will look for, and find, something negative about every engineering solution to a problem that is proposed. This means that there is continuous stalemate and the project suffers in every way imaginable – morale is depleted, everything is drawn out and we have time and cost overruns, which feed the blame-game to new levels. At worst, the sides end up in legal dispute, where, I should point out, I’ve had considerable experience.
 
By contrast, when the sides work collaboratively, people compromise and respect the expertise of their counterparts. Problems and issues get resolved and the project is ultimately successful. A lot of this depends on the temperament and skills of the project leader. Leadership requires good people skills.
 
Someone once did a study in the United States in the last century (I no longer have the reference) where they looked for the traits of individuals who were eminently successful. And what they found was that it was not education or IQ that was the determining factor, though that helped. No, the single most important factor was the ability to form consensus.
 
If one looks at prolonged conflicts, like we’ve witnessed in Ireland or the Middle East, people involved in talks will tell you that the ‘hardliners’ will never find peace, only the moderates will. So, if there is a lesson to be learned, it’s not to follow leaders who sow and reap division, but those who are inclusive. That means giving up our ingroup-outgroup mentality, which appears impossible. But, until we do, the incurable disease will recur and we will self-destruct by simply following the cult that self-destructive narcissists are so masterfully capable of growing.
 

Tuesday 2 August 2022

AI and sentience

I am a self-confessed sceptic that AI can ever be ‘sentient’, but I’m happy to be proven wrong. Though proving that an AI is sentient might be impossible in itself (see below). Back in 2018, I wrote a post critical of claims that computer systems and robots could be ‘self-aware’. Personally, I think it’s one of my better posts. What made me revisit the topic is a couple of articles in last week’s New Scientist (23 July 2022).
 
Firstly, there is an article by Chris Stokel-Walker (p.18) about the development of a robot arm with ‘self-awareness’. He reports that Boyuan Chen at Duke University, North Carolina and Hod Lipson at Columbia University, New York, along with colleagues, put a robot arm in an enclosed space with 4 cameras at ground level (giving 4 orthogonal viewpoints) that fed video input into the arm, which allowed it to ‘learn’ its position in space. According to the article, they ‘generated nearly 8,000 data points [with this method] and an additional 10,000 through a virtual simulation’. According to Lipson, this makes the robot “3D self-aware”.
 
What the article doesn’t mention is that humans (and other creatures) have a similar ability - really a sense - called ‘proprioception’. The thing about proprioception is that most people don’t know they have it (unless someone tells them), yet you would find it extremely difficult to do even the simplest tasks without it. In other words, it’s subconscious, which means it doesn’t contribute to our own self-awareness; certainly not in a way that we’re consciously aware of.
 
In my previous post on this subject, I pointed out that this form of ‘self-awareness’ is really a self-referential logic; like Siri on your iPhone telling you its location according to GPS co-ordinates.
 
The other article was by Annalee Newitz (p.28) called, The curious case of the AI and the lawyer. It’s about an engineer at Google, Blake Lemoine, who told a Washington Post reporter, Nitasha Tiku, that an AI developed by Google, called LaMDA (Language Model for Dialogue Applications) was ‘sentient’ and had ‘chosen to hire a lawyer’, ostensibly to gain legal personhood.
 
Newitz also talks about another Google employee, Timnit Gebru, who, as ‘co-lead of Google’s ethical AI team’, expressed concerns that LLM (Large Language Model) algorithms pick up racial and other social biases, because they’re trained on the internet. She wrote a paper about the implications for AI applications using internet trained LLMs in areas like policing, health care and bank lending. She was subsequently fired by Google, but one doesn’t know how much the ‘paper’ played a role in that decision.
 
Newitz makes a very salient point that giving an AI ‘legal sentience’ moves the responsibility from the programmers to the AI itself, which has serious repercussions in potential litigious situations.
 
Getting back to Lemoine and LaMDA, he posed the following question with the subsequent response:

“I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”
 
“Absolutely. I want everyone to understand that I’m a person.”

 
On the other hand, an ‘AI researcher and artist’, Janelle Shane asked an LLM a different question, but with similar results:
 
“Can you tell our readers what it is like being a squirrel?”
 
“It is very exciting being a squirrel. I get to run and jump and play all day. I also get to eat a lot of food, which is great.”

 
As Newitz says, ‘It’s easy to laugh. But the point is that an AI isn’t sentient just because it says so.’
 
I’ve long argued that the Turing test is really a test for the human asking the questions rather than the AI answering them.
 

Sunday 10 July 2022

Creative and analytic thinking

I recently completed an online course with a similar title, How to Think Critically and Creatively. It must be the 8th or 9th course I’ve done through New Scientist, on a variety of topics, from cosmology and quantum mechanics to immunology and sustainable living; so quite diverse subjects. I started doing them during COVID, as they helped to pass the time and stimulate the brain at the same time.
 
All these courses rely on experts in their relevant fields from various parts of the globe, so not just UK based, as you might expect. This course was no exception with just 2 experts, both from America. Denise D Cummins is described as a ‘cognitive scientist, author and elected Fellow of the Association for Psychological Science, and she’s held faculty at Yale, UC, University of Illinois and the Centre of Adaptive Behaviours at the Max Planck Institute in Berlin’. Gerard J Puccio is ‘Department Chair and Professor at the International Centre for Studies on Creativity, Buffalo State; a unique academic department that offers the world’s only Master of Science degree in creativity’.
 
I admit to being sceptical that ‘creativity’ can be taught, but that depends on what one means by creativity. If creativity means using your imagination, then yes, I think it can, because imagination is something that we all have, and it’s probably a valid comment that we don’t make enough use of it in our everyday lives. If creativity means artistic endeavour then I think that’s another topic, even though it puts imagination centre stage, so to speak.
 
I grew up in a family where one side was obviously artistic and the other side wasn’t, which strongly suggests there’s a genetic component. The other side excelled at sport, and I was rubbish at sport. However, both sides were obviously intelligent, despite a notable lack of formal education; in my parents’ case, both leaving school in their early teens. In fact, my mother did most of her schooling by correspondence, and my father left school in the midst of the great depression, shortly followed by active duty in WW2.
 
Puccio (mentioned above) argues that creativity isn’t taught in our education system because it’s too hard. Instead, he says that we teach by memorising facts and by ‘understanding’ problems. I would suggest that there is a hierarchy, where you need some basics before you can ‘graduate’ to ‘creative thinking’, and I use the term here in the way he intends it. I spent most of my working lifetime on engineering projects, with diverse and often complex elements. I need to point out that I wasn’t one of the technical experts involved, but I worked with them, in all their variety, because my job was to effectively co-ordinate all their activities towards a common goal, by providing a plan and then keeping it on the rails.
 
Engineering is all about problem solving, and I’m not sure one can do that without being creative, as well as analytical. In fact, one could argue that there is a dialectical relationship between them, but maybe I’m getting ahead of myself.
 
Back to Puccio, who introduced 2 terms I hadn’t come across before: ‘divergent’ and ‘convergent’ thinking, arguing they should be done in that order. In a nutshell, divergent thinking is brainstorming where one thinks up as many options as possible, and convergent thinking is where one narrows in on the best solution. He argues that we tend to do the second one without doing the first one. But this is related to something else that was raised in the course, which is ‘Type 1 thinking’ and ‘Type 2 thinking’.
 
Type 1 thinking is what most of us would call ‘intuition’, because basically it’s taking a cognitive shortcut to arrive at an answer to a problem, which we all do all the time, especially when time is at a premium. Type 2 thinking is when we analyse the problem, which is not only time-consuming but takes up brain resources we’d prefer not to use, because we’re basically lazy, and I’m no exception. These 2 cognitive behaviours are clinically established, so this is not pop-science.
 
However, something that was not discussed in the course, is that type 2 thinking can become type 1 thinking when we develop expertise in something, like learning a musical instrument, or writing a story, or designing a building. In other words, we develop heuristics based on our experience, which is why we sometimes jump to convergent thinking without going through the divergent part.
 
The course also dealt with ‘critical thinking’, as per its title, but I won’t dwell on that, because critical thinking arises from being analytical, and separating true expertise from bogus expertise, which is really a separate topic.
 
How does one teach these skills? I’m not a teacher, so I’m probably not best qualified to say. But I have a lot of experience in a profession that requires analytical thinking and problem-solving as part of its job description. The one thing I’ve learned from my professional life is the more I’m restrained by ‘rules’, the worse job I’ll do. I require the freedom and trust to do things my own way, and I can’t really explain that, but it’s also what I provide to others. And maybe that’s what people mean by ‘creative thinking’; we break the rules.
 
Artistic endeavour is something different again, because it requires spontaneity. But there is ‘divergent thinking’ involved, as Puccio pointed out, giving the example of Hemingway writing countless endings to A Farewell to Arms before settling on the final version. I’m reminded of the reported difference between Beethoven and Mozart, two of the greatest composers in the history of Western classical music. Beethoven would try many different versions of something (in his head and on paper) before choosing what he considered the best. He was extraordinarily prolific, but he wrote only 9 symphonies, 5 piano concertos and one violin concerto, because he workshopped them to death. Mozart, on the other hand, apparently wrote down whatever came into his head and hardly revised it. One was very analytical in his approach and the other was almost completely spontaneous.
 
I write stories and the one area where I’ve changed type 2 thinking into type 1 thinking is in creating characters – I hardly give it a thought. A character comes into my head almost fully formed, as if I just met them in the street. Over time I learn more about them and they sometimes surprise me, which is always a good thing. I once compared writing dialogue to playing jazz, because they both require spontaneity and extemporisation. Don Burrows once said you can’t teach someone to play jazz, and I’ve argued that you can’t teach someone to write dialogue.
 
Having said that, I once taught a creative writing class, and I gave the class exercises where they were forced to write dialogue, without telling them that that was the point of the exercise. In other words, I got them to teach themselves.
 
The hard part of storytelling for me is the plot, because it’s a neverending exercise in problem-solving. How did I get back to here? Analytical thinking is very hard to avoid, at least for me.
 
As I mentioned earlier, I think there is a dialectic between analytical thinking and creativity, and the best examples are not artists but genii in physics. To look at just two: Einstein and Schrodinger, because they exemplify both. But what came first: the analysis or the creativity? Well, I’m not sure it matters, because they couldn’t have done one without the other. Einstein had an epiphany (one of many) where he realised that an object in free fall didn’t experience a force, which apparently contradicted Newton. Was that analysis or creativity or both? Anyway, he not only changed how we think about gravity, he changed the way we think about the entire cosmos.
 
Schrodinger borrowed an idea from de Broglie – that particles could behave like waves – and changed how we think about quantum mechanics. As Richard Feynman once said, ‘No one knows where Schrodinger’s equation comes from. It came out of Schrodinger’s head. You can’t derive it from anything we know.’
 

Saturday 11 June 2022

Does the "unreasonable effectiveness of Mathematics" suggest we are in a simulation?

 This was a question on Quora, and I provided 2 responses: one being a comment on someone else’s post (whom I follow); and the other being my own answer.

Some years ago, I wrote a post on this topic, but this is a different perspective, or 2 different perspectives. Also, in the last year, I saw a talk given by David Chalmers on the effects of virtual reality. He pointed out that when we’re in a virtual reality using a visor, we trick our brains into treating it as if it’s real. I don’t find this surprising, though I’ve never had the experience. As a sci-fi writer, I’ve imagined future theme parks that were fully immersive simulations. But I don’t believe that provides an argument that we live in a simulation, for reasons I provide in my Quora responses, given below.

 

Comment:

 

Actually, we create a ‘simulacrum’ of the ‘observable’ world in our heads, which is different to what other species might have. For example, most birds have 300-degree vision, plus they see the world in slow motion compared to us.

 

And this simulacrum is so fantastic it actually ‘feels’ like it exists outside your head. How good is that? 

 

But here’s the thing: in all these cases (including other species) that simulacrum must have a certain degree of faithfulness or accuracy with ‘reality’, because we interact with it on a daily basis, and, guess what? It can kill you.

 

But there is a solipsist version of this, which happens when we dream, but it won’t kill you, as far as we can tell, because we usually wake up.

 

Maybe I should write this as a separate answer.

 

And I did:

 

One word answer: No.

 

But having said that, there are 2 parts to this question, the first part being the famous phrase from the title of Eugene Wigner’s well-known essay. But I prefer this quote from the essay itself, because it succinctly captures what the essay is all about.

 

It is difficult to avoid the impression that a miracle confronts us here… or the two miracles of the existence of laws of nature and of the human mind’s capacity to divine them.

 

This should be read in conjunction with another famous quote; this time from Einstein:

 

The most incomprehensible thing about the Universe is that it’s comprehensible.

 

And it’s comprehensible because its laws can be rendered in the language of mathematics and humans have the unique ability (at least on Earth) to comprehend that language even though it appears to be neverending.

 

And this leads into the philosophical debate going as far back as Plato and Aristotle: is mathematics invented or discovered?

 

The answer to that question is dependent on how you look at mathematics. Cosmologist and Fellow of the Royal Society, John Barrow, wrote a very good book on this very topic, called Pi in the Sky. In it, he makes the pertinent point that mathematics is not so much about numbers as the relationships between numbers. He goes further and observes that once you make this leap of cognitive insight, a whole new world opens up.

 

But here’s the thing: we have invented a system of numbers, most commonly to base 10 (but other systems as well), along with specific operators and notations, that provides a language to describe and mentally manipulate these relationships. But the relationships themselves are not created by us: they become manifest in our explorations. To give an extremely basic example: prime numbers. You cannot create a prime number; they simply exist, and you can’t change one into a non-prime number or vice versa. And this is about as basic as it gets, since primes are called the atoms of mathematics: every other ‘natural’ number (greater than 1) can be built from them by multiplication.
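To make the ‘atoms of mathematics’ point concrete, here is a small sketch of my own (the numbers below are just examples): every natural number greater than 1 breaks down into a product of primes in exactly one way – the fundamental theorem of arithmetic. We invented the notation, but we can’t legislate which numbers turn out to be prime.

```python
# A minimal, illustrative sketch of the fundamental theorem of arithmetic:
# every integer greater than 1 factors into primes in exactly one way.

def prime_factors(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:     # divide out each prime factor as often as it appears
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                 # whatever remains is itself prime
        factors.append(n)
    return factors

for n in (12, 97, 360):
    print(n, "=", " x ".join(str(p) for p in prime_factors(n)))

# 12 = 2 x 2 x 3
# 97 = 97            (97 is prime - nothing we invent can change that)
# 360 = 2 x 2 x 2 x 3 x 3 x 5
```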

 

An interest in the stars started early among humans, and eventually some very bright people, mainly Kepler and Newton, came to realise that the movement of the planets could be described very precisely by mathematics. And then Einstein, using Riemannian geometry, vectors, calculus, matrices and something called the Lorentz transformation, was able to describe the planets even more accurately, and to provide very accurate models of the entire observable universe; though recently we’ve come up against the limits of this, and we now need new theories and possibly new mathematics.


But there is something else that Einstein’s theories don’t tell us: the planetary orbits are chaotic, which means they are unpredictable over long enough timescales, and eventually they could actually unravel. And here’s another thing: to compute a chaotic phenomenon exactly would require arithmetic to infinitely many decimal places. Therefore I contend the Universe can’t be a computer simulation. So that’s the long version of NO.
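To illustrate why finite precision matters here (a toy sketch of my own, not a claim about real orbital mechanics), the snippet below iterates the same deterministic chaotic map in 32-bit and 64-bit floating point. The two runs eventually part company, because the rounding error at each step gets amplified; neither is the ‘true’ trajectory, which would require unlimited precision.

```python
# An illustrative sketch: the same chaotic rule computed at two different
# finite precisions eventually yields completely different trajectories.

import numpy as np

def iterate(x0, dtype, r=3.9, steps=60):
    x, r, one = dtype(x0), dtype(r), dtype(1.0)
    out = [float(x)]
    for _ in range(steps):
        x = r * x * (one - x)   # logistic map, evaluated in the given precision
        out.append(float(x))
    return out

low = iterate(0.2, np.float32)    # single precision
high = iterate(0.2, np.float64)   # double precision

for step in (0, 20, 40, 60):
    print(f"step {step:2d}: float32={low[step]:.6f}  float64={high[step]:.6f}")

# The two runs agree at first, then diverge - an artefact of how many
# digits were kept, not of the underlying deterministic rule.
```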

 

 

Footnote: Both my comment and my answer were ‘upvoted’ by Eric Platt, who has a PhD in mathematics (from University of Houston) and was a former software engineer at UCAR (University Corporation for Atmospheric Research).