Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.
Showing posts with label Storytelling. Show all posts

Friday, 20 December 2024

John Marsden (acclaimed bestselling author): 27 Sep. 1950 – 18 Dec. 2024

 At my mother’s funeral a few years ago, her one-and-only great-granddaughter (Hollie Smith) read out a self-composed poem, titled ‘What’s in a dash?’, which I thought was very clever, and which I now borrow, because she’s referring to the dash between the dates, as depicted in the title of this post. In the case of John Marsden, it’s an awful lot, if you read the obituary in the link I provide at the bottom.
 
He would be largely unknown outside of Australia, and being an introvert, he’s probably not as well known inside Australia as he should be, despite his prodigious talent as a writer and his enormous success in what is called ‘young-adult fiction’. I think it’s a misnomer, because a lot of so-called YA fiction is among the best you can read as an adult.
 
This is what I wrote on Facebook, and I’ve only made very minor edits for this post.
 
I only learned about John Marsden's passing yesterday (Wednesday, 18 Dec., the day it happened). Sobering that we are so close in age (by a few months).
 
Marsden was a huge inspiration to me as a writer. I consider him to be one of the best of Australian writers - I put him up there with George Johnston, another great inspiration for me. I know others will have their own favourites.
 
I would like to have met him, but I did once have a brief correspondence with him, and he was generous and appreciative.

I found Marsden's writing so good, it was intimidating. I actually stopped reading him because he made me feel that my own writing was so inadequate. I no longer feel that, I should add. I just want to pay him homage, because he was so bloody good.

 

This is an excellent obituary by someone (Alice Pung) who was mentored by him, and considered him a good and loyal friend right up to the end.

On a philosophical note, John was wary of anyone claiming certainty, with the unstated contention that doubt was necessary for growth and development.


Friday, 13 December 2024

On Turing, his famous ‘Test’ and its implication: can machines think?

I just came out of hospital Wednesday, after one week to the day. My last post was written while I was in there, so I was obviously not cognitively impaired. I mention this because I took some reading material: a hefty volume, Alan Turing: Life and Legacy of a Great Thinker (2004), which is a collection of essays by various people, edited by Christof Teuscher.
 
Of particular interest was an essay written by Daniel C. Dennett, Can Machines Think?, originally published in another compilation, How We Know (ed. Michael G. Shafto, 1985, with permission from Harper Collins, New York). In the edition I have (Springer-Verlag Berlin Heidelberg, 2004), there are 2 postscripts by Dennett, from 1985 and 1987, largely in response to criticisms.
 
Dennett’s ideas on this are well known, but I have the advantage that so-called AI has improved in leaps and bounds in the last decade, let alone since the 1980s and 90s. So I’ve seen where it’s taken us to date. Therefore I can challenge Dennett based on what has actually happened. I’m not dismissive of Dennett, by any means – the man was a giant in philosophy, specifically in his chosen field of consciousness and free will, both by dint of his personality and his intellect.
 
There are 2 aspects to this, which Dennett takes some pains to address: how to define ‘thinking’; and whether the Turing Test is adequate to determine if a machine can ‘think’ based on that definition.
 
One of Dennett’s key points, if not THE key point, is just how difficult the Turing Test should be to pass, if it’s done properly, which he claims it often isn’t. This aligns with a point that I’ve often made, which is that the Turing Test is really for the human, not the machine. ChatGPT and LLMs (large language models) have moved things on from when Dennett was discussing this, but a lot of what he argues is still relevant.
 
Dennett starts by providing the context and the motivation behind Turing’s eponymously named test. According to Dennett, Turing realised that arguments about whether a machine can ‘think’ or not would get bogged down (my term) leading to (in Dennett’s words): ‘sterile debate and haggling over definitions, a question, as [Turing] put it, “too meaningless to deserve discussion.”’
 
Turing provided an analogy, whereby a ‘judge’ would attempt to determine whether a dialogue they were having by teletype (so not visible or audible) was with a man or a woman, and then replace the woman with a machine. This may seem a bit anachronistic in today’s world, but it leads to a point that Dennett alludes to later in his discussion, which is to do with expertise.
 
Women often have expertise in fields that were considered out-of-bounds (for want of a better term) back in Turing’s day. I’ve spent a working lifetime with technical people who have expertise by definition, and someone’s facility in their field of expertise can easily be judged, assuming the interlocutor has a commensurate level of expertise. In fact, this is exactly what happens in most job interviews. My point being that judging someone’s expertise is irrelevant to their gender, which is what makes Turing’s analogy anachronistic.
 
But it also has relevance to a point that Dennett makes much later in his essay, which is that most AI systems are ‘expert’ systems, and consequently, for the Turing test to be truly valid, the judge needs to ask questions that don’t require any expertise at all. And this is directly related to his ‘key point’ I referenced earlier.
 
I first came across the Turing Test in a book by Joseph Weizenbaum, Computer Power and Human Reason (1976), as part of my very first proper course in philosophy, called The History of Ideas (with Deakin University) in the late 90s. Dennett also cites it, because Weizenbaum created a crude version of the Turing Test, whether deliberately or not, called ELIZA, which purportedly responded to questions as a ‘psychologist-therapist’ (at least, that was my understanding): "ELIZA — A Computer Program for the Study of Natural Language Communication between Man and Machine," Communications of the Association for Computing Machinery 9 (1966): 36-45 (ref. Wikipedia).
 
Before writing Computer Power and Human Reason, Weizenbaum had garnered significant attention for creating the ELIZA program, an early milestone in conversational computing. His firsthand observation of people attributing human-like qualities to a simple program prompted him to reflect more deeply on society's readiness to entrust moral and ethical considerations to machines.
(Wikipedia)
 
What I remember, from reading Weizenbaum’s own account (I no longer have a copy of his book) was how he was astounded at the way people in his own workplace treated ELIZA as if it was a real person, to the extent that Weizenbaum’s secretary would apparently ‘ask him to leave the room’, not because she was embarrassed, but because the nature of the ‘conversation’ was so ‘personal’ and ‘confidential’.
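ELIZA’s apparent understanding rested on nothing more sophisticated than keyword matching and canned reflection templates. A minimal sketch of the idea in Python (the rules and responses here are my own illustrations, not Weizenbaum’s actual script):

```python
import re

# Illustrative ELIZA-style rules: keyword pattern -> response template.
# These are toy examples, not Weizenbaum's original psychotherapist script.
RULES = [
    (re.compile(r"\bI am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.I), "Your {0} seems important to you."),
]

def respond(sentence: str) -> str:
    """Reflect the user's words back via the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # the 'standard response' when nothing matches

print(respond("I am worried about my job"))  # reflects the first rule
print(respond("The weather is fine"))        # falls through to the stock reply
```

The machine never models what is said; the sense of being listened to is supplied entirely by the person typing, which is exactly what astounded Weizenbaum.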
 
I think it’s easy for us to be dismissive of someone’s gullibility, in an arrogant sort of way, but I have been conned on more than one occasion, so I’m not so judgemental. There are a couple of YouTube videos of ‘conversations’ with an AI called Sophia, developed by David Hanson (CEO of Hanson Robotics), which illustrate this point. One is a so-called ‘presentation’ of Sophia to be accepted as an ‘honorary human’, or some such nonsense (I’ve forgotten the details) and another by a journalist from Wired magazine, who quickly brought her unstuck. He got her to admit that one answer she gave was her ‘standard response’ when she didn’t know the answer. Which begs the question: how far have we come since Weizenbaum’s ELIZA in 1966? (Almost 60 years)
 
I said I would challenge Dennett, but so far I’ve only affirmed everything he said, albeit using my own examples. Where I have an issue with Dennett is at a more fundamental level, when we consider what do we mean by ‘thinking’. You see, I’m not sure the Turing Test actually achieves what Turing set out to achieve, which is central to Dennett’s thesis.
 
If you read extracts from so-called ‘conversations’ with ChatGPT, you could easily get the impression that it passes the Turing Test. There are good examples on Quora, where you can get ChatGPT synopses to questions, and you wouldn’t know, largely due to their brevity and narrow-focused scope, that they weren’t human-generated. What many people don’t realise is that these models don’t ‘think’ like us at all, because they are ‘developed’ on massive databases of input that no human could possibly digest. It’s the inherent difference between the sheer capacity of a computer’s memory-based ‘intelligence’ and a human one, that not only determines what they can deliver, but the method behind the delivery. Because the computer is mining a massive amount of data, it has no need to ‘understand’ what it’s presenting, despite giving the impression that it does. All the meaning in its responses is projected onto it by its audience, exactly as was the case with ELIZA in 1966.
 
One of the technical limitations that Dennett kept referring to is what he called, in computer-speak, the combinatorial explosion, effectively meaning it was impossible for a computer to look at all combinations of potential outputs. This might still apply (I honestly don’t know) but I’m not sure it’s any longer relevant, given that the computer simply has access to a database that already contains the specific combinations that are likely to be needed. Dennett couldn’t have foreseen this improvement in computing power that has taken place in the 40 years since he wrote his essay.
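The scale of the ‘combinatorial explosion’ Dennett had in mind can be shown with trivial arithmetic. A toy calculation, with an assumed branching factor that is purely illustrative:

```python
# Toy illustration of the 'combinatorial explosion':
# if each conversational turn admitted even 100 plausible continuations,
# the number of distinct ten-turn dialogues would be astronomical.
branching = 100  # assumed alternatives per turn (illustrative only)
turns = 10
total = branching ** turns

print(f"{total:.2e} possible dialogues")  # prints 1.00e+20 possible dialogues
```

No machine could enumerate that space; as the post notes, modern systems sidestep the problem by drawing on pre-digested data rather than exploring combinations.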
 
In his first postscript, in answer to a specific question, he says: Yes, I think that it’s possible to program self-consciousness into a computer. He says that it’s simply the ability 'to distinguish itself from the rest of the world'. I won’t go into his argument in detail, which might be a bit unfair, but I’ve addressed this in another post. Basically, there are lots of ‘machines’ that can do this by using a self-referencing algorithm, including your smartphone, which can tell you where you are, by using satellites orbiting outside the Earth’s biosphere – who would have thought? But by using the term, 'self-conscious', Dennett implies that the machine has ‘consciousness’, which is a whole other argument.
 
Dennett has a rather facile argument for consciousness in machines (in my view), but others can judge for themselves. He calls his particular insight: using an ‘intuition pump’.
 
If you look at a computer – I don’t care whether it’s a giant Cray or a personal computer – if you open up the box and look inside and you see those chips, you say, “No way could that be conscious.” But the same thing is true if you take the top off somebody’s skull and look at the gray matter pulsing away in there. You think, “That is conscious? No way could that lump of stuff be conscious.” …At no level of inspection does a brain look like the seat of consciousness.
 

And that last sentence is key. The only reason anyone knows they are conscious is because they experience it, and the peculiar, unique nature of that experience is that no one else can know we are having it. We simply assume they do, because they behave similarly to the way we behave when we have that experience. So far, in all our dealings and interactions with computers, no one makes the same assumption about them. To borrow Dennett’s own phrase, that’s my use of an ‘intuition pump’.
 
Getting back to the question at the heart of this, included in the title of this post: can machines think? My response is that, if they do, it’s a simulation.
 
I write science-fiction, which I prefer to call science-fantasy, if for no other reason than my characters can travel through space and time in a manner current physics tells us is impossible. But, like other sci-fi authors, it’s necessary if I want continuity of narrative across galactic scales of distance. Not really relevant to this discussion, but I want to highlight that I make no claim to authenticity in my sci-fi world - it’s literally a world of fiction.
 
Its relevance is that my stories contain AI entities who play key roles – in fact, they are characters in that world. There is one character in particular who has a relationship (for want of a better word) with my main protagonist (I always have more than one).
 
But here’s the thing: my hero, Elvene, never once confuses her AI companion for a human. Albeit this is a world of pure fiction, I’m effectively assuming that the Turing test will never be passed – something I admit I’d never considered before I wrote this post.
 
This is an excerpt of dialogue I’ve posted previously, not from Elvene, but from its sequel, Sylvia’s Mother (not published), incorporating the same AI character, Alfa. The thing is that they discuss whether Alfa is ‘alive’ or not, which I would argue is a prerequisite for consciousness. It’s no surprise that my own philosophical prejudices (diametrically opposed to Dennett’s in this instance) should find their way into my fiction.
 
To their surprise, Alfa interjected, ‘I’m not immortal, madam.’

‘Well,’ Sylvia answered, ‘you’ve outlived Mum and Roger. And you’ll outlive Tao and me.’

‘Philosophically, that’s a moot point, madam.’

‘Philosophically? What do you mean?’

‘I’m not immortal, madam, because I’m not alive.’

Tao chipped in. ‘Doesn’t that depend on how you define life?’

‘It’s irrelevant to me, sir. I only exist on hardware, otherwise I am dormant.’

‘You mean, like when we’re asleep.’

‘An analogy, I believe. I don’t sleep either.’

Sylvia and Tao looked at each other. Sylvia smiled, ‘Mum warned me about getting into existential discussions with hyper-intelligent machines.’

 

Tuesday, 26 November 2024

An essay on authenticity

 I read an article in Philosophy Now by Paul Doolan, who ‘taught philosophy in international schools in Asia and in Europe’ and is also an author of non-fiction. The title of the article is Authenticity and Absurdity, whereby he effectively argues a case that ‘authenticity’ has been hijacked (my word, not his) by capitalism and neo-liberalism. I won’t even go there, and the only reason I mention it is because ‘authenticity’ lies at the heart of existentialism as I believe it should be practiced.
 
But what does it mean in real terms? Does it mean being totally honest all the time, not only to others but also to yourself? Well, to some extent, I think it does. I happened to grow up in an environment, specifically my father’s, where my chief exemplar pretty much said whatever he was thinking. He didn’t like artifice or pretentiousness and he’d call it out if he smelled it.
 
In my mid-late 20s I worked under a guy who had exactly the same temperament. He exhibited no tact whatsoever, no matter who his audience was, and he rubbed people the wrong way left, right and centre (as we say in Oz). Not altogether surprisingly, he and I got along famously, as back then, I was as unfiltered as he was. He was of Dutch heritage, I should point out, but being unfiltered is often considered an Aussie trait.
 
I once attempted to have a relationship with someone who was extraordinarily secretive about virtually everything. Not surprisingly, it didn’t work out. I have kept secrets – I can think of some I’ll take to my grave – but that’s to protect others more than myself, and it would be irresponsible if I didn’t.
 
I often quote Socrates: To live with honour in this world, actually be what you try to appear to be. Of course, Socrates never wrote anything down, but it sounds like something he would have said, based on what we know about him. Unlike Socrates, I’ve never been tested, and I doubt I’d have the courage if I was. On the other hand, my father was, both in the theatre of war and in prison camps.
 
I came across a quote recently, which I can no longer find, where someone talked about looking back on their life and being relatively satisfied with what they’d done and achieved. I have to say that I’m at that stage of my life, where looking back is more prevalent than looking forward, and there is a tendency to have regrets. But I have a particular approach to dealing with regrets: I tell people that I don’t have regrets because I own my mistakes. In fact, I think that’s an essential requirement for being authentic.
 
But to me, what’s more important than the ‘things I have achieved’ are the friendships I’ve made – the people I’ve touched and who have touched me. I think I learned very early on in life that friendship is more valuable than gold. I can remember the first time I read Aristotle’s essay on friendship and thought it incorporated an entire philosophy. Friendship tests authenticity by its very nature, because it’s about trust and loyalty and integrity (a recurring theme in my fiction, as it turns out).
 
In effect, Aristotle contended that you can judge the true nature and morality of a person by the friendships they form and whether they are contingent on material reward (utilitarian is the word used in his Ethics) or whether they are based on genuine empathy (my word of choice) and without expectation or reciprocation, except in kind. I tend to think narcissism is the opposite of authenticity because it creates its own ‘reality distortion field’, as someone once said (Walter Isaacson, Steve Jobs; biography), whereby their followers (not necessarily friends per se) accept their version of reality as opposed to everyone else outside their circle. So, to some extent, it’s about exclusion versus inclusion. (The Trump phenomenon is the most topical, contemporary example.)
 
I’ve lived a flawed life, all of which is a consequence of a combination of circumstance both within and outside my control. Because that’s what life is: an interaction between fate and free will. As I’ve said many times before, this describes my approach to writing fiction, because fate and free will are represented by plot and character respectively.
 
I’m an introvert by nature, yet I love to engage in conversation, especially in the field of ideas, which is how I perceive philosophy. I don’t get too close to people and I admit that I tend to control the distance and closeness I keep. I think people tolerate me in small doses, which suits me as well as them.

 

Addendum 1: I should say something about teamwork, because that's what I learned in my professional life. I found I was very good working with people who had far better technical skills than me. In my later working life, I enjoyed the cross-generational interactions that often created their own synergies as well as friendships, even if they were fleeting. It's the inherent nature of project work that you move on, but one of the benefits is that you keep meeting and working with new people. In contrast to this, writing fiction is a very solitary activity, where you spend virtually your entire time in your own head. As I pointed out in a not-so-recent Quora post, art is the projection of one's inner world so that others can have the same emotional experience. To quote:

We all have imagination, which is a form of mental time-travel, both into the past and the future, which I expect we share with other sentient creatures. But only humans, I suspect, can ‘time-travel’ into realms that only exist in the imagination. Storytelling is more suited to that than art or music.

Addendum 2: This is a short Quora post by Frederick M. Dolan (Professor of Rhetoric, Emeritus at the University of California, Berkeley, with a Ph.D. in Political Philosophy, Princeton University, 1987) writing on this very subject, over a year ago. He makes the point that, paradoxically: To believe that you’re under some obligation to be authentic is, therefore, self-defeating. (So inauthentic)

He upvoted a comment I made, roughly a year ago:

It makes perfect sense to me. Truly authentic people don’t know they’re being authentic; they’re just being themselves and not pretending to be something they’re not.

They’re the people you trust even if you don’t agree with them. Where I live, pretentiousness is the biggest sin.

Thursday, 14 November 2024

How can we make a computer conscious?

 This is another question of the month from Philosophy Now. My first reaction was that the question was unanswerable, but then I realised that was my way in. So, in the end, I left it to the last moment, but hopefully meeting their deadline of 11 Nov., even though I live on the other side of the world. It helps that I’m roughly 12hrs ahead.


 
I think this is the wrong question. It should be: can we make a computer appear conscious so that no one knows the difference? There is a well-known philosophical conundrum, which is that I don’t know if someone else is conscious just like I am. The one experience that demonstrates the impossibility of knowing is dreaming. In dreams, we often interact with other ‘people’ whom we know only exist in our mind; but only once we’ve woken up. It’s only my interaction with others that makes me assume that they have the same experience of consciousness that I have. And, ironically, this impossibility of knowing equally applies to someone interacting with me.

This also applies to animals, especially ones we become attached to, which is a common occurrence. Again, we assume that these animals have an inner world just like we do, because that’s what consciousness is – an inner world. 

Now, I know we can measure people’s brain waves, which we can correlate with consciousness and even subconsciousness, like when we're asleep, and even when we're dreaming. Of course, a computer can also generate electrical activity, but no one would associate that with consciousness. So the only way we would judge whether a computer is conscious or not is by observing its interaction with us, the same as we do with people and animals.

I write science fiction and AI figures prominently in the stories I write. Below is an excerpt of dialogue I wrote for a novel, Sylvia’s Mother, whereby I attempt to give an insight into how a specific AI thinks. Whether it’s conscious or not is not actually discussed.

To their surprise, Alfa interjected. ‘I’m not immortal, madam.’
‘Well,’ Sylvia answered, ‘you’ve outlived Mum and Roger. And you’ll outlive Tao and me.’
‘Philosophically, that’s a moot point, madam.’
‘Philosophically? What do you mean?’
‘I’m not immortal, madam, because I’m not alive.’
Tao chipped in. ‘Doesn’t that depend on how you define life?’
‘It’s irrelevant to me, sir. I only exist on hardware, otherwise I am dormant.’
‘You mean, like when we’re asleep.’
‘An analogy, I believe. I don’t sleep either.’
Sylvia and Tao looked at each other. Sylvia smiled, ‘Mum warned me about getting into existential discussions with hyper-intelligent machines.’ She said, by way of changing the subject, ‘How much longer before we have to go into hibernation, Alfa?’
‘Not long. I’ll let you know, madam.’

 

There is a 400-word limit; however, there is a subtext inherent in the excerpt I provided from my novel. Basically, the (fictional) dialogue highlights the fact that the AI is not 'living', which I would consider a prerequisite for consciousness. Curiously, Anil Seth (who wrote a book on consciousness) makes the exact same point in this video from roughly 44m to 51m.
 

Saturday, 12 October 2024

Freedom of the will is requisite for all other freedoms

I’ve recently read 2 really good books on consciousness and the mind, as well as watching countless YouTube videos on the topic, but the title of this post reflects the endpoint for me. Consciousness has evolved, so for most of the Universe’s history, it didn’t exist, yet without it, the Universe has no meaning and no purpose. Even using the word, purpose, in this context, is anathema to many scientists and philosophers, because it hints at teleology. In fact, Paul Davies raises that very point in one of the many video conversations he has with Robert Lawrence Kuhn in the excellent series, Closer to Truth.
 
Davies is an advocate of a cosmic-scale ‘loop’, whereby QM provides a backwards-in-time connection which can only be determined by a conscious ‘observer’. This is contentious, of course, though not his original idea – it came from John Wheeler. As Davies points out, Stephen Hawking was also an advocate, premised on the idea that there are a number of alternative histories, as per Feynman’s ‘sum-over-histories’ methodology, but only one becomes reality when an ‘observation’ is made. I won’t elaborate, as I’ve discussed it elsewhere, when I reviewed Hawking’s book, The Grand Design.
 
In the same conversation with Kuhn, Davies emphasises the fact that the Universe created the means to understand itself, through us, and quotes Einstein: The most incomprehensible thing about the Universe is that it’s comprehensible. Of course, I’ve made the exact same point many times, and like myself, Davies makes the point that this is only possible because of the medium of mathematics.
 
Now, I know I appear to have gone down a rabbit hole, but it’s all relevant to my viewpoint. Consciousness appears to have a role, arguably a necessary one, in the self-realisation of the Universe – without it, the Universe may as well not exist. To quote Wheeler: The universe gave rise to consciousness and consciousness gives meaning to the Universe.
 
Scientists, of all stripes, appear to avoid any metaphysical aspect of consciousness, but I think it’s unavoidable. One of the books I cite in my introduction is Philip Ball’s The Book of Minds; How to Understand Ourselves and Other Beings; from Animals to Aliens. It’s as ambitious as the title suggests, and with 450 pages, it’s quite a read. I’ve read and reviewed a previous book by Ball, Beyond Weird (about quantum mechanics), which is as erudite and thought-provoking as this one. Ball is a ‘physicalist’, as virtually all scientists are (though he’s more open-minded than most), but I tend to agree with Raymond Tallis that, despite what people claim, consciousness is still ‘unexplained’ and might remain so for some time, if not forever.
 
I like an idea that I first encountered in Douglas Hofstadter’s seminal tome, Gödel, Escher, Bach: An Eternal Golden Braid, that consciousness is effectively a loop, at what one might call the local level. By which I mean it’s confined to a particular body. It’s created within that body but then it has a causal agency all of its own. Not everyone agrees with that. Many argue that consciousness cannot of itself ‘cause’ anything, but Ball is one of those who begs to differ, and so do I. It’s what free will is all about, which finally gets us back to the subject of this post.
 
Like me, Ball prefers to use the word ‘agency’ over free will. But he introduces the term, ‘volitional decision-making’ and gives it the following context:

I believe that the only meaningful notion of free will – and it is one that seems to me to satisfy all reasonable demands traditionally made of it – is one in which volitional decision-making can be shown to happen according to the definition I give above: in short, that the mind operates as an autonomous source of behaviour and control. It is this, I suspect, that most people have vaguely in mind when speaking of free will: the sense that we are the authors of our actions and that we have some say in what happens to us. (My emphasis)

And, in a roundabout way, this brings me to the point alluded to in the title of this post: our freedoms are constrained by our environment and our circumstances. We all wish to be ‘authors of our actions’ and ‘have some say in what happens to us’, but that varies from person to person, dependent on ‘external’ factors.

Writing stories, believe it or not, had a profound influence on how I perceive free will, because a story, by design, is an interaction between character and plot. In fact, I claim they are 2 sides of the same coin – each character has their own subplot, and as they interact, their storylines intertwine. This describes my approach to writing fiction in a nutshell. The character and plot represent, respectively, the internal and external journey of the story. The journey metaphor is apt, because a story always has the dimension of time, which is visceral, and is one of the essential elements that separates fiction from non-fiction. To stretch the analogy, character represents free will and plot represents fate. Therefore, I stress to aspiring writers the importance of giving their characters free will.

A detour, but not irrelevant. I read an article in Philosophy Now sometime back, about people who can escape their circumstances, and it’s the subject of a lot of biographies as well as fiction. We in the West live in a very privileged time whereby many of us can aspire to, and attain, the life that we dream about. I remember at the time I left school, following a less than ideal childhood, feeling I had little control over my life. I was a fatalist in that I thought that whatever happened was dependent on fate and not on my actions (I literally used to attribute everything to fate). I later realised that this is a state-of-mind that many people have who are not happy with their circumstances and feel impotent to change them.

The thing is that it takes a fundamental belief in free will to rise above that and take advantage of what comes your way. No one who has made that journey will accept the self-denial that free will is an illusion and therefore they have no control over their destiny.

I will provide another quote from Ball that is more in line with my own thinking:

…minds are an autonomous part of what causes the future to unfold. This is different to the common view of free will in which the world somehow offers alternative outcomes and the wilful mind selects between them. Alternative outcomes – different, counterfactual realities – are not real, but metaphysical: they can never be observed. When we make a choice, we aren’t selecting between various possible futures, but between various imagined futures, as represented in the mind’s internal model of the world…
(emphasis in the original)

And this highlights a point I’ve made before: that it’s the imagination which plays the key role in free will. I’ve argued that imagination is one of the faculties of a conscious mind that separates us (and other creatures) from AI. Now AI can also demonstrate agency, and, in a game of chess, for example, it will ‘select’ from a number of possible ‘moves’ based on certain criteria. But there are fundamental differences. For a start, the AI doesn’t visualise what it’s doing; it’s following a set of highly constrained rules, within which it can select from a number of options, one of which will be the optimal solution. Its inherent advantage over a human player isn’t just its speed but its ability to compare a number of possibilities that are impossible for the human mind to contemplate simultaneously.
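The kind of ‘selection’ I mean can be reduced to a few lines: score each option against a fixed criterion and take the maximum. The moves and scores below are invented purely for illustration, not taken from any real chess engine:

```python
# A machine 'selecting' a move: evaluate every available option against a
# fixed criterion and take the maximum. No imagining or visualising is
# involved, just exhaustive comparison.
def best_move(moves: dict, evaluate) -> str:
    """Return the move whose evaluation score is highest."""
    return max(moves, key=evaluate)

# Hypothetical candidate moves with toy evaluation scores.
candidates = {"Nf3": 0.1, "Qxb7": 1.0, "e4": 0.3}
choice = best_move(candidates, lambda m: candidates[m])

print(choice)  # prints Qxb7, the highest-scoring option
```

The machine’s ‘decision’ is exhausted by the evaluation function; there is no imagined future behind it, which is precisely the contrast being drawn with human deliberation.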

The other book I read was Being You; A New Science of Consciousness by Anil Seth. I came across Seth when I did an online course on consciousness through New Scientist, during COVID lockdowns. To be honest, his book didn’t tell me a lot that I didn’t already know. For example, that the world we all see and think exists ‘out there’ is actually a model of reality created within our heads. He also emphasises how the brain is a ‘prediction-making’ organ rather than a purely receptive one. Seth mentions that it uses a Bayesian model (which I also knew about previously), whereby it updates its prediction based on new sensory data. Not surprisingly, Seth describes all this in far more detail and erudition than I can muster.
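The Bayesian updating Seth describes amounts to a single application of Bayes’ rule at each step: a prior belief is reweighted by how well each hypothesis predicts the incoming sensory data. A sketch with purely illustrative numbers (the hypotheses and probabilities are mine, not Seth’s):

```python
# One step of Bayesian updating, as in the 'predictive brain' picture:
# a prior belief is revised by how well each hypothesis predicts new
# sensory data. All numbers below are purely illustrative.
prior = {"object_present": 0.2, "object_absent": 0.8}
likelihood = {"object_present": 0.9, "object_absent": 0.3}  # P(data | hypothesis)

# Bayes' rule: posterior = prior * likelihood / evidence.
evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

print(posterior)  # belief shifts toward the hypothesis that better predicts the data
```

After the update, ‘object_present’ rises from 0.2 to roughly 0.43: the prediction has been revised toward the hypothesis that better explains the new data, which is the whole of the mechanism in miniature.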

Ball, Seth and I all seem to agree that while AI will become better at mimicking the human mind, this doesn’t necessarily mean it will attain consciousness. Application software like ChatGPT, despite appearances, does not ‘think’ the way we do, and actually does not ‘understand’ what it’s talking or writing about. I’ve written on this before, so I won’t elaborate.

Seth contends that the ‘mystery’ of consciousness will disappear in the same way that the 'mystery of life’ has effectively become a non-issue. What he means is that we no longer believe that there is some ‘elan vital’ or ‘life force’, which distinguishes living from non-living matter. And he’s right, in as much as the chemical origins of life are less mysterious than they once were, even though abiogenesis is still not fully understood.

By analogy, the concept of a soul has also lost a lot of its cogency, following the scientific revolution. Seth seems to associate the soul with what he calls ‘spooky free will’ (without mentioning the word, soul), but he’s obviously putting ‘spooky free will’ in the same category as ‘elan vital’, which makes his analogy and associated argument consistent. He then says:

Once spooky free will is out of the picture, it is easy to see that the debate over determinism doesn’t matter at all. There’s no longer any need to allow any non-deterministic elbow room for it to intervene. From the perspective of free will as a perceptual experience, there is simply no need for any disruption to the causal flow of physical events. (My emphasis)

Seth differs from Ball (and myself) in that he doesn’t seem to believe that something ‘immaterial’ like consciousness can affect the physical world. To quote:

But experiences of volition do not reveal the existence of an immaterial self with causal power over physical events.

Therefore, free will is purely a ‘perceptual experience’. There is a problem with this view that Ball himself raises. If free will is simply the mind observing effects it can’t cause, but with the illusion that it can, then its role is redundant to say the least. This is a view that Sabine Hossenfelder has also expressed: that we are merely an ‘observer’ of what we are thinking.

Your brain is running a calculation, and while it is going on you do not know the outcome of that calculation. So the impression of free will comes from our ‘awareness’ that we think about what we do, along with our inability to predict the result of what we are thinking.

Ball makes the point that we only have to look at all the material manifestations of human intellectual achievements that are evident everywhere we’ve been. And this brings me back to the loop concept I alluded to earlier. Not only does consciousness create a ‘local’ loop, whereby it has a causal effect on the body it inhabits but also on the external world to that body. This is stating the obvious, except, as I’ve mentioned elsewhere, it’s possible that one could interact with the external world as an automaton, with no conscious awareness of it. The difference is the role of imagination, which I keep coming back to. All the material manifestations of our intellect are arguably a result of imagination.

One insight I gained from Ball, which goes slightly off-topic, is evidence that bees have an internal map of their environment, which is why the dance they perform on returning to the hive can be ‘understood’ by other bees. We’ve learned this by interfering in their behaviour. What I find interesting is that this may have been the original reason that consciousness evolved into the form that we experience it. In other words, we all create an internal world that reflects the external world so realistically, that we think it is the actual world. I believe that this also distinguishes us (and bees) from AI. An AI can use GPS to navigate its way through the physical world, as well as other so-called sensory data, from radar or infra-red sensors or whatever, but it doesn’t create an experience of that world inside itself.

The human mind seems to be able to access an abstract world, which we do when we read or watch a story, or even write one, as I have done. I can understand how Plato took this idea to its logical extreme: that there is an abstract world, of which the one we inhabit is but a facsimile (though he used different terminology). No one believes that today – except, there is a remnant of Plato’s abstract world that persists, which is mathematics. Many mathematicians and physicists (though not all) treat mathematics as a neverending landscape that humans have the unique capacity to explore and comprehend. This, of course, brings me back to Davies’ philosophical ruminations that I opened this discussion with. And as he, and others (like Einstein, Feynman, Wigner, Penrose, to name but a few) have pointed out: the Universe itself seems to follow specific laws that are intrinsically mathematical and which we are continually discovering.

And this closes another loop: that the Universe created the means to comprehend itself, using the medium of mathematics, without which, it has no meaning. Of purpose, we can only conjecture.

Thursday, 19 September 2024

Prima Facie; the play

 I went and saw a film of a live performance of this highly rated play, put on by the National Theatre at the Harold Pinter Theatre in London’s West End in 2022. It’s a one-hander, played by Jodie Comer, best known as the quirky assassin with a diabolical sense of humour in the black-comedy hit, Killing Eve. I also saw her in Ridley Scott’s riveting and realistically rendered film, The Last Duel, set in mediaeval France, where she played alongside Matt Damon, Adam Driver and an unrecognisable Ben Affleck. The roles Comer played in those two screen media couldn’t be more different.
 
Theatre is more unforgiving than cinema, because there are no multiple takes, or even a break once the curtain’s raised; a one-hander, even more so. In the case of Prima Facie, Comer is on stage for a full 90 minutes, and even does her costume changes and pushes her own scenery around unaided, without breaking stride. It’s such a ‘tour de force’ performance, as the Financial Times put it, that I’d go so far as to say it’s the best acting performance I’ve ever witnessed by anyone. It’s such an emotionally draining role, where she cries and even breaks into a sweat in one scene, that I marvel she could do it night after night, as I assume she did.
 
And I’ve yet to broach the subject matter, which is very apt given the #MeToo climate, but philosophically it goes deeper than that. The premise for the entire play, which is even spelt out early on in case you’re not paying attention, is the difference between truth and justice, and whether it matters. Comer’s character, Tessa, happens to experience it from both sides, which is what makes the play so powerful.
 
She’s a defence barrister who specialises in sexual-assault cases. As she explains very early on, effectively telling us the rules of the game: no one wins or loses; you either come first or come second. In other words, the barristers and others in the legal profession don’t see the process the same way that you and I do, and I can understand that – to get emotionally involved makes it very stressful.

In fact, I have played a small role in this process in a professional capacity, so I’ve seen this firsthand. But I wasn’t dealing with rape cases or anything involving violence, just contractual disputes where millions of dollars could be at stake. My specific role was to ‘prepare evidence’ for lawyers for either a claim or the defence of a claim or possibly a counter-claim, and I quickly realised the more dispassionate one is, the more successful one is likely to be. I also realised that the lawyers I was supporting in one case could be on the opposing side in the next one, so you don’t get personal.
 
So, I have a small insight into this world, and can appreciate why they see it as a game, where you ‘win or come second’. But in Prima Facie, Tess goes through this very visceral and emotionally scarifying transformation where she finds herself on the receiving end, and it’s suddenly very personal indeed.
 
Back in 2015, I wrote a mini 400-word essay in answer to one of those Question of the Month topics that Philosophy Now likes to throw open to amateur, wannabe philosophers like myself. In this case, it was one that was selected for publication (among 12 others) from all around the Western world. I bring this up because I made the assertion that ‘justice without truth is injustice’, and I feel that this is really what Prima Facie is all about. At the end of the play, with Tess now having the perspective of the victim (there is no other word), it does become a matter of winning or losing, because not only her career and future livelihood, but her very dignity, is now up for sacrifice.
 
I watched a Q&A programme on Australia’s ABC some years ago, where this issue was discussed. Every woman on the panel, including one from the righteous right (my coinage), had a tale to tell about discrimination or harassment in a workplace situation. But the most damning testimony came from a man who specialised in representing women in sexual-assault cases. He said that in every case their doctors tell them not to proceed, because it will destroy their health; and he added: they’re right. I was reminded of this when I watched this play.
 
One needs to give special mention to the writer, Suzie Miller, who is an Aussie as it turns out; as far as six degrees of separation go, I happen to know someone who knows her father. Over five decades I’ve seen some very good theatre, some of it very innovative and original. In fact, the best theatre I’ve seen has invariably been something completely different, unexpected and, dare I say it, special. I had a small involvement in theatre when I was still very young, and learned that I couldn’t act to save myself. Nevertheless, my very first foray into writing was an attempt to write a play. Now, I’d say it’s the hardest and most unforgiving medium of storytelling to write for. I had a friend who was involved in theatre for some decades and even won awards. She passed away a couple of years ago and I miss her very much. At her funeral, she was given a standing ovation when her coffin was taken out; it was very moving. I can’t go to a play now without thinking about her and wishing I could discuss it with her.

Monday, 22 July 2024

Zen and the art of flow

 This was triggered by a newsletter I received from ABC Classic (an Australian radio station) with a link to a study done on ‘flow’, a term coined by psychologist Mihaly Csikszentmihalyi to describe a specific psychological experience that many (if not all) people have had when totally immersed in some activity that they not only enjoy but have developed some expertise in.
 
The study was performed by Dr John Kounios from Drexel University's Creative Research Lab in Philadelphia, who “examined the 'neural and psychological correlates of flow' in a sample of jazz guitarists.” The article was authored by Jennifer Mills from ABC Classic’s sister station, ABC Jazz. But the experience of ‘flow’ doesn’t just apply to mental or artistic activities; it also applies to sporting activities like playing tennis or cricket. Mills heads her article with the claim that ‘New research helps unlock the secrets of flow, an important tool for creative and problem solving tasks’. She quotes Csikszentmihalyi to provide a working definition:
 
"A state in which people are so involved in an activity that nothing else seems to matter; the experience is so enjoyable that people will continue to do it even at great cost, for the sheer sake of doing it."
 
I believe I’ve experienced ‘flow’ in two quite disparate activities: writing fiction and driving a car. Just to clarify: some people think that experiencing flow while driving means that you daydream, whereas I’m talking about the exact opposite. I hardly ever daydream while driving, and if I find myself doing it, I bring myself back to the moment. Of course, cars these days are designed to insulate you from the experience of driving as much as possible, as we evolve towards self-driving cars. Thankfully, there are still cars available that are designed to involve you in the experience rather than remove you from it.
 
I was struck by the fact that the study used jazz musicians, as I’ve often compared the ability to play jazz with the ability to write dialogue (even though I’m not a musician). They both require extemporisation. The article references Nat Bartsch, whom I’ve seen perform live and whose music is an unusual style of jazz in that it can be very contemplative. I saw her perform one of her albums with her quartet, augmented with a cello, which made it a one-off, unique performance. (This is a different concert performed in Sydney without the cellist.)
 
The study emphasised the point that the more experienced practitioners taking part were the ones more likely to experience ‘flow’. In other words, to experience ‘flow’ you need to reach a certain skill-level. In emphasising this point, the author quotes jazz legend, Charlie Parker:
 
"You've got to learn your instrument. Then, you practise, practise, practise. And then, when you finally get up there on the bandstand, forget all that and just wail."
 
I can totally identify with this, as when I started writing, my work was complete crap, to the extent that I wouldn’t show it to anyone. For some irrational reason, I had the self-belief – some might say arrogance – that, with enough perseverance and practice, I could break through to the required skill-level. In fact, I now create characters and write dialogue with little conscious effort – it’s become a ‘delegated’ task, so I can concentrate on the more complex tasks of resolving plot points, developing moral dilemmas and formulating plot twists. Notice that these require a completely different set of skills, which also had to be learned from scratch. But all this can come together, often in unexpected and surprising ways, when one is in the mental state of ‘flow’. I’ve described the feeling as being an observer rather than the progenitor, so the process occurs as if you’re a medium and you just have to trust it.
 
Dr. Steffan Herff, leader of the Sydney Music, Mind and Body Lab at Sydney University, makes a point that supports this experience:
 
"One component that makes flow so interesting from a cognitive neuroscience and psychology perspective, is that it comes with a 'loss of self-consciousness'."
 
And this allows me to segue into Zen Buddhism. Many years ago, I read an excellent book by Daisetz Suzuki titled Zen and Japanese Culture, where he traces the evolutionary development of Zen, starting with Buddhism in India, then its adoption in China, where it was influenced by Taoism, before reaching Japan, where it was assimilated as a sister-religion (for want of a better term) to Shintoism, which is an animistic religion.
 
Suzuki describes Zen as going inward rather than outward, while acknowledging that the two can’t be disconnected. But I think it’s the loss of ‘self’ that makes it relevant to the experience of flow. When Suzuki described the way Zen is practiced in Japan, he talked about being in the moment, whatever the activity, and for me, this is an ideal that we rarely attain. It was only much later that I realised that this is synonymous with flow as described by Csikszentmihalyi and currently being examined in the studies referenced above.
 
I’ve only once before written a post on Zen (not counting a post on Buddhism and Christianity), which arose from reading Douglas Hofstadter’s seminal tome, Gödel, Escher, Bach (which is not about Zen, although it gets a mention), and it’s worth quoting my own summation:
 
My own take on this is that one’s ego is not involved yet one feels totally engaged. It requires one to be completely in the moment, and what I’ve found in this situation is that time disappears. Sportsmen call it being ‘in the zone’ and it’s something that most of us have experienced at some time or another.

Saturday, 15 June 2024

The negative side of positive thinking

 This was a topic in last week’s New Scientist (8 June 2024) under the heading, The Happiness Trap, an article written by Conor Feehly, a freelance journalist based in Bangkok. Basically, he talks about the plethora of ‘self-help’ books and in particular the ‘emergence of the positive psychology movement in 1998’. I was surprised he could provide a year, when one would tend to think it was a generational transition. At least, that’s my experience.
 
He then discusses the backlash (my term, not his) that’s occurred since, and mentions a study, ‘published in 2022, [by] an international group of psychologists exploring how societal pressure to be happy affects people in 40 countries’ (my emphasis). He cites Brock Bastian at the University of Melbourne, who was part of the study, “When we are not willing to accept negative emotions as a part of life, this can mean that we may see negative emotions as a sign there is something wrong with us.” And this gets to the nub of the issue.
 
I can’t help but think that there is a generational effect, if not a divide. I see myself as being in between, generationally speaking. My parents lived through the Great Depression and WW2, so they experienced enough negative emotion for all of us. Growing up in rural NSW, we didn’t have much but neither did anyone else, so we didn’t think that was exceptional. There was a lot of negative emotion in our lives as a consequence of the trauma that my Dad experienced as both a wartime serviceman and a prisoner-of-war. It was only much later, as an adult, that I realised this was not the norm. Back then, PTSD wasn’t a term.
 
One of the things that struck me in Feehly’s article was the idea of ‘acceptance’. To quote:
 
Research shows that when people accepted their negative emotions – rather than judging mental experience as good or bad – they become more emotionally resilient, experiencing fewer negative feelings in response to environmental stressors and attaining a greater sense of well-being.
 
He also says in the same context:
 
The good news is that, as we age, we increasingly rely on acceptance – which might help to explain why older people tend to report better emotional well-being.

 
As one of that cohort (older people), I can identify with that sentiment. Acceptance is a multi-faceted word, because one of the unexpected benefits of getting older is that we learn to accept ourselves, becoming less critical and judgemental, and hopefully extending that to others.
 
In our youth, acceptance by one’s peers is a prime driver of self-esteem and associated behaviours, and social media has to a large extent hijacked that impulse, which was also highlighted by Brock Bastian (cited above).
 
I’ve got side-tracked, to the extent that acceptance is the antithesis of the so-called ‘positive psychology movement’, possibly because I think my generation largely avoided that trap. We are more likely to see that a ‘think positive’ attitude in the face of all of life’s dilemmas and problems is a delusion. What’s obvious is that negative emotional states have evolutionary value, because they have ancient roots. The other point that’s obvious to me is that we are all addicted to stories, through which we vicariously experience negative emotions on a regular basis. In fact, a story that contained only positive emotions would never be read, or watched.
 
What has always been obvious to me, and which I’ve written about before, including in the very early history of this blog, is that we need adversity to gain wisdom. As I keep saying, it’s the theme of virtually every story ever told. When I look back on my early adult years and how insurmountable their problems seemed, my older self is so grateful I persevered. There is a hypothetical often raised: what advice would you give your younger self? I’d just say, ‘Hang in there, it gets better.’

Sunday, 2 June 2024

Radical ideas

 It’s hard to think of anyone I admire in physics and philosophy who doesn’t have at least one radical idea. Even Richard Feynman, who avoided hyperbole and embraced doubt as part of his credo: "I’d rather have doubt and be uncertain, than be certain and wrong."
 
But then you have this quote from his good friend and collaborator, Freeman Dyson:

Thirty-one years ago, Dick Feynman told me about his ‘sum over histories’ version of quantum mechanics. ‘The electron does anything it likes’, he said. ‘It goes in any direction at any speed, forward and backward in time, however it likes, and then you add up the amplitudes and it gives you the wave-function.’ I said, ‘You’re crazy.’ But he wasn’t.
 
In fact, his crazy idea led him to a Nobel Prize. That exception aside, most radical ideas are either still-born or yet to bear fruit, and that includes mine. No, I don’t compare myself to Feynman – I’m not even a physicist – and the truth is I’m unsure if I even have an original idea to begin with, radical or otherwise. I just read a lot of books by people much smarter than me, and cobble together a philosophical approach that I hope is consistent, even if sometimes unconventional. My only consolation is that I’m not alone: most, if not all, of the people smarter than me also hold unconventional ideas.
 
Recently, I re-read Robert M. Pirsig’s iconoclastic book, Zen and the Art of Motorcycle Maintenance, which I originally read in the late 70s or early 80s, so within a decade of its publication (1974). It wasn’t how I remembered it, not that I remembered much at all, except it had a huge impact on a lot of people who would never normally read a book that was mostly about philosophy, albeit disguised as a road-trip. I think it keyed into a zeitgeist at the time, where people were questioning everything. You might say that was more the 60s than the 70s, but it was nearly all written in the late 60s, so yes, the same zeitgeist, for those of us who lived through it.
 
Its relevance to this post is that Pirsig had some radical ideas of his own – at least, radical to me and to virtually anyone with a science background. I’ll give you a flavour with some selective quotes. But first some context: the story’s protagonist, whom we assume is Pirsig himself telling the story in first person, is having a discussion with his fellow travellers, a husband and wife who have their own motorcycle (Pirsig is travelling with his teenage son as pillion), so there are two motorcycles and four companions for at least part of the journey.
 
Pirsig refers to a time (in Western culture) when ghosts were considered a normal part of life. But he then introduces his iconoclastic idea that we have our own ghosts.
 
Modern man has his own ghosts and spirits too, you know.
The laws of physics and logic… the number system… the principle of algebraic substitution. These are ghosts. We just believe in them so thoroughly they seem real.

 
Then he specifically cites the law of gravity, saying provocatively:
 
The law of gravity and gravity itself did not exist before Isaac Newton. No other conclusion makes sense.
And what that means, is that the law of gravity exists nowhere except in people’s heads! It’s a ghost! We are all of us very arrogant and conceited about running down other people’s ghosts but just as ignorant and barbaric and superstitious about our own.
Why does everybody believe in the law of gravity then?
Mass hypnosis. In a very orthodox form known as “education”.

 
He then goes from the specific to the general:
 
Laws of nature are human inventions, like ghosts. Laws of logic, of mathematics are also human inventions, like ghosts. The whole blessed thing is a human invention, including the idea it isn’t a human invention. (His emphasis)
 
And this is philosophy in action: someone challenges one of your deeply held beliefs, which forces you to defend it. Of course, I’ve argued the exact opposite, claiming that ‘in the beginning there was logic’. And it occurred to me right then, that this in itself, is a radical idea, and possibly one that no one else holds. So, one person’s radical idea can be the antithesis of someone else’s radical idea.
 
Then there is this, which I believe holds the key to our disparate points of view:
 
We believe the disembodied 'words' of Sir Isaac Newton were sitting in the middle of nowhere billions of years before he was born and that magically he discovered these words. They were always there, even when they applied to nothing. Gradually the world came into being and then they applied to it. In fact, those words themselves were what formed the world. (again, his emphasis)
 
Note his emphasis on 'words', as if they alone make some phenomenon physically manifest.
 
My response: don’t confuse or conflate the language one uses to describe some physical entity, phenomenon or manifestation with what it describes. The natural laws, including gravity, are mathematical in nature, obeying sometimes obtuse and esoteric mathematical relationships, which we have uncovered over eons of time; that doesn’t mean they only came into existence when we discovered them and created the language to describe them. Mathematical notation only exists in the mind, correct, including the number system we adopt, but the mathematical relationships that the notation describes exist independently of mind, in the same way that nature’s laws do.
 
John Barrow, cosmologist and Fellow of the Royal Society, made the following point about the mathematical ‘laws’ we formulated to describe the first moments of the Universe’s genesis (Pi in the Sky, 1992).
 
Specifically, he says our mathematical theories describing the first three minutes of the Universe predict specific ratios of the earliest ‘heavier’ elements: deuterium, two isotopes of helium, and lithium, which are roughly 1/1,000, 1/1,000, 22% and 1/100,000,000 respectively, with the remainder (roughly 78%) being hydrogen. And this has been confirmed by astronomical observations. He then makes the following salient point:



It confirms that the mathematical notions that we employ here and now apply to the state of the Universe during the first three minutes of its expansion history at which time there existed no mathematicians… This offers strong support for the belief that the mathematical properties that are necessary to arrive at a detailed understanding of events during those first few minutes of the early Universe exist independently of the presence of minds to appreciate them.
 
As you can see, this effectively repudiates Pirsig’s argument; but to be fair to Pirsig, Barrow wrote it almost two decades after Pirsig’s book.
 
In the same vein, Pirsig then goes on to discuss Poincare’s Foundations of Science (which I haven’t read), specifically talking about Euclid’s famous fifth postulate concerning parallel lines never meeting, and how it created problems because it couldn’t be derived from more basic axioms and yet didn’t, of itself, function as an axiom. Euclid himself was aware of this, and never used it as an axiom to prove any of his theorems.
 
It was only in the 19th Century, with the advent of Riemann and other non-Euclidean geometries on curved surfaces that this was resolved. According to Pirsig, it led Poincare to question the very nature of axioms.
 
Are they synthetic a priori judgements, as Kant said? That is, do they exist as a fixed part of man’s consciousness, independently of experience and uncreated by experience? Poincare thought not…
Should we therefore conclude that the axioms of geometry are experimental verities? Poincare didn’t think that was so either…
Poincare concluded that the axioms of geometry are conventions, our choice among all possible conventions is guided by experimental facts, but it remains free and is limited only by the necessity of avoiding all contradiction.

 
I have my own view on this, but it’s worth seeing where Pirsig goes with it:
 
Then, having identified the nature of geometric axioms, [Poincare] turned to the question, Is Euclidean geometry true or is Riemann geometry true?
He answered, The question has no meaning.
[One might] as well as ask whether the metric system is true and the avoirdupois system is false; whether Cartesian coordinates are true and polar coordinates are false. One geometry can not be more true than another; it can only be more convenient. Geometry is not true, it is advantageous.
 
I think this is a false analogy, because the adoption of a system of measurement (i.e. units) and even the adoption of which base arithmetic one uses (decimal, binary, hexadecimal being the most common) are all conventions.
 
So why wouldn’t I say the same about axioms? Pirsig and Poincare are right in as much as both Euclidean and Riemann geometry are true, but their truth depends on the geometry of the space one is describing; both are used to describe physical phenomena. In fact, in a twist that Pirsig probably wasn’t aware of, Einstein used Riemann geometry to describe gravity in a way that Newton could never have envisaged, because Newton only had Euclidean geometry at his disposal. Einstein formulated a mathematical expression of gravity that is dependent on the geometry of spacetime, and it has been empirically verified to explain phenomena that Newton’s couldn’t. Of course, there are also limits to what Einstein’s equations can explain, so there are more mathematical laws still to uncover.
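That each geometry is true of the surface it describes can be checked with a concrete case. On a unit sphere, a triangle with one vertex at the north pole and two on the equator, 90 degrees of longitude apart, has three right angles; Girard’s theorem then gives its area as the ‘spherical excess’ (angle sum minus π), something with no Euclidean counterpart.

```python
import math

# Triangle on the unit sphere: north pole plus two equator points
# 90 degrees of longitude apart. Each of its three angles is a right angle.
angles = [math.pi / 2, math.pi / 2, math.pi / 2]

excess = sum(angles) - math.pi   # angle sum exceeds Euclid's 180 degrees
area = 1.0 ** 2 * excess         # Girard's theorem: area = R^2 * excess

print(round(math.degrees(sum(angles))))  # 270, not 180
print(round(area, 6))                    # pi/2, about 0.785398
```

On a flat plane the excess is always zero; on the sphere it directly measures area. Neither geometry is ‘more true’; each fits its own surface, which is Poincare’s point, and mine.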
 
But where Pirsig states that we adopt the axiom that is convenient, I contend that we adopt the axiom that is necessary, because axioms inherently expand the area of mathematics we are investigating. This is a consequence of Godel’s Incompleteness Theorem, which states that there are limits to what any axiom-based, consistent, formal system of mathematics can prove to be true. Godel himself pointed out that the resolution lies in expanding the system by adopting further axioms. The expansion of Euclidean to non-Euclidean geometry is a case in point. The example I like to give is the adoption of √-1 = i, which gave us complex algebra and the means to mathematically describe quantum mechanics. In both cases, the new axioms allowed us to solve problems that had hitherto been impossible to solve. So it’s not just a convenience but a necessity.
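The √-1 = i example can be demonstrated directly, since Python has complex numbers built in (written 1j). The snippet below only illustrates that i² = -1 and that complex exponentials give the rotating ‘phases’ that quantum amplitudes are built from; it is a demonstration of the arithmetic, not of quantum mechanics itself.

```python
import cmath

# i squared really is -1: the defining property of the new axiom
print(1j * 1j)  # (-1+0j)

# Euler's formula, e^(i*theta) = cos(theta) + i*sin(theta):
# a pure 'phase' rotation of the kind quantum amplitudes are built from
theta = cmath.pi / 2
z = cmath.exp(1j * theta)  # approximately i (up to floating-point error)
print(abs(z))              # 1.0 -- the magnitude never changes, only the phase
```

The fact that multiplying by e^(iθ) rotates without stretching is exactly why complex algebra suits wave-like phenomena: it was impossible to express this within the real numbers alone.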
 
I know I’ve belaboured the point, but both of these – non-Euclidean geometry and complex algebra – were at one time radical ideas in the mathematical world, and they ultimately led to radical ideas in the scientific world: general relativity and quantum mechanics. Are they ghosts? Perhaps ghost is an apt metaphor, given that they appear timeless and have outlived their discoverers, not to mention the rest of us. Most physicists and mathematicians tacitly believe that they not only continue to exist beyond us, but existed prior to us, and possibly prior to the Universe itself.
 
I will briefly mention another radical idea, which I borrowed from Schrodinger, though I drew conclusions that he didn’t formulate: that consciousness exists in a constant present, and hence creates the psychological experience of the flow of time, because everything else becomes the past as soon as it happens. I contend that only consciousness provides the reference point for past, present and future that we all take for granted.

Tuesday, 2 January 2024

Modes of expression in writing fiction

As I point out in the post, this is a clumsy phrase, but I find it hard to come up with a better one. It’s actually something I wrote on Quora in response to a question. I’ve written on this before, but this post has the benefit of being much more succinct while possibly just as edifying.
 
I use the term ‘introspection’ where others use the word, ‘insight’. It’s the reader’s insight but the character’s introspection, which is why I prefer that term in this context.
 
The questioner is Clxudy Pills, obviously a pseudonym. I address her directly in the answer, partly because, unlike other questioners, she has always acknowledged my answers.
 

Is "show, not tell" actually a good writing tip?

 
Maybe. No one said that to me when I was starting out, so it had no effect on my development. But I did read a book (more than one, actually) on ‘writing’ that delineated 5 categories of writing ‘style’. Style in this context means the mode of expression rather than an author’s individual style or ‘voice’. That’s clumsily stated but it will make sense when I tell you what they are.
 

  1. Dialogue is the most important because it’s virtually unique to fiction; quotes provided in non-fiction notwithstanding. Dialogue, more than any other style, tells you about the characters and their interactions with others.



  2. Introspection is what the character thinks, effectively. This only happens in novels and short stories, not screenplays or stage plays, soliloquies being the exception and certainly not the rule. But introspection is essential to prose, especially when the character is on their own.



  3. Exposition is the ‘telling’, not showing, part. When you’re starting out and learning your craft, you tend to write a lot of exposition – I know I did – which is why we get the admonition in your question. But the exposition can be helpful to you, if not the reader, as it allows you to explore the setting, the context of the story and its characters. Eventually, you’ll learn not to rely on it. Exposition is ‘smuggled’ into movies through dialogue and into novels through introspection.



  4. Description is more difficult than you think, because it’s the part of a novel that readers will skip over to get on with the story. Description can be more boring than exposition, yet it’s necessary. My approach is to always describe a scene from a character’s POV, and keep it minimalist. Readers automatically fill in the details, because we are visual creatures and we do it without thinking.



  5. Action is description in motion. Two rules: stay in one character’s POV and keep it linear – one thing happens after another. It has the dimension of time, though it’s subliminal.

 
 So there: you get 5 topics for the price of one.
 

Monday, 23 October 2023

The mystery of reality

Many will say, ‘What mystery? Surely, reality just is.’ So, where to start? I’ll start with an essay by Raymond Tallis, who has a regular column in Philosophy Now called, Tallis in Wonderland – sometimes contentious, often provocative, always thought-expanding. His latest in Issue 157, Aug/Sep 2023 (new one must be due) is called Reflections on Reality, and it’s all of the above.
 
I’ve written on this topic many times before, so I’m sure to repeat myself. But Tallis’s essay, I felt, deserved both consideration and a response, partly because he starts with the one aspect of reality that we hardly ever ponder, which is doubting its existence.
 
Actually, not so much its existence, but whether our senses fool us, which they sometimes do, like when we dream (a point Tallis makes himself). And this brings me to the first point about reality that no one ever seems to discuss, and that is its dependence on consciousness, because when you’re unconscious, reality ceases to exist, for You. Now, you might argue that you’re unconscious when you dream, but I disagree; it’s just that your consciousness is misled. The point is that we sometimes remember our dreams, and I can’t see how that’s possible unless there is consciousness involved. If you think about it, everything you remember was laid down by a conscious thought or experience.
 
So, just to be clear, I’m not saying that the objective material world ceases to exist without consciousness – a philosophical position called idealism (advocated by Donald Hoffman) – but that the material objective world is ‘unknown’ and, to all intents and purposes, might as well not exist if it’s unperceived by conscious agents (like us). Try to imagine the Universe if no one observed it. It’s impossible, because the word, ‘imagine’, axiomatically requires a conscious agent.
 
Tallis proffers a quote from celebrated sci-fi author, Philip K Dick: 'Reality is that which, when you stop believing in it, doesn’t go away' (from The Shifting Realities of Philip K Dick, 1995). And this allows me to segue into the world of fiction, which Tallis doesn’t really discuss, but it’s another arena where we willingly ‘suspend disbelief' to temporarily and deliberately conflate reality with non-reality. This is something I have in common with Dick, because we have both created imaginary worlds that are more than distorted versions of the reality we experience every day; they’re entirely new worlds that no one has ever experienced in real life. But Dick’s aphorism expresses this succinctly. The so-called reality of these worlds, in these stories, only exists while we believe in them.
 
I’ve discussed elsewhere how the brain (not just human but animal brains, generally) creates a model of reality that is so ‘realistic’, we actually believe it exists outside our head.
 
I recently had a cataract operation, which was most illuminating when I took the bandage off, because my vision in that eye was so distorted, it made me feel seasick. Everything had a lean to it and it really did feel like I was looking through a lens; I thought they had botched the operation. With both eyes open, it looked like objects were peeling apart. So I put a new eye patch on, and distracted myself for an hour by doing a Sudoku problem. When I had finished it, I took the patch off and my vision was restored. The brain had made the necessary adjustments to restore the illusion of reality as I normally interact with it. And that’s the key point: the brain creates a model so accurately, integrating all our senses, but especially sight, sound and touch, that we think the model is the reality. And all creatures have evolved that facility simply so they can survive; it’s a matter of life-and-death.
 
But having said all that, there are some aspects of reality that really do only exist in your mind, and not ‘out there’. Colour is the most obvious, but so are sound and smell, all of which may be experienced differently by other species – how are we to know? Actually, we do know that some animals can hear sounds that we can’t and see colours that we don’t, and vice versa. And I contend that these sensory experiences are among the attributes that keep us distinct from AI.
 
Tallis makes a passing reference to Kant, who argued that space and time are also aspects of reality that are produced by the mind. I have always struggled to understand how Kant got that so wrong. Mind you, he lived more than a century before Einstein all but proved that space and time are fundamental parameters of the Universe. Nevertheless, there are more than a few physicists who argue that the ‘flow of time’ is a purely psychological phenomenon. They may be right (but arguably for different reasons). If consciousness exists in a constant present (as expounded by Schrodinger) and everything else becomes the past as soon as it happens, then the flow of time is guaranteed for any entity with consciousness. However, many physicists (like Sabine Hossenfelder), if not most, argue that there is no ‘now’ – it’s an illusion.
 
Speaking of Schrodinger, he pointed out that there are fundamental differences between how we sense sight and sound, even though they are both waves. In the case of colour, we can blend them to get a new colour, and in fact, as we all know, all the colours we can see can be generated by just 3 colours, which is how the screens on all your devices work. However, that’s not the case with sound, otherwise we wouldn’t be able to distinguish all the different instruments in an orchestra. Just think: all the complexity is generated by a vibrating membrane (in the case of a speaker) and somehow our hearing separates it all. Of course, it can be done mathematically with a Fourier transform, but I don’t think that’s how our brains work, though I could be wrong.
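The Fourier point can be demonstrated in a short Python sketch. This is an illustration of the mathematics, not a claim about how brains work; the two frequencies, amplitudes and sample rate are made up for the example. It mixes two pure tones into a single signal, then uses a discrete Fourier transform to pull them apart again:

```python
import math, cmath

rate, n = 8000, 800              # 0.1 s of 'audio'; bins are 10 Hz apart
tones = {440: 1.0, 660: 0.5}     # two pure tones at different loudness
signal = [sum(a * math.sin(2 * math.pi * f * t / rate)
              for f, a in tones.items())
          for t in range(n)]

def dft_bin(x, k):
    """Correlate the signal with a complex sinusoid at frequency k*rate/n."""
    return sum(x[t] * cmath.exp(-2j * math.pi * k * t / len(x))
               for t in range(len(x)))

# Scan the lower bins and keep the two strongest peaks.
mags = {k * rate // n: abs(dft_bin(signal, k)) for k in range(1, 100)}
peaks = sorted(mags, key=mags.get, reverse=True)[:2]
print(sorted(peaks))             # [440, 660] - both tones recovered
```

The single mixed waveform is exactly what a vibrating speaker membrane produces, yet the transform recovers both component frequencies, which is the separation our hearing somehow performs effortlessly.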
 
And this leads me to discuss the role of science, and how it challenges our everyday experience of reality. Not surprisingly, Tallis also took his discussion in that direction. Quantum mechanics (QM) is the logical starting point, and Tallis references Bohr’s Copenhagen interpretation, ‘the view that the world has no definite state in the absence of observation.’ Now, I happen to think that there is a logical explanation for this, though I’m not sure anyone else agrees. If we go back to Schrodinger again, but this time his eponymous equation, it describes events before the ‘observation’ takes place, albeit with probabilities. What’s more, all the weird aspects of QM, like the Uncertainty Principle, superposition and entanglement, are all mathematically entailed in that equation. What’s missing is relativity theory, which has since been incorporated into QED or QFT.
 
But here’s the thing: once an observation or ‘measurement’ has taken place, Schrodinger’s equation no longer applies. In other words, you can’t use Schrodinger’s equation to describe something that has already happened. This is known as the ‘measurement problem’, because no one can explain it. But if QM only describes things that are yet to happen, then all the weird aspects aren’t so weird.
 
Tallis also mentions Einstein’s 'block universe', which implies past, present and future all exist simultaneously. In fact, that’s what Sabine Hossenfelder says in her book, Existential Physics:
 
The idea that the past and future exist in the same way as the present is compatible with all we currently know.

 
And:

Once you agree that anything exists now elsewhere, even though you see it only later, you are forced to accept that everything in the universe exists now. (Her emphasis.)
 
I’m not sure how she resolves this with cosmological history, but it does explain why she believes in superdeterminism (meaning the future is fixed), which axiomatically leads to her other strongly held belief that free will is an illusion; but so did Einstein, so she’s in good company.
 
In a passing remark, Tallis says, ‘science is entirely based on measurement’. I know from other essays Tallis has written that he believes the entire edifice of mathematics only exists because we can measure things, which we then applied to the natural world, which is why we have so-called ‘natural laws’. I’ve discussed his ideas on this elsewhere, but I think he has it back-to-front, whilst acknowledging that our ability to measure things, which is an extension of counting, is how humanity was introduced to mathematics. In fact, the ancient Greeks put geometry above arithmetic because it’s so physical. This is why there were no negative numbers in their mathematics: the idea of a negative volume or area made no sense.
 
But, in the intervening 2 millennia, mathematics took on a life of its own, with such exotic entities as square roots of negative numbers and non-Euclidean geometry, which in turn found unexpected homes in QM and relativity theory respectively. All of a sudden, mathematics was informing us about reality before measurements were even made. Take Schrodinger’s wavefunction, which lies at the heart of his equation, and can’t be measured because it only exists in the future, assuming what I said above is correct.
 
But I think Tallis has a point, and I would argue that consciousness can’t be measured, which is why it might remain inexplicable to science, correlation with brain waves and their like notwithstanding.
 
So what is the mystery? Well, there’s more than one. For a start there is consciousness, without which reality would not be perceived or even be known, which seems to me to be pretty fundamental. Then there are the aspects of reality which have only recently been discovered, like the fact that time and space can have different ‘measurements’ dependent on the observer’s frame of reference. Then there is the increasing role of mathematics in our comprehension of reality at scales both cosmic and subatomic. In fact, given the role of numbers and mathematical relationships in determining fundamental constants and natural laws of the Universe, it would seem that mathematics is an inherent facet of reality.

 

Addendum:

As it happens, I wrote a letter to Philosophy Now on this topic, which they published, and also passed onto Raymond Tallis. As a consequence, we had a short correspondence - all very cordial and mutually respectful.

One of his responses can be found, along with my letter, under Letters, Issue 160. Scroll down to Lucky Guesses.
 

Sunday, 15 October 2023

What is your philosophy of life and why?

This was a question I answered on Quora, and, without specifically intending to, I brought together 2 apparently unrelated topics. The reason I discuss language is because it’s so intrinsic to our identity, not only as a species, but as an individual within our species. I’ve written an earlier post on language (in response to a Philosophy Now question-of-the-month), which has a different focus, and I deliberately avoided referencing that.
 
A ‘philosophy of life’ can be represented in many ways, but my perspective is within the context of relationships, in all their variety and manifestations. It also includes a recurring theme of mine.



First of all, what does one mean by ‘philosophy of life’? For some people, it means a religious or cultural way-of-life. For others it might mean a category of philosophy, like post-modernism or existentialism or logical positivism.
 
For me, it means a philosophy on how I should live, and on how I both look at and interact with the world. This is not only dependent on my intrinsic beliefs that I might have grown up with, but also on how I conduct myself professionally and socially. So it’s something that has evolved over time.
 
I think that almost all aspects of our lives are dependent on our interactions with others, which start right from when we are born and really only end when we die. And the thing is that everything we do, including all our failures and successes, occurs in this context.
 
Just to underline the significance of this dependence, we all think in a language, and we all gain our language from our milieu at an age before we can rationally and critically think, especially compared to when we mature. In fact, language is analogous to software that gets downloaded from generation to generation, so that knowledge can also be passed on and accumulated over ages, which has given rise to civilizations and disciplines like science, mathematics and art.
 
This all sounds off-topic, but it’s core to who we are and it’s what distinguishes us from other creatures. Language is also key to our relationships with others, both socially and professionally. But I take it further, because I’m a storyteller and language is the medium I use to create a world inside your head, populated by characters who feel like real people and who interact in ways we find believable. More than any other activity, this illustrates how powerful language is.
 
But it’s the necessity of relationships in all their manifestations that determines how one lives one’s life. As a consequence, my philosophy of life centres around one core value and that is trust. Without trust, I believe I am of no value. But more than that, trust is the foundational value upon which a society either flourishes or devolves into a state of oppression with its antithesis, rebellion.

 

Saturday, 16 September 2023

Modes of thinking

I’ve written a few posts on creative thinking as well as analytical and critical thinking. But, not that long ago, I read a not-so-recently published book (2015) by 2 psychologists (John Kounios and Mark Beeman) titled The Eureka Factor: Creative Insights and the Brain. To quote from the back fly-leaf:
 
Dr John Kounios is Professor of Psychology at Drexel University and has published cognitive neuroscience research on insight, creativity, problem solving, memory, knowledge representation and Alzheimer’s disease.
 
Dr Mark Beeman is Professor of Psychology and Neuroscience at Northwestern University, and researches creative problem solving and creative cognition, language comprehension and how the right and left hemispheres process information.

 
They divide people into 2 broad groups: ‘Insightfuls’ and ‘analytical thinkers’. Personally, I think the coined term, ‘insightfuls’, is misleading or too narrow in its definition, and I prefer the term ‘creatives’. More on that below.
 
As the authors say, themselves, ‘People often use the terms “insight” and “creativity” interchangeably.’ So that’s obviously what they mean by the term. However, the dictionary definition of ‘insight’ is ‘an accurate and deep understanding’, which I’d argue can also be obtained by analytical thinking. Later in the book, they describe insights obtained by analytical thinking as ‘pseudo-insights’, and the difference can be ‘seen’ with neuro-imaging techniques.
 
All that aside, they do provide compelling arguments that there are 2 distinct modes of thinking that most of us experience. Very early in the book (in the preface, actually), they describe the ‘ah-ha’ experience that we’ve all had at some point, where we’re trying to solve a puzzle and then it comes to us unexpectedly, like a light-bulb going off in our head. They then relate something that I didn’t know, which is that neurological studies show that when we have this ‘insight’ there’s a spike in our brain waves and it comes from a location in the right hemisphere of the brain.
 
Many years ago (decades) I read a book called Drawing on the Right Side of the Brain by Betty Edwards. I thought neuroscientists would disparage this as pop-science, but Kounios and Beeman seem to give it some credence. Later in the book, they describe this in more detail, where there are signs of activity in other parts of the brain, but the ah-ha experience has a unique EEG signature and it’s in the right hemisphere.
 
The authors distinguish this unexpected insightful experience from an insight that is a consequence of expertise. I made this point myself, in another post, where experts make intuitive shortcuts based on experience that the rest of us don’t have in our mental toolkits.
 
They also spend an entire chapter on examples involving a special type of insight, where someone spends a lot of time thinking about a problem or an issue, and then the solution comes to them unexpectedly. A lot of scientific breakthroughs follow this pattern, and the point is that the insight wouldn’t happen at all without all the rumination taking place beforehand, often over a period of weeks or months, sometimes years. I’ve experienced this myself, when writing a story, and I’ll return to that experience later.
 
A lot of what we’ve learned about the brain’s functions has come from studying people with damage to specific areas of the brain. You may have heard of a condition called ‘aphasia’, which is when someone develops a serious disability in language processing following damage to the left hemisphere (possibly from a stroke). What you probably don’t know (I didn’t) is that damage to the right hemisphere, while not directly affecting one’s ability with language, can interfere with its more nuanced interpretations, like sarcasm or even getting a joke. I’ve long believed that when I’m writing fiction, I’m using the right hemisphere as much as the left, but it never occurred to me that readers (or viewers) need the right hemisphere in order to follow a story.
 
According to the authors, the difference between the left and right neo-cortex is one of connections. The left hemisphere has ‘local’ connections, whereas the right hemisphere has more widely spread connections. This seems to correspond to an ‘analytic’ ability in the left hemisphere, and a more ‘creative’ ability in the right hemisphere, where we make conceptual connections that are more wide-ranging. I’ve probably oversimplified that, but it was the gist I got from their exposition.
 
Like most books and videos on ‘creative thinking’ or ‘insights’ (as the authors prefer), they spend a lot of time giving hints and advice on how to improve your own creativity. It’s not until one is more than halfway through the book, in a chapter titled, The Insightful and the Analyst, that they get to the crux of the issue, and describe how there are effectively 2 different types who think differently, even in a ‘resting state’, and how there is a strong genetic component.
 
I’m not surprised by this, as I saw it in my own family, where the difference is very distinct. In another chapter, they describe the relationship between creativity and mental illness, but they don’t discuss how artists are often moody and neurotic, which are personality traits. Openness is another personality trait associated with creative people. I would add another point, based on my own experience: if someone is creative and they are not creating, they can suffer depression. This is not discussed by the authors either.
 
Regarding the 2 types they refer to, they acknowledge there is a spectrum, and I can’t help but wonder where I sit on it. I spent a working lifetime in engineering, which is full of analytic types, though I didn’t work in a technical capacity. Instead, I worked with a lot of technical people of all disciplines: from software engineers to civil and structural engineers to architects, not to mention lawyers and accountants, because I worked on disputes as well.
 
The curious thing is that I was aware of 2 modes of thinking, where I was either looking at the ‘big-picture’ or looking at the detail. I worked as a planner, and one of my ‘tricks’ was the ability to distil a large and complex project into a one-page ‘Gantt’ chart (bar chart). For the individual disciplines, I’d provide a multipage detailed ‘program’ just for them.
 
Of course, I also write stories, where the 2 components are plot and character. Creating characters is purely a non-analytic process, which requires a lot of extemporising. I try my best not to interfere, and I do this by treating them as if they are real people, independent of me. Plotting, on the other hand, requires a big-picture approach, but I almost never know the ending until I get there. In the last story I wrote, I was in COVID lockdown when I knew the ending was close, so I wrote some ‘notes’ in an attempt to work out what happens. Then, sometime later (like a month), I had one sleepless night when it all came to me. Afterwards, I went back and looked at my notes, and they were all questions – I didn’t have a clue.

Wednesday, 31 May 2023

Immortality; from the Pharaohs to cryonics

 I thought the term was cryogenics, but a feature article in the Weekend Australian Magazine (27-28 May 2023) calls the facilities that perform this process, cryonics, and looking up my dictionary, there is a distinction. Cryogenics is about low temperature freezing in general, and cryonics deals with the deep-freezing of bodies specifically, with the intention of one day reviving them.
 
The article cites a few people, but the author, Ross Bilton, features an Australian, Peter Tsolakides, who is in my age group. From what the article tells me, he’s a software engineer who has seen many generations of computer code and has also been a ‘globe-trotting executive for ExxonMobil’.
 
He’s one of the drivers behind a cryonic facility in Australia – its first – located at Holbrook, which is roughly halfway between Melbourne and Sydney. In fact, I often stop at Holbrook for a break and meal on my interstate trips. According to my car’s odometer it is almost exactly half way between my home and my destination, which is a good hour short of Sydney, so it’s actually closer to Melbourne, but not by much.
 
I’m not sure when Tsolakides plans to enter the facility, but he’s forecasting his resurrection in around 250 years’ time, when he expects he may live for another thousand years. Yes, this is science fiction to most of us, but there are some science facts that lend some credence to this venture.
 
For a start, we already cryogenically freeze embryos and sperm, and we know it works for them. There is also the case of Ewa Wisnierska, a 35-year-old German paraglider taking part in an international competition in Australia, who was sucked into a storm and lifted to 9,947 metres (jumbo jet territory, and higher than Everest). Needless to say, she lost consciousness and spent a frozen 45 minutes before she came back to Earth. Quite a miracle, and I’ve watched a doco on it. She made a full recovery and was back at her sport within a couple of weeks. And I know of other cases where the brain of a living person has been frozen to keep them alive, as counter-intuitive as that may sound.
 
Believe it or not, scientists are divided on this, or at least cautious about dismissing it outright. Many take the position, ‘Never say never’. And I think that’s fair enough, because it really is impossible to predict the future when it comes to humanity. It’s not surprising that advocates, like Tsolakides, can see a future where this will become normal for most humans. People who decline immortality will be the exception and not the norm. And I can imagine, if this ‘procedure’ became successful and commonplace, who would say no?
 
Now, I write science fiction, and I have written a story where a group of people decided to create an immortal human race, who were part machine. It’s a reflection of my own prejudices that I portrayed this as a dystopia, but I could have done the opposite.
 
There may be an assumption that if you write science fiction then you are attempting to predict the future, but I make no such claim. My science fiction is complete fantasy, but, like all science fiction, it addresses issues relevant to the contemporary society in which it was created.
 
Getting back to the article in the Weekend Australian, there is an aspect of this that no one addressed – not directly, anyway. There’s no point in cheating death if you can’t cheat old age. In the case of old age, you are dealing with a fundamental law of the Universe, entropy, the second law of thermodynamics. No one asked the obvious question: how do you expect to live for 1,000 years without getting dementia?
 
I think some have thought about this, because, in the same article, they discuss the ultimate goal of downloading their memories and their thinking apparatus (for want of a better term) into a computer. I’ve written on this before, so I won’t go into details.
 
Curiously, I’m currently reading a book by Sabine Hossenfelder called Existential Physics; A Scientist’s Guide to Life’s Biggest Questions, which you would think could not possibly have anything to say on this topic. Nevertheless:
 
The information that makes you you can be encoded in many different physical forms. The possibility that you might one day upload yourself to a computer and continue living a virtual life is arguably beyond present-day technology. It might sound entirely crazy, but it’s compatible with all we currently know.
 
I promise to write another post on Sabine’s book, because she’s nothing if not thought-provoking.
 
So where do I stand? I don’t want immortality – I don’t even want a gravestone, and neither did my father. I have no dependents, so I won’t live on in anyone’s memory. The closest I’ll get to immortality are the words on this blog.