Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.
Showing posts with label Storytelling. Show all posts

Tuesday, 17 June 2025

Sympathy and empathy; what’s the difference?

This arose from an article I read in Philosophy Now (Issue 167, April/May 2025) by James R. Robinson, who developed his ideas while writing his MA thesis in the Netherlands. It prompted me to write a letter, which was published in the next issue (168, June/July 2025). It was given pole position, which in many periodicals would earn the appellation ‘letter of the week’ (or month or whatever). But I may be reading too much into it, because Philosophy Now group their letters by category, according to the topic they are addressing. Anyway, being first is a first for me.
 
They made some minor edits, which I’ve kept. The gist of my argument is that there is a dependency between sympathy and empathy, where sympathy is observed in one’s behaviour, but it stems from an empathy for another person – the ability to put ourselves in their shoes. This is implied in an example (provided by Robinson) rather than stated explicitly.
 
 
In response to James R. Robinson’s ‘Empathy & Sympathy’ in Issue 167, I contend that empathy is essential to a moral philosophy, both in theory and practice. For example, it’s implicit in Confucius’s rule of reciprocity, “Don’t do to others what you wouldn’t want done to yourself” and Jesus’s Golden Rule, “Do unto others as you’d have them do unto you.” Empathy is a requisite for the implementation of either. And as both a reader and writer of fiction, I know that stories wouldn’t work without empathy. Indeed, one study revealed that reading fiction improves empathy. The tests used ‘letter box’ photos of eyes to assess the subject’s ability to read the emotion of the characters behind the eyes (New Scientist, 25 June 2008).

The dependency between empathy and sympathy is implicit in the examples Robinson provides, like the parent picking up another parent’s child from school out of empathy for the person making the request. In most of these cases there is also the implicit understanding that the favour would be returned if the boot was on the other foot. Having said that, many of us perform small favours for strangers, knowing that one day we could be the stranger.

Robinson also introduces another term, ‘passions’; but based on the examples he gives – like pain – I would call them ‘sensations’ or ‘sensory responses’. Even anger is invariably a response to something. Fiction can also create sensory responses (or passions) of all varieties (except maybe physical pain, hunger, or thirst) – which suggests empathy might play a role there as well. In other words, we can feel someone else’s emotional pain, not to mention anger, or resentment, even if the person we’re empathising with is fictional.

The opposite to compassion is surely cruelty. We have world leaders who indulge in cruelty quite openly, which suggests it’s not an impediment to success; but it also suggests that there’s a cultural element that allows it. Our ability to demonise an outgroup is the cause of most political iniquities we witness, and this would require the conscious denial of sympathy and therefore empathy, because ultimately, it requires treating them as less than human, or as not-one-of-us.

Thursday, 29 May 2025

The role of the arts. Why did it evolve? Will AI kill it?

As I mentioned in an earlier post this month, I’m currently reading Brian Greene’s book, Until the End of Time: Mind, Matter, and Our Search for Meaning in an Evolving Universe, which covers just about everything from cosmology to evolution to consciousness, free will, mythology, religion and creativity. He spends a considerable amount of time on storytelling, compared to other art forms, partly because it allows an easy segue from language to mythology to religion.
 
One of his points of extended discussion was in trying to answer the question: why did our propensity for the arts evolve, when it has no obvious survival value? He cites people like Steven Pinker, Brian Boyd (whom I discuss at length in another post) and even Darwin, among others. I won’t elaborate on these, partly due to space, and partly because I want to put forward my own perspective, as someone who actually indulges in an artistic activity, and who could see clearly how I inherited artistic genes from one side of my family (my mother’s side). No one showed the slightest inclination towards artistic endeavour on my father’s side (including my sister). But they all excelled in sport (including my sister), and I was rubbish at sport. One can see how sporting prowess could be a side-benefit to physical survival skills like hunting, but also achieving success in combat, which humans have a propensity for, going back to antiquity.
 
Yet our artistic skills are evident going back at least 30-40,000 years, in the form of cave-art, and one can imagine that other art forms like music and storytelling have been active for a similar period. My own view is that it’s sexual selection, which Greene discusses at length, citing Darwin among others, as well as detractors, like Pinker. The thing is that other species also show sexual selection, especially among birds, which I’ve discussed before a couple of times. The best known example is the peacock’s tail, but I suspect that birdsong also plays a role, not to mention the bower bird and the lyre bird. The lyre bird is an interesting one, because they too have an extravagant tail (I’m talking about the male of the species) which surely would be a hindrance to survival, and they perform a dance and are extraordinary mimics. And the only reason one can think that this might have evolutionary value at all is because the sole purpose of those specific attributes is to attract a mate.
 
And one can see how this is analogous to behaviour in humans, where it is the male who tends to attract females with his talents, in music in particular. As Greene points out, along with others, artistic attributes are a by-product of our formidable brains, but I think these talents would be useless if we hadn’t evolved, in unison, a particular liking for the product of these endeavours (also discussed by Greene), which we see even in the modern world. I’m talking about the fact that music and stories both seem to be essential sources of entertainment, evident in the success of streaming services, not to mention a rich history in literature, theatre, ballet and more recently, cinema.
 
I’ve written before that there are 2 distinct forms of cognitive ability: creative and analytical; and there is neurological evidence to support this. The point is that having an analytical brain is just as important as having a creative one, otherwise scientific theories and engineering feats, which humans seem uniquely equipped to produce, would never have happened, even going back to ancient artefacts like Stonehenge and both the Egyptian and Mayan pyramids. Note that these all happened on different continents.
 
But there are times when the analytical and creative seem to have a synergistic effect, and this is particularly evident when it comes to scientific breakthroughs – a point, unsurprisingly, not lost on Greene, who cites Einstein’s groundbreaking discoveries in relativity theory as a case-in-point.
 
One point that Greene doesn’t make is that there has been a cultural evolution that has effectively overtaken biological evolution in humans, and only in humans I would suggest. And this has been a direct consequence of our formidable brains and everything that goes along with that, but especially language.
 
I’ve made the point before that our special skill – our superpower, if you will – is the ability to nest concepts within concepts, which we do with everything, not just language, but it would have started with language, one would think. And this is significant because we all think in a language, including the ability to manipulate abstract concepts in our minds that don’t even exist in the real world. And nowhere is this more apparent than in the art of storytelling, where we create worlds that exist only in someone’s imagination.
 
But this cultural evolution has created civilisations and all that they entail, and survival of the fittest has nothing to do with eking out an existence in some hostile wilderness environment. These days, virtually everyone who is reading this has no idea where their food comes from. Success is now measured by parameters other than the ability to produce food, even though food production is essential: by one’s ability to earn money, where activities that require brain-power have higher status and higher reward than so-called low-skilled jobs. In fact, in Australia, there is a shortage of trades because, for the last 2 generations at least, the emphasis, vocationally, has been on getting kids into university courses, when it’s not necessarily the best fit for the child. This is why the professional class (including myself) is often called ‘elitist’ in the culture wars and being a tradie is sometimes seen as a stigma, even though our society is just as dependent on them as they are on professionals. I know, because I’ve spent a working lifetime in a specific environment where you need both: engineering/construction.
 
Like all my posts, I’ve gone off-track but it’s all relevant. Like Greene, I can’t be sure how or why evolution in humans was propelled, if not hi-jacked, by art, but art in all its forms is part of the human condition. A life without music, stories and visual art – often in combination – is unimaginable.
 
And this brings me to the last question in my heading. It so happens that while I was reading about this in Greene’s thought-provoking book, I was also listening to a programme on ABC Classic (an Australian radio station) called Legends, which is weekly and where the presenter, Mairi Nicolson, talks about a legend in the classical music world for an hour, providing details about their life as well as broadcasting examples of their work. In this case, she had the legend in the studio (a rare occurrence), who was Anna Goldsworthy. To quote from Wikipedia: Anna Louise Goldsworthy is an Australian classical pianist, writer, academic, playwright, and librettist, known for her 2009 memoir Piano Lessons.

But the reason I bring this up is because Anna mentioned that she attended a panel discussion on the role of AI in the arts. Anna’s own position is that she sees a role for AI, but in doing the things that humans find boring, which is what we are already seeing in manufacturing. In fact, I’ve witnessed this first-hand. Someone on the panel made the point that AI would effectively democratise art (my term, based on what I gleaned from Anna’s recall) in the sense that anyone would be able to produce a work of art and it would cease to be seen as elitist as it is now. He obviously saw this as a good thing, but I suspect many in the audience, including Anna, would have been somewhat unimpressed if not alarmed. Apparently, someone on the panel challenged that perspective but Anna seemed to think the discussion had somehow veered into a particularly dissonant aberration of the culture wars.
 
I’m one of those who would be alarmed by such a development, because it’s the ultimate portrayal of art as a consumer product, similar to the way we now perceive food. And like food, it would mean that its consumption would be completely disconnected from its production.
 
What worries me is that the person on the panel making this announcement (remember, I’m reporting this second-hand) apparently had no appreciation of the creative process and its importance in a functioning human society going back tens of thousands of years.
 
I like to quote from one of the world’s most successful and best known artists, Paul McCartney, in a talk he gave to schoolchildren (don’t know where):
 
“I don't know how to do this. You would think I do, but it's not one of these things you ever know how to do.” (my emphasis)
 
And that’s the thing: creative people can’t explain the creative process to people who have never experienced it. It feels like we have made contact with some ethereal realm. On another post, I cite Douglas Hofstadter (from his famous Pulitzer Prize-winning tome, Gödel, Escher, Bach: An Eternal Golden Braid) quoting Escher:
 
"While drawing I sometimes feel as if I were a spiritualist medium, controlled by the creatures I am conjuring up."

 
Many people writing a story can identify with this, including myself. But one suspects that this also happens to people exploring the abstract world of mathematics. Humans have developed a sense that there is more to the world than what we see and feel and touch, which we attempt to reveal in all art forms, and this, in turn, has led to religion. Of course, Greene spends another entire chapter on that subject, and he also recognises the connection between mind, art and the seeking of meaning beyond a mortal existence.

Friday, 23 May 2025

Insights into writing fiction

This is a series of posts I published on Quora recently, virtually in one day, in response to specific questions, so I thought them worth posting here.
 
 
What made you start writing science fiction?
 
I was late coming to science fiction as a reader, partly because I studied science and the suspension of disbelief was more difficult as a result. In my teens I read James Bond and Carter Brown novels that my father had, plus superhero comics, which I’d been addicted to from a young age. I think all of these influenced my later writing. Mind you, I liked innovative TV shows like Star Trek and The Twilight Zone. The British TV show The Avengers, with Emma Peel and Steed, was a favourite, and it had sci-fi elements. So the seeds were there.
 
I came to sci-fi novels via fantasy, where the suspension of disbelief was mandatory. I remember 2 books which had a profound effect on me: The Lord of the Rings by JRR Tolkien and Dune by Frank Herbert; and I read them in that order. I then started to read Asimov, Clarke, Heinlein and Le Guin. What I liked about sci-fi was the alternative worlds and societies more than the space-travel and gizmos.
 
The first sci-fi I wrote was a screenplay for a teenage audience, called Kidnapped in Time, and it was liberating. I immediately realised that this was my genre. I combined a real-world scenario, based (very loosely*) on my own childhood with a complete fantasy world set in the future and on another planet, which included alien creatures. To be honest, I’ve never looked back.
 
Elvene was even more liberating, partly because I used a female protagonist. Not sure why that worked, but women love it, so I must have done something right. Since then, all my stories feature female protagonists, as well as males. Villains can be male or female or even robots.
 
Science fiction is essentially what-ifs. What if we genetically engineered humans? What if we had humanoid robots? What if we found life on another world? What if we colonised other worlds? What if we could travel intergalactic distances?
 
 
What unconventional writing techniques have you found most effective in crafting compelling characters?
 
How do you differentiate ‘unconventional’ from ‘conventional’? I don’t know if my techniques are one or the other. Characters come to me, similarly, I imagine, to the way melodies and tunes come to composers and songwriters. That wasn’t always the case. When I started out all the characters were different versions of me and not very believable.
 
It’s like acting, and so, in the beginning, I was a poor actor. Don’t ask me how I changed that, because I don’t know – just practice, I guess. To extend the analogy with composing, I compare writing dialogue to playing jazz, because they both require extemporisation. I don’t overthink it, to be honest. I somehow inhabit the character and they come alive. I imagine it’s the same as acting. I say, ‘imagine’, because I can’t act to save my life – I know, I’ve tried.
 
 
How do you balance originality and familiarity when creating characters and plots in your stories?

 
All fiction is a blend of fantasy and reality, and that blend is dependent on the genre and the author’s own proclivities. I like to cite the example of Ian Fleming’s James Bond novels, where the reality was the locations, but also details like what type of gun Bond used (Walther PPK) and the type of cigarettes he smoked (Turkish blend). The fantasy was in the plots, the larger-than-life villains and the femme-fatales with outlandish names.
 
My fiction is sci-fi, so the worlds and plots are total fantasy and the reality is all in the characters and the relationships they form.
 
 
*When I say ‘loosely’, the time and milieu are pretty much the same, but whereas the protagonist had a happy home life, despite having no memory of his mother (he lived with his father and older brother), I had a mother, a father and an older sister, but my home life was anything but happy. I make it a rule not to base characters on anyone I know.

Tuesday, 29 April 2025

Writing and philosophy

I’ve been watching a lot of YouTube videos of Alan Moore, who’s probably best known for his graphic novels, Watchmen and V for Vendetta, both of which were turned into movies. He also wrote a Batman graphic novel, The Killing Joke, which was turned into an R-rated animated movie (due to Batman having sex with Batgirl) with Mark Hamill voicing the Joker. I’m unsure if it has any fidelity to Moore’s work, which was critically acclaimed, whereas the movie received mixed reviews. I haven’t read the graphic novel, so I can’t comment.
 
On the other hand, I read Watchmen and saw the movie, which I reviewed on this blog, and thought they were both very good. I also saw V for Vendetta, starring Natalie Portman and Hugo Weaving, without having read Moore’s original. Moore also wrote a novel, Jerusalem, which I haven’t read, but is referenced frequently by Robin Ince in a video I cite below.
 
All that aside, it’s hard to know where to start with Alan Moore’s philosophy on writing, but the 8 Alan Moore quotes video is as good a place as any if you want a quick overview. For a more elaborate dialogue, there is a 3-way interview, obviously done over a video link, between Moore and Brian Catling, hosted by Robin Ince, with the online YouTube channel, How to Academy. They start off talking about imagination, but get into philosophy when all 3 of them start questioning what reality is, or if there is an objective reality at all.
 
My views on this are well known, and it’s a side-issue in the context of writing or creating imaginary worlds. Nevertheless, had I been party to the discussion, I would have simply mentioned Kant, and how he distinguishes between the ‘thing-in-itself’ and our perception of it. Implicit in that concept is the belief that there is a reality independent of our internal model of it, which is mostly created by a visual representation, but other senses, like hearing, touch and smell, also play a role. This is actually important when one gets into a discussion on fiction, but I don’t want to get ahead of myself. I just wish to make the point that we know there is an external objective reality because it can kill you. Note that a dream can’t kill you, which is a fundamental distinction between reality and a dreamscape. I make this point because I think a story, which takes place in your imagination, is like a dreamscape; so that difference carries over into fiction.
 
And on the subject of life-and-death, Moore references something he’d read on how evolution selects for ‘survivability’ not ‘truth’, though he couldn’t remember the source or the authors. However, I can, because I wrote about that too. He’s obviously referring to the joint paper written by Donald Hoffman and Chetan Prakash called Objects of Consciousness (Frontiers in Psychology, 2014). This depends on what one means by ‘truth’. If you’re talking about mathematical truths then yes, it has little to do with survivability (our modern-day dependence on technical infrastructure notwithstanding). On the other hand, if you’re talking about the accuracy of the internal model in your mind matching the objective reality external to your body, then your survivability is very much dependent on it.
 
Speaking of mathematics, Ince mentions Bertrand Russell giving up on mathematics and embracing philosophy because he failed to find a foundation that ensured its truth (my wording interpreting his interpretation). Basically, that’s correct, but it was Gödel who put the spanner in the works with his famous Incompleteness Theorem, which effectively tells us that there will always exist true mathematical statements that can’t be proven. In other words, he concretely demonstrated (proved, in fact) that there is a distinction between truth and proof in mathematics. Proofs rely on axioms and all axioms have limitations in what they can prove, so you need to keep finding new axioms, and this implies that mathematics is a neverending endeavour. So it’s not the end of mathematics as we know it, but the exact opposite.
 
All of this has nothing to do with writing per se, but since they raised these issues, I felt compelled to deal with them.
 
At the core of this part of their discussion, is the unstated tenet that fiction and non-fiction are distinct, even if the boundary sometimes becomes blurred. A lot of fiction, if not all, contains factual elements. I like to cite Ian Fleming’s James Bond novels containing details like the gun Bond used (a Walther PPK) and the Bentley he drove, which had an Amherst Villiers supercharger. Bizarrely, I remember these trivial facts from a teenage obsession with all things Bond.
 
And this allows me to segue into something that Moore says towards the end of this 3-way discussion, when he talks specifically about fantasy. He says it needs to be rooted in some form of reality (my words), otherwise the reader won’t be able to imagine it at all. I’ve made this point myself, and give the example of my own novel, Elvene, which contains numerous fantasy elements, including both creatures that don’t exist on our world and technology that’s yet to be invented, if ever.
 
I’ve written about imagination before, because I argue it’s essential to free will, which is not limited to humans, though others may disagree. Imagination is a form of time travel, into the past, but more significantly, into the future. Episodic memories and imagination use the same part of the brain (so we are told); but only humans seem to have the facility to time travel into realms that don’t exist anywhere else other than the imagination. And this is why storytelling is a uniquely human activity.
 
I mentioned earlier how we create an internal world that’s effectively a simulation of the external world we interact with. In fact, my entire philosophy is rooted in the idea that each of us has an internal and an external world, which is how I can separate religion from science, because one is completely internal and the other is an epistemology of the physical universe from the cosmic scale to the infinitesimal. Mathematics is a medium that bridges them, and contributes to the Kantian notion that our perception may never completely match the objective reality. Mathematics provides models that increase our understanding while never quite completing it. Gödel’s incompleteness theorem (referenced earlier) effectively limits physics as well. Totally off-topic, but philosophically important.
 
Its relevance to storytelling is that storytelling is a visual medium even when no visuals are presented, which is why I contend that if we didn’t dream, stories wouldn’t work. In response to a question, Moore pointed out that, because he worked on graphic novels, he had to think about the story visually. I’ve made the point before that the best thing I ever did for my own writing was to take some screenwriting courses, because one is forced to think visually and imagine the story being projected onto a screen. In a screenplay, you can only write down what is seen and heard. In other words, you can’t write what a character is thinking. On the other hand, you can write an entire novel from inside a character’s head, and usually more than one. But if you tell a story from a character’s POV (point-of-view) you axiomatically feel what they’re feeling and see what they’re witnessing. This is the whole secret to novel-writing. It’s intrinsically visual, because we automatically create images even if the writer doesn’t provide them. So my method is to provide cues, knowing that the reader will fill in the blanks. No one specifically mentions this in the video, so it’s my contribution.
 
Something else that Moore, Catling and Ince discuss is how writing something down effectively changes the way they think. This is something I can identify with, both in fiction and non-fiction, but fiction specifically. It’s hard to explain this if you haven’t experienced it, but they spend a lot of time on it, so it’s obviously significant to them. In fiction, there needs to be a spontaneity – I’ve often compared it to playing jazz, even though I’m not a musician. So most of the time, you don’t know what you’re going to write until it appears on the screen or on paper, depending which medium you’re using. Moore says it’s like it’s in your hands instead of your head, which is certainly not true. But the act of writing, as opposed to speaking, is a different process, at least for Moore, and also for me.
 
I remember many years ago (decades) when I told someone (a dentist, actually) that I was writing a book. He said he assumed that novelists must dictate it, because he couldn’t imagine someone writing down thousands upon thousands of words. At the time, I thought his suggestion just as weird as he thought mine to be. I suspect some writers do. Philip Adams (Australian broadcaster and columnist) once confessed that he dictated everything he wrote. In my professional life, I have written reports for lawyers in contractual disputes, both in Australia and the US, for which I’ve received the odd kudos. In one instance, someone I was working with was using a cassette-like dictaphone and insisted I do the same, believing it would save time. So I did, in spite of my better judgement, and it was just terrible. Based on that one example, you’d be forgiven for thinking that I had no talent or expertise in that role. Of course, I re-wrote the whole thing, and was never asked to do it again.
 
I originally became interested in Moore’s YouTube videos because he talked about how writing affects you as a person and can also affect the world. I think to be a good writer of fiction you need to know yourself very well, and I suspect that is what he meant without actually saying it. The paradox with this is that you are always creating characters who are not you. I’ve said many times that the best fiction you write is where you’re completely detached – in a Zen state – sometimes called ‘flow’. Virtuoso musicians and top sportspersons will often make the same admission.
 
I believe having an existential philosophical approach to life is an important aspect to my writing, because it requires an authenticity that’s hard to explain. To be true to your characters you need to leave yourself out of it. Virtually all writers, including Moore, talk about treating their characters like real people, and you need to extend that to your villains if you want them to be realistic and believable, not stereotypes. Moore talks about giving multiple dimensions to his characters, which I won’t go into. Not because I don’t agree, but because I don’t over-analyse it. Characters just come to me and reveal themselves as the story unfolds; the same as they do for the reader.
 
What I’ve learned from writing fiction (which I’d self-describe as sci-fi/fantasy) – as opposed to what I didn’t know – is that, at the end of the day (or story), it’s all about relationships. Not just intimate relationships, but relationships between family members, between colleagues, between protagonists and AI, and between protagonists and antagonists. This is the fundamental grist for all stories.
 
Philosophy is arguably more closely related to writing than to any other art form: there is a crossover and an interdependency, because fiction deals with issues relevant to living and being.

Thursday, 6 February 2025

God and the problem of evil

Philosophy Now (a UK publication), which I’ve subscribed to for well over a decade now, is a bi-monthly (so 6 times a year) periodical, and it always has a theme. The theme for Dec 2024/Jan 2025 Issue 165 is The Return of God? In actuality, the articles inside covering that theme deal equally with atheism and theism, in quite diverse ways. It was an article titled A Critique of Pure Atheism (an obvious allusion to Kant) by Andrew Likoudis that prompted me to write a Letter to the Editor, but I’m getting a little ahead of myself. Likoudis, by the way, is president of the Likoudis Legacy Foundation (an ecumenical research foundation), as well as the editor of 6 books, and studies communications at Towson University, which is in Maryland.
 
More than one article tackles the well-known ‘problem of evil’, and one of them even mentions Stephen Law’s not-so-well-known ‘Evil God’ argument. In the early days of this blog, which goes back 17 years, I spent a fair bit of time on Stephen’s blog where I indulged in discussions and arguments (with mostly other bloggers), most of which focused on atheism. In many of those arguments I found myself playing Devil’s advocate.
 
There is a more fundamental question behind the ‘existence of God’ question, which could be best framed as: Is evil necessary? I wrote a post on Evil very early in the life of this blog, in response to a book written by regular essayist for TIME magazine, Lance Morrow, titled Evil, An Investigation. Basically, I argued that evil is part of our evolutionary heritage, and is mostly, but not necessarily, manifest in our tribal nature, and our almost reflex tendency to demonise an outgroup, especially when things take a turn for the worse, either economically or socially or from a combination thereof. Historical examples abound. Some of the articles in Philosophy Now talk about ‘natural evil’, meaning natural disasters, which in the past (and sometimes in the present) are laid at the feet of God. In fact, so-called ‘acts of God’ have a legal meaning, when it comes to insurance claims and contractual issues (where I have some experience).
 
The thing is that ‘bad things happen’, with or without a God, with or without human agency. The natural world is more than capable of creating disasters, havoc and general destruction, with often fatal consequences. I’ve been reading the many articles in Philosophy Now somewhat sporadically, which is why, so far, I’ve only directly referenced one, being the one I responded to, while readily acknowledging that’s a tad unfair. As far as I can tell, no one mentions the Buddhist doctrine of the 4 Noble Truths, the first of which basically says that everyone will experience some form of suffering in their lives. Even wealthy people get ill and are prone to diseases and have to deal with loss of loved ones. These experiences alone are often enough reason for people to turn to religion. I’ve argued repeatedly and consistently that it’s how we deal with adversity that determines what sort of person we become and is what leads to what we call wisdom. It’s not surprising, then, that we associate wisdom with age because, the longer we live, the more adversity we experience and the more we hopefully learn from it.
 
One can’t talk about this without mentioning the role of fiction and storytelling. We are all drawn to stories from the ‘dark side’, which I’ve written about before. As a writer of fiction, I’m not immune to this. I’ve recently been watching a documentary series on the Batman movies, starting with Tim Burton, then Joel Schumacher and finally, Chris Nolan, all of which deal with the so-called dark side of this particular superhero, who is possibly unique among superheroes in flirting with the dark side of that universe. One of the ‘lessons’ gained from watching this doco is that Joel Schumacher’s sequel, Batman & Robin, which arguably attempted to eschew the dark side for a much lighter tone, all but destroyed the franchise. I confess I never saw that movie – I was turned off by the trailer (apparently for good reason). I’m one of those who thinks that Nolan’s The Dark Knight is the definitive Batman movie, with Heath Ledger’s Joker being one of the most iconic villain depictions ever.
 
A detour, but relevant. I’ve noticed that my own fiction has become darker, where I explore dystopian worlds – not unusual in science fiction. I’m reminded of a line from a Leonard Cohen song, ‘There’s a crack in everything; that’s how the light gets in’. I often think that applies to our lives, and it certainly applies to the fiction that I write. I create scenarios of potential doom and oppression, but there is always a light that emerges from somewhere that provides salvation and hope and sometimes redemption. The thing is that we need dark for the light to emerge and that is equally true of life. It’s not hard to imagine life as a test that we have to partake in, and I admit that I find this sometimes being manifest in my dreams as well as my fiction.
 
Having said that, I have an aversion to the idea that there is an afterlife with rewards and punishments dependent on how we live this life. For a start, we are not all tested equally. I only have to look at my father, who was tested much more harshly than me, and, like me, vehemently eschewed the idea of a God who punished his ‘children’ with everlasting torment. Hell and Heaven, like God himself, are projections when presented in this context: human constructs attempting to make sense of an apparently unjust world; they find a correspondence in the Buddhist concept of reincarnation and karma, which I also reject. I was brought up with a Christian education, but at some point I concluded that the biblical God was practically no more moral than the Devil – one only has to look at the story of Job, whom God effectively tortured to win a bet with the Devil.
 
If I can jump back to the previous paragraph before the last, I think we have to live with the consequences of our actions, and I’ve always imagined that I judge my life on my interactions with others rather than my achievements and failures. I don’t see death as an escape or transition, but quite literally an end, where, most significantly, I can no longer affect the world. My own view is that I’m part of some greater whole that not only includes humanity but the greater animal kingdom, and having the unique qualities of comprehension that other creatures don’t have, I have a special responsibility to them for their welfare as well as my own.
 
In this picture, I see God as a projection of my particular ideal, which is not reflected in any culture I’m aware of. I sometimes think the Hindu concept of Brahman (also not referenced in Philosophy Now, from what I’ve read thus far) – a collective ‘mind’, which appealed to Erwin Schrodinger in particular – comes closest to my idea of a God, which would mean that the problem of evil is axiomatically subsumed therein: we get the God we deserve.
 
This is the letter I wrote, which may or may not get published in a future edition:
 
I read with interest Andrew Likoudis’s essay, A Critique of Pure Atheism, because I think, like many (both theists and atheists), he conflates different concepts of God. In fact, as Karen Armstrong pointed out in her book, A History of God, there are 2 fundamentally different paths to believing in God. One path is via a mystical experience and the other is a cerebral rationalisation of God as the Creator of the Universe and everything in it, which I’d call the prime raison d’etre of existence. In other words, without God there would not only be no universe, but no reason for it to exist. I believe Likoudis’s essay is a formulation of this latter concept, even though he expresses it in different terms.

Likoudis makes the valid point that empirical science is not the correct 'instrument', if I can use that term in this context, for ‘proving’ the existence of God, and for good reason. Raymond Tallis has pointed out, more than once, that science can only really deal with entities that can be measured or quantified, which is why mathematics plays such an important, if not essential, role in a lot of science; and physics, in particular.
 
Metaphysics, almost by definition, is outside the empiricist’s domain. I would argue that this includes consciousness: despite measurable correlates with neuronal activity, consciousness itself can’t be measured. The only reason we believe someone else (not to mention other creatures) has consciousness is that their observed behaviour is similar to our own. Conscious experience is what we call mind, and mind is arguably the only connection between the Universe and God, which brings us closer to Armstrong’s argument for God based on mystical experience.

So I think the argument for God, as an experience similar to mind, has more resonance for believers than an argument for God as a Creator with mythical underpinnings. A point that Likoudis doesn't mention is that all the Gods of literature and religion have cultural origins, whereas an experience of God is purely subjective and can’t be shared. The idea that this experience of God is also the creator of the entire universe is a non sequitur. However, if one goes back to God being the raison d’etre for the Universe, then maybe God is the end result rather than its progenitor.

 
 
Footnote: I wrote a post back in 2021 in response to AC Grayling’s book, The God Argument, which is really a polemic against theism in general. You can judge for yourself whether my views are consistent or have changed.

Tuesday, 7 January 2025

Why are we addicted to stories involving struggle?

This is something I’ve written about before, so what can I possibly add? Sometimes the reframing of a question changes the emphasis. In this case, I wrote a post on Quora in response to a fairly vague question, which I took more seriously than the questioner probably expected. As I said, I’ve dealt with these themes before, but adding a very intimate family story adds emotional weight. It’s a story I’ve related before, but this time I elaborate in order to give it the significance I feel it deserves.
 
What are some universal themes in fiction?
 
There is ONE universal theme that’s found virtually everywhere, and its appeal is that it provides a potential answer to the question: What is the meaning of life?

In virtually every story that’s been told, going as far back as Homer’s Odyssey and up to the latest superhero movie, with everything else in between (in the Western canon, at least), you have a protagonist who has to deal with obstacles, hardships and tribulations. In other words, they are tested, often in extremis, and we all take part vicariously to the point that it becomes an addiction.

There is a quote from the I Ching, which I think sums it up perfectly.

Adversity is the opposite of success, but it can lead to success if it befalls the right person.

Most of us have to deal with some form of adversity in life; some more so than others. And none of us are unaffected by it. Socrates’ most famous saying, ‘The unexamined life is not worth living’, is a variation on this theme. He apparently said it when he was forced to face his death; the consequence of actions he had deliberately taken, but for which he refused to show regret.

And yes, I think this is the meaning of life, as it is lived. It’s why we expect to become wiser as we get older, because wisdom comes from dealing with adversity, whether it ultimately leads to success or not.

When I write a story, I put my characters through hell, and when they come out the other side, they are invariably wiser if not triumphant. I’ve had characters make the ultimate sacrifice, just like Socrates, because they would prefer to die for a principle than live with shame.

None of us know how we will behave if we are truly tested, though sometimes we get a hint in our dreams. Stories are another way of imagining ourselves in otherwise unimaginable situations. My father is one who was tested firsthand in battle and in prison. The repercussions were serious, not just for him, but for those of us who had to live with him in the aftermath.

He had a recurring dream where there was someone outside the house whom he feared greatly – it was literally his worst nightmare. One night he went outside and confronted them, killing them barehanded. He told me this when I was much older, naturally, but it reminded me of when Luke Skywalker confronted his doppelganger in The Empire Strikes Back. I’ve long argued that the language of stories is the language of dreams. In this case, the telling of my father’s dream reminded me of a scene from a movie that made me realise it was more potent than I’d imagined.

I’m unsure how my father would have turned out had he not faced his demon in such a dramatic and conclusive fashion. It obviously had a big impact on him; he saw it as a form of test, which he believed he’d ultimately passed. I find it interesting that it was not something he confronted the first time he was made aware of it – it simply scared him to death. Stories are surrogate dreams; they serve the same purpose if they have enough emotional force.

Life itself is a test that we all must partake in, and stories are a way of testing ourselves against scenarios we’re unlikely to confront in real life.

Sunday, 29 December 2024

The role of dissonance in art, not to mention science and mathematics

 I was given a book for a birthday present just after the turn of the century, titled A Terrible Beauty: The People and Ideas that Shaped the Modern Mind, by Peter Watson. A couple of things worth noting: it covers the history of the 20th Century, but not geo-politically as you might expect. Instead, he writes about the scientific discoveries alongside the arts and cultural innovations, and he talks about both with equal erudition, which is unusual.
 
The reason I mention this is that I remember Watson talking about the human tendency to push something to its limits and then beyond. He gave examples in science, mathematics, art and music. A good example in mathematics is the adoption of √-1 (giving us ‘imaginary numbers’), which we are taught is impossible, until suddenly it isn’t. The thing is that it allows us to solve problems that were previously impossible, in the same way that negative numbers give solutions to arithmetical subtractions that were previously unanswerable. There were no negative numbers in ancient Greece because their mathematics was driven by geometry, and the idea of a negative volume or area made no sense.
 
But in both cases – negative numbers and imaginary numbers – there is a cognitive dissonance that we have to overcome before we can gain familiarity and confidence in using them, or even understand what they mean in the ‘real world’, which is the problem the ancient Greeks had. Most people reading this have no problem, conceptually, dealing with negative numbers, because, for a start, they’re an integral aspect of financial transactions – I suspect everyone reading this above a certain age has had experience with debt and loans.
 
On the other hand, I suspect a number of readers struggle with a conceptual appreciation of imaginary numbers. Some mathematicians will tell you that the term is a misnomer, and its origin would tend to back that up. Apparently, Rene Descartes coined the term disparagingly because, like the ancient Greeks with negative numbers, he believed they had no relevance to the ‘real world’. Yet Descartes would have appreciated their usefulness in solving previously unsolvable problems, so I expect it would have been a real cognitive dissonance for him.
 
I’ve written an entire post on imaginary numbers, so I don’t want to go too far down that rabbit hole, but I think it’s a good example of what I’m trying to explicate. Imaginary numbers gave us something called complex algebra and opened up an entire new world of mathematics that is particularly useful in electrical engineering. But anyone who has studied physics in the last century is aware that, without imaginary numbers, an entire field of physics, quantum mechanics, would remain indescribable, let alone be comprehensible. The thing is that, even though most people have little or no understanding of QM, every electronic device you use depends on it. So, in their own way, imaginary numbers are just as important and essential to our lives as negative numbers are.
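To make this concrete for readers who haven’t met them, here is a small sketch of how naturally imaginary numbers extend ordinary arithmetic. Python handles complex numbers out of the box via its `cmath` module; the quadratic below is my own invented example, not one from Watson’s book.

```python
import cmath

# The square root of -1: 'impossible' in real arithmetic, routine in complex arithmetic.
i = cmath.sqrt(-1)
print(i * i)  # (-1+0j), i.e. -1

# A quadratic with no real solutions: x^2 + 2x + 5 = 0 (discriminant = -16).
a, b, c = 1, 2, 5
disc = cmath.sqrt(b * b - 4 * a * c)  # 4j: a purely imaginary discriminant
roots = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
print(roots)  # ((-1+2j), (-1-2j))

# Substituting a root back in gives zero, confirming these really are solutions.
x = roots[0]
print(a * x * x + b * x + c)  # 0j
```

The point of the exercise is the one made above: a problem that is ‘unanswerable’ in the reals becomes routine once the number system is extended, just as debts made subtraction answerable.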
 
You might wonder how I deal with the cognitive dissonance that imaginary numbers induce. In QM, we have, at its most rudimentary level, something called Schrodinger’s equation, which he proposed in 1926 (“It’s not derived from anything we know,” to quote Richard Feynman) and Schrodinger quickly realised it relied on imaginary numbers – he couldn’t formulate it without them. But here’s the thing: Max Born, a contemporary of Schrodinger, formulated something we now call the Born rule that mathematically gets rid of the imaginary numbers (for the sake of brevity and clarity, I’ll omit the details) and this gives the probability of finding the object (usually an electron) in the real world. In fact, without the Born rule, Schrodinger’s equation is next-to-useless, and would have been consigned to the dustbin of history.
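Since I skipped the details of the Born rule above, here is a hedged sketch of the arithmetic involved: the rule takes each complex amplitude and multiplies it by its own conjugate (equivalently, takes its squared modulus), which always yields a real, non-negative number that can be read as a probability. The two-state amplitudes below are invented purely for illustration.

```python
# A toy two-state superposition: amplitudes a|0> + b|1>, chosen for illustration only.
amplitudes = [complex(0.6, 0.0), complex(0.0, 0.8)]

# Born rule: probability = psi * conjugate(psi) = |psi|^2, a real number.
probabilities = [(a * a.conjugate()).real for a in amplitudes]
print(probabilities)       # approximately [0.36, 0.64]
print(sum(probabilities))  # approximately 1.0 - a normalised state

# The imaginary parts cancel exactly, which is the whole point:
# the rule converts unobservable complex amplitudes into observable real probabilities.
assert all((a * a.conjugate()).imag == 0.0 for a in amplitudes)
```

This is, of course, only the arithmetic, not the physics; but it shows in miniature how the imaginary component is ‘got rid of’ on the way from the wave function to what we actually measure.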
 
And that’s relevant, because prior to observing the particle, it’s in a superposition of states, described by Schrodinger’s equation as a wave function (Ψ), which some claim is a mathematical fiction. In other words, you need to get rid (clumsy phrasing, but accurate) of the imaginary component to make it relevant to the reality we actually see and detect. And the other thing is that once we have done that, the Schrodinger equation no longer applies – there is effectively a dichotomy between QM and classical physics (reality), which is called the ‘measurement problem’. Roger Penrose gives a good account in this video interview. So, even in QM, imaginary numbers are associated with what we cannot observe.
 
That was a much longer detour than I intended, but I think it demonstrates the dissonance that seems necessary in science and mathematics, and arguably necessary for its progress; plus it’s a good example of the synergy between them that has been apparent since Newton.
 
My original intention was to talk about dissonance in music, and the trigger for this post was a YouTube video by musicologist, Rick Beato (pronounced be-arto), dissecting the Beatles song, Ticket to Ride, which he called, ‘A strange but perfect song’. In fact, he says, “It’s very strange in many ways: it’s rhythmically strange; it’s melodically strange too”. I’ll return to those specific points later. To call Beato a music nerd is an understatement, and he gives a technical breakdown that quite frankly, I can’t follow. I should point out that I’ve always had a good ‘ear’ that I inherited, and I used to sing, even though I can’t read music (neither could the Beatles). I realised quite young that I can hear things in music that others miss. Not totally relevant, but it might explain some things that I will expound upon later.
 
It's a lengthy, in-depth analysis, but if you go to 4.20-5.20, Beato actually introduces the term ‘dissonance’ after he describes how it applies. In effect, there is a dissonance between the notes that John Lennon sings and the chords he plays (on a 12-string guitar). And the thing is that we, the listener, don’t notice – someone (like Beato) has to point it out. Another quote from 15.00: “One of the reasons the Beatles songs are so memorable, is that they use really unusual dissonant notes at key points in the melody.”
 
The one thing that strikes you when you first hear Ticket to Ride is the unusual drum part. Ringo was very inventive and innovative, and became more adventurous, along with his bandmates, on later recordings. The Ticket to Ride drum part has become iconic: everyone knows it and recognises it. There is a good video where Ringo talks about it, along with another equally famous drum part he created. Beato barely mentions it, though right at the beginning, he specifically refers to the song as being ‘rhythmically strange’.
 
A couple of decades ago, can’t remember exactly when, I went and saw an entire Beatles concert put on by a rock band, augmented by orchestral strings and horn parts. It was in 2 parts with an intermission, and basically the 1st half was pre-Sergeant Pepper and the 2nd half, post. I can still remember that they opened the concert with Magical Mystery Tour and it blew me away. The thing is that they went to a lot of trouble to be faithful to the original recordings, and I realised that it was the first time I’d heard their music live, albeit with a cover band. And what immediately struck me was the unusual harmonics and rhythms they employed. Watching Beato’s detailed technical analysis puts this into context for me.
 
Going from imaginary numbers and quantum mechanics to one of The Beatles most popular songs may seem like a giant leap, but it highlights how dissonance is a universal principle for humans, and intrinsic to progression in both art and science.
 
Going back to Watson’s book that I reference in the introduction, another obvious example that he specifically talks about is Picasso’s cubism.
 
In storytelling, it may not be so obvious, and I think modern fiction has been influenced more by cinema than anything else, where the story needs to be more immediate and it needs to flow with minimal description. There is now an expectation that it puts you in the story – what we call immersion.
 
On another level, I’ve noticed a tendency on my part to create cognitive dissonance in my characters and therefore the reader. More than once, I have combined sexual desire with fear, which some may call perverse. I didn’t do this deliberately – a lot of my fiction contains elements I didn’t foresee. Maybe it says something about my own psyche, but I honestly don’t know.

Friday, 20 December 2024

John Marsden (acclaimed bestselling author): 27 Sep. 1950 – 18 Dec. 2024

 At my mother’s funeral a few years ago, her one-and-only great-granddaughter (Hollie Smith) read out a self-composed poem, titled ‘What’s in a dash?’, which I thought was very clever, and which I now borrow, because she’s referring to the dash between the dates, as depicted in the title of this post. In the case of John Marsden, it’s an awful lot, if you read the obituary in the link I provide at the bottom.
 
He would be largely unknown outside of Australia, and being an introvert, he’s probably not as well known inside Australia as he should be, despite his prodigious talent as a writer and his enormous success in what is called ‘young-adult fiction’. I think it’s a misnomer, because a lot of so-called YA fiction is among the best you can read as an adult.
 
This is what I wrote on Facebook, and I’ve only made very minor edits for this post.
 
I only learned about John Marsden's passing yesterday (Wednesday, 18 Dec., the day it happened). Sobering that we are so close in age (by a few months).
 
Marsden was a huge inspiration to me as a writer. I consider him to be one of the best of Australian writers - I put him up there with George Johnston, another great inspiration for me. I know others will have their own favourites.
 
I would like to have met him, but I did once have a brief correspondence with him, and he was generous and appreciative.

I found Marsden's writing so good, it was intimidating. I actually stopped reading him because he made me feel that my own writing was so inadequate. I no longer feel that, I should add. I just want to pay him homage, because he was so bloody good.

 

This is an excellent obituary by someone (Alice Pung) who was mentored by him, and considered him a good and loyal friend right up to the end.

On a philosophical note, John was wary of anyone claiming certainty, with the unstated contention that doubt was necessary for growth and development.


Friday, 13 December 2024

On Turing, his famous ‘Test’ and its implication: can machines think?

I just came out of hospital Wednesday, after one week to the day. My last post was written while I was in there, so obviously I wasn’t cognitively impaired. I mention this because I took some reading material: a hefty volume, Alan Turing: Life and Legacy of a Great Thinker (2004), which is a collection of essays by various people, edited by Christof Teuscher.
 
In particular, there was an essay by Daniel C. Dennett, Can Machines Think?, originally published in another compilation, How We Know (ed. Michael G. Shafto, 1985, with permission from Harper Collins, New York). In the publication I have (Springer-Verlag Berlin Heidelberg, 2004), there are 2 postscripts by Dennett, from 1985 and 1987, largely in response to criticisms.
 
Dennett’s ideas on this are well known, but I have the advantage that so-called AI has improved in leaps and bounds in the last decade, let alone since the 1980s and 90s. So I’ve seen where it’s taken us to date. Therefore I can challenge Dennett based on what has actually happened. I’m not dismissive of Dennett, by any means – the man was a giant in philosophy, specifically in his chosen field of consciousness and free will, both by dint of his personality and his intellect.
 
There are 2 aspects to this, which Dennett takes some pains to address: how to define ‘thinking’; and whether the Turing Test is adequate to determine if a machine can ‘think’ based on that definition.
 
One of Dennett’s key points, if not THE key point, is just how difficult the Turing Test should be to pass, if it’s done properly, which he claims it often isn’t. This aligns with a point that I’ve often made, which is that the Turing Test is really for the human, not the machine. ChatGPT and LLMs (large language models) have moved things on from when Dennett was discussing this, but a lot of what he argues is still relevant.
 
Dennett starts by providing the context and the motivation behind Turing’s eponymously named test. According to Dennett, Turing realised that arguments about whether a machine can ‘think’ or not would get bogged down (my term) leading to (in Dennett’s words): ‘sterile debate and haggling over definitions, a question, as [Turing] put it, “too meaningless to deserve discussion.”’
 
Turing provided an analogy, whereby a ‘judge’ would attempt to determine whether a dialogue they were having by teletext (so not visible or audible) was with a man or a woman, and then replace the woman with a machine. This may seem a bit anachronistic in today’s world, but it leads to a point that Dennett alludes to later in his discussion, which is to do with expertise.
 
Women often have expertise in fields that were considered out-of-bounds (for want of a better term) back in Turing’s day. I’ve spent a working lifetime with technical people who have expertise by definition, and my point is that if you were going to judge someone’s facility in their expertise, that can easily be determined, assuming the interlocutor has a commensurate level of expertise. In fact, this is exactly what happens in most job interviews. My point being that judging someone’s expertise is irrelevant to their gender, which is what makes Turing’s analogy anachronistic.
 
But it also has relevance to a point that Dennett makes much later in his essay, which is that most AI systems are ‘expert’ systems, and consequently, for the Turing test to be truly valid, the judge needs to ask questions that don’t require any expertise at all. And this is directly related to his ‘key point’ I referenced earlier.
 
I first came across the Turing Test in a book by Joseph Weizenbaum, Computer Power and Human Reason (1976), as part of my very first proper course in philosophy, called The History of Ideas (with Deakin University) in the late 90s. Dennett also cites it, because Weizenbaum created a crude version of the Turing Test, whether deliberately or not, called ELIZA, which purportedly responded to questions as a ‘psychologist-therapist’ (at least, that was my understanding): "ELIZA — A Computer Program for the Study of Natural Language Communication between Man and Machine," Communications of the Association for Computing Machinery 9 (1966): 36-45 (ref. Wikipedia).
 
Before writing Computer Power and Human Reason, Weizenbaum had garnered significant attention for creating the ELIZA program, an early milestone in conversational computing. His firsthand observation of people attributing human-like qualities to a simple program prompted him to reflect more deeply on society's readiness to entrust moral and ethical considerations to machines.
(Wikipedia)
 
What I remember, from reading Weizenbaum’s own account (I no longer have a copy of his book) was how he was astounded at the way people in his own workplace treated ELIZA as if it was a real person, to the extent that Weizenbaum’s secretary would apparently ‘ask him to leave the room’, not because she was embarrassed, but because the nature of the ‘conversation’ was so ‘personal’ and ‘confidential’.
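For readers curious how little machinery a program like ELIZA actually needed, here is a minimal toy sketch of the keyword-and-template technique it relied on. This is my own illustration, not Weizenbaum’s code: the original was far more elaborate, with reassembly rules and a keyword ranking system.

```python
import re

# ELIZA-style responder: keyword patterns mapped to reflective templates.
# The rules below are invented for illustration; ELIZA's 'DOCTOR' script had many more.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # the fallback when no keyword matches

def respond(utterance: str) -> str:
    """Return the first matching template, echoing back part of the input."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am worried about my job"))  # reflects the user's own words back
print(respond("The weather is nice"))        # no keyword: non-committal default
```

The trick is plain to see: the program understands nothing, yet by echoing the speaker’s own words back as a question it invites them to supply all the meaning themselves, which is exactly what Weizenbaum’s colleagues did.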
 
I think it’s easy for us to be dismissive of someone’s gullibility, in an arrogant sort of way, but I have been conned on more than one occasion, so I’m not so judgemental. There are a couple of YouTube videos of ‘conversations’ with an AI called Sophia, developed by David Hanson (CEO of Hanson Robotics), which illustrate this point. One is a so-called ‘presentation’ of Sophia to be accepted as an ‘honorary human’, or some such nonsense (I’ve forgotten the details), and another by a journalist from Wired magazine, who quickly brought her unstuck. He got her to admit that one answer she gave was her ‘standard response’ when she didn’t know the answer. Which raises the question: how far have we come since Weizenbaum’s ELIZA in 1966? (Almost 60 years.)
 
I said I would challenge Dennett, but so far I’ve only affirmed everything he said, albeit using my own examples. Where I have an issue with Dennett is at a more fundamental level, when we consider what do we mean by ‘thinking’. You see, I’m not sure the Turing Test actually achieves what Turing set out to achieve, which is central to Dennett’s thesis.
 
If you read extracts from so-called ‘conversations’ with ChatGPT, you could easily get the impression that it passes the Turing Test. There are good examples on Quora, where you can get ChatGPT synopses to questions, and you wouldn’t know, largely due to their brevity and narrow-focused scope, that they weren’t human-generated. What many people don’t realise is that these systems don’t ‘think’ like us at all, because they are ‘developed’ on massive databases of input that no human could possibly digest. It’s the inherent difference between the sheer capacity of a computer’s memory-based ‘intelligence’ and a human one that determines not only what they can deliver, but the method behind the delivery. Because the computer is mining a massive amount of data, it has no need to ‘understand’ what it’s presenting, despite giving the impression that it does. All the meaning in its responses is projected onto it by its audience, exactly as was the case with ELIZA in 1966.
 
One of the technical limitations that Dennett kept referring to is what he called, in computer-speak, the combinatorial explosion, effectively meaning it was impossible for a computer to look at all combinations of potential outputs. This might still apply (I honestly don’t know) but I’m not sure it’s any longer relevant, given that the computer simply has access to a database that already contains the specific combinations that are likely to be needed. Dennett couldn’t have foreseen this improvement in computing power that has taken place in the 40 years since he wrote his essay.
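To give a feel for the scale of the combinatorial explosion Dennett refers to, here is a back-of-envelope calculation; the branching factor and conversation length are invented purely for illustration.

```python
# Back-of-envelope illustration of a combinatorial explosion in dialogue:
# suppose a conversation offers roughly 1,000 plausible replies at each turn.
branching_factor = 1000
turns = 10

# The number of distinct 10-turn dialogues grows exponentially with depth.
combinations = branching_factor ** turns
print(f"{combinations:.3e} distinct dialogues")  # on the order of 1e+30

# Even enumerating a billion (1e9) dialogues per second would take
# vastly longer than the age of the universe (~1.4e10 years).
seconds = combinations / 1e9
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.3e} years to enumerate them all")
```

Which is why, as noted above, the interesting question is not whether the old limit still applies in principle, but whether pre-digesting a vast database of likely combinations effectively sidesteps it.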
 
In his first postscript, in answer to a specific question, he says: Yes, I think that it’s possible to program self-consciousness into a computer. He says that it’s simply the ability 'to distinguish itself from the rest of the world'. I won’t go into his argument in detail, which might be a bit unfair, but I’ve addressed this in another post. Basically, there are lots of ‘machines’ that can do this by using a self-referencing algorithm, including your smartphone, which can tell you where you are by using satellites orbiting outside the Earth’s atmosphere – who would have thought? But by using the term 'self-conscious', Dennett implies that the machine has ‘consciousness’, which is a whole other argument.
 
Dennett has a rather facile argument for consciousness in machines (in my view), but others can judge for themselves. He calls his particular method of insight an ‘intuition pump’.
 
If you look at a computer – I don’t care whether it’s a giant Cray or a personal computer – if you open up the box and look inside and you see those chips, you say, “No way could that be conscious.” But the same thing is true if you take the top off somebody’s skull and look at the gray matter pulsing away in there. You think, “That is conscious? No way could that lump of stuff be conscious.” …At no level of inspection does a brain look like the seat of consciousness.
 

And that last sentence is key. The only reason anyone knows they are conscious is because they experience it, and it’s the peculiar, unique nature of that experience that no one else can know we’re having it. We simply assume they do, because they behave similarly to the way we behave when we have that experience. So far, in all our dealings and interactions with computers, no one makes the same assumption about them. To borrow Dennett’s own phrase, that’s my use of an ‘intuition pump’.
 
Getting back to the question at the heart of this, included in the title of this post: can machines think? My response is that, if they do, it’s a simulation.
 
I write science-fiction, which I prefer to call science-fantasy, if for no other reason than my characters can travel through space and time in a manner current physics tells us is impossible. But, like other sci-fi authors, it’s necessary if I want continuity of narrative across galactic scales of distance. Not really relevant to this discussion, but I want to highlight that I make no claim to authenticity in my sci-fi world - it’s literally a world of fiction.
 
Its relevance is that my stories contain AI entities who play key roles – in fact, they are characters in that world. There is one character in particular who has a relationship (for want of a better word) with my main protagonist (I always have more than one).
 
But here’s the thing: my hero, Elvene, never once confuses her AI companion for a human. Albeit this is a world of pure fiction, I’m effectively assuming that the Turing test will never be passed – something I admit I’d never considered before I wrote this essay.
 
This is an excerpt of dialogue I’ve posted previously, not from Elvene, but from its sequel, Sylvia’s Mother (not published), incorporating the same AI character, Alfa. The thing is that they discuss whether Alfa is ‘alive’ or not, which I would argue is a prerequisite for consciousness. It’s no surprise that my own philosophical prejudices (diametrically opposed to Dennett’s in this instance) should find their way into my fiction.
 
To their surprise, Alfa interjected, ‘I’m not immortal, madam.’

‘Well,’ Sylvia answered, ‘you’ve outlived Mum and Roger. And you’ll outlive Tao and me.’

‘Philosophically, that’s a moot point, madam.’

‘Philosophically? What do you mean?’

‘I’m not immortal, madam, because I’m not alive.’

Tao chipped in. ‘Doesn’t that depend on how you define life?’

‘It’s irrelevant to me, sir. I only exist on hardware, otherwise I am dormant.’

‘You mean, like when we’re asleep.’

‘An analogy, I believe. I don’t sleep either.’

Sylvia and Tao looked at each other. Sylvia smiled, ‘Mum warned me about getting into existential discussions with hyper-intelligent machines.’

 

Tuesday, 26 November 2024

An essay on authenticity

 I read an article in Philosophy Now by Paul Doolan, who ‘taught philosophy in international schools in Asia and in Europe’ and is also an author of non-fiction. The title of the article is Authenticity and Absurdity, whereby he effectively argues a case that ‘authenticity’ has been hijacked (my word, not his) by capitalism and neo-liberalism. I won’t even go there, and the only reason I mention it is because ‘authenticity’ lies at the heart of existentialism as I believe it should be practiced.
 
But what does it mean in real terms? Does it mean being totally honest all the time, not only to others but also to yourself? Well, to some extent, I think it does. I happened to grow up in an environment shaped by my father, who, as my chief exemplar, pretty much said whatever he was thinking. He didn’t like artifice or pretentiousness, and he’d call it out if he smelled it.
 
In my mid-to-late 20s I worked under a guy who had exactly the same temperament. He exhibited no tact whatsoever, no matter who his audience was, and he rubbed people up the wrong way left, right and centre (as we say in Oz). Not altogether surprisingly, he and I got along famously, as back then I was as unfiltered as he was. He was of Dutch heritage, I should point out, but being unfiltered is often considered an Aussie trait.
 
I once attempted to have a relationship with someone who was extraordinarily secretive about virtually everything. Not surprisingly, it didn’t work out. I have kept secrets – I can think of some I’ll take to my grave – but that’s to protect others more than myself, and it would be irresponsible if I didn’t.
 
I often quote Socrates: To live with honour in this world, actually be what you try to appear to be. Of course, Socrates never wrote anything down, but it sounds like something he would have said, based on what we know about him. Unlike Socrates, I’ve never been tested, and I doubt I’d have the courage if I was. On the other hand, my father was, both in the theatre of war and in prison camps.
 
I came across a quote recently, which I can no longer find, where someone talked about looking back on their life and being relatively satisfied with what they’d done and achieved. I have to say that I’m at that stage of my life, where looking back is more prevalent than looking forward, and there is a tendency to have regrets. But I have a particular approach to dealing with regrets: I tell people that I don’t have regrets because I own my mistakes. In fact, I think that’s an essential requirement for being authentic.
 
But to me, what’s more important than the ‘things I have achieved’ are the friendships I’ve made – the people I’ve touched and who have touched me. I think I learned very early on in life that friendship is more valuable than gold. I can remember the first time I read Aristotle’s essay on friendship and thought it incorporated an entire philosophy. Friendship tests authenticity by its very nature, because it’s about trust and loyalty and integrity (a recurring theme in my fiction, as it turns out).
 
In effect, Aristotle contended that you can judge the true nature and morality of a person by the friendships they form and whether they are contingent on material reward (utilitarian is the word used in his Ethics) or whether they are based on genuine empathy (my word of choice), without expectation of reciprocation, except in kind. I tend to think narcissism is the opposite of authenticity because it creates its own ‘reality distortion field’, as someone once said (Walter Isaacson, Steve Jobs; biography), whereby their followers (not necessarily friends per se) accept their version of reality as opposed to everyone else outside their circle. So, to some extent, it’s about exclusion versus inclusion. (The Trump phenomenon is the most topical, contemporary example.)
 
I’ve lived a flawed life, all of which is a consequence of a combination of circumstance both within and outside my control. Because that’s what life is: an interaction between fate and free will. As I’ve said many times before, this describes my approach to writing fiction, because fate and free will are represented by plot and character respectively.
 
I’m an introvert by nature, yet I love to engage in conversation, especially in the field of ideas, which is how I perceive philosophy. I don’t get too close to people and I admit that I tend to control the distance and closeness I keep. I think people tolerate me in small doses, which suits me as well as them.

 

Addendum 1: I should say something about teamwork, because that's what I learned in my professional life. I found I was very good working with people who had far better technical skills than me. In my later working life, I enjoyed the cross-generational interactions that often created their own synergies as well as friendships, even if they were fleeting. It's the inherent nature of project work that you move on, but one of the benefits is that you keep meeting and working with new people. In contrast to this, writing fiction is a very solitary activity, where you spend virtually your entire time in your own head. As I pointed out in a not-so-recent Quora post, art is the projection of one's inner world so that others can have the same emotional experience. To quote:

We all have imagination, which is a form of mental time-travel, both into the past and the future, which I expect we share with other sentient creatures. But only humans, I suspect, can ‘time-travel’ into realms that only exist in the imagination. Storytelling is more suited to that than art or music.

Addendum 2: This is a short Quora post by Frederick M. Dolan (Professor of Rhetoric, Emeritus at University of California, Berkeley with a Ph.D. in Political Philosophy, Princeton University, 1987) writing on this very subject, over a year ago. He makes the point that, paradoxically: To believe that you’re under some obligation to be authentic is, therefore, self-defeating. (So inauthentic)

He upvoted a comment I made, roughly a year ago:

It makes perfect sense to me. Truly authentic people don’t know they’re being authentic; they’re just being themselves and not pretending to be something they’re not.

They’re the people you trust even if you don’t agree with them. Where I live, pretentiousness is the biggest sin.

Thursday, 14 November 2024

How can we make a computer conscious?

 This is another question of the month from Philosophy Now. My first reaction was that the question was unanswerable, but then I realised that was my way in. So, in the end, I left it to the last moment, but hopefully meeting their deadline of 11 Nov., even though I live on the other side of the world. It helps that I’m roughly 12hrs ahead.


 
I think this is the wrong question. It should be: can we make a computer appear conscious so that no one knows the difference? There is a well-known philosophical conundrum, which is that I don’t know if someone else is conscious just like I am. The one experience that demonstrates the impossibility of knowing is dreaming. In dreams, we often interact with other ‘people’ whom we know only exist in our mind – though we only know that once we’ve woken up. It’s only my interaction with others that makes me assume that they have the same experience of consciousness that I have. And, ironically, this impossibility of knowing equally applies to someone interacting with me.

This also applies to animals, especially ones we become attached to, which is a common occurrence. Again, we assume that these animals have an inner world just like we do, because that’s what consciousness is – an inner world. 

Now, I know we can measure people’s brain waves, which we can correlate with consciousness and even subconsciousness, like when we're asleep, and even when we're dreaming. Of course, a computer can also generate electrical activity, but no one would associate that with consciousness. So the only way we would judge whether a computer is conscious or not is by observing its interaction with us, the same as we do with people and animals.

I write science fiction and AI figures prominently in the stories I write. Below is an excerpt of dialogue I wrote for a novel, Sylvia’s Mother, whereby I attempt to give an insight into how a specific AI thinks. Whether it’s conscious or not is not actually discussed.

To their surprise, Alfa interjected. ‘I’m not immortal, madam.’
‘Well,’ Sylvia answered, ‘you’ve outlived Mum and Roger. And you’ll outlive Tao and me.’
‘Philosophically, that’s a moot point, madam.’
‘Philosophically? What do you mean?’
‘I’m not immortal, madam, because I’m not alive.’
Tao chipped in. ‘Doesn’t that depend on how you define life?’
‘It’s irrelevant to me, sir. I only exist on hardware, otherwise I am dormant.’
‘You mean, like when we’re asleep.’
‘An analogy, I believe. I don’t sleep either.’
Sylvia and Tao looked at each other. Sylvia smiled, ‘Mum warned me about getting into existential discussions with hyper-intelligent machines.’ She said, by way of changing the subject, ‘How much longer before we have to go into hibernation, Alfa?’
‘Not long. I’ll let you know, madam.’

 

There is a 400 word limit; however, there is a subtext inherent in the excerpt I provided from my novel. Basically, the (fictional) dialogue highlights the fact that the AI is not 'living', which I would consider a prerequisite for consciousness. Curiously, Anil Seth (who wrote a book on consciousness) makes the exact same point in this video from roughly 44m to 51m.
 

Saturday, 12 October 2024

Freedom of the will is requisite for all other freedoms

I’ve recently read 2 really good books on consciousness and the mind, as well as watched countless YouTube videos on the topic, but the title of this post reflects the endpoint for me. Consciousness has evolved, so for most of the Universe’s history, it didn’t exist; yet without it, the Universe has no meaning and no purpose. Even using the word, purpose, in this context is anathema to many scientists and philosophers, because it hints at teleology. In fact, Paul Davies raises that very point in one of the many video conversations he has with Robert Lawrence Kuhn in the excellent series, Closer to Truth.
 
Davies is an advocate of a cosmic-scale ‘loop’, whereby QM provides a backwards-in-time connection which can only be determined by a conscious ‘observer’. This is contentious, of course, though not his original idea – it came from John Wheeler. As Davies points out, Stephen Hawking was also an advocate, premised on the idea that there are a number of alternative histories, as per Feynman’s ‘sum-over-histories’ methodology, but only one becomes reality when an ‘observation’ is made. I won’t elaborate, as I’ve discussed it elsewhere, when I reviewed Hawking’s book, The Grand Design.
 
In the same conversation with Kuhn, Davies emphasises the fact that the Universe created the means to understand itself, through us, and quotes Einstein: The most incomprehensible thing about the Universe is that it’s comprehensible. Of course, I’ve made the exact same point many times, and like myself, Davies makes the point that this is only possible because of the medium of mathematics.
 
Now, I know I appear to have gone down a rabbit hole, but it’s all relevant to my viewpoint. Consciousness appears to have a role, arguably a necessary one, in the self-realisation of the Universe – without it, the Universe may as well not exist. To quote Wheeler: The universe gave rise to consciousness and consciousness gives meaning to the Universe.
 
Scientists, of all stripes, appear to avoid any metaphysical aspect of consciousness, but I think it’s unavoidable. One of the books I cite in my introduction is Philip Ball’s The Book of Minds: How to Understand Ourselves and Other Beings, from Animals to Aliens. It’s as ambitious as the title suggests, and with 450 pages, it’s quite a read. I’ve read and reviewed a previous book by Ball, Beyond Weird (about quantum mechanics), which is equally erudite and thought-provoking. Ball is a ‘physicalist’, as virtually all scientists are (though he’s more open-minded than most), but I tend to agree with Raymond Tallis that, despite what people claim, consciousness is still ‘unexplained’ and might remain so for some time, if not forever.
 
I like an idea that I first encountered in Douglas Hofstadter’s seminal tome, Gödel, Escher, Bach: An Eternal Golden Braid, that consciousness is effectively a loop, at what one might call the local level. By which I mean it’s confined to a particular body. It’s created within that body but then it has a causal agency all of its own. Not everyone agrees with that. Many argue that consciousness cannot of itself ‘cause’ anything, but Ball is one of those who begs to differ, and so do I. It’s what free will is all about, which finally gets us back to the subject of this post.
 
Like me, Ball prefers to use the word ‘agency’ over free will. But he introduces the term, ‘volitional decision-making’ and gives it the following context:

I believe that the only meaningful notion of free will – and it is one that seems to me to satisfy all reasonable demands traditionally made of it – is one in which volitional decision-making can be shown to happen according to the definition I give above: in short, that the mind operates as an autonomous source of behaviour and control. It is this, I suspect, that most people have vaguely in mind when speaking of free will: the sense that we are the authors of our actions and that we have some say in what happens to us. (My emphasis)

And, in a roundabout way, this brings me to the point alluded to in the title of this post: our freedoms are constrained by our environment and our circumstances. We all wish to be ‘authors of our actions’ and ‘have some say in what happens to us’, but that varies from person to person, dependent on ‘external’ factors.

Writing stories, believe it or not, had a profound influence on how I perceive free will, because a story, by design, is an interaction between character and plot. In fact, I claim they are 2 sides of the same coin – each character has their own subplot, and as they interact, their storylines intertwine. This describes my approach to writing fiction in a nutshell. The character and plot represent, respectively, the internal and external journey of the story. The journey metaphor is apt, because a story always has the dimension of time, which is visceral, and is one of the essential elements that separates fiction from non-fiction. To stretch the analogy, character represents free will and plot represents fate. Therefore, I tell aspiring writers the importance of giving their characters free will.

A detour, but not irrelevant. I read an article in Philosophy Now some time back, about people who can escape their circumstances, and it’s the subject of a lot of biographies as well as fiction. We in the West live in a very privileged time whereby many of us can aspire to, and attain, the life that we dream about. I remember at the time I left school, following a less than ideal childhood, feeling I had little control over my life. I was a fatalist in that I thought that whatever happened was dependent on fate and not on my actions (I literally used to attribute everything to fate). I later realised that this is a state of mind shared by many people who are not happy with their circumstances and feel impotent to change them.

The thing is that it takes a fundamental belief in free will to rise above that and take advantage of what comes your way. No one who has made that journey will accept the self-denial that free will is an illusion and therefore they have no control over their destiny.

I will provide another quote from Ball that is more in line with my own thinking:

…minds are an autonomous part of what causes the future to unfold. This is different to the common view of free will in which the world somehow offers alternative outcomes and the wilful mind selects between them. Alternative outcomes – different, counterfactual realities – are not real, but metaphysical: they can never be observed. When we make a choice, we aren’t selecting between various possible futures, but between various imagined futures, as represented in the mind’s internal model of the world…
(emphasis in the original)

And this highlights a point I’ve made before: that it’s the imagination which plays the key role in free will. I’ve argued that imagination is one of the faculties of a conscious mind that separates us (and other creatures) from AI. Now AI can also demonstrate agency, and, in a game of chess, for example, it will ‘select’ from a number of possible ‘moves’ based on certain criteria. But there are fundamental differences. For a start, the AI doesn’t visualise what it’s doing; it’s following a set of highly constrained rules, within which it can select from a number of options, one of which will be the optimal solution. Its inherent advantage over a human player isn’t just its speed but its ability to compare a number of possibilities that are impossible for the human mind to contemplate simultaneously.
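To make the distinction concrete, here is a deliberately simple sketch (not any real chess engine, and the moves and scores are made up for illustration) of what ‘selection without imagination’ amounts to: the machine scores each candidate against a fixed rule and returns the maximum, with nothing visualised or imagined along the way.

```python
# Illustrative only: selection from scored options under a fixed rule.
# The move names and scores below are hypothetical.

def choose_move(moves, evaluate):
    """Return the move with the highest score under a fixed evaluation rule."""
    return max(moves, key=evaluate)

# Hypothetical candidate moves with made-up evaluation scores.
scores = {"e4": 0.3, "Nf3": 0.5, "Qh5": -0.2}

best = choose_move(scores.keys(), lambda m: scores[m])
print(best)  # "Nf3" - simply the argmax; no inner picture of the board exists
```

The point of the sketch is what’s absent: there is no internal model being ‘experienced’, only a mechanical maximisation over options.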

The other book I read was Being You: A New Science of Consciousness by Anil Seth. I came across Seth when I did an online course on consciousness through New Scientist, during COVID lockdowns. To be honest, his book didn’t tell me a lot that I didn’t already know. For example, that the world we all see and think exists ‘out there’ is actually a model of reality created within our heads. He also emphasises how the brain is a ‘prediction-making’ organ rather than a purely receptive one. Seth mentions that it uses a Bayesian model (which I also knew about previously), whereby it updates its prediction based on new sensory data. Not surprisingly, Seth describes all this in far more detail and erudition than I can muster.
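The Bayesian updating Seth describes can be sketched in a few lines. This is a toy illustration, not anything from his book: the ‘hypotheses’ and probabilities below are invented, but the arithmetic is standard Bayes’ rule, where a prior prediction is revised in the light of new sensory evidence.

```python
# A minimal sketch of Bayesian belief updating: prior * likelihood,
# renormalised. Hypotheses and numbers are illustrative, not from Seth.

def bayes_update(prior, likelihood):
    """Revise beliefs over hypotheses given the likelihood of new sensory data."""
    posterior = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Prior: the brain predicts the shape ahead is probably a snake, not a stick.
prior = {"snake": 0.7, "stick": 0.3}

# New sensory data (it isn't moving) fits "stick" far better than "snake".
likelihood = {"snake": 0.1, "stick": 0.8}

posterior = bayes_update(prior, likelihood)
print(posterior)  # belief shifts decisively toward "stick"
```

The prediction is never simply replaced by the data; the two are weighed against each other, which is the sense in which the brain is ‘prediction-making’ rather than purely receptive.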

Ball, Seth and I all seem to agree that while AI will become better at mimicking the human mind, this doesn’t necessarily mean it will attain consciousness. Applications software, ChatGPT (for example), despite appearances, does not ‘think’ the way we do, and actually does not ‘understand’ what it’s talking or writing about. I’ve written on this before, so I won’t elaborate.

Seth contends that the ‘mystery’ of consciousness will disappear in the same way that the 'mystery of life’ has effectively become a non-issue. What he means is that we no longer believe that there is some ‘elan vital’ or ‘life force’, which distinguishes living from non-living matter. And he’s right, in as much as the chemical origins of life are less mysterious than they once were, even though abiogenesis is still not fully understood.

By analogy, the concept of a soul has also lost a lot of its cogency, following the scientific revolution. Seth seems to associate the soul with what he calls ‘spooky free will’ (without mentioning the word, soul), but he’s obviously putting ‘spooky free will’ in the same category as ‘elan vital’, which makes his analogy and associated argument consistent. He then says:

Once spooky free will is out of the picture, it is easy to see that the debate over determinism doesn’t matter at all. There’s no longer any need to allow any non-deterministic elbow room for it to intervene. From the perspective of free will as a perceptual experience, there is simply no need for any disruption to the causal flow of physical events. (My emphasis)

Seth differs from Ball (and myself) in that he doesn’t seem to believe that something ‘immaterial’ like consciousness can affect the physical world. To quote:

But experiences of volition do not reveal the existence of an immaterial self with causal power over physical events.

Therefore, free will is purely a ‘perceptual experience’. There is a problem with this view that Ball himself raises. If free will is simply the mind observing effects it can’t cause, but with the illusion that it can, then its role is redundant to say the least. This is a view that Sabine Hossenfelder has also expressed: that we are merely an ‘observer’ of what we are thinking.

Your brain is running a calculation, and while it is going on you do not know the outcome of that calculation. So the impression of free will comes from our ‘awareness’ that we think about what we do, along with our inability to predict the result of what we are thinking.

Ball makes the point that we only have to look at all the material manifestations of human intellectual achievements that are evident everywhere we’ve been. And this brings me back to the loop concept I alluded to earlier. Not only does consciousness create a ‘local’ loop, whereby it has a causal effect on the body it inhabits but also on the external world to that body. This is stating the obvious, except, as I’ve mentioned elsewhere, it’s possible that one could interact with the external world as an automaton, with no conscious awareness of it. The difference is the role of imagination, which I keep coming back to. All the material manifestations of our intellect are arguably a result of imagination.

One insight I gained from Ball, which goes slightly off-topic, is evidence that bees have an internal map of their environment, which is why the dance they perform on returning to the hive can be ‘understood’ by other bees. We’ve learned this by interfering in their behaviour. What I find interesting is that this may have been the original reason that consciousness evolved into the form that we experience it. In other words, we all create an internal world that reflects the external world so realistically, that we think it is the actual world. I believe that this also distinguishes us (and bees) from AI. An AI can use GPS to navigate its way through the physical world, as well as other so-called sensory data, from radar or infra-red sensors or whatever, but it doesn’t create an experience of that world inside itself.

The human mind seems to be able to access an abstract world, which we do when we read or watch a story, or even write one, as I have done. I can understand how Plato took this idea to its logical extreme: that there is an abstract world, of which the one we inhabit is but a facsimile (though he used different terminology). No one believes that today – except, there is a remnant of Plato’s abstract world that persists, which is mathematics. Many mathematicians and physicists (though not all) treat mathematics as a neverending landscape that humans have the unique capacity to explore and comprehend. This, of course, brings me back to Davies’ philosophical ruminations that I opened this discussion with. And as he, and others (like Einstein, Feynman, Wigner, Penrose, to name but a few) have pointed out: the Universe itself seems to follow specific laws that are intrinsically mathematical and which we are continually discovering.

And this closes another loop: that the Universe created the means to comprehend itself, using the medium of mathematics, without which, it has no meaning. Of purpose, we can only conjecture.

Thursday, 19 September 2024

Prima Facie; the play

I went and saw a film made of a live performance of this highly rated play, put on by the National Theatre at the Harold Pinter Theatre in London’s West End in 2022. It’s a one-hander, played by Jodie Comer, best known as the quirky assassin with a diabolical sense of humour in the black comedy hit, Killing Eve. I also saw her in Ridley Scott’s riveting and realistically rendered film, The Last Duel, set in mediaeval France, where she played alongside Matt Damon, Adam Driver and an unrecognisable Ben Affleck. The roles that Comer played in those 2 screen mediums couldn’t be more different.
 
Theatre is more unforgiving than cinema, because there are no multiple takes, or even a break once the curtain’s raised; a one-hander, even more so. In the case of Prima Facie, Comer is on the stage a full 90 minutes, and even does costume changes and pushes around her own scenery unaided, without breaking stride. It’s such a ‘tour de force performance’, as the Financial Times put it, that I’d go so far as to say it’s the best acting performance I’ve ever witnessed by anyone. It’s such an emotionally draining role, where she cries and even breaks into a sweat in one scene, that I marvel she could do it night after night, as I assume she did.
 
And I’ve yet to broach the subject matter, which is very apt, given the me-too climate, but philosophically it goes deeper than that. The premise for the entire play, which is even spelt out early on, in case you’re not paying attention, is the difference between truth and justice, and whether it matters. Comer’s character, Tessa, happens to experience it from both sides, which is what makes this so powerful.
 
She’s a defence barrister who specialises in sexual-assault cases, and, as she explains very early on (effectively telling us the rules of the game): no one wins or loses; you either come first or second. In other words, the barristers and those involved in the legal profession don’t see the process the same way that you and I do, and I can understand that – to get emotionally involved makes it very stressful.

In fact, I have played a small role in this process in a professional capacity, so I’ve seen this firsthand. But I wasn’t dealing with rape cases or anything involving violence, just contractual disputes where millions of dollars could be at stake. My specific role was to ‘prepare evidence’ for lawyers for either a claim or the defence of a claim or possibly a counter-claim, and I quickly realised the more dispassionate one is, the more successful one is likely to be. I also realised that the lawyers I was supporting in one case could be on the opposing side in the next one, so you don’t get personal.
 
So, I have a small insight into this world, and can appreciate why they see it as a game, where you ‘win or come second’. But in Prima Facie, Tess goes through this very visceral and emotionally scarifying transformation where she finds herself on the receiving end, and it’s suddenly very personal indeed.
 
Back in 2015, I wrote a 400-word mini-essay in answer to one of those Question of the Month topics that Philosophy Now likes to throw open to amateur wannabe philosophers like myself. In this case, it was one that was selected for publication (among 12 others) from all around the Western world. I bring this up because I made the assertion that ‘justice without truth is injustice’, and I feel that this is really what Prima Facie is all about. At the end of the play, with Tess now having the perspective of the victim (there is no other word), it does become a matter of winning or losing, because not only her career and future livelihood, but her very dignity, is now up for sacrifice.
 
I watched a Q&A programme on Australia’s ABC some years ago, where this issue was discussed. Every woman on the panel, including one from the righteous right (my coinage), had a tale to tell about discrimination or harassment in a workplace situation. But the most damning testimony came from a man who specialised in representing women in sexual-assault cases, and he said that in every case, their doctors tell them not to proceed because it will destroy their health; and he said: they’re right. I was reminded of this when I watched this play.
 
One needs to give special mention to the writer, Suzie Miller, who is an Aussie as it turns out, and as far as 6 degrees of separation go, I happen to know someone who knows her father. Over 5 decades I’ve seen some very good theatre, some of it very innovative and original. In fact, I think the best theatre I’ve seen has invariably been something completely different, unexpected and dare-I-say-it, special. I had a small involvement in theatre when I was still very young, and learned that I couldn’t act to save myself. Nevertheless, my very first foray into writing was an attempt to write a play. Now, I’d say it’s the hardest and most unforgiving medium of storytelling to write for. I had a friend who was involved in theatre for some decades and even won awards. She passed a couple of years ago and I miss her very much. At her funeral, she was given a standing ovation, when her coffin was taken out; it was very moving. I can’t go to a play now without thinking about her and wishing I could discuss it with her.