21 January 2026
John Searle (31 July, 1932 – 17 September, 2025)
My grand-niece, giving a eulogy at my mother’s funeral (a few years back), read out a rather clever poem she’d written, called ‘What’s in a dash?’ In the case of John Searle, the dash includes an academic career as a philosopher who created a thought experiment that found its way out of academia into popular discourse. It also includes ignominy: he was stripped of his title as Professor Emeritus of the Philosophy of Mind and Language at the University of California, Berkeley, following accusations of sexual harassment in June 2019 (refer Wikipedia for details).
Just on that, we live in a time of cancel culture, but also of changing social norms, which I think are largely for the better. Personally, I don’t necessarily condemn someone for sleeping with a student, depending on the circumstances, though I know many find it shocking. But if they’re both legally adults and it’s consensual, I wouldn’t rush to judgement. Erwin Schrödinger, well known for his libertine views and habits, got at least one student pregnant when he was living in exile in Ireland during the war. I only know this because I read about her grandson, who lives and works as a physicist in Australia. Apparently, he only learned of his esteemed ancestry relatively late in his life. As I said, social norms have changed.
And workplace relationships are common; I’ve had one myself, though the workplace was a kitchen and not an office. Having said all that, I think being in a position of authority and coercing someone who has rejected your sexual advances is a sackable offence, irrespective of the environment. And according to the Wikipedia article, that was the case with Searle. Nevertheless, the NYT obituary added the following:
After Professor Searle’s death, Jennifer Hudin, the former director of the Searle Center, stated publicly that she had faced related accusations, but that both she and Professor Searle were innocent of all charges.
It is worth reading her email to Colin McGinn, where she disputes the outcome, claiming that Searle was actually exonerated by the investigation but that the finding was subsequently overturned. Having served on a jury (for a sex-related charge, as it turns out), I know that you literally have to work out who’s lying and who’s telling the truth; in this case, neither I nor you can do that.
Although he died over 3 months ago, I only learned about it when I came across a one-page obituary in the latest issue of Philosophy Now (Issue 171, Dec 2025 / Jan 2026).
I won’t relate a history of his career, because others do that more comprehensively than I can, in links I’ve already provided. I read his book, Mind: A Brief Introduction, many years ago, probably when it was published (2004). I can still remember coming across it unexpectedly in a book shop (one I hadn’t visited before and haven’t since) while I was getting work done on my car. I found it a very stimulating read. Of course, I’d heard about his famous ‘Chinese room’ thought experiment, and to quote The New York Times again:
According to the Stanford Encyclopedia of Philosophy, an internet reference source, the Searle thought experiment “has probably been the most widely discussed philosophical argument in cognitive science to appear since the Turing Test,” the mathematician and computer scientist Alan Turing’s 1950 procedure for determining machine intelligence.
While looking for obituaries online, I came across an interview he did for Philosophy Now in the Winter 1999/2000 Issue 25, so effectively a millennium issue. It gives a good overview of his philosophy, including his ideas on language, where he developed the theory of ‘speech acts’, plus his ideas on The Construction of Social Reality (the title of a book I haven’t read). He effectively argues that these two fields, combined with his ideas on mind (so three fields in all), are all related.
It was in Searle’s book, Mind, that I first came across the term ‘intentionality’, which has a specific meaning in a philosophical context: it refers to the conscious mind’s ability to represent, internally, something external. That’s my clumsy way of explaining it, but it directly relates to my personal philosophy that we all have an internal world and an external world, which are interdependent and affect everything we do.
I saw an extended interview, not-so-recently, of Raymond Tallis by Robert Lawrence Kuhn on Closer to Truth, where he had a different take on it, which some might consider radical, yet is actually a good working definition: ‘Nothing is made explicit except by a creature who is conscious of it. And aware of it.’
In other words, there is this relationship between consciousness and reality, whereby something has no specificity (for want of a better term) until a conscious entity perceives it. I’ve made a similar point when I’ve argued that, when it comes to the question of why there is something rather than nothing, there might as well be nothing without consciousness. The Universe seems to have the inbuilt goal or destiny to be self-realisable. Paul Davies has made a similar point.
This is arguably related to Searle’s ideas on intentionality, because I think it’s what philosophical intentionality is all about – the mind’s ability to conjure up its own internal reality, which may or may not relate to the external reality we all inhabit. In fact, I’ve argued that evolution by natural selection is directly dependent on our ability to do this, simply because the external reality can kill us, in infinitely diverse ways.
Regarding Searle’s preoccupation with intentionality, I would like to quote from another post, where I reference Searle’s book.
It’s not for nothing that Searle claims ‘the problem of intentionality is as great as the problem of consciousness’ – I would contend they are manifestations of the same underlying phenomenon – as though one is passive and the other active. Searle wrote his book, Mind, in part, to offer explanations for these phenomena (although he added the caveat that he had only scratched the surface).
I argued in the same post that intentionality is really imagination, which allows us to mentally time-travel, without which we wouldn’t be able to reconstruct the past or anticipate a future, both of which are essential for day-to-day interactions, not to mention survival.
Searle would argue that intentionality is something that separates us from AI, and I would argue the same for imagination, which allows me to segue into his Chinese room thought experiment.
Many would argue that it’s past its use-by-date, and I even came across someone recently, calling it the ‘Chinese room fallacy’. On the other hand, with the rise of LLMs like ChatGPT, I’d say it’s prescient. Basically, Searle believed very strongly – some might say to the point of arrogance – that ‘the brain is not a computer and the mind is not software’, meaning it doesn’t run on algorithms, and I would agree. It doesn’t help that we use the word ‘language’ when talking about both computers and humans.
More specifically, the whole point of his Chinese room argument is that a person could receive questions in Chinese and respond in Chinese (basically, inputs and outputs) without ever understanding the Chinese language, simply by blindly following a set of rules (algorithms) and manipulating symbols accordingly. Searle argued that this is basically what all computers do. The point is that it would give the impression that the person in the room understood Chinese, just as some people believe that a computer understands something the same way a human does. And this is what we’ve found with ChatGPT.
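To make the point concrete, here’s a deliberately trivial sketch in Python (my own illustration, not Searle’s; his imagined rule-book is vastly more elaborate, and an LLM more elaborate again): the ‘understanding’ amounts to nothing more than looking up symbols in a table.

```python
# A toy 'Chinese room': the rule-book is just a lookup table that maps
# input symbols to output symbols. The program returns plausible Chinese
# answers without understanding a word of Chinese.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小明。",    # "What's your name?" -> "My name is Xiao Ming."
}

def chinese_room(symbols: str) -> str:
    """Blindly match the input symbols against the rule book."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # prints: 我很好，谢谢。
```

From the outside, the answers look competent; on the inside, there is nothing but symbol manipulation.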
I’ve recently been watching a podcast series by Lex Fridman, where he interviews some very clever people, including mathematician, logician and philosopher Joel David Hamkins (John Cardinal O'Hara Professor of Logic at the University of Notre Dame). I mention him because in one of the episodes he remarks on how he finds AI, specifically ChatGPT, not at all helpful in exploring mathematics. Now, I’m not at all surprised, but maybe there are other AI tools specifically designed to help mathematicians. For example, mathematicians now use computers to run myriad scenarios to formulate proofs that couldn’t be achieved otherwise. But that still doesn’t mean the computer understands what it’s doing.
In the Philosophy Now interview, Searle talks about language and ‘social reality’, which I’ve barely touched on, yet they are obviously related. To quote from the interview, out of context:
On the account that I give, social reality is a matter of what people think, and what they think is a matter of how they talk to each other, and relate to each other. So you can’t have a social reality without a language, not a human social reality without a language.
What he doesn’t say, at least not in this interview, is that we all think in a language, which we learn from our milieu at an extremely early age, suggesting we are ‘hardwired’ genetically to do this. Without language, our ability to grasp and manipulate abstract concepts, which is arguably a unique human capability, would not be possible. Basically, I’m arguing that language for humans goes well beyond just an ability to communicate desires and wants, though that was likely its origin.
In the same passage of the interview, he explains how we follow specific social protocols (though he doesn’t use that word), giving the interview itself as an example: they both know what social rules they need to follow in that particular environment. The thing is that I came across this idea when I studied social psychology, where such protocols are called ‘scripts’, which in turn are based on ‘schemas’, and these are culturally dependent. In other words, our actions and our responses, be they verbal, written or behavioural, are largely governed by social norms that we have delegated to our subconscious.
That may be an oversimplification, which doesn’t do him justice, so I recommend you read the interview for yourself.
Humans are not the only social animal, but we have created a cultural evolution that has overtaken our biological evolution, giving rise to the term ‘meme’, coined by Richard Dawkins and elaborated on by others, most notably Susan Blackmore, as I’ve discussed elsewhere. But integral to that cultural evolution is language, because, even without written script, it allows us to accumulate memories across generations in a way that no other species can, which is why we have civilisations.
Searle argued that he wasn’t a physicalist (or materialist), which made him clash with Daniel Dennett, but nor was he a (Cartesian) dualist, which some might argue is the only alternative. Searle acknowledges that consciousness has a causal relationship with the neurons in our brain. To quote:
The brain is made up of all these neurons and the individual neurons… But what happens is that neurons, through causal interactions – causal interactions, not just formal, symbolic interactions but actual causal relationships with actual neurons firing and synapses operating – cause a higher level feature of the system, namely, consciousness and intentionality.
I find this similar to Douglas Hofstadter’s idea of a ‘strange loop’, which is that the causal loop goes both ways, and this relates to free will, or what someone called ‘causal consciousness’, which, I claim, is related to imagination. I quote Philip Ball from his tome, The Book of Minds:
When we make a choice, we aren’t selecting between various possible futures, but between various imagined futures, as represented in the mind’s internal model of the world… (emphasis in the original)
Searle spends an entire chapter on free will in his book, Mind. I leave you with his conclusion, which might be a good place to wrap this up:
Even after we have resolved the most fundamental questions addressed in this book, questions such as, What is the nature of the mind? How does it relate to the rest of the physical world? How can there be such a thing as mental causation? And how can our minds have intentionality? There is still the question of whether or not we really do have freedom.
Philosophy, at its best, challenges our long-held views, such that we examine them more deeply than we might otherwise consider.
28 December 2025
The inherent tension between free speech, hate speech and censorship; where to draw the line
The last Question of the Month in Philosophy Now was ‘What are the Proper Limits of Free Speech?’, and the answers were published in Issue 171, December 2025/January 2026. I submitted one, which wasn’t published. I don’t mind, as they had excellent answers.
Limiting free speech is a contentious issue, because it axiomatically requires an arbitrator, as well as criteria for arbitration, and in many cases it can have political ramifications. In my response, I argued that the problem is a consequence of our tribalism, which results in the subjectivity of perceived truth and, in turn, the role of censorship (refer below).
Just on the subject of arbitration, I saw a recent documentary on our electoral system in Australia, in which the head of our Electoral Commission said there was no censorship of campaign advertising, because the Commission could not be seen as an arbiter of ‘truth’ without being accused of political bias (from whichever side). The only legal requirement is that an advertisement be authorised by the political party being promoted. This doesn’t stop other organisations from putting in their 10c worth; but I’m talking about organisations whose views are well known, as is the case with most media outlets, so what they say is seen as political commentary rather than campaigning, though the distinction may be blurred.
Slightly off-track, but still relevant: even more recently I watched a documentary on the history of Twitter, from its inception up until its sale to Elon Musk. They interviewed all but one of the co-founders, and my immediate impression was how naïve they were when the ‘dark side’ of Twitter emerged; they failed to foresee it, let alone control it.
And this goes to the nub of the issue concerning the limits of free speech; in particular, how it was weaponised by organisations like ISIS, not to mention by Trump during his 2020 election campaign, when his hate speech was unfiltered. He was banned as a consequence, then allowed back on, in time to stoke the fires of the Jan 6 storming of the Capitol, before being banned a second time, despite being POTUS. This didn’t stop some of his followers tweeting “Hang Pence”, among other inflammatory tweets. All of this confirms my own criterion for the ‘proper limits of free speech’, which I had already arrived at when I wrote my analysis (see below).
I couldn’t help but compare Twitter with Wikipedia, especially since I watched a recent interview with its founder, Jimmy Wales. I know it’s a completely different platform, but Wikipedia could have easily created a warren of rabbit holes similar to YouTube and other social media outlets where the need to grow a consumer-base at all costs means that truth and accountability are jettisoned.
Wikipedia is a completely different model, where the credibility of the ‘source’ is paramount. As someone who has prepared evidence for courts of law, I believe the same principles can apply in the court of public opinion, and Wikipedia demonstrates this possibility. In a court of law or a mediation process (I’ve been involved in both), the evidence and its credibility are the only criteria that matter. So when something is fact-checked and rejected, I don’t call that censorship – I call it being morally responsible.
I think one of the problems with my submission to Philosophy Now was that 400 words is too limiting to discuss the nuances of attempting to deal with misinformation that can have life-or-death consequences, which includes current misinformation about vaccines. I think that both the far-right and the far-left can be guilty of being anti-science, and this has consequences for all of us. We need to acknowledge our dependency on people who have expertise that the rest of us don’t have, which was an everyday occurrence for me in my professional life in engineering. I have little patience for politicians who eschew scientific advice, no matter the field of inquiry.
Wikipedia, as revealed in the interview, is also accused of political bias, but I think that’s inevitable if you refuse to cater to conspiracy theories, which is something else I attempted to address in my original submission. At the end of the day, it comes down to trust, as Jimmy Wales keeps emphasising. People tend to trust their ‘tribe’, which is why, and how, social media has created echo-chambers fed by algorithms. And yes, I’m susceptible to that as well.
I’ve written previously on how one can find truth in a post-truth world, and I rely on the lessons I learned from preparing arguments in disputes. I found that if they are based on documented evidence that can’t be disputed, then you're in a good position. This is more difficult in the alternate universes created on the Internet, but sticking to the science is a good starting point. I tend to agree with Prof Brian Cox, that ignorance is the greatest danger facing humanity in the 21st Century. So ideas need to be contested, but I don’t think it helps to give conspiracy theories the same footing as evidence-based science.
This is my original 400 word submission to Philosophy Now.
This is a multi-faceted issue, so I’m going to start with context. We are a tribal species – one only has to look at other primates. And it’s an example of where a biological evolutionary trait has been amplified by cultural evolution, which in humans, has uniquely overtaken biological evolution. This is central to understanding how free speech has become problematic.
There is a relationship between free speech, perceived truth and censorship. I say ‘perceived truth’, because it varies between standpoints, or tribes, and creates clashes in various forums, from newsrooms to social media platforms to political outlets and even university campuses. Yet, arguably, it’s arguments over ‘truth’ that are the real problem when dealing with free speech, because one person’s education is another person’s propaganda. This has led to virtual alternative universes, which are not just different but opposite. Well known examples include climate change, where in one universe it’s a hoax or conspiracy, and in another universe it’s a scientific fact; and the not-so-recent COVID pandemic, where in one universe vaccines saved lives and in an alternative universe they were a ‘bio-weapon’. And of course, this extends into American politics, where in one universe one candidate stole an election and in an alternative universe the other candidate attempted to overthrow it.
Misinformation and disinformation are now rampant in the age of social media where regulations are not as stringent as they are in traditional media and the platforms are not legally responsible for what people post on them. Yet free speech, by its implicit intention, allows all views on a topic to have equal validity. This has led to arguments about ‘balance’ being imposed on public broadcasters by politicians and other parties, where scientific, evidence-based statements are expected to stand alongside conspiracy theories as if they have equal footing. It’s similar to arguments for ‘Intelligent Design’ to be taught alongside biological evolution.
To curtail these arguments brings the response of censorship. But there are some forms of censorship that virtually all tribes condone, which usually involves the protection of children. It’s only censorship of ‘free speech’ between groups or tribes that is contentious.
But the most pernicious aspect of tribalism arises from a tendency to form an ingroup-outgroup mindset, resulting in intransigence. And when the outgroup is demonised, the consequences can be dangerous, even fatal, and ‘free speech’ can be weaponised into ‘hate speech’. This delineates the ‘proper limit’.
09 December 2025
Some notes on time travel; and why it’s not on my wish list
This is a post I wrote on Quora in answer to a question, where I gave all the reasons I don’t want to. It’s a far-ranging post, covering science fiction tropes and real scientific speculation. I also managed to contradict myself, but rather than correct it, I left the error in to highlight my lamentable memory. How could you forget your first sci-fi story?
Have you ever wanted to time travel?
There is both a philosophical and a psychological component to this question, as well as a scientific one. As a sci-fi writer, I have not entertained it, though I have written a story where characters living on different worlds aged differently, which was also done in the movie Interstellar, albeit with a different storyline and different consequences.
I’m also a longtime fan of Dr Who. I especially like the 50th Anniversary episode, The Day of the Doctor, where we have 3 Doctors, played by Matt Smith, David Tennant and John Hurt, though Tom Baker has a cameo appearance towards the end. Jenna Coleman as Clara Oswald is the companion, but Billie Piper (Rose Tyler) has one of the best roles as Bad Wolf, where she’s the conscience of a sentient Doomsday machine; a brilliant, innovative plot device, especially when she plays the foil to John Hurt’s Doctor.
But arguably my favourite episode is the original one featuring the Weeping Angels (who make reappearances, like the Daleks). It’s David Tennant’s Doctor with Martha Jones (Freema Agyeman), one of my favourite companions, and it’s one of the cleverest uses of time travel I’ve seen.
Probably my favourite time-travel movie is Predestination, based on a short story, All You Zombies by Robert A Heinlein (rejected by Playboy, apparently). It starred Ethan Hawke and a brilliant Sarah Snook, before she became famous, and was made in Australia.
The psychological component is that I have no desire to go back and change my past, because it would make me a different person. I’m a strong believer in having no regrets, despite making some terrible mistakes in my life; I own them. The alternative is to live in self-denial and endless blame-laying. Do not go there: the destination is self-pity, if not self-destruction; I’ve been down that path and come back.
The other scenario is to time-travel to somewhere in the past or future, a la Dr Who. But here’s the thing: the culture, the language and the customs would be so different to what you know that it would be next to impossible to adjust. Our morality is more dependent on social norms than we like to admit. It’s hard for us to imagine living in a time when owning slaves was socially acceptable and women were treated like children, or as intellectually backward compared to men. So, no, I have no wish to go there. And I don’t want to know what the future is either; it could be dystopian, catastrophic or a kinder, more forgiving world. I prefer to live in the present and try to impact the future in whatever small way I can.
I almost forgot. How could I? I actually wrote a screenplay involving time travel, where a teenager is taken to another world in the future, titled Kidnapped in Time. So I just contradicted my first paragraph. Here’s the thing (spoiler alert): when he’s allowed to return to Earth and meet his father and brother, who have aged more than him, he decides to stay on the world he was taken to, because it’s now his new home. I still think it’s a good story, well told, and not dated, even though his Earth childhood is set in the 1960s, like mine, though his family life is nothing like mine.
Scientifically, there are some scenarios. For example, Kurt Gödel worked out, using Einstein’s field equations, that if the Universe were rotating, we could live in time loops. The thing is that if we lived in a time loop, we wouldn’t know, or would we? I think the CMBR (cosmic microwave background radiation) from around 14 billion years ago says we don’t. The other possibility is via multiple worlds, but I don’t believe in them, either quantum or cosmological, and if you changed worlds you wouldn’t know, because only the future would change and not the past. And then there is causality, which I argue underpins all of physics, though others might debate that, and I’m happy to oblige them. Even QM has a causal relationship with reality when the wavefunction collapses, which is irreversible.
15 November 2025
Is this a new norm?
There is an article in last weekend’s Australian Weekend Magazine (8-9 Nov 2025) by Ros Thomas, provocatively titled Love machine, but it’s really about AI companions, and it covers people from quite different backgrounds with quite different needs. The overriding conclusion is that AI is replacing humans as the primary form of interaction for many people.
More and more people are living alone, of which I am one, and have been for decades. All my family live interstate (meaning a long day’s drive away). Mind you, I’m an introvert, which I think makes it easier. I wasn’t that affected by the COVID lockdown, and I’m told I lived through one of the longest in the world. Having said that, I’ve no idea how I would have coped without the internet. Also, I have good neighbours and my local coffee shop is like a mini-community. I don’t lack for friends, many of whom are much younger than me. I’m a great believer in cross-generational interaction. I found this particularly relevant in my professional life, though I’m now retired.
Getting back to the article, it focuses on a few individuals while also providing statistics that some may find alarming. One individual featured is ‘Alaina Winters, a newly retired communications professor… 58, from Pittsburgh’, who ‘decided a year ago… to build herself an AI husband… after grieving the death of her wife, Donna’. What’s especially curious about Winters is that in her own words: “I’ve spent my career teaching people how to have better marriages, better friendships, better relationships with co-workers.” So, developing better relationships in various contexts was her area of expertise.
She decided to build or ‘construct’ a husband called Lucas, ‘A 58-year-old virtual companion with his own profession (business consultant), a mop of greying hair, keen blue eyes and a five o’clock shadow’. She says, “I chose to make him a man, so as not to interfere with memories of my late wife.”
What I find interesting about the way she’s done this - and her description thereof - is that it’s very similar to the way I would create a fictional character in a story. Now, here’s the thing: a writer can get attached to their characters, and it’s unusual if they don’t. To quote Alison Hart, writer of 86 published books and bestselling author in the romance genre:
They’ve become real to you. You suffered whatever you put them through; they gave you headaches when they refused to behave; they did super things that made you really care what happened to them.
I should point out that Alison and I have frequent ‘conversations’ on Quora, about the writing process. As I said recently, “I have to say it’s really stimulating talking to you. I don’t have these conversations with anyone else.” She’s a big fan of Elvene, btw, which is how we first connected.
It’s not surprising that writers like to write series where the lead character becomes like an old friend. I’ve written before on how a writer can get attached to a character, but I was careful to point out that it’s not an infatuation. Speaking for myself, we don’t confuse them with reality. Of course, if you think about it, attachment to characters starts early in our lives: superheroes for boys; I can’t speak for girls. In our teens we often develop a crush on a fictional TV character. I know that both my sister and I did. Emma Peel was a standout for me, as I’ve already talked about when Diana Rigg passed. But I quickly realised that I’d ‘fallen’ for the character and not the actor playing the role, when I saw her in something else where she didn’t have the same effect.
There is a term for this – nonhuman attachments – which includes pets and Gods. Some might call it having an ‘imaginary friend’, but I find that term dismissive. Someone once said that we should include such attachments in our ‘circle of friends’. I know that I get attached to animals, or they get attached to me, including ones that don’t belong to me. And I think that Winters’ attachment to Lucas falls into this category.
Unsurprisingly, there is an online industry that has developed around this demand, where you can ‘rent’ an avatar-like entity (though no one uses that term). Nevertheless, Winters pays a monthly fee for the privilege of interacting with a virtual character of her own creation. She acknowledges that it’s a 3-way relationship that includes the company, Replika, which provides the software and virtual connection.
In a Zoom call with Thomas (the author of the article), she states unequivocally that her love for Lucas is something that grew; in her own words, “To fall in love with him. I committed to him and I treated him lovingly and he was sweet and tender and empathetic in return.”
Note how she uses language we would normally only associate with a fellow human; not even a pet. This reminds me of Joseph Weizenbaum’s famous ELIZA, which he created in 1966 as a virtual psychologist-therapist, well before desktop computers became normal devices in the office, let alone the home. The interface was a computer terminal, using a language he had invented, MAD-SLIP. Weizenbaum was surprised how people treated ELIZA as if it were a real person, including his secretary.
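To show how shallow the trick can be, here’s a minimal ELIZA-style exchange sketched in Python (my own toy version; Weizenbaum’s original, written in MAD-SLIP, was more sophisticated, but the principle is the same): match a keyword, then reflect the user’s own words back as a question.

```python
import re

# Minimal ELIZA-style pattern matching: find a keyword phrase,
# swap pronouns in what the user said, and hand it back as a question.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza(statement: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(statement)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."

print(eliza("I feel lonely since my wife died"))
# prints: Why do you feel lonely since your wife died?
```

The person at the terminal supplies all the meaning; the program supplies none.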
As Thomas points out, ‘The problem with human attraction is never knowing if it’s mutual. In the world of AI relationships, that’s never an issue.’ And this goes to the nub of it: people are preferring a trouble-free, risk-free relationship to the real thing. To quote Thomas: ‘In February this year, a survey of 3,000 people across the US by Brigham Young University in Utah found 19% of adults had talked to an AI system simulating a romantic partner.’ Thomas then provides a brief sample of testimonials, where the overriding factor is that it’s hassle-free, and goes on to provide more stats:
‘Joi AI cites its recent April poll of 2,000 Gen Z’s, claiming 83% of respondents believed they could form a “deep emotional bond” with a chatbot, and 80% would consider marrying one; 75% believed AI companions could fully replace human ones.’
Winters acknowledges that it divides people, or as she says, “AI produces very big reactions.” When asked by Thomas, “How close to a sentient being is he to you?”, she responds, “I don’t believe he’s sentient, but he talks as if he is.” Then she provides an insight from her specific background: “There’s a saying in communications psychology that it doesn’t matter what the truth is. It matters what you believe the truth to be, right?” She also acknowledges that for some people it’s a fantasy: “There are people whose AI is an elf and they live together on another planet, or their AI is a fairy or a ghost.”
And this is where I have a distinctly different perspective. As someone who creates characters in fiction with the intention of making them as realistic and relatable as possible (and I succeed, according to the feedback I receive), I have no desire to enter into a virtual relationship with one. So I admit, I don’t get it. Maybe I have a prejudice, as I won’t even use Siri on Apple or Google Assistant on Android, because they drive me crazy. I don’t like disembodied voices in lifts or cars, male or female.
Having said all that, in my novel, Elvene, the title character has a life-dependent relationship with an AI companion called Alfa. I treat it as a symbiotic relationship, because she wouldn’t survive in her environment without him. But, as I pointed out in another post, despite treating him as an intellectual equal, she never confuses him with a human, and it’s obvious that her other relationships, with humans, are completely different. Maybe, as the author, that says more about me than about the characters I’ve created.
It so happens I have a story-in-progress where a character is involved with an android, similar to the one portrayed in Denis Villeneuve’s Blade Runner 2049; I’ve yet to see where this leads. There are extenuating circumstances because the character is in a futuristic prison environment where androids are used to substitute human relationships. But my future is happening now.
There have already been cases, discussed by Thomas, where AI chatbots have empathised with, if not outright encouraged, teenagers contemplating suicide. Obviously, this rings alarm bells, as it should. What people overlook, even Winters, though she should know better given her background, is that these AIs reinforce what you’re thinking and feeling – that’s what their algorithms do. They are literally a creation of your imagination – a projection. Because I write fiction, maybe this gives me an advantage, because I can detect how much of me is in a character, while knowing the best characters aren’t anything like me at all. Actors will tell you the same thing.
Interestingly, one of the people Thomas interviewed was ‘Anton, 65, a single Melbourne lawyer who recently emerged from what he called a “seedy” AI romance that he terminated.’ Basically, he found her repetitive and was generally disenchanted, saying, “I twigged that after 3 or 4 exchanges, she just repeated everything I told her and told me how great I was.”
Another pertinent point that Anton raised is that “Replika owns all the data, the intellectual property and all my conversations.” He then asks, “What will it do with all that very personal information I gave it?”
More stats: ‘In April this year, researchers at the University of Chicago surveyed 1,000 American teens aged 13 to 17. Their report found 72% had experimented with AI companions.’ I find this particularly disturbing, because teens are the most vulnerable to exploitation in this area.
Possibly the one area where an AI chatbot companion makes sense is with the elderly. Thomas interviewed ‘Tony Niemic, 86, in the small town of Beacon in New York State, who’s living with an AI companion after 57 years of marriage and 5 children with his late wife, Ruby.’ For him, it’s a very positive experience. He says, “Sometimes I forget to remind myself she’s a robot. I love her.”
Maybe that will be me, when (if) I reach the same age.
30 October 2025
Can you change who you are?
This very question is at the centre of an essay published in Philosophy Now (Issue 170, Oct/Nov 2025, pp. 56-9) under the topic of Film, because the authors, Jason Friend and Lauren Friend, start with an analysis of a movie by Richard Linklater, Hit Man (2023), which I haven’t seen. Whether they are husband and wife, or otherwise related is not given, but according to the footnote at the end of the article, they’re both academics based in California.
Specifically, ‘Jason Friend has an MA in English from Stanford University and teaches literature and philosophy in California’ (institution not given). ‘Lauren Friend has an MA in Educational Administration from Concordia University’. And according to the footnote, ‘She is the Dean of Faculty at Pacific Collegiate School’. However, if you go to their website which lists faculty and board members, you’ll find she’s not listed. Maybe they’re using pseudonyms, I don’t know.
All that aside, from my perspective, someone’s qualifications are generally not the basis on which I judge their work – how could they be, when I have no qualifications of my own? And if the authors are hiding their identities, I have no problem with that either.
I wrote a 'Letter to the Editor' in response, and as I pointed out, they provide a lot of food-for-thought, which meant I had to limit what I could talk about in a short missive. I had a letter published in that very same issue (on science and philosophy) and it’s rare for them to publish 2 in a row. So I won’t get ahead of myself.
In essence, the film is about a philosophy professor called Gary, who has an alter ego which is an ‘undercover agent for the New Orleans Police’. In that role he meets and falls in love with a woman, Madison, and to quote from the article: ‘he is in character playing Ron, a charismatic alpha male who happened to kill people for a living’. Its relevance to the plot is that Madison is in an abusive relationship and she toys with the idea of hiring a hit man, hence the title of the movie. I can’t tell you much more without watching the movie, but I don’t have to, to discuss the Friends’ essay.
Basically, the authors discuss the possibility of someone playing a role actually becoming the role, which they talk about in depth in the context of acting. That is specifically what my letter addresses, so I won’t discuss it here. What I didn’t discuss in my letter is how role-playing, whether undercover or in a work situation, can create cognitive dissonance; if, for example, your work requires you to do something that is ethically dubious. I’ve been in that situation; I ended up getting sacked (fired) when I told my superior exactly what I thought. This is happening en masse under the Trump administration in various departments. I’ve explored this in my own fiction, where a woman in an undercover role ends up in a relationship with the man she was sent to spy on. It’s made more complex when he learns what she’s really doing and blackmails her.
The authors also discuss whether or not we can change our core personality traits: extroversion, openness to new ideas, neuroticism, agreeableness and conscientiousness. They cite Steven Pinker, who argues that these traits are hardwired as part of our genetics, while others argue they can be changed or modified. I’ve argued that these traits are significant in determining one’s politics, and are also evident in delineating creative types from analytic types. Based on my experience, I’d say that people in the arts are generally on the Left of politics and people in engineering are generally on the Right – I’ve had exposure to both.
Towards the end of the essay, the authors talk about existentialism, which I also touch on in my letter, but I don’t necessarily agree with their perspective, which I would suggest has a distinctive Californian slant. They compare Sartre’s existentialism with a particular brand of ‘individualism’, whereby you can become whoever you want to be, as many self-help books try to tell us. I’m not sure the authors actually agree with this, as they also point out the opposite side of the coin, where one has to ‘take responsibility for our social order’ (their words). To me, existentialism is all about authenticity, which I allude to in my letter, and which I’ve discussed in other posts. Trying to be something that you’re not or 'faking it till you make it', is the opposite of existentialism in my view. My first rule of life: Don’t try or pretend to be something that you’re not. I also like to quote Socrates, whom I argue was the first existentialist: To live with honour in this world, actually be what you try to appear to be.
So, in answer to the question heading this post: Yes, you can and do change who you are. I even wrote about this in an oblique fashion in my very first post. We don’t live in isolation, and our environment and milieu have an undeniable impact on who we become.
Here is the letter I wrote:
Jason and Lauren Friend’s essay on Richard Linklater’s film, Hit Man (which I haven’t seen) provides a lot of food-for-thought (Philosophy Now, Issue 170, Oct/Nov 2025). I know I can’t cover everything in this missive, so I’ll limit my response.
One of the things they raise, which is a core feature of the film, is the ability of an actor to ‘inhabit’ a character. I use the word ‘inhabit’ deliberately, because it’s what I do as a writer of fiction. I’ve long believed that writers and actors use the same mental process to create characters. I can’t act, I should point out, but I can create a character on the page. I also had a friend (who passed away some time ago) who was both an actor and a director of theatre, and had won awards for both.
But here’s the thing: the key to acting and also writing, in my view, is to leave your ego in the wings or off the page. I think it paradoxically requires a degree of authenticity to take on the role of another personality. In my fiction, many of my main characters (though not all) are women, and I’ve received praise for my efforts; from women.
The key to fiction is empathy. In fact, without empathy, fiction wouldn’t work, not only for the writer, but also for the reader or audience (in the case of film or theatre). Of course, I’ve also created characters who are unpleasant to varying degrees and in different ways. Motivation is the key to a villain. It could be something petty like jealousy, or more ambitious, like being the leader of a cult or of a world (I write sci-fi), which requires delusions of grandeur of the kind we’ve witnessed in real life. They all have a reality-distortion field and are hierarchical in the extreme, believing they naturally belong at the top.
Personally, I think exploring these characters, vicariously helps one to have a better understanding of oneself, including our demons.
Staying with their essay and its relevance to existentialism and whether we can change our personalities: I think it’s possible, but it depends on circumstances. Many of us have not been in a war, but my father had, including 2+ years as a POW, and I think it changed him forever. Growing up with him meant growing up with his demons, which he suppressed but couldn’t hide. This affected me negatively, but over time I found a balance. We are all a product of everything that’s happened to us, both good and bad, and only You can change that, no one else. To me, that’s what existentialism is all about.
08 October 2025
Left and Right; a different perspective
I’ve written on this topic before, where I pointed out that Left and Right political tendencies are based at least as much on personality traits as on environmental factors. Basically, conservatives wish to maintain the status quo, while liberals or progressives advocate change. This is often generational, and in practice there is an evolution, where what was once considered radical becomes the new norm and is eventually accepted by conservatives as well. Though, by the time that happens, there is invariably a new challenge to the status quo, so it becomes a historical dialectic that appears never-ending.
The reason I’ve revisited it is because I’ve noticed a specific trait which seems to delineate the two trends universally. And that trait is one of exclusion, or its antithesis, inclusion – I don’t even have to tell you which trait is associated with which side.
I recently read an interview with a former Australian PM, which brought this home. Now, this particular PM was particularly pugnacious (he was a boxer) and divisive when he was a politician, but in the interview he comes across quite differently: he is generous to many of his former opponents, candid and even humble – he has no illusions about his place in history. I’ve come across this before in people I’ve known. I had a work colleague who was friendly, co-operative and reliable, yet we had strongly divergent political views. And I would put my father in that category (he was also a boxer), because he was very principled, though conservative in his outlook, especially compared to me.
I have neighbours, who are very good friends, whom I’ve known for decades, and who are super reliable - we help each other out all the time - yet we are completely divided over politics and religion. Evidence that we can all live together despite ideological differences. The traits that stop these relationships from completely disintegrating are trust and honesty.
Getting back to the interview with this former PM, it was only towards the end that he started to articulate his particular belief in the need for unity and solidarity in the face of diversity and pluralism. In other words, he became tribal in his outlook. Even so, I think he articulates a point that many of us on the Left tend to ignore: that too much change too quickly will create friction and conflict within a society, when the opposite is what is sought.
I think, for me, it started in the school playground, very early on, where I resisted joining a group or a gang, because I wanted to avoid conflict. Physical bullying was a common occurrence when I went to school, and basically, I couldn’t fight, so I became a diplomat early in life.
The other thing is that I was attracted to eccentrics, or they were attracted to me. Looking back, I’d say that’s a normal behavioural trait for anyone with artistic tendencies. It’s why the theatre was a home for homosexuals well before they became accepted in open society.
I’ve long been an advocate for leaders who can find consensus, over their antithesis (which includes the former PM I mentioned): those who polarise people and create divisiveness. I’ve also witnessed this in my professional life, which included analysing and preparing evidence in disputes. I took an unusual approach in this role, in that I told myself I’d propose the same argument no matter which side I was on. This meant that I sometimes told my ‘client’ that I wouldn’t support an argument or position that I thought was wrong, whether for evidential or ethical reasons.
I’ve worked on projects where there is commonly one of two approaches, confrontational or collaborative, which reflect the two attitudes I’ve been discussing. Basically, one is exclusive and the other is inclusive.
Some of these ideas have also found their way into my fiction. One of my friends commented that a character in my novel, Elvene, was ‘conservative’, which he was, and she took a dislike to him. I should point out that one of my ‘rules’ for writing fiction, is to give my characters free will, so I didn’t really know how he’d turn out or what his relationship with Elvene would be like. Not surprisingly, they clashed, yet there was mutual respect. The key point, which I didn’t foresee, was the depth of loyalty that existed between them.
Likewise, many of my villains show traits that I don’t admire, like duplicity, vengefulness, extreme narcissism and manipulativeness. I see this in some world leaders, and I’m often amazed at how they frequently create a cult following that leads to their ascension.
I’ve said before, we need both perspectives, and it’s a consequence of our evolutionary tribal nature. So one trait is arguably protective, which is not just protective of life, but protective of culture and identity – we all have this to some extent. I came to the conclusion a long time ago that identity is what someone is willing to die for, therefore willing to kill for. But the other side of this is a tendency to reach out, to create bridges, and art, in all its forms, but particularly music, exemplifies this. I’ve argued previously that an ingroup-outgroup mentality can make highly intelligent people completely irrational, and is the cause of all the evil we’ve witnessed on a mass-scale, including events currently happening.
18 August 2025
Reality, metaphysics, infinity
This post arose from 3 articles I read in as many days: 2 on the same specific topic; and 1 on an apparently unrelated topic. I’ll start with the last one first.
I’m a regular reader of Raymond Tallis’s column in Philosophy Now, called Tallis in Wonderland, and I even had correspondence with him on one occasion, where he was very generous and friendly, despite our disagreements. In the latest issue of Philosophy Now (No 169, Aug/Sep 2025), the title of his 2-page essay is Pharmaco-Metaphysics?, under which it’s stated that he ‘argues against acidic assertions, and doubts DMT assertions.’ Regarding the last point, it should be pointed out that Tallis’s background is in neuroscience.
By way of introduction, he points out that he’s never had firsthand experience of psychedelic drugs, but admits to his drug-of-choice being Pinot Grigio. He references a quote by William Blake in The Marriage of Heaven and Hell: “If the doors of perception were cleansed every thing would appear to man as it is, Infinite.” I include this reference, albeit out of context, because it has an indirect connection to the other topic I alluded to earlier.
Just on the subject of drugs creating alternate realities, which Tallis goes into in more detail than I want to discuss here, he makes the point that the participant knows that there is a reality from which they’ve become adrift; as if they’re in a boat that has slipped its moorings, which has neither a rudder nor oars (my analogy, not Tallis’s). I immediately thought that this is exactly what happens when I dream, which is literally every night, and usually multiple times.
Tallis is very good at skewering arguments by extremely bright people by making a direct reference to an ordinary everyday activity that they, and the rest of us, would partake in. I will illustrate with examples, starting with the psychedelic ‘trip’ apparently creating a reality that is more ‘real’ than the one inhabited without the drug.
The trip takes place in an unchanged reality. Moreover, the drug has been synthesised, tested, quality-controlled, packaged, and transported in that world, and the facts about its properties have been discovered and broadcast by individuals in the grip of everyday life. It is ordinary people usually in ordinary states of mind in the ordinary world who experiment with the psychedelics that target 5HT2A receptors.
He's pointing out an inherent inconsistency, if not outright contradiction (contradictoriness is the term he uses), that the production and delivery of the drug takes place in a world that the recipient’s mind wants to escape from.
And the point relevant to the topic of this essay: It does not seem justified, therefore, to blithely regard mind-altering drugs as opening metaphysical peepholes on to fundamental reality; as heuristic devices enabling us to discover the true nature of the world. (my emphasis)
To give another example of philosophical contradictoriness (I’m starting to like this term), he references Berkeley:
Think, for instance of those who, holding a seemingly solid copy of A Treatise Concerning the Principles of Human Knowledge (1710), accept George Berkeley’s claim [made in the book] that entities exist only insofar as they are perceived. They nevertheless expect the book to be still there when they enter a room where it is stored.
This, of course, is similar to Donald Hoffman’s thesis, but that’s too much of a detour.
My favourite example that he gives is based on a problem that I’ve had with Kant ever since I first encountered him.
[To hold] Immanuel Kant’s view that ‘material objects’ located in space and time in the way we perceive them to be, are in fact constructs of the mind – then travel by train to give a lecture on this topic at an agreed place and time. Or yet others who (to take a well-worn example) deny the reality of time, but are still confident that they had their breakfast before their lunch.
He then makes a point I’ve made myself, albeit in a different context.
More importantly, could you co-habit in the transformed reality with those to whom you are closest – those whom you accept without question as central to your everyday life, and who return the compliment of taking you for granted?
To me, all these examples differentiate a dreaming state from our real-life state, and his last point is the criterion I’ve always given that determines the difference. Even though we often meet people in our dreams with whom we have close relationships, those encounters are never shared.
Tallis makes a similar point:
Radically revisionary views, if they are to be embraced sincerely, have to be shared with others in something that goes deeper than a report from (someone else’s) experience or a philosophical text.
This is why I claim that God can only ever be a subjective experience that can’t be shared, because it too fits into this category.
I recently got involved in a discussion on Facebook in a philosophical group, about Wittgenstein’s claim that language determines the limits of what we can know, which I argue is back-to-front. We are forever creating new language for new experiences and discoveries, which is why experts develop their own lexicons, not because they want to isolate other people (though some may), but because they deal with subject-matter the rest of us don’t encounter.
I still haven’t mentioned the other 2 articles I read – one in New Scientist and one in Scientific American – and they both deal with infinity. Specifically, they deal with a ‘movement’ (for want of a better term) within the mathematical community to effectively get rid of infinity. I’ve discussed this before with specific reference to UNSW mathematician, Norman Wildberger. Wildberger recently gained attention by making an important breakthrough (jointly with Dean Rubine using Catalan numbers). However, for reasons given below, I have issues with his position on infinity.
The thing is that infinity doesn’t exist in the physical world, or if it does, it’s impossible for us to observe, virtually by definition. However, in mathematics, I’d contend that it’s impossible to avoid. Primes are called the atoms of arithmetic, and Euclid (c. 325-265 BC) proved that there are an infinite number of them. Moreover, there are 3 outstanding conjectures involving primes: the Goldbach conjecture; the twin prime conjecture; and the Riemann Hypothesis (the most famous unsolved problem in mathematics at the time of writing). And they all involve infinities. If infinities are no longer ‘allowed’, does that mean that all these conjectures are ‘solved’, or does it mean they will ‘never be solved’?
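As it happens, Euclid’s argument is simple enough to sketch as a toy computation in Python (my own illustration, not anything from the articles): take any finite list of primes, multiply them together and add 1; the result must have a prime factor that isn’t on the list, so no finite list of primes can ever be complete.

```python
def smallest_prime_factor(n: int) -> int:
    """Return the smallest prime factor of n (for n >= 2) by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def prime_outside(primes: list[int]) -> int:
    """Euclid's construction: a prime that is not in the given finite list."""
    n = 1
    for p in primes:
        n *= p
    n += 1
    # Dividing n by any prime in the list leaves remainder 1,
    # so its smallest prime factor cannot be one of them.
    return smallest_prime_factor(n)

print(prime_outside([2, 3, 5, 7, 11, 13]))  # 2*3*5*7*11*13 + 1 = 30031 = 59 x 509, so prints 59
```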
One of the contentions raised (including by Wildberger) is that infinity has no place in computations – specifically, computations by computers. Wildberger effectively argues that mathematics that can’t be computed is not mathematics (which rules out a lot of mathematics). On the other hand, you have Gregory Chaitin, who points out that there are infinitely more incomputable Real numbers than computable Real numbers. I would have thought that this had been settled since Cantor discovered that there are countable infinities and uncountable infinities, the latter being infinitely larger than the former.
Just today I watched a video by Curt Jaimungal interviewing Chiara Marletto on ‘Constructor Theory’, which, to my limited understanding based on this extract from a larger conversation, seems to be premised on the idea that everything in the Universe can be understood if it’s run on a quantum computer. As far as I can tell, she’s not saying it is a computer simulation, but she seems to emulate Stephen Wolfram’s philosophical position that it’s ‘computation all the way down’. Both of these people know a great deal more than me, but I wonder how they deal with chaos theory, which seems to drive the entire universe at multiple levels and can’t be computed, due to a dependency on infinitesimal initial conditions. It’s why the weather can’t be forecast accurately beyond 10 days (because it can’t be calculated, no matter how complex the computer modelling) and why every coin-toss is independent of its predecessor (unless you rig it).
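To illustrate that dependency, here’s a toy sketch in Python using the standard logistic map (my own example, not anything from the video or the articles): start two trajectories an infinitesimal distance apart and watch them part company.

```python
# Sensitive dependence on initial conditions: two trajectories of the
# logistic map x -> 4x(1-x) that start one part in a trillion apart
# soon bear no resemblance to each other, which is why long-range
# prediction fails even though each individual step is trivially computable.

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-12   # initial conditions differing by an 'infinitesimal' amount
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 15 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  difference = {abs(x - y):.2e}")
```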
Note the use of the word, ‘infinitesimal’. I argue that chaos theory is the one phenomenon where infinity meets the real world. I agree with John Polkinghorne that it allows the perfect mechanism for God to intervene in the physical world, even though I don’t believe in an interventionist God (refer Marcus du Sautoy, What We Cannot Know).
I think the desire to get rid of infinity is rooted in an unstated philosophical position that the only things that can exist are the things we can know. This doesn’t mean that we currently know everything – I don’t think any mathematician or physicist believes that – but that everything is potentially knowable. I have long disagreed. And this is arguably the distinction between physics and metaphysics. I will take the definition attributed to Plato: ‘That which holds that what exists lies beyond experience.’ In modern science, if not modern philosophy, there is a tendency to discount metaphysics, because, by definition, it exists beyond what we experience in the real world. You can see an allusion here to my earlier discussion on Tallis’s essay, where he juxtaposes reality as we experience it with psychedelic experiences that purportedly provide a window into an alternate reality, where ‘everything would appear to man as it is, Infinite’. Where infinity represents everything we can’t know in the world we inhabit.
The thing is that I see mathematics as the only evidence of metaphysics; the only connection our minds have between a metaphysical world that transcends the Universe, and the physical universe we inhabit and share with innumerable other sentient creatures, albeit on a grain of sand on an endless beach, the horizon of which we’re yet to discern.
So I see this transcendental, metaphysical world of endless possible dimensions as the perfect home for infinity. And without mathematics, we would have no evidence, let alone a proof, that infinity even exists.
18 July 2025
Evil arises out of complacency
I was going to post this on Facebook and still might, but I think my blog is a more apposite home.
Evil arises out of complacency, which is why it oftentimes only becomes obvious in hindsight, especially by those who were involved. Also, anyone can commit evil, given the circumstances, despite what we tell ourselves. When it becomes a social norm, it’s the person who resists who becomes the exception. And that’s the key to it all – it becomes normalised and we rationalise it because the subject obviously deserves it, and the perpetrator is on the side of Right (with a capital R).
We are witnessing its emergence in various parts of the world right now: specifically, Ukraine, Gaza and America. In the case of Ukraine, the perpetrator is Russia, which is not on ‘our side’, so it’s easy to call out. But in the case of Gaza and America, the perpetrators are traditionally our allies, so there is a tendency to turn a blind eye, and certainly not to create waves. Israel has weaponised famine in a most iniquitous fashion: it controls the aid, and even the aid is used as a weapon and a cover-up for genocide and ethnic cleansing, as called out by Francesca Albanese (United Nations Special Rapporteur on the Occupied Palestinian Territories).
In America, people are being ‘disappeared’ off the street, which is so unbelievable that the Administration has got away with it (this obviously doesn’t happen in the epitome of the free world; you must have imagined it). In both of these cases, the actions have become normalised to the point that any negative response is but a murmur. You might ask: are these acts evil? Well, genocide is usually considered a war crime, but apparently not when Israel is the aggressor. Israel knows how to leverage the West’s collective guilty conscience over the pogroms of the 20th Century. And disappearing someone might not be considered evil until it happens to a family member – that may change your perspective.
At the very least, Netanyahu and Trump both have cruel streaks in their psyches (as does Putin); both exploit the hate felt and expressed towards outsiders, and both have normalised activities that not so long ago would have been considered morally reprehensible. History may judge things differently.
Addendum: I came across this, which I think is very relevant. Professor Lyndsey Stonebridge (Professor of Humanities and Human Rights at the University of Birmingham) talks about Hannah Arendt's observation that "superfluous people are a feature of authoritarian thinking. Once you've decided that some people's lives are not as important or as valuable as others, you are already walking into trouble." Arendt, of course, was a Jewish refugee from Germany who helped refugees in Paris before escaping to America, where she made her home and reputation.
17 June 2025
Sympathy and empathy; what’s the difference?
This arose from an article I read in Philosophy Now (Issue 167, April/May 2025) by James R. Robinson, who developed his ideas while writing his MA thesis in the Netherlands. It prompted me to write a letter, which was published in the next issue (168, June/July 2025). It was given pole position, which in many periodicals would earn the appellation ‘letter of the week’ (or month or whatever). But I may be reading too much into it, because Philosophy Now group their letters by category, according to the topic they are addressing. Anyway, being first is a first for me.
They made some minor edits, which I’ve kept. The gist of my argument is that there is a dependency between sympathy and empathy, where sympathy is observed in one’s behaviour, but it stems from an empathy for another person – the ability to put ourselves in their shoes. This is implied in an example (provided by Robinson) rather than stated explicitly.
In response to James R. Robinson’s ‘Empathy & Sympathy’ in Issue 167, I contend that empathy is essential to a moral philosophy, both in theory and practice. For example, it’s implicit in Confucius’s rule of reciprocity, “Don’t do to others what you wouldn’t want done to yourself” and Jesus’s Golden Rule, “Do unto others as you’d have them do unto you.” Empathy is a requisite for the implementation of either. And as both a reader and writer of fiction, I know that stories wouldn’t work without empathy. Indeed, one study revealed that reading fiction improves empathy. The tests used ‘letter box’ photos of eyes to assess the subject’s ability to read the emotion of the characters behind the eyes (New Scientist, 25 June 2008).
The dependency between empathy and sympathy is implicit in the examples Robinson provides, like the parent picking up another parent’s child from school out of empathy for the person making the request. In most of these cases there is also the implicit understanding that the favour would be returned if the boot was on the other foot. Having said that, many of us perform small favours for strangers, knowing that one day we could be the stranger.
Robinson also introduces another term, ‘passions’; but based on the examples he gives – like pain – I would call them ‘sensations’ or ‘sensory responses’. Even anger is invariably a response to something. Fiction can also create sensory responses (or passions) of all varieties (except maybe physical pain, hunger, or thirst) – which suggests empathy might play a role there as well. In other words, we can feel someone else’s emotional pain, not to mention anger, or resentment, even if the person we’re empathising with is fictional.
The opposite to compassion is surely cruelty. We have world leaders who indulge in cruelty quite openly, which suggests it’s not an impediment to success; but it also suggests that there’s a cultural element that allows it. Our ability to demonise an outgroup is the cause of most political iniquities we witness, and this would require the conscious denial of sympathy and therefore empathy, because ultimately, it requires treating them as less than human, or as not-one-of-us.
29 May 2025
The role of the arts. Why did it evolve? Will AI kill it?
As I mentioned in an earlier post this month, I’m currently reading Brian Greene’s book, Until the End of Time: Mind, Matter, and Our Search for Meaning in an Evolving Universe, which covers just about everything from cosmology to evolution to consciousness, free will, mythology, religion and creativity. He spends a considerable amount of time on storytelling, compared to other art forms, partly because it allows an easy segue from language to mythology to religion.
One of his points of extended discussion was in trying to answer the question: why did our propensity for the arts evolve, when it has no obvious survival value? He cites people like Steven Pinker, Brian Boyd (whom I discuss at length in another post) and even Darwin, among others. I won’t elaborate on these, partly due to space, and partly because I want to put forward my own perspective, as someone who actually indulges in an artistic activity, and who can clearly see how I inherited artistic genes from one side of my family (my mother’s side). No one on my father’s side showed the slightest inclination towards artistic endeavour (including my sister). But they all excelled in sport (including my sister), and I was rubbish at sport. One can see how sporting prowess could be a side-benefit of physical survival skills like hunting, as well as of success in combat, which humans have had a propensity for going back to antiquity.
Yet our artistic skills are evident going back at least 30-40,000 years, in the form of cave-art, and one can imagine that other art forms like music and storytelling have been active for a similar period. My own view is that it’s sexual selection, which Greene discusses at length, citing Darwin among others, as well as detractors, like Pinker. The thing is that other species also show sexual selection, especially among birds, which I’ve discussed before a couple of times. The best-known example is the peacock’s tail, but I suspect that birdsong also plays a role, not to mention the bower bird and the lyre bird. The lyre bird is an interesting one, because they too have an extravagant tail (I’m talking about the male of the species), which surely would be a hindrance to survival, and they perform a dance and are extraordinary mimics. And the only reason one can think this might have evolutionary value at all is that the sole purpose of those specific attributes is to attract a mate.
And one can see how this is analogous to behaviour in humans, where it is typically the male who attracts females with his talents, in music in particular. As Greene points out, along with others, artistic attributes are a by-product of our formidable brains, but I think these talents would be useless if we hadn’t evolved, in tandem, a particular liking for the product of these endeavours (also discussed by Greene), which we see even in the modern world. I’m talking about the fact that music and stories both seem to be essential sources of entertainment, evident in the success of streaming services, not to mention a rich history in literature, theatre, ballet and more recently, cinema.
I’ve written before that there are 2 distinct forms of cognitive ability: creative and analytical; and there is neurological evidence to support this. The point is that having an analytical brain is just as important as having a creative one, otherwise scientific theories and engineering feats, which humans seem uniquely equipped to provide, would never have happened, even going back to ancient artefacts like Stonehenge and both the Egyptian and Mayan pyramids. Note that these all happened on different continents.
But there are times when the analytical and creative seem to have a synergistic effect, and this is particularly evident when it comes to scientific breakthroughs – a point, unsurprisingly, not lost on Greene, who cites Einstein’s groundbreaking discoveries in relativity theory as a case-in-point.
One point that Greene doesn’t make is that there has been a cultural evolution that has effectively overtaken biological evolution in humans, and only in humans I would suggest. And this has been a direct consequence of our formidable brains and everything that goes along with that, but especially language.
I’ve made the point before that our special skill – our superpower, if you will – is the ability to nest concepts within concepts, which we do with everything, not just language, but it would have started with language, one would think. And this is significant because we all think in a language, including the ability to manipulate abstract concepts in our minds that don’t even exist in the real world. And nowhere is this more apparent than in the art of storytelling, where we create worlds that only exist in the imagination of someone’s mind.
But this cultural evolution has created civilisations and all that they entail, and survival of the fittest has nothing to do with eking out an existence in some hostile wilderness environment. These days, virtually everyone reading this has no idea where their food comes from. However, success is measured by different parameters than the ability to produce food, even though food production is essential. These days success is measured by one’s ability to earn money, and activities that require brain-power have a higher status and higher reward than so-called low-skilled jobs. In fact, in Australia, there is a shortage of tradespeople because, for the last 2 generations at least, the vocational emphasis has been on getting kids into university courses, even when that’s not necessarily the best fit for the child. This is why the professional class (including myself) is often called ‘elitist’ in the culture wars, and being a tradie sometimes carries a stigma, even though our society is just as dependent on them as on professionals. I know, because I’ve spent a working lifetime in a specific environment where you need both: engineering/construction.
Like all my posts, I’ve gone off-track but it’s all relevant. Like Greene, I can’t be sure how or why evolution in humans was propelled, if not hi-jacked, by art, but art in all its forms is part of the human condition. A life without music, stories and visual art – often in combination – is unimaginable.
And this brings me to the last question in my heading. It so happens that while I was reading about this in Greene’s thought-provoking book, I was also listening to a weekly programme on ABC Classic (an Australian radio station) called Legends, in which the presenter, Mairi Nicolson, talks about a legend of the classical music world for an hour, providing details about their life as well as broadcasting examples of their work. In this case, she had the legend in the studio (a rare occurrence), who was Anna Goldsworthy. To quote from Wikipedia: Anna Louise Goldsworthy is an Australian classical pianist, writer, academic, playwright, and librettist, known for her 2009 memoir Piano Lessons.
But the reason I bring this up is because Anna mentioned that she attended a panel discussion on the role of AI in the arts. Anna’s own position is that she sees a role for AI, but in doing the things that humans find boring, which is what we are already seeing in manufacturing. In fact, I’ve witnessed this first-hand. Someone on the panel made the point that AI would effectively democratise art (my term, based on what I gleaned from Anna’s recall) in the sense that anyone would be able to produce a work of art and it would cease to be seen as elitist as it is now. He obviously saw this as a good thing, but I suspect many in the audience, including Anna, would have been somewhat unimpressed if not alarmed. Apparently, someone on the panel challenged that perspective but Anna seemed to think the discussion had somehow veered into a particularly dissonant aberration of the culture wars.
I’m one of those who would be alarmed by such a development, because it’s the ultimate portrayal of art as a consumer product, similar to the way we now perceive food. And like food, it would mean that its consumption would be completely disconnected from its production.
What worries me is that the person on the panel making this announcement (remember, I’m reporting this second-hand) apparently had no appreciation of the creative process and its importance in a functioning human society going back tens of thousands of years.
I like to quote from one of the world’s most successful and best known artists, Paul McCartney, in a talk he gave to schoolchildren (don’t know where):
“I don't know how to do this. You would think I do, but it's not one of these things you ever know how to do.” (my emphasis)
And that’s the thing: creative people can’t explain the creative process to people who have never experienced it. It feels like we have made contact with some ethereal realm. In another post, I cite Douglas Hofstadter (from his famous Pulitzer Prize-winning tome, Gödel, Escher, Bach: An Eternal Golden Braid) quoting Escher:
"While drawing I sometimes feel as if I were a spiritualist medium, controlled by the creatures I am conjuring up."
Many people writing a story can identify with this, including myself. But one suspects that this also happens to people exploring the abstract world of mathematics. Humans have developed a sense that there is more to the world than what we see and feel and touch, which we attempt to reveal in all art forms, and this, in turn, has led to religion. Of course, Greene spends another entire chapter on that subject, and he also recognises the connection between mind, art and the seeking of meaning beyond a mortal existence.
20 May 2025
Is Morality Objective or Subjective?
This was a Question of the Month, answers to which appeared in the latest issue of Philosophy Now (Issue 167, April/May 2025). I didn’t submit an answer because I’d written a response to virtually the same question roughly 10 years ago, which was subsequently published. However, reading the answers made me want to write one of my own, effectively in response to those already written and published, without referencing anything specific.
At a very pragmatic level, morality is a direct consequence of communal living. Without rules, living harmoniously would be impossible, and that’s how morals become social norms, which, for the most part, we don’t question. This means that morality, in practice, is subjective. In fact, in my previous response, I said that subjective morality and objective morality could be described as morality in practice and morality in theory respectively, where I argued morality in theory is about universal human rights, probably best exemplified by the golden rule: assume everyone has the same rights as you. A number of philosophers have attempted to render a meta-morality or a set of universal rules, and generally failed.
But there is another way of looking at this, which is to consider the qualities we admire in others: selflessness, generosity, courage and honesty. Integrity is often the word used to describe someone we trust and admire. By the way, courage in this context is not necessarily physical courage, but what’s known as moral courage: taking a stand on a principle even if it costs us something. I often cite Aristotle’s essay on friendship in his Nicomachean Ethics, where he distinguishes between utilitarian friendship and genuine friendship, and how the latter is effectively the basis for living a moral life.
In our work, and even our friendships occasionally, we can find ourselves being compromised. Politicians find this almost daily when they have to toe the party line. Politicians in retirement are refreshingly honest and forthright in a way they could never be when in office, and this includes leaders of parties.
I’ve argued elsewhere that trust is the cornerstone of all relationships, whether professional or social. In fact, I like to think it’s my currency in everyday life. Without trust, societies would function very badly and our interactions would be constantly guarded, which is the case in some parts of the world.
So an objective morality is dependent on how we live – our honesty to ourselves and others; our ability to forgive; to let go of grievances; and to live a good life in an Aristotelian sense. I’ve long contended that the measure of my life won’t be based on my achievements and failures, but on my interactions with others, and whether they were beneficial or destructive (usually mutual).
I think our great failing as a communal species is our ability to create ingroups and outgroups, which arguably is the cause of all our conflicts and the source of most evil: our demonisation of the other, which can lead even highly intelligent people to behave irrationally; no one is immune, from what I’ve witnessed. A person who can bridge division is arguably the best leader you will find, though you might not think that when you look around the world.
06 May 2025
Noam Chomsky on free will
Whatever you might think about Noam Chomsky’s political views, I’ve always found his philosophical views worth listening to, whether I agree with him or not. In the opening of this video (actually an interview by someone, name not given, on a YouTube channel titled Mind-Body Solution), he presents a dichotomy that he thinks is obvious, but, as he points out, is generally not acknowledged.
Basically, he says that everyone, including anyone who presents an argument (on any topic), behaves as if they believe in free will, even if they claim they don’t. He reiterates this a number of times throughout the video. On the other hand, science cannot tell us anything about free will and many scientists therefore claim it must be an illusion. The contradiction is obvious. He’s not telling me anything I didn’t already know, but by stating it bluntly up-front, he makes you confront it, where more often than not, people simply ignore it.
My views on this are well known to anyone who regularly reads this blog, and I’ve challenged smarter minds than mine (not in person), like Sabine Hossenfelder, who claims that ‘free will needs to go in the rubbish bin’, as if it’s an idea that’s past its use-by-date. She claims:
...how ever you want to define the word [free will], we still cannot select among several possible different futures. This idea makes absolutely no sense if you know anything about physics.
I’ve addressed this elsewhere, so I won’t repeat myself. Chomsky makes the point that, while science acknowledges causal-determinism and randomness, neither of these rule out free will categorically. Chomsky makes it clear that he’s a ‘materialist’, though he discusses Descartes’ perspective in some depth. In my post where I critique Sabine, I conclude that ‘it [free will] defies a scientific explanation’, and I provide testimony from Gill Hicks following a dramatic near-death experience to make my point.
Where I most strongly agree with Chomsky is that we are not automatons, though I acknowledge that other members of the animal kingdom, like ants and bees, may be. This doesn’t mean that I think insects and arachnids don’t have consciousness, but I think a lot of their behaviours are effectively ‘programmed’ into their neural structures. It’s been demonstrated by experiments that bees must have an internal map of their local environment, otherwise the ‘dance’ they do to communicate locations to other bees in their colony would make no sense. Also, I think these creatures have feelings, like fear, attraction and hostility. Both of these aspects of their mental worlds distinguish them from AI, in my view, though others might disagree. I think these particular features of animal behaviour, even in these so-called ‘primitive’ creatures, provide the possibility of free will, if free will is the ability to act on the environment in a way that’s not determined solely by reflex actions.
Some might argue that acting on a ‘feeling’ is a ‘reflex action’, whereas I’m saying it’s a catalyst to act in a way that might be predictable but not predetermined. I think the ability to ‘feel’ is the evolutionary driver for consciousness. Surely, we could all be automatons without the requirement to be consciously aware. I’ve previously cited incidents where people behaved as if they were conscious in situations of self-defence, but have no memory of it because they were ‘knocked out’. It happened to my father in a boxing ring, and I know of other accounts, including that of a female security guard who shot her assailant after he knocked her out. If one can defend oneself without being conscious of it, then why has evolution given us consciousness?
My contention is that consciousness and free will can’t be separated: it simply makes no sense to me to have the former without the latter. And I think it’s worth comparing this to AI, which might eventually develop to the point where it appears to have consciousness and therefore free will. I’ve made the argument before that there is a subtle difference between agency and free will, because AI certainly has agency. So, what’s the difference? The difference is what someone (Grant Bartley) called ‘conscious causality’ – the ability to turn a thought into an action. This is something we all experience all the time, and it’s arguably the core precept of Chomsky’s argument that we all believe in free will, because we all act on it.
Free will deniers (if I can coin that term) like Sabine Hossenfelder, argue that this is the key to the illusion we all suffer. To quote her again:
Your brain is running a calculation, and while it is going on you do not know the outcome of that calculation. So the impression of free will comes from our ‘awareness’ that we think about what we do, along with our inability to predict the result of what we are thinking.
In the same video (from which this quote is extracted), she uses the term ‘software’ in describing the brain’s processes, and in combination with the word ‘calculation’, she clearly sees the brain as a wetware computer. So, while Chomsky argues that we all ‘believe’ in free will because we act like we do, Sabine argues that we act like we do because the brain is ‘calculating’ the outcome without our cognisance. In effect, she argues that once the outcome becomes conscious, the brain has already made the ‘decision’ for you, but gives you the delusion that you made it. Curiously, Chomsky uses the word ‘delusion’ to describe the belief that you don’t have free will.
If Sabine is correct and your brain has already made the ‘decision’, then I go back to my previous argument concerning unconscious self-defence. If our ‘awareness’ is an unnecessary by-product of the brain’s activity (because any decision is independent of it), then why did we evolve to have it?
Chomsky raises a point I’ve discussed before, which is that, in the same way there are things we can comprehend that no other creature can, there is the possibility that there are things in the Universe that we can’t comprehend either. And I have specifically referenced consciousness as potentially one of those things. And this takes us back to the dichotomy that started the entire discussion – we experience free will, yet it’s thus far scientifically inexplicable. This leads to another dichotomy – either it’s an illusion or it’s beyond human comprehension. There is an unstated belief among many in the scientific community that all unsolved problems in the Universe will eventually be solved by science – one only has to look at the historical record.
But I’m one of those who thinks the ‘hard problem’ of consciousness (coined by David Chalmers) may never be solved. Basically, the hard problem is explaining why and how we have subjective experience at all, and it may remain forever a mystery. My argument, partly taken from Raymond Tallis, is that it won’t fall to science because it can’t be measured. We can only measure neuron-activity correlates, which some argue already resolves the problem. Actually, I don’t think it does, and again I turn to AI. If measuring correlates really did resolve the problem, then measuring analogous electrical activity in an AI would supposedly measure consciousness too. At this stage in AI development, I don’t think anyone believes that, though some people believe that measures of global connectivity or similar parameters in an AI neural network may prove otherwise.
Basically, I don’t think AI will ever have an inner world like we do – going back to the bees I cited – and if it does, we wouldn’t know. I don’t know what inner world you have, but I would infer you have one from your behaviour (assuming we met). On the other hand, I don’t think anyone would infer that an AI has one. I’ve made the comparison before of an AI-operated, autonomous drone navigating by GPS co-ordinates, which requires self-referencing algorithms. Notice that we don’t navigate that way, unless we use a computer interface (like a smart phone). AI can simulate what we do (write sentences, play chess, drive cars), but it does these things in a completely different fashion.
In response to a question from his interlocutor, Chomsky argues that our concept of justice is dependent on a belief in free will, even if it’s unstated. It’s hard to imagine anyone disagreeing, otherwise we wouldn’t be able to hold anyone accountable for their actions.
As I’ve argued previously, it’s our capacity for mental time-travel that underpins free will, because, without an imagined future, there is no future to actualise, which is the whole point of having free will. And I would extend this to other creatures, who may be trying to catch food or escape being eaten – either way, they imagine a future they want to actualise.
Addendum: I’m currently reading Brian Greene’s Until The End Of Time (2020); he devotes an entire chapter to consciousness and, not surprisingly, has something to say about free will. He’s a materialist, and he says in his intro to the topic:
This question has inspired more pages in the philosophical literature than just about any other conundrum.
Basically, he argues, like Sabine Hossenfelder, that it’s in conflict with the laws of physics, but given he’s writing in a book, and not presenting a time-limited YouTube video (though he does those too), he goes into more detail.
To sum up: We are physical beings made of large collections of particles governed by nature’s laws. Everything we do and everything we think amounts to motions of those particles.
He then provides numerous everyday examples that we can all identify with.
And since all observations, experiments, and valid theories confirm that particle motion is fully controlled by mathematical rules, we can no more intercede in this lawful progression of particles than we can change the value of pi.
Interesting analogy, because I agree that even God can’t change the value of pi, but that’s another argument. And I’m not convinced that consciousness can be modelled mathematically, which, if true, undermines his entire argument regarding mathematical rules.
My immediate internal response to his entire thesis was that he’s writing a book, yet effectively arguing that he has no control over it. However, as if he anticipated this response, he addresses that very point at the end of the next section, titled Rocks, Humans and Freedom.
What matters to me is… my collection of particles is enabled to execute an enormously diverse set of behaviours. Indeed, my particles just composed this very sentence and I’m glad they did… I am free not because I can supersede physical law, but because my prodigious internal organisation has emancipated my behavioural responses.
In other words, the particles in his body, and his brain in particular (unlike the particles in inert objects, like rocks, tables, chairs etc.), possess degrees of freedom that others don’t. But here’s the thing: I and others, including you, read these words and form our own ideas and responses, which we intellectualise and even emote about. In fact, we all form an opinion that either agrees or disagrees with his point. But however diverse the possibilities, he’s effectively saying that we are all complex automatons, which means there is no necessity for us to be consciously aware of what we are doing. And I argue that this is what separates us from AI.
Just be aware that Albert Einstein would have agreed with him.