Wednesday 24 January 2024

Can AI have free will?

This is a question I’ve never seen asked, let alone answered. I think there are good reasons for that, which I’ll come to later.
The latest issue of Philosophy Now (Issue 159, Dec 2023/Jan 2024), which I’ve already referred to in 2 previous posts, has as its theme (they always have a theme), Freewill Versus Determinism. I’ll concentrate on an article by the Editor, Grant Bartley, titled What Is Free Will? That’s partly because he and I have similar views on the topic, and partly because reading the article led me to ask the question at the head of this post (I should point out that he never mentions AI).
It’s a lengthy article, so I won’t be able to do it full justice, or even cover all the aspects he discusses. For instance, towards the end, he posits a personal ‘pet’ theory that there is a quantum aspect to the internal choice we make in our minds. He even provides a link to videos he’s made on this topic. I mention this in passing, and will make 2 comments: one, I also have ‘pet’ theories, so I can’t dismiss him out-of-hand; and two, I haven’t watched the videos, so I can’t comment on the theory’s plausibility.
He starts with an attempt to define what we mean by free will, and what it doesn’t mean. For instance, he differentiates between subconscious choices, which he calls ‘impulses’, and free will, which requires a conscious choice. He also differentiates choice from what he calls ‘making a decision’. I’ll quote him directly, as I still see this as involving free will if it’s based on making a ‘decision’ from alternative possibilities (as he explains).
…sometimes, our decision-making is a choice, that is, mentally deciding between alternative possibilities present to your awareness. But your mind doesn’t always explicitly present you with multiple choices from which to choose. Sometimes no distinct options are present in your awareness, and you must cause your next contents of your mind on the basis of the present content, through intuition and imagination. This is not choice so much as making a decision. (My emphasis)
This is worth a detour, because I see what he’s describing in this passage as the process I experience when writing fiction, which is ‘creating’. In this case, some of the content, if not all of it, is subconscious. When you write a story, it feels to you (but no one else) that the characters are real and the story you’re telling already exists. Nevertheless, I still think there’s an element of free will, because you make choices and judgements about what your imagination presents to your consciousness. As I said, this is a detour.
I don’t think this is what he’s referring to, and I’ll come back to it later when I introduce AI into the discussion. Meanwhile, I’ll discuss what I think is the nub of his thesis and my own perspective, which is the apparent dependency between consciousness and free will.
If conscious causation is not real, why did consciousness evolve at all? What would be the function of awareness if it can’t change behaviour? How could an impotent awareness evolve if it cannot change what the brain’s going to do to help the human body or its genes survive? (Italics in the original)
This is a point I’ve made myself, but Bartley goes further and argues “Since determinism can’t answer these questions, we can know determinism is false.” This is the opposite of Sabine Hossenfelder’s argument (a declaration, really) that ‘free will is an illusion [therefore false]’.
Note that Bartley coins the term, ‘conscious causation’, as a de facto synonym for free will. In fact, he says this explicitly in his conclusion: “If you say there is no free will, you’re basically saying there is no such thing as conscious causation.” I’d have to agree.
I made the point in another post that consciousness seems to act outside the causal chain of the Universe, and I feel that’s what Bartley is getting at. In fact, he explicitly cites Kant on this point, who (according to Bartley) “calls the will ‘transcendental’…” He talks at length about ‘soft (or weak) determinism’ and ‘strong determinism’, which I’ve also discussed. Now, the usual argument is that consciousness is ‘caused’ by neuron activity, therefore strong determinism is not broken.
To quote Hossenfelder: Your brain is running a calculation, and while it is going on you do not know the outcome of that calculation. So the impression of free will comes from our ‘awareness’ that we think about what we do, along with our inability to predict the result of what we are thinking. (Hossenfelder even uses the term ‘software’ to describe what does the ‘calculating’ in your brain.)
And this allows me to segue into AI, because what Hossenfelder describes is what we expect a computer to do. The thing is that while most scientists (and others) believe that AI will eventually become conscious (not sure what Hossenfelder thinks), I’ve never heard or seen anyone argue that AI will have free will. And this is why I don’t think the question at the head of this post has ever been asked. Many of the people who believe that AI will become conscious also don’t believe free will exists.
There is another component to this, which I’ve raised before and that’s imagination. I like to quote Raymond Tallis (neuroscientist and also a contributor to Philosophy Now).
Free agents, then, are free because they select between imagined possibilities, and use actualities to bring about one rather than another. (My emphasis)
Now, in another post, I argued that AI can’t have imagination in the way we experience it, yet I acknowledge that AI can look at numerous possibilities (as in a game of chess) and ‘choose’ what it ‘thinks’ is the optimum action. So, in this sense, AI would have ‘agency’, but that’s not free will, because it’s not ‘conscious causation’. On this point, I agree with Bartley that ‘making a decision’ does not constitute free will, if it’s what an AI does. So the difference is consciousness. To quote from that same post on the topic:
But the key here is imagination. It is because we can imagine a future that we attempt to bring it about - that's free will. And what we imagine is affected by our past, our emotions and our intellectual considerations, but that doesn't make it predetermined.
So, if imagination and consciousness are both faculties that separate us from AI, then I can’t see AI having free will, even though it will make ‘decisions’ based on data it receives (as inputs), and those decisions may not be predictable.
And this means that AI may not be deterministic either, in the ‘strong’ sense. One of the differences with humans, and other creatures that evolved consciousness, is that consciousness can apparently change the neural pathways of the brain, which I’d argue is the ‘strange loop’ posited by Douglas Hofstadter. (I have discussed free will and brain plasticity in another post.)
But there’s another way of looking at this, which differentiates humans from AI. Our decision-making is a combination of logical reasoning and emotion. AI only uses logic, and even then, it uses logic differently to us. It uses a database of samples and possibilities to come up with a ‘decision’ (or output), but without using logic to arrive at that decision the way we would. In other words, it doesn’t ‘understand’ the decision, as when it translates between languages, for example.
There is a subconscious and a conscious component to our decision-making. Arguably, the subconscious component is analogous to what a computer does with algorithm-based software (as per Hossenfelder’s description). But in AI there is no analogous conscious component making the choice or decision. In other words, there is no ‘conscious causation’, therefore no free will, as per Bartley’s definition.
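To make the contrast concrete, here’s a toy sketch in Python of what a purely algorithmic ‘decision’ of the kind Hossenfelder describes might look like: inputs go in, a fixed calculation runs, and the highest-scoring option comes out. The function names and data are mine, purely for illustration; no actual system works exactly like this.

# A toy illustration (mine, not Bartley's or Hossenfelder's) of a purely
# algorithmic 'decision': score each candidate action against the inputs
# and return the highest-scoring one. Nothing here corresponds to
# 'conscious causation' - it's just a calculation.

def evaluate(option, inputs):
    # Stand-in scoring function; in a real system this might be a neural
    # network or a game-tree search (as in chess).
    return sum(weight * inputs.get(feature, 0.0)
               for feature, weight in option["weights"].items())

def decide(options, inputs):
    # 'Making a decision' in the algorithmic sense: pick the option with
    # the highest score. Deterministic for given inputs, yet not easily
    # predictable to an observer who can't see the weights.
    return max(options, key=lambda option: evaluate(option, inputs))

# Example usage with made-up numbers
options = [
    {"name": "advance", "weights": {"material": 1.0, "safety": 0.2}},
    {"name": "defend",  "weights": {"material": 0.3, "safety": 1.0}},
]
inputs = {"material": 0.4, "safety": 0.9}
print(decide(options, inputs)["name"])  # prints 'defend'

The point of the sketch is what’s missing: there’s no awareness of the options and no imagined future being weighed, just arithmetic. That’s agency of a kind, but not conscious causation.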
Philosophy, at its best, challenges our long-held views, such that we examine them more deeply than we otherwise might.
Saturday 13 January 2024
How can we achieve world peace?
Two posts ago, I published my submission to Philosophy Now's Question of the Month from 2 months ago, What are the limits of knowledge?, which was published in Issue 159 (Dec 2023/Jan 2024). Naturally, they inform readers of the next Question of the Month, which is the title of this post. I'm almost certain they never publish 2 submissions by the same author in a row, so I'm publishing this answer now. It's related to my last post, obviously, and to one I wrote some time ago (Humanity's Achilles Heel).
There are many aspects to this question, not least whether one is an optimist or a pessimist. It’s well known that people underestimate the duration and cost of a project, even when it’s their profession, because people are optimists by default. Only realists are pessimistic, and I’m in the latter category, because I estimate the duration of projects professionally.
There are a number of factors that militate against world peace, the primary one being that humans are inherently tribal and are quick to form ingroup-outgroup mental partitions, exemplified by politics the world over. In this situation, rational thought and reasoned argument take a back seat to confirmation bias and emotive rhetoric. Add to this dynamic the historically observed and oft-repeated phenomenon that we follow charismatic, cult-propagating leaders, and you have a recipe for self-destruction on a national scale. This is the biggest obstacle to world peace. These leaders thrive on and cultivate division with its kindred spirits of hatred and demonisation of the ‘other’: the rationale for all of society’s ills becomes an outgroup identified by nationality, race, skin-colour, culture or religion.
Wealth, or the lack of it, is a factor as well. Inequality provides a motive and a rationale for conflict. It often goes hand-in-hand with oppression, but even when it doesn’t, the anger and resentment can be exploited and politicised by populist leaders, whose agenda is more focused on their own sense of deluded historical significance than actually helping the people they purportedly serve.
If you have conflict – and it doesn’t have to be military – then as long as you have leaders who refuse to compromise, you’ll never find peace. Only moderates on both sides can broker peace.
So, while I’m a pessimist or realist, I do see a ‘how’. If we only elect leaders who seek and find consensus, and remove leaders who sow division, there is a chance. The best leaders, be they corporate, political or on a sporting field, are the ones who bring out the best in others and are not just feeding their own egos. But all this is easier said than done, as we are witnessing in certain parts of the world right now. For as long as we elect leaders who are narcissistic and cult-like, we will continue to sow the seeds of self-destruction.
Saturday 6 January 2024
Bad things happen when good people do nothing
At present there are 2 conflicts holding the world’s attention – they are different, yet similar. Both involve invasions: one arguably justified as a response to a cowardly attack, the other based on the flimsiest of suppositions. But what they highlight is a double standard in the policies of Western governments in how they respond to the humanitarian crises that inevitably result from such incursions.
I’m talking about the war in Ukraine, following Russia’s invasion 2 years ago next month, and Israel’s war in Gaza, following Hamas’s attack on 7 Oct. 2023, killing around 1200 people and taking an estimated 240 hostages; a reported 120 still in captivity (at the time of writing).
According to the UN, 'Gaza faces the "highest ever recorded" levels of food insecurity', as reported on the Guardian website (21 Dec 2023). And it was reported on the news today (6 Jan 2024) that ‘Gaza is uninhabitable’. Discussions within the UN have been going on for over a month, yet they have been unable to break a stalemate over humanitarian aid, which requires a cessation of hostilities, despite the obvious existential need.
Noelia Monge, the head of emergencies for Action Against Hunger, said: “Everything we are doing is insufficient to meet the needs of 2 million people. It is difficult to find flour and rice, and people have to wait hours to access latrines and wash themselves. We are experiencing an emergency like I have never seen before.” (Source: Guardian)
I don’t think it’s an exaggeration to say that this is a humanitarian crisis of unprecedented proportions in modern times. It’s one thing for Israel to invade a country that harbours a mortal enemy, but it’s another to destroy all its infrastructure and medical facilities, and cut off supplies of food and essential services, without taking any responsibility. And this is the double standard we are witnessing. Everyone in the West condemns Putin’s attack on Ukrainian civilians, their homes and infrastructure, and calls them out as ‘war crimes’. No one has the courage to level the same accusation at Benjamin Netanyahu, despite the growing, unprecedented humanitarian crisis created by his implacable declaration to ‘destroy Hamas’. Has anyone pointed out that it’s impossible to destroy Hamas without destroying Gaza? Because that’s what he’s demonstrating.
The UN’s hunger monitoring system, the Integrated Food Security Phase Classification (IPC), issued a report saying the “most likely scenario” in Gaza is that by 7 February “the entire population in the Gaza Strip [about 2.2 million people] would be at ‘crisis or worse’ levels of hunger”. (Source: Guardian)
In America, you have the perverse situation where many in the Republican Party want to withdraw support from Volodymyr Zelensky while providing military aid to Israel. They are, in effect, supporting both invasions, though they wouldn’t couch it in those terms.
Israel has a special status in Western eyes, as a consequence of the unconscionable genocide that Jews suffered under Nazi Germany. It has led to an unspoken assumption that Israel has special privileges when it comes to defending its State. This current conflict is a test of the West’s conscience. How much moral bankruptcy are we willing to countenance before we say enough is enough, and that humanity needs to win?
Tuesday 2 January 2024
Modes of expression in writing fiction
As I point out in the post, this is a clumsy phrase, but I find it hard to come up with a better one. It’s actually something I wrote on Quora in response to a question. I’ve written on this before, but this post has the benefit of being much more succinct while possibly just as edifying.
I use the term ‘introspection’ where others use the word, ‘insight’. It’s the reader’s insight but the character’s introspection, which is why I prefer that term in this context.
The questioner is Clxudy Pills, obviously a pseudonym. I address her directly in the answer, partly because, unlike other questions I get, she has always acknowledged my answers.
Is "show, not tell" actually a good writing tip?
Maybe. No one said that to me when I was starting out, so it had no effect on my development. But I did read a book (more than one, actually) on ‘writing’ that delineated 5 categories of writing ‘style’. Style in this context means the mode of expression rather than an author’s individual style or ‘voice’. That’s clumsily stated but it will make sense when I tell you what they are.
- Dialogue is the most important because it’s virtually unique to fiction; quotes provided in non-fiction notwithstanding. Dialogue, more than any other style, tells you about the characters and their interactions with others.
- Introspection is what the character thinks, effectively. This only happens in novels and short stories, not screenplays or stage plays, soliloquies being the exception and certainly not the rule. But introspection is essential to prose, especially when the character is on their own.
- Exposition is the ‘telling’, not showing, part. When you’re starting out and learning your craft, you tend to write a lot of exposition – I know I did – which is why we get the admonition in your question. But the exposition can be helpful to you, if not the reader, as it allows you to explore the setting, the context of the story and its characters. Eventually, you’ll learn not to rely on it. Exposition is ‘smuggled’ into movies through dialogue and into novels through introspection.
- Description is more difficult than you think, because it’s the part of a novel that readers will skip over to get on with the story. Description can be more boring than exposition, yet it’s necessary. My approach is to always describe a scene from a character’s POV, and keep it minimalist. Readers automatically fill in the details, because we are visual creatures and we do it without thinking.
- Action is description in motion. Two rules: stay in one character’s POV and keep it linear – one thing happens after another. It has the dimension of time, though it’s subliminal.
So there: you get 5 topics for the price of one.
Sunday 31 December 2023
What are the limits of knowledge?
This was the Question of the Month in Philosophy Now (Issue 157, August/September 2023) and 11 answers were published in Issue 159, December 2023/January 2024, including mine, which I now post complete with minor edits.
Some people think that language determines the limits of knowledge, yet it merely describes what we know rather than limits it, and humans have always had the facility to create new language to depict new knowledge.
There are many types of knowledge, but I’m going to restrict myself to knowledge of the natural world. The ancient Greeks were possibly the first to intuit that the natural world had its own code. The Pythagoreans appreciated that musical pitch had a mathematical relationship, and that some geometrical figures contained numerical ratios. They made the giant conceptual leap that this could possibly be a key to understanding the Cosmos itself.
Jump forward two millennia, and their insight has borne more fruit than they could possibly have imagined. Richard Feynman made the following observation about mathematics in The Character of Physical Law: “Physicists cannot make a conversation in any other language. If you want to learn about nature, to appreciate nature, it is necessary to understand the language that she speaks in. She offers her information only in one form.”
Meanwhile, the twentieth-century logician Kurt Gödel proved that in any self-consistent, axiom-based, formal mathematical system, there will always be mathematical truths that can’t be proved true using that system. However, they potentially can be proved if one expands the axioms of the system. This implies that there is no limit to mathematical truths.
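Stated compactly (the notation is mine, not the essay’s, and it glosses over technical conditions), Gödel’s first incompleteness theorem says:

\text{If } T \text{ is consistent, recursively axiomatised and contains arithmetic, then there is a sentence } G_T \text{ with } T \nvdash G_T \text{ and } T \nvdash \neg G_T.

Adding G_T as a new axiom gives an expanded system T + G_T in which G_T is trivially provable, but that system then has its own unprovable sentence, and so on without end, which is the sense in which there is no limit to mathematical truths.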
Alonzo Church’s ‘paradox of unknowability’ states that “unless you know it all, there will always be truths that are by their very nature unknowable.” This applies to the physical universe itself. Specifically, since the vast majority of the Universe is unobservable, and possibly infinite in extent, most of it will remain forever unknowable. Given that the limits of knowledge are either infinite or unknowable in both the mathematical and physical worlds, those limits are like a horizon that retreats as we advance towards it.
Sunday 3 December 2023
Philosophy in practice
As someone recently pointed out, my posts on this blog invariably arise from things I have read (sometimes watched) and I’ve already written a post based on a column I read in the last issue of Philosophy Now (No 158, Oct/Nov 2023).
Well, I’ve since read a few more articles and they have prompted quite a lot of thinking. Firstly, there is an article called What Happened to Philosophy? by Dr Alexander Jeuk, who is, to quote, “an independent researcher writing on philosophy, economics, politics and the institutional structure of science.” He compares classical philosophy (in his own words, the ‘great philosophers’) with the way philosophy is practised today in academia – that place most of us don’t visit, and whose language we wouldn’t understand if we did.
I don’t want to dwell on it, but its relevance to this post is that he laments the specialisation of philosophy, which he blames (if I can use that word) on the specialisation of science. The specialisation of most things is not a surprise to anyone who works in a technical field (I work in engineering). I should point out that I’m not a technical person, so I’m a non-specialist who works in a specialist field. Maybe that puts me in a better position than most to address this. I have a curious mind that started young, and my curiosity shifted as I got older, which means I never really settled into one area of knowledge; and even if I had, I didn’t quite have the intellectual ability to become competent in it. And that’s why this blog is a bit eclectic.
In his conclusion, Jeuk suggests that ‘great philosophy’ should be looked for ‘in the classics, and perhaps encourage a re-emergence of great philosophical thought from outside academia.’ He mentions social media and the internet, which is relevant to this blog. I don’t claim to do ‘great philosophy’; I just attempt to disperse ideas and provoke thought. But I think that’s what philosophy represents to most people outside of academia. Academic philosophy has become lost in its obsession with language, whilst using language that most find abstruse, if not opaque.
Another article was titled Does a Just Society Require Just Citizens? by Jimmy Alfonso Licon, Assistant Teaching Professor in Philosophy at Arizona State University. I wouldn’t call the title misleading, but it doesn’t really describe the content of the essay, or even get to the gist of it, in my view. Licon introduces a term, ‘moral mediocrity’, which might have been a better title, if an enigmatic one, as it’s effectively what he discusses for the next not-quite 3 pages.
He makes the point that our moral behaviour stems from social norms – a point I’ve made myself – but he makes it more compellingly. Most of us do ‘moral’ acts because that’s what our peers do, and we are species-destined (my term, not his) to conform. This is what he calls moral mediocrity, because we don’t really think it through or deliberate on whether it’s right or wrong, though we might convince ourselves that we do. He makes the salient point that if we had lived when slavery was the norm, we would have been slave-owners (assuming the reader is white, affluent and male). Likewise, suffrage was once anathema to a lot of women, as well as men. This supports my view that morality changes, and what was once considered radical becomes conservative. And such changes are usually generational, as we are witnessing in the current age with marriage equality.
He coins another term when he says ‘we are the recipients of a moral inheritance’ (his italics). In other words, the moral norms we follow today, we’ve inherited from our forebears. Towards the end of his essay, he discusses Kant’s ideas on ‘duty’. I won’t go into that, but, if I understand Licon’s argument correctly, he’s saying that a ‘just society’ is one that has norms and laws that allow moral mediocrity, whereby its members don’t have to think about what’s right or wrong; they just follow the rules. This leads to his very last sentence: ‘And this is fundamentally the moral problem with moral mediocrity: it is wrongly motivated.’
I’ve written on this before, and, given the title as well as the content, I needed to think on what I consider leads to a ‘just society’. And I keep coming back to the essential need for trust. Societies don’t function without some level of trust, but neither do personal relationships, contractual arrangements or the raising of children.
And this leads to the third article in the same issue, Seeing Through Transparency, by Paul Doolan, who ‘teaches philosophy at Zurich International School and is the author of Collective Memory and the Dutch East Indies: Unremembering Decolonization (Amsterdam Univ Press, 2021)’.
In effect, he discusses the paradoxical nature of modern societies, whereby we insist on ‘transparency’ yet claim that privacy is sacrosanct – see the contradiction? Is this hypocrisy? And this relates directly to trust. Without transparency, be it corporate or governmental, we have trust issues. My experience is that when it comes to personal relationships, it’s a given, a social norm in fact, that a person reveals as much of their interior life as they want to, and it’s not ours to mine. An example of moral mediocrity perhaps. And yet, as Doolan points out, we give away so much on social media, where our online persona takes on a life of its own, which we cultivate (this blog not being an exception).
I think there does need to be transparency about decisions that affect our lives collectively, as opposed to secrets we all keep for the sake of our sanity. I have written dystopian fiction where people are surveilled to the point of monitoring all speech, and explored how it affects personal relationships. This already happens in some parts of the world. I’ve also explored a dystopian scenario where the surveillance is less obvious – every household has an android that monitors all activity. We might already have that with certain devices in our homes. Can you turn them off? Do you have a device that monitors everyone who comes to your door?
The thing is that we become habituated to their presence, and it becomes part of our societal structure. As I said earlier, social norms change and are largely generational. Now they incorporate AI as well, and it’s happening without a lot of oversight or consultation with users. I don’t want to foster paranoia, but the genie has already escaped and I’d suggest it’s a matter of how we use it rather than how we put it back in the bottle.
Leaving that aside, Doolan also asks if you would behave differently if you could be completely invisible, which, of course, has been explored in fiction. We all know that anonymity fosters bad behaviour – just look online. One of my tenets is that honesty starts with honesty to oneself; it determines how we behave towards others.
I also know that an extreme environment, like a prison camp, can change one’s moral compass. I’ve never experienced it, but my father did. It brings out the best and worst in people, and I’d contend that you wouldn’t know how you’d be affected if you haven’t experienced it. This is an environment that turns Licon’s question on its head: can you be just in an intrinsically unjust environment?