Paul P. Mealing


Wednesday, 24 January 2024

Can AI have free will?

This is a question I’ve never seen asked, let alone answered. I think there are good reasons for that, which I’ll come to later.
 
The latest issue of Philosophy Now (Issue 159, Dec 2023/Jan 2024), which I’ve already referred to in 2 previous posts, has as its theme (they always have a theme), Freewill Versus Determinism. I’ll concentrate on an article by the Editor, Grant Bartley, titled What Is Free Will? That’s partly because he and I have similar views on the topic, and partly because reading the article led me to ask the question at the head of this post (I should point out that he never mentions AI).
 
It’s a lengthy article, meaning I won’t be able to fully do it justice, or even cover all the aspects he discusses. For instance, towards the end, he posits a personal ‘pet’ theory that there is a quantum aspect to the internal choice we make in our minds. And he even provides a link to videos he’s made on this topic. I mention this in passing, and will make two comments: one, I also have ‘pet’ theories, so I can’t dismiss him out of hand; and two, I haven’t watched the videos, so I can’t comment on the theory’s plausibility.
 
He starts with an attempt to define what we mean by free will, and what it doesn’t mean. For instance, he differentiates between subconscious choices, which he calls ‘impulses’, and free will, which requires a conscious choice. He also distinguishes what he calls ‘making a decision’. I will quote him directly, as I still see this as involving free will, if it’s based on making a ‘decision’ from alternative possibilities (as he explains).
 
…sometimes, our decision-making is a choice, that is, mentally deciding between alternative possibilities present to your awareness. But your mind doesn’t always explicitly present you with multiple choices from which to choose. Sometimes no distinct options are present in your awareness, and you must cause your next contents of your mind on the basis of the present content, through intuition and imagination. This is not choice so much as making a decision. (My emphasis)
 
This is worth a detour, because I see what he’s describing in this passage as the process I experience when writing fiction, which is ‘creating’. In this case, some of the content, if not all of it, is subconscious. When you write a story, it feels to you (but no one else) that the characters are real and the story you’re telling already exists. Nevertheless, I still think there’s an element of free will, because you make choices and judgements about what your imagination presents to your consciousness. As I said, this is a detour.
 
I don’t think this is what he’s referring to, and I’ll come back to it later when I introduce AI into the discussion. Meanwhile, I’ll discuss what I think is the nub of his thesis and my own perspective, which is the apparent dependency between consciousness and free will.
 
If conscious causation is not real, why did consciousness evolve at all? What would be the function of awareness if it can’t change behaviour? How could an impotent awareness evolve if it cannot change what the brain’s going to do to help the human body or its genes survive?
(Italics in the original)
 
This is a point I’ve made myself, but Bartley goes further and argues “Since determinism can’t answer these questions, we can know determinism is false.” This is the opposite of Sabine Hossenfelder’s argument (really a declaration) that ‘free will is an illusion [therefore false]’.
 
Note that Bartley coins the term ‘conscious causation’ as a de facto synonym for free will. In fact, he says this explicitly in his conclusion: “If you say there is no free will, you’re basically saying there is no such thing as conscious causation.” I’d have to agree.
 
I made the point in another post that consciousness seems to act outside the causal chain of the Universe, and I feel that’s what Bartley is getting at. In fact, he explicitly cites Kant on this point, who (according to Bartley) “calls the will ‘transcendental’…” He talks at length about ‘soft (or weak) determinism’ and ‘strong determinism’, which I’ve also discussed. Now, the usual argument is that consciousness is ‘caused’ by neuron activity, and therefore strong determinism is not broken.
 
To quote Hossenfelder: Your brain is running a calculation, and while it is going on you do not know the outcome of that calculation. So the impression of free will comes from our ‘awareness’ that we think about what we do, along with our inability to predict the result of what we are thinking. (Hossenfelder even uses the term ‘software’ to describe what does the ‘calculating’ in your brain.)
 
And this allows me to segue into AI, because what Hossenfelder describes is what we expect a computer to do. The thing is that while most scientists (and others) believe that AI will eventually become conscious (not sure what Hossenfelder thinks), I’ve never heard or seen anyone argue that AI will have free will. And this is why I don’t think the question at the head of this post has ever been asked. Many of the people who believe that AI will become conscious also don’t believe free will exists.
 
There is another component to this, which I’ve raised before, and that’s imagination. I like to quote Raymond Tallis (a neuroscientist and also a contributor to Philosophy Now).
 
Free agents, then, are free because they select between imagined possibilities, and use actualities to bring about one rather than another.
(My emphasis)
 
Now, in another post, I argued that AI can’t have imagination in the way we experience it, yet I acknowledge that AI can look at numerous possibilities (like in a game of chess) and ‘choose’ what it ‘thinks’ is the optimum action. So, in this sense, AI would have ‘agency’, but that’s not free will, because it’s not ‘conscious causation’. And in this sense, I agree with Bartley that ‘making a decision’ does not constitute free will, if it’s what an AI does. So the difference is consciousness. To quote from that same post on this topic.
 
But the key here is imagination. It is because we can imagine a future that we attempt to bring it about - that's free will. And what we imagine is affected by our past, our emotions and our intellectual considerations, but that doesn't make it predetermined.
 
So, if imagination and consciousness are both faculties that separate us from AI, then I can’t see AI having free will, even though it will make ‘decisions’ based on data it receives (as inputs), and those decisions may not be predictable.
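
To make this concrete, here’s a rough sketch of the kind of ‘decision’ I mean. It’s purely illustrative: the moves and scores are invented, and a real system’s evaluation would be far too complex for us to predict, but the structure of the ‘choice’ is the same.

def score_move(move):
    # Hypothetical, hard-coded evaluation: a fixed number for each move.
    # A real chess engine would use a handcrafted or learned evaluation instead.
    table = {'e4': 0.30, 'd4': 0.28, 'Nf3': 0.25, 'c4': 0.20}
    return table.get(move, 0.0)

def choose_move(candidate_moves):
    # Return the highest-scoring candidate. Nothing here is 'aware' of the
    # alternatives it rejects; the 'decision' is just a maximum over numbers.
    return max(candidate_moves, key=score_move)

print(choose_move(['e4', 'd4', 'c4', 'Nf3']))  # -> 'e4'

It selects between alternatives, so it has ‘agency’ in the limited sense above, but there is no awareness anywhere in the process.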
 
And this means that AI may not be deterministic either, in the ‘strong’ sense. One of the differences between AI and humans (and other creatures that evolved consciousness) is that consciousness can apparently change the neural pathways of the brain, which I’d argue is the ‘strange loop’ posited by Douglas Hofstadter. (I have discussed free will and brain plasticity in another post.)
 
But there’s another way of looking at this, which differentiates humans from AI. Our decision-making is a combination of logical reasoning and emotion. AI only uses logic, and even then, it uses logic differently to us. It uses a database of samples and possibilities to come up with a ‘decision’ (or output), but without using logic to arrive at that decision the way we would. In other words, it doesn’t ‘understand’ the decision, like when it translates between languages, for example.
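
As a caricature of producing an output without understanding it, imagine a ‘translator’ that is nothing more than a lookup over stored samples. The phrase table below is invented for illustration, and real machine translation is statistical or neural rather than a lookup, but the point stands: nothing in the mapping grasps meaning.

phrase_table = {
    'good morning': 'bonjour',
    'thank you': 'merci',
    'free will': 'libre arbitre',
}

def translate(phrase):
    # Return whatever the stored mapping says, or echo the input unchanged.
    # No step in this function involves understanding either language.
    return phrase_table.get(phrase.lower(), phrase)

print(translate('Free will'))  # -> 'libre arbitre'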
 
There is a subconscious and a conscious component to our decision-making. Arguably, the subconscious component is analogous to what a computer does with algorithm-based software (as per Hossenfelder’s description). But in AI there is no analogous conscious component making the choice or decision. In other words, there is no ‘conscious causation’, and therefore no free will, as per Bartley’s definition.
 

4 comments:

Anonymous said...

Terrible reasoning. Here's another much more scientific take on the possibility there is no free will when you consider how space-time may work. Pay attention (what you wrote doesn't state a point): https://www.youtube.com/watch?v=wwSzpaTHyS8&t=24s&ab_channel=Kurzgesagt%E2%80%93InaNutshell

jjjldskfjalksdj said...

You don't read widely enough to know this question has been asked and answered thousands of times before you decided to make your erroneous assumption that this question is being originally expressed by you. https://www.youtube.com/watch?v=PN3uupR6zHE&ab_channel=TalkingToAI https://www.youtube.com/watch?v=Nefo1Mr6qoE&ab_channel=BigThink https://www.youtube.com/watch?v=GmlrEgLGozw&ab_channel=TheDiaryOfACEO https://youtube.com/shorts/a_b1yi-AQUs?si=z2UZKvKmjaZRppvv Expand your thinking, you know very little. And are pretty full of yourself even when your writing is very thin on plausibility.

Paul P. Mealing said...

Like the guy says, 'We don't know.'
I write philosophical arguments that are debatable - not set in stone. I don't expect everyone to agree with me, and obviously, many don't.

The best advocate I've come across for superdeterminism is Sabine Hossenfelder, and as I've said many times, I recommend her YouTube videos, even if I don't agree with her. And I've read both of her books, which I also recommend.

I specifically tackle her take on free will here.

And I discuss one of her books here.

Paul P. Mealing said...

That last comment was in response to the first post from 'Anonymous'.

In response to the second comment from jjjldskfjalksdj (are you the same person?), I say thanks for all the video references. I will check them out.

The second one, at least, is pretty good, and I'd agree with pretty much everything she says, which is not about 'giving AI free will' (as per the title), but about 'giving AI rights', where she provides compelling arguments that we shouldn't.

The first one is typical Chat-GPT3, but it's asking 'What is free will?' and not whether AI has free will. It's clever in providing a video, and we don't know whether it's been edited to make it look more convincing. I've yet to watch the rest.

And yes, you're right, I know very little. In fact, 'WE' know very little, and I've made that point many times.

I admit it's a contentious topic, which is why I write about it.