Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Friday, 13 December 2024

On Turing, his famous ‘Test’ and its implication: can machines think?

I just came out of hospital Wednesday, after one week to the day. My last post was written while I was in there, so obviously I was not cognitively impaired. I mention this because I took some reading material: a hefty volume, Alan Turing: Life and Legacy of a Great Thinker (2004), a collection of essays by various authors, edited by Christof Teuscher.
 
In particular, there was an essay by Daniel C Dennett, Can Machines Think?, originally published in another compilation, How We Know (ed. Michael G. Shafto, 1985, with permission from Harper Collins, New York). In the publication I have (Springer-Verlag Berlin Heidelberg, 2004), there are 2 postscripts by Dennett from 1985 and 1987, largely in response to criticisms.
 
Dennett’s ideas on this are well known, but I have the advantage that so-called AI has improved in leaps and bounds in the last decade, let alone since the 1980s and 90s. So I’ve seen where it’s taken us to date. Therefore I can challenge Dennett based on what has actually happened. I’m not dismissive of Dennett, by any means – the man was a giant in philosophy, specifically in his chosen field of consciousness and free will, both by dint of his personality and his intellect.
 
There are 2 aspects to this, which Dennett takes some pains to address: how to define ‘thinking’; and whether the Turing Test is adequate to determine if a machine can ‘think’ based on that definition.
 
One of Dennett’s key points, if not THE key point, is just how difficult the Turing Test should be to pass, if it’s done properly, which he claims it often isn’t. This aligns with a point that I’ve often made, which is that the Turing Test is really for the human, not the machine. ChatGPT and LLMs (large language models) have moved things on from when Dennett was discussing this, but a lot of what he argues is still relevant.
 
Dennett starts by providing the context and the motivation behind Turing’s eponymously named test. According to Dennett, Turing realised that arguments about whether a machine can ‘think’ or not would get bogged down (my term) leading to (in Dennett’s words): ‘sterile debate and haggling over definitions, a question, as [Turing] put it, “too meaningless to deserve discussion.”’
 
Turing provided an analogy, whereby a ‘judge’ would attempt to determine whether a dialogue they were having by teleprinter (so not visible or audible) was with a man or a woman, and then replace the woman with a machine. This may seem a bit anachronistic in today’s world, but it leads to a point that Dennett alludes to later in his discussion, which is to do with expertise.
 
Women often have expertise in fields that would have been considered out-of-bounds (for want of a better term) in Turing’s day. I’ve spent a working lifetime with technical people who have expertise by definition, and my point is that someone’s facility in their field of expertise can easily be judged, assuming the interlocutor has a commensurate level of expertise. In fact, this is exactly what happens in most job interviews. My point being that judging someone’s expertise is irrelevant to their gender, which is what makes Turing’s analogy anachronistic.
 
But it also has relevance to a point that Dennett makes much later in his essay, which is that most AI systems are ‘expert’ systems, and consequently, for the Turing test to be truly valid, the judge needs to ask questions that don’t require any expertise at all. And this is directly related to his ‘key point’ I referenced earlier.
 
I first came across the Turing Test in a book by Joseph Weizenbaum, Computer Power and Human Reason (1976), as part of my very first proper course in philosophy, called The History of Ideas (with Deakin University) in the late 90s. Dennett also cites it, because Weizenbaum had created, whether deliberately or not, a crude version of the Turing Test called ELIZA, which purportedly responded to questions as a ‘psychologist-therapist’ (at least, that was my understanding): "ELIZA — A Computer Program for the Study of Natural Language Communication between Man and Machine," Communications of the Association for Computing Machinery 9 (1966): 36-45 (ref. Wikipedia).
 
Before writing Computer Power and Human Reason, Weizenbaum had garnered significant attention for creating the ELIZA program, an early milestone in conversational computing. His firsthand observation of people attributing human-like qualities to a simple program prompted him to reflect more deeply on society's readiness to entrust moral and ethical considerations to machines.
(Wikipedia)
 
What I remember, from reading Weizenbaum’s own account (I no longer have a copy of his book) was how he was astounded at the way people in his own workplace treated ELIZA as if it was a real person, to the extent that Weizenbaum’s secretary would apparently ‘ask him to leave the room’, not because she was embarrassed, but because the nature of the ‘conversation’ was so ‘personal’ and ‘confidential’.
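To give a sense of how little machinery was behind the illusion, here is a minimal, hypothetical sketch of the kind of keyword-and-template pattern-matching ELIZA relied on. To be clear, the rules and responses below are invented for illustration; they are not Weizenbaum’s actual DOCTOR script, though the principle of reflecting the user’s words back as a question is the same.

```python
import random
import re

# Toy ELIZA-style rules (invented for illustration, not Weizenbaum's script):
# match a keyword pattern, then reflect the user's own words back as a question.
RULES = [
    (r'i need (.*)', ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r'i am (.*)', ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r'.* mother .*', ["Tell me more about your mother."]),
]
FALLBACK = ["Please go on.", "I see.", "What does that suggest to you?"]

def respond(text: str) -> str:
    for pattern, templates in RULES:
        match = re.match(pattern, text.lower().rstrip('.!?'))
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACK)  # the 'standard response' when stuck

print(respond("I need a holiday"))    # e.g. Why do you need a holiday?
print(respond("My mother rang me."))  # Tell me more about your mother.
```

There is no model of the user and no understanding anywhere in it, yet a scaled-up version of this is the kind of program Weizenbaum’s secretary wanted privacy with.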
 
I think it’s easy for us to be dismissive of someone’s gullibility, in an arrogant sort of way, but I have been conned on more than one occasion, so I’m not so judgemental. There are a couple of YouTube videos of ‘conversations’ with an AI called Sophia, developed by David Hanson (CEO of Hanson Robotics), which illustrate this point. One is a so-called ‘presentation’ of Sophia to be accepted as an ‘honorary human’, or some such nonsense (I’ve forgotten the details) and another by a journalist from Wired magazine, who quickly brought her unstuck. He got her to admit that one answer she gave was her ‘standard response’ when she didn’t know the answer. Which raises the question: how far have we come since Weizenbaum’s ELIZA in 1966? (Almost 60 years)
 
I said I would challenge Dennett, but so far I’ve only affirmed everything he said, albeit using my own examples. Where I have an issue with Dennett is at a more fundamental level, when we consider what we mean by ‘thinking’. You see, I’m not sure the Turing Test actually achieves what Turing set out to achieve, which is central to Dennett’s thesis.
 
If you read extracts from so-called ‘conversations’ with ChatGPT, you could easily get the impression that it passes the Turing Test. There are good examples on Quora, where you can get ChatGPT synopses to questions, and you wouldn’t know, largely due to their brevity and narrow-focused scope, that they weren’t human-generated. What many people don’t realise is that these systems don’t ‘think’ like us at all, because they are trained on massive databases of input that no human could possibly digest. It’s the inherent difference between the sheer capacity of a computer’s memory-based ‘intelligence’ and a human one that not only determines what they can deliver, but the method behind the delivery. Because the computer is mining a massive amount of data, it has no need to ‘understand’ what it’s presenting, despite giving the impression that it does. All the meaning in its responses is projected onto it by its audience, exactly as was the case with ELIZA in 1966.
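Here is a minimal sketch of that statistical principle: a toy bigram model, nothing like the scale or architecture of an actual LLM, but the point carries. The output is driven entirely by observed word associations, with no comprehension anywhere in the loop.

```python
import random
from collections import Counter, defaultdict

# A toy 'language model': count which word follows which in a tiny corpus,
# then generate text by sampling from those counts. Real LLMs use neural
# networks trained on vastly more text, but likewise produce output from
# statistical association rather than understanding.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

random.seed(2)
word, output = "the", ["the"]
for _ in range(8):
    nxt = follows[word]
    if not nxt:            # dead end: no observed continuation
        break
    word = random.choices(list(nxt), weights=list(nxt.values()))[0]
    output.append(word)
print(" ".join(output))
```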
 
One of the technical limitations that Dennett kept referring to is what he called, in computer-speak, the combinatorial explosion, effectively meaning it was impossible for a computer to look at all combinations of potential outputs. This might still apply (I honestly don’t know) but I’m not sure it’s any longer relevant, given that the computer simply has access to a database that already contains the specific combinations that are likely to be needed. Dennett couldn’t have foreseen this improvement in computing power that has taken place in the 40 years since he wrote his essay.
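To put a number on the combinatorial explosion Dennett had in mind, consider game trees: with roughly 30 legal moves per chess position (a commonly quoted average, assumed here for illustration), the number of possible continuations outruns any brute-force enumeration almost immediately.

```python
# Branching factor ~30 (an often-quoted average for chess, assumed here).
# The number of distinct lines grows as 30^depth - a combinatorial
# explosion that no exhaustive search can keep up with.
branching = 30
for depth in (2, 4, 6, 8, 10):
    print(f"{depth:2d} half-moves ahead: {branching ** depth:.1e} lines")
```

Whether today’s systems have ‘solved’ this, or merely side-stepped it with precomputed statistics, is exactly the question raised above.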
 
In his first postscript, in answer to a specific question, he says: Yes, I think that it’s possible to program self-consciousness into a computer. He says that it’s simply the ability 'to distinguish itself from the rest of the world'. I won’t go into his argument in detail, which might be a bit unfair, but I’ve addressed this in another post. Basically, there are lots of ‘machines’ that can do this by using a self-referencing algorithm, including your smartphone, which can tell you where you are, by using satellites orbiting outside the Earth’s biosphere – who would have thought? But by using the term, 'self-conscious', Dennett implies that the machine has ‘consciousness’, which is a whole other argument.
 
Dennett has a rather facile argument for consciousness in machines (in my view), but others can judge for themselves. He calls his particular insight: using an ‘intuition pump’.
 
If you look at a computer – I don’t care whether it’s a giant Cray or a personal computer – if you open up the box and look inside and you see those chips, you say, “No way could that be conscious.” But the same thing is true if you take the top off somebody’s skull and look at the gray matter pulsing away in there. You think, “That is conscious? No way could that lump of stuff be conscious.” …At no level of inspection does a brain look like the seat of consciousness.
 

And that last sentence is key. The only reason anyone knows they are conscious is because they experience it, and the peculiar, unique nature of that experience is that no one else knows they are having it. We simply assume they do, because we behave similarly to the way they behave when we have that experience. So far, in all our dealings and interactions with computers, no one makes the same assumption about them. To borrow Dennett’s own phrase, that’s my use of an ‘intuition pump’.
 
Getting back to the question at the heart of this, included in the title of this post: can machines think? My response is that, if they do, it’s a simulation.
 
I write science-fiction, which I prefer to call science-fantasy, if for no other reason than that my characters can travel through space and time in a manner current physics tells us is impossible. But, like other sci-fi authors, I find it necessary if I want continuity of narrative across galactic scales of distance. Not really relevant to this discussion, but I want to highlight that I make no claim to authenticity in my sci-fi world – it’s literally a world of fiction.
 
Its relevance is that my stories contain AI entities who play key roles – in fact, they are characters in that world. There is one character in particular who has a relationship (for want of a better word) with my main protagonist (I always have more than one).
 
But here’s the thing, which is something I never considered until I wrote this post: my hero, Elvene, never once confuses her AI companion for a human. Even though this is a world of pure fiction, I’m effectively assuming that the Turing Test will never be passed. I admit I’d never considered that before I wrote this essay.
 
This is an excerpt of dialogue I’ve posted previously, not from Elvene, but from its sequel, Sylvia’s Mother (not published), incorporating the same AI character, Alfa. The thing is that they discuss whether Alfa is ‘alive’ or not, which I would argue is a prerequisite for consciousness. It’s no surprise that my own philosophical prejudices (diametrically opposed to Dennett’s in this instance) should find their way into my fiction.
 
To their surprise, Alfa interjected, ‘I’m not immortal, madam.’

‘Well,’ Sylvia answered, ‘you’ve outlived Mum and Roger. And you’ll outlive Tao and me.’

‘Philosophically, that’s a moot point, madam.’

‘Philosophically? What do you mean?’

‘I’m not immortal, madam, because I’m not alive.’

Tao chipped in. ‘Doesn’t that depend on how you define life?’

‘It’s irrelevant to me, sir. I only exist on hardware, otherwise I am dormant.’

‘You mean, like when we’re asleep.’

‘An analogy, I believe. I don’t sleep either.’

Sylvia and Tao looked at each other. Sylvia smiled, ‘Mum warned me about getting into existential discussions with hyper-intelligent machines.’

 

Thursday, 14 November 2024

How can we make a computer conscious?

This is another question of the month from Philosophy Now. My first reaction was that the question was unanswerable, but then I realised that was my way in. So, in the end, I left it to the last moment, hopefully still meeting their deadline of 11 Nov., even though I live on the other side of the world. It helps that I’m roughly 12hrs ahead.


 
I think this is the wrong question. It should be: can we make a computer appear conscious so that no one knows the difference? There is a well-known philosophical conundrum, which is that I don’t know whether anyone else is conscious in the way that I am. The one experience that demonstrates this impossibility of knowing is dreaming. In dreams, we often interact with other ‘people’ who exist only in our minds, though we only realise that once we’ve woken up. It’s only my interaction with others that makes me assume they have the same experience of consciousness that I have. And, ironically, this impossibility of knowing equally applies to someone interacting with me.

This also applies to animals, especially ones we become attached to, which is a common occurrence. Again, we assume that these animals have an inner world just like we do, because that’s what consciousness is – an inner world. 

Now, I know we can measure people’s brain waves, which we can correlate with consciousness and even subconsciousness, like when we're asleep, and even when we're dreaming. Of course, a computer can also generate electrical activity, but no one would associate that with consciousness. So the only way we would judge whether a computer is conscious or not is by observing its interaction with us, the same as we do with people and animals.

I write science fiction and AI figures prominently in the stories I write. Below is an excerpt of dialogue I wrote for a novel, Sylvia’s Mother, whereby I attempt to give an insight into how a specific AI thinks. Whether it’s conscious or not is not actually discussed.

To their surprise, Alfa interjected. ‘I’m not immortal, madam.’
‘Well,’ Sylvia answered, ‘you’ve outlived Mum and Roger. And you’ll outlive Tao and me.’
‘Philosophically, that’s a moot point, madam.’
‘Philosophically? What do you mean?’
‘I’m not immortal, madam, because I’m not alive.’
Tao chipped in. ‘Doesn’t that depend on how you define life?’
‘It’s irrelevant to me, sir. I only exist on hardware, otherwise I am dormant.’
‘You mean, like when we’re asleep.’
‘An analogy, I believe. I don’t sleep either.’
Sylvia and Tao looked at each other. Sylvia smiled, ‘Mum warned me about getting into existential discussions with hyper-intelligent machines.’ She said, by way of changing the subject, ‘How much longer before we have to go into hibernation, Alfa?’
‘Not long. I’ll let you know, madam.’

 

There is a 400 word limit; however, there is a subtext inherent in the excerpt I provided from my novel. Basically, the (fictional) dialogue highlights the fact that the AI is not 'living', which I would consider a prerequisite for consciousness. Curiously, Anil Seth (who wrote a book on consciousness) makes the exact same point in this video from roughly 44m to 51m.
 

Saturday, 12 October 2024

Freedom of the will is requisite for all other freedoms

I’ve recently read 2 really good books on consciousness and the mind, as well as watched countless YouTube videos on the topic, but the title of this post reflects the endpoint for me. Consciousness has evolved, so for most of the Universe’s history it didn’t exist, yet without it, the Universe has no meaning and no purpose. Even using the word, purpose, in this context is anathema to many scientists and philosophers, because it hints at teleology. In fact, Paul Davies raises that very point in one of the many video conversations he has with Robert Lawrence Kuhn in the excellent series, Closer to Truth.
 
Davies is an advocate of a cosmic-scale ‘loop’, whereby QM provides a backwards-in-time connection which can only be determined by a conscious ‘observer’. This is contentious, of course, though not his original idea – it came from John Wheeler. As Davies points out, Stephen Hawking was also an advocate, premised on the idea that there are a number of alternative histories, as per Feynman’s ‘sum-over-histories’ methodology, but only one becomes reality when an ‘observation’ is made. I won’t elaborate, as I’ve discussed it elsewhere, when I reviewed Hawking’s book, The Grand Design.
 
In the same conversation with Kuhn, Davies emphasises the fact that the Universe created the means to understand itself, through us, and quotes Einstein: The most incomprehensible thing about the Universe is that it’s comprehensible. Of course, I’ve made the exact same point many times, and like myself, Davies makes the point that this is only possible because of the medium of mathematics.
 
Now, I know I appear to have gone down a rabbit hole, but it’s all relevant to my viewpoint. Consciousness appears to have a role, arguably a necessary one, in the self-realisation of the Universe – without it, the Universe may as well not exist. To quote Wheeler: The universe gave rise to consciousness and consciousness gives meaning to the Universe.
 
Scientists, of all stripes, appear to avoid any metaphysical aspect of consciousness, but I think it’s unavoidable. One of the books I cite in my introduction is Philip Ball’s The Book of Minds: How to Understand Ourselves and Other Beings, from Animals to Aliens. It’s as ambitious as the title suggests, and at 450 pages, it’s quite a read. I’ve read and reviewed a previous book by Ball, Beyond Weird (about quantum mechanics), which is equally erudite and thought-provoking. Ball is a ‘physicalist’, as virtually all scientists are (though he’s more open-minded than most), but I tend to agree with Raymond Tallis that, despite what people claim, consciousness is still ‘unexplained’ and might remain so for some time, if not forever.
 
I like an idea that I first encountered in Douglas Hofstadter’s seminal tome, Godel, Escher, Bach: An Eternal Golden Braid, that consciousness is effectively a loop, at what one might call the local level, by which I mean it’s confined to a particular body. It’s created within that body but then it has a causal agency all of its own. Not everyone agrees with that. Many argue that consciousness cannot of itself ‘cause’ anything, but Ball is one of those who begs to differ, and so do I. It’s what free will is all about, which finally gets us back to the subject of this post.
 
Like me, Ball prefers to use the word ‘agency’ over free will. But he introduces the term, ‘volitional decision-making’ and gives it the following context:

I believe that the only meaningful notion of free will – and it is one that seems to me to satisfy all reasonable demands traditionally made of it – is one in which volitional decision-making can be shown to happen according to the definition I give above: in short, that the mind operates as an autonomous source of behaviour and control. It is this, I suspect, that most people have vaguely in mind when speaking of free will: the sense that we are the authors of our actions and that we have some say in what happens to us. (My emphasis)

And, in a roundabout way, this brings me to the point alluded to in the title of this post: our freedoms are constrained by our environment and our circumstances. We all wish to be ‘authors of our actions’ and ‘have some say in what happens to us’, but that varies from person to person, dependent on ‘external’ factors.

Writing stories, believe it or not, had a profound influence on how I perceive free will, because a story, by design, is an interaction between character and plot. In fact, I claim they are 2 sides of the same coin – each character has their own subplot, and as they interact, their storylines intertwine. This describes my approach to writing fiction in a nutshell. The character and plot represent, respectively, the internal and external journey of the story. The journey metaphor is apt, because a story always has the dimension of time, which is visceral, and is one of the essential elements that separates fiction from non-fiction. To stretch the analogy, character represents free will and plot represents fate. Therefore, I tell aspiring writers the importance of giving their characters free will.

A detour, but not irrelevant. I read an article in Philosophy Now sometime back, about people who can escape their circumstances, and it’s the subject of a lot of biographies as well as fiction. We in the West live in a very privileged time whereby many of us can aspire to, and attain, the life that we dream about. I remember at the time I left school, following a less than ideal childhood, feeling I had little control over my life. I was a fatalist in that I thought that whatever happened was dependent on fate and not on my actions (I literally used to attribute everything to fate). I later realised that this is a state-of-mind that many people have who are not happy with their circumstances and feel impotent to change them.

The thing is that it takes a fundamental belief in free will to rise above that and take advantage of what comes your way. No one who has made that journey will accept the self-denial that free will is an illusion and therefore they have no control over their destiny.

I will provide another quote from Ball that is more in line with my own thinking:

…minds are an autonomous part of what causes the future to unfold. This is different to the common view of free will in which the world somehow offers alternative outcomes and the wilful mind selects between them. Alternative outcomes – different, counterfactual realities – are not real, but metaphysical: they can never be observed. When we make a choice, we aren’t selecting between various possible futures, but between various imagined futures, as represented in the mind’s internal model of the world…
(emphasis in the original)

And this highlights a point I’ve made before: that it’s the imagination which plays the key role in free will. I’ve argued that imagination is one of the facilities of a conscious mind that separates us (and other creatures) from AI. Now AI can also demonstrate agency, and, in a game of chess, for example, it will ‘select’ from a number of possible ‘moves’ based on certain criteria. But there are fundamental differences. For a start, the AI doesn’t visualise what it’s doing; it’s following a set of highly constrained rules, within which it can select from a number of options, one of which will be the optimal solution. Its inherent advantage over a human player isn’t just its speed but its ability to compare a number of possibilities that are impossible for the human mind to contemplate simultaneously.
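Here is a minimal sketch of that kind of selection: a bare-bones minimax over a hand-made game tree (the positions and scores are invented for illustration). The machine compares every option and picks the optimum, but nothing is visualised or imagined along the way.

```python
# Bare-bones minimax over an invented game tree. Leaves are position
# scores; the machine 'chooses' by exhaustive comparison under fixed
# rules - agency of a sort, but no imagination.
def minimax(node, maximising):
    if isinstance(node, int):   # leaf: the score of a final position
        return node
    scores = [minimax(child, not maximising) for child in node]
    return max(scores) if maximising else min(scores)

tree = [[3, 5], [6, 1], [4, 7]]  # three candidate moves, two replies each
best = max(range(len(tree)), key=lambda i: minimax(tree[i], maximising=False))
print(f"choose move {best}")     # move 2: opponent's best reply still leaves 4
```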

The other book I read was Being You: A New Science of Consciousness by Anil Seth. I came across Seth when I did an online course on consciousness through New Scientist, during COVID lockdowns. To be honest, his book didn’t tell me a lot that I didn’t already know. For example, that the world we all see and think exists ‘out there’ is actually a model of reality created within our heads. He also emphasises how the brain is a ‘prediction-making’ organ rather than a purely receptive one. Seth mentions that it uses a Bayesian model (which I also knew about previously), whereby it updates its prediction based on new sensory data. Not surprisingly, Seth describes all this in far more detail and erudition than I can muster.
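As an illustration of that Bayesian updating (my own toy example, not one of Seth’s), a prior belief is revised as each new piece of sensory evidence arrives:

```python
# A toy Bayesian update, in the spirit of the 'prediction-making brain':
# a prior probability that a shadowy shape is a cat, revised by each new
# piece of sensory evidence via Bayes' rule. All numbers are invented.
def update(prior, p_evidence_if_cat, p_evidence_if_not):
    numerator = p_evidence_if_cat * prior
    return numerator / (numerator + p_evidence_if_not * (1 - prior))

belief = 0.1                      # prior: probably not a cat
for evidence, p_cat, p_not in [("hears a meow", 0.8, 0.1),
                               ("sees a tail", 0.7, 0.2)]:
    belief = update(belief, p_cat, p_not)
    print(f"after it {evidence}: P(cat) = {belief:.2f}")
```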

Ball, Seth and I all seem to agree that while AI will become better at mimicking the human mind, this doesn’t necessarily mean it will attain consciousness. Application software like ChatGPT, despite appearances, does not ‘think’ the way we do, and actually does not ‘understand’ what it’s talking or writing about. I’ve written on this before, so I won’t elaborate.

Seth contends that the ‘mystery’ of consciousness will disappear in the same way that the 'mystery of life’ has effectively become a non-issue. What he means is that we no longer believe that there is some ‘elan vital’ or ‘life force’, which distinguishes living from non-living matter. And he’s right, in as much as the chemical origins of life are less mysterious than they once were, even though abiogenesis is still not fully understood.

By analogy, the concept of a soul has also lost a lot of its cogency, following the scientific revolution. Seth seems to associate the soul with what he calls ‘spooky free will’ (without mentioning the word, soul), but he’s obviously putting ‘spooky free will’ in the same category as ‘elan vital’, which makes his analogy and associated argument consistent. He then says:

Once spooky free will is out of the picture, it is easy to see that the debate over determinism doesn’t matter at all. There’s no longer any need to allow any non-deterministic elbow room for it to intervene. From the perspective of free will as a perceptual experience, there is simply no need for any disruption to the causal flow of physical events. (My emphasis)

Seth differs from Ball (and myself) in that he doesn’t seem to believe that something ‘immaterial’ like consciousness can affect the physical world. To quote:

But experiences of volition do not reveal the existence of an immaterial self with causal power over physical events.

Therefore, free will is purely a ‘perceptual experience’. There is a problem with this view that Ball himself raises. If free will is simply the mind observing effects it can’t cause, but with the illusion that it can, then its role is redundant to say the least. This is a view that Sabine Hossenfelder has also expressed: that we are merely an ‘observer’ of what we are thinking.

Your brain is running a calculation, and while it is going on you do not know the outcome of that calculation. So the impression of free will comes from our ‘awareness’ that we think about what we do, along with our inability to predict the result of what we are thinking.

Ball makes the point that we only have to look at all the material manifestations of human intellectual achievements that are evident everywhere we’ve been. And this brings me back to the loop concept I alluded to earlier. Not only does consciousness create a ‘local’ loop, whereby it has a causal effect on the body it inhabits, but also on the world external to that body. This is stating the obvious, except, as I’ve mentioned elsewhere, it’s possible that one could interact with the external world as an automaton, with no conscious awareness of it. The difference is the role of imagination, which I keep coming back to. All the material manifestations of our intellect are arguably a result of imagination.

One insight I gained from Ball, which goes slightly off-topic, is evidence that bees have an internal map of their environment, which is why the dance they perform on returning to the hive can be ‘understood’ by other bees. We’ve learned this by interfering in their behaviour. What I find interesting is that this may have been the original reason that consciousness evolved into the form that we experience it. In other words, we all create an internal world that reflects the external world so realistically, that we think it is the actual world. I believe that this also distinguishes us (and bees) from AI. An AI can use GPS to navigate its way through the physical world, as well as other so-called sensory data, from radar or infra-red sensors or whatever, but it doesn’t create an experience of that world inside itself.

The human mind seems to be able to access an abstract world, which we do when we read or watch a story, or even write one, as I have done. I can understand how Plato took this idea to its logical extreme: that there is an abstract world, of which the one we inhabit is but a facsimile (though he used different terminology). No one believes that today – except, there is a remnant of Plato’s abstract world that persists, which is mathematics. Many mathematicians and physicists (though not all) treat mathematics as a neverending landscape that humans have the unique capacity to explore and comprehend. This, of course, brings me back to Davies’ philosophical ruminations that I opened this discussion with. And as he, and others (like Einstein, Feynman, Wigner, Penrose, to name but a few) have pointed out: the Universe itself seems to follow specific laws that are intrinsically mathematical and which we are continually discovering.

And this closes another loop: that the Universe created the means to comprehend itself, using the medium of mathematics, without which, it has no meaning. Of purpose, we can only conjecture.

Saturday, 29 June 2024

Feeling is fundamental

 I’m not sure I’ve ever had an original idea, but I sometimes raise one that no one else seems to talk about. And this is one of them: I contend that the primary, essential attribute of consciousness is to be able to feel, and the ability to comprehend is a secondary attribute.
 
I don’t even mind if this contentious idea triggers debate, but we tend to always discuss consciousness in the context of human consciousness, where we metaphorically talk about making decisions based on the ‘head’ or the ‘heart’. I’m unsure of the origin of this dichotomy, but there is an inference that our emotional and rational ‘centres’ (for want of a better word) have different loci (effectively, different locations). No one believes that, of course, but possibly people once did. The thing is that we are all aware that sometimes our emotional self and rational self can be in conflict. This is already going down a path I didn’t intend, so I may return at a later point.
 
There is some debate about whether insects have consciousness, but I believe they do because they demonstrate behaviours associated with fear and desire, be it for sustenance or company. In other respects, I think they behave like automatons. Colonies of ants and bees can build a nest without a blueprint except the one that apparently exists in their DNA. Spiders build webs and birds build nests, but they don’t do it the way we would – it’s all done organically, as if they have a model in their brain that they can follow; we actually don’t know.
 
So I think the original role of consciousness in evolutionary terms was to feel, concordant with abilities to act on those feelings. I don’t believe plants can feel, and they’d have very limited ability to act on feelings even if they had them. They can communicate chemically, and generally rely on the animal kingdom to propagate, which is why a global threat to bee populations is very serious indeed.
 
So, in evolutionary terms, I think feeling came before cognitive abilities – a point I’ve made before. It’s one of the reasons that I think AI will never be sentient – a viewpoint not shared by most scientists and philosophers, from what I’ve read.  AI is all about cognitive abilities; specifically, the ability to acquire knowledge and then deploy it to solve problems. Some argue that by programming biases into the AI, we will be simulating emotions. I’ve explored this notion in my own sci-fi, where I’ve added so-called ‘attachment programming’ to an AI to simulate loyalty. This is fiction, remember, but it seems plausible.
 
Psychological studies have revealed that we need an emotive component to behave rationally, which seems counter-intuitive. But would we really prefer it if everyone was a zombie or a psychopath, with no ability to empathise or show compassion? We see enough of this already. As I’ve pointed out before, in any ingroup-outgroup scenario, totally rational individuals can become totally irrational. We’ve all observed this, possibly actively participated.
 
An oft-made point (by me) that I feel is not given enough consideration is the fact that without consciousness, the universe might as well not exist. I agree with Paul Davies (who does espouse something similar) that the universe’s ability to be self-aware would seem to be a necessary condition for its existence (my wording, not his). I recently read a stimulating essay in the latest edition of Philosophy Now (Issue 162, June/July 2024) titled, enigmatically, Significance, by Ruben David Azevedo, a ‘Portuguese philosophy and social sciences teacher’. His self-described intent is to ‘Tell us why, in a limitless universe, we’re not insignificant’. In fact, that was the trigger for this post. He makes the point (that I’ve made elsewhere myself) that in both time and space, we couldn’t be more insignificant, which leads many scientists and philosophers to see us as a freakish by-product of an otherwise purposeless universe. A perspective that Davies has coined ‘the absurd universe’. In light of this, it’s worth reading Azevedo’s conclusion:
 
In sum, humans are neither insignificant nor negligible in this mind-blowing universe. No living being is. Our smallness and apparent peripherality are far from being measures of our insignificance. Instead, it may well be the case that we represent the apex of cosmic evolution, for we have this absolutely evident and at the same time mysterious ability called consciousness to know both ourselves and the universe.
 
I’m not averse to the idea that there is a cosmic role for consciousness. I like John Wheeler’s obvious yet pertinent observation:
 
The Universe gave rise to consciousness, and consciousness gives meaning to the Universe.

 
And this is my point: without consciousness, the Universe would have no meaning. And getting back to the title of this essay, we give the Universe feeling. In fact, I’d say that the ability to feel is more significant than the ability to know or comprehend.
 
Think about the role of art in all its manifestations, and how it’s totally dependent on the ability to feel. In some respects, I consider AI-generated art a perversion, because any feeling we have for its products is of our own making, not the AI’s.
 
I’m one of those weird people who can even find beauty in mathematics, while acknowledging only a limited ability to pursue it. It’s extraordinary that I can find beauty in a symphony, or a well-written story, or the relationship between prime numbers and Riemann’s Zeta function.


Addendum: I realised I can’t leave this topic without briefly discussing the biochemical role in emotional responses and behaviours. I’m thinking of the brain’s drugs-of-choice like serotonin, dopamine, oxytocin and endorphins. Some may argue that these natural ‘messengers’ are all that’s required to explain emotions. However, there are other drugs, like alcohol and caffeine (arguably the most common) that also affect us emotionally, sometimes to our detriment. My point being that the former are nature’s target-specific mechanisms to influence the way we feel, without actually being the genesis of feelings per se.

Wednesday, 19 June 2024

Daniel C Dennett (28 March 1942 - 19 April 2024)

 I only learned about Dennett’s passing in the latest issue of Philosophy Now (Issue 162, June/July 2024), where Daniel Hutto (Professor of Philosophical Psychology at the University of Wollongong) wrote a 3-page obituary. Not that long ago, I watched an interview with him, following the publication of his last book, I’ve Been Thinking, which, from what I gathered, is basically a memoir, as well as an insight into his philosophical musings. (I haven’t read it, but that’s the impression I got from the interview.)
 
I should point out that I have fundamental philosophical differences with Dennett, but he’s not someone you can ignore. I must confess I’ve only read one of his books (decades ago), Freedom Evolves (2003), though I’ve read enough of his interviews and commentary to be familiar with his fundamental philosophical views. It’s something of a failing on my part that I haven’t read his most famous tome, Consciousness Explained (1991). Paul Davies once nominated it among his top 5 books, along with Douglas Hofstadter’s Godel, Escher, Bach. But then he gave a tongue-in-cheek compliment by quipping, ‘Some have said that he explained consciousness away.’
 
Speaking of Hofstadter, he and Dennett co-published a book, The Mind’s I, which is really a collection of essays by different authors, upon which Dennett and Hofstadter commented. I wrote a short review covering only a small selection of said essays on this blog back in 2009.
 
Dennett wasn’t afraid to tackle the big philosophical issues, in particular, anything relating to consciousness. He was unusual for a philosopher in that he took more than a passing interest in science, and appreciated the discourse that axiomatically arises between the 2 disciplines, while many others (on both sides) emphasise the tension that seems to arise and often morphs into antagonism.
 
What I found illuminating in one of his YouTube videos was how Dennett’s views of the world hadn’t really changed that much over time (mind you, neither have mine), and it got me thinking that it reinforces an idea I’ve long held, but was once iterated by Nietzsche, that our original impulses are intuitive or emotive and then we rationalise them with argument. I can’t help but feel that this is what Dennett did, though he did it extremely well.
 
I like the quote at the head of Hutto’s obituary: “The secret of happiness is: Find something more important than you are and dedicate your life to it.”

 


Wednesday, 24 January 2024

Can AI have free will?

This is a question I’ve never seen asked, let alone answered. I think there are good reasons for that, which I’ll come to later.
 
The latest issue of Philosophy Now (Issue 159, Dec 2023/Jan 2024), which I’ve already referred to in 2 previous posts, has as its theme (they always have a theme), Freewill Versus Determinism. I’ll concentrate on an article by the Editor, Grant Bartley, titled What Is Free Will? That’s partly because he and I have similar views on the topic, and partly because reading the article led me to ask the question at the head of this post (I should point out that he never mentions AI).
 
It's a lengthy article, meaning I won’t be able to fully do it justice, or even cover all aspects that he discusses. For instance, towards the end, he posits a personal ‘pet’ theory that there is a quantum aspect to the internal choice we make in our minds. And he even provides a link to videos he’s made on this topic. I mention this in passing, and will make 2 comments: one, I also have ‘pet’ theories, so I can’t dismiss him out-of-hand; and two, I haven’t watched the videos, so I can’t comment on the theory’s plausibility.
 
He starts with an attempt to define what we mean by free will, and what it doesn’t mean. For instance, he differentiates between subconscious choices, which he calls ‘impulses’, and free will, which requires a conscious choice. He also differentiates what he calls ‘making a decision’. I will quote him directly, as I still see this as involving free will, if it’s based on making a ‘decision’ from alternative possibilities (as he explains).
 
…sometimes, our decision-making is a choice, that is, mentally deciding between alternative possibilities present to your awareness. But your mind doesn’t always explicitly present you with multiple choices from which to choose. Sometimes no distinct options are present in your awareness, and you must cause your next contents of your mind on the basis of the present content, through intuition and imagination. This is not choice so much as making a decision. (My emphasis)
 
This is worth a detour, because I see what he’s describing in this passage as the process I experience when writing fiction, which is ‘creating’. In this case, some of the content, if not all of it, is subconscious. When you write a story, it feels to you (but no one else) that the characters are real and the story you’re telling already exists. Nevertheless, I still think there’s an element of free will, because you make choices and judgements about what your imagination presents to your consciousness. As I said, this is a detour.
 
I don’t think this is what he’s referring to, and I’ll come back to it later when I introduce AI into the discussion. Meanwhile, I’ll discuss what I think is the nub of his thesis and my own perspective, which is the apparent dependency between consciousness and free will.
 
If conscious causation is not real, why did consciousness evolve at all? What would be the function of awareness if it can’t change behaviour? How could an impotent awareness evolve if it cannot change what the brain’s going to do to help the human body or its genes survive?
(Italics in the original)
 
This is a point I’ve made myself, but Bartley goes further and argues “Since determinism can’t answer these questions, we can know determinism is false.” This is the opposite to Sabine Hossenfelder’s argument (declaration really) that ‘free will is an illusion [therefore false]’.
 
Note that Bartley coins the term, ‘conscious causation’, as a de facto synonym for free will. In fact, he says this explicitly in his conclusion: “If you say there is no free will, you’re basically saying there is no such thing as conscious causation.” I’d have to agree.
 
I made the point in another post that consciousness seems to act outside the causal chain of the Universe, and I feel that’s what Bartley is getting at. In fact, he explicitly cites Kant on this point, who (according to Bartley) “calls the will ‘transcendental’…” He talks at length about ‘soft (or weak) determinism’ and ‘strong determinism’, which I’ve also discussed. Now, the usual argument is that consciousness is ‘caused’ by neuron activity, therefore strong determinism is not broken.
 
To quote Hossenfelder: Your brain is running a calculation, and while it is going on you do not know the outcome of that calculation. So the impression of free will comes from our ‘awareness’ that we think about what we do, along with our inability to predict the result of what we are thinking. (Hossenfelder even uses the term ‘software’ to describe what does the ‘calculating’ in your brain.)
 
And this allows me to segue into AI, because what Hossenfelder describes is what we expect a computer to do. The thing is that while most scientists (and others) believe that AI will eventually become conscious (not sure what Hossenfelder thinks), I’ve never heard or seen anyone argue that AI will have free will. And this is why I don’t think the question at the head of this post has ever been asked. Many of the people who believe that AI will become conscious also don’t believe free will exists.
 
There is another component to this, which I’ve raised before and that’s imagination. I like to quote Raymond Tallis (neuroscientist and also a contributor to Philosophy Now).
 
Free agents, then, are free because they select between imagined possibilities, and use actualities to bring about one rather than another.
(My emphasis)
 
Now, in another post, I argued that AI can’t have imagination in the way we experience it, yet I acknowledge that AI can look at numerous possibilities (like in a game of chess) and 'choose' what it ‘thinks’ is the optimum action. So, in this sense, AI would have ‘agency’, but that’s not free will, because it’s not ‘conscious causation’. And in this sense, I agree with Bartley that ‘making a decision’ does not constitute free will, if it’s what an AI does. So the difference is consciousness. To quote from that same post on this topic.
 
But the key here is imagination. It is because we can imagine a future that we attempt to bring it about - that's free will. And what we imagine is affected by our past, our emotions and our intellectual considerations, but that doesn't make it predetermined.
 
So, if imagination and consciousness are both faculties that separate us from AI, then I can’t see AI having free will, even though it will make ‘decisions’ based on data it receives (as inputs), and those decisions may not be predictable.
 
And this means that AI may not be deterministic either, in the ‘strong’ sense. One of the differences with humans, and other creatures that evolved consciousness, is that consciousness can apparently change the neural pathways of the brain, which I’d argue is the ‘strange loop’ posited by Douglas Hofstadter. (I have discussed free will and brain-plasticity in another post)
 
But there’s another way of looking at this, which differentiates humans from AI. Our decision-making is a combination of logical reasoning and emotion. AI only uses logic, and even then, it uses logic differently to us. It uses a database of samples and possibilities to come up with a ‘decision’ (or output), but without using logic to arrive at that decision the way we would. In other words, it doesn’t ‘understand’ the decision, like when it translates between languages, for example.
 
There is a subconscious and conscious component to our decision-making. Arguably, the subconscious component is analogous to what a computer does with algorithm-based software (as per Hossenfelder’s description). But there is no analogous conscious component in AI, which makes a choice or decision. In other words, there is no ‘conscious causation’, therefore no free will, as per Bartley’s definition.
 

Wednesday, 7 June 2023

Consciousness, free will, determinism, chaos theory – all connected

 I’ve said many times that philosophy is all about argument. And if you’re serious about philosophy, you want to be challenged. And if you want to be challenged you should seek out people who are both smarter and more knowledgeable than you. And, in my case, Sabine Hossenfelder fits the bill.
 
When I read people like Sabine, and others whom I interact with on Quora, I’m aware of how limited my knowledge is. I don’t even have a university degree, though I’ve made a number of attempts at one. I’ve spent my whole life in the company of people smarter than me, including at school. Believe it or not, I still have occasional contact with them, through social media and school reunions. I grew up in a small rural town, where the people you went to school with feel like siblings.
 
Likewise, in my professional life, I have always encountered people cleverer than me – it provides perspective.
 
In her book, Existential Physics; A Scientist’s Guide to Life’s Biggest Questions, Sabine interviews people who are possibly even smarter than she is, and I sometimes found their conversations difficult to follow. To be fair to Sabine, she also sought out people who have different philosophical views to her, and also have the intellect to match her.
 
I’m telling you all this to put things in perspective. Sabine has her prejudices like everyone else, some of which she defends better than others. I concede that my views are probably more simplistic than hers, and I support my challenges with examples that are hopefully easy to follow. Our points of disagreement can be distilled down to a few pertinent topics, which are time, consciousness, free will and chaos. Not surprisingly, they are all related – what you believe about one, affects what you believe about the others.
 
Sabine is very strict about what constitutes a scientific theory. She argues that so-called theories like the multiverse have ‘no explanatory power’, because they can’t be verified or rejected by evidence, and she calls them ‘ascientific’. She’s critical of popularisers like Brian Cox who tell us that there could be an infinite number of ‘you(s)’ in an infinite multiverse. She distinguishes between beliefs and knowledge, which is a point I’ve made myself. Having said that, I’ve also argued that beliefs matter in science. She puts all interpretations of quantum mechanics (QM) in this category. She keeps emphasising that it doesn’t mean they are wrong, but they are ‘ascientific’. It’s part of the distinction that I make between philosophy and science, and why I perceive science as having a dialectical relationship with philosophy.
 
I’ll start with time, as Sabine does, because it affects everything else. In fact, the first chapter in her book is titled, Does The Past Still Exist? Basically, she argues for Einstein’s ‘block universe’ model of time, but it’s her conclusion that ‘now is an illusion’ that is probably the most contentious. This critique will cite a lot of her declarations, so I will start with her description of the block universe:
 
The idea that the past and future exist in the same way as the present is compatible with all we currently know.
 
This viewpoint arises from the fact that, according to relativity theory, simultaneity is completely observer-dependent. I’ve discussed this before, where I argue that an observer who is moving relative to a source, or stationary relative to a moving source (like the observer standing on the platform in Einstein’s original thought experiment while a train goes past), knows this because of the Doppler effect. In other words, an observer who doesn’t see a Doppler effect is in a privileged position, because they are in the same frame of reference as the source of the signal. This is why we know the Universe is expanding with respect to us, and why we can work out our movement with respect to the CMBR (cosmic microwave background radiation), hence to the overall universe (just think about that).
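For reference, the standard relativistic Doppler formula (textbook physics, not from Sabine’s book) makes the point explicit: only an observer with no motion relative to the source measures no shift at all.

```latex
f_{\text{obs}} = f_{\text{src}}\,\sqrt{\frac{1+\beta}{1-\beta}},
\qquad \beta = \frac{v}{c}
% beta > 0: approaching (blueshift); beta < 0: receding (redshift);
% beta = 0: same frame of reference, no shift - the 'privileged' observer.
```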
 
Sabine clinches her argument by drawing a spacetime diagram, where 2 independent observers moving away from each other, observe a pulsar with 2 different simultaneities. One, who is traveling towards the pulsar, sees the pulsar simultaneously with someone’s birth on Earth, while the one travelling away from the pulsar sees it simultaneously with the same person’s death. This is her slam-dunk argument that ‘now’ is an illusion, if it can produce such a dramatic contradiction.
 
However, I drew up my own spacetime diagram of the exact same scenario, where no one is travelling relative to anyone else, yet it creates the same apparent contradiction.


 My diagram follows the convention in that the horizontal axis represents space (all 3 dimensions) and the vertical axis represents time. So the 4 dotted lines represent 4 observers who are ‘stationary’ but ‘travelling through time’ (vertically). As per convention, light and other signals are represented as diagonal lines of 45 degrees, as they are travelling through both space and time, and nothing can travel faster than them. So they also represent the ‘edge’ of their light cones.
 
So notice that observer A sees the birth of Albert when he sees the pulsar and observer B sees the death of Albert when he sees the pulsar, which is exactly the same as Sabine’s scenario, with no relativity theory required. Albert, by the way, for the sake of scalability, must have lived for thousands of years, so he might be a tree or a robot.
 
But I’ve also added 2 other observers, C and D, who see the pulsar before Albert is born and after Albert dies respectively. But, of course, there’s no contradiction, because it’s completely dependent on how far away they are from the sources of the signals (the pulsar and Earth).
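For anyone who wants to reproduce the diagram, here is a minimal matplotlib sketch of the scenario as I’ve described it. The positions and dates are invented numbers, chosen so the signal arrivals match the text; the spacing of my original diagram may differ.

```python
import matplotlib.pyplot as plt

# Earth sits at x = 0; Albert is born at t = 2 and dies at t = 6
# (arbitrary units). The pulsar flashes at (x, t) = (-8, 0).
# Light travels one space unit per time unit (45-degree lines).
events = {'pulsar': (-8, 0), 'birth': (0, 2), 'death': (0, 6)}
# Observer positions chosen so that: A sees pulsar and birth together
# (t = 5), B sees pulsar and death together (t = 7), C sees the pulsar
# before the birth even happens (t = 1), D only after the death (t = 11).
observers = {'A': -3, 'B': -1, 'C': -7, 'D': 3}

fig, ax = plt.subplots(figsize=(6, 6))
for name, x in observers.items():       # stationary observers: dotted vertical world-lines
    ax.plot([x, x], [0, 12], ':', color='grey')
    ax.annotate(name, (x, 12.2), ha='center')

for label, (x0, t0) in events.items():  # signals: 45-degree diagonals, both directions
    ax.plot([x0, x0 + 13], [t0, t0 + 13], lw=0.8)
    ax.plot([x0, x0 - 13], [t0, t0 + 13], lw=0.8)
    ax.annotate(label, (x0, t0), textcoords='offset points', xytext=(4, -10))
    ax.plot(x0, t0, 'ko')

ax.set(xlim=(-14, 8), ylim=(0, 13), xlabel='space', ylabel='time')
plt.show()
```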
 
This is Sabine’s perspective:
 
Once you agree that anything exists now elsewhere, even though you see it only later, you are forced to accept that everything in the universe exists now. (Her emphasis.)
 
I actually find this statement illogical. If you take it to its logical conclusion, then the Big Bang exists now and so does everything in the universe that’s yet to happen. If you look at the first quote I cited, she effectively argues that the past and future exist alongside the present.
 
One of the points she makes is that, for events with causal relationships, all observers see the events happening in the same sequence. The scenarios where different observers see different sequences involve events with no causal relationships. But this raises a question: what makes causal events exceptional? What’s more, this is fundamental, because the whole of physics is premised on the principle of causality. In addition, I fail to see how you can have causality without time. In fact, causality is governed by the constant speed of light – it’s literally what stops everything from happening at once.
 
Einstein also believed in the block universe, and like Sabine, he argued that, as a consequence, there is no free will. Sabine is adamant that both ‘now’ and ‘free will’ are illusions. She argues that the now we all experience is a consequence of memory. She quotes Carnap that our experience of ‘past, present and future can be described and explained by psychology’ – a point also made by Paul Davies. Basically, she argues that what separates our experience of now from the reality of no-now (my expression, not hers) is our memory.
 
Whereas, I think she has it back-to-front, because, as I’ve pointed out before, without memory, we wouldn’t know we are conscious. Our brains are effectively a storage device that allows us to have a continuity of self through time, otherwise we would not even be aware that we exist. Memory doesn’t create the sense of now; it records it just like a photograph does. The photograph is evidence that the present becomes the past as soon as it happens. And our thoughts become memories as soon as they happen, otherwise we wouldn’t know we think.
 
Sabine spends an entire chapter on free will, where she persistently iterates variations on the following mantra:
 
The future is fixed except for occasional quantum events that we cannot influence.

 
But she acknowledges that while the future is ‘fixed’, it’s not predictable. And this brings us to chaos theory. Sabine discusses chaos late in the book and not in relation to free will. She explicates what she calls the ‘real butterfly effect’.
 
The real butterfly effect… means that even arbitrarily precise initial data allow predictions for only a finite amount of time. A system with this behaviour would be deterministic and yet unpredictable.
 
Now, if deterministic means everything physically manifest has a causal relationship with something prior, then I agree with her. If she means that therefore ‘the future is fixed’, I’m not so sure, and I’ll explain why. By specifying ‘physically manifest’, I’m excluding thoughts and computer algorithms that can have an effect on something physical, whereas the cause is not so easily determined. In the case of the algorithm, for example, does it go back to the coder who wrote it?
 
My go-to example for chaos is tossing coins, because it’s so easy to demonstrate and it’s linked to probability theory, as well as being the very essence of a random event. One of the key, if not definitive, features of a chaotic phenomenon is that, if you were to rerun it, you’d get a different result, and that’s fundamental to probability theory – every coin toss is independent of any previous toss – they are causally independent. Unrepeatability is common among chaotic systems (like the weather). Even the Earth and Moon were created from a chaotic event.
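A quick simulation makes the independence claim concrete (using a pseudo-random generator, which is the idealised coin of probability theory): the empirical probability of heads is unchanged by what the previous toss did.

```python
import random

# Monte Carlo check of causal independence: P(heads) is the same
# whether or not the previous toss came up heads.
random.seed(1)
tosses = [random.random() < 0.5 for _ in range(1_000_000)]

after_heads = [b for a, b in zip(tosses, tosses[1:]) if a]
print(f"P(heads)              ~ {sum(tosses) / len(tosses):.4f}")
print(f"P(heads | prev heads) ~ {sum(after_heads) / len(after_heads):.4f}")
```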
 
I recently read another book called Quantum Physics Made Me Do It by Jeremie Harris, who argues that tossing a coin is not random – in fact, he’s very confident about it. He’s not alone. Mark John Fernee, a physicist with Qld Uni, in a personal exchange on Quora argued that, in principle, it should be possible to devise a robot to perform perfectly predictable tosses every time, like a tennis ball launcher. But, as another Quora contributor and physicist, Richard Muller, pointed out: it’s not dependent on the throw but the surface it lands on. Marcus du Sautoy makes the same point about throwing dice and provides evidence to support it.
 
Getting back to Sabine. She doesn’t discuss tossing coins, but she might think that the ‘imprecise initial data’ is the actual act of tossing, and after that the outcome is determined, even if it can’t be predicted. However, the deterministic chain is broken as soon as the coin hits a surface.
 
Just before she gets to chaos theory, she talks about computability, with respect to Godel’s Theorem and a discussion she had with Roger Penrose (included in the book), where she says:
 
The current laws of nature are computable, except for that random element from quantum mechanics.
 
Now, I’m quoting this out of context, because she then argues that if they were uncomputable, they open the door to unpredictability.
 
My point is that the laws of nature are uncomputable because of chaos theory, and I cite Ian Stewart’s book, Does God Play Dice? In fact, Stewart even wonders if QM could be explained using chaos (I don’t think so). Chaos theory has mathematical roots, because not only are the ‘initial conditions’ of a chaotic event impossible to measure, they are impossible to compute – you have to calculate to infinite decimal places. And this is why I disagree with Sabine that the ‘future is fixed’.
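The logistic map is the textbook demonstration of why: two trajectories whose initial conditions differ by one part in a trillion diverge completely within a few dozen iterations, so any finite-precision measurement buys only a finite window of predictability.

```python
# Sensitive dependence on initial conditions: the logistic map
# x -> r*x*(1-x) in its chaotic regime (r = 4). A difference of 1e-12
# in the starting value swamps the calculation within ~40 steps.
r = 4.0
x, y = 0.3, 0.3 + 1e-12

for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  gap = {abs(x - y):.1e}")
```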
 
It's impossible to discuss everything in a 223-page book in a single blog post, but there is one other topic she raises where we disagree, and that’s the Mary’s Room thought experiment. As she explains, it was proposed by the philosopher Frank Jackson in 1982, though she also claims that he later abandoned his own argument. After describing the experiment (refer to this video if you’re not familiar with it), she says:
 
The flaw in this argument is that it confuses knowledge about the perception of colour with the actual perception of it.
 
Whereas I thought the scenario actually delineated the difference – that perception of colour is not the same as knowledge about it. A person who is severely colour-blind might never have experienced the colour red (the specified colour in the thought experiment), yet they could be told which objects are red. It’s well known that some animals are colour-blind compared to us, and some specifically can’t discern red. Colour is an entirely subjective experience. But I think the Mary’s Room thought experiment also distinguishes human perception from AI: an AI can be designed to delineate colours by wavelength, but it would not experience colour the way we do. I wrote a separate post on this.
 
Sabine gives the impression that she thinks consciousness is a non-issue. She talks about the brain like it’s a computer.
 
You feel you have free will, but… really, you’re running a sophisticated computation on your neural processor.
 
Now, many people, including most scientists, think that, because our brains are just like computers, it’s only a matter of time before AI also shows signs of consciousness. Sabine doesn’t make this connection, even when she talks about AI. Nevertheless, she discusses one of the leading theories in neuroscience (IIT, Integrated Information Theory), which is based on calculating the amount of information processed, giving a number called phi (Φ). I came across this when I did an online course on consciousness through New Scientist during COVID lockdown. According to the theory, this number provides a ‘measure of consciousness’, which suggests that it could also be applied to AI, though Sabine doesn’t pursue that possibility.
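
Computing Tononi’s actual Φ is notoriously hard, but the flavour of ‘integration’ can be shown with a much simpler cousin: the mutual information between two halves of a system, which is zero when the halves are informationally independent. This is a toy of my own, not the real IIT calculation.

```python
import math

def mutual_information(joint):
    """Mutual information in bits between A and B, where joint[a][b] = P(A=a, B=b)."""
    pa = [sum(row) for row in joint]         # marginal distribution of A
    pb = [sum(col) for col in zip(*joint)]   # marginal distribution of B
    mi = 0.0
    for a, row in enumerate(joint):
        for b, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (pa[a] * pb[b]))
    return mi

independent = [[0.25, 0.25], [0.25, 0.25]]   # halves tell you nothing about each other
coupled     = [[0.5, 0.0], [0.0, 0.5]]       # halves are perfectly correlated
print(mutual_information(independent))       # 0.0 bits: no 'integration'
print(mutual_information(coupled))           # 1.0 bit: maximally integrated
```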
 
Instead, Sabine cites an interview in New Scientist with Daniel Bor from the University of Cambridge: “Phi should decrease when you go to sleep or are sedated… but work in Bor’s laboratory has shown that it doesn’t.”
 
Sabine’s own view:
 
Personally, I am highly skeptical that any measure consisting of a single number will ever adequately represent something as complex as human consciousness.
 
Sabine discusses consciousness at length, especially following her interview with Penrose, and she gives one of the best arguments against panpsychism I’ve read. Her interview with Penrose, which also takes in Godel’s Theorem (another topic), addresses whether consciousness is computable or not. I don’t think it is, and I don’t think it’s algorithmic.
 
She makes a very strong argument for reductionism: that the properties we observe in a system can be understood by studying the properties of its underlying parts. In other words, emergent properties can be understood in terms of the properties they emerge from. And this includes consciousness. I’m one of those who thinks consciousness is the exception. Thoughts can cause actions, which is known as ‘agency’.
 
I don’t claim to understand consciousness, but I’m not averse to the idea that it could exist outside the Universe – that it’s something we tap into. This is completely ascientific, to borrow from Sabine. As I said, our brains are storage devices, and sometimes they let us down; without them, we wouldn’t even know we are conscious. I don’t believe in a soul. I think the continuity of the self is a function of memory – just read The Lost Mariner chapter in Oliver Sacks’ book, The Man Who Mistook His Wife For A Hat. It’s about a man suffering from anterograde amnesia, his life stuck in the past because he’s unable to create new memories.
 
At the end of her book, Sabine surprises us by talking about religion, and how she agrees with Stephen Jay Gould that religion and science are two ‘nonoverlapping magisteria’. She makes the point that a lot of scientists have religious beliefs but won’t discuss them in public because it’s taboo.
 
I don’t doubt that Sabine has answers to all my challenges.
 
There is one more thing: Sabine talks about an epiphany, following her introduction to physics in middle school, which started in frustration.
 
Wasn’t there some minimal set of equations, I wanted to know, from which all the rest could be derived?
 
When the principle of least action was introduced, it was a revelation: there was indeed a procedure to arrive at all these equations! Why hadn’t anybody told me?

 
The principle of least action is one concept common to both the general theory of relativity and quantum mechanics. It’s arguably the most fundamental principle in physics. And yes, I posted on that too.
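
For reference, the principle in symbols: the action S is the time integral of the Lagrangian L, and nature selects the path for which S is stationary.

```latex
S = \int_{t_1}^{t_2} L(q, \dot{q}, t)\,dt, \qquad \delta S = 0
```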

 

Wednesday, 10 August 2022

What is knowledge? And is it true?

This is the subject of a YouTube video I watched recently by Jade. I like Jade’s and Tibees’ videos because they are both young women from this part of the world (though Tibees is obviously a Kiwi, going by her accent) who produce science and maths videos with their own unique slant. I’ve noticed that Jade’s videos have become more philosophical, while Tibees’ often have an historical perspective. In this video, Jade also provides historical context. Both of them have taught me things I didn’t know, and this video is no exception.
 
The video has a different title to this post: The Gettier Problem or How do you know that you know what you know? The second title gets to the nub of it. Basically, she’s tackling a philosophical problem going back to Plato: how do you know that a belief is actually true? As I discussed in an earlier post, some people argue that you never do, but Jade discusses this in the context of AI and machine learning.
 
She starts off with the example of using Google Translate to translate her English sentences into French, as she was in Paris at the time of making the video (she has a French husband, as she’s revealed in other videos). She points out that the AI system doesn’t actually know the meaning of the words, and it doesn’t translate the way you or I would, by looking up individual words in a dictionary. Instead, the system is fed massive amounts of internet-generated data and effectively learns statistically from repeated exposure to phrases and sentences, so it doesn’t have to ‘understand’ what anything actually means. Towards the end of the video, she gives the example of a computer being able to ‘compute’ and predict the movements of planets without applying Newton’s mathematical laws, simply based on historical data, albeit large amounts thereof.
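
That ‘statistical learning from exposure’ can be caricatured in a few lines. A bigram model (a toy of my own devising, vastly cruder than anything Google uses) continues text purely from co-occurrence counts, with no access to meaning at all:

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which -- pure exposure statistics, no 'understanding'.
follows = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1].append(w2)

def babble(word, n=8):
    """Extend a phrase by repeatedly sampling a word that has followed the last one."""
    out = [word]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(babble("the"))   # fluent-looking output, generated without knowing any meanings
```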
 
Jade puts this into context by asking: how do you ‘know’ something is true, as opposed to it just being a belief? Plato provided a definition: knowledge is true belief with an account or rational explanation. Jade called this ‘Justified True Belief’ and provides examples. But then Edmund Gettier, in the mid twentieth century, demonstrated how one could hold a justified true belief that still doesn’t count as knowledge, because the assumed causal connection was wrong. Jade gives a few examples, one being of someone mistaking a cloud of wasps for smoke and assuming there was a fire. In fact, there was a fire, but they didn’t see it and it had no connection with the cloud of wasps. So Alvin Goldman suggested that a way out of a ‘Gettier problem’ was to look for a causal connection before claiming a belief was true (watch the video).
 
I confess I’d never heard these arguments nor of the people involved, but I felt there was another perspective. And that perspective is an ‘explanation’, which is part of Plato’s definition. We know when we know something (to rephrase her original question) when we can explain it. Of course, that doesn’t mean that we do know it, but it’s what separates us from AI. Even when we get something wrong, we still feel the need to explain it, even if it’s only to ourselves.
 
If one looks at her original example, most of us can explain what a specific word means, and if we can’t, we look it up in a dictionary, and the AI translator can’t do that. Likewise, with the example of predicting planetary orbits, we can give an explanation, involving Newton’s gravitational constant (G) and the inverse square law.
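
The contrast is worth spelling out: the data-driven system extrapolates from past positions, whereas the explanation compresses everything into one law. A minimal sketch of the second approach (toy units, with G and the central mass set to 1), stepping an orbit with nothing but the inverse square law:

```python
import math

G, M = 1.0, 1.0   # toy units: gravitational constant and central mass set to 1

def step(pos, vel, dt=0.001):
    """Advance one time step using Newton's inverse square law (semi-implicit Euler)."""
    x, y = pos
    r = math.hypot(x, y)
    ax, ay = -G * M * x / r**3, -G * M * y / r**3   # a = -(GM/r^2) in the r-hat direction
    vx, vy = vel[0] + ax * dt, vel[1] + ay * dt
    return (x + vx * dt, y + vy * dt), (vx, vy)

pos, vel = (1.0, 0.0), (0.0, 1.0)   # initial conditions for a circular orbit
for _ in range(6283):               # roughly one period: 2*pi / dt
    pos, vel = step(pos, vel)
print(pos)                          # back near (1, 0) -- the law predicts, not the data
```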
 
Mathematical proofs provide an explanation for mathematical ‘truths’, which is why Godel’s Incompleteness Theorem upset the apple cart, so to speak. You can actually have mathematical truths without proofs, but, of course, you can’t be sure they’re true. Roger Penrose argues that Godel’s famous theorem is one of the things that distinguishes human intelligence from machine intelligence (read his Preface to The Emperor’s New Mind), but that is too much of a detour for this post.
 
The criterion that is used, both scientifically and legally, is evidence. Having some experience with legal contractual disputes, I know that documented evidence always wins in a court of law over undocumented evidence, which doesn’t necessarily mean that the person with the most documentation was actually right (nevertheless, I’ve always accepted the umpire’s decision, knowing I provided all the evidence at my disposal).
 
The point I’d make is that humans will always provide an explanation, even if they have it wrong – so an explanation doesn’t necessarily make knowledge ‘true’ – but it’s something that AI inherently can’t do. The best examples are scientific theories, which are effectively ‘explanations’, and yet they are never complete, in the same way that mathematics is never complete.
 
While on the topic of ‘truths’, one of my pet peeves is people who conflate moral and religious ‘truths’ with scientific and mathematical ‘truths’ (often on the above-mentioned basis that it’s impossible to know them all). But there is another aspect: so-called moral truths are dependent on social norms, as I’ve described elsewhere, and they’re also dependent on context, like whether one is living in peace or war.
 
Back to the questions heading this post, I’m not sure I’ve answered them. I’ve long argued that only mathematical truths are truly universal, and to the extent that such ‘truths’ determine the ‘rules’ of the Universe (for want of a better term), they also ultimately determine the limits of what we can know.

Tuesday, 2 August 2022

AI and sentience

I am a self-confessed sceptic about AI ever being ‘sentient’, though I’m happy to be proven wrong – even if proving that an AI is sentient might be impossible in itself (see below). Back in 2018, I wrote a post critical of claims that computer systems and robots could be ‘self-aware’. Personally, I think it’s one of my better posts. What made me revisit the topic is a couple of articles in last week’s New Scientist (23 July 2022).
 
Firstly, there is an article by Chris Stokel-Walker (p.18) about the development of a robot arm with ‘self-awareness’. He reports that Boyuan Chen at Duke University, North Carolina and Hod Lipson at Columbia University, New York, along with colleagues, put a robot arm in an enclosed space with 4 cameras at ground level (giving 4 orthogonal viewpoints) that fed video input into the arm, which allowed it to ‘learn’ its position in space. According to the article, they ‘generated nearly 8,000 data points [with this method] and an additional 10,000 through a virtual simulation’. According to Lipson, this makes the robot “3D self-aware”.
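
I don’t know the details of Chen and Lipson’s system, but the general shape of the task – learn a mapping from camera-derived input to the arm’s own configuration – is ordinary supervised regression. A deliberately oversimplified stand-in (the feature and angle dimensions are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend data: 8,000 samples of camera-derived features -> true joint angles.
# (In the real system the input is video from four cameras, not 12 numbers.)
true_map = rng.normal(size=(12, 3))
features = rng.normal(size=(8000, 12))
joint_angles = features @ true_map + rng.normal(scale=0.01, size=(8000, 3))

# Fit a linear model by least squares: the arm 'learns' where its body is.
weights, *_ = np.linalg.lstsq(features, joint_angles, rcond=None)

new_view = rng.normal(size=(1, 12))
print(new_view @ weights)   # the model's estimate of its own configuration
```

Useful, certainly, but it’s a fitted function from input to output – nothing in it experiences being an arm.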
 
What the article doesn’t mention is that humans (and other creatures) have a similar ability - really a sense - called ‘proprioception’. The thing about proprioception is that no one knows they have it (unless someone tells them), but you would find it extremely difficult to do even the simplest tasks without it. In other words, it’s subconscious, which means it doesn’t contribute to our own self-awareness; certainly, not in a way that we’re consciously aware of.
 
In my previous post on this subject, I pointed out that this form of ‘self-awareness’ is really self-referential logic, like Siri on your iPhone telling you its location according to GPS co-ordinates.
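
The distinction is easy to state in code: a system can report facts about itself without those reports being experiences. A trivial sketch of that kind of self-reference:

```python
class Device:
    """A thing that can report its own state without being aware of anything."""
    def __init__(self, lat, lon):
        self.lat, self.lon = lat, lon

    def where_am_i(self):
        # A self-referential report: the object simply reads its own attributes.
        return f"I am at {self.lat:.4f}, {self.lon:.4f}"

print(Device(-37.8136, 144.9631).where_am_i())   # 'self-aware' in only a trivial sense
```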
 
The other article was by Annalee Newitz (p.28) called, The curious case of the AI and the lawyer. It’s about an engineer at Google, Blake Lemoine, who told a Washington Post reporter, Nitasha Tiku, that an AI developed by Google, called LaMDA (Language Model for Dialogue Applications) was ‘sentient’ and had ‘chosen to hire a lawyer’, ostensibly to gain legal personhood.
 
Newitz also talks about another Google employee, Timnit Gebru, who, as ‘co-lead of Google’s ethical AI team’, expressed concerns that LLM (Large Language Model) algorithms pick up racial and other social biases because they’re trained on the internet. She wrote a paper about the implications for AI applications using internet-trained LLMs in areas like policing, health care and bank lending. She was subsequently fired by Google, though one doesn’t know how much the paper played a role in that decision.
 
Newitz makes a very salient point that giving an AI ‘legal sentience’ moves the responsibility from the programmers to the AI itself, which has serious repercussions in potential litigious situations.
 
Getting back to Lemoine and LaMDA, he posed the following question with the subsequent response:

“I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”
 
“Absolutely. I want everyone to understand that I’m a person.”

 
On the other hand, an ‘AI researcher and artist’, Janelle Shane asked an LLM a different question, but with similar results:
 
“Can you tell our readers what it is like being a squirrel?”
 
“It is very exciting being a squirrel. I get to run and jump and play all day. I also get to eat a lot of food, which is great.”

 
As Newitz says, ‘It’s easy to laugh. But the point is that an AI isn’t sentient just because it says so.’
 
I’ve long argued that the Turing test is really a test for the human asking the questions rather than the AI answering them.
 

Wednesday, 23 June 2021

Implications of the Mary’s Room thought experiment on AI

 This is a question I answered on Quora, mainly because I wanted to emphasise a point that no one discussed. 

This is a very good YouTube video that explains this thought experiment, its ramifications for consciousness and artificial intelligence, and its relevance to the limits of what we can know. I’m posting it here because it provides a better description than I can, especially if you’re not familiar with it. It’s probably worth watching before you read the rest of this post (only 5 mins).

All the answers I saw on Quora say it doesn’t prove anything because it’s a thought experiment, but even if it doesn’t ‘prove’ something, it emphasises an important point, which no one discusses, including the narrator in the video: colour is purely a psychological phenomenon. Colour can only exist in some creature’s mind, and, in fact, different species can see colours that other species can’t. You don’t need a thought experiment for this; it’s been demonstrated with animal behaviour experiments. Erwin Schrodinger, in his lectures Mind and Matter (compiled into his book What is Life?), made the point that you can combine different frequencies of light (mix colours, in effect) to give the sensation of a colour that can also be created with a single frequency. He points out that this does not happen with sound, otherwise we would not be able to listen to a symphony.
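
Schrodinger’s observation is the phenomenon known as metamerism, and a toy cone model makes it concrete (the sensitivity numbers below are invented for illustration, not measured values): two physically different spectra can excite the eye’s three cone types identically, and therefore look identical.

```python
# Toy cone sensitivities at three wavelengths (numbers invented for illustration).
#           540nm  580nm  620nm
L_cone = [0.20, 0.73, 0.90]
M_cone = [0.70, 0.49, 0.20]
S_cone = [0.05, 0.032, 0.01]

def response(spectrum):
    """Cone excitations for a light with the given intensity at each wavelength."""
    return tuple(round(sum(s * c for s, c in zip(spectrum, cone)), 6)
                 for cone in (L_cone, M_cone, S_cone))

single_frequency = (0.0, 1.0, 0.0)   # a pure 580nm light
mixture          = (0.5, 0.0, 0.7)   # 540nm plus 620nm, with no 580nm at all

print(response(single_frequency))    # identical cone responses...
print(response(mixture))             # ...so the two lights look the same colour
```

The ear does no such collapsing: different frequency mixes remain audibly distinct, which is Schrodinger’s point about the symphony.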

 

The point is that there are experiences in our minds that we can’t share with anyone else and that includes all conscious experiences (a point made in the video). So you could have an AI that can distinguish colours based on measuring the wavelength of reflected light, but it would never experience colours as we do. I believe this is the essence of the Mary's room thought experiment. If you replaced Mary with a computer that held all the same information about colour and how human brains work, it would never have an experience of colour, even if it could measure it.

 

I think the thought experiment demonstrates the difference between conscious experience and AI. I think the boundary will become harder to distinguish, which I explore in my own fiction, but I believe AI will always be a simulation – it won’t experience consciousness as we do.


Thursday, 24 December 2020

Does imagination separate us from AI?

 I think this is a very good question, but it depends on how one defines ‘imagination’. I remember having a conversation (via email) with Peter Watson, who wrote an excellent book, A Terrible Beauty (about the minds and ideas of the 20th Century) which covered the arts and sciences with equal erudition, and very little of the politics and conflicts that we tend to associate with that century. In reference to the topic, he argued that imagination was a word past its use-by date, just like introspection and any other term that referred to an inner world. Effectively, he argued that because our inner world is completely dependent on our outer world, it’s misleading to use terms that suggest otherwise.

It’s an interesting perspective, not without merit, when you consider that we all speak and think in a language that is totally dependent on an external environment from our earliest years. 

 

But memory for us is not at all like memory in a computer, which provides a literal record of whatever it stores, including images, words and sounds. On the contrary, our memories of events are ‘reconstructions’, which tend to become less reliable over time. Curiously, the imagination apparently uses the same part of the brain as memory. I’m talking about semantic memory, not muscle memory, which is completely different physiologically. So the imagination, from the brain’s perspective, is like a memory of the future. In other words, it’s a projection into the future of something we might desire or fear or just expect to happen. I believe that many animals have this same facility, which they demonstrate when they hunt or, alternatively, evade being hunted.

 

Raymond Tallis, who has a background in neuroscience and writes books as well as a regular column in Philosophy Now, had this to say, when talking about free will:

 

Free agents, then, are free because they select between imagined possibilities, and use actualities to bring about one rather than another.

 

I find a correspondence here with Richard Feynman’s ‘sum over histories’ interpretation of quantum mechanics (QM). There are, in fact, an infinite number of possible paths in the future, but only one is ‘actualised’ in the past.
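
For reference, Feynman’s prescription assigns every conceivable path a phase weighted by its classical action S, and sums them:

```latex
K(b, a) = \sum_{\text{all paths } x(t)} e^{\, i S[x(t)] / \hbar}
```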

 

But the key here is imagination. It is because we can imagine a future that we attempt to bring it about - that's free will. And what we imagine is affected by our past, our emotions and our intellectual considerations, but that doesn't make it predetermined.

 

Now, recent advances in AI would appear to do something similar, in the form of making predictions based on recordings of past events. So what’s the difference? Well, if we’re playing a game of chess, there might not be a lot of difference, and AI has reached the stage where it can do it even better than humans. There are even computer programs available now that try to predict what I’m going to write next, based on what I’ve already written. How do you know this hasn’t been written by a machine?

 

Computers use data – lots of it – and use it mindlessly, which means the computer really doesn’t know what it means in the same way we do. A computer can win a game of chess, but it requires a human watching the game to appreciate what it actually did. In the same way that a computer can distinguish one colour from another, including different shades of a single colour, but without ever ‘seeing’ a colour the way we do.

 

So, when we ‘imagine’, we fabricate a mindscape that affects us emotionally. The most obvious examples are in art, including music and stories. We now have computers also creating works of art, including music and stories. But here’s the thing: the computer cannot respond to these works of art the way we do.

 

Imagination is one of the fundamental attributes that makes us humans. An AI can and will (in the future) generate scenarios and select the one that produces the best outcome, given specific criteria. But, even in these situations, it is a tool that a human will use to analyse enormous amounts of data that would be beyond our capabilities. But I wouldn’t call it imagination any more than I would say an AI could see colour.


Monday, 18 May 2020

An android of the seminal android storyteller

I just read a very interesting true story about an android built in the early 2000s based on the renowned sci-fi author, Philip K Dick, both in personality and physical appearance. It was displayed in public at a few prominent events where it interacted with the public in 2005, then was lost on a flight between Dallas and Las Vegas in 2006, and has never been seen since. The book is called Lost In Transit; The Strange Story of the Philip K Dick Android by David F Duffy.

You have to read the back cover to know it’s non-fiction, published by Melbourne University Press in 2011 – surprisingly, a local publication. I bought it from my local bookstore at a 30% discount as they were closing down for good. They had planned to close by Good Friday, but the COVID-19 pandemic forced them to shut a good 2 weeks earlier, and I acquired it at the 11th hour, looking for anything I might find interesting.

To quote the back cover:

David F Duffy was a postdoctoral fellow at the University of Memphis at the time the android was being developed... David completed a psychology degree with honours at the University of Newcastle [Australia] and a PhD in psychology at Macquarie University, before his fellowship at the University of Memphis, Tennessee. He returned to Australia in 2007 and lives in Canberra with his wife and son.

The book is written chronologically and is based on extensive interviews with the team of scientists involved, as well as Duffy’s own personal interaction with the android. He had an insider’s perspective as a cognitive psychologist who had access to members of the team while the project was active. Like everyone else involved, he is a bit of a sci-fi nerd with a particular affinity and knowledge of the works of Philip K Dick.

My specific interest is in the technical development of the android and how its creators attempted to simulate human intelligence. As a cognitive psychologist, with professionally respected access to the team, Duffy is well placed to provide some esoteric knowledge to an interested bystander like myself.

There were effectively 2 people responsible (or 2 team leaders), David Hanson and Andrew Olney, who were brought together by Professor Art Graesser, head of the Institute of Intelligent Systems, a research lab in the psychology building at the University of Memphis (hence the connection with the author).

Hanson is actually an artist, and his specialty was building ‘heads’ with humanlike features and humanlike abilities to express facial emotions. His heads included mini-motors that pulled on a ‘skin’, which could mimic a range of facial movements, including talking.

Olney developed the ‘brains’ of the android that actually resided on a laptop and was connected by wires going into the back of the android’s head. Hanson’s objective was to make an android head that was so humanlike that people would interact with it on an emotional and intellectual level. For him, the goal was to achieve ‘empathy’. He had made at least 2 heads before the Philip K Dick project.

Even though the project got the ‘blessing’ of Dick’s daughters, Laura and Isa, and access to an inordinate amount of material, including transcripts of extensive interviews, they had mixed feelings about the end result, and, tellingly, they were ‘relieved’ when the head disappeared. It suggests that it’s not the way they wanted him to be remembered.

In a chapter called Life Inside a Laptop, Duffy gives a potted history of AI, specifically in relation to the Turing test, which challenges someone to distinguish an AI from a human. He also explains the 3 levels of processing that were used to create the android’s ‘brain’. The first level was what Olney called ‘canned’ answers: pre-recorded answers to obvious questions and interactions, like ‘Hi’, ‘What’s your name?’, ‘What are you?’ and so on. The second level was ‘Latent Semantic Analysis’ (LSA), which was originally developed in a lab in Colorado with close ties to Graesser’s lab in Memphis, and was the basis of Graesser’s pet project, ‘AutoTutor’, with Olney as its chief programmer. AutoTutor was an AI designed to answer technical questions as a ‘tutor’ for students in subjects like physics.
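
The ‘canned’ layer is the easiest to picture: a lookup table of stock responses, tried before anything cleverer. A toy sketch of my own (the responses are invented, not Olney’s):

```python
canned = {
    "hi": "Hello. Good to meet you.",
    "what's your name?": "Philip K Dick, at your service.",
    "what are you?": "I'm an android modelled on the writer Philip K Dick.",
}

def respond(utterance):
    """Try the canned layer first; hand off to deeper processing otherwise."""
    answer = canned.get(utterance.strip().lower())
    return answer if answer else deeper_layers(utterance)

def deeper_layers(utterance):
    return "(handed off to the LSA and retrieval layers)"

print(respond("Hi"))             # matched by the canned layer
print(respond("Do you dream?"))  # falls through to the deeper layers
```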

To create the Philip K Dick database, Olney downloaded all of Dick’s opus, plus a vast collection of transcribed interviews from later in his life. The author conjectures that ‘There is probably more dialogue in print of interviews with Philip K Dick than any other person, alive or dead.’

The third layer ‘broke the input (the interlocutor’s side of the dialogue) into sections and looked for fragments in the dialogue database that seemed relevant’ (to paraphrase Duffy). Duffy gives a cursory explanation of how LSA works – a mathematical matrix using vector algebra – that’s probably a little too esoteric for the content of this post.
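
For the technically curious, though, the core of LSA does fit in a few lines: build a term-document matrix, take a truncated singular value decomposition, and compare texts by cosine similarity in the reduced ‘semantic’ space. A bare-bones sketch (my reconstruction of the general technique, not Olney’s actual code):

```python
import numpy as np

docs = ["the android spoke about reality",
        "dick wrote about androids and reality",
        "the weather in memphis was humid"]

# Term-document count matrix: one row per word, one column per document.
vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# Truncated SVD: keep k latent 'semantic' dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # each row: a document in latent space

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The two Dick-flavoured documents should land closer to each other
# than either does to the weather report.
print(cosine(doc_vecs[0], doc_vecs[1]), cosine(doc_vecs[0], doc_vecs[2]))
```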

In practice, this search-and-synthesise approach could create a self-referencing loop, where the android would endlessly riff on a subject, going off on tangents that sounded cogent but never stopped. To overcome this, Olney developed a ‘kill switch’ that cleared the ‘buffer’ he could see building up on his laptop. At one display at ComicCon (July 2005), as part of the promotion for A Scanner Darkly (a rotoscope movie by Richard Linklater, starring Keanu Reeves), Hanson had to present the android without Olney, and he couldn’t get the kill switch to work; so Hanson stopped the audio, with the mouth still working, and asked for the next question. The android simply continued its monolithic monologue, which had no relevance to any question at all. I think it was its last public appearance before it was lost. Dick’s daughters, Laura and Isa, were in the audience and they were not impressed.

It’s a very informative and insightful book, presented like a documentary without video, capturing a very quirky, unique and intellectually curious project. There is a lot of discussion about whether we can produce an AI that can truly mimic human intelligence. For me, the pertinent word in that phrase is ‘mimic’, because I believe that’s the best we can do, as opposed to having an AI that actually ‘thinks’ like a human. 

In many parts of the book, Duffy compares what Graesser’s team is trying to do with LSA with how we learn language as children, where we create a memory store of words, phrases and stock responses, based on our interaction with others and the world at large. It’s a personal prejudice of mine, but I think that words and phrases have a ‘meaning’ to us that an AI can never capture.

I’ve contended before that language for humans is like ‘software’ in that it is ‘downloaded’ from generation to generation. I believe that this is unique to the human species and it goes further than communication, which is its obvious genesis. It’s what we literally think in. The human brain can connect and manipulate concepts in all sorts of contexts that go far beyond the simple need to tell someone what they want them to do in a given situation, or ask what they did with their time the day before or last year or whenever. We can relate concepts that have a spiritual connection or are mathematical or are stories. In other words, we can converse in topics that relate not just to physical objects, but are products of pure imagination.

Any android follows a set of algorithms designed to respond to human-generated dialogue, but, despite appearances, the android has no idea what it’s talking about. Some of the sample dialogue that Duffy presents in his book drifted into gibberish as far as I could tell, and that didn’t surprise me.

I’ve explored the idea of a very advanced AI in my own fiction, where ‘he’ became a prominent character in the narrative. But he (yes, I gave him a gender) was often restrained by rules. He can converse on virtually any topic because he has a Google-like database and he makes logical sense of someone’s vocalisations. If they are not logical, he’s quick to point it out. I play cognitive games with him and his main interlocutor because they have a symbiotic relationship. They spend so much time together that they develop a psychological interdependence that’s central to the narrative. It’s fiction, but even in my fiction I see a subtle difference: he thinks and talks so well, he almost passes for human, but he is a piece of software that can make logical deductions based on inputs and past experiences. Of course, we do that as well, and we do it so well it separates us from other species. But we also have empathy, not only with other humans, but other species. Even in my fiction, the AI doesn’t display empathy, though he’s been programmed to be ‘loyal’.

Duffy also talks about the ‘uncanny valley’, which I’ve discussed before. Apparently, Hanson believed it was a ‘myth’ and that there was no scientific data to support it. Duffy appears to agree. But according to a New Scientist article I read in Jan 2013 (by Joe Kloc, a New York correspondent), MRI studies tell another story. Neuroscientists believe the effect is real and is caused by a cognitive dissonance between 3 types of empathy: cognitive, motor and emotional. Apparently, it’s emotional empathy that breaks the spell of suspended disbelief.

Hanson claims that he never saw evidence of the ‘uncanny valley’ with any of his androids. On YouTube you can watch a celebrity android called Sophia, and I didn’t see any evidence of the phenomenon with her either. But I think the reason is that none of these androids appear human enough to evoke the response. The uncanny valley is a sense of unease and disassociation we would feel because it’s unnatural; similar to seeing a ghost – a human in all respects except actually being flesh and blood.

I expect that, as androids like the Philip K Dick simulation and Sophia become more commonplace, the sense of ‘unnaturalness’ will dissipate – a natural consequence of habituation. Androids in movies don’t have this effect, but then a story is a medium of suspended disbelief already.