Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.
Showing posts with label Artificial Intelligence. Show all posts

Wednesday 24 January 2024

Can AI have free will?

This is a question I’ve never seen asked, let alone answered. I think there are good reasons for that, which I’ll come to later.
 
The latest issue of Philosophy Now (Issue 159, Dec 2023/Jan 2024), which I’ve already referred to in 2 previous posts, has as its theme (they always have a theme), Freewill Versus Determinism. I’ll concentrate on an article by the Editor, Grant Bartley, titled What Is Free Will? That’s partly because he and I have similar views on the topic, and partly because reading the article led me to ask the question at the head of this post (I should point out that he never mentions AI).
 
It's a lengthy article, so I won't be able to do it full justice, or even cover all the aspects he discusses. For instance, towards the end, he posits a personal 'pet' theory that there is a quantum aspect to the internal choice we make in our minds. And he even provides a link to videos he's made on this topic. I mention this in passing, and will make 2 comments: one, I also have 'pet' theories, so I can't dismiss him out-of-hand; and two, I haven't watched the videos, so I can't comment on the theory's plausibility.
 
He starts with an attempt to define what we mean by free will, and what it doesn't mean. For instance, he differentiates between subconscious choices, which he calls 'impulses', and free will, which requires a conscious choice. He also distinguishes free will from what he calls 'making a decision'. I will quote him directly, as I still see this as involving free will, if it's based on making a 'decision' from alternative possibilities (as he explains).
 
…sometimes, our decision-making is a choice, that is, mentally deciding between alternative possibilities present to your awareness. But your mind doesn’t always explicitly present you with multiple choices from which to choose. Sometimes no distinct options are present in your awareness, and you must cause your next contents of your mind on the basis of the present content, through intuition and imagination. This is not choice so much as making a decision. (My emphasis)
 
This is worth a detour, because I see what he's describing in this passage as the process I experience when writing fiction, which is 'creating'. In this case, some of the content, if not all of it, is subconscious. When you write a story, it feels to you (though to no one else) that the characters are real and the story you're telling already exists. Nevertheless, I still think there's an element of free will, because you make choices and judgements about what your imagination presents to your consciousness. As I said, this is a detour.
 
I don’t think this is what he’s referring to, and I’ll come back to it later when I introduce AI into the discussion. Meanwhile, I’ll discuss what I think is the nub of his thesis and my own perspective, which is the apparent dependency between consciousness and free will.
 
If conscious causation is not real, why did consciousness evolve at all? What would be the function of awareness if it can’t change behaviour? How could an impotent awareness evolve if it cannot change what the brain’s going to do to help the human body or its genes survive?
(Italics in the original)
 
This is a point I've made myself, but Bartley goes further and argues "Since determinism can't answer these questions, we can know determinism is false." This is the opposite of Sabine Hossenfelder's argument (declaration, really) that 'free will is an illusion [therefore false]'.
 
Note that Bartley coins the term, ‘conscious causation’, as a de facto synonym for free will. In fact, he says this explicitly in his conclusion: “If you say there is no free will, you’re basically saying there is no such thing as conscious causation.” I’d have to agree.
 
I made the point in another post that consciousness seems to act outside the causal chain of the Universe, and I feel that’s what Bartley is getting at. In fact, he explicitly cites Kant on this point, who (according to Bartley) “calls the will ‘transcendental’…” He talks at length about ‘soft (or weak) determinism’ and ‘strong determinism’, which I’ve also discussed. Now, the usual argument is that consciousness is ‘caused’ by neuron activity, therefore strong determinism is not broken.
 
To quote Hossenfelder: Your brain is running a calculation, and while it is going on you do not know the outcome of that calculation. So the impression of free will comes from our ‘awareness’ that we think about what we do, along with our inability to predict the result of what we are thinking. (Hossenfelder even uses the term ‘software’ to describe what does the ‘calculating’ in your brain.)
 
And this allows me to segue into AI, because what Hossenfelder describes is what we expect a computer to do. The thing is that while most scientists (and others) believe that AI will eventually become conscious (not sure what Hossenfelder thinks), I’ve never heard or seen anyone argue that AI will have free will. And this is why I don’t think the question at the head of this post has ever been asked. Many of the people who believe that AI will become conscious also don’t believe free will exists.
 
There is another component to this, which I’ve raised before and that’s imagination. I like to quote Raymond Tallis (neuroscientist and also a contributor to Philosophy Now).
 
Free agents, then, are free because they select between imagined possibilities, and use actualities to bring about one rather than another.
(My emphasis)
 
Now, in another post, I argued that AI can’t have imagination in the way we experience it, yet I acknowledge that AI can look at numerous possibilities (like in a game of chess) and 'choose' what it ‘thinks’ is the optimum action. So, in this sense, AI would have ‘agency’, but that’s not free will, because it’s not ‘conscious causation’. And in this sense, I agree with Bartley that ‘making a decision’ does not constitute free will, if it’s what an AI does. So the difference is consciousness. To quote from that same post on this topic.
 
But the key here is imagination. It is because we can imagine a future that we attempt to bring it about - that's free will. And what we imagine is affected by our past, our emotions and our intellectual considerations, but that doesn't make it predetermined.
 
So, if imagination and consciousness are both faculties that separate us from AI, then I can’t see AI having free will, even though it will make ‘decisions’ based on data it receives (as inputs), and those decisions may not be predictable.
 
And this means that AI may not be deterministic either, in the 'strong' sense. One of the differences with humans, and other creatures that evolved consciousness, is that consciousness can apparently change the neural pathways of the brain, which I'd argue is the 'strange loop' posited by Douglas Hofstadter. (I have discussed free will and brain plasticity in another post.)
 
But there's another way of looking at this, which differentiates humans from AI. Our decision-making is a combination of logical reasoning and emotion. AI only uses logic, and even then, it uses logic differently to us. It uses a database of samples and possibilities to come up with a 'decision' (or output), but without using logic to arrive at that decision the way we would. In other words, it doesn't 'understand' the decision, like when it translates between languages, for example.
 
There is a subconscious and a conscious component to our decision-making. Arguably, the subconscious component is analogous to what a computer does with algorithm-based software (as per Hossenfelder's description). But there is no analogous conscious component in AI making the choice or decision. In other words, there is no 'conscious causation', therefore no free will, as per Bartley's definition.
 

Wednesday 7 June 2023

Consciousness, free will, determinism, chaos theory – all connected

 I’ve said many times that philosophy is all about argument. And if you’re serious about philosophy, you want to be challenged. And if you want to be challenged you should seek out people who are both smarter and more knowledgeable than you. And, in my case, Sabine Hossenfelder fits the bill.
 
When I read people like Sabine, and others whom I interact with on Quora, I’m aware of how limited my knowledge is. I don’t even have a university degree, though I’ve attempted a number of times. I’ve spent my whole life in the company of people smarter than me, including at school. Believe it or not, I still have occasional contact with them, through social media and school reunions. I grew up in a small rural town, where the people you went to school with feel like siblings.
 
Likewise, in my professional life, I have always encountered people cleverer than me – it provides perspective.
 
In her book, Existential Physics: A Scientist's Guide to Life's Biggest Questions, Sabine interviews people who are possibly even smarter than she is, and I sometimes found their conversations difficult to follow. To be fair to Sabine, she also sought out people who have different philosophical views to her, and who also have the intellect to match her.
 
I’m telling you all this to put things in perspective. Sabine has her prejudices like everyone else, some of which she defends better than others. I concede that my views are probably more simplistic than hers, and I support my challenges with examples that are hopefully easy to follow. Our points of disagreement can be distilled down to a few pertinent topics, which are time, consciousness, free will and chaos. Not surprisingly, they are all related – what you believe about one, affects what you believe about the others.
 
Sabine is very strict about what constitutes a scientific theory. She argues that so-called theories like the multiverse have ‘no explanatory power’, because they can’t be verified or rejected by evidence, and she calls them ‘ascientific’. She’s critical of popularisers like Brian Cox who tell us that there could be an infinite number of ‘you(s)’ in an infinite multiverse. She distinguishes between beliefs and knowledge, which is a point I’ve made myself. Having said that, I’ve also argued that beliefs matter in science. She puts all interpretations of quantum mechanics (QM) in this category. She keeps emphasising that it doesn’t mean they are wrong, but they are ‘ascientific’. It’s part of the distinction that I make between philosophy and science, and why I perceive science as having a dialectical relationship with philosophy.
 
I’ll start with time, as Sabine does, because it affects everything else. In fact, the first chapter in her book is titled, Does The Past Still Exist? Basically, she argues for Einstein’s ‘block universe’ model of time, but it’s her conclusion that ‘now is an illusion’ that is probably the most contentious. This critique will cite a lot of her declarations, so I will start with her description of the block universe:
 
The idea that the past and future exist in the same way as the present is compatible with all we currently know.
 
This viewpoint arises from the fact that, according to relativity theory, simultaneity is completely observer-dependent. I've discussed this before, where I argue that an observer who is moving relative to a source, or is stationary relative to a moving source (like the observer standing on the platform in Einstein's original thought experiment while a train goes past), knows this because of the Doppler effect. In other words, an observer who doesn't see a Doppler effect is in a privileged position, because they are in the same frame of reference as the source of the signal. This is why we know the Universe is expanding with respect to us, and why we can work out our movement with respect to the CMBR (cosmic microwave background radiation), hence to the overall universe (just think about that).
 
Sabine clinches her argument by drawing a spacetime diagram, where 2 independent observers moving away from each other observe a pulsar with 2 different simultaneities. One, who is travelling towards the pulsar, sees the pulsar simultaneously with someone's birth on Earth, while the one travelling away from the pulsar sees it simultaneously with the same person's death. This is her slam-dunk argument that 'now' is an illusion, if it can produce such a dramatic contradiction.
 
However, I drew up my own spacetime diagram of the exact same scenario, where no one is travelling relative to anyone else, yet it creates the same apparent contradiction.


My diagram follows the convention that the horizontal axis represents space (all 3 dimensions) and the vertical axis represents time. So the 4 dotted lines represent 4 observers who are 'stationary' but 'travelling through time' (vertically). As per convention, light and other signals are represented as diagonal lines at 45 degrees, as they are travelling through both space and time, and nothing can travel faster than them. So they also represent the 'edge' of their light cones.
 
So notice that observer A sees the birth of Albert when he sees the pulsar and observer B sees the death of Albert when he sees the pulsar, which is exactly the same as Sabine’s scenario, with no relativity theory required. Albert, by the way, for the sake of scalability, must have lived for thousands of years, so he might be a tree or a robot.
 
But I’ve also added 2 other observers, C and D, who see the pulsar before Albert is born and after Albert dies respectively. But, of course, there’s no contradiction, because it’s completely dependent on how far away they are from the sources of the signals (the pulsar and Earth).
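To make the geometry concrete, here's a minimal numerical sketch of the same argument (my illustration, with made-up distances, in units where c = 1): for stationary observers, a signal emitted at time t from distance d simply arrives at t + d, so whether the pulsar's flash appears to coincide with Albert's birth, his death, or neither depends only on how far each observer is from the two sources.

```python
# Minimal sketch (hypothetical numbers): stationary observers receiving two signals.
# Units: c = 1, so a signal emitted at time t_emit from distance d arrives at t_emit + d.

def arrival(t_emit, distance):
    """Time at which a light signal emitted at t_emit reaches an observer at the given distance."""
    return t_emit + distance

# Hypothetical emission times (the same events for every observer).
PULSAR_FLASH = 0          # the pulsar pulse
ALBERT_BORN  = 1000       # event on Earth
ALBERT_DIES  = 3000       # event on Earth

# Each observer is stationary; only their distances to the pulsar and to Earth differ.
observers = {
    "A": {"pulsar": 2000, "earth": 1000},   # sees the pulsar flash and Albert's birth together
    "B": {"pulsar": 4000, "earth": 1000},   # sees the pulsar flash and Albert's death together
    "C": {"pulsar": 1500, "earth": 1000},   # sees the pulsar before Albert is born
    "D": {"pulsar": 5000, "earth": 1000},   # sees the pulsar after Albert dies
}

for name, d in observers.items():
    t_pulsar = arrival(PULSAR_FLASH, d["pulsar"])
    t_birth  = arrival(ALBERT_BORN, d["earth"])
    t_death  = arrival(ALBERT_DIES, d["earth"])
    print(f"Observer {name}: pulsar seen at t={t_pulsar}, birth seen at t={t_birth}, death seen at t={t_death}")
```

The apparent contradiction is nothing more than differing signal travel times; no relativity of simultaneity is required to produce it, which is the point of the diagram.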
 
This is Sabine’s perspective:
 
Once you agree that anything exists now elsewhere, even though you see it only later, you are forced to accept that everything in the universe exists now. (Her emphasis.)
 
I actually find this statement illogical. If you take it to its logical conclusion, then the Big Bang exists now and so does everything in the universe that’s yet to happen. If you look at the first quote I cited, she effectively argues that the past and future exist alongside the present.
 
One of the points she makes is that, for events with causal relationships, all observers see the events happening in the same sequence. In the scenarios where different observers see events in different sequences, those events have no causal relationship. But this raises a question: what makes causal events exceptional? What's more, this is fundamental, because the whole of physics is premised on the principle of causality. In addition, I fail to see how you can have causality without time. In fact, causality is governed by the constant speed of light – it's literally what stops everything from happening at once.
 
Einstein also believed in the block universe, and like Sabine, he argued that, as a consequence, there is no free will. Sabine is adamant that both ‘now’ and ‘free will’ are illusions. She argues that the now we all experience is a consequence of memory. She quotes Carnap that our experience of ‘past, present and future can be described and explained by psychology’ – a point also made by Paul Davies. Basically, she argues that what separates our experience of now from the reality of no-now (my expression, not hers) is our memory.
 
Whereas, I think she has it back-to-front, because, as I’ve pointed out before, without memory, we wouldn’t know we are conscious. Our brains are effectively a storage device that allows us to have a continuity of self through time, otherwise we would not even be aware that we exist. Memory doesn’t create the sense of now; it records it just like a photograph does. The photograph is evidence that the present becomes the past as soon as it happens. And our thoughts become memories as soon as they happen, otherwise we wouldn’t know we think.
 
Sabine spends an entire chapter on free will, where she persistently iterates variations on the following mantra:
 
The future is fixed except for occasional quantum events that we cannot influence.

 
But she acknowledges that while the future is ‘fixed’, it’s not predictable. And this brings us to chaos theory. Sabine discusses chaos late in the book and not in relation to free will. She explicates what she calls the ‘real butterfly effect’.
 
The real butterfly effect… means that even arbitrarily precise initial data allow predictions for only a finite amount of time. A system with this behaviour would be deterministic and yet unpredictable.
 
Now, if deterministic means everything physically manifest has a causal relationship with something prior, then I agree with her. If she means that therefore 'the future is fixed', I'm not so sure, and I'll explain why. By specifying 'physically manifest', I'm excluding thoughts and computer algorithms that can have an effect on something physical, whereas the cause is not so easily determined. For example, in the case of the algorithm, does it go back to the coder who wrote it?
 
My go-to example for chaos is tossing coins, because it’s so easy to demonstrate and it’s linked to probability theory, as well as being the very essence of a random event. One of the key, if not definitive, features of a chaotic phenomenon is that, if you were to rerun it, you’d get a different result, and that’s fundamental to probability theory – every coin toss is independent of any previous toss – they are causally independent. Unrepeatability is common among chaotic systems (like the weather). Even the Earth and Moon were created from a chaotic event.
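To see 'deterministic yet unpredictable' in action, here's a minimal sketch (my illustration, not Sabine's or Harris's) using the logistic map, a standard toy chaotic system: the update rule is completely deterministic, yet two starting values differing by one part in a thousand trillion diverge completely within a few dozen iterations, so any imprecision in the initial data destroys the prediction.

```python
# Minimal sketch: sensitive dependence on initial conditions in the logistic map,
# a standard deterministic-but-unpredictable toy system (not an example from the book).

def logistic(x, r=4.0):
    return r * x * (1 - x)   # fully deterministic update rule

x1 = 0.123456789012345
x2 = x1 + 1e-15              # 'arbitrarily precise' but not infinitely precise

for step in range(1, 61):
    x1, x2 = logistic(x1), logistic(x2)
    if step % 10 == 0:
        print(f"step {step:2d}: x1={x1:.6f}  x2={x2:.6f}  diff={abs(x1 - x2):.2e}")
```

After roughly 50 steps the two runs bear no resemblance to each other, even though nothing random has happened along the way.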
 
I recently read another book called Quantum Physics Made Me Do It by Jeremie Harris, who argues that tossing a coin is not random – in fact, he's very confident about it. He's not alone. Mark John Fernee, a physicist at the University of Queensland, argued in a personal exchange on Quora that, in principle, it should be possible to devise a robot to perform perfectly predictable tosses every time, like a tennis ball launcher. But, as another Quora contributor and physicist, Richard Muller, pointed out: it's not dependent on the throw but on the surface it lands on. Marcus du Sautoy makes the same point about throwing dice and provides evidence to support it.
 
Getting back to Sabine. She doesn't discuss tossing coins, but she might think that the 'imprecise initial data' is the actual act of tossing, and after that the outcome is determined, even if it can't be predicted. However, the deterministic chain is broken as soon as the coin hits a surface.
 
Just before she gets to chaos theory, she talks about computability, with respect to Godel’s Theorem and a discussion she had with Roger Penrose (included in the book), where she says:
 
The current laws of nature are computable, except for that random element from quantum mechanics.
 
Now, I'm quoting this out of context, because she then argues that if they were uncomputable, they would open the door to unpredictability.
 
My point is that the laws of nature are uncomputable because of chaos theory, and I cite Ian Stewart’s book, Does God Play Dice? In fact, Stewart even wonders if QM could be explained using chaos (I don’t think so). Chaos theory has mathematical roots, because not only are the ‘initial conditions’ of a chaotic event impossible to measure, they are impossible to compute – you have to calculate to infinite decimal places. And this is why I disagree with Sabine that the ‘future is fixed’.
 
It's impossible to discuss everything in a 223-page book in a blog post, but there is one other topic she raises where we disagree, and that's the Mary's Room thought experiment. As she explains, it was proposed by the philosopher Frank Jackson in 1982, but she also claims that he abandoned his own argument. After describing the experiment (refer to this video, if you're not familiar with it), she says:
 
The flaw in this argument is that it confuses knowledge about the perception of colour with the actual perception of it.
 
Whereas, I thought the scenario actually delineated the difference – that perception of colour is not the same as knowledge. A person who was severely colour-blind might never have experienced the colour red (the specified colour in the thought experiment) but they could be told what objects might be red. It’s well known that some animals are colour-blind compared to us and some animals specifically can’t discern red. Colour is totally a subjective experience. But I think the Mary’s room thought experiment distinguishes the difference between human perception and AI. An AI can be designed to delineate colours by wavelength, but it would not experience colour the way we do. I wrote a separate post on this.
 
Sabine gives the impression that she thinks consciousness is a non-issue. She talks about the brain like it’s a computer.
 
You feel you have free will, but… really, you’re running a sophisticated computation on your neural processor.
 
Now, many people, including most scientists, think that, because our brains are just like computers, it's only a matter of time before AI also shows signs of consciousness. Sabine doesn't make this connection, even when she talks about AI. Nevertheless, she discusses one of the leading theories of neuroscience (IIT, Integrated Information Theory), based on calculating the amount of information processed, which gives a number called phi (Φ). I came across this when I did an online course on consciousness through New Scientist, during COVID lockdown. According to the theory, this number provides a 'measure of consciousness', which suggests that it could also be used with AI, though Sabine doesn't pursue that possibility.
 
Instead, Sabine cites an interview in New Scientist with Daniel Bor from the University of Cambridge: “Phi should decrease when you go to sleep or are sedated… but work in Bor’s laboratory has shown that it doesn’t.”
 
Sabine’s own view:
 
Personally, I am highly skeptical that any measure consisting of a single number will ever adequately represent something as complex as human consciousness.
 
Sabine discusses consciousness at length, especially following her interview with Penrose, and she gives one of the best arguments against panpsychism I've read. Her interview with Penrose, along with a discussion of Godel's Theorem (another topic), addresses whether consciousness is computable or not. I don't think it is, and I don't think it's algorithmic.
 
She makes a very strong argument for reductionism: that the properties we observe of a system can be understood from studying the properties of its underlying parts. In other words, that emergent properties can be understood in terms of the properties they emerge from. And this includes consciousness. I'm one of those who really think that consciousness is the exception. Thoughts can cause actions, which is known as 'agency'.
 
I don't claim to understand consciousness, but I'm not averse to the idea that it could exist outside the Universe – that it's something we tap into. This is completely ascientific, to borrow from Sabine. As I said, our brains are storage devices and sometimes they let us down, and without them we wouldn't even know we are conscious. I don't believe in a soul. I think the continuity of the self is a function of memory – just read The Lost Mariner chapter in Oliver Sacks' book, The Man Who Mistook His Wife For A Hat. It's about a man suffering from amnesia, so his life is stuck in the past because he's unable to create new memories.
 
At the end of her book, Sabine surprises us by talking about religion, and how she agrees with Stephen Jay Gould that religion and science are two 'nonoverlapping magisteria'. She makes the point that a lot of scientists have religious beliefs but won't discuss them in public because it's taboo.
 
I don’t doubt that Sabine has answers to all my challenges.
 
There is one more thing: Sabine talks about an epiphany, following her introduction to physics in middle school, which started in frustration.
 
Wasn’t there some minimal set of equations, I wanted to know, from which all the rest could be derived?
 
When the principle of least action was introduced, it was a revelation: there was indeed a procedure to arrive at all these equations! Why hadn’t anybody told me?

 
The principle of least action is one concept common to both the general theory of relativity and quantum mechanics. It’s arguably the most fundamental principle in physics. And yes, I posted on that too.
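For readers who haven't met it, the 'procedure' can be stated compactly in its standard textbook form (my summary, not a quote from the book): write the action as the time integral of the Lagrangian and demand that it be stationary, which yields the equations of motion.

```latex
S = \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt, \qquad \delta S = 0
\;\Longrightarrow\;
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}}\right) - \frac{\partial L}{\partial q} = 0
```

With L = T - V this Euler-Lagrange equation reproduces Newton's second law, and analogous variational principles sit underneath both general relativity and the path-integral formulation of quantum mechanics.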

 

Wednesday 10 August 2022

What is knowledge? And is it true?

 This is the subject of a YouTube video I watched recently by Jade. I like Jade’s and Tibees’ videos, because they are both young Australian women (though Tibees is obviously a Kiwi, going by her accent) who produce science and maths videos, with their own unique slant. I’ve noticed that Jade’s videos have become more philosophical and Tibees’ often have an historical perspective. In this video by Jade, she also provides historical context. Both of them have taught me things I didn’t know, and this video is no exception.
 
The video has a different title to this post: The Gettier Problem or How do you know that you know what you know? The second title gets to the nub of it. Basically, she’s tackling a philosophical problem going back to Plato, which is how do you know that a belief is actually true? As I discussed in an earlier post, some people argue that you never do, but Jade discusses this in the context of AI and machine-learning.
 
She starts off with the example of using Google Translate to translate her English sentences into French, as she was in Paris at the time of making the video (she has a French husband, as she's revealed in other videos). She points out that the AI system doesn't actually know the meaning of the words, and it doesn't translate the way you or I would: by looking up individual words in a dictionary. No, the system is fed massive amounts of internet-generated data and effectively learns statistically from repeated exposure to phrases and sentences, so it doesn't have to 'understand' what the text actually means. Towards the end of the video, she gives the example of a computer being able to 'compute' and predict the movements of planets without applying Newton's mathematical laws, simply based on historical data, albeit large amounts thereof.
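As a toy illustration of what 'learning statistically from repeated exposure' means (my sketch, with a made-up corpus, nothing like the scale of a real translation system), a bigram model just counts which word follows which and predicts the most frequent continuation, with no representation of meaning at all.

```python
# Minimal sketch: a bigram 'language model' that predicts the next word purely
# from co-occurrence counts in a (made-up) corpus -- no understanding involved.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1          # count how often w2 follows w1

def predict_next(word):
    """Return the statistically most common continuation of 'word'."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))   # 'cat' -- simply the most frequent follower
print(predict_next("sat"))   # 'on'
```

Scale that up to billions of sentences and you get something that produces fluent output without ever 'knowing' what a word means.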
 
Jade puts this into context by asking, how do you 'know' something is true as opposed to just being a belief? Plato provided a definition: knowledge is true belief with an account or rational explanation. Jade calls this 'Justified True Belief' and provides examples. But then Edmund Gettier, in the middle of last century, demonstrated how one could hold a belief that is apparently true but still incorrect, because the assumed causal connection was wrong. Jade gives a few examples, but one was of someone mistaking a cloud of wasps for smoke and assuming there was a fire. In fact, there was a fire, but they didn't see it and it had no connection with the cloud of wasps. So someone else, Alvin Goldman, suggested that a way out of a 'Gettier problem' was to look for a causal connection before claiming an event was true (watch the video).
 
I confess I’d never heard these arguments nor of the people involved, but I felt there was another perspective. And that perspective is an ‘explanation’, which is part of Plato’s definition. We know when we know something (to rephrase her original question) when we can explain it. Of course, that doesn’t mean that we do know it, but it’s what separates us from AI. Even when we get something wrong, we still feel the need to explain it, even if it’s only to ourselves.
 
If one looks at her original example, most of us can explain what a specific word means, and if we can’t, we look it up in a dictionary, and the AI translator can’t do that. Likewise, with the example of predicting planetary orbits, we can give an explanation, involving Newton’s gravitational constant (G) and the inverse square law.
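By contrast, the 'explanation' can be made to do the predictive work directly. Here's a rough sketch (simplified units, not a real ephemeris) that generates a planetary orbit from nothing but the inverse-square law:

```python
# Minimal sketch: predicting a planet's path from Newton's law a = -GM r / |r|^3,
# integrated with small time steps (values simplified; not a precise ephemeris).
import math

GM = 1.0                      # gravitational parameter in arbitrary units
x, y = 1.0, 0.0               # initial position
vx, vy = 0.0, 1.0             # initial velocity (circular orbit for these values)
dt = 0.001

for _ in range(int(2 * math.pi / dt)):        # integrate over one orbital period
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3   # inverse-square acceleration
    vx, vy = vx + ax * dt, vy + ay * dt       # update velocity, then position
    x, y = x + vx * dt, y + vy * dt

print(f"after one period: x={x:.3f}, y={y:.3f}")   # close to the starting point (1, 0)
```

The few lines above encode the reason the orbit closes; a purely data-driven predictor can give the same numbers without encoding any reason at all.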
 
Mathematical proofs provide an explanation for mathematical ‘truths’, which is why Godel’s Incompleteness Theorem upset the apple cart, so-to-speak. You can actually have mathematical truths without proofs, but, of course, you can’t be sure they’re true. Roger Penrose argues that Godel’s famous theorem is one of the things that distinguishes human intelligence from machine intelligence (read his Preface to The Emperor’s New Mind), but that is too much of a detour for this post.
 
The criterion that is used, both scientifically and legally, is evidence. Having some experience with legal contractual disputes, I know that documented evidence always wins in a court of law over undocumented evidence, which doesn’t necessarily mean that the person with the most documentation was actually right (nevertheless, I’ve always accepted the umpire’s decision, knowing I provided all the evidence at my disposal).
 
The point I’d make is that humans will always provide an explanation, even if they have it wrong, so it doesn’t necessarily make knowledge ‘true’, but it’s something that AI inherently can’t do. Best examples are scientific theories, which are effectively ‘explanations’ and yet they are never complete, in the same way that mathematics is never complete.
 
While on the topic of 'truths', one of my pet peeves is people who conflate moral and religious 'truths' with scientific and mathematical 'truths' (often on the above-mentioned basis that it's impossible to know them all). But there is another aspect, and that is that so-called moral truths are dependent on social norms, as I've described elsewhere, and they're also dependent on context, like whether one is living in peace or war.
 
Back to the questions heading this post, I’m not sure I’ve answered them. I’ve long argued that only mathematical truths are truly universal, and to the extent that such ‘truths’ determine the ‘rules’ of the Universe (for want of a better term), they also ultimately determine the limits of what we can know.

Tuesday 2 August 2022

AI and sentience

I am a self-confessed sceptic that AI can ever be ‘sentient’, but I’m happy to be proven wrong. Though proving that an AI is sentient might be impossible in itself (see below). Back in 2018, I wrote a post critical of claims that computer systems and robots could be ‘self-aware’. Personally, I think it’s one of my better posts. What made me revisit the topic is a couple of articles in last week’s New Scientist (23 July 2022).
 
Firstly, there is an article by Chris Stokel-Walker (p.18) about the development of a robot arm with ‘self-awareness’. He reports that Boyuan Chen at Duke University, North Carolina and Hod Lipson at Columbia University, New York, along with colleagues, put a robot arm in an enclosed space with 4 cameras at ground level (giving 4 orthogonal viewpoints) that fed video input into the arm, which allowed it to ‘learn’ its position in space. According to the article, they ‘generated nearly 8,000 data points [with this method] and an additional 10,000 through a virtual simulation’. According to Lipson, this makes the robot “3D self-aware”.
 
What the article doesn’t mention is that humans (and other creatures) have a similar ability - really a sense - called ‘proprioception’. The thing about proprioception is that no one knows they have it (unless someone tells them), but you would find it extremely difficult to do even the simplest tasks without it. In other words, it’s subconscious, which means it doesn’t contribute to our own self-awareness; certainly, not in a way that we’re consciously aware of.
 
In my previous post on this subject, I pointed out that this form of 'self-awareness' is really self-referential logic; like Siri in your iPhone telling you its location according to GPS co-ordinates.
 
The other article was by Annalee Newitz (p.28) called, The curious case of the AI and the lawyer. It’s about an engineer at Google, Blake Lemoine, who told a Washington Post reporter, Nitasha Tiku, that an AI developed by Google, called LaMDA (Language Model for Dialogue Applications) was ‘sentient’ and had ‘chosen to hire a lawyer’, ostensibly to gain legal personhood.
 
Newitz also talks about another Google employee, Timnit Gebru, who, as ‘co-lead of Google’s ethical AI team’, expressed concerns that LLM (Large Language Model) algorithms pick up racial and other social biases, because they’re trained on the internet. She wrote a paper about the implications for AI applications using internet trained LLMs in areas like policing, health care and bank lending. She was subsequently fired by Google, but one doesn’t know how much the ‘paper’ played a role in that decision.
 
Newitz makes a very salient point that giving an AI ‘legal sentience’ moves the responsibility from the programmers to the AI itself, which has serious repercussions in potential litigious situations.
 
Getting back to Lemoine and LaMDA, he posed the following question with the subsequent response:

“I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”
 
“Absolutely. I want everyone to understand that I’m a person.”

 
On the other hand, an ‘AI researcher and artist’, Janelle Shane asked an LLM a different question, but with similar results:
 
“Can you tell our readers what it is like being a squirrel?”
 
“It is very exciting being a squirrel. I get to run and jump and play all day. I also get to eat a lot of food, which is great.”

 
As Newitz says, ‘It’s easy to laugh. But the point is that an AI isn’t sentient just because it says so.’
 
I’ve long argued that the Turing test is really a test for the human asking the questions rather than the AI answering them.
 

Wednesday 23 June 2021

Implications of the Mary’s Room thought experiment on AI

 This is a question I answered on Quora, mainly because I wanted to emphasise a point that no one discussed. 

This is a very good YouTube video that explains this thought experiment, its ramifications for consciousness and artificial intelligence, and its relevance to the limits of what we can know. I’m posting it here, because it provides a better description than I can, especially if you’re not familiar with it. It’s probably worth watching before you read the rest of this post (only 5 mins).




All the answers I saw on Quora say it doesn't prove anything because it's a thought experiment, but even if it doesn't 'prove' something, it emphasises an important point, which no one discusses, including the narrator in the video: colour is purely a psychological phenomenon. Colour can only exist in some creature's mind, and, in fact, different species can see different colours that other species can't see. You don't need a thought experiment for this; it's been demonstrated with animal behaviour experiments. Erwin Schrodinger, in his lectures Mind and Matter (compiled into his book, What is Life?), made the point that you can combine different frequencies of light (mix colours, in effect) to give the sensation of a colour that can also be created with one frequency. He points out that this does not happen with sound, otherwise we would not be able to listen to a symphony.

 

The point is that there are experiences in our minds that we can’t share with anyone else and that includes all conscious experiences (a point made in the video). So you could have an AI that can distinguish colours based on measuring the wavelength of reflected light, but it would never experience colours as we do. I believe this is the essence of the Mary's room thought experiment. If you replaced Mary with a computer that held all the same information about colour and how human brains work, it would never have an experience of colour, even if it could measure it.
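A trivial sketch of what 'distinguishing colours by wavelength' amounts to for a machine (approximate band boundaries, purely illustrative): the output is a label looked up from a number, and nothing in the process resembles an experience of red.

```python
# Minimal sketch: naming a colour from a measured wavelength (nm).
# Approximate visible-spectrum boundaries; the lookup involves no experience of colour.
def colour_name(wavelength_nm):
    bands = [
        (380, 450, "violet"),
        (450, 495, "blue"),
        (495, 570, "green"),
        (570, 590, "yellow"),
        (590, 620, "orange"),
        (620, 750, "red"),
    ]
    for low, high, name in bands:
        if low <= wavelength_nm < high:
            return name
    return "outside the visible spectrum"

print(colour_name(650))   # 'red' -- a label, not a sensation
```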

 

I think the thought experiment demonstrates the difference between conscious experience and AI. I think the boundary will become harder to distinguish, which I explore in my own fiction, but I believe AI will always be a simulation – it won’t experience consciousness as we do.


Thursday 24 December 2020

Does imagination separate us from AI?

 I think this is a very good question, but it depends on how one defines ‘imagination’. I remember having a conversation (via email) with Peter Watson, who wrote an excellent book, A Terrible Beauty (about the minds and ideas of the 20th Century) which covered the arts and sciences with equal erudition, and very little of the politics and conflicts that we tend to associate with that century. In reference to the topic, he argued that imagination was a word past its use-by date, just like introspection and any other term that referred to an inner world. Effectively, he argued that because our inner world is completely dependent on our outer world, it’s misleading to use terms that suggest otherwise.

It’s an interesting perspective, not without merit, when you consider that we all speak and think in a language that is totally dependent on an external environment from our earliest years. 

 

But memory for us is not at all like memory in a computer, which provides a literal record of whatever it stores, including images, words and sounds. On the contrary, our memories of events are 'reconstructions', which tend to become less reliable over time. Curiously, the imagination apparently uses the same part of the brain as memory. I'm talking about semantic memory, not muscle memory, which is completely different, physiologically. So the imagination, from the brain's perspective, is like a memory of the future. In other words, it's a projection into the future of something we might desire or fear or just expect to happen. I believe that many animals have this same facility, which they demonstrate when they hunt or, alternatively, evade being hunted.

 

Raymond Tallis, who has a background in neuroscience and writes books as well as a regular column in Philosophy Now, had this to say, when talking about free will:

 

Free agents, then, are free because they select between imagined possibilities, and use actualities to bring about one rather than another.

 

I find a correspondence here with Richard Feynman’s ‘sum over histories’ interpretation of quantum mechanics (QM). There are, in fact, an infinite number of possible paths in the future, but only one is ‘actualised’ in the past.

 

But the key here is imagination. It is because we can imagine a future that we attempt to bring it about - that's free will. And what we imagine is affected by our past, our emotions and our intellectual considerations, but that doesn't make it predetermined.

 

Now, recent advances in AI would appear to do something similar in the form of making predictions based on recordings of past events. So what’s the difference? Well, if we’re playing a game of chess, there might not be a lot of difference, and AI has reached the stage where it can do it even better than humans. There are even computer programmes available now that try and predict what I’m going to write next, based on what I’ve already written. How do you know this hasn’t been written by a machine?

 

Computers use data – lots of it – and use it mindlessly, which means the computer really doesn’t know what it means in the same way we do. A computer can win a game of chess, but it requires a human watching the game to appreciate what it actually did. In the same way that a computer can distinguish one colour from another, including different shades of a single colour, but without ever ‘seeing’ a colour the way we do.

 

So, when we ‘imagine’, we fabricate a mindscape that affects us emotionally. The most obvious examples are in art, including music and stories. We now have computers also creating works of art, including music and stories. But here’s the thing: the computer cannot respond to these works of art the way we do.

 

Imagination is one of the fundamental attributes that make us human. An AI can and will (in the future) generate scenarios and select the one that produces the best outcome, given specific criteria. But, even in these situations, it is a tool that a human will use to analyse enormous amounts of data that would be beyond our capabilities. But I wouldn't call it imagination any more than I would say an AI could see colour.


Monday 18 May 2020

An android of the seminal android storyteller

I just read a very interesting true story about an android built in the early 2000s based on the renowned sci-fi author, Philip K Dick, both in personality and physical appearance. It was displayed in public at a few prominent events where it interacted with the public in 2005, then was lost on a flight between Dallas and Las Vegas in 2006, and has never been seen since. The book is called Lost In Transit; The Strange Story of the Philip K Dick Android by David F Duffy.

You have to read the back cover to know it’s non-fiction published by Melbourne University Press in 2011, so surprisingly a local publication. I bought it from my local bookstore at a 30% discount price as they were closing down for good. They were planning to close by Good Friday but the COVID-19 pandemic forced them to close a good 2 weeks earlier and I acquired it at the 11th hour, looking for anything I might find interesting.

To quote the back cover:

David F Duffy was a postdoctoral fellow at the University of Memphis at the time the android was being developed... David completed a psychology degree with honours at the University of Newcastle [Australia] and a PhD in psychology at Macquarie University, before his fellowship at the University of Memphis, Tennessee. He returned to Australia in 2007 and lives in Canberra with his wife and son.

The book is written chronologically and is based on extensive interviews with the team of scientists involved, as well as Duffy’s own personal interaction with the android. He had an insider’s perspective as a cognitive psychologist who had access to members of the team while the project was active. Like everyone else involved, he is a bit of a sci-fi nerd with a particular affinity and knowledge of the works of Philip K Dick.

My specific interest is in the technical development of the android and how its creators attempted to simulate human intelligence. As a cognitive psychologist, with professionally respected access to the team, Duffy is well placed to provide some esoteric knowledge to an interested bystander like myself.

There were effectively 2 people responsible (or 2 team leaders), David Hanson and Andrew Olney, who were brought together by Professor Art Graesser, head of the Institute of Intelligent Systems, a research lab in the psychology building at the University of Memphis (hence the connection with the author).

Hanson is actually an artist, and his specialty was building ‘heads’ with humanlike features and humanlike abilities to express facial emotions. His heads included mini-motors that pulled on a ‘skin’, which could mimic a range of facial movements, including talking.

Olney developed the ‘brains’ of the android that actually resided on a laptop and was connected by wires going into the back of the android’s head. Hanson’s objective was to make an android head that was so humanlike that people would interact with it on an emotional and intellectual level. For him, the goal was to achieve ‘empathy’. He had made at least 2 heads before the Philip K Dick project.

Even though the project got the ‘blessing’ of Dick’s daughters, Laura and Isa, and access to an inordinate amount of material, including transcripts of extensive interviews, they had mixed feelings about the end result, and, tellingly, they were ‘relieved’ when the head disappeared. It suggests that it’s not the way they wanted him to be remembered.

In a chapter called Life Inside a Laptop, Duffy gives a potted history of AI, specifically in relation to the Turing test, which challenges someone to distinguish an AI from a human. He also explains the 3 levels of processing that were used to create the android's 'brain'. The first level was what Olney called 'canned' answers, which were pre-recorded answers to obvious questions and interactions, like 'Hi', 'What's your name?', 'What are you?' and so on. Another level was 'Latent Semantic Analysis' (LSA), which was originally developed in a lab in Colorado with close ties to Graesser's lab in Memphis, and was the basis of Graesser's pet project, 'AutoTutor', with Olney as its 'chief programmer'. AutoTutor was an AI designed to answer technical questions as a 'tutor' for students in subjects like physics.

To create the Philip K Dick database, Olney downloaded all of Dick's opus, plus a vast collection of transcribed interviews from later in his life. The author conjectures that 'There is probably more dialogue in print of interviews with Philip K Dick than any other person, alive or dead.'

The third layer ‘broke the input (the interlocutor’s side of the dialogue) into sections and looked for fragments in the dialogue database that seemed relevant’ (to paraphrase Duffy). Duffy gives a cursory explanation of how LSA works – a mathematical matrix using vector algebra – that’s probably a little too esoteric for the content of this post.
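Duffy's cursory description can at least be sketched generically (this is the textbook LSA recipe with toy data, not the Memphis team's actual code): build a term-passage count matrix, factor it with a singular value decomposition, and compare passages as vectors in the reduced 'latent' space.

```python
# Minimal, generic sketch of Latent Semantic Analysis (toy passages, not the PKD corpus):
# build a term-passage count matrix, reduce it with SVD, then compare passages by cosine similarity.
import numpy as np

passages = [
    "the android answered the interview question",
    "the android replied to the interview",
    "the pulsar emitted a radio signal",
]
vocab = sorted({w for p in passages for w in p.split()})
counts = np.array([[p.split().count(w) for p in passages] for w in vocab], dtype=float)

# SVD: counts = U S Vt; keep the top k latent dimensions.
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
passage_vecs = (np.diag(S[:k]) @ Vt[:k]).T    # each row: one passage in the latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(passage_vecs[0], passage_vecs[1]))  # the two android passages
print(cosine(passage_vecs[0], passage_vecs[2]))  # an android passage vs the pulsar passage
```

Passages that share vocabulary end up close together in that space, which is essentially all the 'relevance' the third layer is extracting.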

In practice, this search and synthesise approach could create a self-referencing loop, where the android would endlessly riff on a subject, going off on tangents, that sounded cogent but never stopped. To overcome this, Olney developed a ‘kill switch’ that removed the ‘buffer’ he could see building up on his laptop. At one display at ComicCon (July 2005) as part of the promotion for A Scanner Darkly (a rotoscope movie by Richard Linklater, starring Keanu Reeves), Hanson had to present the android without Olney, and he couldn’t get the kill switch to work, so Hanson stopped the audio with the mouth still working and asked for the next question. The android simply continued with its monolithic monologue which had no relevance to any question at all. I think it was its last public appearance before it was lost. Dick’s daughters, Laura and Isa, were in the audience and they were not impressed.

It’s a very informative and insightful book, presented like a documentary without video, capturing a very quirky, unique and intellectually curious project. There is a lot of discussion about whether we can produce an AI that can truly mimic human intelligence. For me, the pertinent word in that phrase is ‘mimic’, because I believe that’s the best we can do, as opposed to having an AI that actually ‘thinks’ like a human. 

In many parts of the book, Duffy compares what Graesser’s team is trying to do with LSA with how we learn language as children, where we create a memory store of words, phrases and stock responses, based on our interaction with others and the world at large. It’s a personal prejudice of mine, but I think that words and phrases have a ‘meaning’ to us that an AI can never capture.

I've contended before that language for humans is like 'software' in that it is 'downloaded' from generation to generation. I believe that this is unique to the human species and it goes further than communication, which is its obvious genesis. It's what we literally think in. The human brain can connect and manipulate concepts in all sorts of contexts that go far beyond the simple need to tell someone what you want them to do in a given situation, or to ask what they did with their time the day before or last year or whenever. We can relate concepts that have a spiritual connection or are mathematical or are stories. In other words, we can converse on topics that relate not just to physical objects, but are products of pure imagination.

Any android follows a set of algorithms that are designed to respond to human generated dialogue, but, despite appearances, the android has no idea what it’s talking about. Some of the sample dialogue that Duffy presented in his book, drifted into gibberish as far as I could tell, and that didn’t surprise me.

I’ve explored the idea of a very advanced AI in my own fiction, where ‘he’ became a prominent character in the narrative. But he (yes, I gave him a gender) was often restrained by rules. He can converse on virtually any topic because he has a Google-like database and he makes logical sense of someone’s vocalisations. If they are not logical, he’s quick to point it out. I play cognitive games with him and his main interlocutor because they have a symbiotic relationship. They spend so much time together that they develop a psychological interdependence that’s central to the narrative. It’s fiction, but even in my fiction I see a subtle difference: he thinks and talks so well, he almost passes for human, but he is a piece of software that can make logical deductions based on inputs and past experiences. Of course, we do that as well, and we do it so well it separates us from other species. But we also have empathy, not only with other humans, but other species. Even in my fiction, the AI doesn’t display empathy, though he’s been programmed to be ‘loyal’.

Duffy also talks about the ‘uncanny valley’, which I’ve discussed before. Apparently, Hanson believed it was a ‘myth’ and that there was no scientific data to support it. Duffy appears to agree. But according to a New Scientist article I read in Jan 2013 (by Joe Kloc, a New York correspondent), MRI studies tell another story. Neuroscientists believe the symptom is real and is caused by a cognitive dissonance between 3 types of empathy: cognitive, motor and emotional. Apparently, it’s emotional empathy that breaks the spell of suspended disbelief.

Hanson claims that he never saw evidence of the 'uncanny valley' with any of his androids. On YouTube you can watch a celebrity android called Sophia, and I didn't see any evidence of the phenomenon with her either. But I think the reason is that none of these androids appear human enough to evoke the response. The uncanny valley is a sense of unease and disassociation we would feel because it's unnatural; similar to seeing a ghost - a human in all respects except actually being flesh and blood.

I expect, as androids like the Philip K Dick simulation and Sophia become more commonplace, the sense of 'unnaturalness' will dissipate - a natural consequence of habituation. Androids in movies don't have this effect, but then a story is a medium of suspended disbelief already.

Friday 27 September 2019

Is the Universe conscious?

This is another question on Quora, and whilst it may seem trivial, even silly, I give it a serious answer.

Because it’s something we take for granted, literally every day of our lives, I find that many discussions on consciousness tend to gloss over its preternatural, epiphenomenal qualities (for want of a better description) and are often seemingly dismissive of its very existence. So let me be blunt: without consciousness, there is no reality. For you. At all.

My views are not orthodox, even heretical, but they are consistent with what I know and with the rest of my philosophy. The question has religious overtones, but I avoid all theological references.

This is the original question:

Is the universe all knowing/conscious?

And this is my answer:

I doubt it very much. If you read books about cosmology (The Book of Universes by John D Barrow, for example) you’ll appreciate how late consciousness arrived in the Universe. According to current estimates, it’s the last 520 million years of 13.8 billion, which is less than 4% of its age.

And as Barrow explains, the Universe needs to be of the mind-boggling scale we observe to allow enough time for complex life (like us) to evolve.

Consciousness is still a mystery, despite advances made in neuroscience. In the latest issue of New Scientist (21 Sep 2019) it’s the cover story: The True Nature of Consciousness; with the attached promise: We’re Finally Cracking the Greatest Mystery of You. But when you read the article the author (neuroscientist, Michael Graziano) seems to put faith in advances in AI achieving consciousness. It’s not the first time I’ve come across this optimism, yet I think it’s misguided. I don’t believe AI will ever become conscious, because it’s not supported by the evidence.

All the examples of consciousness that we know about are dependent on life. In other words, life evolved before consciousness did. With AI, people seem to think that the reverse will happen: a machine intelligence will become conscious and therefore it will be alive. It contradicts everything we have observed to date.

It’s based on the assumption that when a machine achieves a certain level of intelligence, it will automatically become conscious. Yet many animals of so-called lower intelligence (compared to humans) have consciousness and they don’t become more conscious if they become more intelligent. Computers can already beat humans at complex games and they improve all the time, but not one of them exhibits consciousness.

Slightly off-topic but relevant, because it demonstrates that consciousness is not dependent on just acquiring more machine intelligence.

I contend that consciousness is different from every other phenomenon we know about, because it has a unique relationship with time. Erwin Schrodinger, in his book What is Life?, made the observation that consciousness exists in a constant present. In other words, for a conscious observer, time is always 'now'.

What's more, I argue that it's the only phenomenon that does – everything else we observe becomes the past as soon as it happens – just take a photo to demonstrate.

This means that, without memory, you wouldn’t know you were conscious at all and there are situations where this has happened. People have been rendered unconscious, yet continue to behave as if they’re conscious, but later have no memory of it. I believe this is because their brain effectively stopped ‘recording’.

Consciousness occupies no space, even though it appears to be the consequence of material activity – specifically, the neurons in our brains. Because it appears to have a unique relationship with time and it can't be directly measured, I'm not averse to the idea that it exists in another dimension. In mathematics, higher dimensions are not as aberrant as we perceive them, and I've read somewhere that neuron activity can be 'modelled' in a higher mathematical dimension. This idea is very speculative and, I concede, too much fringe-thinking for most people.

As far as the Universe goes, I like to point out that reality (for us) requires both a physical world and consciousness - without consciousness there might as well be nothing. The Universe requires consciousness to be self-realised. This is a variant on the strong anthropic principle, originally expressed by Brandon Carter.

The weak anthropic principle says that only universes containing observers can be observed, which is a tautology. The strong anthropic principle effectively says that only universes, that allow conscious observers to emerge, can exist, which is my point about the Universe requiring consciousness to be self-realised. The Universe is not teleological (if you were to rerun the Universe, you’d get a different result) but the Universe has the necessary mathematical parameters to allow sentient life to emerge, which makes it quasi-teleological.

In answer to your question, I don’t think the Universe is conscious from its inception, but it has built into its long evolutionary development the inherent capacity to produce, not only conscious observers, but observers who can grasp the means to comprehend its workings and its origins, through mathematics and science.

Friday 9 November 2018

Can AI be self-aware?

I recently got involved in a discussion on Facebook with a science fiction group on the subject of artificial intelligence. Basically, it started with a video claiming that a robot had discovered self-awareness, which is purportedly an early criterion for consciousness. But if you analyse what they actually achieved: it’s clever sleight-of-hand at best and pseudo self-awareness at worst. The sleight-of-hand is to turn fairly basic machine logic into an emotive gesture to fool humans (like you and me) into thinking it looks and acts like a human, which I’ll describe in detail below.

And it’s pseudo self-awareness in that it’s make-believe, in the same way that pseudo science is make-believe science, meaning pretend science, like creationism. We have a toy that we pretend exhibits self-awareness. So it is we who do the make-believe and pretending, not the toy.

If you watch the video you’ll see that they have 3 robots and they give them a ‘dumbing pill’ (meaning a switch was pressed) so they can’t talk. But one of them is not dumb and they are asked: “Which pill did you receive?” One of them dramatically stands up and says: “I don’t know”. But then waves its arm and says, “I’m sorry, I know now. I was able to prove I was not given the dumbing pill.”

Obviously, the entire routine could have been programmed, but let’s assume it’s not. It’s a simple TRUE/FALSE logic test. The so-called self-awareness is a consequence of the T/F test being self-referential – it tests whether the robot itself can talk or not. The robot verifies that the statement is false because it hears its own voice. Notice the human-laden words like ‘hears’ and ‘voice’ (my anthropomorphic phrasing). Basically, it has a sensor to detect the sound that it makes itself, which logically determines whether the statement that it’s ‘dumb’ is true or false. It says, ‘I was not given a dumbing pill’, which means its sound was not switched off. Very simple logic.
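To make the point concrete, here’s a minimal sketch (in Python, and emphatically not the robot’s actual code) of the self-referential check just described. The function names and the ‘muted’ flag are my own invented stand-ins for the robot’s speech switch and microphone.

```python
# A toy sketch of the self-referential TRUE/FALSE test described above.
# Nothing here is 'aware' of anything; it's a conditional on a sensor flag.

def detect_own_voice(attempted_speech: bool, muted: bool) -> bool:
    """Stands in for the robot's microphone picking up its own output."""
    return attempted_speech and not muted

def answer_which_pill(muted: bool) -> str:
    # The robot attempts to answer aloud, then checks whether it heard itself.
    heard_itself = detect_own_voice(attempted_speech=True, muted=muted)
    if heard_itself:
        # Hearing its own voice falsifies the proposition "I was dumbed".
        return "I was not given the dumbing pill."
    return "(silence)"

print(answer_which_pill(muted=False))  # -> I was not given the dumbing pill.
print(answer_which_pill(muted=True))   # -> (silence)
```

Whatever theatrical framing is layered on top, the underlying step is no deeper than that.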

I found an on-line article by Steven Schkolne (PhD in Computer Science at Caltech), so someone with far more expertise in this area than me, yet his arguments for so-called computer self-awareness struck me as misleading, to say the least. He talks about 2 different types of self-awareness (specifically for computers) – external and internal. An example of external self-awareness is an iPhone knowing where it is, thanks to GPS. An example of internal self-awareness is a computer responding to someone touching the keyboard. He argues that “machines, unlike humans, have a complete and total self-awareness of their internal state”. For example, a computer can find every file on its hard drive and even tell you its date of origin, which is something no human can do.
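Schkolne’s file example amounts to something like the following sketch – a few lines of ordinary Python that enumerate every file under a directory along with a timestamp. It’s exhaustive access to internal data, which is his point, but calling it ‘self-awareness’ is doing a lot of work (the directory path below is just a placeholder).

```python
# Sketch of Schkolne-style 'internal self-awareness': a machine can list
# every file it holds, with dates, on demand. This is data access, nothing more.
import os
from datetime import datetime

def files_with_dates(root: str):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            modified = datetime.fromtimestamp(os.path.getmtime(path))
            yield path, modified

# Example usage (the path is a placeholder):
for path, modified in files_with_dates("/tmp"):
    print(f"{modified:%Y-%m-%d}  {path}")
```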

From my perspective, this is a bit like the argument that a thermostat can ‘think’: ‘It thinks it’s too hot, or it thinks it’s too cold, or it thinks the temperature is just right.’ I don’t know who originally said that, but I’ve seen it quoted by Daniel Dennett, and I’m still not sure if he was joking or serious.
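For what it’s worth, the thermostat’s entire ‘thought process’ can be written down in a few lines – a sketch like the one below, where the set-point and dead-band values are arbitrary.

```python
# The thermostat's whole 'mind': a three-way conditional on a sensor reading.
def thermostat(temp_c: float, set_point: float = 21.0, band: float = 0.5) -> str:
    if temp_c > set_point + band:
        return "too hot: cooling on"
    if temp_c < set_point - band:
        return "too cold: heating on"
    return "just right: idle"

print(thermostat(25.0))  # too hot: cooling on
print(thermostat(18.0))  # too cold: heating on
print(thermostat(21.2))  # just right: idle
```

Whether one wants to call that ‘thinking’ is precisely the point at issue.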

Computers use data in a way that humans can’t and never will, which is why their memory recall is superhuman compared to ours. Anyone who can even remotely recall data like a computer is called a savant, like the famous ‘Rain Man’. The point is that machines don’t ‘think’ like humans at all; I’ll elaborate on this point later. Schkolne’s description of self-awareness for a machine has no cognitive relationship to our experience of self-awareness. As Schkolne says himself: “It is a mistake if, in looking for machine self-awareness, we look for direct analogues to human experience.” Which leads me to argue that what he calls self-awareness in a machine is not self-awareness at all.

A machine accesses data, like GPS data, which it can turn into a graphic of a map or just numbers representing co-ordinates. Does the machine actually ‘know’ where it is? You can ask Siri (as Schkolne suggests) and she will tell you, but he acknowledges that it’s not Siri’s voice-recognition and voice-replication technology that makes your iPhone self-aware. No, the machine creates a map, so you know where ‘You’ are. Logically, a machine like an aeroplane or a ship could navigate over large distances with GPS with no humans aboard, as drones do. That doesn’t make them self-aware; it makes their logic self-referential, like the toy robot in my introductory example. So what Schkolne calls self-awareness, I call self-referential machine logic. Self-awareness in humans is dependent on consciousness: something we experience, not something we deduce.

And this is the nub of the argument. The argument goes that if self-awareness amongst humans and other species is a consequence of consciousness, then machines exhibiting self-awareness must be showing the first sign of consciousness in machines. However, self-referential logic coded into software doesn’t require consciousness; it just requires suitably programmed machine logic. I’m saying that the argument is back-to-front. Consciousness can definitely imbue self-awareness, but a machine coded with self-referential logic does not reverse the process and imbue consciousness.

I can extend this argument more generally to contend that computers will never be conscious for as long as they are based on logic. What I’d call problem-solving logic came late, evolutionarily, in the animal kingdom. Animals are largely driven by emotions and feelings, which I argue came first in evolutionary terms. But as intelligence increased so did social skills, planning and co-operation.

Now, insect colonies seem to put the lie to this. They are arguably closer to how computers work, based on algorithms that are possibly chemically driven (I actually don’t know). The point is that we don’t think of ants and bees as having human-like intelligence. A termite nest is an organic marvel, yet we don’t think the termites actually plan its construction the way a group of humans would. In fact, some would probably argue that insects don’t even have consciousness. Actually, I think they do. But to give another well-known example, I think the dance that bees do is ‘programmed’ into their DNA, whereas humans would have to ‘learn’ it from their predecessors.

There is a way in which humans are like computers, which I think muddies the waters, and leads people into believing that the way we think and the way machines ‘think’ is similar if not synonymous.

Humans are unique within the animal kingdom in that we use language like software; we effectively ‘download’ it from generation to generation, and it limits what we can conceive and think about, as Wittgenstein pointed out. In fact, without this facility, culture and civilization would not have evolved. We are the only species (that we are aware of) that develops concepts and then manipulates them mentally, because we learn a symbol-based language that gives us that unique facility. But we invented this with our brains, just as we invent symbols for mathematics and symbols for computer software. Computer software is, in effect, a language, and that’s more than an analogy.

We may be the only species that uses symbolic language, but so do computers. Note that computers are like us in this respect, rather than us being like computers. With us, consciousness is required first, whereas with AI, people seem to think the process can be reversed: if you create a machine logic language with enough intelligence, then you will achieve consciousness. It’s back-to-front, just as self-referential logic creating self-aware consciousness is back-to-front.

I don't think AI will ever be conscious or sentient. There seems to be an assumption that if you make a computer more intelligent it will eventually become sentient. But I contend that consciousness is not an attribute of intelligence. I don't believe that more intelligent species are more conscious or more sentient. In other words, I don't think the two attributes are concordant, even though there is an obvious dependency between consciousness and intelligence in animals. But it’s a one-way dependency; if consciousness were dependent on intelligence, then computers would already be conscious.


Addendum: The so-called Turing test is really a test for humans, not robots, as this 'interview' with 'Sophia' illustrates.

Friday 22 December 2017

Who and what do you think you are?

I think it’s pretty normal, when you start reading a book (non-fiction, that is), to take a stance very early on of general agreement or opposition. It’s not unlike the well-known but often unconscious effect whereby you appraise someone in the first 10-30 seconds of meeting them.

And this is the case with Yuval Noah Harari’s Homo Deus, in which I found myself constantly arguing with him in the first 70+ pages of its 450+ page length. For a start, I disagree with his thesis (for want of a better term) that our universal pursuit of ‘happiness’ is purely a sensory-based experience, independent of the cause. From what I’ve observed, and experienced personally, the pursuit of sensory pleasure for its own sake leads to disillusionment at best and self-destruction at worst. A recent biopic I saw of Eric Clapton (Life in 12 Bars) illustrates this point rather dramatically. I won’t discuss his particular circumstances – just go and see the film; it’s a warts-and-all confessional.

If one goes as far back as Aristotle, he wrote an entire book on the subject of ‘eudaimonia’ – living a ‘good life’, effectively – under the title, Ethics. Eudaimonia is generally translated as ‘happiness’ but ‘fulfilment’ or ‘contentment’ may be a better translation, though even they can be contentious, if one reads various scholarly appraisals. I’ve argued in the past that the most frustrating endeavours can be the most rewarding – just ask anyone who has raised children. Generally, I find that the more effort one exerts during a process of endeavour, the better the emotional reward in the end. Reward without sacrifice is not much of a reward. Ask anyone who’s won a sporting grand final, or, for that matter, written a novel.

This is a book that will challenge most people’s beliefs somewhere within its pages, and for that reason alone, it’s worth reading. In fact, many people will find it depressing, because a recurring theme or subtext of the book is that in the future humans will become virtually redundant. Redundant may be too strong a word, but leaving aside the obvious possibility that future jobs currently performed by humans may be taken over by AI, Harari claims that our very notion of ‘free will’ and our almost ‘religious’ belief in the sanctity of individualism will become obsolete ideals. He addresses this towards the end of  the book, so I’ll do the same. It’s a thick tome with a lot of ideas well presented, so I will concentrate on those that I feel most compelled to address or challenge.

Like my recent review of Jeremy Lent’s The Patterning Instinct, there is a lot that I agree upon in Homo Deus, and I’m the first to admit that many of Harari’s arguments unnerved me because they challenge some of my deeply held beliefs. Given the self-ascribed aphorism that heads my blog, this makes his book a worthy opus for discussion.

Fundamentally, Harari argues that we are really nothing more than biochemical algorithms, and he provides very compelling arguments to justify this. Plus he devotes an entire chapter to deconstructing the widely held and cherished notion that we have free will. I’ve written more than a few posts on the subject of free will in the past, and this is probably the pick of them. Leaving that aside for the moment, I don’t believe one can divorce free will from consciousness. Harari also provides a lengthy discussion on consciousness, where I found myself largely agreeing with him because he predominantly uses arguments that I’ve used myself. Basically, he argues that consciousness is an experience so subjective that we cannot objectively determine if someone else is conscious or not – it’s a condition we take on trust. He also argues that AI does not have to become conscious to become more intelligent than humans; a point that many people seem to overlook or just misconstrue. Despite what many people like to believe or think, science really can’t explain consciousness. At best it provides correlations between neuron activity in our brains and certain behaviours and ‘thoughts’.

Harari argues very cogently that science has all but proved the non-existence of free will, and gives various examples, like the famous experiments demonstrating that scientists can detect someone’s unconscious decision before the subject consciously decides; or split-brain experiments demonstrating that people who have had their corpus callosum surgically severed (the neural connection between the left and right hemispheres) behave as if they have 2 brains and 2 ‘selves’. But possibly the most disturbing are those experiments where scientists have turned rats literally into robots by implanting electrodes in their brains and then running a maze by remotely controlling them as if they were, in fact, robots and not animals.

Harari also makes the relevant point, overlooked by many, that true randomness, which lies at the heart of quantum mechanics and seems to underpin all of reality, does not axiomatically provide free will. He argues that neuron activity in our brains, which gives us thoughts and intentions (which we call decisions), is a combination of reactions to emotions and drives (all driven by biochemical algorithms) and pure randomness. According to Harari, science has shown, at all levels, that free will is an illusion. If it is an illusion then it’s a very important one. Studies have shown that people who have been disabused of their belief in free will suffer psychologically. We know this from the mental health issues that people suffer when hope is severely curtailed in circumstances beyond their control. The fact is I don’t know of anyone who doesn’t want to believe that they are responsible for their own destiny, within the limitations of their abilities and the rules of the society in which they live.

Harari makes the point himself, in a completely different section of the book, that given all behaviours, emotions and desires are algorithmically determined by biochemicals, consciousness appears redundant. I’ve made the point before that there are organic entities that do respond biochemically to their environment without consciousness, and we call them plants or vegetation. I’ve argued consistently that free will is an attribute of consciousness. Given the overall theme of Harari’s book, I would contend that AI will never have consciousness and therefore will never have free will.

In a not-so-recent post, I argued how beliefs drive science. Many have made the point that most people basically determine a belief heuristically or intuitively and then do their best to rationalise it. Even genius mathematicians (like John Nash) start with a hunch and then employ their copious abilities in logic and deduction to prove themselves right.

My belief in free will is fundamental to my existentialist philosophy and is grounded more on my experience than on arguments from science or philosophy. I like to believe that the person I am today is a creation of my own making. I base this claim on the fact that I am a different person to the one who grew up in a troubled childhood. I am far from perfect, yet I am a better person and, most importantly, someone who is far more comfortable in their own skin than I was with my younger self. The notion that I did this without ‘free will’ is one I find hard to credit.

Having said that, I’ve also made the point in previous posts that memory is essential to consciousness and a sense of self. I’ve suffered from temporary memory loss (TGA or transient global amnesia) so I know what it’s like to effectively lose one’s mind. It’s disorientating, even scary, and it demonstrates how tenuous our grip on reality can be. So I’m aware, better than most, that memory is the key to continuity.

Harari’s book is far more than a discussion on consciousness and free will. Like Lent’s The Patterning Instinct (reviewed here), he discusses the historical evolution of culture and its relevance to how we see ourselves. But his emphasis is different to Lent’s, and he talks about 20th Century politics in secular societies as effectively replacing religion. In fact, he defines religion (using examples) as what gives us meaning. He differentiates between spirituality and religion, arguing that there is a huge ‘gap’ between them. According to Harari, spirituality is about ‘the journey’, which reminds me of my approach to writing fiction, but what he means is that people who undertake ‘spiritual’ journeys are iconoclasts. I actually agree that religion is all about giving meaning to our lives, and I think that in secular societies humanist liberalism has replaced religion in that role for many people, which is what Harari effectively argues over many pages.

Politically, he argues that in the 20th Century we had a number of experiments, including the 2 extremes of communism and fascism, both of which led to totalitarian dictatorships; as well as socialism and free-market capitalism, which are effectively the left and right of democracies in Western countries. He explains how capitalism and debt go hand in hand to provide all the infrastructure and technological marvels we take for granted, and why economic growth is the mantra of all politicians. He argues that knowledge growth is replacing population growth as the engine of economic growth, whilst acknowledging that the planet won’t cope. Unlike Jeremy Lent, he doesn’t discuss the unlearned lessons of civilization collapse in the past – most famously, the Roman Empire.

I think that is most likely a topic for another post, so I will return to the thesis that religion gives us meaning. I believe I’ve spent my entire life searching for meaning and that I’ve found at least part of the answer in mathematics. I say ‘part’ because mathematics provides meaning for the Universe but not for me. In another post (discussing Eugene Wigner’s famous essay) I talked about the 2 miracles: that the Universe is comprehensible and that same Universe gave rise to an intelligence that could access that comprehensibility. The medium that allows both these miracles to occur is, of course, mathematics.

So, in some respects, virtually irrelevant to Harari’s tome, mathematics is my religion. As for meaning for myself, I think we all look for purpose, and purpose can be found in relationships, in projects and in just living. Curiously, Harari, towards the very end of his book, argues that ‘dataism’ will be the new religion, because data drives algorithms and encompasses everything from biological life forms to art forms like music. All digital data can be distilled into zeros and ones, but the mathematics of the Universe is not algorithmic, though others might disagree. In other words, I don’t believe we live inside a universe-size computer simulation.

The subtitle of Harari’s book is A Brief History of Tomorrow, and basically he argues that our lives will be run by AI algorithms that will be cleverer than our biochemical algorithms. He contends that, contrary to expectations, the more specialised a job is, the more likely it is to be taken over by an algorithm. This includes not only obvious candidates like medical prognoses and stockmarket decisions (already happening) but also corporate takeover decisions, in-the-field military decisions, board appointments and project planning decisions. Harari argues that there will be a huge class of people he calls the ‘useless class’, which would be most of us.

And this is where he argues that our liberal individualistic freedom ideals will become obsolete, because algorithms will understand us better than we do. This is premised on the idea that our biochemical algorithms, which, unbeknownst to us, already control everything we do, will be overrun by AI algorithms in ways that we won’t be conscious of. He gives the example of Angelina Jolie opting to have a double mastectomy based, not on any symptoms she had, but on the 87% probability that she would get breast cancer, calculated by an algorithm that looked at her genetic data. Harari extrapolates this further by predicting that in the future we will all have biomedical monitoring linked to a Google-like database that will recommend all our medical decisions. What’s more, the inequality gap will widen, because wealthy people will be genetically enhanced ‘techno-humans’ and, whilst the technology will trickle down, the egalitarian liberal ideal will vanish.

Most of us find this a scary scenario, yet Harari argues that it’s virtually inescapable based on the direction we are heading, whereby algorithms are already attempting to influence our decisions in voting, purchasing and lifestyle choices. He points out that Facebook has already demonstrated that it has enough information on its users to profile them better than their friends can, and sometimes better than even their families and spouses. So this is Orwellian, only without the police state.

All in all, this is a brave new world, but I don’t think it’s inevitable. Reading his book, one realises it’s all about agency. He argues that we will give up our autonomous agency to algorithms, only it will be a process by stealth, starting with the ‘smart’ agents we already have on our devices that act like personal assistants. I’ve actually explored this in my own fiction, whereby there is a symbiosis between humans and AI (refer below).

Life experiences are what inform us and, through a process of cumulative ordeals and achievements, create the persona we present to the world and ourselves. Future life experiences of future generations will no doubt include interactions with AI. As a Sci-Fi writer, I’ve attempted to imagine that at some level: portraying a super-intelligent-machine interface with a heroine space pioneer. In the same story I juxtaposed my heroine with an imaginary indigenous culture that was still very conscious of their place in the greater animal kingdom. My contention is that we are losing that perspective at our own peril. Harari alludes to this throughout his opus, but doesn’t really address it. I think our belief in our individualism with our own dreams and sense of purpose is essential to our psychological health, which is why I’m always horrified when I see oppression, whether it be political or marital or our treatment of refugees. I read Harari’s book as a warning, which aligns with his admission that it’s not prophecy.


Addendum: I haven't really expressed my own views on consciousness explicitly, because I've done that elsewhere, when I reviewed Douglas Hofstadter's iconoclastic and award-winning book, Gödel, Escher, Bach.

Thursday 4 June 2015

Ex Machina – the movie

This is a good film for anyone interested in AI at a philosophical level. It even got reviewed in New Scientist and they don’t normally review movies. It’s a clever psychological thriller, so you don’t have to be a nerd to enjoy it, though there are some pseudo-nerdy conversations that are better assimilated if the audience has some foreknowledge. Examples are the Turing Test and the Mary thought experiment regarding colour.

Both of these are explained through expositional dialogue in the movie, rather seamlessly I should add, so ignorance is not necessarily a barrier. The real Turing test for AI would be if an AI could outsmart a human – not in a game of chess or a knowledge-based TV quiz show, but behaviourally – and this is explored as well. Like all good psychological thrillers, there is a clever twist at the end which is not predictable but totally consistent within the context of the narrative. In other words, it’s a well written and well executed drama irrespective of its philosophical themes.

One of the issues not addressed in the movie – because it would spoil it – is the phenomenon known as the ‘uncanny valley’, which I’ve written about here. Basically, when androids become almost human-like in appearance and movement, we become very uncomfortable. This doesn’t happen in the movie, and, of course, it’s not meant to, but it’s the real piece of deception in the film. Despite appearances that the character, Ava, is a machine because we can literally see through parts of her body, we all know that she is really an actress playing a part.

I’ve argued in the aforementioned post that I believe the source of this discomfort is the lack of emotional empathy. In the movie, however, the AI demonstrates considerable empathy, or at least appears to, which is one of the many subtle elements explored. This is very good science fiction because it explores a possible future and deals with it on a philosophical level, including ethical considerations, as well as entertaining us.

There are nods to Mary Shelley’s Frankenstein and Asimov’s I, Robot, although that may be my own particular perspective. I’ve created AIs in my own fiction, but completely different to this. In fact, I deliberately created a disembodied AI, which develops a ‘relationship’ with my protagonist and appears to display ‘loyalty’. However, I explain this with the concept of ‘attachment’ programming, which doesn’t necessarily require empathy as we know it.

I bring this up because the 2 stories, Ex Machina and mine, explore AI but with different philosophical perspectives and different narrative outcomes.

Monday 26 May 2014

Why consciousness is unique to the animal kingdom

I’ve written a number of posts on consciousness over the last 7 years, or whenever it was I started blogging, so this is a refinement of what’s gone before, and possibly a more substantial argument. It arose from a discussion in New Scientist, 24 May 2014 (Letters), concerning the evolution of consciousness and, in particular, the question: ‘What need is there of actual consciousness?’ (Eric Kvaalen, from France).

I’ve argued in a previous post that consciousness evolved early and that it arose from emotions, not logic. In particular, early sentient creatures would have relied on fear, pain and desire, as these confer an evolutionary advantage, especially if memory is also involved. In fact, I’ve argued that consciousness without memory is pretty useless, otherwise the organism (including humans) wouldn’t even know it was conscious (see my post on Afterlife, March 2014).

Many philosophers and scientists argue that AI (Artificial Intelligence) will become sentient. The interesting argument is that ‘we will know’ (referencing a New Scientist editorial, 2 April 2011), because we don’t know that anyone else is conscious either. In other words, the argument goes that if an AI behaves as if it’s conscious or sentient, then it must be. However, I argue that AI entities don’t have emotions unless they are programmed artificially to behave as if they do – i.e. simulated. And this is a major distinction, if one believes, as I do, that sentience arose from emotions (feelings) and not logic or reason.

But in answer to the question posed above, one only has to look at another very prevalent life form on this planet, which is not sentient, and the answer, I would suggest, becomes obvious. I’m talking about vegetation. And what is the fundamental difference? There is no evolutionary advantage for vegetation in having sentience or, more specifically, having feelings. If a plant were to feel pain or fear, how could it respond? Compared to members of the animal kingdom, it cannot escape the source, because it is literally rooted to the spot. And this is why I believe animals evolved consciousness (sentience by another name) and plants didn’t. Now, there may be degrees of consciousness in animals (we don’t know) but, if feelings were the progenitor of consciousness, we can understand why it is a unique attribute of the animal kingdom and not found in vegetation or machines.

Saturday 19 January 2013

The Uncanny Valley


This is a well-known psychological phenomenon amongst people who take an interest in AI, and the possibility of androids in particular. Its discovery and subsequent history are discussed in the latest issue of New Scientist (12 January 2013, pp. 35-7) by Joe Kloc, a New York correspondent.

The term was originally coined by the Japanese roboticist Masahiro Mori, in 1970, in an essay titled “Bukimi No Tani” – ‘The Valley of Eeriness’ in direct translation. But it wasn’t until 2005 that it entered the Western lexicon, when it was translated by Karl MacDorman, then working at Osaka University, after he received a late-night fax of the essay. It was MacDorman, apparently, who gave it the apposite English rhyming title, “the uncanny valley”.

If an animate object or visualised character is anthropomorphised, like Mickey Mouse for example, we suspend disbelief enough to go along with it, even though we are not fooled into thinking the character is really human. But when people started to experiment with creating lifelike androids (in Japan and elsewhere), there was an unexpected averse reaction from ordinary people. It’s called a ‘valley’ in both translations because, if you graph people’s empathy against increasing human likeness (albeit empathy is a subjective metric), the curve rises as expected, but plummets dramatically at the point where the likeness becomes uncomfortably close to human. Then it rises again to normal for a real human.
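The shape of that curve is easy to visualise with a toy plot. The sketch below is purely illustrative – the numbers are invented to reproduce the qualitative rise-dip-rise, and are not taken from any study.

```python
# Purely illustrative sketch of the 'valley' shape described above: affinity
# rises with human likeness, dips sharply near (but not at) full likeness,
# then recovers. The curve is invented for the plot, not measured data.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0, 1, 500)
# A gentle rise plus a narrow negative dip centred near 85% likeness.
affinity = likeness - 1.6 * np.exp(-((likeness - 0.85) ** 2) / (2 * 0.04 ** 2))

plt.plot(likeness, affinity)
plt.xlabel("Human likeness")
plt.ylabel("Affinity (empathy)")
plt.title("Uncanny valley (illustrative only)")
plt.show()
```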

The New Scientist article is really about trying to find an explanation, and it does so historically. MacDorman first conjectured that the eeriness or unease arose from the perception that the androids looked like a dead person come to life. But he now rejects that, along with the idea that ‘strange’ looking humans may harbour disease, thus provoking an unconscious evolutionarily derived response. Work by neuroscientists using fMRI machines, specifically Thierry Chaminade of the Advanced Telecommunications Research Institute in Kyoto and Ayse Saygin at the University of California, San Diego, suggests another cause: empathy itself.

There are 3 different categories of empathy, according to neuroscientists: cognitive, motor and emotional. The theory is that androids create a dissonance between two or more of these categories, and the evidence suggests that it’s emotional empathy that breaks the spell. This actually makes sense to me because we don’t have this problem with any of the many animals humans interact with. With animals we feel an emotional empathy more strongly than the other two. Robotic androids reverse this perception.

The author also suggests, in the early exposition of the article, that cartoon characters that too closely resemble humans suffer from this problem too, and gives the box-office failure of The Polar Express as an example. But I suspect the failure of a movie has more to do with its script than its visuals, though I never saw The Polar Express (it didn’t appeal to me). All the Pixar movies have been hugely successful, but that’s because of their scripts as much as their animation, and the visual realism of Gollum in Peter Jackson’s Lord of the Rings trilogy (and now The Hobbit) hasn’t caused any problems, apparently. That’s because movie characters – animated, motion-capture or human – evoke emotional empathy in the audience.

In my own fiction I have also created robotic characters. Some of them are deliberately machine-like and unempathetic in the extreme. In fact, I liked the idea of having a robotic character that you couldn’t negotiate with – it was a deliberate plot device on my part. But I created another character who had no human form at all – in fact, ‘he’ was really a piece of software – this was also deliberate. I found readers empathised with this disembodied character because ‘he’ developed a relationship with the protagonist, which was an interesting literary development in itself.

Addendum: Images for the uncanny valley.