Paul P. Mealing


Friday 22 April 2011

Sentience, free will and AI

In the 2 April 2011 edition of New Scientist, the editorial was titled Rights for robots; We will know when it’s time to recognise artificial cognition. Implicit in the header and explicit in the text is the idea that robots will one day have sentience just like us. In fact they highlighted one passage: “We should look to the way people treat machines and have faith in our ability to detect consciousness.”

I am a self-confessed heretic on this subject because I don’t believe machine intelligence will ever be sentient, and I’m happy to stick my neck out in this forum so that one day I can possibly be proven wrong. One of the points of argument that the editorial makes is that ‘there is no agreed definition of consciousness’ and ‘there’s no way to tell that you aren’t the only conscious being in a world of zombies.’ In other words, you really don’t know if the person right next to you is conscious (or in a dream) so you’ll be forced to give a cognitive robot the same benefit of the doubt. I disagree.

Around the same time as reading this, I took part in a discussion on Rust Belt Philosophy about what sentience is. Firstly, I contend that sentience and consciousness are synonymous, and I think sentience is pretty pervasive in the animal kingdom. Does that mean that something that is unconscious is not sentient? Strictly speaking, yes, because I would define sentience as the ability to feel something, either emotionally or physically. Now, we often feel something emotionally when we dream, so arguably that makes one sentient when unconscious. But I see this as the exception that makes my definition more pertinent rather than the exception that proves me wrong.

In First Aid courses you are taught to squeeze someone’s fingers to see if they are conscious. So to feel something is directly correlated with consciousness, and that’s also how I would define sentience. Much of the brain’s activity is subconscious, even to the extent that problem-solving is often executed subliminally. I expect everyone has had the experience of trying to solve a puzzle, then leaving it for a period of time, only to solve it ‘spontaneously’ when they next encounter it. I believe the creative process often works in exactly the same way, which is why it feels so spontaneous and why we can’t explain it even after we’ve done it. This subconscious problem-solving is a well-known cognitive phenomenon, so it’s not just a ‘folk theory’.

This complex subconscious activity observed in humans is, I believe, quite different from the complex instinctive behaviour that we see in animals: birds building nests, bees building hives, spiders building webs, beavers building dams. These activities seem ‘hard-wired’, to borrow from the AI lexicon as we tend to do.

A bee does a complex dance to communicate where the nectar is. No one believes that the bee cognitively works this out the way we would, so I expect it’s totally subconscious. So if a bee can perform complex behaviours without consciousness, does that mean it doesn’t have consciousness at all? The obvious answer is yes, but let’s look at another scenario. The bee gets caught in a spider’s web and tries desperately to escape. Now I believe that in this situation the bee feels fear and, by my definition, that makes it sentient. This is an important point because it underpins virtually every other point I intend to make. Now, I don’t really know if the bee ‘feels’ anything at all, so it’s an assumption. But my assumption is that sentience, and therefore consciousness, started with feelings and not logic.

In last week’s issue of New Scientist, 16 April 2011, the cover features the topic Free Will: The illusion we can’t live without. The article, written by freelance writer Dan Jones, is headed The free will delusion. In effect, science argues quite strongly that free will is an illusion, but one we are reluctant to relinquish. Jones opens with a scenario set in 2500, when free will has been scientifically disproved and human behaviour is totally predictable and deterministic. Now, I don’t think there’s really anything in the universe that’s totally predictable (there’s even a remote possibility that Earth could one day be knocked off its orbit), but that’s the subject of another post. What’s more relevant to this discussion is Jones’ opening sentence, where he says: ‘…neuroscientists know precisely how the hardware of the brain runs the software of the mind and dictates behaviour.’ Now, this is purely a piece of speculative fiction, so it’s not necessarily what Jones actually believes. But it’s the implicit assumption that the brain’s processes are identical to a computer’s that I find most interesting.

The gist of the article, by the way, is that when people really believe they have no free will, they behave very unempathetically towards others, amongst other aberrational behaviours. In other words, a belief in our ability to direct our own destiny is important to our psychological health. So, if the scientists are right, it’s best not to tell anyone. It’s ironic that telling people they have no free will makes them behave as if they don’t, when allowing them to believe they have free will gives their behaviour intentionality. Apparently, free will is a ‘state-of-mind’.

In a more recent post on Rust Belt Philosophy, I was reminded that, contrary to conventional wisdom, emotions play an important role in rational behaviour. Psychologists now generally believe that, without emotions, our decision-making ability is severely impaired. And, arguably, it’s emotions that play the key role in what we call free will. Certainly, it’s our emotions that are affected if we believe we have no control over our behaviour. Intentions are driven as much by emotion as they are by logic. In fact, most of us make decisions based on gut feelings and rationalise them accordingly. I’m not suggesting that we are all victims of our emotional needs like immature children, but that the interplay between emotions and rational thought is the key to our behaviour. More importantly, it’s our ability to ‘feel’ that not only separates us from machine intelligence in a physical sense, but makes our ‘thinking’ inherently different. It’s also what makes us sentient.

Many people believe that emotion can be programmed into computers to aid them in decision-making as well. I find this an interesting idea and I’ve explored it in my own fiction. If a computer reacted with horror every time we were to switch it off, would that make it sentient? Actually, I don’t think it would, but it would certainly be interesting to see how people reacted. My point is that artificially giving AI emotions won’t make them sentient.
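Just to make that concrete, here’s a minimal sketch in Python of what such a ‘programmed emotion’ would amount to. The event names and responses are purely hypothetical, invented for illustration and not taken from any real system; the ‘horror’ is nothing more than a lookup from input to scripted output, which is precisely why I don’t think it constitutes feeling anything.

# Hypothetical sketch of a 'programmed emotion'; the event names and
# responses are invented for illustration, not taken from any real system.
SCRIPTED_REACTIONS = {
    "shutdown_request": "No! Please don't switch me off - I'm frightened!",
    "greeting": "Hello! It's wonderful to see you.",
}

def respond(event):
    # The 'emotion' is just a lookup from input to a canned output string.
    return SCRIPTED_REACTIONS.get(event, "...")

print(respond("shutdown_request"))

However convincing the output, nothing in that mapping feels anything; it simply returns whatever string it was given for that input.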

I believe feelings came first in the evolution of sentience, not logic, and I still don’t believe that there’s anything analogous to ‘software’ in the brain, except language, and that’s specific to humans. We are the only species that ‘downloads’ a language to the next generation, but that doesn’t mean our brains run on algorithms.

So evidence in the animal kingdom, not just in humans, suggests that sentience, and therefore consciousness, evolved from emotions, whereas computers have evolved from pure logic. Computers are still best at what we do worst, which is manipulating huge amounts of data; that’s why the Human Genome Project actually took less time than predicted. And we still do best at what they do worst, which is making decisions based on a host of parameters, including emotional factors as well as experiential ones.

11 comments:

Unknown said...

On logic:

What is logic but binary calculation? Pleasure and Pain. This causes me pain, and thus I will avoid it. This causes pleasure, and thus I tend to seek it. This causes pleasure, but if abused it will cause me pain, and thus I should seek it in moderation. Many animals reason this way with their feelings.

So I would have to agree with Kant, that "feeling" alone, ungoverned by reason, is descriptive of something absurd that would never survive for very long.

Paul P. Mealing said...

You raise a valid point, Phaedrus. You are talking about a stimulus-response or sensory-response type of behaviour that you might even find in plants, so no consciousness is required, even for an animal.

What I'm saying is that you only get consciousness when an animal starts to 'feel' something - without feeling, no consciousness is required. This is my contention.

I don't believe AI will ever feel anything, in the sense that we do, so I don't believe AI will ever be conscious.

If you mean that feeling without response, or the ability to respond, is absurd, then I agree with you.

Regards, Paul.

davo said...

If AI advances enough to ever be considered to have consciousness, surely that consciousness would make us aware of it, would it not?

It'll happen man, you just wait.

Paul P. Mealing said...

Hi Davo,

Yes, that's the assumption - we will recognise consciousness when we see it.

But as long as AI is software-based, I expect there will always be a difference. For what it's worth, I've explored this in my own fiction.

If we can't agree on what consciousness is then it may never be resolved. At the end of the day, consciousness is a subjective experience, and, if we didn't all have that experience, I'm sure science would tell us it doesn't exist. Objectively, consciousness is the pattern of neurons firing that we see in brain imaging, but subjectively it's something entirely different.

Thanks for taking an interest in my views.

Regards, Paul.

yms said...

It occurs to me that people in whom the outward manifestation of emotions is seriously trammelled -- victims of Asperger's, for example -- would nevertheless indisputably have to be characterized as conscious and sentient. I'm not sure, in any case, that it makes sense a priori to assume that sentience cannot develop absent underlying emotional substructures and/or a limbic system. Still, your intuition is as good as mine, and YMMV.

Paul P. Mealing said...

Hi yms,

No one would suggest that people with Asperger's aren't sentient, or even that they don't feel emotions, so I'm not sure what your point is there.

We assume that a limbic system is required to feel emotions, but, as you point out, that's not a certainty.

More significant, I believe, is that we don't know of anything that experiences emotion that isn't alive. When we argue that AI will be conscious, therefore sentient, therefore alive, it seems to me we have the causal relationship back-to-front.

Regards, Paul.

yms said...

My 'point,' meant innocuously, was just that you'd said, 'the evidence...suggests that...consciousness evolved from emotions,' which may or may not be true, but on the other hand, given the apparent partial orthogonality of sentience and emotion in the Asperger's example, it seems at least possible that emotion isn't a necessary precondition for the development of sentience. Which is not to say that people with various forms of autism don't have plenty of emotions, but there does seem to be some difference in their outward manifestation, and I was just casting about for an example of orthogonality. I wouldn't ever argue that an artificial intelligence, unless it were one instantiated in living tissue, would be considered intuitively by most people to have 'life' in the same sense in which a human does.

Paul P. Mealing said...

Hi yms,

Thanks for the clarification. I'm not sure that there is a 'partial orthogonality of sentience and emotion in the Asperger's example', so I don't necessarily agree with your conclusion: 'that emotion isn't a necessary precondition for the development of sentience.'

I'm not an expert on Asperger's, though I have met a couple of people who probably qualify, including one who was clinically diagnosed. Some people believe that Einstein had Asperger's but he was certainly passionate about a lot of things.

Otherwise I think we probably are in agreement.

Regards, Paul.

Anonymous said...

I'm Autistic and I have feelings! -_- For those who don't know, ASD (Autistic Spectrum Disorder) affects people in different ways; we do have feelings, we are just too socially awkward to express them properly. Anyway, humans are willing to fight for freedom (and have shown this countless times) because we believe in it, so if an AI has feelings enough to want freedom, is it not probable that it will fight and, not to be too sci-fi nerdy, we could end up with a war? Regardless of this point, your whole argument is based on feelings = sentience, correct? So is it therefore not logical (yes, logic in a philosophical debate, how ironic) that if we create an AI with feelings it is sentient, and all sentient beings deserve to be free? Also, ethically speaking, surely anything that is intelligent enough to say it deserves rights, whether it has emotion or not, deserves rights; otherwise we will be creating a world of synthetic slavery, and maybe it is just me, but I strongly believe that is wrong. We need to treat sentient Artificial intelligence the same way we treat sentient Natural intelligence - equally! That is if our technology does ever develop this far.
And now I have to type the letters of this security thing to prove I'm not an AI :')

Paul P. Mealing said...

Hi Anonymous,

Appreciate your comment. I guess my point is that I don't believe AI will ever be sentient the way we are, but I may be proven wrong.

I don't think that giving AI simulated emotions makes them sentient either, because I don't believe they 'feel'. Programming a computer to respond in a certain way to inputs from a human doesn't mean that it actually feels anything in the way we and other animals do.

Plants are not sentient even though they are alive and organic and respond to their environment.

Certainly, I think we could treat other sentient creatures better than we currently do, so I don't think sentience is unique to humans by any means.

Regards, Paul.

Unknown said...

It seems many scientists who claim conscious A.I. is right around the corner never address the sentience issue, which can be very misleading. So many people have nightmarish visions of intelligent robots exterminating humans, but I've never heard a single talk on A.I. point out the correlation between sentience and motives. In order for an A.I. to have motives that deviate from the programmer who designed it, it would have to be sentient. Sentience is the basis for motive, and to me, motive and the free will to make choices which alter your course is what's required to achieve a truly conscious A.I. that is no longer a computer, but a being. I'm very skeptical about whether this is ever possible to achieve, let alone in the near future. We understand very little about sentience in human beings. You have to ask yourself what we would even stand to gain by creating such a being if we had the capability to do so.