I recently got involved in a discussion on Facebook with a science fiction group on the subject of artificial intelligence. Basically, it started with a video claiming that a robot had demonstrated self-awareness, which is purportedly an early criterion for consciousness. But if you analyse what they actually achieved, it’s clever sleight-of-hand at best and pseudo self-awareness at worst. The sleight-of-hand is to dress up fairly basic machine logic as an emotive gesture that fools humans (like you and me) into thinking the robot looks and acts like a human, which I’ll describe in detail below.
And it’s pseudo self-awareness in that it’s make-believe, in the same way that pseudoscience is make-believe (that is, pretend) science, like creationism. We have a toy that we pretend exhibits self-awareness, so it is we who do the make-believe and pretending, not the toy.
If you watch the video you’ll see that they have three robots and they give them a ‘dumbing pill’ (meaning a switch is pressed) so they can’t talk. But one of them is not actually dumbed, and they are all asked: “Which pill did you receive?” One of them dramatically stands up and says: “I don’t know”. But then it waves its arm and says, “I’m sorry, I know now. I was able to prove I was not given the dumbing pill.”
Obviously, the entire routine could have been programmed, but let’s assume it wasn’t. It’s a simple TRUE/FALSE logic test. The so-called self-awareness is a consequence of the T/F test being self-referential – it tests whether the robot itself can talk or not. The robot verifies that the statement is false because it hears its own voice. Notice the human-laden words like ‘hears’ and ‘voice’ (my anthropomorphic phrasing). Basically, it has a sensor that detects the sound it makes itself, which logically determines whether the statement that it’s ‘dumb’ is true or false. It then says, ‘I was not given a dumbing pill’, which simply means its sound was not switched off. Very simple logic.
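To make that concrete, here is a minimal sketch in Python of the kind of logic involved (the function names and sound threshold are my own invention, not the actual robot’s code): the robot attempts to answer, checks its own sound sensor, and from that settles the truth value of ‘I was given the dumbing pill’.

```python
# A toy sketch of the self-referential TRUE/FALSE test; names and threshold
# are illustrative, not taken from the actual robot's software.

def heard_own_voice(microphone_level: float, threshold: float = 0.1) -> bool:
    """Return True if the robot's own sound sensor picked up its utterance."""
    return microphone_level > threshold

def answer_puzzle(microphone_level: float) -> str:
    # Before attempting to speak, the robot cannot know which pill it received.
    response = "I don't know."
    # Self-referential check: did my own attempt to answer produce sound?
    if heard_own_voice(microphone_level):
        # The statement "I am dumb" is false, so it wasn't the dumbing pill.
        response = "Sorry, I know now. I was not given a dumbing pill."
    return response

print(answer_puzzle(microphone_level=0.8))  # a robot whose speech was not switched off
```

The ‘self-awareness’ here is nothing more than the program testing a proposition about its own output.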
I found an online article by Steven Schkolne (PhD in Computer Science at Caltech), so someone with far more expertise in this area than me, yet I found his arguments for so-called computer self-awareness a bit misleading, to say the least. He talks about two different types of self-awareness (specifically for computers) – external and internal. An example of external self-awareness is an iPhone knowing where it is, thanks to GPS. An example of internal self-awareness is a computer responding to someone touching the keyboard. He argues that “machines, unlike humans, have a complete and total self-awareness of their internal state”. For example, a computer can find every file on its hard drive and even tell you its date of origin, which is something no human can do.
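To see how mundane that kind of ‘internal self-awareness’ is, here is a short sketch of my own (not Schkolne’s example) that enumerates every file under a directory and reports a date for each – the last-modified date in this case, since a true date of origin isn’t available on every system.

```python
# A sketch of so-called internal self-awareness: a program enumerating its own
# file system and reporting a timestamp for each file. This is ordinary system
# introspection, not awareness in any cognitive sense.

from datetime import datetime
from pathlib import Path

def list_files_with_dates(root: str = ".") -> None:
    for path in Path(root).rglob("*"):
        if path.is_file():
            modified = datetime.fromtimestamp(path.stat().st_mtime)
            print(f"{path}  (last modified {modified:%Y-%m-%d})")

list_files_with_dates()
```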
From my perspective, this is a bit like the argument that a thermostat can ‘think’. ‘It thinks it’s too hot or it thinks it’s too cold, or it thinks the temperature is just right.’ I don’t know who originally said that, but I’ve seen it quoted by Daniel Dennett, and I’m still not sure if he was joking or serious.
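Written out as code, the thermostat’s three ‘thoughts’ amount to nothing more than a comparison against a setpoint (a toy sketch with made-up values):

```python
# A trivial thermostat controller: each "thought" is just a comparison
# between the measured temperature and a setpoint.

def thermostat(temperature: float, setpoint: float = 21.0, band: float = 0.5) -> str:
    if temperature > setpoint + band:
        return "too hot: switch cooling on"
    if temperature < setpoint - band:
        return "too cold: switch heating on"
    return "just right: do nothing"

print(thermostat(24.0))  # -> "too hot: switch cooling on"
```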
Computers use data in a way that humans can’t and never will, which is why their memory recall is superhuman compared to ours. Anyone who can even remotely recall data like a computer is called a savant, like the famous ‘Rain Man’. The point is that machines don’t ‘think’ like humans at all; I’ll elaborate on this point later. Schkolne’s description of self-awareness for a machine has no cognitive relationship to our experience of self-awareness. As Schkolne says himself: “It is a mistake if, in looking for machine self-awareness, we look for direct analogues to human experience.” Which leads me to argue that what he calls self-awareness in a machine is not self-awareness at all.
A machine accesses data, like GPS data, which it can turn into a graphic of a map or just numbers representing co-ordinates. Does the machine actually ‘know’ where it is? You can ask Siri (as Schkolne suggests) and she will tell you, but he acknowledges that it’s not Siri’s technology of voice recognition and voice replication that makes your iPhone self-aware. No, the machine creates a map, so that you know where ‘you’ are. Logically, a machine, like an aeroplane or a ship, could use GPS to navigate over large distances with no humans aboard, as drones do. That doesn’t make them self-aware; it makes their logic self-referential, like the toy robot in my introductory example. So what Schkolne calls self-awareness, I call self-referential machine logic. Self-awareness in humans is dependent on consciousness: something we experience, not something we deduce.
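Here is what that self-referential machine logic might look like for navigation: the machine compares its own GPS fix with a target waypoint and computes a bearing to steer towards (the coordinates and helper function are purely illustrative). Nothing in it requires the machine to ‘know’, in any conscious sense, where it is.

```python
# A sketch of self-referential machine logic in navigation: the machine's own
# GPS fix is just data fed back into the steering calculation.

import math

def bearing_to(lat: float, lon: float, target_lat: float, target_lon: float) -> float:
    """Initial great-circle bearing (degrees) from the current fix to the waypoint."""
    d_lon = math.radians(target_lon - lon)
    lat1, lat2 = math.radians(lat), math.radians(target_lat)
    y = math.sin(d_lon) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon)
    return math.degrees(math.atan2(y, x)) % 360

current = (51.5074, -0.1278)   # hypothetical current fix (London)
target = (48.8566, 2.3522)     # hypothetical waypoint (Paris)
print(f"steer to bearing {bearing_to(*current, *target):.1f} degrees")
```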
And this is the nub of the argument. The argument goes that if self-awareness amongst humans and other species is a consequence of consciousness, then machines exhibiting self-awareness must be the first sign of consciousness in machines. However, self-referential logic coded into software doesn’t require consciousness; it just requires machine logic, suitably programmed. I’m saying that the argument is back-to-front. Consciousness can definitely imbue self-awareness, but a machine coded with self-referential logic does not reverse the process and thereby acquire consciousness.
I can extend this argument more generally to contend that computers will never be conscious for as long as they are based on logic. What I’d call problem-solving logic came late, evolutionarily, in the animal kingdom. Animals are largely driven by emotions and feelings, which I argue came first in evolutionary terms. But as intelligence increased, so did social skills, planning and co-operation.
Now, insect colonies seem to give the lie to this. They are arguably closer to how computers work, being based on algorithms that are possibly chemically driven (I actually don’t know). The point is that we don’t think of ants and bees as having human-like intelligence. A termite nest is an organic marvel, yet we don’t think the termites actually plan its construction the way a group of humans would. In fact, some would probably argue that insects don’t even have consciousness. Actually, I think they do. But to give another well-known example, I think the dance that bees do is ‘programmed’ into their DNA, whereas humans would have to ‘learn’ it from their predecessors.
There is a way in which humans are like computers, which I think muddies the waters and leads people into believing that the way we think and the way machines ‘think’ are similar, if not synonymous.
Humans are unique within the animal kingdom in that we use language like software; we effectively ‘download’ it from generation to generation, and it limits what we can conceive and think about, as Wittgenstein pointed out. In fact, without this facility, culture and civilization would not have evolved. We are the only species (that we are aware of) that develops concepts and then manipulates them mentally, because we learn a symbol-based language that gives us that unique facility. But we invented this with our brains, just as we invented symbols for mathematics and symbols for computer software. Computer software is, in effect, a language, and that’s more than an analogy.
We may be the only species that uses symbolic language, but computers use it too. Note that computers are like us in this respect, rather than us being like computers. With us, consciousness is required first, whereas with AI, people seem to think the process can be reversed: if you create a machine-logic language with enough intelligence, then you will achieve consciousness. It’s back-to-front, just as self-referential logic creating self-aware consciousness is back-to-front.
I don’t think AI will ever be conscious or sentient. There seems to be an assumption that if you make a computer intelligent enough, it will eventually become sentient. But I contend that consciousness is not an attribute of intelligence. I don’t believe that more intelligent species are more conscious or more sentient. In other words, I don’t think the two attributes are concordant, even though there is an obvious dependency between consciousness and intelligence in animals. But it’s a one-way dependency: intelligence depends on consciousness, not the other way round. If consciousness were dependent on intelligence, then computers would already be conscious.
Addendum: The so-called Turing test is really a test for humans, not robots, as this 'interview' with 'Sophia' illustrates.