Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Saturday 21 January 2012

The anthropomorphism of computers

There are 2 commonly held myths associated with AI (Artificial Intelligence) that are being propagated through popular science, whether intentionally or not: that computers will inevitably become sentient and that brains work similarly to computers.

The first of these I dealt with indirectly in a post last month, when I reviewed Colin McGinn’s book, The Mysterious Flame. McGinn points out that there is no correlation between intelligence and sentience, as sentience evolved early. There is a strongly held belief, amongst many scientists and philosophers, that AI will eventually overtake human intelligence and at some point become sentient. Even if the first part is true (depending on how one defines intelligence), the second part has no evidential basis. If computers were going to become sentient on the basis that they ‘think’ then they would already be sentient. Computers don’t really think, by the way; it’s just a metaphor. The important point (as McGinn points out) is that there is no evidence in the biological world that sentience increases with intelligence, so there is no reason to believe that it will even occur with computers if it hasn’t already.

This is not to say that AI or Von Neumann machines could not be Darwinianly successful, but it still wouldn’t make them necessarily sentient. After all, plants are hugely Darwinianly successful but are not sentient.

In the last issue of Philosophy Now (Issue 87, November/December 2011), the theme was ‘Brains & Minds’ and it’s probably the best one I’ve read since I subscribed to it. Namit Arora (based in San Francisco and creator of Shunya) wrote a very good article, titled The Minds of Machines, where he tackles this issue by referencing Heidegger, though I won’t dwell on that aspect of it. Most relevant to this topic, he quotes Hubert L. Dreyfus and Stuart E. Dreyfus from Making a Mind vs Modelling the Brain:

“If [a simulated neural network] is to learn from its own ‘experiences’ to make associations that are human-like rather than be taught to make associations which have been specified by its trainer, it must also share our sense of appropriateness of outputs, and this means it must share our needs, and emotions, and have a human-like body with the same physical movements, abilities and possible injuries.”

In other words, we would need to build a comprehensive model of a human being complete with its emotional, cognitive and sensory abilities. In various ways this is what we attempt to do. We anthropomorphise its capabilities and then we interpret them anthropomorphically. Nowhere is this more apparent than with computer-generated art.

Last week’s issue of New Scientist (14 January 2012) discusses in detail the success that computers have had with ‘creating’ art; in particular, The Painting Fool, the brainchild of computer scientist, Simon Colton.

If we deliberately build computers and software systems to mimic human activities and abilities, we should not be surprised that they sometimes pass the Turing test with flying colours. According to Catherine de Lange, who wrote the article in New Scientist, the artistic Turing test has well and truly been passed both in visual art and music.

One must remember that visual art started by us copying nature (refer my post on The dawn of the human mind, Oct. 2011) so we now have robots copying us and quite successfully. The Painting Fool does create its own art, apparently, but it takes its ‘inspiration’ (i.e. cues) from social networks, like Facebook for example.

The most significant point of all this is that computers can create art but they are emotionally blind to its consequences. No one mentioned this point in the New Scientist article.

Below is a letter I wrote to New Scientist. It’s rather succinct as they have a 250 word limit.


As computers become better at simulating human cognition there is an increasing tendency to believe that brains and computers work in the same way, but they don’t.

Art is one of the things that separates us from other species because we can project our imaginations externally, be it visually, musically or in stories. Imagination is the ability to think about something that’s not in the here and now – what philosophers call intentionality – it can be in the past or the future, or another place, or it can be completely fictional. Computers can’t do this. Computers have something akin to semantic memory but nothing similar to episodic memory, which requires imagination.

Art triggers a response from us because it has an emotional content that we respond to. With computer art we respond to an emotional content that the computer never feels. So any artistic merit is what we put into it, because we anthropomorphise the computer’s creation.

Artistic creation does occur largely in our subconscious, but there is one state where we all experience this directly and that is in dreams. Computers don’t dream so the analogy breaks down.

So computers produce art with no emotional input and we appraise it based on our own emotional response. Computers may be able to create art but they can’t appreciate it, which is why it feels so wrong.

Postscript: People forget that it requires imagination to appreciate art as well as to create it. Computers can do one without the other, which is anomalous, even absurd.

Saturday 17 December 2011

Consciousness Unexplained

The Mysterious Flame by Colin McGinn, subtitled Conscious Minds in a Material World, was recommended to me by The Atheist Missionary (aka TAM) almost 2 years ago, and it’s taken me all this time to get around to reading it.

But it was well worth the effort, and I can only endorse the recommendation given by The New York Times, as quoted on the cover: “There is no better introduction to the problem of consciousness than this.” McGinn is Professor of Philosophy at Rutgers University, with a handful of other books credited to him. Mysterious Flame was written in 1999, yet it doesn’t feel dated compared with other books I’ve read on this subject, and I would go so far as to say that anyone with an interest in the mind-body problem should read this book. Even if you don’t agree with him, I’m sure he has something to offer that you didn’t consider previously. At the end of the book, he also has something to say about the discipline of philosophy in general: its history and its unique position in human thought.

Most significantly, McGinn calls himself a ‘mysterian’: someone who, like myself as it turns out, believes that consciousness is a mystery which we may never solve. Right from the start he addresses the two most common philosophical positions on this subject: materialism and dualism; demonstrating how they both fail. They are effectively polar opposite positions: materialism arguing that consciousness is neuronal activity full stop; and dualism arguing that consciousness is separate to the brain, albeit connected, and therefore can exist independently of the brain.

Materialism is the default position taken by scientists and dualism is the default position taken by most people even if they’re not aware of it. Most people think that ‘I’ is an entity that exists inside their head, dependent on their brain yet separate from it somehow. Many people who have had out-of-body experiences argue this confirms their belief. On the other hand, scientists have demonstrated how we can fool the ‘mind’ into thinking it is outside the body. I have argued elsewhere (Subjectivity, June 2009) that ‘I think’ is a tautology, because ‘I’ is your thoughts and nothing else.

McGinn acknowledges that consciousness is completely dependent on the brain but this alone doesn’t explain it. He points out that consciousness evolved relatively early in evolution and is not dependent on intelligence per se. Being more intelligent doesn’t make us more sentient than other species who also ‘feel’. He attacks the commonly held belief in the scientific community that consciousness just arises from this ‘meat’ we call a brain, and to create consciousness we merely have to duplicate this biological machine. I agree with him on this point. Not so recently (April 2011), I challenged an editorial and an article written in New Scientist implying that sentience is an axiomatic consequence of artificial intelligence (AI): 'it’s just a matter of time before we will be forced to acknowledge it'. However, the biological evidence suggests that making AI more intelligent won’t create sentience, yet that’s exactly what most AI exponents believe. As McGinn says: ‘…sentience in general does not involve symbolic manipulation’, which is what a computer algorithm does.

McGinn argues that the problem with consciousness is that it’s non-spatial and therefore could exist in another dimension. This is not as daft as it sounds, because, as he points out, an additional dimension could exist without us knowing it and he references Edwin A. Abbott’s famous book, Flatland, to make his point. I’ve similarly argued that quantum mechanics could be explained by imagining a hidden dimension, so I’m not dismissive of this hypothesis.

The most important point that McGinn makes, in my opinion, is a fundamental one of epistemology. We humans tend to think that there is nothing that exists that is beyond our ultimate comprehension, yet there is no legitimate cognitive reason to assume that. To quote: ‘We should have the humility, and plain good sense, to admit that some things may exist without being knowable by us.’

This came up recently in an online discussion I had with Emanuel Rutten (Trying to define God, Nov. 11) who argued the opposite based on an ‘all possible worlds’ scenario. And if there were an infinite number of worlds, then Rutten’s argument would be valid. However, projecting what is possibly knowable in an infinite number of worlds to our specific world is epistemological nonsense.

As McGinn points out, most species on our planet can’t comprehend gravity or how the stars stay up in the sky or that the Earth goes around the sun – it’s beyond their cognitive abilities. Likewise there could be phenomena that are beyond our cognitive abilities, and consciousness may be one.

Roger Penrose addresses this epistemological point in Chapter 1 of Road to Reality, where he admits a ‘personal prejudice’ that everything in the natural world is within our cognitive grasp, whilst acknowledging that others don’t share his prejudice. In particular, Penrose contends that there is a Platonic mathematical realm, which is theoretically available to us without constraint (except the time to explore it), and that this Platonic realm can explain the entire physical universe. Interestingly, McGinn makes no reference to the significance of mathematics in determining the epistemological limit of our knowledge, yet I contend that this is a true limit.

Therefore, I would argue, based on this hypothetical mathematically cognitive limit, that if consciousness can’t be determined mathematically then it will remain a mystery.

Even though McGinn discusses amnesia in reference to the ‘self’, he doesn’t specifically address the fact that, without memory, there would be no ‘self’. Which is why none of us have a sense of self in our early infancy because we create no memories of it. It is memory that specifically gives us a sense of continuity of self and allows us to believe that the ‘I’ we perceive ourselves to be as an adult is the same ‘I’ we were as children.

I’ve skipped over quite a lot of McGinn’s book, obviously, but he does give arguably the best description of John Searle’s famous Chinese Room thought experiment I’ve read, without telling the reader that it is John Searle’s Chinese Room thought experiment.

At the end of the book, he devotes a short chapter to ‘The Unbearable Heaviness of Philosophy’ where he explains how ‘natural philosophy’ diverged from science yet they are more complementary than dichotomous. To quote McGinn again:

‘Science asks answerable questions… eliminating false theories, reducing the area of human ignorance, while philosophy seems mired in controversy, perpetually worrying at the same questions, not making the kind of progress characteristic of science.’

Many people perceive and present philosophy as the poor orphan of science in the modern age, yet I’m unsure if they will ever be completely separated or become independent. Science reveals that nature’s mysteries are endless and whilst those mysteries persist then philosophy will continue to play its role.

Right at the end of the book, McGinn makes a pertinent observation: that our DNA code contains the answer to our mystery, because consciousness is a consequence of the genetic instructions that make every sentient creature. So our genes have the information to create consciousness that consciousness itself is unable to comprehend.

Friday 22 April 2011

Sentience, free will and AI

In the 2 April 2011 edition of New Scientist, the editorial was titled Rights for robots; We will know when it’s time to recognise artificial cognition. Implicit in the header and explicit in the text is the idea that robots will one day have sentience just like us. In fact they highlighted one passage: “We should look to the way people treat machines and have faith in our ability to detect consciousness.”

I am a self-confessed heretic on this subject because I don’t believe machine intelligence will ever be sentient, and I’m happy to stick my neck out in this forum so that one day I can possibly be proven wrong. One of the points of argument that the editorial makes is that ‘there is no agreed definition of consciousness’ and ‘there’s no way to tell that you aren’t the only conscious being in a world of zombies.’ In other words, you really don’t know if the person right next to you is conscious (or in a dream) so you’ll be forced to give a cognitive robot the same benefit of the doubt. I disagree.

Around the same time as reading this, I took part in a discussion on Rust Belt Philosophy about what sentience is. Firstly, I contend that sentience and consciousness are synonymous, and I think sentience is pretty pervasive in the animal kingdom. Does that mean that something that is unconscious is not sentient? Strictly speaking, yes, because I would define sentience as the ability to feel something, either emotionally or physically. Now, we often feel something emotionally when we dream, so arguably that makes one sentient when unconscious. But I see this as the exception that makes my definition more pertinent rather than the exception that proves me wrong.

In First Aid courses you are taught to squeeze someone’s fingers to see if they are conscious. So to feel something is directly correlated with consciousness and that’s also how I would define sentience. Much of the brain’s activity is subconscious even to the extent that problem-solving is often executed subliminally. I expect everyone has had the experience of trying to solve a puzzle, then leaving it for a period of time, only to solve it ‘spontaneously’ when they next encounter it. I believe the creative process often works in exactly the same way, which is why it feels so spontaneous and why we can’t explain it even after we’ve done it. This subconscious problem-solving is a well known cognitive phenomenon, so it’s not just a ‘folk theory’.

This complex subconscious activity observed in humans is, I believe, quite different from the complex instinctive behaviour that we see in animals: birds building nests, bees building hives, spiders building webs, beavers building dams. These activities seem ‘hard-wired’, to borrow from the AI lexicon as we tend to do.

A bee does a complex dance to communicate where the honey is. No one believes that the bee cognitively works this out the way we would, so I expect it’s totally subconscious. So if a bee can perform complex behaviours without consciousness does that mean it doesn’t have consciousness at all? The obvious answer is yes, but let’s look at another scenario. The bee gets caught in a spider’s web and tries desperately to escape. Now I believe that in this situation the bee feels fear and, by my definition, that makes it sentient. This is an important point because it underpins virtually every other point I intend to make. Now, I don’t really know if the bee ‘feels’ anything at all, so it’s an assumption. But my assumption is that sentience, and therefore consciousness, started with feelings and not logic.

In last week’s issue of New Scientist, 16 April 2011, the cover features the topic, Free Will: The illusion we can’t live without. The article, written by freelance writer, Dan Jones, is headed The free will delusion. In effect, science argues quite strongly that free will is an illusion, but one we are reluctant to relinquish. Jones opens with a scenario in 2500 when free will has been scientifically disproved and human behaviour is totally predictable and deterministic. Now, I don’t think there’s really anything in the universe that’s totally predictable, including the remote possibility that Earth could one day be knocked off its orbit, but that’s the subject of another post. What’s more relevant to this discussion is Jones’ opening sentence where he says: ‘…neuroscientists know precisely how the hardware of the brain runs the software of the mind and dictates behaviour.’ Now, this is purely a piece of speculative fiction, so it’s not necessarily what Jones actually believes. But it’s the implicit assumption that the brain’s processes are identical to a computer’s that I find most interesting.

The gist of the article, by the way, is that when people really believe they have no free will, they behave very unempathetically towards others, amongst other aberrational behaviours. In other words, a belief in our ability to direct our own destiny is important to our psychological health. So, if the scientists are right, it’s best not to tell anyone. It’s ironic that telling people they have no free will makes them behave as if they don’t, when allowing them to believe they have free will gives their behaviour intentionality. Apparently, free will is a ‘state-of-mind’.

On a more recent post of Rust Belt Philosophy, I was reminded that, contrary to conventional wisdom, emotions play an important role in rational behaviour. Psychologists now generally believe that, without emotions, our decision-making ability is severely impaired. And, arguably, it’s emotions that play the key role in what we call free will. Certainly, it’s our emotions that are affected if we believe we have no control over our behaviour. Intentions are driven as much by emotion as they are by logic. In fact, most of us make decisions based on gut feelings and rationalise them accordingly. I’m not suggesting that we are all victims of our emotional needs like immature children, but that the interplay between emotions and rational thought is the key to our behaviours. More importantly, it’s our ability to ‘feel’ that not only separates us from machine intelligence in a physical sense, but makes our ‘thinking’ inherently different. It’s also what makes us sentient.

Many people believe that emotion can be programmed into computers to aid them in decision-making as well. I find this an interesting idea and I’ve explored it in my own fiction. If a computer reacted with horror every time we were to switch it off would that make it sentient? Actually, I don’t think it would, but it would certainly be interesting to see how people reacted. My point is that artificially giving AI emotions won’t make them sentient.

I believe feelings came first in the evolution of sentience, not logic, and I still don’t believe that there’s anything analogous to ‘software’ in the brain, except language and that’s specific to humans. We are the only species that ‘downloads’ a language to the next generation, but that doesn’t mean our brains run on algorithms.

So evidence in the animal kingdom, not just humans, suggests that sentience, and therefore consciousness, evolved from emotions, whereas computers have evolved from pure logic. Computers are still best at what we do worst, which is manipulate huge amounts of data. Which is why the human genome project actually took less time than predicted. And we still do best at what they do worst, which is make decisions based on a host of parameters including emotional factors as well as experiential ones.

Sunday 11 April 2010

To have or not to have free will

In some respects this post is a continuation of the last one. The following week’s issue of New Scientist (3 April 2010) had a cover story on ‘Frontiers of the Mind’ covering what it called Nine Big Brain Questions. One of these addressed the question of free will, which happened to be where my last post ended. In the commentary on question 8: How Powerful is the Subconscious? New Scientist refers to well-known studies demonstrating that neuron activity precedes conscious decision-making by 50 milliseconds. In fact, John-Dylan Haynes of the Bernstein Centre for Computational Neuroscience, Berlin, has ‘found brain activity up to 10 seconds before a conscious decision to move [a finger].’ To quote Haynes: “The conscious mind is not free. What we think of as ‘free will’ is actually found in the subconscious.”

New Scientist actually reported Haynes' work in this field back in their 19 April 2008 issue. Curiously, in the same issue, they carried an interview with Jill Bolte Taylor, who was recovering from a stroke, and claimed that she "was consciously choosing and rebuilding my brain to be what I wanted it to be". I wrote to New Scientist at the time, and the letter can still be found on the Net:

You report John-Dylan Haynes finding it possible to detect a decision to press a button up to 7 seconds before subjects are aware of deciding to do so (19 April, p 14). Haynes then concludes: "I think it says there is no free will."

In the same issue Michael Reilly interviews Jill Bolte Taylor, who says she "was consciously choosing and rebuilding my brain to be what I wanted it to be" while recovering from a stroke affecting her cerebral cortex (p 42). Taylor obviously believes she was executing free will.

If free will is an illusion, Taylor's experience suggests that the brain can subconsciously rewire itself while giving us the illusion that it was our decision to make it do so. There comes a point where the illusion makes less sense than the reality.

To add more confusion, during the last week, I heard an interview with Norman Doidge MD, Research psychiatrist at the Columbia University Psychoanalytic Centre and the University of Toronto, who wrote the book, The Brain That Changes Itself. I haven’t read the book, but the interview was all about brain plasticity, and Doidge specifically asserts that we can physically change our brains, just through thought.

What Haynes' experimentation demonstrates is that consciousness is dependent on brain neuronal activity, and that’s exactly the point I made in my last post. Our subconscious becomes conscious when it goes ‘global’, so one would expect a time-lapse between a ‘local’ brain activity (that is subconscious) and the more global brain activity (that is conscious). But the weird part, going by Taylor’s experience and Doidge’s assertions, is that our conscious thoughts can also affect our brain at the neuron level. This reminds me of Douglas Hofstadter’s thesis that we are all ‘strange loops’, which he introduced in his book, Godel, Escher, Bach, and then elaborated on in a book called I am a Strange Loop. I’ve read the former tome but not the latter one (refer my post on AI & Consciousness, Feb.2009).

We will learn more and more about consciousness, I’m sure, but I’m not at all sure that we will ever truly understand it. As John Searle points out in his book, Mind, at the end of the day, it is an experience, and a totally subjective experience at that. In regard to studying it and analysing it, we can only ever treat it as an objective phenomenon. The Dalai Lama makes the same point in his book, The Universe in a Single Atom.

People tend to think about this from a purely reductionist viewpoint: once we understand the correlation between neuron activity and conscious experience, the mystery stops being a mystery. But I disagree: I expect the more we understand, the bigger the mystery will become. If consciousness turns out to be any less weird than quantum mechanics, I’ll be very surprised. And we are already seeing quite a lot of weirdness, when consciousness is clearly dependent on neuronal activity, and yet the brain’s plasticity can be affected by conscious thought.

So where does this leave free will? Well, I don’t think that we are automatons, and I admit I would find it very depressing if that was the case. The last of the Nine Questions in last week’s New Scientist asks: will AI ever become sentient? In its response, New Scientist reports on some of the latest developments in AI, where they talk about ‘subconscious’ and ‘conscious’ layers of activity (read software). Raul Arrabales of the Carlos III University of Madrid has developed ‘software agents’ called IDA (Intelligent Distribution Agent) and is currently working on LIDA (Learning IDA). By ‘subconscious’ and ‘conscious’ levels, the scientists are really talking about tiers of ‘decision-making’, or a hierarchic learning structure, which is an idea I’ve explored in my own fiction. At the top level, the AI has goals, which are effectively criteria of success or failure. At the lower level it explores various avenues until something is ‘found’ that can be passed onto the higher level. In effect, the higher level chooses the best option from the lower level. The scientists working on this 2-level arrangement have even given their AI ‘emotions’, which are built-in biases that direct them in certain directions. I also explored this in my fiction, with the notion of artificial attachment to a human subject that would simulate loyalty.
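Just to make that two-tier idea concrete, here is a toy sketch in Python – my own illustration, nothing to do with the actual IDA/LIDA software – of a lower tier blindly proposing candidate actions, an upper tier scoring them against a goal, and a built-in bias standing in for a programmed ‘emotion’:

    import random

    GOAL = 10              # the top level's criterion of success
    EMOTIONAL_BIAS = 0.5   # a built-in bias that nudges choices in one direction

    def lower_level(state):
        # 'Subconscious' tier: blindly propose a handful of candidate moves.
        return [state + random.choice([-2, -1, 1, 2]) for _ in range(5)]

    def upper_level(candidates):
        # 'Conscious' tier: choose the candidate closest to the goal,
        # with the bias term playing the role of an in-built 'emotion'.
        return min(candidates, key=lambda c: abs(GOAL - c) - EMOTIONAL_BIAS * c)

    state = 0
    for _ in range(20):
        state = upper_level(lower_level(state))
    print(state)   # the agent drifts towards its goal, one tier 'choosing' for the other

Nothing in this toy is remotely conscious, of course, but it shows how ‘goals’, ‘emotions’ and a hierarchy of decision-making can all be implemented as ordinary code.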

But, even in my fiction, I tend to agree with Searle, that these are all simulations, which might conceivably convince a human that an AI entity really thinks like us. But I don’t believe the brain is a computer, so I think it will only ever be an analogy or a very good simulation.

Both this development in AI and the conscious/subconscious loop we seem to have in our own brains remind me of the ‘Bayesian’ model of the brain developed by Karl Friston and also reported in New Scientist (31 May 2008). They mention it again in an unrelated article in last week’s issue – one of the little unremarkable reports they do – this time on how the brain predicts the future. Friston effectively argues that the brain, and therefore the mind, makes predictions and then modifies the predictions based on feedback. It’s effectively how the scientific method works as well, but we do it all the time in everyday encounters, without even thinking about it. But Friston argues that it works at the neuron level as well as the cognitive level. Neuron pathways are reinforced through use, which is a point that Norman Doidge makes in his interview. We now know that the brain literally rewires itself, based on repeated neuron firings.
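The predict-then-correct loop can be reduced to a few lines of code. This is purely my own toy illustration (made-up numbers, not Friston’s actual model): hold an estimate, compare it with incoming evidence, and nudge the estimate by a fraction of the prediction error:

    observations = [3.2, 2.9, 3.4, 3.1, 3.0]   # hypothetical sensory input
    estimate = 0.0                              # the current 'prediction'
    learning_rate = 0.5                         # how strongly errors revise it

    for obs in observations:
        prediction_error = obs - estimate       # feedback: how wrong was the prediction?
        estimate += learning_rate * prediction_error
        print(round(estimate, 2))               # the estimate settles near 3

Repeated over time, the same simple rule favours whichever predictions keep being confirmed, which is the spirit of the ‘predict, then revise’ principle described above.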

Because we think in a language, which has become a default ‘software’ for ourselves, we tend to think that we really are just ‘wetware’ computers, yet we don’t share this ability with other species. We are the only species that ‘downloads’ a language to our progeny, independently of our genetic material. And our genetic material (DNA) really is software, as it is for every life form on the planet. We have a 4-letter code that provides the instructions to create an entire organism, materially and functionally – nature’s greatest magical trick.

One of the most important aspects of consciousness, not only in humans, but for most of the animal kingdom (one suspects) is that we all ‘feel’. I don’t expect an AI ever to feel anything, even if we programme it to have emotions.

But it is because we can all ‘feel’, that our lives mean so much to us. So, whether we have free will or not, what really matters is what we feel. And without feeling, I would argue that we would not only be not human, but not sentient.


Footnote: If you're interested in neuroscience at all, the interview linked above is well worth listening to, even though it's 40 mins long.

Saturday 3 April 2010

Consciousness explained (well, almost, sort of)

As anyone who has followed this blog for any length of time knows, I’ve touched on this subject a number of times. It deals with so many issues, including the possibilities inherent in AI and the subject of free will (the latter being one of my earliest posts).

Just to clarify one point: I haven’t read Daniel C. Dennett’s book of the same name. Paul Davies once gave him the very generous accolade by referencing it as 1 of the 4 most influential books he’s read (in company with Douglas Hofstadter’s Godel, Escher, Bach). He said: “[It] may not live up to its claim… it definitely set the agenda for how we should think about thinking.” Then, in parenthesis, he quipped: “Some people say Dennett explained consciousness away.”

In an interview in Philosophy Now (early last year) Dennett echoed David Chalmers’ famous quote that “a thermostat thinks: it thinks it’s too hot, or it thinks it’s too cold, or it thinks the temperature is just right.” And I don’t think Dennett was talking metaphorically. This, by itself, doesn’t imbue a thermostat with consciousness, if one argues that most of our ‘thinking’ happens subconsciously.

I recently had a discussion with Larry Niven on his blog, on this very topic, where we to-and-fro’d over the merits of John Searle’s book, Mind. Needless to say, Larry and I have different, though mutually respectful, views on this subject.

In reference to Mind, Searle addresses that very quote by Chalmers by saying: “Consciousness is not spread out like jam on a piece of bread…” However, if one believes that consciousness is an ‘emergent’ property, it may very well be ‘spread out like jam on a piece of bread’, and evidence suggests, in fact, that this may well be the case.

This brings me to the reason for writing this post: New Scientist, 20 March 2010, pp.39-41; an article entitled Brain Chat by Anil Ananthaswamy (consulting editor). The article refers to a theory proposed originally by Bernard Baars of The Neuroscience Institute in San Diego, California. In essence, Baars differentiated between ‘local’ brain activity and ‘global’ brain activity, since dubbed the ‘global workspace’ theory of consciousness.

According to the article, this has now been demonstrated by experiment, the details of which, I won’t go into. Essentially, it has been demonstrated that when a person thinks of something subconsciously, it is local in the brain, but when it becomes conscious it becomes more global: ‘…signals are broadcast to an assembly of neurons distributed across many different regions of the brain.’

One of the benefits of this mechanism is that it effectively filters out anything that’s irrelevant. What becomes conscious is what the brain considers important. What criterion the brain uses to determine this is not discussed. So this is not the explanation that people really want – it’s merely postulating a neuronal mechanism that correlates with consciousness as we experience it. Another benefit of this theory is that it explains why we can’t consider 2 conflicting images at once. Everyone has seen the duck/rabbit combination and there are numerous other examples. Try listening to a Bach contrapuntal fugue so that you listen to both melodies at once – you can’t. The brain mechanism (as proposed above) says that only one of these can go global, not both. It doesn’t explain, of course, how we manage to consciously ‘switch’ from one to the other.
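A toy sketch of the ‘winner goes global’ idea (my own, not Baars’ actual model): several local processes each offer an interpretation with a salience score, and only the single strongest one gets broadcast, which is why the duck and the rabbit can’t both be conscious at once:

    local_processes = {
        "visual: duck":   0.62,   # hypothetical salience scores
        "visual: rabbit": 0.58,
        "auditory: hum":  0.20,
    }

    def broadcast(processes):
        # Only the most salient local result is made globally available.
        return max(processes, key=processes.get)

    print(broadcast(local_processes))   # only 'visual: duck' reaches the 'global workspace'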

However, both the experimental evidence and the theory are consistent with something that we’ve known for a long time: a lot of our thinking happens subconsciously. Everyone has come across a puzzle that they can’t solve, then they walk away from it, or sleep on it overnight, and the next time they look at it, the solution just jumps out at them. Professor Robert Winston demonstrated this once on TV, with himself as the guinea pig. He was trying to solve a visual puzzle (find an animal in a camouflaged background) and when he had that ‘Ah-ha’ experience, it showed up as a spike on his brain waves. Possibly the very signal of it going global, although I’m only speculating based on my new-found knowledge.

Mathematicians have this experience a lot, but so do artists. No artist knows where their art comes from. Writing a story, for me, is a lot like trying to solve a puzzle. Quite often, I have no better idea what’s going to happen than the reader does. As Woody Allen once said, it’s like you get to read it first. (Actually, he said it’s like you hear the joke first.) But his point is that all artists feel the creative act is like receiving something rather than creating it. So we all know that something is happening in the subconscious – a lot of our thinking happens where we’re unaware of it.

As I alluded to in my introduction, there are 2 issues that are closely related to consciousness, which are AI and free will. I’ve said enough about AI in previous posts, so I won’t digress, except to restate my position that I think AI will never exhibit consciousness. I also concede that one day someone may prove me wrong. It’s one aspect of consciousness that I believe will be resolved one day, one way or the other.

One rarely sees a discussion on consciousness that includes free will (Searle’s aforementioned book, Mind, is an exception, and he devotes an entire chapter to it). Science seems to have an aversion to free will (refer my post, Sep.07) which is perfectly understandable. Behaviours can only be explained by genes or environment or the interaction of the two – free will is a loose cannon and explains nothing. So for many scientists, and philosophers, free will is seen as a nice-to-have illusion.

Consciousness evolved, but if most of our thinking is subconscious, it raises the question: why? As I expounded on Larry’s blog, I believe that one day we will have AI that will ‘learn’; what Penrose calls ‘bottom-up’ AI. Some people might argue that we require consciousness for learning but insects demonstrate learning capabilities, albeit rudimentary compared to what we achieve. Insects may have consciousness, by the way, but learning can be achieved by reinforcement and punishment – we’ve seen it demonstrated in animals at all levels – they don’t have to be conscious of what they’re doing in order to learn.

So the only evolutionary reason I can see for consciousness is free will, and I’m not confining this to the human species. If, as science likes to claim, we don’t need, or indeed don’t have, free will, then arguably, we don’t need consciousness either.

To demonstrate what I mean, I will relate 2 stories of people reacting in an aggressive manner in a hostile situation, even though they were unconscious.

One case was within the last 10 years, in Sydney, Australia (from memory), when a female security guard was knocked unconscious and her bag (of money) was taken from her. In front of witnesses, she got up, walked over to the guy (who was now in his car), pulled out her gun and shot him dead. She had no recollection of doing it. Now, you may say that’s a good defence, but I know of at least one other similar incident.

My father was a boxer, and when he first told me this story, I didn’t believe him, until I heard of other cases. He was knocked out, and when he came to, he was standing and the other guy was on the deck. He had to ask his second what happened. He gave up boxing after that, by the way.

The point is that both of those cases illustrate that humans can perform complicated acts of self-defence without being consciously cognisant of it. The question is: why is this the exception and not the norm?


Addendum: Nicholas Humphrey, whom I have possibly incorrectly criticised in the past, has an interesting evolutionary explanation: consciousness allows us to read others’ minds. Previously, I thought he authored an article in SEED magazine (2008) that argued that consciousness is an illusion, but I can only conclude that it must be someone else. Humphrey discovered ‘blindsight’ in a monkey (called Helen) with a surgically-removed visual cortex, which is an example of a subconscious phenomenon (sight) with no conscious correlation. (This specific phenomenon has since been found in humans as well, with damaged visual cortex.)


Addendum 2: I have since written a post called Consciousness unexplained in Dec. 2011 for those interested.

Saturday 11 April 2009

The Singularity Prophecy

This is not a singularity you find in black holes or at the origin of the universe – this is a metaphorical singularity entailing the breakthrough of artificial intelligence (AI) to transcend humanity. And prophecy is an apt term, because there are people who believe in this with near-religious conviction. As Wilson da Silva says, in reference to its most ambitious interpretation as a complete subjugation of humanity by machine, ‘It’s been called the “geek rapture”’.

Wilson da Silva is the editor of COSMOS, an excellent Australian science magazine I’ve subscribed to since its inception. The current April/May 2009 edition has essay length contributions on this topic from robotics expert, Rodney Brooks, economist, Robin Hanson, and science journalist, John Horgan, along with sound bites from people like Douglas Hofstadter and Steven Pinker (amongst others).

Where to start? I’d like to start with Rodney Brooks, an ex-pat Aussie, who is now Professor of Robotics at Massachusetts Institute of Technology. He’s also been Director of the same institute’s Computer Science and Artificial Intelligence Lab, and founder of Heartland Robotics Inc. and co-founder of iRobot Corp. Brooks brings a healthy tone of reality to this discussion after da Silva’s deliberately provocative introduction of the ‘Singularity’ as ‘Rapture’. (In a footnote, da Silva reassures us that he ‘does not expect to still be around to upload his consciousness to a computer.’)

So maybe I’ll backtrack slightly, and mention Raymond Kurzweil (also the referenced starting point for Brooks) who does want to upload (or download?) his consciousness into a computer before he dies, apparently (refer Addendum 2 below). It reminds me of a television discussion I saw in the 60s or 70s (in the days of black & white TV) of someone seriously considering cryogenically freezing their brain for future resurrection, when technology would catch up with their ambition for immortality. And let’s be honest: that’s what this is all about, at least as far as Kurzweil and his fellow proponents are concerned.

Steven Pinker makes the point that many of the science fiction fantasies of his childhood, like ‘jet-pack commuting’ or ‘underwater cities’, never came to fruition, and he would put this in the same bag. To quote: ‘Sheer processing power is not a pixie dust that magically solves all your problems.’

Back to Rodney Brooks, who is one of the best qualified to comment on this, and provides a healthy dose of scepticism, as well as perspective. For a start, Brooks points out how robotics hasn’t delivered on its early promises, including his own ambitions. Brooks expounds that current computer technology still can’t deliver the following childlike abilities: ‘object recognition of a 2 year-old; language capabilities of a 4 year-old; manual dexterity of a 6 year-old; and the social understanding of an 8 year-old.’ To quote: ‘[basic machine capability] may take 10 years or it may take 100. I really don’t know.’

Brooks states at the outset that he sees biological organisms, and therefore the brain, as a ‘machine’. But the analogy for interpretation has changed over time, depending on the technology of the age. During the 17th Century (Descartes’ time), the model was hydrodynamics, and in the 20th century it has gone from a telephone exchange, to a logic circuit, to a digital computer to even the world wide web (Brooks’ exposition in brief).

Brooks believes the singularity will be an evolutionary process, not a ‘big bang’ event. He sees the singularity as the gradual evolvement of machine intelligence till it becomes virtually identical to our own, including consciousness. Hofstadter expresses a similar belief, but he ‘…doubt[s] it will happen in the next couple of centuries.’ I have to admit that this is where I differ, as I don’t see machine intelligence becoming sentient, even though my view is in the minority. I provide an argument in an earlier post (The Ghost in the Machine, April 08) where I discuss Henry Markram’s ‘Blue Brain’ project, with a truckload of scepticism.

Robin Hanson is author of The Economics of the Singularity, and is Associate Professor of Economics at George Mason University in Virginia. He presents a graph of economic growth via ‘Average world GDP per capita’ on a logarithmic scale from 10,000BC to the last 4 weeks. Hanson explains how the world economy has made quantum leaps at historical points: specifically, the agricultural revolution, the industrial revolution and the most recently realised technological revolution. The ‘Singularity’ will be the next revolution, and it will dwarf all the economical advances made to date. I know I won’t do justice to Hanson’s thesis, but, to be honest, I don’t want to spend a lot of space on it.

For a start, all these disciples of the extreme version of the Singularity seem to forget how the other half live, or, more significantly, simply ignore the fact that the majority of the world’s population doesn’t live in a Western society. In fact, for the entire world to enjoy ‘Our’ standard of living would require 4 planet earths (ref: E.O. Wilson, amongst others). But I won’t go there, not on this post. Except to point out that many of the world’s people struggle to get a healthy water supply, and that is going to get worse before it gets better; just to provide a modicum of perspective for all the ‘rapture geeks’.

I’ve left John Horgan’s contribution to last, just as COSMOS does, because he provides the best realism check you could ask for. I’ve read all of Horgan’s books, but The End of Science is his best read, even though, once again, I disagree with his overall thesis. It’s a treasure because he interviews some of the best minds of the latter 20th Century, some of whom are no longer with us.

I was surprised and impressed by the depth of knowledge Horgan reveals on this subject. In particular, the limitations of our understanding of neurobiology and the inherent problems in creating direct neuron-machine interfaces. One of the most pertinent aspects, he discusses, is the sheer plasticity of the brain in its functionality. Just to give you a snippet: ‘…synaptic connections constantly form, strengthen, weaken and dissolve. Old neurons die and – evidence is overturning decades of dogma – new ones are born.’

There is a sense that the brain makes up neural codes as it goes along - my interpretation, not Horgan's - but he cites Steven Rose, neurobiologist at Britain's Open University, based in Milton Keynes: 'To interpret the neural activity corresponding to any moment... scientists would need "access to [someone's] entire neural and hormonal life history" as well as to all [their] corresponding experiences.'

It’s really worth reading Horgan’s entire essay – I can’t do it justice in this space – he covers the whole subject and puts it into a perspective the ‘rapture geeks’ have yet to realise.

I happened to be re-reading John Searle’s Mind when I received this magazine, and I have to say that Searle’s book is still the best I’ve read on this subject. He calls it ‘an introduction’, even on the cover, and reiterates that point more than once during his detailed exposition. In effect, he’s trying to tell us how much we still don’t know.

I haven’t read Dennett’s Consciousness Explained, but I probably should. In the same issue of COSMOS, Paul Davies references Dennett’s book, along with Hofstadter’s Godel, Escher, Bach, as 2 of the 4 most influential books he’s read, and that’s high praise indeed. Davies says that while Dennett’s book ‘may not live up to its claim… it definitely set the agenda for how we should think about thinking.’ But he also adds, in parenthesis, that ‘some people say Dennett explained consciousness away’. I think Searle would agree.

Dennett is a formidable philosopher by anyone’s standards, and I’m certainly not qualified, academically or otherwise, to challenge him, but I obviously have a different philosophical perspective on consciousness to him. In a very insightful interview over 2 issues of Philosophy Now, Dennett elaborated on his influences, as well as his ideas. He made the statement that ‘a thermostat thinks’, which is a well known conjecture originally attributed to David Chalmers (according to Searle): it thinks it’s too hot, or it thinks it’s too cold, or it thinks the temperature is just right.

Searle attacks this proposition thus: ‘Consciousness is not spread out like jam on a piece of bread… If the thermostat is conscious, how about parts of the thermostat? Is there a separate consciousness to each screw? Each molecule? If so, how does their consciousness relate to the consciousness of the whole thermostat?’

The corollary to this interpretation, and to Dennett’s, is that consciousness is just a concept with no connection to anything real. If consciousness is an emergent property, an idea that Searle seems to avoid, then it may well be ‘spread out like jam on a piece of bread’.

To be fair to Searle (I don't want to misrepresent him when I know he'll never read this) he does see consciousness being on a different level to neuron activity (like Hofstadter) and he acknowledges that this is one of the factors that makes consciousness so misunderstood by both philosophers and others.

But I’m getting off the track. The most important contribution Searle makes, that is relevant to this whole discussion, is that consciousness has a ‘first person ontology’ yet we attempt to understand it solely as a ‘third person ontology’. Even the Dalai Lama makes this point, albeit in more prosaic language, in his book on science and religion, The Universe in a Single Atom. Personally, I find it hard to imagine that AI will ever make the transition from third person to first person ontology. But I may be wrong. To quote my own favourite saying: 'Only future generations can tell us how ignorant the current generation is'.

There are 2 aspects to the Singularity prophecy: we will become more like machines, and they will become more like us. This is something I’ve explored in my own fiction, and I will probably continue to do so in the future. But I think that machine intelligence will complement human intelligence rather than replace it. As we are already witnessing, computers are brilliant at the things we do badly and vice versa. I do see a convergence, but I also see no reason why the complementary nature of machine intelligence will not only continue, but actually improve. AI will get better at what it does best, and we will do the same. There is no reason, based on developments to date, to assume that we will become indistinguishable, Turing tests notwithstanding. In other words, I think there will always remain attributes uniquely human, as AI continues to dazzle us with abilities that are already beyond us.

P.S. I review Douglas Hofstadter's brilliant book, Godel, Escher, Bach: an Eternal Golden Braid, in a post I published in Feb.09: Artificial Intelligence & Consciousness.

Addendum: I'm led to believe that at least 2 of the essays cited above were originally published in IEEE Spectrum Magazine prior to COSMOS (ref: the authors themselves).

Addendum 2: I watched the VBS.TV Video on Raymond Kurzweil, provided by a contributor below (Rory), and it seems his quest for longevity is via 'nanobots' rather than by 'computer-downloading his mind' as I implied above.

Saturday 14 February 2009

Godel, Escher, Bach - Douglas Hofstadter's seminal tome

The original title of this post was Artificial Intelligence and Consciousness.

This is perhaps the hardest of subjects to tackle. I’ve just finished reading Douglas R. Hofstadter’s book, Godel, Escher, Bach: an Eternal Golden Braid, which attempts to address this very issue, even if in a rather unusual way.

Earlier in the same year (last year) I read Roger Penrose’s book, Shadows of the Mind, which addresses exactly the same issue. What is interesting is that, in both cases, the authors use Godel’s Incompleteness Theorem to support completely different, one could say, opposing, philosophical viewpoints. Both Penrose and Hofstadter are intellectual giants compared to me, but what I find interesting is that both apparently start with their philosophical viewpoints and then find arguments to support them, rather than the other way round. Hofstadter quotes, more than once, the Oxford philosopher, J.R. Lucas, whom he obviously respects, but philosophically disagrees with. Likewise, I found myself often in agreement with Hofstadter on many of his finer points, but still in disagreement with his overall thesis. I think it’s obvious from other posts on this blog, that I am much closer to Penrose’s philosophy in many respects, not just on AI.

Having said all that, this is a very complex and difficult subject, and I’m not at all sure I can do it justice. What goes hand in hand with the subject of AI, and Hofstadter doesn’t shy away from this, is the notion of consciousness. Can AI ever be conscious in the way we are? Hofstadter says yes, and Penrose, I believe, would say no. (Penrose effectively argues that algorithm-using machines – computers - will never think like humans.) Another person who has much to say on this subject is John Searle, and he would almost certainly say no, based on his famous ‘Chinese Room’ thought experiment. (I expound on this in my Apr.08 post: The Ghost in the Machine).

Larry Niven in one of his comments on his own blog, in response to one of my comments, made the observation that science hasn’t resolved the brain/mind conundrum, and gave it as an example of ‘…the impotence of scientific evidence to affect philosophical debates…’ (I’m sure if I’ve misinterpreted him, or quoted him out of context, he’ll let me know.)

To throw a googly into the mix, since Hofstadter first published the book 30 years ago, a lot of work has been done in this area, and one of the truly interesting ideas is the Bayesian model of the brain based on Bayesian probability, proposed by Karl Friston (New Scientist 31 May 08). In a nutshell, Friston proposes that the brain functions on the same principle at all levels, which is to make an initial assessment then modify it based on additional information. He claims this works even at the neuron level, as well as the cognitive level. (I report on this in my July 08 post titled, Epistemology; a discussion.) I even extrapolate this up the cognitive tree to include the scientific method, whereby we hypothesise, follow up with experimentation or observation, then modify the hypothesis accordingly.
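For reference, the ‘initial assessment revised by new information’ principle is just Bayes’ rule, which in standard notation reads:

    P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

where P(H) is the prior (the initial assessment), P(E|H) is how well the hypothesis accounts for the new evidence E, and P(H|E) is the revised, or posterior, assessment.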

Hofstadter makes a similar point about ‘default options’ that we use in everyday observations, like the way we use stereotypes. It’s only by evaluating a specific case in more detail that we can break away from a stereotypic interpretation of an event. This is also an employment of the Bayesian principle, but Hofstadter doesn’t say this because it hadn’t been proposed at the time he wrote it.

What Searle points out in his excellent book, Mind, is that consciousness is an experience, which is so subjective that we really don’t know if anyone else experiences it the way we do – we only assume they do. Stephen Law writes about this in his book, The Philosophy Gym, and I challenged him (by snail mail at the time) that this was a conceit on his part, because he obviously expected that people who read his book, could think like him, which means they must be conscious. It was a good natured jibe, even though I’m not sure he saw it that way at the time, but he was generous in his reply.

Descartes’ famous statement, ‘I think therefore I am’, has been pilloried over the centuries since he wrote it, but I would contend that ‘I think’ is a tautology, because ‘I’ is your thoughts and nothing else. This gets to the heart of Hofstadter’s thesis, that we, individually, are all ‘strange loops’. Hofstadter employs Godel’s Theorem in an unusual, analogous way to make this contention: we are ‘strange loops’. By strange loop, Hofstadter means that we can effectively look at all the levels of our thinking except the ground level, which is our neurons. In between we have symbols, which is language, which we can discuss and analyse in a dispassionate way, just like I’m doing now. I can talk about my own thoughts and ideas as if they weren’t mine at all. Consciousness, in Hofstadter’s model (for want of a better word), is the top level, and neurons are the hardware level. In between we have the software (symbols) which is effectively language.

I think language as software is a good metaphor but not necessarily a literal interpretation. Software means algorithms, which are effectively instructions. Whilst language obviously contains rules, I don’t see it as particularly algorithmic, though others, including Hofstadter, may disagree. On the other hand, I do see DNA as algorithmic in the way it creates organisms, and Hofstadter makes the same leap of interpretation.

The analogy with Godel’s Theorem is that, in any formal mathematical system (of sufficient power), there will always exist a mathematical statement that expresses something about the system but can’t be proved within the system, if I’ve got it right. In other words, there will always exist a true mathematical statement that is not provable within the original formal system, which is why it is called the Incompleteness Theorem – no formal mathematical system can ever be complete in the sense that it proves all true mathematical statements. In this analogy, the self or ‘I’ is like a Godelian entity that is a product of the system but not contained in it. Again, my interpretation may not be what Hofstadter intended, but it’s the best I can make of it. It exists at another level, I think is what Hofstadter would say.
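For reference, the standard informal statement of the first incompleteness theorem, which is roughly what the analogy rests on, is:

    F \text{ consistent and able to express arithmetic} \;\Rightarrow\; \exists\, G_F \text{ true with } F \nvdash G_F

that is, any consistent formal system F rich enough for arithmetic contains a true sentence G_F that F itself cannot prove.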

In another part of the book, Hofstadter makes a direct ‘mapping’ which he calls a ‘dogmap’ (play on words for dogma) where he compares DOGMA I ‘Molecular Biology’ with DOGMA II ‘Mathematical Logic’, using Godel’s Theorem ‘self-referencing’ as directly comparable to DNA/RNA’s ‘self reproduction’. He admits this is an analogy but later acknowledges that the same mapping may be possible from Godel's Theorem to consciousness.

Even without this allusion by Hofstadter, and no Godelian analogy required, I see a direct comparison between the way DNA/RNA creates complex organisms and the way neurons create thoughts. In both cases there is a gulf of layers in between that makes one wonder how they could have evolved. Of course, this is grist for ID advocates and I’ve even come across a blogger (Sophie) who quotes Hofstadter to make this very point.

In one of my earliest posts on this blog (The Universe’s Interpreters, Sep. 07) I make the point that the universe consists of worlds within worlds, and the reason we can comprehend it to the extent that we do, is because we can conjure concepts within concepts ad infinitum. Hofstadter makes a similar point, though not in the same words, but at least 2 decades before I thought of it.

DNA/RNA exists at a level far removed from the end result, which is a living complex organism, yet there is a direct causal relationship. Neurons are cells that exist at a level far removed from the end result, which is consciousness, yet there is a direct causal relationship.

These 2 cases, DNA to complex organisms and neurons to consciousness, I think remain the 2 greatest mysteries of the natural world. To say that they can only be explained by invoking a ‘Designer’ (God) is to say we’ve uncovered everything we know about the universe at all of its levels of complexity and only God can explain everything else. I would call this the defeatist position if it was to be taken seriously. But, in effect, the ID advocates are saying that whilst any mysteries remain in our comprehension of the universe, there will always be a role for God. Once we find an explanation for these mysteries, there will be other mysteries, perhaps at other levels, that we can still employ God to explain. So the argument will never stop. Before Newton it was the orbits of the planets, and before Mendel it was the passing down of genetic traits. Now it is the origin of DNA. The mysteries may get deeper but past experience says that we will find an answer and the answer won’t be God (see my Dec. 08 post: The God hypothesis; not).

As a caveat to the above argument, I've said elsewhere (Emergent phenomena, Oct. 08) that we may never understand consciousness as a direct mathematical relationship to neuron activity (although Penrose pins his hopes on quantum phenomena). And I'm unsure that we will ever be able to explain how it becomes an experience, and that's one of the reasons I'm sceptical that AI will ever have that experience. But this lack of understanding is not evidence of God; it's just evidence of our lack of understanding.

To quote Confucius: 'To realise that you know something when you do, and to realise that you do not know when you do not, this is knowing.' Or to quote his near contemporary, Socrates, who put it more succinctly: 'The height of wisdom is to know how thoroughly ignorant we are.'

My personal hypothesis, completely speculative with no scientific evidence at all, is that maybe there is a feedback mechanism, which we’ve yet to discover, that goes from the top level to the bottom level. They are both mysteries that most people don’t contemplate, and it took Hofstadter’s book, written over 3 decades ago, to bring them fully home to me and to make me appreciate how analogous they are: the base level causally affects the top level, yet the complexity of one level seems independent of the complexity of the other – there is no obvious one-to-one correlation. (Examples: it can take a combination of genes to express a single trait; there is not a specific ‘home’ in the brain for specific memories.)

I guess it’s this specific revelation that I personally take from Hofstadter’s book, but I really can’t do it justice. It is one of the best books I’ve read, even though I don’t agree with his overall thesis: that machines will eventually think like humans and will therefore have consciousness.

In my one and only published novel, ELVENE, there is an AI entity, Alfa, who plays an important role in the story. I was very careful in my construction of Alfa to demonstrate that he didn’t think like humans (yes, I gave him a gender, and that’s explained) but that he was nevertheless extremely intelligent and able to converse with humans with cognitive ease. But I don’t believe Alfa was conscious, although he may have given that impression (this is fiction, remember). I agree with Searle, in that simulated intelligence at a very high level will be achievable, but it will remain a simulation. AI uses algorithms and brains don’t – on this, I agree with Penrose. On the other hand, Hofstadter argues that we use rule-based software in the form of ‘symbols’, which we call language. I’m sure whoever reads this will have their own opinions.


Addendum 1: I've just read (today, 21 Feb. 09) an article in Scientific American (January 2009) that tackles the subject: From Atoms to Traits. It points out that there is good correlation between genes and traits, and expounds on the latest knowledge in this area. In particular, it gives a good account (by examples) of how random changes 'feed' the natural selection 'engine' of evolution. I admit that there is still much to be learned, but, if you follow this topic at all, you will know that discoveries and insights are being made all the time. The mystery of how genes evolved, as opposed to the organisms that they create, is still unsolved in my view. Martin A. Nowak, a Harvard University mathematician and biologist profiled in Scientific American (October 2008), believes the answer may lie in mathematics: can mathematics solve the origin of life? This is an idea hypothesised by Gregory J. Chaitin in his book, Thinking about Godel and Turing, which I review in my Jan. 08 post: Is mathematics evidence of a transcendental realm?

Addendum 2: I changed the title to more accurately reflect the content of the post.

Friday 11 April 2008

The Ghost in the Machine

One of my favourite Sci-Fi movies, amongst a number of favourites, is the Japanese anime, Ghost in the Shell, by Mamoru Oshii. Made in 1995, it’s a cult classic and appears to have influenced a number of sci-fi storytellers, particularly James Cameron (Dark Angel series) and the Wachowski brothers (Matrix trilogy). It also had a more subtle impact on a lesser known storyteller, Paul P. Mealing (Elvene). I liked it because it was not only an action thriller, but it had the occasional philosophical soliloquy by its heroine concerning what it means to be human (she's a cyborg). It had the perfect recipe for sci-fi, according to my own criterion: a large dose of escapism with a pinch of food-for-thought. 

But it also encapsulated two aspects of modern Japanese culture that are contradictory by Western standards: the modern Japanese fascination with robots, and their historical religious belief in a transmigratory soul, hence the title, Ghost in the Shell. In Western philosophy, this latter belief is synonymous with dualism, famously formulated by Rene Descartes, and equally famously critiqued by Gilbert Ryle. Ryle was contemptuous of what he called ‘the dogma of the ghost in the machine’, arguing that it rested on a category mistake. He gave the analogy of someone visiting a university and being shown all the buildings: the library, the lecture theatres, the admin building and so on. Then the visitor asks, ‘Yes, that’s all very well, but where is the university?’ According to Ryle, the mind is not an independent entity or organ in the body, but an attribute of the entire organism. I will return to Ryle’s argument later.

In contemporary philosophy, dualism is considered a non-starter: there is no place for the soul in science, nor in ontology apparently. And, in keeping with this philosophical premise, there are a large number of people who believe it is only a matter of time before we create a machine intelligence with far greater capabilities than humans, with no ghost required, if you will excuse the cinematic reference. Now, we already have machines that can do many things far better than we can, but we still hold the upper hand in most common-sense situations. The biggest challenge will come from so-called ‘bottom-up’ AI (Artificial Intelligence): self-learning machines, computers, robots, whatever. But most interesting of all is a project, currently in progress, called the ‘Blue Brain’, run by Henry Markram in Lausanne, Switzerland. Markram’s stated goal is to eventually create a virtual brain that will be able to simulate everything a human brain can do, including consciousness. He believes this will be achieved in 10 years’ time or less (others say 30). According to him, it’s only a question of grunt: raw processing power. (Reference: feature article in the American science magazine, SEED, 14, 2008)

For many people who work in the field of AI, this is philosophically achievable. I choose my words carefully here, because I believe it is the philosophy that is dictating the goal and not the science. This is an area where the science is still unclear, if not unknown. Many people will tell you that consciousness is one of the last frontiers of science. For some, it is one of 3 remaining problems to be solved by science, the other 2 being the origin of the universe and the origin of life. They forget to mention the reconciliation of relativity theory with quantum mechanics, as if it’s destined to be a mere footnote in the encyclopaedia of complete human knowledge.

There are, of course, other philosophical points of view, and two well-known ones are expressed by John Searle and Roger Penrose respectively. John Searle is most famously known for his thought experiment of the ‘Chinese Room’, in which someone sits in an enclosed room receiving questions, in Chinese, through an 'in box' and, by following specific instructions (in English in Searle's case), provides answers in Chinese through an 'out box'. The point is that the person behaves just like a processor and has no knowledge of Chinese at all. In fact, this is the perfect description of a ‘Turing machine’ (see my post, Is mathematics evidence of a transcendental realm?), only instead of a tape running through a machine you have a person performing the instructions.
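A crude way to see why the person in the room needs no understanding is to imagine the ‘instructions’ as nothing more than a lookup table. The toy Python sketch below is my own invention, not Searle’s; the questions and replies are arbitrary placeholders:

# A toy 'Chinese Room': the operator matches incoming symbols against a
# rule book and copies out the prescribed reply, understanding nothing.
# The rules are invented placeholders, not Searle's examples.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",
    "你是谁?": "我只是一个房间.",
}

def operator(incoming: str) -> str:
    # Pure symbol manipulation: match the shape of the input and return the
    # prescribed output, with no notion of what either string means.
    return RULE_BOOK.get(incoming, "请再说一遍.")

print(operator("你好吗?"))   # a sensible-looking Chinese reply, minus any understanding

To an outside observer the replies look competent; inside, there is only rule-following.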

The Chinese Room actually had a real-world counterpart: not many people know that, before we had computers, small armies of people (usually women) would be employed to perform specific but numerous computations for a particular project, with no knowledge of how their specific input fitted into the overall execution of that project. Such a group was employed at Bletchley Park during WWII, where Turing worked, on the decoding of Enigma transmissions. These people were called ‘computers’ and Turing was instrumental in streamlining their analysis. However, according to Turing’s biographer, Andrew Hodges, Turing did not develop an electronic computer at Bletchley Park, as some people believe, and he did not invent the Colossus, or Colossi, that were used to break another German code, the Lorenz, ‘...but [Turing] had input into their purpose, and saw at first-hand their triumph.’ (Hodges, 1997).

Penrose has written 3 books that I’m aware of addressing the question of AI (The Emperor’s New Mind, Shadows of the Mind and The Large, the Small and the Human Mind), and Turing’s work is always central to his thesis. In the last book listed, Penrose invites others to expound on alternative views: Abner Shimony, Nancy Cartwright and Stephen Hawking. Of the three, Shadows of the Mind is the most intellectually challenging, because he is so determined not to be misunderstood. I have to say that Penrose always comes across as an intellect of great ability, but also great humility – he rarely, if ever, shows signs of hubris. For this reason alone, I always consider his arguments with great respect, even if I disagree with his thesis. To quote the I Ching: ‘he possesses as if he possessed nothing.’

Penrose’s predominant thesis, based on Godel’s and Turing’s proofs (which I discuss in more detail in my post, Is mathematics evidence of a transcendental realm?), is that the human mind, or any mind for that matter, cannot possibly run on algorithms, which are the basis of all Turing machines. So Penrose’s conclusion is that the human mind is not a Turing machine. More importantly, in anticipation of a further development of this argument, algorithms are synonymous with software, and the original conceptual Turing machine, which Turing formulated in his ‘halting problem’ proof, is really about software. The Universal Turing machine can duplicate any other Turing machine, given the correct instructions – which is exactly what software is.
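The split between ‘machine’ and ‘software’ is easy to make concrete. In the toy Python sketch below (my own illustration, not Penrose’s or Turing’s notation), the loop is the machine and the transition table is the software; swap in a different table and the same loop computes something else. This particular table does nothing more than append a ‘1’ to a string of ‘1’s:

# A minimal Turing machine. The loop in run() is the 'machine'; TABLE is the 'software'.
# This table performs a trivial unary increment, purely for illustration.
TABLE = {
    ("scan", "1"): ("1", +1, "scan"),   # keep moving right over the 1s
    ("scan", " "): ("1", +1, "halt"),   # first blank: write a 1, then stop
}

def run(tape, state="scan", head=0):
    cells = dict(enumerate(tape))          # the tape, as a sparse dictionary
    while state != "halt":
        symbol = cells.get(head, " ")      # read the current cell (blank if unwritten)
        write, move, state = TABLE[(state, symbol)]
        cells[head] = write                # write, move the head, change state
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip()

print(run("111"))   # -> '1111'

A Universal Turing machine is just this idea taken to its limit: one fixed loop whose tape contains the table of any other machine you care to describe.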

To return to Ryle, his analogy of the university, which I referred to earlier, makes a pertinent point about the mind; it concerns a generic phenomenon observed at many levels of nature, which we call ‘emergence’. The mind is an emergent property, or attribute, that arises from the activity of a large number of neurons (tens of billions, connected by trillions of synapses), in the same way that the human body is an emergent entity that arises from a similarly large number of cells. Some people even argue that classical physics is an emergent property that arises from quantum mechanics (see my post on The Laws of Nature). In fact, Penrose contends that these 2 mysteries may be related (he doesn't use the term emergent), and he proposes a view that the mind is the result of a quantum phenomenon in our neurons. I won’t relate his argument here, mainly because I don’t have Penrose's intellectual nous, but he expounds upon it in both Shadows of the Mind and The Large, the Small and the Human Mind, the second being far more accessible than the first.
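Emergence is easier to appreciate with a toy example than with a definition. Conway’s Game of Life is my own choice of illustration (none of the authors above use it): every cell obeys one dead-simple local rule, yet a ‘glider’ behaves like a coherent object travelling across the grid, a level of description the rule never mentions.

# Conway's Game of Life (rule B3/S23). Each cell only 'knows' its 8 neighbours,
# yet higher-level patterns such as the glider emerge and move as if they were things.
from collections import Counter

def step(live):
    counts = Counter((x + dx, y + dy) for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # a cell is alive next generation if it has exactly 3 live neighbours,
    # or 2 live neighbours and was already alive
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):          # after 4 steps the glider reappears, shifted one cell diagonally
    cells = step(cells)
print(sorted(cells))

Nothing in the rule refers to gliders; they exist only at the higher level of description, which is roughly the sense in which the mind is said to be emergent.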

The reason Markram, and many others in the AI field, believe they can create an artificial consciousness is that, if consciousness is an emergent property of neurons, then all they have to do is create artificial neurons and consciousness will follow. This is what Markram is doing, only his neurons are virtual neurons. Markram has ‘mapped’ the neurons from a thin slice of a rat’s brain into a supercomputer, and when he ‘stimulates’ his virtual neurons with an electrical impulse it creates a pattern of ‘firing’ activity just like we would expect to find in a real brain. On the face of it, Markram seems well on his way to achieving his goal.
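To give a flavour of what ‘stimulating virtual neurons’ means in practice, here is the simplest textbook neuron model, a leaky integrate-and-fire unit, sketched in Python. It is vastly cruder than the detailed models Blue Brain reportedly uses, and all the constants are arbitrary; the only point is that injecting a current into an equation really does produce a ‘firing pattern’:

# Leaky integrate-and-fire neuron (a toy model, not Blue Brain's).
# The membrane voltage leaks back towards rest, integrates an injected
# current, and emits a 'spike' whenever it crosses threshold.
V_REST, V_THRESH, V_RESET = -65.0, -50.0, -65.0   # millivolts (arbitrary textbook values)
TAU, R, DT = 10.0, 1.0, 0.1                       # time constant (ms), resistance, time step (ms)

def simulate(current, steps=1000):
    v, spike_times = V_REST, []
    for i in range(steps):
        v += DT * (-(v - V_REST) + R * current) / TAU   # leak plus input current
        if v >= V_THRESH:                               # threshold crossed: record a spike, then reset
            spike_times.append(i * DT)
            v = V_RESET
    return spike_times

print(len(simulate(20.0)), "spikes in 100 ms")   # a stronger current produces more spikes

A real simulation multiplies this by millions of far more detailed neurons, all wired together, but the principle is the same: equations in, firing patterns out.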

But there are two significant differences between Markram’s model (if I understand it correctly) and the real thing. All attempts at AI, including Markram’s, require software, yet the human brain, or any other brain for that matter, appears to have no software at all. Some people might argue that language is our software, and, from a strictly metaphorical perspective, that is correct. But we don’t seem to have any ‘operational’ software, and if we do, the brain must somehow generate it itself, from the neurons themselves. Perhaps this is what Markram expects to find in his virtual neuron brain, but his virtual neuron brain already is software (if I interpret the description given in SEED correctly).

I tend to agree with some of his critics, like Thomas Nagel (quoted in SEED), that Markram will end up with a very accurate model of a brain’s neurons, but he still won’t have a mind. ‘Blue Brain’, from what I can gather, is effectively a software model of the neurons of a small portion of a rat’s brain running on 4 supercomputers comprising a total of 8,000 IBM microchips. And even if he can simulate the firing pattern of his neurons to duplicate the rat’s, I would suspect it would take further software to turn that simulation into something concrete like an action or an image. As Markram says himself, it would just be a matter of massive correlation, and using the supercomputer to reverse the process. So he will, theoretically, and in all probability, be able to create a simulated action or image from the firing of his virtual neurons, but will this constitute consciousness? I expect not, but others, including Markram, expect it will. He admits himself that, if he doesn’t get consciousness after building a full-scale virtual model of a human brain, it would raise the question: what is missing? Well, I would suggest that what would be missing is life, which is the second fundamental difference that I alluded to in the preceding paragraph, but didn’t elaborate on.

I contend that even simple creatures, like insects, have consciousness, so you shouldn’t need a virtual human brain to replicate it. If consciousness equals sentience, and I believe it does, then that covers most of the animal kingdom. 

So Markram seems to think that his virtual brain will not only be conscious, but also alive – it’s very difficult to imagine one without the other. And this, paradoxically, brings one back to the ghost in the machine. Despite all the reductionism and scientific ruminations of the last century, the mystery still remains. I’m sure many will argue that there is no mystery: when your neurons stop firing, you die – it’s that simple. Yes, it is, but why are life, consciousness and the firing of neurons so concordant and so co-dependent? And do you really think a virtual neuron model will also exhibit both these attributes? Personally, I think not. And to return to cinematic references: does that mean, as with HAL in Arthur C. Clarke’s 2001: A Space Odyssey, that when someone pulls the plug on Markram’s 'Blue Brain', it dies?

In a nutshell: nature demonstrates explicitly that consciousness is dependent on life, and there is no evidence that life can be created from software, unless, of course, that software is DNA.